This paper reviews various signature verification approaches: feature sets, online databases, and types of features. After extracting a combination of global and local features from a signature image drawn from an online database, a multilayer perceptron feed-forward network trained with the back-propagation algorithm is proposed to classify genuine and forged (random, simple, and skilled) offline signatures.
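The classifier stage described above can be sketched as a small feed-forward network trained with plain back-propagation. This is a minimal illustration on synthetic 2-D feature vectors with hypothetical labels (0 = forged, 1 = genuine); a real system would feed in the extracted global/local signature features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=0.5, epochs=5000):
    """One-hidden-layer MLP trained by back-propagation on a squared-error loss."""
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2).ravel()
        # backward pass: gradients w.r.t. pre-activations
        d_out = (out - y) * out * (1 - out)
        d_h = np.outer(d_out, W2.ravel()) * h * (1 - h)
        W2 -= lr * (h.T @ d_out[:, None])
        b2 -= lr * d_out.sum()
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
```

On a toy linearly separable set the network learns the labels exactly; with real signature features a held-out validation split would be needed.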
Authentication of a person is a major concern in this era for security purposes. In biometric systems, the signature is one of the behavioural features used for authentication. In this paper we work on offline signatures collected from different persons. Morphological operations are applied to these signature images along with the Hough transform to determine regular shapes, which assists the authentication process. The values extracted from this Hough space are used in a feed-forward neural network trained using the back-propagation algorithm. After the different training stages, an efficiency above 95% was found. Applications of this system lie in security-related fields: defence security, biometric authentication, biometric computer protection, or as a method for analysing changes in a person's behaviour.
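The Hough-space values mentioned here can be produced by a standard line Hough transform over a binarized signature image. A minimal numpy sketch (assuming the input is already a binary array; the morphological preprocessing is not reproduced) is:

```python
import numpy as np

def hough_lines(binary_img, n_theta=180):
    """Accumulate votes in (rho, theta) Hough space for each foreground pixel."""
    rows, cols = binary_img.shape
    diag = int(np.ceil(np.hypot(rows, cols)))          # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(binary_img)                    # (row, col) of set pixels
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are non-negative
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas
```

Peaks in the accumulator correspond to dominant straight strokes; the accumulator values (or its top peaks) can then serve as the feature vector fed to the network.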
This presentation proposes a method to authenticate a signature off-line. The proposed method uses the Harris corner detector, orientation assignment, a KNN classifier, and the Hungarian algorithm.
An offline signature recognition and verification system based on neural network (eSAT Journals)
Abstract: Various techniques have already been introduced for personal identification and verification based on different types of biometrics, which can be physiological or behavioural. Signatures lie in the category of behavioural biometrics, which can be distorted or change over the course of time. Signatures are considered the most promising authentication method for legal and financial documents, so it is necessary to verify signers and their respective signatures. This paper presents an offline Signature Recognition and Verification System (SRVS). In this system a database of signature images is created, followed by image preprocessing, feature extraction, neural network design and training, and classification of a signature as genuine or counterfeit. Keywords: biometrics, neural network design, feature extraction, classification.
This documentation provides a brief overview of a face recognition based attendance system using neural networks in terms of product architecture, which can be used for educational purposes.
Attendance Management System using Face Recognition (NanditaDutta4)
This project presentation was prepared for the academic session, toward completion of the work, for the Bharati Vidyapeeth Deemed University (IMED) MCA department.
Detecting malaria using a deep convolutional neural network (Yusuf Brima)
An experiment with a deep residual convolutional neural network to classify microscopic blood cell images (uninfected vs. parasitized).
Utilizes the ResNet architecture from Deep Residual Learning for Image Recognition (He et al., 2015).
Uses Keras with a TensorFlow backend.
We have developed a secure, efficient electronic voting system to enable organizations (e.g. Saccos, professional organizations, societies, and chamas) to elect their officials online from a computer or a mobile device.
Main features include:
Calling of elections,
Registration of candidates,
Preparation of the polling list,
Electronic voting,
Counting/tallying of votes.
www.posmart.co.ke
A CASE Lab Report - Project File on "ATM - Banking System" (joyousbharat)
The software to be designed will control a simulated automated teller machine
(ATM) having a magnetic stripe reader for reading an ATM card, a keyboard and
display for interaction with the customer, a slot for depositing envelopes, a
dispenser for cash (in multiples of $20), a printer for printing customer receipts, and
a key-operated switch to allow an operator to start or stop the machine. The ATM
will communicate with the bank's computer over an appropriate communication
link. (The software on the latter is not part of the requirements for this problem.)
Each chapter shall be numbered as Chapter 1, Chapter 2, etc. The name of the chapter shall be written immediately below. Both shall be centered horizontally as well as vertically.
detailed project report format excel
A Review on Robust identity verification using signature of a person (Editor IJMTER)
The signature is a behavioural biometric characteristic of humans and has long been a distinguishing feature for person identification. These days an increasing number of transactions, especially financial and business ones, are being authorized via signatures. There are two types of verification methods: offline signature verification and online signature verification. In this paper we review the various components of an offline signature recognition and verification system, feature extraction techniques, and available techniques.
RELATIVE STUDY ON SIGNATURE VERIFICATION AND RECOGNITION SYSTEM (AM Publications)
Signature verification is amongst the first few biometrics used for verification and one of the natural ways of authenticating a person's identity. The user introduces the scanned images of the signature into the computer; after image enhancement and noise reduction, feature extraction and neural network training are performed and the signature images are verified. Even now, thousands of financial and business transactions are being authorized via signatures, so an automatic signature verification system is needed. This paper presents a brief review of various approaches based on the different datasets, features, and training techniques used for verification.
Freeman Chain Code (FCC) Representation in Signature Fraud Detection Based On... (CSCJournals)
This paper presents a signature verification system that used the Freeman Chain Code (FCC) as a directional feature and data representation. 47 features were extracted from the signature images based on six global features. Before feature extraction, the raw images underwent pre-processing stages: binarization, noise removal using a median filter, cropping, and thinning to produce a Thinned Binary Image (TBI). The Euclidean distance between nearest neighbours is measured and matched to find the result. The MCYT-SignatureOff-75 database was used. Based on our experiment, the lowest FRR achieved is 6.67% and the lowest FAR is 12.44%, with only 1.12 seconds of computational time for the nearest neighbour classifier. The results are compared with an Artificial Neural Network (ANN) classifier.
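As an illustration of the FCC representation itself, a contour path of adjacent pixels can be encoded into 8-directional Freeman codes in a few lines. This is a minimal sketch of the encoding step only; the paper's full pipeline (thinning, global features, matching) is not reproduced.

```python
# Freeman 8-connectivity: index = chain code, value = (d_row, d_col).
# Code 0 is "east"; codes increase counter-clockwise in image coordinates.
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def freeman_chain_code(path):
    """Encode a sequence of 8-adjacent (row, col) pixels as Freeman chain codes."""
    codes = []
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        codes.append(DIRECTIONS.index((r1 - r0, c1 - c0)))
    return codes
```

The resulting code sequence is compact and translation-invariant, which is why it is a popular shape representation for thinned signature strokes.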
Comprehensive Review of Offline Signature Verification Mechanisms (ijtsrd)
One of the oldest and most well-known biometric attestation procedures in modern culture is the authentication of handwritten signatures. The field is divided into areas that operate online and offline, depending on the acquisition procedure. In online signature verification, the entire signing procedure is captured using some sort of acquisition equipment, whereas offline signature verification just uses scanned images of the signatures. In this paper, we propose an image-based offline signature recognition and verification system. A Support Vector Machine and an artificial neural network are both employed to support the goal intended for this thesis, and modern improved processes for feature extraction are presented. Two independent sequential neural networks are created, one for verifying and the other for recognizing signatures, i.e. for detecting forgery. A recognition network regulates the parameters of the verification network, which are generated separately for each signature. A signature code and an acceptable dataset are used to rigorously validate the system's overall performance. Shilpee Agrawal | Dr. Mohd Ahmed, "Comprehensive Review of Offline Signature Verification Mechanisms", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6, Issue-6, October 2022. URL: https://www.ijtsrd.com/papers/ijtsrd51950.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/51950/comprehensive-review-of-offline-signature-verification-mechanisms/shilpee-agrawal
Proposed Method for Off-line Signature Recognition and Verification using Neu... (Editor IJMTER)
Computers have become common and are used in almost every field, including financial transactions, so providing additional security measures is necessary. According to consumers' expectations, these security measures must be cheap, reliable, and unintrusive to the authorized person. A technique that meets these requirements is handwritten signature verification. The signature verification technique has an advantage over other biometric techniques (voice, iris, fingerprint, palm, etc.), as it is widely used for daily routine procedures like banking operations, document analysis, electronic funds transfer, access control, and many others. Most importantly, it is easy and people are less likely to object to it. The proposed technique involves a new approach that depends on a neural network, which enables the user to recognize whether a signature is original or a fraud. Scanned images are introduced into the computer, their quality is improved with the help of image enhancement and noise reduction techniques, specific features are extracted, and a neural network is trained. The different stages of the process include image pre-processing, feature extraction, and pattern recognition through neural networks.
Offline Handwritten Signature Verification using Neural Network (ijiert bestjournal)
Different biometric techniques have been discussed for identification, such as face reading, fingerprint recognition, and retina scanning; these are known as vision-based identification. There are also non-vision-based identifications such as signature verification and voice recognition. Signature verification plays a vital role in financial, commercial, and legal matters. A signature by any person is considered approval of any work, so the signature is the preferred authentication. In this paper signature verification is done by means of image processing, geometric feature extraction, and a neural network technique.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Signature verification based on proposed fast hyper deep neural network (IAESIJAI)
Many industries have made widespread use of handwritten signature verification systems, including banking, education, legal proceedings, and criminal investigation, in which verification and identification are absolutely necessary. In this research, we have developed an accurate offline signature verification model that can be used in a writer-independent scenario. First, the handwritten signature images go through four preprocessing stages in order to be suitable for finding unique features. Then, three different types of features, namely principal component analysis (PCA) as appearance-based features, the gray-level co-occurrence matrix (GLCM) as texture features, and the fast Fourier transform (FFT) as frequency features, are extracted from the signature images in order to build a hybrid feature vector for each image. Finally, to classify the signature features, we have designed a proposed fast hyper deep neural network (FHDNN) architecture. Two different datasets, SigComp2011 and CEDAR, are used to evaluate our model. The results demonstrate that the suggested model can operate with accuracy equal to 100%, outperforming several of its predecessors. In terms of precision, recall, and F-score it gives very good results for both datasets, reaching (1.00, 0.487, and 0.655 respectively) on the SigComp2011 dataset and (1.00, 0.507, and 0.672 respectively) on the CEDAR dataset.
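The hybrid feature vector idea, concatenating texture and frequency features per image, can be sketched as follows. This is a minimal illustration with a toy horizontal-pair GLCM and the lowest FFT magnitudes; it is not the paper's exact pipeline, and the PCA appearance features are omitted for brevity.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Toy GLCM over horizontal neighbour pairs; returns contrast and energy."""
    mx = img.max()
    q = (img.astype(float) / mx * (levels - 1)).astype(int) if mx else np.zeros_like(img, dtype=int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                       # normalize to probabilities
    idx = np.arange(levels)
    contrast = (p * (idx[:, None] - idx[None, :]) ** 2).sum()
    return np.array([contrast, (p ** 2).sum()])

def fft_features(img, k=4):
    """Frequency features: magnitudes of the first k 2-D FFT coefficients."""
    return np.abs(np.fft.fft2(img)).ravel()[:k]

def hybrid_vector(img):
    """Concatenate texture and frequency features into one vector per image."""
    return np.concatenate([glcm_features(img), fft_features(img)])
```

Each signature image then maps to one fixed-length vector, which is what a downstream classifier such as the proposed FHDNN would consume.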
The human signature provides a natural, publicly accepted, and legally admissible method of providing authentication to a process. Automatic biometric signature systems assess both the drawn image and the temporal aspects of signature construction, providing enhanced verification rates over and above conventional outcome assessment. Capturing these constructional data requires the use of specialist 'tablet' devices. In this paper we explore enrolment performance using a range of common signature capture devices and investigate the reasons behind user preference. The results show that writing feedback and familiarity with conventional 'paper and pen' donation configurations are the primary motivations for user preference. These results inform the choice of signature device from both technical performance and user acceptance viewpoints.
As the use of signatures for identification purposes is pervasive in society and has a long history in business, dynamic signature verification (DSV) could be an answer to authenticating a document signed electronically and establishing the identity of that document in a dispute. DSV has the advantage that traits of the signature can be collected on a digitizer. The research question of this paper is to understand how the individual variables vary across devices. In applied settings this is important, because if the signature variables change across digitizers this will impact performance and the ability to use those variables. Understanding which traits are consistent across devices will aid dynamic signature algorithm designers in creating more robust algorithms.
Similar to Handwritten Signature Verification using Artificial Neural Network:
A NEW DATA ENCODER AND DECODER SCHEME FOR NETWORK ON CHIP (Editor IJMTER)
A system-on-chip (SoC) based system has many disadvantages in power dissipation as well as clock rate when transferring data from one system to another on-chip. At the same time, a higher-rate system does not support a lower-rate bus network for data transfer. An alternative scheme has been proposed for high-speed data transfer, but it is limited to SoCs. Unlike the SoC, the network-on-chip (NoC) has many advantages for data transfer. It has a special feature for on-chip data transfer called a transitional encoder, whose operation is based on input transitions; at the same time it supports systems operating at higher frequencies. In this project, a low-power encoding scheme is proposed. The proposed system yields lower dynamic power dissipation than the existing system due to the reduction of switching activity and coupling switching activity. Even though many factors contribute to power dissipation, only dynamic power dissipation is considered, as it offers a reasonable advantage. The proposed system is synthesized using Quartus II 9.1 software. In addition, the proposed system will be extended to inter-PE communication with the help of routers and PEs performing various operations. To implement this system, real NoCs containing the proposed encoders and decoders for data transfer with regular traffic scenarios should be considered.
A RESEARCH - DEVELOP AN EFFICIENT ALGORITHM TO RECOGNIZE, SEPARATE AND COUNT ... (Editor IJMTER)
Coins are an important part of our life. We use coins in places like stores, banks, buses, trains, etc., so it becomes a basic need that coins can be sorted and counted automatically, which in turn requires that coins be recognized automatically. We present an automated coin recognition system for the Indian coins of Rs. 1, 2, 5 and 10 with rotation invariance. We have taken images of both sides of each coin, so the system is capable of recognizing coins from both sides. Features are extracted from the images using techniques such as the Hough transformation and pattern averaging.
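Pattern averaging, mentioned above, reduces an image to a coarse grid of block means that can serve as a compact feature vector. A minimal sketch (assuming a grayscale coin image as a numpy array) is:

```python
import numpy as np

def pattern_average(img, block=2):
    """Reduce an image to a coarse feature grid by averaging block regions."""
    r, c = img.shape
    # trim so dimensions divide evenly, then average each block x block tile
    trimmed = img[:r - r % block, :c - c % block]
    return trimmed.reshape(r // block, block, c // block, block).mean(axis=(1, 3))
```

The coarse grid smooths out pixel-level noise, which is why pattern averaging pairs well with shape cues such as Hough-detected circles.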
Analysis of VoIP Traffic in WiMAX Environment (Editor IJMTER)
Worldwide Interoperability for Microwave Access (WiMAX) is currently one of the hottest technologies in wireless communication. It is a standard based on IEEE 802.16 wireless technology that provides very high throughput broadband connections over long distances. In parallel, Voice over Internet Protocol (VoIP) is a technology which provides access to voice communication over the Internet protocol, and hence it becomes an alternative to the public switched telephone network (PSTN) due to its capability of transmitting voice as packets over IP networks. A lot of research has been done in analyzing the performance of VoIP traffic over WiMAX networks. In this paper we review the analyses carried out by several authors for the most common VoIP codecs, G.711, G.723.1 and G.729, over a WiMAX network using various service classes. The objective is to compare the results for different types of service classes with respect to QoS parameters such as throughput, average delay, and average jitter.
A Hybrid Cloud Approach for Secure Authorized De-Duplication (Editor IJMTER)
Cloud backup is used for people's personal storage, reducing maintenance effort and managing structure and storage space. The challenging process is deduplication, in both local and global backup de-duplication. Prior work provides only local-storage de-duplication or, conversely, global-storage de-duplication for improving storage capacity and processing time. In this paper, the proposed system is called ALG-Dedupe, the Application-aware Local-Global source de-duplication system, proposed to provide an efficient de-duplication process. It can provide efficient de-duplication with low system load, a shortened backup window, and increased power efficiency in the user's personal storage. In the proposed system large data is partitioned into smaller parts called chunks. Any redundancy the data may contain is removed before storing it in the storage area.
Aging protocols that could incapacitate the Internet (Editor IJMTER)
The biggest threat to the Internet is the fact that it was never really designed; instead, it evolved in fits and starts, thanks to various protocols that have been cobbled together to fulfill the needs of the moment. For example, the BGP protocol is used by Internet routers to exchange information about changes to the Internet's network topologies, yet it is among the most fundamentally broken, as Internet routing information can be poisoned with bogus routes. Few of these protocols were designed with security in mind, or if they were, they sported no more than was needed to keep out a nosy neighbor, not a malicious attacker. The result is a welter of aging protocols susceptible to exploits at Internet scale. Here are six Internet protocols that could stand to be replaced sooner rather than later or are (mercifully) on the way out.
A Cloud Computing design with Wireless Sensor Networks For Agricultural Appli... (Editor IJMTER)
The emergence of precision agriculture has been promoted by numerous developments in the field of wireless sensor and actor networks (WSANs). These WSANs offer important data for gathering, work management, development of crops, and limitation of crop diseases. The goal of this paper is to introduce cloud computing as a new technique to be used in addition to WSANs to further enhance their application and benefits in the area of agriculture.
A CAR POOLING MODEL WITH CMGV AND CMGNV STOCHASTIC VEHICLE TRAVEL TIMES (Editor IJMTER)
Carpooling (also car-sharing, ride-sharing, lift-sharing) is the sharing of car journeys so that more than one person travels in a car. It helps to resolve a variety of problems that continue to plague urban areas, ranging from energy demands and traffic congestion to environmental pollution. Most existing methods handle stochastic disturbances arising from variations in vehicle travel times for carpooling; however, they do not deal with unmet demand under uncertain vehicle demand. To address this, the proposed system uses a chance-constrained programming (CCP) formulation of the problem with stochastic demand and travel time parameters, under mild assumptions on the distribution of the stochastic parameters, and relates it to a robust optimization approach. We thus construct a stochastic carpooling model that considers the influence of stochastic travel times, formulated as an integer multi-commodity network flow problem. Since real problem sizes can be large, it could be difficult to find optimal solutions within a reasonable period of time; therefore a solution algorithm using a tabu heuristic approach is developed to solve the model.
Sustainable Construction With Foam Concrete As A Green Building Material (Editor IJMTER)
A green building is an environmentally sustainable building, designed, constructed and operated to minimise total environmental impact. Carbon dioxide (CO2) is the primary greenhouse gas emitted through human activities, and it is claimed that 5% of the world's carbon dioxide emissions are attributable to the cement industry, cement being the vital constituent of concrete. Due to this significant contribution to environmental pollution, there is a need to find an optimal solution that still satisfies civil construction needs. Apart from normal concrete bricks and clay bricks, foam concrete is a new, innovative technology for sustainable building and civil construction which fulfills the criteria of a green material. This paper concludes that foam concrete can be an effective sustainable material for construction and also focuses on the cost effectiveness of using foam concrete as a building material in replacement of clay brick or other bricks.
USE OF ICT IN EDUCATION: ONLINE COMPUTER BASED TEST (Editor IJMTER)
A good education system is required for the overall prosperity of a nation. Tremendous growth in the education sector has made the administration of education institutions complex. Many studies reveal that the integration of ICT helps to reduce this complexity and enhance the overall administration of education. This study was undertaken to identify the various functional areas in which ICT is deployed for information administration in education institutions and to find the current extent of ICT usage in all these functional areas. The various factors that contribute to these functional areas were identified, and a theoretical model was derived and validated.
Textual Data Partitioning with Relationship and Discriminative Analysis (Editor IJMTER)
Data partitioning methods are used to partition data values by similarity. Similarity measures are used to estimate transaction relationships. The hierarchical clustering model produces tree-structured results, while partitional clustering produces results in a grid format. Text documents are unstructured data values with high-dimensional attributes. Document clustering groups unlabeled text documents into meaningful clusters. Traditional clustering methods require the cluster count (K) for the document grouping process, and clustering accuracy degrades drastically with an unsuitable cluster count.
Textual data elements are divided into two types: discriminative words and non-discriminative words. Only discriminative words are useful for grouping documents; the involvement of non-discriminative words confuses the clustering process and leads to a poor clustering solution. A variational inference algorithm is used to infer the document collection structure and the partition of document words at the same time. The Dirichlet Process Mixture (DPM) model is used to partition documents; it uses both the data likelihood and the clustering property of the Dirichlet Process (DP). The Dirichlet Process Mixture Model for Feature Partition (DPMFP) is used to discover the latent cluster structure based on the DPM model. DPMFP clustering is performed without requiring the number of clusters as input.
Document labels are used to guide the discriminative word identification process. Concept relationships are analyzed with ontology support. A semantic weight model is used for document similarity analysis. The system improves scalability with the support of labels and concept relations for the dimensionality reduction process.
Testing of Matrices Multiplication Methods on Different Processors (Editor IJMTER)
There are many algorithms for matrix multiplication. The complexity of the classical algorithm is O(n³), though further research has shown that this complexity can be decreased. This paper focuses on matrix multiplication algorithms and their complexity.
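One classical way the O(n³) bound is beaten is Strassen's algorithm, which forms seven recursive half-size products instead of eight, giving roughly O(n^2.807) time. A minimal sketch, assuming square matrices whose size is a power of two:

```python
import numpy as np

def strassen(A, B):
    """Strassen multiplication; assumes square matrices of power-of-two size."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    # split each matrix into four quadrants
    a, b, c, d = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    e, f, g, i = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # seven half-size products instead of the naive eight
    p1 = strassen(a, f - i)
    p2 = strassen(a + b, i)
    p3 = strassen(c + d, e)
    p4 = strassen(d, g - e)
    p5 = strassen(a + d, e + i)
    p6 = strassen(b - d, g + i)
    p7 = strassen(a - c, e + f)
    # recombine quadrants of the product
    top = np.hstack([p5 + p4 - p2 + p6, p1 + p2])
    bot = np.hstack([p3 + p4, p1 + p5 - p3 - p7])
    return np.vstack([top, bot])
```

In practice the extra additions make Strassen pay off only for fairly large matrices, which is exactly the kind of processor-dependent trade-off such testing papers measure.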
Malware is a worldwide pandemic. It is designed to damage computer systems without the knowledge of the system's owner. Software from reputable vendors can also contain malicious code that affects the system or leaks information to remote servers. Malware includes computer viruses, spyware, dishonest adware, rootkits, Trojans, dialers, etc. Malware detectors are the primary tools of defense against malware, and the quality of such a detector is determined by the techniques it uses. It is therefore imperative that we study malware detection techniques and understand their strengths and limitations. This survey examines different types of malware and malware detection methods.
SURVEY OF TRUST BASED BLUETOOTH AUTHENTICATION FOR MOBILE DEVICE (Editor IJMTER)
Practical requirements for securely demonstrating identities between two handheld devices are an important concern, as an adversary can inject a man-in-the-middle (MITM) attack to intrude on the protocol. Protocols that employ secret keys require the devices to share private information in advance, which is not feasible in this scenario. Apart from insecurely typing passwords into handheld devices or comparing long hexadecimal keys displayed on the devices' screens, many other human-verifiable protocols have been proposed in the literature to solve the problem. Unfortunately, most of these schemes do not scale to more users: even when only three entities attempt to agree on a session key, these protocols need to be rerun three times. So, in the existing method, a bipartite and a tripartite authentication protocol are presented using a temporary confidential channel, and the system is further extended into a transitive authentication protocol that allows multiple handheld devices to establish a conference key securely and efficiently. But this method detects only outsider attacks and does not consider insider attacks. So, in the proposed method a trust-score-based scheme is introduced which computes trust values for the nodes and provides security. The computed trust score has a positive influence on the confidence with which an entity conducts transactions with that node. The behavior of each node in the network is monitored periodically and its trust value updated, so that, depending on the behavior of the node in the network, a trust relation is established between two nodes.
GLAUCOMA is a chronic eye disease that can damage the optic nerve. According to the WHO, it is the second leading cause of blindness and is predicted to affect around 80 million people by 2020. Progression of the disease leads to loss of vision, which occurs gradually over a long period of time. Because symptoms appear only when the disease is quite advanced, glaucoma is called the silent thief of sight. Glaucoma cannot be cured, but its development can be slowed by treatment; detecting it in time is therefore critical. However, many glaucoma patients are unaware of the disease until it has reached an advanced stage. In this paper, some manual and automatic methods for detecting glaucoma are discussed. Manual analysis of the eye is time consuming, and the accuracy of the parameter measurements varies between clinicians. To overcome these problems, the objective of this survey is to introduce a method to automatically analyze ultrasound images of the eye; automatic analysis of this disease is much more effective than manual analysis.
Survey: Multipath routing for Wireless Sensor Network - Editor IJMTER
Reliability plays a vital role in some applications of Wireless Sensor Networks, and multipath routing is one way to increase the probability of reliable delivery; moreover, energy consumption is a constraint. In this paper, we provide a survey of the state of the art in proposed multipath routing algorithms for Wireless Sensor Networks. We study each design, analyze its trade-offs, and review several representative algorithms.
Step up DC-DC Impedance source network based PMDC Motor Drive - Editor IJMTER
This paper is devoted to the quasi-Z-source network based DC drive. The cascaded (two-stage) quasi-Z-source network can be derived by adding one diode, one inductor, and two capacitors to the traditional quasi-Z-source inverter (qZSI). The proposed cascaded qZSI inherits all the advantages of the traditional solution (voltage boost and buck functions in a single stage, continuous input current, and improved reliability). Moreover, compared to the conventional qZSI, the proposed solution reduces the shoot-through duty cycle by over 30% at the same voltage boost factor. A theoretical analysis of the two-stage qZSI in the shoot-through and non-shoot-through operating modes is described, and the proposed and traditional qZSI networks are compared. A prototype of the quasi-Z-source network based DC drive was built to verify the theoretical assumptions. The experimental results are presented and analyzed.
SPIRITUAL PERSPECTIVE OF AUROBINDO GHOSH’S PHILOSOPHY IN TODAY’S EDUCATION - Editor IJMTER
The paper reflects on the spiritual philosophy of Aurobindo Ghosh, which remains helpful in today’s education. He wrote about spirituality in the 19th century, yet it is a core and vital part of today’s education and very much essential for today’s children. Here I present an overview of that philosophy. The regeneration of those values in today’s generation is a great challenge for the education system, and developing values and spiritual education in the young is my great motto. In this materialistic world, redefining values among the young is a hard task, but not an impossible one.
Software Quality Analysis Using Mutation Testing Scheme - Editor IJMTER
Software test coverage is used to measure safety. A safety-critical analysis is carried out for source code written in the Java language. Testing provides a primary means of assuring software in safety-critical systems. To demonstrate, particularly to a certification authority, that sufficient testing has been performed, it is necessary to achieve the test coverage levels recommended or mandated by safety standards and industry guidelines. Mutation testing provides an alternative or complementary method of measuring test sufficiency, but it has not been widely adopted in the safety-critical industry. The system provides an empirical evaluation of the application of mutation testing to airborne software systems that have already satisfied the coverage requirements for certification.
It applies mutation testing to safety-critical software developed using high-integrity subsets of C and Ada, identifies the most effective mutant types, and analyzes the root causes of failures in test cases. Mutation testing can be effective where traditional structural coverage analysis and manual peer review have failed. The results also show that several testing issues have origins beyond the test activity, which suggests improvements to the requirements definition and coding process. The system further examines the relationship between program characteristics and mutant survival and considers how program size can provide a means of targeting the test areas most likely to contain dormant faults. Industry feedback is also provided, particularly on how mutation testing can be integrated into a typical verification life cycle of airborne software. The system also covers the safety and criticality levels of Java source code.
Software Defect Prediction Using Local and Global Analysis - Editor IJMTER
Software defect factors are used to measure the quality of software, and software effort estimation is used to measure the effort required for the development process. Defect factors have an impact on development effort, and software development cost factors are also decided with reference to the defect and effort factors. Software defects are predicted with reference to module information, and module link information is used in the effort estimation process.
Data mining techniques are used in the software analysis process: clustering techniques for property grouping, and rule mining methods to learn rules from the clustered data values. The "WHERE" clustering scheme and "WHICH" rule mining scheme are used for defect prediction and effort estimation, drawing on the module information.
The proposed system is designed to improve the defect prediction and effort estimation process. A Single Objective Genetic Algorithm (SOGA) is used in the clustering process, and the rule learning operations are carried out using the Apriori algorithm. The system improves cluster accuracy as well as defect prediction and effort estimation accuracy. It is developed using the Java language and an Oracle relational database environment.
Software Cost Estimation Using Clustering and Ranking Scheme - Editor IJMTER
Software cost estimation is an important task in the software design and development process.
Planning and budgeting tasks are carried out with reference to the software cost values. A variety of
software properties are used in the cost estimation process. Hardware, products, technology and
methodology factors are used in the cost estimation process. The software cost estimation quality is
measured with reference to the accuracy levels.
Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models, and machine learning models. Each model has a set of techniques for the cost estimation process; 11 cost estimation techniques under these 3 categories are used in the system. The Attribute Relational File Format (ARFF) is used to maintain the software product property values, and the ARFF file serves as the main input to the system.
The proposed system is designed to perform clustering and ranking of software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results for software cost estimation methods.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions - Victor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT - jpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland theory, Spykman's Rimland theory, and Hegemonic Stability theory, the research examines China's role in Central Asia. This study adheres to the empirical epistemological method and maintains objectivity, critically analyzing primary and secondary research documents to elaborate the role of China's geo-economic outreach in Central Asian countries and its future prospects. According to this study, China is seeing significant success in trade, pipeline politics, and gaining influence over other governments, a success attributable to the effective utilisation of key instruments such as the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
ACEP Magazine edition 4th launched on 05.06.2024 - Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Online aptitude test management system project report.pdf - Kamal Acharya
The purpose of the online aptitude test system is to conduct tests efficiently, with no time wasted on checking papers. Its main objective is to evaluate candidates thoroughly through a fully automated system that not only saves a lot of time but also gives fast results. Students can take papers at their convenience, with no need for extra materials such as paper and pens. The system can be used in educational institutions as well as in the corporate world, anywhere and at any time, since it is a web-based application (the user's location does not matter), and there is no requirement that an examiner be present when the candidate takes the test.
Every time lecturers or professors need to conduct examinations, they have to sit down, think about the questions, and create a whole new set of questions for each and every exam. In some cases the professor may want to give an open-book online exam, where the student can take the exam any time, anywhere, but may have to answer the questions within a limited time period. The professor may also want to change the sequence of questions for every student. The problem a student has is that whenever a date for the exam is declared, the student has to take it then, with no way to take it at some other time. This project will create an interface for the examiner to create and store questions in a repository, and an interface for the student to take examinations at his convenience; the questions and/or exams may be timed. The result is an application which can be used by examiners and examinees simultaneously.
Examination System is very useful for Teachers/Professors. As in the teaching profession, you are responsible for writing question papers. In the conventional method, you write the question paper on paper, keep question papers separate from answers and all this information you have to keep in a locker to avoid unauthorized access. Using the Examination System you can create a question paper and everything will be written to a single exam file in encrypted format. You can set the General and Administrator password to avoid unauthorized access to your question paper. Every time you start the examination, the program shuffles all the questions and selects them randomly from the database, which reduces the chances of memorizing the questions.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Water billing management system project report.pdf - Kamal Acharya
Our project, entitled “Water Billing Management System”, aims to generate water bills with all charges and penalties. The manual system currently employed is extremely laborious and quite inadequate; it only makes the process more difficult.
The aim of our project is to develop a system that is meant to partially computerize the work performed in the Water Board like generating monthly Water bill, record of consuming unit of water, store record of the customer and previous unpaid record.
We used HTML/PHP as the front end and MySQL as the back end for developing our project. HTML is primarily a visual design environment: we create the application by designing the forms that make up the user interface, then add application code to the forms and to objects such as buttons and text boxes, along with any required support code in additional modules.
MySQL is a free, open-source database that facilitates effective management of databases by connecting them to the software. It is a stable, reliable and powerful solution with advanced features and advantages, such as data security.
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
Handwritten Signature Verification using Artificial Neural Network
Scientific Journal Impact Factor (SJIF): 1.711
International Journal of Modern Trends in Engineering
and Research
www.ijmter.com
@IJMTER-2014, All rights Reserved 308
e-ISSN: 2349-9745
p-ISSN: 2393-8161
Handwritten Signature Verification using Artificial Neural
Network
Shiwani Gupta
Asstt. Prof., CMPN,TCET,Mumbai
ABSTRACT: This paper reviews various signature verification approaches: various feature sets, various online databases, and types of features. We propose extracting a combination of global and local features from the signature image, processed from an online database, and using a Multilayer Perceptron feed-forward network trained with the Back Propagation algorithm to classify genuine and forged (random, simple and skilled) offline signatures.
Keywords: Offline Signature Verification, Artificial Neural Networks, Feature extraction, forgery,
preprocessing.
I. INTRODUCTION
Biometrics can be classified into two broad categories — Behavioral (signature verification, keystroke
dynamics, handwriting, speech etc.) and Physiological (face, iris, fingerprint, retina etc.)[1].
A. Signature
The human signature has proven to be the most natural and widely accepted [1,2] biometric attribute of a human being that can be used to authenticate identity; it is also less intrusive and has no negative or undesirable health connotations [3]. Verifying a signer from the scanned signatures on documents has become increasingly important in daily life because many transactions are still validated on the faith of a signature. However, great variability can be observed in signatures according to country, age, time, habits, psychological or mental state, and physical and practical conditions [15].
B. Signature Verification
Signatures are composed of special characters and are therefore often unreadable. Moreover, intrapersonal variations and interpersonal differences make it necessary to analyze them as complete images rather than as letters and words put together [1]. As the signature is the primary mechanism for both authentication and authorization in legal transactions, the need for research into efficient automated solutions for signature recognition and verification has increased in recent years.
Recognition means finding the identity of the signature owner, whereas verification is the decision about whether the signature is genuine or forged [1]. Signature verification is widely studied and discussed using two approaches, depending on the data acquisition method used [15][19][20].
On-line approach / Dynamic Signature Verification Technique: an electronic pressure-sensitive digitizing tablet or stylus-operated PDA is used to extract information about a signature, capturing dynamic information such as the number and order of the strokes, the overall speed of the signature, and the pen pressure at each point [4] for verification purposes.
Offline approach / Static Signature Verification Technique: users write their signature on paper, digitize it through an optical scanner or a camera, and the biometric system recognizes the signature by analyzing its shape, length, height, duration, etc. [4][14][29]. Only static features are used [4]; for this, only the
pixel image needs to be estimated. In this sense, signature verification becomes a typical pattern
recognition task. The task of signature authentication can be narrowed to drawing the threshold of the
range of genuine variation [4].
Offline systems are more applicable and easier to deploy than on-line systems, which are more distinctive and difficult to forge [4]; however, offline verification is considered harder to design than on-line verification, due to the lack of dynamic information such as the number of strokes, velocity, etc.
C. Forgeries
The main task of any signature verification system is to detect whether the signature is genuine or
counterfeit. Forgery is a crime that aims at deceiving people. Since actual forgeries are difficult to obtain,
the instrument and the results of the verification depend on the type of the forgery [17]. Basically there
are three types that have been defined [1][14][15][20]:
1. Random forgery: not based on any knowledge of the original signature, i.e. the forger has no information whatsoever about the signature style or the name of the person.
2. Simple forgery: produced knowing the name of the signer, or the general shape, but without having an example of the signer’s signature.
3. Skilled forgery: signed by a person who has had access to a genuine signature for practice [18].
Every type of forgery requires a different recognition approach. Methods based on the static approach are usually used to identify random and simple forgeries, because these methods have been shown to be more suitable for describing characteristics related to the signature's shape [1]. Although a great amount of work has been done on random and simple forgery detection, much work is still needed to tackle the problem of skilled forgery detection; no verification algorithms dealing with skilled forgeries have yet been proposed [20][30].
II. DATA ACQUISITION AND PREPROCESSING
The design of a signature verification system requires the solution of four problems: data acquisition, pre-processing, feature extraction and verification [1,33].
A. Data Acquisition
There are different datasets which are available consisting of different signers including some forgeries.
Some available datasets are given below [15]:
Real DB1: MCYT-75 Signature DB. This dataset includes 75 signers collected at four different Spanish
universities. The corpus includes 15 genuine signatures acquired in two sessions. All the signatures were
acquired with the same inking pen and the same paper templates, over the WACOM Intuos A6 pen
tablet. The paper templates were scanned at 600 dpi with 256 grey levels. The database is distributed by
the Biometric Recognition Group-ATVS from UAM1.
Real DB2: GPDS-960 Signature DB [16]. This dataset contains 24 genuine signatures from 881
individuals acquired in one site in just one session. For the current work, only the first 350 users of the
database were considered in the experiments. Each sheet provided two different box sizes for the
signature. The sheets were scanned at 600 dpi with 256 grey levels. The database is distributed by the
Grupo Procesado Digital de Señales (GPDS) of the ULPGC2.
Synthetic DB1: SSig-DB 1-Ink. This dataset was produced following the proposed synthetic off-line
signature generation method, and comprises 30 samples of 350 synthetic signers. All samples were
generated with a spot size of 0.35 mm (ballpoint) and the viscous ink. The database may be obtained from
the Biometric Recognition Group-ATVS website.
Synthetic DB2: SSig-DB Multiple Inks. As the SSig- DB 1-Ink this dataset comprises 30 samples of 350
synthetic signers. However, in this case, samples were generated using the 6 standard ballpoint sizes and
three different types of inks. For each signature, both the ballpoint and the ink were randomly selected.
The database may also be obtained from the Biometric Recognition Group-ATVS website.
Figure 1: Flowchart of Signature Recognition System
B. Pre Processing
Image pre-processing represents a wide range of techniques that exist for the manipulation and
modification of images [15]. The following steps are done [1][19][20]:
1) Thresholding: This is the simplest and most readily applicable method for separating objects from the background, and it can be used to distinguish the signature from the background. In the proposed application we are concerned with dark objects on a light background, so a threshold value T, called the brightness threshold, is suitably chosen and applied to the image [21]. After thresholding, the pixels of the signature are 1 and the pixels belonging to the background are 0 [23]. The brightness threshold [24] is chosen so that it satisfies the following conditions:
Suppose image pixels f(x, y) then,
If f(x, y) ≥ T
Then f(x, y) = Background
Else f(x, y) = Object
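The thresholding rule above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the image is assumed to be a plain list-of-lists of grayscale values, and T = 128 is an arbitrary example threshold.

```python
def threshold(image, T):
    """Binarize a grayscale image: dark (object) pixels become 1, light
    (background) pixels become 0, per the rule f(x, y) >= T -> background."""
    return [[0 if p >= T else 1 for p in row] for row in image]

# Tiny example: a dark stroke (values 30-40) on a light background (200+).
img = [[210, 35, 220],
       [205, 40, 215]]
print(threshold(img, 128))  # -> [[0, 1, 0], [0, 1, 0]]
```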
2) Cropping: The image is cropped, to the bounding rectangle of the signature.
3) Color Normalization: Transformation from color to grayscale, and finally to black and white (binary).
Gray colour = (0.299*Red) + (0.587*Green) + (0.114*Blue)
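As a small illustration, the grayscale-then-binary conversion might look like this in Python; the threshold T = 128 is an arbitrary example value, not one prescribed by the paper:

```python
def to_gray(r, g, b):
    """Luma-weighted grayscale value for one RGB pixel
    (weights 0.299, 0.587, 0.114, which sum to 1)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def to_binary(gray, T=128):
    """Final black-and-white step: 1 for dark (signature) pixels, else 0."""
    return 1 if gray < T else 0

g = to_gray(255, 255, 255)  # white pixel
print(g, to_binary(g))
```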
4) Signature Normalization [20]: Irregularities in the image capturing and scanning process cause the dimensions of a signature to fluctuate [21]. The height and width of signatures vary from person to person, and occasionally even the same person may use different sizes of signature [24]. It is therefore necessary to remove the size variation and obtain a standard signature size for all input signatures. Throughout the normalization process, the characteristic ratio between the width and height of a signature is kept intact, and after the process all signatures have the same dimensions [24]. The normalization process uses the following equations:
Xnew = [(Xold-Xmin)/ (Xmax-Xmin)]*M
Ynew = [(Yold-Ymin)/ (Ymax-Ymin)]*M
where,
Xnew, Ynew = pixel coordinates of the normalized signature,
Xold, Yold = pixel coordinates of the original signature,
M = target width/height of the normalized signature
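The normalization equations can be sketched as follows. This is a minimal Python illustration under stated assumptions: the signature is a list of (x, y) pixel coordinates with a non-degenerate bounding box, and M = 100 is an arbitrary target size.

```python
def normalize_coords(points, M=256):
    """Scale signature pixel coordinates into a fixed M x M frame using
    Xnew = (Xold - Xmin) / (Xmax - Xmin) * M, and likewise for Y.
    Assumes Xmax > Xmin and Ymax > Ymin (non-degenerate bounding box)."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    return [((x - xmin) / (xmax - xmin) * M,
             (y - ymin) / (ymax - ymin) * M) for x, y in points]

pts = [(10, 20), (10, 60), (30, 20)]
print(normalize_coords(pts, M=100))  # -> [(0.0, 0.0), (0.0, 100.0), (100.0, 0.0)]
```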
5) Filtering: Images are contaminated by noise stemming from decoding errors or noisy channels. An image may also be degraded by the detrimental effects of illumination and other objects in the environment. The median filter is extensively used for smoothing and restoring images corrupted by noise [24]. In a median filter, a window slides over the image, and for each location of the window, the median intensity of the pixels within it determines the intensity of the pixel positioned at the middle of the window. Compared with the mean filter, the median filter has the attractive property of suppressing impulse noise while preserving edges; because of this feature we recommend this filter in our proposed system [24].
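A minimal sliding-window median filter along these lines might look as follows. This is a pure-Python sketch, not the paper's code; edge pixels are handled by clamping coordinates to the image border, which is one of several common border strategies.

```python
def median_filter(image, k=3):
    """k x k median filter with edge replication: each output pixel is the
    median of the window centred on it (k is assumed odd)."""
    h, w, r = len(image), len(image[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Collect the window, clamping out-of-bounds coords to the border.
            window = [image[min(max(y + dy, 0), h - 1)]
                           [min(max(x + dx, 0), w - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A single bright impulse (255) in a flat region is removed entirely.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy))
```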
6) Thinning [20]: Thinning was introduced to describe the global properties of objects and to reduce the original image to a more compact representation [23]; the Stentiford algorithm is used for the thinning process. Thinning means reducing binary objects or shapes to strokes that are a single pixel wide. It makes the extracted features invariant to image characteristics such as the quality of pen and paper, but thinning a black-and-white image always results in a substantial loss of information.
C. Feature Extraction
In Feature extraction, the essential features are extracted from the original input signature based on the
application, and fluctuate accordingly [23]. The choice of a powerful set of features is crucial in signature
verification systems [22]. The features that are extracted from this phase are used to create a feature
vector which is then used to uniquely characterize a candidate signature [21].
Features for offline signature verification using scanned images can be divided into three types
[1][20][25][31]:
1) Global features describe or identify the signature shape like signature area, signature height-to-width
ratio, slope & slope direction, skewness of signature etc. They are extracted from every pixel that lies
within a rectangle circumscribing the signature. These features do not reflect any local, geometrical, or
topological properties of the signature. Although global features are easily extractable and insensitive to
noise, they are dependent upon the position alignment and highly sensitive to distortion and style
variations and they only deliver limited information for signature verification [21].
2) Local features describe geometrical characteristics such as position, tangent direction, and curvature. They provide rich descriptions of writing shapes and are powerful for practiced writers, but the extraction of consistent local features is still a hard problem [21]. Local-feature approaches are more popular in online verification than in offline verification, because it is much easier to compute local shape and to find corresponding relations in a 1-D sequence than in 2-D images [21].
3) Geometrical and topological features describe the characteristic geometry and topology of a signature and thereby preserve its global and local properties, e.g. the local correspondence of stroke segments used to trace the signature. They have a high tolerance to distortion and style variations, and they can also tolerate a certain degree of translation and rotation.
4) Statistical features are derived from the distribution of pixels of a signature, e.g. statistics of high
gray-level pixels to identify pseudo-dynamic characteristics of signatures. This technique includes the
extraction of high pressure factors with respect to vertically segmented zones (for example, upper, middle
and lower zones) [7] and the ratio of signature width to short- or long-stroke height [8]. The statistical
features take some topological and dynamic information into account and consequently can tolerate
minor distortions and style variations.
5) Mask features provide information about the directions of the lines of the signature, since the angles of a signature show interpersonal variation.
6) Texture features describe pixel positions with respect to a given property and can be processed by a matcher that uses the co-occurrence matrix of the image. They include [23]:
• End points are points where a signature stroke begins or ends.
• Branch points are points where a signature stroke bifurcates into two strokes.
• Crossing points are points where one signature stroke crosses another stroke.
To extract these features, it is necessary to apply the pre-processing techniques like thresholding and
thinning on a gray scale signature image [23].
The features extracted from signatures or handwriting play a vital role in the success of any feature-based
HSV system. If a poorly constructed feature set is used with little insight into the writer’s natural style,
then no amount of modeling or analysis is going to result in a successful system. Further, it is necessary to
have multiple, meaningful features in the input vector to guarantee useful learning by the NN [27].
The properties of “useful” features must satisfy the following three requirements [28]:
1) The writer must be able to write in a standard, consistent way (i.e., not unnaturally fast or slow in
order to produce a particular feature);
2) The writer must be somewhat separable from other writers based on the feature;
3) The features must be environment invariant (remain consistent irrespective of what is being written).
The third point is more relevant to the process of writer identification than HSV, as a person’s signature
is most often a fixed text.
What follows now is a description of each of the features that are extracted from a given signature, as
well as their significance and method of calculation. Each of these features acts as a single input to the
NN. These features are extracted as follows [5,6]:
Center of mass: The signature image is split into two equal parts, and the center of mass is found for each part.
Normalized area of signature: It is the ratio of area of signature image to the area of signature enclosed
in a bounding box. Area of a signature is the number of pixels comprising it.
Aspect Ratio: This is the ratio of the writing length to the writing height. It remains invariant to scaling.
If the user signs in a different size, the height and length will be altered proportionally to retain the aspect
ratio.
Wrinkleless: This is the total number of black pixels in the image after all the pre-processing has been
done. Since this pixel count is a distinctive value, this property of handwriting is used to
distinguish between genuine and forged signatures.
Slope and Slope Direction: To approximate the slope of the signature, the algorithm proposed by
Ammar is used [29]. This algorithm makes use of the thinned image obtained during the pre-processing.
A 3×3 sliding window is used for the calculation; the window is moved from the top-left pixel to the
bottom-right pixel, one pixel at a time, in row-major order [29].
Density of Thinned Image: The density of the thinned image is computed after thinning by the following
formula [30]:
Density of thinned image = No. of non-zero pixels in the thinned image / Total no. of pixels in the thinned
image.
Width to Height Ratio: The width to height ratio is the ratio of the range of x coordinates to the range of y
coordinates [30]. The formula for calculating the width to height ratio is given as:
Width to Height Ratio = (Xmax - Xmin) / (Ymax - Ymin)
where Xmax and Xmin are the maximum and minimum x coordinates of the non-zero pixels, and Ymax and
Ymin are the maximum and minimum y coordinates of the non-zero pixels [30].
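The two formulas above can be sketched directly from a thinned binary image (function names are illustrative; a NumPy array with non-zero stroke pixels is assumed):

```python
import numpy as np

def density_of_thinned(img):
    # density = non-zero (stroke) pixels / total pixels in the thinned image
    return np.count_nonzero(img) / img.size

def width_to_height_ratio(img):
    # range of x coordinates over range of y coordinates of non-zero pixels
    ys, xs = np.nonzero(img)
    return (xs.max() - xs.min()) / (ys.max() - ys.min())
```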
Skewness: Skewness is a measure of asymmetry. It allows us to determine how the curves in each segment
of the signature are shaped. The proportion of this distortion is then calculated and extracted, and this
percentage is compared against those extracted from the other images [30].
Signature Duration: The time taken to perform a signature is perhaps the single most discriminatory
feature in HSV. A reported study found that 59% of forgeries can be rejected solely on the basis of the
signature duration differing from the mean by more than 20%.
Pen-Down Ratio: This is the ratio of the pen-down time to the total writing time. This feature does not
undergo a large amount of variation when signing, irrespective of the writer’s mood or emotions. In
addition, it is very difficult to forge as there is no way of determining the pen-down ratio from an off-line
writing copy. Calculation is performed by removing leading and trailing zeroes from the captured data,
then taking the ratio of the number of non-zero points to the total number of points.
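The calculation described above can be sketched as follows, assuming the captured data is a sequence of pressure samples that are zero while the pen is lifted (the representation is an assumption; tablet APIs vary):

```python
def pen_down_ratio(pressure):
    """Trim leading and trailing zeros from the pressure samples, then take
    the ratio of pen-down (non-zero) samples to all remaining samples.
    Assumes at least one non-zero sample is present."""
    first = next(i for i, p in enumerate(pressure) if p)
    last = max(i for i, p in enumerate(pressure) if p)
    body = pressure[first:last + 1]
    return sum(1 for p in body if p) / len(body)
```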
Horizontal Length: This is the horizontal distance measured between the two most extreme points in the
x direction (often simply the distance between the first point captured and the last point captured). Any
fragments such as 't' crossings or 'i' dottings are excluded (such fragments are far less stable, and individual
traits such as extravagant 't' crossings can cause high variability in this feature). The horizontal length
tends to remain stable with a practiced word and particularly with a signature, irrespective of the presence
of a bounding box, horizontal line or even with no line present.
Number of “pen-ups”: This indicates the number of times the pen is lifted while signing after the first
contact with the tablet and excluding the final pen-lift. This is highly stable and almost never changes in
an established signature. This can be a difficult feature for a forger to discern from an off-line copy of the
signature.
Cursivity: This is a number normalized to between zero and one that represents the degree to which a
writer produces isolated handprint characters or fully-connected cursive script. The higher the cursivity
value, the more connected the word is. A value of one means that there were no pen-ups over the entire
word, and a value closer to zero means that the writing was mostly printed rather than cursive. However,
the drawback with this approach is that it requires a priori knowledge of how many letters there are in the
word being written.
Top Heaviness: This is a measure of the proportion of the signature that lies above the vertical midpoint.
The only real issue in the calculation of top heaviness is to decide which measure of central tendency is
most indicative of the true vertical midpoint. It would be pointless to use the median in this situation as,
by definition, half of the points would lie above the median and half below. It is therefore a question as to
which of the other two standard central measures (mean or mode) is more appropriate.
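As a minimal sketch of the calculation, taking the mean of the y values as the vertical midpoint (one of the two candidate central measures discussed above; y is assumed to grow upward):

```python
def top_heaviness(points):
    """Proportion of sampled (x, y) points lying strictly above the
    vertical midpoint, here taken as the mean of the y values."""
    ys = [y for _, y in points]
    mid = sum(ys) / len(ys)
    return sum(1 for y in ys if y > mid) / len(ys)
```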
Horizontal Dispersion: This is the same as top heaviness but with respect to the horizontal spread of the
handwriting rather than the vertical spread. Calculation is done in a similar fashion.
Curvature: This is a measure of how “flat” or how “curved” the handwriting is. A high curvature value
means that the writing is more dramatically curved, which is associated with more thorough or
exaggerated completion of handwritten characters. Curvature is slightly susceptible to change depending
on the writer’s mood or demeanor. Curvature is calculated as the ratio of the signature path length to the
word length. The path length is the sum of distances between each consecutive point in the sample so is
generally quite large, of the order of 10,000 pixels. The word length is the physical, or Euclidean,
distance between the captured writing’s first and last point.
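The ratio defined above can be sketched as (the function name is illustrative; points are (x, y) samples in capture order):

```python
import math

def curvature(points):
    """Ratio of the signature path length (sum of distances between
    consecutive points) to the Euclidean distance between the first
    and last captured points."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    word = math.dist(points[0], points[-1])
    return path / word
```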
Average Curvature per Stroke: This is a feature based on the curvature value described above, except
that the curvature value is calculated for each individual stroke in the handwriting sample, then averaged.
The difference between this feature and the global curvature value is that by examining the curvature of
the individual strokes, it is possible to obtain a more insightful measure of the depth of the local curves in
the handwriting.
Number of Strokes: This feature is indicative of how many segments or states the handwriting goes
through during the signature’s production. This feature remains quite stable over a user’s various samples
as even with the natural variations in a user’s signature, the segmentation remains quite similar.
Furthermore, this feature is nontrivial for a forger to reproduce as the segmentation is based purely on the
pen-tip velocity, which is not visible with just a written version of the word.
Mean Ascender Height: This is the mean height of “ascenders” in the handwriting. Ascenders are letters
such as 'd', 'f' and 't' in which a part of the letter extends above the main body of the sample [1]. Formal
detection of ascenders in the body of a signature involves computing the mean of the data, as well as
points at one quarter and three quarters of the maximum height. The ascender’s peaks are the local
maxima in the y direction that are above the three quarter mark. The distance between a local maximum
and the y mean is found and this distance is taken as the height of that ascender.
Mean Descender Depth: Descenders are the opposite of ascenders. They are letters such as ‘g’, ‘y’ and
‘j’ that typically contain parts extending below the main body of the sample (it is possible for an individual
letter to be an ascender and a descender – the letter ‘f’ is sometimes written in this way). Finding the
descender extremities is done in a similar fashion to ascenders and uses the same frequency histogram.
The descender extremities are the local minima in the y direction that fall below the lower quarter of the
sample. The depth value for each extremity is measured as the distance between the local minimum and
the y mean, expressed as a positive integer.
Maximum Height: This is the distance between the lowest point in a word (the lowest descender's depth)
and the highest point in a word (the highest ascender's height). This calculation ignores 'i' dottings and
't' crossings or other such artifacts occurring in the handwriting. Also removed from consideration is the
final trailing stroke in a signature. This feature remains reasonably stable across several written samples.
Average Velocity: This measures how fast the pen-tip is travelling across the tablet's surface. It is
calculated as the mean of all individual velocity values (there is one velocity value for each pair of
consecutive points).
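The mean described above can be sketched as follows, assuming each captured sample is an (x, y, t) tuple with strictly increasing timestamps (the sample format is an assumption):

```python
import math

def average_velocity(samples):
    """Mean of the velocity values, one per pair of consecutive
    (x, y, t) samples: spatial distance divided by elapsed time."""
    vels = []
    for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
        vels.append(math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
    return sum(vels) / len(vels)
```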
Average Absolute Acceleration: This is the absolute value of the acceleration and deceleration
measurements. The average absolute acceleration captures the mean rate of velocity change in both
positive and negative directions.
Maximum Acceleration: While this feature is less stable than some others, the purely dynamic nature and
difficulty in forging still make it a useful characteristic.
Maximum Deceleration: This is the rate at which the pen-tip's velocity decreases as it approaches a
stroke's end.
Handwriting Slant Using All Points: Slant calculation is of great importance in handwriting analysis. It
is not trivial, and several different approaches have been considered. The first approach involves using all
captured writing points. The points are spatially resampled and the angle (expressed as a gradient)
between each pair of consecutive points in the signature is calculated, giving several gradient values. The
slant is given by the mean of these gradient values. 'i' dottings, 't' crossings and other such artifacts are
removed from consideration.
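A minimal sketch of this first approach follows. Note one deliberate deviation: the angles are measured with `atan2` rather than as raw dy/dx gradients, to avoid division by zero on vertical strokes; the original description averages gradient values.

```python
import math

def handwriting_slant(points):
    """Mean angle (radians from the horizontal) between consecutive,
    spatially resampled (x, y) points."""
    angles = [math.atan2(y1 - y0, x1 - x0)
              for (x0, y0), (x1, y1) in zip(points, points[1:])]
    return sum(angles) / len(angles)
```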
Horizontal Velocity: This is the average velocity over the x direction. It measures how fast the signature
moves horizontally and is related to pen-tip velocity, cursivity, horizontal length and acceleration. It is
impossible for a potential forger to discern the horizontal velocity from an off-line copy of the writing.
This feature is calculated as the ratio of horizontal distance to the duration in which the sample’s body
was produced.
Degree of Parallelism: This refers to the extent to which slant remains consistent throughout the entire
sample. This is a feature intrinsic to a writer’s natural handwriting and is a characteristic naturally
produced without conscious thought. This feature’s main problem is that users tend to write with higher
parallelism if they are forcing themselves to write slower and more deliberately. Calculation is based on
the handwriting slant. Long strokes are extracted and the standard deviation of the slant of the long
strokes is obtained (a higher value indicates a lower slant consistency).
Baseline Consistency: The baseline of a single handwritten word is the line-of-best-fit drawn through the
bottom of all non-descender characters. The baseline is analogous to the position of the line when a user
is signing on a ruled or dotted line. This is another very personal feature and is particularly representative
of a signer’s natural tendency when no ruled line is present as a guide. Some writers are naturally more
irregular or “sloppy” when forming their baseline than others. Calculation is done by extracting the set of
minima from all non-descender characters (i.e., y-minima that fall below the mean of the y data and above
the lower quarter).
Circularity: This feature tries to capture how “round” or “distended” the handwritten characters are. It is
measured as the ratio of the area to the horizontal length. This feature is one that is quite difficult for a
forger to judge so proves useful in preventing false acceptances. The area of signatures with multiple
components is found by summing each of the independently calculated component areas. Circularity is a
computationally expensive feature as it requires several iterations through the x and y profiles.
Middle-Heaviness: This is the percentage of the handwritten sample's bounding box that is interior to the
signature itself. It measures the concentration of the handwriting around the midpoint. Calculation is
undertaken by finding the signature’s area (as performed previously) and dividing this value by the area of
the bounding box. The bounding box is a rectangle drawn around the sample using the two extremities in
each of the x and y data streams (with artifacts removed).
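The calculation above reduces to a short routine (a sketch assuming a NumPy image whose artifacts have already been removed):

```python
import numpy as np

def middle_heaviness(img):
    """Signature area (non-zero pixel count) divided by the area of the
    bounding box drawn around the non-zero pixels."""
    ys, xs = np.nonzero(img)
    box = (xs.max() - xs.min() + 1) * (ys.max() - ys.min() + 1)
    return np.count_nonzero(img) / box
```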
Component Physical Spacing: The average spacing between the components is again indicative of a
writer’s natural style and is very stable across multiple instances of the signature. Calculation involves
taking the Euclidean distance between the last point sampled in a component and the first point sampled
in the following component (if any). This value is calculated for each pen-up instance and averaged to
obtain the final feature value.
Component Time Spacing: This is the average duration of a pen-up instance in a signature (often
referred to as pen-up time). It is slightly less stable than physical component spacing, but it is impossible
for a forger to copy this feature from an off-line signature image.
D. Signature Verification Approaches [19]
1. Hidden Markov Model[1]: Hidden Markov Model (HMM) is one of the most widely used models for
sequence analysis in signature verification. A well chosen set of feature vectors for HMM could lead to
the design of an efficient signature verification system. If the model assigns the test signature a
probability above a chosen threshold, the signature is accepted as coming from the original signer;
otherwise it is rejected. There are various topologies for the HMM models, each of which adapts to one
particular characteristic. Yacoubi et al. proposed a basic and robust system for the verification of static or
offline signatures; AERs of 0.46% and 0.91% are reported in their experiments.
Justino et al. [6] presented a robust system for off-line signature verification using simple features,
different cell resolutions, and multiple codebooks in an HMM framework. An FRR of 2.83% and
FARs of 1.44%, 2.50%, and 22.67% are reported for random, casual, and skilled forgeries,
respectively.
2. Neural Networks Approach[1]: The main reasons for the widespread usage of neural networks
(NNs) in pattern recognition are their power and ease of use. There are many ways to structure the NN
training, but a very simple approach is to firstly extract a feature set representing the signature (details
like length, height, duration, etc.), with several samples from different signers. The second step is for the
NN to learn the relationship between a signature and its class (either “genuine” or “forgery”). Once this
relationship has been learned, the network can be presented with test signatures that can be classified as
belonging to a particular signer.
Alan McCabe et al. [10] proposed a method for verifying handwritten signatures by using NN
architecture. Various static (e.g., height, slant, etc.) and dynamic (e.g., velocity, pen tip pressure, etc.)
signature features are extracted and used to train the NN. Several Network topologies are tested and
their accuracy is compared. The resulting system performs reasonably well with an overall error rate of
3.3% being reported for the best case.
Rasha Abbas initially investigated the suitability of backpropagation neural networks for offline
signature verification; later, in [4], the suitability of multilayered feed-forward neural networks was
investigated.
3. Template Matching Approach [1][9]: Fang et al. proposed two methods for the detection of skilled
forgeries using template matching. One method is based on the optimal matching of the one-dimensional
projection profiles of the signature patterns and the other is based on the elastic matching of the strokes
in the two-dimensional signature patterns.
Given a test signature to be verified, the positional variations are compared with the statistics of the
training set and a decision based on a distance measure is made. Both binary and gray-level signature
images are tested. The average verification error rate of 18.1% was achieved when the local peaks of the
vertical projection profiles of gray-level signature images were used for matching and with the full
estimated covariance matrix incorporated.
4. Statistical Approach [1]: Using statistical knowledge, the relation, deviation, etc. between two or more
data items can easily be determined. To verify an entered signature against an average signature,
obtained from a set of previously collected signatures, this approach uses correlation to find the amount
of divergence between them. A Bayesian model for off-line signature
verification involving the representation of a signature through its curvature is developed by
McKeague[11]. The prior model makes use of a spatial point process for specifying the knots in an
approximation restricted to a buffer region close to a template curvature, along with an independent time
warping mechanism. In this way, prior shape information about the signature can be built into the
analysis. The observation model is based on additive white noise superimposed on the underlying
curvature. The approach is implemented using Markov chain Monte Carlo (MCMC) algorithm and
applied to a collection of documented instances of Shakespeare’s signature.
The algorithm proposed in [5] offers the flexibility of choosing the number of signatures used for testing,
from which a signature containing the specialized mean feature set is generated from the test signature
set. After collecting the signatures for testing, the algorithm converts them into a set of 2D arrays of
binary values, 0 and 1.
5. Support Vector Machines (Structural/Syntactic Approach) [1]: The key idea in structural and
syntactic pattern recognition is the representation of patterns by means of symbolic data structures such
as strings, trees, and graphs. In order to recognize an unknown pattern, its symbolic representation is
compared with a number of prototypes stored in a database. Structural features use modified direction
and transition distance feature (MDF) which extracts the transition locations and are based on the
relational organization of low-level features into higher-level structures. The Modified Direction Feature
(MDF) [12] utilizes the location of transitions from background to foreground pixels in the vertical and
horizontal directions of the boundary representation of an object.
Nguyen et al. [2] present a new method in which structural features are extracted from the signature's
contour using MDF and its extended version, the Enhanced MDF (EMDF); further, two neural
network-based techniques and Support Vector Machines (SVMs) are investigated and compared for the
process of signature verification. The classifiers were trained using genuine specimens and other
randomly selected signatures taken from a publicly available database. A distinguishing error rate (DER)
of 17.78% was obtained with the SVM whilst keeping the false acceptance rate for random forgeries
(FARR) below 0.16%.
6. Wavelet Based Approach [1]: A novel approach to off-line signature verification is proposed by Wei
Tian et al. [13]. Both static and pseudodynamic features are extracted as original signal, which are
processed by Discrete Wavelet Transform (DWT) and converted into stable features in each sub-band
which can enhance the difference between a genuine signature and its forgery. During the training phase,
the proposed fuzzy net is trained with genuine signatures only. The signatures with the maximal ratio of
the mean value of the similarity to the standard deviation are selected as the training samples from a set of
genuine signatures. The verification scheme is achieved by combining the proposed fuzzy net output in
each sub-band level. The entire system was tested by using two databases of English and Chinese
signatures, and the average error rates of 12.57% and 13.96% were obtained, respectively.
Each type of forgery requires a different verification approach. Hence it becomes mandatory to compare
these approaches with respect to various levels of forgeries [1].
• Template matching is suitable for rigid matching to detect genuine signatures; however, these
methods are not very efficient in detecting skilled forgeries.
• Neural networks are among the most commonly used classifiers for pattern recognition problems.
This approach offers a significant advantage that each time we want to add a set of signatures (a new
person) to the systems database; we only have to train three new small neural networks (one for each
set of features) and not the entire neural network. This approach gives very promising results with
extremely low FAR and FRR.
• Methods based on the statistical approach are generally used to identify random and simple
forgeries. The reason for this is that these methods have proven to be more suitable for describing
characteristics related to the signature shape. For this purpose, the graphometry-based approach has
many features that can be used, such as calibration, proportion, guideline and base behaviors. In
addition, other features have been applied in this approach, like pixel density, pixel distributions.
However, static features do not adequately describe the handwriting motion and are therefore not
sufficient to detect skilled forgeries.
• When using HMMs for signature verification, the simple and random forgery error rates have been
shown to be low and close to each other, but the Type II error rate for skilled forgery signatures is
high.
• Structural techniques are suitable for detecting genuine signatures and targeted forged signatures
however, this approach is exhaustive due to demand for large training sets and computational efforts.
III. ARTIFICIAL NEURAL NETWORKS
Neural networks (NNs) have been a fundamental part of computerised pattern recognition tasks for more
than half a century, and continue to be used in a very broad range of problem domains [26]. Concentrated
efforts at applying NNs to HSV have been undertaken for over a decade with varying degrees of success.
The main attractions include:
1) Expressiveness: NNs are an attribute-based representation and are well-suited for continuous inputs
and outputs. The class of multi-layer networks as a whole can represent any desired function of a set of
attributes, and signatures can be readily modeled as a function of a set of attributes.
2) Ability to generalise: NNs are an excellent generalization tool (under normal conditions) and are a
useful means of coping with the diversity and variations inherent in handwritten signatures.
3) Sensitivity to noise: NNs are designed to simply find the best fit through the input points within the
constraints of the network topology (using nonlinear regression). As a result, NNs are very tolerant of
noise in the input data.
4) Graceful degradation: NNs tend to display graceful degradation rather than a sharp drop-off in
performance as conditions worsen.
5) Execution speed: The NN training phase can take a large amount of time. In HSV this training is a
one-off cost undertaken off-line (i.e., rarely performed while a user waits for verification results).
Figure 2: Simple Neural Network
Dr. Robert Hecht-Nielsen defines a neural network as: “a computing system made up of a number of
simple, highly interconnected processing elements, which process information by their dynamic state
response to external inputs” [20]. NNs are usually presented as systems of interconnected
"neurons" that compute values from inputs by passing information through the network. Neural
networks are typically structured in layers, each consisting of a number of interconnected
'nodes' which hold an 'activation function'. Patterns are presented to the network by means of the 'input
layer', which communicates with one or more 'hidden layers' where the actual processing is done through a
system of weighted 'connections'. The hidden layers then connect to an 'output layer' where the
final output of the system is produced.
Artificial neural networks (ANNs) are not sequential or necessarily deterministic. There are no
intricate central processors; rather, there are numerous simple ones which generally
do nothing more than take the weighted sum of their inputs from other processors. ANNs do not
execute programmed instructions; they respond in parallel to the pattern of inputs presented to them.
There are no separate memory addresses in which to store data. Instead, information is contained in the
overall activation 'state' of the network. 'Knowledge' is represented by the network itself, which is quite
literally more than the sum of its individual components. Although there are many different types of
learning rules used by neural networks, this discussion is concerned with the delta rule. The delta rule is
frequently used by the most common class of ANNs, called 'backpropagation neural networks'. With the
delta rule, 'learning' is a supervised process that takes place with each cycle through a forward activation
flow of outputs and the backward error propagation of weight adjustments. The Levenberg-Marquardt
algorithm [19] is one such method that trains a neural network using error backpropagation [20].
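As a minimal illustration of the delta rule mentioned above, a single sigmoid unit can be updated as follows (function name, learning rate, and data layout are illustrative, not from the paper):

```python
import math

def delta_rule_step(weights, inputs, target, lr=0.1):
    """One delta-rule update: forward pass through a sigmoid unit, then
    adjust each weight by the back-propagated error times its input."""
    net = sum(w * x for w, x in zip(weights, inputs))
    out = 1.0 / (1.0 + math.exp(-net))           # sigmoid activation
    delta = (target - out) * out * (1.0 - out)   # error * sigmoid derivative
    return [w + lr * delta * x for w, x in zip(weights, inputs)]
```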
IV. PROPOSED WORK
A multilayer perceptron with a single hidden layer of approximately 15-30 nodes, learning rates ranging
from 0.05 to 0.95 in increments of 0.05, and a sigmoidal activation function is proposed. The
backpropagation (supervised) training algorithm would be used [6]. These parameters might vary with
experimentation.
After presenting the feature vector of a test signature, if the output neuron generates a value close to +1
the test signature is declared genuine, and if it generates a value close to -1 it is declared forged [20,22].
False Acceptance Rate (FAR), False Rejection Rate (FRR), Overall Error Rate (OER) and Correct
Classification Rate (CCR) are the four parameters used for measuring the performance of the system. All
these parameters are calculated [22] by the following equations [20]:
FAR = (Number of forgeries accepted / Number of forgeries tested) * 100
FRR = (Number of originals rejected / Number of originals tested) * 100
OER = FAR + FRR
CCR = (Number of samples correctly recognized / Number of samples tested) * 100
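These measures can be sketched as a small routine (taking OER as the sum FAR + FRR; the function name and argument layout are illustrative):

```python
def error_rates(forgeries_accepted, forgeries_tested,
                originals_rejected, originals_tested):
    """Return (FAR, FRR, OER, CCR) as percentages."""
    far = forgeries_accepted / forgeries_tested * 100
    frr = originals_rejected / originals_tested * 100
    oer = far + frr
    correct = ((forgeries_tested - forgeries_accepted)
               + (originals_tested - originals_rejected))
    ccr = correct / (forgeries_tested + originals_tested) * 100
    return far, frr, oer, ccr
```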
V. CONCLUSION
The recognition and verification ability of the system can be increased by using additional features in the
input data set, reducing to a minimum the cases of forgery in business transactions [19,20,32]. Hence a
Handwritten Signature Verification System using Artificial Neural Networks is proposed for classification
of random, simple and skilled forgeries, using a larger number of features for better accuracy.
REFERENCES
[1] Meenakshi S. Arya, Vandana S. Ianmdar, “A Preliminary Study on Various Off-line Hand Written Signature
Verification Approaches” International Journal of Computer Applications (0975 – 8887) Volume 1 – No. 9 2010.
[2] Vu Nguyen; Blumenstein, M.; Muthukkumarasamy V.; Leedham G., “Off-line Signature Verification Using
Enhanced Modified Direction Features in Conjunction with Neural Classifiers and Support Vector Machines”, in
Proc. 9th Int Conf on document analysis and recognition, vol 02, pp. 734-738, Sep 2007.
[3] M. C. Fairhurst, “Signature verification revisited: promoting practical exploitation of biometric technology”,
Electronics & Communication Engineering Journal, December 1997.
[4] Rasha Abbas and Victor Ciesielski, “A Prototype System for Off-line Signature Verification Using
Multilayered Feed forward Neural Networks,” February 1995.
[5] Bhattacharyya Debnath, Bandyopadhyay Samir Kumar, Das, Poulami, Ganguly Debashis, Mukherjee
Swarnendu, “Statistical approach for offline handwritten signature verification”, Journal of Computer Science
March 01, 2008.
[27] P. Gallinari, S. Thiria, F. Badran and F. Fogelman-Soulie. On the Relations between Discriminant Analysis
and Multilayer Perceptrons. Neural Networks, Vol. 4, pp 349–360, 1991.
[28] E. Zois and V. Anastassopoulos. Methods for Writer Identification.Proceedings of the Third IEEE
International Conference on Electronics, Circuits and Systems (ICECS), 1996.
[29] “An Introduction to Artificial Neural Systems” by Jacek M. Zurada, West Publishing Company 1992.
[30] H. Baltzakis, N. Papamarkos, “A new signature verification technique based on a two-stage neural network
classifier”, Engineering Applications of Artificial Intelligence, 14 (2001) 95-103.
[31] M. Arathi, A. Govardhan, “An efficient offline signature verification system”, International Journal of
Machine Learning and Computing, Vol. 4, No. 6, December 2014.
[32] H.B.Kekre, V.A.Bharadi, S.Gupta, A.A.Ambardekar, V.B.Kulkarni, “Off-Line Signature Recognition Using
Morphological Pixel Variance Analysis”, ICWET’10.