This document presents a method for recognizing handwritten Kannada characters using structural features and a support vector machine (SVM) classifier. The method extracts structural features such as perimeter, area, and eccentricity from preprocessed character images, and these features are used to train an SVM classifier. On average, the method achieved 89.84% recognition accuracy for handwritten Kannada vowels and 85.14% for consonants. The method works in two phases: a training phase, in which the SVM is trained on the extracted structural features, and a testing phase, in which unknown characters are classified based on their structural features and the trained SVM model.
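As a rough illustration of the kind of structural features involved, the sketch below computes area, perimeter, and eccentricity from a tiny binary image in plain Python. The exact preprocessing and feature definitions of the paper are not given here; the eccentricity formula follows the common moment-based definition (as in MATLAB's regionprops), which is an assumption.

```python
import math

def structural_features(img):
    """Area, perimeter and eccentricity of a binary character image.

    img is a list of rows of 0/1 pixels; these are the kinds of structural
    features the abstract describes feeding to an SVM.
    """
    h, w = len(img), len(img[0])
    on = [(y, x) for y in range(h) for x in range(w) if img[y][x]]
    area = len(on)

    # Perimeter: count foreground pixels with at least one 4-neighbour off/outside.
    def bg(y, x):
        return y < 0 or y >= h or x < 0 or x >= w or not img[y][x]
    perimeter = sum(
        1 for (y, x) in on
        if bg(y - 1, x) or bg(y + 1, x) or bg(y, x - 1) or bg(y, x + 1)
    )

    # Eccentricity from second-order central moments:
    # sqrt(1 - lambda_min / lambda_max) of the covariance eigenvalues.
    cy = sum(y for y, _ in on) / area
    cx = sum(x for _, x in on) / area
    mu20 = sum((x - cx) ** 2 for _, x in on) / area
    mu02 = sum((y - cy) ** 2 for y, _ in on) / area
    mu11 = sum((x - cx) * (y - cy) for y, x in on) / area
    common = math.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    lmax = (mu20 + mu02 + common) / 2
    lmin = (mu20 + mu02 - common) / 2
    ecc = math.sqrt(1 - lmin / lmax) if lmax > 0 else 0.0
    return area, perimeter, ecc

# A 1x5 horizontal bar is maximally elongated, so eccentricity is 1.
bar = [[1, 1, 1, 1, 1]]
a, p, e = structural_features(bar)
```

In a real system these three numbers would be one small part of a larger feature vector handed to the SVM.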
An Optical Character Recognition for Handwritten Devanagari Script (IJERA Editor)
Optical character recognition is the process of recognizing characters from scanned documents, and many OCR systems are now available in the market. However, most of these systems work for Roman, Chinese, Japanese, and Arabic characters. There is not yet a sufficient body of work on Indian-language scripts such as Devanagari, so this paper presents a review of optical character recognition for handwritten Devanagari script.
Handwritten character recognition (HCR) is the ability of a computer to receive and interpret handwritten input. It is one of the active and challenging research areas in the field of pattern recognition, which is the process of taking in raw data and performing an action based on the category of the pattern; HCR is one of its best-known applications. Handwriting recognition, especially for Indian languages, is still in its infancy because little work has been done on it. This paper discusses an idea for recognizing Kannada vowels using chain code features. Kannada is a South Indian language. For any recognition system, feature extraction is an important part, and a proper feature extraction method can increase the recognition rate. In this paper, a chain-code-based feature extraction method is investigated for developing an HCR system. Chain codes are based on 4-neighborhood or 8-neighborhood methods; a chain code is a sequence of direction codes along a character, connected back to a starting point, and is often used in image processing. In this paper, the 8-neighborhood method has been implemented, which allows generation of eight different codes for each character. These codes have been used as features of the character image and were subsequently used for training and testing K-nearest neighbor (KNN) classifiers. The accuracy reached 100%.
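To make the chain-code idea concrete, here is a minimal sketch that quantizes the moves along a stroke into 8-direction Freeman codes. The coordinate convention and the stroke representation (a list of pen points) are illustrative assumptions, not the paper's exact pipeline.

```python
# Freeman 8-direction codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
# (x grows to the right, y grows upward in this convention).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Quantize successive moves along a stroke into Freeman codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx = (x1 > x0) - (x1 < x0)   # sign of the step, so longer moves
        dy = (y1 > y0) - (y1 < y0)   # still map to one of the 8 directions
        if (dx, dy) != (0, 0):
            codes.append(DIRS[(dx, dy)])
    return codes

# An "L"-shaped stroke: down three steps, then right two steps.
stroke = [(0, 3), (0, 2), (0, 1), (0, 0), (1, 0), (2, 0)]
# chain_code(stroke) → [6, 6, 6, 0, 0]
```

The resulting code sequence is what downstream steps (here, the KNN classifier) consume as a character description.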
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
FREEMAN CODE BASED ONLINE HANDWRITTEN CHARACTER RECOGNITION FOR MALAYALAM USI... (acijjournal)
Handwritten character recognition is the conversion of handwritten text into machine-readable and editable form; online character recognition deals with live conversion of characters as they are written. Malayalam is a language spoken by millions of people in the state of Kerala and the union territories of Lakshadweep and Pondicherry in India. It is written mostly in the clockwise direction and consists of loops and curves. The method aims at training a simple three-layer neural network using the backpropagation algorithm.
Freeman codes are used to represent each character as a feature vector. These feature vectors act as inputs to the network during the training and testing phases. The output is the character expressed in Unicode format.
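One common way to turn a variable-length Freeman code sequence into the fixed-length feature vector a neural network needs is a direction histogram; the sketch below shows that step. Whether this paper uses a histogram or the raw sequence is not stated, so treat this as an assumed illustration.

```python
def freeman_histogram(codes):
    """Turn a variable-length Freeman code sequence into a fixed 8-value
    feature vector: the fraction of moves in each of the 8 directions.
    A fixed length is what lets the codes feed a standard input layer."""
    hist = [0.0] * 8
    for c in codes:
        hist[c] += 1
    n = len(codes)
    return [v / n for v in hist] if n else hist

# A stroke with three south moves and two east moves:
vec = freeman_histogram([6, 6, 6, 0, 0])
```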
Optical character recognition is one of the emerging research topics in the field of image processing, with extensive applications in pattern recognition. The Odia handwritten script is an area of major research interest because Odia is one of the oldest and most widely used languages in the state of Odisha, India. Odia text is usually handwritten and is typically captured by a scanner into machine-readable form. Several recognition techniques have evolved for various languages, but the writing pattern of Odia characters is curve-like in appearance, which makes recognition more difficult. In this article we present a novel approach to Odia character recognition based on an angle-based symmetric-axis feature extraction technique that gives high recognition accuracy. This empirical model generates unique angle-based boundary points on every skeletonised character image. These points are interconnected in order to extract row and column symmetry axes. We extract a feature matrix containing the mean distance of rows, mean angle of rows, mean distance of columns, and mean angle of columns, measured from the centre of the image to the midpoint of the symmetric axis. The system applies 10-fold validation to random forest (RF) and SVM classifiers on the feature matrix. A standard database of 200 images for each of 47 Odia characters and 10 Odia numerals was used for simulation. The SVM and RF simulations yield 96.3% and 98.2% accuracy on the NIT Rourkela Odia character database and 88.9% and 93.6% on the ISI Kolkata Odia numeral database.
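The 10-fold validation mentioned above can be sketched without any library: split the sample indices into k folds and rotate which fold is held out. This is a generic illustration of the protocol, not the authors' code.

```python
def k_fold_indices(n, k=10):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    over n samples; fold sizes differ by at most one."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    folds = []
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    for i, test in enumerate(folds):
        # Train on every fold except the held-out one.
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(20, 10))
```

Each classifier (SVM, RF) is then trained and scored once per split, and the 10 scores are averaged.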
Angular Symmetric Axis Constellation Model for Off-line Odia Handwritten Char... (IJAAS Team)
Performance Comparison between Different Feature Extraction Techniques with S... (IJERA Editor)
This paper presents offline handwritten character recognition for the Gurmukhi script, a major script of India. Much work has been done on languages such as English, Chinese, Devanagari, and Tamil. Gurmukhi is the script of the Punjabi language, which is widely spoken across the globe. This paper focuses on improving character recognition accuracy. The dataset includes 7000 samples collected in different writing styles, divided into a training set of 5600 samples and a test set of 1400. The evaluated feature extraction techniques include distance profile, diagonal features, and background direction distribution (BDD). These features were classified using an SVM classifier, and a performance comparison was made using the one classifier with the different feature extraction techniques. The experiments show that the diagonal feature extraction method achieved the highest recognition accuracy, 95.39%, among the evaluated methods.
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
Driving cycle development for Kuala Terengganu city using k-means method (IJECEIAES)
The driving cycle plays a vital role in the production and performance evaluation of vehicles. A driving cycle is a representative speed-time profile of the driving behavior of a specific region or city. Many countries have developed their own driving cycles, including the United States of America, the United Kingdom, India, China, Ireland, Slovenia, and Singapore. The objectives of this paper are to characterize and develop a driving cycle for Kuala Terengganu city at 8.00 a.m. along five different routes using the k-means method, to analyze fuel rate and emissions using the developed driving cycle, and to compare the fuel rate and emissions of conventional engine vehicles, parallel plug-in hybrid electric vehicles, series plug-in hybrid electric vehicles, and single split-mode plug-in hybrid electric vehicles. The methodology involves three major steps: route selection, data collection using an on-road measurement method, and driving cycle development using the k-means method. MATLAB was used as the computing platform to produce the best driving cycle, and the AUTONOMIE vehicle system simulation tool was used to analyze fuel rate and gas emissions. Based on the findings, it can be concluded that Route C and the single split-mode PHEV powertrain used and emitted the least fuel and emissions.
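A minimal k-means on scalar speed samples, of the kind used to group driving data before assembling a cycle, can be sketched as follows. The seeding and the one-dimensional simplification are illustrative choices; real driving-cycle work clusters multi-dimensional micro-trip parameters.

```python
def kmeans_1d(values, k, iters=20):
    """Plain k-means on scalar values (e.g. per-second speeds).
    Requires k >= 2; centroids are seeded spread across the sorted data."""
    s = sorted(values)
    centroids = [s[(len(s) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Idle-speed samples vs. cruising-speed samples separate cleanly:
centroids, clusters = kmeans_1d([0, 1, 2, 30, 31, 32], 2)
```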
A New Method for Identification of Partially Similar Indian Scripts (CSCJournals)
In this paper, the texture symmetry/non-symmetry factor has been exploited to obtain the script texture using Bi-Wavelants, which give the symmetry/non-symmetry factor in terms of the third cumulant, while the bi-spectra give the quadratically coupled frequencies. The envelope of the bi-spectra (Bi-Wavelant) provides an accurate characterization of the symmetry/non-symmetry factor of the script texture. Classification is performed by an SVM trained on the roots of the envelope, which are found using the Newton-Raphson technique. The method successfully identified 8 Indian scripts: Devanagari, Urdu, Gujarati, Telugu, Assamese, Gurmukhi, Kannada, and Bangla. The method can segment any kind of document with very good results, and the identification results are excellent.
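The Newton-Raphson step used to find the roots of the envelope is a standard iteration; a minimal sketch:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Find a root of f near x0 by iterating x <- x - f(x)/df(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)
    return x

# Root of x^2 - 2 starting from 1.0 converges to sqrt(2).
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

In the paper the function would be the fitted bi-spectral envelope rather than this toy polynomial.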
Fragmentation of handwritten touching characters in Devanagari script (Zac Darcy)
Character segmentation of handwritten words is a difficult task because of varied writing styles and complex structural features. Segmentation of handwritten text in the Devanagari script is an uphill task: the presence of the header line, overlapped characters in the middle zone, and half characters makes the segmentation process more difficult, and interline space and noise can make line fragmentation difficult as well. Without separating touching characters, it is hard to identify the characters, so fragmentation of the touching characters in a word is necessary. We therefore devised a technique in which the first step is preprocessing of a word; we then identify the joint points, form bounding boxes around all vertical and horizontal lines, and finally fragment the touching characters on the basis of their height and width.
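The height/width-based fragmentation step can be illustrated with a toy heuristic: if a component's bounding box is much wider than it is tall, cut it at the thinnest column near the middle. The thresholds below are placeholders, not the paper's values.

```python
def split_touching(img):
    """Sketch of the final fragmentation step on a binary component.
    The 1.2 aspect threshold and middle-half search window are
    illustrative assumptions."""
    h, w = len(img), len(img[0])
    if w <= 1.2 * h:
        return [img]                      # looks like a single character
    # Vertical projection profile: ink per column.
    col = [sum(img[y][x] for y in range(h)) for x in range(w)]
    lo, hi = w // 4, 3 * w // 4           # search the middle half only
    cut = min(range(lo, hi), key=lambda x: col[x])
    left = [row[:cut] for row in img]
    right = [row[cut:] for row in img]
    return [left, right]

# Two 3x3 blobs joined by a one-pixel bridge in the middle row:
word = [
    [1, 1, 1, 0, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 1, 1],
]
parts = split_touching(word)
```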
Handwritten character recognition is one of the most challenging and ongoing areas of research in the field of pattern recognition. HCR research is mature for foreign languages like Chinese and Japanese, but the problem is much more complex for Indian languages. It becomes even more complicated for South Indian languages due to their large character sets and the presence of vowel modifiers and compound characters. This paper provides an overview of important contributions and advances in offline as well as online handwritten character recognition of the Malayalam script.
Character Recognition (Devanagari Script) (IJERA Editor)
Character recognition has attracted major interest in research and practical applications for analyzing and studying characters in different languages using an image as input. In this paper the user writes a Devanagari character using the mouse as a plotter, and the corresponding character is saved as an image. This image is processed using optical character recognition, in which location, segmentation, and pre-processing of the image are done. A neural network is then used to identify the characters through the remaining steps of OCR, i.e., feature extraction and post-processing. The entire process is implemented in MATLAB.
OCR-THE 3 LAYERED APPROACH FOR DECISION MAKING STATE AND IDENTIFICATION OF TE... (ijaia)
Optical character recognition is the digitization of handwritten, typewritten, or printed text into machine-encoded form, and it has a wealth of applications in everyday life: OCR is already used successfully in finance, legal, banking, health care, and home appliances. India is a multicultural country with a rich literary and scriptural tradition. Telugu is a southern Indian language; it is a syllabic language in which each script symbol represents a complete syllable, formed with conjunct mixed consonants. Recognition of mixed conjunct consonants is more difficult than recognition of normal consonants because of variation in written strokes and conjunct mixing with preceding and following consonants. This paper proposes a layered methodology to recognize characters, conjunct consonants, and mixed conjunct consonants, and presents an efficient classification of handwritten and printed conjunct consonants. An advanced fuzzy-logic controller takes the text, written or printed, as images collected from a scanned file or digital camera; processes the image by examining intensity against a quality ratio; extracts the characters according to that quality; and then checks character orientation, alignment, thickness, base, and print ratio. Input characters are classified in two ways: the first represents normal consonants and the second represents conjunct consonants. The digitized text is divided into three layers: the middle layer represents normal consonants, and the top and bottom layers represent mixed conjunct consonants. Recognition starts from the middle layer and then continues to check the top and bottom layers. A character is treated as a conjunct consonant when any symbolic character is detected in the top or bottom layer of the current base character; otherwise it is treated as a normal consonant. Post-processing is applied to all three layers and concentrates on the readability and compatibility of the image text; if the text is not readable, the process is repeated. The recognition process includes slant correction, thinning, normalization, segmentation, feature extraction, and classification, and the development of the algorithm covers the pre-processing, segmentation, character recognition, and post-processing modules. The main objectives of this paper are to develop classification and identification of different prototypes for written and printed consonants, conjunct consonants, and symbols based on a three-layered approach with different measurable areas using fuzzy logic, and to determine suitable features for handwritten character recognition.
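The three-layer division described above can be sketched as a simple horizontal zoning of the line image. The zone fractions below are illustrative placeholders; the paper derives the layers from the base-character extent rather than from fixed fractions.

```python
def three_layers(img, top_frac=0.25, bottom_frac=0.25):
    """Divide a text-line image into top / middle / bottom zones.
    Base consonants live in the middle layer; conjunct marks are
    looked for in the other two. Fractions are assumptions."""
    h = len(img)
    t = int(h * top_frac)
    b = int(h * (1 - bottom_frac))
    return img[:t], img[t:b], img[b:]

line = [[1] * 8 for _ in range(8)]
top, middle, bottom = three_layers(line)
# 8 rows are split 2 / 4 / 2
```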
Handwritten Character Recognition: A Comprehensive Review on Geometrical Anal... (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
An exhaustive font and size invariant classification scheme for ocr of devana... (ijnlc)
The main challenge in any optical character recognition (OCR) system is dealing with multiple fonts and sizes. In OCR of Indian languages, one also has to deal with a huge number of conjunct characters whose shapes change drastically with font. Separating conjunct characters into their constituent symbols leads to segmentation errors. The proposed approach handles both of these problems in the context of the Devanagari script. An attempt is made to identify all possible connected symbols of Devanagari (a consonant, vowel, half consonant, or conjunct consonant, henceforth referred to as a basic symbol) in the middle zone without segmenting the conjunct characters. On observing 469,580 words from a variety of sources in our study, it is found that only 345 symbols are used frequently in the middle zone, covering 99.97% of the text. They are then classified into 16 different classes on the basis of structural properties that are invariant across fonts and sizes. To validate the proposed classification scheme, results are presented on 25 fonts and three sizes.
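The frequency-coverage reduction above (345 symbols covering 99.97% of text) corresponds to a simple greedy computation over symbol counts, sketched here on toy data:

```python
from collections import Counter

def covering_symbols(symbol_stream, coverage=0.9997):
    """Return the smallest prefix of most-frequent symbols whose
    combined frequency reaches the target coverage."""
    counts = Counter(symbol_stream)
    total = sum(counts.values())
    chosen, seen = [], 0
    for sym, c in counts.most_common():
        chosen.append(sym)
        seen += c
        if seen / total >= coverage:
            break
    return chosen

# Toy corpus: 70 a's, 25 b's, 4 c's, 1 d; 99% coverage needs a, b, c.
toy = "a" * 70 + "b" * 25 + "c" * 4 + "d"
```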
Optical Character Recognition System for Urdu (Naskh Font) Using Pattern Match... (CSCJournals)
Offline optical character recognition (OCR) for different languages has developed over recent years; since 1965, the US Postal Service has used such systems to automate its services. The range of applications in this area increases day by day because of its utility in most major areas of government as well as the private sector, and the technique has been very useful in creating paper-free environments in many large organizations, as far as backing up their previous file records is concerned. Our system is proposed for offline recognition of isolated characters of the Urdu language, since Urdu forms words by combining isolated characters. Urdu is a cursive language, with connected characters making up words. The major use of an Urdu OCR will be digitizing the large body of literature already stocked in libraries. Urdu is well known and spoken in more than three large countries, including Pakistan, India, and Bangladesh, and a great deal of Urdu poetry and literature has been produced up to the recent century; an OCR for Urdu will play an important role in moving that work from physical libraries to electronic libraries. Most of the material already placed on the internet is in the form of images of text, which take a lot of space to transfer and are hard to read online, so an Urdu OCR is a necessity. The system is of the training type: it consists of image preprocessing, line and character segmentation, and creation of an XML file for training. The recognition system takes the XML file and the image to be recognized, segments it, creates chain codes for the character images, and matches them against those already stored in the XML file. The system has been implemented and achieves 89% recognition accuracy at a rate of 15 characters per second.
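One simple way to realize the "matching with already stored" step for chain codes is nearest-neighbor matching under edit distance. The paper does not spell out its distance measure, so the sketch below is an assumption:

```python
def edit_distance(a, b):
    """Levenshtein distance between two Freeman-code sequences,
    computed with a rolling one-row dynamic-programming table."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n]

def recognise(codes, stored):
    """Pick the label whose stored chain code is closest to the input."""
    return min(stored, key=lambda label: edit_distance(codes, stored[label]))

# Hypothetical stored templates; the labels are illustrative only.
stored = {"alif": [6, 6, 6, 6], "be": [0, 0, 0, 6]}
```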
Video Audio Interface for recognizing gestures of Indian sign Language (CSCJournals)
We propose a system to automatically recognize sign-language gestures from a video stream of the signer. The developed system converts words and sentences of Indian sign language into voice and text in English. We have used the power of image processing and artificial intelligence techniques to achieve this objective: frame-differencing-based tracking, edge detection, wavelet transforms, and image fusion techniques to segment shapes in the videos. The system also uses elliptical Fourier descriptors for shape feature extraction and principal component analysis for feature-set optimization and reduction. A database of extracted features is compared with the input video of the signer using a trained fuzzy inference system. The proposed system converts gestures into a text and voice message with 91 percent accuracy. The training and testing of the system are done using gestures from Indian Sign Language (INSL): around 80 gestures from 10 different signers are used. The entire system was developed in a user-friendly environment by creating a graphical user interface in MATLAB. The system is robust and can be trained for new gestures using the GUI.
The paper addresses the automation of the task of an epigraphist in reading and deciphering inscriptions. The automation steps include pre-processing, segmentation, feature extraction, and recognition. Pre-processing involves enhancement of degraded ancient document images, achieved through spatial filtering methods, followed by binarization of the enhanced image. Segmentation is carried out using drop-fall and water-reservoir approaches to obtain sampled characters. Next, Gabor and zonal features are extracted for the sampled characters and stored as feature vectors for training. An artificial neural network (ANN) is trained with these feature vectors and later used for classification of new test characters. Finally the classified characters are mapped to characters of modern form. The system showed good results when tested on nearly 150 samples of ancient Kannada epigraphs from the Ashoka and Hoysala periods, achieving average recognition accuracies of 80.2% for the Ashoka period and 75.6% for the Hoysala period.
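Zonal features of the kind mentioned above are commonly just per-cell ink densities. A minimal sketch (the grid size is an illustrative choice):

```python
def zonal_features(img, zones=4):
    """Split a binary character image into zones x zones cells and use
    the ink density of each cell as one feature."""
    h, w = len(img), len(img[0])
    feats = []
    for zy in range(zones):
        for zx in range(zones):
            y0, y1 = zy * h // zones, (zy + 1) * h // zones
            x0, x1 = zx * w // zones, (zx + 1) * w // zones
            cell = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            feats.append(sum(cell) / len(cell))
    return feats

# A fully inked 4x4 image gives density 1.0 in every cell.
dense = zonal_features([[1] * 4 for _ in range(4)], zones=2)
```

These densities would be concatenated with the Gabor responses to form the ANN's input vector.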
Character Recognition using Data Mining Technique (Artificial Neural Network) (Sudipto Krishna Dutta)
This presentation on character recognition using artificial neural networks was presented to Farhana Afrin Duty, Assistant Professor, Department of Statistics, Jahangirnagar University, Savar, Dhaka-1342, Bangladesh.
Recognition of compound characters in Kannada language (IJECEIAES)
Recognition of degraded printed compound Kannada characters is a challenging research problem, and it has been verified experimentally that noise removal is an essential preprocessing step. Two methods are proposed for the degraded Kannada character recognition problem. Method 1 uses the conventional histogram of oriented gradients (HOG) feature extraction; the extracted features are transformed and reduced using principal component analysis (PCA) before classification, and various classifiers are experimented with. Simple compound character classification is satisfactory with this method (more than 98% accuracy), but it does not perform well on the other two compound types. Method 2 is a deep convolutional neural network (CNN) model, which outperforms HOG features with classifiers. The highest classification accuracy found is 98.8%, for simple compound character classification, and the deep CNN performs far better on the other two compound types. The deep CNN also turns out to be better for pooled character classes.
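The core of HOG is a magnitude-weighted histogram of gradient orientations. The bare-bones sketch below shows only that step, without HOG's cells, blocks, or normalization:

```python
import math

def orientation_histogram(img, bins=9):
    """Finite-difference gradients over a grayscale/binary image, with
    unsigned orientation quantized into `bins` bins and each vote
    weighted by gradient magnitude. Border pixels are skipped."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % math.pi      # unsigned: [0, pi)
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    return hist

# A vertical edge (left half 0, right half 1) puts all its weight in
# the horizontal-gradient bin.
edge = [[0, 0, 1, 1] for _ in range(4)]
hist = orientation_histogram(edge)
```

In method 1, such histograms (computed per cell over the whole character) would then be stacked and fed through PCA before classification.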
Performance Comparison between Different Feature Extraction Techniques with S...IJERA Editor
This paper represent the offline handwritten character recognition for Gurumukhi script. It is a major script of india. Many work has been done in many languages such as English , Chinese , Devanagri , Tamil etc. Gurumukhi is a script of Punjabi Language which is widely spoken across the globe. In this paper focus on better character recognition accuracy. The dataset include 7000 samples collected in different writing styles. These dataset divided in two set Training and Test. For Training set collect 5600 samples and 1400 as test set. The evaluated feature extraction include: Distance Profile, Diagonal feature and BDD(Background Direction Distribution). These features were classified by using SVM classifier. The Performance comparison have been made using one classifier with different feature extraction techniques. The experiment show that Diagonal feature extraction method has achieved highest recognition accuracy 95.39% than other features extraction method.
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
Driving cycle development for Kuala Terengganu city using k-means methodIJECEIAES
Driving cycles play a vital role in vehicle production and performance evaluation. A driving cycle is a representative speed-time profile of the driving behavior in a specific region or city, and many countries have developed their own, including the United States of America, the United Kingdom, India, China, Ireland, Slovenia, and Singapore. The objectives of this paper are to characterize and develop a driving cycle for Kuala Terengganu city at 8.00 a.m. along five different routes using the k-means method, to analyze fuel rate and emissions using the developed driving cycle, and to compare the fuel rate and emissions of conventional engine vehicles with parallel, series, and single split-mode plug-in hybrid electric vehicles. The methodology involves three major steps: route selection, data collection using the on-road measurement method, and driving cycle development using the k-means method. MATLAB was used as the computer program platform to produce the best driving cycle, and the AUTONOMIE vehicle system simulation tool was used to analyze fuel rate and gas emissions. Based on the findings, Route C combined with the single split-mode PHEV powertrain uses and emits the least fuel and emissions.
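The clustering step in driving cycle development can be sketched with a plain Lloyd's-algorithm k-means over micro-trip features. The two features used here (mean speed and idle fraction) and the toy data are illustrative assumptions; the paper's actual feature set and data are not reproduced.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means (Lloyd's algorithm): assign points to the nearest
    centre, recompute centres, repeat until stable."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# hypothetical micro-trip features: [mean speed (km/h), idle fraction]
trips = np.array([[5, 0.6], [7, 0.5], [45, 0.1], [50, 0.05],
                  [25, 0.3], [28, 0.25]], float)
labels, centers = kmeans(trips, k=3)
print(labels.shape, centers.shape)
```

Micro-trips from each cluster would then be concatenated in proportion to their share of the data to assemble the representative speed-time profile.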
A New Method for Identification of Partially Similar Indian ScriptsCSCJournals
In this paper, the texture symmetry/non-symmetry factor is exploited to capture script texture using Bi-Wavelants, which express the symmetry/non-symmetry factor in terms of the third cumulant, while the bi-spectra give the quadratically coupled frequencies. The envelope of the bi-spectra (Bi-Wavelant) provides an accurate description of the symmetry/non-symmetry of the script texture. Classification is performed by an SVM trained on the roots of the envelope, found using the Newton-Raphson technique. The method successfully identifies eight Indian scripts: Devanagari, Urdu, Gujarati, Telugu, Assamese, Gurmukhi, Kannada, and Bangla. It can segment any kind of document with very good results, and the identification results are excellent.
Fragmentation of handwritten touching characters in devanagari scriptZac Darcy
Character segmentation of handwritten words is a difficult task because of differing writing styles and complex structural features, and segmentation of handwritten text in Devanagari script is an uphill task. The presence of the header line, overlapped characters in the middle zone, and half characters makes the segmentation process more difficult, and interline space and noise sometimes make line fragmentation hard as well. Without separating touching characters it is difficult to identify them, so fragmentation of the touching characters in a word is necessary. We therefore devised a technique in which the first step is preprocessing of the word; we then identify the joint points, form bounding boxes around all vertical and horizontal lines, and finally fragment the touching characters on the basis of their height and width.
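A much-simplified version of this fragmentation idea can be sketched with projection profiles: strip the header line (the densest row block), then cut wherever the vertical projection drops to zero. The joint-point and bounding-box analysis of the actual technique is not reproduced here; the toy word image is an assumption of this sketch.

```python
import numpy as np

def fragment_word(word, gap_thresh=0):
    """Toy fragmentation: remove the header (shirorekha) row, then
    split at columns whose vertical ink projection is empty."""
    img = word.copy()
    header = np.argmax(img.sum(axis=1))    # densest row ~ header line
    img[header] = 0                        # erase the header line
    proj = img.sum(axis=0)
    on_cols, segs, start = proj > gap_thresh, [], None
    for x, on in enumerate(on_cols):
        if on and start is None:
            start = x
        elif not on and start is not None:
            segs.append((start, x)); start = None
    if start is not None:
        segs.append((start, len(on_cols)))
    return segs

word = np.zeros((10, 12), int)
word[0, :] = 1                 # header line joining two characters
word[1:6, 2:4] = 1             # character 1
word[1:8, 7:10] = 1            # character 2
print(fragment_word(word))     # -> [(2, 4), (7, 10)]
```

Real touching characters have no empty column between them, which is exactly why the technique above falls back on joint points and height/width analysis rather than projections alone.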
Handwritten character recognition is one of the most challenging and active areas of research in the field of pattern recognition. HCR research is mature for languages such as Chinese and Japanese, but the problem is much more complex for Indian languages, and it becomes even harder for South Indian languages because of their large character sets and the presence of vowel modifiers and compound characters. This paper provides an overview of important contributions and advances in both offline and online handwritten character recognition of Malayalam script.
Character Recognition (Devanagari Script)IJERA Editor
Character recognition has attracted major interest in research and practical applications that analyze and study characters of different languages from image input. In this paper, the user writes a Devanagari character using the mouse as a plotter, and the character is saved as an image. The image is processed with optical character recognition, covering location, segmentation, and preprocessing; a neural network then identifies the characters through the remaining OCR stages of feature extraction and post-processing. The entire process is implemented in MATLAB.
OCR-THE 3 LAYERED APPROACH FOR DECISION MAKING STATE AND IDENTIFICATION OF TE...ijaia
Optical character recognition is the digitization of handwritten, typewritten, or printed text into machine-encoded form, and it supports a wide range of applications in everyday life: OCR is used successfully in finance, legal work, banking, health care, and home appliances. India is a multicultural country with rich literary and scriptural traditions. Telugu, a southern Indian language, is syllabic: each symbol represents a complete syllable and may be formed with mixed conjunct consonants. Recognizing mixed conjunct consonants is harder than recognizing normal consonants because of the variation in written strokes and conjunct mixing at the pre- and post-consonant level. This paper proposes a layered approach to recognize characters, conjunct consonants, and mixed conjunct consonants, and presents an efficient classification of handwritten and printed conjunct consonants. An advanced fuzzy logic controller takes handwritten or printed text collected from scanned files or a digital camera; the image is examined for intensity against a quality ratio, characters are extracted according to that quality, and character orientation, alignment, thickness, and base/print ratio are checked. Input characters are classified in two ways: the first covers normal consonants and the second conjunct consonants. The digitized text is divided into three layers: the middle layer holds normal consonants, while the top and bottom layers hold mixed conjunct consonants. Recognition starts from the middle layer and then checks the top and bottom layers; a character is treated as a conjunct consonant when any symbolic marks are detected in the top or bottom layer of the current base character, and as a normal consonant otherwise. A post-processing step, applied to all three layers, concentrates on the readability and compatibility of the recognized text, and the process is repeated if readability fails. The recognition pipeline includes slant correction, thinning, normalization, segmentation, feature extraction, and classification, and the preprocessing, segmentation, character recognition, and post-processing modules developed for the algorithm are discussed. The main objectives of the paper are to develop classification and identification of different prototypes for written and printed consonants, conjunct consonants, and symbols based on the 3-layered approach with different measurable areas using fuzzy logic, and to determine suitable features for handwritten character recognition.
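The three-layer decision described above can be sketched as a simple band check: divide the character image into top, middle, and bottom bands, and flag a conjunct form when the top or bottom band carries ink. The ink-density threshold and toy images are assumptions of this sketch; the paper's fuzzy logic controller is far richer than this rule.

```python
import numpy as np

def classify_layers(img, ink_thresh=0.05):
    """Split a character image into top/middle/bottom bands and flag
    it as a conjunct form when the top or bottom band carries ink."""
    h = img.shape[0]
    top, bot = img[:h // 3], img[2 * h // 3:]
    is_conjunct = top.mean() > ink_thresh or bot.mean() > ink_thresh
    return "conjunct consonant" if is_conjunct else "normal consonant"

plain = np.zeros((30, 30)); plain[10:20, 5:25] = 1   # ink only in middle band
stacked = plain.copy(); stacked[22:28, 10:20] = 1    # extra ink below the base
print(classify_layers(plain), "|", classify_layers(stacked))
```

Here the base character alone is labeled normal, while the version with a stroke below the base line is labeled conjunct, mirroring the middle-first, then top/bottom, order of checks.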
Handwritten Character Recognition: A Comprehensive Review on Geometrical Anal...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
An exhaustive font and size invariant classification scheme for ocr of devana...ijnlc
The main challenge in any optical character recognition (OCR) system is dealing with multiple fonts and sizes. OCR of Indian languages must also handle a huge number of conjunct characters whose shapes change drastically across fonts, and separating the conjunct characters into their constituent symbols leads to segmentation errors. The proposed approach addresses both problems in the context of the Devanagari script. An attempt is made to identify every possible connected symbol of Devanagari in the middle zone (a consonant, vowel, half consonant, or conjunct consonant, henceforth referred to as a basic symbol) without segmenting the conjunct characters. From a study of 469,580 words drawn from a variety of sources, it is found that only 345 symbols occur frequently in the middle zone, covering 99.97% of the text. These are classified into 16 classes on the basis of structural properties that are invariant across fonts and sizes. To validate the proposed classification scheme, results are presented on 25 fonts and three sizes.
Optical Character Recognition System for Urdu (Naskh Font)Using Pattern Match...CSCJournals
Offline optical character recognition (OCR) systems for many languages have been developed over recent years; the US postal service has used this technology to automate its services since 1965. The range of applications is growing daily because of OCR's utility across major areas of government and the private sector, and the technique has helped many organizations move toward a paper-free environment for backing up their previous file records. This system is proposed for offline recognition of isolated characters of the Urdu language, since Urdu forms words by combining isolated characters; Urdu is a cursive language whose connected characters make up words. A major use for an Urdu OCR will be digitizing the large body of literary material already stocked in libraries: Urdu is widely spoken in several countries, including Pakistan, India, and Bangladesh, and a great deal of Urdu poetry and literature has been produced up to the recent century. An OCR for Urdu would play an important role in converting that work from physical to electronic libraries, and most of the material already on the internet consists of images of text, which take considerable space to transfer and are hard to read online, so an Urdu OCR is badly needed. The system is of the training type: it consists of image preprocessing, line and character segmentation, and creation of an XML file for training. The recognition stage takes the XML file and the image to be recognized, segments it, creates chain codes for the character images, and matches them against those stored in the XML file. The implemented system achieves 89% recognition accuracy at a rate of 15 characters per second.
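Chain codes, used here for matching and also mentioned in the chain-code HCR work above, encode a traced pixel path as a sequence of 8-neighbourhood direction digits. The sketch below assumes the boundary has already been traced into an ordered path (the tracing itself is omitted); the direction numbering follows the common Freeman convention with rows growing downward.

```python
# 8-neighbourhood Freeman directions: 0=E, 1=NE, 2=N, ... 7=SE
# (as (row_delta, col_delta) with rows growing downward)
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(path):
    """Freeman 8-direction chain code of an already-traced pixel path."""
    return [DIRS[(r2 - r1, c2 - c1)]
            for (r1, c1), (r2, c2) in zip(path, path[1:])]

# a small L-shaped stroke: down three pixels, then right two
path = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
print(chain_code(path))   # -> [6, 6, 6, 0, 0]
```

Two characters can then be compared by matching their code sequences, which is essentially what the system's XML lookup does.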
Video Audio Interface for recognizing gestures of Indian sign LanguageCSCJournals
We propose a system that automatically recognizes sign language gestures from a video stream of the signer; the developed system converts words and sentences of Indian Sign Language into English voice and text. We combine image processing and artificial intelligence techniques to achieve this objective: frame-differencing-based tracking, edge detection, the wavelet transform, and image fusion are used to segment shapes in the videos, elliptical Fourier descriptors provide shape features, and principal component analysis optimizes and reduces the feature set. The database of extracted features is compared with the input video of the signer using a trained fuzzy inference system, and the proposed system converts gestures into text and voice messages with 91 percent accuracy. Training and testing use around 80 gestures from 10 different signers, drawn from Indian Sign Language (INSL). The entire system was developed in a user-friendly environment with a graphical user interface in MATLAB; it is robust and can be trained for new gestures through the GUI.
This paper addresses the automation of an epigraphist's task of reading and deciphering inscriptions. The automation steps are preprocessing, segmentation, feature extraction, and recognition. Preprocessing enhances degraded ancient document images through spatial filtering methods, followed by binarization of the enhanced image. Segmentation is carried out using drop-fall and water-reservoir approaches to obtain sampled characters. Gabor and zonal features are then extracted from the sampled characters and stored as feature vectors for training. An artificial neural network (ANN) is trained on these feature vectors and later used to classify new test characters; finally, the classified characters are mapped to their modern forms. The system showed good results when tested on nearly 150 samples of ancient Kannada epigraphs from the Ashoka and Hoysala periods, achieving average recognition accuracies of 80.2% for the Ashoka period and 75.6% for the Hoysala period.
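Of the two feature families above, the zonal half is the simpler to illustrate: lay a grid over the character and record the foreground density in each cell. This numpy sketch assumes a 4x4 grid and a synthetic glyph; the paper's grid size and the Gabor filter bank are not reproduced.

```python
import numpy as np

def zonal_features(img, grid=4):
    """Zonal (zoning) features: mean foreground density in each cell
    of a grid laid over the character image."""
    h, w = img.shape
    zh, zw = h // grid, w // grid
    return np.array([[img[i*zh:(i+1)*zh, j*zw:(j+1)*zw].mean()
                      for j in range(grid)] for i in range(grid)]).ravel()

glyph = np.zeros((32, 32)); glyph[4:28, 14:18] = 1   # a vertical stroke
f = zonal_features(glyph)
print(f.shape)   # 4x4 grid -> 16 features
```

These densities, concatenated with Gabor filter responses, would form the feature vector fed to the ANN.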
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Character Recognition using Data Mining Technique (Artificial Neural Network)Sudipto Krishna Dutta
This Presentation is on Character Recognition using Artificial Neural networks,
Presented to
Farhana Afrin Duty
Assistant Professor
Department of Statistics
Jahangirnagar University
Savar, Dhaka-1342, Bangladesh
Script Identification of Text Words from a Tri-Lingual Document Using Voting ...CSCJournals
In a multi-script environment, many documents contain text printed in more than one script or language. For automatic processing of such documents through optical character recognition (OCR), the different script regions of the document must be identified. In this context, this paper develops a model to identify and separate text words of Kannada, Hindi, and English scripts from a printed tri-lingual document. The proposed method is trained to learn the distinct features of each script thoroughly, and a binary tree classifier is used to classify the input text image. Experimentation involved 1500 text words for learning and 1200 text words for testing, carried out extensively on both a manually created data set and a scanned data set. The results are very encouraging and prove the efficacy of the proposed model: the average success rate is 99% for the manually created data set and 98.5% for the data set constructed from scanned document images.
Recognition of Offline Handwritten Hindi Text Using SVMCSCJournals
Handwritten Hindi text recognition is an emerging area of research in the field of optical character recognition. In this paper, a segmentation-based approach is used to recognize the text: the offline handwritten text is segmented into lines, lines into words, and words into characters. Shape features are extracted from the characters and fed into an SVM classifier for recognition, and the results obtained with the proposed feature set and SVM classifier are promising.
Mixed Language Based Offline Handwritten Character Recognition Using First St...CSCJournals
An artificial neural network is an artificial representation of the human brain that tries to simulate its learning process. To train a network and measure how well it performs, an objective function must be defined; a commonly used performance criterion is the sum-of-squares error function. Full end-to-end text recognition in natural images is a challenging problem that has recently received much attention in computer vision and machine learning, and traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. Language identification and interpretation of handwritten characters is a challenge faced in many industries: interpreting data from bank cheques, and identifying and translating ancient scripts preserved in manuscripts, palm scripts, and stone carvings, are a few examples. Handwritten character recognition using soft computing methods such as neural networks has long been a major research area, and multiple theories and algorithms have been developed in this field.
Wavelet Packet Based Features for Automatic Script IdentificationCSCJournals
In a multi-script environment, archives commonly hold documents whose text regions are printed in different scripts. For automatic processing of such documents through optical character recognition (OCR), the different script regions of each document must be identified. In this paper, a novel texture-based approach is presented to identify the script type of documents printed in seven scripts, so that they can be categorized for further processing: South Indian documents printed in Kannada, Tamil, Telugu, Malayalam, Urdu, Hindi, and English are considered. The document images are decomposed through wavelet packet decomposition using the Haar basis function up to level two, and texture features are extracted from the sub-bands of the decomposition: the Shannon entropy is computed for each sub-band, and these entropy values are combined into the texture feature vector. Experimentation involved 2100 text images for learning and 1400 for testing. Script classification performance, analyzed with the K-nearest-neighbor classifier, shows an average success rate of 99.68%.
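The feature extraction above (level-2 Haar wavelet packet decomposition, then Shannon entropy per sub-band) can be sketched directly in numpy. This sketch uses an unnormalized Haar averaging/differencing step and energy-based entropy, which are assumptions; the paper's exact normalization may differ.

```python
import numpy as np

def haar_step(x):
    """One 2-D Haar split: returns the LL, LH, HL, HH sub-bands."""
    a = (x[0::2, :] + x[1::2, :]) / 2; d = (x[0::2, :] - x[1::2, :]) / 2
    return [(a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2]

def packet_entropy(img):
    """Level-2 Haar wavelet *packet* decomposition (every level-1
    sub-band is split again), then Shannon entropy of each sub-band's
    normalised energy distribution: 16 features per image."""
    level2 = [s for band in haar_step(img) for s in haar_step(band)]
    feats = []
    for s in level2:
        p = s.ravel() ** 2
        p = p / (p.sum() + 1e-12) + 1e-12
        feats.append(-(p * np.log2(p)).sum())
    return np.array(feats)

img = np.random.default_rng(2).random((32, 32))   # stand-in text block
f = packet_entropy(img)
print(f.shape)   # 16 sub-bands -> 16 entropy features
```

Note that a packet decomposition splits all four sub-bands at each level (4^2 = 16 sub-bands at level two), unlike the plain wavelet transform, which only re-splits the LL band.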
Dimension Reduction for Script Classification - Printed Indian Documentsijait
Automatic identification of the script in a given document image facilitates many important applications, such as automatic archiving of multilingual documents, searching online archives of document images, and selecting a script-specific OCR in a multilingual environment. This paper provides a comparative study of three dimension reduction techniques, namely partial least squares (PLS), sliced inverse regression (SIR), and principal component analysis (PCA), and evaluates the relative performance of classification procedures incorporating those methods. For a given script, we extract gray-level co-occurrence matrix (GLCM) and scale-invariant feature transform (SIFT) features. The features are extracted globally from a given text block, which avoids any complex and unreliable segmentation of the document image into lines and characters. The extracted features are reduced using the various dimension reduction techniques, and the reduced features are fed into a nearest-neighbor classifier. The proposed scheme is therefore efficient and can be used for many practical applications that require processing large volumes of data. The scheme has been tested on 10 Indian scripts and found to be robust to the scanning process and relatively insensitive to changes in font size; the proposed system achieves good classification accuracy on a large test set.
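The GLCM half of the feature set above counts how often pairs of gray levels co-occur at a fixed pixel offset. Below is a minimal numpy sketch for one horizontal offset, with contrast and energy as two of the texture properties typically derived from it; the quantization to 8 levels and the random stand-in block are assumptions of this sketch.

```python
import numpy as np

def glcm(img, dr=0, dc=1, levels=8):
    """Gray-level co-occurrence matrix for one offset (here: the pixel
    to the immediate right), normalised to a probability table."""
    q = (img * (levels - 1)).astype(int)     # quantise [0,1) to 8 grays
    M = np.zeros((levels, levels))
    h, w = q.shape
    for r in range(h - dr):
        for c in range(w - dc):
            M[q[r, c], q[r + dr, c + dc]] += 1
    return M / M.sum()

def glcm_props(P):
    """Contrast and energy, two texture features often fed to a classifier."""
    i, j = np.indices(P.shape)
    return {"contrast": ((i - j) ** 2 * P).sum(), "energy": (P ** 2).sum()}

block = np.random.default_rng(3).random((16, 16))   # stand-in text block
P = glcm(block)
props = glcm_props(P)
print(sorted(props))
```

In the full scheme, such texture features would be concatenated with SIFT descriptors, reduced by PLS/SIR/PCA, and classified with the nearest-neighbor rule.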
A NOVEL APPROACH FOR WORD RETRIEVAL FROM DEVANAGARI DOCUMENT IMAGESijnlc
A large amount of information lies dormant in historical documents and manuscripts, and it will go to waste if not stored in digital form. Searching these scanned images for relevant information would ideally require converting the document images to text through optical character recognition (OCR), but for the indigenous scripts of India there are very few OCRs that can successfully recognize printed text images of varying quality, size, style, and font. An alternative approach using word spotting can be effective for accessing large collections of document images. We propose a word-spotting technique based on codes for matching word images of the Devanagari script: shape information is used to generate integer codes for the words in the document image, and these codes are matched for the final retrieval of relevant documents. The technique is illustrated using Marathi document images.
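The code-and-match idea can be sketched with a deliberately crude stand-in for the paper's shape codes: quantize each column's ink profile into a small integer alphabet and compare the resulting strings. Both the coding rule and the similarity measure here are hypothetical illustrations, not the paper's scheme.

```python
import numpy as np
from difflib import SequenceMatcher

def word_code(img, levels=4):
    """Hypothetical integer coding: quantise each column's ink
    fraction into `levels` symbols, giving one code string per word."""
    prof = img.sum(axis=0) / img.shape[0]
    return "".join(str(min(int(p * levels), levels - 1)) for p in prof)

def similarity(a, b):
    """String similarity between two word images' code sequences."""
    return SequenceMatcher(None, word_code(a), word_code(b)).ratio()

rng = np.random.default_rng(4)
w1 = (rng.random((20, 40)) > 0.6).astype(float)   # stand-in word image
w2 = w1.copy()                                    # identical word
w3 = (rng.random((20, 40)) > 0.6).astype(float)   # different word
print(similarity(w1, w2), similarity(w1, w3) <= 1.0)
```

Retrieval then amounts to ranking documents by the best code-sequence match against the query word's code.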
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Cancer cell metabolism: special Reference to Lactate PathwayAADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy we need to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cell utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules - a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Kreb's cycle. The Kreb's cycle allows cells to “burn” the pyruvates made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis - Kreb's - oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
introduction to WARBERG PHENOMENA:
WARBURG EFFECT Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose than do normal cells from outside.
Otto Heinrich Warburg (; 8 October 1883 – 1 August 1970) In 1931 was awarded the Nobel Prize in Physiology for his "discovery of the nature and mode of action of the respiratory enzyme.
WARNBURG EFFECT : cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Slide 1: Title Slide
Extrachromosomal Inheritance
Slide 2: Introduction to Extrachromosomal Inheritance
Definition: Extrachromosomal inheritance refers to the transmission of genetic material that is not found within the nucleus.
Key Components: Involves genes located in mitochondria, chloroplasts, and plasmids.
Slide 3: Mitochondrial Inheritance
Mitochondria: Organelles responsible for energy production.
Mitochondrial DNA (mtDNA): Circular DNA molecule found in mitochondria.
Inheritance Pattern: Maternally inherited, meaning it is passed from mothers to all their offspring.
Diseases: Examples include Leber’s hereditary optic neuropathy (LHON) and mitochondrial myopathy.
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Often maternally inherited in most plants, but can vary in some species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. 
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
4. An Overview of Sugarcane White Leaf Disease in Vietnam.pdf
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol. 5, No. 2, April 2015
DOI: 10.5121/ijcseit.2015.5203
STRUCTURAL FEATURES FOR RECOGNITION OF HANDWRITTEN KANNADA CHARACTERS BASED ON SVM
S. A. Angadi (a) and Sharanabasavaraj H. Angadi (b)
(a) Department of Computer Science and Engineering, Visvesvaraya Technological University, Belgavi, Karnataka, India
(b) Department of Computer Science and Engineering, Rural Engineering College, Hulkoti, Karnataka, India
ABSTRACT
Research in image processing involves many active areas; among these, recognition of handwritten characters holds much promise and is a challenging problem. The idea is to enable the computer to intelligibly recognize handwritten input. In this paper, a new method that uses structural features and a support vector machine (SVM) classifier for recognition of handwritten Kannada characters is presented. The proposed method achieves average recognition accuracies of 89.84% for handwritten Kannada vowels and 85.14% for consonants, in spite of the inherent variations of handwriting.
KEYWORDS: Handwritten Character Recognition (HCR), Kannada script, preprocessing, feature extraction, SVM classifier.
1. Introduction
Character recognition is an important subtask of document image processing [1, 2]. It involves identifying the various characters that make up the text of a document. The field of character recognition has seen many reported works, most of which are on English and other foreign languages. Optical character recognition finds application in various areas, and many researchers have worked on improving recognition accuracy for foreign languages. To some extent researchers are also working on Indian languages, which pose difficulties because India is multi-lingual and multi-script; very little work has been reported on the Kannada language, which means there is considerable scope for work to be done.
Handwritten character recognition is one of the challenging fields within character recognition. Handwritten information has been an important mode of communication between people since the beginning of mankind and will continue to be so for ages. Handwriting recognition [3, 4] therefore plays an important role in the field of pattern recognition. A typical pattern recognition system operates in two phases: training (learning) and testing (recognition). In the training phase the system learns from a large number of patterns whose classes are known; in the recognition phase the system is required to classify patterns whose classes are unknown. The first phase consists of image preprocessing, feature extraction and feature storage; in the second, i.e. recognition, phase the unknown image is preprocessed, its features are obtained, and these are compared with those learned in the training phase.
Considerable work has recently been reported on the development of handwritten character recognizers for English, Arabic, Chinese and other scripts. However, much less work has been reported on developing OCR systems for Indian languages such as Tamil, Telugu, Gurmukhi and Oriya, because India is a multi-lingual and multi-script country; and to the best of our knowledge very little work has been carried out on the Kannada language. With the advancement of information technology, more emphasis is being given in Karnataka to the use of Kannada at all levels, and hence support for Kannada in computer systems is a necessity. Efficient OCR systems for Kannada are therefore the need of the day.
The proposed method works in two phases. In the first (training) phase, preprocessing operations such as resizing are performed so as to make recognition independent of size; the image is then normalized, structural features such as perimeter, area and eccentricity are extracted from the set of samples, and a support vector machine is trained. In the second (testing) phase, the test samples are preprocessed and their structural features are obtained. The trained feature set, along with the test features, is given to the recognition module, which employs the SVM classifier to recognize the characters. The method has been evaluated on handwritten Kannada vowels and consonants collected from school and college students. The images were scanned on a flatbed scanner at 300 dpi. The system achieved recognition accuracies of 89.84% for vowels and 85.14% for consonants.
The remaining part of the paper is organized as follows. Section 2 presents a survey of the literature on recognition of handwritten Indian-language text. Section 3 describes the Kannada character set and database. Section 4 describes the method for recognition of handwritten Kannada characters. Section 5 presents the experimentation and Section 6 gives conclusions.
2. Literature Review
Handwriting recognition (HWR) is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touch-screens and other devices. Recognition of handwritten characters by a computer is a difficult problem due to the variability of human handwriting: uneven skew, orientation, writing habit and style. A few such recognition algorithms are described in the following.
Amitabh Wahi et al. [5] presented a paper on handwritten Tamil character recognition using moments: Zernike moments and Legendre polynomial features are used to extract the features of Tamil characters, and a neural classifier is used for classification. Bhattacharya et al. [6] proposed extracting features of handwritten Bangla characters with local chain-code histograms and an MLP classifier. Das et al. [7] extracted features based on quad trees, shadows and longest runs; using these features, multi-layer perceptron (MLP) and support vector machine (SVM) classifiers recognize different groups of characters. The challenges involved in script identification, particularly on handwritten documents, and its applications are reviewed in [8]. Ashwin et al. [9] formed three basic zones for the underlying character
image; support vector machines are employed for the classification of characters, and an accuracy of 86.11% has been achieved.
A method in which Gaussian filters are used to down-sample each segmented block to extract directional features for Kannada, Tamil and Telugu handwritten character recognition with the aid of a quadratic classifier is presented in [10]. In [11], FLD-based unconstrained handwritten Kannada character recognition using a Euclidean distance measure is described, and a mean recognition accuracy of 68% is reported. Shreya N. Patankar et al. [12] proposed a method that recognizes Marathi Barakhadi characters by recognizing a vowel and a consonant separately, using invariant moment features with a quadratic classifier. Recognition of isolated handwritten Kannada vowels is proposed in [13]; invariant moments are used as features and a K-NN classifier is used for classification, with recognition results for vowels of 85% on average. Mamatha et al. [14] describe a technique to remove the noise induced in handwritten Kannada documents. A Tamil handwriting recognition system [15] handling different font sizes and types extracts Zernike moment and Legendre polynomial features and uses neural classifiers. Using a support vector machine (SVM), an attempt is made in [16] to recognize similar-looking Bangla basic characters, numerals and vowel modifiers. From the literature survey, it is evident that there is still a lot of scope for research in handwritten Kannada character recognition.
In this paper, an SVM-based approach for handwritten character recognition using structural features is proposed. As an initial attempt, the work is restricted to isolated and constrained vowel and consonant characters rather than the entire character set.
3. Kannada Character Set and Database
Kannada script consists of 49 basic characters, which are grouped into swaragalu (vowels), vyanjanagalu (consonants) and yogavahakagalu. The Kannada script also has 10 numerals, from 0 to 9. Kannada handwriting recognition is a challenging task due to the large character set, complex shapes, the presence of compound characters and modifiers, and the similarity between characters. There is no standard dataset of handwritten Kannada text available, hence we collected handwritten text and built our own character dataset. The data were collected from students of primary schools and an engineering college. For this purpose every individual was asked to write the vowels and consonants on prescribed forms. The forms were scanned at 300 dpi as gray-scale images using a flatbed HP scanner, and the characters were then manually extracted from the scanned images. For each of the 49 symbols, 50 samples were selected; in total there are 2490 samples in the whole dataset for training and testing. A detailed description of the feature extraction and classification of the proposed methodology is given in the following section.
4. Proposed Methodology
Structural and topological features are used by the proposed method for recognition of handwritten Kannada vowels and consonants. Figure 3 illustrates the block schematic of the proposed method, whose major steps are preprocessing, feature extraction, a knowledge base built during training, and the classifier. The SVM consists of a training module (SVM_train) and a classification module (SVM_test). The proposed methodology works as follows:
• Collect the data samples.
• Scan at 300 dpi and store in BMP format.
• Normalize the image to a size of 30x30 pixels and apply a thinning operation.
• Extract the structural features and store them as a feature vector.
• Train the SVM with the above features; the support vectors are stored as models.
• Test the SVM with the features of unknown characters.
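The steps above can be sketched end to end. The following Python sketch is only an illustrative analogue of the paper's MATLAB pipeline: the synthetic binary images, the two-element feature vector and the nearest-centroid "knowledge base" standing in for the trained SVM are all assumptions for brevity, not the authors' code.

```python
import numpy as np

def normalize(img, size=30):
    """Crop to the ink's bounding box, then nearest-neighbour resize to size x size."""
    ys, xs = np.nonzero(img)                      # assumes the image contains ink
    img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    ri = (np.arange(size) * img.shape[0] / size).astype(int)
    ci = (np.arange(size) * img.shape[1] / size).astype(int)
    return img[np.ix_(ri, ci)]

def features(img):
    """A tiny structural feature vector: ink area and ink on the border ring (crude)."""
    area = img.sum()
    edge = area - img[1:-1, 1:-1].sum()
    return np.array([area, edge], float)

def train(samples, labels):
    """Knowledge base: mean feature vector per class (a nearest-centroid
    stand-in for the paper's SVM training step)."""
    return {c: np.mean([features(s) for s, l in zip(samples, labels) if l == c], axis=0)
            for c in set(labels)}

def classify(model, img):
    f = features(img)
    return min(model, key=lambda c: np.linalg.norm(model[c] - f))
```

With real data the feature vector would be the paper's 43 structural features and the classifier an SVM; the point here is the structure of the six steps.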
The detailed explanation of each stage is given below.
Image acquisition
In image acquisition, the recognition system acquires a scanned image in BMP format as input from a flatbed scanner. The acquired image is passed to the preprocessing phase.
Preprocessing
The acquired images may contain skew, noise, etc., which lead to misclassification of characters. In order to reduce these factors, preprocessing is essential.
Figure 3. Proposed flow diagram of the Handwritten Character Recognition System
The goal of preprocessing is to increase the quality of the handwritten data. The preprocessing stage performs size normalization, bounding-box generation and, further, a thinning operation. The sequence of preprocessing steps is described below.
Normalization
Normalization is the process of converting a randomly sized image into a standard-sized image. For this purpose the character images are resized to 30x30 pixels. The normalized image is then subjected to the thinning process.
Thinning
Thinning is carried out to reduce storage space and processing time without distorting the shape. The thinning process is as follows: after applying a bounding box, the shape information of the character is extracted; thinning is then carried out on the cropped image, employing a morphological operator. The thinned image is further processed for feature extraction.
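A concrete thinning pass makes this step precise. The sketch below implements the classical Zhang-Suen thinning algorithm in plain Python/NumPy; the paper only says a morphological operator is employed, so treating it as Zhang-Suen is an assumption for illustration.

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning of a 0/1 image: iteratively peel boundary pixels
    (two sub-iterations per pass) until the one-pixel-wide skeleton remains."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    # 8 neighbours, clockwise from north (p2..p9 in the paper's notation)
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                    # ink neighbours
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1 for i in range(8))
                    if step == 0:                                 # first sub-iteration
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:                                         # second sub-iteration
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:                                # delete simultaneously
                img[y, x] = 0
                changed = True
    return img
```

Deletion happens only after each full sub-iteration scan, which is what keeps the skeleton connected.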
Feature Extraction
The feature extraction stage [17, 18, 19, 20, 21, 22, 23, 24] captures the distinct characteristics of the digitized character for recognition. The main goal of the feature extraction process is to find unique patterns in an image that discriminate between pattern classes. In this phase, a structural feature vector comprising the structural/topological features is extracted for each character. The structural features are extracted from the images using regionprops() and stored in a feature vector F as

F = [SF_i], 1 <= i <= 43

where SF_i is the i-th structural feature of the character image. The feature vectors extracted from the images are then passed to the training phase using svmtrain(). The training process trains on the images with the given structural feature vectors, and these structural features are used by the classifier to categorize the character image.
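A few of the named features can be computed directly from the binary image with image moments. This Python sketch approximates what MATLAB's regionprops() returns for area, perimeter, eccentricity and equivalent diameter; the exact regionprops definitions differ slightly, so treat these formulas as illustrative approximations, not the paper's implementation.

```python
import numpy as np

def structural_features(img):
    """Area, perimeter, eccentricity and equivalent diameter of the ink region
    in a 0/1 integer image (regionprops-style, approximate definitions)."""
    ys, xs = np.nonzero(img)
    area = len(xs)
    # Perimeter (crude): ink pixels with at least one 4-neighbour of background.
    pad = np.pad(img, 1)
    nb = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    perimeter = int(((img == 1) & (nb == 0)).sum())
    # Eccentricity from the covariance of ink-pixel coordinates: 0 for a blob
    # that is equally spread in both directions, close to 1 for an elongated one.
    mu = np.cov(np.vstack([xs, ys]).astype(float))
    l1, l2 = sorted(np.linalg.eigvalsh(mu), reverse=True)
    eccentricity = np.sqrt(1 - l2 / l1) if l1 > 0 else 0.0
    # Diameter of the circle with the same area as the region.
    equiv_diameter = np.sqrt(4 * area / np.pi)
    return np.array([area, perimeter, eccentricity, equiv_diameter])
```

Stacking such quantities for the full set of properties would yield the paper's 43-element vector F.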
Classification
Support vector machine (SVM) classifiers [25, 26, 27, 28, 29, 30, 31] have gained prominence in the field of pattern recognition/classification. The SVM training process generates the support vectors, the data points that lie on the margins. The SVM approximates the decision function in the following form:

f(x) = sgn( w . Ф(x) + b )     (1)

where w is the weight vector, b is a bias and Ф(x) represents a high-dimensional feature space which is nonlinearly mapped from the input space x. The coefficients w and b are estimated by minimizing the regularized risk function. The ONA approach is used in the proposed method to decompose the N-class pattern recognition problem into several two-class classification problems. The application of the above techniques to Kannada character recognition is described in the following section.
5. Experimentation
The proposed methodology has been evaluated on handwritten Kannada vowels and consonants collected from different persons. These images have uneven stroke thickness and other degradations. Therefore a bounding box is fitted to extract the exact character image, the image is resized to 30x30 pixels, and the resulting image is thinned.
Further, the structural/topological features of the character, such as orientation, filled area, perimeter, eccentricity, equivalent diameter, convex area and so on, are extracted.
The recognition results are shown in Figure 5 and Figure 6 for the vowels and consonants. The overall recognition rate for vowels is 89.84% and for consonants 85.14%. The method is implemented on a Pentium T4300 (2.1 GHz, 800 MHz FSB) system with 3 GB RAM using MATLAB 7.8.0, and the results, as shown, are satisfactory.
Figure 5. Vowel recognition performance
Figure 6. Consonant recognition performance
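The per-set rates reported above are simple tallies over the test predictions. A trivial sketch (the label strings below are invented placeholders, not the paper's data):

```python
def recognition_rate(truth, predicted):
    """Percentage of test characters whose predicted class matches the truth."""
    hits = sum(t == p for t, p in zip(truth, predicted))
    return 100.0 * hits / len(truth)
```

For example, three correct predictions out of four test characters give a rate of 75.0%.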
6. Conclusion
The proposed method, using structural features, has been evaluated for handwritten Kannada vowel and consonant recognition. The method is capable of recognizing isolated and constrained handwritten Kannada vowels and consonants. The recognition system has a training and a testing phase. The characters, written by different persons, are scanned using a flatbed scanner at 300 dpi. The images have uneven stroke thickness and other degradations; therefore preprocessing is essential.
[16] Khondker Nayef Reza, Mumit Khan: Grouping of Handwritten Bangla Basic Characters, Numerals and Vowel Modifiers for Multilayer Classification, International Conference on Frontiers in Handwriting Recognition, (2012).
[17] L. Heutte, T. Paquet, J. V. Moreau, Y. Lecourtier, C. Olivier: A structural/statistical feature based vector for handwritten character recognition, Pattern Recognition Letters, vol. 19, pp. 629-641, (1998).
[18] Leena R. Ragha and M. Sasikumar: Feature Analysis for Handwritten Kannada Kagunita Recognition, International Journal of Computer Theory and Engineering, Vol. 3, No. 1, (February 2011).
[19] Aditya Raj, Ranjeet Srivastava, Tushar Patnaik, Bhupendra Kumar: A Survey of Feature Extraction and Classification Techniques Used in Character Recognition for Indian Scripts, International Journal of Engineering and Advanced Technology (IJEAT), ISSN: 2249-8958, Volume 2, Issue 3, (February 2013).
[20] Mamatha H. R., Karthik S., Srikanta Murthy K.: Feature Based Recognition of Handwritten Kannada Numerals - A Comparative Study, International Conference on Computing, Communication and Applications (ICCCA), (February 2012).
[21] Anil K. Jain and Torfinn Taxt: Feature extraction methods for character recognition - A survey, Pattern Recognition, vol. 29, no. 4, pp. 641-662, (1996).
[22] B. V. Dhandra, R. G. Benne and Mallikarjun Hangarge: Multi-font multi-size Kannada numeral recognition based on structural features, 2nd National Conference on Emerging Trends in Information Technology (eIT-2007), pp. 193-199, (2007).
[23] Kartar Singh Siddharth, Renu Dhir, Rajneesh Rani: Handwritten Gurumukhi Character Recognition Using Zoning Density and Background Directional Distribution Features, International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 2 (3), pp. 1036-1041, (2011).
[24] R. Sanjeev Kunte, R. D. Sudhaker Samuel: An OCR System for Printed Kannada Text Using Two-Stage Multi-network Classification Approach Employing Wavelet Features, IEEE, pp. 349-353, (December 2007).
[25] C. J. C. Burges: A tutorial on support vector machines for pattern recognition, Data Mining and Knowledge Discovery, pp. 121-167, (1998).
[26] Arvind C. S., Nithya E. and Nabanita Bhattacharjee: Kannada Language OCR System Using SVM Classifier, Journal of Information Systems and Communication, ISSN: 0976-8742, E-ISSN: 0976-8750, Volume 3, Issue 1, pp. 92-95, (2012).
[27] G. G. Rajput, Rajeswari Horakeri, Sidramappa Chandrakant: Printed and handwritten mixed Kannada numerals recognition using SVM, International Journal on Computer Science and Engineering, Vol. 02, No. 05, pp. 1622-1626, (2010).
[28] J. Manikandan, B. Venkataramani: Study and evaluation of a multi-class SVM classifier using diminishing learning technique, Neurocomputing 73, pp. 1676-1685, (2010).
[29] G. G. Rajput, Rajeswari Horakeri, Sidramappa Chandrakant: Printed and Handwritten Mixed Kannada Numerals Recognition Using SVM, (IJCSE) International Journal on Computer Science and Engineering, Vol. 02, No. 05, pp. 1622-1626, (2010).
[30] S. V. Rajashekararadhya, P. Vanaja Ranjan: Neural Network Based Handwritten Numeral Recognition of Kannada and Telugu Scripts, TENCON 2008 - IEEE Region 10 Conference, Hyderabad, (2008).
[31] R. O. Duda, P. E. Hart, D. G. Stork: Pattern Classification, 2nd ed., Wiley, New York.