The Braille system has long been used by visually impaired people for reading. A shortage of Braille books has created a need to convert Braille to text. This paper addresses the geometric correction of Braille document images. Because Braille cells follow standard measurements, Braille characters can be identified by a simple cell-overlapping procedure. However, these standard measurements vary in a scaled document, and fitting the cells becomes difficult if the document is tilted. This paper proposes a line-fitting algorithm for identifying the tilt (skew) angle. The horizontal and vertical scale factors are identified from the ratio of the distance between characters to the distance between dots. These values are used in a geometric transformation matrix for correction. Rotation correction is performed before scale correction, which increases accuracy. Results for various Braille documents are tabulated.
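A sketch of how a rotation-then-scale correction could be composed (an illustration only: the matrix form and function names are assumptions, not taken from the paper):

```python
import math

def correction_matrix(skew_deg, sx, sy):
    """Compose a rotation that undoes the measured skew with an
    anisotropic scale correction (sx, sy estimated from the ratio of
    character spacing to dot spacing). Rotation is applied first,
    matching the order the paper recommends."""
    t = math.radians(-skew_deg)            # rotate back by the skew angle
    rot = [[math.cos(t), -math.sin(t)],
           [math.sin(t),  math.cos(t)]]
    scale = [[1.0 / sx, 0.0],              # undo horizontal scaling
             [0.0, 1.0 / sy]]              # undo vertical scaling
    # 2x2 matrix product: scale applied after rotation
    return [[sum(scale[i][k] * rot[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

def transform(m, point):
    """Apply the 2x2 correction to an (x, y) dot coordinate."""
    x, y = point
    return (m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y)
```

With a zero skew angle and unit scale factors the matrix reduces to the identity, so dot coordinates pass through unchanged.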
An Application of Eight Connectivity based Two-pass Connected-Component Label...CSCJournals
The intrinsic noise introduced during image acquisition makes the recognition of Braille dots a challenging task in Optical Braille Recognition (OBR). When a Braille document is embossed on both sides, as in Inter-Point Braille, the problem is aggravated: distinguishing recto (convex) dots from verso (concave) dots becomes more complex, yet the information recorded on both sides of the paper must be recovered by scanning only one side. This work proposes a novel approach that distinguishes convex dots from concave dots, even when they are adjacent, using only the shadow patterns of the dots together with two-pass connected-component labelling and the eight-connectivity property of a pixel. Motivated by the observation that, during acquisition, the reflection of light through the verso dots gives them a higher pixel count than the recto dots, the technique works well on good-quality Braille. However, natural problems such as ageing and frequent use cause the Braille dots to deteriorate, degrading the algorithm's performance on such images. In addition, an adaptive grid-construction technique is proposed for recognizing Braille cells in documents with special cases. The results show that the proposed technique is consistent and dependable, with accuracy comparable to modern state-of-the-art techniques.
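A minimal sketch of two-pass connected-component labelling with eight-connectivity, the building block this abstract names (a generic pure-Python illustration; the paper's actual implementation details are not given here):

```python
def label_components(img):
    """Two-pass connected-component labelling with 8-connectivity.
    img: 2D list of 0/1 values. Returns a label map where each
    foreground pixel carries the id of its component."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]                       # union-find forest; index 0 is a dummy

    def find(x):                       # root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    next_label = 1
    # First pass: assign provisional labels, record equivalences.
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            # 8-connected neighbours already visited in raster order.
            neigh = [labels[y + dy][x + dx]
                     for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1))
                     if 0 <= y + dy < h and 0 <= x + dx < w
                     and labels[y + dy][x + dx]]
            if not neigh:
                parent.append(next_label)
                labels[y][x] = next_label
                next_label += 1
            else:
                m = min(find(n) for n in neigh)
                labels[y][x] = m
                for n in neigh:        # merge equivalence classes
                    parent[find(n)] = m
    # Second pass: resolve every provisional label to its root.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

Two dots touching only diagonally merge into one component under eight-connectivity, which is exactly what four-connectivity would miss.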
This document summarizes a research paper that proposes a new method for segmenting text lines in machine printed Telugu script documents. It begins by noting common challenges with existing horizontal projection profile methods for line segmentation when text lines overlap. The proposed method first estimates the top and bottom bounding lines of the inter-line space using statistical analysis of the horizontal profile. It then predicts the segmentation path, which is set at 70% of the inter-line space to follow character contours. When two possible paths exist, it analyzes the horizontal profile of a small sub-image to select the correct path. The method achieved a success rate of 99.1% on tested documents.
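The projection-profile idea with a cut placed 70% of the way through the inter-line space might be sketched as follows (an illustrative reconstruction, not the paper's code; the sub-image disambiguation step for double paths is omitted):

```python
def line_cuts(img, frac=0.7):
    """Horizontal-projection line segmentation sketch. A cut is placed
    inside each blank band between text rows, frac (0.7, following the
    paper) of the way down the inter-line space, so the path stays
    close to the contours of the line above."""
    profile = [sum(row) for row in img]    # ink count per image row
    cuts, gap_start = [], None
    for y, v in enumerate(profile):
        if v == 0 and gap_start is None:   # blank band begins
            gap_start = y
        elif v > 0 and gap_start is not None:
            if gap_start > 0:              # ignore any leading margin
                cuts.append(gap_start + int(frac * (y - gap_start)))
            gap_start = None               # blank band ends
    return cuts
```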
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Enhancement and Segmentation of Historical Recordscsandit
Document Analysis and Recognition (DAR) aims to extract information from documents automatically and to support human comprehension. The automatic processing of degraded historical documents is an application of document image analysis that faces many difficulties due to storage conditions and the complexity of the script. The main goal of enhancing historical documents is to remove undesirable artifacts that appear in the background and to highlight the foreground, so that documents can be recognized automatically with high accuracy. This paper addresses pre-processing and segmentation of ancient scripts as an initial step toward automating the task of an epigraphist in reading and deciphering inscriptions. Pre-processing enhances degraded ancient document images through four spatial filtering methods for smoothing or sharpening, namely the Median, Gaussian blur, Mean and Bilateral filters, with different mask sizes. This is followed by binarization of the enhanced image using Otsu's thresholding algorithm to highlight the foreground information. In the second phase, segmentation is carried out using Drop Fall and Water Reservoir approaches to obtain sampled characters, which can be used in later stages of OCR. Tested on nearly 150 degraded epigraphic images of varying quality, the system showed good results, giving its best enhanced output with a 4x4 mask for the Median filter, a 2x2 mask for Gaussian blur, and 4x4 masks for the Mean and Bilateral filters. The system can effectively sample characters from enhanced images, giving segmentation rates of 85%-90% for both the Drop Fall and Water Reservoir techniques.
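Otsu's thresholding, used above for binarization, can be sketched in a few lines (a generic textbook implementation, not the authors' code):

```python
def otsu_threshold(pixels):
    """Otsu's method: choose the grey level that maximises the
    between-class variance of the background/foreground split.
    pixels: iterable of 0-255 intensities."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0                        # background weight and sum
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b                  # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # background mean
        m_f = (total_sum - sum_b) / w_f    # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a cleanly bimodal histogram the chosen threshold falls between the two intensity clusters, separating ink from background.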
An effective approach to offline arabic handwriting recognitionijaia
Segmentation is the most challenging part of Arabic handwriting recognition because of the unique characteristics of Arabic writing, which allow the same shape to denote different characters. In this paper, an off-line Arabic handwriting recognition system is proposed. The processing is presented in three main stages. First, the image is skeletonized to a one-pixel-thin skeleton. Second, each diagonally connected foreground pixel is transferred to the closest horizontal or vertical line. Finally, these orthogonal lines are coded as vectors of unique integers, with each vector representing one letter of the word. To evaluate the proposed techniques, the system was tested on the IFN/ENIT database, and the experimental results show that our method is superior to those currently available.
A NOVEL PROBABILISTIC BASED IMAGE SEGMENTATION MODEL FOR REALTIME HUMAN ACTIV...sipij
Automatic human activity detection is one of the difficult tasks in image segmentation due to variations in the size, type, shape and location of objects. In traditional probabilistic graphical segmentation models, intra- and inter-region segments may affect the overall segmentation accuracy. Moreover, both directed and undirected graphical models, such as the Markov model and the conditional random field, have limitations for human activity prediction and heterogeneous relationships. In this paper, we study and propose a natural solution for automatic human activity segmentation using an enhanced probabilistic chain graphical model. The system has three main phases: activity pre-processing, iterative threshold-based image enhancement, and a chain graph segmentation algorithm. Experimental results show that the proposed system efficiently detects human activities at different levels of the action datasets.
Character recognition has gained a lot of attention in the field of pattern recognition owing to its application in various fields, and it is one of the most successful applications of automatic pattern recognition. Research in OCR is popular for its application potential in banks, post offices, office automation, and so on. Handwritten character recognition (HCR) is useful in cheque processing in banks, almost all kinds of form processing systems, handwritten postal address resolution, and many more. This paper presents a simple and efficient approach to the implementation of OCR and the translation of scanned images of printed text into machine-encoded text. It makes use of different image analysis phases followed by image detection via pre-processing and post-processing. The paper also describes scanning the entire document (segmentation, in our case) and recognizing individual characters from the image irrespective of their position, size and font style; it deals with recognition of symbols from the English language, which is internationally accepted.
Free-scale Magnification for Single-Pixel-Width Alphabetic Typeface Charactersinventionjournals
This document presents a novel approach for magnifying single-pixel-width alphabetic typeface characters. It first removes useless serif patterns from the character. It then applies an intuitive stroke transcribing algorithm to describe the character. Each stroke, represented by cubic B-spline functions, is scaled using wavelet transform with arbitrary size. Experimental results on computer fonts show the proposed algorithm performs magnification well while maintaining character shape and structure without additional distortion.
Detection of crossover & bifurcation points on a retinal fundus image by anal...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Detection of crossover & bifurcation points on a retinal fundus image by ...eSAT Journals
Abstract: Over the last few decades, the analysis of retinal vascular images has gained popularity among researchers. The retina is unique in nature because of its most important features, bifurcation and crossover points, which serve as a reliable basis for authentication. Using these two kinds of points, many authentication problems can be handled easily. The literature has shown that in a retinal vascular structure, bifurcation and crossover points need to be separated for the purpose of authentication. With this motivation in mind, we propose a novel method to segregate vascular bifurcation points from crossover points in any retinal image by analyzing the neighborhood connectivity of non-vascular regions around a junction point on the retinal blood vessels. Keywords: retinal vessel analysis, retinal blood vessels, bifurcation and crossover point detection.
A Survey on Tamil Handwritten Character Recognition using OCR Techniquescscpconf
This document summarizes research on recognizing Tamil handwritten characters using optical character recognition (OCR) techniques. It discusses various approaches that have been used for pre-processing images, segmenting characters, extracting features, and classifying characters. For pre-processing, techniques like binarization, noise removal, normalization, and skew correction are discussed. For segmentation, methods like projection-based, smearing, and graph-based techniques are mentioned. Feature extraction approaches include statistical, structural, and hybrid methods. Classification is done using techniques such as k-nearest neighbors, neural networks, and support vector machines. The document also provides two tables summarizing accuracy levels achieved by different studies on Tamil character recognition and the limitations of those studies.
Text detection and recognition in scene or natural images has applications in computer vision systems such as registration number plate detection, automatic traffic sign detection, image retrieval, and assistance for visually impaired people. Scene text, however, suffers from complicated backgrounds, image blur, partly occluded text, variations in font style, image noise, and varying illumination. Scene text recognition is therefore a difficult computer vision problem. In this paper, a connected component method is used to extract text from the background. Horizontal and vertical projection profiles, geometric properties of text, image binarization, and a gap-filling method are used to extract the text from scene images. A histogram-based threshold is then applied to separate the text from the background, and finally the text is extracted from the images.
IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of mechanical and civil engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in mechanical and civil engineering. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Study on a Hybrid Segmentation Approach for Handwritten Numeral Strings in Fo...inventionjournals
This paper presents a hybrid approach to segment single- or multiple-touching handwritten numeral strings in form document, the core of which is the combined use of foreground, background and recognition analysis. The algorithm first located some feature points on both the foreground and background skeleton images containing connected numeral strings in form document. Possible segmentation paths were then constructed by matching these feature points, with an unexpected benefit of removing useless strokes. Subsequently, all these segmentation paths were validated and ranked by a recognition-based analysis, where a well-trained two-stage classifier was applied to each separated digit image to obtain its reliability. Finally, by introducing a locally optimal strategy to accelerate the recognition process, the top ranked segmentation path survived to help make a decision on whether to accept or not. Experimental results show that the proposed method can achieve a correct segmentation rate of 96.2 percent on a large dataset collected by our own.
An Efficient Segmentation Technique for Machine Printed Devanagiri Script: Bo...iosrjce
Segmentation plays a major role in processing script documents for the extraction of various features. Many researchers are working in this field to make the segmentation process simple as well as efficient. In this paper, a simple technique for both line and word segmentation of a script document is proposed. The main objective of the technique is to recognize the spaces that separate two text lines; a similar procedure is followed for word segmentation. In this work, three different scanned documents were taken as input images for both the line and word segmentation techniques. The results were outstanding on average for both lines and words: the technique provides 100% accuracy for line segmentation and 100% for word segmentation as well. Evaluation results show that our method outperforms several competing methods.
Offline Signature and Numeral Recognition in Context of ChequeIJERA Editor
The signature is considered one of the biometrics. A signature verification system is required almost everywhere a person or his/her credentials must be authenticated before a transaction can proceed, especially with bank cheques. For this purpose, the signature verification system must be powerful and accurate. To date, various methods have been used to make signature verification systems powerful and accurate. The research here relates to offline signature verification. Shape contexts have been used to verify whether two shapes are similar, with applications such as digit recognition, 3D object recognition, and trademark retrieval. In this paper we present a modified version of the shape context for signature verification on bank cheques using a k-Nearest Neighbor classifier.
Multimodal Approach for Face Recognition using 3D-2D Face Feature FusionCSCJournals
This document describes a method for multimodal face recognition using 3D and 2D face features. It extracts features from 3D point cloud data and 2D texture maps using discrete Fourier transform (DFT) and discrete cosine transform (DCT) to generate multiple spectral representations of the data. Principal component analysis (PCA) is applied to the spectral representations. Matching scores from the different representations are then fused to improve recognition accuracy compared to using the representations individually. The method is tested on the FRAV3D face database without performing pose correction preprocessing.
Data entry is by nature a time-consuming and error-prone procedure. In addition, checking the validity of submitted information is no easier than retyping it. In a mega-corporation like Kanoon Farhangi Amoozesh, there is almost no way to verify the authenticity of students' educational background. Thanks to fast computer architectures, optical character recognition (OCR) systems have become viable. Unfortunately, general-purpose OCR systems like Google's Tesseract are not well suited here because they have no a-priori information about what they are reading. In this paper the authors take an in-depth look at what has been done in the field of OCR over the last 60 years. Then a custom-made system adapted to the problem is presented, which is far more accurate than general-purpose OCRs. The developed system reads more than 60 digits per second. As shown in the Results section, the accuracy of the devised method is reasonable enough for public use.
Using Aspect Ratio to Classify Red Blood ImagesIOSR Journals
This document discusses a method for classifying red blood cell (RBC) images using aspect ratio. It involves extracting shape boundaries from RBC images through preprocessing steps. Fourier descriptors are then applied to the normalized shape signatures to represent the shapes. Aspect ratio and invariant moments are used to filter irrelevant shapes and improve retrieval accuracy when matching shapes to those in a database. Testing showed 92% accuracy when using 4 geometric features, decreasing to 90% accuracy when using more features. The method demonstrates potential for diagnosing anemia by analyzing abnormal RBC shapes.
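The aspect-ratio filter could look like the following sketch (hypothetical helper names and tolerance; the summary does not give the paper's exact features):

```python
def aspect_ratio(points):
    """Aspect ratio of a shape boundary: width over height of its
    bounding box. Near 1.0 suggests a round (normal) cell; values far
    from 1 suggest an elongated, possibly abnormal, shape."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    return width / height

def classify(points, tol=0.25):
    """Coarse shape filter based only on the aspect ratio."""
    return "round" if abs(aspect_ratio(points) - 1.0) <= tol else "elongated"
```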
Comparative study of two methods for Handwritten Devanagari Numeral RecognitionIOSR Journals
Abstract: In this paper, two different methods for numeral recognition are proposed and their results are compared. The objective is to provide an efficient and reliable method for the recognition of handwritten numerals. The first method employs a grid-based feature extraction and recognition algorithm: the features of the image are extracted using the grid technique, and this feature set is compared with the feature set of the database image for classification. The second method uses the Image Centroid Zone and Zone Centroid Zone algorithms for feature extraction, and the features are fed to an Artificial Neural Network for recognition of the input image. Machine text recognition is an important research area because of its applications in areas such as banks, post offices, and hospitals.
Keywords: Handwritten Numeral Recognition, Grid Technique, ANN, Feature Extraction, Classification.
DEVNAGARI DOCUMENT SEGMENTATION USING HISTOGRAM APPROACHijcseit
This document summarizes a research paper on Devnagari document segmentation using a histogram approach. It discusses challenges in segmenting the Devnagari script used for several Indian languages. A simple algorithm is proposed using horizontal and vertical histograms to segment documents into lines, words and characters. The algorithm achieves near 100% accuracy for line segmentation but lower accuracy for word and character segmentation due to complexities in the Devnagari script. Future work is needed to improve character segmentation handling connected and modified characters.
A Hybrid Technique for Shape Matching Based on chain code and DFS TreeIOSR Journals
This document proposes a hybrid technique for shape matching that combines chain code and depth-first search (DFS) tree methods. Chain code is used to detect boundaries and represent shapes, but can result in long codes that reduce accuracy. DFS is applied to break long chain codes into smaller subgraphs, producing more compact patterns that improve matching performance. The technique is tested on 500 images from different categories, achieving higher precision and recall than chain code alone, demonstrating the effectiveness of the hybrid approach.
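A minimal Freeman chain-code encoder, the representation this hybrid technique builds on (an illustrative sketch; the DFS-tree splitting step is not shown):

```python
# Freeman 8-direction codes in image coordinates (y grows downward):
# 0 = east, 2 = up, 4 = west, 6 = down, with diagonals in between.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1),
        (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code(boundary):
    """Encode an ordered list of boundary pixels as the sequence of
    Freeman directions between consecutive points. Each step must
    move to one of the 8 neighbours of the previous pixel."""
    code = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return code
```

Long boundaries yield long codes, which is exactly the weakness the DFS-tree decomposition described above is meant to address.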
Segmentation and recognition of handwritten digit numeral string using a mult...ijfcstjournal
In this paper, a Multi-Layer Perceptron (MLP) neural network model is proposed for recognizing unconstrained offline handwritten numeral strings. The numeral strings are segmented and isolated numerals are obtained using a connected component labeling (CCL) algorithm. The structural part of the models is modeled using a multilayer perceptron neural network. The paper also presents a new technique to remove slope and slant from a handwritten numeral string, normalize the size of the text images, and classify them with supervised learning methods. Experimental results on a database of 102 numeral string patterns written by 3 different people show that a recognition rate of 99.7% is obtained on the independent digits contained in the numeral strings, including both skewed and slanted data.
This document discusses a system for extracting text from images. It begins with an introduction describing the need for such a system. It then covers related work on text detection techniques. The proposed method involves converting images to grayscale, binarization, connected component analysis, horizontal/vertical projections, reconstruction and using OCR for recognition. Applications discussed include wearable devices, video coding, image indexing and license plate recognition. While the system is robust, OCR recognition of noisy extracted text remains a challenge.
This paper presents a survey of various reversible data hiding methods. Data hiding is the process of hiding information in a cover medium, most commonly an image. During the hiding and extraction of data, however, there is a chance that the image will be distorted. Reversible data hiding methods are used to solve this problem.
National Flags Recognition Based on Principal Component Analysis (ijtsrd)
Recognizing an unknown flag in a scene is challenging due to the diversity of the data and the complexity of the identification process. Flags are associated with geographical regions, countries and nations, and identifying the flags of different countries is a difficult task. The aim of the study is to propose a feature extraction based recognition system for Myanmar's national flag. Image features are acquired from the region and state flags and identified using principal component analysis (PCA), a statistical approach used to reduce the number of features in the national flag recognition system. Soe Moe Myint | Moe Moe Myint | Aye Aye Cho, "National Flags Recognition Based on Principal Component Analysis", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26775.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/other/26775/national-flags-recognition-based-on-principal-component-analysis/soe-moe-myint
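The PCA dimensionality-reduction step described above can be sketched in a few lines. This is a generic illustration, not the paper's implementation; the random vectors stand in for real flag image features.

```python
# Minimal PCA sketch (illustrative): project flattened image vectors onto
# the top-k principal components to reduce the feature count before
# classification.
import numpy as np

def pca_fit(X, k):
    """X: (n_samples, n_features). Returns mean and top-k component matrix."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data gives the principal axes directly.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, components):
    return (X - mean) @ components.T

# Toy usage with random vectors standing in for real flag features.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 64))
mean, comps = pca_fit(X, k=5)
Z = pca_transform(X, mean, comps)
print(Z.shape)  # (20, 5)
```

A query flag would be projected with the same mean and components, then matched against the reduced training vectors (e.g. by nearest neighbour).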
Conversion of Braille to text in English, Hindi and Tamil languages (IJCSEA Journal)
This document describes a method for converting scanned Braille documents to text in English, Hindi, and Tamil. The Braille characters are extracted from preprocessed images using segmentation and thresholding. The dot patterns in each Braille cell are converted to number sequences and mapped to letters in the appropriate language based on standard Braille codes. The converted text can then be synthesized to speech. A keyboard interface is also proposed that allows typing Braille characters via number keys corresponding to dot positions.
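The dot-pattern-to-letter mapping step can be sketched as a table lookup. The dot numbering follows the standard Braille convention (dots 1-3 down the left column, 4-6 down the right); only a few Grade 1 English letters are shown here, and the helper name is ours, not the paper's. The full table, and the Hindi and Tamil mappings, would be built the same way.

```python
# Hedged sketch of the dot-pattern lookup: each detected Braille cell is
# reduced to the set of raised dot positions (1-6) and mapped to a letter
# via a standard Grade 1 English Braille table (partial table shown).

BRAILLE_TO_ENGLISH = {
    frozenset({1}): 'a',
    frozenset({1, 2}): 'b',
    frozenset({1, 4}): 'c',
    frozenset({1, 4, 5}): 'd',
    frozenset({1, 5}): 'e',
}

def cell_to_letter(dots):
    """dots: iterable of raised dot numbers (1-6) detected in one cell."""
    return BRAILLE_TO_ENGLISH.get(frozenset(dots), '?')

print(cell_to_letter([1, 4, 5]))  # d
```

The '?' fallback marks cells whose dot pattern is not in the table, which would be flagged for review rather than silently dropped.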
Script Identification for printed document images at text-line level using DC... (IOSR Journals)
This document summarizes a research paper that proposes a script identification approach for Indian scripts at the text-line level of printed documents. The approach uses visual appearance-based recognition by extracting features from text lines using Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). These features are classified using a Modified K-Nearest Neighbor algorithm. The method achieves 95% accuracy in recognizing 11 major Indian languages and discusses the importance of script identification in multilingual environments like India for applications such as optical character recognition.
Detection of crossover & bifurcation points on a retinal fundus image by ... (eSAT Journals)
Abstract: Over the last few decades, the analysis of retinal vascular images has gained popularity among computer scientists. The retina is unique in nature because of its most important features, bifurcation and crossover points, which serve as a reliable basis for authentication. Using these two kinds of points, many authentication problems can be handled easily. The literature has shown that in a retinal vascular structure, bifurcation and crossover points need to be separated for the purpose of authentication. With this motivation in mind, we propose a novel method to segregate vascular bifurcation points from crossover points in any retinal image by analyzing the neighborhood connectivity of non-vascular regions around a junction point on the retinal blood vessels. Keywords: retinal vessel analysis, retinal blood vessels, bifurcation and crossover point detection.
A Survey on Tamil Handwritten Character Recognition using OCR Techniques (cscpconf)
This document summarizes research on recognizing Tamil handwritten characters using optical character recognition (OCR) techniques. It discusses various approaches that have been used for pre-processing images, segmenting characters, extracting features, and classifying characters. For pre-processing, techniques like binarization, noise removal, normalization, and skew correction are discussed. For segmentation, methods like projection-based, smearing, and graph-based techniques are mentioned. Feature extraction approaches include statistical, structural, and hybrid methods. Classification is done using techniques such as k-nearest neighbors, neural networks, and support vector machines. The document also provides two tables summarizing accuracy levels achieved by different studies on Tamil character recognition and the limitations of those studies.
Text detection and recognition in scene images or natural images has applications in computer vision systems such as registration number plate detection, automatic traffic sign detection, image retrieval and assistance for visually impaired people. Scene text, however, suffers from complicated backgrounds, image blur, partly occluded text, variations in font styles, image noise and varying illumination. Hence scene text recognition is a difficult computer vision problem. In this paper a connected component method is used to extract the text from the background. In this work, horizontal and vertical projection profiles, geometric properties of text, image binarization and a gap filling method are used to extract the text from scene images. Then a histogram-based threshold is applied to separate the text from the background. Finally the text is extracted from the images.
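The abstract does not say which histogram-based threshold is used; Otsu's method is one standard choice, sketched below purely as an illustration of the idea (the function name and toy data are ours).

```python
# Illustrative histogram-based thresholding (Otsu's method): pick the grey
# level that maximizes the between-class variance of the two resulting
# pixel classes (text vs background).

def otsu_threshold(pixels):
    """pixels: iterable of grey levels 0-255. Returns the threshold."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # background pixel count so far
    sum0 = 0.0  # background intensity sum so far
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity clusters: the threshold falls between them.
t = otsu_threshold([10] * 50 + [200] * 50)
print(10 <= t < 200)  # True
```

Pixels at or below the returned threshold go to one class and the rest to the other, which is exactly the text/background separation the pipeline needs.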
Study on a Hybrid Segmentation Approach for Handwritten Numeral Strings in Fo... (inventionjournals)
This paper presents a hybrid approach to segmenting single- or multiple-touching handwritten numeral strings in form documents, the core of which is the combined use of foreground, background and recognition analysis. The algorithm first locates feature points on both the foreground and background skeleton images containing connected numeral strings. Possible segmentation paths are then constructed by matching these feature points, with the unexpected benefit of removing useless strokes. Subsequently, all segmentation paths are validated and ranked by a recognition-based analysis, in which a well-trained two-stage classifier is applied to each separated digit image to obtain its reliability. Finally, by introducing a locally optimal strategy to accelerate the recognition process, the top-ranked segmentation path is used to decide whether to accept the segmentation. Experimental results show that the proposed method achieves a correct segmentation rate of 96.2 percent on a large dataset collected by the authors.
An Efficient Segmentation Technique for Machine Printed Devanagiri Script: Bo... (iosrjce)
Segmentation plays a major role in processing script documents for the extraction of various features, and many researchers are working to make the segmentation process simple as well as efficient. In this paper a simple technique for both line and word segmentation of a script document is proposed. The main objective of the technique is to recognize the spaces that separate two text lines; a similar procedure is followed for word segmentation. In this work, three different scanned documents were taken as input images for both the line and word segmentation techniques. The results were outstanding, with the method achieving 100% accuracy for line segmentation and 100% for word segmentation as well. Evaluation results show that our method outperforms several competing methods.
Offline Signature and Numeral Recognition in the Context of Cheques (IJERA Editor)
The signature is considered one of the biometrics. A signature verification system is required in almost all places where it is compulsory to authenticate a person or his/her credentials before proceeding with a transaction, especially when it comes to bank cheques. For this purpose the signature verification system must be powerful and accurate, and to date various methods have been used towards that goal. The research here is related to offline signature verification. Shape contexts have been used to verify whether two shapes are similar or not, with applications such as digit recognition, 3D object recognition and trademark retrieval. In this paper we present a modified version of the shape context for signature verification on bank cheques using a K-Nearest Neighbor classifier.
Multimodal Approach for Face Recognition using 3D-2D Face Feature Fusion (CSCJournals)
This document describes a method for multimodal face recognition using 3D and 2D face features. It extracts features from 3D point cloud data and 2D texture maps using discrete Fourier transform (DFT) and discrete cosine transform (DCT) to generate multiple spectral representations of the data. Principal component analysis (PCA) is applied to the spectral representations. Matching scores from the different representations are then fused to improve recognition accuracy compared to using the representations individually. The method is tested on the FRAV3D face database without performing pose correction preprocessing.
Data entry is by nature a time-consuming and error-prone procedure. In addition, validity checking of submitted information is no easier than retyping it. In a mega-corporation like Kanoon Farhangi Amoozesh, there is almost no way to control the authenticity of students' educational background. By virtue of fast computer architectures, optical character recognition (OCR) systems have become viable. Unfortunately, general-purpose OCR systems like Google's Tesseract are not helpful here because they have no a-priori information about what they are reading. In this paper the authors take an in-depth look at what has been done in the field of OCR over the last 60 years. Then a custom-made system adapted to the problem is presented which is far more accurate than general-purpose OCRs. The developed system reads more than 60 digits per second. As shown in the Results section, the accuracy of the devised method is reasonable enough for public use.
Using Aspect Ratio to Classify Red Blood Images (IOSR Journals)
This document discusses a method for classifying red blood cell (RBC) images using aspect ratio. It involves extracting shape boundaries from RBC images through preprocessing steps. Fourier descriptors are then applied to the normalized shape signatures to represent the shapes. Aspect ratio and invariant moments are used to filter irrelevant shapes and improve retrieval accuracy when matching shapes to those in a database. Testing showed 92% accuracy when using 4 geometric features, decreasing to 90% accuracy when using more features. The method demonstrates potential for diagnosing anemia by analyzing abnormal RBC shapes.
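The aspect-ratio feature used for filtering can be sketched directly from boundary points. This is an illustrative helper, not the paper's code; bounding-box aspect ratio is one common definition, and the sample coordinates are invented.

```python
# Illustrative aspect-ratio feature: the ratio of bounding-box width to
# height of a segmented cell boundary, used to filter shapes before
# more expensive Fourier-descriptor matching.

def aspect_ratio(points):
    """points: list of (x, y) boundary coordinates of one shape."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    return width / height

# A healthy RBC projects roughly circular (ratio near 1); an elongated
# cell deviates from 1, which is what the filter exploits.
round_cell = [(0, 0), (10, 0), (10, 10), (0, 10)]
long_cell = [(0, 0), (30, 0), (30, 10), (0, 10)]
print(aspect_ratio(round_cell), aspect_ratio(long_cell))  # 1.0 and ~2.82
```

Shapes whose ratio falls far from the expected range are discarded early, so only plausible candidates reach the database matching stage.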
Comparative study of two methods for Handwritten Devanagari Numeral Recognition (IOSR Journals)
Abstract: In this paper two different methods for numeral recognition are proposed and their results are compared. The objective is to provide an efficient and reliable method for the recognition of handwritten numerals. The first method employs a grid-based feature extraction and recognition algorithm: the features of the image are extracted using a grid technique, and this feature set is then compared with the feature set of the database image for classification. The second method uses the Image Centroid Zone and Zone Centroid Zone algorithms for feature extraction, and the features are fed to an Artificial Neural Network for recognition of the input image. Machine text recognition is an important research area because of its applications in many fields such as banks, post offices and hospitals.
Keywords: Handwritten Numeral Recognition, Grid Technique, ANN, Feature Extraction, Classification.
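The Image Centroid Zone idea can be sketched as follows. This is a hedged illustration of the common formulation, not the paper's code: the image is split into equal zones and each zone contributes the average distance of its foreground pixels from the image centroid (Zone Centroid Zone is analogous, using each zone's own centroid instead).

```python
# Illustrative Image Centroid Zone (ICZ) features: per-zone average
# distance of foreground pixels from the whole image's centroid.

def icz_features(img, grid=2):
    """img: 2D list of 0/1; grid x grid zones. Returns one feature per zone."""
    h, w = len(img), len(img[0])
    fg = [(x, y) for y in range(h) for x in range(w) if img[y][x]]
    cx = sum(x for x, _ in fg) / len(fg)   # image centroid
    cy = sum(y for _, y in fg) / len(fg)
    feats = []
    zh, zw = h / grid, w / grid
    for gy in range(grid):
        for gx in range(grid):
            zone = [(x, y) for x, y in fg
                    if gx * zw <= x < (gx + 1) * zw
                    and gy * zh <= y < (gy + 1) * zh]
            if not zone:
                feats.append(0.0)          # empty zone contributes zero
                continue
            d = sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in zone)
            feats.append(d / len(zone))
    return feats
```

The resulting fixed-length vector (one value per zone) is what would be fed to the neural network classifier.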
DEVNAGARI DOCUMENT SEGMENTATION USING HISTOGRAM APPROACH (ijcseit)
This document summarizes a research paper on Devnagari document segmentation using a histogram approach. It discusses challenges in segmenting the Devnagari script used for several Indian languages. A simple algorithm is proposed using horizontal and vertical histograms to segment documents into lines, words and characters. The algorithm achieves near 100% accuracy for line segmentation but lower accuracy for word and character segmentation due to complexities in the Devnagari script. Future work is needed to improve character segmentation handling connected and modified characters.
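The horizontal-histogram line segmentation the abstract describes can be sketched in a few lines; this is a generic illustration (function name and toy page are ours), and word/character segmentation would repeat the same idea on vertical projections within each line.

```python
# Minimal sketch of histogram-based line segmentation: rows whose
# foreground-pixel count is zero separate consecutive text lines.

def segment_lines(img):
    """img: 2D list of 0/1. Returns (start_row, end_row) pairs, inclusive."""
    row_hist = [sum(row) for row in img]    # horizontal projection profile
    lines, start = [], None
    for y, count in enumerate(row_hist):
        if count > 0 and start is None:
            start = y                       # a text line begins
        elif count == 0 and start is not None:
            lines.append((start, y - 1))    # blank row ends the line
            start = None
    if start is not None:
        lines.append((start, len(img) - 1))
    return lines

page = [[0, 0, 0],
        [1, 1, 0],   # line 1
        [0, 0, 0],
        [0, 1, 1],   # line 2
        [0, 1, 0]]
print(segment_lines(page))  # [(1, 1), (3, 4)]
```

This works well exactly when lines are separated by fully blank rows, which matches the near-100% line accuracy but lower word/character accuracy reported for Devnagari, where the headline connects characters within a word.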
Modified Skip Line Encoding for Binary Image Compression (idescitation)
This paper proposes a modified skip line encoding technique for lossless compression of binary images. Skip line encoding exploits correlation between successive scan lines by encoding only one line and skipping similar lines. The proposed technique improves upon existing skip line encoding by allowing a scan line to be skipped if a similar line exists anywhere in the image, rather than just successive lines. Experimental results on sample images show the modified technique achieves higher compression ratios than conventional skip line encoding.
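The modification described above can be sketched as follows. This is our illustrative encoding scheme, not the paper's exact format: a scan line is stored verbatim the first time it appears, and any later identical line anywhere in the image is replaced by a reference to that earlier copy.

```python
# Hedged sketch of the modified skip-line idea: each scan line is either
# stored verbatim or replaced by a reference to ANY identical earlier line
# (conventional skip-line encoding can only skip runs of identical
# consecutive lines).

def encode(lines):
    """lines: list of tuples of 0/1. Returns ('raw', line) or
    ('ref', index-of-earlier-identical-line) records."""
    seen = {}
    out = []
    for i, line in enumerate(lines):
        if line in seen:
            out.append(('ref', seen[line]))   # skip: point at earlier copy
        else:
            seen[line] = i
            out.append(('raw', line))
    return out

def decode(records):
    lines = []
    for kind, payload in records:
        lines.append(lines[payload] if kind == 'ref' else payload)
    return lines

img = [(1, 0, 1), (0, 0, 0), (1, 0, 1), (0, 0, 0)]
assert decode(encode(img)) == img          # lossless round trip
```

In this toy image the third and fourth lines both become references, whereas conventional skip-line encoding would have to store the third line verbatim because its match is not the immediately preceding line.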
This document presents a method for detecting Devnagari text in scene images. The method uses two main characteristics of Devnagari text: uniform stroke width and the presence of a headline with vertical strokes below it. Candidate text regions are identified using distance transforms to verify uniform stroke width. A probabilistic Hough transform is then used to detect horizontal lines in each region, which are analyzed to identify headlines indicating Devnagari text. The method was tested on 10,000 images and achieved a precision of 0.7994 and recall of 0.778 for Devnagari text detection, representing an improvement over previous work. Some limitations are noted and future work is proposed to address them through machine learning approaches.
Text Skew Angle Detection in Vision-Based Scanning of Nutrition Labels (Vladimir Kulyukin)
This document presents an algorithm for detecting the text skew angle in images of nutrition labels. The algorithm applies multiple iterations of the 2D Haar Wavelet Transform to downsample the image and compute horizontal, vertical, and diagonal change matrices. It then binarizes and combines these matrices into a set of 2D change points. Finally, it uses a convex hull algorithm to find the minimum area rectangle containing all text pixels, and calculates the text skew angle as the rotation angle of this rectangle. The algorithm's performance is compared to two other text skew detection algorithms on a sample of over 600 nutrition label images, finding a median error of 4.62 degrees compared to 68.85 and 20.92 degrees for the other algorithms.
OPTICAL BRAILLE TRANSLATOR FOR SINHALA BRAILLE SYSTEM: PAPER COMMUNICATION TO... (ijma)
In this paper we propose a system, the Optical Braille Translator (OBT), that identifies Sinhala Braille characters in single-sided Braille documents and translates them to the Sinhala language. The system is also capable of identifying Grade 1 English Braille characters, numbers, capital letters and some words in the Grade 2 English Braille system. Image processing techniques were used to develop the proposed system in the MATLAB environment. The translated text is displayed in a word-processing application as the final outcome. Performance evaluation results show that the proposed method can recognize Braille characters and translate them to the user-selected language, either Sinhala or English, with over 99% accuracy.
Finding similarities between structured documents as a crucial stage for gene... (Alexander Decker)
This document discusses methods for classifying structured documents by finding similarities between them. It describes how pre-processing steps like thresholding and size normalization are used. A key step is tilting documents based on detecting reference lines using clustering. Features are then extracted from the tilted images, like reference lines and logos, to classify documents. The proposed method calculates tilt angle based on the largest cluster of connected line pixels to properly orient documents for classification.
Optical Character Recognition from Text Image (Editor IJCATR)
Optical Character Recognition (OCR) is a system that provides full alphanumeric recognition of printed or handwritten characters by simply scanning the text image. An OCR system interprets the printed or handwritten character image and converts it into a corresponding editable text document. The text image is divided into regions by isolating each line, then individual characters with spaces. After character extraction, texture and topological features such as corner points, features of different regions, and the ratio of character area to convex area are calculated for all characters of the text image. Features of each uppercase and lowercase letter, digit, and symbol are stored in advance as templates. Based on the texture and topological features, the system recognizes the exact character using feature matching between the extracted character and the templates of all characters as a measure of similarity.
Text-Image Separation in Document Images using Boundary/Perimeter Detection (IDES Editor)
Document analysis plays an important role in office automation, especially in intelligent signal processing. The proposed system consists of two modules: block segmentation and block identification. In this approach, a document is first segmented into several non-overlapping blocks using a novel recursive segmentation technique, and then the features embedded in each segmented block are extracted. Two kinds of features, connected components and image boundary/perimeter features, are extracted. Documents with text inside images posed limitations in earlier reported literature. This is taken care of by applying an additional pass of Run Length Smearing on the extracted image that contains text. The proposed scheme is independent of the type and language of the document.
IRJET - Object Detection using Hausdorff Distance (IRJET Journal)
This document proposes using Hausdorff distance for object detection as it can better handle noise compared to other methods like Euclidean distance. The document discusses preprocessing images using Gaussian filtering for noise cancellation. It then represents shapes as point sets for feature extraction before using Hausdorff distance to match shapes between reference and test images for object recognition. Encouraging results were obtained when testing on MNIST, COIL and private handwritten digit datasets.
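The symmetric Hausdorff distance used for the shape matching step can be sketched directly from its definition; this is the straightforward O(n*m) formulation, offered as an illustration rather than the paper's optimized implementation.

```python
# Sketch of the symmetric Hausdorff distance between two point sets:
# h(A, B) = max over a in A of (min over b in B of ||a - b||),
# symmetrized by taking the larger of the two directed distances.

def hausdorff(A, B):
    """A, B: lists of (x, y) points. Returns the symmetric distance."""
    def directed(P, Q):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in Q)
                   for px, py in P)
    return max(directed(A, B), directed(B, A))

square = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(0, 0), (0, 1), (1, 0), (1, 3)]   # one outlier point
print(hausdorff(square, shifted))  # 2.0
```

Because the measure is driven by the single worst-matched point, a test shape matches a reference only when every point of each set lies near some point of the other, which is what makes it more noise-tolerant than naive point-to-point Euclidean comparison.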
IRJET - Object Detection using Hausdorff Distance (IRJET Journal)
This document proposes a new object recognition system using Hausdorff distance. The system aims to improve on existing methods like YOLO that struggle with small objects and can capture garbage data. The document outlines preprocessing steps like noise cancellation, representing shapes as point sets, and extracting features. It then describes using Hausdorff distance and shape context to find the best match between input and reference shapes. Testing on datasets showed encouraging results for recognizing handwritten digits.
Image hashing is an efficient way to handle the digital data authentication problem. An image hash is a compact, quality-preserving summarization of image features. In this paper, a modified center symmetric local binary pattern (CSLBP) image hashing algorithm is proposed. Unlike the 16-bin CSLBP histogram, the modified CSLBP generates an 8-bin histogram, without compromising quality, to produce a compact hash. It has been found that uniform quantization of a histogram with more bins results in more precision loss. To overcome quantization loss, the modified CSLBP generates two histograms of four bins each; uniform quantization of a 4-bin histogram results in less precision loss than for a 16-bin histogram. The first histogram represents the nearest neighbours and the second the diagonal neighbours. To enhance quality in terms of discrimination power, different weight factors are used during histogram generation. For the nearest and the diagonal neighbours, two local weight factors are used: one is the Standard Deviation (SD) and the other is the Laplacian of Gaussian (LoG). Standard deviation represents the spread of data, capturing local variation from the mean. LoG is a second-order derivative edge detection operator which detects edges well in the presence of noise. The proposed algorithm is resilient to various kinds of attacks. The proposed method is tested on a database of malicious and non-malicious images using benchmarks such as NHD and ROC, which confirm the theoretical analysis. The experimental results show good performance of the proposed method for various attacks despite the short hash length.
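The basic CSLBP code that the modified scheme builds on can be sketched per pixel as follows. This is an illustration of the standard operator, not the paper's modified version: the four center-symmetric neighbour pairs of the 8-neighbourhood are compared, giving a 4-bit code (16 bins); the paper's modification would then split the nearest and diagonal pairs into two separate 4-bin histograms.

```python
# Sketch of the basic center-symmetric LBP (CSLBP) code for one pixel:
# compare each neighbour with the one diametrically opposite it.

def cslbp_code(patch, threshold=0):
    """patch: 3x3 list of grey values. Returns a CSLBP code in 0..15."""
    # 8 neighbours in circular order starting at the right of the centre.
    n = [patch[1][2], patch[0][2], patch[0][1], patch[0][0],
         patch[1][0], patch[2][0], patch[2][1], patch[2][2]]
    code = 0
    for i in range(4):                     # centre-symmetric pairs (i, i+4)
        if n[i] - n[i + 4] > threshold:
            code |= 1 << i
    return code

patch = [[10, 10, 90],
         [10, 50, 90],
         [10, 10, 90]]
print(cslbp_code(patch))  # 3: the right-vs-left pairs dominate
```

Note that, unlike plain LBP, the centre pixel itself never enters the comparison, which keeps the code length at 4 bits and the histogram compact.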
Defended Data Embedding For Chiseler Avoidance in Visible Cryptography by Usi... (IOSR Journals)
Abstract: This paper proposes a data-hiding technique for binary images in the morphological transform domain for authentication purposes. To achieve blind watermark extraction, it is difficult to use the detail coefficients directly as a location map to determine the data-hiding locations. Thus, we view flipping an edge pixel in a binary image as shifting the edge location by one pixel horizontally or one pixel vertically. Based on this observation, we propose an interlaced morphological binary wavelet transform to track the altered edges, which thus facilitates blind watermark extraction and the incorporation of cryptographic signatures. Unlike current block-based approaches, in which the block size is 3 x 3 pixels or larger, we process the image in 2 x 2 pixel blocks. This allows flexibility in locating the edges and also lowers the computational complexity. Two cases in which flipping the candidates of one block does not change the flippability conditions of another are employed for orthogonal embedding, so that more suitable candidates can be identified and a larger capacity can be achieved. An efficient Backward-Forward Minimization method is proposed, which considers the backward (enclosed) candidates and the forward candidates that may be affected by flipping the current pixel. In this way, the overall visual distortion can be minimized. Experimental results demonstrate the validity of our arguments.
Keywords: Authentication, binary images, data hiding, morphological binary wavelet transform, orthogonal embedding.
The document proposes a novel algorithm called the Intermediate Skeletal Model (ISM) for modeling unsegmented point cloud data. ISM grows a sphere recursively over the data points to connect feature points and build a skeletal model. It is modified to use an adaptive radius for the sphere to better capture features. With a fixed radius, features may be lost, but an adaptive sphere can take on different shapes, such as a disc or ellipsoid, to segment the data during modeling based on local curvature. This approach models and segments simultaneously, without requiring separate steps.
BAG OF VISUAL WORDS FOR WORD SPOTTING IN HANDWRITTEN DOCUMENTS BASED ON CURVA... (ijcsit)
In this paper, we present a segmentation-based word spotting method for handwritten documents using the Bag of Visual Words (BoVW) framework based on curvature features. BoVW-based word spotting methods extract SIFT or SURF features at each keypoint using a fixed-size window. The drawbacks of these techniques are that they are memory intensive, the window size cannot be adapted to the length of the query, and alignment between the keypoint sets is required. To overcome the drawbacks of existing methods based on SIFT or SURF local features, we propose to extract a curvature feature at each keypoint of the word image in the BoVW framework. The curvature feature is a scalar value that describes the geometrical shape of the strokes and requires less memory to store. The proposed method is evaluated using the mean Average Precision metric through experiments conducted on popular datasets such as the GW, IAM and Bentham datasets. The results confirm that our method outperforms existing word spotting techniques.
In this paper, we present a segmentation-based word spotting method for handwritten documents using Bag of Visual Words (BoVW) framework based on curvature features. The BoVW based word spotting methods extract SIFT or SURF features at each keypoint using fixed sized window. The drawbacks of these techniques are that they are memory intensive; the window size cannot be adapted to the length of the query and requires alignment between the keypoint sets. In order to overcome the drawbacks of SIFT or SURF local features based existing methods, we proposed to extract curvature feature at each keypoint of word image in BoVW framework. The curvature feature is scalar value describes the geometrical shape of the strokes and requires less memory space to store. The proposed method is evaluated using mean Average Precision metric through experimentation conducted on popular datasets such as GW, IAM and Bentham datasets. The yielded performances confirmed that our method outperforms existing word spotting techniques
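As a rough illustration of the pipeline described above, the sketch below estimates a scalar curvature at each stroke point as a turning angle, quantizes the values into a small visual vocabulary, and compares the resulting histograms with cosine similarity. The curvature estimator, the fixed-bin quantizer standing in for a learned codebook, and the toy strokes are all simplifying assumptions, not the paper's implementation.

```python
import math

def curvature(points):
    """Approximate unsigned curvature at each interior stroke point as the
    turning angle between successive segments (illustrative estimator)."""
    ks = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = abs(a2 - a1)
        ks.append(min(d, 2 * math.pi - d))
    return ks

def bovw_histogram(curvatures, n_bins=8, k_max=math.pi):
    """Quantize scalar curvatures into n_bins 'visual words' and return a
    normalized histogram (the word-image descriptor)."""
    hist = [0] * n_bins
    for k in curvatures:
        b = min(int(k / k_max * n_bins), n_bins - 1)
        hist[b] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def cosine(u, v):
    """Cosine similarity between two histograms."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

# Two toy "strokes": a straight line and an L-shaped bend.
line = [(i, 0) for i in range(6)]
bend = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
h1 = bovw_histogram(curvature(line))
h2 = bovw_histogram(curvature(bend))
```

Because the descriptor is a single scalar per keypoint, the histogram stays compact, which is the memory advantage the abstract claims over 128-dimensional SIFT descriptors.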
A Run Length Smoothing-Based Algorithm for Non-Manhattan Document Segmentation (University of Bari, Italy)
This document proposes a run length smoothing-based algorithm called RLSO for segmenting non-Manhattan document layouts. RLSO is a variant of the Run Length Smoothing Algorithm (RLSA) that uses the OR logical operator instead of AND to group connected components. Like RLSA, RLSO requires setting thresholds but these are based on different criteria. The document also presents a technique for automatically assessing the run length thresholds needed for RLSO based on the distribution of spacing in each individual document.
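The smoothing step described above can be sketched as follows, assuming a binary image represented as nested lists of 0/1: background runs shorter than a threshold are smeared into foreground, and RLSO combines the horizontally and vertically smoothed images with OR (classic RLSA would combine them with AND). The fixed toy thresholds stand in for the automatically assessed ones described in the abstract.

```python
def rls_horizontal(row, threshold):
    """Run-length smoothing of one binary row: background runs (0s) shorter
    than `threshold` between two foreground pixels become foreground (1s)."""
    out = row[:]
    run_start = None
    seen_fg = False
    for i, px in enumerate(row):
        if px == 1:
            if seen_fg and run_start is not None and i - run_start < threshold:
                for j in range(run_start, i):
                    out[j] = 1
            run_start = None
            seen_fg = True
        elif run_start is None:
            run_start = i
    return out

def rlso(image, h_thr, v_thr):
    """RLSO: OR of horizontal and vertical run-length smoothing
    (RLSA would AND the two results instead)."""
    h = [rls_horizontal(r, h_thr) for r in image]
    cols = [rls_horizontal(list(c), v_thr) for c in zip(*image)]
    v = [list(r) for r in zip(*cols)]
    return [[hp | vp for hp, vp in zip(hr, vr)] for hr, vr in zip(h, v)]
```

Using OR makes a pixel foreground if either direction smears it, which is why RLSO can merge components in layouts that are not axis-aligned, where the AND of the two smearings would stay empty.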
Review of OCR techniques used in automatic mail sorting of postal envelopes (sipij)
This paper presents a review of various OCR techniques used in the automatic mail sorting process. A complete description of various existing methods for address block extraction and digit recognition used in the literature is discussed. The objective of this study is to provide a complete overview of the methods and techniques used by many researchers for automating the mail sorting process in postal services in various countries. The significance of Zip code or Pincode recognition is discussed.
Based on the cross section contour surface model reconstruction (IJRES Journal)
Reverse engineering modeling technology can effectively recover the original design intent of a product, shorten the development cycle of industrial products, and improve innovative product design capability in industrial fields, which is of major significance and value. This paper uses STL Viewer software to obtain the cross-section curves for reconstruction and to obtain a high-quality reconstructed model: first, the cross-section data are preprocessed and initially segmented; secondly, constrained cross-section curve reconstruction is performed; finally, the CAD model is generated. Examples prove that this method is feasible and effective.
Similar to GEOMETRIC CORRECTION FOR BRAILLE DOCUMENT IMAGES (20)
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR (cscpconf)
The progressive development of Synthetic Aperture Radar (SAR) systems has diversified the exploitation of the images generated by these systems in different applications of geoscience. Detection and monitoring of surface deformations generated by various phenomena have benefited from this evolution and have been realized by interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelation of the interferometric pairs used strongly limits the precision of the analysis results of these techniques. In this context, we propose in this work a methodological approach for surface deformation detection and analysis by differential interferograms, to show the limits of this technique according to noise quality and level. The detectability model is generated from the deformation signatures by simulating a linear fault merged into image pairs from the ERS1/ERS2 sensors acquired over a region of the Algerian south.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFICATION (cscpconf)
A trajectory-guided, concatenative approach for synthesizing high-quality video from real image samples is proposed. The automated lip-reading system searches the library for the real image sample sequence that is closest, given the video data, to the HMM-predicted trajectory. The object trajectory is obtained by projecting the face patterns into a KDA feature space. The approach identifies a speaker's face by synthesizing the identity surface of the subject's face from a small set of patterns that sparsely sample the view sphere. A KDA algorithm is used to discriminate the lip-reading images, after which the fundamental lip feature vector is reduced to a low dimension using the 2D-DCT. The dimensionality of the mouth area is then reduced with PCA to obtain the eigen-lips approach proposed by [33]. The subjective performance results of the cost function under the automatic lip-reading model illustrate the performance of the method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJECTS (cscpconf)
Universities offer a software engineering capstone course to simulate a real-world working environment in which students can work in a team for a fixed period to deliver a quality product. The objective of this paper is to report on our experience in moving from the Waterfall process to the Agile process in conducting the software engineering capstone project. We present the capstone course designs for both Waterfall-driven and Agile-driven methodologies, highlighting the structure, deliverables and assessment plans. To evaluate the improvement, we conducted a survey for two different sections taught by two different instructors to evaluate students' experience in moving from the traditional Waterfall model to an Agile-like process. Twenty-eight students filled in the survey, which consisted of eight multiple-choice questions and an open-ended question to collect feedback from students. The survey results show that students were able to gain hands-on experience that simulates a real-world working environment. The results also show that the Agile approach helped students to have an overall better design and avoid mistakes they had made in the initial design completed in the first phase of the capstone project. In addition, they were able to decide on their team capabilities and training needs and thus learn the required technologies earlier, which is reflected in the final product quality.
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES (cscpconf)
This document discusses using social media technologies to promote student engagement in a software project management course. It describes the course and objectives of enhancing communication. It discusses using Facebook for 4 years, then switching to WhatsApp based on student feedback, and finally introducing Slack to enable personalized team communication. Surveys found students engaged and satisfied with all three tools, though less familiar with Slack. The conclusion is that social media promotes engagement but familiarity with the tool also impacts satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC (cscpconf)
Using a computer to answer questions has been a human dream since the beginning of the digital era. Question-answering systems are referred to as intelligent systems that can provide responses to questions asked by the user based on certain facts or rules stored in a knowledge base, and they can generate answers to questions asked in natural language. One of the first main ideas of fuzzy logic was to work on the problem of computer understanding of natural language, so this survey paper provides an overview of what question answering is and its system architecture, its possible relationship with and differences from fuzzy logic, and the previous related research with respect to the approaches that were followed. At the end, the survey provides an analytical discussion of the proposed QA models, alone or combined with fuzzy logic, and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS (cscpconf)
Human beings generate different speech waveforms when speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words, which facilitates the preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses a dynamic programming technique for global alignment and shortest-distance measurement. The DPW algorithm can be used to enhance the pronunciation dictionaries of well-known languages like English or to build pronunciation dictionaries for lesser-known sparse languages. The precision measurement experiments show 88.9% accuracy.
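The paper's exact cost model is not reproduced here, but the core of such an algorithm, dynamic programming for global alignment between two phone sequences, can be sketched as a Levenshtein-style distance; the ARPAbet-like phone strings and unit costs below are illustrative assumptions.

```python
def dynamic_phone_warping(phones_a, phones_b, sub_cost=1, indel_cost=1):
    """Global-alignment distance between two phone sequences via dynamic
    programming (Levenshtein-style recurrence; unit costs are illustrative)."""
    m, n = len(phones_a), len(phones_b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost          # deletions only
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost          # insertions only
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = 0 if phones_a[i - 1] == phones_b[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j] + indel_cost,   # delete a phone
                          d[i][j - 1] + indel_cost,   # insert a phone
                          d[i - 1][j - 1] + match)    # match / substitute
    return d[m][n]

# Two hypothetical pronunciations of "tomato" in ARPAbet-like phones.
us_phones = ["T", "AH", "M", "EY", "T", "OW"]
uk_phones = ["T", "AH", "M", "AA", "T", "OW"]
```

A small distance (here, one substitution) indicates two pronunciation variants of the same word, which is exactly the signal a pronunciation dictionary builder needs.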
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS (cscpconf)
In education, the use of electronic (E) examination systems is not a novel idea, as E-examination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in instructor and student answers. The matching ratio is computed based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessment.
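A minimal sketch of the matching-ratio idea, with a hand-supplied synonym table standing in for the semantic-similarity and keyword-expansion modules described above (the function name and signature are hypothetical):

```python
def matching_ratio(instructor_keywords, student_answer, synonyms=None):
    """Fraction of instructor keywords found in the student answer,
    optionally crediting synonyms. A crude stand-in for the paper's
    semantic and document-similarity matching."""
    synonyms = synonyms or {}
    words = set(student_answer.lower().split())
    hits = 0
    for kw in instructor_keywords:
        # The keyword itself or any listed synonym counts as a match.
        candidates = {kw.lower()} | {s.lower() for s in synonyms.get(kw, [])}
        if candidates & words:
            hits += 1
    return hits / len(instructor_keywords) if instructor_keywords else 0.0

ratio = matching_ratio(["stack", "lifo"],
                       "a stack is a last-in first-out structure",
                       synonyms={"lifo": ["last-in"]})
```

A grading module would then map the ratio onto a mark scale, which is where semantic similarity matters most: without the synonym expansion, "lifo" above would not match at all.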
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTIC (cscpconf)
African Buffalo Optimization (ABO) is one of the most recent swarm intelligence based metaheuristics. The ABO algorithm is inspired by the buffalo's behavior and lifestyle. Unfortunately, the standard ABO algorithm is proposed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and a probability model to generate binary solutions. In the second version (called LBABO) they use logical operators to operate on the binary solutions. Computational results on two knapsack problem (KP and MKP) instances show the effectiveness of the proposed algorithms and their ability to achieve good and promising solutions.
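The sigmoid-plus-probability-model step used by SBABO-style binary metaheuristics can be sketched as follows; the seed and the sample position are arbitrary, and the real algorithm would apply this inside the ABO update loop rather than to a single fixed vector.

```python
import math
import random

def sigmoid(x):
    """Map a continuous component to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(continuous_position, rng=random):
    """Sigmoid + probability model: each continuous component becomes 1
    with probability sigmoid(x), else 0 (SBABO-style transfer step)."""
    return [1 if rng.random() < sigmoid(x) else 0 for x in continuous_position]

random.seed(0)  # deterministic toy run
bits = binarize([-10.0, 0.0, 10.0])
```

Strongly negative components are almost surely mapped to 0 and strongly positive ones to 1, so the continuous search dynamics of ABO carry over to binary problems such as the knapsack instances in the abstract.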
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN (cscpconf)
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure and ensure a persistent presence on a compromised host. Amongst the various DDNS techniques, the Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names. The approach's feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmically generated domain names. Findings from this study show that domain names made up of English characters "a-z" achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
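The weighted-score idea can be illustrated with a simple character-frequency scorer; the frequency table and the scaling below are illustrative and not the paper's exact formula, though dictionary-like names do score far above random-looking ones.

```python
# Approximate relative letter frequencies in English text (percent).
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.1, 'z': 0.07,
}

def weighted_score(domain):
    """Mean English-letter-frequency weight of the second-level label,
    scaled by 10. Random-looking (DGA) names tend to score low because
    they over-use rare letters (illustrative scoring, not the paper's)."""
    label = domain.split('.')[0].lower()
    letters = [c for c in label if c in ENGLISH_FREQ]
    if not letters:
        return 0.0
    return sum(ENGLISH_FREQ[c] for c in letters) / len(letters) * 10
```

On this toy scale, a dictionary-like label such as "google" lands well above the abstract's < 45 threshold while a random-looking string like "xjqzvkwq" lands far below it.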
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C... (cscpconf)
The document proposes a blockchain-based digital currency and streaming platform called GoMAA to address issues of piracy in the online music streaming industry. Key points:
- GoMAA would use a digital token on the iMediaStreams blockchain to enable secure dissemination and tracking of streamed content. Content owners could control access and track consumption of released content.
- Original media files would be converted to a Secure Portable Streaming (SPS) format, embedding watermarks and smart contract data to indicate ownership and enable validation on the blockchain.
- A browser plugin would provide wallets for fans to collect GoMAA tokens as rewards for consuming content, incentivizing participation and addressing royalty discrepancies by recording
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEM (cscpconf)
This document discusses the importance of verb suffix mapping in discourse translation from English to Telugu. It explains that after anaphora resolution, the verbs must be changed to agree with the gender, number, and person features of the subject or anaphoric pronoun. Verbs in Telugu inflect based on these features, while verbs in English only inflect based on number and person. Several examples are provided that demonstrate how the Telugu verb changes based on whether the subject or pronoun is masculine, feminine, neuter, singular or plural. Proper verb suffix mapping is essential for generating natural and coherent translations while preserving the context and meaning of the original discourse.
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-TYPE EQUATIONS (cscpconf)
In this paper, based on the definition of conformable fractional derivative, the functional
variable method (FVM) is proposed to seek the exact traveling wave solutions of two higher-dimensional
space-time fractional KdV-type equations in mathematical physics, namely the
(3+1)-dimensional space–time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-
dimensional space–time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony
(GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which
contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave
solutions, have many potential applications in mathematical physics and engineering. The
simplicity and reliability of the proposed method are verified.
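For context, the conformable fractional derivative on which the functional variable method above relies is commonly defined (in the sense of Khalil et al.) as:

```latex
T_\alpha(f)(t) \;=\; \lim_{\varepsilon \to 0}
\frac{f\!\left(t + \varepsilon\, t^{\,1-\alpha}\right) - f(t)}{\varepsilon},
\qquad t > 0,\quad \alpha \in (0, 1].
```

For a differentiable f this reduces to T_alpha(f)(t) = t^(1-alpha) f'(t), which is the property such methods exploit: a traveling wave transformation converts the space-time fractional ZK and GZK-BBM equations into ordinary differential equations solvable in closed form.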
AUTOMATED PENETRATION TESTING: AN OVERVIEW (cscpconf)
The document discusses automated penetration testing and provides an overview. It compares manual and automated penetration testing, noting that automated testing allows for faster, more standardized and repeatable tests but has limitations in developing new exploits. It also reviews some current automated penetration testing methodologies and tools, including those using HTTP/TCP/IP attacks, linking common scanning tools, a Python-based tool targeting databases, and one using POMDPs for multi-step penetration test planning under uncertainty. The document concludes that automated testing is more efficient than manual for known vulnerabilities but cannot replace manual testing for discovering new exploits.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORK (cscpconf)
Since the mid of 1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing
attention of neuroscientists and computer scientists, since it opens a new window to explore
functional networks of the human brain with relatively high resolution. The BOLD technique provides an almost accurate state of the brain. Past research proves that neurological diseases damage brain network interactions, protein-protein interactions and gene-gene interactions. A number of neurological research papers also analyse the relationships among the damaged parts. By computational methods, especially machine learning techniques, we can show such classifications.
In this paper we used OASIS fMRI dataset affected with Alzheimer’s disease and normal
patient’s dataset. After proper processing the fMRI data we use the processed data to form
classifier models using SVM (Support Vector Machine), KNN (K- nearest neighbour) & Naïve
Bayes. We also compare the accuracy of our proposed method with existing methods. In the future, we will try other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT ANALYSIS (cscpconf)
The document proposes a new validation method for fuzzy association rules based on three steps: (1) applying the EFAR-PN algorithm to extract a generic base of non-redundant fuzzy association rules using fuzzy formal concept analysis, (2) categorizing the extracted rules into groups, and (3) evaluating the relevance of the rules using structural equation modeling, specifically partial least squares. The method aims to address issues with existing fuzzy association rule extraction algorithms such as large numbers of extracted rules, redundancy, and difficulties with manual validation.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATA (cscpconf)
In many applications of data mining, class imbalance is noticed when examples in one class are
overrepresented. Traditional classifiers result in poor accuracy of the minority class due to the
class imbalance. Further, the presence of within-class imbalance, where classes are composed of multiple sub-concepts with different numbers of examples, also affects the performance of the classifier. In this paper, we propose an oversampling technique that handles between-class and within-class imbalance simultaneously and also takes into consideration the generalization ability in data space. The proposed method is based on two steps: performing Model-Based
Clustering with respect to classes to identify the sub-concepts; and then computing the
separating hyperplane based on equal posterior probability between the classes. The proposed
method is tested on 10 publicly available data sets and the result shows that the proposed
method is statistically superior to other existing oversampling methods.
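The second step above, a separating boundary at equal posterior probability, can be illustrated in one dimension for two Gaussian sub-concepts with shared variance; setting the posteriors equal and solving gives the closed form below (a sketch, not the paper's multivariate procedure).

```python
import math

def equal_posterior_boundary(mu1, mu2, sigma, prior1, prior2):
    """1-D point where two equal-variance Gaussian classes have equal
    posterior probability: solve prior1 * N(x | mu1) = prior2 * N(x | mu2).
    Taking logs yields x = (mu1 + mu2)/2 + sigma^2 * ln(prior2/prior1) / (mu1 - mu2)."""
    return (mu1 + mu2) / 2 + sigma ** 2 * math.log(prior2 / prior1) / (mu1 - mu2)

# With equal priors the boundary is simply the midpoint of the two means.
b_equal = equal_posterior_boundary(0.0, 4.0, 1.0, 0.5, 0.5)
# With a rarer second class the boundary shifts toward the rarer mean,
# enlarging the majority class's region.
b_skewed = equal_posterior_boundary(0.0, 4.0, 1.0, 0.9, 0.1)
```

Synthetic minority examples generated on the minority side of such a boundary stay in a region the classifier would already attribute to that sub-concept, which is the generalization-ability consideration the abstract mentions.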
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCH (cscpconf)
Data collection is an essential, but manpower intensive procedure in ecological research. An
algorithm was developed by the author which incorporated two important computer vision
techniques to automate data cataloging for butterfly measurements. Optical Character
Recognition is used for character recognition and Contour Detection is used for image processing.
Proper pre-processing is first done on the images to improve accuracy. Although
there are limitations to Tesseract’s detection of certain fonts, overall, it can successfully identify
words of basic fonts. Contour detection is an advanced technique that can be utilized to
measure an image. Shapes and mathematical calculations are crucial in determining the precise
location of the points on which to draw the body and forewing lines of the butterfly. Overall,
92% accuracy was achieved by the program for the set of butterflies measured.
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CITIES (cscpconf)
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of the city
services including energy, transportation, health, and much more. They generate massive
volumes of structured and unstructured data on a daily basis. Also, social networks, such as
Twitter, Facebook, and Google+, are becoming a new source of real-time information in smart
cities. Social network users are acting as social sensors. These datasets are so large and complex that they are difficult to manage with conventional data management tools and methods. To become
valuable, this massive amount of data, known as 'big data,' needs to be processed and
comprehended to hold the promise of supporting a broad range of urban and smart cities
functions, including among others transportation, water, and energy consumption, pollution
surveillance, and smart city governance. In this work, we investigate how social media analytics
help to analyze smart city data collected from various social media sources, such as Twitter and
Facebook, to detect various events taking place in a smart city and identify the importance of
events and concerns of citizens regarding some events. A case scenario analyses the opinions of
users concerning the traffic in the three largest cities in the UAE.
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGE (cscpconf)
The anonymity of social networks makes them attractive for hate speakers seeking to mask their criminal activities online, posing a challenge to the world and in particular to Ethiopia. With this ever-increasing volume of social media data, hate speech identification becomes a challenge, aggravating conflict between citizens of nations. Due to the high rate of production, it has become difficult to collect, store and analyze such big data using traditional detection methods. This paper proposes the application of Apache Spark in hate speech detection to reduce these challenges. The authors developed an Apache Spark based model to classify Amharic Facebook posts and comments into hate and not hate. The authors employed Random Forest and Naïve Bayes for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold cross-validation, the model based on Word2Vec embeddings performed best with 79.83% accuracy. The proposed method achieves a promising result with the unique features of Spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT (cscpconf)
This article presents Part of Speech tagging for Nepali text using General Regression Neural
Network (GRNN). The corpus is divided into two parts viz. training and testing. The network is
trained and validated on both training and testing data. It is observed that 96.13% of words are correctly tagged on the training set whereas 74.38% of words are tagged correctly on the testing data set using GRNN. The result is compared with the traditional Viterbi algorithm based on the Hidden Markov Model. The Viterbi algorithm yields 97.2% and 40% classification accuracies on the training and testing data sets respectively. The GRNN-based POS tagger is more consistent than the traditional Viterbi decoding technique.
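At its core a GRNN is Specht's kernel-regression network: the prediction is a Gaussian-kernel weighted average of training targets (the Nadaraya-Watson estimator). The sketch below shows that estimator on toy numeric data; encoding Nepali word contexts as numeric features is the part the paper addresses and is not reproduced here.

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """GRNN prediction: Gaussian-kernel weighted average of training
    targets. For POS tagging, x would be a numeric encoding of the word
    context and train_y the (encoded) tags."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total

# Toy regression data: targets roughly follow y = x^2.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]
```

Because every training example contributes through a smooth kernel, the output varies gradually with the input, which is one explanation for the consistency the abstract reports compared with hard Viterbi decoding.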
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Computer Science & Information Technology (CS & IT)
The dimensions of a Braille character are standard irrespective of the language. Each dot in a Braille cell has a diameter of 1.5 mm. Within each cell the dot centres are placed at a distance of 2.5 mm vertically and 2.3 mm horizontally. Two cells are separated by a distance of 6 to 6.4 mm horizontally and 10.4 to 12.7 mm vertically. These measurements are illustrated in Figure 1.
Since the number of Braille books available is limited, converting a scanned Braille document to text or audio benefits many people who are in need. It also enables the corresponding document to be sent across the web. Identifying the Braille cells and decoding them to the corresponding language are the major processes involved in the conversion [7]. Identifying Braille cells is difficult if the scanned documents are rotated or if the sheets are at a different scale. These issues are outlined as follows:
• When tilted by some angle, the mapping of Braille cells to text is inappropriate and hence results in incorrect text.
• An inverted document may result in a different mapping. For example, for two Tamil letters, the inverted இ is equivalent to ஈ. Their Braille representations are shown in Figure 2.
Figure 2. Inverted symbol mapping
• If the documents are either zoomed or shrunk, their scaling varies and the spacing between the Braille dots or cells differs from the standard measurements. Hence, the identification of Braille cells becomes difficult.
This paper proposes a technique which corrects the tilt of Braille documents and brings the scanned document to the standard scale. The scanned document is converted to a binary image before processing. The tilt (skew) angle is identified by fitting a straight line to the set of first-encountered vertical pixels, and an inverse rotation matrix is used to correct the tilt. The paper utilizes the fact that the ratio of the between-character separation to the within-character separation remains constant irrespective of the scaling factor. Horizontally, a wide separation signifies the distance between two Braille cells (i.e., two characters) and a narrow separation signifies the distance between the dots of the same Braille cell. These distances cannot be measured directly, as the presence of dots varies with the characters in the document. Since the raised dots may occur in different positions, the distances vary for different character combinations. The ratio of the distance between characters to the distance between dots lies between 2.6 and 2.78 in the horizontal direction and between 2.08 and 2.54 in the vertical direction, irrespective of the scaling factor. A distance pair satisfying these criteria is chosen and compared with the standard values to calculate the horizontal scale factor. A similar procedure is used for the vertical distances to calculate the vertical scale factor. An inverse geometric transformation matrix is used to bring the scale to the standard form. Once the documents are brought to the standard form, the Braille characters can be converted to text as explained in [7].
In this paper, Section 2 discusses some of the methods proposed for finding the rotation and scaling of scanned documents. The third section explains the proposed solution in detail, and the fourth section covers the outcomes of the proposed method. The final section concludes with the issues involved in implementing the proposed method and future work.
2. LITERATURE REVIEW
The issues that must be taken into consideration when implementing Braille-to-text conversion are the scaling and rotation factors. Some of the methods to find them are as follows. In [1], the image was slanted one pixel at a time in the vertical direction and the deviation over the sum of rows was calculated at each step; the maximum was obtained when the dots were horizontal. In [3], a probabilistic approach was proposed to estimate the skewness and line spacing of Braille documents. The idea is to extract the locations of the pixels belonging to the shadow regions of dots and consider them as samples drawn from a two-dimensional Probability Density Function (PDF). The parameters of the PDF are related to the skewness and scale properties of the document and are estimated in a Maximum-Likelihood framework using the Expectation Maximization approach. Paper [4] finds the rotation angle as follows: geometric correction aims to bring all scanned Braille document images to a common reference point. A simple technique is used to solve this issue: a 15x10 mm rectangle is manually drawn with a black ink pen at a fixed position on each page of the Braille document. By detecting the 15x10 mm rectangle in the scanned image and estimating its position and orientation, the scanned image is transformed to the common reference point. Abdul Malik Al-Salman et al. [2] have proposed a probabilistic algorithm to correct any skew in tilted scanned images. They also mention that the maximum degree of recognizing a de-skewed image is 4 degrees from either the left or the right side.
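The deviation-over-row-sums idea of [1] can be sketched in a few lines. The following is an illustrative pure-Python reconstruction, not the authors' code: the dot coordinates are rotated by each candidate angle, and the angle that makes the row-sum profile spikiest (highest variance) is taken as the skew.

```python
import math

def profile_variance(points, angle_deg):
    """Variance of the row-sum profile after rotating the dot
    coordinates by angle_deg; dots aligned on common horizontal rows
    give a spiky profile and hence a high variance."""
    a = math.radians(angle_deg)
    ys = [round(-x * math.sin(a) + y * math.cos(a)) for x, y in points]
    lo, hi = min(ys), max(ys)
    profile = [0] * (hi - lo + 1)
    for y in ys:
        profile[y - lo] += 1
    mean = sum(profile) / len(profile)
    return sum((v - mean) ** 2 for v in profile) / len(profile)

def estimate_skew(points, max_deg=4.0, step=0.5):
    """Brute-force search over candidate angles, in the spirit of the
    deviation-over-row-sums method of [1]."""
    n = int(round(max_deg / step))
    candidates = [i * step for i in range(-n, n + 1)]
    return max(candidates, key=lambda a: profile_variance(points, a))
```

The search range of 4 degrees matches the maximum tilt assumed later in this paper; the 0.5-degree step is an arbitrary illustrative resolution.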
As referred to in [8], Supernova is a Windows-based magnifier, screen reader and Braille system that supports text-to-speech conversion, Braille displays and note-takers. [9] refers to a paper on Braille word segmentation and the transformation of Mandarin Braille to Chinese characters. [10] discusses the main concepts related to OBR systems and lists the work of different researchers with respect to the main areas of an OBR system, such as pre-processing, dot extraction, and classification. [11] focuses on developing a system to recognize an image of embossed Arabic Braille and then convert it to text. [12] proposes a software solution prototype to optically recognize single-sided embossed Braille documents using a simple image processing algorithm and a probabilistic neural network. [13] describes an approach to a prototype system that uses a commercially available flat-bed scanner to acquire a grey-scale image of a Braille document, from which the characters are recognized and encoded for production or further processing. [14] presents an automatic system to recognize Braille pages and convert the Braille documents into English/Chinese text for editing. [15] describes a new technique for recognizing Braille cells in Arabic single-sided Braille documents. [16] develops a system that converts a Braille image, within acceptable constraints, to a computer-readable form. [17] provides a detailed description of a method for converting Braille, as it is stored as characters in a computer, into print.
3. PROPOSED SOLUTION
Most of the commercial software focuses on converting text to Braille and vice versa. Very few consider scanned Braille documents as input, and there the geometric corrections are done based on probabilistic models. This paper proposes a method that uses a geometric transformation matrix for rotation and scale correction. The parameters of the matrix are identified using the standard measurements of the Braille cells. The overall block diagram of the proposed method is shown in Figure 3.
Figure 3. Block diagram of the proposed method
The Braille document is scanned and taken as input. The scanned document is converted to grey scale and the noise is removed using a Gaussian filter. To identify the dots, the image is convolved with the Prewitt filter and the resulting image is thresholded. The skew correction of the document is done in two steps: first the rotation is corrected, followed by scale correction. In the resulting binary image, the Braille dots are represented as white and non-dots as black.
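A minimal sketch of this dot-detection step in pure Python is given below. The 3x3 Prewitt kernels are standard, but the gradient-magnitude threshold is an arbitrary illustrative value, since the paper does not specify one.

```python
# 3x3 Prewitt kernels for horizontal and vertical gradients.
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def filter3x3(img, k):
    """Cross-correlate a 2D list-of-lists image with a 3x3 kernel
    (border pixels are left as zero)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(img[i + di - 1][j + dj - 1] * k[di][dj]
                            for di in range(3) for dj in range(3))
    return out

def binarize_dots(gray, thresh):
    """Mark pixels whose Prewitt gradient magnitude (L1 approximation)
    exceeds thresh as white (1); everything else is black (0)."""
    gx = filter3x3(gray, PREWITT_X)
    gy = filter3x3(gray, PREWITT_Y)
    h, w = len(gray), len(gray[0])
    return [[1 if abs(gx[i][j]) + abs(gy[i][j]) > thresh else 0
             for j in range(w)] for i in range(h)]
```

In practice the Gaussian smoothing mentioned above would run before this step; it is omitted here to keep the sketch short.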
3.1 Rotation Correction
Since the document is scanned, the maximum tilt is assumed to be 4 degrees in the clockwise or anticlockwise direction. The extreme coordinates of the white dots are used to identify the direction of the tilt.
Let xmin and xmax represent the minimum and maximum x coordinates of the white dots. Their corresponding y values are represented as y(xmin) and y(xmax) respectively. Then the conditions in Eq. (1) and (2) specify anticlockwise and clockwise rotation, as illustrated in Figure 4 and Figure 5 respectively.
y(xmin) < y(xmax) (1)
y(xmin) > y(xmax) (2)
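These two conditions translate directly into code. A small sketch (hypothetical function name; image coordinates, with y growing downwards):

```python
def tilt_direction(dots):
    """dots: list of (x, y) white-dot coordinates.
    Returns 'anticlockwise' if y(xmin) < y(xmax) (Eq. 1),
    'clockwise' if y(xmin) > y(xmax) (Eq. 2), and 'none' otherwise."""
    xmin_dot = min(dots, key=lambda d: d[0])  # dot with minimum x
    xmax_dot = max(dots, key=lambda d: d[0])  # dot with maximum x
    if xmin_dot[1] < xmax_dot[1]:
        return "anticlockwise"
    if xmin_dot[1] > xmax_dot[1]:
        return "clockwise"
    return "none"
```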
If both conditions fail, the document is not rotated. In the rotated document the global ymin is identified and vertical lines are drawn to the first-encountered white dots from ymin. A dot is chosen only if its y value falls in the range [ymin, T], where T represents a threshold.
Figure 4. Sample image for anticlockwise rotation    Figure 5. Sample image for clockwise rotation
A straight line is fitted to the selected dots such that it minimizes the squared deviations of the observed y coordinates from the line, as in Eq. (4):

y'i = a + b·xi (3)
∑ei² = ∑(yi − y'i)² (4)

where yi is the actual y value, xi is the independent variable at the ith position, y'i is the predicted value of the dependent variable, a is a constant representing the point at which the line crosses the Y axis when x = 0, b is the regression slope, and ∑ei² is the sum of squared prediction errors. The angle made by this line with the x-axis is calculated and identified as the tilt angle α. The sign of the tilt angle varies with the direction of rotation. The rotation is corrected by applying the geometric transformation matrix given in Eq. (5).
    [x]   [ cos α   sin α   0 ] [x']
    [y] = [-sin α   cos α   0 ] [y']        (5)
    [1]   [   0       0     1 ] [1 ]
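A compact pure-Python sketch of Eqs. (3)-(5): the closed-form least-squares slope gives the tilt angle α, and the inverse rotation maps each pixel of the tilted image back to the de-skewed frame. Function names are illustrative, not from the paper.

```python
import math

def fit_tilt_angle(points):
    """Least-squares fit y = a + b*x through the top-edge dots;
    the tilt angle alpha = atan(b), returned in degrees."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # regression slope
    return math.degrees(math.atan(b))

def rotate_point(x, y, alpha_deg):
    """Inverse rotation of Eq. (5): maps a pixel of the tilted image
    back to the de-skewed frame."""
    a = math.radians(alpha_deg)
    return (x * math.cos(a) + y * math.sin(a),
            -x * math.sin(a) + y * math.cos(a))
```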
3.2 Scale Correction
The tilt-corrected image is used for scale correction, to fit the Braille document to the standard measurements. When the binary image is projected onto the Y axis, it gives the frequency of white dots for each horizontal line; a zero frequency signifies a horizontal blank line, which may occur between dots and between characters. A similar projection onto the X axis gives the frequency of white dots for each vertical line; a zero frequency here signifies a vertical blank line between dots or between characters. The presence of dots varies depending on the Braille character, hence the distances between the dots and characters also vary. To get a reasonable judgement of the distances, a sufficiently large sub-window near the top left corner is considered. This portion of the image is cropped such that the left and top blank lines are removed, and the centroids of the dots in the cropped image are used for further processing. The ratio of the horizontal distance between characters to the distance between dots remains the same irrespective of the horizontal scaling factor; it ranges from 2.6 to 2.78. A similar concept applies to the vertical distance ratio, which ranges from 2.08 to 2.54. The horizontal distances between every successive pair of dot centroids are calculated, and if the ratio of the distances falls in the specified range those distances are considered. Since the ratio falls in a range, the most frequently occurring value is found and its corresponding distance is used for calculating the scaling factor, as shown in Eq. (6).
Sf = d ÷ Std(d) (6)
where Sf is the scaling factor. The standard distance differs according to the dpi of the input image. For a 100 dpi image, the pixel-to-mm equivalence can be found as follows:
1 inch = 25.4 mm (7)
100 pixels/inch = 100 pixels / 25.4 mm (8)
1 mm = 100/25.4 = 3.93 pixels (9)
Hence, from the Braille cell dimensions, the standard horizontal distance between characters varies from 23.58 to 25.15 pixels and the standard vertical distance between characters varies from 39.3 to 47.16 pixels. Once the scaling factor is known, the image is de-scaled to the standard size using the inverse scaling transformation matrix in Eq. (10).
    [x]   [ 1/p    0    0 ] [x']
    [y] = [  0    1/q   0 ] [y']        (10)
    [1]   [  0     0    1 ] [1 ]
where p and q are the scaling factors obtained in the horizontal and vertical directions from the input image respectively, (x', y') are the input pixel coordinates and (x, y) are the corresponding output pixels.
Simple Braille cell overlapping can then be performed on the scale-corrected image. The extracted cells can be converted to Braille characters, which can then be mapped to the corresponding alphabet as in [7].
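The projection profiles and zero-frequency (blank-line) detection used at the start of this subsection can be sketched as follows (pure Python, hypothetical helper names):

```python
def projections(binary):
    """Row (Y-axis) and column (X-axis) white-pixel counts of a
    binary image stored as a list of lists of 0/1."""
    h, w = len(binary), len(binary[0])
    rows = [sum(binary[i]) for i in range(h)]
    cols = [sum(binary[i][j] for i in range(h)) for j in range(w)]
    return rows, cols

def blank_runs(profile):
    """Start/end indices of the maximal zero runs in a projection
    profile, i.e. the blank lines between dots or between characters."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v == 0 and start is None:
            start = i
        elif v != 0 and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(profile) - 1))
    return runs
```

The widths of these runs distinguish the narrow within-character gaps from the wide between-character gaps used by the ratio test.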
4. EXPERIMENTAL RESULTS
Several Braille documents of different dpi (100, 300, 400 and 600) were taken as input for both scaling and rotation correction. The results shown below refer to documents of 100 dpi. A portion of the scanned Braille document is shown in Figure 6.
Figure 6. Scanned input image    Figure 7. Binary image
The image is pre-processed and converted into a binary image as discussed in the previous section. A portion of the binary image is shown in Figure 7.
4.1 Rotation
The first occurring white dot when tracing vertically up to a threshold distance T is found. All such dots along the horizontal direction are indicated by vertical yellow lines in Figure 8. These dots are horizontally joined by the cyan line.
Figure 8. Before line fitting algorithm    Figure 9. After line fitting algorithm
The image obtained after applying the line fitting algorithm to the dots lying on the cyan line in Figure 8 is shown in Figure 9.
The coordinates of the dots so obtained are plotted as shown in Figure 10, which also shows the line with minimum error.
Figure 10(a). Graph for anticlockwise line fitting    Figure 10(b). Graph for clockwise line fitting
The final image obtained after removing the tilt is shown in Figure 11.
The actual rotation angle (θ), the start and end points before line fitting (Sb and Eb) and after line fitting (Sl and El), the experimentally calculated angle θc and the mean error of line fitting Merr are tabulated for different anticlockwise rotation angles in Table 1. A similar listing is done for clockwise rotation angles in Table 2.
Figure 11. Re-rotated image

Table 1. Anticlockwise rotation
Actual θ           -4          -3          -2          -1
Start point (Sb)   (79,155)    (77,144)    (79,137)    (73,116)
End point (Eb)     (845,101)   (843,102)   (850,107)   (844,103)
Start point (Sl)   (79,156)    (77,143)    (79,136)    (73,115)
End point (El)     (845,103)   (844,103)   (850,107)   (844,102)
θc                 -4.01       -2.99       -2.177      -0.97
Merr               -2.24       -2.04       5.35        -1.69

Table 2. Clockwise rotation
Actual θ           4           3           2           1
Start point (Sb)   (915,159)   (897,144)   (883,136)   (861,116)
End point (Eb)     (149,109)   (140,108)   (126,110)   (141,106)
Start point (Sl)   (915,160)   (897,145)   (883,137)   (861,116)
End point (El)     (149,107)   (140,107)   (126,109)   (141,105)
θc                 3.95        2.89        2.09        0.91
Merr               -4.06       5.53        -4.40       8.31

4.2 Scaling
The image obtained as a result of rotation correction is given as input for scaling correction. Horizontal and vertical projection profiles are calculated. The horizontal and vertical lines with zero frequency, after eliminating consecutive zeros, are identified as Braille cell boundaries and are shown in blue in Figure 12.
Figure 12. Image after profiling    Figure 13. Image with centroids marked
To decide the scaling factor and the ratio, four standard distances are considered. These distances are calculated with the dot centres as reference.
Case 1: distance in the horizontal direction within the same cell (wh), which is 2.3 mm or 9.04 pixels for a standard Braille cell (Std wh).
Case 2: distance in the horizontal direction between 2 consecutive cells (bh). This distance according to the standard measurement ranges from 6 to 6.4 mm, i.e. 23.58 to 25.15 pixels (Std bh).
Case 3: distance in the vertical direction within the same cell (wv), whose standard measurement is 2.5 mm or 9.82 pixels (Std wv).
Case 4: distance in the vertical direction between 2 consecutive Braille cells (bv). This distance according to the standard measurement ranges from 10.4 to 12.7 mm, or 39.3 to 47.16 pixels (Std bv).
Part of the profiled image is cropped and the centroids of each Braille dot are marked in red, as shown in Figure 13. From the cropped image the distances and ratios are calculated as specified in Section 3. The top 3 most frequently occurring ratios that fall within the valid range and their counts are tabulated in Table 3 for various percentages of scaling. The horizontal distance values (wh and bh) for the most frequently occurring ratio are listed in Table 4. Factr1 and Factr2 represent the horizontal scale factors, calculated from wh and bh respectively using the formulae

Factr1 = wh ÷ Std(wh) (11)
Factr2 = bh ÷ Std(bh) (12)

Table 3. Horizontal ratio-count
size %   r1     r2     r3     c1   c2   c3
25       2.60   2.64   2.69   6    3    2
50       2.61   2.68   2.72   3    3    2
75       2.66   2.70   2.60   5    2    1
Orig     2.70   2.64   2.67   4    2    2
125      2.64   2.75   2.68   4    2    1
150      2.61   2.66   2.73   3    1    1
175      2.60   2.73   2.76   3    2    1
Table 4. Horizontal distance and scale factor
Dist %   Within char (wh)   Between char (bh)   Ratio (bh÷wh)   Factr1   Factr2
25       2.3                6                   2.6             0.25     0.24
50       4.67               12.2                2.61            0.51     0.49
75       6.9                18.36               2.66            0.76     0.75
Orig     9.04               24.47               2.7             1        1
125      11.43              30.25               2.64            1.26     1.24
150      13.56              35.45               2.61            1.5      1.45
175      16.38              42.58               2.6             1.81     1.74
A similar procedure is adopted for the vertical distances. The valid vertical ratios range from 2.08 to 2.54. Factr3 and Factr4 in Table 6 represent the vertical scale factors, calculated from wv and bv respectively using the formulae
Factr3 = wv ÷ Std(wv) (13)
Factr4 = bv ÷ Std(bv) (14)
Table 5. Vertical ratio-count
size %   r1     r2     r3     c1   c2   c3
25       2.16   2.41   2.34   4    2    2
50       2.19   2.22   2.10   3    2    1
75       2.21   2.32   2.25   4    2    1
Orig     2.24   2.43   2.35   3    2    2
125      2.25   2.15   2.41   4    2    1
150      2.26   2.12   2.38   3    1    1
175      2.24   2.36   2.17   3    2    1
Table 6. Vertical distance and scale factor
Dist %   Within char (wv)   Between char (bv)   Ratio (bv÷wv)   Factr3   Factr4
25       4.61               10                  2.16            0.26     0.25
50       9.04               19.87               2.19            0.50     0.49
75       13.48              29.9                2.21            0.76     0.74
Orig     17.74              39.9                2.24            1.00     1.00
125      22.18              49.8                2.25            1.25     1.24
150      25.9               58.65               2.26            1.46     1.48
175      31.04              69.8                2.24            1.75     1.74
To show the calculated distances between the Braille cells in the horizontal and vertical directions, the centroids of the dots are marked in yellow in Figure 14. The distances from one Braille character to another that lie within the range of the standard measurements in either direction are marked in yellow in the figure.
Figure 14. Horizontal and vertical distances
5. CONCLUSION
Extraction of text from Braille document images requires skew detection and correction, as with normal document images. Skew detection for Braille documents is more challenging because they consist of dots only. This paper proposes a line-fitting-based skew correction method. The presence of noise could be mistaken for dots, and hence the noise removal and binarization processes play an important role in skew angle detection. Simple template matching can be used for Braille character extraction when the scale of the scanned document agrees with the standard measurements. The scale factor of the skew-corrected image is calculated using the ratio of the within-character distance to the between-character distance. The arbitrary presence of dots is handled by comparing the ratios with the standard range, and the maximum is found by a voting system.
When Braille documents are created manually, the Braille dots may not lie in a straight line and the skew may vary between characters; this kind of skew cannot be corrected by the proposed method. An algorithm for skew correction up to 180 degrees has to be modelled in future work.
REFERENCES
[1] Jan Mennens, Luc van Tichelen, Guido Francois, and Jan J. Engelen, "Optical Recognition of Braille Writing Using Standard Equipment", IEEE Transactions on Rehabilitation Engineering, Vol. 2, No. 4, December 1994.
[2] AbdulMalik Al-Salman, Yosef AlOhali, Mohammed AlKanhal, and Abdullah AlRajih, "An Arabic Optical Braille Recognition System", ICTA'07, April 12-14, 2007, Hammamet, Tunisia.
[3] Majid Yoosefi Babadi, Behrooz Nasihatkon, Zohreh Azimifar, and Paul Fieguth, "Probabilistic Estimation of Braille Document Parameters", ICIP 2009.
[4] Jie Li, Xiaoguang Yan, and Dayong Zhang, "Optical Braille Recognition with Haar Wavelet Features and Support-Vector Machine", 2010 International Conference on Computer, Mechatronics, Control and Electronic Engineering (CMCE).
[5] A. Antonacopoulos and D. Bridson, "A Robust Braille Recognition System", Proceedings of the IAPR International Workshop on Document Analysis Systems, Italy, 2004.
[6] J.A. Bilmes, "A Gentle Tutorial of the EM Algorithm and Its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models", Technical Report, University of California, Berkeley, 1998.
[7] S. Padmavathi, K.S.S. Manojna, S. Sphoorthy Reddy, and D. Meenakshy, "Conversion of Braille to Text in English, Hindi and Tamil Languages", International Journal of Computer Science, Engineering and Applications (IJCSEA), Volume 3, Number 3, June 2013, pp. 19-32.
[8] AbdulMalik S. Al-Salman, "A Bi-directional Bi-Lingual Translation Braille-Text System", J. King Saud University, Vol. 20, Comp. & Info. Sci., pp. 13-29, Riyadh (1428H./2008).
[9] Minghu Jiang et al., "Braille to print translations of Chinese", Information and Software Technology 44 (2002), pp. 91-100.
[10] AbdulMalik S. Al-Salman, Yosef AlOhali, and Layla O. Al-Abdulkarim, "Trends and Technologies in Optical Braille Recognition", 3rd Int. Conf. on Information Technology, May 2007, Jordan.
[11] AbdulMalik Al-Salman, Yosef AlOhali, Mohammed AlKanhal, and Abdullah AlRajih, "An Arabic Optical Braille Recognition System", ICTA'07, April 12-14, 2007, Hammamet, Tunisia.
[12] Lisa Wong, Waleed Abdulla, and Stephan Hussmann, "A Software Algorithm Prototype for Optical Recognition of Embossed Braille", Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), 23-26 Aug. 2004, pp. 586-589, Vol. 2.
[13] R.T. Ritchings, A. Antonacopoulos, and D. Drakopoulos, "Analysis of Scanned Braille Documents", in: Document Analysis Systems, A. Dengel and A.L. Spitz (eds.), World Scientific Publishing Co., 1995, pp. 413-421.
[14] C.M. Ng, Vincent Ng, and Y. Lau, "Regular Feature Extraction for Recognition of Braille", Proceedings of the Third International Conference on Computational Intelligence and Multimedia Applications (ICCIMA '99), 1999, pp. 302-306.
[15] Zainab I. Authman and Zamen F. Jebr, "Arabic Braille scripts recognition and translation using image processing techniques", Journal of College of Education, 2012, Volume 2, Issue 3, pp. 18-26, Thi-Qar University.
[16] Jan Mennens et al., "Optical Recognition of Braille Writing Using Standard Equipment", IEEE Transactions on Rehabilitation Engineering, Vol. 2, No. 4, December 1994.
[17] Paul Blenkhorn, "A System for Converting Braille into Print", IEEE Transactions on Rehabilitation Engineering, Vol. 3, No. 2, June 1995.