Image segmentation refers to decomposing a scene into components. There is no single correct segmentation. Segmentation techniques include edge-based, region-filling, color-based using color spaces, texture-based, disparity-based, motion-based, and techniques for documents, medical, range, and biometric images. The k-means clustering algorithm is commonly used to group similar pixels into segments via an iterative process of assignment and centroid update.
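To make the assignment/update loop concrete, here is a minimal NumPy sketch of k-means color segmentation; the random test image and the choices of k = 4 and 20 iterations are illustrative assumptions, not values from the text above.

```python
import numpy as np

def kmeans_segment(pixels, k=4, iters=20, seed=0):
    """Cluster pixels (N x 3 float array) into k segments."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centroid for every pixel.
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: recompute each centroid as its cluster mean.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids

# Toy usage: a random "image" flattened to a pixel list.
img = np.random.rand(64, 64, 3)
labels, centroids = kmeans_segment(img.reshape(-1, 3).astype(float))
segmented = centroids[labels].reshape(img.shape)  # each pixel -> its cluster color
```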
A Secure Color Image Steganography in Transform Domain (ijcisjournal)
Steganography is the art and science of covert communication. The secret information can be concealed in content such as image, audio, or video. This paper provides a novel image steganography technique to hide both image and key in color cover image using Discrete Wavelet Transform (DWT) and Integer Wavelet Transform (IWT). There is no visual difference between the stego image and the cover image. The extracted image is also similar to the secret image. This is proved by the high PSNR (Peak Signal to Noise Ratio), value for both stego and extracted secret image. The results are compared with the results of similar techniques and it is found that the proposed technique is simple and gives better PSNR values than others.
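The paper's exact embedding scheme is not given above, so the following PyWavelets sketch only illustrates the general transform-domain idea: additively embed a small secret into the diagonal (HH) detail subband of one channel and invert the transform. The Haar wavelet, alpha value, and subband choice are assumptions.

```python
import numpy as np
import pywt

def embed_dwt(cover, secret, alpha=0.05):
    """Hide `secret` (same shape as the HH subband) in one channel's DWT domain."""
    cA, (cH, cV, cD) = pywt.dwt2(cover, 'haar')
    cD_stego = cD + alpha * secret          # additive embedding in the HH subband
    return pywt.idwt2((cA, (cH, cV, cD_stego)), 'haar')

def extract_dwt(stego, cover, alpha=0.05):
    """Recover the secret by differencing the HH subbands."""
    _, (_, _, cD_s) = pywt.dwt2(stego, 'haar')
    _, (_, _, cD_c) = pywt.dwt2(cover, 'haar')
    return (cD_s - cD_c) / alpha

cover = np.random.rand(128, 128)            # stand-in for one color channel
secret = np.random.rand(64, 64)             # HH subband of 128x128 is 64x64
stego = embed_dwt(cover, secret)
recovered = extract_dwt(stego, cover)
```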
Tracking and counting human in visual surveillance system (iaemedu)
This document summarizes a proposed system for tracking and counting humans in a visual surveillance system. The system first performs background subtraction on input video frames to detect foreground objects. It applies this process to both grayscale and binary image formats, and selects the better-performing format. Features are then extracted from detected blobs, including size, centroid, and color. Tracking rectangles are drawn around whole detected objects to group separated body parts. Counting is done by detecting when pixel values change from background to foreground. Experimental results on videos with various challenges like shadows, illumination changes, and occlusions showed the system could accurately track and count humans, with near-perfect accuracy even for occluded or broken objects, though it introduced some minor errors.
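A minimal OpenCV sketch of the detection stage described above (background subtraction, then blob size/centroid features with bounding rectangles); the video path and blob-area threshold are placeholders, and the paper's format-selection and counting logic are not reproduced.

```python
import cv2

cap = cv2.VideoCapture("input.mp4")          # placeholder path
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                   # foreground mask
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:         # ignore tiny blobs (assumed threshold)
            continue
        x, y, w, h = cv2.boundingRect(c)     # blob size feature
        cx, cy = x + w // 2, y + h // 2      # centroid feature
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:         # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```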
This document presents a novel method for recognizing two-dimensional QR barcodes using texture feature analysis and neural networks. It first extracts texture features like mean, standard deviation, smoothness, skewness and entropy from divided blocks of barcode images. These features are then used to train a neural network to classify blocks as containing a barcode or not. The trained neural network can then be used to locate barcodes in unknown images by classifying each block. The method is implemented and evaluated using MATLAB on a database of QR code images, showing satisfactory recognition results.
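The listed block statistics are standard first-order texture features; a sketch assuming 32x32 blocks and a 256-bin histogram (the paper's actual block size is not stated above):

```python
import numpy as np
from scipy.stats import skew

def block_texture_features(block):
    """First-order texture features of a grayscale block in [0, 255]."""
    v = block.astype(float).ravel()
    mean, std = v.mean(), v.std()
    # Smoothness R = 1 - 1/(1 + sigma^2), with sigma normalized to [0, 1].
    smoothness = 1.0 - 1.0 / (1.0 + (std / 255.0) ** 2)
    skewness = skew(v)
    hist, _ = np.histogram(v, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    entropy = -np.sum(p * np.log2(p))
    return np.array([mean, std, smoothness, skewness, entropy])

# Each per-block vector would be fed to the neural network classifier.
block = np.random.randint(0, 256, (32, 32))
print(block_texture_features(block))
```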
This document summarizes a research paper that proposes a method for denoising remote sensing images using a combination of second order and fourth order partial differential equations (PDEs). It begins by explaining how noise is introduced in images and why denoising is important. It then discusses existing denoising methods using second order and fourth order PDEs individually and their limitations. The proposed method combines the two approaches to reduce both the blocky effect of second order PDEs and the speckle effect of fourth order PDEs. Simulation results show the combined method achieves better peak signal-to-noise and signal-to-noise ratios compared to the individual methods.
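The combined second/fourth-order scheme is specific to the paper, but its second-order building block is classic Perona-Malik diffusion; a minimal sketch in which kappa, the step size, and the iteration count are assumed values:

```python
import numpy as np

def perona_malik(img, iters=20, kappa=30.0, dt=0.2):
    """Second-order PDE denoising: u_t = div(g(|grad u|) grad u)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
    for _ in range(iters):
        # Finite differences to the four neighbors.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.rand(64, 64) * 255
denoised = perona_malik(noisy)
```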
Review of Image Segmentation Techniques based on Region Merging Approach (Editor IJMTER)
Image segmentation is an important task in computer vision and object recognition. Since fully automatic segmentation of natural images is usually very hard, interactive schemes that take a few simple user inputs are good solutions. In image segmentation, the image is divided into various segments for further processing. The complexity of image content is a major challenge for automatic segmentation. In region-based schemes, regions are merged according to a similarity criterion, typically by comparing the mean values of the two candidate regions; similar regions are merged together while dissimilar regions are kept apart.
A comparison of image segmentation techniques, otsu and watershed for x ray i... (eSAT Journals)
Abstract: Tuberculosis (TB) is among the most dangerous and rapidly spreading diseases in the world. In the investigation of suspected TB, chest radiography is a key diagnostic technique based on medical imaging. Computer-aided diagnosis (CAD) has therefore become popular; many researchers are active in this area, and different approaches have been proposed for TB detection. Image segmentation plays an important role in most medical imaging by extracting anatomical structures from images. Many image segmentation techniques exist in the literature, each with its own advantages and disadvantages. The aim of X-ray segmentation is to subdivide the image into different portions so that it can help in studying the structure of the bone and detecting disorders. The goal of this paper is to review the most important image segmentation methods, starting from a database composed of real X-ray images. Keywords: chest radiography, computer aided diagnosis, image segmentation, anatomical structures, real X-rays.
This document provides an exploratory review of soft computing techniques for image segmentation. It discusses various segmentation techniques including discontinuity-based techniques like point, line and edge detection using spatial filtering. Thresholding techniques like global, adaptive and multi-level thresholding are also covered. Region-based techniques such as region growing, region splitting/merging and morphological watersheds are summarized. The document concludes that future work can focus on developing genetic segmentation filters using a genetic algorithm approach for medical image segmentation.
Image compression using embedded zero tree wavelets (ipij)
Compressing an image is significantly different from compressing raw binary data, so image-specific compression algorithms are used. Wavelet transforms are used in image compression methods to provide high compression rates while maintaining good image quality. The Discrete Wavelet Transform (DWT) is one of the most common methods used in signal and image compression; it is very powerful compared to other transforms because of its ability to represent a signal in both the time and frequency domains simultaneously. In this paper, we present the use of a wavelet-based image compression algorithm, the Embedded Zerotree Wavelet (EZW). Because the EZW algorithm is based on progressive encoding, it produces a bit stream of increasing accuracy as an image is compressed. All the numerical results were obtained using MATLAB code, and the numerical analysis of the algorithm is carried out by measuring the Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR) for the standard Lena image. Experimental results show that the method is fast, robust, and efficient enough to apply to still and complex images with significant compression.
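Full EZW coding (zerotrees plus bit-plane successive approximation) is too long to sketch here, but the wavelet-domain property it exploits, that most detail coefficients are near zero, can be shown with plain coefficient thresholding in PyWavelets. This is not EZW itself; the wavelet, level, and threshold are assumed values.

```python
import numpy as np
import pywt

def wavelet_compress(img, wavelet='db2', level=3, threshold=10.0):
    """Toy wavelet compression: zero out small detail coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    kept = [coeffs[0]]                            # keep the approximation band
    for (cH, cV, cD) in coeffs[1:]:
        kept.append(tuple(np.where(np.abs(c) > threshold, c, 0.0)
                          for c in (cH, cV, cD)))
    return pywt.waverec2(kept, wavelet)

img = np.random.rand(256, 256) * 255
rec = wavelet_compress(img)[:256, :256]
mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)            # the PSNR figure papers report
```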
Digital image classification involves:
1) Sorting pixels into classes based on their spectral values using algorithms like supervised maximum likelihood classification or unsupervised ISODATA clustering (see the sketch after this list).
2) Analyzing spectral patterns by examining pixels in feature space rather than image space. Distances between pixel vectors in feature spaces define class boundaries.
3) Validating classification results to determine accuracy by comparing them to reference data. Misclassification problems can still occur, and techniques continue to improve.
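A minimal sketch of the supervised maximum likelihood step from item 1, assuming Gaussian class distributions in feature space; the two-band toy data and class means are invented for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def ml_classify(train_pixels, train_labels, pixels):
    """Assign each pixel vector to the class with highest Gaussian likelihood."""
    classes = np.unique(train_labels)
    scores = []
    for c in classes:
        x = train_pixels[train_labels == c]
        dist = multivariate_normal(mean=x.mean(axis=0),
                                   cov=np.cov(x, rowvar=False))
        scores.append(dist.logpdf(pixels))       # log-likelihood per pixel
    return classes[np.argmax(scores, axis=0)]

# Toy 2-band data: two spectral classes with different means.
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(30, 5, (100, 2)), rng.normal(80, 5, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)
print(ml_classify(train, labels, rng.normal(75, 5, (5, 2))))
```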
Extended Fuzzy Hyperline Segment Neural Network for Fingerprint Recognition (CSCJournals)
In this paper we propose the Extended Fuzzy Hyperline Segment Neural Network (EFHLSNN) and its learning algorithm, an extension of the Fuzzy Hyperline Segment Neural Network (FHLSNN). The fuzzy hyperline segment is an n-dimensional hyperline segment defined by two end points with a corresponding extended membership function. Fingerprint features are extracted using the FingerCode technique. The performance of EFHLSNN is verified on the PolyU HRF fingerprint database, and EFHLSNN is found superior to FHLSNN in generalization as well as in training and recall time.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A Pattern Classification Based approach for Blur Classification (ijeei-iaes)
Blur type identification is one of the most crucial steps of image restoration. In blind restoration it is generally assumed that the blur type is known before the image is restored, but this is not practical in real applications, so blur type identification is highly desirable before a blind restoration technique is applied to a blurred image. This paper presents an approach to categorize blur into three classes, namely motion, defocus, and combined blur. Curvelet-transform-based energy features are used as features of the blur patterns, and a neural network is designed for classification. The simulation results show the precision of the proposed approach.
This document discusses object recognition by computers. It notes that while object recognition is easy for humans, it is difficult for computers because they cannot rely on appearance alone. Key challenges for computers include variations in scale, shape, occlusion, lighting and background clutter. The document then discusses techniques used for object recognition, including feature detection methods like SIFT and SURF that extract keypoints, descriptors that describe regions around keypoints, and feature matching to identify corresponding regions between images. It also covers bag-of-words models, visual vocabularies and inverted indexing to allow large scale image retrieval. Finally, it lists applications of object recognition like digital watermarking, face detection and robot navigation.
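A minimal OpenCV sketch of the keypoint/descriptor/matching pipeline described above, using SIFT and Lowe's ratio test; the image paths and the 0.75 ratio are placeholder assumptions.

```python
import cv2

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
raw = matcher.knnMatch(des1, des2, k=2)         # two nearest neighbors per descriptor
good = [m for m, n in raw if m.distance < 0.75 * n.distance]  # Lowe's ratio test
print(f"{len(good)} good matches")
```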
This document discusses image segmentation techniques. It describes discontinuity-based segmentation which divides an image based on abrupt intensity changes to find isolated points, lines, and edges. Region-based segmentation groups similar pixels using thresholding, region growing, or splitting and merging. Common edge detection operators are also presented, including Sobel, Prewitt, and Laplacian of Gaussian (LoG) filters. Linking detected edge points can be done locally or globally to find object boundaries in the image.
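The named operators are available in scipy.ndimage; a short sketch (the random image and edge threshold are assumed):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(128, 128) * 255          # stand-in for a grayscale image

# Sobel gradient magnitude: strong responses mark intensity discontinuities.
gx = ndimage.sobel(img, axis=1)
gy = ndimage.sobel(img, axis=0)
edges_sobel = np.hypot(gx, gy) > 100          # assumed threshold

# Laplacian of Gaussian: zero crossings of this response locate edges.
log = ndimage.gaussian_laplace(img, sigma=2.0)
```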
This document discusses a proposed approach for multi-focus image fusion using a discrete cosine wavelet sharpness criterion. Multi-focus image fusion combines information from multiple images of the same scene to produce an "all-in-focus" image. The proposed approach uses a discrete cosine transform to calculate sharpness values for sub-blocks of the input images and selects the sharpest sub-blocks to include in the fused image. Experimental results on images of a clock, bottle, and book show the discrete cosine wavelet criterion produces fused images with higher quality than a bilateral gradient-based sharpness criterion, as measured by mutual information metrics.
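A sketch of the block-selection rule described above, scoring sharpness as the AC energy of each block's 2-D DCT; the 8x8 block size is an assumption, and the paper's exact sharpness criterion may differ.

```python
import numpy as np
from scipy.fft import dctn

def dct_sharpness(block):
    """Sharpness of a block = energy of its AC (non-DC) DCT coefficients."""
    c = dctn(block.astype(float), norm='ortho')
    return np.sum(c ** 2) - c[0, 0] ** 2

def fuse(img_a, img_b, bs=8):
    """Pick, per block, whichever input image is sharper there."""
    out = img_a.copy()
    for i in range(0, img_a.shape[0], bs):
        for j in range(0, img_a.shape[1], bs):
            a, b = img_a[i:i+bs, j:j+bs], img_b[i:i+bs, j:j+bs]
            if dct_sharpness(b) > dct_sharpness(a):
                out[i:i+bs, j:j+bs] = b
    return out
```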
Content Based Image Retrieval Using 2-D Discrete Wavelet Transform (IOSR Journals)
This document proposes a content-based image retrieval system using 2D discrete wavelet transform and texture features. The system decomposes images using 2D DWT, extracts texture features from low frequency coefficients using GLCM, and retrieves similar images by calculating Euclidean distances between feature vectors. Experimental results on Wang's database show the proposed approach achieves 89.8% average retrieval accuracy.
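A sketch of the described feature pipeline: one-level 2-D DWT, GLCM statistics on the low-frequency band, and Euclidean distance for ranking; quantizing the band to 64 gray levels is an assumption.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def texture_features(img):
    """2-D DWT, then GLCM stats on the approximation (low-frequency) band."""
    cA, _ = pywt.dwt2(img.astype(float), 'haar')
    q = np.clip(cA / cA.max() * 63, 0, 63).astype(np.uint8)  # 64 gray levels
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=64, normed=True)
    props = ['contrast', 'homogeneity', 'energy', 'correlation']
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

def rank(query, database):
    """Smaller Euclidean distance between feature vectors = more similar."""
    fq = texture_features(query)
    return sorted(range(len(database)),
                  key=lambda i: np.linalg.norm(fq - texture_features(database[i])))
```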
This document describes an extended fuzzy c-means (EFCM) clustering algorithm for noisy image segmentation. The algorithm first preprocesses noisy pixels in an image by regenerating their values based on neighboring pixel intensities. It then applies the conventional fuzzy c-means clustering algorithm to segment the image. The EFCM approach is presented as being less sensitive to noise than other clustering algorithms and able to efficiently segment noisy images. The document provides background on image segmentation, fuzzy c-means clustering, types of image noise, and density-based clustering challenges. It also outlines the EFCM methodology and its computational advantages over other robust clustering methods for noisy image segmentation.
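A minimal NumPy sketch of the conventional fuzzy c-means core that EFCM builds on (the noise-regeneration preprocessing is the paper's contribution and is not reproduced); the fuzzifier m = 2 and the cluster count are assumed values.

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=50, seed=0):
    """x: (N, d) data. Returns (c, d) centers and (c, N) memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
    return centers, u

pixels = np.random.rand(1000, 1)             # grayscale intensities
centers, u = fuzzy_cmeans(pixels)
labels = u.argmax(axis=0)                    # hard segmentation from memberships
```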
This document discusses a hand gesture recognition system for underprivileged individuals. It begins by outlining the key steps in hand gesture recognition systems: image capture, pre-processing, segmentation, feature extraction and gesture recognition. It then goes into more detail on specific techniques for each step, such as thresholding and edge detection for segmentation. The document also covers applications like access control, sign language translation and future areas like biometric authentication. In conclusion, it proposes that hand gesture recognition can help disabled individuals communicate through accessible human-computer interaction.
This is about image segmentation. We will be using fuzzy logic and wavelet transforms for segmenting images; fuzzy logic is chosen because of the inconsistencies that may occur during segmentation.
An Efficient Approach of Segmentation and Blind Deconvolution in Image Restor... (iosrjce)
This paper introduces the concept of blind deconvolution for the restoration of a digital image, and of small segments of a single image, that has been degraded by noise. Image restoration is used in various areas, such as decision making in robotics and the analysis of tissues, cells, and cellular constituents in biomedical research. Segmentation is used to divide an image into multiple meaningful regions; it is helpful here because restoring only selected portions of the image reduces the complexity of the system by focusing on just those parts that need restoration. Many techniques exist for restoring a degraded image, such as the Wiener filter, the regularized filter, and the Lucy-Richardson algorithm, but all of them require prior knowledge of the blur kernel. In the blind deconvolution technique, the blur kernel is initially unknown. This paper uses a Gaussian low-pass filter to convolve the image; the Gaussian low-pass filter minimizes the ringing effect, which occurs when the transition between one point and another is not clearly defined. After these ringing effects are removed from the restored image, the resulting image is visually clear. The aim of this paper is to provide a better algorithm for removing unwanted artifacts from the image, with quality measured in terms of PSNR (Peak Signal-to-Noise Ratio) and MSE (Mean Square Error). The proposed technique also works well with motion blur.
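Blind deconvolution also has to estimate the kernel, which is beyond a short sketch, but the non-blind Wiener filter named above fits in a few lines of NumPy FFT; the Gaussian PSF and the noise-to-signal constant K are assumed values.

```python
import numpy as np

def gaussian_psf(shape, sigma=2.0):
    """Centered, normalized Gaussian point spread function."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, K=0.01):
    """Frequency-domain Wiener filter: F = H* G / (|H|^2 + K)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F))

img = np.random.rand(128, 128)
psf = gaussian_psf(img.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```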
The document provides an overview of a course on digital image processing. It is divided into 5 units that cover topics such as digital image fundamentals, image transforms, image enhancement, image filtering and restoration, image compression, image segmentation, and image representation and description. The course will examine concepts like sampling and quantization, image transforms including Fourier transforms, image enhancement techniques, image compression standards, and image segmentation methods. Students will learn about various image processing schemes and how to reconstruct images from projections using transforms like the Radon transform. The document provides the context and outline for the digital image processing course.
The CounterPropagation update operates on a network with an input, hidden, and output layer. It identifies the hidden neuron with the highest net input, setting its activation to 1 and all others to 0. The output is then calculated as the weighted sum over the hidden activations, which reduces to the weights of the links between the winning hidden neuron and the output neurons. This update works together with the CounterPropagation learning function to train the network.
The counterpropagation network consists of three layers - an input layer, a hidden Kohonen layer, and an output Grossberg layer. The Kohonen layer uses competitive learning to categorize input patterns in an unsupervised manner. During operation, the input pattern activates a single node in the Kohonen layer, which then activates the appropriate output pattern in the Grossberg layer. Effectively, the counterpropagation network acts as a lookup table to map input patterns to associated output patterns by determining which stored pattern category the input belongs to.
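A sketch of the forward pass just described: the Kohonen layer picks a single winner, so the Grossberg output reduces to the winner's outgoing weights, i.e. a lookup. The layer sizes are illustrative assumptions.

```python
import numpy as np

def cpn_forward(x, w_kohonen, w_grossberg):
    """x: input vector. w_kohonen: (hidden, in). w_grossberg: (out, hidden)."""
    net = w_kohonen @ x                      # net input to each Kohonen neuron
    winner = np.argmax(net)                  # competitive layer: one winner
    h = np.zeros(w_kohonen.shape[0])
    h[winner] = 1.0                          # winner's activation 1, others 0
    # The weighted sum collapses to the winner's outgoing weights -> lookup table.
    return w_grossberg @ h                   # == w_grossberg[:, winner]

rng = np.random.default_rng(0)
w_k = rng.random((5, 3))                     # 3 inputs, 5 Kohonen neurons
w_g = rng.random((2, 5))                     # 2 outputs
print(cpn_forward(np.array([0.2, 0.7, 0.1]), w_k, w_g))
```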
Neural networks are inspired by biological neural networks and are composed of interconnected processing elements called neurons. Neural networks can learn complex patterns and relationships through a learning process without being explicitly programmed. They are widely used for applications like pattern recognition, classification, forecasting and more. The document discusses neural network concepts like architecture, learning methods, activation functions and applications. It provides examples of biological and artificial neurons and compares their characteristics.
Genetic algorithms are optimization techniques inspired by Darwin's theory of evolution. They use operations like selection, crossover and mutation to evolve solutions to problems by iteratively trying random variations. The document outlines the history, concepts, process and applications of genetic algorithms, including using them to optimize engineering design, routing, computer games and more. It describes how genetic algorithms encode potential solutions and use fitness functions to guide the evolution toward better outcomes.
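A compact sketch of the selection/crossover/mutation loop described above, maximizing a toy OneMax fitness; the population size, rates, and bit-string encoding are assumed choices.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, gens=100,
                      cx_rate=0.9, mut_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                            # elitism: keep the best two
        while len(nxt) < pop_size:
            a, b = random.sample(scored[:10], 2)    # select among the fittest
            if random.random() < cx_rate:           # one-point crossover
                cut = random.randrange(1, n_bits)
                a = a[:cut] + b[cut:]
            a = [bit ^ (random.random() < mut_rate) for bit in a]  # mutation
            nxt.append(a)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: count of ones ("OneMax").
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```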
This document presents information on Hopfield networks through a slideshow presentation. It begins with an introduction to Hopfield networks, describing them as fully connected, single layer neural networks that can perform pattern recognition. It then discusses the properties of Hopfield networks, including their symmetric weights and binary neuron outputs. The document proceeds to provide derivations of the Hopfield network model based on an additive neuron model. It concludes by discussing applications of Hopfield networks.
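A minimal sketch of the network described above: Hebbian storage with symmetric zero-diagonal weights and asynchronous updates of binary (+1/-1) neurons; the stored pattern and network size are toy assumptions.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: W = sum of outer products, symmetric, zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=100, seed=0):
    """Asynchronous updates: one random neuron at a time follows its local field."""
    rng = np.random.default_rng(seed)
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy(); noisy[0] *= -1       # corrupt one bit
print(recall(W, noisy))                      # converges back to the stored pattern
```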
The document discusses the syllabus for a course on Neural Networks. The mid-term syllabus covers introduction to neural networks, supervised learning including the perceptron and LMS algorithm. The end-term syllabus covers additional topics like backpropagation, unsupervised learning techniques and associative models including Hopfield networks. It also lists some references and applications of neural networks.
New Approach of Preprocessing For Numeral Recognition (IJERA Editor)
The present paper proposes a new preprocessing approach for handwritten, printed, and isolated numeral characters. The new approach reduces the size of the input image of each numeral by discarding redundant information, and it also reduces the number of features in the attribute vector produced by the feature extraction method. Numeral recognition is carried out using k-nearest neighbors and multilayer perceptron techniques. The simulations achieve a good recognition rate with reduced running time.
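The two classifiers named above are standard in scikit-learn; a sketch on the built-in 8x8 digits dataset (the paper's own preprocessing and data are not reproduced):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 numeral images, flattened
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(Xtr, ytr)

print("kNN accuracy:", knn.score(Xte, yte))
print("MLP accuracy:", mlp.score(Xte, yte))
```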
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN... (IJCSEIT Journal)
A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate detection. The feature vector is based on eigenvalues, eigenvectors, and diagonal vectors of sub-images: images are partitioned into sub-images to capture local features, the sub-partitions are rearranged into vertical and horizontal matrices, and eigenvalues, eigenvectors, and diagonal vectors are computed for these matrices. A global feature vector is then generated for face recognition. Experiments are performed on the benchmark YALE face database. Results indicate that the proposed method gives better recognition performance in terms of average recognition rate and retrieval time compared to existing methods.
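A sketch of the feature construction described above under simplifying assumptions: square sub-images (so eigenvalues are defined), with eigenvalue magnitudes and the block diagonal concatenated into the global feature vector; the paper's exact rearrangement may differ.

```python
import numpy as np

def block_eigen_features(img, bs=16):
    """Partition into bs x bs sub-images; per block, eigenvalues + diagonal."""
    feats = []
    for i in range(0, img.shape[0] - bs + 1, bs):
        for j in range(0, img.shape[1] - bs + 1, bs):
            block = img[i:i+bs, j:j+bs].astype(float)
            eigvals = np.sort(np.abs(np.linalg.eigvals(block)))[::-1]
            feats.append(np.concatenate([eigvals, np.diag(block)]))
    return np.concatenate(feats)             # global feature vector

def match(query, gallery):
    """Nearest face by Euclidean distance between global feature vectors."""
    fq = block_eigen_features(query)
    return min(range(len(gallery)),
               key=lambda k: np.linalg.norm(fq - block_eigen_features(gallery[k])))
```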
Tracking and counting human in visual surveillance system (iaemedu)
This document summarizes a proposed system for tracking and counting humans in a visual surveillance system. The system first performs background subtraction on input video frames to detect foreground objects. It applies this process to both grayscale and binary image formats, and selects the better-performing format. Features are then extracted from detected objects. Objects are tracked frame-to-frame based on these features. Counting is performed by analyzing pixels representing detected humans across frames. Experimental results on videos with various challenges like shadows, illumination changes and occlusions show the system can accurately track and count humans, with near real-time performance.
Tracking and counting human in visual surveillance system (iaemedu)
This document summarizes a proposed system for tracking and counting humans in a visual surveillance system. The system first performs background subtraction on input video frames to detect foreground objects. It applies this process to both grayscale and binary image formats, and selects the better-performing format. Features are then extracted from detected objects. Objects are tracked frame-to-frame based on these features. Counting is performed by analyzing pixels representing detected humans across frames. Experimental results on videos with various challenges like shadows, illumination changes and occlusions showed the system could accurately track and count humans, with near perfect accuracy in some cases and small errors in others.
The document analyzes the use of the Tent map as a source of pseudorandom bits for generating binary codes. It evaluates the Tent map's period length, discrimination value, and merit factor. The Tent map is proposed as an alternative to traditional low-complexity pseudorandom bit generators. Different window functions are applied to the binary codes generated from the Tent map to reduce side lobes and improve performance. Results show discrimination increases with sequence length and some window functions perform better than others at different lengths.
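A sketch of the generator under analysis: iterate the tent map and threshold the orbit to emit bits; the parameter mu and the seed are assumed values, and the windowing and merit-factor analysis are not reproduced.

```python
def tent_map_bits(n, x=0.37, mu=1.9999):
    """Iterate x -> mu * min(x, 1-x); one bit per iterate via a 0.5 threshold."""
    bits = []
    for _ in range(n):
        x = mu * min(x, 1.0 - x)             # tent map
        bits.append(1 if x >= 0.5 else 0)
    return bits

seq = tent_map_bits(64)
code = [2 * b - 1 for b in seq]              # map {0,1} -> {-1,+1} binary code
print("".join(map(str, seq)))
```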
This paper targets accurate 3D hand pose estimation from a single depth map. 3D hand pose estimation is a key enabling technology for HCI and AR. Many researchers have proposed ways to improve its accuracy, but accuracy has remained limited because of the similar appearance of the fingers, occlusion, and the complexity of the fingers' varied movements. To overcome these limitations, this paper changes the input and output representations used by existing methods. Unlike most prior methods, which take a 2D depth image as input and directly regress the 3D coordinates of the hand joints, the proposed model takes a 3D voxelized depth map as input and outputs 3D heatmaps. An encoder-decoder style 3D CNN is used for this, and thanks to the changed input and output representations the proposed model achieved the highest performance on three widely used 3D hand pose estimation datasets and one 3D human pose estimation dataset. It also won the HANDS 2017 challenge held at ICCV 2017.
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe... (IJERA Editor)
Gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. Gesture is one of the human body languages popularly used in daily life: a communication system consisting of hand movements and facial expressions, communicating through actions and sights. This research focuses on gesture extraction and finger segmentation in gesture recognition. In this paper, we use image analysis techniques to create an application coded in MATLAB, and we use this application to segment and extract the finger from one specific gesture. The paper evaluates gesture recognition in different natural conditions, such as dark and glare conditions, different distances, and similar-object conditions, and collects the results to calculate the successful extraction rate.
Implemented an advanced 2D Otsu method for image segmentation, which solved problems of the traditional Otsu method such as sensitivity to noise and shadow. Wrote and debugged the program in C on the VisionX system.
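For reference, a NumPy sketch of the traditional 1-D Otsu threshold that the 2D variant improves on; the 2D method adds a neighborhood-average dimension to reduce noise sensitivity, which is not reproduced here.

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level maximizing between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()    # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

img = np.random.randint(0, 256, (64, 64))
mask = img >= otsu_threshold(img)            # binary segmentation
```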
A new gridding technique for high density microarray (Alexander Decker)
This document describes a new gridding technique for high-density microarray images. The technique uses the intensity projection profile of the most suitable subimage to locate subarrays and individual spots without any user-input parameters, and it can process images with irregular spots, varying surface intensity, and over 50% contamination. The key steps are preprocessing the image, then using horizontal and vertical intensity projection profiles of the preprocessed image to estimate global parameters for locating subarrays and local parameters for locating individual spots within each subarray.
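A sketch of the projection-profile idea from the key steps above: sum intensities along rows or columns and take valleys of the smoothed profile as grid boundaries; the smoothing width and peak-spacing parameters are assumed values.

```python
import numpy as np
from scipy.signal import find_peaks

def grid_lines(img, axis=0, smooth=5):
    """Valleys of the intensity projection profile = gaps between spots."""
    profile = img.sum(axis=axis).astype(float)
    kernel = np.ones(smooth) / smooth
    profile = np.convolve(profile, kernel, mode='same')   # light smoothing
    valleys, _ = find_peaks(-profile, distance=8)          # profile minima
    return valleys

img = np.random.rand(256, 256)
cols = grid_lines(img, axis=0)    # vertical grid positions
rows = grid_lines(img, axis=1)    # horizontal grid positions
```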
Improvement of the Recognition Rate by Random Forest (IJERA Editor)
In this paper we introduce a system for automatic character recognition based on the Random Forest method, applied to unconstrained pictures originating from mobile phone terminals. After some preprocessing of the picture, the text is segmented into lines and then into characters. In the feature extraction stage, the input data are represented as a vector of primitives of the zoning, diagonal, and horizontal types, together with Zernike moments; these features are linked to pixel densities and are extracted from binary pictures. In the classification stage, we examine four classification methods with two different classifier types, namely the multilayer perceptron (MLP) and Random Forest. After validation tests, the learning and recognition system based on Random Forest showed good performance on a base of 100 picture models.
The document describes the Tree Based Itinerary Design (TBID) algorithm for wireless sensor networks. The TBID algorithm partitions the sensor network area around the processing element into concentric zones. It then constructs itineraries for multiple mobile agents to efficiently collect and aggregate data from the sensor nodes. The algorithm builds trees connecting sensors in each zone and determines low-cost traversal orders for the mobile agents. Experimental results show that the aggregation time increases as the number of sensors or zones increases, but decreases as more mobile agents are used.
Fingerprints are imprints formed by the friction ridges of the skin on the fingers and thumbs. They have long been used for identification because of their immutability and individuality. Immutability refers to the permanent and unchanging character of the pattern on each finger; individuality refers to the uniqueness of ridge details across individuals, the probability that two fingerprints are alike being about 1 in 1.9x10^15. Despite this, and although fingerprinting is adopted by the Federal Bureau of Investigation (FBI), the fact remains that "the larger the fingerprint files became, the harder it was to identify somebody from their fingerprints alone. Moreover, the fingerprint requires one of the largest data templates in the biometric field". A finger data template can range anywhere from several hundred bytes to over 1,000 bytes, depending on the required level of security and the method used to scan the fingerprint. For these reasons, this work is motivated to present another way to tackle the problem, one that relies on the properties of the vector quantization coding algorithm.
A new technique to fingerprint recognition based on partial window (Alexander Decker)
1) The document presents a new technique for fingerprint recognition based on analyzing a partial window around the core point of a fingerprint.
2) The technique first locates the core point of a fingerprint, then determines a window around the core point. Features are extracted from this window and input into an artificial neural network (ANN) to recognize fingerprints.
3) The technique aims to reduce computation time for fingerprint recognition by focusing the analysis on a partial window rather than the whole fingerprint image.
IRJET - Hand Gesture Recognition to Perform System Operations (IRJET Journal)
This document describes a hand gesture recognition system that uses deep learning and convolutional neural networks. The system is trained on a dataset of over 50,000 images to recognize 19 different gestures. It first calibrates the background, segments the hand from the image, and recognizes the gesture. The model achieves 86.39% accuracy on the test set after training for 20 epochs with a batch size of 64 using an Adam optimizer.
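A minimal Keras sketch matching the quoted training configuration (Adam, 20 epochs, batch size 64); the input size, the 19-class head, and the placeholder data arrays are assumptions, not the paper's architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),          # assumed input: segmented hand mask
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(19, activation='softmax'),   # 19 gesture classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Placeholder data standing in for the segmented-hand dataset.
x, y = np.random.rand(512, 64, 64, 1), np.random.randint(0, 19, 512)
model.fit(x, y, epochs=20, batch_size=64, validation_split=0.1)
```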
Machine learning based augmented reality for improved learning application th... (IJECEIAES)
Detection of objects and their location in an image are important elements of current research in computer vision. In May 2020, Meta released its state-of-the-art object-detection model based on a transformer architecture, called the detection transformer (DETR). There are several object-detection models, such as the region-based convolutional neural network (R-CNN), you only look once (YOLO), and single shot detectors (SSD), but none had used a transformer to accomplish this task. The models mentioned earlier use all sorts of hyperparameters and layers, whereas the advantages of using a transformer make the architecture simple and easy to implement. In this paper, we determine the name of a chemical experiment in two steps: first, by building a DETR model trained on a customized dataset, and then by integrating it into an augmented reality mobile application. By detecting the objects used during the realization of an experiment, we can predict the name of the experiment using a multi-class classification approach. The combination of various computer vision techniques with augmented reality is indeed promising and offers a better user experience.
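COCO-pretrained DETR can be loaded from the authors' repository through torch.hub; a sketch of inference in which the hub entry-point name follows the facebookresearch/detr README, and the image path and confidence threshold are placeholders (the paper's custom-trained weights are not reproduced).

```python
import torch
from PIL import Image
import torchvision.transforms as T

# COCO-pretrained DETR from the official repo (entry point per its hubconf).
model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)
model.eval()

transform = T.Compose([T.Resize(800), T.ToTensor(),
                       T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
img = Image.open("experiment.jpg").convert("RGB")      # placeholder image

with torch.no_grad():
    out = model(transform(img).unsqueeze(0))
probs = out['pred_logits'].softmax(-1)[0, :, :-1]      # drop the "no object" class
keep = probs.max(-1).values > 0.9                      # assumed confidence threshold
print(probs[keep].argmax(-1), out['pred_boxes'][0, keep])
```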
Similar to Implementation of a modified counterpropagation neural network model in online handwritten character recognition system
Abnormalities of hormones and inflammatory cytokines in women affected with p... (Alexander Decker)
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for B2C e-commerce websites (Alexander Decker)
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in Nigerian banks (Alexander Decker)
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorem in generalized D*-metric spaces (Alexander Decker)
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive-type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces and defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated: to obtain a unique common fixed point theorem for generalized D*-metric spaces.
Trends of Salmonella and antibiotic resistance (Alexander Decker)
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school childrenAlexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banksAlexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjabAlexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market...Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incrementalAlexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniquesAlexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo dbAlexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution-engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Implementation of a modified counterpropagation neural network model in online handwritten character recognition system
Computer Engineering and Intelligent Systems (www.iiste.org), ISSN 2222-1719 (Paper), ISSN 2222-2863 (Online), Vol. 3, No. 7, 2012
Implementation of a Modified Counterpropagation Neural Network Model in Online Handwritten Character Recognition System
*Fenwa O.D., Emuoyibofarhe J.O., Olabiyisi S.O., Ajala F.A., Falohun A.S.
Department of Computer Science and Engineering, Ladoke Akintola University of Technology, P.M.B 4000, Ogbomoso, Nigeria.
Tel: +2348163706286 E-mail: odfenwa@lautech.edu.ng
Abstract
Artificial neural networks are among the most widely used automated classification techniques. Although they yield high accuracy, most neural networks are computationally heavy because of their iterative training. There is therefore a significant need for a neural classifier that is both computationally efficient and highly accurate. To this end, a modified Counter Propagation Neural Network (CPN) is employed in this work, which proves to be faster than the conventional CPN. The modified CPN model needs no iterative training of the output-layer weights, unlike the backpropagation architecture, which takes a long time to learn. This paper implements the modified Counterpropagation neural network for recognition of online uppercase (A-Z) and lowercase (a-z) English alphabets and digits (0-9). The system was tested on different handwritten character samples, and better recognition accuracies of 65% to 96% were obtained compared to related work in the literature.
Keywords: Artificial Neural Network, Counterpropagation Neural Network, Character Recognition, Feature Extraction.
1. Introduction
As technology advances, computer systems have become an invaluable asset to humans: they enhance communication and increase efficiency. Computer systems greatly influence the lives of human beings, and their usage is increasing at a tremendous rate. As they become increasingly integrated into our everyday life, it is necessary to make them more easily accessible and user friendly. The ease with which information can be exchanged between user and computer is of immense importance today, because input devices such as the keyboard and mouse have limitations. Owing to these limitations, researchers have for decades been attracted to devising quicker and more natural ways of communication between computer systems and human beings (Anita and Dayashankar, 2010).
Online methods have been shown to be superior to their offline counterparts in recognizing handwritten characters, due to the temporal information available with the former (Pradeep et al., 2011). Handwriting recognition systems can further be broken down into two categories: writer-independent systems, which recognize a wide range of possible writing styles, and writer-dependent systems, which recognize writing styles only from specific users (Santosh and Nattee, 2009). Neural network classifiers exhibit powerful discriminative properties and features such as the ability to be trained automatically from examples, good performance with noisy data, parallel implementation, and efficient tools for learning large databases. Hence, they have been popularly and extensively used in handwriting recognition (Jude, Vijila and Anitha, 2010).
2. Research Methodology
In this paper, four stages of development of the proposed character recognition system are presented, as shown in Figure 2.1: data acquisition; character processing, which consists of feature extraction and character digitization; training and classification using the modified Counterpropagation neural network model; and testing. Experiments were performed with 6,200 handwritten character samples (English uppercase and lowercase alphabets and digits) collected from 50 subjects using a G-Pen 450 digitizer, and the system was tested with 100 character samples written by people who did not participate in the initial data acquisition. The performance of the system was evaluated in terms of Correct Recognition (CR), False Recognition (FR) and Recognition Failure (RF). The results of the proposed system showed better recognition performance compared with similar work in the literature.
Character Acquisition and Feature Extraction
Character Acquisition
The data used in this work were collected using a digitizer tablet (G-Pen 450), which has an electronic pen with a sensing writing board. An interface was developed in C# to acquire character information, such as stroke number and stroke pressure, from different subjects via the digitizer tablet. The characters considered were the 26 uppercase (A-Z) and 26 lowercase (a-z) English alphabets and the 10 digits (0-9), a total of 62 characters. 6,200 character samples (62 x 2 x 50) were collected from 50 subjects, each of whom was asked to write each character twice; this allows the network to learn various possible variations of a single character and become adaptive in nature. These samples serve as the training data set fed into the neural network. Samples of the collected characters are shown in Figure 2.2.
Proposed Feature Extraction Method
For feature extraction, the modified hybrid zone-based feature extraction is used. The major advantages of this approach are its robustness to small variations, its ease of implementation, and its good recognition rate. The zone-based feature extraction method provides good results even when certain pre-processing steps, such as filtering, smoothing and slant removal, are omitted.
The Zoning Algorithm
In this paper, a hybrid of the modified Image Centroid and Zone-based (ICZ) and Zone Centroid and Zone-based (ZCZ) distance metric feature extraction algorithms proposed by Fenwa, Omidiora and Fakolujo (2012) is used. The two algorithms were modified in terms of:
(i) the number of zones used;
(ii) the measurement of the distances from both the image centroid and the zone centroid;
(iii) the area of application.
Few zones were adopted in their work, but emphasis was laid on how to effectively measure the pixel densities in each zone. However, pixels pass through each of the zones at varied distances, which is why an average of five distances was measured for each zone at an angle of 20°.
Hybrid of the Modified Zoning Feature Extraction Algorithms
The following algorithms show the working procedure of the modified hybrid zoning feature extraction methods:
Modified Algorithm 1: Image Centroid and Zone-based (ICZ) distance metric feature extraction algorithm (Fenwa, Omidiora and Fakolujo, 2012)
Input: Pre-processed character images
Output: Features for classification and recognition
Method Begins
Step 1: Divide the input image into 25 equal zones, as shown in Figure 2.3.
Step 2: Compute the input image centroid, as shown in Figure 2.4, using the formulae:
Centre of gravity in the horizontal direction (x-axis) = $\sum_{i=0}^{n-1} x_i$, where n = width (2.1)
Centre of gravity in the vertical direction (y-axis) = $\sum_{i=0}^{m-1} y_i$, where m = height (2.2)
Step 3: Compute the distance between the image centroid and each pixel present in the zone, as shown in Figure 2.4.
Step 4: Repeat Step 3 for the sampled pixels in the zone (five points in this case):
$d = d_1 + d_2 + d_3 + d_4 + d_5$ (2.3)
Step 5: Compute the average distance between these points as:
Average Image Centroid Distance $D_I = d/5$ (2.4)
where d is the total distance from the image centroid to the sampled pixels, measured at an angle of 20°.
Step 6: Repeat this procedure sequentially for all 25 zones:
Total Distance $P = D_{I1} + D_{I2} + D_{I3} + \cdots + D_{Im}$ (2.5)
Total Average Distance $D_j = \sum_{z=0}^{m-1} D_{Iz}/m$ (2.6)
where m = 25 (total number of zones).
Step 7: Finally, 25 such features were obtained for classification and recognition.
Method Ends.
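To make the modified ICZ procedure concrete, here is a minimal Python sketch, assuming the pre-processed character is a binary NumPy array with ink pixels set to 1. The uniform 5x5 grid and all function names are illustrative assumptions; in particular, sampling five evenly spaced ink pixels per zone stands in for the paper's five distances measured at 20° steps, so this is a sketch of the idea rather than the authors' implementation.

```python
import numpy as np

def image_centroid(img):
    """Centroid (row, col) of all ink pixels in a binary image (eqs. 2.1-2.2)."""
    ys, xs = np.nonzero(img)
    return ys.mean(), xs.mean()

def icz_features(img, grid=5, samples=5):
    """Modified ICZ sketch: average distance from the image centroid to a few
    ink pixels in each of the grid*grid zones (eqs. 2.3-2.6)."""
    cy, cx = image_centroid(img)
    h, w = img.shape
    zh, zw = h // grid, w // grid
    feats = []
    for r in range(grid):
        for c in range(grid):
            zone = img[r * zh:(r + 1) * zh, c * zw:(c + 1) * zw]
            ys, xs = np.nonzero(zone)
            if len(ys) == 0:
                feats.append(0.0)          # empty zone contributes nothing
                continue
            # pick up to `samples` evenly spaced ink pixels (stand-in for the
            # paper's five distances measured at 20-degree steps)
            idx = np.linspace(0, len(ys) - 1, min(samples, len(ys))).astype(int)
            d = np.hypot(ys[idx] + r * zh - cy, xs[idx] + c * zw - cx)
            feats.append(d.mean())         # eq. (2.4): average centroid distance
    return np.array(feats)                 # 25 ICZ features
```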
Modified Algorithm 2: Zone Centroid and Zone-based (ZCZ) distance metric feature extraction algorithm (Fenwa, Omidiora and Fakolujo, 2012)
Input: Pre-processed character image
Output: Features for classification and recognition
Method Begins
Step 1: Divide the input image into 25 equal zones, as shown in Figure 2.3.
Step 2: Compute the zone centroid over the pixels present in each zone, as shown in Figure 2.5, using the formulae:
Centre of gravity in the horizontal direction (x-axis) = $\sum_{i=0}^{n-1} x_i$, where n = width (2.7)
Centre of gravity in the vertical direction (y-axis) = $\sum_{i=0}^{m-1} y_i$, where m = height (2.8)
Step 3: Compute the distance between the zone centroid and each pixel present in the zone.
Step 4: Repeat Step 3 for the sampled pixels in the zone (five points in this case):
Total Distance $D = D_1 + D_2 + D_3 + D_4 + D_5$ (2.9)
Step 5: Compute the average distance between these points as:
Average Zone Centroid Distance $D_Z = D/5$ (2.10)
where D is the total distance from the zone centroid to the pixels in the zone, measured at an angle of 20°.
Step 6: Repeat this procedure sequentially for all 25 zones:
Total Distance $Q = D_{Z1} + D_{Z2} + D_{Z3} + \cdots + D_{Zm}$ (2.11)
Total Average Distance $D_k = \sum_{z=0}^{m-1} D_{Zz}/m$ (2.12)
where m = 25 (total number of zones).
Step 7: Finally, 25 such features were obtained for classification and recognition.
Method Ends.
The Hybrid Zoning Algorithm: Hybrid of Modified ICZ and Modified ZCZ (Fenwa, Omidiora and Fakolujo, 2012)
Input: Pre-processed character image
Output: Features for Classification and Recognition
Method Begins
Step 1: Divide the input image into 25 equal zones.
Step 2: Compute the input image centroid.
Step 3: Compute the distance between the image centroid and each pixel present in the zone.
Step 4: Repeat Step 3 for the sampled pixels in the zone.
Step 5: Compute the average distance between these points.
Step 6: Compute the zone centroid.
Step 7: Compute the distance between the zone centroid and each pixel present in the zone.
Step 8: Repeat Step 7 for the sampled pixels in the zone.
Step 9: Compute the average distance between these points.
Step 10: Repeat Steps 3-9 sequentially for all the zones.
Step 11: Finally, 2 x 25 = 50 such features were obtained for classification and recognition, as sketched below.
Method Ends
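A matching sketch of the ZCZ variant and the hybrid combination follows, reusing the `icz_features` helper from the ICZ sketch above. As before, the grid size, the pixel-sampling scheme and the function names are illustrative assumptions rather than the authors' code; the hybrid simply concatenates the two 25-element feature vectors into the 50 features of Step 11.

```python
import numpy as np

def zcz_features(img, grid=5, samples=5):
    """Modified ZCZ sketch: average distance from each zone's own centroid
    to a few ink pixels inside that zone (eqs. 2.7-2.12)."""
    h, w = img.shape
    zh, zw = h // grid, w // grid
    feats = []
    for r in range(grid):
        for c in range(grid):
            zone = img[r * zh:(r + 1) * zh, c * zw:(c + 1) * zw]
            ys, xs = np.nonzero(zone)
            if len(ys) == 0:
                feats.append(0.0)
                continue
            zy, zx = ys.mean(), xs.mean()    # zone centroid (eqs. 2.7-2.8)
            idx = np.linspace(0, len(ys) - 1, min(samples, len(ys))).astype(int)
            d = np.hypot(ys[idx] - zy, xs[idx] - zx)
            feats.append(d.mean())           # eq. (2.10)
    return np.array(feats)                   # 25 ZCZ features

def hybrid_features(img):
    """Steps 1-11 of the hybrid algorithm: 25 ICZ + 25 ZCZ = 50 features."""
    return np.concatenate([icz_features(img), zcz_features(img)])
```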
Digitization
Digitization of an image into a binary matrix of specified dimensions makes the input image invariant to its actual dimensions: an image of whatever size is transformed into a binary matrix of fixed, pre-determined dimensions. This establishes uniformity in the dimensions of the input and stored patterns as they move through the recognition system. Digitization is the conversion of the features selected on the canvas into binary digits, which are then fed into the input layer of the neural network. Because neural networks take numeric input and produce alphanumeric output, the data gathered for the network were represented numerically and carefully scaled to enable the network to learn. If a white area is found in the input character matrix, the corresponding pattern vector element is marked 0 (zero); a black area is marked 1 (one). A method known as one-of-N encoding, which uses numeric variables to represent a single nominal variable (e.g. the character 'A', represented with the bits 000000011110000000..111000000000000111), was employed in the data representation for the network.
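As a rough illustration of the digitization step, the sketch below rescales a grayscale character to a fixed-size 0/1 matrix. The 32x32 output size, the threshold of 128 and the nearest-neighbour resize are illustrative assumptions, not details given in the paper.

```python
import numpy as np

def digitize(gray, size=(32, 32), threshold=128):
    """Rescale a grayscale character image to fixed dimensions and binarize:
    dark (ink) pixels become 1, white background becomes 0."""
    h, w = gray.shape
    rows = np.linspace(0, h - 1, size[0]).astype(int)
    cols = np.linspace(0, w - 1, size[1]).astype(int)
    resized = gray[np.ix_(rows, cols)]       # crude nearest-neighbour resize
    return (resized < threshold).astype(np.uint8)

# The flattened 0/1 matrix is the numeric pattern fed to the input layer:
# pattern = digitize(gray).flatten()
```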
3. Related Work
Few attempts are found in the literature in which the conventional Counterpropagation Neural Network (CPN) architecture has been used for the recognition of handwritten characters. Ahmed (1995) made an attempt, but only for digit recognition. Muhammad et al. (2005) implemented the conventional CPN for the recognition of online uppercase English alphabets, and recognition rates of 60% to 93% were obtained for different sets of character samples. This paper has focused on implementing a modified Counterpropagation neural network for recognition of online uppercase (A-Z) and lowercase (a-z) English alphabets and digits (0-9).
Counterpropagation Neural Networks
An important component of training in the conventional CPN is the reduction of the data set into a corresponding data set of lesser, specified size, done in such a way that estimates of the dependent variable values corresponding to the new (and reduced in number) independent variable vectors can still be calculated (Hecht-Nielsen, 1990). Thus the CPN actually operates as a closest-match lookup table, and training a CPN is an attempt to appropriately reduce the size of the lookup table (Ahmed, 1995). The training process of the counterpropagation network can be described as a two-stage procedure: in the first stage the process updates the weights of the synapses between the input and the Kohonen layer, while in the second stage the weights of the synapses between the Kohonen and the Grossberg layer are updated.
This work adopted the modified CPN proposed by Jude, Vijila and Anitha (2010) for abnormal brain tumour classification. They describe the Counterpropagation neural network as a hybrid neural network employing both supervised and unsupervised training methodologies. It consists of three layers: the input layer, the competition layer and the output layer. Given the input training set $(X_i, Y_i)$, i = 1, 2, ..., N, where $X_i = (x_{i1}, x_{i2}, \ldots, x_{in})$ and $Y_i = (y_{i1}, y_{i2}, \ldots, y_{in})$, the configuration of the network is as follows: the number of neurons in the input layer is n, and the number of neurons in the competition layer is N. The architecture of the network is shown in Figure 2.6.
The learning algorithm proceeds in two steps. In the first step, each component of the training instance is presented to the input layer. Let $U_{ij}$ be the arbitrary initial weight assigned to the link connecting input node i with competition node j. The transfer function of the competition layer is defined by the Euclidean distance $d_j$ between the weight vector and the input vector as:
$d_j = \|U_j - X_k\| = \left(\sum_i (U_{ij} - x_{ki})^2\right)^{1/2}$, j = 1, 2, ..., N (2.13)
For each $X_k$, each node in the competition layer competes with the other nodes, and the node with the shortest Euclidean distance wins. The output of the winning node is set to 1 and the rest to 0; thus, the output of the j-th node in the competition layer is
$Z_j = 1$ if $d_j < d_i$ for all $i \neq j$ (2.14)
$Z_j = 0$ otherwise (2.15)
The weight update between the input layer and the competition layer is given by
$U_{ij}^{t+1} = U_{ij}^{t} + \alpha (x_{ji} - U_{ij}^{t})$ (2.16)
where t is the iteration number and α, the learning coefficient, satisfies 0 < α ≤ 0.8, as suggested by Hecht-Nielsen in 1990. After the weight vectors $U_{ij}$ have stabilized, as the second step, the output layer begins learning the desired output.
The weight adjustments for the output layer are given by:
$V_{ji}^{t+1} = V_{ji}^{t} + \beta (y_{ki} - V_{ji}^{t}) Z_j$ (2.17)
where the learning coefficient β lies in the range 0 < β ≤ 1.0.
A stabilized set of weights is obtained by training the neural network with the input samples. After training, the network was tested with an unknown data set. The major drawback of the CPN is its high convergence time, which leads to computational complexity. A suitable modification is therefore made in the conventional CPN to improve the convergence rate while yielding sufficient classification accuracy.
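The two-stage procedure of equations (2.13)-(2.17) can be sketched in Python as follows. Array shapes, learning rates, the epoch count and the random initialization are illustrative assumptions; the winner-take-all update follows the standard CPN formulation, in which the gating $Z_j$ means only the winning node's weights change.

```python
import numpy as np

def train_cpn(X, Y, n_hidden, alpha=0.5, beta=0.5, epochs=50, seed=0):
    """Conventional CPN sketch. Stage 1 (eqs. 2.13-2.16): competitive,
    unsupervised update of the Kohonen weights U. Stage 2 (eq. 2.17):
    supervised update of the Grossberg weights V, gated by the winner Z_j."""
    rng = np.random.default_rng(seed)
    U = rng.random((n_hidden, X.shape[1]))      # input -> Kohonen weights
    V = np.zeros((n_hidden, Y.shape[1]))        # Kohonen -> Grossberg weights
    for _ in range(epochs):                     # stage 1
        for x in X:
            d = np.linalg.norm(U - x, axis=1)   # Euclidean distances d_j (2.13)
            j = np.argmin(d)                    # winner: Z_j = 1 (2.14-2.15)
            U[j] += alpha * (x - U[j])          # eq. (2.16)
    for x, y in zip(X, Y):                      # stage 2, after U stabilizes
        j = np.argmin(np.linalg.norm(U - x, axis=1))
        V[j] += beta * (y - V[j])               # eq. (2.17): only winner updates
    return U, V
```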
Modified CPN Training Algorithm (Jude, Vijila, and Anitha, 2010)
The training algorithm involves the following two phases:
(i) Weight adjustment between the input layer and the hidden layer
The weight adjustment procedure for the hidden-layer weights is the same as that of the conventional CPN. It follows the unsupervised methodology to obtain the stabilized weights; Equations (2.13) to (2.17) summarise this procedure. The process is repeated for a suitable number of iterations and the stabilized set of weights is obtained. After convergence, the weights between the hidden layer and the output layer are calculated.
(ii) Weight adjustment between the hidden layer and the output layer
The weight adjustment procedure employed in this work is significantly different from the conventional CPN: the weights are calculated in the reverse direction without any iterative procedure. In the conventional CPN, the weights are calculated with the criterion of minimizing the error; in this work, a minimum error value is specified initially and the weights are estimated based on that error value. The detailed steps of the modified CPN algorithm are given below.
Step 1: The stabilized weight values are obtained when the error value (target output) is equal to zero or a predefined minimum value. The following procedure uses this concept for the weight matrix calculation.
Step 2: Supply the target vectors $t_1$ to the output layer neurons.
Step 3: Convergence requires
$t_1 - y_1 = R$ (2.18)
where $t_1$ is the target value (target output), $y_1$ is the network output and R is the minimum error (threshold) value. The output of the output layer neurons is set equal to the target values as:
$y_1 = t_1 - R$ (2.19)
Step 4: Once the output value is calculated, the sum of the weighted input signals ($y\_in_1$) can be estimated. Since the sigmoid activation function is used, inverting it yields the value of $y\_in_1$:
$y\_in_1 = \ln\left[\frac{y_1}{1 - y_1}\right]$ (2.20)
Step 5: Based on the value of $y\_in_1$, the weight matrix $v_{j1}$ is calculated using the expression:
$y\_in_1 = \sum_j h_j \, v_{j1}$ (2.21)
where $h_j$ is the output value of the hidden layer, obtained at the completion of phase (i).
Thus, without any training methodology, the weight values are estimated. This technique accounts for a higher convergence rate, since one set of weights is estimated directly.
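A sketch of this direct, non-iterative second stage (equations 2.18-2.21) follows: fix a minimum error R, invert the sigmoid, and solve for the output weights in one shot. Solving the resulting linear system by least squares over all training patterns is one plausible reading of eq. (2.21) for a whole training set, not necessarily the authors' exact procedure, and the matrix shapes and clipping constant are assumptions.

```python
import numpy as np

def direct_output_weights(H, T, R=0.01):
    """Modified CPN second stage. H: hidden-layer outputs (N x n_hidden),
    T: target vectors (N x n_out), R: predefined minimum error."""
    Y = np.clip(T - R, 1e-6, 1 - 1e-6)     # eq. (2.19), kept inside sigmoid's range
    Y_in = np.log(Y / (1.0 - Y))           # eq. (2.20): inverse of the sigmoid
    # eq. (2.21): y_in = sum_j h_j * v_j, solved once for V by least squares
    V, *_ = np.linalg.lstsq(H, Y_in, rcond=None)
    return V                               # (n_hidden x n_out), no iteration needed
```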
4. Results and Discussion
The modified CPN model did not require iterative training, unlike the backpropagation architecture, which takes a long time to learn. Rather, a minimum error value is specified initially and the weights are estimated from that error value; this accounts for the model's higher convergence rate, since one set of weights is estimated directly.
Seven different data sets, with 5, 11, 22, 33, 44, 55 and 66 samples per character, were used to evaluate the performance of the modified CPN model while gradually increasing the number of samples per character. The system was evaluated on samples taken from individuals who did not participate in the initial data acquisition for the training set. This was done with the eventual aim of using the model in a practical online recognition system: the quality of an online handwriting recognizer is related to its ability to translate drawn characters irrespective of writing style.
A general trend of increasing performance with increasing samples per character was observed in the experiments. The False Recognition (FR) rate is an important factor in any recognition system: the lower the false recognition rate, the more reliable the system becomes. The developed system obtained lower False Recognition (FR), lower Recognition Failure (RF) and higher Correct Recognition (CR) compared with related work in the literature. Table 2.1 shows the experimental results of both the conventional CPN model (Muhammad et al., 2005) and the modified CPN model; a bar-chart representation of the results is shown in Figure 2.7.
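For reference, CR, FR and RF percentages of the kind reported in Table 2.1 can be computed as in the short sketch below, under the assumption (not stated explicitly in the paper) that the recognizer may abstain, returning None, which counts as a Recognition Failure.

```python
def recognition_rates(predictions, truths):
    """Percentages of Correct Recognition (CR), False Recognition (FR)
    and Recognition Failure (RF) over a test set."""
    n = len(truths)
    rf = sum(p is None for p in predictions)               # no answer given
    cr = sum(p == t for p, t in zip(predictions, truths))  # correct label
    fr = n - cr - rf                                       # wrong label
    return 100.0 * cr / n, 100.0 * fr / n, 100.0 * rf / n

# e.g. recognition_rates(['A', None, 'c'], ['A', 'B', 'b']) -> (33.3, 33.3, 33.3)
```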
5. Conclusion
In conclusion, the results of this work showed better recognition performance, in terms of accuracy and speed, compared with existing work in the literature. Future work can explore integrating an optimization algorithm into the learning algorithm to further enhance the performance of the system.
6. References
Ahmed, S.M. (1995): "Experiments in character recognition to develop tools for an optical character recognition system", IEEE Inc. First National Multi Topic Conference Proceedings, NUST, Rawalpindi, Pakistan, 61-67.
Anita, P. and Dayashankar, S. (2010): "Handwritten English Character Recognition using Neural Network", International Journal of Computer Science and Communication, 1(2): 141-144.
Fenwa, O.D., Omidiora, E.O. and Fakolujo, O.A. (2012): "Development of a Feature Extraction Technique for Online Character Recognition System", Journal of Innovation System Design and Engineering, 3(3): 10-23.
Hecht-Nielsen, R. (1990): Neurocomputing, Addison-Wesley Publishing Company.
Jude, D.H., Vijila, C.K. and Anitha, J. (2010): "Performance Improved PSO Based Modified Counterpropagation Neural Network for Abnormal Brain Image Classification", International Journal of Advanced Soft Computing Application, 2(1): 65-84.
Pradeep, J., Srinivasan, E. and Himavathi, S. (2011): "Diagonal Based Feature Extraction for Handwritten Alphabets Recognition using Neural Network", International Journal of Computer Science and Information Technology (IJCSIT), 3(1): 27-37.
Santosh, K.C. and Nattee, C. (2009): "A Comprehensive Survey on Online Handwriting Recognition Technology and Its Real Application to the Nepalese Natural Handwriting", Kathmandu University Journal of Science, Engineering and Technology, 5(1): 31-55.
Figure 2.1: Block diagram of the proposed character recognition system (data acquisition from input device; character processing: feature extraction using the modified zoning technique and character digitization; training and classification using the modified CPN model; testing)
Figure 2.2: Sample handwritten characters
Figure 2.3: Character 'n' in 5 by 5 zoning (25 equal zones)
Figure 2.4: Image centroid of character 'n' in zoning
Figure 2.5: Zone Centroid of character ‘n’ in zoning
Figure 2.6: Topology of CPN (Jude, Vijila, and Anitha, 2010)
Table 2.1: Experimental results of the conventional CPN model and the modified CPN model

                    Conventional CPN (Muhammad et al., 2005)    Modified CPN
Samples/character   CR (%)   FR (%)   RF (%)                    CR (%)   FR (%)   RF (%)
5 each              60       40       0                         65       10       0
11 each             79       12       0                         80       5        0
22 each             76       23       1                         79       4        0
33 each             84       15       1                         86       7        1
44 each             82       17       1                         84       4        1
55 each             88       8        4                         90       4        2
66 each             93       6        1                         96       6        1
Figure 2.7: Performance comparison of the conventional CPN and the modified CPN (data from Table 2.1)