This document discusses using a Direction-Length Code (DLC) to represent binary objects. The DLC is a "knowledge vector" that provides information about the direction and length of an object's pixels in every direction. Patterns over a 3x3 pixel array are generated to form a basic alphabet for representing digital images as spatial distributions of these patterns. The DLC compresses bi-level images while preserving shape information and allowing significant data reduction, and it can serve as a standard input for numerous shape analysis algorithms. Components of images are extracted from the DLC and used to accurately regenerate the original images, demonstrating the effectiveness of the DLC representation.
Role Model of Graph Coloring Application in Labeled 2D Line Drawing Object (Waqas Tariq)
Several researchers have worked on the development of sketch interpreters; however, very few provide a complete cycle that can transform an engineering sketch into a valid solid object. In this paper, a framework for the complete cycle of a sketch interpreter is presented. The discussion stresses the use of line labeling and graph coloring in the validation of the two-dimensional (2D) line drawing phase. Both applications are needed to determine whether a given 2D line drawing represents a possible or an impossible object. Previous work by Matondang et al. (2008) used a line labeling algorithm to validate several 2D line drawings; however, the results show that line labeling alone is not sufficient, as the algorithm has no technique for validating its own result. Therefore, this study shows that if a 2D line drawing is validated as a possible object by the line labeling algorithm, then it can be colored using the graph coloring concept with a determinable minimum number of colors, and vice versa. The expected output of this phase is a validly labeled 2D line drawing with a different color at each edge, ready for the reconstruction phase. As a preliminary result, the high-level programming language MATLAB R2009a and several primitive 2D line drawings were used to test the graph coloring concept on labeled 2D line drawings.
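A minimal sketch of the coloring step can be written as a greedy edge coloring, where each edge (line segment) receives the smallest color not used by any edge sharing a junction. The square drawing below is an illustrative assumption, not the paper's MATLAB code:

```python
def greedy_edge_coloring(edges):
    """Assign each edge the smallest color not used by any adjacent edge.

    Edges are (u, v) vertex pairs; two edges are adjacent if they share
    a vertex (a junction in the line drawing).
    """
    colors = {}
    for edge in edges:
        u, v = edge
        # colors already taken by edges meeting this one at a junction
        used = {c for e, c in colors.items() if u in e or v in e}
        c = 0
        while c in used:
            c += 1
        colors[edge] = c
    return colors

# A square drawing: 4 junctions, 4 edges; adjacent edges get distinct colors.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
coloring = greedy_edge_coloring(square)
```

Greedy coloring gives an upper bound on the number of colors needed, so a labeled drawing can be checked against the expected, determinable minimum.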
Dictionary based Image Compression via Sparse Representation (IJECEIAES)
Nowadays image compression has become a necessity due to the large volume of images. For efficient use of storage space and data transmission, it is essential to compress images. In this paper, we propose a dictionary-based image compression framework via sparse representation, with the construction of a trained overcomplete dictionary. The overcomplete dictionary is trained using intra-prediction residuals obtained from different images and is applied for sparse representation. In this method, the current image block is first predicted from its spatially neighboring blocks, and then the prediction residuals are encoded via sparse representation. A sparse approximation algorithm and the trained overcomplete dictionary are applied for sparse representation of the prediction residuals, and the detail coefficients obtained from the sparse representation are used for encoding. Experimental results show that the proposed method yields both improved coding efficiency and better image quality compared to some state-of-the-art image compression methods.
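The sparse-approximation step can be sketched with Orthogonal Matching Pursuit (OMP), a standard sparse approximation algorithm. The random toy dictionary below merely stands in for a trained overcomplete dictionary and is purely an illustrative assumption:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: approximate x as D @ a with at most
    k nonzero coefficients. D is an (n, m) dictionary whose columns
    (atoms) are assumed unit-norm."""
    residual = x.copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(k):
        # greedily pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly by least squares
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    a[support] = coeffs
    return a

# Toy overcomplete dictionary: 4-dimensional "residual blocks", 8 atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((4, 8))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.0 * D[:, 5]   # a signal that is exactly 2-sparse in D
a = omp(D, x, k=2)
```

Only the few nonzero coefficients (and their indices) need to be entropy-coded, which is where the compression gain comes from.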
In VLSI physical design, floorplanning is a crucial step because it optimizes the chip. The goal of floorplanning is to find a floorplan in which no module overlaps another, while optimizing the interconnection between modules, optimizing the area of the floorplan, and minimizing the dead space. In this paper, a Simulated Annealing (SA) algorithm is employed to shrink dead space and thereby optimize the area and interconnect of the VLSI floorplanning problem. A sequence-pair representation is employed to perturb the solution. The outcomes obtained after applying SA to different benchmark files are compared with the outcomes of other algorithms on the same files, and the comparison suggests that SA gives better results. SA is effective and promising for VLSI floorplan design. MATLAB simulation results show that our approach can give better results and satisfy the fixed-outline and non-overlapping constraints while optimizing circuit performance.
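The SA loop described above can be sketched generically: perturb the current solution, always accept improvements, and accept worse solutions with probability exp(-delta/T) under a cooling schedule. The one-dimensional module-ordering cost below is a toy stand-in for a real floorplan evaluator, not the paper's benchmark setup:

```python
import math
import random

def simulated_annealing(cost, perturb, state, t0=1.0, cooling=0.95, steps=500):
    """Generic SA loop: always accept improvements, accept worse moves
    with probability exp(-delta / T), and cool T geometrically."""
    best = cur = state
    best_c = cur_c = cost(cur)
    t = t0
    for _ in range(steps):
        cand = perturb(cur)
        c = cost(cand)
        if c < cur_c or random.random() < math.exp(-(c - cur_c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= cooling
    return best, best_c

# Toy stand-in for a floorplan: order module widths so the total jump
# between neighbors (a crude "interconnect" cost) is small; the
# perturbation swaps two positions, like a sequence-pair swap.
random.seed(1)
widths = [5, 1, 4, 2, 3]
cost = lambda seq: sum(abs(seq[i] - seq[i + 1]) for i in range(len(seq) - 1))

def perturb(seq):
    s = list(seq)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

best, best_c = simulated_annealing(cost, perturb, widths)
```

In a real floorplanner the state would be the sequence pair itself and the cost a weighted sum of area, dead space, and wirelength.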
A novel tool for stereo matching of images (eSAT Journals)
Abstract: Stereo matching techniques play an important role in many real-world applications such as robot stereo vision and image sequence analysis. Given a pair of stereo images, matching techniques can obtain image descriptors with which to compare the images. Stereo matching can be achieved using relational, feature-based, or signal-based matching; the signal-based approach is the most widely used. Lemmens [10] recently provided a comprehensive review of many stereo matching techniques. In this paper we implement techniques that can help in real-world settings and build a prototype application as a proof of concept. The empirical results reveal that the proposed application has good utility. Keywords: stereo images, stereo matching.
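A signal-based matcher of the kind the abstract refers to can be sketched as window-based block matching along a scanline, choosing at each pixel the disparity that minimizes the sum of absolute differences (SAD). The synthetic scanlines below are illustrative assumptions:

```python
import numpy as np

def disparity_sad(left, right, max_disp, win=1):
    """Per-pixel disparity along one scanline: for each pixel in the left
    image, pick the shift d minimizing the SAD over a (2*win+1) window."""
    n = len(left)
    L = np.pad(left.astype(float), win, mode='edge')
    R = np.pad(right.astype(float), win, mode='edge')
    disp = np.zeros(n, dtype=int)
    for x in range(n):
        best, best_d = float('inf'), 0
        for d in range(max_disp + 1):
            if x - d < 0:
                break  # candidate window would fall off the image
            sad = np.abs(L[x:x + 2*win + 1] - R[x - d:x - d + 2*win + 1]).sum()
            if sad < best:
                best, best_d = sad, d
        disp[x] = best_d
    return disp

# The right scanline is the left one shifted by 2 pixels (disparity 2).
left = np.array([0, 0, 0, 10, 20, 30, 0, 0])
right = np.array([0, 10, 20, 30, 0, 0, 0, 0])
disp = disparity_sad(left, right, max_disp=3)
```

Textured pixels recover the true shift; flat regions are ambiguous, which is why real matchers add smoothness constraints.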
Noise tolerant color image segmentation using support vector machine (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
Image Matting via LLE/iLLE Manifold Learning (ITII Industries)
Accurately extracting foreground objects, i.e., isolating the foreground in images and video, is called image matting and has wide applications in digital photography. The problem is severely ill-posed in the sense that, at each pixel, one must estimate the foreground and background colors and the so-called alpha value from the pixel information alone. The most recent work in natural image matting relies on local smoothness assumptions about foreground and background colours, on which a cost function is established. In this paper, we propose an extension to the class of affinity-based matting techniques that incorporates local manifold structural information to produce a smoother matte, based on the so-called improved Locally Linear Embedding (iLLE). We illustrate the new algorithm on the standard benchmark images, and very comparable results have been obtained.
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture Engineering,
Aerospace Engineering.
LATTICE-CELL: A HYBRID APPROACH FOR TEXT CATEGORIZATION (csandit)
In this paper, we propose a new text categorization framework based on Concept Lattices and cellular automata. In this framework, the concept structure is modeled by a Cellular Automaton for Symbolic Induction (CASI). Our objective is to reduce the categorization time caused by the Concept Lattice. We examine by experiment the performance of the proposed approach and compare it with other algorithms such as Naive Bayes and k-nearest neighbors. The results show a performance improvement while reducing categorization time.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology and Science, Power Electronics, Electronics and Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing and Low-Power VLSI Design, etc.
SYMMETRICAL WEIGHTED SUBSPACE HOLISTIC APPROACH FOR EXPRESSION RECOGNITION (ijcsit)
Human facial expression is one of the cognitive activities or attributes used to deliver opinions to others. This paper mainly reports the performance of appearance-based holistic subspace methods built on Principal Component Analysis (PCA). In this work, texture features are extracted from face images using Gabor filters. It was observed that the extracted texture feature vector space has high dimensionality and a large amount of redundant content; hence training, testing, and classification times increase, and the expression recognition accuracy rate is reduced. To overcome this problem, the Symmetrical Weighted 2DPCA (SW2DPCA) subspace method is introduced: the extracted feature vector space is projected into a subspace using SW2DPCA, and the proposed method is formed by applying weighting principles to the odd and even symmetrical decomposition spaces of the training sample sets. Conventional PCA and 2DPCA yield lower recognition rates under larger variations in expression and lighting, owing to the many redundant variants in the feature space. The proposed SW2DPCA method mitigates this problem by reducing redundant content and discarding unequal variants. In this work, the well-known JAFFE database is used for experiments with the proposed SW2DPCA algorithm. The experimental results show that the facial expression recognition accuracy rate of the GF+SW2DPCA feature-fusion subspace method increased to 95.24% compared to the 2DPCA method.
PREDICTING GROWTH OF URBAN AGGLOMERATIONS THROUGH FRACTAL ANALYSIS OF GEO-SPATIAL DATA
Location analytics is one of the fastest-emerging fields in the broad area of Business Intelligence/Data Science. By some industry estimates, almost 80% of all data has a location dimension to it. Consequently, identifying trends and patterns in spatially distributed information has far-reaching applications ranging from urban planning to logistics and supply chain management, location-based marketing, sales territory planning, and retail store location. In view of this, we present an approach based on Fractal Analysis (FA) of highly granular geo-spatial data. Specifically, we use proprietary data available at approximately the 1 square km level for New Delhi, India, provided by Indicus Analytics (India's leading economic data analytics firm, based in New Delhi). We compare and contrast the patterns and insights generated using the FA approach with more traditional approaches such as spatial correlation and structural similarity indices. Preliminary results indicate that there are indeed "self-similar" local patterns that are completely missed by spatial correlation but accurately captured by the more sophisticated FA approach. These patterns provide deep insights into the underlying socio-economic and demographic processes and can be used to predict the spatial distribution of these variables in the future. For example, questions such as where the pockets of population growth in a city are, and how businesses and government will respond to that growth, can be answered using the proposed approach.
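The fractal dimension underlying an FA approach is typically estimated by box counting: cover the occupancy grid with boxes of decreasing side s and fit the slope of log N(s) against log(1/s). The synthetic grids below are illustrative assumptions, not the Indicus data:

```python
import numpy as np

def box_counting_dimension(grid):
    """Estimate the fractal dimension of a square binary grid by box
    counting: the slope of log N(s) versus log(1/s) over dyadic sizes s."""
    n = grid.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        c = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if grid[i:i + s, j:j + s].any():
                    c += 1  # this box contains part of the set
        sizes.append(s)
        counts.append(c)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A filled plane has dimension ~2; a straight line has dimension ~1.
d_full = box_counting_dimension(np.ones((64, 64), dtype=bool))
d_line = box_counting_dimension(np.eye(64, dtype=bool))
```

Self-similar urban patterns would show a stable, non-integer slope across scales, which is exactly what plain spatial correlation cannot see.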
A Survey on Tamil Handwritten Character Recognition using OCR Techniques (cscpconf)
In today’s fast-growing technology, digital recognition plays a wide role and provides ample scope for research on OCR techniques. Recognition of Tamil handwritten script is complicated compared to Western language scripts. Nevertheless, many researchers have provided real-time solutions for offline Tamil character recognition. Offline Tamil handwritten document recognition still offers many motivating challenges to researchers: current research offers many solutions, yet reasonable accuracy and performance have not been achieved. This paper analyses the various approaches to, and challenges of, offline Tamil handwritten character recognition.
The machine learning technique PCA is described in this research paper, which is a very good source for a clear understanding of how Principal Component Analysis (PCA) works when implementing machine learning techniques.
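As a quick illustration of the idea, PCA can be computed via the SVD of the mean-centered data matrix. The synthetic near-one-dimensional point cloud below is an illustrative assumption:

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components
    (the directions of maximal variance of the mean-centered data)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # right singular vectors of the centered data are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]
    return Xc @ components.T, components, mu

rng = np.random.default_rng(42)
# 3-D points that really live near a 1-D line: one component should
# capture almost everything.
t = rng.standard_normal(200)
X = np.outer(t, [1.0, 2.0, 3.0]) + 0.01 * rng.standard_normal((200, 3))
Z, comps, mu = pca(X, k=1)
```

Reconstructing from the single retained component recovers the data almost exactly, which is the dimensionality-reduction property PCA is used for.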
Characteristic Comparison of U-Shaped Monopole and Complete Monopole Antenna (IOSR Journals)
Abstract: A monopole antenna is a type of radio antenna formed by replacing one half of a dipole antenna with a ground plane at right angles to the remaining half. Monopoles may be used from a few hundred kHz up to several GHz in frequency and are commonly one-quarter of a wavelength long, but may be shorter or longer. Monopole antennas exhibit high gain and improved efficiency in a surprisingly small package, and they can be designed to exhibit wideband capabilities. The available monopole antennas include the dual-band printed monopole antenna, cross-slot monopole antenna, U-shaped monopole antenna, triangular monopole antenna, and wideband monopole antenna. This paper deals with a comparison of results such as return loss, VSWR, current distribution, and radiation pattern between a simple U-shaped and a complete monopole antenna. Keywords: CPW, CST, FR4, LAN, WiMAX
A novel approach to design and fabrication of thermo-acoustic refrigerator us... (IOSR Journals)
The basic object of this paper is the compression of a refrigerant through the consumption of high-amplitude sound waves, which results in the generation of thermal energy. To generate the sound waves, a loudspeaker (a common audio speaker) is employed to create high-amplitude sound waves that compress the refrigerant (helium) through the well-known thermodynamic process of heat absorption. A thermoacoustic refrigerator is fabricated with a cooler and a loudspeaker. The acoustic power produced by the loudspeaker reaches the heat pump as sound waves. The cooler works on a reverse Carnot cycle, incorporating a compact acoustic network to create the traveling-wave phasing necessary for operation. As the acoustic wave propagates, a temperature difference develops at the ends of the stack, and heat and cold are transferred by the heat-exchanger regenerator from a cold heat exchanger to an ambient one. Instead of a co-axial topology network, a toroidal one is used. The design, construction, and performance measurement of the cooler are also discussed. Finally, at 80 C a coefficient of performance of 7.0 is achieved.
Study of Various Histogram Equalization Techniques (IOSR Journals)
Abstract: Histogram equalization (HE) works well on single-channel images for contrast enhancement. However, the technique is ineffective on multiple-channel images and is therefore unsuitable for consumer electronics products, where the original brightness must be preserved so as not to introduce unnecessary visual deterioration. Bi-histogram equalization (BHE) has been developed and analyzed mathematically. BHE separates the input image's histogram into two parts based on its mean before equalizing them independently, so it can preserve the original brightness to a certain extent. Recursive Mean-Separate Histogram Equalization (RMSHE) is another technique that provides better and scalable brightness preservation for grayscale and color images. While the separation is done only once in BHE, RMSHE performs the separation recursively, each time based on the respective mean. It is shown mathematically that the output image's mean brightness converges to the input image's mean brightness as the number of recursive mean separations increases. The recursive nature of RMSHE also allows scalable brightness preservation, which is very useful in consumer electronics. Finally, a comparative study was made to analyze all the above methods using grayscale and color images. Keywords: bi-histogram equalization, histogram equalization, scalable brightness preservation, recursive mean-separate
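The BHE step described above can be sketched directly: split the histogram at the image mean, then equalize the lower part onto [0, mean] and the upper part onto [mean+1, 255], which keeps each pixel on its own side of the mean. This is a minimal sketch of the general idea, not the paper's implementation:

```python
import numpy as np

def bi_histogram_equalization(img):
    """BHE: split the histogram at the image mean and equalize each part
    independently, so darker pixels stay in [0, mean] and brighter
    pixels stay in [mean+1, 255], preserving brightness better than HE."""
    img = img.astype(np.uint8)
    mean = int(img.mean())

    def make_lut(values, lo, hi):
        # Histogram-equalize the gray levels lo..hi onto the same range.
        hist = np.bincount(values, minlength=256)[lo:hi + 1]
        cdf = hist.cumsum()
        if cdf[-1] == 0:
            return np.arange(lo, hi + 1)   # empty half: identity mapping
        return lo + np.round((hi - lo) * cdf / cdf[-1]).astype(int)

    low = img <= mean
    lut_low = make_lut(img[low], 0, mean)
    lut_high = make_lut(img[~low], mean + 1, 255)
    out = np.empty_like(img)
    out[low] = lut_low[img[low]]
    out[~low] = lut_high[img[~low] - (mean + 1)]
    return out

# Example: a ramp image spanning all 256 gray levels.
img = np.tile(np.arange(256, dtype=np.uint8), (4, 1))
out = bi_histogram_equalization(img)
```

RMSHE repeats this split recursively on each half, which is what drives the output mean toward the input mean.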
“Proposed Model for Network Security Issues Using Elliptical Curve Cryptography” (IOSR Journals)
Abstract: Elliptic Curve Cryptography (ECC) plays an important role in today's public-key-based security systems. ECC is a faster and more secure method of encryption compared to other public-key cryptographic algorithms. This paper focuses on the performance advantages of using ECC in wireless networks; its algorithm has been implemented and analyzed for inputs of various bit lengths. The private key is known only to the sender and receiver, and hence data transmission is secure.
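The core ECC operation such benchmarks time is scalar multiplication, built from point addition and doubling via double-and-add. The tiny curve over F_17 below is a textbook-style toy example, far smaller than any curve a real system would use:

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + a*x + b over F_p (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Textbook toy curve: y^2 = x^3 + 2x + 2 over F_17, generator G = (5, 1).
a, p = 2, 17
G = (5, 1)
```

Double-and-add runs in time proportional to the bit length of k, which is what a "various bit length inputs" timing study measures; the commutativity of scalar multiplication, k1\*(k2\*G) = k2\*(k1\*G), is what ECDH key agreement relies on.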
Modelling and static analysis of femur bone by using different implant materials (IOSR Journals)
The femur is the leg bone of the human body that undergoes the most deformation. Biomechanics is the study of how tissues, cells, muscles, bones, and organs move, and how their form and function are regulated by basic mechanical properties. The aim of this study is to create a model of a real proximal human femur bone and to analyze its behavior in ANSYS under physiological load conditions. Finite element models of bones generated from CT scan data are widely used to make realistic investigations of the mechanical behavior of bone structures. Orthopedic implantation is done in case of failure; before implantation it is necessary to assess the implant's suitability in terms of its material properties, size and shape, surface treatment, load resistance, and chances of failure. The stresses formed in different femur implant materials under static loading conditions are analyzed using ANSYS software. The analysis covers different materials, namely structural steel and the Ti-6Al-4V implant material. Since each femur carries half the body weight, the analysis is done for 550 kg, 650 kg, and 750 kg loads, including cases where the patient carries additional weight. Based on the analysis, it can be concluded that of the two implant materials compared, Ti-6Al-4V gave less deformation under static load conditions. Ti-6Al-4V is a low-density material with excellent biocompatibility and mechanical properties, making it ideal for use as an implant in surgeries. Finally, the success of implantation depends on the implant material and size, the implantation method, and its handling by the patient.
Image Information Retrieval From Incomplete Queries Using Color and Shape Fea... (sipij)
Content-based image retrieval (CBIR) is the task of searching digital images in a large database based on extracted features such as the color, texture, and shape of the image. Most CBIR research has been carried out with complete queries that were present in the database. This paper investigates the utility of CBIR techniques for the retrieval of incomplete and distorted queries. Studies were made for two categories of query: complete and incomplete. A query image is considered distorted or incomplete if it has missing information, undesirable objects, blurring, or noise due to disturbance at the time of image acquisition. Color (the hue, saturation, and value (HSV) color space model) and shape (moment invariants and the Fourier descriptor) features are used to represent the image. The algorithm was tested on a database of 1875 images. The results show that the retrieval accuracy for incomplete queries is greatly increased by fusing color and shape features, giving a precision of 79.87%. MATLAB 7.01 and its image processing toolbox were used to implement the algorithm.
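The moment-invariant shape features mentioned above can be sketched with the first two Hu invariants, which are unchanged under translation of the object. The small binary squares below are illustrative assumptions:

```python
import numpy as np

def hu_moments(img):
    """First two Hu moment invariants of a 2-D binary/grayscale image:
    shape features invariant to translation (and, in the continuous
    limit, to scale and rotation)."""
    img = img.astype(float)
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xb = (xs * img).sum() / m00    # centroid
    yb = (ys * img).sum() / m00

    def eta(p, q):
        # normalized central moment
        mu = ((xs - xb) ** p * (ys - yb) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

# The same 4x4 square at two different positions: identical features.
a = np.zeros((12, 12)); a[2:6, 2:6] = 1
b = np.zeros((12, 12)); b[5:9, 4:8] = 1
h_a, h_b = hu_moments(a), hu_moments(b)
```

This invariance is exactly what makes such features usable on incomplete queries, where the object may appear shifted relative to the database image.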
A CHINESE CHARACTER RECOGNITION METHOD BASED ON POPULATION MATRIX AND RELATIO...Teady Matius
A Chinese character has many different forms, information in its features will have many variations. Therefore, it needs a relational database to store many variations of their features. The use of the relational database to store the sets of features enables the use of distance measurements methods while measuring the sets of feature that owned by a Chinese character to recognize a Chinese character image inputted. The feature used in this thesis is the pixel population matrix. The sets of the features are stored and queried by using the relational database.
This paper discusses about how to recognize the Chinese character image and Chinese radical image by using relational database and pixel population matrix.
Content Based Image Retrieval (CBIR) is one of the
most active in the current research field of multimedia retrieval.
It retrieves the images from the large databases based on images
feature like color, texture and shape. In this paper, Image
retrieval based on multi feature fusion is achieved by color and
texture features as well as the similarity measures are
investigated. The work of color feature extraction is obtained by
using Quadratic Distance and texture features by using Pyramid
Structure Wavelet Transforms and Gray level co-occurrence
matrix. We are comparing all these methods for best image
retrieval
Color Image Watermarking Application for ERTU CloudCSCJournals
Color image is one of the the Egyptian Radio and Television Union (ERTU)’s content should be saved from any abuse from outside or inside the organization alike. The application of saving color image deploys the watermarking techniques based on Discrete Wavelet Transform (DWT). This application is implemented by software that suits the ERTU’s cloud besides many tests to insure the originality of the photo and if there is any changes applied on. All that provides the essential objectives of the cloud to overcome the limitation of distance as well as provide reliable and trusted services to Authorized group.
A comparative study on content based image retrieval methodsIJLT EMAS
Content-based image retrieval (CBIR) is a method of
finding images from a huge image database according to persons’
interests. Content-based here means that the search involves
analysis the actual content present in the image. As database of
images is growing daybyday, researchers/scholars are searching
for better techniques for retrieval of images maintaining good
efficiency. This paper presents the visual features and various
ways for image retrieval from the huge image database.
Dimensionality Reduction and Feature Selection Methods for Script Identificat...ITIIIndustries
The goal of this research is to explore effects of dimensionality reduction and feature selection on the problem of script identification from images of printed documents. The kadjacent segment is ideal for this use due to its ability to capture visual patterns. We have used principle component analysis to reduce the size of our feature matrix to a handier size that can be trained easily, and experimented by including varying combinations of dimensions of the super feature set. A modular
approach in neural network was used to classify 7 languages – Arabic, Chinese, English, Japanese, Tamil, Thai and Korean.
Effect of Similarity Measures for CBIR using Bins ApproachCSCJournals
This paper elaborates on the selection of suitable similarity measure for content based image retrieval. It contains the analysis done after the application of similarity measure named Minkowiski Distance from order first to fifth. It also explains the effective use of similarity measure named correlation distance in the form of angle ‘cosè’ between two vectors. Feature vector database prepared for this experimentation is based on extraction of first four moments into 27 bins formed by partitioning the equalized histogram of R, G and B planes of image into three parts. This generates the feature vector of dimension 27. Image database used in this work includes 2000 BMP images from 20 different classes. Three feature vector databases of four moments namely Mean, Standard deviation, Skewness and Kurtosis are prepared for three color intensities (R, G and B) separately. Then system enters in the second phase of comparing the query image and database images which makes of set of similarity measures mentioned above. Results obtained using all distance measures are then evaluated using three parameters PRCP, LSRR and Longest String. Results obtained are then refined and narrowed by combining the three different results of three different colors R, G and B using criterion 3. Analysis of these results with respect to similarity measures describes the effectiveness of lower orders of Minkowiski distance as compared to higher orders. Use of Correlation distance also proved its best for these CBIR results.
Content Based Image Retrieval Using Gray Level Co-Occurance Matrix with SVD a...ijcisjournal
In this paper, gray level co-occurrence matrix, gray level co-occurrence matrix with singular value
decomposition and local binary pattern are presented for content based image retrieval. Based upon the
feature vector parameters of energy, contrast, entropy and distance metrics such as Euclidean distance,
Canberra distance, Manhattan distance the retrieval efficiency, precision, and recall of the images are
calculated. The retrieval results of the proposed method are tested on Corel-1k database. The results after
being investigated shows a significant improvement in terms of average retrieval rate, average retrieval
precision and recall of different algorithms such as GLCM, GLCM & SVD, LBP with radius one and LBP
with radius two based on different distance metrics.
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGEScseij
It is A Challenging Task To Build A Cbir System Which Primarily Works On Texture Values As There
Meaning And Semantics Needs A Special Care To Be Mapped With Human Based Languages. We Have
Consider Highly Textured Images Having Properties(Entropy, Homogeneity, Contrast, Cluster Shade, Auto
Correlation)And Have Mapped Using A Fuzzy Minmax Scale W.R.T. Their Degree(High, Low,
Medium)And Technical Interpetation.This Developed System Is Performing Well In Terms Of Precision
And Recall Value Showing That Semantic Gap Has Been Reduced For Highly Textured Images Based Cbir.
Performance analysis of chain code descriptor for hand shape classificationijcga
Feature Extraction is an important task for any Image processing application. The visual properties of any image are its shape, texture and colour. Out of these shape description plays important role in any image classification. The shape description method classified into two types, contour base and region based. The contour base method concentrated on the shape boundary line and the region based method considers whole area. In this paper, contour based, the chain code description method was experimented for different hand shape.
The chain code descriptor of various hand shapes was calculated and tested with different classifier such as k-nearest- neighbour (k-NN), Support vector machine (SVM) and Naive Bayes. Principal component analysis (PCA) was applied after the chain code description. The performance of SVM was found better than k-NN and Naive Bayes with recognition rate 93%.
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLINGsipij
Object tracking is one of the most important problems in computer vision. The aim of video tracking is to extract the trajectories of a target or object of interest, i.e. accurately locate a moving target in a video sequence and discriminate target from non-targets in the feature space of the sequence. So, feature descriptors can have significant effects on such discrimination. In this paper, we use the basic idea of many trackers which consists of three main components of the reference model, i.e., object modeling, object detection and localization, and model updating. However, there are major improvements in our system. Our forth component, occlusion handling, utilizes the r-spatiogram to detect the best target candidate. While spatiogram contains some moments upon the coordinates of the pixels, r-spatiogram computes region-based compactness on the distribution of the given feature in the image that captures richer features to represent the objects. The proposed research develops an efficient and robust way to keep tracking the object throughout video sequences in the presence of significant appearance variations and severe occlusions. The proposed method is evaluated on the Princeton RGBD tracking dataset considering sequences with different challenges and the obtained results demonstrate the effectiveness of the proposed method.
Finding Neighbors in Images Represented By Quadtreeiosrjce
In this paper, we propose an algorithm for neighbors finding in images represented by quadtree data
structure. We first present a scheme for addressing image pixels that takes into account the order defined on a
quadtree blocks. We study various properties of this method, and then we develop a formula that establishes
links between quadtree blocks. Based on these results, we present an efficient algorithm that generates the list of
location codes of all possible neighbors, in all directions, for a given block of the quadtree.
A COMPARATIVE ANALYSIS OF RETRIEVAL TECHNIQUES IN CONTENT BASED IMAGE RETRIEVALcscpconf
Basic group of visual techniques such as color, shape, texture are used in Content Based Image Retrievals (CBIR) to retrieve query image or sub region of image to find similar images in image database. To improve query result, relevance feedback is used many times in CBIR to help user to express their preference and improve query results. In this paper, a new approach for image retrieval is proposed which is based on the features such as Color Histogram, Eigen Values and Match Point. Images from various types of database are first identified by using edge detection techniques .Once the image is identified, then the image is searched in the particular database, then all related images are displayed. This will save the retrieval time. Further to retrieve the precise query image, any of the three techniques are used and comparison is done w.r.t. average retrieval time. Eigen value technique found to be the best as compared with other two techniques.
A comparative analysis of retrieval techniques in content based image retrievalcsandit
Basic group of visual techniques such as color, shape, texture are used in Content Based Image
Retrievals (CBIR) to retrieve query image or sub region of image to find similar images in
image database. To improve query result, relevance feedback is used many times in CBIR to
help user to express their preference and improve query results. In this paper, a new approach
for image retrieval is proposed which is based on the features such as Color Histogram, Eigen
Values and Match Point. Images from various types of database are first identified by using
edge detection techniques .Once the image is identified, then the image is searched in the
particular database, then all related images are displayed. This will save the retrieval time.
Further to retrieve the precise query image, any of the three techniques are used and
comparison is done w.r.t. average retrieval time. Eigen value technique found to be the best as
compared with other two techniques.
Face Detection for identification of people in Images of Internetijceronline
One method for searching the internet faces in images is proposed by using digital processing topological with descriptors. Location in real time with the development of a database that stores addresses of internet downloaded images, in which the search is done by text, but by finding facial image, is achieved. Face recognition in images of Internet has proved to be a difficult task, because the images vary considerably depending on viewpoint, illumination, expression, pose, accessories, etc. The descriptors for general information: containing low-level descriptors. Developments on face recognition systems have improved significantly since the first system; image analysis is a topic on which much emphasis is being given in order to identify parameters, visual features in the image that provide environment data that it is represented in the image, but without the intervention of a person. In this project raises its realization using the method of viola and jones as face descriptor. We can distinguish even different parts of the face such as eyes, eyebrows, nose and mouth.One method for searching faces in image taken from internet intends to use digital processing using topological descriptors. It is located the face in real time.
The development of multimedia system technology in Content based Image Retrieval (CBIR) System is
one in every of the outstanding area to retrieve the images from an oversized collection of database. The feature
vectors of the query image are compared with feature vectors of the database images to get matching images.It is
much observed that anyone algorithm isn't beneficial in extracting all differing kinds of natural images. Thus an
intensive analysis of certain color, texture and shape extraction techniques are allotted to spot an efficient CBIR
technique that suits for a selected sort of images. The Extraction of an image includes feature description and
feature extraction. During this paper, we tend to projected Color Layout Descriptor (CLD), grey Level Co-
Occurrences Matrix (GLCM), Marker-Controlled Watershed Segmentation feature extraction technique that
extract the matching image based on the similarity of Color, Texture and shape within the database. For
performance analysis, the image retrieval timing results of the projected technique is calculated and compared
with every of the individual feature.
Image similarity using symbolic representation and its variationssipij
This paper proposes a new method for image/object retrieval. A pre-processing technique is applied to
describe the object, in one dimensional representation, as a pseudo time series. The proposed algorithm
develops the modified versions of the SAX representation: applies an approach called Extended SAX
(ESAX) in order to realize efficient and accurate discovering of important patterns, necessary for retrieving
the most plausible similar objects. Our approach depends upon a table contains the break-points that
divide a Gaussian distribution in an arbitrary number of equiprobable regions. Each breakpoint has more
than one cardinality. A distance measure is used to decide the most plausible matching between strings of
symbolic words. The experimental results have shown that our approach improves detection accuracy.
The aim of the study was to investigate the relationship between 2D gray scale pixels and 3D gray scale pixels of image reconstructions in computed tomography (CT). The 3D space image reconstruction from data projection was a challenging and difficult research problem. The image was normally reconstructed from the 2D data from CT data projection. In this descriptive study, a synthetics 3D Shepp-Logan phantom was used to simulate the actual data projection from a CT scanner. Real-time data projection of a human abdomen was also included in this study. Additionally, the Graphical User Interface (GUI) for the application was designed using Matlab Graphical User Interface Development Environment (GUIDE). The application was able to reconstruct 2D and 3D images in their respective spaces successfully.The image reconstruction for CT in 3D space was analyzedalong with 2D space in order to show their relationships and shared properties for the purpose of constructing these images.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Monitoring Java Application Security with JDK Tools and JFR Events
E018212935
1. IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661,p-ISSN: 2278-8727, Volume 18, Issue 2, Ver. I (Mar-Apr. 2016), PP 29-35
www.iosrjournals.org
DOI: 10.9790/0661-18212935 www.iosrjournals.org 29 | Page
Direction-Length Code (DLC) To Represent Binary Objects
Dr G D Jasmin, M G Justin Raj
Assistant Professor, Institute of Aeronautical Engineering, Dundigal, Hyderabad, Telangana
Assistant Professor, Annai Vailankanni College of Engineering, Pottalkulam, K K Dist, Tamil Nadu
Abstract: Ever more images are being generated in digital form around the world, and efficient description and classification of the objects they contain is a much-needed capability. Shape is an important visual feature of an image, and searching for images using shape features has attracted much attention. In this paper the Direction-Length Code (DLC), also called a knowledge vector, which gives the direction and length of pixel runs in every direction of an object, is generated to represent objects. The underlying formal language attempts to simulate the structural and hierarchical nature of the human vision system: sentences are strings of symbols, and languages correspond to pattern classes. Patterns over a 3 x 3 array of vertices are generated to form the basic alphabet of this digital picture language; that is, one can visualize any digital image as a spatial distribution of these patterns. The DLC compresses bi-level images, preserving shape information while allowing a considerable amount of data reduction. Furthermore, these codes act as a standard input format for numerous shape-analysis algorithms.
I. Introduction
Pattern recognition can be formally defined as the categorization of input data into identifiable classes via the extraction of significant features or attributes of the data from a background of irrelevant detail.
Encoding efficiency in representing the shapes of objects is very important for image storage and transmission [1]. It is also important for shape analysis and shape recognition in pattern recognition. Various methods such as colour-based, texture-based, shape-based and appearance-based representation have been developed to represent objects [2-4]. More studies on these methods are being carried out and advanced techniques continue to be developed [5,6]. Shape-based representation gives more clarity about the shape of an object, which acts as a unique and important feature for the subsequent identification process [7-9].
II. Object Recognition System
The problem of object recognition is divided into the following sub problems:
1. Image pre-processing
2. Object description
3. Image reconstruction
4. Classification
The input image to the system may be a colour image, a grayscale image or a binary image. Contours are extracted, as they give the outlines of the shapes of the objects present in the image. Object description involves traversing the contour pixels and finding a code to represent the object; this means finding the length of the contour in every possible direction.
Syntactic pattern recognition is inspired by the observation that the composition of a natural scene is analogous to the composition of a language: sentences are built up from phrases, phrases are built up from words, and words are built up from alphabets [10]. Matching between shapes can use string matching, finding the minimal number of edit operations needed to convert one string into another. A more general method is to formulate the representation as a string grammar. Each primitive is interpreted as a symbol of the alphabet of some grammar, where a grammar is a set of rules of syntax that govern the generation of sentences from symbols of the alphabet. The set of sentences generated by a grammar G is called its language and is denoted L(G). Here, sentences are strings of symbols (which in turn represent patterns), and languages correspond to pattern classes. Once grammars have been established, matching is straightforward: for a sentence representing an unknown shape, the task is to decide in which language the sentence is valid. Syntactic shape analysis is based on the theory of formal languages [11]. It attempts to simulate the structural and hierarchical nature of the human vision system. However, it is not practical in general applications, because it is not possible to infer a grammar that generates only the valid patterns. In addition, this method needs a priori knowledge of the database in order to define code words or alphabets.
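The edit-operation matching mentioned above can be illustrated with a standard Levenshtein-distance computation (a minimal sketch, not the authors' implementation; the example strings over the direction alphabet are hypothetical):

```python
def edit_distance(a, b):
    """Minimal number of insert/delete/substitute operations
    turning string a into string b (Levenshtein distance)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # substitute
    return d[m][n]

# Two shape sentences over the direction alphabet:
print(edit_distance("RDLU", "RDRDLU"))  # 2: the second shape needs two extra symbols
```

A small distance indicates that two shape sentences, and hence the shapes they describe, are structurally close.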
III. Patterns Over 3x3 Array Of Pixels
Let us consider an object S present in an image f(x,y). A pixel p at coordinates (x,y) has 4 horizontal
and vertical neighbours called direct neighbours and 4 diagonal neighbours whose coordinates are given by
(x+1,y), (x-1,y), (x,y+1), (x,y-1), (x+1,y+1), (x+1,y-1), (x-1,y+1), (x-1,y-1).
The symbols used to represent the 8 possible directions are R, DR, D, DL, L, UL, U and UR. Note that R, D, L
and U are elementary symbols and DR, DL, UL and UR are composite symbols of the alphabet. The directions
denoted by these symbols are shown in Figure 1.
Fig 1: 3 X 3 Pixel representation and the Direction Codes
The direct neighbours of pixel p form the set N4(p) = {(x+1,y), (x-1,y), (x,y+1), (x,y-1)}.
The diagonal neighbours of pixel p form the set ND(p) = {(x+1,y+1), (x+1,y-1), (x-1,y+1), (x-1,y-1)}.
The 8-neighbours of pixel p form the set N8(p) = N4(p) ∪ ND(p).
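The three neighbourhoods defined above can be written directly as code (an illustrative sketch; the function names are ours, and the coordinate convention follows the definitions in the text):

```python
def n4(x, y):
    """Direct (horizontal and vertical) neighbours of pixel (x, y)."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbours of pixel (x, y)."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """All eight neighbours: the union of direct and diagonal neighbours."""
    return n4(x, y) | nd(x, y)

print(len(n8(0, 0)))  # 8
```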
Concave patterns can be constructed by removing pixels among the direct neighbours: removing pixels 2, 4, 6 and 8 in every non-empty combination, the total number of concave patterns obtained is 2^4 - 1 = 15.
In the same way, convex patterns can be constructed by removing pixels among the diagonal neighbours: removing pixels 1, 3, 7 and 9 in every combination, the total number of convex patterns obtained is 2^4 = 16.
Hybrid patterns can be constructed by removing pixels from both the direct and the diagonal neighbours. Thus the total number of hybrid patterns, obtained by removing pixels from among 1, 2, 3, 4, 6, 7, 8 and 9, is 15 × 15 = 225.
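These counts can be checked with a small enumeration (an illustrative sketch; the pixel numbering 1-9 with centre pixel 5 follows Figure 1):

```python
from itertools import combinations

corners = [1, 3, 7, 9]   # diagonal neighbours of the centre pixel
edges   = [2, 4, 6, 8]   # direct neighbours of the centre pixel

def subsets(pixels):
    """All subsets (including the empty subset) of the given pixel positions."""
    return [c for r in range(len(pixels) + 1) for c in combinations(pixels, r)]

convex  = len(subsets(corners))                # corners removed in any combination: 2^4 = 16
concave = len(subsets(edges)) - 1              # at least one edge pixel removed: 2^4 - 1 = 15
hybrid  = (len(subsets(corners)) - 1) * (len(subsets(edges)) - 1)  # 15 * 15 = 225

print(convex, concave, hybrid, convex + concave + hybrid)  # 16 15 225 256
```

The total of 256 matches the size of the pattern alphabet stated in the text.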
The convex contour patterns constructed from pattern A are shown below.
Table 1: Convex Patterns
Note that the direction codes for all 16 convex contour patterns are given below the respective patterns. For instance, the direction code for pattern A is R-D-L-U and the direction code for pattern B3 is R-DR-D-L-U.
Table 2: Concave Patterns
The 15 concave patterns constructed from pattern A are shown in Table 2. One can also construct a total of 225 more (hybrid) patterns from pattern A and determine their direction codes. Thus a total of 256 patterns can be generated over the 3 x 3 array of vertices.
All these 256 patterns form the basic alphabet of what we call a digital picture language. That is, one can visualize any digital image as a spatial distribution of these patterns. Once such patterns are generated for a given image, they can then be used for the successful classification of objects.
IV. Direction-Length Code (DLC) Of Objects
The connectivity between pixels is an important concept in the field of object recognition. Two pixels are said to be connected if they are adjacent in some sense and their intensity levels satisfy a specified criterion of similarity. If pixel p with coordinates (x, y) and pixel q with coordinates (s, t) are pixels of an object (image subset) S, a path from p to q is a sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), …, (xn, yn),
where (x0, y0) = (x, y), (xn, yn) = (s, t), (xi, yi) is adjacent to (xi-1, yi-1) for 1 ≤ i ≤ n, and n is the length of the path. If a path from p to q consisting of pixels in S exists, then p is said to be connected to q. A gap of one or two pixels marks either the presence of the next component of the same object or the beginning of the next object, which can then be found by analysing the relationship among the components. Figure 2(c) shows the vector code generated for the square shown in Figure 2(a).
(a) Image of a square (b) contour map
<104,109>/R81*D81*L81*U80*/<105,109>#
(c) Vector code
Figure 2: The DLC of a square
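The connectivity test described above can be sketched as a breadth-first search over the 8-neighbours of each pixel. This is a minimal illustration; the function name and the set-of-coordinates representation of the object are our own, not part of the DLC algorithm itself.

```python
from collections import deque

def connected(pixels, p, q):
    """Return True if p and q are joined by an 8-connected path of
    pixels lying entirely inside the object `pixels` (a set of (x, y))."""
    if p not in pixels or q not in pixels:
        return False
    seen, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == q:
            return True
        # Visit all eight neighbours of the current pixel.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                n = (x + dx, y + dy)
                if n != (x, y) and n in pixels and n not in seen:
                    seen.add(n)
                    frontier.append(n)
    return False

# Two pixels of a small filled square are connected; a detached pixel is not.
square = {(x, y) for x in range(3) for y in range(3)}
print(connected(square, (0, 0), (2, 2)))                  # True
print(connected(square | {(10, 10)}, (0, 0), (10, 10)))   # False
```

A gap of one or two pixels, as noted above, breaks this connectivity and so signals the start of a new component.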
V. Generation Of Normalized Vector Code
It is important to note that diagonal lines in an image do not appear as combinations of diagonal codes alone. The direction codes D and R appear with DR lines, D and L appear with DL, U and L appear with UL, and U and R appear with UR.
Consider the image of a triangle and the corresponding vector code shown below.
(a): Binary Image ( b): Contour
Figure 3: Image of a triangle
The DLC generated is
<75,121>/D1*DR2*D1*DR3*R1*DR1*D1*DR3*D1*DR3*D1*R7*D1*DR5*D1*DR2*D1*DR3*R1*DR2*
D1*DR1*D1*DR2*DR4*D2*DR2*R1*DR4*D2*DR7*D2*DR1*R1*DR5*D1*DR1*D1*DR8*D1*DR4*D1*
DR3*D1*DR3*R1*DR1*D1*DR3*D1*DR2*L175*UR3*U1*UR2*U1*UR2*R1*UR3*U1*UR2*U1*UR7*U
1*UR7*U2*UR5*R1*UR1*U1*UR2*U1*UR4*U1*UR7*U1*UR5*U1*UR1*U1*UR4*R1*UR3*U2*UR7*U
2*UR1*R1*UR5*U1*UR2*U1*UR4*U1*UR1*R1*UR2*U1*UR2*/<77,120>#
This is then reduced to a single occurrence of each of the eight directions:
<75,121>/R10*DR82*D23*L175*U21*UR82*/<77,120>#
An approximation is then applied: when the number of pixels in a particular direction is less than 10% of the total number of contour pixels, that length is added to the corresponding major direction. This reduces the memory space needed and makes the recognition process easier.
Thus the vector code obtained is
<75,121>/DR110*L175*UR108*/<77,120>#
and then normalized to 100 pixels as
DR28*L45*UR27
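The reduction and normalization steps above can be sketched as follows. The merge rule for the minor R direction (split evenly between DR and UR) is an assumption chosen only to reproduce this worked example; the text does not state how a minor R is apportioned, and all function names are our own.

```python
import re

def parse(code):
    """Split a reduced DLC like '<75,121>/R10*DR82*.../<77,120>#'
    into an ordered {direction: length} mapping."""
    body = code.split("/")[1]
    return {d: int(n) for d, n in re.findall(r"(UR|UL|DR|DL|[RLUD])(\d+)", body)}

def reduce_and_normalize(code, threshold=0.10):
    runs = parse(code)
    total = sum(runs.values())
    major = {d: n for d, n in runs.items() if n >= threshold * total}
    minor = {d: n for d, n in runs.items() if n < threshold * total}
    # Assumed merge rule, reconstructed from the worked example: D folds
    # into DR, U into UR, and a minor R is split evenly between DR and UR.
    for d, n in minor.items():
        if d == "D":
            major["DR"] += n
        elif d == "U":
            major["UR"] += n
        elif d == "R":
            major["DR"] += n // 2
            major["UR"] += n - n // 2
    normalized = {d: round(100 * n / sum(major.values()))
                  for d, n in major.items()}
    return major, normalized

major, norm = reduce_and_normalize("<75,121>/R10*DR82*D23*L175*U21*UR82*/<77,120>#")
print(major)  # {'DR': 110, 'L': 175, 'UR': 108}
print(norm)   # {'DR': 28, 'L': 45, 'UR': 27}
```

The merged lengths and the 100-pixel normalization agree with the codes given above for the triangle.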
The final vector is thus a normalized vector, called the knowledge vector, since it carries the information about the object in a simpler form. The knowledge vector can be made more informative by separating the basic sides from the diagonal sides. Initially the contour is viewed as a string in which every small change of direction is reflected, limited to the eight directions. From this initial string, the continuous runs of pixels that make up straight lines (chosen only if at least 2% of the pixels are present) are extracted first, and the remaining pixels are approximated to the diagonal directions. Let us consider an image of a triangle.
(a) Image (b) Contour
Figure 4: Image of a Triangle and its contour
The knowledge vector obtained is
<75,121>/D1*DR2*D1*DR3*R1*DR1*D1*DR3*D1*DR3*D1*DR7*D1*DR5*D1*DR2*D1*DR3*R1*DR2
*D1*DR1*D1*DR2*DR4*D2*DR2*R1*DR4*D2*DR7*D2*DR1*R1*DR5*D1*DR1*D1*DR8*D1*DR4*D1
*DR3*D1*DR3*R1*DR1*D1*DR3*D1*DR2*L175*UR3*U1*UR2*U1*UR2*R1*UR3*U1*UR2*U1*UR7*
D1*U2*UR7*U2*UR5*R1*UR1*U1*UR2*U1*UR4*U1*UR7*U1*UR5*U1*UR1*U1*UR4*R1*UR3*U2*
UR7*U2*UR1*R1*UR5*U1*UR2*U1*UR4*U1*UR1*R1*UR2*U1*UR2*/<77,120>#
The vector limited to a single occurrence of each of the eight directions is
<75,121>/R10*DR82*D24*L175*U22*UR82*/<77,120>#
And then approximated to
DR116*L175*UR104
The vector code normalized to 100 pixels is DR29*L44*UR26.
Observe that in this knowledge vector, L175 (L44 in the normalized vector) appears as a vector for a straight line with no intermediate changes of direction. Such vectors can be separated as basic directions. The other directions present are DR82, D24, U22, R10 and UR82. In this case, D (together with the minor R) appears with DR, and U appears with UR, so these lengths are added to the major diagonal directions. Thus we obtain the vector code DR116*L175*UR104, which is then normalized to DR29*L44*UR26.
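The extraction of the long straight runs (the basic sides, at least 2% of the contour length) from the raw string can be sketched as below; the function name and the toy fragment of a DLC string are illustrative only.

```python
import re

def split_basic_sides(raw_body, threshold=0.02):
    """Separate runs long enough (>= threshold of the contour length)
    to count as straight basic sides from the short runs that are later
    folded into the diagonal directions. `raw_body` is the run portion
    of a DLC string, e.g. 'D1*DR2*L175*...'."""
    runs = re.findall(r"(UR|UL|DR|DL|[RLUD])(\d+)", raw_body)
    total = sum(int(n) for _, n in runs)
    basic = [(d, int(n)) for d, n in runs if int(n) >= threshold * total]
    return total, basic

# A toy fragment: only the L175 run survives the 2% cut.
total, basic = split_basic_sides("D1*DR2*L175*UR3*U1")
print(total, basic)  # 182 [('L', 175)]
```

The remaining short runs are then approximated to the diagonal directions, as described above.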
VI. Component Wise Image Description And Regeneration
Let us consider the contour of the digital image in Figure 5, consisting of four squares (not ideal squares).
a) Original image b) Contour map
Figure 5: Sample image consisting of four squares
The DLC generating algorithm was applied to this image and the knowledge string was obtained. Table 4 shows the component-wise retrieval of the knowledge string for each component in the image.
Table 4: Component-wise extraction of the knowledge string (component images not reproduced).
Step  Knowledge string
1     <19,23>/R212*D223*L212*U222*/<20,23>#
2     <50,56>/R151*D159*L151*U158*/<51,56>#
3     <78,83>/R100*D104*L100*U103*/<79,83>#
4     <99,107>/R56*D58*L56*U57*/<100,107>#
Bottom-up processing is also performed: each object is redrawn from its knowledge string, verifying the accuracy of the DLC generating algorithm.
Table 5: Redrawn images of the squares from the extracted knowledge (redrawn component images not reproduced).
Step  Knowledge string
1     <19,23>/R212*D223*L212*U222*/<20,23>#
2     <50,56>/R151*D159*L151*U158*/<51,56>#
3     <78,83>/R100*D104*L100*U103*/<79,83>#
4     <99,107>/R56*D58*L56*U57*/<100,107>#
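The redrawing step can be sketched by walking the runs of a knowledge string from its starting coordinate. The (column, row) axis convention with the row increasing downwards is an assumption, as are the function name and step table.

```python
import re

# Unit steps for the eight direction codes; (x, y) with y pointing down,
# an assumed image-coordinate convention.
STEPS = {"R": (1, 0), "L": (-1, 0), "U": (0, -1), "D": (0, 1),
         "DR": (1, 1), "DL": (-1, 1), "UR": (1, -1), "UL": (-1, -1)}

def trace(knowledge):
    """Redraw a component from its knowledge string: start at the first
    coordinate pair and emit every contour pixel by walking the runs."""
    start, end = re.findall(r"<(\d+),(\d+)>", knowledge)
    x, y = int(start[0]), int(start[1])
    points = [(x, y)]
    for d, n in re.findall(r"(UR|UL|DR|DL|[RLUD])(\d+)",
                           knowledge.split("/")[1]):
        dx, dy = STEPS[d]
        for _ in range(int(n)):
            x, y = x + dx, y + dy
            points.append((x, y))
    return points, (int(end[0]), int(end[1]))

# The largest square from Table 5: the traced contour closes to within
# a pixel of its starting point.
points, end = trace("<19,23>/R212*D223*L212*U222*/<20,23>#")
print(len(points), points[0], points[-1], end)
```

Plotting the returned points reproduces the square contour, which is what the redrawn components in Table 5 show.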
The knowledge base consists of the various components of the image. Large knowledge bases could then be processed and classified using knowledge manipulation and classification algorithms.
VII. Conclusion
The normalized Direction-Length Code, also called a knowledge vector, which gives the direction and length of the pixels in every direction of an object, is generated to represent objects. The reverse process of regenerating images from the generated vector code is also carried out. The algorithm showed good results, regenerating the same image as its output even when several components are present.
The Direction-Length Code (DLC), which compresses bi-level images while preserving shape information and at the same time allowing a considerable amount of data reduction, can act as a standard input format for numerous shape-analysis and classification algorithms.