Content-based image retrieval (CBIR) is the retrieval of images according to visual appearance,
such as texture, shape and color. The methods, components and algorithms adopted in content-based
retrieval of images are commonly derived from areas such as pattern recognition, signal processing
and computer vision. In the approach proposed in this paper, the shape and color features are
extracted by means of the wavelet transform and the color histogram, and a new content-based
retrieval method is built on them. Algorithms are proposed for shape, color and texture feature
extraction; the discrete wavelet transform is applied and the Euclidean distance is used to compare
the resulting feature vectors. Clusters are computed with a modified K-Means clustering technique,
and the query image is then compared against the database images. MATLAB is used to execute the
queries. The system is constructed from segmentation and grid-means modules, feature extraction and
a K-nearest-neighbour clustering step. The results obtained are computed and compared against the
other algorithms for the retrieval of quality image features.
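The pipeline sketched above (wavelet features compared by Euclidean distance) can be illustrated in a few lines of Python. This is a hedged reconstruction, not the paper's code: the single-level Haar transform, the choice of sub-band statistics and the function names are all assumptions made for the example.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform (a minimal sketch).

    Returns the approximation (LL) and detail (LH, HL, HH) sub-bands.
    Assumes the image has even height and width.
    """
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_feature(img):
    """Feature vector: mean and standard deviation of each sub-band."""
    bands = haar_dwt2(img.astype(float))
    return np.array([s for b in bands for s in (b.mean(), b.std())])

def euclidean(u, v):
    return float(np.sqrt(np.sum((u - v) ** 2)))

# toy 4x4 "images": the second is the first with every pixel shifted by 1
query = np.arange(16, dtype=float).reshape(4, 4)
db_img = query + 1.0
d = euclidean(wavelet_feature(query), wavelet_feature(db_img))
# only the LL mean shifts, so d = 1.0
```

In a full system a feature vector like this would be stored for every database image, and a query answered by ranking images by this distance, optionally after K-Means clustering to narrow the search.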
Comparative Study of Dimensionality Reduction Techniques Using PCA and ... (csandit)
The aim of this paper is to present a comparative study of two linear dimension reduction methods,
namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of
PCA is to transform the high-dimensional input space onto a feature space in which the maximal
variance is displayed. Feature selection in traditional LDA is obtained by maximizing the
difference between classes and minimizing the distance within classes. PCA finds the axes with
maximum variance for the whole data set, whereas LDA tries to find the axes that best separate the
classes. The proposed method is tested over a general image database using Matlab. The performance
of these systems has been evaluated by precision and recall measures. Experimental results show
that the PCA-based dimension reduction method gives better performance, with higher precision and
recall values and lower computational complexity than the LDA-based method.
Significance of Dimensionality Reduction in Image Processing (sipij)
The aim of this paper is to present a comparative study of two linear dimension reduction methods,
namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of
PCA is to transform the high-dimensional input space onto a feature space in which the maximal
variance is displayed. Feature selection in traditional LDA is obtained by maximizing the
difference between classes and minimizing the distance within classes. PCA finds the axes with
maximum variance for the whole data set, whereas LDA tries to find the axes that best separate the
classes. A neural network is trained on the reduced feature set (obtained using PCA or LDA) of the
images in the database, using the back-propagation algorithm, for fast searching of images in the
database. The proposed method is tested over a general image database using Matlab. The performance
of these systems has been evaluated by precision and recall measures. Experimental results show
that PCA gives better performance than LDA, with higher precision and recall values and lower
computational complexity.
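As a concrete illustration of the PCA step both abstracts describe, here is a minimal sketch that projects data onto the directions of maximal variance via an eigendecomposition of the covariance matrix. It is illustrative only; production code would typically use an SVD or a library routine, and none of the names come from the papers.

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the k directions of maximal variance."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]       # indices of the top-k eigenvalues
    return Xc @ vecs[:, order]               # projected data, shape (n, k)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 0] *= 10.0                              # make one axis dominate the variance
Z = pca(X, 2)                                # first column captures the most variance
```

LDA differs only in the objective: instead of the covariance matrix, it diagonalizes a ratio of between-class to within-class scatter matrices, which requires class labels.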
Query Image Searching With Integrated Textual and Visual Relevance Feedback f... (IJERA Editor)
Many researchers have studied relevance feedback in the content-based image retrieval (CBIR)
literature, but no CBIR search engine supports it, because of scalability, effectiveness and
efficiency issues. In this work, we implemented integrated relevance feedback for the retrieval of
web images. We concentrated on the integration of relevance feedback (RF) based on both textual
features (TF) and visual features (VF), and we also tested them individually. The TF-based RF
employs an effective search result clustering (SRC) algorithm to get salient phrases. A new user
interface (UI) is then proposed to support RF. Experimental results show that the proposed
algorithm is scalable, effective and accurate.
Low level features for image retrieval based... (caijjournal)
In this paper, we present a novel approach for image retrieval based on the extraction of low-level
features using techniques such as the Directional Binary Code (DBC), the Haar wavelet transform and
the Histogram of Oriented Gradients (HOG). The DBC texture descriptor captures the spatial
relationship between any pair of neighbourhood pixels in a local region along a given direction,
while the Local Binary Patterns (LBP) descriptor considers the relationship between a given pixel
and its surrounding neighbours. DBC therefore captures more spatial information than LBP and its
variants, and it can also extract more edge information than LBP. Hence, we employ the DBC
technique to extract grey-level texture features (a texture map) from each RGB channel
individually; the computed texture maps are then combined to represent the colour texture features
(colour texture map) of an image. Next, we decompose the extracted colour texture map and the
original image using the Haar wavelet transform. Finally, we encode the shape and local features of
the wavelet-transformed images using HOG for content-based image retrieval. The performance of the
proposed method is compared with existing methods on two databases, Wang's Corel image database and
Caltech 256. The evaluation results show that our approach outperforms the existing methods for
image retrieval.
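The LBP descriptor that this abstract contrasts with DBC is compact enough to sketch directly. The following is an illustrative implementation of the basic 3x3 LBP, not the paper's code: each interior pixel is encoded as an 8-bit code by thresholding its eight neighbours against the centre value.

```python
import numpy as np

def lbp(img):
    """Basic 3x3 Local Binary Patterns over a grey-level image.

    Each interior pixel gets an 8-bit code: one bit per neighbour,
    set when that neighbour is >= the centre pixel.
    """
    img = img.astype(int)
    c = img[1:-1, 1:-1]                      # centre pixels
    # neighbours in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes, the usual texture feature."""
    h, _ = np.histogram(lbp(img), bins=256, range=(0, 256))
    return h / h.sum()
```

DBC replaces the centre-vs-neighbour comparison with comparisons between pairs of neighbours along a fixed direction, which is why it captures more directional edge information.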
An Unsupervised Cluster-based Image Retrieval Algorithm using Relevance Feedback (IJMIT Journal)
Content-based image retrieval (CBIR) systems use low-level features of a query image to identify similarity between the query image and the images in the database. Image content plays a significant role in image retrieval. There are three fundamental bases for content-based image retrieval: visual feature extraction, multidimensional indexing, and retrieval system design. Each image has three types of content: color, texture and shape features. Color and texture are both important visual features used in content-based image retrieval to improve results. Color histogram and texture features have the potential to retrieve similar images on the basis of their properties. Because the features extracted from a query are low level, it is extremely difficult for the user to provide an appropriate example in an example-based query. To overcome these problems and reach higher accuracy in a CBIR system, providing the user with relevance feedback is a promising solution.
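The color-histogram feature mentioned in this abstract, and the similarity ranking it supports, can be sketched as follows. The function names and the 8-bin quantization are illustrative assumptions for the example, not details from the paper.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel colour histogram, concatenated and normalised.

    `img` is an H x W x 3 uint8 array.
    """
    feats = []
    for ch in range(3):
        h, _ = np.histogram(img[:, :, ch], bins=bins, range=(0, 256))
        feats.append(h)
    f = np.concatenate(feats).astype(float)
    return f / f.sum()

def rank_by_histogram(query, database):
    """Database indices sorted by Euclidean histogram distance to the query."""
    q = color_histogram(query)
    dists = [np.linalg.norm(q - color_histogram(img)) for img in database]
    return np.argsort(dists)

# toy images: a mostly-red patch and a mostly-blue patch
red = np.zeros((8, 8, 3), np.uint8); red[:, :, 0] = 250
blue = np.zeros((8, 8, 3), np.uint8); blue[:, :, 2] = 250
ranked = rank_by_histogram(red, [blue, red.copy()])
# the red copy (index 1) ranks first
```

Relevance feedback would then reweight or refine such features based on which ranked results the user marks as relevant.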
Wavelet-Based Color Histogram on Content-Based Image Retrieval (TELKOMNIKA Journal)
The growth of image databases in many domains, including fashion, biometrics, graphic design,
architecture, etc., has increased rapidly. Content-Based Image Retrieval (CBIR) is a technique used
for finding relevant images in such huge and unannotated image databases based on low-level
features of the query images. In this study, an attempt to employ a 2nd-level Wavelet-Based Color
Histogram (WBCH) in a CBIR system is proposed. The image database used in this study is taken from
Wang's image database, containing 1000 color images. The experimental results show that the
2nd-level WBCH gives better precision (0.777) than the other methods, including the 1st-level WBCH,
Color Histogram, Color Co-occurrence Matrix, and Wavelet texture features. It can be concluded that
the 2nd-level WBCH can be applied to a CBIR system.
Content Based Image Retrieval (CBIR) aims at retrieving images from a database based on a user query that is in visual form rather than the traditional text form. The applications of CBIR extend from surveillance to remote sensing, medical imaging to weather forecasting, and security systems to historical research. Although extensive research has been done on content based image retrieval in the spatial domain, most images on the internet are JPEG compressed, which pushes the need for image retrieval in the compressed domain itself rather than decoding to the raw format before comparison and retrieval. This research addresses retrieving images from the database based on features extracted in the compressed domain, along with the application of a genetic algorithm to improve the retrieval results. The research focuses on various features and their levels of impact on improving the precision and recall parameters of the CBIR system. Our experimental results also indicate that the compressed-domain CBIR features, together with the genetic algorithm, improve the results considerably when compared with techniques from the literature.
The project aims at the development of an efficient segmentation method for the CBIR system. Mean-shift segmentation generates a list of potential objects which are meaningful, and these objects are then clustered according to a predefined similarity measure. The method was tested on benchmark data and an F-score of 0.30 was achieved.
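The precision, recall and F-score measures quoted throughout these abstracts reduce to a few lines. The helper below is a generic sketch for a single query, not tied to any one paper's evaluation protocol.

```python
def precision_recall_f1(retrieved, relevant):
    """Precision, recall and F-score for one query.

    `retrieved` is the set of image ids the system returned,
    `relevant` the ground-truth set of images it should return.
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)            # relevant images actually returned
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)       # harmonic mean of the two
    return precision, recall, f1

p, r, f = precision_recall_f1(retrieved=[1, 2, 3, 4], relevant=[2, 4, 5])
# p = 0.5, r = 2/3, f = 4/7
```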
A Review of Feature Extraction Techniques for CBIR based on SVM (IJEEE)
With the advancement of multimedia technologies, users are no longer satisfied with conventional retrieval techniques, so "Content Based Image Retrieval" (CBIR) systems were introduced. CBIR is the application of retrieving or searching digital images in a large database. The term "content" refers to the colour, shape, texture and all other information that is extracted from the image itself. This paper reviews CBIR systems that use SVM-classifier-based algorithms for the feature extraction phase.
Content Based Image Retrieval: Classification Using Neural Networks (ijma)
In a content-based image retrieval (CBIR) system, the main issue is to extract image features that
effectively represent the image contents in a database. Such extraction requires a detailed
evaluation of the retrieval performance of image features. This paper presents a review of
fundamental aspects of content-based image retrieval, including the extraction of color and texture
features. Commonly used color features, including color moments, the color histogram and the color
correlogram, as well as Gabor texture features, are compared. The paper reviews the increase in
image retrieval efficiency when the color and texture features are combined. The similarity
measures based on which matches are made and images are retrieved are also discussed. For effective
indexing and fast searching of images based on visual features, neural-network-based pattern
learning can be used to achieve effective classification.
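Of the color features compared in this abstract, color moments are the simplest to state: the first three statistical moments of each channel. The sketch below is an illustrative implementation, not the paper's code.

```python
import numpy as np

def color_moments(img):
    """First three colour moments (mean, standard deviation, skewness)
    per channel, concatenated into a 9-dimensional feature vector.

    `img` is an H x W x 3 float array in [0, 1].
    """
    feats = []
    for ch in range(3):
        x = img[:, :, ch].ravel().astype(float)
        mean = x.mean()
        std = x.std()
        # cube root keeps the sign of the third central moment
        skew = np.cbrt(((x - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)

grey = np.full((4, 4, 3), 0.5)       # uniform mid-grey test image
m = color_moments(grey)              # mean 0.5, std 0, skew 0 per channel
```

Because the feature is only 9-dimensional, it indexes and compares far faster than a full histogram, at the cost of discarding the color distribution's shape.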
Web Image Retrieval Using Visual Dictionary (ijwscjournal)
In this research, we propose a semantic-based image retrieval system to retrieve a set of relevant images from the Web for a given query image. We use a global color space model and the Dense SIFT feature extraction technique to generate a visual dictionary using the proposed quantization algorithm. The images are transformed into sets of features, which are used as inputs to our quantization algorithm to generate the code words that form the visual dictionary. These code words are used to represent images semantically as visual labels using Bag-of-Features (BoF). The histogram intersection method is used to measure the distance between the input image and the images in the database in order to retrieve similar images. The experimental results are evaluated over a collection of 1000 generic Web images to demonstrate the effectiveness of the proposed system.
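The histogram intersection measure mentioned in this abstract is simply the sum of bin-wise minima; a minimal sketch (the example histograms are illustrative):

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Histogram intersection similarity: sum of bin-wise minima.

    For normalised histograms the score lies in [0, 1],
    with 1 for identical histograms.
    """
    h1 = np.asarray(h1, float)
    h2 = np.asarray(h2, float)
    return float(np.minimum(h1, h2).sum())

a = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
s = histogram_intersection(a, b)   # 0.4 + 0.3 + 0.2 = 0.9
```

Note that it is a similarity, not a metric distance, which is why retrieval systems rank by descending score rather than ascending distance.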
This research paper presents a new technique for the synthesis of complex planar mechanisms. The
author calls this technique "nomogram-based synthesis", since it depends on a kinematic nomogram
that facilitates the synthesis of complex planar mechanisms without the need for complex
optimization procedures. A five-step procedure is presented to synthesize the mechanism under study
for a time ratio of up to 4.3 and a normalized stroke of up to 3.33. The nomogram-based synthesis
can maintain the transmission angle within a range of 94 to 120 degrees, indicating the
effectiveness of the synthesis approach presented.
Keywords — Nomogram-based mechanism synthesis, 6-bar-2-slider planar mechanism, successful
mechanism performance.
A classical or traditional information system provides an answer only after a user submits a
complete query. Indeed, at present almost all relational database systems rely on a query whose
syntax and semantics are completely defined in order to access data. But often we are willing to
use vague terms in a query. The main objective of a database management system is to provide an
environment that is both convenient and efficient for people to use in storing and retrieving
information. The recent trend of supporting auto-complete is a first step toward coping with this
problem. We can design both classical and fuzzy databases and effectively use fuzzy queries on
these databases. Fuzzy databases are developed to manipulate incomplete, unclear and vague data
such as low, fast, very high, about, etc. The primary focus of fuzzy logic is on natural language.
This paper gives users the flexibility, or freedom, to query a database using natural language. It
implements an "interactive fuzzy search": a framework that permits the user to explore the data as
they type, even in the presence of minor errors. The paper applies fuzzy queries to a relational
database so that it is possible to obtain precise results as well as output for the uncertain terms
we commonly use, based on some membership function.
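A fuzzy query of the kind described above grades each record with a membership function instead of a hard predicate. The sketch below uses a trapezoidal membership for a vague term such as "high salary"; the breakpoints, field names and threshold are entirely illustrative assumptions.

```python
def trapezoid_membership(x, a, b, c, d):
    """Trapezoidal membership function: 0 below a, rising on [a, b],
    1 on [b, c], falling on [c, d], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def fuzzy_select(rows, key, a, b, c, d, alpha=0.5):
    """Keep the rows whose membership in the fuzzy term exceeds alpha."""
    return [r for r in rows
            if trapezoid_membership(r[key], a, b, c, d) > alpha]

# "high salary" modelled as a trapezoid rising from 40k to 80k
rows = [{"salary": 30000}, {"salary": 70000}, {"salary": 100000}]
high = fuzzy_select(rows, "salary", 40000, 80000, 150000, 200000)
# keeps the 70k (membership 0.75) and 100k (membership 1.0) rows
```

The alpha cut turns graded memberships back into a crisp result set, which is how a fuzzy query can still return a definite answer.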
rring of useable data, for example a sensor in a room to monitor and control the temperature. It is
estimated that by 2020 there will be about 50 billion internet-enabled devices. The Internet of
Things is presently being used in the fields of automobiles, agriculture, security surveillance,
building management, smart homes, and health care. The IoT is expected to use low-cost computing
devices with limited energy consumption and little impact on the environment [1].
This paper aims to describe a way of providing security to IT companies, scouting units, business organizations and volunteer groups. Among person identification methods, face recognition is known to be the most natural one, since the face is the modality people use to identify each other in everyday life. Face detection differentiates faces from non-faces and is therefore essential for accurate security. The other strategy involves face recognition for marking the employees. A Raspberry Pi module is used for face detection and recognition, with the camera connected to the Raspberry Pi module. An employee database is collected; it includes the names of the employees, their images and ID numbers [2].
The RFID reader module will be installed at the front of the organization, and all employees are provided with RFID cards. The Raspberry Pi system holds the database of the employees, so by comparing against the database, if an employee's details match, the employee can enter the company. Thus, with the help of this system, time is saved and it is convenient to record employees. The details of the employees are sent to the corresponding head of the organization using IoT technology.
Since Diabetes Mellitus combined with other ailments becomes a deadly combination, there is an urgent need to break the link between diabetes and its related complications. For this purpose, image-processing-based analysis can potentially be helpful for earlier detection, education and treatment. Medical image analysis of diabetic patients with related complications such as DR, CVD and diabetic myonecrosis (i.e., on retinal images, coronary angiographs, electron micrographs, MRI, etc.) should be prioritised because of their high prevalence. Thus the main work of this paper is a literature review of diabetes and imaging, covering prevalence, classification, causes and medical imaging, along with a survey of image processing methods applied to diabetes-related conditions.
Keywords — Image, segmentation, retinopathy, myonecrosis.
Software keyloggers are a fast growing class of invasive software often used to harvest confidential
information. One of the main reasons for this rapid growth is the possibility for unprivileged programs
running in user space to eavesdrop and record all the keystrokes typed by the users of a system. The ability
to run in unprivileged mode facilitates their implementation and distribution but, at the same time, allows
one to understand and model their behavior in detail. Leveraging this characteristic, we propose a new
detection technique that simulates carefully crafted keystroke sequences in input and observes the behavior
of the keylogger in output to unambiguously identify it among all the running processes. We have
prototyped our technique as an unprivileged application, hence matching the same ease of deployment of a
keylogger executing in unprivileged mode. We have successfully evaluated the underlying technique
against the most common free keyloggers. This confirms the viability of our approach in practical
scenarios. We have also devised potential evasion techniques that may be adopted to circumvent our
approach and proposed a heuristic to strengthen the effectiveness of our solution against more elaborated
attacks. Extensive experimental results confirm that our technique is robust to both false positives and
false negatives in realistic settings.
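The core detection idea, inject a crafted keystroke pattern and flag the process whose output mirrors it, can be sketched with a plain correlation test. The process names and per-interval byte counts below are fabricated for illustration; a real monitor would sample actual process I/O:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Crafted keystroke bursts injected per time interval.
injected = [3, 0, 7, 1, 5, 0]

# Fabricated per-process write counts over the same intervals.
observed = {
    "editor":    [2, 2, 2, 2, 2, 2],   # steady writer, unrelated to the input
    "keylogger": [3, 0, 7, 1, 5, 0],   # mirrors the injected pattern
}

# Flag processes whose output strongly correlates with the injected input.
flagged = [name for name, writes in observed.items() if pearson(injected, writes) > 0.9]
```

A process that transcribes every keystroke reproduces the injected pattern almost exactly, so a high correlation unambiguously identifies it among the running processes.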
Abstract:
E-commerce stands for electronic commerce. E-commerce is doing business online and electronically. This paper attempts to highlight the different challenges faced by e-commerce in India and to understand the essential growth factors required for e-commerce. This paper describes the different services and opportunities offered by E-commerce to business, Producers, Distributers and Customers.
Keywords: - E-commerce, Challenges, Growth Factors.
With recent technological and population development, the usage of vehicles is rapidly increasing, and at the same time the occurrence of accidents has also increased; hence, the value of human life is ignored. No one can prevent an accident, but lives can be saved by expediting the ambulance to the hospital in time. The objective of this scheme is to minimize the delay caused by traffic congestion and to provide a smooth flow for emergency vehicles. The concept is to automatically turn the traffic signal green along the path of the ambulance so that the ambulance can reach the spot in time and human life can be saved. The main server locates the ambulance through mail. At the same time, it controls the traffic lights according to the ambulance's location, so that it arrives at the hospital safely. This scheme is fully automated: it locates the emergency vehicle, controls the traffic lights, and provides the shortest path to reach the hospital in time.
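The shortest-path step could be realized with Dijkstra's algorithm over a road graph; the junction names and travel costs below are hypothetical:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over {node: [(neighbour, cost), ...]}."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back from the goal to recover the route.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Hypothetical road graph: junctions and travel times in minutes.
roads = {"ambulance": [("A", 2), ("B", 5)],
         "A": [("hospital", 3)],
         "B": [("hospital", 1)]}
```

Here the server would route the ambulance via junction A (total 5 minutes) rather than B (total 6 minutes), and set the signals along that route.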
This study investigates the characteristics of a gear system, including contact stresses, bending stresses, and the transmission errors of gears in mesh. Gearing is one of the most critical components in mechanical power transmission systems. The bending stresses in the tooth root were examined using a 3D model.
Current methods of calculating gear contact stresses use Hertz's equations and the Lewis equation, which were originally derived for contact between two cylinders. To enable the investigation of contact problems with FEM, the stiffness relationship between the two contact areas is usually established through a spring placed between them. This can be achieved by inserting a contact element between the two areas where contact occurs. The results of the three-dimensional FEM analysis from ANSYS are presented. These stresses were compared with theoretical values, and both results agree very well, indicating that the FEM model is accurate.
This report also considers the variations of the whole-body stiffness arising from gear body rotation due to bending deflection, shearing displacement and contact deformation. Many different positions within the meshing cycle were investigated. Investigation of the contact and bending stress characteristics of spur gears continues to attract immense attention from both engineers and researchers, in spite of many studies in the past. This is because advances in engineering technology demand gears with ever-increasing load capacities and speeds with high reliability, so designers need to be able to accurately predict the stresses experienced by the loaded gears.
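The Lewis equation referenced above estimates the tooth-root bending stress as sigma = W_t / (b * m * Y). A minimal numeric sketch, with an assumed tangential load, face width, module, and Lewis form factor (not values from this study), is:

```python
def lewis_bending_stress(tangential_load_n, face_width_mm, module_mm, form_factor):
    """Lewis equation: sigma = W_t / (b * m * Y), giving stress in N/mm^2 (MPa)."""
    return tangential_load_n / (face_width_mm * module_mm * form_factor)

# Assumed example: W_t = 1000 N, b = 20 mm, m = 2 mm, Y = 0.32.
sigma = lewis_bending_stress(1000.0, 20.0, 2.0, 0.32)  # about 78.1 MPa
```

In an FEM comparison such as the one in this study, a theoretical value like this would be checked against the stress extracted at the tooth root of the meshed 3D model.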
For the image authentication problem, using an encryption technique and LDPC source coding is necessary in content delivery via an unsecure medium, like peer-to-peer (P2P) file sharing, which transfers digital files from one computer to another. Images are among the most important utilities of our lives and are used in many applications. There are two main goals of image security: image encryption and authentication. Many different encoded versions of the original image may be available; in addition, an unsecure medium might tamper with the contents. We propose an efficient, accurate and reliable process using encryption and LDPC source coding for the image authentication problem. The key idea is to provide a Slepian-Wolf encoding as authentication data, which is encrypted using a cryptographic key before being sent. The key used for encryption is usually independent of the plain image. This authentication data can be decoded with side information from an authentic image.
This study aims to enlighten researchers about the details of process mining. As a new research area, process mining draws on process modelling and process analysis as well as business intelligence and data mining, and it is used as a tool that gives information about procedures. In this paper, the classification of process mining techniques, different process mining algorithms, challenges and areas of application are explained. It is concluded that process mining can be a useful technique, offering faster results and the ability to check conformance and compliance.
This work develops an application to visualize a computer keyboard using the concepts of image processing. The virtual keyboard should be accessible and functional, and it must give input to the computer. With the help of a camera, an image of the keyboard is fetched. Typing is captured by the camera as the user types on a keyboard layout simply drawn on paper or cardboard; the camera captures the finger movements while typing. This effectively provides a virtual keyboard.
As technology advances, more and more systems are introduced that look after the user's comfort. A few years ago, hard switches were used as keys. Traditional QWERTY keyboards are bulky and offer very little in terms of enhancements. Nowadays soft-touch keypads are much more popular in the market; they give an elegant look and a better feel. Current keyboards are static, and their interactivity and usability would increase if they were made dynamic and adaptable. Various on-screen virtual keyboards are available, but it is difficult to accommodate a full-sized keyboard on the screen as it hinders viewing the documents being typed. A virtual keyboard has no physical appearance. Although other forms of virtual keyboards exist, they provide solutions using specialized devices such as 3D cameras, so a practical implementation of such keyboards is not feasible. The virtual keyboard that we propose uses only a standard web camera, with no additional hardware. Thus we see that the new technology has more benefits and is more user-friendly.
In real world applications, most of the optimization problems involve more than one objective to
be optimized. The objectives in most of engineering problems are often conflicting, i.e., maximize
performance, minimize cost, maximize reliability, etc. In that case, one extreme solution would not satisfy both objective functions, and the optimal solution for one objective will not necessarily be the best solution for the other objective(s). Therefore, different solutions produce trade-offs between different objectives,
and a set of solutions is required to represent the optimal solutions of all objectives. Multi-objective
formulations are realistic models for many complex engineering optimization problems. Customized
genetic algorithms have been demonstrated to be particularly effective to determine excellent solutions to
these problems. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each
of which satisfies the objectives at an acceptable level without being dominated by any other solution. In
this paper, an overview is presented describing various multi objective genetic algorithms developed to
handle different problems with multiple objectives.
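The notion of a solution "without being dominated by any other solution" can be pinned down with a small Pareto-dominance check; minimization of all objectives is assumed in this sketch:

```python
def dominates(a, b):
    """True if a is no worse than b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto front: points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Example trade-off set, e.g. (cost, weight): (3, 4) is dominated by (2, 3).
front = non_dominated([(1, 5), (2, 3), (4, 1), (3, 4)])
```

A multi-objective genetic algorithm keeps evolving toward such a front rather than toward a single best point, which is what makes it suitable for these problems.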
Osteoarthritis (OA) is the most common form of arthritis seen in aged or older populations. It is caused by degeneration of the articular cartilage, which functions as a shock-absorbing cushion in the knee joint. OA also leads to bones sliding together, causing swelling, pain and, eventually, loss of motion. Nowadays, magnetic resonance imaging (MRI) is widely used in diagnosing the progression of osteoarthritis due to its ability to display the contrast between bone and cartilage. Usually, analysis of MRI images is done manually by a physician, which is unpredictable, subjective and time-consuming. Hence, there is a need to develop an automated system to reduce the processing time. In this paper, a new automatic knee OA detection system based on feature extraction and an artificial neural network is developed. Different features, viz. GLCM texture, statistical and shape features, are extracted using different image processing algorithms. The detection system consists of four stages: pre-processing with ROI cropping, segmentation, feature extraction, and classification by a neural network. The technique achieves 98.5% classification accuracy at the training stage and 92% at the testing stage.
Keywords — Artificial Neural Network (ANN), Gray Level Co-occurrence Matrix (GLCM), Knee Joint, Magnetic Resonance Imaging (MRI), Osteoarthritis (OA).
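As a toy illustration of the GLCM texture features mentioned in the abstract, the following hand-rolled sketch (not the authors' pipeline) builds a co-occurrence matrix for a tiny quantized image and derives the contrast feature from it:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for pixel pairs at offset (dy, dx)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum over (i - j)^2 * p(i, j)."""
    idx = np.arange(p.shape[0])
    return float(((idx[:, None] - idx[None, :]) ** 2 * p).sum())

# 2-level toy image: every horizontal pair is (0, 1), so the contrast is 1.0.
p = glcm(np.array([[0, 1], [0, 1]]), levels=2)
```

In practice a full feature vector (contrast, energy, homogeneity, etc., over several offsets) would be fed to the neural network classifier.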
Free space optics is a high-bandwidth medium that supports maximum data rates. Demand for large data speed capacity has been increasing exponentially due to the massive spread of the internet. With the growing transmission rates and demand in the field of optical communication, electronic regeneration has become more expensive. With the introduction of power optical amplifiers, the cost of converting optical signals to electronic
combinations of hybrid amplifiers have been studied and emerged in FSO systems. Their performances have been compared on the basis of transmission distance.
This paper presents the design and construction of a fuel monitoring and electronic dispenser control system for fuel stations. It consists of two sections: a hardware implementation for the control section and a software implementation section. Some fuel stations that use flow rate sensors cannot satisfy both customers and dealers, because the fuel volume can increase or decrease with temperature changes depending on place and time. The proposed system solves this disadvantage by controlling the compensation in volume flow rate with a controller. A PIC16F877A is used as the controller to control the time taken for each customer request entered from the keypad, and an LCD (liquid crystal display) is used as the display unit. A computer is also used as a control station via VB.Net, from which the status of the fuel levels is monitored. MikroC is used for the software implementation of the system.
Keywords — fuel compensation system, volume flow rate, controller, software.
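The volume compensation can be sketched as a linear thermal-expansion correction back to a reference temperature; the expansion coefficient below is an assumed figure for petrol, not a value from the paper:

```python
def compensated_volume(observed_litres, temp_c, ref_temp_c=15.0, beta_per_c=0.00095):
    """Correct a dispensed volume back to the reference temperature.

    beta_per_c is an assumed volumetric expansion coefficient for petrol
    (per degree Celsius); the real system would calibrate this per fuel type."""
    return observed_litres * (1.0 - beta_per_c * (temp_c - ref_temp_c))

# 10 L dispensed at 35 C corresponds to slightly less fuel at the 15 C reference.
corrected = compensated_volume(10.0, 35.0)
```

A controller applying such a correction keeps the billed volume consistent across hot and cold dispensing conditions, which is the fairness problem the paper identifies.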
With the rise of transportation system design, many data logger designs have been developed for safety. In our country, there are many accidents on highways, caused both by drivers' faults and by road construction. To reduce these conditions, safety systems such as obstacle detection, vehicle declination alarms, temperature and smoke level display units, and signboard warnings on road sides should be used for both drivers and passengers. In addition, a data logger system for the whole vehicle must be equipped for safety. With a PIC microcontroller as the embedded device, this logger design was constructed with many sensors and a C# service-based database. Using Arduino boards, a vehicle detection sensing circuit, a checkpoint radio signal sensing circuit for dangerous road sectors, and a Hall-effect magnetic wheel revolution sensing circuit were designed to connect with the main PIC microcontroller and a personal computer. Real-time results were displayed on a C# graphical user interface, and the vehicle data log could be easily exported to a Microsoft Excel report.
Keywords — Arduino, alarm and alert system, C# service-based database, PC-based control system, vehicle data logger.
Generally, predicting the behaviour of a material at high temperature is very difficult. The design of components that are subjected to, or work at, high temperature must consider testing at elevated temperature. Hot tensile testing (HTT) is the method of tensile testing a material at elevated temperature. The materials used for automotive or aerospace applications are mostly subject to cyclic loading and high temperature, and sometimes involve high-frequency vibrations. High-strength aluminium alloys are one class of materials widely used in the automotive and aerospace industries. In this work, the A413 material, which can be used for pistons, is tested by HTT at different temperatures and strain rates.
Keywords — HTT, high temperature, strain rate, piston, automotive or aerospace.
A mediator is required for communication between a deaf person and a second person, but the mediator must know the sign language used by the deaf person. This is not always possible, since there are multiple sign languages for multiple languages. It is difficult for a deaf person to understand what a second person speaks; the deaf person must keep track of the lip movements of the second person to know what is being said. But lip movements do not give adequate efficiency and accuracy, since facial expressions and speech might not match. To overcome these problems, we propose an Android application for recognizing sign language using hand gestures, with a facility for users to define and upload their own sign language into the system. The features of this system are real-time conversion of gesture to text and speech. For two-way communication between the deaf person and the second person, the speech of the second person is converted into text. The processing steps include gesture extraction, gesture matching, and conversion of text to speech and vice versa. The system is useful not only for the deaf community but also for common people who migrate to different regions and do not know the local language.
International Journal of Engineering Research and Development (IJERD), IJERD Editor
WEB IMAGE RETRIEVAL USING CLUSTERING APPROACHES (cscpconf)
Image retrieval is an active area, and we propose a new approach to retrieve images from a large image database. To this end, we propose an algorithm to represent images using divisive-based and partition-based clustering approaches. The HSV color components and the Haar wavelet transform are used to extract image features. These features are used to segment an image to obtain objects. For segmenting an image, we use a modified k-means clustering algorithm to group similar pixels together into K groups with cluster centers. To modify k-means, we propose a divisive clustering algorithm to determine the number of clusters, which is then fed back to k-means to obtain significant object groups. In addition, we also discuss a similarity distance measure using a threshold value and object uniqueness to quantify the results.
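The k-means step, grouping similar pixels into K clusters around cluster centers, can be sketched in a few lines of NumPy (plain k-means, without the divisive modification the paper proposes):

```python
import numpy as np

def kmeans(pixels, k, iters=10, seed=0):
    """Plain k-means on an (n, d) array of pixel feature vectors."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center (Euclidean distance).
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers
```

In the paper's scheme, the number of clusters k would come from the divisive step rather than being fixed in advance.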
OBJECT DETECTION, EXTRACTION AND CLASSIFICATION USING IMAGE PROCESSING TECHNIQUE (Journal For Research)
Domestic refrigerators are widely used household appliances, and a large amount of energy is consumed by these systems. A phase change material (PCM) is a substance that can store or release a significant amount of heat energy by changing phase, e.g., from liquid to vapour or vice versa. Reducing temperature fluctuation and improving system performance are the main reasons for using a PCM: it enhances the heat transfer rate and thus improves the COP of refrigeration as well as the quality of frozen food. The release and storage rate of a refrigerator depends upon its characteristics and properties; using a phase change material for a certain thermal load, it is found that the COP of a conventional refrigerator is increased. The phase change material is placed in a manually built chamber surrounding the evaporator chamber of a conventional refrigerator, so the heat from the load travels from the refrigerator cabin to the evaporator, and from the evaporator to the phase change material by conduction. This system improves the performance of a household refrigerator by increasing its compressor cut-off time and thereby minimizing electrical energy usage. The main objectives are to improve the performance, cooling time period and storage capacity, and to maintain a constant cooling effect for longer during power cut-off hours using a phase change material.
The complexity of landscape pattern mining is well known due to non-linear spatial image formation and the inhomogeneity of satellite images. The LandEx tool of the literature needs several seconds to answer an input image pattern query, and the time taken by content-based image retrieval depends on the complexity of the input query. This paper focuses on designing and implementing a training dataset to train an NML (neural network based machine learning) algorithm, in order to reduce the search time and improve result accuracy. The performance evaluation of the proposed NML CBIR (content-based image retrieval) method will be used to compare satellite and natural images in terms of increased speed and accuracy.
Keywords: Spatial Image, Satellite image, NML, CBIR
Color and texture based image retrieval (eSAT Journals)
Abstract: Content-based image retrieval (CBIR) is a vital research area for manipulating bulky image databases and records. Unlike the conventional method, where images are searched on the basis of words, a CBIR system uses visual contents to retrieve images. In content-based image retrieval systems, texture and color features have been the primal descriptors. We use HSV color information and the mean of the image as texture information. The performance of the proposed scheme is calculated on the basis of precision, recall and accuracy. As a result, the blend of color and texture features of the image provides a strong feature set for image retrieval.
Keywords: image retrieval, HSV color space, color histogram, image texture.
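Precision and recall, the evaluation measures named above, reduce to simple set arithmetic over retrieved and relevant image identifiers:

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of retrieved images that are relevant;
    recall = fraction of relevant images that were retrieved."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 4 images retrieved, 3 relevant overall, 2 of them retrieved.
p, r = precision_recall([1, 2, 3, 4], [2, 4, 6])  # p = 0.5, r = 2/3
```

Reporting both numbers exposes the usual trade-off: retrieving more images raises recall but tends to lower precision.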
Web Image Retrieval Using Visual Dictionary (ijwscjournal)
In this research, we propose a semantic-based image retrieval system to retrieve a set of relevant images for a given query image from the Web. We use a global color space model and Dense SIFT feature extraction to generate a visual dictionary with the proposed quantization algorithm. The images are transformed into sets of features, which are used as inputs to our quantization algorithm to generate the code words that form the visual dictionary. These codewords are used to represent images semantically as visual labels using Bag-of-Features (BoF). The histogram intersection method is used to measure the distance between the input image and the set of images in the image database, in order to retrieve similar images. The experimental results are evaluated over a collection of 1000 generic Web images to demonstrate the effectiveness of the proposed system.
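The histogram intersection measure mentioned above can be sketched as follows; normalizing by the database histogram's total is one common convention (Swain-Ballard style), assumed here:

```python
def histogram_intersection(query_hist, db_hist):
    """Similarity in [0, 1]: sum of bin-wise minima over the db histogram's total."""
    return sum(min(q, d) for q, d in zip(query_hist, db_hist)) / sum(db_hist)

# Two 3-bin histograms: the shared mass is 1 + 3 + 5 = 9 out of 10.
score = histogram_intersection([2, 3, 5], [1, 3, 6])  # 0.9
```

Ranking database images by this score and returning the top matches is the retrieval step the abstract describes.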
Applications of spatial features in CBIR: a survey (csandit)
With advances in the computer technology and the World Wide Web there has been an
explosion in the amount and complexity of multimedia data that are generated, stored,
transmitted, analyzed, and accessed. In order to extract useful information from this huge
amount of data, many content based image retrieval (CBIR) systems have been developed in the
last decade. A typical CBIR system captures image features that represent image properties
such as color, texture, or shape of objects in the query image, and tries to retrieve images from the database with similar features. Retrieval efficiency and accuracy are the important issues in designing a content-based image retrieval system. Shape and spatial features are quite easy and simple to derive, and effective. Researchers are moving towards finding spatial features and the scope of implementing these features into the image retrieval framework for reducing the semantic gap. This survey paper focuses on a detailed review of different methods and their evaluation techniques used in recent works based on spatial features in CBIR systems.
Finally, several recommendations for future research directions have been suggested based on
the recent technologies.
Content Based Image Retrieval Using Dominant Color and Texture Features (IJMTST Journal)
The purpose of this paper is to describe our research on different feature extraction and matching techniques in designing a content-based image retrieval (CBIR) system. Due to the enormous increase in image database sizes, as well as their vast deployment in various applications, the need for CBIR development arose. CBIR is the retrieval of images based on features such as color and texture. Image retrieval using the color feature alone cannot provide a good solution for accuracy and efficiency; the most important features are color and texture. In this paper, techniques are used for retrieving images based on their content, namely dominant color, texture, and a combination of both. The technique verifies the superiority of image retrieval using multiple features over a single feature.
These days we see an increased number of heart diseases, including an increased risk of heart attacks. Our proposed system uses sensors that detect a person's heart rate through heartbeat sensing, even if the person is at home. The sensor is interfaced to a microcontroller that checks heart rate readings and transmits them over the internet. The user may set both high and low heart beat limits. After setting these limits, the system starts monitoring, and as soon as the patient's heart beat goes above the high limit, the system sends an alert to the controller, which transmits it over the internet and alerts the doctors as well as concerned users; the system likewise alerts for low heartbeats. Whenever a user logs on for monitoring, the system also displays the live heart rate of the patient. Thus, concerned people may monitor the heart rate and receive an immediate alert of a heart attack from anywhere, and the person can be saved in time. Internet of Things (IoT) technology developments allow humans to control a variety of high-tech equipment in daily life; one example is the ease of checking health using a phone, tablet or laptop. We mainly focus on safety measures for both driver and vehicle using three types of sensors: a heartbeat sensor, a traffic light sensor and a level sensor. The heartbeat sensor monitors the driver's heartbeat rate constantly and helps prevent accidents through IoT-based control.
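The limit check at the core of the alerting logic can be sketched as a threshold function; the 60-100 bpm default band is an assumed example, not a medical recommendation:

```python
def check_heart_rate(bpm, low=60, high=100):
    """Return an alert string when a reading leaves the configured band."""
    if bpm < low:
        return "ALERT: heart rate below %d bpm" % low
    if bpm > high:
        return "ALERT: heart rate above %d bpm" % high
    return "normal"
```

In the described system, the microcontroller would run such a check on each reading and forward any alert string over the internet to the doctors and concerned users.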
ABSTRACT: The success of the cloud computing paradigm is due to its on-demand, self-service, and pay-per-use nature. Public key encryption with keyword search applies only to circumstances where the keyword ciphertext can be retrieved by a specific user, and it supports only single-keyword matching. In existing searchable encryption schemes, either the communication mode is one-to-one, or only single-keyword search is supported. This paper proposes a searchable encryption that is based on attributes and supports multi-keyword search. Searchable encryption is a primitive which not only protects the data privacy of data owners but also enables data users to search over the encrypted data. Most existing searchable encryption schemes are in the single-user setting; there are only a few schemes in the multiple-data-user setting, i.e., encrypted data sharing. Among these schemes, most of the early techniques depend on a trusted third party with interactive search protocols or need cumbersome key management. To remedy these defects, the most recent approaches borrow ideas from attribute-based encryption to enable attribute-based keyword search (ABKS).
Cloud computing is one of the emerging techniques for processing big data. A large collection or volume of data is known as big data. Processing big data (MRI images and DICOM images) normally takes more time compared with other data. The main tasks of handling big data can be solved using the concepts of Hadoop, and enhancing the Hadoop concept helps the user to process large sets of images or data. The Advanced Hadoop Distributed File System (AHDF) and MapReduce are the two default main functions used to enhance Hadoop. HDFS is the Hadoop file storage system, used for storing and retrieving data. MapReduce is the combination of two functions, namely map and reduce: map is the process of splitting the inputs, and reduce is the process of integrating the outputs of map. Recently, medical fields have experienced problems like machine failure and fault tolerance while processing results for scanned data. A unique optimized time scheduling algorithm, called Advanced Dynamic Handover Reduce Function (ADHRF), is introduced in the reduce function. The enhancement of Hadoop and the cloud, together with the introduction of ADHRF, helps to overcome the processing risks and to obtain an optimized result with less waiting time and a reduced error percentage in the output image.
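The map and reduce roles described above (map splits the input into pairs, reduce integrates map's output) can be illustrated with the classic word-count sketch:

```python
from collections import defaultdict

def map_phase(chunks):
    """Map: split each input chunk into (word, 1) pairs."""
    return [(word, 1) for chunk in chunks for word in chunk.split()]

def reduce_phase(pairs):
    """Reduce: integrate the mapped pairs into per-word totals."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

counts = reduce_phase(map_phase(["a b a", "b c"]))  # {'a': 2, 'b': 2, 'c': 1}
```

In a real Hadoop job the map tasks run in parallel over HDFS blocks and the framework shuffles the pairs to the reducers; the paper's ADHRF algorithm schedules work inside that reduce stage.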
Text mining has become one of the trending practices adopted in several research fields, for example computational linguistics, Information Retrieval (IR) and data mining. Natural Language Processing (NLP) methods are utilized to extract knowledge from text written by people. Text mining parses an unstructured form of data to deliver meaningful information patterns in the shortest time. Social networking sites are a great source of communication, as most people today use these sites in their daily lives to keep connected with each other. It has become a common practice not to write sentences with correct grammar and spelling. This practice may introduce various kinds of ambiguities (lexical, syntactic and semantic), and because of such unclear data it is difficult to find the real information. Accordingly, we are conducting a study with the aim of surveying the different text mining techniques used to process textual queries on social media sites. This review aims to describe how studies of social media have applied text analysis and text mining techniques to identify the key topics in the data. The study concentrates on examining text mining studies related to Facebook and Twitter, the two dominant social networks in the world. The results of this survey can serve as baselines for future text mining research.
Colorectal cancer (CRC) has potential to spread within the peritoneal cavity, and this transcoelomic
dissemination is termed “peritoneal metastases” (PM).The aim of this article was to summarise the current
evidence regarding CRC patients at high risk of PM. Colorectal cancer is the second most common cause of cancer
death in the UK. Prompt investigation of suspicious symptoms is important, but there is increasing evidence that
screening for the disease can produce significant reductions in mortality.High quality surgery is of paramount
importance in achieving good outcomes, particularly in rectal cancer, but adjuvant radiotherapy and chemotherapy
have important parts to play. The treatment of advanced disease is still essentially palliative, although surgery for
limited hepatic metastases may be curative in a small proportion of patients.
Heat transfer in pipes is a distinctive kind of procedure employed in heat exchanger which transfers great
deal of heat because of the impact of capillary action and phase change heat transfer principle. Late improvement
in the heat pipe incorporates high thermal conductivity liquids like Nano liquids, fixed inside to extricate the most
extreme heat. This paper audits, impact of different factors, for example, thermal pipe tilt edge, charged measure
of working liquid, nano particles sort, size, and mass/volume part and its impact on the change of thermal
proficiency, thermal exchange limit and decrease in thermal protection. The Nano liquid arrangement and the
examination of its thermal attributes likewise have been investigated. The retained sun oriented vitality is
exchanged to the working liquid streaming in the pipe. The execution of the framework is affected by thermal
exchange from tube to working liquid, with least convective misfortunes, which must be considered as one of the
essential plan factor. In tube and channel streams, to improve the rate of heat exchange to the working liquid,
detached enlargement methods, for example, contorted tapes and swirl generators are employed from the fluid
flow path. The variation of heat transfer coefficient and pressure drop in the pipe flow for water and water based
Al2O3 Nano fluids at different volume concentrations and twisted tapes are studied.
Now-a-day’s pedal powered grinding machine is used only for grinding purpose. Also, it requires lots of efforts
and limited for single application use. Another problem in existing model is that it consumed more time and also has
lower efficiency. Our aim is to design a human powered grinding machine which can also be used for many purposes
like pumping, grinding, washing, cutting, etc. it can carry water to a height 8 meter and produces 4 ampere of electricity
in most effective way. The system is also useful for the health conscious work out purpose. The purpose of this technical
study is to increase the performance and output capacity of pedal powered grinding machine.
This project proposes a distributed control approach to coordinate multiple energy storage units
(ESUs) to avoid violation of voltage and network load constraints ESU as a buffer can be a promising
solution which can store surplus power during the peak generation periods and use it in peak load
periods.In ESU converters both active and reactive power are used to deal with the power quality
issues in distribution network ESU’s reactive power is proposed to be used for voltage support, while
the active power is to be utilized in managing network loading.
The steady increase in non-linear loads on the power supply network such as, AC variable speed drives,
DC variable Speed drives, UPS, Inverter and SMPS raises issues about power quality and reliability. In this
subject, attention has been focused on harmonics . Harmonics overload the power system network and cause
reliability problems on equipment and system and also waste energy. Passive and active harmonic filters are
used to mitigate harmonic problems. The use of both active and passive filter is justified to mitigate the
harmonics. The difficulty for practicing engineers is to select and deploy correct harmonic filters , This paper
explains which solutions are suitable when it comes to choosing active and passive harmonic filters and also
explains the mistakes need to be avoided.
This Paper is aimed at analyzing the few important Power System equipment failures generally
occurring in the Industrial Power Distribution system. Many such general problems if not resolved it may
lead to huge production stoppage and unforeseen equipment damages. We can improve the reliability of
Power system by simply applying the problem solving tool for every case study and finding out the root cause
of the problem, validation of root cause and elimination by corrective measures. This problem solving
approach to be practiced by every day to improve the power system reliability. This paper will throw the light
and will be a guide for the Practicing Electrical Engineers to find out the solution for every problem which
they come across in their day to day maintenance activity.
International Journal of Engineering and Techniques - Volume 2 Issue 3, May – June 2016
ISSN: 2395-1303 http://www.ijetjournal.org Page 47
An Efficient High Dimensional Indexing Method For Content Based Image
Retrieval (CBIR)
Ruchi Kumari, Sandhya Tarar
School of Information and Communication Technology, Gautam Buddha University, Greater Noida, India.
1. Introduction
In recent years the accumulation of huge image collections has made manual annotation impractical. Content-based image retrieval (CBIR) was popularized to overcome this problem: instead of relying on the common scheme of text-based retrieval, it retrieves images by their visual content. CBIR rests on three basic components: multidimensional indexing [6], retrieval system design, and the extraction of visual features. The color of an image is characterized with methods such as histograms and averaging; texture is captured with vector or transform quantization; and shape is described using morphological or gradient operators. Image processing can be viewed as a form of signal processing whose input is an image or a group of frames derived from images, and an image retrieval system is a method that permits the user to browse, query and retrieve images. Content-based retrieval is thus the series of actions for fetching the images that answer a query from a large database containing many images; the images are selected according to the information the user needs. To recover the relevant images from the image database, low-level properties such as texture, color and shape are used. CBIR performs well with all kinds of images, and retrieval is carried out by comparing these features between the query image and the database images.
Abstract:
Content-based image retrieval (CBIR) is the retrieval of images with respect to visual appearance, such as texture, shape and color. The methods, components and algorithms adopted in CBIR are commonly drawn from pattern recognition, signal processing and computer vision. In this paper the shape and color features are extracted by means of the wavelet transform and the color histogram, and a new content-based retrieval scheme is proposed. Algorithms are presented for shape, color and texture feature extraction; the discrete wavelet transform is used together with the Euclidean distance, and the clusters are computed with a modified K-Means clustering technique. The query image is then compared against the database images. The queries are executed in MATLAB. The system is built by combining fragmentation and grid-means modules, feature extraction, and K-nearest-neighbor clustering. The results obtained are compared against other algorithms for the quality of the retrieved image features.
Keywords — CBIR, Indexing Algorithm, DWT, K-Means, Color Histogram.
The central elements of content-based image retrieval are the visual features: the texture, shape and color of the pictures. These features fall into two broad kinds, global and local; object identification is most conveniently done with local features. Another element used for retrieval is related content: images can be gathered by exploiting the content available within them. Relevance feedback is also widely used; it is a popular component that helps to find the appropriate images accurately by gathering feedback from the users, so that the system retrieves the images the user actually wants. It is therefore regarded as an easy and reliable method for recovering images in content-based retrieval systems.
2. Background Study
Content-Based Image Retrieval
Content-based retrieval is widely used in fields such as the military, education, web image classification and biomedicine. Examples of modern CBIR systems are QBIC (Query By Image Content), VisualSEEk (a web-based tool for querying videos and images) and VIPER (Visual Information Processing for Improved Retrieval).
Discrete Wavelet Transform
Textures are modeled as quasi-periodic patterns with characteristic spatial and frequency descriptions. The wavelet transform converts the picture into a multi-scale representation carrying both frequency and spatial properties, which permits efficient multi-scale analysis of pictures at low computational cost. Under this representation, curves, signals and images can be described by a coarse-level characterization together with detail coefficients ranging from wide scales down to fine scales. Well-known wavelet families include the Mexican Hat, Haar, Morlet, Daubechies and Coiflet wavelets. Among these, the Haar wavelet is the most reliable and widely used, whereas the Daubechies wavelets have a particular structure that makes them essential for modern wavelet applications.
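The single-level decomposition described above can be sketched in Python (the paper itself works in MATLAB). This is a minimal averaging-and-differencing Haar variant over a plain list-of-lists image, a sketch rather than the paper's implementation:

```python
def haar_dwt2(img):
    """Single-level 2-D Haar DWT: returns LL, LH, HL, HH sub-bands.
    `img` is a list of lists with even width and height; the averaging
    (lo) and differencing (hi) passes run first over rows, then columns."""
    def rows_pass(m):
        lo, hi = [], []
        for row in m:
            lo.append([(row[2 * i] + row[2 * i + 1]) / 2 for i in range(len(row) // 2)])
            hi.append([(row[2 * i] - row[2 * i + 1]) / 2 for i in range(len(row) // 2)])
        return lo, hi

    def cols_pass(m):
        t = [list(c) for c in zip(*m)]          # transpose, filter columns
        lo, hi = rows_pass(t)
        return [list(c) for c in zip(*lo)], [list(c) for c in zip(*hi)]

    L, H = rows_pass(img)
    LL, LH = cols_pass(L)   # approximation and horizontal detail
    HL, HH = cols_pass(H)   # vertical and diagonal detail
    return LL, LH, HL, HH
```

Repeating the decomposition on the LL sub-band yields the multi-scale representation the text refers to.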
K-Means
K-Means is a transparent algorithm, mainly used to solve clustering problems; it is commonly described as one of the simplest unsupervised learning algorithms for clustering. The procedure partitions a given data set over a fixed number of clusters, with K clusters fixed in advance. The main idea is to define K centroids, one for each cluster. These centroids should be placed carefully, because different placements can produce different results; the best suggestion is to locate them as far from one another as possible. The next step is to assign every point of the data set to its nearest centroid. When no point remains unassigned, the first step is complete and an initial grouping has been made. The K centroids are then recomputed, and a new assignment is carried out between the data points and the nearest new centroids. This creates a loop: the centroids change step by step until no more changes occur, i.e. the centroids no longer move. The algorithm thus aims to minimize a squared-error objective function.
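The loop described above can be sketched as a minimal pure-Python K-Means, with random initial centroids standing in for the careful placement the text recommends (a sketch, not the paper's modified variant):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-Means: pick k initial centroids, then repeat
    assign-to-nearest-centroid and recompute-centroid steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        for i, c in enumerate(clusters):
            if c:  # recompute centroid as the mean of its cluster
                centroids[i] = [sum(x) / len(c) for x in zip(*c)]
    return centroids, clusters
```

On a small two-cluster data set the centroids converge to the two group means within a few iterations.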
The research work focuses on the following research problems:
The automatic creation of textual annotations for a broad spectrum of images is not achievable in image retrieval.
Manually describing images is an inconvenient and costly task for large databases.
Manual annotations for content-based image retrieval are context-sensitive, subjective and incomplete.
Text-based searches are commonly carried out by Yandex and Google, but the results are imperfect.
Fundamentally, the work is based on the content of the picture. Text-based image retrieval has drawbacks such as high cost, loss of information, poor efficiency and time consumption. To solve these issues, a content-based image retrieval system is used for retrieving the pictures.
3. Proposed Methodology
In this project, algorithms are implemented for shape, color and texture feature extraction and for matching identical texture and color. The discrete wavelet transform is used together with the Euclidean distance, and the clusters are computed using a modified K-Means clustering. The shape, color and texture of each image are first extracted [4]; a similarity measure between the query image and the database images is then computed. The combined approach retrieves the correct image and also reduces the gap between the high-level features. The modified K-Means algorithm needs less time for computing and comparing than the other algorithms, and it works well for small as well as large databases.
Fig: Content-Based Image Retrieval Process
1. Image Input
This stage supplies the input image for the query. It defines the width and height attributes, which describe the size of the image in pixels.
2. Image Preprocessing
Image Resizing
The input images are acquired by various cameras, which results in images of different sizes and disturbs the end result. To avoid this, the images are first resized.
RGB to GRAY Scale Conversion
RGB Color Model
In the RGB color model, every color appears as a combination of the basic spectral components red, green and blue. The shade of a pixel is created from these three components according to their respective intensities; the color components are commonly known as color planes or color channels [10].
Gray Color Model
A grayscale (or gray-level, intensity) image is one in which each pixel value indicates an intensity. The image generated by the sensors and the image-receiving component signifies the intensity and brightness of the light of a picture [6][7]; the pixels are indexed by two-dimensional spatial coordinates. An image that records only intensity is thus known as a grayscale image.
This step converts the RGB value of each pixel into a grayscale value.
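The RGB-to-gray step can be sketched with the standard luminosity weights (the ITU-R BT.601 rule that MATLAB's rgb2gray also uses); `rgb_to_gray` is a hypothetical helper, not the paper's code:

```python
def rgb_to_gray(r, g, b):
    """Convert one RGB pixel to a grayscale intensity using the
    standard luminosity weights (0.299 R + 0.587 G + 0.114 B)."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

The weights sum to 1, so white (255, 255, 255) maps to 255 and black to 0.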
GRAY TO BINARY Image Conversion
The binarisation of the image is done using Otsu's approach; in MATLAB this is done with the graythresh function.
Gray Thresh Function
The graythresh operation computes a global image threshold using Otsu's method. In MATLAB the command is level = graythresh(I), which calculates a global threshold level; it can be used to convert an intensity image into a binary image. The graythresh function uses Otsu's approach, which selects the threshold that minimises the intraclass variance of the black and white pixels. The graythresh operation ignores any nonzero imaginary part of I.
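The threshold selection behind graythresh can be sketched in Python; a minimal histogram-based Otsu, assuming integer intensities in 0-255 (maximising between-class variance is equivalent to minimising intraclass variance):

```python
def otsu_threshold(gray_levels):
    """Otsu's global threshold over a flat list of 0-255 intensities:
    pick t maximising the between-class variance w0*w1*(m0-m1)^2."""
    n = len(gray_levels)
    hist = [0] * 256
    for v in gray_levels:
        hist[v] += 1
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(255):
        w0 += hist[t]          # pixels at or below t
        sum0 += t * hist[t]
        w1 = n - w0            # pixels above t
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum0 / w0                     # mean of the dark class
        m1 = (total_sum - sum0) / w1       # mean of the bright class
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bimodal image the chosen threshold falls between the two modes.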
Morphological Processing
Operations such as closing and filling are applied with a disk-shaped structuring element to bridge the gaps in the images.
Feature Extraction using DWT
For every input image, the color and shape features are extracted with the help of the discrete wavelet transform.
Shape Feature Extraction
The following shape features are studied:
Image boundary
Image edge
Image Boundary
1. The first step truncates the image.
2. The white pixels forming the border are counted as the seed calculation.
Image Edge
1. Threshold values are used for Sobel edge detection on the grayscale image.
2. AND operations are applied.
3. The surplus pixels are estimated.
4. The extra pixels are partitioned within the region of the image.
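The Sobel step above can be sketched as follows; a minimal Python version with a hypothetical threshold parameter, leaving the one-pixel border unmarked:

```python
import math

def sobel_edges(img, threshold):
    """Thresholded Sobel edge detection on a grayscale image
    (list of lists): mark pixels whose gradient magnitude,
    hypot(gx, gy), reaches the threshold."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if math.hypot(gx, gy) >= threshold:
                edges[y][x] = 1
    return edges
```

A vertical intensity step in the image produces a column of marked edge pixels.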
Color Feature Extraction
The color features are estimated by:
Totalling the RGB color values.
Totalling the L*a*b* values, where
L* - total value of lightness,
a* - total value of color channel a,
b* - total value of color channel b.
Selecting the necessary seed pixels.
Finding the values of three further seed color features by computing the X, Y, Z components used to derive L*a*b*.
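The step of totalling the RGB color values can be sketched as a per-channel mean over the image's pixels; `mean_rgb` is a hypothetical helper name:

```python
def mean_rgb(pixels):
    """Per-channel mean over an image's pixels, given as
    a list of (r, g, b) tuples; returns (mean_r, mean_g, mean_b)."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))
```

The same pattern applies to the L*a*b* channels once the pixels are converted to that space.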
Modified K-Means Clustering
The modified K-Means clustering algorithm first fixes the number K of clusters. K-Means then assigns the retrieved images to the closest cluster, based on the features it extracts from the pictures. The algorithm iterates until only small changes remain in the positions of the feature points in every cluster [12][13].
Algorithm for Modified K-Means
K-Means is a well-known clustering algorithm, applicable to image segmentation, bioinformatics and data mining. It is easiest to understand on small data sets; in this project the modified K-Means algorithm is applied to large data sets as well. The modified algorithm avoids falling into local optima and also reduces the dependence on the cluster-error criterion.
Algorithm: ModifiedKMeans(S, K), S = {x1, x2, ..., xn}
Input: the number of clusters k and a data set of n items.
Output: a set of K clusters (Cij) which minimises the cluster-error criterion.
Algorithm:
1. Compute the distances between all data points in the set D.
2. Find the closest pair of data points in D, create a data-point set Ap (1 <= p <= k+1) containing these two points, and delete them from D.
3. Find the data point in D that is closest to the set Ap, add it to Ap, and delete it from D.
4. Repeat step 3 until the number of data points in Ap reaches (n/k).
5. If p < k+1, set p = p+1, find the next pair of data points in D with the smallest distance between them, and create another data-point set Ap; otherwise stop.
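One hedged reading of steps 1-5 can be sketched as follows; `seed_clusters` is a hypothetical name, and the tie-breaking and stopping details are assumptions where the wording above is ambiguous:

```python
import math

def seed_clusters(points, k):
    """Sketch of the seeding loop: repeatedly take the closest
    remaining pair as a new group, then grow each group with its
    nearest remaining point until it holds roughly n/k items."""
    D = list(points)                 # remaining data points (tuples)
    target = max(2, len(points) // k)

    groups = []
    while len(groups) < k and len(D) >= 2:
        # closest remaining pair starts a new group
        a, b = min(((p, q) for i, p in enumerate(D) for q in D[i + 1:]),
                   key=lambda pq: math.dist(*pq))
        group = [a, b]
        D.remove(a)
        D.remove(b)
        # grow the group with the nearest remaining point until it reaches n/k
        while len(group) < target and D:
            nearest = min(D, key=lambda p: min(math.dist(p, g) for g in group))
            group.append(nearest)
            D.remove(nearest)
        groups.append(group)
    return groups
```

On two well-separated pairs of points with k = 2, each pair becomes one seed group.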
4. Implementation
The implementation of content-based image retrieval raises [8] various challenges: high-dimensional descriptors, scalable indexing for multimedia databases [1][2][3], huge numbers of entries, memory and disk space requirements, and accurate retrieval [5].
Content-based multidimensional techniques therefore need a fast and scalable retrieval system. Rapid indexing of high-dimensional image data [1][2][3] can be achieved by constructing large-scale content-based image retrieval systems.
In this paper, distinct algorithms are implemented for color, texture and shape feature extraction and are verified for identical color and texture.
The discrete wavelet transform is used to calculate the Euclidean distance, and the clusters are computed with the modified K-Means clustering [13].
The image features are thus recovered by means of the discrete wavelet transform. In the enhanced arrangement, the images are verified with respect to the color feature.
RGB color features are used here, and descriptive-statistics criteria are selected for the feature extraction.
The wavelet transform captures the low-frequency components effectively. It provides a multiresolution decomposition with uncorrelated coefficients, which supports recognition of the image at several scales.
The main implementation in this paper is the application of the modified K-Means algorithm. It is a well-known clustering algorithm, widely adopted in bioinformatics, image segmentation and data mining; it performs well for small as well as large databases, and it avoids dependence on the cluster-error criterion.
The experimental image database contains 200 images from varied categories, gathered from various websites worldwide, from the Corel Photo Collection compact discs, and from the Image Bank compact discs.
The objective of the retrieval experiment is to compare the retrieval efficiency with and without database categorisation. Interpretation is easy because of the simple calculations and the results obtained from these measures.
From each image category, 20 images are selected at random and the retrieval process is carried out; in total, almost 100 query images are used for the retrieval experiments. For each query, the number of correctly retrieved images relative to the total number of retrieved images is recorded.
The algorithm decomposes the pictures so that the energy of each decomposed image is computed at the same scale. The energy-level algorithm proceeds by breaking the whole image into four equal sub-images; the energy levels of the sub-bands are computed, and the low-low sub-band image is decomposed further. This process is repeated for all pictures in the database. The query image is then searched for in the image collection, and the mechanism iterates until every picture in the database has been compared with the query picture. After the Euclidean distance computation, the collection of Euclidean distances is sorted, and the five closest pictures are presented as the output of the texture search.
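The final ranking step, sorting database images by the Euclidean distance between their feature vectors and the query's and returning the closest matches, can be sketched as follows (`top_matches` and the dict-of-features layout are assumptions for illustration):

```python
def top_matches(query_feats, db_feats, k=5):
    """Rank database images (a dict mapping image name -> feature
    vector) by Euclidean distance to the query features and return
    the names of the k closest images."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    ranked = sorted(db_feats, key=lambda name: dist(query_feats, db_feats[name]))
    return ranked[:k]
```

With k = 5 this yields the five most similar pictures described above.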
5. Results
6. Conclusion
The investigation in this study shows that the implemented content-based image retrieval system performs best for the purpose of image retrieval, and its accuracy is also improved. The discrete wavelet transform and the K-Means algorithm are combined to accomplish the best outcomes, and the analysis compares the retrieved images with images gathered from the internet. For grid modulation and fragmentation, the modified K-Means (Sun, Peng and Hwang, 2009) was implemented. The modified K-Means is more productive than plain K-Means because it avoids empty clusters. The outcomes of the analysis show that the modified K-Means produces the clusters without error, and the feature extraction with the discrete wavelet transform performed well in the implemented work.
7. References
[1]. S. Suchitra and S. Chitrakala, "A Survey on Scalable Image Indexing and Searching," in Proc. IEEE Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), 2013.
[2]. Md. Khalid Imam Rahmani, M. A. Ansari and Amit Kumar, "An Efficient Indexing Algorithm for CBIR," in Proc. IEEE International Conference on Computational Intelligence and Communication Technology (CICT), 2015.
[3]. A. Yoshitaka and T. Ichikawa, "A Survey on Content-Based Retrieval for Multimedia Databases," IEEE Trans. on Knowledge and Data Engineering, Vol. 11, No. 1, pp. 81-93, 1999.
[4]. David Doermann, "The Indexing and Retrieval of Document Images: A Survey," 1998.
[5]. Jun Wang, Sanjiv Kumar and Shih-Fu Chang, "Semi-Supervised Hashing for Scalable Image Retrieval," IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[6]. Megha Mihir Shah and Pratap Singh Patwal, "Multi-Dimensional Image Indexing with R+-Tree," IJIACS, ISSN 2347-8616, Volume 3, Issue 1, April 2014.
[7]. IEEE Computer, Finding the Right Image, Special Issue on Content-Based Image Retrieval Systems, Vol. 28, No. 9, Sept. 1995.
[8]. A. Pentland, R. W. Picard and S. Sclaroff, "Photobook: Content-Based Manipulation of Image Databases," International Journal of Computer Vision, 18(3), 1996, pp. 233-254.
[9]. L. Capodiferro, F. Mangiatordi, E. D. Di Claudio and G. Jacovitti, "Equalized Structural Similarity Index for Image Quality Assessment," ISPA 2011 - 7th International Symposium on Image and Signal Processing and Analysis, IEEE Computer Society, 2011, pp. 420-424.
[10]. Jun Wang, Sanjiv Kumar and Shih-Fu Chang, "Semi-Supervised Hashing for Scalable Image Retrieval," IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[11]. Eric Persoon and King-Sun Fu, "Shape Discrimination Using Fourier Descriptors," IEEE Trans. on Systems, Man and Cybernetics, Vol. SMC-7(3), 1977, pp. 170-179.
[12]. R. Chellappa and R. Bagdazian, "Fourier Coding of Image Boundaries," IEEE Trans. PAMI-6(1), 1984, pp. 102-105.
[13]. N. S. T. Sai and R. C. Patil, "Image Retrieval Using Upper Mean Color and Lower Mean Color of Image," IJCST, Vol. 3, Issue 1, 2012, pp. 191-196.
[14]. Sunkari Madhu, "Content Based Image Retrieval: A Quantitative Comparison between Query by Color and Query by Texture," Journal of Industrial and Intelligent Information, Vol. 2, No. 2, June 2014.
[15]. H. Farsi and S. Mohamadzadeh, "Colour and Texture Feature-Based Image Retrieval by Using Hadamard Matrix in Discrete Wavelet Transform," IET Image Processing, 7(3), 2013, pp. 212-218.