In Content-Based Image Retrieval (CBIR) systems, the visual contents of the images in the database are extracted and represented by multi-dimensional feature vectors. A well-known class of CBIR systems retrieves images by an unsupervised method known as cluster-based image retrieval. To enhance the performance and retrieval rate of a CBIR system, we fuse the visual contents of an image. Recently, we developed two cluster-based CBIR systems by fusing the scores of two visual contents of an image. In this paper, we analyze the performance of the two recommended CBIR systems at different levels of precision using images of varying sizes and resolutions. We also compare the performance of the recommended systems with that of two existing CBIR systems, namely UFM and CLUE. Experimentally, we find that the recommended systems outperform the two existing systems, and one recommended system also performs comparatively better at every image resolution.
This paper gives a brief overview of moving-object tracking and its applications. In sports it is challenging to detect and track the motion of players in video frames. The task applies optical flow analysis for motion detection and a particle filter for tracking players, taking into consideration the regions with player movement in sports video. Optical flow vector calculation gives the motion of players in a video frame. This paper presents an improved Lucas-Kanade algorithm for optical flow computation that handles large displacements and achieves higher accuracy in motion estimation.
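The core least-squares step of Lucas-Kanade can be sketched as follows (a minimal single-window version under the brightness-constancy assumption; it is not the paper's improved large-displacement variant, and the function name, window setup, and test pattern are illustrative assumptions):

```python
import numpy as np

def lucas_kanade_window(I1, I2):
    """Estimate one optical-flow vector (vx, vy) for a whole window by
    least-squares on the brightness-constancy constraint
    Ix*vx + Iy*vy + It = 0."""
    # Spatial gradients (central differences) and temporal difference
    Iy, Ix = np.gradient(I1.astype(float))
    It = I2.astype(float) - I1.astype(float)
    # Stack the per-pixel constraints and solve the normal equations
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy)

# Synthetic check: a smooth Gaussian blob translated by half a pixel in x
x, y = np.meshgrid(np.arange(32, dtype=float), np.arange(32, dtype=float))
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 20.0)
vx, vy = lucas_kanade_window(blob(15.0, 15.0), blob(15.5, 15.0))
print(vx, vy)  # estimated flow, close to (0.5, 0.0)
```

For large displacements, as targeted in the paper, this single-window solve is normally wrapped in a coarse-to-fine image pyramid with iterative warping.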
Research Inventy: International Journal of Engineering and Science (inventy)
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open-access journal, in both online and print versions, that provides rapid monthly publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published by a rapid process within 20 days of acceptance, and the peer-review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
An Improved Way of Segmentation and Classification of Remote Sensing Images U... (ijsrd.com)
The ultimate significance of images lies in digital image processing, which stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception. The objective of this research work is to define the meaning and scope of image segmentation based on remote sensing images, which are subsequently classified with statistical measures. In this paper a kernel-induced Possibilistic C-means clustering algorithm has been implemented for classifying remote sensing image data with image features. The conclusion of the proposed work is that this algorithm works well for segmenting and classifying the image with better accuracy on statistical metrics.
Analysis of combined approaches of CBIR systems by clustering at varying prec... (IJECEIAES)
An image retrieval system is used to retrieve images from an image database. Two types of image retrieval techniques are commonly used: content-based and text-based. One of the well-known image retrieval techniques that extracts images in an unsupervised way is the cluster-based image retrieval technique. In cluster-based image retrieval, all visual features of an image are combined to achieve a better retrieval rate and precision. The objective of the study was to develop a new model by combining three traits of an image, i.e., color, shape, and texture. The color-shape and color-texture models were compared against a threshold value at various precision levels. A union of the newly developed model with the color-shape and color-texture models was formed to find the retrieval rate, in terms of precision, of the image retrieval system. The experiments were conducted on the COREL standard database, and it was found that the union of the three models gives better results than retrieval from the individual models. The newly developed model and the union of the given models also give better results than the existing system named cluster-based retrieval of images by unsupervised learning (CLUE).
Automatic dominant region segmentation for natural images (csandit)
Image segmentation divides an image into different homogeneous regions. An efficient semantic-based image retrieval system divides the image into different regions separated by color or texture, or sometimes both. Features are extracted from the segmented regions and are annotated automatically. Relevant images are retrieved from the database based on the keywords of the segmented region. In this paper, automatic image segmentation is proposed to obtain the dominant regions of the input natural images. The dominant regions are segmented and results are obtained. Results are also recorded in comparison with the JSEG algorithm.
Improving image resolution through the CRA algorithm involved recycling proce... (csandit)
Image processing concepts are widely used in medical fields. Digital images are prone to a variety of types of noise. Noise is the result of errors in the image acquisition and reconstruction process that yield pixel values that do not reflect the true intensities of the real scene. Many researchers are working in the field of analysis and processing of multi-dimensional images, but previous work has not been sufficient to resolve these problems, so performance work continues. In this paper we contribute novel research work on the analysis and performance improvement of image resolution. We propose the Concede Reconstruction Algorithm (CRA) involving a recycling process to reduce the remaining problems in the improvement part of image processing. The CRA algorithm has received a better response from researchers who use it.
Moving object detection using background subtraction algorithm using Simulink (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of engineering and technology.
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION (ijistjournal)
A different image fusion algorithm based on the self-organizing feature map is proposed in this paper, aiming to produce quality images. Image fusion integrates complementary and redundant information from multiple images of the same scene to create a single composite image that contains all the important features of the original images. The resulting fused image is thus more suitable for human and machine perception or for further image processing tasks. The existing fusion techniques, based on either direct operation on pixels or on segments, fail to produce fused images of the required quality and are mostly application-specific. The existing segmentation algorithms become complicated and time-consuming when multiple images are to be fused. A new method of segmenting and fusing gray-scale images adopting Self-Organizing Feature Maps (SOM) is proposed in this paper. The SOM is adopted to produce multiple slices of the source and reference images based on various combinations of gray scale, which can be dynamically fused depending on the application. The proposed technique is adopted and analyzed for the fusion of multiple images. The technique is robust in the sense that there is no loss of information, owing to the properties of Self-Organizing Feature Maps; noise removal in the source images is done during the processing stage, and fusion of multiple images is performed dynamically to obtain the desired results. Experimental results demonstrate that, for quality multifocus image fusion, the proposed method performs better than some popular image fusion methods in both subjective and objective quality.
Implementation of Object Tracking for Real Time Video (IDES Editor)
Real-time tracking of object boundaries is an important task in many vision applications. Here we propose an approach to implement the level set method. This approach does not need to solve any partial differential equations (PDEs), thus reducing the computation dramatically compared with previously proposed optimized narrow-band techniques. With our approach, real-time level-set based video tracking can be achieved.
Real-time Multi-object Face Recognition Using Content Based Image Retrieval (... (IJECEIAES)
A real-time face recognition system is divided into the stages of feature extraction, clustering, detection, and recognition. Each of these stages uses a different method: Local Binary Pattern (LBP), Agglomerative Hierarchical Clustering (AHC), and Euclidean distance. Multi-face image search uses the Content-Based Image Retrieval (CBIR) method; CBIR performs image search by the image features themselves. Based on real-time trial results, the accuracy obtained is 61.64%.
Web Image Retrieval Using Visual Dictionary (ijwscjournal)
In this research, we have proposed a semantic-based image retrieval system to retrieve a set of relevant images from the Web for a given query image. We have used a global color space model and Dense SIFT feature extraction to generate a visual dictionary using the proposed quantization algorithm. The images are transformed into sets of features, which are used as inputs to our proposed quantization algorithm for generating the codewords that form the visual dictionary. These codewords are used to represent images semantically, forming visual labels using Bag-of-Features (BoF). The histogram intersection method is used to measure the distance between the input image and the set of images in the image database to retrieve similar images. The experimental results are evaluated over a collection of 1000 generic Web images to demonstrate the effectiveness of the proposed system.
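The Bag-of-Features representation and the histogram intersection measure can be sketched as follows (a minimal illustration; the toy codeword IDs and vocabulary size are invented for the example, not taken from the paper):

```python
import numpy as np

def bof_histogram(codeword_ids, vocab_size):
    """Bag-of-Features: count how often each visual word occurs in an
    image, then normalize to a probability distribution."""
    h = np.bincount(codeword_ids, minlength=vocab_size).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms: the sum of element-wise
    minima (1.0 = identical distributions, 0.0 = disjoint support)."""
    return float(np.minimum(h1, h2).sum())

# Toy example: two images whose local features were quantized to
# codewords from a 4-word visual dictionary
query  = bof_histogram(np.array([0, 1, 1, 2, 2, 2]), vocab_size=4)
db_img = bof_histogram(np.array([1, 1, 2, 2, 3, 3]), vocab_size=4)
sim = histogram_intersection(query, db_img)
print(round(sim, 3))  # 0.667
```

Retrieval then amounts to ranking the database images by this similarity against the query histogram.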
The content-based image retrieval (CBIR) technique is one of the most popular and rapidly evolving research areas in digital image processing. The goal of CBIR is to extract visual content such as the colour, texture, or shape of an image automatically. This paper proposes an image retrieval method that uses colour and texture for feature extraction. The system uses the query-by-example model and allows the user to choose the feature on the basis of which retrieval will take place. For retrieval based on the colour feature, the RGB and HSV models are taken into consideration, whereas for texture the GLCM is used for extracting the textural features, which then go through a vector quantization phase to speed up the retrieval process.
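The GLCM texture step can be sketched as follows (a minimal single-offset co-occurrence matrix with a few classic Haralick-style descriptors; the toy image, offset, and feature choice are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-Level Co-occurrence Matrix: count how often gray level j
    occurs at offset (dx, dy) from gray level i, then normalize so the
    entries form a joint probability distribution (one direction only)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            g[img[yy, xx], img[yy + dy, xx + dx]] += 1
    return g / g.sum()

def texture_features(g):
    """Contrast, energy and homogeneity derived from a normalized GLCM."""
    i, j = np.indices(g.shape)
    contrast = float(((i - j) ** 2 * g).sum())
    energy = float((g ** 2).sum())
    homogeneity = float((g / (1.0 + np.abs(i - j))).sum())
    return contrast, energy, homogeneity

# Tiny 4-level test image
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
c, e, hom = texture_features(glcm(img, levels=4))
print(round(c, 3), round(e, 3))  # 0.583 0.167
```

In a full CBIR pipeline, such texture descriptors would be concatenated with the colour histogram and then vector-quantized as the abstract describes.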
Empirical Coding for Curvature Based Linear Representation in Image Retrieval... (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
A NOVEL IMAGE SEGMENTATION ENHANCEMENT TECHNIQUE BASED ON ACTIVE CONTOUR AND... (acijjournal)
Topological alignments and snakes are used in image processing, particularly in locating object boundaries. Both have their own advantages and limitations. To improve the overall image boundary detection system, we focused on developing a novel algorithm for image processing. The proposed algorithm is based on the active contour method in conjunction with the topological alignments method to enhance the image detection approach. The algorithm presents a novel technique that incorporates the advantages of both topological alignments and snakes: the initial segmentation by topological alignments is first transformed into the input of the snake model, which then evolves toward the boundary of the object of interest. The results show that the algorithm can deal with low-contrast images and cell shapes and demonstrate segmentation accuracy under weak image boundaries, which are responsible for the lack of accuracy in image detection techniques. We have achieved better segmentation and boundary detection for the image, along with the ability of the system to improve low contrast and deal with over- and under-segmentation.
Some Studies on Different Power Allocation Schemes of Superposition Modulation (idescitation)
Superposition Modulation/Mapping (SM) is a newly evolving modulation technique in which the conversion from binary digits to symbols is done by linear superposition of the binary digits instead of a bijective (one-to-one) mapping. Due to the linear superposition, the distribution of the data symbols thus formed is Gaussian-shaped, which is capacity-achieving without active signal shaping. In this paper, a detailed study of SM is presented with respect to its different power allocation schemes, namely Equal Power Allocation (EPA), Unequal Power Allocation (UPA) and Grouped Power Allocation (GPA). It is also shown that SM is more capacity-achieving than a conventional modulation technique such as Quadrature Amplitude Modulation (QAM).
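The linear-superposition mapping and the effect of the power allocation can be sketched as follows (a toy real-valued example; the specific dyadic UPA weights are an illustrative assumption, not the paper's scheme):

```python
import numpy as np

def sm_symbol(bits, powers):
    """Superposition Modulation: map N bits to one real symbol as a
    weighted linear sum of antipodal chips (+1/-1), instead of a
    bijective constellation lookup."""
    chips = 1 - 2 * np.asarray(bits)   # bit 0 -> +1, bit 1 -> -1
    return float(np.dot(np.sqrt(powers), chips))

N = 6
epa = np.ones(N) / N                   # Equal Power Allocation
upa = 4.0 ** -np.arange(1, N + 1)      # dyadic Unequal Power Allocation
upa /= upa.sum()                       # normalize total power to 1

# Enumerate all 2^N bit patterns and count the distinct symbol values.
# EPA collapses many patterns onto the same value, giving the
# binomial (Gaussian-like) symbol distribution the abstract mentions;
# this dyadic UPA keeps every pattern distinct (an effective 64-PAM).
patterns = [list(map(int, f"{k:0{N}b}")) for k in range(2 ** N)]
n_epa = len({round(sm_symbol(p, epa), 9) for p in patterns})
n_upa = len({round(sm_symbol(p, upa), 9) for p in patterns})
print(n_epa, n_upa)  # 7 64
```

The shaping gain of SM comes precisely from this many-to-one collapse under EPA, which makes the symbol distribution approach a Gaussian as N grows.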
Rapid progress is seen in the field of robotics in both the educational and industrial automation sectors. Robotics education in particular is gaining technological advances and providing more learning opportunities. In the automotive sector, there is a necessity and demand to automate daily human activities by robot. With such advancement and demand for robotics, the realization of a popular computer game will help students to learn and acquire skills in the field of robotics. A computer game such as Pacman offers challenges on both the software and hardware fronts. In software, it provides challenges in developing algorithms for a robot to escape from a pool of attacking robots and for multiple ghost robots to attack the Pacman. On the hardware front, it provides a challenge to integrate various systems to realize the game. This project aims to demonstrate the Pacman game in the real world as well as in simulation. For simulation purposes, Player/Stage is used to develop single-client and multi-client architectures. The multi-client architecture in Player/Stage uses one global simulation proxy to which all the robot models are connected; this reduces the overhead of managing multiple robot proxies. The single-client architecture enables only two robot models to connect to the simulation proxy. The multi-client approach offers the flexibility to add sensors to each port, which are used distinctly by the client attached to the respective robot. The robots are named Pacman and Ghosts, which try to escape and attack respectively. A network camera is used to detect the global positions of the robots, and data is shared through inter-process communication.
ZigBee has been developed to support low-data-rate and low-power-consumption applications. This paper aims to analyze various parameters of the ZigBee physical layer (PHY). The performance of the ZigBee PHY is evaluated on the basis of energy consumption in transmitting and receiving modes, and throughput. The effect of variation in network size on these performance attributes is studied. Several modulation schemes are also compared, and the best modulation scheme is suggested, with trade-offs between the different performance metrics.
In Cognitive Radio Networks (CRN), Cooperative Spectrum Sensing (CSS) is used to improve the performance of the spectrum sensing techniques used for detecting the licensed (primary) user's signal. In CSS, the spectrum sensing information from multiple unlicensed (secondary) users is combined to make a final decision about the presence of the primary signal. The combining techniques used to generate the final decision about the presence of the PU's signal are also called fusion techniques, or fusion rules. Fusion techniques are further classified into data fusion and decision fusion. In data fusion, all the secondary users (SUs) share their raw spectrum detection information, such as detected energy or other statistics, while in decision fusion all the SUs make their local decisions and share them by sending '0' or '1', corresponding to the absence or presence of the PU's signal respectively. The rules used in decision fusion are the OR rule, the AND rule and the K-out-of-N rule. CSS is further classified into distributed CSS and centralized CSS. In distributed CSS, all the SUs share the spectrum detection information with each other and, by combining the shared information, each SU makes the final decision individually. In centralized CSS, all the SUs send their detected information to a secondary base station or central unit, which combines the shared information and makes the final decision; the secondary base station then shares this final decision with all the SUs in the CRN. This paper presents an overview of the information fusion methods used for CSS and an analysis of the decision fusion rules with simulation results.
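When the local decisions are independent and identically distributed, the decision fusion rules named above reduce to a binomial tail probability; a minimal sketch (the local detection and false-alarm probabilities are invented for illustration):

```python
from math import comb

def k_out_of_n(p, n, k):
    """Probability that at least k of n independent local decisions are
    '1' when each SU reports '1' with probability p (binomial tail).
    k=1 gives the OR rule, k=n the AND rule."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Assumed per-SU probabilities: detection 0.7 under PU present,
# false alarm 0.1 under PU absent, with N = 5 secondary users
N, pd_local, pf_local = 5, 0.7, 0.1
for rule, k in [("OR  (k=1)", 1), ("MAJ (k=3)", 3), ("AND (k=5)", N)]:
    pd = k_out_of_n(pd_local, N, k)   # global detection probability
    pf = k_out_of_n(pf_local, N, k)   # global false-alarm probability
    print(f"{rule}: Pd={pd:.3f}, Pf={pf:.4f}")
```

The printout makes the usual trade-off visible: OR maximizes detection but inflates false alarms, AND does the reverse, and K-out-of-N sits between them.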
Next-generation wireless networks comprise mobile users moving between heterogeneous networks, using terminals with multiple access interfaces and services. The most important issue in such an environment is ABC (Always Best Connected), i.e. allowing the best connectivity to applications anywhere at any time. To meet the always-best-connected requirement, various vertical handover decision-making strategies have been proposed. This paper provides an overview of the most interesting and recent strategies.
Automatic dominant region segmentation for natural imagescsandit
Image Segmentation segments an image into different homogenous regions. An efficient
semantic based image retrieval system divides the image into different regions separated by
color or texture sometimes even both. Features are extracted from the segmented regions and
are annotated automatically. Relevant images are retrieved from the database based on the
keywords of the segmented region In this paper, automatic image segmentation is proposed to
obtained dominant region of the input natural images. Dominant region are segmented and
results are obtained . Results are also recorded in comparison to JSEG algorithm
Improving image resolution through the cra algorithm involved recycling proce...csandit
Image processing concepts are widely used in medical fields. Digital images are prone to a
variety of types of noise. Noise is the result of errors in the image acquisition process for
reconstruction that result in pixel values that reflect the true intensities of the real scenes. A lot
of researchers are working on the field analysis and processing of multi-dimensional images.
Work previously hasn’t sufficient to stop them, so they continue performance work is due by the
researcher. In this paper we contribute a novel research work for analysis and performance
improvement about to image resolution. We proposed Concede Reconstruction Algorithm (CRA)
Involved Recycling Process to reduce the remained problem in improvement part of an image
processing. The CRA algorithms have better response from researcher to use them
Moving object detection using background subtraction algorithm using simulinkeSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSIONijistjournal
A different image fusion algorithm based on self organizing feature map is proposed in this paper, aiming to produce quality images. Image Fusion is to integrate complementary and redundant information from multiple images of the same scene to create a single composite image that contains all the important features of the original images. The resulting fused image will thus be more suitable for human and machine perception or for further image processing tasks. The existing fusion techniques based on either direct operation on pixels or segments fail to produce fused images of the required quality and are mostly application based. The existing segmentation algorithms become complicated and time consuming when multiple images are to be fused. A new method of segmenting and fusion of gray scale images adopting Self organizing Feature Maps(SOM) is proposed in this paper. The Self Organizing Feature Maps is adopted to produce multiple slices of the source and reference images based on various combination of gray scale and can dynamically fused depending on the application. The proposed technique is adopted and analyzed for fusion of multiple images. The technique is robust in the sense that there will be no loss in information due to the property of Self Organizing Feature Maps; noise removal in the source images done during processing stage and fusion of multiple images is dynamically done to get the desired results. Experimental results demonstrate that, for the quality multifocus image fusion, the proposed method performs better than some popular image fusion methods in both subjective and objective qualities.
Implementation of Object Tracking for Real Time VideoIDES Editor
Real-time tracking of object boundaries is an
important task in many vision applications. Here we propose
an approach to implement the level set method. This approach
does not need to solve any partial differential equations (PDFs),
thus reducing the computation dramatically compared with
optimized narrow band techniques proposed before. With our
approach, real-time level-set based video tracking can be
achieved.
Real-time Multi-object Face Recognition Using Content Based Image Retrieval (...IJECEIAES
Face recognition system in real time is divided into three processes, namely feature extraction, clustering, detection, and recognition. Each of these stages uses different methods, Local Binary Pattern (LBP), Agglomerative Hierarchical Clustering (AHC) and Euclidean Distance. Multi-face image search using Content Based Image Retrieval (CBIR) method. CBIR performs image search by image feature itself. Based on real time trial results, the accuracy value obtained is 61.64%.
Web Image Retrieval Using Visual Dictionaryijwscjournal
In this research, we have proposed semantic based image retrieval system to retrieve set of relevant images for the given query image from the Web. We have used global color space model and Dense SIFT feature extraction technique to generate visual dictionary using proposed quantization algorithm. The images are transformed into set of features. These features are used as inputs in our proposed Quantization algorithm for generating the code word to form visual dictionary. These codewords are used to represent images semantically to form visual labels using Bag-of-Features (BoF). The Histogram intersection method is used to measure the distance between input image and the set of images in the image database to retrieve similar images. The experimental results are evaluated over a collection of 1000 generic Web images to demonstrate the effectiveness of the proposed system.
The content based image retrieval (CBIR) technique
is one of the most popular and evolving research areas of the
digital image processing. The goal of CBIR is to extract visual
content like colour, texture or shape, of an image automatically.
This paper proposes an image retrieval method that uses colour
and texture for feature extraction. This system uses the query by
example model. The system allows user to choose the feature on
the basis of which retrieval will take place. For the retrieval
based on colour feature, RGB and HSV models are taken into
consideration. Whereas for texture the GLCM is used for
extracting the textural features which then goes into Vector
Quantization phase to speed up the retrieval process.
Empirical Coding for Curvature Based Linear Representation in Image Retrieval...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
A NOVEL IMAGE SEGMENTATION ENHANCEMENT TECHNIQUE BASED ON ACTIVE CONTOUR AND...acijjournal
Topological alignments and snakes are used in image processing, particularly in locating object
boundaries. Both of them have their own advantages and limitations. To improve the overall image
boundary detection system, we focused on developing a novel algorithm for image processing. The
algorithm we propose to develop will based on the active contour method in conjunction with topological
alignments method to enhance the image detection approach. The algorithm presents novel technique to
incorporate the advantages of both Topological Alignments and snakes. Where the initial segmentation
by Topological Alignments is firstly transformed into the input of the snake model and begins its
evolvement to the interested object boundary. The results show that the algorithm can deal with low
contrast images and shape cells, demonstrate the segmentation accuracy under weak image boundaries,
which responsible for lacking accuracy in image detecting techniques. We have achieved better
segmentation and boundary detecting for the image, also the ability of the system to improve the low
contrast and deal with over and under segmentation.
Some Studies on Different Power Allocation Schemes of Superposition Modulationidescitation
Superposition Modulation/Mapping (SM) is a newly
evolving modulation technique in which the conversion from
binary digits to symbols is done by linear superposition of the
binary digits instead of bijective (one-to-one) mapping. Due
to linear superposition, the symbol distribution of the data
symbols thus formed are Gaussian shaped which is capacity
achieving without active signal shaping. In this paper, a detailed
study on SM has been presented with respect to its different
power allocation schemes namely Equal Power Allocation
(EPA), Unequal Power Allocation (UPA) and Grouped Power
Allocation (GPA). Also, it has been shown that SM is more
capacity achieving than the conventional modulation
technique such as Quadrature Amplitude Modulation (QAM)
Rapid progress is being made in the field of robotics, in both the educational and industrial
automation sectors. Robotics education in particular is gaining from technological advances
and providing more learning opportunities. In the automotive sector, there is a need and
demand to automate daily human activities with robots. Given this advancement and demand for
robotics, realizing a popular computer game can help students learn and acquire skills in the
field of robotics. A game such as Pacman offers challenges on both the software and hardware
fronts. In software, it requires developing algorithms for a robot to escape from a pool of
attacking robots, and for multiple ghost robots to attack the Pacman. On the hardware front,
it poses the challenge of integrating various systems to realize the game. This project aims
to demonstrate the Pacman game in the real world as well as in simulation. For simulation,
Player/Stage is used to develop single-client and multi-client architectures. The multi-client
architecture in Player/Stage uses one global simulation proxy to which all the robot models
are connected, which reduces the overhead of managing multiple robot proxies. The
single-client architecture allows only two robot models to connect to the simulation proxy.
The multi-client approach offers the flexibility to add sensors to each port, used distinctly
by the client attached to the respective robot. The robots are named Pacman and Ghosts, which
try to escape and attack respectively. A network camera is used to detect the global positions
of the robots, and the data is shared through inter-process communication.
ZigBee has been developed to support low-data-rate, low-power applications. This paper aims to
analyze various parameters of the ZigBee physical layer (PHY). The performance of the ZigBee
PHY is evaluated on the basis of energy consumption in transmitting and receiving modes, and
of throughput. The effect of varying the network size on these performance attributes is
studied. Several modulation schemes are also compared, and the best modulation scheme is
suggested along with the trade-offs between the different performance metrics.
In Cognitive Radio Networks (CRN), Cooperative Spectrum Sensing (CSS) is used to improve the
performance of the spectrum sensing techniques used to detect the licensed (primary) user's
signal. In CSS, the spectrum sensing information from multiple unlicensed (secondary) users is
combined to make the final decision about the presence of the primary signal. The combining
techniques used to generate this final decision are also called fusion techniques, or fusion
rules, and are further classified into data fusion and decision fusion. In data fusion, all
the secondary users (SUs) share their raw spectrum detection information, such as detected
energy or other statistics, while in decision fusion each SU makes a local decision and shares
it by sending '0' or '1', corresponding to the absence or presence of the PU's signal
respectively. The rules used in decision fusion are the OR rule, the AND rule, and the
K-out-of-N rule. CSS is further classified into distributed CSS and centralized CSS. In
distributed CSS, all the SUs share their spectrum detection information with each other and,
by combining the shared information, each SU makes the final decision individually. In
centralized CSS, all the SUs send their detected information to a secondary base station
(central unit), which combines the shared information, makes the final decision, and shares it
with all the SUs in the CRN. This paper gives an overview of the information fusion methods
used for CSS and an analysis of decision fusion rules with simulation results.
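The three decision fusion rules described above can be sketched as simple predicates over the SUs' one-bit local decisions (a minimal illustration; function and variable names are ours):

```python
def or_rule(decisions):
    # PU declared present if ANY secondary user reports '1'
    return int(any(decisions))

def and_rule(decisions):
    # PU declared present only if ALL secondary users report '1'
    return int(all(decisions))

def k_out_of_n_rule(decisions, k):
    # PU declared present if at least k of the N local decisions are '1';
    # k = 1 reduces to the OR rule, k = N to the AND rule
    return int(sum(decisions) >= k)
```

For example, with local decisions `[0, 0, 1]`, the OR rule declares the primary user present while the AND rule does not; K-out-of-N interpolates between the two.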
Next-generation wireless networks comprise mobile users who move between heterogeneous
networks using terminals with multiple access interfaces and services. The most important
issue in such an environment is ABC (Always Best Connected), i.e. providing the best
connectivity to applications anywhere, at any time. To meet this requirement, various vertical
handover decision-making strategies have been proposed. This paper provides an overview of the
most interesting recent strategies.
This paper presents the design and performance comparison of a two-stage operational amplifier
topology in CMOS and BiCMOS technology. The conventional op-amp circuit was designed using the
RF model of BSIM3v3 in 0.6 μm CMOS technology and 0.35 μm BiCMOS technology. Both op-amp
circuits were designed, simulated, and analyzed, and their performance parameters, such as
gain, phase margin, CMRR, PSRR, and power consumption, were compared. Finally, we conclude on
the suitability of CMOS technology over BiCMOS technology for low-power RF design.
Wireless sensor networks (WSNs) are widely used in various applications. In these networks,
nodes collect data from attached sensors and send the data to a base station. However, WSN
nodes have a limited power supply in the form of a battery, so they are expected to minimize
energy consumption in order to maximize the lifetime of the network. A number of techniques
have been proposed in the literature to reduce energy consumption significantly. In this
paper, we propose a new clustering-based technique that modifies the popular LEACH algorithm.
In this technique, cluster heads are first elected using the improved LEACH algorithm as
usual, and then clusters of nodes are formed based on the distance between each node and the
cluster heads. Finally, data from each node is transferred to its cluster head. After
aggregation, each cluster head forwards its data either to another cluster head that lies
closer to the sink in the forward direction, or directly to the sink. This reduction in the
distance travelled improves performance significantly over the LEACH algorithm.
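The forwarding rule for cluster heads can be sketched as follows (a hypothetical geometric illustration of the idea, not the paper's exact protocol; node positions are 2-D points):

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(ch, other_chs, sink):
    """Pick where a cluster head `ch` sends its aggregated data.

    Candidate relays are other cluster heads that are nearer to the sink
    than `ch` is, and nearer to `ch` than the sink is (so relaying actually
    shortens the transmission distance). If none exists, send directly.
    """
    candidates = [c for c in other_chs
                  if dist(c, sink) < dist(ch, sink) and dist(ch, c) < dist(ch, sink)]
    if not candidates:
        return sink                     # no useful relay: transmit directly
    return min(candidates, key=lambda c: dist(ch, c))
```

With the sink at the origin, a cluster head at (10, 0) would relay through one at (4, 0), but transmit directly if the only other cluster head lies farther from the sink.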
Nowadays, the Internet has become an important part of everyday life: a person can shop,
invest, and perform all banking tasks online. Almost all organizations have their own website,
where customers can shop simply by providing their credit card details. Online banking and
e-commerce organizations have been experiencing an increase in credit card transactions and
other modes of online transaction. As a result, credit card fraud has become a prominent issue
for the credit card industry, causing financial losses for customers as well as organizations.
Many techniques, such as decision trees, neural networks, and genetic algorithms, based on
modern approaches including artificial intelligence, machine learning, and fuzzy logic, have
already been developed for credit card fraud detection. In this paper, an evolutionary
simulated annealing algorithm is used to train neural networks for credit card fraud detection
in a real-time scenario. The paper shows how this technique can be used for fraud detection
and presents detailed experimental results obtained on real-world financial data (taken from
the UCI repository) that show its effectiveness. The algorithm used in this paper is likely to
benefit organizations and individual users in terms of cost and time efficiency. Still, there
are many cases that are misclassified, i.e. a genuine customer is classified as fraudulent, or
vice versa.
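The core of simulated annealing is a random-perturbation loop whose acceptance probability decays with temperature. A generic minimal sketch over an arbitrary loss (step size, cooling schedule, and the loss itself are our illustrative choices, not the paper's tuning):

```python
import math, random

def simulated_annealing(loss, weights, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Minimise `loss` over a weight vector by random perturbation with
    temperature-controlled acceptance of worse moves."""
    rng = random.Random(seed)
    best = cur = list(weights)
    best_loss = cur_loss = loss(cur)
    t = t0
    for _ in range(steps):
        cand = [w + rng.gauss(0, 0.1) for w in cur]      # random neighbour
        cand_loss = loss(cand)
        # always accept improvements; accept worse moves with prob exp(-delta/T)
        if cand_loss < cur_loss or rng.random() < math.exp(-(cand_loss - cur_loss) / t):
            cur, cur_loss = cand, cand_loss
            if cur_loss < best_loss:
                best, best_loss = list(cur), cur_loss
        t *= cooling                                     # cool down
    return best, best_loss
```

In the fraud-detection setting, `weights` would be the neural network's weights and `loss` its classification error on the training data; here a simple quadratic loss stands in for illustration.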
Performance Evaluation of Ontology and Fuzzy-Based CBIR
In this paper, we evaluate the performance of an ontology using low-level features such as
color, texture, and shape based CBIR, together with topic-specific CBIR. The resulting
ontology can be used to extract the appropriate images from an image database. Retrieving
appropriate images from an image database is one of the difficult tasks in multimedia
technology. Our results show that the values of recall and precision can be enhanced, and that
the semantic gap can also be reduced. The proposed algorithm also automatically extracts the
texture values from the images, along with their category (such as smooth or coarse) and their
technical interpretation.
Image retrieval is one of the major innovations in image processing. Image mining is used to extract new information from
a general collection of images. CBIR is the latest method, in which target images are extracted on the basis of specific features of
a specified query image. Images can be retrieved quickly if they are clustered in an accurate and structured manner. In this paper, we
combine the theory of CBIR with an analysis of the features of CBIR systems.
An Unsupervised Cluster-based Image Retrieval Algorithm using Relevance Feedback
Content-based image retrieval (CBIR) systems use low-level features of the query image to identify similar images in the image database. Image content plays a significant role in image retrieval. There are three fundamental bases for content-based image retrieval: visual feature extraction, multidimensional indexing, and retrieval system design. Each image has three content features: color, texture, and shape. Color and texture are both important visual features used in CBIR to improve results. Color histogram and texture features have the potential to retrieve similar images on the basis of their properties. Since the features extracted from a query are low-level, it is extremely difficult for a user to provide an appropriate example in a query-by-example system. To overcome these problems and reach higher accuracy in CBIR systems, relevance feedback from the user is a well-known and promising solution.
Active Learning Method for Interactive Image Retrieval
With its many possible multimedia applications, content-based image retrieval (CBIR) has recently gained interest for image management and web search. CBIR is a technique that uses the visual content of an image to search for similar images in large-scale image databases according to a user's interest. In many image retrieval algorithms, retrieval is based on feature similarity with respect to the query, ignoring the similarities among the images in the database. To use this similarity information, this paper applies the k-means clustering algorithm to the image retrieval system. The clustering algorithm improves the relevance of the results by first clustering the similar images in the database. We also apply a wavelet transform, which provides both coarse and fine filtering, and use the Euclidean distance metric so that, given a query image, we can retrieve the output images based on feature similarity. The results show that the proposed approach can greatly improve the efficiency and performance of image retrieval.
Web Image Retrieval Using Clustering Approaches
Image retrieval is an active research area, and we propose a new approach to retrieving images
from a large image database. To this end, we propose an algorithm that represents images using
divisive and partition-based clustering approaches. The HSV color components and the Haar wavelet transform are used to extract image features. These features are then used to segment an image into objects. For segmentation, we use a modified k-means clustering algorithm to group similar pixels into K groups around cluster centers. To modify k-means, we propose a divisive clustering algorithm that determines the number of clusters and feeds it back to k-means to obtain significant object groups. In addition, we discuss a similarity distance measure using a threshold value and object uniqueness to quantify the results.
A Review of Feature Extraction Techniques for CBIR based on SVM
With the advancement of multimedia technologies, users are no longer satisfied with conventional retrieval techniques, so content-based image retrieval (CBIR) systems were introduced. CBIR is an application for retrieving or searching digital images in a large database. The term "content" refers to the color, shape, texture, and all other information extracted from the image itself. This paper reviews CBIR systems that use SVM-classifier-based algorithms in the feature extraction phase.
Content Based Image Retrieval: Classification Using Neural Networks
In a content-based image retrieval (CBIR) system, the main issue is to extract image features
that effectively represent the image content in a database. Such extraction requires a
detailed evaluation of the retrieval performance of the image features. This paper presents a
review of the fundamental aspects of content-based image retrieval, including the extraction
of color and texture features. Commonly used color features, including color moments, color
histograms, and color correlograms, as well as Gabor texture features, are compared. The paper
reviews the increase in retrieval efficiency when color and texture features are combined. The
similarity measures on which matches are made and images retrieved are also discussed. For
effective indexing and fast searching of images based on visual features, neural-network-based
pattern learning can be used to achieve effective classification.
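The color-histogram feature and the Euclidean similarity measure mentioned above can be sketched as follows (bin count and per-channel normalization are our illustrative choices):

```python
import math

def color_histogram(pixels, bins=4):
    """Normalised per-channel histogram of (r, g, b) pixels in [0, 255]."""
    hist = [0.0] * (3 * bins)
    for r, g, b in pixels:
        for ch, v in enumerate((r, g, b)):
            hist[ch * bins + min(v * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return [h / n for h in hist]

def euclidean(h1, h2):
    """Distance between two feature vectors: smaller means more similar."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))
```

Retrieval then amounts to ranking the database images by `euclidean(query_hist, image_hist)`; identical images have distance zero.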
Applications of Spatial Features in CBIR: A Survey
With advances in computer technology and the World Wide Web, there has been an explosion in
the amount and complexity of multimedia data that are generated, stored, transmitted,
analyzed, and accessed. To extract useful information from this huge amount of data, many
content-based image retrieval (CBIR) systems have been developed in the last decade. A typical
CBIR system captures features that represent image properties such as the color, texture, or
shape of objects in the query image, and tries to retrieve images with similar features from
the database. Retrieval efficiency and accuracy are the important issues in designing a CBIR
system. Shape and spatial features are quite easy and simple to derive, and effective.
Researchers are moving towards spatial features and exploring the scope for incorporating them
into the image retrieval framework to reduce the semantic gap. This survey paper gives a
detailed review of the different methods, and their evaluation techniques, used in recent work
on spatial features in CBIR systems. Finally, several recommendations for future research
directions are made based on recent technologies.
Information systems and networks are subject to electronic attacks. When network attacks hit,
organizations are thrown into crisis mode: from the IT department to call centers, to the
board room and beyond, all are fraught with danger until the situation is under control.
Traditional methods used to counter these threats (e.g. firewalls, antivirus software,
password protection) do not provide complete security. This has encouraged researchers to
develop Intrusion Detection Systems capable of detecting and responding to such events. This
review paper presents a comprehensive study of Genetic Algorithm (GA) based Intrusion
Detection Systems (IDS). It provides a brief overview of rule-based IDS, elaborates on the
implementation issues of genetic algorithms, and presents a comparative analysis of existing
studies.
Clustering is the step-by-step process of grouping objects whose attributes are nearly
similar. A cluster is thus a collection of objects with nearly the same attribute values: an
object in a cluster is similar to the other objects in the same cluster but different from the
objects in other clusters. Clustering is used in a wide range of applications such as pattern
recognition, image processing, data analysis, and machine learning. Nowadays, more attention
is being paid to categorical data than to numerical data, where the range of a numerical
attribute is organized into classes such as small, medium, high, and so on. A wide range of
algorithms exists for clustering categorical data. Our approach is to enhance the well-known
k-modes clustering algorithm to improve its accuracy. We propose a new approach named "High
Accuracy Clustering Algorithm for Categorical Datasets".
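The standard k-modes building blocks, which any enhancement of the algorithm starts from, are the simple matching dissimilarity over categorical attributes and the nearest-mode assignment (a minimal sketch; the paper's accuracy improvements are not reproduced):

```python
def matching_dissimilarity(a, b):
    """Number of attributes on which two categorical objects differ
    (k-modes' replacement for k-means' Euclidean distance)."""
    return sum(x != y for x, y in zip(a, b))

def nearest_mode(obj, modes):
    """Assign an object to the cluster whose mode it matches best."""
    return min(range(len(modes)), key=lambda i: matching_dissimilarity(obj, modes[i]))
```

For instance, with modes `("small", "red")` and `("high", "blue")`, the object `("high", "blue")` is assigned to the second cluster.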
A brain tumor is a malformed growth of cells within the brain, which may be cancerous or
non-cancerous; the term "malformed" indicates the existence of a tumor. The tumor may be
benign or malignant, and medical support is needed for further classification. A brain tumor
must be detected, diagnosed, and evaluated at the earliest stage; the medical problems become
grave if the tumor is detected late. Of the various technologies available for diagnosing
brain tumors, MRI is the preferred one, as it enables both diagnosis and evaluation. The
current work presents various clustering techniques employed to detect brain tumors. The
classification divides images into normal and malformed (tumor detected). The algorithm
involves steps such as preprocessing, segmentation, feature extraction, and classification of
MR brain images. Finally, the confirmatory step specifies the tumor area using a
region-of-interest technique.
A proxy signature scheme enables a proxy signer to sign a message on behalf of the original
signer. In this paper, we propose an ECDLP-based solution for the scheme of Chen et al. [1].
We describe an efficient and secure proxy multi-signature scheme that satisfies all the proxy
requirements and requires only elliptic curve multiplication and elliptic curve addition,
which incurs less computational overhead than modular exponentiation. Our scheme also
withstands original-signer forgery and public key substitution attacks.
Watermarking has been proposed as a method to enhance data security. Text watermarking
requires extreme care when embedding additional data within images, because the additional
information must not affect the image quality. Digital watermarking is a method for
authenticating images, videos, and even text: adding a text or image watermark to photos or
animated images protects copyright and prevents unauthorized use. Watermarking serves not only
authentication but also the protection of documents against malicious attempts to change them
or to claim their rights. A good watermarking scheme hides the watermark in a way that does
not affect image quality. In this paper, a method of hiding data using the LSB replacement
technique is proposed.
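LSB replacement hides each message bit in the least significant bit of a pixel value, changing each carrier pixel by at most 1 and so leaving image quality essentially intact. A minimal sketch (pixels as 8-bit integers; function names are ours):

```python
def embed_lsb(pixels, bits):
    """Replace the least significant bit of each pixel value with a message bit."""
    assert len(bits) <= len(pixels)
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear the LSB, then set it to the message bit
    return out

def extract_lsb(pixels, n):
    """Recover the first n hidden bits from the stego pixels."""
    return [p & 1 for p in pixels[:n]]
```

Extracting from the stego image returns the original message, and every pixel differs from its original value by at most one grey level.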
Today, across the various media used for data transmission and storage, our sensitive data are
not secure with the third parties we rely on. Cryptography plays an important role in securing
our data from malicious attack. This paper presents a partial image encryption scheme based on
bit-plane permutation using the Peter de Jong chaotic map, for secure image transmission and
storage. The proposed partial image encryption is a raw-data encryption method in which the
bits of some bit-planes are shuffled among other bit-planes based on the chaotic map: using
the chaotic behavior of the Peter de Jong map, the positions of all the bit-planes are
permuted. The results of several experiments, correlation analyses, and sensitivity tests show
that the proposed scheme provides an efficient and secure way to encrypt and decrypt images in
real time.
This paper presents a survey of dependency analysis for Service Oriented Architecture (SOA)
based systems. SOA presents new aspects of dependency analysis due to its different
architectural style and programming paradigm. The paper surveys previous work on dependency
analysis of service-oriented systems, showing the strengths and weaknesses of the current
approaches and tools available for the task in the context of SOA. The main motivation of this
work is to summarize recent approaches in this field, identify the major issues and challenges
in dependency analysis of SOA-based systems, and motivate further research on this topic.
In this paper, we propose a novel implementation of a soft-core system using the MicroBlaze
processor on a Virtex-5 FPGA. Until now, hard-core processors have been used as FPGA processor
cores; hard cores are fixed gate-level IP functions within the FPGA fabric. The proposed
processor is a soft-core processor: a microprocessor fully described in software, usually in
an HDL, which can be implemented using the EDK tool. We developed a system with a MicroBlaze
processor that combines both hardware and software. Using this system, and the Xilinx platform
for embedded system development, a user can control and communicate with all the peripherals
on the supported board. The soft-core processor system, with peripherals such as a UART
interface, an SPI flash interface, and an SRAM interface, is designed using the Xilinx
Embedded Development Kit (EDK) tools.
The article presents a simple algorithm to construct a minimum spanning tree and to find the
shortest path between a pair of vertices in a graph. Our illustration includes a proof of
termination. Complexity analysis and simulation results are also included.
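Minimum spanning tree construction can be sketched compactly with Prim's algorithm over a binary heap (a standard illustration; the article's own algorithm may differ):

```python
import heapq

def prim_mst(n, edges):
    """Minimum spanning tree of a connected undirected weighted graph.
    n: vertex count; edges: list of (u, v, w). Returns (total_weight, tree_edges)."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    seen = [False] * n
    heap = [(0, 0, -1)]              # (edge weight, vertex, parent)
    total, tree = 0, []
    while heap:
        w, u, p = heapq.heappop(heap)
        if seen[u]:
            continue                  # stale entry: vertex already in the tree
        seen[u] = True
        if p >= 0:
            total += w
            tree.append((p, u, w))
        for wv, v in adj[u]:
            if not seen[v]:
                heapq.heappush(heap, (wv, v, u))
    return total, tree
```

Each vertex enters the tree exactly once and the heap only shrinks once all vertices are seen, which is also the essence of the termination argument.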
WiMAX technology has reshaped the framework of broadband wireless Internet service. It
provides Internet service to unconnected or detached areas such as eastern South Africa and
rural areas of America and Asia. Full-duplex helpers, employed with a relay station selection
and indexing method called Randomized Distributed Space Time Coding (R-DSTC), are used to
expand the coverage area of the primary WiMAX station. The basic problem arises at the cell
edge due to weather conditions (rain, fog), destructive interference from multiple paths in
the same communication channel, and interference created by other users. It is impractical for
the receiver station to decode the transmitted signal successfully at the cell edges, which
increases packet loss and retransmissions. WiMAX is nevertheless an outstanding technology for
improving the quality of Internet service, offering services such as Voice over Internet
Protocol, video conferencing, and multimedia broadcast, where even a small delay in packet
transmission can cause a large communication loss. Setting up another WiMAX station nearby is
not a good alternative either: although a mobile station can easily hand over to another base
station when it receives a stronger signal, in rural areas with few customers, installing base
stations close to each other is too costly. In this review article, we present a scheme using
the R-DSTC technique to select helpers (relay nodes) randomly to expand the coverage area and
assist the mobile station in communicating securely with the base station. In this work, we
use full-duplex helpers for better bandwidth utilization.
Radio Frequency Identification (RFID) has become an emerging technology for tracking and item
identification. Depending on the function, various RFID technologies can be used. The
drawbacks of passive RFID technology, related to tag reading range and reliability in
difficult environmental conditions, limit its performance in real-life situations [1]. To
improve reading range and reliability, we consider implementing active backscattering tag
technology. To build mobiles supporting multiple radio standards in 4G networks, Software
Defined Radio (SDR) technology is used. The restrictions of existing RFID and SDR technologies
can be eliminated by developing and implementing an SDR active backscattering tag compatible
with the EPCglobal UHF Class 1 Generation 2 (Gen2) RFID standard. Such technology can be used
for many applications and services.
Vehicle technology has advanced rapidly in recent years, particularly in braking and sensing
systems. In parallel with the development of braking technologies, sensors have been developed
that are capable of detecting physical obstacles, other vehicles, or pedestrians around the
vehicle. This development prevents vehicle accidents using stereo multi-purpose cameras,
automated emergency braking systems, and ultrasonic sensors. The stereo multi-purpose camera
provides spatial intelligence up to 50 metres in front of the vehicle, with environment
recognition up to 500 metres. Cars can brake automatically when the sensors detect obstacles
or any hindrance; the braking circuit's function is to brake the car automatically after
receiving the signal from the sensors. All the cars are capable of applying brakes
automatically up to a maximum deceleration of 0.4 g. Integrated safety systems are based on
three principles: collision avoidance, collision mitigation braking systems, and forward
collision warning.
Stability of software is related to the decomposition of classes. In any software, a major
part of the code suffers from the yo-yo problem, with multiple issues related to the
readability, understandability, and maintainability of the code. Due to these issues, there is
a need to rethink, redesign, and refactor these pieces of code. The best approach is to
simplify the interrelationships of class objects so that the code becomes concise, following
the Liskov Substitution Principle through class decomposition. However, this may lead to
unknown or unwanted issues affecting the stability of the overall application, which may even
lead to software erosion.
Software cost estimation is a key open issue for the software industry, which frequently
suffers from cost overruns. The most popular technique for object-oriented software cost
estimation is the Use Case Points (UCP) method; however, it has two major drawbacks: the
uncertainty of the cost factors and the abrupt classification. To address these two issues, we
refine the use case complexity classification using fuzzy logic theory, which mitigates the
uncertainty of the cost factors and improves the accuracy of the classification.
Software estimation is a crucial task in software engineering, encompassing cost, effort,
schedule, and size. Its importance becomes critical in the early stages of the software life
cycle, when the details of the software have not yet been revealed. Several commercial and
non-commercial tools exist to estimate software in the early stages. Most software effort
estimation methods require software size as one of their important metric inputs, so software
size estimation in the early stages is essential.
The proposed method uses fuzzy logic theory to improve the accuracy of the use case points
method by refining the use case classification.
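Standard UCP classifies a use case abruptly by its transaction count (simple/average/complex, weighted 5/10/15). Replacing the hard cut-offs with fuzzy memberships blends the weights near the boundaries; a minimal sketch (the membership boundaries here are illustrative, not the paper's calibration):

```python
def left_shoulder(x, a, b):
    """Membership 1 below a, falling linearly to 0 at b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def fuzzy_use_case_weight(transactions):
    """Blend the simple/average/complex UCP weights (5/10/15) by fuzzy
    membership instead of a hard threshold on the transaction count."""
    t = float(transactions)
    mu_simple = left_shoulder(t, 3, 5)            # clearly simple below 3 transactions
    mu_complex = 1.0 - left_shoulder(t, 5, 8)     # clearly complex above 8
    mu_average = 1.0 - mu_simple - mu_complex     # whatever membership remains
    return 5 * mu_simple + 10 * mu_average + 15 * mu_complex
```

A use case with 4 transactions, which plain UCP would classify abruptly as "average", receives an intermediate weight of 7.5 here, smoothing the jump at the class boundary.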
Internet data almost doubles every year. Multimedia communication needs little storage space
and fast transmission, so the large volume of video data has motivated video compression. The
aim of this paper is to achieve temporal compression for three-dimensional (3D) videos using
motion estimation/compensation and wavelets. Instead of performing a two-dimensional (2D)
motion search, as is common in conventional video codecs, the use of a 3D motion search is
proposed, which better exploits the temporal correlations of 3D content. This leads to more
accurate motion prediction and a smaller residual. A discrete wavelet transform (DWT)
compression stage is added for a better compression ratio; the DWT's high energy-compaction
property has greatly influenced the field of compression. The quality parameters peak
signal-to-noise ratio (PSNR) and mean squared error (MSE) are calculated, and the simulation
results show that the proposed work improves the PSNR over existing work.
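The two quality metrics used above are computed as MSE, the mean squared pixel difference, and PSNR = 10·log10(MAX²/MSE), with MAX = 255 for 8-bit frames:

```python
import math

def mse(orig, recon):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)

def psnr(orig, recon, peak=255):
    """PSNR in dB: 10 * log10(peak^2 / MSE); infinite for identical frames."""
    e = mse(orig, recon)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)
```

A reconstruction that differs from the original by one grey level per pixel (MSE = 1) gives a PSNR of about 48.13 dB at peak 255.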
Because a Distributed Denial of Service (DDoS) attack creates a huge volume of unwanted
traffic, it is widely regarded as a major threat to the current Internet. A flooding-based
DDoS attack is a very common form in which a victim machine is attacked with a large amount of
malicious traffic. Under such attacks, existing network-level congestion control mechanisms
are inadequate to keep service quality from deteriorating. Although a number of techniques
have been proposed to defeat DDoS attacks, it is still very hard to detect and respond to
them, due to large and complex network environments, the use of source-address spoofing, and
the difficulty of distinguishing legitimate from attack traffic. To measure the impact of DDoS
attacks on FTP services, repeatable cyber-security research, which is important to the
scientific advancement of the field, is required. To fulfill this requirement, the cyber
DEfense Technology Experimental Research (DETER) testbed has been developed. In this paper, we
created a dumb-bell topology and generated FTP background traffic. We launched different types
of DDoS attacks alongside the FTP traffic using the attack tools available in the DETER
testbed. Finally, we measured the impact of the DDoS attacks on the FTP server in terms of
metrics such as throughput, percentage link utilization, and normal packet survival ratio
(NPSR).
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
How to Split Bills in the Odoo 17 POS ModuleCeline George
Bills have a main role in point of sale procedure. It will help to track sales, handling payments and giving receipts to customers. Bill splitting also has an important role in POS. For example, If some friends come together for dinner and if they want to divide the bill then it is possible by POS bill splitting. This slide will show how to split bills in odoo 17 POS.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide or invisible some fields in odoo. Commonly using “invisible” attribute in the field definition to invisible the fields. This slide will show how to make a field invisible in odoo 17.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
The Art Pastor's Guide to Sabbath | Steve ThomasonSteve Thomason
What is the purpose of the Sabbath Law in the Torah. It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
segmentation, (ii) Feature Extraction, and (iii) Computation of similarity measures.
Image segmentation is the method of splitting an image into sections such that each section is homogeneous with respect to some feature, as explained in [3]. A number of reviews attempt to classify the different segmentation techniques. According to Ref. [13], segmentation techniques can be classified into the following groups: Histogram Thresholding, Feature Space Clustering, Region-Based Approaches, Edge Detection Approaches, Fuzzy Approaches, Neural Network Approaches, and Graph-Theoretical Approaches.
In this work, we fuse the visual contents (shape-color and texture-color contents) with a threshold value to compute the similarity to the query. We also analyze the retrieval rate at different precision levels and compare the results with existing systems for every resolution of the image databases [1] and [7].
This paper is organized as follows. In the next section, we discuss a few existing CBIR systems. In Section III, we discuss the unsupervised approach to image retrieval. In Section IV, we present some experimental results and comparisons. Finally, we conclude and give future directions in Section V.
Figure 1: Structure of a CBIR system
II. EXISTING CBIR SYSTEMS
Existing CBIR systems can be grouped into two categories: full-image retrieval systems and region-based image retrieval systems. Some of the existing CBIR systems may also belong to both categories [9].
In full-image retrieval systems, features are extracted for the entire image without segmenting it into regions; such systems use the global features of images. In some of these systems, the images in the database are segmented but the query image is not: the global features are used for the query image, while the local features are used for the images in the database.
In region-based systems, the image is segmented into regions prior to the extraction of the features. Then, the
features are extracted for each region. Here, local features are used for both the query image and the images
in the database. Region-based systems can be further divided into three types: In the first type, the query
image is not segmented but the images in the database are segmented, and the system looks for images that contain the query image as a part; this is called sub-image retrieval. In the second type, both the query and
the images in the database are segmented but only one part of the query image is used for the searching. In
the third type both the query image and images in the database are segmented and all the regions of the query
image will be used for the comparison [16].
Most of the existing CBIR systems are region-based systems, because region-based systems are more robust than full-image retrieval systems. Region-based systems use different segmentation techniques to divide images
into subparts. Some of the existing systems are: (i) Blobworld, developed by the UC Berkeley Computer Vision Group [8]. It segments the image into blobs (regions) using an Expectation-Maximization (EM) algorithm based on the color and texture features of the pixels. It is a region-based image retrieval
system [10]. (ii) The Earth Mover's Distance, Multi-Dimensional Scaling, and Color-Based Image Retrieval,
in this retrieval system, images are treated as points in a metric space and are navigated so as to locate image neighborhoods of interest, based on color information. The distance function used is called the Earth Mover's Distance (EMD) [11]. The system also uses multi-dimensional scaling (MDS) techniques to embed a group of images as points in a two- or three-dimensional Euclidean space so that their distances are preserved as much as possible. It is a full-image retrieval system. (iii) NaTra was developed by the Department of Electrical and Computer Engineering, University of California at Santa Barbara [12]. Images
are automatically segmented into about six to twelve non-overlapping homogeneous regions. The segmentation is based on an edge flow algorithm, which uses 'edges' in color and texture features to identify homogeneous regions.
It is a region-based image retrieval system. (iv) PicSOM was developed by the Laboratory of Computer and Information Science, Helsinki University of Technology. The image is divided into five regions. For each region, color and texture properties are used; in addition, edge and shape properties are used as features. Features are
stored in a tree structure that uses a self-organizing map (SOM). It is a region-based image retrieval system [14]. (v) SIMPLIcity (Semantics-sensitive Integrated Matching for Picture LIbraries) was developed by James Ze Wang, Gio Wiederhold, Jia Li and others at Stanford University [15]. It segments the image into 4 x 4 pixel blocks and extracts a feature vector for each block, then uses the k-means clustering approach to segment the image into regions. It is a region-based image retrieval system. (vi) UFM (Unified Feature Matching) was
developed by Chen and Wang [13]. The UFM scheme describes the similarity between images by incorporating properties of all the regions in the images. The similarity of two images is defined as the overall similarity between two families of fuzzy features, quantified by a similarity measure, the UFM measure, which incorporates properties of all the regions in the images. It is a region-based image retrieval system. (vii)
CLUE (CLUster based rEtrieval of images) was developed by Chen et al. [2] and [5]. CLUE, cluster-based retrieval of images by unsupervised learning, improves user interaction with image retrieval systems by exploiting the similarity information. CLUE retrieves image clusters by applying a graph-theoretic clustering algorithm to a collection of images in the neighborhood of the query. In particular, the clusters created depend on which images are retrieved in reply to the query.
III. UNSUPERVISED METHOD OF CBIR
Unsupervised learning involves learning patterns in the input when no definite output values are provided. It is differentiated from supervised learning in that the learner is provided only unlabeled patterns. Unsupervised learning is applied to the class of problems where one seeks to determine how the data are organized. In a standard CBIR system, target images (i.e., images in the database) are ranked by feature similarity with respect to the query. The CLUE technique improves user interaction with image retrieval systems by fully exploiting the similarity information. CLUE retrieves image clusters by applying a graph-theoretic clustering algorithm to a collection of images in the locality of the query [2] and [5].
The two recently recommended CBIR approaches, namely the color-shape and color-texture systems, are also based on graph clustering algorithms and aim to improve the performance of the CLUE algorithm (as shown in Figure 2) [1] and [7]. In the first approach, we sum the values of the color and shape visual contents to assign weights to the different images; on the basis of these weights, the relevant images are extracted from the image database. In the other approach, we sum the values of the color and texture features to assign the weights, and the relevant images are extracted from the image database in the same way.
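The score fusion in these two approaches can be sketched as follows. This is a minimal illustration only: the per-image feature scores and the 0.6 threshold shown in Figure 2 are used for illustration, and the exact normalization of the scores in [1] and [7] is an assumption.

```python
def fused_weight(score_a, score_b, threshold=0.6):
    """Sum two visual-content scores (e.g. color + shape, or color + texture)
    and keep the image only if the fused score passes the threshold
    (0.6, as in Figure 2)."""
    fused = score_a + score_b
    return fused if fused >= threshold else None

# Rank database images by fused weight, discarding sub-threshold ones.
# The image names and score pairs below are made up for the example.
scores = {"img1": (0.5, 0.4), "img2": (0.2, 0.1), "img3": (0.45, 0.3)}
weights = {name: fused_weight(c, s) for name, (c, s) in scores.items()}
ranked = sorted((n for n, w in weights.items() if w is not None),
                key=lambda n: weights[n], reverse=True)
```

Images whose fused score falls below the threshold are dropped before ranking, which matches the role of the threshold block in Figure 2.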
Figure 2: Structure of recommended CBIR system
The first step in the implementation of the CBIR system is to segment the image. To segment the image, we detect the edges of the input image: convert it into grayscale form and apply an edge detection technique [6] and [16]. Once the image is segmented, the second step is to extract the features: divide the segmented image into a number of regions (clusters) and extract the features of each region. Features are extracted in mainly three ways:
(i) color, (ii) shape, and (iii) texture. Color features are among the most important and extensively used low-level features in image databases.
[Figure 2 diagram: Image Database → Image Segmentation → Feature Extraction (histogram, color layout, regions, etc.) → Computation of Similarity (Euclidean distance, shape comparison, region matching, etc.), fusing the shape-color and texture-color features after applying a threshold of 0.6 → Visualisation of the values of similar images]
As given in Ref. [4], there are three central equations for color distribution
on the image for each region. Now we discuss the extraction of the relevant images from the large image database in an unsupervised way. First, note that the number of clusters is not fixed; it depends on some conditions. In our case, we set one condition: if a cluster has fewer than 100 images, that cluster is not clustered further. For example, Figure 3 shows an image database of 100 nodes (images) partitioned into the form of a tree structure [2].
Figure 3: Partitioned into the form of tree structure for N (100 images) nodes
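The preprocessing described above (grayscale conversion followed by edge detection) can be sketched as follows. This is a minimal, self-contained illustration using a Sobel operator; the specific operator is an assumption, since the paper only cites the techniques of [6] and [16].

```python
import numpy as np

def to_grayscale(rgb):
    # Standard luminance weighting for RGB-to-gray conversion.
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def sobel_edges(gray, threshold=0.5):
    # Sobel kernels approximate the horizontal and vertical gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    # Binary edge map: gradient magnitude above a fraction of its maximum.
    return mag > threshold * mag.max()

# A synthetic image with a bright square on a dark background:
img = np.zeros((32, 32, 3))
img[8:24, 8:24, :] = 1.0
edges = sobel_edges(to_grayscale(img))
```

On the synthetic square, the edge map fires only along the square's boundary; in the full system, the resulting edges feed the region segmentation.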
The following steps describe the retrieval of relevant images in an unsupervised way [16]:
Step 1: Assume the database has a total of 100 images, as shown in Figure 3, so N = 100; these are the sorted, pre-processed target images with respect to the query image.
Step 2: First, find the collection of neighboring target images with respect to the query image using the Nearest Neighbor Method (discussed in [5]).
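The nearest-neighbor collection of Step 2 can be sketched as follows, using the Euclidean distance on feature vectors (the distance measure the paper itself uses); the toy vectors are illustrative only.

```python
import math

def euclidean(u, v):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def nearest_neighbors(query_vec, target_vecs, m):
    """Return the indices of the m target feature vectors
    closest to the query, nearest first."""
    order = sorted(range(len(target_vecs)),
                   key=lambda i: euclidean(query_vec, target_vecs[i]))
    return order[:m]

# Toy 2-D feature vectors for a query and four target images.
query = [0.0, 0.0]
targets = [[0.1, 0.0], [5.0, 5.0], [0.0, 0.2], [3.0, 0.0]]
nn = nearest_neighbors(query, targets, m=2)
```

The returned indices identify the neighboring target images that enter the graph of Step 3.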
Step 3: Construct a weighted undirected graph that contains the query image and its neighboring target images, using the equations discussed in the previous section to compute the nonnegative weight of each edge.
Step 4: Now select a representative node: among these 100 target nodes, the node with the maximum sum value of its features is selected as the representative node of each cluster. Apply the normalized-cut (Ncut) equation to the graph to partition it into two subgraphs. If the similarity measure of a target node is less than that of the representative node, the image is confined to the left cluster C1 (containing 40 images), and the others go into C2, which contains 60 images (in the given example of 100 target images) [6].
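Step 4's bipartition can be sketched with the spectral relaxation of the normalized cut of Shi and Malik [6]: take the second-smallest generalized eigenvector of the graph Laplacian and split the nodes by its sign. The toy similarity matrix below is an illustrative assumption, not the paper's actual image graph.

```python
import numpy as np

def ncut_bipartition(W):
    """Partition a weighted undirected graph (similarity matrix W)
    into two groups using the normalized-cut relaxation [6]."""
    d = W.sum(axis=1)                 # node degrees
    d_isqrt = 1.0 / np.sqrt(d)
    # Symmetric normalized Laplacian: D^{-1/2} (D - W) D^{-1/2}
    L = d_isqrt[:, None] * (np.diag(d) - W) * d_isqrt[None, :]
    vals, vecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    # The second-smallest eigenvector carries the partition information.
    y = d_isqrt * vecs[:, 1]
    return y >= 0                     # boolean cluster labels

# Toy graph: two tight groups of three nodes, weakly linked.
W = np.array([
    [0.0, 1.0, 1.0, 0.1, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 1.0, 0.0],
])
labels = ncut_bipartition(W)
```

On this graph the sign split recovers the two tightly connected triples, mirroring how Ncut separates C1 from C2.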
Step 5: Cluster partitioning criterion: in this method, we select for partitioning the cluster that has the maximum number of nodes (images). In the above example, the second cluster (C2) has the maximum number of nodes, so we further divide C2 into two sub-clusters C3 and C4, and repeat Step 4 and Step 5 until the stopping criterion is satisfied.
Step 6: Stopping criterion: in our method, if a cluster has more than 100 images, that cluster is further divided into two sub-clusters (in the above example, purely for illustration, a cluster with more than 25 nodes (images) is divided into two new clusters; otherwise it is left as a leaf node).
Step 7: Retrieval of relevant images: start retrieving leaf clusters from left to right (in an inorder traversal). In our method, once we get the first 100 images we stop this procedure and re-initiate the program for the next query, and so on. For the given example, assume we want 50 relevant images. First, retrieve the left-most cluster C8, which yields 25 images (indexed 1 to 25 as per the similarity measure); then retrieve cluster C7 (moving left to right; it has 15 images, indexed 26 to 40), for a total of 40 images retrieved; then retrieve C5 (25 images). The total now exceeds the limit, so no further cluster retrieval is needed, and the top 50 relevant images are displayed.
Step 8: Save the collection of images in a file for all queries (for all iterations) and manually count the precision at different levels.
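The splitting-and-retrieval loop of Steps 4 to 7 can be sketched as follows. The bisection here is a placeholder (in the paper it is the Ncut partition of Step 4), so the sketch only illustrates the tree-building and the left-to-right leaf retrieval logic.

```python
def split(images):
    # Placeholder bisection; the paper uses the normalized-cut partition.
    mid = len(images) // 2
    return images[:mid], images[mid:]

def build_leaf_clusters(images, max_leaf_size):
    """Repeatedly split the largest cluster until every cluster is
    no larger than max_leaf_size, keeping left-to-right order."""
    clusters = [images]
    while max(len(c) for c in clusters) > max_leaf_size:
        # Step 5: pick the cluster with the maximum number of images.
        i = max(range(len(clusters)), key=lambda j: len(clusters[j]))
        left, right = split(clusters[i])
        clusters[i:i + 1] = [left, right]
    return clusters

def retrieve(images, k, max_leaf_size):
    # Step 7: concatenate leaf clusters left to right, stop once
    # at least k images have been collected, and return the top k.
    result = []
    for leaf in build_leaf_clusters(images, max_leaf_size):
        result.extend(leaf)
        if len(result) >= k:
            break
    return result[:k]

# 100 target images, sorted by similarity to the query (Step 1).
targets = list(range(100))
top50 = retrieve(targets, k=50, max_leaf_size=25)
```

With 100 sorted targets and a leaf size of 25, the traversal stops after two leaves and returns the 50 most similar images, mirroring the C8/C7/C5 example above.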
IV. RESULTS AND DISCUSSION
In this section, we present our results and compare the performance of the suggested systems with the two
other existing systems. Our systems used the same feature extraction technique as given in Refs. [2], [5] and [12]. We used the Euclidean distance as the similarity measure between the query and target images in the
database. We have used four COREL image databases; each database has its own size (number of images) and resolution, which we mention in the corresponding table captions. We have worked on approximately 80,000 images. Details of the databases are given in [8].
Performance of the system is measured using the precision statistic. In the sense of image (information) retrieval, precision is the ratio of retrieved images that are relevant to the total number of retrieved images [16]. Precision takes all retrieved images into account. In this work, we evaluated precision at a given cut-off rank (precision at k, where k = 10, 20, 30, ..., up to 100):

    Precision = |Relevant Images up to k| / |Total Images Retrieved up to k|
The calculation of the precisions is illustrated mathematically as follows. Let P be the average precision and P1, P2, P3, ..., P100 the precisions for image queries 1, 2, 3, ..., 100 of one particular category (each category has a total of 100 images). After computing these precisions, we took the average and reported it in the corresponding tables, as in the following equation:

    P (at k = 10) = (P1 + P2 + ... + P100) / 100
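The precision computation above can be sketched as follows; the relevance test (membership in the query's relevant set, e.g. the same COREL category) and the toy result lists are illustrative assumptions.

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved images that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for img in top_k if img in relevant) / k

def average_precision_at_k(per_query_results, k):
    # Average the per-query precisions, as in the equation above
    # (100 queries per category in the paper; 2 in this toy example).
    precisions = [precision_at_k(r, rel, k) for r, rel in per_query_results]
    return sum(precisions) / len(precisions)

# Toy example: 2 queries, each with 10 retrieved image IDs.
q1 = (list(range(10)), set(range(8)))   # 8 of the top 10 are relevant
q2 = (list(range(10)), {0, 1, 2, 3})    # 4 of the top 10 are relevant
avg = average_precision_at_k([q1, q2], k=10)
```

The same routine, evaluated at k = 10, 20, ..., 100, produces the entries reported in Tables V and VI.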
We have tested all four CBIR systems on all four databases: two are existing systems, UFM and CLUE, and two are recently recommended systems, Shape-Color and Texture-Color.
Database 1 contains 10,000 images of resolution 185 x 84. We randomly picked 100 general-purpose images from this database as queries and tested our systems. The averaged results are shown in Table I and graphically represented in Figure 4.
TABLE I. PRECISION RESULT OF DATABASE 1
                           Existing             Recommended
CBIR Systems               UFM       CLUE       Shape-Color   Texture-Color
Average Relevant Images    3.09      5.76       7.68          9.12
Average Retrieved Images   12        12         12            12
Average Precision          0.258     0.48       0.64          0.76
Database 2 also contains 10,000 images, of resolution 185 x 96. We chose 100 randomly generated image queries and retrieved the relevant images from this database for our systems. The averaged results are shown in Table II and graphically represented in Figure 5.
Database 3 contains 60,000 general-purpose images of resolution 185 x 85. We randomly picked 100 images as queries and retrieved the relevant images from this database for our systems. The averaged results are shown in Table III and graphically represented in Figure 6.
Figure 4: Average Precision of Database 1
TABLE II. PRECISION RESULT OF DATABASE 2
                           Existing             Recommended
CBIR Systems               UFM       CLUE       Shape-Color   Texture-Color
Average Relevant Images    13.32     14.245     10.625        11.00
Average Retrieved Images   37        39         25            25
Average Precision          0.36      0.365      0.425         0.44
Figure 5: Average Precision of Database 2
TABLE III. PRECISION RESULT OF DATABASE 3
                           Existing             Recommended
CBIR Systems               UFM       CLUE       Color-Shape   Color-Texture
Average Relevant Images    12.09     13.10      12.00         13.025
Average Retrieved Images   31        31         25            25
Average Precision          0.39      0.42       0.48          0.521
Figure 6: Average Precision of Database 3
Database 4 is a benchmark database (described in Table IV) containing 10 different categories of images; each category has 100 images of resolution 256 x 384. In this database, we used each image as a query and retrieved the relevant images at top k. Figure 7 and Figure 8 show one result for each recommended system (Shape-Color and Texture-Color) for a sample Bus query image. The average precision values of the Shape-Color and Texture-Color systems for each category at different precision levels (k = 10, 20, ..., 100) are shown in Table V and Table VI respectively. We took 100 random image queries from each of the ten categories of this database (Database 4), a total of 1000 random queries, and retrieved the relevant images from this database. Finally, we computed the average precision. Figure 9 is the graphical representation of Table V, and Figure 10 is the graphical representation of Table VI.
TABLE IV. DESCRIPTION OF COREL DATABASE 4
Category No Category Name Category No Category Name
1 African People 6 Elephants
2 Beach 7 Flowers
3 Buildings 8 Horses
4 Buses 9 Glaciers
5 Dinosaurs 10 Food
Figure 7: Result of Shape-Color system, first image is the query image: 16 Matches out of 25
Figure 8: Result of Texture-Color system, first image is the query image: 19 Matches out of 25
TABLE V. PERFORMANCE AT DIFFERENT PRECISION (K) FOR SHAPE-COLOR SYSTEM OF DATABASE 4
ID Name 10 20 30 40 50 60 70 80 90 100
1 People 0.70 0.685 0.667 0.636 0.615 0.575 0.568 0.555 0.542 0.53
2 Beach 0.68 0.645 0.617 0.586 0.554 0.535 0.505 0.475 0.446 0.42
3 Buildings 0.60 0.565 0.537 0.506 0.485 0.454 0.433 0.415 0.395 0.37
4 Buses 0.84 0.775 0.767 0.746 0.725 0.708 0.675 0.634 0.628 0.62
5 Dinosaurs 1.00 0.995 0.987 0.980 0.978 0.975 0.973 0.970 0.965 0.95
6 Elephants 0.58 0.535 0.487 0.436 0.395 0.367 0.335 0.315 0.325 0.30
7 Flowers 0.86 0.835 0.807 0.793 0.785 0.775 0.765 0.760 0.752 0.74
8 Horses 0.84 0.825 0.812 0.798 0.793 0.792 0.789 0.785 0.776 0.77
9 Mountains 0.54 0.500 0.497 0.436 0.395 0.372 0.345 0.332 0.315 0.30
10 Food 0.78 0.745 0.712 0.707 0.693 0.685 0.665 0.660 0.654 0.64
Avg All Categories 0.74 0.710 0.689 0.662 0.642 0.624 0.605 0.590 0.579 0.564
Figure 9: At different precision k for Shape-Color system of Database 4 for each category
TABLE VI. PERFORMANCE AT DIFFERENT PRECISION (K) FOR TEXTURE-COLOR SYSTEM OF DATABASE 4
ID Name 10 20 30 40 50 60 70 80 90 100
1 People 0.70 0.685 0.676 0.646 0.625 0.595 0.558 0.545 0.538 0.53
2 Beach 0.68 0.645 0.628 0.605 0.575 0.555 0.525 0.485 0.456 0.43
3 Buildings 0.64 0.595 0.567 0.546 0.525 0.494 0.463 0.435 0.415 0.40
4 Buses 0.88 0.845 0.817 0.786 0.755 0.728 0.705 0.674 0.668 0.65
5 Dinosaurs 1.00 0.995 0.979 0.967 0.957 0.952 0.947 0.935 0.929 0.92
6 Elephants 0.58 0.535 0.476 0.446 0.415 0.377 0.345 0.335 0.325 0.32
7 Flowers 0.86 0.845 0.837 0.813 0.795 0.785 0.780 0.778 0.774 0.77
8 Horses 0.84 0.825 0.812 0.798 0.793 0.792 0.789 0.785 0.780 0.78
9 Mountains 0.55 0.500 0.497 0.436 0.395 0.372 0.345 0.332 0.315 0.30
10 Food 0.78 0.745 0.727 0.707 0.693 0.684 0.665 0.660 0.654 0.65
Avg All Categories 0.75 0.722 0.702 0.675 0.653 0.633 0.612 0.596 0.604 0.575
Figure 10: At different precision k for Texture-Color system of Database 4 for each category
V. CONCLUSIONS
In this paper, we analyzed the results of four content-based image retrieval systems (UFM, CLUE, Shape-Color, and Texture-Color). We used four different COREL databases of varying sizes and resolutions. We used Database 4 for detailed analysis because most researchers use it as a benchmark. We analyzed the results at different precision levels (k) and compared the average performance. We found that the retrieval rate of the fused CBIR systems is better. We also observed that the Texture-Color system gives better results in all categories of images. We used the two features texture and shape, each in combination with the third feature, color; in future, we may combine these two features with each other. The quality of the clusters depends on the choice of the partitioning algorithm; in future, other graph-theoretic clustering techniques can also be tested for possible performance improvement. Our systems work only on a database that consists of images of the same resolution. In future, we may devise a technique to enable them to work also on a database consisting of images of varying resolution.
ACKNOWLEDGMENT
We wish to express our most sincere gratitude to Dr. Rashid Ali. We would like to extend our profound gratitude to Prof. Nesar Ahmad, Chairman of the Department of Computer Engineering, A.M.U., Aligarh, India, for providing various facilities during the study and experimental work. We also express our deep regards and thanks to all faculty members of the department.
REFERENCES
[1] S. M. Zakariya, R. Ali and N. Ahmad, “Combining visual features of an image at different precision value of
unsupervised content based image retrieval”, 2010 IEEE ICCIC, pp. 110 – 113, December 2010.
[2] Y. Chen, J. Z. Wang, and R. Krovetz, “CLUE: Cluster-Based Retrieval of Images by Unsupervised learning”, IEEE
Transaction on Image Processing, vol. 14, no. 8, pp. 1187-1201, August 2005.
[3] Pal, Nikhil R., and Sankar K. Pal. “A review on image segmentation techniques”, Pattern recognition, vol. 26, no. 9,
pp. 1277-1294, 1993.
[4] H. D. Cheng, X. H. Jiang, Y. Sun, and Jing Li Wang, “Color image segmentation: Advances & prospects”, Elsevier
Science, Pattern Recognition, vol. 34, no. 12, pp. 2259–2281, December 2001.
[5] Yixin Chen, James Z. Wang and Robert Krovetz, “Content-Based Image Retrieval by Clustering”, Proceedings of
the 5th ACM SIGMM International Workshop on Multi-media Information Retrieval, pp. 193-200, 2003.
[6] J. Shi and J. Malik, “Normalized cuts and image segmentation”, IEEE Transaction Pattern Analysis Machine
Intelligence, vol. 22, no. 8, pp. 888–905, August 2000.
[7] A. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, “Content-based image retrieval at the end of the early
years”, IEEE Transaction Pattern Analysis Machine Intelligence, vol. 22, No. 12, pp. 1349–1380, December 2000.
[8] S. M. Zakariya, R. Ali, and N. Ahmad, “Unsupervised Content Based Image Retrieval by Combining Visual
Features of an Image With A Threshold”, IJCCT, vol. 2, no. 4, pp. 204-209, December 2010.
[9] C. Carson, M. Thomas, S. Belongie, J.M. Hellerstein, and J. Malik, “Blobworld: A System for Region Based Image
Indexing and Retrieval,” Proceeding Visual Information Systems, pp. 509-516, June 1999.
[10] Y. Rubner, L. J. Guibas, and C. Tomasi, “The Earth Mover’s Distance Multi-Dimensional Scaling, and Color-Based
Image Retrieval”, Proceeding DARPA Image Understanding workshop, pp. 661-668, May 1997.
[11] W.Y. Ma and B. Manjunath, “NaTra: A Toolbox for Navigating Large Image Databases,” Proc. IEEE International
Conference on Image Processing, pp. 568-571, vol. 8, no. 20, February 1997.
[12] Y. Chen and J. Z. Wang, “A Region-Based Fuzzy Feature Matching Approach to Content-Based Image Retrieval”,
IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1252-1267, September 2002.
[13] Jorma Laaksonen, Markus Koskela, Sami Laakso, and Erkki Oja, “Picsom - Content-based image retrieval with
Self-Organizing Maps”, Pattern Recognition Letters, vol. 21, pp.1199–1207, June 2000.
[14] J. Z. Wang, J. Li, and G. Wiederhold, “SIMPLIcity: Semantics-Sensitive Integrated Matching for Picture Libraries”,
IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. 23, no. 9, pp. 947-963, September 2001.
[15] C. Carson, S. Belongie, H. Greenspan, and J. Malik, “Blobworld: Image Segmentation Using Expectation
Maximization and its application to Image Querying”, IEEE Transaction On Pattern Analysis and Machine
Intelligence, vol. 24, no. 8, pp. 924-937, 2002.
[16] S. M. Zakariya, Nesar. Ahmad and Rashid Ali, “Unsupervised Learning Method for Content Based Image
Retrieval”, LAP Lambert Academic Publishing, Germany, June 2013.