In this project, we show that an image can be reconstructed from local descriptors, with or without complete geometric metadata, in LabVIEW. We use greedy algorithms to progressively learn the missing information before reconstruction and colorization are performed. Our experiments show that most of the vital information about a query image can be recovered even if scale metadata is missing. Compared to images reconstructed with scale information, we find no significant decline in image quality, and the post-processed result closely resembles the original image. LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is a graphical programming language: programs are built from icons rather than lines of text. In contrast to text-based programming languages, where the order of instructions determines program execution, LabVIEW uses dataflow programming, where the flow of data determines execution. As the world becomes increasingly digitalized, it has become intractable to naively search for images using pixel information alone. As such, researchers have been looking for more efficient methods to perform image matching and retrieval.
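The greedy idea above can be illustrated with a toy sketch (this is not the project's LabVIEW implementation): each keypoint's missing scale is chosen from a set of candidates so as to minimize a reconstruction-error proxy. The `norm_mismatch` proxy, which pretends a descriptor's L2 norm tracks its true scale, is invented purely for illustration.

```python
import numpy as np

def greedy_scale_recovery(descriptors, candidate_scales, error_fn):
    # Assign scales one keypoint at a time; for each descriptor keep
    # the candidate scale that minimizes the reconstruction-error proxy.
    recovered = []
    for d in descriptors:
        best = min(candidate_scales, key=lambda s: error_fn(d, s))
        recovered.append(best)
    return recovered

# Toy error proxy (an assumption for this sketch): the descriptor's
# L2 norm is treated as correlating with its true scale.
def norm_mismatch(desc, scale):
    return abs(float(np.linalg.norm(desc)) - scale)

descs = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 0.0])]
print(greedy_scale_recovery(descs, [1.0, 2.0, 3.0], norm_mismatch))
# [1.0, 2.0, 3.0]
```

A real system would use an error proxy tied to actual patch reconstruction quality rather than descriptor norms.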
CONTENT RECOVERY AND IMAGE RETRIEVAL IN IMAGE DATABASE CONTENT RETRIEVING IN TE...Editor IJMTER
Digital images are used in magazines, blogs, websites, television and more. Digital image processing techniques are used for feature selection, pattern extraction, classification and retrieval. Color, texture and shape features are used in image processing, which also supports the computer graphics and computer vision domains. Scene text recognition is performed with two schemes: character recognizer and binary character classifier models. A character recognizer is trained to predict the category of a character in an image patch. A binary character classifier is trained for each character class to predict the presence of that category in an image patch. Scene text recognition is performed on detected text regions. A pixel-based layout analysis method is adopted to extract text regions and segment text characters in images. Text character segmentation exploits the color uniformity and horizontal alignment of text characters. A discriminative character descriptor is designed by combining several feature detectors and descriptors; the Histogram of Oriented Gradients (HOG) is used to build the character descriptors. Character structure is modeled for each character class by designing stroke configuration maps. The scene text extraction scheme also supports smart mobile devices. Text recognition methods serve text understanding and text retrieval applications. The text recognition scheme is enhanced with a content-based image retrieval process, and the system is integrated with additional representative and discriminative features for text structure modeling. The system is further enhanced to perform character- and word-level recognition using lexicon analysis, and the training process includes a word database update task.
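Since the character descriptor builds on the Histogram of Oriented Gradients, a minimal single-cell HOG sketch helps make the idea concrete. This is a generic illustration of the technique, not this paper's exact descriptor: gradient magnitude is accumulated into unsigned orientation bins and the histogram is L2-normalised.

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    # One HOG cell: accumulate gradient magnitude into unsigned
    # orientation bins (0-180 degrees), then L2-normalise.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(n_bins)
    bin_width = 180.0 / n_bins
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // bin_width) % n_bins] += m
    return hist / (np.linalg.norm(hist) + 1e-9)

# A vertical step edge: all gradient energy falls in the 0-degree bin.
patch = np.tile(np.array([0.0, 0.0, 255.0, 255.0]), (4, 1))
print(hog_cell(patch).argmax())  # 0
```

A full HOG descriptor would tile the patch into many such cells and concatenate block-normalised histograms.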
Semantic Concept Detection in Video Using Hybrid Model of CNN and SVM Classif...CSCJournals
In today's era of digitization and fast internet, many videos are uploaded to websites, and a mechanism is required to access these videos accurately and efficiently. Semantic concept detection achieves this task and is used in many applications such as multimedia annotation, video summarization, indexing and retrieval. Video retrieval based on semantic concepts is an efficient and challenging research area. Semantic concept detection bridges the semantic gap between the low-level features extracted from key-frames or shots of a video and their high-level interpretation as semantics. Semantic concept detection automatically assigns labels to video from a predefined vocabulary, a task usually treated as a supervised machine learning problem. The support vector machine (SVM) emerged as the default classifier choice for this task, but recently deep convolutional neural networks (CNN) have shown exceptional performance in this area; a CNN requires a large dataset for training. In this paper, we present a framework for semantic concept detection using a hybrid model of SVM and CNN. Global features such as color moments, HSV histogram, wavelet transform, grey-level co-occurrence matrix and edge orientation histogram are selected as low-level features extracted from the annotated ground-truth video dataset of TRECVID. In a second pipeline, deep features are extracted using a pretrained CNN. The dataset is partitioned into three segments to deal with the data imbalance issue. The two classifiers are trained separately on all segments, and a fusion of scores is performed to detect the concepts in the test dataset. System performance is evaluated using Mean Average Precision on the multi-label dataset. The performance of the proposed framework using the hybrid model of SVM and CNN is comparable to existing approaches.
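The score-fusion step can be sketched as a weighted late fusion of the per-concept scores from the two pipelines. This is an illustration only; the abstract does not specify the exact fusion rule, so the linear blend and the `alpha` weight are assumptions.

```python
import numpy as np

def fuse_scores(svm_scores, cnn_scores, alpha=0.5):
    # Weighted late fusion of per-concept detection scores from the
    # SVM and CNN pipelines; alpha balances the two classifiers.
    svm = np.asarray(svm_scores, dtype=float)
    cnn = np.asarray(cnn_scores, dtype=float)
    return alpha * svm + (1.0 - alpha) * cnn

fused = fuse_scores([0.2, 0.9, 0.4], [0.6, 0.7, 0.1], alpha=0.4)
print(fused.round(2))  # [0.44 0.78 0.22]
```

In practice the fused score per concept would be thresholded or ranked to produce the final multi-label detections.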
CNN FEATURES ARE ALSO GREAT AT UNSUPERVISED CLASSIFICATION cscpconf
This paper aims at providing insight into the transferability of deep CNN features to
unsupervised problems. We study the impact of different pretrained CNN feature extractors on
the problem of image set clustering for object classification as well as fine-grained
classification. We propose a rather straightforward pipeline combining deep-feature extraction
using a CNN pretrained on ImageNet with a classic clustering algorithm to classify sets of
images. This approach is compared to state-of-the-art image-clustering algorithms and
provides better results. These results strengthen the belief that supervised training of deep CNNs
on large datasets, with a large variability of classes, extracts better features than most carefully
engineered approaches, even for unsupervised tasks. We also validate our approach
on a robotic application, consisting of sorting and storing objects smartly based on clustering.
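A minimal version of such a pipeline can be sketched with random vectors standing in for the pretrained CNN features and a hand-rolled Lloyd's k-means as the classic clustering algorithm. Everything here is a toy stand-in for the paper's actual extractor and clustering choices.

```python
import numpy as np

def kmeans(X, k, iters=20):
    # Minimal Lloyd's k-means with deterministic, spread-out initialisation.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each sample to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute each center as the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Stand-in for deep features: two well-separated 8-D blobs of 10 "images".
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, (10, 8)), rng.normal(5, 0.1, (10, 8))])
labels = kmeans(feats, k=2)
print(labels)  # first ten images land in one cluster, last ten in the other
```

With real CNN features the only change is replacing `feats` with the output of a pretrained network's penultimate layer.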
A novel Image Retrieval System using an effective region based shape represen...CSCJournals
With recent improvements in methods for the acquisition and rendering of shapes, the need to retrieve shapes from large repositories has gained prominence. A variety of methods have been proposed that enable efficient querying of shape repositories for a desired shape or image. Many of these methods use a sample shape as a query and attempt to retrieve shapes from the database that are similar. This paper introduces a novel and efficient shape matching approach for the automatic identification of real-world objects. The identification process is applied to isolated objects and requires the segmentation of the image into separate objects, followed by the extraction of representative shape signatures and the similarity estimation of pairs of objects, considering the information extracted from the segmentation process and the shape signature. We compute a 1D shape signature function from a region shape and use it for region shape representation and retrieval through similarity estimation. The proposed region shape feature is much more efficient to compute than other region shape techniques, while remaining invariant to image transformations.
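A common choice of 1D shape signature is the centroid-distance function; the sketch below assumes that variant purely for illustration, since the abstract does not name its signature. Normalising by the maximum distance gives scale invariance; a circle's signature is constant.

```python
import numpy as np

def centroid_distance_signature(contour, n_samples=64):
    # 1D shape signature: distance from the region centroid to the
    # boundary points, resampled to a fixed length and scale-normalised.
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)
    d = np.interp(np.linspace(0, len(d) - 1, n_samples), np.arange(len(d)), d)
    return d / (d.max() + 1e-9)  # scale invariance

# A circle's centroid-distance signature is (nearly) constant.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
sig = centroid_distance_signature(circle)
print(sig.std() < 1e-6)  # True
```

Retrieval then reduces to comparing two signatures, e.g. with Euclidean distance after aligning their starting points.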
Myanmar Alphabet Recognition System Based on Artificial Neural Networkijtsrd
This paper describes a Myanmar alphabet recognition system based on a neural network. Typical pattern recognition systems are designed in two parts. The first part is a feature extractor that finds features within the data which are specific to the task being solved; here, edge detection is used to extract image features. Edge detection methods may be grouped into two categories, gradient and Laplacian. The gradient methods (Roberts, Prewitt, Sobel) detect edges by looking for the maximum and minimum in the first derivative of the image. In this paper, the Sobel edge operator is chosen because it generates more significant features for the Myanmar alphabet than the other techniques. The second part is the classifier: a multilayer perceptron network is designed for recognition. It is trained on the training data set and classifies the test data set, with each result shown in a result box and spoken aloud. These data sets are composed of all Myanmar alphabets. The MATLAB programming language is used for implementation and simulation. Myat Thida Tun, "Myanmar Alphabet Recognition System Based on Artificial Neural Network", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2, Issue-5, August 2018. URL: http://www.ijtsrd.com/papers/ijtsrd17054.pdf http://www.ijtsrd.com/engineering/information-technology/17054/myanmar-alphabet-recognition-system-based-on-artificial-neural-network/myat-thida-tun
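The Sobel step can be illustrated directly. This is a generic NumPy sketch of Sobel gradient magnitude, not the paper's MATLAB code; a naive sliding-window correlation is used instead of a library convolution to keep the block self-contained.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d_valid(img, kernel):
    # Naive 'valid'-mode sliding-window correlation, enough for a demo.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def sobel_magnitude(img):
    gx = convolve2d_valid(img, SOBEL_X)
    gy = convolve2d_valid(img, SOBEL_Y)
    return np.hypot(gx, gy)

# A vertical step edge produces its strongest response at the step.
img = np.zeros((5, 6))
img[:, 3:] = 255.0
print(sobel_magnitude(img).argmax(axis=1))  # [1 1 1]
```

The resulting edge map (optionally thresholded) is what would feed the multilayer perceptron classifier as its feature input.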
FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTERING AND QUERY SCHEME...Editor IJMTER
Web mining techniques are used to analyze web page contents and usage details. Human facial images are shared on the internet and tagged with additional information. Auto face annotation techniques are used to annotate facial images automatically; the annotations are used in online photo search and management. Classification techniques are used to assign the facial annotation, with supervised or semi-supervised machine learning techniques used to train the classification models. Facial images with labels are used in the training process; noisy and incomplete labels are referred to as weak labels. Search-based face annotation (SBFA) assigns annotations by mining weakly labeled facial images available on the World Wide Web (WWW). An unsupervised label refinement (ULR) approach is used to refine the labels of web facial images with machine learning techniques; the ULR scheme enhances label quality using a graph-based, low-rank learning approach. The training phase comprises facial image collection, facial feature extraction, feature indexing and label refinement learning steps. Similar-face retrieval and voting-based face annotation tasks are carried out in the testing phase. A Clustering-Based Approximation (CBA) algorithm is applied to improve scalability: a bisecting k-means clustering-based algorithm (BCBA) and a divisive clustering-based algorithm (DCBA) are used to group the facial images. A multi-step gradient algorithm is used for the label refinement process. The web face annotation scheme is enhanced to improve label quality with low refinement overhead. A noise reduction method is integrated with the label refinement process, and a duplicate name removal process is integrated with the system. The indexing scheme is enhanced with weight values for the labels, and social contextual information is used to manage query facial image relevancy issues.
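The voting-based annotation step can be sketched as a weighted majority vote over the labels of retrieved similar faces. This is a generic illustration of the voting idea; the names and the optional similarity weights are invented for the example.

```python
from collections import Counter

def vote_annotation(neighbor_labels, weights=None):
    # Weighted majority vote over the labels of retrieved similar faces.
    # With no weights, every retrieved neighbor counts equally.
    weights = weights or [1.0] * len(neighbor_labels)
    tally = Counter()
    for label, w in zip(neighbor_labels, weights):
        tally[label] += w
    return tally.most_common(1)[0][0]

print(vote_annotation(["alice", "bob", "alice", "carol"]))  # alice
```

In an SBFA system the weights would typically be the visual similarity scores between the query face and each retrieved face.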
A Novel Approach of Fuzzy Based Semi-Automatic Annotation for Similar Facial ...ijsrd.com
Auto face annotation plays an important role in many real-world multimedia applications. The search-based face annotation paradigm has recently become one of the research challenges in computer vision and image analysis. In this paper, the problem is to annotate weakly labeled facial images, whose labels are often duplicated names, noisy and incomplete. To handle this problem we use an effective semi-automatic annotation methodology with an unsupervised label refinement (ULR) approach that refines the labels of facial images using machine learning techniques, and a fuzzy clustering-based approximation algorithm is used to improve scalability considerably. Finally, we develop an optimization algorithm for solving the large-scale learning task. The proposed ULR algorithm improves performance over other ULR algorithms on the weak label matrix.
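The fuzzy clustering at the heart of the approximation can be sketched via the standard fuzzy c-means membership update, assuming that standard formulation since the abstract does not detail its algorithm: each sample belongs to every cluster with a degree, and the degrees per sample sum to 1.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    # Fuzzy c-means membership update: u[i, j] is the degree to which
    # sample i belongs to cluster j; each row sums to 1. m > 1 is the
    # fuzzifier (m = 2 is the common default).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [10.0, 0.0]])
centers = np.array([[0.0, 0.0], [10.0, 0.0]])
u = fcm_memberships(X, centers)
print(u.sum(axis=1))  # [1. 1.]
```

A full fuzzy c-means loop would alternate this update with recomputing centers as membership-weighted means until convergence.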
Image Maximization Using Multi Spectral Image Fusion Techniquedbpublications
This paper reports a detailed study of a set of image fusion algorithms and their implementation. The paper explains the theory and implementation of an effective image fusion algorithm and presents the experimental results; the fusion algorithms are evaluated using a set of image quality metrics. In this study, two different image fusion techniques have been applied to combine hyperspectral, low-spatial-resolution satellite images with high-spatial, low-spectral-resolution images, in order to obtain a fused image with increased spatial resolution while preserving as much spectral information as possible. These techniques are principal component analysis
(PCA) and wavelet transform (WT) image fusion. MATLAB is used to build the GUI to
apply the image fusion algorithms and render their results. Both subjective (visual) and objective evaluation of the fused image has been carried out to assess the success of each method. The objective evaluation methods include the correlation coefficient (CC), root mean square error (RMSE) and relative dimensionless global error in synthesis (ERGAS). The results show that the PCA method performs better at preserving spectral information but is less successful at increasing spatial resolution, while the WT fusion, performed after an IHS transformation, improves the spatial resolution and preserves spectral information well compared to the plain PCA and WT methods.
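PCA fusion is commonly implemented by weighting each source band by the leading principal component of their joint covariance; the sketch below shows that standard formulation, which is not necessarily the paper's exact variant. Here the second image is constant, so all the variance (and hence all the weight) goes to the first image.

```python
import numpy as np

def pca_fusion(img_a, img_b):
    # PCA image fusion: weight each source image by the components of
    # the leading eigenvector of their 2x2 joint covariance matrix,
    # normalised so the weights sum to 1.
    stack = np.stack([img_a.ravel(), img_b.ravel()]).astype(float)
    cov = np.cov(stack)
    vals, vecs = np.linalg.eigh(cov)
    pc = np.abs(vecs[:, np.argmax(vals)])
    w = pc / pc.sum()
    return w[0] * img_a + w[1] * img_b, w

a = np.array([[0.0, 4.0], [8.0, 12.0]])
b = np.array([[1.0, 1.0], [1.0, 1.0]])
fused, w = pca_fusion(a, b)
print(w.round(2))  # [1. 0.]  (image a carries all the variance)
```

Real multispectral fusion would apply this per band after co-registering and resampling the two inputs to a common grid.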
An Analysis of Various Deep Learning Algorithms for Image Processingvivatechijri
The many applications of image processing have given it a wide scope in data analysis.
Machine learning algorithms provide a powerful environment for training models to
identify the various entities in images and segment them accordingly. While image
classifiers such as Support Vector Machines (SVM) or Random Forests do justice to the task,
deep learning algorithms such as Artificial Neural Networks (ANN) and their descendants,
most notably the well-known and extremely powerful Convolutional Neural Network (CNN),
bring a new dimension to the image processing domain, with far higher accuracy and
computational power for classifying images and segregating their various entities as
individual components of the image working region. The major focus is on the Region-based
Convolutional Neural Network (R-CNN) algorithm and how well its successors, the Fast,
Faster and Mask R-CNN variants, provide pixel-level segmentation.
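Two building blocks shared by the whole R-CNN family are box intersection-over-union (IoU) and non-maximum suppression (NMS). The sketch below is a generic, from-scratch version of both, not code from any particular R-CNN implementation.

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, thresh=0.5):
    # Greedy NMS: keep the highest-scoring boxes, dropping any box
    # whose IoU with an already-kept box exceeds `thresh`.
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(int(i))
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
print(nms(boxes, [0.9, 0.8, 0.7]))  # [0, 2]
```

In Fast/Faster R-CNN this runs over thousands of region proposals per class; Mask R-CNN then predicts a pixel mask inside each surviving box.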
Survey on Multiple Query Content Based Image Retrieval SystemsCSCJournals
This paper reviews multiple-query approaches for Content-Based Image Retrieval systems (MQIR). These are recently proposed content-based image retrieval systems that enhance retrieval performance by conveying a richer understanding of the user's high-level interest to the retrieval system. By allowing the user to express his interest using a set of query images, MQIR systems bridge the semantic gap with the low-level image features. Nevertheless, the main challenge of MQIR systems is how to compute the distances between the set of query images and each image in the database in a way that enhances the retrieval results and reflects the high-level semantics the user is interested in. Several approaches for this have been reported in the literature. In this paper, we investigate existing multiple-query retrieval systems: we describe each approach, detail the way it computes the distances between the set of query images and each image in the database, and analyze its advantages and disadvantages in reflecting the high-level semantics meant by the user.
Tag based image retrieval (tbir) using automatic image annotationeSAT Journals
In recent years, several social networking sites have become popular for digitized images, which make up the major portion of their databases and cause search engines to face difficulty in searching. We present a proficient image retrieval technique which achieves eminent retrieval efficiency. Most images are annotated manually, so the visual content and tags may be mismatched; this leads to poor performance in Tag-Based Image Retrieval (TBIR). Automatic Image Annotation (AIA) analyzes the missing and noisy tags and refines them to increase the performance of TBIR. AIA can be achieved using the tag completion algorithm. The images retrieved by TBIR are ranked based on the relevancy of the tags and the visual content of the images; the relevancy is evaluated using a Content-Based Image Retrieval (CBIR) technique. Based on the ranks, the images are indexed in the tag matrix. Thus the images that match the search query can be retrieved in an optimal way. Keywords: Image Retrieval, Automatic Image Annotation, Tag Based Image Retrieval (TBIR), Tag Completion Algorithm, Content Based Image Retrieval (CBIR), Tag Matrix
Image retrieval and re ranking techniques - a surveysipij
There is a huge amount of research work focusing on the searching, retrieval and re-ranking of images in
image databases. The diverse and scattered work in this domain needs to be collected and organized for
easy and quick reference.
In this context, this paper gives a brief overview of various image retrieval and re-ranking
techniques. Starting with an introduction to existing systems, the paper proceeds through the core
architecture of an image harvesting and retrieval system to the different re-ranking techniques. These
techniques are discussed in terms of approaches, methodologies and findings, and are listed in tabular form
for quick review.
Hierarchical deep learning architecture for 10 k objects classificationcsandit
The evolution of visual object recognition architectures based on the Convolutional Neural Network and
Convolutional Deep Belief Network paradigms has revolutionized artificial vision science.
These architectures extract and learn real-world hierarchical visual features using
supervised and unsupervised learning approaches respectively. Neither approach can yet
scale up realistically to provide recognition for a very large number of objects, as high as 10K.
We propose a two-level hierarchical deep learning architecture, inspired by the divide-and-conquer
principle, that decomposes the large-scale recognition architecture into root- and leaf-level model
architectures. Each of the root- and leaf-level models is trained exclusively to provide superior
results to those possible with any one-level deep learning architecture prevalent today. The proposed
architecture classifies objects in two steps. In the first step, the root-level model classifies the
object into a high-level category. In the second step, the leaf-level recognition model for the
recognized high-level category is selected among all the leaf models; this leaf-level model is
presented with the same input object image, which it classifies into a specific category. We also
propose a blend of leaf-level models trained with either supervised or unsupervised learning
approaches; unsupervised learning is suitable whenever labelled data is scarce for specific
leaf-level models. Currently the training of the leaf-level models is in progress; we have
trained 25 of the total 47 leaf-level models so far, with a best-case top-5 error rate of 3.2%
on the validation data set for particular leaf models. We also demonstrate that the validation
error of the leaf-level models saturates towards the above-mentioned accuracy as the number
of epochs is increased beyond sixty. The top-5 error rate for the entire two-level architecture
needs to be computed in conjunction with the error rates of the root and all the leaf models.
The realization of this two-level visual recognition architecture will greatly enhance the
accuracy of the large-scale object recognition scenarios demanded by use cases as diverse as
drone vision, augmented reality, retail, image search and retrieval, robotic navigation, and
targeted advertisements.
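The two-step root/leaf routing described above can be sketched generically. The models below are toy stand-ins (simple threshold functions, not the trained deep networks), chosen only to show the control flow: the root picks a coarse category, which selects the leaf model that produces the fine label.

```python
def hierarchical_predict(x, root_model, leaf_models):
    # Step 1: the root model picks a high-level category.
    coarse = root_model(x)
    # Step 2: the leaf model for that category gives the fine label.
    return coarse, leaf_models[coarse](x)

# Toy stand-ins for the trained root and leaf models.
root = lambda x: "animal" if x < 10 else "vehicle"
leaves = {
    "animal": lambda x: "cat" if x < 5 else "dog",
    "vehicle": lambda x: "car" if x < 15 else "truck",
}
print(hierarchical_predict(3, root, leaves))   # ('animal', 'cat')
print(hierarchical_predict(17, root, leaves))  # ('vehicle', 'truck')
```

Note the failure mode this structure implies: if the root misroutes an input, the correct fine label is unreachable, which is why the overall top-5 error must combine root and leaf error rates.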
Applications of spatial features in cbir a surveycsandit
With advances in the computer technology and the World Wide Web there has been an
explosion in the amount and complexity of multimedia data that are generated, stored,
transmitted, analyzed, and accessed. In order to extract useful information from this huge
amount of data, many content based image retrieval (CBIR) systems have been developed in the
last decade. A typical CBIR system captures image features that represent image properties
such as color, texture, or shape of objects in the query image and tries to retrieve images from the
database with similar features. Retrieval efficiency and accuracy are the important issues in
designing a content-based image retrieval system. Shape and spatial features are quite easy
and simple to derive, and effective. Researchers are moving towards finding spatial features and
the scope of incorporating these features into the image retrieval framework to reduce the
semantic gap. This survey paper focuses on a detailed review of the different methods, and their
evaluation techniques, used in recent works based on spatial features in CBIR systems.
Finally, several recommendations for future research directions have been suggested based on
the recent technologies.
Content-Based Image Retrieval (CBIR) aims at retrieving images from a database based on a user query that is in visual rather than traditional text form. The applications of CBIR extend from surveillance to remote sensing, medical imaging to weather forecasting, and security systems to historical research, and so on. Though extensive research has been done on content-based image retrieval in the spatial domain, most images on the internet are JPEG compressed, which pushes the need for image retrieval in the compressed domain itself rather than decoding to raw format before comparison and retrieval. This research addresses the need to retrieve images from the database based on features extracted in the compressed domain, along with the application of a genetic algorithm to improve the retrieval results. The research focuses on various features and their levels of impact on improving the precision and recall of the CBIR system. Our experimental results also indicate that compressed-domain CBIR features combined with the genetic algorithm improve the results considerably when compared with techniques from the literature.
K2 Algorithm-based Text Detection with An Adaptive Classifier ThresholdCSCJournals
In natural scene images, text detection is a challenging study area for various content-based image analysis tasks. In this paper, Bayesian network scores are used to classify candidate character regions by computing posterior probabilities, which in turn define an adaptive threshold to detect text in scene images accurately. Candidate character regions are extracted as maximally stable extremal regions. K2 algorithm-based Bayesian network scores are learned by evaluating dependencies among the features of a given candidate character region. A Bayesian logistic regression classifier is trained to compute the posterior probabilities that define the adaptive classifier threshold. Candidate character regions below the adaptive classifier threshold are discarded as non-character regions. Finally, text regions are detected with an effective text localization scheme based on geometric features. The entire system is evaluated on the ICDAR 2013 competition database. Experimental results show competitive performance (precision, recall and harmonic mean) against recently published algorithms.
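The adaptive-threshold idea, keeping only regions whose classifier posterior clears a data-driven cutoff, can be sketched as follows. The logistic posterior is standard; using the mean posterior of the current image's candidates as the threshold is an assumption made purely for illustration, and the paper's exact threshold rule may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_filter(region_features, w, b):
    # Posterior P(character | region) from a trained logistic classifier.
    post = sigmoid(region_features @ w + b)
    # Adaptive threshold: depends on the image's own candidate set,
    # rather than a fixed global constant.
    thresh = post.mean()
    keep = np.flatnonzero(post >= thresh)
    return keep, post, thresh

# Four candidate regions, each with one toy feature; w, b are assumed
# pre-trained weights for the sketch.
feats = np.array([[2.0], [1.5], [-3.0], [-2.5]])
keep, post, t = adaptive_filter(feats, w=np.array([1.0]), b=0.0)
print(keep)  # [0 1]  (the two high-posterior candidates survive)
```

The surviving regions would then be grouped into text lines by the geometric localization scheme.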
Myanmar Alphabet Recognition System Based on Artificial Neural Networkijtsrd
This paper describes Myanmar Alphabet Recognition System Based on Neural Network. Typical pattern recognition systems are designed using two parts. The first part is a feature extractor that finds features within the data, which are specific to the task being solved. Edge detection method is used to extract image' features. It may be grouped into two categories, gradient and Laplacian. The gradient method (Roberts, Prewitt, Sobel) detects the edges by looking for the maximum and minimum in the first derivative of the image. In this paper, Sobel edge operator is chosen because it can generate the significant features for Myanmar Alphabet than other techniques. The second part is the classifier; Multilayer Perceptron Network is designed for recognition purpose. It is used to train the train data set and classify the test data set that it is shown with its result box and sound. These data sets are composed of all Myanmar alphabets. For programming and simulation of this paper, MATLAB Programming Language is used for implementation. Myat Thida Tun"Myanmar Alphabet Recognition System Based on Artificial Neural Network" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-5 , August 2018, URL: http://www.ijtsrd.com/papers/ijtsrd17054.pdf http://www.ijtsrd.com/engineering/information-technology/17054/myanmar-alphabet-recognition-system-based-on-artificial-neural-network/myat-thida-tun
FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTRING AND QUERY SCHEME...Editor IJMTER
Web mining techniques are used to analyze the web page contents and usage details. Human facial
images are shared in the internet and tagged with additional information. Auto face annotation techniques are used
to annotate facial images automatically. Annotations are used in online photo search and management.
Classification techniques are used to assign the facial annotation. Supervised or semi-supervised machine learning
techniques are used to train the classification models. Facial images with labels are used in the training process.
Noisy and incomplete labels are referred as weak labels. Search-based face annotation (SBFA) is assigned by
mining weakly labeled facial images available on the World Wide Web (WWW). Unsupervised label refinement
(ULR) approach is used for refining the labels of web facial images with machine learning techniques. ULR
scheme is used to enhance the label quality using graph-based and low-rank learning approach. The training phase
is designed with facial image collection, facial feature extraction, feature indexing and label refinement learning
steps. Similar face retrieval and voting based face annotation tasks are carried out under the testing phase.
Clustering-Based Approximation (CBA) algorithm is applied to improve the scalability. Bisecting K-means
clustering based algorithm (BCBA) and divisive clustering based algorithm (DCBA) are used to group up the
facial images. Multi step Gradient Algorithm is used for label refinement process. The web face annotation scheme
is enhanced to improve the label quality with low refinement overhead. A noise reduction method is integrated
with the label refinement process. A duplicate name removal process is integrated with the system. The indexing
scheme is enhanced with weight values for the labels. Social contextual information is used to manage the query
facial image relevancy issues.
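The clustering-based approximation idea above (grouping facial images so that annotation only searches within a cluster) can be illustrated with a minimal bisecting k-means sketch. The point data, squared-distance measure, and split rule below are illustrative choices, not the BCBA implementation from the paper:

```python
def sq_dist(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def two_means(points, iters=10):
    """Split one cluster into two with a few Lloyd iterations."""
    c0, c1 = points[0], points[-1]
    for _ in range(iters):
        a = [p for p in points if sq_dist(p, c0) <= sq_dist(p, c1)]
        b = [p for p in points if sq_dist(p, c0) > sq_dist(p, c1)]
        if not a or not b:
            return points, []
        c0, c1 = centroid(a), centroid(b)
    return a, b

def bisecting_kmeans(points, k):
    clusters = [list(points)]
    while len(clusters) < k:
        clusters.sort(key=len)          # always bisect the largest cluster
        a, b = two_means(clusters.pop())
        if not b:                       # degenerate split: stop early
            clusters.append(a)
            break
        clusters += [a, b]
    return clusters

# Three well-separated groups of 2-D "feature vectors":
points = [(0, 0), (1, 0), (0, 1),
          (10, 0), (11, 0), (10, 1),
          (0, 10), (1, 10), (0, 11)]
groups = bisecting_kmeans(points, 3)
```

In an SBFA setting the points would be facial feature vectors, and a query image would be matched only against the images in its nearest cluster, which is what makes the approach scale.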
A Novel Approach of Fuzzy Based Semi-Automatic Annotation for Similar Facial ... - ijsrd.com
Auto face annotation plays an important role in many real-world multimedia applications. The search-based face annotation paradigm has recently become one of the research challenges in computer vision and image analysis. In this paper, the problem we address is that most weakly labeled facial images carry duplicate names and noisy, incomplete labels. To handle this problem we use an effective semi-automatic annotation methodology with an unsupervised label refinement (ULR) approach for refining the labels of facial images using machine learning techniques, and a fuzzy clustering-based approximation algorithm is applied to improve scalability considerably. Finally, an optimization algorithm is developed for solving the large-scale learning task. The proposed ULR algorithm improves performance over other ULR algorithms on the weak label matrix.
Image Maximization Using Multi Spectral Image Fusion Technique - dbpublications
This paper reports a detailed study of a set of image fusion algorithms and their implementation. It explains the theory and implementation of an effective image fusion algorithm and presents the experimental results; the fusion algorithm is evaluated with a set of image quality metrics developed in prior research. In this study, two different image fusion techniques have been applied to combine hyperspectral satellite images of low spatial resolution with images of high spatial and low spectral resolution, to obtain a fused image with increased spatial resolution while preserving as much spectral information as possible. These techniques are principal component analysis
(PCA) and wavelet transform (WT) image fusion. MATLAB is used to build the GUI that applies the image fusion algorithms and renders their results. Both subjective (visual) and objective evaluations of the fused image have been carried out to assess the success of each method. The objective evaluation methods include the correlation coefficient (CC), root mean square error (RMSE), and relative dimensionless global error in synthesis (ERGAS). The results show that the PCA method performs better at preserving spectral information but is less successful at increasing spatial resolution. The WT fusion, performed after an IHS transformation, improves the spatial resolution while preserving spectral information better than the plain PCA and WT methods.
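Two of the objective metrics named above, RMSE and CC, are straightforward to state in code. A small sketch assuming images are flattened to lists of pixel values (ERGAS is omitted):

```python
import math

def rmse(ref, fused):
    """Root mean square error between a reference and a fused image."""
    return math.sqrt(sum((r - f) ** 2 for r, f in zip(ref, fused)) / len(ref))

def cc(ref, fused):
    """Pearson correlation coefficient between two images."""
    mr = sum(ref) / len(ref)
    mf = sum(fused) / len(fused)
    num = sum((r - mr) * (f - mf) for r, f in zip(ref, fused))
    den = math.sqrt(sum((r - mr) ** 2 for r in ref) *
                    sum((f - mf) ** 2 for f in fused))
    return num / den

ref = [10, 20, 30, 40]      # reference pixel values (toy example)
fused = [12, 18, 33, 39]    # fused-image pixel values
```

Lower RMSE and CC close to 1 both indicate that the fused image preserves the reference's spectral content, which is how the paper's comparison between PCA and WT fusion is scored.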
An Analysis of Various Deep Learning Algorithms for Image Processing - vivatechijri
The various applications of image processing have given it a wide scope in data analysis.
Machine learning algorithms provide a powerful environment for training models to
identify the various entities of images and segment them accordingly. One can observe that although
image classifiers such as Support Vector Machines (SVM) or Random Forest algorithms do justice to the task,
deep learning algorithms such as Artificial Neural Networks (ANN) and their descendants, above all the well-known
and extremely powerful Convolutional Neural Network (CNN), can provide a new dimension to the
image processing domain, with far higher accuracy and computational power for classifying images
and segregating their various entities as individual components of the image working region. The major focus is
on the Region-based Convolutional Neural Network (R-CNN) algorithm and how well it provides pixel-level
segmentation through its better successors, the Fast, Faster, and Mask R-CNN versions.
Survey on Multiple Query Content Based Image Retrieval Systems - CSCJournals
This paper reviews multiple query approaches for Content-Based Image Retrieval systems (MQIR). These recently proposed Content-Based Image Retrieval systems enhance retrieval performance by conveying a richer understanding of the user's high-level interest to the retrieval system. By allowing the user to express his interest using a set of query images, MQIR systems bridge the semantic gap with the low-level image features. Nevertheless, the main challenge of MQIR systems is how to compute the distances between the set of query images and each image in the database in a way that enhances the retrieval results and reflects the high-level semantics the user is interested in. For this matter, several approaches have been reported in the literature. In this paper, we investigate existing multiple query retrieval systems. We describe each approach, detail the way it computes the distances between the set of query images and each image in the database, and analyze its advantages and disadvantages in reflecting the high-level semantics meant by the user.
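The core MQIR computation discussed above, scoring each database image against a whole set of query images, can be sketched as follows. The minimum distance over the query set used here is one common aggregation choice among those surveyed (centroid or average distance are alternatives), and the feature vectors are made-up stand-ins:

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rank_by_query_set(queries, database):
    """Return database indices sorted by minimum distance to any query."""
    scores = [min(euclidean(q, img) for q in queries) for img in database]
    return sorted(range(len(database)), key=lambda i: scores[i])

queries = [(0.0, 0.0), (10.0, 10.0)]             # two query feature vectors
database = [(9.0, 9.0), (5.0, 5.0), (0.5, 0.0)]  # database feature vectors
order = rank_by_query_set(queries, database)
```

A database image close to *any* of the query images ranks highly, which is exactly how a set of queries conveys a broader high-level interest than a single example could.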
Tag based image retrieval (tbir) using automatic image annotation - eSAT Journals
Abstract: In recent days, several social networking sites have become popular with digitized images. These images comprise the major portion of databases, which makes it difficult for search engines to search them. We present a proficient image retrieval technique which achieves eminent retrieval efficiency. Most images are annotated manually, so the visual content and tags may be mismatched, which leads to poor performance in Tag Based Image Retrieval (TBIR). Automatic Image Annotation (AIA) analyzes missing and noisy tags and refines them to increase the performance of TBIR. AIA can be achieved using the Tag Completion algorithm. The images retrieved by TBIR are ranked based on the relevancy of the tags and the visual content of the images. The relevancy can be evaluated using the Content Based Image Retrieval (CBIR) technique. Based on the ranks, the images are indexed in the tag matrix. Thus the images that match the search query can be retrieved in an optimal way. Keywords: Image Retrieval, Automatic Image Annotation, Tag Based Image Retrieval (TBIR), Tag Completion Algorithm, Content Based Image Retrieval (CBIR), Tag Matrix
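A minimal sketch of the tag-based retrieval and ranking idea above, assuming each image carries a set of tags and relevancy is the fraction of query tags matched (an illustrative score, not the paper's tag-matrix formulation):

```python
def tbir_rank(query_tags, tagged_images):
    """tagged_images: dict of image_id -> set of tags.
    Return image ids matching any query tag, best score first."""
    query = set(query_tags)
    hits = {img: len(query & tags) / len(query)
            for img, tags in tagged_images.items() if query & tags}
    return sorted(hits, key=hits.get, reverse=True)

# Hypothetical tagged collection:
images = {
    "beach.jpg":  {"sea", "sand", "sun"},
    "city.jpg":   {"building", "street"},
    "sunset.jpg": {"sun", "sea"},
}
result = tbir_rank(["sea", "sand"], images)
```

In the full system the score would additionally weigh visual similarity from CBIR, so that images with noisy or incomplete tags can still rank correctly after tag completion.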
Image retrieval and re ranking techniques - a survey - sipij
There is a huge amount of research work focusing on the searching, retrieval and re-ranking of images in
the image database. The diverse and scattered work in this domain needs to be collected and organized for
easy and quick reference.
Relating to the above context, this paper gives a brief overview of various image retrieval and re-ranking
techniques. Starting with an introduction to existing systems, the paper proceeds through the core
architecture of an image harvesting and retrieval system to the different re-ranking techniques. These
techniques are discussed in terms of approaches, methodologies and findings and are listed in tabular form
for quick review.
Hierarchical deep learning architecture for 10K objects classification - csandit
Evolution of visual object recognition architectures based on the Convolutional Neural
Network & Convolutional Deep Belief Network paradigms has revolutionized artificial vision science.
These architectures extract & learn real-world hierarchical visual features utilizing
supervised & unsupervised learning approaches respectively. Yet neither approach can
scale up realistically to provide recognition for a very large number of objects, as high as 10K.
We propose a two level hierarchical deep learning architecture inspired by divide & conquer
principle that decomposes the large scale recognition architecture into root & leaf level model
architectures. Each of the root & leaf level models is trained exclusively to provide superior
results than possible by any 1-level deep learning architecture prevalent today. The proposed
architecture classifies objects in two steps. In the first step the root level model classifies the
object in a high level category. In the second step, the leaf level recognition model for the
recognized high level category is selected among all the leaf models. This leaf level model is
presented with the same input object image which classifies it in a specific category. Also we
propose a blend of leaf level models trained with either supervised or unsupervised learning
approaches. Unsupervised learning is suitable whenever labelled data is scarce for the specific
leaf level models. Currently the training of leaf level models is in progress; where we have
trained 25 out of the total 47 leaf level models as of now. We have trained the leaf models with
the best case top-5 error rate of 3.2% on the validation data set for the particular leaf models.
Also we demonstrate that the validation error of the leaf level models saturates towards the
above mentioned accuracy as the number of epochs is increased to more than sixty. The top-5
error rate for the entire two-level architecture needs to be computed in conjunction with the
error rates of root & all the leaf models. The realization of this two level visual recognition
architecture will greatly enhance the accuracy of the large scale object recognition scenarios
demanded by use cases as diverse as drone vision, augmented reality, retail, image search
& retrieval, robotic navigation, targeted advertisements etc.
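The two-step classification above can be sketched with stand-in callables in place of trained networks: a root model picks the high-level category, which selects the leaf model that produces the final fine-grained label. The category names and decision rules here are hypothetical:

```python
def root_model(image):
    # Stand-in for the root-level model: coarse category from one feature.
    return "animal" if image["furry"] else "vehicle"

# One leaf-level model per high-level category (stand-in rules):
LEAF_MODELS = {
    "animal":  lambda img: "cat" if img["size"] < 5 else "horse",
    "vehicle": lambda img: "car" if img["size"] < 5 else "truck",
}

def classify(image):
    category = root_model(image)                    # step 1: root level
    return category, LEAF_MODELS[category](image)   # step 2: leaf level

label = classify({"furry": True, "size": 3})
```

The divide & conquer payoff is that each leaf model only needs to discriminate within its own category, so no single network has to separate all 10K classes at once.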
Applications of spatial features in cbir - a survey - csandit
With advances in computer technology and the World Wide Web, there has been an
explosion in the amount and complexity of multimedia data that are generated, stored,
transmitted, analyzed, and accessed. In order to extract useful information from this huge
amount of data, many content based image retrieval (CBIR) systems have been developed in the
last decade. A typical CBIR system captures image features that represent image properties
such as color, texture, or shape of objects in the query image and tries to retrieve images from the
database with similar features. Retrieval efficiency and accuracy are the important issues in
designing a Content Based Image Retrieval system. Shape and spatial features are quite easy
and simple to derive, and effective. Researchers are moving towards finding spatial features and
the scope of incorporating these features into the image retrieval framework to reduce the
semantic gap. This Survey paper focuses on the detailed review of different methods and their
evaluation techniques used in the recent works based on spatial features in CBIR systems.
Finally, several recommendations for future research directions have been suggested based on
the recent technologies.
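The simplest of the features mentioned above, a coarse color histogram, paired with histogram intersection as the similarity measure, can be sketched as follows. The bin count and the similarity measure are illustrative choices, not taken from any particular surveyed system:

```python
def histogram(pixels, bins=4, max_val=256):
    """Normalized histogram of pixel intensities over coarse bins."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    total = len(pixels)
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

dark = histogram([10, 20, 30, 40])
also_dark = histogram([15, 25, 35, 45])
bright = histogram([200, 210, 220, 230])
```

A real CBIR system would combine such global features with the spatial features the survey focuses on, since a histogram alone discards all layout information.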
Content Based Image Retrieval (CBIR) aims at retrieving images from a database based on a user query which is in visual form rather than the traditional text form. The applications of CBIR extend from surveillance to remote sensing, medical imaging to weather forecasting, and security systems to historical research, and so on. Though extensive research has been done on content based image retrieval in the spatial domain, most images on the internet are JPEG compressed, which pushes the need for image retrieval in the compressed domain itself rather than decoding to raw format before comparison and retrieval. This research addresses the need to retrieve images from the database based on features extracted in the compressed domain, along with the application of a genetic algorithm to improve the retrieval results. The research focuses on various features and their levels of impact on improving the precision and recall parameters of the CBIR system. Our experimental results also indicate that using CBIR features in the compressed domain together with the genetic algorithm improves the results considerably when compared with the literature techniques.
K2 Algorithm-based Text Detection with An Adaptive Classifier Threshold - CSCJournals
In natural scene images, text detection is a challenging study area for dissimilar content-based image analysis tasks. In this paper, Bayesian network scores are used to classify candidate character regions by computing posterior probabilities, which are used to define an adaptive threshold to detect text in scene images with accuracy. Candidate character regions are extracted through maximally stable extremal regions (MSER). K2 algorithm-based Bayesian network scores are learned by evaluating dependencies among the features of a given candidate character region. A Bayesian logistic regression classifier is trained to compute the posterior probabilities that define the adaptive classifier threshold. Candidate character regions below the adaptive classifier threshold are discarded as non-character regions. Finally, text regions are detected using an effective text localization scheme based on geometric features. The entire system is evaluated on the ICDAR 2013 competition database. Experimental results show competitive performance (precision, recall and harmonic mean) with recently published algorithms.
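The adaptive-threshold filtering step described above can be sketched as follows, assuming each candidate region already carries a posterior probability from the classifier. The threshold rule used here (mean minus one standard deviation of the scores) is an illustrative stand-in, not the paper's exact definition:

```python
import statistics

def adaptive_filter(candidates):
    """candidates: list of (region_id, posterior) pairs.
    Keep regions scoring at or above a threshold derived from the scores."""
    scores = [p for _, p in candidates]
    # Illustrative adaptive rule: mean minus one (population) std deviation.
    threshold = statistics.mean(scores) - statistics.pstdev(scores)
    return [rid for rid, p in candidates if p >= threshold]

# Hypothetical candidate regions with classifier posteriors:
regions = [("r1", 0.95), ("r2", 0.90), ("r3", 0.10), ("r4", 0.85)]
kept = adaptive_filter(regions)
```

Because the threshold adapts to the score distribution of each image, a fixed cut-off does not have to be tuned per scene, which is the point of the adaptive scheme.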
Shallow vs. Deep Image Representations: A Comparative Study with Enhancements... - CSCJournals
The traditional approach for solving the object recognition problem requires image representations to be first extracted and then fed to a learning model such as an SVM. These representations are handcrafted and heavily engineered by running the object image through a sequence of pipeline steps, which requires good prior knowledge of the problem domain in order to engineer them. Moreover, since the classification is done in a separate step, the resultant handcrafted representations are not tuned by the learning model, which prevents it from learning complex representations that might give it more discriminative power. However, in end-to-end deep learning models, image representations along with the classification decision boundary are all learnt directly from the raw data, requiring no prior knowledge of the problem domain. These models deeply learn the object image representation hierarchically in multiple layers corresponding to multiple levels of abstraction, resulting in representations that are more discriminative and give better results on challenging benchmarks. In contrast to traditional handcrafted representations, the performance of deep representations improves with the introduction of more data and more learning layers (more depth), and they perform well on large-scale machine learning problems.
The purpose of this study is sixfold: (1) review the literature of the pipeline processes used in the previous state-of-the-art codebook model approach for tackling the problem of generic object recognition, (2) introduce several enhancements in the local feature extraction and normalization steps of the recognition pipeline, (3) compare the proposed enhancements across different encoding methods and contrast them with previous results, (4) experiment with current state-of-the-art deep model architectures used for object recognition, (5) compare deep representations extracted from the deep learning model with shallow representations handcrafted through the recognition pipeline, and finally, (6) improve the results further by combining multiple different deep learning models into an ensemble and taking the maximum posterior probability.
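The ensemble step in point (6) can be sketched directly: each model emits a class-posterior vector for the same image, and the ensemble reports the class achieving the maximum posterior across all models. The probability vectors below are made-up stand-ins for real model outputs:

```python
def ensemble_max_posterior(model_outputs, classes):
    """model_outputs: list of per-model probability vectors over classes.
    Return the (class, probability) pair with the highest posterior."""
    best_class, best_prob = None, -1.0
    for probs in model_outputs:
        for cls, p in zip(classes, probs):
            if p > best_prob:
                best_class, best_prob = cls, p
    return best_class, best_prob

classes = ["cat", "dog", "car"]
outputs = [
    [0.60, 0.30, 0.10],   # model 1 posteriors
    [0.20, 0.75, 0.05],   # model 2 posteriors
    [0.50, 0.40, 0.10],   # model 3 posteriors
]
winner = ensemble_max_posterior(outputs, classes)
```

Taking the maximum posterior lets the most confident model decide; averaging the posteriors before the argmax is the usual alternative when individual models are poorly calibrated.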
The main objective of this work is the unification and streamlining of an automatic face detection and recognition system for video indexing applications. Here, human identification includes the classification of gender, which can increase identification accuracy. Accurate gender classification algorithms may therefore increase the accuracy of such applications and reduce their complexity. However, some applications face challenges, such as rotation and gray scale variations, that may reduce accuracy. The main goal of building this module is to understand image, pattern, and array processing with OpenCV for effectively processing faces, building the pipeline, and training SVM models.
this study, to provide a better view of the operation system in the four main regional hubs, within a large group
of 32 airports reported in the dataset.
Linear attenuation coefficient (휇) is a measure of the ability of a medium to diffuse and absorb radiation. In the interaction of radiation with matter, the linear absorption coefficient plays an important role because during the passage of radiation through a medium, its absorption depends on the wavelength of the radiation and the thickness and nature of the medium. Experiments to determine linear absorption coefficient for Lead, Copper and Aluminum were carried out in air. The result showed that linear absorption Coefficient for Lead is 0.545cm – 1, Copper is 0.139cm-1 and Aluminum is 0.271cm-1 using gamma-rays. The results agree with standard values.
This study presents results of Activity Concentrations, Absorbed dose rate and the Annual Effective dose rates of naturally occurring radionuclides (40K, 232Th and 226Ra) absorbed in 8 soil samples collected from different areas within the Ajiwei mining sites in Niger State, North Central Nigeria. A laboratory γ-ray spectrometry NaI (Tl) at the Centre for Energy Research and Training (CERT), Ahmadu Bello University Zaria, was used to carry out the analysis of the soil samples. The values of Activity Concentration for 40K ranged from 421.6174 ± 7.9316 to 768.7403 ± 7.9315; for 226Ra it ranged from 20.6257 ± 2.0858 to 44.0324 ± 5.0985 and for 232Th the ranged is from 23.7172 ± 1.3683 to 62.7137 ± 4.1049 Bq.Kg-1. While the Absorbed Dose for 40K ranged from 17.5814 ± 0.3307 to 32.0565 ± 0.3307 ŋGy.h-1, for 226Ra the range is from 9.5291 ± 0.9636 to 20.3430 ± 2.3555 ŋGy.h-1 and for 232Th range from 14.3252 ± 0.4414 to 37.8791 ± 2.4794 ŋGy.h-1. The total average Absorbed Dose rate of the 8 soil samples collected is 63.7877 ŋGy.h-1 and the estimated Annual Effective Dose for the sampled areas range from 0.0636- 0.1028mSvy-1 (i.e 64 – 103 μSv.y-1), with an average Annual Effective Dose of 0.0782 mSv.y-1 (i.e. 78.2 μSv.y-1). These results show’s that the radiation exposure level reaching members of the public in the study areas is lower than the recommended limit value of 1 mSv.y-1 (UNSCEAR, 2000). Also the mean Radium Equivalents obtained ranged from 107.3259 BqKg-1 (AJ1) to 179.4064 BqKg-1 (AJ4). These results show that the recommended Radium Equivalent Concentration is ≤ 370 BqKg-1 which is the requirement for soil materials to be used for dwellings, this implies that the soil from this site is suitable use for residential buildings. The mean External Hazard Index ( Hext ) ranged from 0.1229 Bqkg-1 (AJ3) to 0.4226 Bqkg-1 (AJ7).. 
While the maximum allowed value of (Hext = 1) corresponds to the upper limit of Raeq (370 BqKg-1) in order to limit the external gamma radiation dose from the soil materials to 1.5 mGy y-1. That is, this Index should be equal to or less than unity (Hext ≤ = 1). Furthermore, the mean Internal Hazard Index (Hext) ranged from 0.3456 Bqkg-1 (AJ1) to 0.6453 Bqkg-1 (AJ2) .Finally, the mean value of the Excess Alpha Radiation (Iα) ranged from 0.1031 Bq.Kg-1 (AJ1) to 0.2202 Bq.Kg-1 (AJ3. All these values for Iα are below the maximum permissible value of Iα= 1 which corresponds to 200 Bq.Kg-1. It can therefore be said that no radiological hazard is envisaged to dwellers of the study areas and the miners working on those sites area.
Pick and place task is one among the most important tasks in industrial field handled by “Selective
Compliance Assembly Robot Arm” (SCARA). Repeatability with high-speed movement in horizontal plane is
remarkable feature of this type of manipulator. The challenge of design SCARA is the difficulty of achieving
stability of high-speed movement with long length of links. Shorter links arm can move more stable. This
condition made the links should be considered restrict then followed by restriction of operation area
(workspace). In this research, authors demonstrated on expanding SCARA robot’s workspace in horizontal area
via linear sliding actuator that embedded to base link of the robot arm. With one additional prismatic joint the
previous robot manipulator with 3 degree of freedom (3-DOF), 2 revolute joints and 1 prismatic joint is become
4-DOF PRRP manipulator. This designation increased workspace of robot from 0.5698m2 performed by the
previous arm (without linear actuator) to 1.1281m2 by the propose arm (with linear actuator). The increasing
rate was about 97.97% of workspace with the same links length. The result of experimentation also indicated
that the operation time spent to reach object position was also reduced.
The paper contains several technical solutions of air and moisture permeability in textile
layers and theirs combinations. It is useful collection of the author’s knowledge from several last years.
Discussed are also various marketing declarations of miraculous characteristics of individual used materials.
Examples show not only own technical solution, but also the good description of ongoing processes, using the
method of numerical simulation.
Physical and chemical properties of host environment to concrete structures have serious impact on
the performance and durability of constructed concrete facilities. This paper presents a 7-month study that
simulated the influence of soil contamination due to organic abattoir waste and indiscriminate disposal of spent
hydrocarbon on strength and durability of embedded concrete. Concrete mix, 1:1.5:3 was designed for all cube
and beam specimens with water-cement ratio of 0.5 and the compressive and flexural strengths of the specimen
were measured from age 28 days up to 196 days in the host environment. It was found that both host
environments attack the physical and strength of concrete in compression and flexure. However, hydrocarbon
had much greater adverse effect on the load-carrying capacity of concrete structures and hence make
constructed facilities less serviceable and vulnerable to premature failure.
More from International Journal of Modern Research in Engineering and Technology (20)
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
International Journal of Modern Research in Engineering and Technology (IJMRET)
www.ijmret.org ǁ Volume 1 Issue 4 ǁ October 2016
SVD Based Blind Video Watermarking Algorithm
Pavani C., M. Murali Krishna
Pavani C. is Assistant Professor, Dept. of ECE, Kommuri Pratap Reddy Institute of Technology; e-mail: chamalapavani55@gmail.com
M. Murali Krishna is Junior Engineer, Medha Servo Drives Pvt Ltd
Abstract: In this project, we show that an image can be reconstructed from local descriptors, with or without complete geometrical metadata, in LabVIEW. We use a greedy algorithm to progressively learn the missing information before reconstruction and colorization are performed. Our experiments show that most of the vital information about a query image can be recovered even if scale metadata is missing. Compared with images reconstructed with scale information, we find no significant decline in image quality, and the post-processing step yields a close resemblance of the original image. LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is a graphical programming language: it uses icons rather than lines of text. In contrast to text-based programming languages, where instructions determine program execution, LabVIEW uses dataflow programming, where the flow of data determines execution. As the world becomes increasingly digitalized, it has become intractable to naively search for images using pixel information alone. As such, researchers have been looking for more efficient methods to perform image matching and retrieval.
Keywords: LabVIEW, GPIB, PXI, SIFT, SURF, VXI
I. Introduction:
1.1 What is LabVIEW?
LabVIEW stands for Laboratory Virtual Instrument Engineering Workbench. LabVIEW is a graphical programming language: it uses icons rather than lines of text. In LabVIEW, you build a user interface with a set of tools and objects. The user interface is known as the front panel. You then add code using graphical representations of functions to control the front panel objects. The block diagram contains this code. LabVIEW contains a comprehensive set of tools for acquiring, analyzing, displaying, and storing data, as well as tools to help you troubleshoot the code you write. You can use LabVIEW to communicate with hardware such as data acquisition, vision, and motion control devices, as well as GPIB, PXI, VXI, RS232, and RS485 instruments. It is a product of National Instruments (NI).
1.2 Overview of LabVIEW
As the world becomes increasingly
digitalized, it has become intractable to naively
search for images merely using pixel information.
As such, researchers have been looking for more
efficient methods to perform image matching and
retrieval. Successful and widely used methods have
relied on local descriptors such as the Scale
Invariant Feature Transform local descriptors such
as the Scale Invariant Feature Transform local
descriptors provide valuable information about
regions of the image –key points that contain a lot
of details. A set of local descriptors with
corresponding geometrical significantly reducing
the amount of storage required, so using them
drastically increases the efficiency of visual search
Precisely because the descriptors store so much
valuable information, an image, or the important
parts of an image, could very well be reconstructed
simply from these descriptors. Given a sufficiently large and diverse database of image patches with their associated descriptor information, patches closely matching the query descriptors can be retrieved and stitched together using geometrical metadata from the key points to recover a resemblance of the original image. Weinzaepfel et al. presented images
reconstructed successfully and with interpretable
quality from local SIFT descriptors and elliptical
image patches [3], the implications of which could
be both good and bad. When reconstruction
happens at a malicious node, the pirate may gather
(often secret) information about the queried image.
This poses a problem to the privacy of images,
especially in the case where a copyright holder
passes descriptor information to a third party
system for image query purposes. Much research
has been done to enhance the security and
robustness of the visual search pipeline. Hsu et al. [4] have studied the weaknesses of SIFT descriptors and suggested that feature descriptors could be deleted while still maintaining an acceptable visual quality, although the visual search result will unfortunately change. [5] has thoroughly studied the process of attacking a visual search pipeline. A pirate could eavesdrop on the communication channel and modify the feature descriptors or the image from the database, in both cases causing matching to fail. Most of the time, if the pirate is not a user, cryptographic methods could make the search pipeline more secure. However, when
reconstruction happens at a trusted node, say the
database, image recovery could be
desirable. Visual search could take a huge step
forward if databases could use query information to
enhance their repository. For example, when
different users submit queries for a landmark from
different angles, this information could be used to
reconstruct a multi-view or even a three-dimensional image of that landmark. In this project,
we investigate the image quality attainable via
image reconstruction using square patches with
SIFT and SURF descriptors. We also expand upon
the image reconstruction problem, and investigate
the feasibility of reconstructing images using
descriptors with incomplete geometrical
information. Geometrical information, which typically includes the (x, y) coordinates, orientation, and scale of the image patches, is used for post-verification during visual search, and can be used for the accurate placement and warping of patches from the database during image reconstruction. It
would be of interest to investigate how much
reconstruction would still be possible with partial
information. We narrow down the problem to
descriptors lacking information about the scales of
image patches, and examine how well the scales
can be learned, as well as the resulting image
quality. We use a greedy algorithm to
progressively learn the scales of the image patches,
before reconstruction is performed. As we might
expect, reconstruction is far from perfect, but vital information about the image can still be inferred from the result. Upon comparison with images
reconstructed with full geometrical information, it
is important to note that most of the information of
an image is still retained even though the
descriptors do not contain data about the scales of
image patches. So much information is kept that it
is possible to recognize the features in the image
and reproduce a close replica of the original image
by subsequently coloring the image. This paper is organized as follows: Section II outlines the
problem statement. In Section III we explain the
main reconstruction and colorizing algorithms;
Section IV details the greedy algorithm we used for
learning the scales of patches. Section V then
summarizes some of the experiments we did and
discusses possible improvements.
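The approach outlined above can be sketched in conventional code. The fragment below is a schematic illustration only: the paper's implementation is a LabVIEW block diagram, and the patch pyramid, canvas representation, and sum-of-absolute-differences cost used here are invented for exposition.

```python
# Schematic sketch of greedy scale learning for descriptor-based
# reconstruction. All names and data structures are hypothetical
# illustrations, not the paper's actual LabVIEW implementation.

def reconstruction_error(patch, canvas, x, y):
    """Sum of absolute differences between a candidate patch placed
    at (x, y) and the current canvas content (toy stand-in metric)."""
    err = 0.0
    for dy, row in enumerate(patch):
        for dx, value in enumerate(row):
            err += abs(canvas[y + dy][x + dx] - value)
    return err

def greedy_scale(patch_pyramid, canvas, x, y):
    """Greedily pick the patch scale that best agrees with what is
    already on the canvas; this mimics learning the missing scale
    metadata one key point at a time."""
    best_scale, best_err = None, float("inf")
    for scale, patch in patch_pyramid.items():
        h, w = len(patch), len(patch[0])
        if y + h > len(canvas) or x + w > len(canvas[0]):
            continue  # patch at this scale would fall off the canvas
        err = reconstruction_error(patch, canvas, x, y)
        if err <= best_err:  # on ties, prefer larger scales (coverage)
            best_scale, best_err = scale, err
    return best_scale

# Toy usage: a 4x4 canvas and the same patch content at three scales.
canvas = [[1, 1, 0, 0],
          [1, 1, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
pyramid = {1: [[1]],
           2: [[1, 1], [1, 1]],
           3: [[1, 1, 1], [1, 1, 1], [1, 1, 1]]}
print(greedy_scale(pyramid, canvas, 0, 0))  # prints 2
```

In the real pipeline, each placed patch constrains the scales of its neighbours, so the loop above would be repeated over all key points until the reconstruction stabilizes.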
1.3 Why we use LabVIEW
NI LabVIEW is a graphical programming language designed for engineers and scientists to develop test, control, and measurement applications. The intuitive nature of LabVIEW graphical programming makes it easy for educators and researchers to incorporate the software in a range of courses and applications. With LabVIEW, educators and researchers can use a graphical system design approach to design, prototype, and deploy embedded systems. Graphical system design is a modern approach to designing, prototyping, and deploying embedded systems: it combines open graphical programming with hardware to dramatically simplify and accelerate development.
II. Virtual Instruments (VIs):
LabVIEW programs are called virtual instruments because their icons' appearance and operation imitate actual instruments, such as switches and meters. A VI contains three main components:
2.1 Front Panel:
The front panel is the user interface of the VI. It contains controls and indicators. Controls are inputs that supply data to the block diagram, and indicators are outputs that display the data generated by the block diagram.
Fig1: Front Panel
2.2 Block Diagram:
The program's code is present on the block diagram, which resembles a flowchart and defines the functionality of the VI. Block diagram objects include terminals, functions, constants, wires, etc.
Fig2: Block Diagram
2.3 Icon and Connector Pane:
You can use a VI as a sub VI. A sub VI is a VI that is used inside another VI, similar to a function in a text-based programming language. To use a VI as
a sub VI, it must have an icon and a connector
pane. Every VI displays an icon, shown above, in
the upper right corner of the front panel and block
diagram windows. An icon is a graphical
representation of a VI. The icon can contain both
text and images. If you use a VI as a sub VI, the
icon identifies the sub VI on the block diagram of
the VI. The default icon contains a number that indicates how many new VIs you have opened since launching LabVIEW.
Fig4: Icon
To use a VI as a sub VI, you need to build a
connector pane, shown above. The connector pane
is a set of terminals that corresponds to the controls
and indicators of that VI, similar to the parameter
list of a function call in text-based programming languages. Access the connector pane by right-clicking the icon in the upper right corner of the front panel window. You cannot access the connector pane from the icon in the block diagram window.
Fig5: Connector Pane
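The function-call analogy used above can be made concrete in a text-based language. In this hypothetical Python sketch, the controls of a sub VI play the role of parameters and its indicator plays the role of the return value (the names are invented for illustration; a real VI is, of course, graphical):

```python
# A sub VI seen through the text-based analogy used above:
#   controls   -> function parameters (inputs)
#   indicators -> return values (outputs)
def scale_and_offset(signal, gain, offset):      # three "controls"
    scaled = [gain * s + offset for s in signal]
    return scaled                                # one "indicator"

print(scale_and_offset([1.0, 2.0], 10.0, 0.5))   # [10.5, 20.5]
```

Just as the connector pane fixes which terminals a caller wires to, the function signature fixes which arguments a caller passes.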
III. Blocks of LabVIEW
3.1 What makes up LabVIEW
LabVIEW itself is a software development environment that contains numerous components, several of which are required for any type of test, measurement, or control application.
Figure 6: LabVIEW contains several valuable components.
To quote one of our software developers, "We write low-level code so you don't have to." Our global team of developers continually works on the six areas called out in Figure 6 to free you, the LabVIEW user, to focus on the bigger problems and tasks you are trying to solve.
IV. G Programming Language
The G programming language is central to LabVIEW, so much so that it is often called "LabVIEW programming." Using it, you can quickly tie
together data acquisition, analysis, and logical
operations and understand how data is being
modified. From a technical standpoint, G is a
graphical dataflow language in which nodes
(operations or functions) operate on data as soon as
it becomes available, rather than in the sequential
line-by-line manner that most programming
languages employ. You lay out the “flow” of data
through the application graphically with wires
connecting the output of one node to the input of
another. The practical benefit of the graphical
approach is that it puts more focus on data and the
operations being performed on that data, and
abstracts much of the administrative complexity of
computer programming such as memory allocation
and language syntax. New programmers typically
report shorter learning curves with G than with
other programming languages because they can
relate G code to flow charts and other familiar
visual representations of processes. Seasoned
programmers can also take advantage of the
productivity gains by working at a higher level of
abstraction while still employing advanced
programming practices such as object-oriented
design, encapsulation, and code profiling.
Figure 7: This block diagram shows self-documenting G code.
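The dataflow execution model just described (a node fires as soon as all of its inputs are available, regardless of the order nodes were written) can be imitated with a toy scheduler. This is an illustrative model only, not how the G runtime is actually implemented:

```python
# Toy dataflow evaluation: each node fires as soon as all of its
# input wires carry data, not in textual order. An illustrative
# model of G's semantics only, not the actual LabVIEW runtime.

# graph: node name -> (function, input wire names, output wire name)
graph = {
    "add": (lambda a, b: a + b, ["x", "y"], "sum"),
    "mul": (lambda a, b: a * b, ["sum", "z"], "prod"),
}

def run(graph, wires):
    pending = dict(graph)
    while pending:
        fired = False
        for name, (fn, ins, out) in list(pending.items()):
            if all(w in wires for w in ins):   # all inputs available?
                wires[out] = fn(*[wires[w] for w in ins])  # node fires
                del pending[name]
                fired = True
        if not fired:
            raise RuntimeError("no node can fire; an input never arrives")
    return wires

wires = run(graph, {"x": 2, "y": 3, "z": 4})
print(wires["prod"])  # (2 + 3) * 4 = 20
```

Note that "mul" waits until "add" has produced the sum wire, exactly as a G node waits for data on all of its input wires.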
LabVIEW contains a powerful optimizing compiler that examines your block diagram and
directly generates efficient machine code, avoiding
the performance penalty associated with interpreted
or cross-compiled languages. The compiler can
also identify segments of code with no data
dependencies (that is, no wires connecting them)
and automatically split your application into
multiple threads that can run in parallel on multi-core processors, yielding significantly faster
analysis and more responsive control compared to a
single-threaded, sequential application. With the
debugging tools in LabVIEW, you can slow down
execution and see the data flow through your
diagram, or you can use common tools such as
breakpoints and data probes to step through your
program node-by-node.
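The automatic parallelization described above can be contrasted with text-based languages, where data-independent segments must be dispatched concurrently by hand. A sketch using Python's standard library; the two "branches" stand in for wire-independent sections of a block diagram:

```python
# Two data-independent "branches" (no wires between them) can run
# concurrently. LabVIEW's compiler finds such branches automatically;
# in a text-based language we must request concurrency explicitly.
from concurrent.futures import ThreadPoolExecutor

def branch_a(samples):
    return sum(samples) / len(samples)    # e.g. a mean measurement

def branch_b(samples):
    return max(samples) - min(samples)    # e.g. a peak-to-peak value

samples = [0.0, 1.5, 3.0, 1.5]
with ThreadPoolExecutor() as pool:
    mean = pool.submit(branch_a, samples)  # dispatched concurrently
    p2p = pool.submit(branch_b, samples)
    print(mean.result(), p2p.result())     # 1.5 3.0
```

Because neither branch consumes the other's output, their relative execution order does not affect the result, which is precisely the property the G compiler exploits.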
V. Hardware Support
Typically, integrating different hardware
devices can be a major pain point when automating
any test, measurement, or control system. Worse
yet, not integrating the different hardware pieces
leads to the hugely inefficient and error-prone
process of manually taking individual
measurements and then trying to correlate, process,
and tabulate data by hand.
Figure 8: LabVIEW connects to almost any hardware device.
LabVIEW makes the process of integrating hardware much easier by using a consistent programming approach no matter what hardware you are using. The same initialize-configure-read/write-close pattern is repeated for a wide variety of hardware devices, data is always returned in a format compatible with the analysis and reporting functions, and you are not forced to dig into instrument programming manuals to find low-level message- and register-based communication protocols unless you specifically need to. LabVIEW has freely available drivers for thousands of NI and third-party hardware devices. In the rare case that a LabVIEW driver does not already exist, you have tools to create your own, reuse a DLL or other driver not related to LabVIEW, or use low-level communication mechanisms to operate hardware without a driver. Chances are, if the hardware can be connected to a PC, LabVIEW can talk to it. The cross-platform nature of LabVIEW also allows you to deploy your code to many different computing platforms. In addition to the popular desktop OSs (Windows, Mac, and Linux), LabVIEW can target embedded real-time controllers, ARM microprocessors, and field-programmable gate arrays (FPGAs), so you can quickly prototype and deploy to the most appropriate hardware platform without having to learn new tool chains.
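The initialize-configure-read/write-close pattern mentioned above can be sketched in text-based code with a simulated instrument. The SimulatedDMM class and its resource string are invented for illustration; a real driver (for example, one built on NI-VISA) would expose an equivalent lifecycle:

```python
# The initialize -> configure -> read/write -> close lifecycle,
# shown with a simulated instrument. SimulatedDMM is an invented
# stand-in for illustration, not a real driver API.
class SimulatedDMM:
    def __init__(self, address):           # initialize
        self.address = address
        self.range_v = None
        self.open = True

    def configure(self, range_v):          # configure
        self.range_v = range_v

    def read(self):                        # read/write
        if not self.open or self.range_v is None:
            raise RuntimeError("instrument not initialized/configured")
        return min(1.234, self.range_v)    # pretend measurement

    def close(self):                       # close
        self.open = False

dmm = SimulatedDMM("GPIB0::12::INSTR")     # illustrative VISA-style address
dmm.configure(range_v=10.0)
print(dmm.read())                          # 1.234
dmm.close()
```

Whatever the bus (GPIB, RS232, PXI), the calling code keeps the same four-step shape, which is what makes the pattern easy to reuse across devices.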
VI. UI Components and Reporting Tools
Every LabVIEW block diagram also has an associated front panel, which is the user interface of your application. On the front panel you can place generic controls and indicators such as strings, numbers, and buttons, or technical controls and indicators such as graphs, charts, tables, thermometers, dials, and scales. All LabVIEW controls and indicators are designed for engineering use, meaning you can enter SI prefixes such as 4M instead of 4,000,000, change the scale of a graph by clicking on it and typing a new end point, export data to tools such as NI DIAdem and Microsoft Excel by right-clicking on it, and so on.
Figure 9: Every LabVIEW block diagram has an associated front panel, such as this signal generation example with a custom UI.
Controls and indicators are customizable. You can add them either from a palette of controls on the front panel or by right-clicking on a data wire on the block diagram and selecting "Create Control" or "Create Indicator." In addition to displaying data as your application is running, LabVIEW also contains several options for generating reports from your test or acquired data. You can send simple reports directly to a printer or an HTML file, programmatically generate Microsoft Office documents, or integrate with NI DIAdem for more advanced reporting. Remote front panels and Web service support allow you to publish data over the Internet with the built-in Web server.
Figure 10: Image reconstruction block diagram.
VII. Screen Shots
7.1 Front Panel:
The diagram below shows the front panel of this project. We supply a small part of a building image as input and retrieve the exact building; if the sample belongs to that building, the indicator (LED) blinks, otherwise it stays off.
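As a rough textual analogy (not LabVIEW code), the match-then-light-the-LED behavior can be sketched as a normalized cross-correlation search over image windows. The `match_patch` function and the toy arrays are illustrative only; NI's actual IMAQ color pattern matching is considerably more sophisticated.

```python
import numpy as np

def match_patch(image, patch, threshold=0.9):
    """Slide `patch` over `image`; return True (LED on) if any window's
    normalized cross-correlation with the patch exceeds `threshold`."""
    ih, iw = image.shape
    ph, pw = patch.shape
    p = patch - patch.mean()
    p_norm = np.linalg.norm(p)
    best = -1.0
    for y in range(ih - ph + 1):
        for x in range(iw - pw + 1):
            w = image[y:y + ph, x:x + pw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * p_norm
            if denom == 0:
                continue  # flat window or flat patch: no correlation defined
            best = max(best, float((w * p).sum() / denom))
    return best >= threshold

# Toy example: the sample is cut directly from the image, so the LED turns on.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
patch = img[3:6, 2:5].copy()
print(match_patch(img, patch))  # True
```

In the project itself, the Boolean output of the pattern match is wired straight to the front-panel LED indicator.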
VIII. Building and Configuring the Front Panel
8.1 Introduction:
The front panel is the user interface of a
VI. Generally, you design the front panel first, then
design the block diagram to perform tasks on the
inputs and outputs you create on the front panel.
You build the front panel with controls and
indicators, which are the interactive input and
output terminals of the VI, respectively. Controls
are knobs, push buttons, dials, and other input
devices. Indicators are graphs, LEDs, and other
displays. Controls simulate instrument input
devices and supply data to the block diagram of the
VI. Indicators simulate instrument output devices
and display data the block diagram acquires or
generates.
8.2 Building the Front Panel:
Select Window»Show Controls Palette to display the Controls palette, then select controls and indicators from the Controls palette and place them on the front panel. The front panel is the user interface that contains the controls and indicators.
Figure 11: Controls palette.
IX. Building the Block Diagram
After you build the front panel, you add
code using graphical representations of functions to
control the front panel objects. The block diagram
contains this graphical source code. Front panel
objects appear as terminals on the block diagram.
Double-click a block diagram terminal to highlight
the corresponding control or indicator on the front
panel. Terminals are entry and exit ports that
exchange information between the front panel and
block diagram. Data you enter into the front panel
controls enter the block diagram through the
control terminals. During execution, the output data
flow to the indicator terminals, where they exit the
block diagram, reenter the front panel, and appear
in front panel indicators. Objects on the block
diagram include terminals, nodes, and functions.
You build block diagrams by connecting the
objects with wires.
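The dataflow execution model described above, where a node fires as soon as every input wire has delivered a value, can be sketched in Python. The node and wire names below are illustrative, not part of LabVIEW.

```python
# Minimal dataflow sketch: a node runs once every input wire has a value,
# mirroring how a LabVIEW block diagram executes.
def run_diagram(nodes, wires, inputs):
    """nodes: {name: callable}; wires: {name: [source names]};
    inputs: {name: value} for the front-panel control terminals."""
    values = dict(inputs)
    pending = set(nodes)
    while pending:
        ready = [n for n in pending if all(src in values for src in wires[n])]
        if not ready:
            raise RuntimeError("diagram has a cycle or a broken wire")
        for n in ready:
            values[n] = nodes[n](*[values[src] for src in wires[n]])
            pending.remove(n)
    return values

# Wire two controls through an Add node into an indicator terminal.
nodes = {"add": lambda a, b: a + b, "indicator": lambda x: x}
wires = {"add": ["ctrl_a", "ctrl_b"], "indicator": ["add"]}
out = run_diagram(nodes, wires, {"ctrl_a": 2, "ctrl_b": 3})
print(out["indicator"])  # 5
```

Note that no statement ordering is specified anywhere: the wiring alone determines that "add" runs before "indicator", which is the essence of dataflow programming.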
X. Running and Debugging VIs
To run a VI, you must wire all the subVIs, functions, and structures with the data types the terminals expect. Sometimes a VI produces data or runs in a way you do not expect. You can use LabVIEW to configure how a VI runs and to identify problems with block diagram organization or with the data passing through the block diagram. Running a VI means executing the block diagram code.
Figure 12: Front panel and block diagram toolbars.
XI. Results
Figure 13: Input given to the file path.
Figure 14: Output on the front panel.
XII. Future Scope
It may be used for military applications: when sending data, a complete data set could be misused, but with this technique only a piece of the image needs to be sent, and the content can be accessed by authorized personnel only. Another important application is in the archaeological department, for recovering the shape of artifacts that have been destroyed to some extent.
XIII. Conclusion
In modern times, owing to sweeping reforms in technology and the globalization of technical products, there have been large advancements in human life as well. Techniques such as image processing, synthesis, and reconstruction are among these advancements. Image reconstruction means that the original image present in the database can be retrieved, or reconstructed, from small samples of the image. This is made possible by matching information present in
the geometrical metadata of a sample, i.e., a small part of the original image to be reconstructed, which is present in the database. The reconstruction process is carried out with the Image Acquisition (IMAQ) functions, which perform IMAQ Learn Color Pattern followed by IMAQ Setup Match Color Pattern. The match result is a Boolean, and the whole procedure is carried out in the LabVIEW software. After the matching is performed, the original image from the database is reconstructed from the samples, and the output is produced by the IMAQ ReadFile block only if the matching data of the image sample agrees with the image in the database. Image reconstruction has several applications, such as remote sensing via satellite, military communication via aircraft and radar, teleconferencing, facsimile transmission of educational and business documents, and medical images that arise in computed tomography.
XIV. Acknowledgement
We take this opportunity to thank all who have rendered their full support to our project work. We take the unique privilege to express our thanks to Mr. R. Srinivasa Rao, internal guide, for his valuable guidance and encouragement given to us throughout this project work. Finally, we extend our thanks to all the people who have helped us, directly or indirectly, in the completion of this project.
Authors:
C. Pavani Reddy received her B.Tech from G. Narayanamma Institute of Technology and Science, Hyderabad (near Shaikpet), Telangana in 2001. She completed her M.E. from Osmania University, Hyderabad, Telangana in 2009. Presently she is working as Assistant Professor (ECE Department) at Kommuri Pratap Reddy Institute of Technology, Hyderabad, Telangana. Her areas of interest are microprocessors and microcontrollers, signal processing, image processing, embedded systems, VLSI, and microelectronics.
M. Murali Krishna received his B.Tech from Khammam Institute of Technology and Sciences, Khammam, A.P. in 2013. He completed his M.Tech from DVR Institute of Technology, Hyderabad in 2015. He worked as a junior engineer at Medha Servo Drives Pvt Ltd, Ghatkesar, Hyderabad. His areas of interest are systems and signal processing, image processing, power electronics, and communication systems.