This document discusses several image processing techniques for automated flaw detection in infrared thermography data, including thermal contrast computation, pulsed phase thermography, thermographic signal reconstruction, and principal component analysis. It also discusses using type-2 fuzzy logic to model uncertainties in image segmentation and classification by representing membership functions as three-dimensional rather than two-dimensional. The goal is to develop robust and automated techniques for flaw detection and sizing in carbon fiber composite materials using infrared thermography.
This document describes a new methodology for improving the accuracy of fingerprint verification systems. It proposes detecting singular points like core and delta points, and indexing templates based on the occurrence of delta points relative to the core point. Experiments on the FVC2006 database show the proposed method achieves higher recognition rates and lower false acceptance and rejection rates compared to existing minutiae-based matching techniques, especially for distorted images. It provides a concise way to represent templates and allows for faster matching by first comparing singular point information before minutiae points.
Segmentation and Classification of MRI Brain Tumor (IRJET Journal)
This document presents a study comparing two techniques for detecting brain tumors in MRI images: level set segmentation and K-means segmentation. Features are extracted from the segmented tumors using discrete wavelet transform and gray level co-occurrence matrix. The features are then classified as benign or malignant using a support vector machine. The level set method and K-means method are evaluated based on accuracy, sensitivity, and specificity on a dataset of 41 MRI brain images. The level set method achieved slightly higher accuracy of 94.12% compared to the K-means method.
GRAY SCALE IMAGE SEGMENTATION USING OTSU THRESHOLDING OPTIMAL APPROACH (Journal For Research)
Image segmentation is often used to distinguish the foreground from the background, and it remains one of the difficult research problems in machine vision and pattern recognition. Thresholding is a simple but effective way to separate objects from the background, and the commonly used Otsu method noticeably improves the segmentation result. It can be implemented by two different approaches: an iteration approach and a custom approach. In this paper both approaches are implemented in MATLAB and compared; both give almost the same threshold value for segmenting an image, but the custom approach requires fewer computations. If the method is to be implemented on hardware in an optimized way, the custom approach is therefore the better option.
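The iteration approach can be sketched directly from Otsu's between-class-variance criterion: scan every candidate threshold of a gray-level histogram and keep the one maximizing the variance between the two classes. This is a minimal illustration of the method, not the paper's MATLAB code:

```python
def otsu_threshold(hist):
    """Return the threshold maximizing between-class variance.

    hist: list of pixel counts per gray level (index = intensity).
    Every candidate threshold is tried and the best one kept.
    """
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, 0.0
    w0 = 0      # background pixel count so far
    sum0 = 0.0  # background intensity sum so far
    for t in range(len(hist)):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum0 / w0
        mu1 = (total_mean * total - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram: dark peak around level 2, bright peak around 12.
hist = [0, 3, 9, 3, 0, 0, 0, 0, 0, 0, 2, 8, 10, 4, 0, 0]
print(otsu_threshold(hist))  # → 3
```

Scanning all thresholds costs O(L) after one histogram pass, which is part of why threshold selection is cheap enough to consider for hardware implementation.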
PARALLEL GENERATION OF IMAGE LAYERS CONSTRUCTED BY EDGE DETECTION USING MESSA... (ijcsit)
Edge detection is one of the most fundamental algorithms in digital image processing. Many algorithms construct image layers extracted from the original image based on selected threshold parameters, and changing these parameters to obtain a high-quality layer is time consuming. In this paper, we propose two parallel techniques, NASHT1 and NASHT2, that generate multiple layers of an input image automatically so that the image tester can select the highest-quality detected edges. In addition, the effects of intensive I/O operations and of the number of parallel running processes on the performance of the proposed techniques are studied.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This paper proposes a novel technique for detecting point landmarks in 3D medical images based on phase congruency (PC). A bank of 3D log-Gabor filters is used to compute energy maps from the images. These energy maps are combined to form the PC measure, which is invariant to intensity variations and provides good feature localization. Significant 3D point landmarks are detected by analyzing the eigenvectors of PC moments computed at each point. The method is demonstrated on head and neck images for radiation therapy planning.
Farsi character recognition using new hybrid feature extraction methods (ijcseit)
Identification of visual words and writings has long been one of the most essential and attractive tasks in image processing; it has been studied for decades and has applications in security, traffic control, psychology, medicine, engineering, and other fields. Previous techniques for identifying visual writings are largely similar in their analysis and, depending on the needs of the application, have proposed different feature extraction schemes. Changes in writing style and font, rotations of words, and other issues make character identification challenging. In this study, a Persian character identification system uses independent orthogonal moments, namely the Zernike moment and the Fourier-Mellin moment, as the feature extraction technique. The magnitudes of Zernike moments, which are rotation invariant, have been used for classification in the past, while the individual real and imaginary components were neglected because, along with the phase coefficients, they change under rotation. Here, Zernike and Fourier-Mellin moments are investigated for detecting Persian characters in both noisy and noise-free images. Also, an improvement on the k-Nearest Neighbor (k-NN) classifier is proposed for character recognition. Comparing the proposed method with salient methods such as Back Propagation (BP) and Radial Basis Function (RBF) neural networks shows that, on the Hoda database, the proposed method reaches an acceptable detection rate of 96.5%.
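The baseline k-NN classifier that such work improves on is easy to sketch: store labeled feature vectors and let the k nearest training samples vote. The 2-D feature vectors and class labels below are hypothetical stand-ins for moment features:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Plain k-NN: majority label among the k nearest training samples.

    train: list of (feature_vector, label); query: feature vector.
    Euclidean distance; a baseline against which an improved k-NN
    variant could be compared.
    """
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D moment features for two character classes.
train = [((0.1, 0.2), "alef"), ((0.2, 0.1), "alef"),
         ((0.9, 0.8), "be"), ((0.8, 0.9), "be"), ((0.85, 0.85), "be")]
print(knn_classify(train, (0.15, 0.15)))  # → alef
```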
This document proposes a method for change detection in images that combines Change Vector Analysis, K-Means clustering, Otsu thresholding, and mathematical morphology. It involves detecting intensity changes using CVA, segmenting the difference image using K-Means, calculating a threshold with Otsu's method, applying the threshold and morphological operations, and comparing results to other change detection techniques. Experimental results on medical and other images show the proposed method achieves satisfactory change detection with fewer errors compared to other methods.
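The core of the pipeline above can be sketched for single-band images: mark pixels whose change-vector magnitude exceeds a threshold (the paper derives the threshold with Otsu's method; here it is passed in directly), then clean the mask with a morphological opening:

```python
def change_mask(img1, img2, thresh):
    """Change Vector Analysis on single-band images: mark pixels whose
    absolute difference exceeds thresh."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(img1, img2)]

def opening(mask):
    """3x3 morphological opening (erosion then dilation) to remove
    isolated false-change pixels, as in the post-processing step."""
    h, w = len(mask), len(mask[0])
    def neigh(m, y, x):
        return [m[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))]
    eroded = [[min(neigh(mask, y, x)) for x in range(w)] for y in range(h)]
    return [[max(neigh(eroded, y, x)) for x in range(w)] for y in range(h)]

img1 = [[0] * 5 for _ in range(5)]
img2 = [[0] * 5 for _ in range(5)]
for y in range(2):
    for x in range(2):
        img2[y][x] = 10          # real 2x2 change at the corner
img2[3][3] = 9                   # isolated noise pixel
print(opening(change_mask(img1, img2, 5)))
```

The opening keeps the compact 2x2 changed region but deletes the isolated pixel, which is exactly the error-reduction role morphology plays in the method.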
1. The document discusses connecting the visualization toolkit (VTK) and insight toolkit (ITK) for medical image analysis tasks like registration and segmentation.
2. Class hierarchies were implemented to define VTK filters using ITK algorithms for tasks like image filtering, registration, and segmentation.
3. Examples of connecting the toolkits for registration and segmentation are presented, including registering longitudinal MRI and CT lung images for comparison.
Image registration is the fundamental task of matching two or more partially overlapping images taken, for example, at different times, from different sensors, or from different viewpoints, and stitching them into one panoramic image comprising the whole scene. It is a fundamental image processing technique and is very useful for integrating information from different sensors, finding changes in images taken at different times, inferring three-dimensional information from stereo images, and recognizing model-based objects.
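For the simplest case, translation-only registration, the matching idea can be illustrated by an exhaustive search for the shift that minimizes the sum of squared differences over the overlap. This is a toy area-based sketch, not any specific algorithm from the survey:

```python
def estimate_shift(ref, moved, max_shift=3):
    """Brute-force translation registration: find the (dy, dx) that
    minimizes mean squared difference over the overlapping region."""
    h, w = len(ref), len(ref[0])
    best = (0, 0, float("inf"))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ssd, n = 0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        ssd += (ref[y][x] - moved[yy][xx]) ** 2
                        n += 1
            if n and ssd / n < best[2]:
                best = (dy, dx, ssd / n)
    return best[:2]

ref = [[0] * 8 for _ in range(8)]
moved = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(2, 5):
        ref[y][x] = 9
        moved[y + 1][x + 2] = 9   # same bright square shifted by (1, 2)
print(estimate_shift(ref, moved))  # → (1, 2)
```

Real registration methods replace this brute-force search with feature matching or optimization, and the recovered transform is usually affine or elastic rather than a pure translation.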
This paper overviews the theoretical aspects of the image registration problem and presents a survey of image registration techniques. Image registration aligns two images geometrically: a reference image and a sensed image. The ultimate purpose of digital image filtering is to support the visual identification of certain features expressed by characteristic shapes and patterns. Numerous recipes, algorithms, and ready-made programs exist nowadays, and they predominantly have in common that users must set certain parameters. Particularly when processing is fast and shows results almost immediately, the choice of parameters may be guided by making the image "look nice". In practical situations, however, most users are not inclined to play around with a displayed image, particularly under stress, as may be encountered in security applications. The requirements for applying digital image processing under such circumstances are discussed with an example of automatic filtering without manual parameter settings, which even has the advantage of delivering unbiased results.
Matching algorithm performance analysis for autocalibration method of stereo ... (TELKOMNIKA JOURNAL)
Stereo vision is one of the interesting research topics in the computer vision field. Two cameras are used to generate a disparity map, resulting in a depth estimate. Camera calibration is the most important step in stereo vision: it produces the intrinsic parameters of each camera, which lead to a better disparity map. In general, calibration is done manually using a chessboard pattern, but this process is an exhausting task. Self-calibration is an important ability for overcoming this problem, and it requires a robust matching algorithm to find key features shared between images as references. The purpose of this paper is to analyze the performance of three matching algorithms for the autocalibration process: SIFT, SURF, and ORB. The results show that SIFT performs better than the other methods.
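Of the three, ORB produces binary descriptors that are compared with the Hamming distance. A minimal brute-force matcher with a ratio test can be sketched on toy 8-bit descriptors (real ORB descriptors are 256-bit binary strings):

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def match(desc1, desc2, ratio=0.8):
    """Brute-force matching with a ratio test: keep a match only when
    the best distance is clearly below the second best."""
    pairs = []
    for i, d1 in enumerate(desc1):
        dists = sorted((hamming(d1, d2), j) for j, d2 in enumerate(desc2))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            pairs.append((i, dists[0][1]))
    return pairs

# Hypothetical 8-bit descriptors from two views of the same scene.
d1 = ["10110010", "01100111"]
d2 = ["10110011", "00001000", "01100110"]
print(match(d1, d2))  # each left descriptor paired with its near-twin
```

The ratio test is what keeps ambiguous correspondences out of the pose/calibration estimation that follows.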
MEDICAL IMAGE TEXTURE SEGMENTATION USING RANGE FILTER (cscpconf)
Medical image segmentation is a frequent processing step in image understanding and computer-aided diagnosis. In this paper, we propose medical image texture segmentation using a texture filter. Three different image enhancement techniques are used to remove strong speckle noise as well as to enhance the weak boundaries of medical images. We exploit the concept of range filtering to extract the texture content of a medical image. Experiments are conducted on the ImageCLEF2010 database, and the results show the efficacy of the proposed medical image texture segmentation.
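The range-filtering idea is simply to replace each pixel with the max-minus-min spread of its local window, so uniform areas map to 0 and textured or edge areas to large values. A sketch (window size and toy images are illustrative):

```python
def range_filter(img, radius=1):
    """Range filtering: each output pixel is max - min over the local
    window, a cheap local texture measure."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[j][i]
                   for j in range(max(0, y - radius), min(h, y + radius + 1))
                   for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = max(win) - min(win)
    return out

flat = [[7, 7, 7], [7, 7, 7], [7, 7, 7]]   # untextured region
edge = [[0, 0, 9], [0, 0, 9], [0, 0, 9]]   # strong vertical boundary
print(range_filter(flat)[1][1], range_filter(edge)[1][1])  # → 0 9
```

Thresholding or clustering the resulting range map then separates textured tissue from smooth background.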
IRJET- Document Layout analysis using Inverse Support Vector Machine (I-SV... (IRJET Journal)
This document discusses using inverse support vector machines (I-SVM) for document layout analysis of Hindi newspaper images for optical character recognition. It proposes a framework that uses bounding box segmentation, feature extraction using subline direction and bounding box shape detection, and I-SVM classification. Preprocessing steps include binarization, removing horizontal/vertical lines, and morphological operations. Experimental results show the algorithm can accurately label blocks in newspaper layouts and extract articles for OCR.
Particle Swarm Optimization for Nano-Particles Extraction from Supporting Mat... (CSCJournals)
This document summarizes a study that uses Particle Swarm Optimization (PSO) for automatic segmentation of nano-particles in Transmission Electron Microscopy (TEM) images. PSO is applied to specify local and global thresholds for segmentation by treating image entropy as a minimization problem. Results show the PSO method improves over previous techniques by reducing incorrect characterization of nano-particles in images affected by liquid concentrations or supporting materials, with up to a 27% reduction in errors. Compared to manual characterization, PSO provides comparable particle counting with higher computational efficiency suitable for real-time analysis.
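The role PSO plays here, minimizing a scalar objective such as image entropy over candidate thresholds, can be sketched with a generic 1-D particle swarm. The objective below is a hypothetical toy function, not the paper's entropy measure:

```python
import random

def pso(f, lo, hi, n=20, iters=60, seed=1):
    """Minimal particle swarm optimizer for a 1-D objective on [lo, hi].

    Each particle is pulled toward its personal best (pbest) and the
    swarm-wide best (gbest); inertia 0.7 damps the velocities.
    """
    rnd = random.Random(seed)
    pos = [rnd.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]                   # per-particle best positions
    gbest = min(pos, key=f)          # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rnd.random(), rnd.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                                   + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
            if f(pos[i]) < f(gbest):
                gbest = pos[i]
    return gbest

# Toy objective with its minimum at x = 3.
print(round(pso(lambda x: (x - 3) ** 2, 0, 10), 2))
```

For thresholding, f(x) would evaluate the entropy criterion of segmenting the image at threshold x, and the returned gbest is the chosen threshold.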
Feature selection is a fundamental step in image registration; various tasks such as feature extraction and detection are built on the feature-based approach. In the current paper we discuss our technique, a hybrid of local affine and thin-plate-spline transformations. An automatic edge detection method that obtains a correct edge map is put forward to handle image registration under affine transformation. Registration algorithms compute transformations that set the correspondence between the two images. The purpose of this paper is to provide a comprehensive comparison of the existing literature on image registration methods with the proposed technique.
Usage of Shape From Focus Method For 3D Shape Recovery And Identification of ... (CSCJournals)
Shape from focus is a method for estimating the 3D shape and depth of an object from a sequence of pictures taken with changing focus settings. In this paper we propose a novel shape recovery method, originally created for identifying the shape and position of a glass pipette in a medical hybrid robot. In the proposed algorithm, the Sum of Modified Laplacian is used as the focus operator. Each step of the algorithm is tested in order to pick the operators with the best results. The reconstruction determines not only the shape but also the precise position of the object. The results of the proposed method, obtained on real objects, show the efficiency of this scheme.
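The Sum of Modified Laplacian focus operator measures local contrast: at one pixel the modified Laplacian is |2I(x,y) − I(x−1,y) − I(x+1,y)| + |2I(x,y) − I(x,y−1) − I(x,y+1)|, and the full operator sums it over a small window. A per-pixel sketch:

```python
def modified_laplacian(img, y, x):
    """Modified Laplacian at one pixel: absolute second differences in
    x and y added together. Summed over a window this is the SML focus
    operator used in shape from focus."""
    c = img[y][x]
    return (abs(2 * c - img[y][x - 1] - img[y][x + 1])
            + abs(2 * c - img[y - 1][x] - img[y + 1][x]))

sharp   = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # strong local contrast
blurred = [[3, 3, 3], [3, 3, 3], [3, 3, 3]]   # no contrast
print(modified_laplacian(sharp, 1, 1),
      modified_laplacian(blurred, 1, 1))      # → 36 0
```

Across the focus stack, the frame maximizing this measure at a pixel gives that pixel's depth, which is how the 3D shape is recovered.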
Fingerprint Image Compression using Sparse Representation and Enhancement wit... (Editor IJCATR)
A technique for enhancing decompressed fingerprint images using the Wiener2 filter is proposed. Compression is first done by sparse representation; compressing fingerprints is necessary to reduce memory consumption and to transfer fingerprint images efficiently, which is essential for applications such as access control and forensics. In this technique, a dictionary is first constructed from patches of fingerprint images; a fingerprint is then selected, and its coefficients are obtained and encoded, yielding the compressed fingerprint. The reconstructed fingerprint, however, is affected by noise, so the Wiener2 filter is used to remove it. The ridge and bifurcation counts are extracted from the decompressed and the enhanced fingerprints. The experimental results show that the enhanced fingerprint image preserves more bifurcations than the decompressed fingerprint image. Future analysis could consider preserving ridges as well.
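The enhancement step can be sketched with the pixel-wise adaptive Wiener rule in the spirit of MATLAB's wiener2: smooth strongly where the local variance is close to the noise variance, keep detail where signal variance dominates. This is a generic sketch of the rule, not the toolbox implementation:

```python
def wiener2(img, noise_var, radius=1):
    """Locally adaptive Wiener filtering: out = mu + g * (x - mu) with
    gain g = max(var - noise_var, 0) / max(var, noise_var), where mu
    and var are the local window mean and variance."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[j][i]
                   for j in range(max(0, y - radius), min(h, y + radius + 1))
                   for i in range(max(0, x - radius), min(w, x + radius + 1))]
            mu = sum(win) / len(win)
            var = sum((v - mu) ** 2 for v in win) / len(win)
            denom = max(var, noise_var)
            gain = (max(var - noise_var, 0.0) / denom) if denom else 0.0
            out[y][x] = mu + gain * (img[y][x] - mu)
    return out

img = [[5, 5, 5], [5, 8, 5], [5, 5, 5]]   # small bump on a flat patch
print(round(wiener2(img, 10.0)[1][1], 2),  # high noise: pulled to mean
      wiener2(img, 0.0)[1][1])             # no noise: value preserved
```

With a high noise estimate the bump is averaged away; with a zero noise estimate the filter passes the pixel through unchanged, which is the adaptivity that preserves ridge detail while removing reconstruction noise.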
Image processing based girth monitoring and recording system for rubber plant... (sipij)
Measuring the girth and continuously monitoring its increase is one of the most important processes in rubber plantations, since identifying girth deficiencies enables planters to take corrective actions to ensure a good yield from the plantation.
This research paper presents an image-processing-based girth measurement and recording system that can replace the existing manual process in an efficient and economical manner.
The system uses a digital image of the tree, in which the number drawn on the tree identifies it and the image itself is used to measure its width. The image is first thresholded and then filtered with several criteria to identify possible candidates for the numbers. The identified blobs are fed to the Tesseract OCR engine for number recognition. The thresholded image is then filtered again with different criteria to segment out the black strip drawn on the tree, which is used to calculate the width of the tree from calibration parameters. Once the tree number is identified and the width calculated, the measured girth of the tree is stored in the database under the identified tree number.
The results obtained from the system indicate a significant improvement in efficiency and economy for main plantations. As future development, we propose a standard commercial system for girth measurement using standardized 2D barcodes as tree identifiers.
Tonsillitis is a disease found in every part of the world; moreover, it is one of the main contributing causes of heart attack and pneumonia, and a large number of deaths from these conditions have been reported. This paper proposes a Gabor filter design with efficient noise reduction, low power consumption, and improved data transfer rates. Using the textural properties of anatomical structures, the filter design is suitable for detecting the early stages of the disease. The code for the Gabor filter will be developed in MATLAB.
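A 2-D Gabor kernel is a Gaussian envelope modulating a sinusoidal carrier at a chosen orientation and wavelength, which is what makes it respond to texture of a given scale and direction. A sketch of the real part (parameter values are illustrative, not the paper's design):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a
    cosine carrier oriented at angle theta (radians)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so the carrier runs along theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = gabor_kernel(7, wavelength=4.0, theta=0.0, sigma=2.0)
print(round(k[3][3], 3))  # centre weight: envelope and carrier both peak
```

Convolving an image with a bank of such kernels at several orientations and wavelengths yields the texture responses used for detection.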
Corner Detection Using Mutual Information (CSCJournals)
This work presents a new method of corner detection that is based on mutual information and invariant to image rotation. Using mutual information, which is a universal similarity measure, has the advantage of avoiding derivatives, which amplify the effect of noise at high frequencies. In the context of our work, we use mutual information normalized by entropy. The tests are performed on grayscale images.
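Entropy-normalized mutual information can be computed from the gray-level statistics of two patches. The sketch below normalizes by the mean of the two entropies, which is one common variant (the abstract does not specify which normalization the paper uses):

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a list of discrete values."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def normalized_mi(a, b):
    """Mutual information of two equally sized gray-level patches,
    normalized by the mean of their individual entropies."""
    h_a, h_b = entropy(a), entropy(b)
    h_ab = entropy(list(zip(a, b)))      # joint entropy via pair counts
    mi = h_a + h_b - h_ab
    denom = (h_a + h_b) / 2
    return mi / denom if denom else 0.0

patch = [0, 0, 1, 1, 2, 2, 3, 3]
print(normalized_mi(patch, patch))       # identical patches → 1.0
```

Because the measure depends only on gray-level co-occurrence, not on gradients, no derivative is taken and high-frequency noise is not amplified.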
Face recognition using Gaussian mixture model & artificial neural network (eSAT Journals)
Abstract
Face recognition is a non-contact and user-friendly biometric identification technology with broad application prospects in the military, public security, and economic security. In this work we also consider a database with varying illumination. The images were taken from a far distance and do not show a close view of the face, whereas in most face databases a clear face view is available. First, the face is located as the region of interest; then LBP and LPQ descriptors, which are illumination invariant in nature, are applied. GMM is then used to reduce the feature set by taking the negative log-likelihood of each LBP- and LPQ-described image histogram. Finally, an ANN is used for classification. The experimental results show excellent accuracy in overall testing of the input data.
Keywords: illumination invariant, face recognition, LBP, LPQ, GMM, ANN
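The illumination robustness of LBP comes from encoding only the sign of the difference between each neighbour and the centre pixel, so any monotonic brightness change leaves the code untouched. A sketch of the basic 8-neighbour code:

```python
def lbp_code(img, y, x):
    """Basic 8-neighbour Local Binary Pattern: each neighbour sets a
    bit when it is >= the centre pixel, giving an illumination-robust
    local texture code in [0, 255]."""
    centre = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch, 1, 1))  # → 241
```

Adding a constant brightness offset to every pixel leaves the code unchanged, which is the invariance the abstract relies on; histograms of such codes over face regions form the descriptor.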
Segmentation of Tumor Region in MRI Images of Brain using Mathematical Morpho... (CSCJournals)
This paper introduces an efficient detection of brain tumors from cerebral MRI images. The methodology consists of two steps: enhancement and segmentation. An enhancement process is applied to improve the quality of the images and to limit the risk of distinct regions fusing during the segmentation phase. Mathematical morphology is applied both to increase the contrast of the MRI images and to segment them. Experimental results on brain images show the feasibility and performance of the proposed approach.
IRJET- Image Segmentation Techniques: A Survey (IRJET Journal)
1) The document discusses various techniques for image segmentation including histogram-based techniques, K-means clustering, fuzzy C-means clustering, and watershed segmentation.
2) Histogram-based techniques use the histogram of pixel intensities or colors to separate an image into regions but do not capture much detail. K-means and fuzzy C-means are clustering techniques that group similar pixels but do not consider spatial relationships.
3) The document surveys recent research on applying these techniques and combinations such as initializing fuzzy C-means with histograms to improve convergence speed and incorporating spatial data.
This summarizes an academic paper that proposes an unsupervised algorithm to detect regions of interest (ROIs) in images using fast feature detectors. It detects keypoints using Speeded-Up Robust Features (SURF) and Features from Accelerated Segment Test (FAST) to maximize interest points. It categorizes keypoints as foreground or background using k-nearest neighbors classification on texture descriptors. ROIs are identified as groups of foreground keypoints. Preliminary experiments showed this approach can efficiently detect ROIs without computationally expensive comparisons between images.
This document describes the design and implementation of a numerically controlled oscillator (NCO) on an FPGA. An NCO generates sinusoidal signals through a phase accumulator and lookup table. The authors implemented an NCO with 9-bit phase resolution, 54.18 dB spur level, 24-bit frequency resolution, and sine and cosine output signals. They designed the NCO in VHDL, simulated it in Xilinx ISE, and tested it on a Spartan-2 FPGA board. The hardware results matched the simulation results, demonstrating frequencies from 1.25 MHz to 7.5 MHz with matching output on a spectrum analyzer. The FPGA-based NCO can be used for applications like software-defined radio.
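The phase-accumulator-plus-lookup-table structure is easy to model in software. The bit widths below follow the stated 24-bit frequency resolution and 9-bit phase resolution; the clock rate is abstracted to one sample per step:

```python
import math

def nco_samples(freq_word, acc_bits, lut_bits, n):
    """Numerically controlled oscillator sketch: a phase accumulator
    advances by freq_word each clock; its top lut_bits bits index a
    sine lookup table. f_out = freq_word / 2**acc_bits * f_clk."""
    lut = [math.sin(2 * math.pi * i / 2 ** lut_bits)
           for i in range(2 ** lut_bits)]
    acc, out = 0, []
    for _ in range(n):
        out.append(lut[acc >> (acc_bits - lut_bits)])
        acc = (acc + freq_word) & (2 ** acc_bits - 1)  # wrap at 2**acc_bits
    return out

# 24-bit accumulator, 9-bit LUT; freq_word = 2**22 gives f_clk / 4,
# i.e. one sine period every four samples.
print([round(v, 6) for v in nco_samples(2 ** 22, 24, 9, 4)])
# → [0.0, 1.0, 0.0, -1.0]
```

A cosine output falls out of the same accumulator by adding a quarter-period offset to the LUT index, which is how the design produces both waveforms from one phase register.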
Presentation of publishing training in France (Asfored)
A presentation of publishing training in France by the students of the 2013 class of the mastère spécialisé Management de l'édition (specialized master's in publishing management). The students presented their program at the Seagull School of Publishing during their study trip to New Delhi.
Use Email Marketing wisely; Stand out from Junk Mail (believe52)
Email marketing is an effective but cheap way to promote products, however junk mail is common so marketers must stand out. The document provides tips for effective email marketing: 1) use text rather than images to avoid being marked as spam, 2) avoid attachments which can spread viruses, 3) avoid fancy HTML which may not display properly on all email clients, 4) address the recipient by name to increase interest, and 5) provide a valid sender address to avoid being blocked or marked as spam.
This document proposes a method for change detection in images that combines Change Vector Analysis, K-Means clustering, Otsu thresholding, and mathematical morphology. It involves detecting intensity changes using CVA, segmenting the difference image using K-Means, calculating a threshold with Otsu's method, applying the threshold and morphological operations, and comparing results to other change detection techniques. Experimental results on medical and other images show the proposed method achieves satisfactory change detection with fewer errors compared to other methods.
1. The document discusses connecting the visualization toolkit (VTK) and insight toolkit (ITK) for medical image analysis tasks like registration and segmentation.
2. Class hierarchies were implemented to define VTK filters using ITK algorithms for tasks like image filtering, registration, and segmentation.
3. Examples of connecting the toolkits for registration and segmentation are presented, including registering longitudinal MRI and CT lung images for comparison.
Image registration is the fundamental task used to
match two or more partially overlapping images taken, for
example, at different times,from different sensors, or from
different viewpoints and stitch these images into one
panoramic image comprising the whole scene. It is
afundamental image processing technique and is very useful
in integrating information from different sensors, finding
changes in images taken at different times, inferring threedimensional
information from stereo images, and recognizing
model-based objects.
This paper overviews the theoretical aspects of an image
registration problem. The purpose of this paper is to present a
survey of image registration techniques. This technique of
image registration aligns two images geometrically. These two
images are reference image and sensed image. The ultimate
purpose of digital image filtering is to support the visual
identification of certain features expressed by characteristic
shapes and patterns. Numerous recipes, algorithms and ready
made programs exist nowadays that predominantly have in
common that users have to set certain parameters.
Particularly if processing is fast and shows results rather
immediately, the choice of parameters may be guided by
making the image ―looking nice‖. However, in practical
situations most users are not in a mood to ―play around‖
with a displayed image, particularly if they are in a stressy
situation as it may encountered in security applications. The
requirements for the application of digital image processing
under such circumstances will be discussed with an example
of automaticfiltering without manual parameter settings that
even entails the advantage of delivering unbiased results
Matching algorithm performance analysis for autocalibration method of stereo ...TELKOMNIKA JOURNAL
Stereo vision is one of the interesting research topics in the computer vision field. Two cameras are used to generate a disparity map, resulting in the depth estimation. Camera calibration is the most important step in stereo vision. The calibration step is used to generate an intrinsic parameter of each camera to get a better disparity map. In general, the calibration process is done manually by using a chessboard pattern, but this process is an exhausting task. Self-calibration is an important ability required to overcome this problem. Self-calibration required a robust and good matching algorithm to find the key feature between images as reference. The purpose of this paper is to analyze the performance of three matching algorithms for the autocalibration process. The matching algorithms used in this research are SIFT, SURF, and ORB. The result shows that SIFT performs better than other methods.
MEDICAL IMAGE TEXTURE SEGMENTATION USING RANGE FILTERcscpconf
Medical image segmentation is a frequent processing step in image understanding and computer-aided diagnosis. In this paper, we propose medical image texture segmentation using a texture filter. Three different image enhancement techniques are utilized to remove strong speckle noise as well as enhance the weak boundaries of medical images. We propose to exploit the concept of range filtering to extract the texture content of medical images. Experiments are conducted on the ImageCLEF2010 database. Results show the efficacy of the proposed medical image texture segmentation.
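Range filtering itself is simple: each output pixel is the difference between the maximum and minimum grey level in its neighbourhood, so flat regions score low and textured regions score high. A minimal sketch on nested lists, assuming a square window (not the paper's implementation):

```python
def range_filter(img, radius=1):
    """Local range (max - min) in a (2r+1)x(2r+1) window: a cheap texture measure."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = max(vals) - min(vals)
    return out

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
textured = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]
```

Thresholding the range image then separates textured tissue from smooth background.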
IRJET- Document Layout analysis using Inverse Support Vector Machine (I-SV...IRJET Journal
This document discusses using inverse support vector machines (I-SVM) for document layout analysis of Hindi newspaper images for optical character recognition. It proposes a framework that uses bounding box segmentation, feature extraction using subline direction and bounding box shape detection, and I-SVM classification. Preprocessing steps include binarization, removing horizontal/vertical lines, and morphological operations. Experimental results show the algorithm can accurately label blocks in newspaper layouts and extract articles for OCR.
Particle Swarm Optimization for Nano-Particles Extraction from Supporting Mat...CSCJournals
This document summarizes a study that uses Particle Swarm Optimization (PSO) for automatic segmentation of nano-particles in Transmission Electron Microscopy (TEM) images. PSO is applied to specify local and global thresholds for segmentation by treating image entropy as a minimization problem. Results show the PSO method improves over previous techniques by reducing incorrect characterization of nano-particles in images affected by liquid concentrations or supporting materials, with up to a 27% reduction in errors. Compared to manual characterization, PSO provides comparable particle counting with higher computational efficiency suitable for real-time analysis.
Feature selection is the fundamental step in image registration; various tasks such as feature extraction and detection are based on a feature-based approach. This paper discusses our technique, a hybrid of local affine and thin-plate-spline transformations. An automatic edge detection method to obtain the correct edge map is put forward to deal with image registration under affine transformation and to improve registration quality. Registration algorithms compute transformations that set the correspondence between the two images. The purpose of this paper is to provide a comprehensive comparison of the existing literature on image registration methods with the proposed technique.
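The affine part of such a registration can be estimated from as few as three point correspondences, since a 2x3 affine matrix has six unknowns. The sketch below solves the x and y channels as two independent 3x3 linear systems via Cramer's rule; it is illustrative only (real pipelines use many correspondences with least squares, and the thin-plate-spline part is not shown):

```python
def fit_affine(src, dst):
    """Estimate a 2x3 affine transform from three point correspondences."""
    def solve3(pts, target):
        # Cramer's rule on the 3x3 system [x, y, 1] . coeffs = target
        m = [[p[0], p[1], 1.0] for p in pts]

        def det3(a):
            return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                    - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                    + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

        det = det3(m)
        coeffs = []
        for col in range(3):
            mc = [row[:] for row in m]
            for r in range(3):
                mc[r][col] = target[r]
            coeffs.append(det3(mc) / det)
        return coeffs

    xs = solve3(src, [p[0] for p in dst])
    ys = solve3(src, [p[1] for p in dst])
    return [xs, ys]   # rows of the 2x3 matrix

def apply_affine(A, p):
    return (A[0][0] * p[0] + A[0][1] * p[1] + A[0][2],
            A[1][0] * p[0] + A[1][1] * p[1] + A[1][2])

# a pure translation by (2, 3), recovered from three correspondences
src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (3, 3), (2, 4)]
A = fit_affine(src, dst)
```

Once fitted, the same transform maps any sensed-image point into reference-image coordinates.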
Usage of Shape From Focus Method For 3D Shape Recovery And Identification of ...CSCJournals
Shape from focus is a method of 3D shape and depth estimation of an object from a sequence of pictures with changing focus settings. In this paper we propose a novel method of shape recovery, originally created for shape and position identification of a glass pipette in a medical hybrid robot. In the proposed algorithm, the Sum of Modified Laplacian is used as the focus operator. Each step of the algorithm is tested in order to pick the operators with the best results. The reconstruction allows not only determining the shape but also precisely defining the position of the object. The results of the proposed method, obtained on real objects, show the efficiency of this scheme.
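The Sum of Modified Laplacian focus operator mentioned above sums, over a window, the absolute second differences in x and y; sharper images produce larger responses, so the frame maximising it at a pixel gives that pixel's depth. A minimal sketch over the interior pixels of a small patch (toy data, not the paper's code):

```python
def sum_modified_laplacian(img):
    """Focus measure: sum over interior pixels of
    |2*I(x,y) - I(x-1,y) - I(x+1,y)| + |2*I(x,y) - I(x,y-1) - I(x,y+1)|."""
    h, w = len(img), len(img[0])
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ml = (abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1])
                  + abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]))
            total += ml
    return total

blurred = [[4, 4, 4], [4, 5, 4], [4, 4, 4]]
sharp = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
```

Ranking the focus stack by this measure per pixel yields the depth map.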
Fingerprint Image Compression using Sparse Representation and Enhancement wit...Editor IJCATR
A technique for enhancing decompressed fingerprint images using the Wiener2 filter is proposed. Compression is first done by sparse representation; compression of fingerprints is necessary for reducing memory consumption and for efficient transfer of fingerprint images, which is essential for applications such as access control and forensics. In this technique, a dictionary is first constructed from patches of fingerprint images; a fingerprint is then selected and its coefficients are obtained and encoded, yielding the compressed fingerprint. When the fingerprint is reconstructed, however, it is affected by noise, so the Wiener2 filter is used to remove the noise from the image. The ridge and bifurcation counts are extracted from the decompressed and enhanced fingerprints. The experimental results show that the enhanced fingerprint image preserves more bifurcations than the decompressed fingerprint image. Future analysis can consider preserving ridges.
Image processing based girth monitoring and recording system for rubber plant...sipij
Measuring the girth and continuous monitoring of the increase in girth is one of the most important processes in rubber plantations since identification of girth deficiencies would enable planters to take corrective actions to ensure a good yield from the plantation.
This research paper presents an image processing based girth measurement & recording system that can replace existing manual process in an efficient and economical manner.
The system uses a digital image of the tree, reading the number drawn on the tree to identify the tree and measuring its width. The image is thresholded first and then filtered using several criteria to identify possible candidates for numbers. Identified blobs are then fed to the Tesseract OCR engine for number recognition. The thresholded image is then filtered again with different criteria to segment out the black strip drawn on the tree, which is used to calculate the width of the tree from calibration parameters. Once the tree number is identified and the width is calculated, the measured girth of the tree is stored in the database under the identified tree number.
The results obtained from the system indicated significant improvement in efficiency and economy for main plantations. As future developments, we propose a standard commercial system for girth measurement using standardized 2D barcodes as tree identifiers.
Tonsillitis is a disease that can be found in every part of the world. Moreover, it is one of the main contributing causes of heart attack and pneumonia, and it has been reported that a large number of people have died of these conditions. This paper proposes a Gabor filter design with efficient noise reduction and low power consumption. Using the textural properties of anatomical structures, the filter design is suitable for detecting the early stages of the disease. The code for the Gabor filter is developed in MATLAB.
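A Gabor filter kernel is a Gaussian window multiplied by an oriented sinusoid; its wavelength, orientation, and envelope width are the design parameters. The abstract targets MATLAB; for illustration only, the sketch below generates the real part of such a kernel in Python (the parameter values are arbitrary choices, not the paper's design):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian-windowed cosine grating."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates by theta to orient the grating
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0)
```

Convolving an image with a bank of such kernels at several orientations yields the textural responses used for detection.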
Corner Detection Using Mutual InformationCSCJournals
This work presents a new method of corner detection based on mutual information and invariant to image rotation. The use of mutual information, a universal similarity measure, has the advantage of avoiding differentiation, which amplifies the effect of noise at high frequencies. In our work, we use mutual information normalized by entropy. The tests are performed on grayscale images.
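Mutual information normalized by entropy can be computed from the joint histogram of two patches: identical patches score 1, while unrelated ones score lower. A small histogram-based sketch on toy grey levels (an illustration of the measure, not the authors' detector, which normalizes by joint entropy here as one common choice):

```python
import math

def normalized_mutual_information(a, b, levels=4):
    """MI(A;B) normalised by the joint entropy H(A,B); a, b are equal-size
    lists of integer grey levels in [0, levels)."""
    n = len(a)
    joint = {}
    pa = [0] * levels
    pb = [0] * levels
    for u, v in zip(a, b):
        joint[(u, v)] = joint.get((u, v), 0) + 1
        pa[u] += 1
        pb[v] += 1
    h_joint = 0.0
    mi = 0.0
    for (u, v), c in joint.items():
        p = c / n
        h_joint -= p * math.log2(p)
        mi += p * math.log2(p / ((pa[u] / n) * (pb[v] / n)))
    return mi / h_joint if h_joint else 1.0

patch = [0, 1, 2, 3, 0, 1, 2, 3]
noise = [0, 0, 1, 1, 2, 2, 3, 3]
```

Because only co-occurrence counts are used, no image derivatives are needed, which is the noise-robustness argument made in the abstract.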
Face recognition using gaussian mixture model & artificial neural networkeSAT Journals
Abstract
Face recognition is a non-contact and user-friendly biometric identification technology with broad application prospects in the military, public security and economic security. In this work, we also consider an illumination-variable database. The images are taken from a far distance; unlike most face databases, which assume a clear close-up view, close views of the individual's face are not required. First, the face is located as the region of interest, and then the LBP and LPQ descriptors, which are illumination invariant in nature, are applied. GMM is then used to reduce the feature set by taking the negative log-likelihood from each LBP- and LPQ-described image histogram. Finally, an ANN is used for classification. The experimental results show excellent accuracy rates in overall testing of the input data.
Keywords: Illumination invariant, face recognition, LBP, LPQ, GMM, ANN
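The LBP descriptor named in the keywords assigns each pixel an 8-bit code by comparing it with its eight neighbours, which is what makes it robust to monotonic illumination changes. A minimal sketch of the per-pixel code (basic LBP only; the paper's histogramming and LPQ stage are not shown):

```python
def lbp_code(img, y, x):
    """8-neighbour Local Binary Pattern code of pixel (y, x):
    each neighbour >= centre contributes one bit."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[y][x]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code

patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
# scaling all intensities (a global illumination change) leaves the code intact
brighter = [[v * 2 for v in row] for row in patch]
```

Histograms of these codes over image regions form the feature vectors fed to the GMM stage.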
Segmentation of Tumor Region in MRI Images of Brain using Mathematical Morpho...CSCJournals
This paper introduces an efficient detection of brain tumors from cerebral MRI images. The methodology consists of two steps: enhancement and segmentation. To improve the quality of the images and limit the risk of fusing distinct regions during the segmentation phase, an enhancement process is applied. We apply mathematical morphology to increase the contrast in MRI images and to segment them. Experimental results on brain images show the feasibility and performance of the proposed approach.
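Binary mathematical morphology of the kind used here reduces to taking the minimum (erosion) or maximum (dilation) over a structuring element; an opening (erosion then dilation) removes small specks while preserving larger regions. A toy sketch with a 3x3 structuring element (an illustration of the operators, not the paper's pipeline):

```python
def _morph(img, op):
    """Apply min (erosion) or max (dilation) over a 3x3 window, clipped at borders."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - 1), min(h, y + 2))
                      for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = op(window)
    return out

def dilate(img):
    return _morph(img, max)

def erode(img):
    return _morph(img, min)

# opening (erode then dilate) removes the isolated speck but keeps the block
img = [[1, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
opened = dilate(erode(img))
```

On grey-level MRI data, the same min/max operators implement the contrast-enhancing morphological filters the abstract refers to.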
IRJET- Image Segmentation Techniques: A SurveyIRJET Journal
1) The document discusses various techniques for image segmentation including histogram-based techniques, K-means clustering, fuzzy C-means clustering, and watershed segmentation.
2) Histogram-based techniques use the histogram of pixel intensities or colors to separate an image into regions but do not capture much detail. K-means and fuzzy C-means are clustering techniques that group similar pixels but do not consider spatial relationships.
3) The document surveys recent research on applying these techniques and combinations such as initializing fuzzy C-means with histograms to improve convergence speed and incorporating spatial data.
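K-means clustering on grey levels, as surveyed above, alternates between assigning each pixel to its nearest cluster centre and recomputing the centres as cluster means. A 1-D sketch (spatial relationships ignored, exactly as the survey notes; initialization and iteration count are illustrative choices):

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain k-means on scalar intensities: alternate assignment and mean update."""
    # spread initial centres over the data range
    lo, hi = min(values), max(values)
    centres = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(v - centres[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centres[c] = sum(members) / len(members)
    return labels, centres

pixels = [10, 12, 11, 200, 205, 198]
labels, centres = kmeans_1d(pixels, k=2)
```

Fuzzy C-means differs only in replacing the hard labels with per-cluster membership weights, which is what the histogram-based initialization in the surveyed work accelerates.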
This summarizes an academic paper that proposes an unsupervised algorithm to detect regions of interest (ROIs) in images using fast feature detectors. It detects keypoints using Speeded-Up Robust Features (SURF) and Features from Accelerated Segment Test (FAST) to maximize interest points. It categorizes keypoints as foreground or background using k-nearest neighbors classification on texture descriptors. ROIs are identified as groups of foreground keypoints. Preliminary experiments showed this approach can efficiently detect ROIs without computationally expensive comparisons between images.
This document describes the design and implementation of a numerically controlled oscillator (NCO) on an FPGA. An NCO generates sinusoidal signals through a phase accumulator and lookup table. The authors implemented an NCO with 9-bit phase resolution, 54.18 dB spur level, 24-bit frequency resolution, and output signals of sine and cosine waves. They designed the NCO in VHDL, simulated it in Xilinx ISE, and tested it on a Spartan-2 FPGA board. The hardware results matched the simulation results, demonstrating frequencies from 1.25 MHz to 7.5 MHz with matching output on a spectrum analyzer. The FPGA-based NCO can be used for applications like software defined
Présentation de la formation à l'édition en FranceAsfored
A presentation of publishing training in France by the students of the 2013 class of the specialized master's program in Publishing Management. The students presented their program at the Seagull School of Publishing during their study trip to New Delhi.
Use Email Marketing wisely; Stand out from Junk Mailbelieve52
Email marketing is an effective but cheap way to promote products, however junk mail is common so marketers must stand out. The document provides tips for effective email marketing: 1) use text rather than images to avoid being marked as spam, 2) avoid attachments which can spread viruses, 3) avoid fancy HTML which may not display properly on all email clients, 4) address the recipient by name to increase interest, and 5) provide a valid sender address to avoid being blocked or marked as spam.
7 Tips for Selling Expensive Collectibles On eBaybelieve52
The document provides 7 tips for selling expensive collectibles on eBay:
1. Set a reserve price that is the minimum you will accept and start the bidding very low to attract buyers.
2. Provide detailed descriptions of how the item will be carefully packed to prevent damage.
3. Require buyers to pay for insurance to protect both parties from liability if the collectible is broken.
4. Verify the authenticity of the collectible by providing details, photos or certificates of authenticity.
5. Do not offer returns but suggest using an escrow service for high value items to assure buyers.
6. Take extra precautions like verifying funds if shipping internationally due to higher fraud risks overseas.
Mobile Agents: An Intelligent Multi-Agent System for Mobile PhonesIOSR Journals
This document summarizes a research paper on developing an intelligent multi-agent system for hotel reservations on mobile phones. The system would allow users to enter search criteria and have an agent automatically search for and book the most suitable hotel. The agent would move between hotels on the user's mobile device to collect details on facilities, prices, reviews and more. The goal is to make the hotel booking process more efficient and less time-consuming for users. The document outlines the proposed system design and architecture, which incorporates mobile agent technologies to enable the agent to perform tasks remotely on behalf of the user.
Captcha Recognition and Robustness Measurement using Image Processing TechniquesIOSR Journals
This document summarizes a research paper that evaluates the robustness of different types of CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) using image processing techniques. The paper studies three classes of text-based CAPTCHAs: ones with complex backgrounds, BotDetect CAPTCHAs, and Google CAPTCHAs. It proposes a character segmentation algorithm using forepart prediction and character-adaptive masking to break the CAPTCHAs. Experimental results show segmentation and recognition accuracies ranging from 60% to 100% for different CAPTCHA classes, demonstrating some classes are more robust than others against this attack method. The paper concludes the algorithm can improve over traditional methods but more
This document presents the key sections for developing a research project, including problem identification, the formulation of general and specific objectives, the theoretical framework, the hypothesis or assumption, the variables, the population or sample, the data collection techniques and instruments, and the data processing and analysis procedures.
A Predictive Control Strategy for Power Factor CorrectionIOSR Journals
This document presents a predictive control strategy for a boost power factor correction (PFC) converter. It begins with an introduction to power factor correction and discusses common control techniques like hysteresis control and linear control. It then describes the boost converter topology used for the PFC converter and discusses the design of key components. The predictive control strategy is presented, which predicts the duty cycle needed over a half line period to achieve unity power factor. Simulation results show the control strategy maintains near unity power factor and low total harmonic distortion under different operating conditions.
This document provides information about transportation in Lübeck, Germany. It discusses how students travel to school, including data showing most use bicycles. It also addresses air and noise pollution levels in Lübeck. Specifically, it notes that particulate limits were exceeded on some days in 2011-2012. The document outlines plans to expand transportation infrastructure and makes comparisons between sustainable and unsustainable transportation modes in the region.
A Secure Software Implementation of Nonlinear Advanced Encryption StandardIOSR Journals
This document summarizes a research paper on image steganography using a polynomial key for covert communications. It proposes a new steganographic encoding scheme that separates the color channels of bitmap images and hides messages randomly in the least significant bit of one color component where the other two components' colors equal the selected key. The secret message, cover image, and pseudorandom seed generated by a polynomial are inputs. Pixels where the red and green components match the key are identified, and the secret bits are randomly inserted in the blue component's least significant bit using the seed. Statistical analysis found no difference in quality between the original and stego image. The scheme aims to provide a covert communication method using open systems.
This document summarizes the electrochemical synthesis and corrosion inhibiting properties of a poly N-methyl aniline coating doped with di-N-propyl malonic acid on stainless steel. Cyclic voltammetry was used to polymerize N-methyl aniline in a solution containing di-N-propyl malonic acid on a stainless steel electrode. An adherent red polymer film was obtained. Electrochemical impedance spectroscopy and potentiodynamic polarization techniques showed that this polymer coating provided excellent corrosion protection of stainless steel in 0.5M sulfuric acid solution, with inhibition efficiencies above 99% as indicated by large increases in charge transfer resistance and decreases in corrosion current density compared to uncoated stainless steel.
1) The document proposes a more secure implementation of the AES encryption algorithm by making the S-box structure nonlinear and dynamic.
2) A biometrics scheme is combined with the AES encryption and decryption to improve authentication security. Fingerprints are used in both encryption and decryption processes.
3) The implementation generates a random virtual S-box for each input by XORing the default AES S-box with a derived S-box, making the S-box structure nonlinear and dynamic. This improves security against attacks on the AES algorithm.
Assessment of Serum Gonadotrophins and Prolactin Hormone Levels in In Vitro F...IOSR Journals
1) The study assessed hormone levels in 60 women undergoing in vitro fertility treatment at clinics in Port Harcourt, Nigeria.
2) Blood samples were taken on day two of the women's menstrual cycles and analyzed for FSH, LH, and prolactin levels using ELISA.
3) The results found that 31 of the 60 women (51.7%) had elevated hormone levels, including elevated prolactin in 23 women (38.3%), which is a major cause of female infertility.
SIMESITEM 2012: The European ARtSENSE project: Towards the Adaptive Augmented...ARtSENSE_EU
The document outlines presentations from the ARtSense Consortium meeting which discussed projects using augmented reality for cultural heritage applications, including the ARCHEOGUIDE project, augmented reality systems for museum guidance, and plans to use augmented reality to bring artifacts like Lavoisier's laboratory to life for visitors in a way that provides historical and technical context. Museum professionals, artists, and technologists discussed challenges and opportunities in developing augmented reality experiences for museums and cultural institutions.
Performance Evaluation of High Speed Congestion Control ProtocolsIOSR Journals
This document evaluates and compares the performance of seven high-speed congestion control protocols: Bic TCP, Cubic TCP, Hamilton TCP, HighSpeed TCP, Illinois TCP, Scalable TCP and YeAH TCP. The protocols are simulated using NS-2 with multiple flows and their efficiency, fairness, and overall performance are calculated and compared for different numbers of flows. Key metrics like throughput, link utilization, and fairness index are analyzed for the protocols under varying parameters like RTT, queue size, and number of flows. Cubic TCP and Bic TCP generally showed better performance than the other protocols based on the simulations.
The NEW New West: Strategic Approach to Community Building Lisa Spitaleinvestnewwest
The document summarizes the City of New Westminster's strategic approach to community building since the 1990s. It involved taking a three-pronged approach of enforcement against criminal elements, social planning to address root causes of issues like poverty, and civic leadership in economic development. This included initiatives like cracking down on problematic housing and businesses, addressing homelessness, and reducing liquor licenses. It resulted in significant declines in crime. The city has focused on growth around SkyTrain stations, developing amenities like Westminster Pier Park, and partnerships with major institutions. Population and development have increased substantially under this approach.
Requirements and Challenges for Securing Cloud Applications and ServicesIOSR Journals
This document discusses the requirements and challenges for securing cloud applications and services. It begins with an abstract that introduces cloud computing security as complex due to many factors. The document then provides context on cloud computing architectural frameworks and models to help evaluate security risks when adopting cloud services. It discusses key aspects of cloud architecture like deployment models, service models, and multi-tenancy that impact security. Understanding these relationships is important for informed risk management decisions regarding cloud adoption strategies.
Influence of local segmentation in the context of digital image processingiaemedu
This document discusses local segmentation in digital image processing. It begins by defining local segmentation as a reasonable approach for low-level image processing that examines existing algorithms and creates new ones. Local segmentation can be applied to important image processing tasks. The document then evaluates using local segmentation for image denoising, finding it highly competitive with state-of-the-art algorithms. Local segmentation attempts to separate signal from noise on a local scale, allowing higher-level algorithms to operate directly on the signal without amplifying noise.
In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments from simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography society with an easy-to-use and robust algorithm for DT.
International Journal on Soft Computing ( IJSC )ijsc
This document provides a summary of various wavelet-based image coding schemes. It discusses the basics of image compression including transformation, quantization, and encoding. It then reviews the wavelet transform approach and several popular wavelet coding techniques such as EZW, SPIHT, SPECK, EBCOT, WDR, ASWDR, SFQ, EPWIC, CREW, SR and GW coding. These techniques exploit the multi-resolution properties of wavelets for superior energy compaction and compression performance compared to DCT-based methods like JPEG. The document provides details on how each coding scheme works and compares their features.
Fuzzy Type Image Fusion Using SPIHT Image Compression TechniqueIJERA Editor
This paper presents a fuzzy type image fusion technique using Set Partitioning in Hierarchical Trees (SPIHT). It is concluded that fusion at higher decomposition levels provides better fusion quality. The technique can be used for fusion of fuzzy images as well as for multi-modal image fusion. The proposed algorithm is very simple, easy to implement and could be used for real-time applications. The paper also provides a comparative study between the proposed and previously existing techniques, and validates the proposed algorithm using the Peak Signal to Noise Ratio (PSNR) and Root Mean Square Error (RMSE).
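PSNR and RMSE, the validation metrics named here, are straightforward to compute: RMSE is the root of the mean squared pixel error, and PSNR expresses the peak signal level relative to that error in decibels. A minimal sketch on flat pixel lists (standard formulas, assuming 8-bit data with peak 255):

```python
import math

def rmse(a, b):
    """Root mean square error between two equal-length pixel sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)

ref = [100, 150, 200, 250]
noisy = [102, 148, 201, 249]
```

Higher PSNR (lower RMSE) against a reference indicates a better fused image, which is how the comparison in the paper is scored.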
Wavelet based Image Coding Schemes: A Recent Survey ijsc
A variety of new and powerful algorithms have been developed for image compression over the years. Among them the wavelet-based image compression schemes have gained much popularity due to their overlapping nature which reduces the blocking artifacts that are common phenomena in JPEG compression and multiresolution character which leads to superior energy compaction with high quality reconstructed images. This paper provides a detailed survey on some of the popular wavelet coding techniques such as the Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Tree (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) Coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques like the Wavelet Difference Reduction (WDR) and the Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), the Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
Stereo matching algorithm using census transform and segment tree for depth e...TELKOMNIKA JOURNAL
This article proposes an algorithm for the stereo matching correspondence process used in many applications such as augmented reality, autonomous vehicle navigation and surface reconstruction. The proposed framework is developed through a series of functions whose final result is a disparity map carrying the depth-estimation information. The framework input is the stereo image pair, representing the left and right images respectively. The proposed algorithm has four steps in total: matching cost computation using the census transform, cost aggregation using a segment tree, optimization using the winner-takes-all (WTA) strategy, and a post-processing stage using a weighted median filter. Based on experimental results from the standard Middlebury benchmarking evaluation system, the disparity maps produce low average errors of 9.68% for the nonocc attribute and 18.9% for the all attribute. On average, it performs far better than, or is very competitive with, other available methods from the benchmark system.
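The census transform in step one encodes each pixel as a bit string of comparisons against its neighbours, so the matching cost becomes a Hamming distance that is unaffected by global brightness changes. A toy sketch with a 3x3 window (the window size is an illustrative choice; the paper's is not specified here):

```python
def census(img, y, x):
    """Census transform: one bit per neighbour, set where neighbour < centre."""
    bits = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            bits = (bits << 1) | (1 if img[y + dy][x + dx] < img[y][x] else 0)
    return bits

def hamming(a, b):
    """Matching cost: number of differing bits between two census codes."""
    return bin(a ^ b).count("1")

left = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
# same local structure, globally brighter: census codes are unchanged
right = [[110, 120, 130], [140, 150, 160], [170, 180, 190]]
cost = hamming(census(left, 1, 1), census(right, 1, 1))
```

The segment-tree aggregation and WTA steps then pick, per pixel, the disparity with the lowest aggregated Hamming cost.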
This document discusses using Fourier transforms to measure image similarity. It proposes a metric that uses both the real and complex components of the Fourier transform to compute a similarity ranking between two images. The metric calculates the intersection of the covariance matrix of the Fourier transform magnitude and phase spectra of the two images. The approach is shown to be advantageous for images with varying degrees of lighting. It summarizes existing methods for image comparison and feature extraction, and discusses implementing the proposed similarity metric using OpenCV. Sample results are given comparing the new Fourier-based method to existing histogram comparison techniques.
An enhanced fireworks algorithm to generate prime key for multiple users in f...journalBEEI
This work presents a new method to enhance the performance of the fireworks algorithm for generating a prime key for multiple users. A thresholding technique from image segmentation is used as one of the major steps in processing the digital image. In this research, we propose a hybrid technique of fireworks and camel herd algorithms (HFCA), where the fireworks are based on 3-dimensional (3D) logistic chaotic maps. Both the Otsu method and the convolution technique are used in pre-processing the image for further analysis: Otsu is employed to segment the image and find the threshold for each image, and convolution is used to extract the features of the used images. The sample images consist of two fingerprint images taken from the Biometric System Lab (University of Bologna). The performance of the proposed method is evaluated using the FVC2004 dataset. In the enhanced algorithm, a quick response code (QRcode) is used to generate a stream key from random text or numbers, a class of symmetric-key algorithm that operates on individual bits or bytes.
Multimodality medical image fusion using improved contourlet transformationIAEME Publication
1. The document presents a technique for medical image fusion using an improved contourlet transformation with log Gabor filters.
2. It proposes decomposing images using a contourlet transformation with modified directional filter banks that incorporate log Gabor filters. This aims to provide high quality fused images while localizing features accurately and minimizing noise.
3. Experimental results on fusing medical images show that the proposed technique achieves higher quality measurements like PSNR compared to a basic contourlet transformation fusion approach.
A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to perfectly reconstruct a signal from a series of measurements. Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized. An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem: if the signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly. The main idea is that with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct the signal. Sparse sampling (also known as compressive sampling or compressed sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Shannon–Nyquist sampling theorem. There are two conditions under which recovery is possible.[1] The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals. This opens the possibility of compressed data acquisition protocols that directly acquire just the important information. Sparse sampling (CS) is a fast-growing area of research. It avoids the extravagant acquisition process by measuring fewer values to reconstruct the image or signal. Sparse sampling has been adopted successfully in various fields of image processing and has proved its efficiency. Some image processing applications, such as face recognition, video encoding, image encryption and reconstruction, are presented here.
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTIONsipij
The aim of this paper is to reformulate the global image thresholding problem as a well-founded statistical method known as change-point detection (CPD). Our proposed CPD thresholding algorithm does not assume any prior statistical distribution of background and object grey levels. Further, the method is less influenced by outliers, owing to our judicious derivation of a robust criterion function based on the Kullback-Leibler (KL) divergence measure. Experimental results show the efficacy of the proposed method compared to other popular methods for global image thresholding. In this paper we also propose a performance criterion for comparing thresholding algorithms; this criterion does not depend on any ground-truth image. We have used it to compare the results of the proposed thresholding algorithm with the most-cited global thresholding algorithms in the literature.
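As an illustration of casting thresholding as a change-point problem, the sketch below picks the grey level that maximises a two-segment Gaussian log-likelihood over the pixel intensities. This is a simplified stand-in for the idea, not the paper's KL-divergence criterion.

```python
import numpy as np

def cpd_threshold(gray):
    """Illustrative global threshold via a change-point criterion:
    model below- and above-threshold pixels as two Gaussian segments
    and maximise the two-segment log-likelihood (up to constants).
    A simplified sketch, not the paper's KL-based derivation."""
    v = np.asarray(gray, dtype=float).ravel()
    best_t, best_ll = None, -np.inf
    for t in range(1, 256):
        lo, hi = v[v < t], v[v >= t]
        if len(lo) < 2 or len(hi) < 2:
            continue
        # Per-segment Gaussian log-likelihood term: -n * log(sigma).
        ll = -len(lo) * np.log(lo.std() + 1e-9) \
             - len(hi) * np.log(hi.std() + 1e-9)
        if ll > best_ll:
            best_ll, best_t = ll, t
    return best_t
```

On a bimodal image the change point lands in the valley between the two modes, with no assumption about which mode is object and which is background.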
NEURAL NETWORKS WITH DECISION TREES FOR DIAGNOSIS ISSUES – cscpconf
1) The document presents a new technique for fault detection and isolation that uses neural networks to generate models of normal and faulty system behaviors. A decision tree is then used to evaluate residuals and isolate faults.
2) The technique is demonstrated on a benchmark process for an electro-pneumatic valve actuator. Neural networks are used to generate models of the actuator's normal and 19 possible faulty behaviors.
3) A decision tree structure is proposed to simplify online fault diagnosis by only evaluating the most significant residuals needed at each step to isolate faults. This reduces computational effort compared to evaluating all residuals.
NEURAL NETWORKS WITH DECISION TREES FOR DIAGNOSIS ISSUES – csitconf
This paper presents a new fault detection and isolation (FDI) technique applied to an industrial system. The technique is based on neural-network models of fault-free and faulty behaviours (NNFMs). The NNFMs are used for residual generation, while a decision tree architecture is used for residual evaluation. The decision tree is built from data collected at the NNFM outputs and is used to isolate detectable faults according to computed thresholds. Each branch of the tree corresponds to a specific residual. With the decision tree, it becomes possible to take the appropriate decision regarding the actual process behaviour by evaluating only a few residuals. Compared to the usual systematic evaluation of all residuals, the proposed technique requires less computational effort and can be used for online diagnosis. An application example is presented to illustrate and confirm the effectiveness and accuracy of the proposed approach.
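The residual-evaluation idea, consulting only the residuals needed at each step, can be sketched as a hand-written tree. The residual names, threshold values, and fault labels below are hypothetical; in the paper the tree is built from data collected at the NNFM outputs.

```python
# Assumed detection thresholds for two hypothetical residuals.
THRESH = {"r1": 0.10, "r2": 0.25}

def diagnose(residuals):
    """Walk the decision tree, touching only the residuals
    needed at the current step to isolate the fault."""
    if abs(residuals["r1"]) <= THRESH["r1"]:
        # r1 within bounds: no other residual needs evaluating.
        return "fault-free"
    # r1 fired; r2 discriminates between the two fault classes.
    if abs(residuals["r2"]) <= THRESH["r2"]:
        return "fault A"
    return "fault B"

print(diagnose({"r1": 0.02, "r2": 0.9}))  # r2 is never consulted here
```

The saving over systematic evaluation is visible even in this toy: a fault-free decision is reached after a single residual check instead of all of them.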
Number Plate Recognition of Still Images in Vehicular Parking System – IRJET Journal
This document discusses a proposed method for number plate recognition in vehicle parking systems using image processing techniques. It begins with an abstract that outlines the increasing need for automated vehicle management systems due to rising vehicle and traffic volumes. It then provides an overview of the key steps in number plate recognition systems - plate detection, character segmentation, and character recognition. The proposed method uses profile projection for segmentation and neural networks for recognition. The document reviews several existing plate detection methods and their limitations. It proposes a new method that uses edge detection and morphological operations to isolate the license plate from an image while removing noise. Finally, it discusses factors to consider for license plate detection and different image segmentation techniques used in existing automatic number plate recognition systems.
This document outlines an experiment to analyze aluminum castings using digital camera, thermal, and hyperspectral imaging to detect fatigue defects. The goals are to create a repeatable inspection process and determine the most effective nondestructive testing method. Samples will be imaged and hot spots analyzed to identify defects. Preliminary results are inconclusive due to the small sample size and destructive nature of fatigue testing. Further data collection is needed before validating the imaging techniques.
Comparison of signal smoothing techniques for use in embedded system for moni... – Dalton Valadares
Paper comparing several signal smoothing techniques for use in an embedded system responsible for monitoring biofuel quality, specifically oxidative stability.
PERFORMANCE ASSESSMENT OF ANFIS APPLIED TO FAULT DIAGNOSIS OF POWER TRANSFORMER – ecij
Continuous monitoring of a power transformer is essential during its operation; incipient faults inside the tank and in the winding insulation need careful attention. Traditional ratio methods and the Duval triangle can be employed to diagnose these incipient faults, but a correct diagnosis is not always possible because of borderline problems and the existence of multiple faults. Artificial intelligence (AI) techniques may be the best way to handle the nonlinearity and complexity of the input data. In the proposed work, an adaptive neuro-fuzzy inference system (ANFIS) is used to deal with nine incipient fault conditions, plus the healthy condition, of a power transformer, using a sufficient number of DGA transformer-oil samples. A comparison of the diagnostic performance of the methods and the feasibility pertaining to the problem is presented. The diagnosis error in classifying the oil samples and the network structure are the main considerations of the present study.
IRJET - A Systematic Observation in Digital Image Forgery Detection using MATLAB – IRJET Journal
This document summarizes a research paper that proposes a new method for detecting digital image forgeries by analysing illumination inconsistencies. The method extracts texture- and edge-based features from illuminant maps of the face regions in an image; these features are then classified with machine learning to detect whether faces are illuminated inconsistently, indicating tampering. The approach requires only minimal user interaction: specifying bounding boxes around the faces. Evaluation shows the method achieves an 86% detection rate on spliced images, outperforming existing illumination-based approaches. The work is an important step towards reducing human interaction in illumination-based forgery detection.
The document describes a system for recognizing fake currency notes using image processing techniques. It discusses how advances in color printing have made it easier to counterfeit currency notes. The proposed system uses image acquisition, preprocessing, grayscale conversion, edge detection, image segmentation, and feature extraction to analyze images of currency notes and compare them to a dataset of original notes to determine if they are fake or real. The key features extracted for analysis include the security thread, serial number, latent image, watermark, and identification mark. The system is designed to be simple, fast, and accurately detect counterfeits.
Comparative performance analysis of segmentation techniques – IAEME Publication
This document compares the performance of several image segmentation techniques: global thresholding, adaptive thresholding, region growing, and level set segmentation. It applies these techniques to medical and synthetic images corrupted with noise and evaluates the segmentation results using binary classification metrics like sensitivity, specificity, accuracy, and precision. The results show that level set segmentation best preserves object boundaries, adaptive thresholding captures most image details, and global thresholding has the highest success rate at extracting regions of interest. Overall, the study aims to determine the optimal segmentation method for medical images from CT scans.
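The binary classification metrics used in that comparison can be computed directly from a predicted mask and a ground-truth mask; a minimal sketch:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise binary classification metrics for a segmentation
    result: sensitivity, specificity, accuracy, and precision."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # object pixels found
    tn = np.sum(~pred & ~truth)  # background pixels kept
    fp = np.sum(pred & ~truth)   # background wrongly labelled object
    fn = np.sum(~pred & truth)   # object pixels missed
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
    }
```

Given any two of the compared techniques' output masks, the same four numbers make the head-to-head evaluation reproducible.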
Similar to Image Processing for Automated Flaw Detection and CMYK model for Color Image Segmentation using Type 2 Fuzzy Sets (20)
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probes-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
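The "vertically and crosswise" pattern of the Urdhva Tiryakbhyam formula can be illustrated in software: every output column is an independent sum of digit products, which is what allows a hardware multiplier to generate all partial products concurrently. The sketch below uses decimal digits for readability; the paper's design is an 8x8-bit binary multiplier in VHDL.

```python
def urdhva_multiply(a_digits, b_digits):
    """Urdhva Tiryakbhyam ('vertically and crosswise') multiplication.
    Digits are most-significant-first; both operands have equal length."""
    n = len(a_digits)
    # Each column's crosswise products are independent of the others,
    # so in hardware all of them can be formed in parallel.
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a_digits[i] * b_digits[j]
    # Sequential part: carry propagation from the least-significant column.
    result, carry = [], 0
    for c in reversed(cols):
        carry, digit = divmod(c + carry, 10)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        result.append(digit)
    return result[::-1]

print(urdhva_multiply([1, 2], [3, 4]))  # 12 x 34
```

Only the final carry ripple is inherently sequential; the column sums themselves have no mutual dependencies, which is the source of the claimed speedup over sequential partial-product generation.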
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS – IJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024 – Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT – jpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon reserves and the ancient Silk Road trade route, along with China's diplomatic endeavours in the area, has been referred to as the "New Great Game." This research centres on that power struggle, considering geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil politics, and both conventional and non-traditional security are explored and explained. Using Mackinder's Heartland theory, Spykman's Rimland theory, and Hegemonic Stability theory, the study examines China's role in Central Asia. It adheres to an empirical epistemological method and takes care to remain objective, critically analysing primary and secondary research documents to elaborate the role of China's geo-economic outreach in the Central Asian countries and its future prospects. According to this study, China is seeing significant success in trade, pipeline politics, and gaining influence over other governments, a success attributable to the effective use of key instruments such as the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA in new pavement can minimize the carbon footprint, conserve natural resources, reduce harmful emissions, and lower life-cycle costs. Compared with natural aggregate (NA) pavement, however, RCA pavement has been the subject of fewer comprehensive studies and sustainability assessments.
International Conference on NLP, Artificial Intelligence, Machine Learning an... – gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEM – HODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing time on the channel into many segments, each of very short duration. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
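The multiplexer/demultiplexer steps above can be sketched for synchronous TDM, a toy model that assumes equal-length streams and ignores framing overhead and clock recovery:

```python
def tdm_mux(streams):
    """Multiplexer: build frames with one time slot per stream,
    repeated cyclically (synchronous TDM)."""
    return [sample for frame in zip(*streams) for sample in frame]

def tdm_demux(line, n_streams):
    """Demultiplexer: recover stream i by taking every n-th slot,
    starting at slot offset i."""
    return [line[i::n_streams] for i in range(n_streams)]

streams = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
line = tdm_mux(streams)
print(line)                 # interleaved composite signal
print(tdm_demux(line, 3))   # original streams recovered by slot position
```

Note that the demultiplexer relies purely on slot position, which is why transmitter/receiver synchronization is critical: a one-slot misalignment delivers every sample to the wrong stream.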
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM allows multiple signals to share a single transmission medium, making efficient use of the available channel bandwidth.
Understanding Inductive Bias in Machine Learning – SUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Image Processing for Automated Flaw Detection and CMYK model for Color Image Segmentation using Type 2 Fuzzy Sets
1. IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 13, Issue 6 (Jul. - Aug. 2013), PP 89-99
www.iosrjournals.org
www.iosrjournals.org 89 | Page
Image Processing for Automated Flaw Detection and CMYK
model for Color Image Segmentation using Type 2 Fuzzy Sets
Anindita Chatterjee, Himadri Nath Moulick
(Dept-IT, Tata Consultancy Services Kolkata, India)
(Dept-CSE, Aryabhatta Institute of Engg & Management Durgapur, pin-713148, India)
Abstract: Infrared (IR) thermography has evolved in recent years from an emerging nondestructive testing (NDT) technique into a viable approach for both aerospace manufacturing and in-service inspections. One drawback of thermography is that no standard signal processing has been universally adopted, and different algorithms yield different sizing results. Additionally, data interpretation is not as simple as with other NDT techniques. In this paper the most common signal processing techniques applied to pulsed thermography, including derivative processing, pulsed phase thermography and principal component analysis, are applied in an attempt to simplify the damage detection and sizing process. The pulsed thermography experiments were carried out on 25 impacted panels made of carbon fiber epoxy material. Despite using similar panels and the same experimental parameters, the damage detection and sizing processes are not straightforward. It is concluded that some algorithms provide easier detection capability than others; however, on their own, the different algorithms lack the robustness to make the damage detection and sizing processes reliable and fully automated. Optimization of the similarity measure is also an essential theme in medical image registration. In this paper, a novel continuous medical image registration approach (CMIR) is proposed. This is an extension of our previous work, in which we segmented a particular image with a custom algorithm. The CMIR, considering feedback from users and their preferences on the trade-off between global registration and local registration, extracts the region of interest through user interaction and continuously optimizes the registration result. Experimental results show that CMIR is robust and more effective than the basic optimization algorithm. Image registration, as a precondition of image fusion, has become a critical technique in clinical diagnosis. It can be classified into global registration and local registration. Global registration is used most frequently; it gives a good approximation in most cases and does not require determining many parameters. Local registration gives detailed information about the concerned regions, which are the critical regions in the image. Finding the maximum of the similarity measure is an essential problem in medical image registration. Our work concentrates on that particular aspect, with Type-2 fuzzy logic invoked in it.
Keywords: Extrinsic method, fuzzy sets, image processing, intrinsic method, multi-modal image alignment.
I. INTRODUCTION
In pulsed thermography (PT), energy is applied to the specimen using a pulsed excitation. Typically, the energy sources are flash lamps whose flash duration varies from a few milliseconds for good thermal conductors to a few seconds for low-conductivity materials. The applied energy creates a thermal front that propagates from the specimen's surface throughout the specimen. During the cool-down process the surface temperature decreases uniformly for a sample without internal flaws. When the thermal front intersects an interface from a high- to a low-conductivity layer, as in the case of delamination, disbond and porosity, the cooling rate is locally disrupted. This results in an accumulation of heat above the flaw that is also manifested at the specimen's surface and can be detected by an IR camera, thus allowing defective areas to be distinguished from sound areas. Image processing is commonly used for two purposes. Its first use is to improve the visual appearance of images to a human viewer: filtering and color-map adjustments are commonly applied to make an image more pleasant to look at. Its second purpose is to prepare the images or data for the measurement of the features present. This can include applying a threshold to create a binary image, applying morphological filters, etc. The processed image allows the operator to measure the size of the features of interest and could also be used for automated flaw detection and measurement. Although NDT inspections are more and more automated, thanks to automated scanning systems, the inspector is still required to identify the presence of flaws and to perform the measurement of the features of interest. The following paragraphs review the most common signal processing techniques applied to pulsed thermography data. In the open literature these algorithms are usually applied and demonstrated on laboratory samples that contain artificial, simple-geometry flaws. In this paper, solid laminate samples that have been damaged by impact are used to investigate the capability of these algorithms for the development of robust and automated flaw detection and measurement. Information systems are
often poorly defined, creating difficulty in representing concepts and selecting the important features used to solve the problems. The type-1 (T1) fuzzy set (FS) has been around for more than four decades and yet is not able to handle all kinds of uncertainties appearing in real life. The above statement sounds paradoxical because the word fuzzy has the connotation of uncertainty. The extension of T1 fuzzy systems, in particular type-2 (T2), accommodates system uncertainties and considerably minimizes their effect in decision making. However, the T2 FS is difficult to understand and explain. The application of T1 fuzzy logic to rule-based systems is its most significant use, demonstrating its importance as a powerful design methodology for tackling uncertainties. A fuzzy logic system (FLS) described completely in terms of T1 fuzzy sets is called a type-1 fuzzy logic system (T1FLS), whereas an FLS with at least one T2 fuzzy set is called a type-2 fuzzy logic system (T2FLS). T1FLSs cannot directly handle rule uncertainties because T1 fuzzy sets are certain. On the other hand, a T2FLS is very useful in circumstances where it is difficult to determine an exact membership function of a fuzzy set; such cases involve rule uncertainties and measurement uncertainties. Like T1FLSs, T2 systems have wide applications, and the potential of T2 systems outperforms T1 in most cases. The aim of this paper is to describe T2 fuzzy systems for managing uncertainties, to identify the frontier research areas where T2 fuzzy logic is applied, and to propose an algorithm applying type-2 fuzzy sets to color image segmentation.
II. THERMAL CONTRASTS
The most basic data processing performed on pulsed thermographic data is the computation of thermal contrasts. Thermal contrasts have the advantage of being less sensitive to noise and to the surface optical properties [1]. The main problem with thermal contrast computation is that it requires a priori knowledge of a sound area, although more recently new contrast methods have been developed to overcome this problem [2-4].
III. PULSED PHASE THERMOGRAPHY
Pulsed phase thermography (PPT) is a processing method in which the thermal images are transformed from the time domain to the frequency domain [5]. This can be performed by processing a sequence of thermal images (thermogram) with the discrete Fourier transform (DFT):

F_n = Δt Σ_{k=0}^{N−1} T(kΔt) exp(−i2πnk/N) = Re_n + i Im_n ……(1)

where n designates the frequency increments (n = 0, 1, …, N−1), and Re and Im are the real and imaginary parts of the DFT, respectively. For convenience, the fast Fourier transform (FFT), a computationally efficient version of the DFT, is generally used. Once the data has been converted into the Fourier domain, the phase (φ) and amplitude (A) images of the different frequencies can be calculated using:

φ_n = tan⁻¹(Im_n / Re_n),  A_n = √(Re_n² + Im_n²) ……(2)

The phase is particularly advantageous since it is less affected by environmental reflections, emissivity variations, non-uniform heating, surface geometry and orientation. The phase characteristics are very attractive not only for qualitative inspections but also for quantitative ones [6].
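The transformation of equations (1) and (2) can be sketched directly with NumPy's FFT; the thermogram shape and the random test sequence are illustrative assumptions:

```python
import numpy as np

def pulsed_phase_thermography(thermogram):
    """Phase and amplitude images per Eqs. (1)-(2): DFT along the time axis,
    then phi_n = atan(Im_n / Re_n) and A_n = sqrt(Re_n^2 + Im_n^2)."""
    F = np.fft.fft(thermogram, axis=0)   # one spectrum per pixel (FFT = fast DFT)
    phase = np.arctan2(F.imag, F.real)   # one phase image per frequency n
    amplitude = np.abs(F)                # one amplitude image per frequency n
    return phase, amplitude

# Example on a random thermogram of 64 frames of 4x4 pixels
rng = np.random.default_rng(0)
seq = rng.standard_normal((64, 4, 4))
phase, amplitude = pulsed_phase_thermography(seq)
```

Each frequency increment n yields one phase image and one amplitude image, matching the "phasegrams" and "ampligrams" discussed later in the experimental section.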
IV. THERMOGRAPHIC SIGNAL RECONSTRUCTION
Thermographic signal reconstruction (TSR) [8] is a processing technique that uses polynomial interpolation to increase the spatial and temporal resolution of a thermogram sequence while reducing the amount of data to be analyzed. TSR is based on the assumption that temperature profiles for non-defective areas follow the decay curve given by the one-dimensional solution of the Fourier diffusion equation for an ideal pulse uniformly applied to the surface of a semi-infinite body [9], which is given by:
T(t) = Q / (e √(πt)) ……(3)

where T(t) is the temperature evolution, Q is the energy applied at the surface and e is the thermal effusivity of the sample, defined as e = √(kρc), where k, ρ and c are the thermal conductivity, the mass density and the specific heat, respectively. Equation 3 may be rewritten in logarithmic form and expanded into a polynomial series:

ln(T) = a₀ + a₁ ln(t) + a₂ [ln(t)]² + … + a_n [ln(t)]ⁿ ……(4)
The noise reduction resulting from this polynomial interpolation [11, 12] enables the use of derivative processing to enhance the contrast created by the presence of defects. The first and second derivatives of the thermogram sequence provide information on the rate of temperature variation. These measurements are analogous to the relations between position, velocity and acceleration in mechanics: the original thermogram corresponds to the surface temperature of the inspected object (position), the first derivative gives information on the cooling rate of the surface temperature (velocity), and the second derivative provides information on the acceleration or deceleration of this cooling rate (acceleration).
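The polynomial fit of equation (4) and the log-log derivatives can be sketched as follows; the polynomial degree and the ideal test decay are assumptions chosen for illustration:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def tsr(thermogram, t, degree=5):
    """Thermographic signal reconstruction per Eq. (4): fit ln(T) with a
    polynomial in ln(t) for every pixel, then take the first and second
    derivatives of the smooth fit with respect to ln(t)."""
    nt, ny, nx = thermogram.shape
    x = np.log(t)
    y = np.log(thermogram.reshape(nt, -1))       # ln(T), one column per pixel
    c = P.polyfit(x, y, degree)                  # coefficients a_0 ... a_n
    shape = lambda a: a.T.reshape(nt, ny, nx)
    rec = shape(P.polyval(x, c))                 # reconstructed ln(T)
    d1 = shape(P.polyval(x, P.polyder(c)))       # first log-log derivative
    d2 = shape(P.polyval(x, P.polyder(c, 2)))    # second log-log derivative
    return rec, d1, d2

# For the ideal decay of Eq. (3), T = Q/(e*sqrt(pi*t)), the log-log slope
# is -1/2 everywhere and the second derivative is zero
t = np.linspace(0.05, 2.0, 60)
ideal = (1.0 / np.sqrt(np.pi * t))[:, None, None] * np.ones((1, 2, 2))
rec, d1, d2 = tsr(ideal, t)
```

Defective pixels deviate from the −1/2 slope, which is what the derivative images exploit to enhance flaw contrast.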
V. PRINCIPAL COMPONENT ANALYSIS
Principal component analysis (PCA) [13], also known as principal component thermography (PCT) [14], is an orthogonal linear transformation that transforms the thermogram sequence into a new coordinate system. The idea behind PCA is to remove possible correlation in the data by creating a new uncorrelated dataset called the principal components. It has been applied in thermal NDT for data reduction and flaw contrast enhancement. The algorithm is based on the decomposition of the thermogram into its principal components using singular value decomposition (SVD). The first step of the PCA algorithm is to reshape the three-dimensional thermogram into a two-dimensional array where the columns and rows contain the spatial and temporal information, respectively. Thus, the original thermogram T(x, y, t) becomes A(n, m) where n = Nx × Ny, m = Nt, Nx and Ny are the number of pixels per row and column of the IR camera, and Nt is the number of thermal images in the thermogram sequence. The two-dimensional array A is then adjusted by subtracting the mean along the time dimension, and decomposed into eigenvectors and eigenvalues:
A = U Γ Vᵀ ……(5)

where U and V are orthogonal matrices whose columns form the eigenvectors of AAᵀ and AᵀA respectively, and Γ is a diagonal matrix that contains the singular values of A. Since the thermal images in the thermogram are non-erratic and vary slowly in time, the principal temporal variations of the dataset are usually contained within the first few eigenvectors. The principal component images are formed by calculating the dot product of the eigenvector and the measured temperature.
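The reshape-center-decompose-project steps above can be sketched with NumPy's SVD; the number of retained components and the random test data are illustrative assumptions:

```python
import numpy as np

def pct(thermogram, n_components=3):
    """Principal component thermography per Eq. (5): reshape T(x, y, t) into
    a 2-D array, subtract the mean along time, decompose A = U Gamma V^T,
    and project the data onto the leading temporal eigenvectors."""
    nt, ny, nx = thermogram.shape
    A = thermogram.reshape(nt, ny * nx).T          # rows: pixels, columns: time
    A = A - A.mean(axis=1, keepdims=True)          # remove the temporal mean
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # A principal component image is the dot product of the data with a
    # temporal eigenvector (a row of Vt); the first few capture most variation
    comps = A @ Vt[:n_components].T                # shape (Ny*Nx, n_components)
    return comps.T.reshape(n_components, ny, nx)

rng = np.random.default_rng(1)
seq = rng.standard_normal((40, 6, 5))
images = pct(seq, n_components=3)
```

Because the singular values are sorted in decreasing order, each successive component image carries less of the dataset's temporal variation than the previous one.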
VI. MODELING UNCERTAINTY USING FUZZY LOGIC
Uncertainty appears in many forms, independent of the kind of fuzzy logic (FL) or of any other methodology one uses to handle it. Uncertainty arises in real life due to deficiency of information in various forms. One of the best sources for general discussions about uncertainty is found in. Two types of uncertainties exist, randomness and fuzziness, where probability theory is associated with the former and FS with the latter. Fuzziness (or vagueness) generally describes uncertainty resulting from the imprecise boundaries of fuzzy sets, nonspecificity connected with the sizes (cardinalities) of relevant sets, and strife (or discord), which expresses conflicts among the various sets of alternatives. T1 fuzzy sets are certain and not able to handle all kinds of uncertainties using a single membership value, which is crisp. An FLS needs some measure to capture uncertainties beyond just a single number. The extended FL, named T2FL, is able to handle uncertainties by modeling them and subsequently minimizing their effects. T2 fuzzy logic provides a measure of dispersion, fundamental to the design of systems that include linguistic or numerical uncertainties that translate into rules. The T2 fuzzy set is a natural framework for handling both randomness and fuzziness. It is the third dimension of the T2 membership function (MF) that allows us to evaluate the model uncertainties. A T2FLS has more design
degrees of freedom than a T1FLS because T2 fuzzy sets are described by more parameters compared to T1 fuzzy sets. Linguistic and random uncertainties are evaluated using the defuzzified and type-reduced outputs of the T2FLS. The type-reduced output can be interpreted as a measure of dispersion about the defuzzified output.
VII. SCOPE OF WORK
Image segmentation is one of the most difficult image processing tasks because the segmented images are not always precise but rather vague. In earlier works, image segmentation was applied to monochrome images and later to the red, green, blue (RGB) color space. Two main image segmentation techniques are described in the literature: region reconstruction, where the image plane is analyzed using a region-growing process, and color space analysis, where the color of each pixel is represented in the designated color space. Many
authors have tried to determine the best color space for specific color image segmentation problems; however, there does not exist a unique color space suitable for all segmentation problems. Computational complexity may increase significantly in the C (cyan), M (magenta), Y (yellow), K (key, i.e., black) (CMYK) color space in comparison with gray-scale image segmentation. Classically, the RGB color space has been chosen for color image segmentation, where a point in the image is defined by the color component levels of the corresponding R, G and B pixels. However, while the region-growing techniques tend to over-segment the images, the color space analysis methods are not robust to significant appearance changes because they do not include any spatial information. Fuzzy logic is considered an appropriate tool for image analysis, applicable to CMYK and particularly to gray-scale segmentation. Recently, fuzzy region-oriented techniques
and fuzzy entropy-based techniques have been applied to color image segmentation. The major concern of these techniques is spatial ambiguity among the pixels, representing inherent vagueness. However, there still remain some sources of uncertainty: the meanings of the words used, noisy measurements, and the data used to tune the parameters of T1 fuzzy sets, which may be noisy too. The newer concept of evidence theory allows tackling imprecision in model uncertainty used in pattern classification, and produces good results in segmentation, although this technique based on the CMYK model is not often used.
The amount of uncertainty is evaluated using the approach proposed by Klir, who generalizes the Shannon entropy to belief functions using two uncertainty measures, namely non-specificity and discord. A robust method using T2 fuzzy sets is another approach for handling uncertainty in image analysis. It can take into account three kinds of uncertainty, namely fuzziness, discord and nonspecificity. T2 fuzzy sets have grades of membership that are themselves fuzzy. Hence, the membership function of a T2 fuzzy set has three dimensions, and it is this new third dimension that makes it possible to model these uncertainties.
VIII. PRELIMINARIES OF TYPE-2 FUZZY SYSTEM
The term "fuzzy set" is general and includes T1 and T2 fuzzy sets (and even higher-type fuzzy sets). All fuzzy sets are characterized by MFs. A T1 fuzzy set is characterized by a two-dimensional MF, whereas a T2 fuzzy set is characterized by a three-dimensional MF. Consider, for example, the linguistic variable "speed". Different values of the variable, like "very high speed", "high speed" and "low speed", signify crisp values. One approach to using the 100 sets of two endpoints is to average the endpoint data and use the average values for the interval associated with "speed". A triangular (or other shape) MF is then constructed whose base endpoints (on the x-axis) are at the two average values and whose apex is midway between the two endpoints. The T1 triangular MF is represented in two dimensions and expressed mathematically in equation (1):

A = {(x, MF(x)) | x ∈ X} ……(1)
However, this MF completely ignores the uncertainties associated with the two endpoints. A second approach calculates the average values and the standard deviations for the two endpoints. This approach blurs the locations of the two endpoints along the x-axis. The triangles are now located so that their base endpoints can be anywhere in the intervals along the x-axis associated with the blurred average endpoints, which leads to a continuum of triangular MFs on the x-axis. Thus a whole bunch of triangles, all having the same apex point but different base points, is obtained, as shown in figure 2. Suppose there are exactly N such triangles; then at each value of x the MFs are MF1(x), MF2(x), …, MFN(x). A weight is assigned to each membership value, say wx1, wx2, …, wxN, representing the possibilities associated with each triangle at a particular value of x. The resulting T2 MF is expressed using (2):

{(x, {(MFi(x), wxi) | i = 1, …, N}) | x ∈ X} ……(2)

Another way to represent the membership value is {(x, MF(x, w)) | x ∈ X and w ∈ Jx}, where MF(x, w) is the three-dimensional T2 MF, shown in figure 1.
Fig. 1 3-D Representation of T2 FS "Speed"
Another way to visualize T2 fuzzy sets is to plot their footprint of uncertainty (FOU).
Fig. 2 Triangular MFs (base endpoints l and r) with Uncertain Intervals
IX. FOOTPRINT OF UNCERTAINTY
In T2, MF(x, w) can be represented in a two-dimensional x-w plane, consisting of only the permissible (sometimes called "admissible") values of x and w. This implies that x is defined over a range of values (its domain), say X, while w is defined over its range of values (its domain), say W. An example of the FOU for a Gaussian MF is shown in figure 3: the standard deviation of the MF is certain, while the mean, m, is uncertain and varies anywhere in the interval from m1 to m2. Uncertainty in the primary memberships of a T2 fuzzy set, Ã, consists of a bounded region called the footprint of uncertainty (FOU). The FOU is the union of all primary memberships (Jx), given in (3):

FOU(Ã) = ∪_{x∈X} Jx ……(3)
FOU focuses our attention on the uncertainties inherent in a specific T2 membership function, whose shape is a
direct consequence of the nature of the uncertainty. The region of FOU indicates that there is a distribution that
sits on top of it—the new third dimension of T2 fuzzy sets. Shape of the distribution depends on the specific
choice made for the secondary grades. When the secondary grade is equal to one, the resulting T2 fuzzy set is
called interval T2 fuzzy sets (IT2FS), representing uniform weighting (possibilities).
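The FOU of the Gaussian primary MF with an uncertain mean can be sketched by computing its upper and lower bounds; the interval [m1, m2] and the sample grid below are illustrative assumptions:

```python
import numpy as np

def gaussian_fou(x, m1, m2, sigma):
    """Upper and lower membership bounds of an interval T2 Gaussian MF with
    fixed standard deviation and mean uncertain in [m1, m2] (cf. Fig. 3)."""
    g = lambda xx, m: np.exp(-0.5 * ((xx - m) / sigma) ** 2)
    # Upper bound: the best case over all admissible means (1 inside [m1, m2])
    upper = np.where(x < m1, g(x, m1), np.where(x > m2, g(x, m2), 1.0))
    # Lower bound: the worst case, the smaller of the two extreme Gaussians
    lower = np.minimum(g(x, m1), g(x, m2))
    return upper, lower

x = np.linspace(-4.0, 8.0, 241)
upper, lower = gaussian_fou(x, m1=1.0, m2=3.0, sigma=1.0)
# The FOU is the band between `lower` and `upper`
```

For an IT2FS the secondary grade over this band is uniformly one, so the pair of bounding curves fully specifies the set.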
Fig. 3 FOU of Gaussian (primary) MF with Uncertain Mean
Fig. 4 FOU: (a) Gaussian MF with Uncertain Standard Deviation (b) Gaussian MF with Uncertain Mean (c) Sigmoidal MF with Inflection Uncertainties (d) Granulated Sigmoidal MF with Granulation Uncertainties.
X. TYPE-2 FUZZY SET ENTROPY
The process of obtaining the necessary information to perform segmentation leads to the correct selection of the regions of interest of the color image. The proposed work applies fuzzy set theory to evaluate the regions of interest with fixed accuracy. The fuzziness index [12] and entropy [13] provide measurements of the degree of uncertainty [14] of the segmentation process. To measure the fuzziness of images, a few formal definitions are discussed below. An ordinary fuzzy set A of the universe of discourse X is classically defined by its membership function µA(x): X → [0, 1], x ∈ X.
A point x for which µA(x) = 0.5 is called a crossover point of the fuzzy set A ⊆ X. The uncertainty is represented by the α-cut of fuzzy set A, whose membership function µαA(x): X → {0, 1} is defined in (4):

µαA(x) = 1 if µA(x) ≥ α
µαA(x) = 0 if µA(x) < α ……(4)

where α ∈ [0, 1] and x ∈ X.
The fuzziness index γ(A) of a fuzzy set A reflects the degree of ambiguity by measuring the distance d(A, A0.5) between A and its nearest crisp set A0.5 (α = 0.5), as described in (5):

γ(A) = 2 d(A, A0.5) / n^(1/p) ……(5)

A positive scalar p is introduced to keep γ(A) between zero and one, depending on the type of distance function used. In the proposed algorithm, an "n-cut" fuzzy set is described with the help of the α-cut, where n is the number of elements of the n-cut vector. This measure represents the area between the two membership functions µA(x) and µαA(x), described in (6):
γ(A) = (1/|Ω|) ∫_Ω |µA(x) − µA0.5(x)| dx ……(6)

where |Ω| represents the size of the set Ω (linear index values). In practice we can use the discrete formula given in (7):

γp(A) = [ (1/|X|) Σ_{x∈X} |µA(x) − µA0.5(x)|^p ]^(1/p) ……(7)
γp(A) is a monotonic function, where p ∈ [1, +∞] and |X| represents the cardinality of the set X.
The entropy of a fuzzy set A, denoted by H(A) (a monotonically increasing function), was first introduced by De Luca and Termini and is expressed in (8):

H(A) = (1/(n ln 2)) Σ_{x∈X} Sn(µA(x)) ……(8)

where Sn(µA(x)) = −µA(x) ln(µA(x)) − (1 − µA(x)) ln(1 − µA(x)).
In this work, we use the extension of the De Luca and Termini measure to discrete images, proposed by Pal [53]. The (linear) index of fuzziness of an M×N image subset A ⊆ X with L gray levels g ∈ [0, L−1] is defined in (9) and shown in figure 5:

γ(A) = (1/MN) Σ_{g=0}^{L−1} h(g) × [µu(g) − µl(g)] ……(9)
where h(g) represents the histogram of the image and the membership function µX(g) is bounded by µu(g) and µl(g). Entropies are used with T2 fuzzy sets in gray-scale image segmentation by extending the work proposed by Tizhoosh, who applied T2 fuzzy sets to gray-scale image thresholding and obtained good results even for very noisy images. He used interval T2 fuzzy sets with the FOU described below:

Upper limit: µu(x) = [µ(x)]^0.5
Lower limit: µl(x) = [µ(x)]^2
Fig. 5 Membership functions representing FOU
Here, for the CMYK color model, the same functions are used for image segmentation. To overcome the drawbacks of gray-scale imaging, various corrective measures are incorporated in the proposed algorithm.
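Equation (9), with the µ^0.5 and µ^2 bounds above, can be sketched in a few lines; the 8-bit gray-level range and the toy membership functions below are assumptions for illustration, not the memberships derived by the algorithm:

```python
import numpy as np

def ultrafuzziness(image, mu, L=256):
    """Linear index of ultrafuzziness (Eq. 9) of a gray-scale image under a
    primary membership function mu(g), g = 0..L-1, with interval T2 bounds
    mu_u(g) = mu(g)**0.5 and mu_l(g) = mu(g)**2."""
    h, _ = np.histogram(image, bins=L, range=(0, L))   # histogram h(g)
    band = mu ** 0.5 - mu ** 2                         # FOU width at each level
    return float((h * band).sum() / image.size)        # (1/MN) sum h(g)*band(g)

# A crisp membership (0 or 1) has an empty FOU, so its ultrafuzziness is 0;
# a maximally ambiguous membership (mu = 0.5) widens the FOU band.
g = np.arange(256, dtype=float)
mu_crisp = (g >= 128).astype(float)
mu_vague = np.full(256, 0.5)
rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(64, 64))
uf_crisp = ultrafuzziness(img, mu_crisp)
uf_vague = ultrafuzziness(img, mu_vague)
```

The measure therefore quantifies how much membership uncertainty the chosen fuzzy set leaves over the image, which is what the segmentation algorithm below exploits.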
XI. ALGORITHM FOR COLOR IMAGE SEGMENTATION
Begin
Step 1: Read the JPEG file to be segmented;
Step 2: Select the proper shape of the interval-based T2 fuzzy set MF as
    Ã = ∫_{x∈X} ∫_{u∈Jx} µA(x, u) / (x, u),  Jx ⊆ [0, 1];
Step 3: Fix the image size as an M × N matrix;
Step 4: Calculate h(g) for each color component of the color space; //for linear index calculation
Step 5: Calculate the n-cut of the total image; //for color pattern possibility matching with CMYK
Step 6: Initialize the position of the T2 MF;
Step 7: Shift the MF over the gray-level ranges;
Step 8: Map the picture colors into gray-scale format; //for contour detection
Step 9: Calculate the values of the MF bounds µu(g) and µl(g) (where µu(g), µl(g) ∈ µX(g));
Step 10: Compute the edges of the image based on contour formation;
Step 11: Compute the similarity matrix, say W, based on inverting contours;
Step 12: Find the mid, max and min of the fuzzy index;
Step 13: Compute the n-cut eigenvectors; //possible combination of colors for the input picture
Step 14: Threshold the image with the max value; //unwanted pixel elimination based on a median filter
Step 15: Mask the segmented image using the msk matrix defined by
    msk = [0 0 0 0 0
           0 1 1 1 0
           0 1 1 1 0
           0 1 1 1 0
           0 0 0 0 0];
Step 16: Apply median filtering on the segmented image to remove noise;
Step 17: Apply the region-merging process using the obtained classes of pixels; //segmented portions of the image are merged
Step 18: Smooth the image to reduce the number of connected components;
Step 19: Calculate the connected components; //n-cut eigenvectors
Step 20: Calculate the number of pixels in the final image;
End.
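Steps 14-16 of the algorithm can be sketched as follows. The paper does not spell out how the msk matrix is applied, so this sketch reads Step 15 as a morphological masking by msk's active 3x3 block; the toy image and the zero-padding choice are likewise our assumptions:

```python
import numpy as np

# The 5x5 msk matrix from Step 15; its active region is the central 3x3 block
MSK = np.array([[0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 0, 0]], dtype=bool)

def neighbourhood_filter(img, footprint, reducer):
    """Apply `reducer` (np.min -> erosion, np.median -> median filter) over
    the pixels selected by `footprint` centred on every position (zero pad)."""
    k = footprint.shape[0] // 2
    p = np.pad(img, k, mode='constant')
    h, w = img.shape
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(footprint.shape[0])
                        for j in range(footprint.shape[1])
                        if footprint[i, j]])
    return reducer(windows, axis=0)

# Step 14: threshold with the max value; Step 15: mask with msk;
# Step 16: 3x3 median filtering to remove noise
img = np.zeros((9, 9))
img[3:7, 3:7] = 1.0          # a compact segmented region
img[0, 0] = 1.0              # an isolated noise pixel
binary = (img >= 1.0).astype(np.uint8)
masked = neighbourhood_filter(binary, MSK, np.min)
clean = neighbourhood_filter(binary, np.ones((3, 3), dtype=bool), np.median)
```

Both operations suppress the isolated pixel while preserving the interior of the compact region, which matches the stated purpose of Steps 15-16 (unwanted pixel elimination).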
XII. EXPERIMENTAL RESULTS
Fig 6: A photograph of an impacted sample.
In most papers available in the open literature, signal processing is applied to simple-geometry simulated flaws such as square Teflon® inserts or flat-bottom holes. Examples of typical results obtained by processing PT inspection results of a composite sample that contains flat-bottom holes are shown in Figure 7.
Fig 7: Typical images of (a) amplitude, (b) phase, (c) 1st principal component, (d) 2nd
principal component (e) first derivative and (f) second
derivative of a sample containing flat bottom holes.
As seen in the images presented in Figure 7, typically, except for the edge effect [18], the flaws exhibit a uniform feature. The reason is that artificial flaws used in laboratory samples are usually at one specific depth. The
reality is that a flaw can occur at different depths. For example an impact can cause fiber and matrix breakage
resulting in a surface dent while creating delamination damage in several deeper layers. In addition, real
components might be painted, be covered by a label or have information written on them that can affect the
thermogram and make automated detection challenging. Examples of processed pulsed thermography results of
impact-induced damage are presented in Figures 8 and 9.
Fig 8: Pulsed thermography images of different samples containing impact damage, paint lines, sticker and marker writings.
Fig 9: First four non-null images from (a-d) amplitude; (e-h) phase; and (i-l) principal components; and examples of 1st (m-p) and 2nd (q-t)
derivative images of the same sample.
Although simply applying a threshold on the data obtained on laboratory samples can yield good results for
automated detection, it lacks the robustness required to deal with real damage. Besides, due to the edge effect it
can either underestimate or overestimate the flaw size. In addition, real damage does not behave like simulated
flaws and sizing techniques such as the full-width half-maximum cannot be applied easily to estimate the flaw
size. As can be seen in Figure 9, amplitude, phase and principal component images provide different
information and flaw sizes. One of the challenges is to select the proper images to accurately estimate the flaw
size. The amplitude, phase and principal component images tend to show the flaw within the first few images;
while derivative processing algorithms require browsing through more images in the sequence to see the entire
flaw. Therefore the former techniques were selected to develop the proposed algorithm. The first step of the
algorithm proposed is to compute the amplitude, phase and principal component images. Then, the marking and
writing are automatically detected and removed from the images by creating a mask. For the samples used, it
was found that the writing of the last amplitude image had values less than two times the standard deviation
(STD) compared to the average of the panel.
Then the selected images: amplitude images 2 to 6, phase images 2 to 10, and PCA images 1 to 6 were
further processed. For each of the selected amplitude images, each pixel that had a value of one STD above the
average was considered being part of a flaw. Similarly, for the selected phase and PCA images each pixel that
had a value of one STD above or below the average of the sample was considered to be a flaw. This resulted in several binary images that were summed to give a "flaw likelihood" image, as shown in Figure 10(a). After summation, each pixel that had a value greater than 4 was considered a flaw. An example of a binary image obtained after applying the threshold to the flaw likelihood image is presented in Figure 10(b). A series of morphological filters was then applied to remove what is considered to be noise and to fill holes, in such a way that only the large connected components of the binary image were kept, as shown in Figure 11(a). The
agglomeration of pixels corresponding to the damage area was identified to be the one which had the highest
pixel value in the second amplitude image (Figure 11 (b)).
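The vote-and-threshold fusion described in this paragraph can be sketched as follows; the stack sizes and the single strongly deviating pixel are fabricated for illustration, and the upstream selection of images (amplitude 2-6, phase 2-10, PCA 1-6) is assumed to have been done already:

```python
import numpy as np

def flaw_likelihood(amplitude, phase, pca, vote_threshold=4):
    """Sketch of the fusion described above: each selected image votes for
    pixels deviating from the sample average by more than one standard
    deviation; the votes are summed into a 'flaw likelihood' image, which is
    then thresholded (more than `vote_threshold` votes => flaw)."""
    votes = np.zeros(amplitude.shape[1:], dtype=int)
    for img in amplitude:                     # amplitude: 1 STD above the mean
        votes += img > img.mean() + img.std()
    for img in np.concatenate([phase, pca]):  # phase/PCA: 1 STD above or below
        votes += np.abs(img - img.mean()) > img.std()
    return votes, votes > vote_threshold

# Toy stacks with one strongly deviating pixel at (2, 2)
amp = np.zeros((5, 8, 8)); amp[:, 2, 2] = 10.0
ph = np.zeros((9, 8, 8));  ph[:, 2, 2] = 10.0
pc = np.zeros((6, 8, 8));  pc[:, 2, 2] = 10.0
votes, flaw = flaw_likelihood(amp, ph, pc)
```

Summing binary votes across independently processed images is what gives the method its robustness: no single image has to show the whole flaw for it to be detected.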
Fig 10: An example of a (a) damage likelihood image (b) flaw image after applying threshold.
Figure 11: An example of a (a) flaw image after applying morphological operations (b) flaw image after identifying the image blob
corresponding to the damage area.
The inspection results from both the front and the back of the samples were processed using this algorithm. In each case the algorithm successfully identified the damaged area, and the false-call rate was zero. The final flaw image obtained (Figure 11(b)) was used to calculate the flaw size and area. These measurements were similar to those obtained by an inspector measuring the flaw using the regular processing techniques presented in section 2. The measurements were then compared to those obtained by pulse-echo ultrasonics. It was found that on average the thermography measurements underestimate the flaw width and length by 15% and 10%, respectively. Since the measurements obtained by the automated algorithm are similar to those obtained by an inspector, the differences may be caused by the sample geometries and material characteristics. Further trials of the algorithm on different sample geometries should be carried out to better explain these differences. Although the process of creating this image is more computation-intensive than the other techniques used individually, it simplifies an entire thermogram sequence into a single image. Pulsed phase thermography and PCA significantly reduce the number of images to analyze, but combining these techniques reduces it even further.
To test the performance of the algorithm, the samples (figure 12 onwards) were taken in JPEG image format with a size of 163×147 pixels in RGB color mode. Results of the execution of the algorithm are shown from figure 12 onwards.
Fig. 12 Original image
Fig. 13 Segmented Image
Fig. 14 Contour Detection
Fig. 15 n-cut eigenvectors
Fig. 16 Complement of the Enhanced Image
A. Screenshot of MATLAB
XIII. CONCLUSION
An algorithm was developed for automated flaw detection and measurement in pulsed thermography inspection data. The algorithm is based on the combination of information from the typical signal processing techniques used in pulsed thermography. The statistical divergence of pixel values from the sample average allowed the damaged area to be identified. The algorithm provided measurements similar to those made by a human inspector.
Moreover, the image obtained during processing was easier for the inspector to interpret, showing the
damaged area with stronger contrast than the other processing techniques used individually.
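The statistical-divergence flagging mentioned above can be illustrated with a simple threshold on the number of standard deviations a pixel lies from the image statistics. The image and the 3-sigma threshold below are illustrative, not the paper's exact parameters:

```python
import numpy as np

def flag_flaws(image, n_sigma=3.0):
    """Mark pixels whose values diverge from the image statistics.

    A pixel is flagged as damaged when it lies more than n_sigma
    standard deviations from the mean of the whole image.
    """
    mu, sigma = image.mean(), image.std()
    return np.abs(image - mu) > n_sigma * sigma   # boolean flaw mask

img = np.zeros((8, 8))
img[4, 4] = 50.0                                  # one strongly divergent pixel
mask = flag_flaws(img)
print(mask.sum(), mask[4, 4])                     # 1 True
```

In practice the statistics would be computed from a known sound area of the sample rather than the full image, so that large flaws do not bias the mean and standard deviation.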
Medical image registration has also been an important area of research in the medical application of computer
vision techniques for the past several years. It can be defined as the task of finding the transformation that
optimally superimposes features from one imaging study onto those of another. The rules were applied to
predict the transient flow and pressure distributions in the brain vasculature, comprising a patient-specific circle
of Willis geometry and fractal models of peripheral vascular networks. The rules were shown to efficiently
provide detailed descriptions of the flow and pressure distributions at different levels of blood vessel size, and
to simulate the variations of blood flow in the major cerebral arteries when the peripheral vasculatures are
subjected to various physiological and pathological conditions. To improve the prediction, the mechanisms of
active regulation of blood flow need to be defined and implemented in future model development.
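The definition of registration as "finding the transformation that optimally superimposes" one study onto another can be made concrete with the simplest possible case: an exhaustive search over integer translations that minimizes the sum of squared differences (SSD). The images and the ±3-pixel search window below are illustrative; real registration methods use richer transformations and similarity measures.

```python
import numpy as np

def best_translation(fixed, moving, max_shift=3):
    """Find the integer (dy, dx) shift of `moving` that minimizes SSD
    against `fixed`, searched exhaustively within +/- max_shift pixels."""
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            ssd = np.sum((fixed - shifted) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

fixed = np.zeros((16, 16))
fixed[6:10, 6:10] = 1.0                           # a square "feature"
moving = np.roll(fixed, (-2, 1), axis=(0, 1))     # misaligned copy
print(best_translation(fixed, moving))            # (2, -1)
```

Recovering the shift (2, -1) exactly undoes the misalignment, i.e. it is the transformation that optimally superimposes the moving image onto the fixed one.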
REFERENCES
[1] Xavier Maldague, "Theory and Practice of Infrared Technology for Nondestructive Testing", John Wiley & Sons Inc., New York,
NY, 684 pp., 2001.
[2] D. Gonzalez, Clemente Ibarra-Castanedo, F. J. Madruga and Xavier Maldague, "Differentiated Absolute Phase Contrast Algorithm
for the Analysis of Pulsed Thermographic Sequences", Infrared Physics and Technology, Vol. 48, Issue 1, pp. 16-21, 2006.
[3] Mirela Susa, Hernan Dario Benitez, Clemente Ibarra-Castanedo, Humberto Loaiza, Abdel Hakim Bendada and Xavier Maldague,
"Phase contrast using differential absolute contrast method", QIRT (Quantitative InfraRed Thermography) Journal, Vol. 3, No. 2, pp.
219-230, 2006.
[4] Hernan Dario Benitez, Clemente Ibarra-Castanedo, Abdel Hakim Bendada, Xavier Maldague, Humberto Loaiza and Eduardo
Caicedo, "Modified Differential Absolute Contrast Using Thermal Quadrupoles for the Nondestructive Testing of Finite Thickness
Specimens by Infrared Thermography", in CCECE 2006 - Canadian Conference on Electrical and Computer Engineering, Ottawa
(Ontario), Canada, May 7-10, 2006.
[5] X. Maldague and S. Marinetti, "Pulse Phase Infrared Thermography", J. Appl. Phys., Vol. 79, pp. 2694-2698, 1996.
[6] C. Ibarra-Castanedo, "Quantitative subsurface defect evaluation by pulsed phase thermography: depth retrieval with the phase",
Ph.D. thesis, Laval University, 2005, http://www.theses.ulaval.ca/2005/23016/23016.pdf.
[7] C. Ibarra-Castanedo and X. Maldague, "Interactive methodology for optimized defect characterization by quantitative pulsed phase
thermography", Research in Nondestructive Evaluation, Vol. 16, No. 4, pp. 1-19, 2005.
[8] S. M. Shepard, "Advances in Pulsed Thermography", Andres E. Rozlosnik, Ralph B. Dinwiddie (eds.), Proc. SPIE, Thermosense
XXIII, Vol. 4360, pp. 511-515, 2001.
[9] H. S. Carslaw and J. C. Jaeger, "Conduction of Heat in Solids", 2nd edition, Clarendon Press, Oxford, 1959.
[10] R. E. Martin, A. L. Gyekenyesi and S. M. Shepard, "Interpreting the Results of Pulsed Thermography Data", Materials Evaluation,
Vol. 61, No. 5, pp. 611-616, 2003.
[11] S. M. Shepard, "Temporal Noise Reduction, Compression and Analysis of Thermographic Image Data Sequences", US Patent
6,516,084, February 2003.
[12] S. M. Shepard, T. Ahmed, B. A. Rubadeux, D. Wang and J. R. Lhota, "Synthetic Processing of Pulsed Thermography Data for
Inspection of Turbine Components", Insight, Vol. 43, No. 9, September 2001.
[13] S. Marinetti, E. Grinzato, P. G. Bison, E. Bozzi, M. Chimenti, G. Pieri and O. Salvetti, "Statistical analysis of IR thermographic
sequences by PCA", Infrared Physics and Technology, Vol. 46, pp. 85-91, 2004.
[14] N. Rajic, "Principal component thermography for flaw contrast enhancement and flaw depth characterisation in composite
structures", Composite Structures, Vol. 58, pp. 521-528, 2002.
[15] J. N. Zalameda, P. A. Howell and W. P. Winfree, "Compression Technique Computational Performance", Proceedings of SPIE
Thermosense XXVII, Vol. 5782, pp. 399-406, 2005.
[16] K. E. Cramer and W. P. Winfree, "The Application of Principal Component Analysis Using Fixed Eigenvectors to the Infrared
Thermographic Inspection of the Space Shuttle Thermal.
[17] Fred L. Bookstein, "Principal warps: Thin-plate splines and the decomposition of deformations", IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. 11, No. 6, pp. 567-585, 1989.
[18] F. Brezzi and M. Fortin, "Mixed and Hybrid Finite Element Methods", Springer, 1991.
[19] Morten Bro-Nielsen, "Medical image registration and surgery simulation", Ph.D. thesis, IMM, Technical University of Denmark,
1996.
[20] Chaim Broit, "Optimal registration of deformed images", Ph.D. thesis, Computer and Information Science, University of
Pennsylvania, 1981.
[21] Gary E. Christensen and H. J. Johnson, "Consistent image registration", IEEE Transactions on Medical Imaging, Vol. 20, No. 7, pp.
568-582, 2001.
[22] Gary Edward Christensen, "Deformable shape models for anatomy", Ph.D. thesis, Sever Institute of Technology, Washington
University, 1994.
[23] A. Collignon, A. Vandermeulen, P. Suetens and G. Marchal, "3D multimodality medical image registration based on information
theory", Kluwer Academic Publishers: Computational Imaging and Vision, Vol. 3, pp. 263-274, 1995.
[24] R. Courant and David Hilbert, "Methods of Mathematical Physics", Vol. II, Wiley, New York, 1962.
[25] Bernd Fischer and Jan Modersitzki, "Fast inversion of matrices arising in image processing", Numerical Algorithms, Vol. 22, pp.
1-11, 1999.
[26] Bernd Fischer and Jan Modersitzki, "Fast diffusion registration", AMS Contemporary Mathematics, Inverse Problems, Image
Analysis, and Medical Imaging, Vol. 313, pp. 117-129, 2002.
[27] Bernd Fischer and Jan Modersitzki, "Combination of automatic non-rigid and landmark based registration: the best of both
worlds", Medical Imaging 2003: Image Processing (M. Sonka and J. M. Fitzpatrick, eds.), Proceedings of the SPIE 5032, pp.
1037-1048, 2003.