Microarray technology allows the simultaneous monitoring of thousands of genes in parallel. Based on such measurements, microarray technology has proven powerful in gene expression profiling, both for discovering new types of diseases and for predicting the type of a disease. Gridding, intensity extraction, enhancement and segmentation are important steps in microarray image analysis. This paper presents a simple linear iterative clustering (SLIC) based self-organizing map (SOM) algorithm for microarray image segmentation. Clusters of pixels that share similar features are called superpixels; they can be used as mid-level units to reduce the computational cost of many vision applications. The proposed algorithm uses superpixels rather than individual pixels as the clustering objects. Qualitative and quantitative analysis shows that the proposed method produces better segmentation quality than k-means, fuzzy c-means and self-organizing-map clustering.
Enhancement of genetic image watermarking robust against cropping attack (ijfcstjournal)
An enhancement of an image watermarking algorithm, made robust against a particular attack by means of a genetic algorithm, is presented here. There is a trade-off between imperceptibility and robustness in image watermarking; to preserve both characteristics at reasonable levels, a genetic algorithm is used. Several factors are introduced to provide robustness against the cropping attack: the Centre of Interest Proximity Factor (CIPF), the Complexity Factor (CF) and the Priority Coefficient (PC).
Analysis and Detection of Image Forgery Methodologies (ijsrd.com)
"Forgery" is a subjective word: an image becomes a forgery depending on the context in which it is used. An image altered for fun, or a bad photograph retouched to improve its appearance, cannot be considered a forgery even though it has been changed from its original capture. On the other side are those who perpetrate forgery for gain and prestige: they create an image intended to dupe the recipient into believing it is real, and from this they gain payment and fame. Detecting such forgeries has become a serious problem. Determining whether a digital image is original or doctored, and finding the marks of tampering, is a challenging task. Tampering may involve operations such as rotation, scaling, JPEG compression and Gaussian noise, collectively called attacks. Various methods have been proposed in recent years to detect these attacks. This paper provides a detailed analysis of the different approaches and methodologies used to detect image forgery. The analysis shows that block-based feature methods are robust to Gaussian noise and JPEG compression, while keypoint-based feature methods are robust to rotation and scaling.
Image Quality Feature Based Detection Algorithm for Forgery in Images (ijcga)
Verifying the authenticity and integrity of images is a serious research issue. Various techniques exist for creating forged images with various intentions. In this paper, an attempt is made to verify the authenticity of an image using image quality features such as Markov and moment-based features, which are found to give their best results for forgeries involving splicing.
FAN search for image copy-move forgery, AMALTA 2014 (Sondos Fadl)
The proposed Fan Search (FS) algorithm starts once a duplicated block is detected. Instead of exhaustively searching all blocks, the blocks near the detected block are examined first, in a spiral order.
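The spiral examination order can be sketched as follows. The generator below is an illustrative reconstruction, not the paper's exact procedure: the function name and the clockwise ring walk are assumptions. It yields block offsets ring by ring, so the nearest neighbours of a detected block are tried first.

```python
def spiral_offsets(radius):
    """Yield (dy, dx) offsets around a centre block, ring by ring in
    expanding (spiral) order, so nearer blocks are examined first."""
    yield (0, 0)
    for r in range(1, radius + 1):
        # walk the square ring at Chebyshev distance r, clockwise
        for dx in range(-r, r + 1):           # top edge
            yield (-r, dx)
        for dy in range(-r + 1, r + 1):       # right edge
            yield (dy, r)
        for dx in range(r - 1, -r - 1, -1):   # bottom edge
            yield (r, dx)
        for dy in range(r - 1, -r, -1):       # left edge
            yield (dy, -r)
```

A detector would add each offset to the coordinates of the detected block and test the resulting neighbour, stopping early once matches stop appearing, which is what avoids the exhaustive search.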
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD (editor ijcres)
AKHILESH KUMAR YADAV, DEENBANDHU SINGH, VIVEK KUMAR
Department of Computer Science and Engineering
Babu Banarasi Das University, Lucknow
akhi2232232@gmail.com, deenbandhusingh85@gmail.com, vivek.kumar0091@gmail.com
ABSTRACT- Digital images can be easily modified using powerful image-editing software. The topic of this paper is distinguishing innocent manipulations, such as sharpening, from malicious ones, such as removing or adding parts of an image. We focus on detecting a special type of forgery, the copy-move forgery, in which part of the original image is copied, moved to a desired location in the same image, and pasted. The proposed method compresses the image using the discrete wavelet transform (DWT), divides it into blocks, computes a feature vector for each block, sorts the vectors lexicographically, and identifies duplicated blocks after sorting. The method withstands manipulations/attacks such as scaling, rotation, Gaussian noise, smoothing and JPEG compression.
INDEX TERMS- Copy-Move forgery, Wavelet Transform, Lexicographical Sorting, Region Duplication Detection.
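The block-matching pipeline described above can be sketched as follows. This is a minimal illustration that omits the DWT compression step; the function name and the use of raw pixel values as the per-block feature vector are assumptions, not the paper's exact method.

```python
import numpy as np

def detect_copy_move(img, block=8):
    """Naive block-based copy-move detection: slide fixed-size blocks
    over the image, use the raw pixels as each block's feature vector,
    sort the vectors lexicographically, and flag identical adjacent
    entries (whose positions are far enough apart) as duplicated."""
    h, w = img.shape
    feats = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            feats.append((img[y:y + block, x:x + block].ravel().tolist(), (y, x)))
    feats.sort(key=lambda t: t[0])          # lexicographical sort of feature vectors
    pairs = []
    for (f1, p1), (f2, p2) in zip(feats, feats[1:]):
        # identical features that are not trivially overlapping neighbours
        if f1 == f2 and abs(p1[0] - p2[0]) + abs(p1[1] - p2[1]) > block:
            pairs.append((p1, p2))
    return pairs
```

In the paper's formulation the feature vector would come from DWT-compressed blocks, which shrinks the sorting problem and adds tolerance to noise and recompression; the sort-then-compare-neighbours structure stays the same.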
Copy-Rotate-Move Forgery Detection Based on Spatial Domain (Sondos Fadl)
We propose an efficient and fast method for detecting copy-move regions, even when the copied region has undergone rotation, operating in the spatial domain.
Region duplication forgery detection in digital images (Rupesh Ambatwad)
Region duplication, or copy-move forgery, is a common type of tampering carried out to create a fake image. The field of blind image forensics is concerned with establishing the authenticity of digital images. Since in copy-move forgery the duplicated region comes from the same image, detecting the tampering is difficult because it leaves no visual clue; the tampering does, however, give rise to glitches at the pixel level.
A Robust Method for Moving Object Detection Using Modified Statistical Mean M... (ijait)
Moving object detection is a low-level but important task for any visual surveillance system. One aim of this paper is to describe various approaches to moving object detection, such as background subtraction and temporal differencing, together with the pros and cons of these techniques. A statistical mean technique [10] has been used to overcome the problems of earlier techniques, but even the statistical mean method suffers from superfluous effects of foreground objects. The method presented here tries to overcome this effect while also reducing the computational complexity to some extent. A robust algorithm for automatic noise detection and removal from moving objects in video sequences is presented; the algorithm assumes static camera parameters.
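A minimal sketch of the statistical-mean idea, assuming a static camera as the paper does: maintain a running mean background model and flag pixels of the current frame that deviate strongly from it. The function name, the exponential-mean update rule and the threshold value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def moving_object_mask(frames, alpha=0.05, thresh=25):
    """Statistical-mean background subtraction: keep a running mean
    background over the frame sequence, then mark pixels of the final
    frame whose deviation from the background exceeds `thresh`."""
    bg = frames[0].astype(float)
    for f in frames[1:]:
        bg = (1 - alpha) * bg + alpha * f.astype(float)   # update background model
    return np.abs(frames[-1].astype(float) - bg) > thresh  # foreground mask
```

The small `alpha` makes a briefly-present object contribute little to the background, which is what limits the "superfluous effects of foreground objects" the abstract mentions.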
Copy Move Forgery Detection Using GLCM Based Statistical Features (ijcisjournal)
Gray Level Co-occurrence Matrix (GLCM) features have mostly been explored in face recognition and CBIR; here the GLCM technique is explored for copy-move forgery detection. GLCMs are extracted from all the images in the database, and statistics such as contrast, correlation, homogeneity and energy are derived. These statistics form the feature vector. A Support Vector Machine (SVM) is trained on these features, and the authenticity of an image is decided by the SVM classifier. The proposed work is evaluated on the CoMoFoD database; in all, 1200 forged and processed images are tested, and the performance is compared with recent methods.
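The four GLCM statistics named above can be computed from a single-offset co-occurrence matrix as sketched below in pure NumPy. The quantization scheme and the horizontal offset are illustrative assumptions; the paper does not specify them.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Build a horizontal-offset GLCM from a non-negative integer image,
    then derive contrast, correlation, homogeneity and energy."""
    q = (img.astype(float) * levels / (img.max() + 1)).astype(int)  # quantize grey levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):  # co-occurrence at offset (0, 1)
        glcm[a, b] += 1
    p = glcm / glcm.sum()                                  # joint probability
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    contrast = ((i - j) ** 2 * p).sum()
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12)
    homogeneity = (p / (1 + np.abs(i - j))).sum()
    energy = np.sqrt((p ** 2).sum())
    return contrast, correlation, homogeneity, energy
```

An SVM would then be trained on these four-element vectors, as the abstract describes.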
Comparison of Segmentation Algorithms and Estimation of Optimal Segmentation ... (Pinaki Ranjan Sarkar)
Recent advances in sensor technology allow very high spatial resolution along with multiple spectral bands. Many studies highlight that Object Based Image Analysis (OBIA) is more accurate than pixel-based classification for high-resolution (< 2 m) imagery. Image segmentation is a crucial step in OBIA, and estimating optimal segmentation parameters is a formidable task because the problem has no unique solution. In this paper, we study different segmentation algorithms (both mono-scale and multi-scale) for different terrain categories and show how the segmented output depends on various parameters. We then introduce a novel method to estimate optimal segmentation parameters. The main objectives of this study are to highlight the effectiveness of presently available segmentation techniques on very high-resolution satellite data and to automate the segmentation process; pre-estimating segmentation parameters is more practical and efficient in OBIA. The assessment of segmentation algorithms and the estimation of segmentation parameters are examined on very high-resolution multi-spectral WorldView-3 (0.3 m, PAN-sharpened) data.
Performance Analysis of CRT for Image Encryption (ijcisjournal)
With the fast advance of information technology, securing image data transmitted or stored over the internet has become very difficult. An effective way to hide the details is encryption, so that only authorized persons holding the keys can decrypt the image. Because of the inherent characteristics of digital images, such as high data capacity, large redundancy and strong similarity among pixels, conventional encryption algorithms such as AES, DES, 3DES and Blowfish are not well suited to real-time image encryption. This paper presents the performance of the Chinese Remainder Theorem (CRT) for image encryption, to secure the storage and transmission of images over the internet.
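The paper does not give its exact CRT construction, but the core mechanism such schemes rely on is the decomposition of each pixel into residues modulo pairwise-coprime numbers and the exact CRT recombination, sketched here. The moduli and function names are illustrative; a real scheme would derive the moduli or scramble the residue planes from a secret key.

```python
import numpy as np

MODULI = (17, 19)  # pairwise coprime; product 323 > 255, so every byte is recoverable

def crt_split(img):
    """Split each pixel value into its residues modulo the coprime moduli;
    the residue planes act as the cipher shares."""
    return [img % m for m in MODULI]

def crt_combine(shares):
    """Recombine residue planes exactly via the Chinese Remainder Theorem."""
    M = int(np.prod(MODULI))
    out = np.zeros_like(shares[0], dtype=int)
    for r, m in zip(shares, MODULI):
        Mi = M // m
        inv = pow(Mi, -1, m)                # modular inverse of Mi mod m (Python 3.8+)
        out = (out + r.astype(int) * Mi * inv) % M
    return out
```

Because the map from a value below the moduli product to its residue vector is a bijection, the round trip is lossless, which is what makes the approach usable for image data.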
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Face Recognition Using Neural Network Based Fourier Gabor Filters & Random Pr... (CSCJournals)
Face detection and recognition have many applications in fields such as authentication, security, video surveillance and human interaction systems. In this paper, we present a neural network system for face recognition. A feature vector based on Fourier Gabor filters is used as the input of our classifier, a back-propagation neural network (BPNN). Because the input vector of the network has a large dimension, we investigate random projection as a dimensionality reduction method to reduce its feature subspace. Theory and experiment indicate the robustness of our solution.
Automatic Image Segmentation Using Wavelet Transform Based On Normalized Gra... (IJMER)
Model-based image segmentation plays an important role in image analysis and image retrieval, and a model-based segmentation algorithm is more efficient for analyzing the features of an image. This paper proposes Automatic Image Segmentation using Wavelets (AISWT), based on the normalized graph cut method, to make segmentation simpler. The Discrete Wavelet Transform is used for segmentation because the approximation band contains significant information about the input image. A histogram-based algorithm is used to obtain the number of regions and the initial parameters such as mean, variance and mixing factor.
At the end of this lesson, you should be able to:
define segmentation;
describe edge-based segmentation;
describe thresholding and its properties;
apply edge detection and thresholding as segmentation techniques.
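As a concrete instance of thresholding as a segmentation technique, Otsu's method picks the global threshold that maximizes the between-class variance of the grey-level histogram. The sketch below is illustrative material, not part of the lesson itself.

```python
import numpy as np

def otsu_threshold(img):
    """Return the grey level t that maximizes between-class variance;
    pixels >= t form the foreground class. Expects non-negative ints < 256."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum_p = np.cumsum(prob)                     # class-0 weight up to each level
    cum_m = np.cumsum(prob * np.arange(256))    # class-0 mean numerator
    mu_total = cum_m[-1]
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_p[t - 1], 1 - cum_p[t - 1]
        if w0 == 0 or w1 == 0:
            continue                            # all pixels on one side: skip
        mu0 = cum_m[t - 1] / w0
        mu1 = (mu_total - cum_m[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal image the chosen threshold lands between the two modes, separating foreground from background with a single comparison per pixel.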
Presentation on deformable model for medical image segmentation (Subhash Basistha)
Introduction to Image Processing
Steps of Image Processing
Types of Image Processing
Introduction to Image Segmentation
Introduction to Medical Image Segmentation
Application of Image Segmentation
Example of Image Segmentation
Need for Deformable Model
What is a Deformable Model?
Types of Deformable Model
Noise Removal in Microarray Images Using Variational Mode Decomposition Techn... (TELKOMNIKA JOURNAL)
Microarray technology allows the simultaneous monitoring of thousands of genes in parallel. Based on gene expression measurements, microarray technology has proven powerful in gene expression profiling for discovering new types of diseases and for predicting the type of a disease. Enhancement, gridding, segmentation and intensity extraction are important steps in microarray image analysis. This paper presents a noise removal method for microarray images based on Variational Mode Decomposition (VMD). VMD is a signal processing method that decomposes an input signal into a discrete number of sub-signals (called variational mode functions, VMFs), each mode being compact around its bandwidth in the spectral domain. First, the noisy image is processed using 2-D VMD to produce 2-D VMFs. Then a Discrete Wavelet Transform (DWT) thresholding technique is applied to each VMF for denoising, and the denoised microarray image is reconstructed by summing the VMFs. This method is named the 2-D VMD and DWT thresholding method. It is compared with DWT thresholding and with BEMD and DWT thresholding; qualitative and quantitative analysis shows that the 2-D VMD and DWT thresholding method removes noise better than the other two methods.
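The per-mode DWT thresholding step can be illustrated in one dimension with a single-level Haar transform and soft shrinkage. The wavelet choice, level count and threshold rule here are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def haar_soft_denoise(x, thresh):
    """One-level Haar DWT soft-thresholding: transform, shrink the detail
    coefficients toward zero, and reconstruct. This is the 'DWT
    thresholding' step that would be applied to each mode function.
    Expects an even-length signal."""
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)        # low-pass (approximation)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)        # high-pass (detail)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0)  # soft shrink
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2)       # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

With a zero threshold the transform round-trips exactly; a large threshold removes all detail, leaving each sample pair at its mean, which is the smoothing effect exploited for denoising.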
Spot Edge Detection in cDNA Microarray Images using Window based Bi-Dimension... (idescitation)
Microarray technology increasingly plays a crucial role in medical and biological applications. Microarray technology was initiated by M. Schena et al. [1], and over the past few years microarrays have come into use in many fields, notably biomedicine (especially cancer and diabetes) and medical diagnosis. A deoxyribonucleic acid (DNA) microarray is a collection of microscopic DNA spots attached to a solid surface, such as a glass, plastic or silicon chip, forming an array. DNA microarray image analysis includes three tasks: gridding, segmentation and intensity extraction; at these stages, irregularities in spot shape and position lead to significant errors. This article presents a new spot edge detection method using window-based bi-dimensional empirical mode decomposition (WBEMD). By separating spots from the background, WBEMD decreases the probability of errors and gives more accurate information about the state of each spot. Using this method, even spots with low density can be identified, which improves the processing of cDNA microarray images.
An artificial neural network approach for detecting skin cancer (TELKOMNIKA JOURNAL)
This study aims to diagnose melanoma skin cancer at an early stage. It applies first-order, texture-based feature extraction in order to obtain a high degree of accuracy with an artificial neural network (ANN) classifier. The method comprises training and testing phases with a multilayer perceptron (MLP) neural network. Across the 4 training sets used, the accuracy on test images ranged from a lowest of 80% to a highest of 88.88% for images not suspected of melanoma and suspected melanoma images, respectively. The 4 training sets consisted of 23 images: 6 not suspected of melanoma and 17 suspected melanoma images.
COLOR IMAGE ENCRYPTION BASED ON MULTIPLE CHAOTIC SYSTEMS (IJNSA Journal)
This paper proposes a novel color image encryption scheme based on multiple chaotic systems. The ergodicity property of chaotic systems is utilized for the permutation process, and a substitution operation is applied to achieve the diffusion effect. In the permutation stage, the 3D color plain-image matrix is converted to a 2D image matrix, and two generalized Arnold maps are employed to generate hybrid chaotic sequences that depend on the plain-image's content. The generated chaotic sequences are then used to perform the permutation. The encryption key streams depend not only on the cipher keys but also on the plain-image, so the scheme can resist chosen-plaintext as well as known-plaintext attacks. In the diffusion stage, four pseudo-random gray-value sequences are generated by another generalized Arnold map and applied to the permuted image row-by-row or column-by-column via bit-XOR, improving the encryption rate. Security and performance analysis has been carried out, including key space, histogram, correlation, information entropy, key sensitivity and differential analysis. The experimental results show that the proposed scheme is highly secure thanks to its large key space and efficient permutation-substitution operation, and is therefore suitable for practical image and video encryption.
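The generalized Arnold map behind the permutation stage can be sketched as a position scramble on a square channel. This shows only the bare map; the paper's key-dependent and plaintext-dependent sequence generation is omitted, and the parameter names a, b are conventional rather than taken from the paper.

```python
import numpy as np

def arnold_permute(img, a=1, b=1, rounds=1):
    """Scramble pixel positions of a square image with the generalized
    Arnold cat map (x, y) -> (x + a*y, b*x + (a*b + 1)*y) mod N.
    The map matrix has determinant 1, so it is a bijection mod N."""
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx = (x + a * y) % n
        ny = (b * x + (a * b + 1) * y) % n
        scr = np.empty_like(out)
        scr[nx, ny] = out[x, y]              # move each pixel to its mapped position
        out = scr
    return out

def arnold_inverse(img, a=1, b=1, rounds=1):
    """Undo the scramble using the inverse matrix [[ab+1, -a], [-b, 1]] mod N."""
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx = ((a * b + 1) * x - a * y) % n
        ny = (-b * x + y) % n
        scr = np.empty_like(out)
        scr[nx, ny] = out[x, y]
        out = scr
    return out
```

Because the determinant is 1, the map is invertible modulo N for any integer a, b, which is why decryption can exactly reverse the permutation stage before undoing the XOR diffusion.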
Segmentation by Fusion of Self-Adaptive SFCM Cluster in Multi-Color Space Com... - CSCJournals
This paper proposes a new, simple, and efficient segmentation approach that could find diverse applications in pattern recognition as well as in computer vision, particularly in color image segmentation. First, we choose the best segmentation components among six different color spaces. Then, histogram and SFCM techniques are applied to initialize the segmentation. Finally, we fuse the segmentation results and merge similar regions. Extensive experiments have been conducted on the Berkeley image database using the proposed algorithm. The results show that, compared with some classical segmentation algorithms such as Mean-Shift, FCR, and CTM, our method yields reasonably good or better image partitioning, which illustrates its practical value.
APPLICATION OF IMAGE FUSION FOR ENHANCING THE QUALITY OF AN IMAGE - cscpconf
Advances in technology have brought about extensive research in the field of image fusion. Image fusion is one of the most researched challenges of Face Recognition. Face Recognition (FR) is the process by which the brain and mind understand, interpret, and identify or verify human faces. Image fusion is the combination of two or more source images, which vary in resolution, instrument modality, or image capture technique, into a single composite representation. The source images are thus complementary in many ways, with no one input image being an adequate representation of the scene. Therefore, the goal of an image fusion algorithm is to integrate the redundant and complementary information obtained from the source images to form a new image which provides a better description of the scene for human or machine perception. In this paper we propose a novel approach to pixel-level image fusion using PCA that removes the blurredness in two images and reconstructs a new de-blurred fused image. The proposed approach is based on the calculation of eigenfaces with Principal Component Analysis (PCA), the most widely used method for dimensionality reduction and feature extraction.
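The pixel-level PCA weighting idea can be sketched generically (an illustrative reconstruction, not the paper's exact algorithm): treat the two source images as two variables, and let the principal eigenvector of their 2x2 covariance matrix supply the fusion weights.

```python
import numpy as np

def pca_fuse(img1, img2):
    """Pixel-level PCA fusion: the fusion weights come from the principal
    eigenvector of the 2x2 covariance matrix of the two flattened images."""
    data = np.stack([img1.ravel(), img2.ravel()]).astype(float)
    cov = np.cov(data)                  # 2x2 covariance of the two sources
    vals, vecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
    pc = np.abs(vecs[:, -1])            # principal component direction
    w = pc / pc.sum()                   # normalize to weights summing to 1
    return w[0] * img1 + w[1] * img2
```

Because the weights are non-negative and sum to one, the fused image is a convex combination of the sources; the source with higher variance (more detail) receives the larger weight.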
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATION - VLSICS Design
Segmentation of tissues and structures from medical images is the first step in many image analysis applications developed for medical diagnosis. With the growing research on medical image segmentation, it is essential to categorize the research outcomes and provide researchers with an overview of the existing segmentation techniques for medical images. In this paper, different image segmentation methods applied to magnetic resonance brain images are reviewed. The selection of methods includes sources from image processing journals, conferences, books, dissertations, and theses. The conceptual details of the methods are explained, and mathematical details are avoided for simplicity. Both broad and detailed categorizations of the reviewed segmentation techniques are provided. The state-of-the-art research is presented with emphasis on the developed techniques and the image properties they use. The methods described are not always mutually independent, so their interrelationships are also stated. Finally, conclusions are drawn summarizing commonly used techniques and their complexities in application.
Segmentation and recognition of handwritten digit numeral string using a mult... - ijfcstjournal
In this paper, a Multi-Layer Perceptron (MLP) neural network model is proposed for recognizing unconstrained offline handwritten numeral strings. The numeral strings are segmented and isolated numerals are obtained using a connected component labeling (CCL) algorithm. The structural part of the models has been modeled with a multilayer perceptron neural network. This paper also presents a new technique to remove slope and slant from a handwritten numeral string, normalize the size of the text images, and classify them with supervised learning methods. Experimental results on a database of 102 numeral string patterns written by 3 different people show that a recognition rate of 99.7% is obtained on the independent digits contained in the numeral strings, including both skewed and slanted data.
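The connected-component step used to isolate individual digits can be sketched as below. This is a generic 4-connected BFS flood fill; the paper's exact CCL variant is not specified, so treat it as an illustrative stand-in.

```python
from collections import deque

def label_components(binary):
    """4-connected component labeling by BFS flood fill.
    `binary` is a list of rows of 0/1 values; returns (label grid, component count),
    where label 0 means background and each foreground blob gets a unique label."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not labels[i][j]:
                count += 1                       # found an unvisited blob
                labels[i][j] = count
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count
```

Each labeled component's bounding box would then be cropped, deskewed, and size-normalized before being fed to the MLP classifier.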
Combining Generative And Discriminative Classifiers For Semantic Automatic Im... - CSCJournals
The object image annotation problem is basically a classification problem, and there are many different modeling approaches to its solution. These approaches can be classified into two main categories: generative and discriminative. An ideal classifier should combine these two complementary approaches. In this paper, we present a method achieving this combination by using the discriminative power of neural networks and the generative nature of Bayesian networks. The evaluation of the proposed method on three typical image databases has shown some success in automatic image annotation.
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M... - IJTET Journal
The segmentation of blood vessels in the retina is an essential step in the diagnosis of diabetic retinopathy. In this paper, we present a new method for automatically segmenting blood vessels in retinal images. Two techniques for segmenting retinal blood vessels, based on different image processing techniques, are described, and their strengths and weaknesses are compared. The method uses a neural network (NN) scheme for pixel classification and gray-level and moment-invariant-based features for pixel representation. The performance of each algorithm was tested on the STARE and DRIVE datasets, which are widely used for this purpose since they contain retinal images and their vascular structures. Performance on both sets of test images is better than that of other existing methods, and the method proves especially accurate for vessel detection in STARE images. Its effectiveness and robustness under different image conditions, together with its simplicity and fast implementation, make it suitable for the early detection of diabetic retinopathy (DR).
Similar to SLIC Superpixel Based Self Organizing Maps Algorithm for Segmentation of Microarray Images
Super Capacitor Electronic Circuit Design for Wireless Charging - IJAAS Team
Taking time as the base, a gadget has been proposed in which electrical accessories such as mobile phones are charged within a fraction of the usual time, which is highly efficient compared to present chargers that take nearly two hours for a full charge. The objective of this project is to create a circuit that charges quickly and wirelessly. The wireless charging circuit works on the principle of inductive coupling. AC energy is converted to DC energy through a diode rectifier. An oscillator circuit produces a high frequency, passed to a transmitter circuit that transmits a magnetic field received by the receiver circuit. A wireless charging concept with a supercapacitor leads to faster charging and a long operating life. Here the supercapacitor is used as the storage device. A supercapacitor has a remarkable property: it can charge as well as discharge very quickly and linearly, unlike a battery. The main difference between a battery and a supercapacitor is specific energy, which for a supercapacitor is 10-50 times less than for a battery.
On the High Dimentional Information Processing in Quaternionic Domain and its... - IJAAS Team
There are various high-dimensional engineering and scientific applications in communication, control, robotics, computer vision, biometrics, etc., where researchers face the problem of designing an intelligent and robust neural system that can process higher-dimensional information efficiently. Conventional real-valued neural networks have been tried on problems with high-dimensional parameters, but the required network structures possess high complexity, are very time consuming, and are weak to noise. These networks are also unable to learn magnitude and phase values simultaneously in space. A quaternion is a number that possesses magnitude in all four directions, with the phase information embedded within it. This paper presents a well-generalized learning machine with a quaternionic-domain neural network that can finely process the magnitude and phase information of high-dimensional data without any hassle. The learning and generalization capability of the proposed learning machine is presented through a wide spectrum of simulations which demonstrate the significance of the work.
Using FPGA Design and HIL Algorithm Simulation to Control Visual Servoing - IJAAS Team
This research paper provides an optimal solution for object tracking using a visual servoing control system with field-programmable gate array technology to realize the visual controller. The controller takes into account the robot dynamics to generate the joint torques directly for performing object-tracking tasks using visual servoing. The notion of dynamic perceptibility captures the capability of the designed system to track desired objects employing the direct visual servoing technique; this idea is assimilated in the suggested controller and realized in the programmable gate array. Additionally, this paper presents an ideal control framework for direct visual servoing robots that incorporates dynamic perceptibility features. With the aim of evaluating the proposed FPGA-based architecture, the control algorithm is applied to a Hardware-in-the-Loop (HIL) simulation setup of a three-degrees-of-freedom rigid robotic manipulator with three links. Furthermore, different investigations are performed to demonstrate the behavior of the proposed system when a trajectory adjacent to a singularity is attained.
Mitigation of Selfish Node Attacks In Autoconfiguration of MANETs - IJAAS Team
Mobile ad-hoc networks (MANETs) are composed of mobile nodes connected by wireless links without any pre-existing infrastructure, which makes assigning a unique IP address to an incoming node difficult. Various dynamic autoconfiguration protocols are available to assign IP addresses to incoming nodes, including a grid-based protocol which assigns IP addresses with low delay and low protocol overhead. Such protocols are affected by the presence of either selfish or malicious nodes. Moreover, there is no centralized defense against these threats, such as the firewalls, intrusion detection systems, or proxies of wired networks. Selfish nodes are nodes which receive packets destined for themselves but drop packets destined for other nodes in order to save their energy and resources; this behavior affects the normal functioning of the autoconfiguration protocol. Many algorithms are available to isolate selfish nodes, but they do not deal with false alarms and protocol overhead, and some rely on complex formulae and tedious mathematical calculations. The algorithm proposed in this paper overcomes the selfish-node attack in an efficient and scalable address autoconfiguration protocol that automatically configures a network by assigning unique IP addresses to all nodes with very low protocol overhead, minimal address acquisition delay, and low computational overhead.
Vision Based Approach to Sign Language Recognition - IJAAS Team
We propose an algorithm for automatically recognizing a certain set of gestures from hand movements to help deaf and hard-of-hearing people. Hand gesture recognition is quite a challenging problem in its general form. We consider a fixed set of manual commands and a specific environment, and develop an effective procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture, which in general terms means detecting, tracking, and recognizing. The algorithm is invariant to rotations, translations, and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
Design and Analysis of an Improved Nucleotide Sequences Compression Algorithm... - IJAAS Team
DNA (deoxyribonucleic acid) is the hereditary material in humans and almost all other organisms; nearly every cell in a person's body has the same DNA. The information in DNA is stored as a code made up of four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). With continuous technology development and the growth of sequencing, a large amount of biological data is generated, which makes DNA sequences difficult to store, analyse, and process. There is therefore a strong need to reduce their size, and DNA compression is employed for this purpose. In this paper, we propose an efficient and fast DNA sequence compression algorithm based on differential direct coding and a variable look-up table (LUT).
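The direct-coding idea behind such schemes can be illustrated with a plain 2-bit-per-base packer. This is a hypothetical minimal sketch: the actual algorithm additionally uses differential coding and a variable look-up table, which are omitted here.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASES = "ACGT"

def pack(seq):
    """Direct coding: 4 bases per byte (2 bits each). Returns (bytes, base count);
    the count is needed because a short final group is left-aligned with padding."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        b = 0
        for ch in group:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(group))   # pad a short final group on the right
        out.append(b)
    return bytes(out), len(seq)

def unpack(data, n):
    """Invert pack(): decode 2-bit codes and truncate to the original length."""
    seq = []
    for b in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(b >> shift) & 0b11])
    return "".join(seq[:n])
```

Against one byte per character in a plain text file, this already gives a 4x reduction; the LUT and differential stages in the paper aim to compress further by exploiting repeats.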
Review: Dual Band Microstrip Antennas for Wireless Applications - IJAAS Team
In this manuscript, a review of dual-band microstrip antennas for wireless communication is presented. The review discusses the geometric structures, different methods of analysis for antenna characteristics, and different types of wireless applications.
Building Fault Tolerance Within Wsn-A Topology Model - IJAAS Team
Wireless sensor networks play a crucial role in visualizing, processing, and analyzing information wirelessly. A WSN consists of a large number of low-cost, low-powered sensor devices known as sensor nodes. These networks are generally used in real-time applications such as environmental monitoring, the military, and industry, but WSNs suffer from different failures such as node failure, link failure, sink failure, interference, power dissipation, and collision. If these faults cannot be handled, the desired network criteria may not be met, resulting in an inefficient network. The main idea behind this investigation is therefore to form a different network topology which keeps working in the event of failure.
Simplified Space Vector Pulse Width Modulation Based on Switching Schemes wit... - IJAAS Team
This paper presents a simplified SVPWM control strategy with a three-segment switching sequence and a 7-segment switching frequency for high-power multilevel inverters. In the proposed method, the inverter switching sequences are optimized to minimize the device switching frequency and improve the harmonic spectrum, using the three nearest switching states and one suitable redundant state for each space vector. The proposed 3-segment sequence is compared with the conventional 7-segment sequence on a five-level cascaded H-bridge inverter with various switching frequencies, including very low frequency. The output spectrum of the proposed sequence shows a reduction of the device switching frequency and of the current and line-voltage THD, thus minimizing the filter size requirement of the inverter in industrial applications where a sinusoidal output voltage is required.
An Examination of the Impact of Power Sector Reform on Manufacturing and Serv... - IJAAS Team
The main objective of this study is to empirically examine the impact of power sector reform on the manufacturing and services sectors in Nigeria between 1999 and 2016. The study employed secondary annual time-series data sourced from the World Bank database (2016). The methodology adopted was the Augmented Dickey-Fuller (ADF) test, followed by a test for a long-run relationship using the ARDL bounds testing approach with analysis of long-run and short-run dynamics in the model. A striking revelation from the study is the inverse relationship between manufacturing output and electricity consumption in Nigeria within the referenced period. This negative relationship is not unconnected with widespread allegations of misappropriation of funds budgeted for the power sector by successive administrations in Nigeria since 1999. It must be stated in clear terms that constant and consistent electricity generation, transmission, and distribution is a sine qua non for the growth of the national economy. Virtually all sectors of the economy depend on the supply of electricity to do business, so the lack of this vital ingredient of growth contributes in no small measure to stagnating economic growth and development. Efforts at reforming the power sector can only be fruitful when all stakeholders, including the political class, put away their personal agendas and take the bull by the horns to rescue the nation from the looming danger of stagnant economic growth. Furthermore, the Nigerian government needs to come up with new, better, and alternative ways of improving energy generation and supply, as well as proper maintenance of the electricity infrastructure in the country.
A Novel Handoff Necessity Estimation Approach Based on Travelling Distance - IJAAS Team
Mobility management is one of the most important challenges in Next
Generation Wireless Networks (NGWNs) as it enables users to move across
geographic boundaries of wireless networks. Nowadays, mobile
communications have heterogeneous wireless networks offering variable
coverage and Quality of Service (QoS). The availability of alternatives
generates a problem of occurrence of unnecessary handoff that results in
wastage of network resources. To avoid this, an efficient algorithm needs to
be developed to minimize the unnecessary handoffs. Conventionally,
whenever Wireless Local Area Network (WLAN) connectivity is available,
the mobile node switches from the cellular network to the wireless local area
network to make maximum use of its high bandwidth and low cost. But to
maintain call quality and minimize the number of call failures, a considerable
proportion of these handovers should be avoided. Our algorithm performs the
handoff to the wireless local area network only when the Predicted Received
Signal Strength (PRSS) falls below a threshold value and the travelling
distance inside the wireless local area network is larger than a threshold
distance. Through MATLAB simulation, we show that our algorithm is able to
improve handover performance.
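The handoff criterion stated above reduces to a two-condition predicate; a minimal sketch follows. The threshold values here are arbitrary placeholders for illustration, not values taken from the paper.

```python
def should_handoff(prss_dbm, predicted_distance_m,
                   prss_threshold_dbm=-85.0, distance_threshold_m=100.0):
    """Hand off to the WLAN only when the Predicted Received Signal Strength
    has dropped below the threshold AND the predicted travelling distance
    inside WLAN coverage is long enough to make the handoff worthwhile.
    Both thresholds are assumed placeholder values."""
    return (prss_dbm < prss_threshold_dbm
            and predicted_distance_m > distance_threshold_m)
```

Requiring both conditions is what suppresses unnecessary handoffs: a node that merely clips the edge of a WLAN cell fails the distance test and stays on the cellular network.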
The Combination of Steganography and Cryptography for Medical Image Applications - IJAAS Team
To give more security to biomedical images, for the patient's benefit as well as privacy, a highly confidential patient image report can be placed in the database in hidden form: if unauthorized persons such as hospital staff, relatives, or third-party intruders try to view the report, they see only the cover image. A patient record such as an MRI image is first embedded using steganography, then encrypted with the proposed cryptography algorithm and placed in the database.
Physical Properties and Compressive Strength of Zinc Oxide Nanopowder as a Po... - IJAAS Team
In this study, nanotechnology was applied in the dentistry field, especially to the innovation of dental amalgam material. To date, mercury (Hg) has been widely used as a dental amalgam material in consideration of its low price, ease of use, and good mechanical strength. However, in the last few years many problems have arisen in dentistry due to the use of mercury, so a new material is needed to eliminate mercury from the dental amalgam composition. This research analyzes the physical properties and compressive strength of zinc oxide (ZnO) nanopowder as a potential dental amalgam material. Physical properties such as morphology and dimensions were analyzed by SEM and XRD, and the compression test was conducted using a hydraulic press. The results show that the ZnO nanopowder has a particle size of 14.34 nm with a nanorod morphology. At a compression load of 500 kg, the average ZnO green density is 3.170 g/cm3; this value increases by 4.763% when the load is set to 1000 kg, and by 7.539% at 2000 kg. Dwelling time has the same effect: at 30 seconds, the average ZnO green density is 3.260 g/cm3, increasing by 0.583% at 60 seconds and by 3.098% at 90 seconds.
Experimental and Modeling Dynamic Study of the Indirect Solar Water Heater: A... - IJAAS Team
An indirect Solar Water Heater System (SWHS) with forced circulation is modeled with a theoretical dynamic multi-node model. The experimental setup consists of an SWHS working with a 1.91 m2 PFC and a 300 L storage tank, equipped with a forced-circulation system fitted with an automated subsystem that controls the hot water; the system heats the water using solar energy alone. The experimental weather conditions were measured every minute. Validation was performed for two periods, cloudy days in December and sunny days in May; the average deviations between the predicted and experimental values for the two typical days are 2% and 5% for the output water temperature and 4% and 9% for the useful energy, which is very satisfactory. The thermal efficiency was determined experimentally and theoretically and shown to agree well with the EN12975 standard for flow rates between 0.02 kg/s and 0.2 kg/s.
An Improved Greedy Parameter Stateless Routing in Vehicular Ad Hoc Network - IJAAS Team
Congestion and packet-delivery issues in the vehicular ad hoc network environment have been widely researched in recent years. Many network designers utilize various algorithms for the design of ad hoc networks and compare their results with existing approaches. The design of an efficient network protocol is a major challenge in vehicular ad hoc networks, which utilize GPS and other parameters associated with the vehicles. In this paper the GPSR protocol is improved and compared with the existing GPSR and AODV protocols on the basis of performance parameters such as network throughput, delay, and packet delivery ratio. The results validate the performance of the proposed approach.
A Novel CAZAC Sequence Based Timing Synchronization Scheme for OFDM System - IJAAS Team
Several classical timing synchronization schemes have been proposed for OFDM systems based on the correlation between identical parts of the OFDM symbol. These schemes show poor performance due to the presence of a plateau and significant side lobes. In this paper we present a timing synchronization scheme whose timing metric is based on a Constant Amplitude Zero Auto Correlation (CAZAC) sequence. The performance of the proposed scheme is better than that of the classical techniques.
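The defining CAZAC properties can be seen with a Zadoff-Chu sequence, the most common CAZAC family (a generic illustration; the paper's specific sequence and timing metric are not given here):

```python
import numpy as np

def zadoff_chu(n, root=1):
    """Zadoff-Chu sequence of odd length n with root index coprime to n:
    constant amplitude, and zero cyclic autocorrelation at all non-zero lags."""
    k = np.arange(n)
    return np.exp(-1j * np.pi * root * k * (k + 1) / n)

def cyclic_autocorr(x, lag):
    """Cyclic autocorrelation of x at the given lag."""
    return np.vdot(x, np.roll(x, lag))
```

The zero autocorrelation at every non-zero lag is what removes the timing-metric plateau of schemes built on repeated symbol halves: the correlation peak at the correct timing point is sharp, with no side lobes from partial alignment.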
Workload Aware Incremental Repartitioning of NoSQL for Online Transactional P... - IJAAS Team
Numerous applications are deployed on the web with the increasing popularity of the internet, including banking, gaming, and e-commerce web applications. Such applications rely on OLTP (Online Transaction Processing) systems, which need to be scalable and require fast response. Modern web applications generate a huge amount of data which a single machine and relational databases cannot handle, and e-commerce applications face the challenge of improving system scalability. Data partitioning is used to improve scalability: the data is distributed among different machines, which results in an increasing number of distributed transactions. The workload-aware incremental repartitioning approach is used to balance the load among the partitions and to reduce the number of transactions that are distributed in nature. A hypergraph representation captures the entire transactional workload in graph form; frequently used items are collected and grouped using the Fuzzy C-Means clustering algorithm, and a tuple classification and migration algorithm maps clusters to partitions and then migrates tuples efficiently.
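The Fuzzy C-Means grouping step mentioned above can be sketched generically. This is textbook FCM on numeric feature vectors; the paper's tuple features and parameter choices are not specified, so the example is illustrative only.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Standard Fuzzy C-Means. Returns soft memberships U (n x c) and centers.
    m > 1 is the fuzzifier; as m -> 1 the result approaches hard k-means."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        # centers are membership-weighted means of the points
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances of every point to every center (small epsilon avoids /0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # standard membership update
    return U, centers
```

In the repartitioning context, each tuple's access-frequency features would be clustered this way, and the resulting soft memberships handed to the tuple classification and migration step.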
A Fusion Based Visibility Enhancement of Single Underwater Hazy Image - IJAAS Team
Underwater images are prone to contrast loss, limited visibility, and undesirable color cast. For underwater computer vision and pattern recognition algorithms, these images need to be pre-processed. We have addressed a novel solution to this problem by proposing fully automated underwater image dehazing using multimodal DWT fusion. Inputs for the combinational image fusion scheme are derived from Singular Value Decomposition (SVD) and Discrete Wavelet Transform (DWT) for contrast enhancement in HSV color space and color constancy using Shades of Gray algorithm respectively. To appraise the work conducted, the visual and quantitative analysis is performed. The restored images demonstrate improved contrast and effective enhancement in overall image quality and visibility. The proposed algorithm performs on par with the recent underwater dehazing techniques.
Graph Based Workload Driven Partitioning System by Using MongoDB - IJAAS Team
The web applications and websites of enterprises are accessed by a huge number of users with the expectation of reliability and high availability. Social networking sites generate an exponentially large amount of data, and storing it efficiently is a challenging task. SQL and NoSQL are mostly used to store data; as an RDBMS cannot handle unstructured data and huge data volumes, NoSQL is the better choice for web applications. A graph database is one of the efficient ways to store data in NoSQL: it stores data in the form of relations, with each tuple represented by a node and each relationship by an edge. However, handling exponentially growing data on a single server decreases performance and increases response time. Data partitioning is a good way to maintain moderate performance even as the workload increases. There are many data partitioning techniques, such as range, hash, and round-robin, but they are not efficient for small transactions that access few tuples. NoSQL data stores provide scalability and availability using various partitioning methods, and graph partitioning is an efficient way to represent and process such data. To balance the load, data is partitioned horizontally and allocated across the geographically available data stores. If the partitions are not formed properly, the result is expensive distributed transactions in terms of response time, so the partitioning of tuples should be based on their relations. In the proposed system, the Schism technique, a workload-aware graph partitioning technique, is used to partition the graph; after partitioning, related tuples end up in a single partition, and each node of the graph is mapped to a unique partition. The overall aim of graph partitioning is to place nodes on different distributed partitions so that related data lands in the same cluster.
Data Partitioning in Mongo DB with Cloud - IJAAS Team
Cloud computing offers useful services such as IaaS, PaaS, and SaaS for deploying applications at low cost, making them available anytime, anywhere, with the expectation of scalability and consistency. One technique to improve scalability is data partitioning, but the existing techniques are not capable of tracking the data access pattern. This paper implements a scalable workload-driven technique for improving the scalability of web applications. The experiments are carried out over the cloud using the NoSQL data store MongoDB to scale out. This approach offers low response time, high throughput, and a smaller number of distributed transactions. The partitioning technique is evaluated using the TPC-C benchmark.
Richard's entangled adventures in wonderland - Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Cancer Cell Metabolism: Special Reference to the Lactate Pathway - AADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy they need to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cell utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to “burn” the pyruvates made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis, Krebs cycle, oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
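The energy bookkeeping above (2 ATP from glycolysis alone versus about 36 from full respiration) implies how much extra glucose a glycolysis-only cell needs. A quick check of that arithmetic:

```python
# ATP yield per glucose molecule, as stated above.
ATP_GLYCOLYSIS_ONLY = 2     # glycolysis alone (the cancer-cell shortcut)
ATP_FULL_RESPIRATION = 36   # glycolysis + Krebs cycle + oxidative phosphorylation

# Glucose molecules a glycolysis-only cell must burn to match the ATP
# a fully respiring cell extracts from a single glucose molecule.
glucose_needed = ATP_FULL_RESPIRATION / ATP_GLYCOLYSIS_ONLY
print(glucose_needed)  # 18.0
```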
INTRODUCTION TO THE WARBURG PHENOMENON:
WARBURG EFFECT: Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose from outside than normal cells do.
Otto Heinrich Warburg (8 October 1883 – 1 August 1970) was awarded the Nobel Prize in Physiology or Medicine in 1931 for his "discovery of the nature and mode of action of the respiratory enzyme".
The tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg observed that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
(May 29th, 2024) Advancements in Intravital Microscopy - Insights for Preclini... (Scintica Instrumentation)
Intravital microscopy (IVM) is a powerful tool used to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been achieved with various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows ultra-fast, high-resolution imaging of cellular processes over time and space in their natural environment. Real-time visualization of biological processes in the context of an intact organism maintains physiological relevance and provides insights into the progression of disease, response to treatments and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system's unique features and user-friendly software enable researchers to probe fast dynamic biological processes such as immune cell tracking, cell-cell interaction, vascularization and tumor metastasis in exceptional detail. The webinar also covers the use of IVM in drug development, offering a view into the intricate interactions between drugs or nanoparticles and tissues in vivo and allowing the evaluation of therapeutic interventions in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
Nutraceutical market, scope and growth: Herbal drug technology (Lokesh Patil)
As consumer awareness of health and wellness rises, the nutraceutical market, which includes goods like functional foods, drinks, and dietary supplements that provide health advantages beyond basic nutrition, is growing significantly. As healthcare expenses rise, the population ages, and people increasingly want natural and preventative health solutions, this industry is expanding quickly. Product formulation innovations and the use of cutting-edge technology for customized nutrition further drive market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing and to provide significant opportunities for research and investment across a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Richard's adventures in two entangled wonderlands (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Multi-source connectivity as the driver of solar wind variability in the heli... (Sérgio Sacani)
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous plasma streams from coronal holes and slow-speed, highly variable streams whose source regions are under debate. A key goal of ESA/NASA's Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 is driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic then to slow solar wind that is disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
IJAAS, ISSN: 2252-8814, Vol. 7, No. 1, March 2018, pp. 78-85
SLIC Superpixel Based Self Organizing Maps Algorithm for Segmentation… (Durga Prasad Kondisetty)
Figure 1: Analysis of Microarray Image
The analysis of microarray images is a difficult task, as the fluorescence of the glass slide adds a noise floor to the microarray image [3] [18]. The processing of the microarray image requires noise suppression with minimal reduction of the spot edge information that drives the segmentation process. This paper describes reducing the noise in microarray images using the Empirical Mode Decomposition (EMD) method. The BEMD method [5] decomposes the image into several Intrinsic Mode Functions (IMF), in which the first function is the highest frequency component, the second function the next highest frequency component, and so on; the last function denotes the lowest frequency component. The mean filter is applied only to the first few high frequency components, leaving the low frequency components untouched, as the high frequency components contain the noise. The image is reconstructed by combining the filtered high frequency components and the low frequency components. After noise removal, segmentation, expression ratio and gridding calculations are the important tasks in the analysis of a microarray image. Any noise in the microarray image will affect the subsequent analysis [6].
In the literature, many microarray image segmentation approaches have been proposed: fixed circle segmentation [7], adaptive circle segmentation [8], seeded region growing methods [9] and clustering algorithms [10] all deal with the microarray image segmentation problem. This paper mainly focuses on clustering algorithms. These algorithms have the advantages that they are not restricted to a particular spot size and shape, do not require an initial state of the pixels and need no post-processing. They have so far been developed based on the intensities of the pixels only (one feature). In this paper, a SLIC superpixel based self organizing maps clustering algorithm is proposed. The qualitative and quantitative results show that the proposed method segments the image better than the k-means, fuzzy c-means and self organizing maps clustering algorithms.
2. BI-DIMENSIONAL EMPIRICAL MODE DECOMPOSITION-DWT THRESHOLDING METHOD
Empirical mode decomposition [11] is a signal processing method that non-destructively fragments any non-linear and non-stationary signal into oscillatory functions by means of a mechanism called the sifting process. These functions are called Intrinsic Mode Functions (IMF) and satisfy two properties: (i) the number of zero crossings and extrema points should be equal or differ by one; (ii) the envelopes interpolated through the local maxima and minima are symmetric (zero mean) [12]. The decomposition using EMD is non-destructive, meaning that the original signal can be recovered by adding the IMFs and the residue. The first IMF is the highest frequency component and the subsequent IMFs range from the next highest frequency down to the lowest frequency components. The sifting process used to obtain IMFs from a 2-D signal (image) is summarized as follows:
a) Let I(x,y) be a Microarray image used for EMD decomposition. Find all local maxima and local minima
points in I(x,y).
b) The upper envelope Up(x,y) is created by interpolating the maxima points and the lower envelope Lw(x,y) is created by interpolating the minima points. Cubic spline interpolation is used for the interpolation.
c) Compute the mean of lower and upper envelopes denoted by Mean(x,y).
Mean(x,y) = ( Up(x,y) + Lw(x,y) ) / 2    (1)
d) This mean signal is subtracted from the input signal.
Sub(x,y) = I(x,y) − Mean(x,y)    (2)
e) If Sub(x,y) satisfies the IMF properties, then an IMF is obtained.
IMFi(x,y) = Sub(x,y)    (3)
f) Subtract the extracted IMF from the input signal. Now the value of I(x,y) is
I(x,y) = I(x,y) − IMFi(x,y)    (4)
Repeat the above steps (b) to (f) for the generation of next IMFs.
g) This process is repeated until I(x,y) does not have maxima or minima points to create envelopes.
The original image can be reconstructed by inverse EMD, given by
I(x,y) = Σ_{i=1}^{n} IMFi(x,y) + res(x,y)    (5)
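The sifting steps (a) to (g) can be sketched in one dimension for clarity. The following is a minimal NumPy sketch, with the simplifying assumptions that linear interpolation stands in for the cubic splines of step (b) and that boundary effects are handled by simply pinning the envelope endpoints:

```python
import numpy as np

def sift_once(signal):
    """One sifting iteration, steps (a)-(d): find the extrema, interpolate
    the upper and lower envelopes, and subtract their mean from the signal.
    Linear interpolation stands in for the paper's cubic splines."""
    n = len(signal)
    x = np.arange(n)
    interior = np.arange(1, n - 1)
    maxima = interior[(signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:])]
    minima = interior[(signal[1:-1] < signal[:-2]) & (signal[1:-1] < signal[2:])]
    # Pin the endpoints so both envelopes span the whole signal.
    up = np.interp(x, np.r_[0, maxima, n - 1],
                   np.r_[signal[0], signal[maxima], signal[-1]])
    lw = np.interp(x, np.r_[0, minima, n - 1],
                   np.r_[signal[0], signal[minima], signal[-1]])
    mean_env = (up + lw) / 2.0        # envelope mean, cf. Eq. (1)
    return signal - mean_env          # candidate IMF, cf. Eq. (2)

t = np.linspace(0.0, 1.0, 512)
s = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
h = sift_once(s)  # first candidate IMF, dominated by the fast oscillation
```

In a full EMD the sifting iteration is repeated until the IMF properties hold, the IMF is subtracted, and the process restarts on the remainder; the 2-D (BEMD) case interpolates envelope surfaces instead of curves.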
The mechanism of de-noising using BEMD-DWT is summarized as follows:
a. Apply 2-D EMD to the noisy microarray image to obtain IMFi (i = 1, 2, …, k). The kth IMF is called the residue.
b. The first intrinsic mode function (IMF1) contains the high frequency components and is therefore the one suited to denoising. IMF1 is denoised with a mean filter; the de-noised IMF1 is denoted DNIMF1.
c. The denoised image is reconstructed by the summation of DNIMF1 and remaining IMFs given by
RI = DNIMF1 + Σ_{i=2}^{k} IMFi    (6)
where RI is the reconstructed image. The flow diagram of BEMD-DWT filtering is shown in Figure 2.
Figure 2: Flow Diagram of BEMD-mean filtering method
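Steps a to c above can be sketched as follows, assuming a hypothetical list of IMF arrays produced by a prior BEMD step and a simple 3×3 mean filter:

```python
import numpy as np

def mean_filter3(img):
    """3x3 mean filter with edge replication (the filter of step b)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def bemd_denoise(imfs):
    """Steps a-c: denoise only the first (high-frequency) IMF, then
    reconstruct RI = DNIMF1 + sum of the remaining IMFs, cf. Eq. (6).
    `imfs` is a hypothetical list [IMF1, ..., IMFk] from a BEMD step."""
    dn_imf1 = mean_filter3(imfs[0])
    return dn_imf1 + np.sum(imfs[1:], axis=0)
```

Filtering only IMF1 is the point of the scheme: the low-frequency IMFs carry the spot structure and pass through untouched.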
3. MICROARRAY IMAGE GRIDDING
The process of dividing the microarray image into blocks (sub-gridding), with each block again divided into sub-blocks (spot detection), is called gridding. The final sub-block contains a single spot and has only two regions, spot and background. Existing algorithms for gridding are semi-automatic in nature, requiring several parameters such as the size of the spots, the number of rows of spots, the number of columns of spots, etc. In this paper, the fully automatic gridding algorithm designed in [13] is used for sub-gridding and spot detection.
4. SLIC SUPERPIXELS
Simple linear iterative clustering (SLIC) is an adaptation of k-means for superpixel generation, with
two important distinctions: i) the number of distance calculations in the optimization is dramatically reduced
by limiting the search space to a region proportional to the Superpixel size. This reduces the complexity to be
linear in the number of pixels N and independent of the number of superpixels k. ii) A weighted distance
measure combines color and spatial proximity, while simultaneously providing control over the size and
compactness of the superpixels.
The algorithm of SLIC superpixels generation is given below [14].
1. Initialize p cluster centers C = [k, x, y, r, s]^T by sampling pixels at regular grid steps S.
2. For the generation of equal-sized superpixels, the grid interval is S = √(N/p).
3. Set label k(j)=-1 for each pixel j.
4. Set distance d(j) = ∞ for each pixel j.
5. For each cluster center C do
6. For each pixel j in a 2S X 2S region around C do
7. Compute the distance D between C and j.
8. The distance D depends on the pixel's color (color proximity) and the pixel's position (spatial proximity). The value of D is given by
D = √( (dc/Nc)² + (ds/NS)² )    (7)
where dc is the color distance and ds is the spatial distance between pixel j and cluster center C.
The maximum spatial distance expected within a given cluster should correspond to the sampling interval, NS = S. Determining the maximum color distance Nc is not so straightforward, as color distances can vary significantly from cluster to cluster and image to image. The value of Nc is chosen in the range [1, 40].
9. If D < d(j), then set d(j) = D and set k(j) to the index of the current cluster center C; go to step 6.
10. Go to step 5 and repeat the same process for each cluster.
11. Compute new cluster centers.
12. The clustering and updating processes are repeated until a predefined number of iterations is reached. The SLIC algorithm generates compact and nearly uniform superpixels with a low computational overhead.
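The procedure above can be sketched for a grayscale image with NumPy alone. This is an illustrative simplification, not the paper's implementation: the function name `slic_gray` is hypothetical, and a single compactness weight `m` plays the role of Nc (with NS = S), matching the distance of the form D = √((dc/Nc)² + (ds/NS)²):

```python
import numpy as np

def slic_gray(img, p, m=10.0, iters=5):
    """Minimal SLIC sketch for a grayscale image.
    p -- requested number of superpixels; m -- compactness weight (role of Nc)."""
    h, w = img.shape
    S = int(np.sqrt(h * w / p))                       # grid interval S = sqrt(N/p)
    ys, xs = np.meshgrid(np.arange(S // 2, h, S),
                         np.arange(S // 2, w, S), indexing="ij")
    cy = ys.ravel().astype(float)                     # center rows
    cx = xs.ravel().astype(float)                     # center cols
    cc = img[ys.ravel(), xs.ravel()].astype(float)    # center intensities
    labels = np.full((h, w), -1, dtype=int)
    for _ in range(iters):
        dist = np.full((h, w), np.inf)
        for k in range(len(cc)):                      # 2S x 2S search window
            y0, y1 = max(0, int(cy[k]) - S), min(h, int(cy[k]) + S)
            x0, x1 = max(0, int(cx[k]) - S), min(w, int(cx[k]) + S)
            yy, xx = np.meshgrid(np.arange(y0, y1), np.arange(x0, x1),
                                 indexing="ij")
            dc = img[y0:y1, x0:x1] - cc[k]            # color proximity
            ds = np.sqrt((yy - cy[k]) ** 2 + (xx - cx[k]) ** 2)  # spatial proximity
            D = np.sqrt((dc / m) ** 2 + (ds / S) ** 2)
            better = D < dist[y0:y1, x0:x1]
            dist[y0:y1, x0:x1][better] = D[better]
            labels[y0:y1, x0:x1][better] = k
        for k in range(len(cc)):                      # recompute cluster centers
            mask = labels == k
            if mask.any():
                ky, kx = np.nonzero(mask)
                cy[k], cx[k] = ky.mean(), kx.mean()
                cc[k] = img[mask].mean()
    return labels
```

Restricting each center's assignment step to a 2S × 2S window is what makes the cost linear in the number of pixels and independent of p.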
5. FUZZY C-MEANS CLUSTERING ALGORITHM
The FCM algorithm for segmentation of microarray image is described below [15]:
1. Take K initial clusters randomly from the m×n image pixels.
2. Initialize the membership matrix uij with values in the range 0 to 1 and set m = 2.
Assign each pixel to the cluster Cj (j = 1, 2, …, K) if it satisfies the following condition, where D(·,·) is the Euclidean distance measure between two values:
uij^m D(Ii, Cj) < uiq^m D(Ii, Cq),   q = 1, 2, …, K,   j ≠ q    (8)
The new membership and cluster centroid values are calculated as
uik = 1 / Σ_{j=1}^{K} ( D(Ci, Ik) / D(Cj, Ik) )^{2/(m−1)},   1 ≤ i ≤ K
Ci = ( Σ_{k=1}^{n} uik^m Ik ) / ( Σ_{k=1}^{n} uik^m )    (9)
3. Repeat step 2 until convergence; each pixel is then assigned to the cluster with the maximum membership [16].
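The alternating membership and centroid updates can be sketched on a flat array of pixel intensities. A minimal NumPy version follows; as a simplifying assumption, a deterministic percentile-based initialization replaces the random initialization of step 1:

```python
import numpy as np

def fcm(pixels, K=2, m=2.0, iters=100, eps=1e-5):
    """Fuzzy c-means sketch on a 1-D array of pixel intensities.
    Alternates the membership and centroid updates, cf. Eq. (9)."""
    centers = np.percentile(pixels, np.linspace(10, 90, K))
    for _ in range(iters):
        d = np.abs(pixels[None, :] - centers[:, None]) + 1e-12   # K x n distances
        # u[i, k]: membership of pixel k in cluster i
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)),
                         axis=1)
        new_centers = (u ** m @ pixels) / np.sum(u ** m, axis=1)  # centroid update
        shift = np.max(np.abs(new_centers - centers))
        centers = new_centers
        if shift < eps:
            break
    return centers, u.argmax(axis=0)  # centroids and hard (max-membership) labels
```

On two well-separated intensity populations the centroids converge to the population means within a handful of iterations.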
6. SLIC SUPERPIXEL BASED SOM CLUSTERING ALGORITHM
The SLIC algorithm generates the superpixels that are used in our clustering algorithm. The superpixels are generated based on color similarity and proximity in the image plane. The algorithm depends on two values, NS and Nc: a higher value of NS corresponds to a more regular, grid-like superpixel structure and a lower value of Nc captures more image detail. The SLIC superpixel based SOM clustering algorithm is given below:
1. Collect the necessary information about the superpixels by generating the superpixel representation of the original image.
2. Initialize cluster centroids vi, i=1, ... , C.
3. The objective function F is given by
F = Σ_{i=1}^{C} Σ_{j=1}^{Q} γj uij^m ||ξj − vi||²    (10)
4. The membership values uij are updated as
uij = 1 / Σ_{k=1}^{C} ( ||ξj − vi|| / ||ξj − vk|| )^{2/(m−1)}    (11)
where γj is the number of pixels in superpixel sj,
uij denotes the membership of superpixel sj to the ith cluster,
Q is the number of superpixels in the image,
ξj is the average color value of superpixel sj,
Nj stands for the set of neighboring superpixels adjacent to sj,
NR is the cardinality of Nj, and
||·|| is a norm, denoting the Euclidean distance between superpixel values and clustering centroids.
The parameter m is a weighting exponent on each SOM membership and determines the amount of self-mapping in the resulting classification.
5. The cluster centroids vi are updated as
vi = ( Σ_{j=1}^{Q} γj uij^m ξj ) / ( Σ_{j=1}^{Q} γj uij^m )    (12)
6. Repeat steps 4 and 5 until ||Vnew − Vold|| < ε.
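The superpixel-level updates can be sketched by clustering the Q superpixel mean colors ξj, each weighted by its pixel count γj. This is a simplified assumption of the scheme, not the full algorithm: the neighborhood term (Nj, NR) is omitted for brevity, and the name `superpixel_fcm` is illustrative:

```python
import numpy as np

def superpixel_fcm(xi, gamma, C=2, m=2.0, iters=100, eps=1e-6):
    """Cluster the Q superpixel mean colors `xi`, each weighted by its
    pixel count `gamma` (the role the gamma_j term plays above).
    Neighborhood regularization is deliberately left out."""
    v = np.percentile(xi, np.linspace(10, 90, C))      # initial centroids
    u = np.full((C, len(xi)), 1.0 / C)
    for _ in range(iters):
        d = np.abs(xi[None, :] - v[:, None]) + 1e-12   # C x Q distances
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)),
                         axis=1)                       # membership update
        w = gamma * u ** m                             # gamma_j-weighted memberships
        v_new = (w @ xi) / w.sum(axis=1)               # weighted centroid update
        if np.max(np.abs(v_new - v)) < eps:            # stop: ||Vnew - Vold|| < eps
            v = v_new
            break
        v = v_new
    return v, u.argmax(axis=0)
```

Because clustering runs over Q superpixels rather than all N pixels, each iteration is far cheaper than pixel-level clustering, which is the computational advantage the paper claims for superpixels.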
7. EXPERIMENTAL RESULTS
Quantitative Analysis: Quantitative analysis is a numerically oriented procedure to evaluate the performance of algorithms without human error. The Mean Square Error (MSE) [17] is a significant metric for validating image quality; it measures the squared error between the pixels of the original and resultant images.
Qualitative Analysis: The proposed clustering algorithm is applied to two microarray images drawn from a standard microarray database, corresponding to the breast category of CGH tumor tissue [18]. Image 1 consists of a total of 38808 pixels and Image 2 consists of 64880 pixels. Gridding is performed on the input images by the method proposed in [13] to segment each image into compartments, where each compartment contains only one spot region and background. The gridding output is shown in Figure 3. After gridding, compartment no. 1 from image 1 and compartment no. 12 from image 2 are extracted. Superpixels are generated for these two compartments using SLIC and segmented using the SLIC based SOM algorithm. The superpixel generation and segmentation are shown in Figure 3.
The MSE [18] is mathematically defined as
MSE = (1/N) Σj Σ_{xi∈Cj} ||xi − cj||²    (13)
where N is the total number of pixels in the image, xi is a pixel belonging to the jth cluster and cj is the centroid of that cluster. A lower difference between the resultant and original images reflects that all the data in a region are located near its centre. Table 1 shows the quantitative evaluation of the clustering algorithms. The results confirm that the SLIC based SOM algorithm produces the lowest MSE value for segmenting the microarray image.
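One way to compute such a clustering MSE, a sketch since the paper gives no code, is:

```python
import numpy as np

def clustering_mse(pixels, labels, centroids):
    """MSE in the sense of Eq. (13): mean squared distance from each pixel
    to the centroid of the cluster it was assigned to."""
    return np.mean((pixels - centroids[labels]) ** 2)
```

For example, pixels [0, 2, 10, 12] assigned to centroids [1, 11] have a squared error of 1 at every pixel, so the MSE is 1.0.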
Figure 3: Superpixel based SOM segmentation (gridded images 1 and 2; superpixel generation and segmentation results for compartment no. 1 of image 1 and compartment no. 12 of image 2)
Table 1: MSE Values

Method                 | Compartment No 1 | Compartment No 12
K-means                | 96.4             | 93.6
Fuzzy c-means          | 93.1             | 89.4
Self organizing maps   | 84.7             | 80.9
SLIC SOM               | 82.8             | 77.4
8. CONCLUSIONS
Microarray technology is used for the parallel analysis of the gene expression ratios of different genes in a single experiment. The analysis of a microarray image involves segmentation, information extraction and gridding. Spot information extraction includes the calculation of the expression ratio in the region of every gene spot on the microarray image; the expression ratio measures the transcription abundance between the two sample genes under experiment. Clustering algorithms have been used for microarray image segmentation with the advantage that they are not restricted to a particular size or shape of the spots. This paper describes a SLIC based self organizing maps clustering algorithm for the segmentation of microarray images. The proposed method performs better noise suppression and produces better segmentation results.
REFERENCES
[1] J.Harikiran, A.Raghu, Dr.P.V.Lakshmi, Dr.R.Kiran Kumar, “Edge Detection using Mathematical Morphology for
Gridding of Microarray Image”, International Journal of Advanced Research in Computer Science, Volume 3, No
2, pp.172-176, April 2012.
[2] J.Harikiran, Dr.P.V.Lakshmi, Dr.R.Kiran Kumar, “Fast Clustering Algorithms for Segmentation of Microarray
Images”, International Journal of Scientific & Engineering Research, Volume 5, Issue 10, pp 569-574, 2014.
[3] Bogdan Smolka, et.al. “Ultrafast Technique of Impulsive Noise Removal with Application to Microarray Image
De-noising”, ICIAR 2005, LNCS 3656, pp. 990–997, 2005. Springer-Verlag Berlin Heidelberg 2005.
[4] Hara Stefanou, et.al. “Microarray Image Denoising Using a Two-Stage Multiresolution Technique”, 2007 IEEE
International Conference on Bioinformatics and Biomedicine.
[5] Lakshmana Phaneendra Maguluri, et.al, “BEMD with Clustering Algorithm for Segmentation of Microarray
Image”, International Journal of Electronics Communication and Computer Engineering Volume 4, Issue 2, ISSN:
2278–4209 2013.
[6] J.Harikiran, D.Ramakrishna, B.Avinash, Dr.P.V.Lakshmi, Dr.R.Kiran Kumar, “A New Method of Gridding for
Spot Detection in Microarray Images”, Computer Engineering and Intelligent Systems, Vol 5, No 3, pp.25-33, 2014
[7] M. Eisen, ScanAlyze User's Manual, 1999.
[8] J.Buhler, T.Ideker and D.Haynor, “Dapple:Improved Techniques for Finding spots on DMA Microarray Images”,
Tech. Rep. UWTR 2000-08-05, University of Washington, 2000.
[9] Frank Y. Shih and Shouxian Cheng, “Automatic seeded region growing for color image segmentation”, 0262-8856/
2005.
[10] Aliaa Saad El-Gawady, et.al. “Segmentation of Complementary DNA Microarray Images using Marker-Controlled
Watershed Technique”, International Journal of Computer Applications (0975 – 8887) Volume 110 – No. 12,
January 2015.
[11] J.Harikiran, et.al. “Multiple Feature Fuzzy C-means Clustering Algorithm for Segmentation of Microarray image”,
IAES International Journal of Electrical and Computer Engineering, Vol. 5, No. 5, pp. 1045-1053, 2015.
[12] J.Harikiran et.al. “Fuzzy C-means with Bi-dimensional empirical Mode decomposition for segmentation of
Microarray Image”, International Journal of Computer Science Issues, volume 9, Issue 5, Number 3, pp.273-279,
2012.
[13] J.Harikiran, B.Avinash, Dr.P.V.Lakshmi, Dr.R.Kirankumar, “Automatic Gridding Method for microarray images”,
Journal of Theoretical and Applied Information Technology, volume 65, Number 1, pp.235-241, 2014.
[14] Radhakrishna Achanta et al., "SLIC Superpixels Compared to State-of-the-art Superpixel Methods", Journal of Latex
Class Files, Vol 6, No. 1, December 2011.
[15] Hamidreza Saberkari, et.al. ” Fully Automated Complementary DNA Microarray Segmentation using a Novel
Fuzzy‑based Algorithm”, IEEE Transactions on Information Technology in Biomedicine, Vol 5, Issue 3, Jul-Sep
2015.
[16] Luis Rueda and Vidya Vidyadharan, “A Hill-climbing Approach for Automatic Gridding of cDNA Microarray
Images”, IAES International Journal of Electrical and Computer Engineering, volume 4, No 6, December 2014,
pp.923-930.
[17] B.Saichandana et.al. “Hyperspectral Image Classification using Genetic Algorithm after Visualization using Image
Fusion”, International Journal of Computer Science And Technology, Volume 7, No. 2, June 2016, pp.2229-4333.
[18] Durga Prasad Kondisetty, Dr. Mohammed Ali Hussain. “A Review on Microarray Image Segmentation Methods”,
International Journal of Computer Science and Information Security (IJCSIS), Vol. 14, No. 12, December 2016.
[19] Komang Ariana, et.al., “Color Image Segmentation using Kohonen SOM”, International Journal of Engineering
and Technology (IJET), Vol. 6, No. 2, December 2014.
[20] Kristof Van Laerhoven. et.al, “Combining the Self-Organizing Map and K-Means Clustering for On-line
Classification of Sensor Data”, International Journal of Computer Science and Information Security (IJCSIS), Vol.
14, No. 12, December 2001.
[21] M.N.M. Sap, et.al,. “Hybrid Self Organizing Map for Overlapping Clusters”, International Journal of Signal
Processing, Image Processing and Pattern Recognition, Vol. 14, No. 12, December 2016.
[22] Janne Nikkil, et.al,. “Analysis and visualization of gene expression data using SOM”, Neural Networks 15 (2002)
953–966 Vol. 14, No. 12, December 2002.