This paper presents an analysis of non-Gaussian noise resistance using the inverse confidential technique on a bilateral function with optical flow. Optical flow samples the apparent motion of an image across consecutive frames caused by object movement; it is a 2-D vector field in which each vector is a displacement describing the motion from the first image to the second. When noise interferes with the image sequence, the estimated optical-flow vectors degrade. We demonstrate improved noise resistance by applying the inverse confidential technique on a bilateral function to optical flow, evaluating the result with the error vector magnitude (EVM). Several noise-resistance models of the global optical-flow method are used for comparison, and several image sequences corrupted by different types of non-Gaussian noise are used for the experimental analysis.
Noise resistance territorial intensity-based optical flow using inverse confi... (journalBEEI)
This paper presents the use of the inverse confidential technique on a bilateral function with territorial intensity-based optical flow to demonstrate its effectiveness in noisy environments. In general, an image's motion vector is encoded by the technique called optical flow, in which sequences of images are used to determine the motion vector. However, the accuracy of the motion vector is reduced when the image sequence is corrupted by noise. This work shows that the inverse confidential technique on a bilateral function can increase the accuracy of motion-vector determination by territorial intensity-based optical flow in a noisy environment. We tested with several kinds of non-Gaussian noise on several standard image sequences, analyzing the resulting motion vectors in the form of the error vector magnitude (EVM) and comparing against several noise-resistance techniques for the territorial intensity-based optical flow method.
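Both abstracts above score flow accuracy with the error vector magnitude. As a hedged sketch (the abstracts do not give their exact formula; a common definition is the mean Euclidean length of the per-pixel error vector between the estimated and ground-truth flow fields):

```python
import numpy as np

def error_vector_magnitude(flow_est, flow_gt):
    """Mean Euclidean length of the per-pixel error vector between an
    estimated flow field and the ground truth, both shaped (H, W, 2)."""
    err = flow_est - flow_gt                 # per-pixel error vector
    mag = np.sqrt((err ** 2).sum(axis=-1))   # length of each error vector
    return mag.mean()

# Constant ground-truth flow (1, 0) vs. an estimate that is off by (0, 0.5).
gt = np.zeros((4, 4, 2)); gt[..., 0] = 1.0
est = gt.copy(); est[..., 1] += 0.5
print(error_vector_magnitude(est, gt))      # 0.5
```

A lower EVM means the estimated motion vectors sit closer to the true displacement field.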
Improved nonlocal means based on pre-classification and invariant block matching (IAEME Publication)
One of the most popular image denoising methods based on self-similarity is nonlocal means (NLM). Though it can achieve remarkable performance, the method has a few shortcomings, e.g., the computationally expensive calculation of the similarity measure and the lack of reliable candidates for some non-repetitive patches. In this paper, we propose to improve NLM by integrating Gaussian blur, clustering, and raw-image weighted averaging into the NLM framework. Experimental results show that the proposed technique denoises better than the original NLM both quantitatively and visually, especially when the noise level is high.
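The core NLM operation that the abstract builds on can be sketched as follows. This is the classic single-pixel formulation, not the paper's improved variant; patch size, search radius, and the filtering parameter `h` are illustrative defaults:

```python
import numpy as np

def nlm_pixel(img, y, x, patch=1, search=5, h=10.0):
    """Denoise one pixel with classic nonlocal means: average pixels in a
    search window, each weighted by the similarity of its surrounding
    patch to the patch around (y, x)."""
    H, W = img.shape
    p = img[y - patch:y + patch + 1, x - patch:x + patch + 1]
    num = den = 0.0
    for j in range(max(patch, y - search), min(H - patch, y + search + 1)):
        for i in range(max(patch, x - search), min(W - patch, x + search + 1)):
            q = img[j - patch:j + patch + 1, i - patch:i + patch + 1]
            w = np.exp(-((p - q) ** 2).sum() / h ** 2)  # patch-similarity weight
            num += w * img[j, i]
            den += w
    return num / den

# On a constant image every weight is 1, so the pixel is unchanged.
img = np.full((9, 9), 5.0)
print(nlm_pixel(img, 4, 4))   # 5.0
```

The quadratic cost of the patch comparison inside the double loop is exactly the "computationally expensive similarity measure" the abstract refers to.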
Introduction to Wavelet Transform and Two-Stage Image Denoising Using Princi... (ijsrd.com)
In the past two decades, various techniques have been developed to support a variety of image processing applications, including medical, satellite, space, transmission and storage, radar, and sonar. Noise in images affects all of these applications, so it is necessary to remove it, and many methods exist to do so. The wavelet transform (WT) has proved effective for noise removal, but it has some problems that the PCA method overcomes. This paper presents an efficient image denoising scheme using principal component analysis (PCA) with local pixel grouping (LPG), which better preserves local image structures. A pixel and its nearest neighbors are modeled as a vector variable whose training samples are selected from the local window by block-matching-based LPG. In image denoising, a compromise has to be found between noise reduction and the preservation of significant image details; PCA, a standard statistical technique for simplifying a dataset by reducing it to lower dimensions, is commonly used for data reduction in statistical pattern recognition and signal processing. The denoising procedure is iterated a second time to further improve performance, with the noise level adaptively adjusted in the second stage.
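The PCA stage of an LPG-PCA scheme can be sketched as below. This is a generic Wiener-style shrinkage in the PCA domain of a group of similar patches, assumed here for illustration; the paper's exact grouping and shrinkage rules may differ:

```python
import numpy as np

def pca_denoise_group(patches, noise_var):
    """Denoise a group of similar patches (one per row) in the PCA domain:
    project onto the principal components of the group, shrink each
    component with a Wiener-like gain, and project back."""
    mean = patches.mean(axis=0)
    X = patches - mean
    eigval, eigvec = np.linalg.eigh(X.T @ X / len(X))  # PCA of the group
    Y = X @ eigvec                                     # PCA-domain coefficients
    signal_var = np.maximum(eigval - noise_var, 0.0)   # estimated clean variance
    gain = signal_var / (signal_var + noise_var + 1e-12)
    return (Y * gain) @ eigvec.T + mean
```

With `noise_var = 0` the gain is effectively 1 and the patches pass through unchanged; as the assumed noise variance grows, weak components are suppressed while dominant structure survives, which is how the method trades noise reduction against detail preservation.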
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
ANALYSIS OF INTEREST POINTS OF CURVELET COEFFICIENTS CONTRIBUTIONS OF MICROS... (sipij)
This paper focuses on an improved edge model based on Curvelet coefficient analysis. The Curvelet transform is a powerful tool for multiresolution representation of objects with anisotropic edges. Curvelet coefficient contributions have been analyzed using the Scale Invariant Feature Transform (SIFT), commonly used to study local structure in images. The permutation of Curvelet coefficients from the original image and the edge image obtained from a gradient operator is used to improve the original edges. Experimental results show that this method brings out edge details as the decomposition scale increases.
Segmentation Based Multilevel Wide Band Compression for SAR Images Using Coif... (CSCJournals)
Synthetic aperture radar (SAR) data represents a significant source of information for a large variety of researchers, so there is strong interest in developing encoding and decoding algorithms that achieve higher compression ratios while keeping image quality at an acceptable level. In this work, the results of different wavelet-based image compression schemes and segmentation-based wavelet image compression are assessed through controlled experiments on synthetic SAR images. The effects of different wavelet functions and numbers of decompositions are examined in order to find the optimal family for SAR images; in segmentation-based wavelet image compression, the optimal choice is the coiflet for both the low-frequency and high-frequency components. The results presented here are a good reference for SAR application developers choosing wavelet families, and they show that the wavelet transform is a fast, robust, and reliable tool for SAR image compression. Numerical results confirm the potency of this approach.
Sub-windowed laser speckle image velocimetry by fast Fourier transform technique
Abstract
In this work, laser speckle velocimetry, a unique optical method for measuring the velocity of fluid flow, is described. A laser sheet is generated and illuminates microscopic seeding particles to produce a speckle pattern at the recording plane. Double-frame, single-exposure speckle images are captured such that the second speckle image is shifted exactly in a known direction; the auto-correlation method is ambiguous about the direction of flow, and the premeditated spatial shift of the second image rectifies this. The cross-correlation of sub-interrogation areas is obtained by the fast Fourier transform technique, and four sub-windows are processed to obtain precise velocity information with vector-map analysis.
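The FFT cross-correlation step at the heart of this kind of velocimetry can be sketched as below: the displacement of the second interrogation window relative to the first appears as the peak of their circular cross-correlation, computed via the FFT. Window sizes and the random test pattern are illustrative:

```python
import numpy as np

def displacement_fft(win1, win2):
    """Integer-pixel shift of win2 relative to win1, found as the peak of
    the FFT-based circular cross-correlation of the two windows."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(win1)) * np.fft.fft2(win2)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak coordinates into the signed range (-N/2, N/2].
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(0)
w1 = rng.random((16, 16))                # first-exposure speckle window
w2 = np.roll(w1, (3, -2), axis=(0, 1))   # second exposure, shifted (3, -2)
print(displacement_fft(w1, w2))          # (3, -2)
```

Unlike auto-correlation, whose peak pair is symmetric about the origin, this cross-correlation peak is signed, which is exactly why the known pre-shift of the second image resolves the flow-direction ambiguity.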
An Experimental Approach For Evaluating Superpixel's Consistency Over 2D Gaus... (CSCJournals)
This article proposes a rigorous method to assess the consistency of superpixels across different superpixel segmentation algorithms. The proposed method extracts the superpixels that remain unchanged over certain levels of noise by adopting the Jaccard Similarity Coefficient (JSC). Technically, we developed a Jaccard similarity measure for superpixel segmentation algorithms that compares sets of superpixels from the original and noisy images. The algorithm calls the superpixel segmentation algorithm to generate the superpixel results for the original images and saves their boundary masks and labels. It then applies varying degrees of noise to the images and produces the superpixel results, repeating the process for four levels with an increased noise value at each iteration. We chose 2D Gaussian blur, impulse noise, and a combination of both to corrupt the images. The proposed algorithm generates similarity indices between the original and noisy superpixels using the JSC; to be categorized as consistent, a superpixel's similarity index must meet a predefined JSC threshold. The superpixel consistency of four different superpixel segmentation algorithms is evaluated: bilateral geodesic distance (BGD), flooding-based superpixel generation (FBS), superpixels via geodesic distance (GDS), and Turbopixel (TP). The experimental results demonstrated that no single algorithm yielded an optimal outcome; all failed to maintain consistent superpixels at every level of noise. More robust superpixel algorithms must therefore be developed to solve such problems effectively.
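The JSC comparison the abstract describes reduces to a set operation on pixel coordinates. A minimal sketch, with an illustrative threshold (the paper's actual threshold value is not given here):

```python
def jaccard_similarity(a, b):
    """Jaccard Similarity Coefficient |A intersect B| / |A union B|
    between two superpixels given as sets of pixel coordinates."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# A superpixel counts as consistent when its JSC against its noisy-image
# counterpart meets a chosen threshold (0.8 here is purely illustrative).
original = {(0, 0), (0, 1), (1, 0), (1, 1)}
noisy = {(0, 0), (0, 1), (1, 0)}
jsc = jaccard_similarity(original, noisy)
print(jsc, jsc >= 0.8)   # 0.75 False
```

Repeating this test at each of the four noise levels gives the per-level consistency counts the article compares across algorithms.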
OBIA on Coastal Landform Based on Structure Tensor (csandit)
This paper presents an OBIA method based on the structure tensor to identify complex coastal landforms. That is, it develops a Hessian matrix by Gabor filtering and calculates the multiscale structure tensor, extracts edge information from the trace of the structure tensor, and conducts watershed segmentation of the image. It then develops textons and creates a texton histogram, and finally obtains the results by maximum-likelihood classification with KL divergence as the similarity measure. The study findings show that the structure tensor can obtain multiscale, all-direction information with little data redundancy. Moreover, the method described in the current paper has high classification accuracy.
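The "edge information from the trace of the structure tensor" step can be sketched as below. This is the basic unsmoothed form, assumed for illustration; the paper builds its tensor at multiple scales via Gabor filtering:

```python
import numpy as np

def structure_tensor_trace(img):
    """Edge-energy map from the 2x2 structure tensor
    [[Ix^2, Ix*Iy], [Ix*Iy, Iy^2]]: its trace Ix^2 + Iy^2 is the edge
    strength that drives the watershed segmentation."""
    Iy, Ix = np.gradient(img.astype(float))   # image derivatives
    return Ix ** 2 + Iy ** 2

# A vertical step edge: the trace is largest on the columns around the step.
img = np.zeros((5, 5)); img[:, 2:] = 1.0
print(structure_tensor_trace(img)[2].tolist())   # [0.0, 0.25, 0.25, 0.0, 0.0]
```

Because the trace sums the squared gradient in both directions, it responds to edges of every orientation, matching the "all-direction information" claim.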
An Application of Second Generation Wavelets for Image Denoising using Dual T... (IDES Editor)
The lifting scheme of the discrete wavelet transform (DWT) is now well established as an efficient technique for image denoising. The lifting-scheme factorization of biorthogonal filter banks is carried out with linear-adaptive, delay-free, and faster decomposition arithmetic. This adaptive factorization aims to achieve a transparent, more generalized, low-complexity fast decomposition process while preserving the features that an ordinary wavelet decomposition offers. This work targets a considerable reduction in the computational complexity and power required for decomposition. The serious demerits of the DWT structure, viz. shift sensitivity and poor directionality, have already been shown to be eliminated by the dual-tree complex wavelet transform (DT-CWT). The well-known features of the DT-CWT and the robust lifting scheme are suitably combined to achieve image denoising with a substantial rise in computational speed and directionality, along with a desirable drop in computation time, power, and algorithmic complexity compared to other techniques.
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIP (csandit)
Most watermarking methods use pixel values or coefficients as the judgment condition to embed or extract a watermark image. Variation in these values may lead to an inaccurate condition and therefore an incorrect judgment. To avoid this problem, we design a stable judgment mechanism whose outcome is not seriously influenced by such variation. The principle of judgment depends on the scale relationship between two pixels. From observation of common signal processing operations, we find that the pixel values of a processed image usually remain stable unless the image has been manipulated by a cropping attack or halftone transformation. This greatly helps reduce the modification strength required against image processing operations. Experimental results show that the proposed method can resist various attacks while keeping the image quality acceptable.
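A hypothetical toy version of a scale-relationship judgment, to make the idea concrete. This is not the paper's actual embedding rule, only an illustration of why an order relation between two pixels survives perturbations that would flip a threshold on raw values:

```python
def embed_bit(p1, p2, bit):
    """Hypothetical toy embedding: encode one bit in the scale relationship
    of a pixel pair (bit 1 -> p1 >= p2, bit 0 -> p1 < p2), swapping the
    pixels only when the relationship disagrees with the bit."""
    if (p1 >= p2) != bool(bit):
        p1, p2 = p2, p1
    return p1, p2

def extract_bit(p1, p2):
    """Read the bit back from the pair's scale relationship."""
    return int(p1 >= p2)

pair = embed_bit(100, 120, 1)       # relationship disagrees with bit 1, so swap
print(pair, extract_bit(*pair))     # (120, 100) 1
```

As long as a processing operation shifts both pixels by roughly the same amount, their ordering, and hence the extracted bit, is unchanged.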
Research Inventy: International Journal of Engineering and Science (inventy)
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open-access journal, available online and in print, that provides rapid monthly publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic, and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days after acceptance, the peer-review process takes only 7 days, and all articles published in Research Inventy are peer-reviewed.
Content Based Video Retrieval in Transformed Domain using Fractional Coeffici... (CSCJournals)
With the development of multimedia and growing databases, there is huge demand for video retrieval systems, driving a shift from text-based to content-based retrieval. The selection of extracted features plays an important role in content-based video retrieval; good feature selection also reduces the time and space costs of the retrieval process. Different methods [1,2,3] have been proposed to develop video retrieval systems with better accuracy.
The proposed technique uses transforms to extract the features: the Discrete Cosine, Walsh, Haar, Kekre, Discrete Sine, Slant, and Discrete Hartley transforms. The energy compaction of these transforms into the low-order coefficients is exploited to reduce the feature-vector size by taking fractional coefficients [5] of the transformed video frames; a smaller feature vector takes less time to compare, yielding faster retrieval. Coefficient sets of 100%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012%, 0.006%, and 0.003% of the complete transformed coefficients are considered as feature vectors. The database consists of 500 videos spread across 10 categories.
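The fractional-coefficient idea can be sketched as below. The paper uses the DCT/Walsh/Haar family; the FFT magnitude stands in here as a numpy-only illustration of keeping only the energy-compacted corner of the transform:

```python
import numpy as np

def fractional_features(frame, keep=0.0625):
    """Feature vector kept from the low-frequency corner of a frame's 2-D
    transform, where energy compaction concentrates most information.
    keep=0.0625 retains 6.25% of the coefficients (a quarter per axis)."""
    coeffs = np.abs(np.fft.fft2(frame))
    side = keep ** 0.5                       # fraction kept along each axis
    h = max(1, int(round(frame.shape[0] * side)))
    w = max(1, int(round(frame.shape[1] * side)))
    return coeffs[:h, :w].ravel()

frame = np.arange(64.0).reshape(8, 8)
print(fractional_features(frame, 0.0625).shape)   # (4,) from a 2x2 corner block
```

Halving the kept fraction per axis quarters the feature-vector length, which is what makes the 0.003% setting in the abstract so fast to compare.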
IMPROVEMENT OF BM3D ALGORITHM AND EMPLOYMENT TO SATELLITE AND CFA IMAGES DENO... (ijistjournal)
This paper proposes a new procedure to improve the performance of the block-matching and 3-D filtering (BM3D) image denoising algorithm. It is demonstrated that a better performance than that of BM3D can be achieved across a variety of noise levels. The method changes BM3D parameter values according to the noise level and removes the prefiltering used at high noise levels; as a result, the peak signal-to-noise ratio (PSNR) and visual quality improve, and BM3D's complexity and processing time are reduced. The improved BM3D algorithm is then extended to denoise satellite and color filter array (CFA) images, where results show improved performance over current denoising methods. In this regard, the algorithm is compared, in terms of PSNR and visual quality, with the adaptive PCA algorithm that previously gave superior performance for denoising CFA images, and the processing time decreases significantly.
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMS (csandit)
Fingerprint image compression based on geometric transforms is an important research topic; in recent years many transforms have been proposed to best represent this particular type of image, including classical wavelets and wave atoms. In this paper we present a comparative study between these transforms for use in compression. The results show that for fingerprint images, wave atoms offer better performance than the current transform-based compression standard. The wave atom transform contributes considerably to the compression of fingerprint images by achieving high compression ratios and PSNR values with a reduced number of coefficients. In addition, the proposed method is verified with objective and subjective testing.
We develop an efficient MRI denoising algorithm based on sparse representation and the curvelet transform within a variance-stabilizing transformation framework. Using sparse representation, an MR image is decomposed into a sparse coefficient matrix containing many zeros. The curvelet transform is directional in nature and preserves the important edge and texture details of MR images. To obtain both sparsity and texture preservation, we post-process the denoising result of the sparse-based method with the curvelet transform. To apply our sparse-based curvelet-transform denoising method to Rician noise in MR images, we use forward and inverse variance-stabilizing transformations. Experimental results show the efficacy of our approach in removing Rician noise while preserving image details well. The proposed method outperforms existing denoising methods in terms of PSNR and SSIM for T1- and T2-weighted MR images.
Coherence enhancement diffusion using robust orientation estimation (csandit)
In this paper, a new robust orientation estimation for Coherence Enhancement Diffusion (CED) is proposed. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which a pre-calculated orientation, obtained by orientation diffusion, is used to find the correct true local scale. Experiments show that the proposed scheme works much better in noisy environments than traditional CED.
Boosting CED Using Robust Orientation Estimation (ijma)
In this paper, Coherence Enhancement Diffusion (CED) is boosted by feeding in an external orientation from a new robust orientation estimation. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which the orientation is pre-calculated using local and integration scales. Experiments show that the proposed scheme works much better in noisy environments than traditional CED.
International Journal of Computational Engineering Research (IJCER) is an international online journal publishing monthly in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
ANALYSIS OF INTEREST POINTS OF CURVELET COEFFICIENTS CONTRIBUTIONS OF MICROS...sipij
This paper focuses on improved edge model based on Curvelet coefficients analysis. Curvelet transform is
a powerful tool for multiresolution representation of object with anisotropic edge. Curvelet coefficients
contributions have been analyzed using Scale Invariant Feature Transform (SIFT), commonly used to study
local structure in images. The permutation of Curvelet coefficients from original image and edges image
obtained from gradient operator is used to improve original edges. Experimental results show that this
method brings out details on edges when the decomposition scale increases.
Segmentation Based Multilevel Wide Band Compression for SAR Images Using Coif...CSCJournals
Synthetic aperture radar (SAR) data represents a significant resource of information for a large variety of researchers. Thus, there is a strong interest in developing data encoding and decoding algorithms which can obtain higher compression ratios while keeping image quality to an acceptable level. In this work, results of different wavelet-based image compression and segmentation based wavelet image compression are assessed through controlled experiments on synthetic SAR images. The effects of dissimilar wavelet functions, number of decompositions are examined in order to find optimal family for SAR images. The choice of optimal wavelets in segmentation based wavelet image compression is coiflet for low frequency and high frequency component. The results presented here is a good reference for SAR application developers to choose the wavelet families and also it concludes that wavelets transform is rapid, robust and reliable tool for SAR image compression. Numerical results confirm the potency of this approach.
Sub-windowed laser speckle image velocimetry by fast fourier transform technique
Abstract
In this work, laser speckle velocimetry, a unique optical method for velocity measurement of fluid flow has been described. A laser sheet is developed and is illuminated on microscopic seeded particles to produce the speckle pattern at the recording plane. Double frame- single-exposure speckle images are captured in such a way that the second speckle image is shifted exactly in a known direction. The auto-correlation method has the ambiguity of direction of flow. To rectify this, spatial shift of the second image has been premeditated. Cross-correlation of sub interrogation areas is obtained by Fast Fourier Transform technique. Four sub-windows processed to obtain the velocity information with vector map analysis precisely.
An Experimental Approach For Evaluating Superpixel's Consistency Over 2D Gaus...CSCJournals
This article proposes a rigorous method to assess the consistency of superpixels for different superpixel segmentation algorithms. The proposed method extracts the superpixels that remain unchanged over certain levels of noise by adopting the Jaccard Similarity Coefficient (JSC). Technically, we developed a measure of Jaccard similarity for superpixel segmentation algorithms to compare the similarity between sets of superpixels (original and noisy). The algorithm calls the superpixel segmentation algorithm to generate the superpixel results of the original images and saves their boundary masks and labels. It then applies varying degrees of noise to the images and produces the superpixels results, and the process is repeated for four levels with increased noise value at each iteration. We chose 2D Gaussian Blur, Impulse Noise and a combination of both to corrupt the images. The proposed algorithm generates similarity indices of superpixels (original and noisy) using Jaccard Similarity (JS). To be categorized as a consistent superpixel, the similarity index must meet a predefined coefficient threshold (?) of JSC. The superpixels consistency of four different superpixel segmentation algorithms including Bilateral geodesic distance (BGD), Flooding based superpixels generation (FBS), superpixels via geodesic distance (GDS), and Turbopixel (TP) are evaluated. Precisely, the experimental results demonstrated that no single algorithm was able to yield an optimal outcome and failed to maintain consistent superpixels at each level of noise. Conclusively, more robust superpixel algorithms must be developed to solve such problems effectively.
OBIA on Coastal Landform Based on Structure Tensor csandit
This paper presents the OBIA method based on structure tensor to identify complex coastal
landforms. That is, develop Hessian matrix by Gabor filtering and calculate multiscale structure
tensor. Extract edge information of image from the trace of structure tensor and conduct
watershed segment of the image. Then, develop texons and create texton histogram. Finally,
obtain the final results by means of maximum likelihood classification with KL divergence as
the similarity measurement. The study findings show that structure tensor could obtain
multiscale and all-direction information with small data redundancy. Moreover, the method
described in the current paper has high classification accuracy
An Application of Second Generation Wavelets for Image Denoising using Dual T...IDES Editor
The lifting scheme of the discrete wavelet transform
(DWT) is now quite well established as an efficient technique
for image denoising. The lifting scheme factorization of
biorthogonal filter banks is carried out with a linear-adaptive,
delay free and faster decomposition arithmetic. This adaptive
factorization is aimed to achieve a well transparent, more
generalized, complexity free fast decomposition process in
addition to preserve the features that an ordinary wavelet
decomposition process offers. This work is targeted to get
considerable reduction in computational complexity and power
required for decomposition. The hard striking demerits of
DWT structure viz., shift sensitivity and poor directionality
had already been proven to be washed out with an emergence
of dual tree complex wavelet (DT-CWT) structure. The well
versed features of DT-CWT and robust lifting scheme are
suitably combined to achieve an image denoising with prolific
rise in computational speed and directionality, also with a
desirable drop in computation time, power and complexity of
algorithm compared to all other techniques.
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIPcsandit
Most watermarking methods use pixel values or transform coefficients as the judgment condition for embedding or extracting a watermark image. Variation of these values may corrupt that condition and lead to incorrect judgments. To avoid this problem, we design a stable judgment mechanism whose outcome is not seriously influenced by such variation: the judgment depends on the scale relationship between two pixels. From observation of common signal-processing operations, the relative order of pixel values in a processed image usually stays stable unless the image has been manipulated by cropping or halftone transformation. This greatly helps reduce the modification strength needed to survive image-processing operations. Experimental results show that the proposed method resists various attacks while preserving image quality.
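The scale-relationship judgment can be illustrated with a minimal numpy sketch. This is an illustrative reconstruction, not the paper's exact embedding rule; the `margin`, the pixel-pair choice, and the assumption that pixel values stay below saturation are all assumptions.

```python
import numpy as np

def embed_bit(img, y, x1, x2, bit, margin=4):
    # Enforce a scale relationship between two pixels to encode one bit:
    # bit 1 -> img[y,x1] >= img[y,x2] + margin; bit 0 -> the reverse.
    # The margin adds robustness against mild value variations.
    # (Assumes pixel values stay comfortably below 255.)
    a, b = int(img[y, x1]), int(img[y, x2])
    if bit == 1:
        a, b = max(a, b) + margin, min(a, b)
    else:
        a, b = min(a, b), max(a, b) + margin
    img[y, x1], img[y, x2] = min(a, 255), min(b, 255)

def extract_bit(img, y, x1, x2):
    # The judgment depends only on which pixel is larger, not on exact values.
    return 1 if img[y, x1] >= img[y, x2] else 0
```

Because extraction compares only the order of the two pixels, small value changes from common signal-processing operations do not flip the recovered bit.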
Content Based Video Retrieval in Transformed Domain using Fractional Coeffici...CSCJournals
With the growth of multimedia and ever-larger databases there is huge demand for video retrieval systems, driving a shift from text-based to content-based retrieval. Selection of the extracted features plays an important role in content-based video retrieval, and good feature selection also reduces the time and space costs of the retrieval process. Different methods [1, 2, 3] have been proposed to build video retrieval systems with better accuracy.
The proposed technique uses transforms to extract features: the Discrete Cosine, Walsh, Haar, Kekre, Discrete Sine, Slant and Discrete Hartley transforms. The energy compaction of these transforms into the low-order coefficients is exploited to reduce the feature-vector size by taking fractional coefficients [5] of the transformed video frames; a smaller feature vector takes less time to compare, yielding faster retrieval. Coefficient subsets of 100%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012%, 0.006% and 0.003% of the complete transformed coefficients are considered as feature vectors. The database consists of 500 videos spread across 10 categories.
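The fractional-coefficient idea can be sketched with an orthonormal 2-D DCT in numpy. This is a hedged illustration only: the paper also uses Walsh, Haar and other transforms, and its exact coefficient-selection layout may differ.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are frequencies).
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] *= 1 / np.sqrt(n)
    D[1:] *= np.sqrt(2 / n)
    return D

def fractional_features(frame, frac):
    # Keep only the top-left (low-frequency) square of the 2-D DCT,
    # exploiting energy compaction to shrink the feature vector to
    # roughly `frac` of the full coefficient count.
    n = frame.shape[0]
    D = dct_matrix(n)
    coeffs = D @ frame @ D.T
    k = max(1, int(round(n * np.sqrt(frac))))
    return coeffs[:k, :k].ravel()
```

With `frac=0.0625` a 16×16 frame yields a 16-element feature vector instead of 256, which is exactly the 6.25% coefficient subset mentioned in the abstract.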
IMPROVEMENT OF BM3D ALGORITHM AND EMPLOYMENT TO SATELLITE AND CFA IMAGES DENO...ijistjournal
This paper proposes a new procedure to improve the performance of the block matching and 3-D filtering (BM3D) image-denoising algorithm. It is demonstrated that a better performance than that of the baseline BM3D algorithm can be achieved across a variety of noise levels. The method adapts the BM3D parameter values to the noise level and removes the prefiltering step used at high noise levels; as a result, peak signal-to-noise ratio (PSNR) and visual quality improve while BM3D's complexity and processing time are reduced. The improved BM3D algorithm is then extended to denoise satellite and color filter array (CFA) images. The results show improved performance compared with current methods for denoising satellite and CFA images. In this regard the algorithm is compared, in terms of PSNR and visual quality, with the Adaptive PCA algorithm, which had led the field for CFA image denoising; the processing time also decreases significantly.
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMScsandit
Fingerprint-image compression based on geometric transforms is an important research topic. In recent years many transforms have been proposed to best represent this particular type of image, among them classical wavelets and wave atoms. In this paper we present a comparative study between these transforms for use in compression. The results show that for fingerprint images, wave atoms offer better performance than the current transform-based compression standard. The wave atom transform contributes considerably to the compression of fingerprint images, achieving high compression ratios and PSNR with a reduced number of coefficients. In addition, the proposed method is verified with objective and subjective testing.
We develop an efficient MRI denoising algorithm based on sparse representation and the curvelet transform within a variance-stabilizing transformation framework. Using sparse representation, an MR image is decomposed into a sparse coefficient matrix with many zero entries. The curvelet transform is directional in nature and preserves the important edge and texture details of MR images. To obtain both sparsity and texture preservation, we post-process the result of the sparse-representation denoising with the curvelet transform. To apply the proposed sparse-based curvelet denoising method to Rician noise in MR images, we use forward and inverse variance-stabilizing transformations. Experimental results reveal the efficacy of our approach at removing Rician noise while preserving image details, and show improved performance over existing denoising methods in terms of PSNR and SSIM on T1- and T2-weighted MR images.
Coherence enhancement diffusion using robust orientation estimationcsandit
In this paper, a new robust orientation estimation for coherence enhancement diffusion (CED) is proposed. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which a precalculated orientation, obtained by orientation diffusion, is used to find the correct local scale. Experiments show that the proposed scheme performs much better in noisy environments than traditional coherence enhancement diffusion.
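A minimal structure-tensor orientation estimate, a common baseline for the kind of orientation input CED consumes, can be sketched in numpy. This is illustrative only; the paper's orientation-diffusion scheme and scale selection are more elaborate.

```python
import numpy as np

def ridge_orientation(img, win=3):
    # Estimate the dominant local orientation from the smoothed
    # structure tensor (a simple stand-in for orientation diffusion).
    gy, gx = np.gradient(img.astype(float))

    def smooth(a):
        # Box-average over a (2*win+1)^2 integration window.
        pad = np.pad(a, win, mode='edge')
        out = np.zeros_like(a)
        h, w = a.shape
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                out += pad[win + dy:win + dy + h, win + dx:win + dx + w]
        return out / (2 * win + 1) ** 2

    jxx, jyy, jxy = smooth(gx * gx), smooth(gy * gy), smooth(gx * gy)
    # Angle of the dominant eigenvector of the structure tensor.
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)
```

On a pattern of vertical stripes the gradient points along x everywhere, so the estimated angle is zero across the image, which is the expected dominant orientation.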
Boosting CED Using Robust Orientation Estimationijma
In this paper, coherence enhancement diffusion (CED) is boosted by feeding it an external orientation from a new robust orientation estimation. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which a precalculated orientation, obtained using local and integration scales, is used. Experiments show that the proposed scheme performs much better in noisy environments than traditional coherence enhancement diffusion.
Geometric wavelet transform for optical flow estimation algorithmijcga
This paper describes an algorithm for computing the optical flow (OF) vector of a moving object in a video sequence based on the geometric wavelet transform (GWT). The method estimates the motion between two successive frames by projecting the OF vectors onto a basis of geometric wavelets. Using the GWT for OF estimation has been attracting much attention: the approach takes advantage of the geometric wavelet filter properties and requires only two frames. The algorithm is fast and able to estimate the OF with low complexity. The technique is suitable for video compression and can also be used for stereo vision and image registration.
A Comparative Study of Wavelet and Curvelet Transform for Image DenoisingIOSR Journals
Abstract: This paper compares the discriminating power of various multiresolution thresholding techniques, namely the wavelet and curvelet transforms, for image denoising. The curvelet transform offers exact reconstruction, stability under perturbation, ease of implementation and low computational complexity. We propose to employ curvelets for facial feature extraction and perform a thorough comparison against the wavelet transform, analysing in particular the orientation selectivity of curvelets. Experiments show that under expression changes the small-scale curvelet coefficients are robust, though the large-scale coefficients of both transforms are affected. The advantage of curvelets lies in their ability to form sparse representations, which is critical for compression, for estimating denoised images and for the associated inverse problems; the experiments thus coincide with the theoretical analysis. Keywords: curvelet transform, face recognition, feature extraction, sparse representation, thresholding rules, wavelet transform.
In this paper a PDE-based hybrid method for image denoising is introduced. The method is a bi-stage filter: anisotropic diffusion followed by wavelet-based Bayesian shrinkage. Efficient denoising is achieved by reducing the convergence time of the anisotropic diffusion.
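The diffusion stage of such a bi-stage filter can be sketched with the classic Perona-Malik scheme. This is an assumed stand-in: the paper's exact diffusion and stopping rule are not reproduced here, and `kappa`, `dt` and the iteration count are conventional defaults.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=20.0, dt=0.2):
    # Edge-stopping anisotropic diffusion: flat regions are smoothed while
    # the conductance g = exp(-(|grad|/kappa)^2) suppresses flow across edges.
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four neighbours (zero flux across borders).
        dn = np.roll(u, -1, 0) - u; dn[-1] = 0
        ds = np.roll(u, 1, 0) - u;  ds[0] = 0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u;  dw[:, 0] = 0
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Because the per-pair fluxes cancel, the scheme conserves the image mean while reducing the noise variance, which is why it is a natural first stage before wavelet shrinkage.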
Optimization of Aberrated Coherent Optical SystemsIOSR Journals
The images of a straight edge in coherent illumination, produced by an optical system with a circular aperture apodised with amplitude filters, have been studied. Image-quality assessment parameters such as the edge ringing, edge gradient and edge shift of the edge fringes have been studied as a function of the apodisation parameter for various degrees of defocus, coma and primary spherical aberration. It is found that at certain combinations of aberrations the quality of the image of straight-edge objects can be improved.
A Novel Approach to Analyze Satellite Images for Severe Weather EventsIJERA Editor
Severe weather events (e.g. cyclones, thunderstorms, fog, floods) have catastrophic effects on human life and aggravate impacts on agriculture, the economy and more. Information related to severe weather, in any form, is therefore important for mitigation and helps delineate the responsible causes and influences. Satellite images provide such information on the formation of these events at early stages and help track their spatial and temporal development. This paper presents a framework to obtain maximum information about severe weather events from satellite imagery. The backbone of the proposed framework is high-quality noise filtering followed by a region-growing technique that seeds each pixel, thus providing refined information for further extraction. This approach will be very helpful not only for acquiring valuable information on developing severe weather events, but also for other fields (e.g. medicine and astronomy) where high-precision imaging is required.
WAVELET THRESHOLDING APPROACH FOR IMAGE DENOISINGIJNSA Journal
Removing Gaussian noise from a corrupted image is a long-established problem in signal and image processing. Here the noise is removed by wavelet thresholding, focusing on statistical modelling of the wavelet coefficients and the optimal choice of thresholds, a process known as image denoising. In the first part, the threshold is derived in a Bayesian framework using a probabilistic model of the image wavelet coefficients based on the generalized Gaussian distribution (GGD) widely used in image-processing applications. The proposed threshold is very simple, and experimental results show that the resulting method, BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark and outperforms Donoho and Johnstone's SureShrink. The second part of the paper argues that lossy compression can be used for denoising, achieving image compression and image denoising simultaneously. The compression parameter is chosen using a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compress-and-denoise method does indeed remove noise significantly, especially at large noise power.
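The core of the BayesShrink rule can be sketched in a few lines of numpy. Subband decomposition and the usual noise estimate (median(|HH|)/0.6745) are omitted; this is a simplified illustration of the threshold and the shrinkage it drives.

```python
import numpy as np

def bayes_shrink_threshold(coeffs, noise_sigma):
    # BayesShrink: T = sigma_n^2 / sigma_x, where the signal spread sigma_x
    # is estimated from the observed subband variance minus the noise variance.
    var_signal = max(coeffs.var() - noise_sigma ** 2, 1e-12)
    return noise_sigma ** 2 / np.sqrt(var_signal)

def soft_threshold(coeffs, t):
    # Shrink every coefficient toward zero by t; kill those with |c| < t.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

In a full denoiser the threshold is computed per wavelet subband and applied with `soft_threshold` before the inverse transform.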
Adapter Wavelet Thresholding for Image Denoising Using Various Shrinkage Unde...muhammed jassim k
Single image super resolution with improved wavelet interpolation and iterati...iosrjce
Image Interpolation Techniques in Digital Image Processing: An OverviewIJERA Editor
In the current digital era, image interpolation techniques based on multi-resolution analysis are being discovered and developed. These techniques are gaining importance due to their applications in a variety of fields (medical, geographical, space information) where fine and minor details are important. This paper presents an overview of different interpolation techniques: nearest neighbour, bilinear, bicubic, B-spline, Lanczos, discrete wavelet transform (DWT) and Kriging. Our results show that bicubic interpolation gives better results than nearest neighbour and bilinear, whereas DWT and Kriging recover finer details.
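Bilinear interpolation, the middle ground between nearest-neighbour and bicubic, can be sketched in numpy as follows. This is an illustrative implementation and is not tied to any particular library's edge-handling conventions.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    # Each output pixel is a distance-weighted blend of its four nearest
    # input pixels, giving smoother results than nearest-neighbour.
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.clip(ys.astype(int), 0, h - 2)
    x0 = np.clip(xs.astype(int), 0, w - 2)
    wy = (ys - y0)[:, None]          # vertical blend weights in [0, 1]
    wx = (xs - x0)[None, :]          # horizontal blend weights in [0, 1]
    a = img[np.ix_(y0, x0)]          # top-left neighbours
    b = img[np.ix_(y0, x0 + 1)]      # top-right
    c = img[np.ix_(y0 + 1, x0)]      # bottom-left
    d = img[np.ix_(y0 + 1, x0 + 1)]  # bottom-right
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)
```

Resizing an image to its own dimensions reproduces it exactly, and upsampling produces values that interpolate linearly between neighbouring samples.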
Square transposition: an approach to the transposition process in block cipherjournalBEEI
The transposition process is needed in cryptography to create a diffusion effect in the data encryption standard (DES) and advanced encryption standard (AES) algorithms, the information-security standards of the National Institute of Standards and Technology. The problem with the DES and AES algorithms is that their transposition index values form patterns rather than random values. This condition makes it easier for a cryptanalyst to look for relationships between ciphertexts, because some processes become predictable. This research designs a transposition algorithm called square transposition. Each process uses an 8 × 8 square as the place to insert and retrieve 64 bits. Pairing an input scheme with a retrieval scheme of unequal flow is the key factor in producing a good transposition. The square transposition generates random, pattern-free indices, and can therefore transpose better than DES and AES.
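The insert/retrieve idea on an 8 × 8 square can be illustrated with a toy traversal pair. These are hypothetical schemes chosen for illustration; the paper's actual input and retrieval flows differ.

```python
def square_transposition(bits):
    # Insert 64 bits into an 8x8 square row by row, then retrieve them
    # column by column with alternating direction, so the retrieval flow
    # is unequal to the insertion flow and the index pattern is broken up.
    assert len(bits) == 64
    square = [[bits[r * 8 + c] for c in range(8)] for r in range(8)]
    out = []
    for c in range(8):
        rows = range(8) if c % 2 == 0 else range(7, -1, -1)
        out.extend(square[r][c] for r in rows)
    return out
```

Feeding in the indices 0..63 shows the resulting index permutation directly; decryption simply applies the inverse permutation.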
Hyper-parameter optimization of convolutional neural network based on particl...journalBEEI
Deep neural networks have made enormous progress in tackling many problems. In particular, the convolutional neural network (CNN) is a category of deep network that has become a dominant technique in computer-vision tasks. Although these deep networks are highly effective, the ideal structure is still an open issue that needs much investigation: a deep CNN model is usually designed manually by repeated trials, which greatly constrains its application. Many hyper-parameters of a CNN affect model performance, such as the depth of the network, the number of convolutional layers, and the number of kernels and their sizes. It is therefore a huge challenge to design an appropriate CNN model with optimized hyper-parameters while reducing the reliance on manual involvement and domain expertise. In this paper, an architecture design method for CNNs is proposed that uses the particle swarm optimization (PSO) algorithm to learn optimal CNN hyper-parameter values. In the experiment, we used the Modified National Institute of Standards and Technology (MNIST) database of handwritten digits. The experiments showed that our proposed approach can find an architecture that is competitive with state-of-the-art models, with a testing error of 0.87%.
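A global-best PSO loop of the kind used for such hyper-parameter searches can be sketched as follows. In the paper the objective would be a trained CNN's validation error over the hyper-parameter box; here a toy function stands in for it, and the inertia/acceleration constants are conventional defaults rather than the paper's settings.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    # Standard global-best particle swarm optimization over a continuous box.
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))  # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()                                     # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                   # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For real hyper-parameters such as "number of kernels" the continuous position would be rounded before evaluating the CNN.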
Supervised machine learning based liver disease prediction approach with LASS...journalBEEI
In this contemporary era, machine learning techniques are used increasingly in medical science for detecting diseases such as liver disease (LD). Around the globe, a large number of people die of this deadly disease, and diagnosing it at an early stage makes early, curative treatment possible. In this paper, a method is proposed to diagnose LD using supervised machine-learning classification algorithms, namely logistic regression, decision tree, random forest, AdaBoost, KNN, linear discriminant analysis, gradient boosting and the support vector machine (SVM). We also apply the least absolute shrinkage and selection operator (LASSO) feature-selection technique to the dataset to identify the attributes most highly correlated with LD. The predictions, made with 10-fold cross-validation (CV), are evaluated in terms of accuracy, sensitivity, precision and F1-score. The decision tree algorithm performs best, with accuracy, precision, sensitivity and F1-score of 94.295%, 92%, 99% and 96% respectively when LASSO is included. Furthermore, a comparison with recent studies demonstrates the significance of the proposed system.
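LASSO's role as a feature selector can be illustrated with a small proximal-gradient (ISTA) solver in numpy. This is an assumed, self-contained sketch; the paper presumably used a library implementation, and `alpha` here is an arbitrary illustrative penalty.

```python
import numpy as np

def lasso_ista(X, y, alpha, n_iter=500):
    # Proximal-gradient (ISTA) solver for the LASSO:
    #   minimize (1/2n) * ||y - Xw||^2 + alpha * ||w||_1
    # The L1 penalty drives the weights of irrelevant features to exactly
    # zero, which is what makes LASSO usable for feature selection.
    n = X.shape[0]
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = w - step * grad
        # Soft-threshold: the proximal operator of the L1 penalty.
        w = np.sign(w) * np.maximum(np.abs(w) - step * alpha, 0.0)
    return w
```

On data generated from a sparse linear model, the recovered weight vector keeps the truly informative features and zeroes out the rest, mirroring how LASSO suggests the most relevant LD attributes before classification.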
A secure and energy saving protocol for wireless sensor networksjournalBEEI
The wireless sensor network (WSN) domain has been researched extensively, with innovative technologies and research directions addressing the usability of WSNs under various schemes. The domain permits dependable tracking of a diversity of environments for both military and civil applications. The key-management mechanism is the primary protocol for keeping the privacy and confidentiality of the data transmitted among the sensor nodes in a WSN. Since the nodes are small, they are intrinsically limited by inadequate resources such as battery lifetime and memory capacity. The proposed secure and energy saving protocol (SESP) for wireless sensor networks has a significant impact on overall network lifetime and energy dissipation. To encrypt sent messages, SESP uses the concept of public-key cryptography and depends on the sensor nodes' identities (IDs) to prevent message replay, so that the security goals of authentication, confidentiality, integrity, availability and freshness are achieved. Finally, simulation results show that the proposed approach yields better energy consumption and network lifetime than the low-energy adaptive clustering hierarchy (LEACH) scheme: sensors die after 900 rounds under the proposed SESP protocol, whereas under LEACH they die after 750 rounds.
Plant leaf identification system using convolutional neural networkjournalBEEI
This paper proposes a leaf-identification system using a convolutional neural network (CNN). The system can identify five types of local Malaysian leaf: acacia, papaya, cherry, mango and rambutan. Using a CNN from deep learning, the network is trained for image classification on a database of leaf images captured by mobile phone, with ResNet-50 as the architecture of the classification network. Recognition of leaf photographs requires several steps: image pre-processing, feature extraction, plant identification, matching and testing, and finally extraction of the results, all achieved in MATLAB. The test set consists of three types of images: white background, noise added, and random background. Finally, an interface for the leaf-identification system was developed as the end software product using MATLAB App Designer. The accuracy achieved on each training set over the five leaf classes is above 98%, so the recognition process was implemented successfully.
Customized moodle-based learning management system for socially disadvantaged...journalBEEI
This study aims to develop a Moodle-based LMS with customized learning content and a modified user interface to facilitate pedagogical processes during the COVID-19 pandemic, and to investigate how teachers at socially disadvantaged schools perceived its usability and technology acceptance. A co-design process was conducted with two activities: 1) a needs-assessment phase using an online survey and interview sessions with the teachers, and 2) the development phase of the LMS. The system was evaluated by 30 teachers from socially disadvantaged schools for relevance to their distance-learning activities. We employed the computer software usability questionnaire (CSUQ) to measure perceived usability, and the technology acceptance model (TAM) with its 3 original variables (perceived usefulness, perceived ease of use, and intention to use) plus 5 external variables (attitude toward the system, perceived interaction, self-efficacy, user-interface design, and course design). The average CSUQ rating exceeded 5.0 on a 7-point scale, indicating that teachers agreed the information quality, interaction quality and user-interface quality were clear and easy to understand. The TAM results concluded that the LMS design was usable, interactive and well developed. Teachers reported an effective user interface that supports effective teaching operations, leading to prompt adoption of the system.
Understanding the role of individual learner in adaptive and personalized e-l...journalBEEI
The dynamic learning environment has emerged as a powerful platform in modern e-learning systems. A learning situation that constantly changes forces the platform to adapt and personalize its learning resources for students. Evidence suggests that adaptation and personalization of e-learning systems (APLS) can be achieved by utilizing learner modeling, domain modeling, and instructional modeling. In the APLS literature, questions have been raised about which individual characteristics are relevant for adaptation; with several options available, the student attributes used in APLS often overlap and are not related between studies. Therefore, this study proposes a list of learner-model attributes in dynamic learning to support adaptation and personalization. The study was conducted by exploring concepts from literature selected on best-fit criteria. We describe the important concepts in student modeling, provide definitions and examples of the data values researchers have used, and discuss the implementation of the selected learner model for providing adaptation in dynamic learning.
Prototype mobile contactless transaction system in traditional markets to sup...journalBEEI
One way to prevent and reduce the spread of the COVID-19 pandemic is through physical distancing. This research aims to develop a prototype contactless transaction system using digital payment mechanisms and QR-code technology, to be applied in traditional markets. The electronic market system was developed using a prototype approach. QR codes and digital payments are applied as a solution to minimize the money-exchange contact that is common in traditional markets. The results show that the system accelerates and facilitates buying-and-selling transactions in a traditional-market environment. Alpha testing shows that all system functions run well, while beta testing shows that users accept the system very well. The results also show acceptance of the system's usefulness and users' optimism about exploiting it both technologically and functionally, so that it can become part of the digital transformation of traditional markets into electronic markets and one of the solutions for reducing the spread of the current COVID-19 pandemic.
Wireless HART stack using multiprocessor technique with laxity algorithmjournalBEEI
A real-time operating system (RTOS) is required for the demarcation of industrial wireless sensor network (IWSN) stacks. In the industrial world, a vast number of sensors are utilised to gather various types of data, and the gathered data cannot be prioritised ahead of time because all of the information is equally essential. As a result, a protocol stack is employed to guarantee that data are acquired and processed fairly; in an IWSN the protocol stack is implemented on an RTOS. The data collected from IWSN sensor nodes are processed using non-preemptive scheduling and the protocol stack, and then sent in parallel to the IWSN's central controller. The RTOS mediates between hardware and software: packets must be sent at specific times, and some packets may collide during transmission. This project is undertaken to get around such collisions. As a prototype, the project is divided into two parts: the first uses an RTOS on the LPC2148 as the master node, while the second serves as a standard data-collection node to which the sensors are attached (any controller may be used for this part, depending on the situation). WirelessHART allows the two nodes to communicate with each other.
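Laxity-based scheduling, which the title refers to, can be sketched in a few lines. This is a preemptive least-laxity-first toy simulated tick by tick; the stack described above uses its own non-preemptive variant, and the task fields here are assumptions for illustration.

```python
def lax_schedule(tasks, horizon):
    # Each tick, run the ready task with the least laxity (slack):
    #   laxity = deadline - now - remaining_time
    # A task with zero laxity must run immediately to meet its deadline.
    timeline = []
    for now in range(horizon):
        ready = [t for t in tasks if t['remaining'] > 0]
        if not ready:
            break
        t = min(ready, key=lambda t: t['deadline'] - now - t['remaining'])
        t['remaining'] -= 1
        timeline.append(t['name'])
    return timeline
```

With two equally important packets the schedule interleaves them according to slack, so the more urgent transmission always wins the next slot without any static priorities.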
Implementation of double-layer loaded on octagon microstrip yagi antennajournalBEEI
A double layer loaded on an octagon microstrip Yagi antenna (OMYA) at the 5.8 GHz industrial, scientific and medical (ISM) band is investigated in this paper. The double layer consists of two double-positive (DPS) substrates. The OMYA overlaid with the double-layer configuration was simulated, fabricated and measured, and good agreement was observed between the computed and measured gain. The comparison shows that a 2.5 dB improvement in OMYA gain can be obtained by applying the double layer on top of the OMYA, while the measured bandwidth of the OMYA with the double layer is 14.6%. This indicates that the double layer can be used to increase the OMYA's performance in terms of both gain and bandwidth.
The calculation of the field of an antenna located near the human headjournalBEEI
In this work, a numerical calculation was carried out in one of the universal programs for automatic electrodynamic design. The calculation is aimed at obtaining numerical values of the specific absorption rate (SAR); it is the SAR value that can be used to determine the effect of a wireless device's antenna on biological objects. The dipole parameters were selected for GSM1800. The investigation of how the distance to the cell phone influences the radiation absorbed in a person's head shows that the effect of electromagnetic radiation on the brain decreases by a factor of three: the SAR value decreases by almost three times, which is a very important and acceptable result.
Exact secure outage probability performance of uplinkdownlink multiple access...journalBEEI
In this paper, we study uplink-downlink non-orthogonal multiple access (NOMA) systems, considering secure performance at the physical layer. In the considered system model, the base station acts as a relay allowing two users on the left side to communicate with two users on the right side. Under imperfect channel state information (CSI), secure performance needs to be studied, since an eavesdropper wants to overhear the signals processed on the downlink. As the secure-performance metric, we derive exact expressions for the secrecy outage probability (SOP) and evaluate the impact of the main parameters on it. The important finding is that higher secrecy performance is achieved at high signal-to-noise ratio (SNR); moreover, the numerical results demonstrate that the SOP tends to a constant at high SNR. Finally, our results show that the power-allocation factors and target rates are the main factors affecting the secrecy performance of the considered uplink-downlink NOMA systems.
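For reference, the standard definitions behind the SOP metric can be written in their generic form (the paper's exact channel model and imperfect-CSI terms are not reproduced here):

```latex
C_s = \left[\log_2\!\left(1+\gamma_D\right) - \log_2\!\left(1+\gamma_E\right)\right]^{+},
\qquad
\mathrm{SOP} = \Pr\!\left(C_s < R_s\right),
```

where $\gamma_D$ and $\gamma_E$ are the instantaneous SNRs at the legitimate receiver and the eavesdropper, $[x]^{+} = \max(x, 0)$, and $R_s$ is the target secrecy rate. The SOP floor observed at high SNR arises because both $\gamma_D$ and $\gamma_E$ grow with transmit power, so their ratio, and hence $C_s$, saturates.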
Design of a dual-band antenna for energy harvesting applicationjournalBEEI
This report presents an investigation of how to improve a current dual-band antenna to obtain better antenna parameters for energy-harvesting applications, and to develop and validate a new design operating at 2.4 GHz and 5.4 GHz. At 5.4 GHz more data can be transmitted than at 2.4 GHz; however, 2.4 GHz radiates over a longer distance, so it can be used far from the antenna module, unlike the 5 GHz band with its shorter radiation range. The project covers the design and testing of the antenna using the computer simulation technology (CST) 2018 software and vector network analyzer (VNA) equipment. During the design process, the fundamental antenna parameters are measured and validated in order to identify the better-performing antenna.
Transforming data-centric eXtensible markup language into relational database...journalBEEI
eXtensible markup language (XML) has emerged internationally as the format for data representation over the web, yet most organizations still utilise relational databases as their database solutions. As such, it is crucial to provide seamless integration via effective transformation between these database infrastructures. In this paper, we propose XML-REG to bridge the two technologies based on node-based and path-based approaches: the node-based approach annotates each positional node uniquely, while the path-based approach provides summarised path information to join the nodes. On top of that, a new range labelling is proposed to annotate nodes uniquely while ensuring that the structural relationships between nodes are maintained. If a new node is added to the document, re-labelling is not required, as a new label is assigned to the node by the proposed labelling scheme. Experimental evaluations indicate that the performance of XML-REG exceeds XMap, XRecursive, XAncestor and Mini-XML with respect to storage time, query retrieval time and scalability. This research produces a core framework for XML-to-relational-database (RDB) mapping, which could be adopted in various industries.
Key performance requirement of future next wireless networks (6G)journalBEEI
Given the massive potentials of 5G communication networks and their foreseeable evolution, what should there be in 6G that is not in 5G or its long-term evolution? 6G communication networks are estimated to integrate the terrestrial, aerial, and maritime communications into a forceful network which would be faster, more reliable, and can support a massive number of devices with ultra-low latency requirements. This article presents a complete overview of potential 6G communication networks. The major contribution of this study is to present a broad overview of key performance indicators (KPIs) of 6G networks that cover the latest manufacturing progress in the environment of the principal areas of research application, and challenges.
Modeling climate phenomenon with software grids analysis and display system i...journalBEEI
This study aims to model climate change based on rainfall, air temperature, pressure, humidity and wind with grADS software and create a global warming module. This research uses 3D model, define, design, and develop. The results of the modeling of the five climate elements consist of the annual average temperature in Indonesia in 2009-2015 which is between 29oC to 30.1oC, the horizontal distribution of the annual average pressure in Indonesia in 2009-2018 is between 800 mBar to 1000 mBar, the horizontal distribution the average annual humidity in Indonesia in 2009 and 2011 ranged between 27-57, in 2012-2015, 2017 and 2018 it ranged between 30-60, during the East Monsoon, the wind circulation moved from northern Indonesia to the southern region Indonesia. During the west monsoon, the wind circulation moves from the southern part of Indonesia to the northern part of Indonesia. The global warming module for SMA/MA produced is feasible to use, this is in accordance with the value given by the validate of 69 which is in the appropriate category and the response of teachers and students through a 91% questionnaire.
An approach of re-organizing input dataset to enhance the quality of emotion ...journalBEEI
The purpose of this paper is to propose an approach of re-organizing input data to recognize emotion based on short signal segments and increase the quality of emotional recognition using physiological signals. MIT's long physiological signal set was divided into two new datasets, with shorter and overlapped segments. Three different classification methods (support vector machine, random forest, and multilayer perceptron) were implemented to identify eight emotional states based on statistical features of each segment in these two datasets. By re-organizing the input dataset, the quality of recognition results was enhanced. The random forest shows the best classification result among three implemented classification methods, with an accuracy of 97.72% for eight emotional states, on the overlapped dataset. This approach shows that, by re-organizing the input dataset, the high accuracy of recognition results can be achieved without the use of EEG and ECG signals.
Parking detection system using background subtraction and HSV color segmentationjournalBEEI
Manual system vehicle parking makes finding vacant parking lots difficult, so it has to check directly to the vacant space. If many people do parking, then the time needed for it is very much or requires many people to handle it. This research develops a real-time parking system to detect parking. The system is designed using the HSV color segmentation method in determining the background image. In addition, the detection process uses the background subtraction method. Applying these two methods requires image preprocessing using several methods such as grayscaling, blurring (low-pass filter). In addition, it is followed by a thresholding and filtering process to get the best image in the detection process. In the process, there is a determination of the ROI to determine the focus area of the object identified as empty parking. The parking detection process produces the best average accuracy of 95.76%. The minimum threshold value of 255 pixels is 0.4. This value is the best value from 33 test data in several criteria, such as the time of capture, composition and color of the vehicle, the shape of the shadow of the object’s environment, and the intensity of light. This parking detection system can be implemented in real-time to determine the position of an empty place.
Quality of service performances of video and voice transmission in universal ...journalBEEI
The universal mobile telecommunications system (UMTS) has distinct benefits in that it supports a wide range of quality of service (QoS) criteria that users require in order to fulfill their requirements. The transmission of video and audio in real-time applications places a high demand on the cellular network, therefore QoS is a major problem in these applications. The ability to provide QoS in the UMTS backbone network necessitates an active QoS mechanism in order to maintain the necessary level of convenience on UMTS networks. For UMTS networks, investigation models for end-to-end QoS, total transmitted and received data, packet loss, and throughput providing techniques are run and assessed and the simulation results are examined. According to the results, appropriate QoS adaption allows for specific voice and video transmission. Finally, by analyzing existing QoS parameters, the QoS performance of 4G/UMTS networks may be improved.
A multi-task learning based hybrid prediction algorithm for privacy preservin...journalBEEI
There is ever increasing need to use computer vision devices to capture videos as part of many real-world applications. However, invading privacy of people is the cause of concern. There is need for protecting privacy of people while videos are used purposefully based on objective functions. One such use case is human activity recognition without disclosing human identity. In this paper, we proposed a multi-task learning based hybrid prediction algorithm (MTL-HPA) towards realising privacy preserving human activity recognition framework (PPHARF). It serves the purpose by recognizing human activities from videos while preserving identity of humans present in the multimedia object. Face of any person in the video is anonymized to preserve privacy while the actions of the person are exposed to get them extracted. Without losing utility of human activity recognition, anonymization is achieved. Humans and face detection methods file to reveal identity of the persons in video. We experimentally confirm with joint-annotated human motion data base (JHMDB) and daily action localization in YouTube (DALY) datasets that the framework recognises human activities and ensures non-disclosure of privacy information. Our approach is better than many traditional anonymization techniques such as noise adding, blurring, and masking.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on Binary Heap data structure. It is similar to the selection sort where we first find the minimum element and place the minimum element at the beginning. Repeat the same process for the remaining elements.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
Student information management system project report ii.pdf
Experimental analysis of non-Gaussian noise resistance on global method optical flow using bilateral in reverse confidential
Bulletin of Electrical Engineering and Informatics
Vol. 10, No. 2, April 2021, pp. 716~723
ISSN: 2302-9285, DOI: 10.11591/eei.v10i2.2740
Journal homepage: http://beei.org
Darun Kesrarat1, Vorapoj Patanavijit2
1Department of Information Technology, Vincent Mary School of Science and Technology, Assumption University, Bangkok, Thailand
2Department of Computer and Network Engineering, Vincent Mary School of Engineering, Assumption University, Bangkok, Thailand
Article Info

Article history:
Received Aug 14, 2020
Revised Dec 12, 2020
Accepted Feb 5, 2021

Keywords:
Optical flow
Non-Gaussian noise
Error vector magnitude
Bilateral filter
Reverse confidential

ABSTRACT

This paper presents an analysis of non-Gaussian noise resistance using bilateral in reverse confidential with optical flow. Optical flow samples the image's motion across consecutive images caused by object movement: it is a 2-D field in which every vector is a displacement vector describing the motion from the first image to the second. When noise interferes with the image sequence, the estimated optical flow vectors degrade. We demonstrate improved noise resistance by applying bilateral in reverse confidential to optical flow, evaluated by the error vector magnitude (EVM). Several noise resistance models of the global method optical flow are used for comparison, and several image sequences interfered with by different types of non-Gaussian noise are used for the experimental analysis.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Darun Kesrarat
Department of Information Technology
Vincent Mary School of Science and Technology
Assumption University, Bangkok, Thailand
Email: darunksr@gmail.com
1. INTRODUCTION
During the last three decades, optical flow [1, 2] has been applied in several fields, such as motion estimation, to determine the motion vector of moving objects, and motion compression. It underlies traditional image compression [3], motion compensation [4], video encoding [5, 6], and movement tracking [7, 8], where the difference between frames is coded, and super-resolution [9, 10], where a higher-resolution image is constructed. In essence, optical flow is the computational approach for identifying the motion vector (MV) at each pixel. Several optical flow approaches have been presented, such as the local method [11], the phase correlation method [12], and the block-based method [13], but this paper concentrates on the global method optical flow (G-OF) [14], a differential gradient-based algorithm that has been applied in several areas because of its acceptable performance in motion estimation. Many research works focus on computational efficiency [15, 16], but this paper focuses on noise resistance approaches.
In the real world, most image sequences are interfered with by many conditions that introduce noise. The main problem is noise interference: the performance of optical flow is sensitive to noise, which degrades the motion estimation result. Many approaches focus on pre-processing, such as noise remover models [17, 18] that remove the noise from the image beforehand. And many
approaches focus on post-processing in a specific area: several extensions of optical flow have been proposed during the last decade to improve performance over interfering noise, such as the bilateral function [19] and reverse confidential [20]. Both reverse confidential in optical flow and the bilateral function in optical flow have been proven effective for noise resistance.
This paper focuses on the fusion of the bilateral filter and reverse confidential, called the application of bilateral in reverse confidential with G-OF [21]. The objective of this paper is to analyze the performance of non-Gaussian noise resistance on global method optical flow using bilateral in reverse confidential. The non-Gaussian noises used in the experimental analysis are salt & pepper, speckle, and Poisson. In the experiments, we analyze the noise resistance performance with the error vector magnitude (EVM), which measures the degree of missed direction in the MV from the optical flow's outcome. We also compare the noise resistance against other noise resistance approaches. The paper is organized as follows: section 2 explains the research method in optical flow, section 3 presents the experimental results, and section 4 concludes.
2. RESEARCH METHODS IN GLOBAL METHOD OPTICAL FLOW AND NOISE RESISTANCE APPROACHES
This section explains the global method optical flow and the noise resistance approaches applied to it.
2.1. Global method optical flow
Global method optical flow (G-OF) [14] is a classical approach for estimating the 2-D motion vector field using the intensity of the brightness level with four-point mean differences [22, 23] and a smoothness constraint. The iterative equations are calculated to obtain the image velocity as in (1):
u^{i+1} = \bar{u}^i - \frac{I_x (I_x \bar{u}^i + I_y \bar{v}^i + I_k)}{\alpha^2 + I_x^2 + I_y^2}, \qquad v^{i+1} = \bar{v}^i - \frac{I_y (I_x \bar{u}^i + I_y \bar{v}^i + I_k)}{\alpha^2 + I_x^2 + I_y^2}   (1)

where \bar{u}^i and \bar{v}^i denote the neighborhood averages of the horizontal and vertical velocities, \alpha is the global smoothness weight, and i is the number of iterative
calculations. The gradient intensity is:
(2)
where I(x,y,k) is the gradient intensity of an element (x,y) in the images at frame no. k.
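As an illustration, the iteration in (1) can be sketched in Python. The gradient masks and the neighborhood-averaging weights below are assumptions taken from the classic Horn-Schunck formulation [14]; the paper's exact 4-point masks in (2) are not reproduced, and the function and parameter names are ours.

```python
import numpy as np

def neighborhood_avg(f, kernel):
    """Weighted average over the 8-neighborhood (edge-replicated borders)."""
    p = np.pad(f, 1, mode='edge')
    out = np.zeros_like(f)
    H, W = f.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            w = kernel[dy + 1, dx + 1]
            if w:
                out += w * p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    return out

def g_of(I1, I2, alpha=0.5, n_iter=50):
    """Sketch of the G-OF iteration in (1) between two consecutive frames."""
    I1 = I1.astype(float); I2 = I2.astype(float)
    Ix = np.gradient(I1, axis=1)   # spatial gradients (simplified masks)
    Iy = np.gradient(I1, axis=0)
    Ik = I2 - I1                   # temporal gradient between frames
    # Neighborhood-average weights from the classic Horn-Schunck paper.
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6 ],
                    [1/12, 1/6, 1/12]])
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(n_iter):
        u_bar = neighborhood_avg(u, avg)   # ū^i, v̄^i in (1)
        v_bar = neighborhood_avg(v, avg)
        t = (Ix * u_bar + Iy * v_bar + Ik) / denom
        u = u_bar - Ix * t                 # update from (1)
        v = v_bar - Iy * t
    return u, v
```

On a pure one-pixel horizontal translation of a ramp image, u converges to the true displacement of 1 while v stays zero.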
2.2. Bilateral function in optical flow
Based on the effectiveness of the bilateral function in noise removal, the use of a bilateral function (BF) in correspondence with optical flow [19] was found to be effective for noise tolerance in the resulting MV. The main step is to apply the bilateral function [24-27] over the result of the G-OF. In BF, the bilateral kernel is defined as:

(3)

In our work, we assigned 7× of v(x) for the deviation δa and the deviation of the intensity I(x) for δb, based on the traditional work [24]. Then, the kernel (ɸ) is occupied with the prepared function as:
(4)
In our work, we assigned a ±7 neighborhood for M, with K as the kernel normalization, based on the traditional work [24].

(5)

where m is a local range of the neighborhood.
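Since the kernels (3)-(5) are given here only by reference, the following is a minimal sketch of a bilateral function applied to one flow component, guided by image intensity, using the standard spatial/range Gaussian product of Paris et al. [24]; the parameters sigma_s, sigma_r, and M are illustrative stand-ins for δa, δb, and the ±7 neighborhood, not the paper's exact settings.

```python
import numpy as np

def bilateral_flow(v, I, sigma_s=2.0, sigma_r=10.0, M=3):
    """Bilateral smoothing of flow component v, guided by intensity image I.
    Each output pixel is a normalized weighted average over a (2M+1)^2 window,
    weighted by spatial closeness and intensity similarity."""
    H, W = v.shape
    out = np.zeros_like(v, dtype=float)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - M), min(H, y + M + 1)
            x0, x1 = max(0, x - M), min(W, x + M + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * sigma_s**2))
            rng = np.exp(-((I[y0:y1, x0:x1] - I[y, x])**2) / (2 * sigma_r**2))
            w = spatial * rng
            # Division by the weight sum plays the role of the normalization K.
            out[y, x] = np.sum(w * v[y0:y1, x0:x1]) / np.sum(w)
    return out
```

A constant flow field passes through unchanged, while an isolated outlier vector is pulled toward its neighbors, which is the noise-tolerance behavior exploited in [19].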
2.3. Reverse confidential optical flow
In reverse confidential (RC) [20], the MV outcomes from two directions are considered in order to ensure the stability of the optical flow approach. A confidence function then uses the MV results from both directions to determine a confidence level, which is used to adjust the final MV.
In RC, the confidence function is defined as:
(6)
where vl and vl- are the normal and reverse MV, n is the number of neighboring MVs, and s is a coordinate (x,y) in the 2-D image; a term in (6) avoids division by zero. The confidence levels are used to define the final MV as an average over the region N(s0) by:
(7)
where v_l^n(s0) is computed from the reliability of the neighborhood N(s0) of the location s0 = (x, y, k).
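A minimal sketch of the 2-way idea behind (6)-(7): the forward MV and the negated reverse MV should agree, so their disagreement yields a confidence weight, and the final MV is a confidence-weighted regional average. This illustrates the principle only; it is not the paper's exact confidence function, whose formula is given in [20].

```python
import numpy as np

def rc_combine(v_fwd, v_rev, eps=1e-6, M=1):
    """Combine forward and reverse flow components by confidence weighting.
    v_rev is the flow estimated from frame 2 back to frame 1, so ideally
    v_rev == -v_fwd; large disagreement means low confidence."""
    d = np.abs(v_fwd + v_rev)      # forward-backward disagreement
    conf = 1.0 / (d + eps)         # high confidence where the 2 ways agree
    H, W = v_fwd.shape
    out = np.zeros_like(v_fwd, dtype=float)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - M), min(H, y + M + 1)
            x0, x1 = max(0, x - M), min(W, x + M + 1)
            w = conf[y0:y1, x0:x1]
            # Confidence-weighted average over the region N(s0), as in (7).
            out[y, x] = np.sum(w * v_fwd[y0:y1, x0:x1]) / np.sum(w)
    return out
```

An outlier vector whose reverse estimate disagrees is replaced almost entirely by its consistent neighbors.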
2.4. Bilateral in reverse confidential optical flow
The bilateral in reverse confidential (B-RC) [21] is a fusion approach that combines the traditional bilateral function with the reverse confidential approach. The process of B-RC is shown in Figure 1. Using the 2-way concept of RC, the bilateral function is applied to the MV of both directions, and the confidence step is applied afterwards. The final MV is then determined from the confidence level.
[Figure 1 shows the B-RC data flow: get the set of images in consecutive sequence → calculate the MV with G-OF under the 2-way concept of the RC approach → adopt the bilateral function on the 2-way MV → calculate the final MV by confidence rate based on the RC approach]
Figure 1. The process of B-RC
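The stages of this data flow can be expressed as a short pipeline, with the three components passed in as callables. The signatures below are hypothetical stand-ins for the paper's G-OF, bilateral, and RC-combination steps; any implementations with these shapes would fit.

```python
import numpy as np

def b_rc_pipeline(I1, I2, flow, bilateral, combine):
    """Sketch of the B-RC process: flow(Ia, Ib) -> MV field,
    bilateral(mv, I) -> filtered MV, combine(mv_fwd, mv_rev) -> final MV."""
    mv_fwd = flow(I1, I2)           # MV in the forward direction
    mv_rev = flow(I2, I1)           # MV in the reverse direction (2-way RC)
    mv_fwd = bilateral(mv_fwd, I1)  # bilateral function on both 2-way MVs
    mv_rev = bilateral(mv_rev, I2)
    return combine(mv_fwd, mv_rev)  # final MV from the confidence step
```

For example, with a toy difference-based "flow", an identity "bilateral", and a combine step averaging the forward and negated reverse fields, the pipeline runs end to end.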
3. EXPERIMENTAL AND ACHIEVEMENT
In this work, we used the global smoothness weight α=0.5 and the 4-point central differences mask coefficients [22] for gradient estimation in G-OF. Four well-known QCIF 176×144 sequences (Akiyo, container, coastguard, and foreman), up to 50 frames each, were used in the test. Figure 2 presents a sample of each sequence. Then, 5 kinds of non-Gaussian noise (Poisson noise, salt & pepper noise at density 0.005 and 0.025, and speckle noise at variance 0.01 and 0.05) were added to the sequences for testing, giving 20 noisy sequence variations in total. We evaluated the ordinary G-OF, BF, and RC in comparison with B-RC.
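The noise injection can be reproduced approximately as follows. The paper does not state its noise generator; MATLAB imnoise conventions for density and variance are a likely reference, so these numpy versions (including a Gaussian stand-in for the speckle multiplier) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def salt_pepper(img, density):
    """Replace a 'density' fraction of pixels with 0 or 255."""
    out = img.copy()
    mask = rng.random(img.shape) < density
    out[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    return out

def speckle(img, var):
    """Multiplicative noise: img + img*n, with n zero-mean, variance var."""
    n = rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(img + img * n, 0, 255)

def poisson(img):
    """Photon-counting noise drawn from the pixel intensities themselves."""
    return np.clip(rng.poisson(img).astype(float), 0, 255)
```

Each function preserves the frame shape, so the noisy sequences feed directly into the optical flow methods under test.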
The achievement in noise tolerance was determined by the EVM, a measure of how accurate the motion vector is: the lower the EVM, the better the performance. In this work, we compare against the original ground-truth MV and count only the non-zero movement vectors, taking the root mean square averaged over the experiment. Table 1 summarizes the average EVM of each approach over 100 frames for the different types of non-Gaussian noise. Figures 3-6 present the frame-by-frame EVM of the experiment sequences for the different types of non-Gaussian noise.
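One plausible reading of this EVM description, stated here as an assumption rather than the paper's verbatim formula, is the RMS length of the error vector averaged over pixels with non-zero ground-truth motion:

```python
import numpy as np

def evm(u, v, u_gt, v_gt):
    """RMS magnitude of the difference between estimated (u, v) and
    ground-truth (u_gt, v_gt) motion vectors, counted only at pixels
    where the ground-truth movement vector is non-zero."""
    mask = (u_gt != 0) | (v_gt != 0)
    if not mask.any():
        return 0.0
    err2 = (u - u_gt)**2 + (v - v_gt)**2
    return float(np.sqrt(np.mean(err2[mask])))
```

A perfect estimate scores 0, and a uniform one-pixel horizontal error scores exactly 1, so lower EVM means better performance as stated above.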
Figure 4. Frame-by-frame EVM of the coastguard sequence: (a) Poisson, (b) salt & pepper (d=0.005), (c) salt & pepper (d=0.025), (d) speckle (v=0.01), (e) speckle (v=0.05)
Figure 5. Frame-by-frame EVM of the container sequence: (a) Poisson, (b) salt & pepper (d=0.005), (c) salt & pepper (d=0.025), (d) speckle (v=0.01), (e) speckle (v=0.05)
Figure 6. Frame-by-frame EVM of the foreman sequence: (a) Poisson, (b) salt & pepper (d=0.005), (c) salt & pepper (d=0.025), (d) speckle (v=0.01), (e) speckle (v=0.05)
4. CONCLUSION
In our inspection, we grouped the experimental results in two ways: by the type of non-Gaussian noise and by the characteristics of the experiment sequences, because we found that noise resistance performance varies with both. Regarding the type of non-Gaussian noise, different types of noise produce different results for each approach. From the experiments we found that: (i) salt & pepper noise degraded optical flow performance the most, producing the highest EVM values; (ii) B-RC achieved the best noise resistance under Poisson and speckle noise, and the second best under salt & pepper noise on the coastguard and foreman sequences.
Regarding the characteristics of the experiment sequences, we found that they also impact optical flow performance. We separated the sequences into 2 subgroups for analysis: fast movement sequences (coastguard and foreman) and slow movement sequences (Akiyo and container). From the experiments we found that: (i) fast movement sequences degraded optical flow performance more than slow movement sequences, showing higher average EVM; (ii) B-RC performed best on the slow movement sequences; (iii) RC performed best on the fast movement sequences only at low levels of salt & pepper and speckle noise; as the noise level increased, B-RC became the best overall.
From the overall experimental results, B-RC presented the best noise resistance against non-Gaussian noise. Application areas such as motion tracking in noisy environments can therefore adopt this method to gain accuracy in optical flow. There is no silver-bullet method for achieving the best result in every case, so the study of several kinds of noise in combination with several noise resistance approaches remains an issue for future research.
ACKNOWLEDGEMENTS
This research project was funded by Assumption University.
REFERENCES
[1] A. Burton and J. Radford (editors), "Thinking in perspective: critical essays in the study of thought processes,"
Routledge, vol. 646, 1978.
[2] D. H. Warren and E. R. Strelow (editors), "Electronic spatial sensing for the blind: contributions from perception,"
Springer Science & Business Media, vol. 99, 1985.
[3] A. J. Qasim, R. Din, and F. Q. A. Alyousuf, "Review on techniques and file formats of image compression,"
Bulletin of Electrical Engineering and Informatics, vol. 9, no. 2, pp. 602-610, April 2020.
[4] Z. Chen, T. He, X. Jin, and F. Wu, "Learning for video compression," IEEE Transactions on Circuits and Systems
for Video Technology, vol. 30, no. 2, pp. 566-576, 2020.
[5] T. Wiegand, "Draft ITU-T Rec. and Final Draft International Standard of Joint Video Specification (ITU-T Rec.
H.264-ISO/IEC 14 496-10 AVC)," Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, document JVT-
G050r1.doc, JVT of ITU-T and ISO/IEC JTC1, 7th
Meeting, Pattaya, 2003.
[6] "Information Technology-Coding of Audio Visual Objects-Part 2: Visual, JTC1/SC29/WG11, ISO/IEC 14 469-2
(MPEG-4 Visual)," 2000.
[7] C. S. Royden and K. D. Moore, "Use of speed cues in the detection of moving objects by moving observers,"
Vision Research, vol. 59, pp. 17-24, 2012.
[8] S. A. Mahmoudi, M. Kierzynka, P. Manneback, and K. Kurowski, "Real-time motion tracking using optical flow
on multiple GPUs," Bull. Pol. Ac.:Tech., vol. 62, no. 1, pp. 139-150, 2014.
[9] C. Deng, J. Liu, W. Tian, S. Wang, H. Zhu, and S. Zhang, "Image super-resolution reconstruction based on L1/2
sparsity," Bulletin of Electrical Engineering and Informatics, vol. 3, no. 3, pp. 155-160, September 2014
[10] D. Kesrarat, K. Thakulsukanant, and V. Patanavijit, "A novel elementary spatial expanding scheme form on SISR
method with modifying Geman&McClure function," TELKOMNIKA (Telecommunication Computing Electronics
and Control), vol. 17, no. 5, pp. 2554-2560, Oct 2019.
[11] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," In
Proceeding of Defense Advanced Research Projects Agency (DARPA) Image Understanding Workshop, pp. 121-
130, 1981.
[12] D. J. Fleet and A. D. Jepson, "Computation of component image velocity from local phase information," Computer
Vision, vol. 5, no. 1, pp.77-104, 1990.
[13] T. R. Reed (editor), "Digital image sequence processing, compression, and analysis," Chemical Rubber Company
(CRC) Press, 2005.
[14] B. K. P. Horn and B.G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, no. 1-3, pp. 185-203,
1981.
[15] Y. Su, Y. Huang, and C-C. J. Kuo, "Efficient text classification using tree-structured multi-linear principal
component analysis," IEEE 24th
International Conference on Pattern Recognition (ICPR), pp. 585-590, 2018.
[16] Y. Su, R. Lin, and C. C. J. Kuo "Tree-structured multi-stage principal component analysis (TMPCA): Theory and
applications," Expert Systems with Applications, vol. 118, pp. 355-364, 2019.
[17] A. Khmag, S. Ghoul, S. A. R. Al-Haddad, and N. Kamarudin, "Noise level estimation for digital images using local
statistics and its applications to noise removal," TELKOMNIKA (Telecommunication Computing Electronics and
Control), vol. 16, no. 2, pp. 915-924, April 2018.
[18] L. Abderrahim, M. Salama, and D. Abdelbaki, “Novel design of a fractional wavelet and its application to image
denoising,” Bulletin of Electrical Engineering and Informatics, vol. 9, no. 1, pp. 129-140, February 2020
[19] D. Kesrarat and V. Patanavijit, "Experimental analysis of performance comparison on both linear filter and
bidirectional confidential technique for spatial domain optical flow algorithm," The Electrical
Engineering/Electronics, Computer, Communications and Information Technology Association (ECTI)
Transactions on Computer and Information Technology, vol. 7, no. 2, pp. 157-168, 2013.
[20] R. Li and S. Yu, "Confidence based optical flow algorithm for high reliability," In Proceedings of ICASSP, pp.785-
788, 2008.
[21] D. Kesrarat and V. Patanavijit, "Robust global based spatial correlation optical flow in bidirectional confidential
technique with bilateral filter," In Proceedings of IEEE Explore of the International Conference on Intelligent
Informatics and BioMedical Sciences (ICIIBMS 2015), pp. 1-6, 2015.
[22] J. L. Barron, D. J. Fleet, and S. S. Beauchemin, "Performance of optical flow techniques," International Journal of
Computer Vision, vol. 12, no. 1, pp. 43-77, 1994.
[23] D. Fleet and Y. Weiss, "Optical flow estimation", Handbook of Mathematical Models in Computer Vision,
Springer, pp. 237-257, 2006.
[24] S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, "Bilateral filtering: Theory and applications," Found. Trends
Compt. Graph. Vis., vol. 4, no. 1, pp.1-73, 2008.
[25] M. G. Mozerrov, "Constrained optical flow estimation as a matching problem," IEEE Trans. Image Processing,
vol. 22, no. 5, pp. 2044-2055, 2013.
[26] J. Sun, Y. Li, S. B. Kang, and H. Y. Shum, "Symmetric stereo matching for occlusion handling," In proceeding of
IEEE Conf. Comput. Vis.Pattern Recognit., vol. 2, pp. 399-406, 2005.
[27] M. Zhang and B. Gunturk, "A new image denoising method based on the bilateral filter," IEEE International
Conference on Acoustics, Speech and Signal Processing, ICASSP 2008, pp. 929-932, 2008.
BIOGRAPHIES OF AUTHORS
Darun Kesrarat received the B.S., M.S., and Ph.D. from the Department of Information
Technology at Assumption University, Bangkok, Thailand. He is currently an Assistant
Professor. His research areas include signal processing on Image/Video Reconstruction, SRR
(Super-Resolution Reconstruction), Motion Estimation, and Optical Flow Estimation.
Vorapoj Patanavijit received the B.Eng., M. Eng, and Ph.D. degrees from the Department of
Electrical Engineering at the Chulalongkorn University, Bangkok, Thailand, in 1994, 1997 and
2007 respectively. He is currently an Associate Professor. He works in the field of signal
processing and multidimensional signal processing, specializing, in particular, on Image/Video
Reconstruction, SRR (Super-Resolution Reconstruction), Compressive Sensing, Enhancement,
Fusion, Digital Filtering, Denoising, Inverse Problems, Motion Estimation, Optical Flow
Estimation and Registration.