This article presents a local stereo matching algorithm that combines block matching with two edge-preserving filters in a single framework. Fundamentally, the matching process consists of several stages that produce a disparity, or depth, map. The most challenging part of the matching process is finding accurate corresponding points between the two images. Hence, this article proposes a stereo matching algorithm that uses an improved Sum of Absolute RGB Differences (SAD), gradient matching, and the Bilateral Filter (BF) as an edge-preserving filter to increase accuracy. SAD and gradient matching are applied at the first stage to obtain a preliminary correspondence result; the BF then acts as an edge-preserving filter to remove noise from this stage. A second BF is applied at the last stage to improve the final disparity map and sharpen object boundaries. The experimental analysis and validation use the Middlebury standard benchmarking evaluation system. Based on the results, the proposed work increases accuracy while preserving object edges. To assess its reliability against currently available methods, a quantitative comparison with existing methods was made, and it shows that the proposed work performs considerably better.
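The SAD-plus-winner-takes-all core described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the window radius, disparity range, and function names are chosen for the example, and the gradient term and bilateral filtering stages are omitted.

```python
import numpy as np

def window_sum(a, r):
    """Sum of a 2D array over a (2r+1) x (2r+1) neighbourhood, edge-padded."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a, dtype=np.float64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + a.shape[0], r + dx:r + dx + a.shape[1]]
    return out

def sad_wta(left, right, max_disp, r=2):
    """Disparity by Sum of Absolute RGB Differences and winner-takes-all."""
    h, w = left.shape[:2]
    cost = np.empty((max_disp, h, w))
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)          # right pixel at x - d
        ad = np.abs(left.astype(np.float64)
                    - shifted.astype(np.float64)).sum(axis=2)  # SAD over RGB
        cost[d] = window_sum(ad, r)                  # aggregate over the block
    return cost.argmin(axis=0)                       # WTA: lowest-cost disparity
```

On a synthetic pair where the left image is the right image shifted by a known amount, the recovered disparity matches that shift.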
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
International Journal of Computational Engineering Research (IJCER) is an international, monthly, online journal published in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
Quality Assessment of Gray and Color Images through Image Fusion Technique (IJEEE)
Image fusion is an emerging trend in digital image processing for enhancing images: two or more images are fused (combined) to obtain an enhanced image. In the present work, image fusion is used to enhance a given input image by combining two images that contain complementary information.
Interpolation Technique using Non Linear Partial Differential Equation with E... (CSCJournals)
With the widespread use of images for communication, image zooming plays an important role. Image zooming is the process of enlarging an image by some magnification factor, which can be integer or non-integer. Applying a zooming algorithm to an image generally results in aliasing, edge blurring, and other artifacts. This paper focuses on reducing these artifacts and presents an image zooming algorithm that combines a non-linear fourth-order PDE method with an edge-directed bicubic algorithm. The proposed method uses the high-resolution image obtained from edge-directed bicubic interpolation to construct the zoomed image. This technique preserves edges and minimizes blurring and staircase effects in the zoomed image. Image quality after zooming is evaluated through objective assessment.
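For context, the baseline that edge-directed and PDE-based zooming methods refine is plain separable interpolation. The sketch below implements ordinary bilinear zooming (not the paper's edge-directed bicubic or PDE steps); the function name and magnification factor are illustrative.

```python
import numpy as np

def bilinear_zoom(img, factor):
    """Plain bilinear zoom of a 2D grayscale image by a (possibly non-integer)
    factor -- the simple baseline that edge-aware methods improve on."""
    h, w = img.shape
    H, W = int(round(h * factor)), int(round(w * factor))
    ys = np.linspace(0.0, h - 1.0, H)        # output rows mapped into input coords
    xs = np.linspace(0.0, w - 1.0, W)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Bilinear interpolation reproduces linear ramps exactly, which is an easy way to sanity-check an implementation.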
CONTRAST ENHANCEMENT TECHNIQUES USING HISTOGRAM EQUALIZATION METHODS ON COLOR... (IJCSEA Journal)
Histogram equalization (HE) is a simple and widely used image contrast enhancement technique. The basic disadvantage of HE is that it changes the brightness of the image. To overcome this drawback, various HE methods have been proposed; these methods preserve the brightness of the output image but do not give it a natural look. To overcome this problem, the present paper uses Multi-HE methods, which decompose the image into several sub-images and apply the classical HE method to each sub-image. The algorithm is applied to various images and analysed using both objective and subjective assessment.
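The classical HE step that Multi-HE methods apply per sub-image maps each intensity through the normalized cumulative histogram. A minimal sketch for 8-bit grayscale images (the function name is illustrative):

```python
import numpy as np

def hist_equalize(gray):
    """Classical histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    if cdf[-1] == cdf_min:                      # flat image: nothing to equalize
        return gray.copy()
    # standard HE transfer function, rescaled back to [0, 255]
    lut = np.round((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min))
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]
```

A low-contrast input (intensities clustered in a narrow band) is stretched to span the full display range, which is exactly the brightness shift the Multi-HE variants try to control.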
Abstract: Many applications, such as robot navigation, defense, medical imaging, and remote sensing, perform various processing tasks that become easier when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, that is, large- and small-scale variations, from the source images, employing guided filtering for this extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the different layers obtained. Experimental results demonstrate that the proposed method obtains better fusion performance on all sets of images, and the results clearly indicate the feasibility of the proposed approach.
Image fusion using NSCT denoising and target extraction for visual surveillance (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
Image enhancement techniques play a vital role in improving image quality. Enhancement basically strengthens the foreground information while retaining the background, improving the overall contrast of an image. In some cases the background of an image hides its structural information. This paper proposes an algorithm that enhances the foreground and background parts separately, stretches the contrast of the image at the inter-object and intra-object levels, and then combines them into an enhanced image. The results are compared with various classical methods using image quality measures.
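One simple way to realize the "stretch foreground and background separately, then recombine" idea is a per-region linear contrast stretch into disjoint output bands, so intra-object contrast grows while inter-object separation is kept. This is a hedged stand-in for the paper's algorithm: the threshold-based split, the band limits, and the function names are assumptions made for the example.

```python
import numpy as np

def stretch(vals, lo, hi):
    """Linear contrast stretch of a set of pixel values onto [lo, hi]."""
    vmin, vmax = vals.min(), vals.max()
    if vmax == vmin:
        return np.full_like(vals, lo, dtype=np.float64)
    return (vals - vmin) / (vmax - vmin) * (hi - lo) + lo

def enhance(img, thresh):
    """Stretch foreground (> thresh) and background separately into
    disjoint bands, then recombine into one enhanced image."""
    out = img.astype(np.float64)
    fg = img > thresh
    if fg.any():
        out[fg] = stretch(img[fg].astype(np.float64), 128.0, 255.0)  # foreground band
    if (~fg).any():
        out[~fg] = stretch(img[~fg].astype(np.float64), 0.0, 127.0)  # background band
    return out
```

On a two-band test image, each region expands to fill its own output band.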
An efficient image compression algorithm using DCT biorthogonal wavelet trans... (eSAT Journals)
Abstract
Recently, digital imaging applications have increased significantly, driving the need for effective image compression techniques. Image compression removes redundant information from an image so that only the necessary information is stored, which reduces the transmission bandwidth, transmission time, and storage size of the image. This paper proposes a new image compression technique using the DCT-biorthogonal wavelet transform with arithmetic coding to improve the visual quality of the image. It is a simple technique for obtaining better compression results. In this new algorithm, the biorthogonal wavelet transform is applied first, and then a 2D DCT is applied to each block of the low-frequency sub-band. Finally, the values from each transformed block are split and arithmetic coding is applied for compression.
Keywords: Arithmetic coding, Biorthogonal Wavelet Transform, DCT, Image Compression.
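The blockwise 2D DCT applied to the low-frequency sub-band can be sketched with an orthonormal DCT-II basis matrix. The wavelet decomposition and arithmetic coding stages are omitted; the block size of 8 and the function names are illustrative choices, not taken from the paper.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)          # DC row scaled for orthonormality
    return m

def block_dct2(img, b=8):
    """2D DCT of each non-overlapping b x b block (image sides must be
    multiples of b)."""
    D = dct_matrix(b)
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(0, h, b):
        for j in range(0, w, b):
            out[i:i + b, j:j + b] = D @ img[i:i + b, j:j + b] @ D.T
    return out
```

Because the basis is orthonormal, each block can be recovered exactly with the transposed matrices, which is what makes thresholding/quantizing the coefficients a lossy-but-controlled step.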
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN... (IJCSEIT Journal)
A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate detection. The feature vector is based on eigenvalues, eigenvectors, and diagonal vectors of sub-images. Images are partitioned into sub-images to detect local features, and the sub-partitions are rearranged into vertical and horizontal matrices. Eigenvalues, eigenvectors, and diagonal vectors are computed for these matrices, and a global feature vector is generated for face recognition. Experiments are performed on the benchmark YALE face database. The results indicate that the proposed method gives better recognition performance, in terms of average recognition rate and retrieval time, than existing methods.
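The idea of building a global feature vector from eigenvalues and diagonals of sub-images can be sketched as below. This is an assumption-laden illustration: the 4x4 grid, the use of eigenvalue magnitudes, and the function name are choices for the example, not the paper's exact construction.

```python
import numpy as np

def eigen_features(img, grid=4):
    """Concatenate eigenvalue magnitudes and main diagonals of each
    sub-image into one global feature vector."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            sub = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].astype(np.float64)
            s = min(sub.shape)
            square = sub[:s, :s]                 # eigenvalues need a square matrix
            ev = np.sort(np.abs(np.linalg.eigvals(square)))[::-1]
            feats.append(ev)                     # local spectral signature
            feats.append(np.diag(square))        # diagonal vector
    return np.concatenate(feats)
```

Recognition then reduces to nearest-neighbour search over these vectors with a chosen distance measure.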
Stereo matching based on absolute differences for multiple objects detection (TELKOMNIKA JOURNAL)
This article presents a new algorithm for object detection using a stereo camera system. The obstacle to accurate object detection with a stereo camera is the imprecision of the matching process between two scenes of the same viewpoint. Hence, this article aims to reduce incorrect pixel matches with a four-stage algorithm that combines matching cost computation, aggregation, optimization, and filtering as a continuous process. The first stage, matching cost computation, acquires a preliminary result using an absolute differences method. The second stage, aggregation, uses a guided filter with a fixed support-window size. The optimization stage then uses a winner-takes-all (WTA) approach that selects the smallest matching difference and normalizes it to the disparity level. The last stage uses a bilateral filter, which effectively decreases the remaining error in the disparity map containing the object detection and location information. The proposed work produces low errors (12.11% nonocc and 14.01% all) on the KITTI dataset, performs much better than the pipeline without the proposed framework, and is competitive with several recently published methods.
Stereo matching algorithm using census transform and segment tree for depth e... (TELKOMNIKA JOURNAL)
This article proposes a stereo matching algorithm for the correspondence process used in many applications, such as augmented reality, autonomous vehicle navigation, and surface reconstruction. The proposed framework is developed as a series of functions whose final result is a disparity map carrying depth-estimation information. The framework's input is a stereo pair of left and right images. The proposed algorithm has four steps in total: matching cost computation using the census transform, cost aggregation using a segment tree, optimization using a winner-takes-all (WTA) strategy, and a post-processing stage using a weighted median filter. Based on the experimental results from the Middlebury standard benchmarking evaluation system, the disparity maps produce low average errors of 9.68% for the nonocc attribute and 18.9% for the all attribute. On average, the algorithm performs better than, or is competitive with, the other methods available in the benchmark system.
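The census transform used in the matching cost step encodes each pixel as a bit string recording which neighbours are darker than the centre; costs between pixels are then Hamming distances between codes. A minimal sketch (window radius 1, i.e. 8 bits per pixel; function names are illustrative):

```python
import numpy as np

def census_transform(gray, r=1):
    """Census transform: each pixel becomes a bit string, one bit per
    neighbour in a (2r+1) x (2r+1) window, set when the neighbour is darker."""
    h, w = gray.shape
    p = np.pad(gray, r, mode="edge")
    code = np.zeros((h, w), dtype=np.int64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = p[r + dy:r + dy + h, r + dx:r + dx + w]
            code = (code << 1) | (neighbour < gray).astype(np.int64)
    return code

def census_cost(c1, c2):
    """Matching cost between census codes = Hamming distance."""
    x = np.bitwise_xor(c1, c2)
    return np.vectorize(lambda v: bin(v).count("1"))(x)
```

Because only intensity orderings matter, the census cost is robust to the radiometric differences between left and right cameras, which is why it is a popular matching cost.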
Matching algorithm performance analysis for autocalibration method of stereo ... (TELKOMNIKA JOURNAL)
Stereo vision is one of the most interesting research topics in the computer vision field. Two cameras are used to generate a disparity map, from which depth is estimated. Camera calibration is the most important step in stereo vision: it produces the intrinsic parameters of each camera, which are needed for a better disparity map. In general, calibration is done manually using a chessboard pattern, but this is an exhausting task, so self-calibration is an important capability for overcoming it. Self-calibration requires a robust matching algorithm to find key features between images as references. The purpose of this paper is to analyze the performance of three matching algorithms, SIFT, SURF, and ORB, for the autocalibration process. The results show that SIFT performs better than the other methods.
A new function of stereo matching algorithm based on hybrid convolutional neu... (IJEECSIAES)
This paper proposes a new hybrid method, between learning-based and handcrafted methods, for a stereo matching algorithm. The main purpose of a stereo matching algorithm is to produce a disparity map, which is essential for many applications, including three-dimensional (3D) reconstruction. The raw disparity map computed by a convolutional neural network (CNN) is still prone to errors in low-texture regions. The algorithm improves the matching cost computation stage by combining a hybrid CNN-based cost with a truncated directional intensity computation; the truncated directional intensity difference is employed to decrease radiometric errors. The proposed method's raw matching cost then goes through a cost aggregation step using the bilateral filter (BF) to improve accuracy. Winner-takes-all (WTA) optimization uses the aggregated cost volume to produce an initial disparity map, and a series of refinement processes finally enhances it into a more accurate final disparity map. The performance of the algorithm was verified using the Middlebury online stereo benchmarking system. The proposed algorithm achieves its objective of generating a more accurate and smoother disparity map with different depths at low-texture regions through better matching cost quality.
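The bilateral filter that several of these pipelines use for cost aggregation and refinement weights each neighbour by both spatial distance and intensity difference, so it smooths while keeping edges. A brute-force sketch for a 2D grayscale image (parameter values and function name are illustrative; production code uses faster approximations):

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, r=3):
    """Brute-force bilateral filter: weighted average with weights that fall
    off in both space (sigma_s) and intensity (sigma_r). O(N * r^2)."""
    h, w = img.shape
    p = np.pad(img.astype(np.float64), r, mode="edge")
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            q = p[r + dy:r + dy + h, r + dx:r + dx + w]
            ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))  # spatial weight
            wr = np.exp(-((q - img) ** 2) / (2 * sigma_r ** 2))     # range weight
            num += ws * wr * q
            den += ws * wr
    return num / den
```

With a small range sigma, pixels on the other side of a sharp step contribute almost nothing, so a step edge passes through essentially unchanged; this is the edge-preserving behaviour the abstracts rely on.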
Towards better performance: phase congruency based face recognition (TELKOMNIKA JOURNAL)
Phase congruency is an edge detector and a measure of significant features in an image; it is robust against contrast and illumination variation. In this paper, two novel techniques are introduced for developing a low-cost human identification system based on face recognition. Firstly, the valuable phase congruency features, the gradient edges and their associated angles, are used separately for classifying 130 subjects taken from three face databases, with the motivation of eliminating the feature extraction phase; by doing this, the complexity can be significantly reduced. Secondly, the training process is modified with a new technique, called averaging vectors, developed to accelerate training and minimize the matching time. For broader comparison and accurate evaluation, three competitive classifiers are considered in this work: Euclidean distance (ED), cosine distance (CD), and Manhattan distance (MD). The system performance is very competitive and acceptable, with experimental results showing promising recognition rates at a reasonable matching time.
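The three classifiers compared in this abstract all reduce to nearest-neighbour matching under a distance measure. A compact sketch (function names are illustrative):

```python
import numpy as np

def euclidean(a, b):
    """ED: straight-line distance between feature vectors."""
    return np.linalg.norm(a - b)

def cosine_dist(a, b):
    """CD: 1 - cosine similarity (small when vectors point the same way)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def manhattan(a, b):
    """MD: sum of absolute coordinate differences."""
    return np.abs(a - b).sum()

def classify(probe, gallery, metric):
    """Nearest-neighbour matching: index of the closest gallery vector."""
    return min(range(len(gallery)), key=lambda i: metric(probe, gallery[i]))
```

For well-separated classes the three measures usually agree, and the choice mostly affects robustness to magnitude versus direction changes in the features.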
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M... (IJTET Journal)
The segmentation of retinal blood vessels is an essential step in the diagnosis of diabetic retinopathy. In this paper, we present a new method for automatically segmenting blood vessels in retinal images. Two techniques for segmenting retinal blood vessels, based on different image processing techniques, are described, and their strengths and weaknesses are compared. The method uses a neural network (NN) scheme for pixel classification, with gray-level and moment-invariant-based features for pixel representation. The performance of each algorithm was tested on the STARE and DRIVE datasets, which are widely used for this purpose since they contain retinal images and their vascular structures. Performance on both sets of test images is better than that of existing methods, and the method proves especially accurate for vessel detection in STARE images. Its effectiveness and robustness under different image conditions, together with its simplicity and fast implementation, make it suitable for early detection of diabetic retinopathy (DR).
Disparity Estimation by a Real Time Approximation Algorithm (CSCJournals)
This paper presents a real-time approximation algorithm for estimating the disparity of stereo images. The approximation is achieved by shrinking the original left and right images: (i) the left and right images are shrunk three times; (ii) the disparity image is computed from the shrunk left and right images; and (iii) the disparity image is extrapolated back to the original image size. The computational time of the proposed algorithm is less than that of existing methods, approximately real time, and it requires less memory. The method is applied to the standard stereo images, and the results show that it reduces the computational time by about 76.34% with no appreciable degradation of accuracy.
Method of optimization of the fundamental matrix by technique speeded up rob... (IJECEIAES)
The purpose of determining the fundamental matrix (F) is to define the epipolar geometry and to relate two 2D images of the same scene or video sequence to recover the 3D scene. The problem we address in this work is the estimation of the localization error and the processing time. We start by comparing the following feature extraction techniques: Harris, features from accelerated segment test (FAST), scale-invariant feature transform (SIFT), and speeded-up robust features (SURF), with respect to the number of detected points and correct matches under different image changes. We then merge the best matches chosen by the objective function, which groups the descriptors by image region, in order to calculate F. Finally, we apply the normalized eight-point algorithm, which also automatically eliminates outliers, to find the optimal solution F. Our optimization approach is tested on real images with different scene variations. The simulation results show good accuracy: the computation time of F does not exceed 900 ms, and the projection error is at most 1 pixel, regardless of the modification.
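The eight-point estimation step can be sketched as follows. This is a minimal normalized eight-point implementation in NumPy with a synthetic two-view check; it omits the outlier-elimination and feature-matching stages of the paper, and the camera setup is an assumption for illustration.

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: centroid to origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return (pts - c) * s, T

def eight_point(p1, p2):
    """Normalized eight-point estimate of the fundamental matrix F."""
    q1, T1 = normalize(p1)
    q2, T2 = normalize(p2)
    A = np.column_stack([q2[:, 0] * q1[:, 0], q2[:, 0] * q1[:, 1], q2[:, 0],
                         q2[:, 1] * q1[:, 0], q2[:, 1] * q1[:, 1], q2[:, 1],
                         q1[:, 0], q1[:, 1], np.ones(len(q1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # least-squares null vector
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt       # enforce rank 2
    F = T2.T @ F @ T1                           # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic check: project random 3D points into two translated cameras
rng = np.random.default_rng(2)
X = np.column_stack([rng.uniform(-1, 1, (20, 3)) + [0, 0, 5], np.ones(20)])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[0.5], [0.2], [0.0]]])
x1 = X @ P1.T; x1 = x1[:, :2] / x1[:, 2:]
x2 = X @ P2.T; x2 = x2[:, :2] / x2[:, 2:]
F = eight_point(x1, x2)
x1h = np.column_stack([x1, np.ones(20)])
x2h = np.column_stack([x2, np.ones(20)])
residual = np.abs(np.sum(x2h * (x1h @ F.T), axis=1)).max()  # epipolar error
```

With exact correspondences the epipolar residual x2' F x1 is numerically zero; with real SURF/SIFT matches the normalization step is what keeps the estimate stable.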
A Flexible Scheme for Transmission Line Fault Identification Using Image Proc... (IJEEE)
This paper describes a methodology for locating and diagnosing faults in transmission lines using image processing techniques. Image processing techniques have been widely used to solve problems across many application areas. The methodology also uses a digital image processing wavelet shrinkage function for fault identification and diagnosis. In other words, the purpose is to extract the faulty image from the source, together with the separation and the coordinates of the transmission lines. The objective of segmentation is to divide the image into its constituent parts and objects; distinguishing these from others in the scene is the key to an improved result in fault identification. The experimental results indicate that the proposed method provides promising results and is advantageous both in terms of PSNR and in visual quality.
SURVEY ON POLYGONAL APPROXIMATION TECHNIQUES FOR DIGITAL PLANAR CURVES (Zac Darcy)
Polygonal approximation plays a vital role in ubiquitous applications like multimedia, geographic information, and object recognition. An extensive number of polygonal approximation techniques for digital planar curves have been proposed over the last decade, but there are no survey papers on recently proposed techniques. A polygon is a collection of edges and vertices, and objects are represented using edges and vertices or contour points (i.e., a polygon). Polygonal approximation represents the object with a smaller number of dominant points (fewer edges and vertices), which reduces both computation and memory usage. This paper provides a comparative study of polygonal approximation techniques for digital planar curves with respect to their computation and efficiency.
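One classic technique in this family is the Ramer-Douglas-Peucker split method, which keeps only dominant points. The sketch below is a generic illustration of that algorithm (the example polyline and tolerance are invented), not a method from the survey itself.

```python
import numpy as np

def rdp(points, eps):
    """Ramer-Douglas-Peucker: keep an interior point only if it deviates
    more than eps from the chord joining the endpoints; recurse on splits."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points.tolist()
    a, b = points[0], points[-1]
    ab = b - a
    rel = points - a
    # perpendicular distance of every point to the chord a-b
    d = np.abs(ab[0] * rel[:, 1] - ab[1] * rel[:, 0]) / (np.linalg.norm(ab) + 1e-12)
    i = int(np.argmax(d))
    if d[i] <= eps:
        return [a.tolist(), b.tolist()]          # chord is a good fit
    return rdp(points[:i + 1], eps)[:-1] + rdp(points[i:], eps)

# Open polyline with one true corner at (2, 0) and one noisy point
curve = [[0, 0], [1, 0.02], [2, 0], [2, 1], [2, 2]]
approx = rdp(curve, eps=0.1)   # dominant points only
```

The noisy midpoint is dropped while the genuine corner survives, which is exactly the "fewer dominant points" representation the survey compares techniques on.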
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a... (IJERD Editor)
This paper presents a blind steganalysis technique to effectively attack JPEG steganographic schemes, i.e., Jsteg, F5, Outguess, and DWT-based schemes. The proposed method exploits the correlations between block-DCT coefficients from intra-block and inter-block relations, and the statistical moments of characteristic functions of the test image are selected as features. The features are extracted from the BDCT JPEG 2-array. A support vector machine with cross-validation is implemented for the classification. The proposed scheme gives improved results in attacking these schemes.
Similar to Development of stereo matching algorithm based on sum of absolute RGB color differences and gradient matching
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. The results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and give recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo... (IJECEIAES)
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
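The NARX idea, regressing the next output on lagged outputs and lagged inputs through a nonlinear map, can be sketched with a polynomial basis and least squares. The synthetic "battery-like" system, the lag orders, and the squared-term basis below are illustrative assumptions; the paper's actual model structure and experimental data are not reproduced here.

```python
import numpy as np

def narx_features(u, y, nu=2, ny=2):
    """Build a NARX regressor matrix from lagged inputs/outputs,
    using squared terms as a simple nonlinear basis."""
    lag = max(nu, ny)
    rows = []
    for k in range(lag, len(y)):
        lin = np.concatenate([y[k - ny:k], u[k - nu:k]])
        rows.append(np.concatenate([[1.0], lin, lin ** 2]))
    return np.array(rows), y[lag:]

# Synthetic system with a nonlinearity in the past output
rng = np.random.default_rng(3)
u = rng.uniform(-1, 1, 300)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = 0.6 * y[k - 1] - 0.1 * y[k - 2] ** 2 + 0.5 * u[k - 1]

Phi, target = narx_features(u, y)
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)   # identify parameters
rmse = float(np.sqrt(np.mean((Phi @ theta - target) ** 2)))
```

Because the toy system lies exactly in the span of the chosen basis, the one-step-ahead fit is essentially perfect; on real battery data the basis (or a neural map) must be rich enough to capture the electrochemical nonlinearities the abstract refers to.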
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... (IJECEIAES)
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences for the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). Based on the comparison of all criteria together, the multi-criteria decision-making analysis gives overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods. Thus, it can be seen from the total weights
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
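The AHP weighting step can be sketched as follows: a pairwise-comparison matrix is reduced to a priority vector via its principal eigenvector, with a consistency index checking the judgments. The 3-criterion matrix below is hypothetical, not the paper's comparison data.

```python
import numpy as np

def ahp_weights(A):
    """Priority weights = principal eigenvector of the pairwise matrix A;
    also return the consistency index CI = (lambda_max - n) / (n - 1)."""
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w = w / w.sum()                      # normalize to sum to 1
    n = A.shape[0]
    ci = (vals[i].real - n) / (n - 1)
    return w, ci

# Hypothetical comparison of 3 criteria (cost, non-detection zone, response time)
A = np.array([[1,     3, 1 / 2],
              [1 / 3, 1, 1 / 4],
              [2,     4, 1]], dtype=float)
w, ci = ahp_weights(A)
```

A consistency ratio below 0.1 (CI divided by the random index for n) is the usual acceptance threshold; the paper's final method ranking comes from aggregating such weight vectors across all criteria.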
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... (IJECEIAES)
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m², the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
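The THD figure used in the comparison can be computed from an FFT of the injected current. The sketch below uses a synthetic 50 Hz waveform with invented harmonic amplitudes; the 46% and 38% reductions above are the paper's results and are not reproduced here.

```python
import numpy as np

def thd(signal, fs, f0):
    """Total harmonic distortion: RMS of harmonics 2..N over the fundamental."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) / n
    bin0 = int(round(f0 * n / fs))      # fundamental frequency bin
    harm = spec[2 * bin0::bin0]         # bins of harmonics 2, 3, ...
    return np.sqrt(np.sum(harm ** 2)) / spec[bin0]

fs, f0 = 10_000, 50                     # 10 kHz sampling, 50 Hz grid
t = np.arange(0, 1, 1 / fs)             # one-second window (integer cycles)
i_grid = (np.sin(2 * np.pi * f0 * t)
          + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)    # 5% 3rd harmonic
          + 0.03 * np.sin(2 * np.pi * 5 * f0 * t))   # 3% 5th harmonic
ratio = thd(i_grid, fs, f0)             # expect about sqrt(0.05^2 + 0.03^2)
```

Using an integer number of fundamental cycles avoids spectral leakage, so the harmonic bins land exactly on the FFT grid; with real measured current a window function would be needed.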
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b... (IJECEIAES)
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
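The classic P&O step that the modified methods build on can be sketched as follows. The PV power curve, step size, and starting voltage are illustrative assumptions, not the paper's tuned controller or panel model.

```python
import numpy as np

def pv_power(v):
    """Toy PV power curve with a single maximum near 16.5 V (hypothetical)."""
    return np.maximum(0.0, 3.0 * v * (1 - np.exp((v - 21.0) / 2.0)))

def perturb_and_observe(v0=12.0, step=0.2, iters=200):
    """P&O: keep stepping in the same direction while power increases,
    and reverse the perturbation direction when power drops."""
    v, direction = v0, 1.0
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction   # overshot the peak: reverse
        p_prev = p
    return v

v_mpp = perturb_and_observe()        # settles oscillating around the MPP
```

The fixed step is exactly the limitation the abstract targets: a large step gives steady-state oscillation around the maximum power point, a small step slows the transient, which motivates the adaptive-step and fuzzy variants.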
Adaptive synchronous sliding control for a robot manipulator based on neural ... (IJECEIAES)
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de... (IJECEIAES)
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m... (IJECEIAES)
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba... (IJECEIAES)
Rapidly and remotely monitoring and receiving the status parameters of solar cell systems (solar irradiance, temperature, and humidity) are critical issues in enhancing their efficiency. Hence, in the present article an improved smart prototype of an internet of things (IoT) technique based on an embedded system with a NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions in Egypt (the cities of Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live on Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m² respectively during the solar day. The accuracy and rapidity of obtaining monitoring results with the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly favor Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell system station.
An efficient security framework for intrusion detection and prevention in int... (IJECEIAES)
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Therefore, IoT security systems must monitor and restrict unwanted events in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived for identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled training data; such complex datasets are impossible to source in a large network like the IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ... (IJECEIAES)
This research develops an incubator system that integrates the internet of things (IoT) and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. The data is then sent in real time to the IoT broker Eclipse Mosquitto using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory (LSTM) network method and displayed in a web application through an application programming interface (API) service. The experiment produced 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and the other attributes ranges from 0.23 to 0.48. Several experiments were then conducted to evaluate the model's predicted values on the test data. The best results are obtained using a two-layer LSTM configuration, each layer with 60 neurons and a lookback setting of 6. This model produces an R² value of 0.934, with a root mean square error (RMSE) of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R² value was also evaluated for each attribute used as input, with values between 0.590 and 0.845.
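The RMSE, MAE, and R² figures quoted in the abstract can be computed as follows. The sketch uses placeholder temperature-like arrays, not the incubator data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, and coefficient of determination R^2."""
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err ** 2)                          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)     # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return rmse, mae, r2

# Placeholder measured vs. predicted values (illustrative only)
y_true = np.array([36.1, 36.4, 36.8, 37.0, 36.6])
y_pred = np.array([36.0, 36.5, 36.7, 37.1, 36.6])
rmse, mae, r2 = regression_metrics(y_true, y_pred)
```

R² compares the model's residuals against a constant-mean baseline, which is why a value of 0.934 indicates the LSTM explains most of the variance in the target attribute.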
A review on internet of things-based stingless bee's honey production with im... (IJECEIAES)
Honey is produced exclusively by honeybees and stingless bees, both of which are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey with a unique flavor profile. A key problem identified is that many stingless bee colonies collapse due to weather, temperature, and environmental conditions, so it is critical to understand the relationship between stingless bee honey production and environmental conditions in order to improve honey production. Thus, this paper presents a review on stingless bee honey production and prediction modeling. About 54 previous studies have been analyzed and compared to identify the research gaps, and a framework for modeling the prediction of stingless bee honey is derived. The result presents a comparison and analysis of internet of things (IoT) monitoring systems, honey production estimation, convolutional neural networks (CNNs), and automatic identification methods for bee species. Based on image detection methods, the top three reported accuracies are a CNN at 98.67%, densely connected convolutional networks with YOLO v3 at 97.7%, and DenseNet201 at 99.81%. This study is significant in assisting researchers in developing a model for predicting stingless bee honey output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for interoper... (IJECEIAES)
The internet of things (IoT) is a revolutionary innovation in many aspects of our society, including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using the compact Contiki/Cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on the detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced the packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJ at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbers (IJECEIAES)
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers propose a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers as they provide flexibility by defining both a membership and a non-membership functions. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
The performance of artificial intelligence in prostate magnetic resonance im... (IJECEIAES)
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networks (IJECEIAES)
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The potential of these characteristics is explored by examining how well time-domain signals work in the detection of epileptic signals using the intracranial Freiburg Hospital (FH) database, the scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) database, and the Temple University Hospital (TUH) EEG corpus. To test the viability of this approach, two types of experiments were carried out: first, binary classification (preictal, ictal, or postictal, each versus interictal), and second, four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using the CHB-MIT database was 84.4%, the Freiburg database's time-domain signals gave an accuracy of 79.7%, and the highest accuracy of 94.02% was obtained on the TUH EEG database when comparing the interictal stage to the preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behavior (IJECEIAES)
Modern life is strongly associated with the use of cars, but increasing acceleration and maneuverability lead to a dangerous driving style in some drivers. Under these conditions, developing a method for tracking driver behavior is relevant. The article provides an overview of existing methods and models for assessing the operation of motor vehicles and driver behavior. On this basis, a combined algorithm for recognizing driving style is proposed. A set of input data was formed comprising 20 descriptive features about the environment, the driver's behavior, and the operating characteristics of the car, collected using OBD-II. The generated data set is fed to a Kohonen network, where clustering is performed according to driving style and degree of danger. Assigning the driving characteristics to a particular cluster makes it possible to move to the individual indicators of a particular driver and to take individual driving characteristics into account. Applying the method allows potentially dangerous driving styles to be identified, which can help prevent accidents.
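The Kohonen-network clustering step can be sketched in a few lines. This is a minimal 1-D self-organizing map on synthetic two-feature "driving" data; the grid size, learning rates, and the two synthetic styles are illustrative assumptions, not the article's 20-feature setup.

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr0=0.5, sigma0=1.0, seed=0):
    """1-D Kohonen SOM: pull the best-matching unit and its grid
    neighbours toward each sample, shrinking lr and neighbourhood."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))
    grid = np.arange(n_units)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = max(sigma0 * (1 - e / epochs), 0.3)
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
    return w

def assign(data, w):
    """Map each sample to the index of its nearest unit (its cluster)."""
    return np.array([np.argmin(np.linalg.norm(w - x, axis=1)) for x in data])

# Two synthetic "driving styles": calm (low speed/accel) vs aggressive
rng = np.random.default_rng(1)
calm = rng.normal([0.2, 0.2], 0.05, (30, 2))
aggressive = rng.normal([0.8, 0.8], 0.05, (30, 2))
w = train_som(np.vstack([calm, aggressive]))
labels = assign(np.vstack([calm, aggressive]), w)
```

After training, samples of the two styles fall on units near their respective cluster centres, which is the cluster-assignment step the article uses to flag dangerous driving.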
Hyperspectral object classification using hybrid spectral-spatial fusion and ... (IJECEIAES)
Because of its spectral-spatial and temporal resolution over greater areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion are performed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve inherent semantic object features. This study addresses these research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining the objects' intrinsic qualities. Lastly, a soft-margin kernel is proposed for a multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian Pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
TECHNICAL TRAINING MANUAL GENERAL FAMILIARIZATION COURSEDuvanRamosGarzon1
AIRCRAFT GENERAL
The Single Aisle is the most advanced family aircraft in service today, with fly-by-wire flight controls.
The A318, A319, A320 and A321 are twin-engine subsonic medium range aircraft.
The family offers a choice of engines
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Democratizing Fuzzing at Scale by Abhishek Aryaabh.arya
Presented at NUS: Fuzzing and Software Security Summer School 2024
This keynote talks about the democratization of fuzzing at scale, highlighting the collaboration between open source communities, academia, and industry to advance the field of fuzzing. It delves into the history of fuzzing, the development of scalable fuzzing platforms, and the empowerment of community-driven research. The talk will further discuss recent advancements leveraging AI/ML and offer insights into the future evolution of the fuzzing landscape.
Courier management system project report.pdfKamal Acharya
It is now-a-days very important for the people to send or receive articles like imported furniture, electronic items, gifts, business goods and the like. People depend vastly on different transport systems which mostly use the manual way of receiving and delivering the articles. There is no way to track the articles till they are received and there is no way to let the customer know what happened in transit, once he booked some articles. In such a situation, we need a system which completely computerizes the cargo activities including time to time tracking of the articles sent. This need is fulfilled by Courier Management System software which is online software for the cargo management people that enables them to receive the goods from a source and send them to a required destination and track their status from time to time.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 10, No. 3, June 2020, pp. 2375–2382
ISSN: 2088-8708, DOI: 10.11591/ijece.v10i3.pp2375-2382
Development of stereo matching algorithm based on sum of absolute RGB color differences and gradient matching
Rostam Affendi Hamzah1, M. G. Yeou Wei2, N. Syahrim Nik Anwar3
1Fakulti Teknologi Kejuruteraan Elektrik and Elektronik, Universiti Teknikal Malaysia Melaka, Malaysia
2,3Fakulti Kejuruteraan Elektrik, Universiti Teknikal Malaysia Melaka, Malaysia
Article Info
Article history:
Received Dec 19, 2017
Revised Dec 11, 2019
Accepted Dec 18, 2019
Keywords:
Bilateral filtering
Computer vision
Gradient matching
Stereo matching
Stereo vision
ABSTRACT
This article presents a local-based stereo matching algorithm which comprises the development of an algorithm using block matching and two edge-preserving filters in the framework. Fundamentally, the matching process consists of several stages which produce the disparity or depth map. The most challenging problem in the matching process is to obtain accurate corresponding points between two images. Hence, this article proposes an algorithm for stereo matching using an improved Sum of Absolute RGB Differences (SAD), gradient matching and an edge-preserving filter, namely the Bilateral Filter (BF), to increase the accuracy. The SAD and gradient matching are implemented at the first stage to obtain the preliminary corresponding result; the BF then works as an edge-preserving filter to remove the noise from the first stage. A second BF is used at the last stage to improve the final disparity map and sharpen the object boundaries. The experimental analysis and validation use the Middlebury standard benchmarking evaluation system. Based on the results, the proposed work is capable of increasing the accuracy and preserving the object edges. To show that the proposed work is competitive with currently available methods, quantitative measurements were made against other existing methods, and they show that the work in this article performs much better.
Copyright © 2020 Institute of Advanced Engineering and Science.
All rights reserved.
Corresponding Author:
Rostam Affendi Hamzah,
Fakulti Teknologi Kejuruteraan Elektrik and Elektronik,
Universiti Teknikal Malaysia Melaka,
Malaysia.
Email: rostamaffendi@utem.edu.my
1. INTRODUCTION
Computer vision is an interdisciplinary field that comprises methods for acquiring, processing, analyzing and understanding digital images or videos. It is a branch of artificial intelligence that mimics the human visual system. Stereo vision is a part of it, and the process of extracting information such as object detection, recognition and depth estimation is called stereo matching. This process starts with corresponding one point on the reference image to another point on the target image. There can be two or more images; in this article, the images come from a stereo camera input and are also known as stereo images. The matching algorithm in the matching process produces a disparity map. This map contains depth information which is valuable for many applications such as virtual reality [1], 3D surface reconstruction [2], face recognition [3] and robotics automation [4-5]. The stereo baseline can be set up at a wide or short baseline [6] distance, depending on the application. To determine the range or distance, a triangulation function is applied to each pixel of the disparity map. Therefore, to get an accurate result, the matching process requires a complex and challenging solution for depth or distance estimation.
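The triangulation step described above can be sketched as follows. This is a minimal illustration, not the article's implementation; the focal length f and baseline B are hypothetical camera parameters chosen only for the example:

```python
import numpy as np

def depth_from_disparity(disparity, f=700.0, B=0.16):
    """Triangulate depth Z = f * B / d for every valid pixel of a disparity map.

    f (focal length in pixels) and B (baseline in meters) are hypothetical
    camera parameters; pixels with zero disparity are marked invalid.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)   # invalid pixels -> infinite depth
    valid = disparity > 0
    depth[valid] = f * B / disparity[valid]
    return depth

# A pixel with disparity 56 at f=700 px, B=0.16 m lies at about 2 m;
# zero disparity has no valid triangulation.
print(depth_from_disparity([[56.0, 0.0]]))
```

Larger disparities map to closer objects, which is why the lighter (larger) disparity values in the result figures correspond to objects nearer the camera.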
It requires a precise function at each stage of the proposed framework. Fundamentally, a matching algorithm consists of multiple stages, as proposed by Szeliski and Scharstein [7]. In the first stage, the matching cost computation produces the preliminary matching points of the stereo images. In the second stage, filtering is utilized to reduce the preliminary noise from the first stage. Then, the disparity selection and optimization stage normalizes the disparity value of each pixel on the image. The last stage refines the final result and is also known as the disparity map post-processing step.
Journal homepage: http://ijece.iaescore.com/index.php/IJECE
In stereo matching development, there are two major approaches to building the algorithm framework: local methods, as published in [8-10], and global methods [11]. Most local methods use local properties or local contents through window-based techniques such as fixed windows as implemented in [12-13], adaptive windows [14], convolutional neural networks [15] and multiple windows [16]. Commonly, the Winner-Takes-All (WTA) strategy is applied for local-based optimization; it has low computational complexity and fast execution time [17-19]. A local method such as the one implemented in [20] used a plane fitting technique to increase the accuracy at the final stage. This method, also known as RANSAC, works efficiently on low-textured areas. However, errors still occurred on the object edges, and the method requires several iterations for the plane fitting process; a wrong number of iterations affects the results. Commonly, local methods show fast running times but low accuracy on the edges due to improper selection of window sizes. Hence, getting an accurate result with the local approach is a challenge for researchers.
Another approach in stereo matching for producing the disparity map is global optimization. Fundamentally, this method uses an energy-based function known as a Markov Random Field (MRF). Global optimization methods such as Belief Propagation (BP) [21] and Graph Cut (GC) [22] produce accurate results. The calculation for each pixel of interest requires the energy of all pixels in the disparity map: neighboring or nearby pixels are calculated using maximum flow, and the selection is made based on the minimum cut-off energy on the disparity map. Algorithms implemented with a global optimization approach normally involve a high computational requirement due to the calculation and absorption of all pixels' energy. Global methods also involve iterations, which increase the execution time of each disparity map reconstruction. This article aims to produce accurate results that are competitive with some established methods. The first function or stage is implemented using an improved Sum of Absolute Differences (SAD) [23] with gradient matching. Then, the second stage utilizes an edge-preserving filter known as the Bilateral Filter (BF) [24]. This filter is capable of removing noise while preserving object edges. The third stage is optimization based on the WTA strategy. At the last stage, the BF is applied once again to remove unwanted or remaining invalid pixels. The BF is also capable of increasing the accuracy at object boundaries.
2. RESEARCH METHOD
The diagram of the proposed work is displayed in Figure 1. The stereo matching algorithm starts with STEP 1 to get the preliminary disparity map. An improved SAD is proposed, in which a weighting technique is used in the block matching process. The combination of the improved SAD with gradient matching in this article should increase the effectiveness and accuracy of the corresponding process. Then, at STEP 2, the BF is utilized to reduce the noise and preserve the object edges. The BF efficiently removes noise in low-texture regions while sharpening the object boundaries. The optimization at STEP 3 uses the WTA strategy, which normalizes the floating-point numbers and selects the minimum disparity values on the disparity map. The final stage at STEP 4 also uses the BF, but applied to the disparity values. This filter is a type of nonlinear filter and is capable of improving the final disparity map.
Figure 1. A flowchart of the proposed algorithm.
Int J Elec & Comp Eng, Vol. 10, No. 3, June 2020 : 2375 – 2382
2.1. Matching cost computation
The first stage of the proposed framework uses the weighted SAD. The preliminary disparity map is produced at this stage. Hence, a robust function must be used to increase the effectiveness of the disparity map, and the matching problems at this stage in the low-texture regions must be kept to a minimum. A weight is added to the SAD to improve the values in the low-texture regions. Thus, the consistency of the weight in the low-texture regions makes the matching process accurate and reduces mismatched or invalid pixels. The RGB values are used with the weighted sum of intensity differences between the right image Ir and left image Il, which is given by (1):
$$SAD(x, y, d) = \frac{1}{W} \sum_{(x,y) \in w} \sum_{i} \left| I_l^i(x, y) - I_r^i(x - d, y) \right| \qquad (1)$$
where (x, y) are the coordinates of the pixel of interest, d represents the disparity value, W is the proposed weight, i indexes the RGB channels and w represents the kernel of the SAD algorithm. The second part is the gradient matching component. It contains the magnitude differences between the two images. Two directions need to be calculated for these gradient differences: the vertical direction Gy and the horizontal direction Gx, with the equations given by (2) and (3):
$$G_y = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} * I_m \qquad (2)$$
$$G_x = \begin{bmatrix} 1 & 0 & -1 \end{bmatrix} * I_m \qquad (3)$$
where Im is the input image and ∗ represents the convolution operation in the gradient matching. From Gx and Gy, the gradient magnitude m is given by (4):
$$m = \sqrt{G_x^2 + G_y^2} \qquad (4)$$
The gradient matching kernel G(x, y, d) is given by (5):
$$G(x, y, d) = \left| m_l(x, y) - m_r(x - d, y) \right| \qquad (5)$$
The matching cost function at this stage is given by (6), where the cost volumes SAD(x, y, d) and G(x, y, d) are combined:
$$MC(x, y, d) = SAD(x, y, d) + G(x, y, d) \qquad (6)$$
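The cost computation of (1)-(6) can be sketched as below. This is a simplified illustration, not the authors' implementation: the weight W is an arbitrary placeholder value, and the window sum over w in (1) is left to a later aggregation step, so the function returns per-pixel costs for a single disparity d:

```python
import numpy as np

def matching_cost(left, right, d, W=9.0):
    """Per-pixel matching cost MC(x, y, d) = SAD(x, y, d) + G(x, y, d).

    left/right are H x W x 3 float RGB images; d shifts the right image
    to evaluate I_r(x - d, y). W is an illustrative weight, not the
    article's tuned setting.
    """
    shifted = np.roll(right, d, axis=1)              # I_r(x - d, y)
    sad = np.abs(left - shifted).sum(axis=2) / W     # eq. (1), summed over RGB

    def grad_mag(img):
        gray = img.mean(axis=2)
        gx = np.zeros_like(gray)
        gy = np.zeros_like(gray)
        gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]     # [1 0 -1] kernel, eq. (3)
        gy[1:-1, :] = gray[2:, :] - gray[:-2, :]     # [1 0 -1]^T kernel, eq. (2)
        return np.sqrt(gx**2 + gy**2)                # gradient magnitude, eq. (4)

    g = np.abs(grad_mag(left) - grad_mag(shifted))   # eq. (5)
    return sad + g                                   # eq. (6)
```

Evaluating this for every candidate disparity d builds the cost volume that the aggregation and WTA stages consume.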
2.2. Cost aggregation
This second stage filters the preliminary disparity map from stage one. Normally the preliminary disparity map contains high noise, which must be removed; some invalid and uncertain pixels are created during the matching process. Hence, at this stage the filter must be robust, capable of removing the high noise of invalid pixels while preserving the object boundaries. The BF is used because it strongly preserves object edges and at the same time efficiently removes high noise, especially in plain-color and low-texture regions. The BF function used in this article is given by (7):
$$W_{BF}(p, q) = \sum_{q \in w_B} \exp\left(-\frac{|p - q|^2}{\sigma_s^2}\right) \exp\left(-\frac{|I_p - I_q|^2}{\sigma_c^2}\right) \qquad (7)$$
where p is the location of the pixel of interest at (x, y), and wB and q are the window of the BF and the neighboring pixels, respectively. σs denotes the spatial adjustment factor and σc the similarity factor for color detection. |p − q| is the spatial Euclidean distance and |Ip − Iq| denotes the Euclidean distance in color space. Hence, (8) is the cost aggregation function of the BF with the matching cost computation as input:
$$C(p, d) = W_{BF}(p, q)\, MC(p, d) \qquad (8)$$
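The bilateral weighting of (7)-(8) can be sketched as follows. This is a naive, unoptimized illustration assuming a single-channel guidance image; the window radius and the normalization by the weight sum are illustrative choices, not the article's exact formulation:

```python
import numpy as np

def bilateral_aggregate(cost, guide, radius=2, sigma_s=17.0, sigma_c=0.4):
    """Aggregate one cost slice MC(., d) with bilateral weights, eqs. (7)-(8).

    cost  : H x W matching cost for one disparity d
    guide : H x W intensity image driving the color (range) term
    """
    H, W = cost.shape
    out = np.zeros_like(cost)
    for y in range(H):
        for x in range(W):
            num, den = 0.0, 0.0
            for qy in range(max(0, y - radius), min(H, y + radius + 1)):
                for qx in range(max(0, x - radius), min(W, x + radius + 1)):
                    # spatial term: penalize distant neighbors
                    spatial = np.exp(-((y - qy)**2 + (x - qx)**2) / sigma_s**2)
                    # color term: penalize neighbors across intensity edges
                    color = np.exp(-((guide[y, x] - guide[qy, qx])**2) / sigma_c**2)
                    w = spatial * color
                    num += w * cost[qy, qx]
                    den += w
            out[y, x] = num / den
    return out
```

Because the color term collapses the weight of neighbors across an intensity edge, costs are smoothed within objects but not across their boundaries, which is why the filter preserves object edges.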
Development of stereo matching algorithm based on... (Rostam Affendi Hamzah)
2.3. Disparity optimization
This stage optimizes the disparity values on the disparity map. The normalization is based on the minimum disparity value among the floating-point numbers, for which the WTA is selected in this article. The WTA is commonly used in local-based methods due to its fast implementation. The WTA function is given by (9):
$$d_{x,y} = \underset{d \in D}{\arg\min}\; C(p, d) \qquad (9)$$
where D represents the set of valid disparity values for an image and C(p, d) denotes the second-stage aggregation result. Fundamentally, after this stage the disparity map still contains noise or invalid pixels. Thus, this map needs to be improved, and the last stage removes the remaining noise.
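The WTA selection of (9) reduces to an arg-min over the aggregated cost volume; a minimal sketch, assuming the volume is stacked with one cost slice per disparity:

```python
import numpy as np

def wta_disparity(cost_volume):
    """Winner-Takes-All selection, eq. (9): for each pixel, pick the
    disparity index with the minimum aggregated cost C(p, d).

    cost_volume is a D x H x W array (one cost slice per disparity).
    """
    return np.argmin(cost_volume, axis=0)

# Toy volume with D=3 disparities over a 1 x 2 image:
vol = np.array([[[5.0, 1.0]],
                [[2.0, 4.0]],
                [[9.0, 7.0]]])
print(wta_disparity(vol))   # -> [[1 0]]
```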
2.4. Disparity refinement
The last stage of the algorithm framework is known as the refinement or post-processing stage. It has several consecutive processes, starting with handling the occluded regions, then filling the invalid pixels and filtering the final disparity map. A left-right consistency checking process is conducted to identify occluded areas and invalid pixels. Then, these invalid pixels are restored with valid pixel values through the filling process. Artifacts and unwanted pixels are removed using the BF, which at the same time preserves the object boundaries. The BF smooths the final disparity map as indicated by (7).
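The left-right consistency check above can be sketched as follows. This is an illustrative version: the tolerance and border handling are assumptions, not the article's exact refinement parameters:

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, tol=1):
    """Flag occluded/invalid pixels by left-right consistency checking.

    A pixel (x, y) of the left disparity map is consistent when the
    right-image disparity at the matched location x - d agrees with d
    within tol; inconsistent pixels are candidates for the filling step.
    """
    H, W = disp_left.shape
    mask = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            d = int(round(disp_left[y, x]))
            if 0 <= x - d < W and abs(disp_left[y, x] - disp_right[y, x - d]) <= tol:
                mask[y, x] = True   # survives the check; the rest get filled
    return mask
```

Pixels that fail the check (typically occluded regions visible in only one view) are then replaced with nearby valid disparities before the final BF pass.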
3. RESULT AND ANALYSIS
This section explains the disparity map results, which are represented by color-scale intensity. The different color tones show the objects mapped based on the disparity values and the distance to the sensor (i.e., the stereo camera); a lighter intensity indicates that the object is closer to the sensor. The experimental analysis was executed on a personal computer with Windows 10, a 3.2 GHz processor and 8 GB RAM. The input images are from the Middlebury stereo evaluation dataset [24], which contains 15 standard images, and the results must be submitted online. These images are very complex, and each image has different characteristics and properties such as light settings, object depth, incoherent regions, different resolutions and low-texture areas. The values of {w, σs, σc, wB} are {9x9, 17, 0.4, 11x11}.
Figure 2 shows a sample Jadeplant image (i.e., left and right) from the Middlebury training dataset with different brightness and high contrast. Due to the brightness difference, these input images are very challenging to match: the same corresponding point contains different pixel values. However, the proposed algorithm correctly discovers the disparity locations; the levels of the disparity contour are precisely assigned and object distances are well recognized. Figure 3 shows the final disparity map results of the 15 training images from the Middlebury dataset. The accuracy attributes for error evaluation are nonocc (non-occluded) and all error. The nonocc error is evaluated on the non-occluded regions of the disparity map, while the all error evaluates all pixels of the disparity map. Within these 15 images, the Pipes and Jadeplant images are the most difficult to match; they comprise several pipes and leaves of different sizes, respectively. Yet, the proposed algorithm can reconstruct an almost accurate disparity map with clear discontinuity regions. Fundamentally, real images from Middlebury are difficult and very challenging for obtaining an accurate corresponding point. The dataset was developed to test the robustness of an algorithm where the same corresponding point may contain different pixel values. Additionally, each image has different characteristics such as plain-color objects, shadows, discontinuity regions and occluded areas.
Referring to Figure 3, the disparity maps of low-texture surfaces such as Motorcycle, MotorcycleP, Playtable and PlaytableP are well recreated with different depths and disparity contours. Other regions difficult to match are plain-color objects and shadows, such as in the ArtL, Recycle, Piano and PianoL images. These regions consist of similar pixel values, and the possibility of wrong matching is very high. The disparity maps from the proposed work display almost accurate matching for these images, showing that the proposed work obtains correct matching pixels in these regions and is robust against plain-color areas. The quantitative measurements from the Middlebury online results are given in Tables 1 and 2. These results are produced by the Middlebury online benchmarking evaluation system with the two error attributes explained above. Some established methods are also included in these tables to show the competitiveness of the proposed work. Overall, an average error measurement is assessed to rank the best results. In Table 1, the proposed method is ranked at the top of the table with 6.11%, and in Table 2 with 9.15%. This shows that the proposed work is competitive with other recently published methods and can be used as a complete algorithm. The proposed work ranks at the top compared to [15-17,19,25,26] for the nonocc error. The weighted average error is 6.11%, where the lowest errors are produced for the Jadepl, Playrm and Vintge images. For the all error attribute in Table 2, the proposed work produces 9.15%, which is the lowest average error. This shows that the proposed work in this article is competitive with some established methods.
Figure 2. Different brightness and contrast of the input Jadeplant stereo images. An accurate disparity map is produced by the proposed work compared to the result without it. The plant structures are clearly displayed and a smooth disparity map can be observed
Figure 3. These final disparity images are from the Middlebury dataset, with the results tabulated in Table 1 and Table 2
Table 1. The nonocc error results using the Middlebury dataset, compared with other published methods
Algorithms Adiron ArtL Jadepl Motor MotorE Piano PianoL Pipes Playrm Playt PlayP Recyc Shelvs Teddy Vintge Weight Ave
Proposed Algorithm 3.66 4.23 19.54 3.11 3.87 5.01 11.02 6.32 5.11 24.85 5.10 3.33 7.52 2.21 7.90 6.11
SNCC [19] 2.89 4.05 18.10 2.68 2.52 3.52 7.08 6.14 5.64 45.40 3.13 2.90 7.59 1.58 13.50 6.97
ELAS [26] 3.09 4.72 29.70 3.28 3.29 4.30 8.31 5.61 6.00 21.80 2.84 3.09 9.00 2.36 10.90 7.22
MPSV [15] 3.83 6.00 19.70 5.85 5.53 5.68 34.30 9.59 5.86 15.30 4.20 4.59 13.00 3.70 14.30 8.81
ADSM [16] 13.30 6.10 15.00 3.67 5.67 7.08 20.60 6.57 13.20 23.10 3.55 5.76 17.20 3.05 10.10 8.95
DoGGuided [17] 15.20 9.57 27.10 5.64 8.31 8.09 32.40 9.67 14.00 24.50 5.32 5.56 16.20 4.15 15.00 12.00
BSM [25] 7.27 11.40 30.50 6.67 6.52 10.80 32.10 10.50 12.50 24.40 12.80 7.42 16.40 4.88 32.80 13.40
Table 2. The all error results using the Middlebury dataset. These comparisons show the competitiveness of
the proposed method
Algorithms Adiron ArtL Jadepl Motor MotorE Piano PianoL Pipes Playrm Playt PlayP Recyc Shelvs Teddy Vintge Weight Ave
Proposed Algorithm 4.74 7.45 27.21 5.57 5.54 6.01 11.64 11.77 7.31 27.05 8.54 3.86 9.21 3.33 8.87 9.15
SNCC [19] 3.63 6.78 39.80 5.12 5.11 4.65 8.23 11.80 8.05 45.60 4.36 3.29 8.10 2.55 14.80 10.40
ELAS [26] 4.08 7.18 52.80 5.39 5.45 4.96 9.00 10.70 7.94 23.20 3.83 3.78 9.46 3.34 11.60 10.60
ADSM [16] 14.30 10.60 34.10 6.00 8.00 7.37 20.40 12.10 16.90 25.50 5.84 5.83 17.20 4.11 11.10 12.30
MPSV [15] 5.87 9.43 40.20 9.11 8.80 7.03 34.20 15.80 8.58 16.90 5.89 6.78 13.70 4.82 16.80 12.70
DoGGuided [17] 20.10 28.00 56.50 13.80 16.80 13.40 37.30 23.80 30.30 30.80 13.00 9.13 19.00 13.40 23.60 22.30
BSM [25] 12.70 28.70 58.70 14.80 14.70 16.00 35.80 24.50 29.40 31.00 20.20 12.10 19.20 14.30 39.30 23.50
To verify the capability of the proposed algorithm, images from the KITTI dataset [27] were also tested. These images are more difficult and challenging to match: they contain complex edges and structures such as shadows, plain-color surfaces, high contrast and brightness differences, and large untextured regions. The experimental results are shown in Figure 4. The disparity map results show accurate disparity value estimation in grayscale. For reference, the signage, a cyclist, trees and cars are well reconstructed with correct disparity levels. This shows that the proposed work is capable of handling difficult stereo images from a real environment.
Figure 4. The disparity map results from the KITTI training dataset. This article utilizes images #000004_10 to #000007_10
4. CONCLUSION
In this work, the combination of the RGB-color-based SAD algorithm and gradient matching produces accurate results. At the second stage, an edge-preserving filter is utilized: the BF at the aggregation stage is capable of filtering high noise and conserving the object boundaries of the preliminary disparity map. The WTA strategy was implemented at the optimization stage to normalize the floating-point numbers to the disparity values. A second edge-preserving filter was used at the last stage of the proposed work, using the same BF. This nonlinear filter removed the remaining noise and increased the quality of the final disparity map. Overall, the edge-preserving filters used in the proposed framework were able to remove noise, especially in the low-texture regions, and to preserve the object edges, as shown in Figure 2. The quantitative measurements from the standard Middlebury benchmarking system also demonstrated the low average errors produced by the proposed framework: 6.11% and 9.15% for the non-occluded and all pixel errors, respectively. The training images are shown in Figure 3. On real images from KITTI, the proposed work also demonstrated accurate results.
ACKNOWLEDGEMENT
This project was sponsored by a grant from the Universiti Teknikal Malaysia Melaka with the Number:
JURNAL/2018/FTK/Q00008.
REFERENCES
[1] S. F. Gani, et al., ”Development of portable automatic number plate recognition (ANPR) system on Rasp-
berry Pi,” International Journal of Electrical and Computer Engineering, vol. 9, pp. 1805–1813, 2019.
[2] R. A. Hamzah, et al., ”Stereo Matching Algorithm for 3D Surface Reconstruction Based on Triangulation
Principle,” in International Conference on Information Technology, Information Systems and Electrical
Engineering (ICITISEE), , 2016, pp. 119–124, 2016.
[3] S.F. Gani, et al., ”Development of a portable community video surveillance system, International Journal
of Electrical & Computer Engineering, vol. 9, pp. 1814–1821, 2019.
[4] D Scharstein, et al., ”A Taxonomy and Evaluation of Dense Two-frame Stereo Correspondence Algo-
rithms,” in Stereo and Multi-Baseline Vision, 2001.(SMBV 2001). Proceedings. IEEE Workshop on, 2001,
pp. 131–140.
[5] R. Christian, et al., ”Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras,” IEEE Trans-
action on Circuit and Systems for Video Technology, vol. 26, pp. 1–18, 2016.
[6] Q. Yang, ”A Non-local cost aggregation Method for Stereo Matching,” in IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), 2012, pp. 1402–1409.
[7] H. Asmaa, et al., ”Fast Cost-volume Filtering for Visual Correspondence and Beyond,” IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 35, pp. 504–511, 2015.
[8] H. Ibrahim, et al., ”Stereo matching algorithm based on illumination control to improve the accuracy,”
Image Analysis & Stereology, vol. 35, pp. 39–52, 2016.
[9] L. Qian, et al., ”Stereo Matching Algorithm Based on Ground Control Points Using Graph Cut,” in Image
and Signal Processing (CISP), 2014 7th International Congress on, 2014, pp. 503–508.
[10] W. Sih-Sian, et al., ”Efficient Hardware Architecture for Large Disparity Range Stereo Matching Based
on Belief Propagation,” in Signal Processing Systems (SiPS), 2016 IEEE International Workshop on, 2016,
pp. 236–241.
[11] H. Hirschm¨uller, et al., ”Real-time Correlation-based Stereo Vision with Reduced Border Errors, Interna-
tional Journal of Computer Vision, vol. 47, pp. 1-3, 2002.
[12] K. Jedrzej, et al., ”Real-time Stereo Matching on CUDA Using an Iterative Refinement Method for Adap-
tive Support-weight Correspondences,” IEEE Transactions on Circuits and Systems for Video Technology,
vol. 23, pp. 94–104, 2013.
[13] Q. Yang, et al., ”Fast Stereo Matching Using Adaptive Guided Filtering,” Image and Vision Computing,
vol. 32, pp. 202–211, 2014.
[14] J. Zbontar and Y. LeCun, ”Computing The Stereo Matching Cost with a Convolution Neural Network,”
in IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1592–1599.
[15] B. Jean-Charles et al., ”Morphological Processing of Stereoscopic Image Superimpositions for Disparity
Map Estimation,” Hal-01330139, pp. 1–17, 2016.
[16] M. Bing et al., ”Accurate Dense Stereo Matching Based on Image Segmentation Using an Adaptive
Multi-Cost Approach, Symmetry, vol. 8, p. 159,2016.
[17] K. Masamichi, et al., ”High Accuracy Local Stereo Matching Using DoG Scale Map,”in Fifteenth IAPR
International Conference on Machine Vision Applications,2017, pp. 258–261.
[18] W. Hu, et al., ”Virtual Support Windows for Adaptive-weight Stereo Matching,” in Visual Communica-
tions and Image Processing (VCIP), 2011 IEEE, 2011, pp. 1–4.
[19] E. Nils and E. Julian, ”Anisotropic Median Filtering for Stereo Disparity Map Refinement.,” in VISAPP
(2), 2013, pp. 189–198.
[20] K.A.A. Aziz, et al., ”A pixel to pixel correspondence and region of interest in stereo vision application”,
in 2012 IEEE Symposium on Computers & Informatics (ISCI), pp. 193–197, 2012.
[21] M.G.Y. Wei, et al., ”Stereo matching based on absolute differences for multiple objects detection,”
Telkomnika, vol. 17, pp. 261-267, 2019.
[22] A. H. Hasan, et al., "Disparity Mapping for Navigation of Stereo Vision Autonomous Guided Vehicle," in Soft Computing and Pattern Recognition, 2009. SOCPAR'09. International Conference of, 2009, pp. 575–579.
[23] R. A. Hamzah, et al., "Visualization of Image Distortion on Camera Calibration for Stereo Vision Application," in Control System, Computing and Engineering (ICCSCE), 2012 IEEE International Conference on, 2012, pp. 28–33.
[24] S. Daniel and S. Richard, "Middlebury Stereo Evaluation - Version 3," http://vision.middlebury.edu/stereo/eval/references, accessed November 2019.
[25] K. Zhang, et al., "Binary Stereo Matching," in Pattern Recognition (ICPR), 2012 21st International Conference on, 2012, pp. 356–359.
[26] A. Geiger, et al., "Efficient Large-scale Stereo Matching," in Asian Conference on Computer Vision, 2010, pp. 25–38.
[27] M. Menze and G. Andreas, "Object Scene Flow for Autonomous Vehicles," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3061–3070.
BIOGRAPHIES OF AUTHORS
Rostam Affendi Hamzah graduated from Universiti Teknologi Malaysia, where he received his B.Eng. in Electronic Engineering. He received his M.Sc. in Electronic System Design Engineering from Universiti Sains Malaysia in 2010 and his PhD in Electronic Imaging from Universiti Sains Malaysia in 2017. He is currently a lecturer at Universiti Teknikal Malaysia Melaka, teaching digital electronics, digital image processing, and embedded systems.
Melvin Gan Yeou Wei was born in 1992 and graduated from Universiti Teknikal Malaysia Melaka, where he received his B.Eng. in Electrical Engineering in 2017. He is currently pursuing a Master's degree at Universiti Teknikal Malaysia Melaka.
Nik Syahrim Nik Anwar was born in 1981 and graduated from the University of Applied Sciences Heilbronn, Germany, where he received his Diplom in Mechatronik und Mikrosystemtechnik (Mechatronics) in 2006. In 2010 he received his M.Sc. in Mechatronics from the University of Applied Sciences Aachen, Germany, and in 2018 he received his PhD in Electrical Engineering from Universiti Sains Malaysia. He is currently a lecturer at Universiti Teknikal Malaysia Melaka, teaching electrical and mechatronics subjects.
Int J Elec & Comp Eng, Vol. 10, No. 3, June 2020 : 2375 – 2382