The block matching algorithm (BMA) for motion estimation (ME) is at the heart of many motion-compensated video-coding techniques and standards, such as ISO MPEG-1/2/4 and ITU-T H.261/262/263/264/265, where it reduces the temporal redundancy between frames. Over the last three decades,
hundreds of fast block matching algorithms have been proposed. The shape and size of the search pattern strongly influence both search speed and prediction quality. This article provides an overview of the well-known block matching algorithms and compares their computational complexity and motion prediction quality.
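As background for the comparisons above: the matching cost used by virtually all of these block matching algorithms is the sum of absolute differences (SAD) between a block in the current frame and a candidate block in the reference frame. A minimal NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences: the standard block-matching cost."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

cur = np.arange(16).reshape(4, 4)   # 4x4 block from the current frame
ref = cur.copy()                    # identical candidate in the reference frame
print(sad(cur, ref))                # a perfect match has SAD 0
print(sad(cur, ref + 1))            # every pixel off by 1 -> SAD 16
```

Fast algorithms differ only in *which* candidate blocks they evaluate with this cost, not in the cost itself.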
Motion Estimation Technique for Real-Time Compressed Video Transmission - IJERA Editor
Motion Estimation is one of the most critical modules in a typical digital video encoder, and many implementation tradeoffs must be considered while designing such a module. ME can be defined as part of the inter-coding technique. Inter coding refers to a mechanism of finding the correlation between two frames (still images) that are close to each other in order of occurrence: one frame is called the reference frame and the other the current frame, and the information encoded is a function of this correlation rather than the frame itself. This paper focuses on block matching algorithms, which come under feature/region matching. Block motion estimation algorithms are widely adopted by video coding standards, mainly due to their simplicity and good distortion performance. In full search (FS), every candidate point is evaluated, so predicting suitable motion vectors takes considerable time. Based on this drawback, the adaptive pattern described here is proposed.
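The exhaustive full search (FS) criticised above simply evaluates the matching cost at every displacement inside a +/-p search window. A sketch of that baseline, assuming a SAD cost (names and window size are illustrative):

```python
import numpy as np

def full_search(cur_block, ref_frame, top, left, p=2):
    """Exhaustive FS: evaluate every candidate displacement in a +/-p window
    around (top, left) and return the minimum-SAD motion vector."""
    n = cur_block.shape[0]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue  # candidate falls outside the reference frame
            cand = ref_frame[y:y + n, x:x + n]
            cost = int(np.abs(cur_block.astype(int) - cand.astype(int)).sum())
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost

# A block whose true displacement from (2, 2) is (1, 1):
ref = np.zeros((8, 8), dtype=int)
ref[3:7, 3:7] = np.arange(16).reshape(4, 4)
mv, cost = full_search(ref[3:7, 3:7], ref, top=2, left=2)
print(mv, cost)  # (1, 1) 0
```

FS tests all (2p+1)^2 candidates per block; fast algorithms such as the adaptive patterns above exist precisely to avoid this cost.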
Optimization is proposed at algorithm and code level for both encoder and decoder, to make real-time H.264/AVC video encoding/decoding feasible on mobile devices for mobile multimedia applications. For the encoder, an improved motion estimation algorithm based on hexagonal pattern search is proposed, exploiting the temporal redundancy of a video sequence. For the decoder, a memory access minimization scheme is proposed at code level, and a fast interpolation scheme at algorithm level.
Robust foreground modelling to segment and detect multiple moving objects in ... - IJECEIAES
The last decade has witnessed an ever-increasing number of video surveillance installations due to the rise of security concerns worldwide. With this comes the need for video analysis for fraud detection, crime investigation, and traffic monitoring, to name a few. For any kind of video analysis application, detection of moving objects in videos is a fundamental step. In this paper, an efficient foreground modelling method to segment multiple moving objects is implemented. The proposed method significantly reduces noise, accurately segmenting the region of interest under dynamic conditions while handling occlusion to a large extent. Extensive performance analysis shows that the proposed method gives far better results than the de facto standard as well as relatively new approaches to moving object detection.
An Efficient Block Matching Algorithm Using Logical Image - IJERA Editor
Motion estimation, which has been widely used in various image sequence coding schemes, plays a key role in the transmission and storage of video signals at reduced bit rates. There are two classes of motion estimation methods: block matching algorithms (BMA) and pel-recursive algorithms (PRA). Due to their implementation simplicity, block matching algorithms have been widely adopted by various video coding standards such as CCITT H.261, ITU-T H.263, and MPEG. In BMA, the current image frame is partitioned into fixed-size rectangular blocks. The motion vector for each block is estimated by finding the best matching block of pixels within the search window in the previous frame according to a matching criterion. The goal of this work is to find a fast method for motion estimation and motion segmentation using the proposed model. Communication between endpoints today is facilitated by developments in wired and wireless networks, and transmitting large data files over a limited-bandwidth channel is a challenge. Block matching algorithms are very useful in achieving efficient and acceptable compression; the block matching algorithm determines the total computation cost and the effective bit budget. Different approaches can be followed to obtain motion estimation efficiently, but the above constraints should be kept in mind. This paper presents a novel method using three step and diamond algorithms with a modified search pattern based on a logical image for block-based motion estimation. The proposed algorithm improves the PSNR value while achieving better (faster) computation time than the original Three Step Search (3SS/TSS) method. Experimental results on a number of video sequences are presented to demonstrate the advantages of the proposed motion estimation technique.
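The Three Step Search baseline that such papers improve on can be sketched as: probe the centre and its eight neighbours at a coarse step, move the centre to the best point, halve the step, and repeat (a minimal sketch with a SAD cost; names are illustrative):

```python
import numpy as np

def sad(a, b):
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def three_step_search(cur_block, ref_frame, top, left, step=4):
    """Classic TSS: test the centre and its 8 neighbours at the current step,
    move the centre to the best point, halve the step, repeat down to step 1."""
    n = cur_block.shape[0]
    cy, cx = top, left
    while step >= 1:
        best = None
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                    continue  # probe outside the frame
                cost = sad(cur_block, ref_frame[y:y + n, x:x + n])
                if best is None or cost < best[0]:
                    best = (cost, y, x)
        _, cy, cx = best
        step //= 2
    return (cy - top, cx - left)

# A 4x4 block displaced by (2, 2) from the search start at (4, 4):
ref = np.zeros((16, 16), dtype=int)
ref[6:10, 6:10] = np.arange(1, 17).reshape(4, 4)
cur = ref[6:10, 6:10]
print(three_step_search(cur, ref, 4, 4))  # (2, 2)
```

TSS checks at most 9 + 8 + 8 = 25 points instead of the 225 an FS over a +/-7 window would check, which is the speed/quality trade-off these papers tune.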
A Novel Approach of Caching Direct Mapping using Cubic Approach - Kartik Asati
Hi, I worked on this research paper during my master's degree (MCA) and successfully presented it at the 6th International Conference on "Science Engineering Technology (SET)" in May 2013 at VIT University.
New Data Association Technique for Target Tracking in Dense Clutter Environme... - CSCJournals
Improving the data association process by increasing the probability of detecting valid data points (measurements obtained from a radar/sonar system) in the presence of noise for target tracking is discussed in this manuscript. We develop a novel gate-filtering algorithm for target tracking in dense clutter environments. This algorithm is less sensitive to false alarms (clutter) within the gate than conventional approaches such as the probabilistic data association filter (PDAF), whose data association begins to fail as the false alarm rate increases or the probability of target detection drops. The new filtered-gate selection method combines a conventional threshold-based algorithm with a geometric distance measure, building on the idea of adaptive clutter suppression. An adaptive search based on the distance threshold is then used to detect valid filtered data points for target tracking. Simulation results demonstrate the effectiveness and better performance of the method compared to the conventional algorithm.
Molecular dynamics (MD) is a very useful tool to understand various phenomena in atomistic detail. In MD, we can overcome the size- and time-scale problems by efficient parallelization. In this lecture, I’ll explain various parallelization methods of MD with some examples of GENESIS MD software optimization on Fugaku.
SQUASHED JPEG IMAGE COMPRESSION VIA SPARSE MATRIX - ijcsit
Image compression is needed to store and transmit digital images in the least memory space and bandwidth. Image compression refers to the process of minimizing the image size by removing redundant data bits in a manner such that the quality of the image is not degraded; it reduces the image size without reducing its quality. This paper attempts to enhance basic JPEG compression by reducing the image size further. The proposed technique amends the conventional run length coding for JPEG (Joint Photographic Experts Group) image compression using the concept of a sparse matrix. In this algorithm, the redundant data is completely eliminated, leaving the quality of the image unaltered. The JPEG standard specifies three steps: discrete cosine transform, quantization, followed by entropy coding. The proposed work aims at enhancing the third step, entropy coding.
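The paper's exact entropy-coding amendment is not reproduced here, but the underlying sparse-matrix idea, storing only the nonzero quantised DCT coefficients of a block, can be sketched as a hypothetical round trip (names are illustrative, not the authors' codec):

```python
import numpy as np

def to_sparse(block):
    """Keep only nonzero quantised coefficients as (row, col, value) triples."""
    rows, cols = np.nonzero(block)
    return list(zip(rows.tolist(), cols.tolist(), block[rows, cols].tolist()))

def from_sparse(triples, shape):
    """Rebuild the dense coefficient block from the sparse triples."""
    out = np.zeros(shape, dtype=int)
    for r, c, v in triples:
        out[r, c] = v
    return out

# A typical post-quantisation 8x8 DCT block: a few nonzeros, the rest zero.
q = np.zeros((8, 8), dtype=int)
q[0, 0], q[0, 1], q[1, 0] = 42, -3, 5
enc = to_sparse(q)
print(len(enc))  # 3 triples instead of 64 stored values
```

Because quantisation zeroes out most high-frequency coefficients, the sparse form is far smaller than the dense block, and the reconstruction is lossless on the coefficients themselves.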
In this paper, we propose a novel fast video search algorithm for large video databases. The Histogram of Oriented Gradients (HOG) has been reported to apply reliably to object detection, especially pedestrian detection. We use HOG-based features as the feature vector of a frame image in this study. Combined with active search, a temporal pruning algorithm, fast and robust video search can be achieved. The proposed search algorithm has been evaluated on 6 hours of video, searching for 200 given video clips, each 15 seconds long. Experimental results show that the proposed algorithm detects similar video clips more accurately, and is more robust against Gaussian noise, than a conventional fast video search algorithm.
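A HOG-style frame descriptor of the kind this search builds on can be sketched as a gradient-orientation histogram weighted by gradient magnitude. This is a single global histogram for brevity, not the full cell/block-normalised HOG of the literature:

```python
import numpy as np

def hog_vector(img, bins=9):
    """Coarse HOG-style descriptor: one unsigned-orientation histogram for the
    whole frame, weighted by gradient magnitude, L1-normalised."""
    img = img.astype(float)
    gy, gx = np.gradient(img)                      # per-pixel gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    s = hist.sum()
    return hist / s if s else hist

frame = np.tile(np.arange(16.0), (16, 1))  # pure horizontal intensity ramp
v = hog_vector(frame)
print(v)  # all mass falls in the 0-degree bin
```

Comparing such vectors frame-by-frame (e.g. by histogram distance) is what lets active search prune temporal regions that cannot contain the query clip.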
An Effective Implementation of Configurable Motion Estimation Architecture fo... - ijsrd.com
This project introduces a configurable motion estimation architecture for a wide range of fast block-matching algorithms (BMAs). Contemporary motion estimation architectures are either too rigid for multiple BMAs, or their flexibility comes at the cost of reduced performance. In block-based motion estimation, a block-matching algorithm (BMA) searches for the best matching block for the current macroblock from the reference frame. During the search, the checking point yielding the minimum block distortion (MBD) determines the displacement of the best matching block.
Fast Full Search for Block Matching Algorithms - ijsrd.com
This project introduces a configurable motion estimation architecture for a wide range of fast block-matching algorithms (BMAs). Contemporary motion estimation architectures are either too rigid for multiple BMAs, or their flexibility comes at the cost of reduced performance. In block-based motion estimation, a block-matching algorithm (BMA) searches for the best matching block for the current macroblock from the reference frame. During the search, the checking point yielding the minimum block distortion (MBD) determines the displacement of the best matching block.
A Three-Point Directional Search Block Matching Algorithm - Yayah Zakaria
This paper proposes compact directional asymmetric search patterns, named three-point directional search (TDS). In most fast search motion estimation algorithms, a symmetric search pattern is set at the minimum block distortion point at each step of the search. The design of the symmetric pattern in these algorithms relies primarily on the assumption that convergence is equally likely in every direction with respect to the search center. The monotonic property of real-world video sequences is therefore not properly exploited by these algorithms. The strategy of TDS is to keep searching for the minimum block distortion point in the most probable directions, unlike previous fast search motion estimation algorithms, which check all directions. The proposed method therefore significantly reduces the number of search points for locating a motion vector. Compared to conventional fast algorithms, the proposed method has the fastest search speed and the most satisfactory PSNR values for all test sequences.
A New Cross Diamond Search Motion Estimation Algorithm for HEVC - IJERA Editor
In this project, a novel approach for motion estimation is proposed. Several block matching algorithms exist for motion estimation; here, a new cross diamond search algorithm is implemented that uses fewer search points than diamond search, thereby reducing the computational complexity. The performance of the algorithm is compared with other algorithms in terms of search points. The algorithm achieves performance close to that of three step search and diamond search, while using fewer logic elements and exhibiting lower delay and power dissipation.
Problem Decomposition and Information Minimization for the Global, Concurrent... - gerogepatton
This piece of research introduces a purely data-driven, directly reconfigurable, divide-and-conquer on-line
monitoring (OLM) methodology for automatically selecting the minimum number of neutron detectors
(NDs) – and corresponding neutron noise signals (NSs) – which are currently necessary, as well as
sufficient, for inspecting the entire nuclear reactor (NR) in-core area. The proposed implementation builds
upon the 3-tuple configuration, according to which three sufficiently pairwise-correlated NSs are capable
of on-line (I) verifying each NS of the 3-tuple and (II) endorsing correct functioning of each corresponding
ND, implemented herein via straightforward pairwise comparisons of fixed-length sliding time-windows
(STWs) between the three NSs of the 3-tuple. A pressurized water NR (PWR) model – developed for H2020
CORTEX – is used for deriving the optimal ND/NS configuration, where (i) the evident partitioning of the
36 NDs/NSs into six clusters of six NDs/NSs each, and (ii) the high cross-correlations (CCs) within every
3-tuple of NSs, endorse the use of a constant pair comprising the two most highly CC-ed NSs per cluster as
the first two members of the 3-tuple, with the third member being each remaining NS of the cluster, in turn,
thereby computationally streamlining OLM without compromising the identification of either deviating NSs
or malfunctioning NDs. Tests on the in-core dataset of the PWR model demonstrate the potential of the
proposed methodology in terms of suitability for, efficiency at, as well as robustness in ND/NS selection,
further establishing the “directly reconfigurable” property of the proposed approach at every point in time
while using one-third only of the original NDs/NSs.
Optimized linear spatial filters implemented in FPGA - IOSRJVSP
Linear spatial filters (LSF) are used for filtering digital images for purposes such as blurring, noise reduction and detail enhancement. Realizing LSF confronts the fundamental problem of the large number of operations needed for their computation. This paper describes an approach for optimizing LSF by utilizing parallel algorithms and implementing them in hardware on FPGA. A model and an algorithm based on partial sums, aimed at calculating the filtered pixels, are presented. Criteria for comparing the different types of linear filters are defined. A schematic diagram of an FPGA-based DSP operational block is shown, with VHDL used for the hardware design. Studies comparing the partial-sums and non-partial-sums methods of filtering were conducted; the methods employing partial sums reduce the number of operations to the size of the window (3, 5, 7, ...). These FPGA-based LSF are suitable for applications using threshold detection, edge detection or image detail enhancement.
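The partial-sums idea can be illustrated in software with an integral image: once the running sums are built, each k x k window sum costs a constant number of additions and subtractions regardless of k. This is a software sketch of the principle, not the paper's FPGA design:

```python
import numpy as np

def box_filter_partial_sums(img, k=3):
    """k x k mean filter via an integral image: each output pixel costs
    3 adds/subs instead of k*k. Border pixels are left at 0 for brevity."""
    img = img.astype(float)
    # Integral image with a zero border: ii[y, x] = sum of img[:y, :x].
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    h, w = img.shape
    r = k // 2
    out = np.zeros_like(img)
    for y in range(r, h - r):
        for x in range(r, w - r):
            s = (ii[y + r + 1, x + r + 1] - ii[y - r, x + r + 1]
                 - ii[y + r + 1, x - r] + ii[y - r, x - r])
            out[y, x] = s / (k * k)
    return out

flat = np.full((5, 5), 4.0)
print(box_filter_partial_sums(flat)[2, 2])  # mean of a constant region: 4.0
```

The same window-sum decomposition is what makes the operation count independent of window size in the hardware design.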
New Chaotic Substation and Permutation Method for Image Encryption - tayseer Karam alshekly
A new chaotic substitution and permutation method for image encryption is introduced, based on a combination of a block cipher and chaotic maps. The new algorithm encrypts and decrypts blocks of 500 bytes. Each block is first permuted using the hyper-chaotic map, and the result is then substituted using the 1D Bernoulli map. Finally, the resulting block is XORed with the key block. The proposed cipher was subjected to a number of tests, namely security analysis (key space and key sensitivity analysis) and statistical attack analysis (histogram, correlation, differential attack and information entropy). All results show that the proposed encryption scheme is secure because of its large key space and its high sensitivity to the cipher keys and plain images.
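The permute-substitute-XOR structure described can be sketched with the 1D Bernoulli (doubling) map driving both the byte permutation and the keystream. This is a toy illustration on small blocks; the paper's hyper-chaotic permutation map, 500-byte block size and exact parameters are not reproduced here:

```python
def bernoulli_stream(x, n):
    """Iterate the 1D Bernoulli (doubling) map x -> 2x mod 1; one byte per step."""
    out = []
    for _ in range(n):
        x = (2.0 * x) % 1.0
        out.append(int(x * 256) % 256)
    return out

def encrypt(block, x0, key):
    n = len(block)
    # Permutation stage: reorder byte positions by ranking a chaotic sequence.
    seq = bernoulli_stream(x0, n)
    perm = sorted(range(n), key=lambda i: seq[i])
    permuted = [block[p] for p in perm]
    # Substitution stage (Bernoulli keystream), then XOR with the key block.
    ks = bernoulli_stream(x0 + 0.1, n)
    return [(b ^ s) ^ k for b, s, k in zip(permuted, ks, key)]

def decrypt(cipher, x0, key):
    n = len(cipher)
    # The receiver re-derives the same permutation and keystream from (x0, key).
    seq = bernoulli_stream(x0, n)
    perm = sorted(range(n), key=lambda i: seq[i])
    ks = bernoulli_stream(x0 + 0.1, n)
    permuted = [(c ^ k) ^ s for c, s, k in zip(cipher, ks, key)]
    plain = [0] * n
    for i, p in enumerate(perm):
        plain[p] = permuted[i]  # undo the permutation
    return plain

block = list(range(8))
key = [0x5A] * 8
cipher = encrypt(block, 0.37, key)
```

The initial condition x0 acts as part of the key: tiny changes in x0 yield a different permutation and keystream, which is the sensitivity property the abstract's tests measure.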
All optical image processing using third harmonic generation for image correl... - M. Faisal Halim
Term Paper: All optical image processing using third harmonic generation for image correlation
Optical Information Processing Course
Monday, 20th December, 2010
Fast Computational Four-Neighborhood Search Algorithm For Block matching Moti... - IJERA Editor
Motion estimation is an effective method for removing the temporal redundancy found in video sequence compression. Block matching algorithms have been widely used in motion estimation, and a number of fast algorithms have been proposed to reduce the computational complexity of BMA. In this paper we propose a new search strategy for fast block matching based on Four-Neighborhood Search (FNS) and a fast computational strategy. The new algorithm significantly speeds up block matching by reducing both the number of checked points and the computation time. Results show that 89% to 93% of operations can be saved while maintaining video quality relative to the full search algorithm.
High Performance Architecture for Full Search Block matching Algorithm - iosrjce
Video compression has two major issues to be handled: compression rate and quality, and there is always a trade-off between speed and quality. The full search block matching algorithm (FSBMA) is the most popular motion estimation algorithm, but its high computational complexity is its major challenge, which makes FSBMA very difficult to use for real-time video processing on low-power batteries. Other algorithms give better speed at the expense of video quality. The proposed algorithm, the modified full search block matching algorithm (MFSBMA), reduces the computational complexity while keeping the PSNR the same as FSBMA. MFSBMA skips the SAD calculations for background macroblocks in the current frame and performs them only for foreground macroblocks, reducing SAD calculations drastically. This work presents a pipelined architecture for MFSBMA that can handle real-time HDTV video processing. The proposed algorithm reduces computational complexity by 50% while keeping the PSNR the same as the full search algorithm.
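The background-skipping idea can be sketched as: compare each current macroblock with its co-located reference block first, and only run the expensive search when that quick check fails. This is a simplified software sketch with a SAD cost and a full-search fallback; the threshold and names are illustrative, not the paper's architecture:

```python
import numpy as np

def sad(a, b):
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def estimate_mv(cur_block, ref_frame, top, left, p=2, bg_thresh=0):
    """If the co-located reference block already matches (static background),
    skip the search and return (0, 0); otherwise run an exhaustive search.
    Returns (motion_vector, searched_flag)."""
    n = cur_block.shape[0]
    colocated = ref_frame[top:top + n, left:left + n]
    if sad(cur_block, colocated) <= bg_thresh:
        return (0, 0), 0          # background: all SAD work skipped
    best, best_mv = None, (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue
            cost = sad(cur_block, ref_frame[y:y + n, x:x + n])
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv, 1             # foreground: full search was run

ref = np.zeros((8, 8), dtype=int)
ref[2:6, 2:6] = np.arange(16).reshape(4, 4)
cur_block = ref[2:6, 2:6]
print(estimate_mv(cur_block, ref, 2, 2))  # ((0, 0), 0): static, search skipped
print(estimate_mv(cur_block, ref, 1, 1))  # ((1, 1), 1): moved, search needed
```

Since surveillance and HDTV scenes are largely static, most macroblocks take the cheap branch, which is where the reported computation saving comes from.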
Different Approach of VIDEO Compression Technique: A Study - Editor IJCATR
The main objective of video compression is to achieve compression with the fewest possible losses, to reduce transmission bandwidth and storage memory. This paper discusses different approaches to video compression for better transmission of video frames in multimedia applications. Video compression methods such as the frame difference approach, the PCA-based method, the accordion function, fuzzy concepts, EZW and FSBM are analyzed. These methods are compared for performance, speed and accuracy, and for which produces better visual quality.
Optimization of Macro Block Size for Adaptive Rood Pattern Search Block Match... - IJERA Editor
In the area of video compression, motion estimation is one of the most important modules and plays an important role in the design and implementation of any video encoder. It consumes more than 85% of video encoding time due to the search for a candidate block in the search window of the reference frame. Various block matching methods have been developed to minimize the search time. In this context, Adaptive Rood Pattern Search is one of the least expensive block matching methods and is widely accepted for better motion estimation in video data processing. In this paper we propose to optimize the macroblock size used in the adaptive rood pattern search method to improve motion estimation.
Fast Motion Estimation for Quad-Tree Based Video Coder Using Normalized Cross... - CSCJournals
Motion estimation is the most challenging and time-consuming stage in a block-based video codec. To reduce the computation time, many fast motion estimation algorithms have been proposed and implemented. This paper proposes a quad-tree based Normalized Cross Correlation (NCC) measure for obtaining estimates of inter-frame motion. The measure operates in the frequency domain, using the FFT algorithm, as the similarity measure for an exhaustive full search in the region of interest. NCC is a more suitable similarity measure than Sum of Absolute Differences (SAD) for reducing temporal redundancy in video compression, since it yields a flatter residual after motion compensation. The degrees of homogeneity and stationarity of regions are determined by selecting a suitable initial fixed threshold for block partitioning. Experimental results show that the proposed method yields significantly fewer motion vectors than existing methods, with marginal effect on the quality of the reconstructed frame. It also gives a higher speed-up ratio for both fixed-block and quad-tree based motion estimation.
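The zero-mean NCC similarity the paper prefers over SAD can be sketched directly in the spatial domain; unlike SAD, it is invariant to affine brightness changes between frames (the paper evaluates it in the frequency domain via FFT, which this sketch does not do):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalised cross-correlation between two equal-size blocks.
    Returns a value in [-1, 1]; 1 means a perfect (affine) match."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

block = np.arange(16.0).reshape(4, 4)
print(ncc(block, block))          # identical blocks: correlation near 1
print(ncc(block, 2 * block + 5))  # gain/offset change: still near 1
```

SAD would report a large mismatch for the brightened block even though the motion is identical, which is why NCC gives flatter residuals under illumination change.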
In this paper, we propose a novel fast video search algorithm for large video databases. Histogram of Oriented Gradients (HOG) has been reported to apply reliably to object detection, especially pedestrian detection. We use HOG-based features as the feature vector of a frame image in this study. Combined with active search, a temporal pruning algorithm, fast and robust video search can be achieved. The proposed search algorithm has been evaluated on 6 hours of video, searching for 200 given video clips, each 15 seconds long. Experimental results show that the proposed algorithm detects similar video clips more accurately, and is more robust against Gaussian noise, than the conventional fast video search algorithm.
An Effective Implementation of Configurable Motion Estimation Architecture fo...ijsrd.com
This project introduces configurable motion estimation architecture for a wide range of fast block-matching algorithms (BMAs). Contemporary motion estimation architectures are either too rigid for multiple BMAs or the flexibility in them is implemented at the cost of reduced performance. In block-based motion estimation, a block-matching algorithm (BMA) searches for the best matching block for the current macro block from the reference frame. During the searching procedure, the checking point yielding the minimum block distortion (MBD) determines the displacement of the best matching block.
Fast Full Search for Block Matching Algorithmsijsrd.com
This project introduces configurable motion estimation architecture for a wide range of fast block-matching algorithms (BMAs). Contemporary motion estimation architectures are either too rigid for multiple BMAs or the flexibility in them is implemented at the cost of reduced performance. In block-based motion estimation, a block-matching algorithm (BMA) searches for the best matching block for the current macro block from the reference frame. During the searching procedure, the checking point yielding the minimum block distortion (MBD) determines the displacement of the best matching block.
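The full-search procedure over checking points described above can be sketched as follows (a minimal illustration using SAD as the block distortion measure; the block size and search range ±p are assumed parameters, not taken from the paper):

```python
import numpy as np

def full_search(cur_block, ref_frame, top, left, p=4):
    """Exhaustive block matching: evaluate every checking point in a
    (2p+1) x (2p+1) window and return the displacement that yields the
    minimum block distortion (MBD), measured here by SAD."""
    n = cur_block.shape[0]
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            y, x = top + dy, left + dx
            # skip candidates that fall outside the reference frame
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue
            cand = ref_frame[y:y + n, x:x + n]
            sad = np.abs(cur_block.astype(int) - cand.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

Every one of the (2p+1)² candidates is evaluated, which is exactly the cost that fast BMAs and hardware architectures try to reduce.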
A Three-Point Directional Search Block Matching Algorithm Yayah Zakaria
This paper proposes compact directional asymmetric search patterns, named three-point directional search (TDS). In most fast search motion estimation algorithms, a symmetric search pattern is set at the minimum block distortion point at each step of the search. The design of the symmetric pattern in these algorithms relies primarily on the assumption that convergence is equally likely in every direction with respect to the search center. Therefore, the monotonic property of real-world video sequences is not properly exploited by these algorithms. The strategy of TDS is to keep searching for the minimum block distortion point in the most probable directions, unlike previous fast search motion estimation algorithms, where all directions are checked. Therefore, the proposed method significantly reduces the number of search points for locating a motion vector. Compared to conventional fast algorithms, the proposed method has the fastest search speed and the most satisfactory PSNR values for all test sequences.
A New Cross Diamond Search Motion Estimation Algorithm for HEVCIJERA Editor
In this project, a novel approach to motion estimation is proposed. A few block matching algorithms exist for motion estimation. A new cross diamond search algorithm is implemented which, compared to diamond search, uses fewer search points, reducing the computational complexity. The performance of the algorithm is compared with other algorithms in terms of search points. This algorithm achieves performance close to that of three step search and diamond search. Compared to the other algorithms, cross diamond search uses fewer logic elements and has lower delay and power dissipation.
Problem Decomposition and Information Minimization for the Global, Concurrent...gerogepatton
This piece of research introduces a purely data-driven, directly reconfigurable, divide-and-conquer on-line
monitoring (OLM) methodology for automatically selecting the minimum number of neutron detectors
(NDs) – and corresponding neutron noise signals (NSs) – which are currently necessary, as well as
sufficient, for inspecting the entire nuclear reactor (NR) in-core area. The proposed implementation builds
upon the 3-tuple configuration, according to which three sufficiently pairwise-correlated NSs are capable
of on-line (I) verifying each NS of the 3-tuple and (II) endorsing correct functioning of each corresponding
ND, implemented herein via straightforward pairwise comparisons of fixed-length sliding time-windows
(STWs) between the three NSs of the 3-tuple. A pressurized water NR (PWR) model – developed for H2020
CORTEX – is used for deriving the optimal ND/NS configuration, where (i) the evident partitioning of the
36 NDs/NSs into six clusters of six NDs/NSs each, and (ii) the high cross-correlations (CCs) within every
3-tuple of NSs, endorse the use of a constant pair comprising the two most highly CC-ed NSs per cluster as
the first two members of the 3-tuple, with the third member being each remaining NS of the cluster, in turn,
thereby computationally streamlining OLM without compromising the identification of either deviating NSs
or malfunctioning NDs. Tests on the in-core dataset of the PWR model demonstrate the potential of the
proposed methodology in terms of suitability for, efficiency at, as well as robustness in ND/NS selection,
further establishing the “directly reconfigurable” property of the proposed approach at every point in time
while using one-third only of the original NDs/NSs.
Optimized linear spatial filters implemented in FPGAIOSRJVSP
Linear spatial filters (LSF) are used for filtering digital images for purposes such as blurring, noise reduction, and detail enhancement. The realization of LSF faces the central problem of the large number of operations needed for their computation. This paper describes an approach for optimizing LSF by utilizing parallel algorithms and their hardware implementation on FPGA. A model and an algorithm based on partial sums, aimed at calculating the filtered pixels, are presented. Criteria for comparing the different types of linear filters are defined. A schematic diagram of an FPGA-based DSP operational block is shown. VHDL is utilized for the hardware design. Studies comparing the partial-sums and non-partial-sums based methods of filtering were conducted. It is ascertained that the methods employing partial sums reduce the number of operations to the size of the window (3, 5, 7, ...). These FPGA-based LSF are suitable for applications using threshold detection, edge detection, or image detail enhancement.
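The partial-sums idea can be illustrated in software with a summed-area table (a generic sketch of the principle, not the paper's VHDL/FPGA design): each output pixel of a box (mean) filter then costs four table lookups, independent of the window size.

```python
import numpy as np

def box_filter_partial_sums(img, k):
    """Mean filter of window k x k via a summed-area table.
    Per-pixel cost is four lookups regardless of k."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # integral image with a zero row/column prepended
    I = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    I[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    # window sum for pixel (i, j) = I[i+k, j+k] - I[i, j+k] - I[i+k, j] + I[i, j]
    sums = I[k:k + h, k:k + w] - I[:h, k:k + w] - I[k:k + h, :w] + I[:h, :w]
    return sums / (k * k)
```

The naive filter needs k² additions per pixel; the partial-sums form needs a constant number, mirroring the operation-count reduction the paper reports.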
New Chaotic Substitution and Permutation Method for Image Encryption tayseer Karam alshekly
A new chaotic substitution and permutation method for image encryption is introduced, based on a combination of a block cipher and chaotic maps. The new algorithm encrypts and decrypts blocks of 500 bytes. Each block is first permuted using a hyper-chaotic map, and the result is then substituted using a 1D Bernoulli map. Finally, the resulting block is XORed with the key block. The proposed cipher was subjected to a number of tests, namely security analysis (key space analysis and key sensitivity analysis) and statistical attack analysis (histogram, correlation, differential attack, and information entropy). All results show that the proposed encryption scheme is secure because of its large key space and its high sensitivity to the cipher keys and plain-images.
All optical image processing using third harmonic generation for image correl...M. Faisal Halim
Term Paper: All optical image processing using third harmonic generation for image correlation
Optical Information Processing Course
Monday, 20th December, 2010
Fast Computational Four-Neighborhood Search Algorithm For Block matching Moti...IJERA Editor
Motion estimation is an effective method for removing the temporal redundancy found in video sequence compression. Block matching algorithms have been widely used in motion estimation, and a number of fast algorithms have been proposed to reduce the computational complexity of BMA. In this paper, we propose a new search strategy for fast block matching based on Four-Neighborhood Search (FNS) and a fast computational strategy; this new algorithm can significantly speed up block matching by reducing the number of checked points and the computation time. Results show that 89% to 93% of operations can be saved while maintaining video quality relative to the full search algorithm.
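A four-neighbourhood descent of the kind the abstract names can be sketched as follows (a generic small-pattern search, assuming the cost function is the block distortion, e.g. SAD, at a candidate displacement; the paper's additional fast-computation strategy is not reproduced here):

```python
def four_neighborhood_search(cost, start=(0, 0), p=7):
    """Move repeatedly to the cheapest of the four axis neighbours
    (up/down/left/right) until the centre is the local minimum.
    Returns the motion vector and the number of distortion evaluations
    (a real implementation would cache repeated points)."""
    y, x = start
    evals = 0
    while True:
        best = (cost(y, x), y, x)
        evals += 1
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if abs(ny) <= p and abs(nx) <= p:   # stay in the search window
                c = cost(ny, nx)
                evals += 1
                if c < best[0]:
                    best = (c, ny, nx)
        if (best[1], best[2]) == (y, x):        # centre already the minimum
            return (y, x), evals
        y, x = best[1], best[2]
```

On a unimodal distortion surface this checks only a small fraction of the (2p+1)² points a full search would evaluate, which is the source of the operation savings such algorithms report.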
High Performance Architecture for Full Search Block matching Algorithmiosrjce
Video compression has two major issues to be handled: one is the compression rate, the other is quality. There is always a trade-off between speed and quality. The full search block matching algorithm (FSBMA) is the most popular motion estimation algorithm, but its high computational complexity is a major challenge. This makes FSBMA very difficult to use for real-time video processing on low-power battery devices. Other algorithms give better speed at the expense of video quality. The proposed algorithm, a modified full search block matching algorithm (MFSBMA), reduces the computational complexity while keeping the PSNR the same as FSBMA. MFSBMA skips the SAD calculations for background macroblocks in the current frame and performs them only for foreground macroblocks, which reduces the number of SAD calculations drastically. This work presents a pipelined architecture for MFSBMA that can handle real-time HDTV video processing. The proposed algorithm reduces computational complexity by 50% while keeping the PSNR the same as the full search algorithm.
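The background-skipping step of MFSBMA can be sketched as follows (a simplified classification with an assumed frame-difference threshold; the paper's exact foreground test may differ):

```python
import numpy as np

def is_foreground(cur_block, prev_block, thresh=8):
    """MFSBMA-style block classification (threshold is an assumption):
    a macroblock whose mean absolute frame difference is small is treated
    as background, keeps the zero motion vector, and skips the SAD search;
    only blocks returning True undergo the expensive block matching."""
    diff = np.abs(cur_block.astype(int) - prev_block.astype(int)).mean()
    return diff > thresh
```

Since static scenes are dominated by unchanged background blocks, gating the SAD search this way is what yields the large reduction in calculations without changing the PSNR of the matched foreground blocks.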
Different Approach of VIDEO Compression Technique: A StudyEditor IJCATR
The main objective of video compression is to compress video with the least possible loss, reducing transmission bandwidth and storage memory. This paper discusses different approaches to video compression for better transmission of video frames in multimedia applications. Video compression methods such as the frame difference approach, PCA-based methods, the accordion function, fuzzy concepts, EZW, and FSBM are analyzed. These methods are compared for performance, speed, accuracy, and which produces better visual quality.
A novel sketch based face recognition in unconstrained video for criminal inv...IJECEIAES
Face recognition in video surveillance helps to identify an individual by comparing the facial features of a given photograph or sketch with a video for criminal investigations. Generally, a face sketch is used by the police when the suspect's photo is not available. Manually matching a facial sketch with the suspect's image in a long video is a tedious and time-consuming task. To overcome these drawbacks, this paper proposes an accurate face recognition technique to recognize a person based on his sketch in unconstrained video surveillance. In the proposed method, a surveillance video and a sketch of the suspect are taken as input. First, the input video is converted into frames and summarized using the proposed quality-indexed three-step cross search algorithm. Next, faces are detected by the proposed modified Viola-Jones algorithm. Then, the necessary features are selected using the proposed salp-cat optimization algorithm. Finally, these features are fused with scale-invariant feature transform (SIFT) features, and the Euclidean distance is computed between the feature vectors of the sketch and each face in the video. The face in the video with the lowest Euclidean distance to the query sketch is considered the suspect's face. The proposed method's performance is analyzed on the Chokepoint dataset, and the system works efficiently with 89.02% precision, 91.25% recall, and 90.13% F-measure.
SVD BASED LATENT SEMANTIC INDEXING WITH USE OF THE GPU COMPUTATIONSijscmcj
The purpose of this article is to determine the usefulness of Graphics Processing Unit (GPU) computations for implementing the Latent Semantic Indexing (LSI) reduction of the term-by-document matrix. The considered reduction of the matrix is based on the SVD (Singular Value Decomposition). The high computational complexity of the SVD decomposition, O(n^3), makes the reduction of a large indexing structure a difficult task. This article compares the time complexity and accuracy of the algorithms implemented in two different environments. The first environment is associated with the CPU and MATLAB R2011a; the second with graphics processors and the CULA library. The calculations were carried out on generally available benchmark matrices, which were combined to obtain a resulting matrix of large size. For both environments, computations were performed for double- and single-precision data.
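The LSI reduction via truncated SVD can be sketched as follows (a minimal NumPy illustration of the idea, not the CULA/GPU implementation the article benchmarks):

```python
import numpy as np

def lsi_reduce(term_doc, k):
    """Rank-k LSI reduction of a term-by-document matrix via truncated SVD.
    Returns the documents projected into the k-dimensional latent space
    and the best rank-k approximation of the original matrix."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    doc_vecs = np.diag(s[:k]) @ Vt[:k]   # shape (k, n_docs): latent document vectors
    approx = U[:, :k] @ doc_vecs         # rank-k reconstruction of term_doc
    return doc_vecs, approx
```

Queries are then compared against `doc_vecs` in the k-dimensional space instead of the full term space; the SVD itself is the O(n³) step that motivates offloading to the GPU.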
A hybrid bacterial foraging and modified particle swarm optimization for mode...IJECEIAES
This paper studies the model reduction procedures used for reducing large-scale dynamic models into smaller ones through differential and algebraic equations. A confirmed relevance between these two models exists, and they show the same characteristics under study. These reduction procedures are generally utilized for mitigating computational complexity, facilitating system analysis, and thus reducing time and costs. This paper presents a study showing the impact of combining Bacterial Foraging (BF) and Modified Particle Swarm Optimization (MPSO) for the reduced order model (ROM). The proposed hybrid algorithm (BF-MPSO) is comprehensively compared with the BF and MPSO algorithms; a comparison is also made with selected existing techniques.
This paper presents a nature-inspired algorithm modeled on the big bang theory of evolution. The algorithm is simple with regard to the number of parameters. Embedded systems are powered by batteries, and extending the operating time of the battery by reducing power consumption is vital. Embedded systems consume power while accessing memory during their operation. An efficient method for power management is proposed in this work. The proposed method reduces the energy consumption of memories by 76% to 98% compared to other methods reported in the literature.
Improved Error Detection and Data Recovery Architecture for Motion Estimation...IJERA Editor
Given the critical role of motion estimation (ME) in a video coder, testing such a module is of priority concern. Focusing on the testing of ME in a video coding system, this work presents an error detection and data recovery (EDDR) design, based on the proposed residue-and-quotient (RQ) code, to embed into ME for video coding testing applications. An error in the processing elements (PEs), the key components of an ME, can be detected and recovered effectively by using the proposed EDDR design. Experimental results indicate that the proposed EDDR design for ME testing can detect errors and recover data with an acceptable area overhead and timing penalty. Importantly, the proposed EDDR design performs satisfactorily in terms of throughput and reliability for ME testing applications.
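The residue-and-quotient idea can be sketched as follows (a minimal software illustration with an assumed modulus of 64; the paper embeds equivalent checking circuitry alongside the PEs in hardware):

```python
def rq_encode(x, m=64):
    """Residue-and-quotient (RQ) code: carry x as the pair (q, r)
    with x = q*m + r, so the pair redundantly encodes the operand."""
    return divmod(x, m)   # (quotient, residue)

def rq_check(x, q, r, m=64):
    """Detect an error in x by re-deriving its RQ pair, and recover
    the original value from the stored (q, r) when they disagree."""
    ok = divmod(x, m) == (q, r)
    recovered = q * m + r
    return ok, recovered
```

A mismatch between the recomputed pair and the stored pair flags a faulty PE output, and the stored pair alone is enough to rebuild the correct value, which is the detection-plus-recovery behaviour the EDDR design provides.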
Improving the Proactive Routing Protocol using Depth First Iterative Deepenin...Yayah Zakaria
Owing to their wireless and mobile nature, nodes in a mobile ad hoc network are often not within transmission range of each other and need to transfer data through multiple intermediate nodes. Opportunistic data forwarding is a promising solution that makes use of the broadcast nature of wireless communication links, but in the absence of source routing capability in an efficient proactive routing protocol, it is not widely used. To rectify this problem, we propose a memory- and routing-efficient proactive routing protocol using Depth-First Iterative Deepening and a hello messaging scheme. This protocol can conserve the topology information in every node in the network. In the experimental analysis and discussion, we implemented the proposed work using the NS2 simulator and showed that the proposed technique performs well in terms of average delay, buffer usage, and throughput.
Improvement at Network Planning using Heuristic Algorithm to Minimize Cost of...Yayah Zakaria
Wireless Mesh Networks (WMN) consist of wireless stations that are connected to each other in a semi-static configuration. Depending on the configuration of a WMN, different paths between nodes offer different levels of efficiency. One area of research in WMN is cost minimization. A Modified Binary Particle Swarm Optimization (MBPSO) approach was used to optimize cost. However, minimized cost does not guarantee network performance. This paper thus modifies the minimization function to take into consideration the distance between the different nodes, enabling better performance while maintaining cost balance. The results were positive, with the PDR showing an approximate increase of 17.83% and the E2E delay an approximate decrease of 8.33%.
A Vertical Handover Algorithm in Integrated Macrocell Femtocell Networks Yayah Zakaria
The explosion in wireless telecommunication technologies has led to a huge increase in the number of mobile users. The greater dependency on mobile devices has raised users' expectations to always remain best connected: users desire good signal strength even at certain black spots and indoors. Moreover, the exponential growth in the number of mobile devices has overloaded macrocells. Femtocells have emerged as a promising solution for complete indoor coverage and for offloading macrocells. Therefore, a new handover strategy between femtocells and macrocells is proposed in this paper. The proposed handover algorithm is mainly based on calculating the equivalent received signal strength along with a dynamic margin for performing handover. The simulation results of the proposed algorithm are compared with the traditional algorithm. The proposed strategy shows improvement in two major performance parameters, namely reduction in unnecessary handovers and Packet Loss Ratio. The quantitative analysis further shows 55.27% and 23.03% reductions in packet loss ratio and 61.85% and 36.78% reductions in unnecessary handovers at speeds of 120 km/h and 30 km/h respectively. Moreover, the proposed algorithm proves to be an efficient solution for both slow- and fast-moving vehicles.
Symmetric Key based Encryption and Decryption using Lissajous Curve EquationsYayah Zakaria
The sender and receiver both use two large, similar prime numbers and parametric equations for exchanging the values of kx and ky; the product of kx and ky forms the common secret key. The generated secret key is used for encryption and decryption using an ASCII key matrix of order 16×16, applying Playfair rules for encryption and decryption. Playfair is a digraph substitution cipher that makes use of pairs of letters. This application makes use of all ASCII characters, which makes a brute force attack impractical.
Gain Flatness and Noise Figure Optimization of C-Band EDFA in 16-channels WDM...Yayah Zakaria
In this paper, the gain flatness and noise figure of an Erbium Doped Fiber Amplifier (EDFA) are investigated in 16-channel Wavelength Division Multiplexing (WDM). A Fiber Bragg Grating (FBG) is used in the C-band with the aim of achieving a flat EDFA output gain. The proposed model has been studied in detail to evaluate and enhance the performance of the transmission system in terms of gain, noise figure, and eye diagram of the received signals. To that end, various design parameters have been investigated and optimized, such as frequency spacing, EDF length, and temperature. To enhance the transmission system performance in terms of gain flatness, a Gain Flattening Filter (GFF) has been introduced in the design. To prove the efficiency of the new design, the optical transmission system with optimized design parameters has been compared with previous works in the literature. The simulation results show satisfactory performance with quasi-equalized gain for each channel of the WDM transmission system.
In this paper, a partial H-plane band-pass waveguide filter utilizing a novel resonant structure comprising a metal window along with metal posts is proposed to reduce the filter size. The metal windows and posts are implemented transversely in partial H-plane waveguides, which have one-quarter the cross-section size of conventional waveguides in the same frequency range. A partial H-plane band-pass waveguide filter with the proposed resonant structures has a considerably shorter longitudinal length than conventional partial H-plane filters, so it reduces both the cross-section size and the total length of the filter compared to conventional H-plane filters in the same frequency range. In the presented design procedure, the size and shape of each metal window and the metal posts are determined by fitting the transfer function of the proposed resonant structure to that of a desired one, obtained from a suitable equivalent circuit model. The design process is based on optimization using the electromagnetic simulator software HFSS. A partial H-plane band-pass filter has been designed and simulated to verify the usefulness and performance of the design method.
Vehicular Ad Hoc Networks: Growth and Survey for Three Layers Yayah Zakaria
A vehicular ad hoc network (VANET) is a mobile ad hoc network that allows wireless communication between vehicles, as well as between vehicles and roadside equipment. Communication between vehicles promotes safety and reliability, and can be a source of entertainment. We investigated the historical development, characteristics, and application fields of VANET and briefly introduce them in this study. Advantages and disadvantages are discussed based on our analysis and comparison of various classes of MAC and routing protocols applied to VANET. Ideas and breakthrough directions for inter-vehicle communication designs are proposed based on the characteristics of VANET. This article also illustrates in detail the physical, MAC, and network layers, which represent the three layers of VANET. The main works of the research institutes active on VANET are introduced to help researchers track related advanced research achievements on the subject.
Barriers and Challenges to Telecardiology Adoption in Malaysia Context Yayah Zakaria
Mainly in infrastructure-deficient communities, telecardiology is considered a complement to insufficient cardiac care. Telecardiology can reduce travelling and waiting time, enable information sharing in a shorter time, and facilitate care in rural and remote areas. A qualitative study examined the perspectives of health care providers (cardiologists and general physicians) and health care service receivers (patients and the public) towards telecardiology adoption. The barriers to telecardiology adoption identified in this paper include the practicality of telecardiology, the need to educate staff and administrators, ease of use, a preference for face-to-face consultation, cost, and confidentiality. Based on this study, implementers can make improvements in order to promote telecardiology successfully in Malaysia.
Novel High-Gain Narrowband Waveguide-Fed Filtenna using Genetic Algorithm Yayah Zakaria
A filtenna is an antenna with a filtering feature. There are many ways to design a filtenna. In this paper, a high-gain narrowband waveguide-fed aperture filtenna is proposed and designed. A patterned plane designed using a genetic algorithm is used at the open end of the waveguide feed, mounted on a conducting ground plane. To design the patterned plane, the magnetic field integral equation of the structure is derived and solved using the method of moments. The proposed filtenna has been simulated with HFSS, which confirms the results obtained by the method of moments. Finally, an unprinted dielectric superstrate is used to enhance the gain of the filtenna. The filtenna bandwidth is 1.76% (160 MHz), with a gain of 15.91 dB at the central frequency of 9.45 GHz.
Improved Algorithm for Pathological and Normal Voices Identification Yayah Zakaria
There are many papers on automatic classification between normal and pathological voices, but they lack estimation of the degree of severity of the identified voice disorders. This work builds a model for identifying pathological and normal voices that can also evaluate the degree of severity of the identified voice disorders among students. We present an automatic classifier using acoustical measurements on recorded sustained vowels /a/ and pattern recognition tools based on neural networks. The training set was built by classifying students' recorded voices based on thresholds from the literature. We retrieve the pitch, jitter, shimmer, and harmonic-to-noise ratio values of the speech utterance /a/, which constitute the input vector of the neural network. The degree of severity is estimated by evaluating how far the parameters are from the standard values, based on the percentage of normal and pathological values. In this work, the data used for testing the proposed neural network algorithm consist of healthy and pathological voices from a German database of voice disorders. The performance of the proposed algorithm is evaluated in terms of accuracy (97.9%), sensitivity (1.6%), and specificity (95.1%). The classification rate is 90% for the normal class and 95% for the pathological class.
Uncertain Systems Order Reduction by Aggregation Method Yayah Zakaria
In the field of control engineering, approximating a higher-order system with a reduced model copes with intricate problems. These complex problems are addressed through the use of computing technologies and advanced algorithms. Reduction techniques take a system from higher-order to lower-order form while retaining the properties of the former even after reduction. This document presents a method for the order reduction of uncertain systems based on state space analysis. Numerical examples are illustrated to show the accuracy of the proposed method.
Human Data Acquisition through Biometrics using LabVIEW Yayah Zakaria
Human Data Acquisition is an innovative work based on the fingerprints of a particular person. Using fingerprints, we can obtain the details of any individual. The acquired data can be used in many applications, such as airport security systems, voting systems, employee login systems, and finding thieves. In our project, we have implemented it in a voting system, using components such as MyDAQ, a data acquisition device. The coding is done in the graphical programming language LabVIEW, where any program executes sequentially, step by step, according to the data received.
An Accurate Scheme for Distance Measurement using an Ordinary Webcam Yayah Zakaria
Nowadays, image processing has become one of the widely used computer-aided sciences. Two major branches of this scientific field are image enhancement and machine vision. Machine vision has many applications in the robotics and defense industries. Detecting the distance of objects is an extensive research area in the defense and robotics industries, with many projects involved in this issue. In this paper, an accurate algorithm is presented for measuring the distance of objects from a camera. In this method, a laser transmitter is used alongside a regular webcam. The laser light is transmitted to the desired object, and the distance of the object is then calculated using image processing methods and mathematical and geometric relations. The performance of the proposed algorithm was evaluated using MATLAB software. The accuracy rate of distance detection is up to 99.62%. The results have also shown that the presented algorithm makes obstacle distance measurement more reliable. Finally, the performance of the proposed algorithm was compared with other methods from the literature.
Identity Analysis of Egg Based on Digital and Thermal Imaging: Image Processi...Yayah Zakaria
This research was conducted to analyze the identification of eggs. The research processes use two tools, namely thermal imaging camera and smartphone camera. The identification process was done by using Matlab prototype tools. The image has been acquired by means of proficiency level, then analyzed and applied several methods. Image acquisition results of thermal imaging camera are processed using morphological dilation and do the complement in black and white (BW). While the digital image uses the merger method of morphological dilation and opening, and it doesn't need to be complemented. Labeling process is done, and the process of determining centroid and bounding box. The process has been done and it can be applied for identifying of chicken eggs with the accuracy rate of 100%. There are different methods of both images is obtained area (pixels) which is equivalent to the difference is very small as 6 x 10-3.
.
Recognition of Tomato Late Blight by using DWT and Component Analysis Yayah Zakaria
Plant disease recognition concept is one of the successful and important applications of image processing and able to provide accurate and useful information to timely prediction and control of plant diseases. In the study, the wavelet based features computed from RGB images of late blight infected images and healthy images. The extracted features submitted to Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA) and Independent Component Analysis performed (ICA) for reducing dimensions in feature data processing and classification. To recognize and classify late blight from healthy plant images are classified into two classes i.e. late blight infected or healthy. The Euclidean Distance measure is used to compute the distance by these two classes of training and testing dataset for tomato late blight recognition and classification. Finally, the three-component analysis is compared for late blight recognition accuracy. The Kernel Principal Component Analysis (KPCA) yielded overall recognition accuracy with 96.4%.
Fuel Cell Impedance Model Parameters Optimization using a Genetic Algorithm Yayah Zakaria
The objective of this paper is the PEM fuel cell impedance model parameters identification. This work is a part of a larger work which is the diagnosis of the fuel cell which deals with the optimization and the parameters identification of the impedance complex model of the Nexa Ballard 1200 PEM fuel cell. The method used for the identification is a sample genetic algorithm and the proposed impedance model is based on electric parameters, which will be found from a sweeping of well determined frequency bands. In fact, the frequency spectrum is divided into bands according to the behavior of the fuel cell. So, this work is considered a first in the field of impedance spectroscopy So, this work is considered a first in the field of impedance spectroscopy. Indeed, the identification using genetic algorithm requires experimental measures of the fuel cell impedance to optimize and identify the impedance model parameters values. This method is characterized by a good precision compared to the numeric methods. The obtained results prove the effectiveness of this approach.
Noise Characterization in InAlAs/InGaAs/InP pHEMTs for Low Noise Applications Yayah Zakaria
In this paper, a noise revision of an InAlAs/InGaAs/InP psoeudomorphic high electron mobility transistor (pHEMT) in presented. The noise performances of the device were predicted over a range of frequencies from 1GHz to 100GHz. The minimum noise figure (NFmin), the noise resistance (Rn) and optimum source impedance (Zopt) were extracted using two
approaches. A physical model that includes diffusion noise and G-R noise models and an analytical model based on an improved PRC noise model that considers the feedback capacitance Cgd. The two approaches presented matched results allowing a good prediction of the noise behaviour. The
pHEMT was used to design a single stage S-band low noise amplifier (LNA). The LNA demonstrated a gain of 12.6dB with a return loss coefficient of 2.6dB at the input and greater than -7dB in the output and an overall noise figure less than 1dB.
Effect of Mobility on (I-V) Characteristics of Gaas MESFET Yayah Zakaria
We present in this paper an analytical model of the current–voltage (I-V) characteristics for submicron GaAs MESFET transistors. This model takes into account the analysis of the charge distribution in the active region and incorporate a field depended electron mobility, velocity saturation and charge
build-up in the channel. We propose in this frame work an algorithm of simulation based on mathematical expressions obtained previously. We propose a new mobility model describing the electric field-dependent. predictions of the simulator are compared with the experimental data [1] and
have been shown to be good.
Performance Analysis of Post Compensated Long Haul High Speed Coherent Optica...Yayah Zakaria
This paper addresses the performance analysis of OFDM transmission system based on coherent detection over high speed long haul optical links with high spectral efficiency modulation formats such as Quadrature Amplitude Modulation (QAM) as a mapping method prior to the OFDM multicarrier representation. Post compensation is used to compensate for
phase noise effects. Coherent detection for signal transmitted at bit rate of 40 Gbps is successfully achieved up to distance of 3200km. Performance is analyzed in terms of Symbol Error Rate and Error Vector Magnitude by varying Optical Signal to Noise Ratio (OSNR) and varying the length of the fiber i.e transmission distance. Transmission performance is also observed through constellation diagrams at different transmission distances and
different OSNRs.
A Single-Stage Low-Power Double-Balanced Mixer Merged with LNA and VCO Yayah Zakaria
This paper proposes three types of single stage low-power RF front-end, called double-balanced LMVs, by merging LNA, mixer, and voltagecontrolled oscillator (VCO) exploiting a series LC (SLC) network. The low intermediate frequency (IF) or baseband signal can be directly sensed at the drain nodes of the VCO switching transistors by adding a simple resistorcapacitor (RC) low-pass filter (LPF). By adopting a double-balanced mixer
topology, the strong leakage of the local oscillator (LO) at the IF output is effectively suppressed. Using a 65 nm CMOS technology, the proposed double-balanced LMVs (DB-LMVs) are designed. Oscillating at around 2.4 GHz ISM band, the phase noise of the proposed three DB-LMVs is −111 dBc/Hz at 1 MHz offset frequency. The simulated voltage conversion gain is larger than 36 dB and the double-side band (DSB) noise figure (NF) is less than 7.7 dB. The DB-LMVs consume only 0.2 mW dc power from 1-V supply voltage.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Event Management System Vb Net Project Report.pdfKamal Acharya
In present era, the scopes of information technology growing with a very fast .We do not see any are untouched from this industry. The scope of information technology has become wider includes: Business and industry. Household Business, Communication, Education, Entertainment, Science, Medicine, Engineering, Distance Learning, Weather Forecasting. Carrier Searching and so on.
My project named “Event Management System” is software that store and maintained all events coordinated in college. It also helpful to print related reports. My project will help to record the events coordinated by faculties with their Name, Event subject, date & details in an efficient & effective ways.
In my system we have to make a system by which a user can record all events coordinated by a particular faculty. In our proposed system some more featured are added which differs it from the existing system such as security.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Vaccine management system project report documentation..pdfKamal Acharya
The Division of Vaccine and Immunization is facing increasing difficulty monitoring vaccines and other commodities distribution once they have been distributed from the national stores. With the introduction of new vaccines, more challenges have been anticipated with this additions posing serious threat to the already over strained vaccine supply chain system in Kenya.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
IJECE ISSN: 2088-8708, Vol. 7, No. 1, February 2017: 216 – 224
A Survey on Block Matching Algorithms for Video Coding (Adapa Venkata Paramkusam)
square-shaped search patterns TSS and NTSS. The search patterns in the DS algorithm are derived from the checking points within a circle of radius 2 pels. The hexagon-based search algorithm [4] was developed by investigating why the DS pattern yields a speed improvement over some square-shaped search patterns and what the underlying mechanism is.
The hexagon-based search (HS) algorithm achieves a substantial speed improvement over the DS algorithm with similar distortion performance. Enhanced versions of the hexagonal search algorithm [6-8] have been proposed to further reduce the number of search points relative to HS; these algorithms mainly concentrate on fast inner search techniques to improve the inner search speed. The main purpose of this paper is to make a comprehensive study of block matching algorithms. Comparisons are made in many directions so that system designers can determine the best tradeoff. The rest of this paper is organized as follows. Section 2 discusses the well-known block matching algorithms: three step search, new three step search, diamond search, hexagon-based search, and the enhanced versions of hexagonal search. Section 3 is devoted to a discussion of the experimental results for various sequences. Finally, Section 4 concludes this paper.
2. OVERVIEW
2.1. Three Step Search (TSS) Block Matching Algorithm
Three Step Search (TSS) is one of the first non-full-search algorithms and was introduced by Koga et al. [1] in 1981. It searches for the best motion vector in three steps; the search pattern of TSS is shown in Figure 1. In the first step, 8 blocks at a distance of a step size equal to or slightly larger than half of the maximum search range from the centre point (corresponding to the zero motion vector) are selected for comparison. In the second step, the step size is halved and the centre point is moved to the point with the minimum distortion. These two steps are repeated until the step size becomes smaller than one. TSS is mainly used for real-time video compression in low-bit-rate applications such as video conferencing and videophone.
The TSS is one of the most popular BMAs and is also recommended by RM8 of H.261 and SM3 of MPEG owing to its simplicity and effectiveness. For a maximum displacement window of 7, i.e. d = 7, the number of checking points required is (9 + 8 + 8) = 25. For a maximum displacement window of d, the number of checking points required equals 1 + 8 log2(d + 1).
Figure 1. Example of Three Step Search Path to Locate a Motion Vector at (-3, 2)
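For illustration, the three-step procedure described above can be sketched in Python. This is a minimal sketch, not the authors' code: the SAD block-matching cost, the 16 × 16 block size, and the frame handling are our assumptions.

```python
import numpy as np

def sad(cur, ref, cx, cy, px, py, n=16):
    """Sum of absolute differences between the n-by-n current block at
    (cx, cy) and the candidate block at (px, py) in the reference frame."""
    a = cur[cy:cy + n, cx:cx + n].astype(int)
    b = ref[py:py + n, px:px + n].astype(int)
    return int(np.abs(a - b).sum())

def three_step_search(cur, ref, cx, cy, d=7, n=16):
    """Return the motion vector (mvx, mvy) found by TSS for the block
    whose top-left corner is (cx, cy) in the current frame."""
    h, w = ref.shape
    step = (d + 1) // 2                  # first step size, e.g. 4 for d = 7
    bx, by = cx, cy                      # best candidate so far: zero motion
    best = sad(cur, ref, cx, cy, bx, by, n)
    while step >= 1:
        ox, oy = bx, by                  # centre of this step's 3x3 pattern
        for dy in (-step, 0, step):      # 8 neighbours plus the centre
            for dx in (-step, 0, step):
                px, py = ox + dx, oy + dy
                if 0 <= px <= w - n and 0 <= py <= h - n:
                    cost = sad(cur, ref, cx, cy, px, py, n)
                    if cost < best:
                        best, bx, by = cost, px, py
        step //= 2                       # halve the step each round: 4, 2, 1
    return bx - cx, by - cy
```

With d = 7 this evaluates the 9 + 8 + 8 = 25 checking points noted above (fewer near the frame border, where candidates are clipped).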
2.2. New Three Step Search (NTSS) Block Matching Algorithm
This algorithm was proposed by Renxiang Li, Bing Zeng and Ming L.Liou [2]. It is a modified
version of the three step search algorithm for searching small motion video sequences. For these video
sequences, the motion vector distribution is highly centre biased. Therefore, in addition to the original
checking points used in TSS, 8 extra neighbouring checking points of search window centre are searched in
the first step of NTSS (total 17) as shown Figure 2. There are three cases where the minimum BDM point
occurs.
a. For stationary block the minimum BDM point occurs at the search window centre then searching will stop
(This is called first step stop).
b. For quasi stationary block (small motion video sequence) the minimum BDM point occurs at any one of
the eight neighbours of the search window centre, the search in the second step will be performed only for
eight neighbouring points of the minimum BDM point and then stop the search (This is called second step
stop). Depending on position of this minimum BDM point, only five or three new checking points are
required to be tested. The number of checking points required is then either (17 + 3) = 20 or (17 + 5) = 22.
c. If the minimum BDM point occurs at any of the remaining eight checking points, a complete three step search is executed. In the worst case (i.e. no early stop occurs for the block), NTSS requires 33 block matches, compared with 25 in TSS. Eight block matches are saved once a first-step stop occurs. Depending on the position of the minimum BDM point (in the first step) among the 8 neighbours of the window centre, five or three block matches are saved once a second-step stop occurs: (1) if the minimum is one of the four neighbouring positions along the horizontal or vertical directions, five block matches are saved; (2) if the minimum is one of the four neighbouring positions along the two diagonal directions, three block matches are saved.
The number of block matches needed in NTSS for estimating a block motion vector can be estimated as 17P1 + 20P2 + 22P2′ + 33(1 − P1 − P2 − P2′), where P1 is the probability of a first-step stop and P2 and P2′ are the probabilities of a second-step stop in the two cases mentioned above. These probabilities depend on how many stationary or quasi-stationary blocks a video frame contains.
Figure 2. Circles are the checking points in the first step of TSS, triangles are the 8 extra points added in the
first step of NTSS, and squares are the new checking points (3 or 5) in the second step depending on
minimum BDM point, in the first step, on the 8 neighbors of the window centre
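The expected-cost expression above can be evaluated with a short helper. The probability values used in the example call are hypothetical, chosen only to illustrate a centre-biased sequence:

```python
def ntss_expected_matches(p1, p2, p2_diag):
    """Expected number of block matches per motion vector in NTSS.

    p1      -- probability of a first-step stop (17 matches)
    p2      -- probability of a second-step stop, horizontal/vertical case (20)
    p2_diag -- probability of a second-step stop, diagonal case (22)
    The remaining probability mass falls to the worst case (33 matches).
    """
    p_worst = 1.0 - p1 - p2 - p2_diag
    return 17 * p1 + 20 * p2 + 22 * p2_diag + 33 * p_worst

# With many stationary/quasi-stationary blocks, the expectation drops
# below TSS's fixed 25 matches (hypothetical probabilities):
print(ntss_expected_matches(0.6, 0.2, 0.1))
```

When no early stop ever occurs (all probabilities zero), the expression reduces to the worst case of 33 matches, matching the analysis above.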
2.3. Diamond Search (DS) Block Matching Algorithm
The diamond search (DS) algorithm was proposed by S. Zhu and K. K. Ma [3]. It is based on the MV distribution of real-world video sequences. It is an outstanding algorithm, adopted by the MPEG-4 verification model (VM) due to its superiority over other methods in the class of fixed-search-pattern algorithms. It employs two search patterns, shown in Figure 3, which are derived from the checking points within a circle of radius 2 pels. The first pattern, called the large diamond search pattern (LDSP), comprises nine checking points forming a diamond shape. The second pattern, consisting of five checking points, forms a smaller diamond shape and is called the small diamond search pattern (SDSP). The search starts with the LDSP centred at the origin of the search window, and the 9 checking points of the LDSP are tested. If the minimum block distortion (MBD) is not located at the centre position, a new LDSP is formed, centred at the MBD point found in the search. This procedure is repeated until the MBD occurs at the centre point. When the MBD occurs at the centre of the LDSP, the search pattern is switched to the SDSP for the final search stage. Among the five checking points of the SDSP, the position yielding the MBD provides the motion vector of the best matching block.
Figure 3. Search Patterns of DS (a) LDSP with 9 Search Points (b) SDSP with 5 Search Points
The checking points partially overlap between adjacent steps, especially when the LDSP is used repeatedly. For illustration, three cases of checking-point overlap are presented in Figure 4. When the previous MBD point is located at one of the corners or edge points of the LDSP, only five or three new checking points need to be tested, as shown in Figure 4(a) and (b), respectively. If the centre point of the LDSP produces the MBD, the search pattern is changed from LDSP to SDSP for the final search; in this case, only four new points need to be tested, as shown in Figure 4(c).
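The LDSP/SDSP procedure can be sketched as follows. This is an illustrative sketch rather than the authors' implementation: the cost callback stands in for the block distortion measure (e.g. SAD against the reference frame), and the pattern coordinates follow the description of Figure 3.

```python
# LDSP: 9 checking points within a circle of radius 2 pels, forming the
# large diamond; SDSP: 5 points forming the small diamond.
LDSP = [(0, 0), (0, -2), (1, -1), (2, 0), (1, 1),
        (0, 2), (-1, 1), (-2, 0), (-1, -1)]
SDSP = [(0, 0), (0, -1), (1, 0), (0, 1), (-1, 0)]

def diamond_search(cost):
    """Return the displacement minimising cost(dx, dy) using DS: repeat
    the LDSP around the current best point until the MBD stays at the
    centre, then refine once with the SDSP."""
    bx, by = 0, 0
    best = cost(0, 0)
    while True:
        ox, oy = bx, by                   # centre the LDSP on the best point
        for dx, dy in LDSP:
            c = cost(ox + dx, oy + dy)
            if c < best:
                best, bx, by = c, ox + dx, oy + dy
        if (bx, by) == (ox, oy):          # MBD at the centre: final stage
            break
    ox, oy = bx, by                       # one refinement pass with the SDSP
    for dx, dy in SDSP:
        c = cost(ox + dx, oy + dy)
        if c < best:
            best, bx, by = c, ox + dx, oy + dy
    return bx, by
```

On a unimodal distortion surface the LDSP walks toward the minimum and the SDSP resolves the final one-pel offset.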
2.4. Hexagon Based Search (HS) Block Matching Algorithm
Based on an in-depth examination of the influence of the search pattern on speed performance in block motion estimation, Ce Zhu, Xiao Lin, and Lap-Pui Chau [4] proposed an algorithm using a hexagon-based search pattern that achieves a substantial speed improvement over the DS algorithm with similar distortion performance. The diamond-shaped pattern is more efficient than square-shaped search patterns, but the diamond shape is not a close enough approximation to a circle, being just a 45° rotation of a square. Its 8 checking points have different distances from the centre, so each search point cannot be utilized with maximum efficiency in the DS algorithm.
A hexagon-based search pattern is depicted in Figure 5(a); it consists of seven checking points: the centre surrounded by the six endpoints of the hexagon, with the two edge points (up and down) excluded. Of the six endpoints, the two horizontal points are at distance 2 from the centre and the remaining four points are at distance √5 from the centre. From Figure 5(a) we can see that the six endpoints are approximately uniformly distributed around the centre, which is highly desirable.
This algorithm uses two search patterns, a large HS and a small HS, as shown in Figure 5(a) and 5(b). In the first step, the large hexagonal pattern with seven checking points is used. In subsequent steps, if the MBD is found at the centre, the search switches to the small hexagonal pattern of four checking points for the focused inner search; otherwise, the search continues around the MBD point using the same large hexagonal pattern. Note that as the large hexagonal pattern moves along the direction of decreasing distortion, only three new non-overlapping checking points are evaluated as candidates each time. The total number of search points per block is
NHS(mx, my) = 7 + 3 × n + 4, (1)
where (mx, my) is the final motion vector found and n is the number of executions of Step 2. In Equation (1), the 7 is the number of checking points used in Step 1, the 3 is the number of new checking points evaluated in each execution of Step 2, and the 4 is the number of checking points of the small HS used in the final stage.
Figure 5. Search patterns of HS (a) Large HS (b) Small HS
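The hexagon-based procedure can be sketched in the same style as DS. Again the cost callback is our stand-in for the block distortion measure; the coordinates follow the pattern described above (two horizontal endpoints at distance 2, four at distance √5, and a 4-point small pattern for the inner search).

```python
# Large hexagon: centre plus six endpoints, (+/-2, 0) and (+/-1, +/-2).
LARGE_HEX = [(0, 0), (2, 0), (1, 2), (-1, 2), (-2, 0), (-1, -2), (1, -2)]
SMALL_HEX = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # 4-point focused inner search

def hexagon_search(cost):
    """Return the displacement minimising cost(dx, dy) with HS: repeat the
    large hexagon until the MBD stays at its centre, then run the 4-point
    small-hexagon inner search."""
    bx, by = 0, 0
    best = cost(0, 0)
    while True:
        ox, oy = bx, by                   # centre the large hexagon here
        for dx, dy in LARGE_HEX:
            c = cost(ox + dx, oy + dy)
            if c < best:
                best, bx, by = c, ox + dx, oy + dy
        if (bx, by) == (ox, oy):          # MBD at centre: go to inner search
            break
    ox, oy = bx, by
    for dx, dy in SMALL_HEX:              # focused inner search, 4 points
        c = cost(ox + dx, oy + dy)
        if c < best:
            best, bx, by = c, ox + dx, oy + dy
    return bx, by
```

When the large hexagon moves, four of its seven points overlap the previous step, which is why only three new points are evaluated per move, as counted in Equation (1).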
2.5. Enhanced Versions of Hexagonal Search Algorithm
An Enhanced Hexagonal Search (EHS) algorithm [6] was proposed to further reduce the number of search points relative to the Hexagonal Search algorithm. EHS uses a 6-side-based fast inner search technique to speed up the inner search. In the Hexagonal Search algorithm, all the search points inside the large hexagon need to be evaluated, which is computationally inefficient.
The EHS algorithm checks only the most promising inner search points by exploiting the group-sum distortion information of the six checked endpoints of the large hexagon. EHS first performs a coarse search with the large hexagonal pattern (Figure 5(a)) to locate a small area containing the optimal motion vector. After locating this area, EHS groups the six checked endpoints of the large hexagon as shown in Figure 6(a). For each group, the group-sum distortion is calculated, and the focused inner checking points nearest to the smallest-distortion group are evaluated to obtain the minimum distortion point. Three inner search points near the smallest-distortion group are evaluated in the focused inner search if the smallest-distortion group is group 3 or 6; otherwise, two inner search points nearest to the smallest-distortion group are used.
An Enhanced Hexagonal-based Search using Point-Oriented Inner Search (EHS-POIS) is presented in [7] to reduce the search points further. EHS-POIS uses an efficient grouping method for the large hexagon based on minimizing the Mean Internal Distance (MID) for each inner point, as shown in Figure 6(b). The eight inner points enclosed by the large hexagon are divided into two sets by MID value: the first set contains points a′, c′, e′, f′, g′, and h′, and the second set contains points b′ and d′. Each of these points is surrounded by two or three nearest checked points of the large hexagon.
The Normalized Group Distortions (NGDs) of the hexagon are calculated and the minimum NGD in each set is found. The two inner points corresponding to the minimum NGDs of the two sets are finally evaluated to find the final motion vector. The fine search of EHS requires 2 or 3 search points, depending on the position of the inner point with the smallest group distortion, whereas the fine search of EHS-POIS always requires exactly 2 search points.
Figure 6. Strategies used to predict the position of inner search points in EHS, EHS-POIS and EHS-DOIS: (a) group-oriented prediction of EHS, (b) point-oriented prediction of EHS-POIS, (c) direction-oriented prediction of EHS-DOIS
An Enhanced Hexagonal-based Search using Direction-Oriented Inner Search (EHS-DOIS) [8] shows considerable improvement over EHS-POIS, doubling the inner search speed. EHS-DOIS forms a pseudo-point prediction pattern {A′, B′, C′, D′, E′, F′, G′, H′}, as shown in Figure 6(c). The group distortions of these eight pseudo-points are calculated from the six checked points of the large hexagon. The point in {a′, b′, c′, d′, e′, f′, g′, h′} lying on the arrow from the centre of the large hexagon to the pseudo-point with minimum distortion is selected, and the SAD at this single point is evaluated to find the final motion vector.
3. COMPARISON
A comprehensive set of experiments has been conducted using the luminance components of the first 100 frames of six video sequences to assess the performance and computational complexity of state-of-the-art fast-search motion estimation algorithms. These sequences contain different types of motion and come in various formats, including QCIF, CIF, and HD. The search range is set to ±63 for the HD sequences (Kirsten-Sara and Rocket Launch) and ±15 for the remaining (QCIF and CIF) sequences.
The experimental results are reported in terms of two criteria: speed and motion prediction quality. Speed is measured as the Average Number of total Operations per Block (ANOB). For
the latter, the average Peak Signal to Noise Ratio (PSNR) per frame is calculated. The size of a macroblock is
set to 16 × 16 pixels in all the fast block matching algorithms.
One problem with TSS is that it uses a uniformly allocated checking-point pattern in the first step, which is not very efficient for the small motions found in stationary or quasi-stationary blocks. To remedy this, NTSS employs a centre-biased checking-point pattern in the first step together with a halfway-stop technique. Compared with TSS, NTSS is much more robust, produces smaller motion compensation errors, and has very comparable computational complexity.
Although NTSS uses more checking points in its first step than TSS, the first-step stop
and second-step stop can reduce computation effectively. Eight block matches are saved whenever a first-step
stop occurs, and five or three block matches are saved whenever a second-step stop occurs. From
experiments conducted on the Salesman and Miss America test sequences, TSS and NTSS are compared
in terms of speed-up ratios with respect to full search, probabilities of catching true motions, and average
distances, as shown in Table 1. For a search window of size 15 × 15, w = 7. Full search checks
(2w+1)^2 = 225 points, while TSS checks 1 + 8 log2(w+1) = 25, leading to a speed-up ratio of 9. The speed-up
ratios of NTSS with respect to full search are also given in Table 1. Compared with TSS, NTSS
possesses rather comparable computational complexity. The probabilities that the true motion vector is
found by NTSS and TSS are also presented in Table 1, from which it is seen that NTSS provides higher
probabilities than TSS. The average distances obtained by NTSS are small compared with those of TSS
because NTSS uses a centre-biased pattern, whereas TSS employs a uniformly distributed pattern.
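The point counts behind this speed-up ratio are easy to verify. The snippet below (function names are ours) evaluates the two formulas from the text for a 15 × 15 search window (w = 7):

```python
import math

def fs_points(w):
    # Full search evaluates every candidate in the (2w+1) x (2w+1) window.
    return (2 * w + 1) ** 2

def tss_points(w):
    # TSS: the centre point plus 8 points per step, with log2(w+1) steps.
    return 1 + 8 * int(math.log2(w + 1))

w = 7
print(fs_points(w), tss_points(w), fs_points(w) / tss_points(w))
# 225 checked points for full search versus 25 for TSS: a speed-up of 9.0
```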
In the TSS and NTSS algorithms, square-shaped search patterns are employed, whereas the DS
algorithm adopts a diamond-shaped search pattern, which gives faster processing with similar distortion in
comparison with TSS and NTSS. Compared with TSS, the DS pattern can find large-motion blocks with
fewer search points, and its relatively large step size in the horizontal and vertical directions also reduces
its susceptibility to getting trapped in local optima.
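As a sketch of how the DS pattern of [3] is applied (the toy cost function and start point are illustrative; a real encoder would minimize the SAD of 16 × 16 blocks): the large diamond pattern (LDSP) is repeated until its minimum falls on the centre, after which the small diamond pattern (SDSP) refines the result.

```python
# Hedged sketch of diamond search; `cost` stands in for the block SAD
# at a candidate motion vector (dx, dy).
LDSP = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
        (1, 1), (1, -1), (-1, 1), (-1, -1)]        # large diamond, 9 points
SDSP = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # small diamond, 5 points

def diamond_search(cost, start=(0, 0)):
    cx, cy = start
    while True:
        # Evaluate the large diamond around the current centre; the centre
        # is listed first so it wins ties (strict '<' below).
        best, best_cost = None, None
        for dx, dy in LDSP:
            c = cost(cx + dx, cy + dy)
            if best_cost is None or c < best_cost:
                best, best_cost = (cx + dx, cy + dy), c
        if best == (cx, cy):   # minimum at the centre: switch to the SDSP
            break
        cx, cy = best          # otherwise re-centre and repeat the LDSP
    return min(((cx + dx, cy + dy) for dx, dy in SDSP),
               key=lambda p: cost(*p))
```

With a convex toy cost such as cost(x, y) = |x − 3| + |y − 2|, the search converges to (3, 2).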
Table 1. Comparison between TSS and NTSS in terms of speed-up ratios with respect to full search,
probabilities of catching true motions, and average distances
The compact shape of the DS pattern around the centre also yields fewer search points than NTSS
when finding stationary or small motion vectors. However, the large diamond pattern is so compact, in terms
of the distance between neighbouring points, that some redundancy may exist among the search points,
especially at the beginning of the lower-resolution search. Consequently, such a distribution of search points
in the DS pattern is inefficient for finding possible candidates in the next step. The reason for this disadvantage
is that the diamond shape, which is just a 90° rotation of a square, is not a close enough approximation to a
circle. The hexagon-based search (HS) pattern approximates a circle more closely, with a uniform distribution
of a minimum number of search points; each search point is utilized with maximum efficiency, so the
redundancy among search points is largely removed. The HS algorithm can therefore find the same motion
vector with fewer search points than the DS algorithm. Generally speaking, the larger the motion vector, the
more search points the HS algorithm saves. The average number of search points per block for the different
algorithms and video sequences is shown in Table 2.
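A corresponding sketch of the HS algorithm of [4], again with an illustrative toy cost in place of the block SAD: the large hexagon of six points plus the centre is repeated until the centre wins, and a small four-point inner pattern then refines the result.

```python
# Hedged sketch of hexagon-based search; `cost` stands in for the block
# SAD at a candidate motion vector (dx, dy).
LARGE_HEX = [(0, 0), (2, 0), (1, 2), (-1, 2),
             (-2, 0), (-1, -2), (1, -2)]                # centre + 6 hexagon points
SMALL_PAT = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # centre + 4 inner points

def hexagon_search(cost, start=(0, 0)):
    cx, cy = start
    while True:
        best, best_cost = None, None
        for dx, dy in LARGE_HEX:   # centre listed first, so it wins ties
            c = cost(cx + dx, cy + dy)
            if best_cost is None or c < best_cost:
                best, best_cost = (cx + dx, cy + dy), c
        if best == (cx, cy):       # minimum at the centre: do the inner search
            break
        cx, cy = best              # otherwise re-centre the large hexagon
    return min(((cx + dx, cy + dy) for dx, dy in SMALL_PAT),
               key=lambda p: cost(*p))
```

Each large-hexagon step evaluates 7 points versus 9 for the large diamond, which is the source of the savings reported in Table 2.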
            Speed-up ratios          Probabilities            Average distances
            Salesman  Miss America   Salesman  Miss America   Salesman  Miss America
TSS         9.0       9.0            0.951     0.535          0.369     1.193
NTSS        10.94     7.94           0.990     0.722          0.044     0.687

Table 2. The average number of search points per block for different block matching algorithms
and different video sequences

            Salesman  Miss America  Tennis  Mobile  Kirsten-Sara  Rocket launch
NTSS        18.0      21.3          22.7    28.4    23.8          24.9
DS          13.0      16.3          16.9    22.7    19.0          19.8
HS          10.7      12.8          13.0    16.3    13.9          14.8
EHS         10.3      12.2          12.8    15.8    13.3          14.1
EHS-POIS    10.1      12.0          12.3    15.3    13.0          12.8
EHS-DOIS    9.5       11.2          11.6    11.6    12.6          12.1
IJECE Vol. 7, No. 1, February 2017: 216 – 224 (ISSN: 2088-8708)
The number of search points per block for FS and TSS is fixed at 225 and 25, respectively, for all
video sequences. Compared with the TSS, NTSS, and DS search patterns, HS takes fewer search
points per block on average. The EHS-DOIS is the fastest among the compared algorithms, and its
performance is in most cases comparable with DS and HS. The average PSNR values per frame for all the
fast block matching algorithms are furnished in Table 3. It can be observed from Table 3 that the average
PSNRs obtained by DS are better than those of HS and the enhanced versions of HS, i.e., the EHS, EHS-POIS,
and EHS-DOIS algorithms, in all cases. The total average PSNR of DS is better by 1.11 dB than
that of the EHS-DOIS algorithm.
Table 3. The average PSNRs in dB for all the fast algorithms
To illustrate the efficiency of all the fast block matching algorithms more vividly, the speed and
motion prediction quality of all the fast block matching algorithms on the Mobile and Rocket launch video
sequences have been plotted in Figure 7 and Figure 8, respectively. Figures 7(a) and 8(a) plot a frame-by-frame
comparison of the average number of search points per block for all the fast block matching algorithms
applied to the Mobile and Rocket launch video sequences, respectively. Figures 7(b) and 8(b) plot a
frame-by-frame comparison of the average PSNR for the same algorithms and sequences. Figures 7(a)
and 8(a) clearly demonstrate the superiority of EHS-DOIS over the other algorithms in terms of the average
number of search points, while Figures 7(b) and 8(b) show that the PSNR performance of DS is better than
that of EHS-DOIS.
Figure 7. Performance comparison of the fast block matching algorithms for the “Mobile” sequence in terms
of: (a) the average number of search points per block and (b) average PSNR per frame
            Salesman  Miss America  Tennis  Mobile  Kirsten-Sara  Rocket launch
NTSS        26.52     32.32         24.22   23.32   44.13         37.43
DS          27.45     33.85         24.82   23.85   44.21         37.69
HS          26.83     32.69         24.45   23.63   44.04         37.12
EHS         26.64     32.55         24.56   23.54   43.68         36.86
EHS-POIS    26.31     32.23         24.33   23.21   43.34         36.54
EHS-DOIS    26.91     32.91         23.81   22.71   42.45         36.20
Figure 8. Performance comparison of the fast block matching algorithms for the “Rocket launch” sequence in
terms of: (a) the average number of search points per block and (b) average PSNR per frame
4. CONCLUSION
Motion estimation generally consumes the most computation in a video codec. In this paper, many
fast BMAs belonging to different search patterns and search strategies are analyzed. The performance
and computational complexity of selected algorithms are compared to facilitate the choice of an appropriate
algorithm for a specific application.
REFERENCES
[1] T. Koga, et al., “Motion compensated interframe coding for video conferencing,” in Proc. Nat. Telecommun. Conf.,
pp. C9.6.1–C9.6.5, 1981.
[2] R. Li, et al., “A New Three-step Search Algorithm for Block Motion Estimation,” IEEE Trans. Circuits Syst. Video
Technol., vol/issue: 4(4), pp. 438–442, 1994.
[3] S. Zhu and K. K. Ma, “A New Diamond Search Algorithm for Fast Block-matching Motion Estimation,” IEEE
Trans. Image Processing, vol/issue: 9(2), pp. 287–290, 2000.
[4] C. Zhu, et al., “Hexagon-based Search Pattern for Fast Block Motion Estimation,” IEEE Trans. on Circuits and
Systems for Video Technology, vol/issue: 12(5), pp. 349-355, 2002.
[5] C. H. Cheung and L. M. Po, “Novel Cross-Diamond-Hexagonal Search Algorithms for Fast Block Motion
Estimation,” IEEE Trans. on Multimedia, vol/issue: 7(1), pp. 16–22, 2005.
[6] C. Zhu, et al., “Enhanced Hexagonal Search for fast block motion estimation,” IEEE Trans. Circuits Syst. Video
Technol., vol/issue: 14(10), pp. 1210–1214, 2004.
[7] L. M. Po, et al., “Novel point oriented inner searches for fast block motion estimation,” IEEE Trans. on
Multimedia, vol/issue: 9(1), pp. 9–15, 2007.
[8] B. J. Zou, et al., “Enhanced Hexagonal-Based Search Using Direction-Oriented Inner Search for Motion
Estimation,” IEEE Trans. Circuits Syst. Video Technol., vol/issue: 20(1), pp. 156–160, 2010.
[9] O. Ndili and T. Ogunfunmi, “Algorithm and Architecture Co-Design of Hardware-Oriented, Modified Diamond
Search for Fast Motion Estimation in H.264/AVC,” IEEE Trans. Circuits Syst. Video Technol., vol/issue: 21(9), pp.
1214–1227, 2011.
[10] Z. Shi, et al., “A motion estimation algorithm based on Predictive Intensive Direction Search for H.264/AVC,”
Multimedia and Expo (ICME), 2010 IEEE International Conference on, pp. 667–672, 2010.
[11] Y. H. Ko, et al., “Adaptive search range motion estimation using neighboring motion vector differences,”
Consumer Electronics, IEEE Transactions on, vol/issue: 57(2), pp. 726–730, 2011.
[12] L. Hsieh, et al., “Motion estimation using two-stage predictive search algorithms based on joint spatio-temporal
correlation information,” Expert Systems with Applications, vol/issue: 38(9), pp. 11608-11623, 2011.
[13] H. Nisar, et al., “Content adaptive fast motion estimation based on spatio-temporal homogeneity analysis and
motion classification,” Pattern Recognition Letters, vol/issue: 33(1), pp. 52-61, 2012.
[14] Y. H. Ko, et al., “Fast motion estimation algorithm combining search point sampling technique with adaptive
search range algorithm,” Circuits and Systems (MWSCAS), 2012 IEEE 55th International Midwest Symposium on,
pp. 988–991, 2012.
[15] K. H. Ng, et al., “A Search Patterns Switching Algorithm for Block Motion Estimation,” Circuits and Systems for
Video Technology, IEEE Transactions on, vol/issue: 19(5), pp. 753–759, 2009.
[16] J. J. Tsai and H. M. Hang, “On the Design of Pattern-Based Block Motion Estimation Algorithms,” Circuits and
Systems for Video Technology, IEEE Transactions on, vol/issue: 20(1), pp. 136–143, 2010.
[17] F. Varray and H. Liebgott, “Multi-resolution transverse oscillation in ultrasound imaging for motion estimation,”
Ultrasonics, Ferroelectrics, and Frequency Control, IEEE Transactions on, vol/issue: 60(7), pp. 1333–1342, 2013.
[18] J. Stuckler and S. Behnke, “Multi-resolution surfel maps for efficient dense 3D modeling and tracking,” Journal of
Visual Communication and Image Representation, vol/issue: 25(1), pp. 137-147, 2014.
[19] M. Nieuwenhuisen and S. Behnke, “Hierarchical planning with 3d local multiresolution obstacle avoidance for
micro aerial vehicles,” Proceedings of the Joint Int. Symposium on Robotics (ISR) and the German Conference on
Robotics (ROBOTIK), 2014.
[20] D. Droeschel, et al., “Local multi-resolution representation for 6D motion estimation and mapping with a
continuously rotating 3D laser scanner,” Robotics and Automation (ICRA), IEEE International Conference on,
2014.
[21] Y. Lin and Y. C. Wang, “Improved parabolic prediction-based fractional search for H.264/AVC video coding,”
Image Process. IET, vol/issue: 3(5), pp. 261–271, 2009.
[22] S. Dikbas, et al., “Fast motion estimation with interpolation-free sub-sample accuracy,” IEEE Trans. Circuits Syst.
Video Technol., vol/issue: 20(7), pp. 1047–1051, 2010.