Image processing has recently come under the spotlight and has driven significant advances in fields such as biomedical imaging, satellite imagery, and graphical applications. Nevertheless, poor image quality remains one of the noticeable limitations of image processing, as it prevents efficient data extraction. Conventionally, images were processed using software applications such as MATLAB. Although such software can extract data from low-quality images, it remains time-consuming. Since the ability to obtain rapid results is a hallmark of efficient image processing, hardware implementations are seen as a way to keep this issue at bay. Thus, hardware-based image enhancement techniques, particularly those using field programmable gate arrays (FPGAs), have attracted steadily rising interest among researchers. In this study, 25 research papers published from 2016 to 2021 are reviewed and analyzed, focusing on the performance of FPGAs as hardware implementations of image processing techniques.
Digital image enhancement by brightness and contrast manipulation using Veri... — IJECEIAES
This document describes a proposed method for digitally enhancing images using brightness and contrast manipulation implemented in Verilog hardware description language. The method aims to improve low quality images impacted by low exposure and haze. It involves converting image files to hexadecimal, manipulating pixel brightness and contrast values using algorithms, and converting the output back to image format. The algorithms adjust brightness by adding a constant to pixel values and modify contrast by stretching the dynamic range of values. The method is evaluated on vehicle registration plate images and shows improvements over other enhancement methods based on quantitative and qualitative metrics.
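The two operations the paper describes, adding a constant offset for brightness and linearly stretching the dynamic range for contrast, can be illustrated with a minimal Python/NumPy sketch (a software stand-in for the paper's Verilog design; the function names here are our own, not from the paper):

```python
import numpy as np

def adjust_brightness(pixels, offset):
    # Add a constant to every pixel, clamping to the valid 8-bit range.
    return np.clip(pixels.astype(np.int16) + offset, 0, 255).astype(np.uint8)

def stretch_contrast(pixels, out_min=0, out_max=255):
    # Linearly stretch the pixel dynamic range to [out_min, out_max].
    lo, hi = int(pixels.min()), int(pixels.max())
    if hi == lo:
        return pixels.copy()
    scaled = (pixels.astype(np.float64) - lo) * (out_max - out_min) / (hi - lo) + out_min
    return np.clip(np.rint(scaled), 0, 255).astype(np.uint8)

img = np.array([[40, 80], [120, 160]], dtype=np.uint8)
bright = adjust_brightness(img, 50)    # each pixel raised by 50, clamped at 255
stretched = stretch_contrast(img)      # range [40, 160] mapped to [0, 255]
```

In hardware, the clamp becomes a saturating adder and the stretch a multiply-and-shift; the arithmetic is otherwise the same.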
An Investigation towards Effectiveness in Image Enhancement Process in MPSoC — IJECEIAES
The document discusses image enhancement techniques for use in multiprocessor systems-on-chip (MPSoC). It reviews existing image enhancement methods implemented on FPGA and identifies limitations like low accuracy. The paper proposes using advanced bus architectures like AMBA/AXI in MPSoC to improve communication between memory and input medical images for enhancement, reducing heterogeneity effects. It summarizes various image enhancement algorithms that could be implemented on reconfigurable FPGA hardware for use in MPSoC, including brightness control, contrast stretching, histogram equalization, and edge detection techniques.
The Computation Complexity Reduction of 2-D Gaussian Filter — IRJET Journal
This document discusses reducing the computational complexity of a 2D Gaussian filter for image smoothing. It begins with an abstract that notes 2D Gaussian filters are commonly used for image smoothing but require heavy computational resources. It then proposes using fixed-point arithmetic rather than floating point to implement the filter on an FPGA, which can increase efficiency and decrease area and complexity. The document is divided into sections that cover the theory behind image filtering, image smoothing and sharpening, quality metrics for evaluation, and an energy scalable Gaussian smoothing filter architecture. It concludes by discussing results and benefits of implementing the filter using fixed-point arithmetic on an FPGA.
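The fixed-point idea the paper relies on can be sketched in Python: quantize the Gaussian weights to Q0.8 integers (scaled by 256) so the convolution needs only integer multiplies and a final right shift, as it would on an FPGA. This is an illustrative sketch, not the paper's architecture; the kernel size and fraction width are our assumptions.

```python
import numpy as np

FRAC_BITS = 8  # Q0.8 fixed point: weights stored as integers scaled by 256

def gaussian_kernel(size=3, sigma=1.0):
    # Standard normalized 2-D Gaussian kernel (floating point reference).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def smooth_fixed_point(img, sigma=1.0):
    # Quantize weights to integers, convolve with integer arithmetic only.
    k = np.rint(gaussian_kernel(3, sigma) * (1 << FRAC_BITS)).astype(np.int32)
    pad = np.pad(img.astype(np.int32), 1, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy+h, dx:dx+w]
    # A right shift replaces floating-point division, as on an FPGA.
    return (out >> FRAC_BITS).astype(np.uint8)
```

With sigma = 1 the rounded weights happen to sum to exactly 256, so a flat image passes through unchanged; in general the quantization introduces a small, bounded error, which is the accuracy/area trade-off the paper evaluates.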
HARDWARE SOFTWARE CO-SIMULATION FOR TRAFFIC LOAD COMPUTATION USING MATLAB SIM... — ijcsity
Due to the increasing number of vehicles, traffic is a major problem in urban areas throughout the world. This document presents a newly developed MATLAB Simulink model to compute traffic load for real-time traffic signal control. Signal processing, video and image processing, and the Xilinx Blockset are used extensively for traffic load computation. The approach is based on edge detection: edges are extracted to identify the number of vehicles. The developed model computes results with a high degree of accuracy and can be used to set the green signal duration so as to release traffic dynamically at junctions.
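As a rough software illustration of the edge-based load estimate (not the Simulink/XSG model itself; the threshold and the load proxy are our assumptions), a Sobel edge map can be compared between a live frame and an empty-road reference:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def edge_map(img, threshold=128):
    # 3x3 Sobel convolution; |gx| + |gy| is a hardware-friendly magnitude.
    pad = np.pad(img.astype(np.int32), 1, mode='edge')
    h, w = img.shape
    gx = np.zeros((h, w), dtype=np.int32)
    gy = np.zeros((h, w), dtype=np.int32)
    for dy in range(3):
        for dx in range(3):
            win = pad[dy:dy+h, dx:dx+w]
            gx += SOBEL_X[dy, dx] * win
            gy += SOBEL_Y[dy, dx] * win
    return (np.abs(gx) + np.abs(gy)) > threshold

def traffic_load(frame, empty_road):
    # Proxy for load: extra edge pixels relative to the empty-road reference.
    return int(edge_map(frame).sum() - edge_map(empty_road).sum())
```

The edge-pixel surplus can then be mapped to a green-signal duration by simple thresholding.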
Xilinx System Generator (XSG) provides a Simulink blockset for several hardware operations that can be implemented on various Xilinx field programmable gate arrays (FPGAs). The method described in this paper involves object feature identification and detection. Xilinx System Generator provides blocks to transfer data from the software side of the simulation environment to the hardware side; in this case, from MATLAB Simulink to the System Generator blocks. This is an important concept in the design process using Xilinx System Generator. The Xilinx System Generator, embedded in MATLAB Simulink, is used to program the model, which is then tested on the FPGA board using hardware co-simulation tools.
FACE COUNTING USING OPEN CV & PYTHON FOR ANALYZING UNUSUAL EVENTS IN CROWDS — IRJET Journal
The document discusses face counting using OpenCV and Python by analyzing unusual events in crowds. It proposes using the Haar cascade algorithm for face detection and counting. Feature extraction is performed using gray-level co-occurrence matrix (GLCM) to extract texture and edge features. Discriminant analysis is then used to differentiate between samples accurately. The system aims to correctly detect and count faces in images using Python tools like OpenCV for digital image processing tasks and feature extraction algorithms like GLCM and discrete wavelet transform (DWT). It is intended to have good recognition accuracy compared to previous methods.
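Of the techniques listed, the GLCM feature extraction step is easy to make concrete. The following sketch (our own minimal implementation, not the paper's code) builds a gray-level co-occurrence matrix for one pixel offset and computes the classic Haralick contrast feature from it:

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1), symmetric=True):
    """Gray-level co-occurrence matrix for one pixel offset (dy, dx).

    `img` must contain integer gray levels in [0, levels).
    """
    dy, dx = offset
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    if symmetric:
        m = m + m.T
    return m

def contrast_feature(m):
    # Haralick contrast: (i-j)^2 weighted by the normalized co-occurrence.
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * p).sum())
```

Features like this one, computed over detected face regions, feed the discriminant analysis stage the summary mentions.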
IRJET- Face Recognition using Landmark Estimation and Convolution Neural Network — IRJET Journal
This document summarizes a research paper on face recognition using landmark estimation and convolutional neural networks. The researchers used the LFW dataset to test their system. They first used HOG and SVM for face recognition, achieving 85% accuracy. They then used CNN for further improvement. Keypoints were detected using landmark estimation for face normalization before inputting faces into the CNN. Various CNN architectures and hyperparameters were tested. The best performing model achieved over 95% accuracy on the LFW dataset, demonstrating the effectiveness of the proposed method.
IRJET- Review of Tencent ML-Images Large-Scale Multi-Label Image Database — IRJET Journal
This document summarizes and reviews a paper that created a large-scale multi-label image database, Tencent ML-Images, containing 10 million images with 14,000 possible labels. The authors implemented a ResNet visual representation model trained on this database, achieving 79.2% top-1 accuracy when fine-tuned on ImageNet. However, the reviewers faced challenges re-implementing this work due to limited resources and the large size of the full database. They were able to train a reduced model but could not match the original paper's results because they used less training data. The database and model implementation fill a gap, but the database's machine-generated labels and class imbalances may limit models' ability to learn rich visual representations from it.
Real-Time Implementation and Performance Optimization of Local Derivative Pat... — IJECEIAES
Pattern-based texture descriptors are widely used in content-based image retrieval (CBIR) for efficient retrieval of matching images. The Local Derivative Pattern (LDP), a higher-order local pattern operator originally proposed for face recognition, encodes the distinctive spatial relationships contained in a local region of an image as a feature vector. LDP efficiently extracts finer details and provides efficient retrieval; however, it was proposed for images of limited resolution. Over time, developments in digital image sensors have paved the way for capturing images at very high resolution. The LDP algorithm, though very efficient in content-based image retrieval, does not scale well when capturing features from such high-resolution images, as it becomes computationally very expensive. This paper proposes how to efficiently extract parallelism from the LDP algorithm, along with strategies for implementing it optimally by exploiting inherent characteristics of general-purpose graphics processing units (GPGPUs). By optimally configuring the GPGPU kernels, image retrieval was performed at a much faster rate. The LDP algorithm was ported to a Compute Unified Device Architecture (CUDA) supported GPGPU, and a maximum speedup of around 240x was achieved compared to its sequential counterpart.
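To make the operator concrete, here is a simplified NumPy sketch of a second-order LDP along the 0° direction only (the full descriptor uses four directions and histograms the codes; this single-direction version and its binarization rule are a simplification, not the paper's GPGPU kernel):

```python
import numpy as np

def ldp_0deg(img):
    """Second-order Local Derivative Pattern along the 0° direction.

    First-order derivative: I'(p) = I(p) - I(p shifted right).
    Each pixel gets an 8-bit code marking sign disagreement between the
    centre derivative and each of its 8 neighbours' derivatives.
    """
    f = img.astype(np.int32)
    d = f[:, :-1] - f[:, 1:]          # first-order derivative, 0° direction
    # 8-neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = d.shape
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = d[1:h-1, 1:w-1]
    for bit, (dy, dx) in enumerate(offs):
        nb = d[1+dy:h-1+dy, 1+dx:w-1+dx]
        code |= (center * nb <= 0).astype(np.uint8) << bit
    return code
```

Each pixel's code is independent of every other's, which is exactly the data parallelism the paper maps onto CUDA thread blocks.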
This document discusses face detection on embedded systems. It begins by providing background on face detection applications and existing solutions. It then describes implementing a Viola-Jones face detection algorithm on a PC as a software prototype, achieving 80% accuracy. This implementation is then ported to an embedded system using a Nios II softcore processor. Profile analysis shows that the bottleneck is the search over candidate locations. The document explores reducing the search space by downsampling images using bicubic interpolation, achieving a 4x speedup with no loss of accuracy on test images.
Cuda Based Performance Evaluation Of The Computational Efficiency Of The Dct ... — acijjournal
Recent advances in computing, such as massively parallel GPUs (graphics processing units), coupled with the need to store and deliver large quantities of digital data, especially images, have brought a number of challenges for computer scientists, the research community, and other stakeholders. These challenges, such as the prohibitively large cost of manipulating digital data, have been a focus of the research community in recent years and have led to the investigation of image compression techniques that can achieve excellent results. One such technique is the Discrete Cosine Transform (DCT), which separates an image into parts of differing frequencies and has the advantage of excellent energy compaction.
This paper investigates the use of the Compute Unified Device Architecture (CUDA) programming model to implement the Cordic-based Loeffler DCT algorithm for efficient image compression. The computational efficiency is analyzed and evaluated on both the CPU and the GPU. The PSNR (peak signal-to-noise ratio) is used to evaluate image reconstruction quality. The results are presented and discussed.
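For reference, the 8x8 DCT the paper accelerates can be written in matrix form, along with the PSNR metric it uses for quality evaluation. This is the mathematically equivalent reference transform, not the fast Cordic/Loeffler factorization itself:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix C, so Y = C @ X @ C.T for an n x n block.
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)
    return c

def psnr(original, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB.
    mse = np.mean((original.astype(np.float64) - reconstructed) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

C = dct_matrix()
block = np.arange(64, dtype=np.float64).reshape(8, 8)
coeffs = C @ block @ C.T          # forward 2-D DCT
restored = C.T @ coeffs @ C       # inverse transform
```

The Loeffler algorithm reduces the multiply count of this matrix product from 64 to 11 per 1-D transform, which is what makes the GPU mapping worthwhile.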
Study of Energy Efficient Images with Just Noticeable Difference Threshold Ba... — ijtsrd
This document presents a novel method for producing energy-efficient images using a Feature Transform based Just Noticeable Difference Threshold (FTJNDT) model. The proposed method aims to reduce image energy consumption on displays like OLED by lowering pixel luminance below the just-noticeable difference threshold while maintaining perceptual quality. The FTJNDT model determines individual luminance thresholds for each image block based on visual saliency and non-linear modulation functions. An optimization framework is used to estimate modulation parameters and feature values using an objective image quality assessment. Experimental results showed the method reduced image energy consumption by an average of 4.31% compared to original images.
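The core idea, dimming each pixel by no more than its just-noticeable threshold and measuring the luminance saved, can be sketched as follows (a toy illustration with a simple total-luminance energy proxy; the paper's FTJNDT model computes the per-block thresholds, which we take here as given):

```python
import numpy as np

def reduce_energy(img, jnd_threshold):
    """Lower each pixel's luminance by up to its just-noticeable threshold.

    On emissive displays such as OLED, power grows with pixel luminance,
    so staying within the JND keeps the change imperceptible while
    cutting energy. `jnd_threshold` is a per-pixel array or a scalar bound.
    """
    out = np.clip(img.astype(np.int32) - jnd_threshold, 0, 255)
    return out.astype(np.uint8)

def energy_saving(original, dimmed):
    # Simple OLED energy proxy: total luminance. Returns the fractional saving.
    e0 = float(original.astype(np.float64).sum())
    return (e0 - float(dimmed.astype(np.float64).sum())) / e0
```

The paper's optimization framework effectively searches for the largest `jnd_threshold` per block that an objective quality metric still accepts.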
IRJET - Wavelet based Image Fusion using FPGA for Biomedical Application — IRJET Journal
This document describes a wavelet-based image fusion system implemented on an FPGA for biomedical applications. The system takes two input images, applies discrete wavelet transforms to both, then fuses the wavelet coefficients to create a single output image. It uses MATLAB and Xilinx System Generator to simulate the design in Simulink and implement it on a Virtex6 FPGA. The results show that wavelet-based fusion can combine the spatial and spectral information from multiple input images into a higher quality fused output image suitable for medical applications like fusing MRI and CT scans.
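The fusion scheme can be illustrated with a one-level Haar transform in Python (a software sketch of the general wavelet-fusion idea, not the Virtex6 design; the average-approximation / max-detail fusion rule is one common choice and an assumption here):

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands.
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    ll = (lo[0::2] + lo[1::2]) / 2; lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2; hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    h, w = ll.shape
    lo = np.zeros((2 * h, w)); hi = np.zeros((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.zeros((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse(img_a, img_b):
    # Average the approximation band; keep the stronger detail coefficients.
    ca, cb = haar_dwt2(img_a), haar_dwt2(img_b)
    fused = [(ca[0] + cb[0]) / 2]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(np.where(np.abs(da) >= np.abs(db), da, db))
    return haar_idwt2(*fused)
```

Applied to, say, an MRI and a CT slice, the max-detail rule keeps the sharpest structure from each modality in the fused output.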
1. The document describes a deep learning model to analyze and classify rice quality using images of rice paddies. Rice paddies are photographed and the images are analyzed by a model trained on custom datasets to classify rice purity levels.
2. A convolutional neural network model is built using TensorFlow to classify rice paddies as pure, impure, or partially impure based on image analysis. The model achieves comparable accuracy to state-of-the-art systems.
3. The model can be used by rice mills to automatically analyze rice purity from images and categorize rice without manual inspection, improving efficiency over traditional methods.
A high frame-rate of cell-based histogram-oriented gradients human detector a... — IAESIJAI
In terms of accuracy, one of the well-known techniques for human detection is the histogram of oriented gradients (HOG) method. Unfortunately, the HOG feature calculation is highly complex and computationally intensive. Thus, in this research, we aim to achieve a resource-efficient and low-power HOG hardware architecture while maintaining high frame-rate performance for real-time processing. A hardware architecture for human detection in 2D images using a simplified HOG algorithm is introduced in this paper. To increase the frame rate, we simplify the HOG computation while maintaining detection quality. In the hardware architecture, we design a cell-based processing method instead of a window-based method. Moreover, 64 parallel and pipelined architectures are used to increase processing speed. Our pipeline architecture can significantly reduce memory bandwidth and avoid any external memory utilization. An Altera field programmable gate array (FPGA) E2-115 was employed to evaluate the design. The evaluation results show that our design achieves a performance of up to 86.51 frames per second (fps) at a relatively low operating frequency (27 MHz). It consumes 48,360 logic elements (LEs) and 4,363 registers. The performance test results reveal that the proposed solution exhibits a trade-off between fps, clock frequency, register usage, and the fps-to-clock ratio.
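The cell-based idea above can be sketched in Python: gradients are binned once per cell rather than recomputed for every overlapping detection window. This is a simplified software illustration (symmetric-difference gradients, hard binning, no block normalization), not the paper's 64-lane pipeline:

```python
import numpy as np

def cell_hog(img, cell=8, bins=9):
    """Per-cell gradient histograms (unsigned orientation, 0-180 degrees).

    Processing cell by cell means each pixel's gradient is binned exactly
    once, which is the memory saving behind the cell-based architecture.
    """
    f = img.astype(np.float64)
    gx = np.zeros_like(f); gy = np.zeros_like(f)
    gx[:, 1:-1] = f[:, 2:] - f[:, :-2]
    gy[1:-1, :] = f[2:, :] - f[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    h, w = img.shape
    hist = np.zeros((h // cell, w // cell, bins))
    for cy in range(h // cell):
        for cx in range(w // cell):
            sl = np.s_[cy * cell:(cy + 1) * cell, cx * cell:(cx + 1) * cell]
            hist[cy, cx] = np.bincount(bin_idx[sl].ravel(),
                                       weights=mag[sl].ravel(), minlength=bins)
    return hist
```

A sliding detection window then only sums precomputed cell histograms instead of touching raw pixels again.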
IRJET- Different Approaches for Implementation of Fractal Image Compressi... — IRJET Journal
This document discusses different approaches for implementing fractal image compression on medical images using parallel processing on GPUs. It analyzes fractal image compression algorithms in MATLAB to compress medical images with very low loss in image quality. The key aspects covered are:
1. Fractal image compression uses self-similarity within images to achieve high compression ratios while preserving image quality.
2. Implementing fractal compression in parallel on GPUs can significantly reduce computational time compared to CPU implementations, as the redundant processing can be distributed across multiple processors.
3. The document implements and evaluates different fractal compression algorithms on MATLAB to compress medical images with low signal-to-noise ratio, high compression ratio, and reduced encoding time.
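The self-similarity search in point 1 is also the source of the parallelism in point 2: every range block's search over domain blocks is independent. The following toy encoder sketches that search in Python (our own minimal illustration; real fractal coders also try rotated/flipped domains and quantize the coefficients):

```python
import numpy as np

def downsample2(block):
    # 2x2 averaging, so a domain block matches the range-block size.
    return (block[0::2, 0::2] + block[0::2, 1::2] +
            block[1::2, 0::2] + block[1::2, 1::2]) / 4.0

def encode_fractal(img, r=4):
    """For every r x r range block, find the best 2r x 2r domain block and
    least-squares contrast/brightness (s, o) so that
    range ~ s * downsample(domain) + o. Returns one (dy, dx, s, o) per block.
    """
    a = img.astype(np.float64)
    h, w = a.shape
    domains = [(y, x, downsample2(a[y:y + 2 * r, x:x + 2 * r]))
               for y in range(0, h - 2 * r + 1, r)
               for x in range(0, w - 2 * r + 1, r)]
    codes = []
    for ry in range(0, h, r):
        for rx in range(0, w, r):
            rng = a[ry:ry + r, rx:rx + r].ravel()
            best = None
            for dy, dx, dom in domains:
                d = dom.ravel()
                var = ((d - d.mean()) ** 2).sum()
                s = 0.0 if var == 0 else ((d - d.mean()) * (rng - rng.mean())).sum() / var
                o = rng.mean() - s * d.mean()
                err = ((s * d + o - rng) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, dy, dx, s, o)
            codes.append(best[1:])
    return codes
```

On a GPU, each range block's search becomes one thread block, which is why encoding time drops so sharply relative to a CPU.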
End-to-end deep auto-encoder for segmenting a moving object with limited tra... — IJECEIAES
The document proposes two end-to-end deep auto-encoder approaches for segmenting moving objects from surveillance videos when limited training data is available. The first approach uses transfer learning with a pre-trained VGG-16 model as the encoder and its transposed architecture as the decoder. The second approach uses a multi-depth auto-encoder with convolutional and upsampling layers. Both approaches apply data augmentation techniques like PCA and traditional methods to increase the training data size. The models are trained and evaluated on the CDnet2014 dataset, achieving better performance than other models trained with limited data.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Iaetsd a low power and high throughput re-configurable bip for multipurpose a... — Iaetsd
This document presents a reconfigurable low power binary image processor for image processing applications. The processor consists of a reconfigurable binary processing module with mixed-grained architecture providing flexibility, efficiency and performance. Line memories are selected for lower power consumption and clock gating technique is used to reduce their power. The processor supports real-time binary image processing operations like morphological transformations and is suitable for applications like object recognition and tracking.
IRJET- A Review Paper on Object Detection using Zynq-7000 FPGA for an Embedde... — IRJET Journal
This document reviews object detection using the Zynq-7000 FPGA for embedded applications. It discusses how the Zynq-7000 FPGA is a promising platform for embedded applications due to its dual-core ARM processor and programmable logic on a single chip. The document reviews various object detection algorithms such as R-CNN, Fast R-CNN, Faster R-CNN, and YOLO and compares their prediction times. It is proposed to implement object detection on the Zynq-7000 FPGA using algorithms like YOLO that provide fast and accurate detection in real-time.
This document provides an overview of a project that implemented image filtering using VHDL on an FPGA board. It discusses designing filters like average, Sobel, Gaussian, and Laplacian filters. Cache memory and a processing unit were developed to hold pixel values and apply filter kernels. Different methods for multiplication in the convolution process were evaluated. Results showed the output images after applying each filter both in software and on the FPGA board. In conclusion, FPGAs provide reconfigurable, accelerated processing for image applications like filtering compared to general purpose computers.
Photo Editing And Sharing Web Application With AI-Assisted Features — IRJET Journal
The document describes a web application for photo editing and sharing that utilizes generative adversarial networks (GANs) and other machine learning techniques to provide AI-assisted editing features. Specifically, it uses StyleGAN to allow users to semantically edit image attributes like age, pose, and smile without needing expert photo editing skills. The application was developed with Python-Django and its AI features include encoding images into latent spaces, editing the latent vectors to modify attributes, and generating high resolution images. The goal is to make image editing more accessible while producing high fidelity results.
HARDWARE/SOFTWARE CO-DESIGN OF A 2D GRAPHICS SYSTEM ON FPGA — ijesajournal
This document describes the hardware/software co-design of a 2D graphics system implemented on an FPGA. It discusses the hardware design which includes developing Bresenham and BitBLT IP cores to accelerate computationally intensive 2D graphics operations. It also discusses the software design which includes graphics drivers and APIs running on a CPU core to initialize and manage the graphics creation process by driving the IP cores. The system is aimed to benefit low-end embedded applications by providing reconfigurable 2D graphics capabilities on FPGA.
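The Bresenham line algorithm that the IP core accelerates is a good example of why it maps well to hardware: it rasterizes a line using only integer additions and comparisons. A standard software reference (not the paper's core) looks like this:

```python
def bresenham(x0, y0, x1, y1):
    # Integer-only line rasterization: returns the list of pixels on the line.
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy
    return points
```

Because the loop body is a couple of adds and compares, an FPGA implementation can emit one pixel per clock, which is the acceleration the co-design exploits.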
This document provides an overview of topics to be covered in a mid-term exam for a computer graphics course. The topics include introductions and applications of computer graphics, graphics hardware and I/O devices, interactive and non-interactive computer graphics, raster and vector graphics, scan-converting lines and shapes, 2D transformations, 2D viewing, and 2D zooming and panning. The document was prepared by Bahadar Sher, who provides his email for contact.
A Parallel Architecture for Multiple-Face Detection Technique Using AdaBoost ... — Hadi Santoso
Face detection is a very important biometric application in the field of image analysis and computer vision. The basic face detection method is the AdaBoost algorithm with cascaded Haar-like feature classifiers, based on the framework proposed by Viola and Jones. Real-time multiple-face detection, for instance on high-resolution CCTV, is a computation-intensive procedure; if performed sequentially, optimal real-time performance will not be achieved. In this paper we propose an architectural design for a parallel, multiple-face detection technique based on Viola and Jones' framework. To do this systematically, we examine the problem from four points of view: data processing taxonomy, parallel memory architecture, the parallel programming model, and the design of the parallel program. We also build a prototype of the proposed parallel technique and conduct a series of experiments to investigate the achieved speedup.
Image super resolution using Generative Adversarial Network — IRJET Journal
This document discusses using a generative adversarial network (GAN) for image super resolution. It begins with an abstract explaining that super resolution aims to increase image resolution by adding sub-pixel detail, a task to which convolutional neural networks are well suited. Recent years have seen interest in reconstructing super-resolution video sequences from low-resolution images. The document then reviews the literature on image super-resolution techniques, including deep learning methods, and describes a methodology that uses a CNN, comparing input images against a trained dataset to predict whether high-resolution images can be generated from low-resolution ones.
Satellite image processing is an intricate task that requires vast computation and data processing, which cannot be handled by a single computer. Furthermore, processing the massive amount of data accumulated by the satellite is a huge challenge for the end user. Hence, grid computing is an essential platform for providing high computing performance at the user end. This article reviews the grid services used for satellite image processing and large-scale data processing.
IRJET- Sketch-Verse: Sketch Image Inversion using DCNN — IRJET Journal
The document describes a system that uses deep convolutional neural networks to convert face sketch images to photorealistic images. It first constructs a semi-simulated dataset from a large dataset containing face sketches and corresponding photos. It then trains a model using techniques like deep residual learning and perceptual losses. The trained model is able to take face sketches as input and generate photorealistic images as output. An evaluation of the system found a conversion rate of around 70% for test images. The authors aim to improve the model's robustness through additional data augmentation and training.
Because of the rapid growth in technology breakthroughs, including
multimedia and cell phones, Telugu character recognition (TCR) has recently
become a popular study area. It is still necessary to construct automated and
intelligent online TCR models, even if many studies have focused on offline
TCR models. The Telugu character dataset construction and validation using
an Inception and ResNet-based model are presented. The collection of 645
letters in the dataset includes 18 Achus, 38 Hallus, 35 Othulu, 34×16
Guninthamulu, and 10 Ankelu. The proposed technique aims to efficiently
recognize and identify distinctive Telugu characters online. This model's main
pre-processing steps to achieve its goals include normalization, smoothing,
and interpolation. Improved recognition performance can be attained by using
stochastic gradient descent (SGD) to optimize the model's hyperparameters.
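The SGD optimization mentioned above can be illustrated with a minimal sketch: plain stochastic gradient descent fitting a toy linear model. The data, learning rate, and epoch count are illustrative choices, not taken from the paper's TCR setup.

```python
import random

def sgd_least_squares(xs, ys, lr=0.05, epochs=300, seed=0):
    """Fit y = w*x + b by stochastic gradient descent on squared error."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    idx = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(idx)                   # visit samples in random order
        for i in idx:
            err = (w * xs[i] + b) - ys[i]  # prediction error for sample i
            w -= lr * err * xs[i]          # gradient step for the weight
            b -= lr * err                  # gradient step for the bias
    return w, b

# Noiseless toy data generated from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
w, b = sgd_least_squares(xs, ys)
```

In practice, learning rate and epoch count are exactly the kind of hyperparameters such studies tune.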
Scientific workload execution on a distributed computing platform such as a
cloud environment is time-consuming and expensive. The scientific workload
has task dependencies with different service level agreement (SLA)
prerequisites at different levels. Existing workload scheduling (WS) designs
are not efficient in assuring SLA at the task level. They also induce higher costs, as the majority of scheduling mechanisms reduce either time or energy alone. To reduce cost, both energy and makespan must be optimized together when allocating resources. No prior work has considered optimizing energy and processing time together while meeting task-level SLA requirements. This paper
presents task level energy and performance assurance-workload scheduling
(TLEPA-WS) algorithm for the distributed computing environment. The
TLEPA-WS guarantees energy minimization with the performance
requirement of the parallel application under a distributed computational
environment. Experiment results show a significant reduction in using energy
and makespan; thereby reducing the cost of workload execution in comparison
with various standard workload execution models.
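The joint energy/makespan objective can be made concrete with a small greedy sketch that, for each task, balances added makespan against added energy. The machine model, the weight `alpha`, and the greedy rule are illustrative assumptions, not the TLEPA-WS algorithm itself.

```python
def schedule(tasks, machines, alpha=0.5):
    """Greedy assignment: each task goes to the machine minimising a
    weighted sum of resulting makespan and added energy.
    tasks: list of work units; machines: list of (speed, power) tuples."""
    finish = [0.0] * len(machines)   # current finish time per machine
    energy = 0.0
    plan = []
    for work in sorted(tasks, reverse=True):       # largest task first
        best, best_cost = None, None
        for m, (speed, power) in enumerate(machines):
            t = work / speed                        # execution time on m
            new_makespan = max(max(finish), finish[m] + t)
            cost = alpha * new_makespan + (1 - alpha) * power * t
            if best_cost is None or cost < best_cost:
                best, best_cost = m, cost
        speed, power = machines[best]
        t = work / speed
        finish[best] += t
        energy += power * t
        plan.append(best)
    return plan, max(finish), energy
```

Setting `alpha` toward 1 favors makespan, toward 0 favors energy; optimizing either alone is exactly the limitation the abstract criticizes.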
Similar to A systematic literature review on hardware implementation of image processing
This document discusses face detection on embedded systems. It begins by providing background on face detection applications and existing solutions. It then describes implementing a Viola-Jones face detection algorithm on a PC as a software prototype, achieving 80% accuracy. This implementation is then ported to an embedded system using a Nios II softcore processor. Profile analysis shows the bottleneck is searching locations. The document explores reducing the search space through downsampling images using bicubic interpolation, achieving a 4x speedup with no loss of accuracy on test images.
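A quick sketch shows why downsampling shrinks the detector's search space: halving each image dimension cuts the number of sliding-window positions by roughly a factor of four. The window size and image dimensions below are illustrative, and the paper used bicubic interpolation rather than the simple block averaging shown here.

```python
def window_positions(w, h, win=24, step=1):
    """Number of sliding-window positions for a win x win detector."""
    return max(0, (w - win) // step + 1) * max(0, (h - win) // step + 1)

def downsample2(img):
    """Naive 2x downsampling by 2x2 block averaging (stand-in for bicubic)."""
    h2, w2 = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w2)] for y in range(h2)]

full = window_positions(640, 480)   # positions at full resolution
half = window_positions(320, 240)   # positions after 2x downsampling
```

The ratio `full / half` is slightly above 4, which is consistent with the ~4x speedup reported.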
Cuda Based Performance Evaluation Of The Computational Efficiency Of The Dct ...acijjournal
Recent advances in computing, such as massively parallel GPUs (graphical processing units), coupled
with the need to store and deliver large quantities of digital data especially images, has brought a number
of challenges for Computer Scientists, the research community and other stakeholders. These challenges,
such as prohibitively large costs to manipulate the digital data amongst others, have been the focus of the
research community in recent years and have led to the investigation of image compression techniques that
can achieve excellent results. One such technique is the Discrete Cosine Transform, which helps separate
an image into parts of differing frequencies and has the advantage of excellent energy-compaction.
This paper investigates the use of the Compute Unified Device Architecture (CUDA) programming model
to implement the DCT based Cordic based Loeffler algorithm for efficient image compression. The
computational efficiency is analyzed and evaluated under both the CPU and GPU. The PSNR (Peak Signal
to Noise Ratio) is used to evaluate image reconstruction quality in this paper. The results are presented
and discussed.
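The energy-compaction property the abstract mentions is easy to demonstrate: for a smooth block, almost all DCT coefficient magnitude lands in the low-frequency corner. This NumPy sketch uses a plain orthonormal DCT-II, not the Cordic-based Loeffler variant evaluated in the paper.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)        # rescale the DC row
    return c

def dct2(block):
    """Separable 2-D DCT: transform rows, then columns."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

# A smooth 8x8 gradient block: energy concentrates in low frequencies
block = np.add.outer(np.arange(8.0), np.arange(8.0))
coeffs = dct2(block)
low = np.abs(coeffs[:2, :2]).sum()    # low-frequency corner
total = np.abs(coeffs).sum()
```

For this block, over 90% of the total coefficient magnitude sits in the 2x2 low-frequency corner, which is why discarding high-frequency coefficients compresses well.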
Study of Energy Efficient Images with Just Noticeable Difference Threshold Ba...ijtsrd
This document presents a novel method for producing energy-efficient images using a Feature Transform based Just Noticeable Difference Threshold (FTJNDT) model. The proposed method aims to reduce image energy consumption on displays like OLED by lowering pixel luminance below the just-noticeable difference threshold while maintaining perceptual quality. The FTJNDT model determines individual luminance thresholds for each image block based on visual saliency and non-linear modulation functions. An optimization framework is used to estimate modulation parameters and feature values using an objective image quality assessment. Experimental results showed the method reduced image energy consumption by an average of 4.31% compared to original images.
IRJET - Wavelet based Image Fusion using FPGA for Biomedical ApplicationIRJET Journal
This document describes a wavelet-based image fusion system implemented on an FPGA for biomedical applications. The system takes two input images, applies discrete wavelet transforms to both, then fuses the wavelet coefficients to create a single output image. It uses MATLAB and Xilinx System Generator to simulate the design in Simulink and implement it on a Virtex6 FPGA. The results show that wavelet-based fusion can combine the spatial and spectral information from multiple input images into a higher quality fused output image suitable for medical applications like fusing MRI and CT scans.
1. The document describes a deep learning model to analyze and classify rice quality using images of rice paddies. Rice paddies are photographed and the images are analyzed by a model trained on custom datasets to classify rice purity levels.
2. A convolutional neural network model is built using TensorFlow to classify rice paddies as pure, impure, or partially impure based on image analysis. The model achieves comparable accuracy to state-of-the-art systems.
3. The model can be used by rice mills to automatically analyze rice purity from images and categorize rice without manual inspection, improving efficiency over traditional methods.
A high frame-rate of cell-based histogram-oriented gradients human detector a...IAESIJAI
In terms of accuracy, one of the well-known techniques for human
detection is the histogram-oriented gradients (HOG) method. Unfortunately,
the HOG feature calculation is highly complex and computationally
intensive. Thus, in this research, we aim to achieve a resource-efficient and
low-power HOG hardware architecture while maintaining its high frame-rate
performance for real-time processing. A hardware architecture for human
detection in 2D images using simplified HOG algorithm was introduced in
this paper. To increase the frame-rate, we simplify the HOG computation
while maintaining the detection quality. In the hardware architecture, we
design a cell-based processing method instead of a window-based method.
Moreover, 64 parallel and pipeline architectures were used to increase the
processing speed. Our pipeline architecture can significantly reduce memory
bandwidth and avoid any external memory utilization. An Altera field-programmable gate array (FPGA) E2-115 board was employed to evaluate the design. The evaluation results show that our design achieves a performance of up to 86.51 frames per second (fps) at a relatively low operating frequency (27 MHz). It consumes 48,360 logic elements (LEs) and 4,363 registers. The performance test results reveal that the proposed solution exhibits a trade-off between fps, clock frequency, register usage, and the fps-to-clock ratio.
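A minimal cell-level sketch clarifies the per-cell histogram such a HOG architecture computes in hardware: gradient magnitudes vote into unsigned orientation bins. The 8x8 cell and 9-bin convention below is the common choice, assumed here rather than taken from the paper.

```python
import numpy as np

def cell_hog(cell, bins=9):
    """Orientation histogram for one cell: gradient magnitudes are
    accumulated into unsigned-orientation bins (0..180 degrees)."""
    gy, gx = np.gradient(cell.astype(float))       # row, column gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist = np.zeros(bins)
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    np.add.at(hist, idx.ravel(), mag.ravel())      # magnitude-weighted vote
    return hist

# A vertical edge: all gradient energy falls into the 0-degree bin
cell = np.tile([0., 0., 0., 0., 1., 1., 1., 1.], (8, 1))
h = cell_hog(cell)
```

Computing histograms per cell, as the paper's cell-based architecture does, lets neighbouring detection windows reuse the same cell results instead of recomputing them per window.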
IRJET- Different Approaches for Implementation of Fractal Image Compressi...IRJET Journal
This document discusses different approaches for implementing fractal image compression on medical images using parallel processing on GPUs. It analyzes fractal image compression algorithms in MATLAB to compress medical images with very low loss in image quality. The key aspects covered are:
1. Fractal image compression uses self-similarity within images to achieve high compression ratios while preserving image quality.
2. Implementing fractal compression in parallel on GPUs can significantly reduce computational time compared to CPU implementations, as the redundant processing can be distributed across multiple processors.
3. The document implements and evaluates different fractal compression algorithms on MATLAB to compress medical images with low signal-to-noise ratio, high compression ratio, and reduced encoding time.
End-to-end deep auto-encoder for segmenting a moving object with limited tra...IJECEIAES
The document proposes two end-to-end deep auto-encoder approaches for segmenting moving objects from surveillance videos when limited training data is available. The first approach uses transfer learning with a pre-trained VGG-16 model as the encoder and its transposed architecture as the decoder. The second approach uses a multi-depth auto-encoder with convolutional and upsampling layers. Both approaches apply data augmentation techniques like PCA and traditional methods to increase the training data size. The models are trained and evaluated on the CDnet2014 dataset, achieving better performance than other models trained with limited data.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Iaetsd a low power and high throughput re-configurable bip for multipurpose a...Iaetsd Iaetsd
This document presents a reconfigurable low power binary image processor for image processing applications. The processor consists of a reconfigurable binary processing module with mixed-grained architecture providing flexibility, efficiency and performance. Line memories are selected for lower power consumption and clock gating technique is used to reduce their power. The processor supports real-time binary image processing operations like morphological transformations and is suitable for applications like object recognition and tracking.
IRJET- A Review Paper on Object Detection using Zynq-7000 FPGA for an Embedde...IRJET Journal
This document reviews object detection using the Zynq-7000 FPGA for embedded applications. It discusses how the Zynq-7000 FPGA is a promising platform for embedded applications due to its dual-core ARM processor and programmable logic on a single chip. The document reviews various object detection algorithms such as R-CNN, Fast R-CNN, Faster R-CNN, and YOLO and compares their prediction times. It is proposed to implement object detection on the Zynq-7000 FPGA using algorithms like YOLO that provide fast and accurate detection in real-time.
This document provides an overview of a project that implemented image filtering using VHDL on an FPGA board. It discusses designing filters like average, Sobel, Gaussian, and Laplacian filters. Cache memory and a processing unit were developed to hold pixel values and apply filter kernels. Different methods for multiplication in the convolution process were evaluated. Results showed the output images after applying each filter both in software and on the FPGA board. In conclusion, FPGAs provide reconfigurable, accelerated processing for image applications like filtering compared to general purpose computers.
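The filters named above (average, Sobel, Gaussian, Laplacian) all reduce to the same operation the FPGA implements: a sliding 3x3 convolution. A software sketch of that kernel step, shown with the Sobel horizontal-gradient kernel, might look like:

```python
def convolve3x3(img, k):
    """Apply a 3x3 kernel to the interior of a grayscale image (row lists);
    border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += k[dy + 1][dx + 1] * img[y + dy][x + dx]
            out[y][x] = acc
    return out

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
img = [[0, 0, 10, 10]] * 4                        # image with a vertical edge
edges = convolve3x3(img, SOBEL_X)
```

On the FPGA, the line buffers ("cache memory" above) hold the three image rows the kernel needs, and the nine multiply-accumulates run in parallel.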
Photo Editing And Sharing Web Application With AI- Assisted FeaturesIRJET Journal
The document describes a web application for photo editing and sharing that utilizes generative adversarial networks (GANs) and other machine learning techniques to provide AI-assisted editing features. Specifically, it uses StyleGAN to allow users to semantically edit image attributes like age, pose, and smile without needing expert photo editing skills. The application was developed with Python-Django and its AI features include encoding images into latent spaces, editing the latent vectors to modify attributes, and generating high resolution images. The goal is to make image editing more accessible while producing high fidelity results.
HARDWARE/SOFTWARE CO-DESIGN OF A 2D GRAPHICS SYSTEM ON FPGAijesajournal
This document describes the hardware/software co-design of a 2D graphics system implemented on an FPGA. It discusses the hardware design which includes developing Bresenham and BitBLT IP cores to accelerate computationally intensive 2D graphics operations. It also discusses the software design which includes graphics drivers and APIs running on a CPU core to initialize and manage the graphics creation process by driving the IP cores. The system is aimed to benefit low-end embedded applications by providing reconfigurable 2D graphics capabilities on FPGA.
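Bresenham's algorithm, which the IP core above accelerates, uses only integer additions and comparisons per pixel, which is exactly why it maps well to hardware. A reference software version covering all octants is:

```python
def bresenham(x0, y0, x1, y1):
    """Integer-only Bresenham line rasterisation (all octants)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                    # running error term
    pts = []
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:                 # step in x
            err += dy
            x0 += sx
        if e2 <= dx:                 # step in y
            err += dx
            y0 += sy
    return pts
```

Each loop iteration emits one pixel with no multiplication or division, so a hardware core can produce one pixel per clock cycle.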
This document provides an overview of topics to be covered in a mid-term exam for a computer graphics course. The topics include introductions and applications of computer graphics, graphics hardware and I/O devices, interactive and non-interactive computer graphics, raster and vector graphics, scan converting lines and shapes, 2D transformations, 2D viewing, and 2D zooming and panning. The document was prepared by Bahadar sher and provides his email for contact.
A Parallel Architecture for Multiple-Face Detection Technique Using AdaBoost ...Hadi Santoso
Face detection is a very important biometric application in the field of image
analysis and computer vision. The basic face detection method is the AdaBoost algorithm with a cascade of Haar-like feature classifiers, based on the framework proposed by Viola and Jones. Real-time multiple-face detection,
for instance on CCTVs with high resolution, is a computation-intensive
procedure. If the procedure is performed sequentially, an optimal real-time
performance will not be achieved. In this paper we propose an architectural
design for a parallel and multiple-face detection technique based on Viola
and Jones' framework. To do this systematically, we look at the problem
from 4 points of view, namely: data processing taxonomy, parallel memory
architecture, the model of parallel programming, as well as the design of
parallel program. We also build a prototype of the proposed parallel
technique and conduct a series of experiments to investigate the gained
acceleration.
Predicting human emotions in real-world scenarios requires investigating human subjects. A significant number of psychological affects (feelings) must be produced to directly release human emotions. The development of affect theory leads one to believe that one must be aware of one's sentiments and emotions to forecast one's behavior. The proposed line of inquiry focuses on developing a reliable model that maps neurophysiological data onto actual feelings. Any change in emotional affect will directly elicit a response in the body's physiological systems. This approach is built on the notion of Gaussian mixture models (GMM). The achieved outcomes are directly shaped by the statistical response after data processing, quantitative findings on emotion labels, and coincident responses with training samples. The suggested method is evaluated against a state-of-the-art technique in terms of statistical parameters such as population mean and standard deviation. The proposed system determines an individual's emotional state after a minimum of six learning iterations using the Gaussian expectation-maximization (GEM) statistical model, in which the iterations tend toward zero error. Each of these steps improves predictions while simultaneously increasing the amount of information extracted.
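The expectation-maximization loop behind a GMM can be sketched in a few lines for the one-dimensional, two-component case; the initialisation and toy data here are illustrative, not the paper's neurophysiological signals.

```python
import math

def em_two_gaussians(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (a minimal GEM sketch)."""
    mu = [min(data), max(data)]       # crude initialisation from the spread
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, pi

# Two well-separated clusters around 0 and 10
data = [-0.2, 0.0, 0.1, 0.3, 9.8, 10.0, 10.1, 10.3]
mu, var, pi = em_two_gaussians(data)
```

Each iteration cannot decrease the data likelihood, which is the sense in which the iterations "tend toward zero error" above.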
Early diagnosis of cancers is a major requirement for patients and a
complicated job for the oncologist. If it is diagnosed early, it could have made
the patient more likely to live. For a few decades, fuzzy logic emerged as an
emphatic technique in the identification of diseases like different types of
cancers. The recognition of cancer diseases mostly operated with inexactness,
inaccuracy, and vagueness. This paper aims to design the fuzzy expert system
(FES) and its implementation for the detection of prostate cancer. Specifically,
prostate-specific antigen (PSA), prostate volume (PV), age, and percentage
free PSA (%FPSA) are used to determine prostate cancer risk (PCR), while
PCR serves as an output parameter. The Mamdani fuzzy inference method is used
to calculate a range of PCR. The system provides a scale of risk of prostate
cancer and clears the path for the oncologist to determine whether their
patients need a biopsy. This system is fast, as it requires minimal calculation and hence comparatively little time, which reduces mortality and morbidity; it is more reliable than other economical systems and can be used frequently by doctors.
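A toy Mamdani pipeline (fuzzify, clip rule outputs with min, aggregate with max, defuzzify by centroid) makes the inference step concrete. The membership functions and the single-input rule base below are invented for illustration; the paper's FES combines four inputs (PSA, PV, age, %FPSA).

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pcr_mamdani(psa, steps=200):
    """Two-rule Mamdani inference: low PSA -> low risk, high PSA -> high risk.
    The output is defuzzified by the centroid of the aggregated set."""
    low_fire = tri(psa, -4.0, 0.0, 4.0)      # strength of 'PSA is low'
    high_fire = tri(psa, 2.0, 10.0, 18.0)    # strength of 'PSA is high'
    num = den = 0.0
    for i in range(steps + 1):
        r = 100.0 * i / steps                # candidate risk value, 0..100
        # clip each output set by its rule strength, aggregate with max
        mu = max(min(low_fire, tri(r, -50.0, 0.0, 50.0)),
                 min(high_fire, tri(r, 50.0, 100.0, 150.0)))
        num += mu * r
        den += mu
    return num / den if den else 50.0
```

The graded output is what gives the oncologist a scale of risk rather than a hard yes/no.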
The biomedical profession has gained importance due to the rapid and accurate diagnosis of clinical patients using computer-aided diagnosis (CAD) tools.
The diagnosis and treatment of Alzheimer’s disease (AD) using complementary multimodalities can improve the quality of life and mental state of patients.
In this study, we integrated a lightweight custom convolutional neural network
(CNN) model and nature-inspired optimization techniques to enhance the performance, robustness, and stability of progress detection in AD. A multi-modal
fusion database approach was implemented, including positron emission tomography (PET) and magnetic resonance imaging (MRI) datasets, to create a fused
database. We compared the performance of custom and pre-trained deep learning models with and without optimization and found that employing nature-inspired algorithms like particle swarm optimization (PSO) significantly improved system performance. The proposed methodology,
which includes a fused multimodality database and optimization strategy, improved performance metrics such as training, validation, test accuracy, precision, and recall. Furthermore, PSO was found to improve the performance of
pre-trained models by 3-5% and custom models by up to 22%. Combining different medical imaging modalities improved the overall model performance by
2-5%. In conclusion, a customized lightweight CNN model and nature-inspired
optimization techniques can significantly enhance progress detection, leading to
better biomedical research and patient care.
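Since the reported gains hinge on PSO, here is a minimal, generic PSO sketch minimising a test function. The inertia and acceleration constants are common textbook values, and this is not the paper's setup, where PSO tunes CNN hyperparameters rather than a toy objective.

```python
import random

def pso(f, dim, n=20, iters=100, seed=1):
    """Minimal particle swarm optimisation of f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Minimise the sphere function; the optimum is 0 at the origin
best, val = pso(lambda p: sum(x * x for x in p), dim=2)
```

In the hyperparameter-tuning setting, `f` would instead train a model with the candidate settings and return its validation loss.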
Class imbalance is a pervasive issue in the field of disease classification from
medical images. It is necessary to balance out the class distribution while training a model. However, in the case of rare medical diseases, images from affected
patients are much harder to come by compared to images from non-affected
patients, resulting in unwanted class imbalance. Various processes of tackling
class imbalance issues have been explored so far, each having its fair share of
drawbacks. In this research, we propose an outlier detection based image classification technique which can handle even the most extreme case of class imbalance. We have utilized a dataset of malaria parasitized and uninfected cells. An
autoencoder model titled AnoMalNet is trained with only the uninfected cell images at the beginning and then used to classify both the affected and non-affected
cell images by thresholding a loss value. We have achieved an accuracy, precision, recall, and F1 score of 98.49%, 97.07%, 100%, and 98.52% respectively,
performing better than large deep learning models and other published works.
As our proposed approach can provide competitive results without needing the
disease-positive samples during training, it should prove to be useful in binary
disease classification on imbalanced datasets.
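The train-on-normal-only, threshold-the-loss idea can be demonstrated with a linear stand-in for the autoencoder: one-component PCA, whose reconstruction error plays the role of AnoMalNet's loss. The synthetic 2-D data below is illustrative, not the malaria cell images.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training samples lie near a line; the model sees only these
normal = rng.normal(0, 1, (200, 1)) * np.array([[2.0, 1.0]])
normal += rng.normal(0, 0.05, normal.shape)

# A linear autoencoder with one latent unit is equivalent to 1-D PCA
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
axis = vt[0]                                  # principal direction

def recon_error(x):
    """Reconstruction error after encoding to one latent dimension."""
    z = (x - mean) @ axis                     # encode
    xhat = mean + np.outer(z, axis)           # decode
    return np.linalg.norm(x - xhat, axis=1)

threshold = recon_error(normal).max()         # loss threshold from training
anomaly = np.array([[3.0, -6.0]])             # far off the normal manifold
```

Anything the model cannot reconstruct below the threshold is flagged as diseased, so no disease-positive samples are needed during training.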
Recently, plant identification has become an active trend due to encouraging
results achieved in plant species detection and plant classification fields
among numerous available plants using deep learning methods. Therefore,
plant classification analysis is performed in this work to address the problem
of accurate plant species detection in the presence of multiple leaves together,
flowers, and noise. Thus, a convolutional neural network based deep feature
learning and classification (CNN-DFLC) model is designed to analyze
patterns of plant leaves and perform classification using generated fine-grained feature weights. The proposed CNN-DFLC model precisely estimates which plant species a given image belongs to. Several layers and
blocks are utilized to design the proposed CNN-DFLC model. Fine-grained
feature weights are obtained using convolutional and pooling layers. The
obtained feature maps in training are utilized to predict labels and model
performance is tested on the Vietnam plant image (VPN-200) dataset. This
dataset consists of a total of 20,000 images, and testing results are
achieved in terms of classification accuracy, precision, recall, and other
performance metrics. The mean classification accuracy obtained using the
proposed CNN-DFLC model is 96.42% considering all 200 classes from the
VPN-200 dataset.
Big data as a service (BDaaS) platform is widely used by various
organizations for handling and processing the high volume of data generated
from different internet of things (IoT) devices. Data generated from these IoT
devices are kept in the form of big data with the help of cloud computing
technology. Researchers are putting efforts into providing a more secure and
protected access environment for the data available on the cloud. In order to
create a safe, distributed, and decentralised environment in the cloud,
blockchain technology has emerged as a useful tool. In this research paper, we
have proposed a system that uses blockchain technology as a tool to regulate
data access that is provided by BDaaS platforms. We are securing the access
policy of data by using a modified form of ciphertext policy-attribute based
encryption (CP-ABE) technique with the help of blockchain technology. For
secure data access in BDaaS, algorithms have been created using a mix of CP-ABE and blockchain technology. The proposed smart contract algorithms are implemented using the Eclipse 7.0 IDE, and the cloud environment has been simulated on the CloudSim tool. Results for key generation time, encryption time, and decryption time have been calculated and compared with an access control mechanism without blockchain technology.
Internet of things (IoT) has become one of the eminent phenomena in human
life along with its collaboration with wireless sensor networks (WSNs), due
to enormous growth in the domain; there has been a demand to address the
various issues regarding it such as energy consumption, redundancy, and
overhead. Data aggregation (DA) is considered the basic mechanism to minimize energy consumption and communication overhead; however,
security plays an important role where node security is essential due to the
volatile nature of WSN. Thus, we design and develop proximate node aware
secure data aggregation (PNA-SDA). In the PNA-SDA mechanism, additional
data is used to secure the original data, and further information is shared with
the proximate node; moreover, further security is achieved by updating the
state each time. A node that does not have the updated information is considered compromised and is discarded. PNA-SDA is evaluated
considering the different parameters like average energy consumption, and
average deceased node; also, comparative analysis is carried out with the
existing model in terms of throughput and correct packet identification.
Drones offer a promising avenue in security applications since they are capable of conducting autonomous investigations. A recent advancement in unmanned aerial vehicle (UAV) communication is the internet of drones combined with 5G networks. Because of the rapid adoption of high-performance computing systems alongside 5G infrastructures, user information is continuously refreshed and pooled. Thus, safety and confidentiality are vital for clients, requiring an efficient authentication methodology that uses a robust security key. Conventional procedures offer some safeguards but fall short in handling attack patterns during data transmission over internet of drones (IoD) environments. A unique hyperelliptic curve (HEC) cryptography-based authentication system is proposed to provide protected data services among drones. The proposed method has been compared with existing methods in terms of packet loss rate, computational cost, and delay, and thereby provides better insight into efficient and secure communication. Finally, the simulation results show that the strategy is efficient in both computation and communication.
Monitoring behavior, numerous actions, or any such information is considered
as surveillance and is done for information gathering, influencing, managing,
or directing purposes. Citizens employ surveillance to safeguard their
communities. Governments do this for the purposes of intelligence collection,
including espionage, crime prevention, the defense of a method, a person, a
group, or an item; or the investigation of criminal activity. Using an internet of things (IoT) rover instead of humans, an area can be secured with better secrecy and efficiency, providing an additional safety step. In this
paper, there is a discussion about an IoT rover for remote surveillance based
around a Raspberry Pi microprocessor which will be able to monitor a
closed/open space. This rover will allow safer survey operations and would
help to reduce the risks involved with it.
In a world where climate change looms large the spotlight often shines on
greenhouse gases, but the shadow of man-made aerosols should not be
underestimated. These tiny particles play a pivotal role in disrupting Earth's
radiative equilibrium, yet many mysteries surround their influence on various
physical aspects of our planet. The root of these mysteries lies in the limited
data we have on aerosol sources, formation processes, conversion dynamics,
and collection methods. Aerosols, composed of particulate matter (PM),
sulfates, and nitrates, hold significant sway across the hemisphere. Accurate
measurement demands the refinement of in-situ, satellite, and ground-based
techniques. As aerosols interact intricately with the environment, their full
impact remains an enigma. Enter a groundbreaking study in Morocco that
dared to compare an internet of things (IoT) system with satellite-based
atmospheric models, with a focus on fine particles below 10 and 2.5
micrometers in diameter. The initial results, particularly in regions abundant
with extraction pits, shed light on the IoT system's potential to decode
aerosols' role in the grand narrative of climate change. These findings inspire
hope as we confront the formidable global challenge of climate change.
The use of technology has a significant impact to reduce the consequences of
accidents. Sensors, small components that detect interactions experienced by
various components, play a crucial role in this regard. This study focuses on
how the MPU6050 sensor module can be used to detect the movement of
people who are falling, defined as the inability of the lower body, including
the hips and feet, to support the body effectively. An airbag system is
proposed to reduce the impact of a fall. The data processing method in this
study involves the use of a threshold value to identify falling motion. The
results of the study have identified a threshold value for falling motion,
including a relative acceleration (AR) value of less than or equal to 0.38 g,
an angle slope of more than or equal to 40 degrees, and an angular velocity
of more than or equal to 30 °/s. The airbag system is designed to inflate
faster than the time of impact, with a gas flow rate of 0.04876 m³/s and an
inflating time of 0.05 s. The overall system has a specificity value of 100%,
a sensitivity of 85%, and an accuracy of 94%.
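The threshold rule above can be sketched as a simple predicate. The three cut-off values (0.38 g, 40 degrees, 30 °/s) come from the study; the function name and argument layout are illustrative assumptions:

```python
def is_fall(acceleration_g, slope_deg, angular_velocity_dps):
    """Flag a fall when all three thresholds from the study are met:
    relative acceleration <= 0.38 g, angle slope >= 40 degrees,
    and angular velocity >= 30 deg/s."""
    return (
        acceleration_g <= 0.38
        and slope_deg >= 40.0
        and angular_velocity_dps >= 30.0
    )

# A free-fall-like sample trips the detector; normal walking does not.
print(is_fall(0.2, 55.0, 45.0))   # True
print(is_fall(1.0, 10.0, 5.0))    # False
```

In a real deployment the airbag would only be triggered when all three conditions hold within the same short time window, which is what the conjunction above expresses.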
The fundamental principle of the paper is that the soil moisture sensor obtains
the moisture content level of the soil sample. The water pump is automatically
activated if the moisture content is insufficient, which causes water to flow
into the soil. The water pump is immediately turned off when the moisture
content is high enough. Smart home, smart city, smart transportation, and
smart farming are just a few of the new intelligent ideas that internet of things
(IoT) includes. The goal of this method is to increase productivity and
decrease manual labour among farmers. In this paper, we present a system for
monitoring and regulating water flow that employs a soil moisture sensor to
keep track of soil moisture content as well as the land’s water level to keep
track of and regulate the amount of water supplied to the plant. The device
also includes an automated LED lighting system.
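The pump logic described above amounts to threshold-based on/off control. The sketch below adds a hysteresis band so the pump does not chatter around a single set point; all threshold values are assumptions for illustration, not figures from the paper:

```python
DRY_THRESHOLD = 30.0   # % moisture below which watering starts (assumed)
WET_THRESHOLD = 60.0   # % moisture above which watering stops (assumed)

def pump_command(moisture_pct, pump_on):
    """Return the new pump state given the current soil moisture reading."""
    if moisture_pct < DRY_THRESHOLD:
        return True            # soil too dry: start/keep pumping
    if moisture_pct > WET_THRESHOLD:
        return False           # soil wet enough: stop the pump
    return pump_on             # in between: keep the current state (hysteresis)
```

The in-between branch is the design choice worth noting: without it, sensor noise near a single threshold would toggle the pump rapidly on and off.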
In order to provide sensing services to low-powered IoT devices, wireless sensor networks (WSNs) organize specialized transducers into networks. Energy usage is one of the most important design concerns in WSN because it is very hard to replace or recharge the batteries in sensor nodes. For an energy-constrained network, the clustering technique is crucial in preserving battery life. By strategically selecting a cluster head (CH), a network's load can be balanced, resulting in decreased energy usage and extended system life. Although clustering has been predominantly used in the literature, the concept of chain-based clustering has not yet been explored. As a result, in this paper, we employ a chain-based clustering architecture for data dissemination in the network. Furthermore, for CH selection, we employ the coati optimization algorithm, which was recently proposed and has demonstrated significant improvement over other optimization algorithms. In this method, the parameters considered for selecting the CH are energy, node density, distance, and the network’s average energy. The simulation results show tremendous improvement over the competitive cluster-based routing algorithms in the context of network lifetime, stability period (first node dead), transmission rate, and the network's power reserves.
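As a rough illustration of CH selection over the stated parameters (energy, node density, distance, and the network's average energy), the sketch below scores candidate nodes with a simple weighted fitness. It is a stand-in for intuition only, not the coati optimization algorithm itself, and the weights and field names are assumptions:

```python
def select_cluster_head(nodes, avg_energy, w=(0.4, 0.2, 0.2, 0.2)):
    """Pick the node id with the best fitness.
    nodes: list of dicts with 'id', 'energy', 'density', 'distance'.
    Higher residual energy and density help; a shorter distance to the
    sink and energy above the network average also help."""
    def fitness(n):
        return (w[0] * n["energy"]
                + w[1] * n["density"]
                - w[2] * n["distance"]
                + w[3] * (n["energy"] - avg_energy))
    return max(nodes, key=fitness)["id"]
```

A metaheuristic like the coati optimization algorithm would search over CH assignments using a fitness of this general shape rather than greedily picking the single best-scoring node.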
The construction industry is an industry that is always surrounded by
uncertainties and risks. The industry is constantly exposed to threats, as it has a complex, tedious layout and techniques characterized by
unpredictable circumstances. It comprises a variety of human talents and the
coordination of different areas and activities associated with it. In this
competitive era of the construction industry, delays and cost overruns are
common in nearly every project, and their causes are equally familiar. One of
the problems we are trying to address is the improper
handling of materials at the construction site. In this paper, we propose
developing a system that is capable of tracking construction material on site
that would benefit the contractor and client for better control over inventory
on-site and to minimize loss of material that occurs due to theft and misplacing
of materials.
Today, health monitoring relies heavily on technological advancements. This
study proposes a low-power wide-area network (LPWAN) based, multinodal
health monitoring system to monitor vital physiological data. The suggested
system consists of two nodes, an indoor node, and an outdoor node, and the
nodes communicate via long range (LoRa) transceivers. Outdoor nodes use an
MPU6050 module, heart rate, oxygen pulse, temperature, and skin resistance
sensors and transmit sensed values to the indoor node. We transferred the data
received by the master node to the cloud using the Adafruit cloud service. The
system can operate with a coverage of 4.5 km, where the optimal distance
between outdoor sensor nodes and the indoor master node is 4 km. To further
predict fall detection, various machine learning classification techniques have
been applied. Upon comparing various classifier techniques, the decision tree
method achieved an accuracy of 0.99864 with a training and testing ratio of
70:30. By developing accurate prediction models, we can identify high-risk
individuals and implement preventative measures to reduce the likelihood of
a fall occurring. Remote monitoring of the health and physical status of elderly
people has proven to be the most beneficial application of this technology.
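The reported comparison uses a decision tree at a 70:30 train/test ratio. As a minimal, self-contained illustration of that evaluation setup, the sketch below trains a one-threshold decision stump on synthetic one-dimensional data; the data and the stump are illustrative stand-ins, not the authors' model or dataset:

```python
import random

random.seed(0)
# synthetic one-dimensional data: class 1 when the feature exceeds 0.5
xs = [random.random() for _ in range(200)]
ys = [1 if x > 0.5 else 0 for x in xs]

split = int(0.7 * len(xs))                      # 70:30 train/test ratio
train_x, test_x = xs[:split], xs[split:]
train_y, test_y = ys[:split], ys[split:]

def stump_accuracy(t, xs_, ys_):
    """Accuracy of the rule 'predict 1 iff x > t'."""
    return sum((x > t) == bool(y) for x, y in zip(xs_, ys_)) / len(xs_)

# train: pick the threshold with the best training accuracy
best_t = max((i / 100 for i in range(101)),
             key=lambda t: stump_accuracy(t, train_x, train_y))
test_acc = stump_accuracy(best_t, test_x, test_y)
print(round(test_acc, 3))
```

A full decision tree repeats this split search recursively on each branch; holding out 30% of the data, as above, is what makes the reported accuracy an estimate of generalization rather than memorization.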
The effectiveness of adaptive filters is mainly dependent on the design
techniques and the algorithm of adaptation. The most common adaptation
technique used is least mean square (LMS) due to its computational simplicity.
The applications depend on the adaptive filter configuration used; adaptive
filters are well known for system identification and real-time applications. In this work, a
modified delayed μ-law proportionate normalized least mean square
(DMPNLMS) algorithm has been proposed. It is an improved version of the
µ-law proportionate normalized least mean square (MPNLMS) algorithm.
The algorithm is realized using Ladner-Fischer type of parallel prefix
logarithmic adder to reduce the silicon area. The simulation and
implementation of very large-scale integration (VLSI) architecture are done
using MATLAB, Vivado suite and complementary metal–oxide–
semiconductor (CMOS) 90 nm technology node using Cadence RTL and
Genus Compiler respectively. The DMPNLMS method exhibits a reduction
in mean square error, a higher rate of convergence, and more stability. The
synthesis results demonstrate that it is area and delay effective, making it
practical for applications where a faster operating speed is required.
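For orientation, the sketch below implements plain normalized LMS (NLMS) for system identification, the baseline family that the proportionate variants above build on. It is not the proposed DMPNLMS, and the step size and filter length are assumptions:

```python
def nlms_identify(x, d, taps=4, mu=0.5, eps=1e-8):
    """Adapt FIR weights w so that the filter output tracks the desired
    signal d; returns the final weight vector (most recent tap first)."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-taps+1]
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y                           # a-priori error
        norm = sum(xi * xi for xi in window) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, window)]
    return w
```

Identifying a known 2-tap system h = [0.5, -0.3] from white input drives the first two weights toward those values, with the remaining taps near zero. Proportionate variants such as MPNLMS adapt each tap's step size to its magnitude, which is what speeds convergence for sparse systems.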
The increasing demand for faster, robust, and efficient devices, together with enabling technologies for the mass production of industrial circuit designs, raises challenges such as size, efficiency, power, and scalability. This paper presents the design and analysis of a low-power, high-speed full adder using negative capacitance field-effect transistors. A comprehensive study is performed with adiabatic logic and reversible logic. The performance of the full adder is studied with the metal-oxide-semiconductor field-effect transistor (MOSFET) and the negative capacitance field-effect transistor (NCFET). The NCFET-based full adder offers lower power and higher speed compared with the conventional MOSFET. The complete design and analysis are performed using Cadence Virtuoso. The adiabatic logic offers a low delay of 0.023 ns, and the reversible logic offers a low power of 7.19 mW.
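The logic of the 1-bit full adder being compared can be stated compactly; this models the Boolean behavior only, not the transistor-level power and delay figures reported above:

```python
def full_adder(a, b, cin):
    """Return (sum, carry_out) for one bit position."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# exhaustive check of all eight input combinations against integer addition
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
```

Whether the adder is realized with MOSFET or NCFET devices, adiabatic or reversible logic, this truth table is the behavior being implemented; the technologies differ only in how the gates are built.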
The global agriculture system faces significant challenges in meeting the
growing demand for food production, particularly given projections of
substantial population growth by 2050. Hydroponic farming is an
increasingly popular technique in this field, offering a promising solution to
these challenges. This paper presents an improvement over the current
traditional hydroponic method by providing a system that monitors and
controls the key elements to help plants grow smoothly. The proposed system
is efficient and user-friendly enough to be used by anyone. It combines a
traditional hydroponic system, an automatic control system, and a
smartphone. The primary objective is to
develop a smart system capable of monitoring and controlling potential
hydrogen (pH) levels, a key factor that affects hydroponic plant growth.
Ultimately, this paper offers an alternative approach to address the challenges
of the existing agricultural system and promote the production of clean,
disease-free, and healthy food for a better future.
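The pH monitoring and control loop described above can be sketched as a simple band controller; the target band and action names below are assumptions for illustration, not values from the paper:

```python
PH_LOW, PH_HIGH = 5.5, 6.5     # assumed target band for hydroponic crops

def ph_action(ph):
    """Decide which dosing pump (if any) to run for one control cycle."""
    if ph < PH_LOW:
        return "dose_ph_up"     # solution too acidic: raise pH
    if ph > PH_HIGH:
        return "dose_ph_down"   # solution too alkaline: lower pH
    return "idle"               # within the target band
```

In the proposed system, a microcontroller would run this decision on each sensor reading and report the state to the smartphone; the monitoring side is the same loop with the actions replaced by notifications.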
International Journal of Reconfigurable and Embedded Systems (IJRES)
Vol. 12, No. 1, March 2023, pp. 19~28
ISSN: 2089-4864, DOI: 10.11591/ijres.v12.i1.pp19-28
Journal homepage: http://ijres.iaescore.com
A systematic literature review on hardware implementation of
image processing
Zul Imran Azhari, Samsul Setumin, Anis Diyana Rosli, Siti Juliana Abu Bakar
Centre for Electrical Engineering Studies, Universiti Teknologi MARA Cawangan Pulau Pinang, Permatang Pauh, Malaysia
Article Info ABSTRACT
Article history:
Received Apr 22, 2022
Revised Aug 8, 2022
Accepted Aug 26, 2022
Image processing has recently come under the spotlight, driving a
significant shift in various fields such as biomedicine, satellite imagery, and
graphical applications. Nevertheless, poor image quality is one of the
noticeable limitations of image processing, as it restricts efficient data
extraction. Conventionally, images were processed via software applications
such as MATLAB. Despite software's ability to handle data extraction from
low-quality images, it still suffers from being time-consuming. As the
ability to obtain a rapid outcome is a favorable feature of efficient image
processing, the use of hardware in image processing is deemed to keep the
addressed issue at bay. Thus, hardware-based image enhancement
techniques have attracted gradually rising interest among researchers, with
numerous approaches such as the field programmable gate array (FPGA). In
this study, 25 different research papers published from 2016 to 2021 are
studied and analyzed, focusing on the performance of FPGA hardware
implementations of image processing techniques.
Keywords:
Field programmable gate array
Hardware implementation
Image enhancement
Image processing
Systematic literature review
This is an open access article under the CC BY-SA license.
Corresponding Author:
Samsul Setumin
Centre for Electrical Engineering Studies, Universiti Teknologi MARA Cawangan Pulau Pinang
Permatang Pauh, 13500 Pulau Pinang, Malaysia
Email: samsuls@uitm.edu.my
1. INTRODUCTION
Digital image processing (DIP) is widely employed in diverse applications such as communication,
multimedia services, arts, medicine, space exploration, surveillance, automated industry, robotics, aerospace
as well as in education areas [1], [2]. Technology advances in digital image processing are influenced by several
major goals. For instance, one goal is to enhance human perception by improving visual data; in other words,
enhancing an image means improving upon the quality of the original image
[3]. Implementation of real-time image processing (IP) algorithms on sequential processors has significant drawbacks for large
image sizes with high resolution. Several initiatives were taken to execute it in hardware, which allows for
optimized and parallel methods. Field programmable gate array (FPGA) technology, on the other hand, is
viewed to offer a reliable alternative. In [1], a real-time biomedical image enhancement (BIE) method
using FPGA is proposed to improve the quality of biomedical images for human viewing. In that work,
brightness control, contrast stretching, and threshold enhancement are implemented to observe human veins.
Enhancing the quality of an image for human interpretation and analysis is the primary goal of image
processing [4]. Various image processing applications, such as segmentation, object detection, and recognition,
utilize image enhancement techniques in the pre-processing step. These methods either improve the image to
make it better for the human eye or improve the algorithms of automatic computer applications [5]. For
complex analysis and image display, the primary goal is to draw attention to certain characteristics that are
significant in an image. When the acquisition process was less than optimal, such techniques were
developed to improve the visual quality of an image. For a particular application, the results of the image
processing are preferable to the original image [5].
Running real-time image processing methods on serial processors turns out to have additional
drawbacks for the huge number of images with high resolution. However, there have been numerous attempts
to implement it on hardware, where it is possible to use more parallel and optimized techniques. An appropriate
substitute is the usage of FPGA technology [1]. FPGAs have recently been acknowledged to have significantly
improved in terms of size and functionality. This has raised interest in adopting them as a platform for image
processing applications, particularly those that need real-time processing [1]. Moreover, haze is typically the
root cause of low contrast in an image: it reduces visibility, making the image appear blurry, especially
for far-off scenes [6]. Therefore, contrast enhancement of an image is important to remove the haze and
increase the visibility of an image. There are a lot of algorithms proposed by researchers to enhance images
that suffer from low contrast and low exposure. Horé and Pecht proposed 2D filters based on a general
approximation utilizing only the power of two that produced a very good image [7].
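As a concrete instance of the contrast-enhancement family discussed here, the sketch below performs basic linear contrast stretching, remapping an image's observed intensity range onto the full 8-bit range; it illustrates the general technique, not any single surveyed algorithm:

```python
def stretch_contrast(pixels):
    """Linearly map min..max of the input to 0..255 (returns ints)."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # flat image: nothing to stretch
        return [0 for _ in pixels]
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

# a hazy, low-contrast patch (values bunched at 100..150) spans 0..255 after:
print(stretch_contrast([100, 120, 150]))   # [0, 102, 255]
```

The per-pixel independence of this mapping is what makes the technique attractive for FPGA implementation: every pixel can be transformed by the same small arithmetic circuit, fully in parallel or in a streaming pipeline.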
FPGAs have recently advanced massively in both size and functionality. This has piqued interest in
adopting them as a development platform for IP applications, particularly where real-time
processing is essential [8]. In an FPGA, a matrix of programmable logic cells is linked via a grid of coupled
lines and switches. These input/output cells are positioned around the chip's edge and serve as a link between
the chip's interconnection lines and its outer pins. Note that, it is necessary to define the logic function of each
cell as well as the switches for joining lines in order to set up an FPGA. FPGA implementation of low-light
image enhancement has a significant impact on processing speed compared with software
implementation while producing comparable image results [9].
In this paper, we provide a thorough analysis and systematic literature review (SLR) to aid scholars
in the field of DIP. The survey includes image processing algorithm hardware implementation research from
2016 to 2021. The implementation of image processing on hardware is the main focus of this review. We are
inspired to perform this SLR since there are no other SLR papers covering hardware implementation of image
processing. A total of 263 different research publications have been gathered, 25 of which focused on the
hardware implementation of image processing algorithms. We applied our exclusion criteria so as to
consider as many papers as we could, in order to represent a diversity of perspectives on the relevant topic.
The remaining sections are arranged as follows: section 2 discusses a literature review of related
studies. The methodology is then covered in section 3, which explains the criteria that were utilized to
categorize and assess the quality of the gathered research papers. Section 4 presents the analyses and answers
to the research questions posed in the methodology. Finally, section 5 concludes the work.
2. LITERATURE REVIEW
The investigation of the use of hardware in image processing techniques has become progressively
popular in recent years and has been the subject of a number of studies. The attention arose because image
processing algorithms demand vast computational resources, especially in real-time processing [10]. As very
large-scale integration (VLSI) technology progresses, hardware implementation is believed to be a viable
choice in image processing to attain more rapid and effective outcomes [10]. Consequently, numerous
hardware implementations for various applications have been developed. Over the last five years, the image
processing areas have seen many studies published, focusing on hardware-software optimizations and their
implementation methods.
The primary goal of image enhancement is to improve the brightness, contrast, and overall visual
quality of an image for human viewing [11], [12]. Image enhancement is used in a wide range of applications
such as biomedical [8], and haze removal [6], [12]. Dong et al. [13] proposed a high-speed, effective algorithm
for enhancing dim-light video. In that work, the enhancement was done by inverting the dim-lighting
video and employing an optimized image de-hazing algorithm on the inverted video. On the other hand, Ren et
al. [14] propose another method for improving the visual quality of a dim-light image by adjusting the
illumination to be uniform and strong in intensity.
An FPGA chip contains an array of programmable logic gates (PLA). An FPGA does not have a
fixed, prebuilt chip architecture; instead, programmers construct the hardware architecture
of the application from logic gates to program the FPGA. Two hardware description languages (HDLs), VHDL and
Verilog, are employed for FPGA hardware architecture. On the other hand, “C, Java, system C” and domain-
specific languages (DSL) are used to program FPGAs [10]. However, ‘C’-type languages are ill-suited to
non-sequential programming applications, whilst DSLs are appropriate for certain applications only [15].
When developing an effective algorithm, it is essential to understand the FPGA hardware
thoroughly. High-level design exploration is carried out using high-level languages, and it begins with
algorithmic verification. The programmer must first verify the algorithm prior to creating code that will
execute well [15]. Only software coding can provide decent, optimized code, but its flexible nature brings
two challenges: sequential execution and serial memory operation. Due to these two factors, reading and
writing image frames require a long time.
The most recent state-of-the-art solutions map applications with substantial data processing; this can
be accomplished by employing hardware parallelism with multicore or FPGA-based designs. The key
advantages of employing FPGAs as hardware platforms are their low cost, low power consumption [16], and
a short time to market. Several optimization strategies may be used to extract parallelism in hardware, resulting
in a significant increase in processing speed compared to software. The data from software is transferred to
hardware, manually or automatically depending on the designer’s preferences. For this process, the term high-
level synthesis is used. A wide range of commercial tools, such as Vivado HLS, Catapult, and others, are able to
convert algorithmic designs (SW) to hardware designs (HW) [15]. These tools advise the designer to use
optimizations such as loop unrolling, array splitting, and so on, resulting in a time and resource trade-off. As
an embedded system design suggests, a hierarchy-structured IP core is built using the mentioned tools and will
interact with the processor's local bus.
Image processing applications, in general, take a long time to process since the applications require a
large data set and a complex algorithm. An image processing algorithm with improved performance and shorter
time to market can be achieved via configurable hardware and system-level programming languages [17]. For
hardware implementation, a variety of technologies are available. Devices such as application-
specific integrated circuits (ASICs) and reconfigurable FPGAs are able to support parallelism and pipelining techniques [17]. The
use of reconfigurable hardware to construct image processing algorithms reduces time-to-market costs and
allows for quick prototyping with simplified debugging and verification steps. As a result, reconfigurable
devices appear to be an excellent option for implementing image processing algorithms.
This work adds to the current body of knowledge and helps to provide a complete picture of image
processing. This survey's contributions can be summarized as follows:
− To give the most relevant information on this issue, the paper uses an SLR technique.
− The survey examines projects completed in the last five years, from 2016 to 2021.
− The survey compares and contrasts existing FPGA hardware.
− The study includes papers on image processing algorithms at the software level.
3. METHOD
The SLR covers a wide range of hardware implementations of image processing. The employed
method is based on Kitchenham and Charters' guidelines [18], presented in Figure 1. Firstly, research questions
for this SLR are identified, which is important to set the focus of this research. Next, search terms and exclusion
criteria are determined to gather relevant papers. Database library and year range will also be specified before
performing the search. Paper selection and inclusion criteria have also been identified to screen relevant papers
from overall searched papers. The remaining papers will then be categorized and applied for selection and
filtration to further screen the papers to ensure the remaining papers are the most relevant papers for this
research. Technical metrics and parameters will then be extracted from the papers and processed afterward.
The aim of this survey is to obtain the answer to several research questions concerning:
RQ 1: Application perspective: what are the most common applications that make use of image processing in
hardware? This inquiry aims to discover image processing applications that work via hardware
implementation.
RQ 2: Image processing perspective: what image processing algorithms and tools are utilized to implement
them in hardware? The goal of this question is to identify the most common algorithms used in image
processing.
RQ 3: Hardware perspective: what are the employed hardware platforms for image processing acceleration?
The goal of this is to identify the type of hardware used.
3.1. Search strategy
The next step in performing this review is to determine the search terms and gather relevant papers to
address the research questions. The terms "image processing" and "image enhancement" are employed, as well
as "hardware implementation" and "FPGA". Using the stated search criteria, the following digital libraries are
utilized to obtain all relevant items (journals and conference papers): Scopus and Web of Science.
Figure 1. SLR method
3.2. Study selection
Initially, 263 papers were gathered using the previously acknowledged search terms. Next, the papers
were grouped into more specific categories. In the selection and filtering process, the following steps are taken
into consideration:
Step 1: Remove articles that are duplicated or have several versions.
Step 2: Use inclusion and exclusion criteria to exclude any papers that are not relevant.
Step 3: Remove review papers from the collected papers.
Step 4: Apply quality assessment rules to include qualified papers that best address the research questions.
Step 5: Using reference lists of the collected papers, look for new similar publications and repeat the process.
The following are the applied inclusion and exclusion criteria:
a. Inclusion criteria:
− Date from 2016 to 2021.
− Only journals and conference papers that discuss the optimization or implementation of image processing
in hardware.
b. Exclusion criteria:
− Non-journal and non-conference articles.
− Articles that do not discuss the image processing applications of image enhancement and edge detection.
3.3. Quality assessment rules
The final stage in determining the list of articles to be examined in this review is to apply the quality
assessment rules (QARs). The QARs are crucial in ensuring that the quality of research publications is properly
assessed. As a result, 10 QARs are defined, each worth one point out of ten. Each QAR is marked as follows:
"completely answered" = 1, "above average" = 0.75, "average" = 0.5, "below average" = 0.25, and
"not answered" = 0. The score of each article is the sum of the marks achieved over the 10 QARs. An article
is included if its total score is 5 or greater; otherwise, it is excluded. The QARs considered in this study
are the following:
QAR 1: Are the research objectives well-defined?
QAR 2: Are image processing techniques clearly defined and deliberated?
QAR 3: Is there any evidence of the proposed technique being used in practice in the paper?
QAR 4: Is there any evidence that the proposed technique has been tested in practice?
QAR 5: Are the experiments well-thought-out and well-supported?
QAR 6: Is the suggested image processing design thoroughly validated using common-standard test cases?
QAR 7: Is the result accurately reported?
QAR 8: Is the proposed image processing design compared to others?
QAR 9: Are the approaches for analyzing the data appropriate?
QAR 10: Does the overall study contribute significantly to the image processing area of research?
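The mark scheme and inclusion threshold described above can be expressed as a short sketch. This is purely illustrative (the function names and data layout are our own, not from the review's tooling):

```python
# Marks for each possible QAR rating, as defined in the quality assessment rules.
MARKS = {
    "completely answered": 1.0,
    "above average": 0.75,
    "average": 0.5,
    "below average": 0.25,
    "not answered": 0.0,
}

def qar_score(ratings):
    """Sum the marks for the 10 QAR ratings of a single article."""
    assert len(ratings) == 10, "one rating per QAR is expected"
    return sum(MARKS[r] for r in ratings)

def is_included(ratings, threshold=5.0):
    """An article is kept when its total QAR score is 5 or greater."""
    return qar_score(ratings) >= threshold

# Example: 4 fully answered QARs and 4 average ones already reach the threshold.
ratings = ["completely answered"] * 4 + ["average"] * 4 + ["not answered"] * 2
print(qar_score(ratings))    # 4*1.0 + 4*0.5 = 6.0
print(is_included(ratings))  # True
```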
3.4. Data extraction strategy
In this stage, the final selection of papers is evaluated in order to gather the data needed to answer
the research questions. First, general information about each paper is extracted, such as the paper
number, title, publication year, and publication type. Then, more specific data are collected, such as the
algorithm models and the hardware accelerator category (FPGA, GPU, ASIC, or a mix of these). Finally, any
information that is directly connected to the study topics is sought. Because the required data are inherently
unstructured, extracting such details is difficult. For example, some articles [10] proposed compiling
software into a hardware description language that could be implemented on FPGAs. It is worth
noting that not all of the papers answered the addressed research questions.
3.5. Synthesis of extracted data
After gathering information from the selected publications, several approaches were employed to
assemble the answers to the research questions. A qualitative synthesis is used to obtain the information
required for RQ 1, RQ 2, and RQ 3.
4. RESULTS AND DISCUSSION
The results for the predetermined research questions are discussed in this section. A study of image
processing hardware implementations between 2016 and 2021, based on the selected papers, is presented. The first
subsection provides a summary of the articles collected, followed by subsections that address each research
question in detail, highlighting key conclusions and observations.
4.1. Study overview
As previously stated, this study identifies and discusses 25 out of 263 research publications on FPGA-
based implementation. FPGAs have been particularly valuable for developing and testing image processing
designs because they offer a rapid prototyping environment with significant computational capability. As a
result, methods such as edge detection, Gaussian filtering, histograms, and many more are evolving rapidly.
FPGAs are considered a feasible platform for implementing these algorithms because of their considerable
flexibility. Furthermore, current FPGA design trends have made the devices more affordable and have attracted
substantial attention in deep learning research.
Figure 2 shows that almost the same number of research papers was collected for each year from 2016 to
2021. This even distribution signifies that the extracted information remains relevant over time; in other
words, the study fairly covers papers from the past six years.
Figure 2. Collected research papers between 2016 and 2021
4.2. RQ 1: Application perspective: what are the most common applications that make use of image
processing in hardware?
The collected research papers are categorized according to the implemented application. Two
categories are identified: image enhancement and edge detection. The field of image processing is expanding
rapidly, and image processing techniques are used in numerous new applications, including computer graphics,
biomedical imaging, satellite imaging, and underwater image restoration and enhancement. As shown in
Figure 3, 79% of the papers are image
enhancement-related applications. Image enhancement has become increasingly popular due to its close ties to
computer vision and image analysis. It has a wide range of uses, the most common being to improve the quality
of an image for better interpretation. Edge detection and other image processing applications have also been
implemented in hardware in recent years; these applications grew in prominence as a result of their potential
to automate numerous real-time decision-making systems.
Alareqi et al. [8] proposed a real-time biomedical image enhancement (BIE) method using an FPGA
to improve the quality of biomedical images for human viewing. In that paper, brightness control,
contrast stretching, and threshold enhancement are implemented to observe human veins. Ngo et al. [12]
proposed an image enhancement application for haze removal. Due to the unavoidable adverse effects of poor
weather conditions, images or videos taken outdoors typically suffer from an apparent loss of contrast and
detail; the technique proposed by the authors aims to eliminate these undesirable effects and restore clear
visibility. Alex et al. [19] presented an application that enhances underwater images using contrast limited
adaptive histogram equalization (CLAHE). This technique enhances the contrast and improves the quality of
images that suffer from poor lighting conditions, such as underwater images.
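To make the contrast-stretching idea concrete, the following is a minimal software sketch of linear contrast stretching. It is not taken from any of the reviewed hardware designs (which implement such operations in HDL); the function name and pure-Python pixel-list representation are our own illustrative assumptions:

```python
def stretch_contrast(pixels, out_min=0, out_max=255):
    """Linearly map the input's dynamic range onto [out_min, out_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # flat image: nothing to stretch
        return list(pixels)
    span = out_max - out_min
    # Multiply before dividing so integer inputs stay numerically exact.
    return [out_min + round((p - lo) * span / (hi - lo)) for p in pixels]

# A low-contrast row of pixels confined to [100, 150] is spread over [0, 255].
row = [100, 110, 125, 140, 150]
print(stretch_contrast(row))  # [0, 51, 128, 204, 255]
```

A hardware version would typically replace the division by a precomputed reciprocal or a shift, but the mapping itself is the same.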
Figure 3. Comprehensive distribution of all image processing applications
4.3. RQ 2: Image processing perspective: what image processing algorithms and tools are utilized to
implement them in hardware?
This section presents an algorithmic analysis of the collected research papers in order to identify the
most widely used algorithms in image enhancement and edge detection applications. Most of the collected
research papers discuss the implementation of image enhancement. Table 1 shows all algorithms used in the
collected papers. Various algorithms have been proposed for image enhancement; the most common is contrast
enhancement, as it helps improve the quality of an image. Histogram-based algorithms are also used in image
enhancement, since the visual appearance of an image can be improved by using its histogram. The dynamic
range of the pixels, the contrast, and many other issues that arise during the acquisition of an image are
revealed by the histogram [11].
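As an illustration of how a histogram drives enhancement, the classic histogram-equalization mapping can be sketched as follows. This is a generic textbook formulation, not code from [11]; the function names and list-based image representation are our own:

```python
def histogram(pixels, levels=256):
    """Count how many pixels fall at each intensity level."""
    h = [0] * levels
    for p in pixels:
        h[p] += 1
    return h

def equalize(pixels, levels=256):
    """Histogram equalization: remap levels via the cumulative distribution."""
    h = histogram(pixels, levels)
    cdf, total = [], 0
    for count in h:
        total += count
        cdf.append(total)
    n = len(pixels)
    # Each level is mapped so output intensities spread over the full range.
    lut = [round((levels - 1) * c / n) for c in cdf]
    return [lut[p] for p in pixels]

# Four pixels crammed into levels 50-51 get spread toward the full range.
print(equalize([50, 50, 50, 51]))  # [191, 191, 191, 255]
```

On an FPGA, the histogram and the lookup table (LUT) are typically built in on-chip memory over one frame and applied to the next.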
The algorithm proposed by Ngo et al. [12] is a single-image haze removal algorithm that aims to
improve the quality of images and restore clear visibility. The proposed algorithm includes a detail
enhancement step in the pre-processing stage to restore the faded detail of the input image. Weight maps are
first calculated with respect to the dark channel to accurately blend haze-free areas into the fused picture,
and are then normalized to avoid out-of-range values [12]. Because the result is a fusion of a series of
under-exposed images, the resultant image is darker than the hazy input. For this reason, adaptive tone
remapping is used to increase the luminance and highlight the chrominance. Soma and Jatoth [6] propose another
image enhancement application using a de-hazing algorithm. Two methods for hardware implementation of the
image de-hazing algorithm are discussed in that paper: a pixel-wise and a grey-image-based de-hazing
algorithm. The hazy input image is converted into two images, a grey image and a minimum image, which are
then combined into an average image, significantly reducing the halos and artifacts present in the final
dehazed image. Timarchi et al. [20] propose a novel algorithm based on ConText, called the modified context
(MCT) algorithm, to attain a high-quality stego-image. The proposed algorithm is based on a threshold level,
which can lead to a faster and more power-efficient implementation.
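To make the dark-channel step above concrete, here is a minimal software sketch of computing a dark channel (per-pixel minimum over R, G, B followed by a local minimum filter). The reviewed designs implement this in hardware; the pure-Python grid representation, function name, and 3x3 default patch are our own illustrative assumptions:

```python
def dark_channel(rgb, patch=3):
    """Dark channel of `rgb`, a 2-D grid of (r, g, b) tuples.

    Step 1: take the minimum over the three color channels at each pixel.
    Step 2: take the minimum over a patch x patch neighborhood (clamped at
    the borders), which is the local minimum filter used by dark-channel
    haze estimation.
    """
    h, w = len(rgb), len(rgb[0])
    per_pixel = [[min(rgb[y][x]) for x in range(w)] for y in range(h)]
    r = patch // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            out[y][x] = min(per_pixel[yy][xx] for yy in ys for xx in xs)
    return out
```

In haze removal, a large dark-channel value indicates a hazy region; the weight maps mentioned above are derived from such values and then normalized so the fused result stays in range.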
For edge detection applications, two papers employed the Sobel operator algorithm [20], [21].
Apart from these, other algorithms used for edge detection are Harris, smallest univalue segment assimilating
nucleus (Susan) [22], Canny [23], and the Robert, Prewitt, and Laplacian of Gaussian (LoG) operators [24]. Edge
detection is an image processing technique that distinguishes the edges of objects within an image.
The Sobel operator detects edges based on differentiation. Compared with conventional edge detection
operators, the Sobel operator is more applicable owing to its simplicity and higher precision [20]. Using the
Sobel operator, the work in [21] was able to identify diseases in hevea tree leaves by computing the gradient
of each pixel in the image.
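The Sobel gradient computation can be sketched in software as follows. The cited papers implement it on FPGAs; this pure-Python version with our own function name is only a reference model of the standard 3x3 kernels:

```python
import math

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale grid (borders left at zero)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical step edge: the response peaks along the boundary columns.
img = [[0, 0, 255, 255]] * 4
```

Hardware versions usually replace `math.hypot` with the cheaper |gx| + |gy| approximation to avoid square roots.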
Table 1. Summary of the algorithms in each category proposed in the collected papers

Image enhancement algorithms:
− Average, Gaussian, sharpening filter, grayscale, and edge detection [25]
− Histogram (brightness, contrast enhancement, region of interest) [11]
− De-hazing [6], [12]
− Power-of-two terms [24]
− Gaussian-based halo-reducing filter [7], [27]
− Biomedical image enhancement (BIE) [8]
− Modified context (MCT) [20]
− Median filter [28]
− Adding image [1]
− Adaptive histogram equalization (AHE) [19]
− 2D adaptive DIP [29]
− Boundary discriminative noise detection (BDND) [30]
− Guided image filtering and Halide [31]
− Discrete wavelet transform (DWT) [2]
− Contrast, brightness enhancement, image inverting, and threshold operation [32]
− Gaussian-based smoothing filter [33]
− Particle swarm [34]
− Medical image algorithm [35]

Edge detection algorithms:
− Harris and Susan [22]
− Sobel operator [21], [26]
− Robert, Prewitt, Sobel, and Laplacian of Gaussian (LoG) operator masks [24]
− Canny edge detection [23]
4.4. RQ 3: Hardware perspective: what are the employed hardware platforms for image processing
acceleration?
This section discusses the hardware implementations of image enhancement based on standard figures of
merit, as depicted in Figure 4. Thirteen papers discussed hardware implementation for image enhancement: ten
implemented the design on a Xilinx board, and the remaining three implemented it on an Altera board.
Figure 4. Summary of hardware perspective section’s outcomes
Out of the 13 papers that implemented image enhancement in hardware, only papers [12] and [6] can be
compared, as only these two employ a similar algorithm. Soma and Jatoth [6] proposed a hardware
implementation on the Xilinx Zynq-706 FPGA board, while paper [12] proposed the Xilinx Zynq-7000 FPGA board.
Paper [12] uses a video file as input, while paper [6] uses an image file; these distinct inputs make an
accurate comparison impossible. Table 2 compares the setups of these two papers.
Table 2. Comparison between hardware mappings of the haze removal technique

Parameter           [12]        [6]
FPGA                Zynq-7000   Zynq-706
Max. clock (MHz)    242         933
Latency (ns)        4.12        1.07
LUT available       218,600     218,600
LUT utilized        30,676      2,664
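As a quick sanity check on the resource figures in Table 2, the LUT counts translate into utilization percentages as follows (an illustrative calculation; the function name is our own):

```python
def lut_utilization(used, available):
    """LUT utilization as a percentage of the device's available LUTs."""
    return 100.0 * used / available

# Values taken from Table 2: both devices offer 218,600 LUTs.
print(round(lut_utilization(30676, 218600), 1))  # paper [12]: 14.0 (%)
print(round(lut_utilization(2664, 218600), 1))   # paper [6]:  1.2 (%)
```

The order-of-magnitude gap in utilization is consistent with the papers implementing different variants of the de-hazing pipeline, which is one more reason a direct comparison is difficult.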
5. CONCLUSION
This study reviews existing hardware implementations of image processing in an SLR. Twenty-
five papers from 2016 to 2021 focusing on hardware implementation of image processing were analyzed. Most of
the analyzed papers implemented image enhancement applications, which shows that image enhancement is an
application of wide interest whose main goal is to improve an image so that useful information can be
extracted from it. The image processing algorithms for both image enhancement and edge detection applications
have been identified and listed. For image enhancement, de-hazing algorithms are commonly used to increase
the visibility of an image. All hardware platforms employed in the reviewed papers have been listed from a
hardware perspective. However, since only two papers use the same algorithm, a thorough analysis of FPGA
resource usage cannot be performed. In-depth analyses of the various implementation platforms, tools, and
strategies used throughout the past six years are presented in this paper. Finally, we hope that this SLR can
provide future research directions to researchers on hardware implementation, particularly on image
processing aspects.
ACKNOWLEDGEMENTS
The authors would like to thank Universiti Teknologi MARA, Cawangan Pulau Pinang, especially the
Centre for Electrical Engineering Studies, for supporting the research work by providing a platform to gather
the required papers.
REFERENCES
[1] H. el Khoukhi and M. A. Sabri, “Comparative study between HDLs simulation and MATLAB for image processing,” 2018
International Conference on Intelligent Systems and Computer Vision, ISCV 2018, vol. 2018-May, pp. 1–6, 2018, doi:
10.1109/ISACV.2018.8354046.
[2] N. Chervyakov, P. Lyakhov, D. Kaplun, D. Butusov, and N. Nagornov, “Analysis of the quantization noise in discrete wavelet
transform filters for image processing,” Electronics (Switzerland), vol. 7, no. 8, p. 135, 2018, doi: 10.3390/electronics7080135.
[3] M. Sreenivasulu and T. Meenpal, “Efficient hardware implementation of 2D convolution on FPGA for image processing
application,” in 2019 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT), Feb.
2019, pp. 1–5. doi: 10.1109/ICECCT.2019.8869347.
[4] R. C. Gonzalez and R. E. Woods, Digital image processing, 3rd ed. Pearson International Edition prepared by Pearson Education,
2008.
[5] B. H. Ramyashree, R. Vidhya, and D. K. Manu, “FPGA implementation of contrast stretching for image enhancement using system
generator,” in 12th IEEE International Conference Electronics, Energy, Environment, Communication, Computer, Control: (E3-
C3), INDICON 2015, 2016, pp. 1–6. doi: 10.1109/INDICON.2015.7443730.
[6] P. Soma and R. Jatoth, “Implementation of a novel, fast and efficient image De-Hazing algorithm on embedded hardware
platforms,” Circuits Syst Signal Process, vol. 40, pp. 1–17, 2021, doi: 10.1007/s00034-020-01517-4.
[7] A. Horé and O. Yadid-Pecht, “On the design of optimal 2D filters for efficient hardware implementations of image processing
algorithms by using power-of-two terms,” J Real Time Image Process, vol. 16, 2019, doi: 10.1007/s11554-015-0550-2.
[8] M. Alareqi et al., “Design and FPGA implementation of real-time hardware co-simulation for image enhancement in biomedical
applications,” in 2017 International Conference on Wireless Technologies, Embedded and Intelligent Systems (WITS), Apr. 2017,
pp. 1–6. doi: 10.1109/WITS.2017.7934601.
[9] X. Peng, X. Li, S. Geng, J. Wang, and F. Nie, “Low-light image enhancement based on FPGA,” in 2021 IEEE 15th International
Conference on Anti-counterfeiting, Security, and Identification (ASID), 2021, pp. 66–69. doi: 10.1109/ASID52932.2021.9651721.
[10] M. A. Talib, S. Majzoub, Q. Nasir, and D. Jamal, “A systematic literature review on hardware implementation of artificial
intelligence algorithms,” J Supercomput, vol. 77, no. 2, pp. 1897–1938, 2021, doi: 10.1007/s11227-020-03325-8.
[11] R. Chinchwadkar, V. Ingale, and A. Gokhale, “Hardware implementation of histogram-based algorithm for image enhancement,”
in Applied Computer Vision and Image Processing, 2020, pp. 60–68. doi: 10.1007/978-981-15-4029-5_6.
[12] D. Ngo, S. Lee, Q.-H. Nguyen, T. Ngo, G.-D. Lee, and B. Kang, “Single image haze removal from image enhancement perspective
for real-time vision-based systems,” Sensors (Basel), vol. 20, no. 18, p. 5170, 2020, doi: 10.3390/s20185170.
[13] X. Dong et al., “Fast efficient algorithm for enhancement of low lighting video,” in 2011 IEEE International Conference on
Multimedia and Expo, Jul. 2011, pp. 1–6. doi: 10.1109/ICME.2011.6012107.
[14] Y. Ren, Z. Ying, T. H. Li, and G. Li, “LECARM: low-light image enhancement using the camera response model,” IEEE
Transactions on Circuits and Systems for Video Technology, vol. 29, no. 4, pp. 968–981, 2019, doi: 10.1109/TCSVT.2018.2828141.
[15] P. Soma and R. K. Jatoth, “Hardware implementation issues on image processing algorithms,” in 2018 4th International Conference
on Computing Communication and Automation (ICCCA), Dec. 2018, pp. 1–6. doi: 10.1109/CCAA.2018.8777564.
[16] P. R. Schaumont, A Practical Introduction to Hardware/Software Codesign. Springer US, 2013. doi: 10.1007/978-1-4419-6000-9.
[17] I. Chiuchisan, M. C. Cerlinca, A. D. Potorac, and A. Graur, “Image enhancement methods approach using verilog hardware
description language,” in 11th International Conference on Development and Application Systems, 2012, pp. 144–148.
[18] D. Budgen and P. Brereton, “Performing systematic literature reviews in software engineering,” in 28th international conference
on Software engineering, 2006, pp. 1051–1052.
[19] R. S. M. Alex, S. Deepa, and M. H. Supriya, “Underwater image enhancement using CLAHE in a reconfigurable platform,” in
OCEANS 2016 MTS/IEEE Monterey, 2016, pp. 1–5. doi: 10.1109/OCEANS.2016.7761194.
[20] S. Timarchi, M. A. Alaei, and H. Koushkbaghi, “Novel algorithm and architectures for high-speed low-power ConText-based
steganography,” in 2017 19th International Symposium on Computer Architecture and Digital Systems (CADS), Dec. 2017, pp. 1–
6. doi: 10.1109/CADS.2017.8310733.
[21] N. M. Yusoff, I. S. A. Halim, N. E. Abdullah, and A. A. A. Rahim, “Real-time hevea leaves diseases identification using sobel edge
algorithm on FPGA: A preliminary study,” 2018 9th IEEE Control and System Graduate Research Colloquium, ICSGRC 2018 -
Proceeding, no. August, pp. 168–171, 2019, doi: 10.1109/ICSGRC.2018.8657603.
[22] C. Torres-Huitzil, “A review of image interest point detectors: from algorithms to FPGA hardware implementations,” in Image
Feature Detectors and Descriptors, vol. 630, 2016, pp. 47–74. doi: 10.1007/978-3-319-28854-3_3.
[23] D. Sangeetha and P. Deepa, “An efficient hardware implementation of canny edge detection algorithm,” in 2016 29th International
Conference on VLSI Design and 2016 15th International Conference on Embedded Systems (VLSID), Jan. 2016, pp. 457–462. doi:
10.1109/VLSID.2016.68.
[24] G. B. Reddy and K. Anusudha, “Implementation of image edge detection on FPGA using XSG,” in 2016 International Conference
on Circuit, Power and Computing Technologies (ICCPCT), Mar. 2016, pp. 1–5. doi: 10.1109/ICCPCT.2016.7530374.
[25] A. Rupani, P. Whig, G. Sujediya, and P. Vyas, “Hardware implementation of IoT-based image processing filters,” in Proceedings
of the Second International Conference on Computational Intelligence and Informatics, 2018, pp. 681–691.
[26] S. Taslimi, R. Faraji, A. Aghasi, and H. R. Naji, “Adaptive edge detection technique implemented on FPGA,” Iranian Journal of
Science and Technology, Transactions of Electrical Engineering, pp. 1–12, 2020.
[27] P. Ambalathankandy, A. Horé, and O. Yadid-Pecht, “An FPGA implementation of a tone mapping algorithm with a halo-reducing
filter,” J Real Time Image Process, vol. 16, no. 4, pp. 1317–1333, 2019, doi: 10.1007/s11554-016-0635-6.
[28] M. Ismaeil, K. Pritamdas, K. J. K. Devi, and S. Goyal, “Performance analysis of new adaptive decision based median filter on
FPGA for impulsive noise filtering,” in 2017 1st International Conference on Electronics, Materials Engineering and Nano-
Technology (IEMENTech), Apr. 2017, pp. 1–5. doi: 10.1109/IEMENTECH.2017.8076990.
[29] E. Kalali and I. Hamzaoglu, “Low complexity 2D adaptive image processing algorithm and its hardware implementation,” IEEE
Transactions on Consumer Electronics, vol. 63, no. 3, pp. 277–284, Aug. 2017, doi: 10.1109/TCE.2017.014996.
[30] S. Sadangi, S. Baraha, and P. K. Biswal, “Efficient hardware implementation of switching median filter for extraction of extremely
high impulse noise corrupted images,” in TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON), Oct. 2019, pp. 1601–
1605. doi: 10.1109/TENCON.2019.8929543.
[31] A. Ishikawa, N. Fukushima, A. Maruoka, and T. Iizuka, “Halide and GENESIS for generating domain-specific architecture of
guided image filtering,” in 2019 IEEE International Symposium on Circuits and Systems (ISCAS), May 2019, pp. 1–5. doi:
10.1109/ISCAS.2019.8702260.
[32] N. Mohammed, M. Salih, R. A. A. Raof, Q. Hussein, and N. A. Khalid, “Design and implementation image processing functional
unit using spatial parallelism on FPGA,” ARPN Journal of Engineering and Applied Sciences, vol. 13, pp. 4514–4520, 2018.
[33] L. Kabbai, A. Sghaier, A. Douik, and M. Machhout, “FPGA implementation of filtered image using 2D Gaussian filter,”
International Journal of Advanced Computer Science and Applications, vol. 7, 2016, doi: 10.14569/IJACSA.2016.070771.
[34] H. Jing and X. Xiaoqiong, “Sports image detection based on FPGA hardware system and particle swarm algorithm,” Microprocess
Microsyst, vol. 80, p. 103348, 2021, doi: 10.1016/j.micpro.2020.103348.
[35] P. G. Patel, A. Ahmadi, and M. Khalid, “Implementing an improved image enhancement algorithm on FPGA,” in 2021 IEEE
Canadian Conference on Electrical and Computer Engineering (CCECE), 2021, pp. 1–6. doi:
10.1109/CCECE53047.2021.9569049.
BIOGRAPHIES OF AUTHORS
Zul Imran Azhari received the B.Eng. degree (Hons.) in electrical and electronic
engineering from the Universiti Teknologi MARA (UiTM), in 2021. He is now an engineer at Intel
Microelectronics (M) Sdn. Bhd. He can be contacted at email: zulimran9@gmail.com.
Samsul Setumin received the B.Eng. degree (Hons.) in electronic engineering from
the University of Surrey, in 2006, and the M.Eng. degree in electrical-electronic and
telecommunication from the Universiti Teknologi Malaysia, in 2009. He obtained his Ph.D.
degree from Universiti Sains Malaysia in 2019 in the imaging field. Since 2010, he has been a
Lecturer with the Universiti Teknologi MARA, Malaysia. He was a Test Engineer with Agilent
Technologies (M) Sdn. Bhd., and Intel Microelectronics (M) Sdn. Bhd., for a period of one year.
His research interests include computer vision, image processing, pattern recognition, and
embedded system design. He can be contacted at email: samsuls@uitm.edu.my.
Anis Diyana Rosli received her first honours degree from UiTM Shah Alam in 2007
and her M.Sc. from the University of New South Wales in 2009. She is now a part-time doctoral
student, researching heavy metal detection via electrochemical sensors. She is also a senior
lecturer at Universiti Teknologi MARA, Pulau Pinang, where she has served for 11 years since
2010. She can be contacted via email at anis.diyana@uitm.edu.my.
Siti Juliana Abu Bakar received her B.Eng. degree (Hons.) in electronic
engineering from UTeM, Malaysia, in 2019 and her Master of Electronic System Design Engineering
(ESDE) from USM, Malaysia, in 2015. She completed her Ph.D. in the field of automation and
control systems at Universiti Sains Malaysia, Engineering Campus, in 2020. She is currently a
senior lecturer at the Centre for Electrical Engineering Studies, Universiti Teknologi MARA,
Pulau Pinang Campus, Malaysia. Before joining UiTM, she worked at Intel Products (M) Sdn. Bhd.
for more than five years as a validation engineer. She can be contacted at email:
sitijuliana@uitm.edu.my.