Heart rate (HR) is one of the vital biomedical signals for medical diagnosis. Conventional cameras have previously been shown to detect the small changes in the skin caused by cardiac activity and can therefore be used to measure HR. However, most previous systems operate at near distance with a single face patch, so the feasibility of remote heart-rate measurement over various distances remains unclear. This paper tackles this issue by analyzing a framework capable of working under these conditions. First, plausible face landmarks are estimated using a cascaded-regression mechanism. Next, a region of interest (ROI) is constructed from the landmarks at a facial location where non-rigid motion is minimal. From the ROI, a temporal photoplethysmograph (PPG) signal is computed from the average green-pixel intensity, and environmental illumination is separated using an Independent Component Analysis (ICA) filter. Finally, the PPG signal is passed through a series of temporal filters to exclude frequencies outside the range of interest before estimating the HR. In conclusion, the HR can be detected at up to a 5-meter range with 94% accuracy using the lower part of the face region.
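The core of such a pipeline (per-frame mean green intensity, then a band-limited spectral peak search) can be sketched as below. This is a minimal illustration, not the authors' implementation: the ICA illumination-separation step is omitted, and the function name and band limits are assumptions.

```python
import math

def estimate_hr(green_means, fps, lo=0.75, hi=4.0):
    """Estimate heart rate (BPM) from a per-frame mean-green-intensity trace
    by scanning a DFT over the plausible HR band (0.75-4 Hz = 45-240 BPM).
    The ICA illumination-separation step of the paper is omitted here."""
    n = len(green_means)
    mean = sum(green_means) / n
    x = [v - mean for v in green_means]              # remove the DC component
    best_f, best_p = lo, -1.0
    f = lo
    while f <= hi:                                   # scan candidate frequencies
        re = sum(x[t] * math.cos(2 * math.pi * f * t / fps) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * f * t / fps) for t in range(n))
        p = re * re + im * im                        # spectral power at f
        if p > best_p:
            best_f, best_p = f, p
        f += 0.01
    return best_f * 60.0                             # Hz -> beats per minute
```

For a clean 1.2 Hz pulse sampled at 30 fps this returns roughly 72 BPM; a real trace would first need the filtering steps described above.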
The optic disc (OD) is an important part of the eye for detecting various diseases such as diabetic retinopathy and glaucoma. Localization of the optic disc is extremely important for determining hard exudates and lesions, and early diagnosis of disease can prevent vision loss. This paper analyzes various techniques proposed by different authors for the exact localization of the optic disc.
IRJET- Brain Tumor Segmentation and Detection using F-Transform
The document presents a method for brain tumor segmentation and detection using F-transform. There are two main stages: detection and segmentation. In detection, asymmetry between the left and right hemispheres of the brain is analyzed to detect abnormalities. In segmentation, F-transform is used for edge detection followed by morphological operations to extract the tumor region. The method was able to accurately segment and detect brain tumors in MRI images and showed improved speed over other techniques.
The document describes a novel method for localizing the pupil in eye gaze tracking systems. The proposed method fuses existing pupil localization techniques, including Hough transform, gray projection, and coarse positioning. This fusion approach aims to improve localization accuracy while lowering false detections. The method first preprocesses the eye image using filtering, edge detection, and thresholding/binarization. It then applies the fused techniques of Hough transform, gray projection, and coarse positioning to the preprocessed image to accurately detect the pupil boundary and center coordinates in a robust manner. The results are expected to provide better pupil localization compared to existing individual techniques.
A study of a modified histogram based fast enhancement algorithm (MHBFE)
Image enhancement is one of the most important issues in low-level image processing. The goal of image enhancement is to improve the quality of an image such that the enhanced image is better than the original. Conventional histogram equalization (HE) is one of the most widely used algorithms for contrast enhancement of medical images, owing to its simplicity and effectiveness. However, it produces an unnatural look and visual artefacts, since it tends to change the brightness of an image. The Histogram Based Fast Enhancement Algorithm (HBFE) enhances CT head images, reducing the washed-out effect caused by conventional histogram equalization with less complexity. It relies on using the full range of gray levels to enhance the soft tissues while ignoring other image details. We present a modification of this algorithm that is valid for most CT image types while keeping the same degree of simplicity. Experimental results show that the Modified Histogram Based Fast Enhancement Algorithm (MHBFE) improves the results in terms of PSNR, AMBE, and entropy. We also use statistical analysis to verify that the improvement of the proposed modification can be generalized: analysis of variance (ANOVA) is first used to test whether all the results have the same average, after which we find a significant improvement from the modification.
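For reference, conventional global histogram equalization, the baseline that HBFE and MHBFE improve upon, can be sketched as follows. This is a minimal illustration on a flat grayscale pixel list; the function name is ours, and MHBFE's gray-level selection is not reproduced.

```python
def equalize(img, levels=256):
    """Conventional global histogram equalization on a flat grayscale
    pixel list: map each gray level through the normalized CDF."""
    hist = [0] * levels
    for p in img:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:                      # cumulative histogram
        total += h
        cdf.append(total)
    n = len(img)
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((cdf[g] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for g in range(levels)]      # gray level -> equalized level
    return [lut[p] for p in img]
```

Note how two adjacent gray levels get pushed to opposite ends of the range, which is exactly the brightness-shifting behavior the abstract criticizes.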
Coutinho - A Depth Compensation Method for Cross-Ratio Based Eye Tracking
Traditional cross-ratio (TCR) methods project a light pattern and use invariant properties of projective geometry to estimate the gaze position. Advantages of TCR methods include robustness to large head movements, and they generally require only a one-time per-user calibration. However, the accuracy of TCR methods decays significantly for head movements along the camera's optical axis, mainly due to the angular difference between the optical and visual axes of the eye. In this paper we propose a depth compensation cross-ratio (DCR) method that improves the accuracy of TCR methods under large head depth variations. Our solution compensates for the angular offset using a 2D on-screen vector computed from a simple calibration procedure. The length of the 2D vector, which varies with head distance, is adjusted by a scale factor estimated from relative size variations of the corneal reflection pattern. The proposed DCR solution was compared to a TCR method using synthetic data and real data from 2 users. An average improvement of 40% was observed with the synthetic data, and 8% with the real data.
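The depth-compensation idea can be illustrated with a small sketch: the on-screen correction vector obtained at calibration is rescaled by the relative size of the corneal reflection pattern, which varies with head depth. All names and the exact scaling rule here are illustrative assumptions, not the authors' code.

```python
def compensate_gaze(gaze_xy, offset_xy, pattern_size, pattern_size_cal):
    """Add the calibration-time on-screen correction vector to the raw
    cross-ratio gaze estimate, rescaled by the relative size of the corneal
    reflection pattern (a proxy for head depth). Illustrative names only."""
    s = pattern_size / pattern_size_cal   # pattern shrinks as the head moves away
    return (gaze_xy[0] + s * offset_xy[0],
            gaze_xy[1] + s * offset_xy[1])
```

When the head is at the calibration depth (`s == 1`) the full calibration offset is applied; at twice the distance the reflected pattern is smaller and the correction shrinks accordingly.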
This document discusses automatic segmentation of white matter from brain fMRI images. It presents a 3-step solution: 1) preprocessing raw images using histogram-based double thresholding to remove noise, 2) estimating a threshold value for segmentation using Otsu's algorithm, and 3) performing binary segmentation of images based on the calculated threshold. Currently, white matter segmentation is done manually, which is time-consuming. The proposed automatic method could help address this issue.
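Step 2, Otsu's algorithm, chooses the threshold that maximizes the between-class variance of the gray-level histogram. A minimal sketch (ours, not the paper's code) on a flat pixel list:

```python
def otsu_threshold(img, levels=256):
    """Otsu's method: pick the gray level that maximizes the between-class
    variance w0*w1*(m0-m1)^2 of background vs. foreground."""
    hist = [0] * levels
    for p in img:
        hist[p] += 1
    total = len(img)
    sum_all = sum(g * hist[g] for g in range(levels))
    w0, sum0 = 0, 0                      # background weight / intensity sum
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                   # background mean
        m1 = (sum_all - sum0) / w1       # foreground mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Step 3 is then simply `mask = [1 if p > t else 0 for p in img]`.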
Comparative performance analysis of segmentation techniques
This document compares the performance of several image segmentation techniques: global thresholding, adaptive thresholding, region growing, and level set segmentation. It applies these techniques to medical and synthetic images corrupted with noise and evaluates the segmentation results using binary classification metrics like sensitivity, specificity, accuracy, and precision. The results show that level set segmentation best preserves object boundaries, adaptive thresholding captures most image details, and global thresholding has the highest success rate at extracting regions of interest. Overall, the study aims to determine the optimal segmentation method for medical images from CT scans.
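The binary classification metrics used in the evaluation can be computed directly from flattened 0/1 masks; a small illustrative helper (names ours):

```python
def confusion_metrics(pred, truth):
    """Pixel-wise binary metrics (sensitivity, specificity, accuracy,
    precision) from flat 0/1 predicted and ground-truth masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "accuracy": (tp + tn) / len(truth),
        "precision": tp / (tp + fp),
    }
```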
IRJET- Survey based on Detection of Optic Disc in Retinal Images using Segmen...
This document compares different segmentation techniques for detecting the optic disc in retinal images: K-means clustering, Hough transform, and fuzzy convergence. It finds that K-means clustering achieved the highest accuracy rate at 96.92% while Hough transform and fuzzy convergence achieved 93.2% and 95.2% accuracy, respectively. The techniques are also compared based on how well they determine properties of the optic disc like eccentricity, accuracy, and brightness. Pre-processing steps like filtering, color space conversion, and histogram equalization are applied before segmentation.
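The K-means step can be illustrated on a one-dimensional list of pixel intensities. This is a deliberately minimal sketch; the surveyed methods also apply the pre-processing listed above and work on 2-D images.

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means on pixel intensities: assign each value to the
    nearest center, then recompute centers as cluster means."""
    centers = sorted(values)[::max(1, len(values) // k)][:k]  # spread-out seeds
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

For optic disc detection, the brightest cluster's pixels would be taken as the OD candidate region.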
A SIMPLE IMAGE PROCESSING APPROACH TO ABNORMAL SLICES DETECTION FROM MRI TUMO...
This paper proposes a method for brain tumor detection from magnetic resonance imaging (MRI) of human head scans. The proposed work explains the tumor detection process by means of image processing transformations and a thresholding technique. The MRI images are preprocessed by transformation techniques that enhance the tumor region. The images are then checked for abnormality using a fuzzy symmetric measure (FSM); if abnormal, Otsu's thresholding is used to extract the tumor region. Experiments with the proposed method were carried out on 17 datasets, and various evaluation parameters were used to validate it. The predictive accuracy (PA) and Dice coefficient (DC) values of the proposed method reached their maximum.
Abstract:
A technique for exudate detection in fundus images is presented in this paper. Diabetic retinopathy causes an abnormality known as exudates, and loss of vision can be prevented by detecting exudates as early as possible. The work mainly aims at detecting exudates present in the green channel of the RGB image by applying a few preprocessing steps, DWT, and feature extraction. The extracted features are fed to three different classifiers: KNN, SVM, and NN. If the classifier result indicates an exudate is present, the exudate ROI is extracted using Canny edge detection followed by morphological operations. The severity of the exudates is established from the area of the detected exudate.
Keywords: Exudates, Fundus image, Diabetic retinopathy, DWT, KNN, SVM, NN, Canny edge detection, Morphological operations.
There are three major complications of diabetes that lead to blindness: retinopathy, cataracts, and glaucoma, among which diabetic retinopathy is considered the most serious, as it affects the blood vessels in the retina. Diabetic retinopathy (DR) occurs when tiny vessels swell and leak fluid, or abnormal new blood vessels grow, hampering normal vision.
Diabetic retinopathy is a widespread cause of visual impairment. Abnormalities such as microaneurysms, hemorrhages, and exudates are the key symptoms that play an important role in the diagnosis of diabetic retinopathy, and early detection of these abnormalities may prevent the blurred vision or vision loss it causes. Exudates are lipid lesions visible in optical images. They are categorized into hard and soft exudates based on their appearance: hard exudates appear as intense yellow regions, while soft exudates have fuzzy manifestations. Automatic detection of exudates may aid ophthalmologists in the diagnosis of diabetic retinopathy and its early treatment. Fig. 1 shows the key symptoms of diabetic retinopathy.
Development of algorithm for identification of malignant growth in cancer usin...
The precise identification and characterization of small pulmonary nodules on low-dose CT is a necessary requirement for valuable lung cancer screening. It is essential to develop an automated tool to detect pulmonary nodules on low-dose CT at the earliest stage. Various algorithms have been proposed by many researchers in the past, but prediction accuracy remains a challenging task. In this work, an artificial neural network based methodology is proposed to find the irregular growth of lung tissues, with a high probability of detection as the goal of an automated, highly accurate tool. The best feature sets are derived from the Haralick gray level co-occurrence matrix and used as the dimension-reduction stage feeding the neural network. A binary-classifier neural network is proposed to identify the normal images among all the images. The potential of the proposed neural network has been quantitatively computed using a confusion matrix and reported in terms of accuracy.
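A Haralick-style GLCM feature can be illustrated in a few lines. The sketch below computes only the contrast feature from a horizontal-offset co-occurrence matrix; it is an illustration of the general technique, not the paper's feature extractor.

```python
def glcm_contrast(img, levels=4):
    """Contrast from a gray level co-occurrence matrix at offset (0, 1),
    one of the classic Haralick texture features.
    `img` is a 2-D list of integer gray levels in [0, levels)."""
    glcm = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):       # horizontally adjacent pairs
            glcm[a][b] += 1
    total = sum(map(sum, glcm)) or 1
    return sum(((i - j) ** 2) * glcm[i][j] / total
               for i in range(levels) for j in range(levels))
```

A flat region yields contrast 0, while a checkerboard of extreme levels yields a large value; a full Haralick set would add energy, homogeneity, correlation, and more offsets/angles.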
The document discusses several key principles and controversies in PET amyloid imaging, including reference tissue considerations, target region considerations, thresholds for amyloid positivity, and comparisons of the [18F]AV-45 and [11C]PiB tracers. It analyzes data from ADNI AV45 scans to examine these issues, finding that the choice of reference region and scanner model can affect positivity thresholds. Surface maps may aid the reading of AV45 scans compared to PiB.
IRJET- Performance Analysis of Learning Algorithms for Automated Detection of...
This document presents research on using machine learning algorithms to detect glaucoma from fundus images. The researchers extracted various texture and intensity features from fundus images using methods like gray level co-occurrence matrix, histogram of oriented gradients, wavelet transforms, and shearlet transforms. They used feature selection to reduce the number of features before classifying the images as normal or glaucomatous using support vector machines and K-nearest neighbors algorithms. The best accuracy of 97.5% was achieved using wavelet features. The researchers evaluated the classification performance using metrics like accuracy, sensitivity, specificity, and predictive values to assess the ability of the extracted features to detect glaucoma.
Automatic Segmentation of Brachial Artery based on Fuzzy C-Means Pixel Clust...
Automatic extraction of the brachial artery and measurement of associated indices such as flow-mediated dilatation and intima-media thickness are important for early detection of cardiovascular disease and other vascular endothelial malfunctions. In this paper, we propose a basic but important component of such decision-assisting medical software: noise-tolerant, fully automatic segmentation of the brachial artery from ultrasound images. Pixel clustering with the Fuzzy C-Means algorithm in the quantization process is the key component of that segmentation, with various image processing algorithms involved. This algorithm can serve as an alternative segmentation process, replacing the speckle-noise-afflicted edge detection procedures in this application domain.
IRJET- Acute Ischemic Stroke Detection and Classification
This document presents a method for detecting and classifying acute ischemic strokes in CT scan images. The method involves pre-processing images using median filtering and skull stripping. Features like mean, entropy, and gray-level co-occurrence matrix values are then extracted. Naive Bayes and k-nearest neighbor classifiers are used to classify images as normal or stroke with 92% accuracy. The k-NN classifier takes longer (8.80 seconds) to process images compared to the Naive Bayes classifier (5.85 seconds). The method accurately detects stroke regions in images and can help in early diagnosis and treatment of ischemic strokes.
Comparative Assessments of Contrast Enhancement Techniques in Brain MRI Images
Image processing plays a significant role in the medical field, particularly in medical imaging diagnostics, which is a growing and challenging area. Medical imaging is advantageous for the diagnosis and early detection of many harmful diseases; one such dangerous disease is a brain tumor, for which medical imaging provides a proper diagnosis. This paper analyzes fundamental concepts as well as algorithms for brain MRI image processing. We apply several image processing steps to brain MRI images, conducting specific contrast enhancement and segmentation techniques, and evaluate every technique's performance in terms of evaluation parameters. The methods, namely Contrast Stretching, Shock Filter, Histogram Equalization, and Contrast Limited Adaptive Histogram Equalization (CLAHE), are evaluated using two measurement criteria: Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE). This comparative analysis will be handy in identifying which method provides the best performance for brain MRI image analysis. Anchal Sharma and Mukesh Kumar Saini, "Comparative Assessments of Contrast Enhancement Techniques in Brain MRI Images," International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 4, Issue 3, April 2020. URL: https://www.ijtsrd.com/papers/ijtsrd30336.pdf
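The two criteria, MSE and PSNR, are straightforward to compute; an illustrative helper for flat grayscale pixel lists, assuming an 8-bit peak value:

```python
import math

def mse_psnr(original, enhanced, peak=255):
    """Mean Square Error and Peak Signal-to-Noise Ratio (dB) between two
    flat grayscale pixel lists of equal length."""
    mse = sum((a - b) ** 2 for a, b in zip(original, enhanced)) / len(original)
    psnr = float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)
    return mse, psnr
```

Lower MSE and higher PSNR indicate that the enhanced image stays closer to the reference, which is how the four techniques are ranked.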
NIRS-BASED CORTICAL ACTIVATION ANALYSIS BY TEMPORAL CROSS CORRELATION
In this study we present a signal-processing method to determine dominant channels in near-infrared spectroscopy (NIRS). To compare measuring channels and identify delays between them, cross correlation is computed; furthermore, a visual inspection was performed to find possible dominant channels. The outcomes demonstrated that the visual inspection exhibited evoked-related activations in the primary somatosensory cortex (S1) after stimulation, consistent with comparable studies, and the cross-correlation analysis discovered dominant channels in both cerebral hemispheres. The analysis also showed a relationship between dominant channels and adjacent channels. Our results therefore present a new method to identify dominant regions in the cerebral cortex using near-infrared spectroscopy. These findings also have implications for reducing the number of channels by eliminating those irrelevant to the experiment.
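The delay-finding step can be sketched as a scan of normalized cross correlation over candidate lags. This illustrates the general technique only; the study's channel-comparison procedure is more involved.

```python
import math

def delay_estimate(x, y, max_lag):
    """Estimate how many samples channel y lags channel x by scanning the
    normalized cross correlation over candidate lags and returning the
    lag with the highest correlation."""
    def ncc(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        den = math.sqrt(sum((u - ma) ** 2 for u in a) *
                        sum((v - mb) ** 2 for v in b))
        return num / den if den else 0.0
    return max(range(max_lag + 1),
               key=lambda lag: ncc(x[:len(x) - lag], y[lag:]))
```

Comparing the peak correlation values across channel pairs is then one way to rank candidate dominant channels.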
MRI brain tumour detection by histogram and segmentation
This document summarizes a research paper on detecting brain tumors in MRI images using a combination of histogram thresholding, modified gradient vector field (GVF), and morphological operators. The non-brain regions are removed using morphological operators. Histogram thresholding is then used to detect if the brain is normal or abnormal/contains a tumor. If abnormal, the modified GVF is used to detect the tumor contour. The proposed method aims to be computationally efficient by only performing segmentation if a tumor is detected. It was tested on many MRI brain images and performance was validated against human expert segmentation.
This document summarizes an article that presents two methods for iris segmentation: 1) An automatic segmentation method that uses circular Hough transform to locate the iris boundaries and normalize the iris region using rubber sheet modeling. This method achieves 84% accuracy. 2) A window technique that first finds the pupil center using thresholding and chain coding, then extracts information along radial lines from the pupil center to locate edges, achieving 99% accuracy for pupil boundary detection and 79% for iris edges. The document provides details on each step of the automatic segmentation method and explains the window technique at a high level.
International Journal of Computational Engineering Research (IJCER)
International Journal of Computational Engineering Research (IJCER) is an international online monthly journal published in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
IRJET- Brain Tumour Detection and ART Classification Technique in MR Brai...
This document describes a proposed method for detecting and classifying brain tumors in MR brain images using robust principal component analysis (RPCA) and quad tree (QT) decomposition for image fusion. The method involves fusing T1 and T2 MRI images using RPCA and QT decomposition. The fused image is then segmented using level set segmentation. Features are extracted from the segmented image using complete local binary pattern (CLBP) and pyramid histogram of oriented gradients (PHOG) approaches. The features are then classified using an adaptive resonance theory (ART) classifier to classify the brain tumor as malignant or benign. The proposed method aims to efficiently fuse multi-modal MRI images for improved brain tumor detection and classification.
Proposition of local automatic algorithm for landmark detection in 3D cephalo...
This study proposes a new contribution to solve the problem of automatic landmark detection in three-dimensional cephalometry. 3D images obtained from CBCT (cone beam computed tomography) equipment were used for automatic identification of twelve landmarks. The proposed method is based on local geometry and intensity criteria of skull structures. After the preprocessing and binarization step, the algorithm segments the skull into three structures using the geometric information of the nasal cavity and the intensity information of the teeth. Each targeted landmark was detected using local geometric information of the volume of interest containing that landmark. The ICC and 95% confidence interval (CI) for each direction were 0.91 (0.75 to 0.96) for the x-direction, 0.92 (0.83 to 0.97) for the y-direction, and 0.92 (0.79 to 0.97) for the z-direction. The mean detection error was calculated as the Euclidean distance between the 3D coordinates of manually and automatically detected landmarks; the overall mean error of the algorithm was 2.76 mm with a standard deviation of 1.43 mm. Our proposed approach for automatic landmark identification was capable of detecting 12 landmarks on 3D CBCT images, which can facilitate the use of 3D cephalometry by orthodontists.
This document summarizes a research paper on face recognition using Gabor features and PCA. It begins with an introduction to face recognition and discusses challenges like lighting, pose, and orientation. It then describes how the proposed system uses Gabor wavelets for preprocessing to reduce variations from pose, lighting, etc. Principal component analysis (PCA) is used to extract low dimensional and discriminating feature vectors from the preprocessed images. These feature vectors are then used for classification with k-nearest neighbors. The proposed system was tested on the Yale face database containing 100 images of 10 subjects with variable illumination and expressions.
A novel equalization scheme for the selective enhancement of optical disc and...
The ratio of the diameters of the Optic Cup (OC) and Optic Disc (OD), termed the Cup-to-Disc Ratio (CDR) and derived from fundus imagery, is a popular biomarker used for the diagnosis of glaucoma. Demarcation of the OC and OD, whether manual or through automated image processing algorithms, is error-prone because of poor gray-level contrast and their vague boundaries. A dedicated equalization that simultaneously compresses the dynamic range of the background and stretches the range of the OD is proposed in this paper. Unlike conventional GHE, in the proposed equalization the original histogram is inverted and weighted nonlinearly before computing the Cumulative Probability Density (CPD). The equalization scheme is compared with Adaptive Histogram Equalization (AHE), Global Histogram Equalization (GHE), and Contrast Limited Adaptive Histogram Equalization (CLAHE) in terms of the difference between the mean gray levels of the OD and the background, using a quantitative metric known as the Contrast Improvement Index (CII). The CII exhibited by CLAHE, GHE, and the proposed scheme are 1.1977 ± 0.0326, 1.0862 ± 0.0304, and 1.3312 ± 0.0486, respectively. The proposed method is observed to be superior to CLAHE, GHE, and AHE, and it can be employed in Computerized Clinical Decision Support Systems (CCDSS) to improve the accuracy of localizing the OD and the computation of the CDR.
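One common formulation of a contrast improvement index compares the OD/background contrast after enhancement with the contrast before it. The sketch below uses a Michelson-style contrast computed from region means; this is an assumption for illustration and may differ from the paper's exact normalization.

```python
def contrast_improvement_index(od_enh, bg_enh, od_orig, bg_orig):
    """CII = contrast(enhanced) / contrast(original), with contrast taken
    as |OD mean - background mean| / (OD mean + background mean)."""
    c_enh = abs(od_enh - bg_enh) / (od_enh + bg_enh)
    c_orig = abs(od_orig - bg_orig) / (od_orig + bg_orig)
    return c_enh / c_orig
```

A CII above 1 means the enhancement widened the gray-level separation between the OD and its background, which is what the reported 1.3312 for the proposed scheme indicates.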
New Noise Reduction Technique for Medical Ultrasound Imaging using Gabor Filt...
Ultrasound (US) imaging is an important medical diagnostic method, as it allows the examination of several internal body organs. However, its usefulness is diminished by signal dependent noise known as speckle noise. Speckle noise degrades target detectability in ultrasound images and reduces contrast and resolution, affecting the ability to identify normal and pathological tissue. For accurate diagnosis, it is important to remove this noise from ultrasound images. In this study, a new filtering technique is proposed for removing speckle noise from medical ultrasound images. It is based on Gabor filtering. Specifically, a preprocessing step is added before applying the Gabor filter. The proposed technique is applied to various ultrasound images, and certain measurement indexes are calculated, such as signal to noise ratio, peak signal to noise ratio, structure similarity index, and root mean square error, which are used for comparison. In particular, five widely used image enhancement techniques were applied to three types of ultrasound images (kidney, abdomen and ortho). The main objective of image enhancement is to obtain a highly detailed image, and in that respect, the proposed technique proved superior to other widely used filters.
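A Gabor filter bank is built from kernels like the one sketched below (real part only). Parameter names follow the usual Gabor convention and are not taken from the paper; the paper's preprocessing step is not shown.

```python
import math

def gabor_kernel(size, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope (sigma, aspect
    ratio gamma) modulating a cosine wave of wavelength lam at angle theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)    # rotate coords
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                           / (2.0 * sigma * sigma))
            row.append(env * math.cos(2.0 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel
```

Convolving the image with kernels at several orientations (theta) and wavelengths (lam) and combining the responses is the standard way such a filter bank is applied.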
Analysis of minimum face video duration and the effect of video compression t...
Heart rate (HR) is an important indicator for human physiological diagnosis, and a camera can be used to detect it via photoplethysmograph (PPG) signal extraction. In doing so, the number of sample images required to measure the HR signal and the quality of the images themselves are important for yielding an accurate reading. This paper tackles this issue by analyzing the effect of the sampling interval on HR readings in compressed and original video formats, obtained at various ranges. Technically, important facial points in the video stream were estimated using a cascade-regression facial tracker. Based on the facial points, a region of interest (ROI) was constructed where non-rigid movement is minimal. Next, the PPG signal was extracted by calculating the average green-pixel intensity over the ROI. Illumination variation was then separated from the signal via independent component analysis (ICA), and the PPG signal was further processed using a series of signal filtering techniques to exclude frequencies beyond the range of interest before estimating the HR. From the experiment it can be observed that a sampling time of 2 seconds in uncompressed video shows promising HR readings within the range of 1 to 5 meters.
Pulse Rate Monitoring Using Image ProcessingIRJET Journal
This paper presents a real-time heart rate monitoring system using video from a webcam. It extracts the heart rate from variations in facial skin color caused by blood circulation. Three signal processing methods - Fast Fourier Transform, Independent Component Analysis, and Principal Component Analysis - are applied to color channels in the video to extract the blood volume pulse. The extracted heart rate is then compared to a reference rate. Results show there is a high degree of agreement between the proposed method and the reference readings, indicating this non-contact technique has potential for use in personal healthcare and telemedicine applications.
Extraction of respiratory rate from ppg signals using pca and emdeSAT Publishing House
This document discusses extracting respiratory rate from photoplethysmography (PPG) signals using principal component analysis (PCA) and empirical mode decomposition (EMD). It begins with an introduction to PPG signals and how they contain respiratory information. It then discusses previous efforts to extract respiratory signals from PPG that used methods like filtering and wavelets. The document proposes using PCA and EMD to improve upon existing methods. It provides background on PCA, EMD, and reviews literature on extracting respiratory information from ECG and how respiration modulates PPG signals. The aim is to evaluate different signal processing techniques to extract respiratory information from commonly available biomedical signals like ECG and PPG to avoid using additional sensors.
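The role PCA typically plays in such PDR methods, pulling one shared slow component out of several correlated signals, can be illustrated with a minimal numpy sketch. This is a generic first-principal-component extraction on toy data, not the paper's exact procedure:

```python
import numpy as np

def first_principal_component(X):
    """Rows of X are observations; returns the first PC direction and its scores."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)        # eigh suits the symmetric covariance
    pc1 = vecs[:, np.argmax(vals)]          # direction of largest variance
    return pc1, Xc @ pc1

# Toy setting: two derived PPG features sharing one slow latent trend
# (a stand-in for the respiratory modulation).
rng = np.random.default_rng(1)
latent = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
X = np.column_stack([latent + 0.05 * rng.standard_normal(200),
                     0.8 * latent + 0.05 * rng.standard_normal(200)])
pc1, scores = first_principal_component(X)
```

The `scores` series recovers the common trend up to sign, which is the usual caveat when interpreting PCA components as physiological signals.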
A SIMPLE IMAGE PROCESSING APPROACH TO ABNORMAL SLICES DETECTION FROM MRI TUMO...ijma
This paper proposes a method for brain tumor detection from magnetic resonance imaging (MRI) of human head scans. The tumor detection process is carried out by means of image processing transformations and a thresholding technique. The MRI images are preprocessed by transformation techniques to enhance the tumor region. The images are then checked for abnormality using a fuzzy symmetric measure (FSM). If abnormal, Otsu's thresholding is used to extract the tumor region. Experiments with the proposed method were conducted on 17 datasets, and various evaluation parameters were used to validate it. The predictive accuracy (PA) and dice coefficient (DC) values of the proposed method reached their maximum values.
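Otsu's thresholding, used above to extract the tumor region, picks the grey-level threshold that maximizes the between-class variance of the histogram. A generic numpy sketch (not the authors' code):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Threshold maximizing between-class variance of the grey-level histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                 # probability of the "background" class
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)       # cumulative (unnormalized) class mean
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# Bimodal toy slice: dark "brain", bright square "tumor".
img = np.full((64, 64), 40.0)
img[20:40, 20:40] = 200.0
t = otsu_threshold(img)
mask = img > t   # binary tumor mask; in the paper, this follows the FSM check
```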
Abstract:
A technique for exudate detection in fundus images is presented in this paper. Diabetic retinopathy causes an abnormality known as exudates. Loss of vision can be prevented by detecting the exudates as early as possible. The work mainly aims at detecting exudates present in the green channel of the RGB image by applying a few preprocessing steps, DWT, and feature extraction. The extracted features are fed to three different classifiers: KNN, SVM, and NN. If the classifier result indicates that an exudate is present, the exudate ROI is extracted using Canny edge detection followed by morphological operations. The severity of the exudates is established from the area of the detected exudate.
Keywords: Exudates, Fundus image, Diabetic retinopathy, DWT, KNN, SVM, NN, Canny edge detection, Morphological operations.
There are three major complications of diabetes which lead to blindness: retinopathy, cataracts, and glaucoma, among which diabetic retinopathy is considered the most serious, affecting the blood vessels in the retina. Diabetic retinopathy (DR) occurs when tiny vessels swell and leak fluid, or abnormal new blood vessels grow, hampering normal vision.
Diabetic retinopathy is a widespread cause of visual impairment. Abnormalities like microaneurysms, hemorrhages, and exudates are the key symptoms which play an important role in the diagnosis of diabetic retinopathy. Early detection of these abnormalities may prevent the blurred vision or vision loss it causes. Exudates are lipid lesions visible in optical images, and are categorized into hard and soft exudates based on their appearance: hard exudates appear as intense yellow regions, while soft exudates have fuzzy boundaries. Automatic detection of exudates may aid ophthalmologists in the diagnosis of diabetic retinopathy and its early treatment. Fig. 1 shows the key symptoms of diabetic retinopathy.
Development of algorithm for identification of malignant growth in cancer usin...IJECEIAES
The precise identification and characterization of small pulmonary nodules at low-dose CT is a necessary requirement for valuable lung cancer screening. It is essential to develop an automated tool in order to detect pulmonary nodules at low-dose CT at the earliest stage. Various algorithms have been proposed in the past by many researchers, but the accuracy of prediction remains a challenging task. In this work, an artificial neural network based methodology is proposed to find the irregular growth of lung tissues. A higher probability of detection is taken as the goal, in order to obtain an automated tool with high accuracy. The best feature sets were derived from the Haralick gray-level co-occurrence matrix and used as the dimension-reduction step for feeding the neural network. A binary classifier neural network is proposed to identify the normal images among all the images. The potential of the proposed neural network has been quantitatively computed using a confusion matrix and reported in terms of accuracy.
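The Haralick/GLCM feature stage named above can be sketched generically. The number of grey levels and the single pixel offset below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Symmetric, normalized grey-level co-occurrence matrix for one offset.
    Assumes img.max() > 0."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)   # quantize
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1.0
    m = m + m.T                        # count both directions of the offset
    return m / m.sum()

def haralick(p):
    """Three classic Haralick statistics from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = float(np.sum(p * (i - j) ** 2))
    energy = float(np.sum(p ** 2))
    homogeneity = float(np.sum(p / (1.0 + np.abs(i - j))))
    return contrast, energy, homogeneity
```

Feature vectors like `(contrast, energy, homogeneity)` over several offsets and angles are what would be fed to the binary classifier network.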
The document discusses several key principles and controversies in PET amyloid imaging, including reference tissue considerations, target region considerations, thresholds for amyloid positivity, and comparisons of [18F]AV-45 and [11C]PiB tracers. It analyzes data from ADNI AV45 scans to examine these issues, finding that the choice of reference region and scanner model can impact positivity thresholds. Surface maps may help reading of AV45 scans compared to PiB.
IRJET- Performance Analysis of Learning Algorithms for Automated Detection of...IRJET Journal
This document presents research on using machine learning algorithms to detect glaucoma from fundus images. The researchers extracted various texture and intensity features from fundus images using methods like gray level co-occurrence matrix, histogram of oriented gradients, wavelet transforms, and shearlet transforms. They used feature selection to reduce the number of features before classifying the images as normal or glaucomatous using support vector machines and K-nearest neighbors algorithms. The best accuracy of 97.5% was achieved using wavelet features. The researchers evaluated the classification performance using metrics like accuracy, sensitivity, specificity, and predictive values to assess the ability of the extracted features to detect glaucoma.
Automatic Segmentation of Brachial Artery based on Fuzzy C-Means Pixel Clust...IJECEIAES
Automatic extraction of the brachial artery and measurement of associated indices such as flow-mediated dilatation and intima-media thickness are important for early detection of cardiovascular disease and other vascular endothelial malfunctions. In this paper, we propose the basic but important component of such decision-assisting medical software: noise-tolerant, fully automatic segmentation of the brachial artery from ultrasound images. Pixel clustering with the Fuzzy C-Means algorithm in the quantization process is the key component of that segmentation, with various image processing algorithms involved. This algorithm could be an alternative segmentation process, replacing edge-detection procedures that suffer from speckle noise in this application domain.
IRJET- Acute Ischemic Stroke Detection and ClassificationIRJET Journal
This document presents a method for detecting and classifying acute ischemic strokes in CT scan images. The method involves pre-processing images using median filtering and skull stripping. Features like mean, entropy, and gray-level co-occurrence matrix values are then extracted. Naive Bayes and k-nearest neighbor classifiers are used to classify images as normal or stroke with 92% accuracy. The k-NN classifier takes longer (8.80 seconds) to process images compared to the Naive Bayes classifier (5.85 seconds). The method accurately detects stroke regions in images and can help in early diagnosis and treatment of ischemic strokes.
Comparative Assessments of Contrast Enhancement Techniques in Brain MRI Imagesijtsrd
Image processing plays a significant role in the medical field, particularly in medical imaging diagnostics, which is a growing and challenging area. Medical imaging is advantageous in the diagnosis and early detection of many harmful diseases. One such dangerous disease is a brain tumor, and medical imaging provides a proper diagnosis of it. This paper analyzes fundamental concepts as well as algorithms for brain MRI image processing. We applied several image processing steps on brain MRI images, conducting specific contrast enhancement and segmentation techniques, and evaluating each technique's performance in terms of evaluation parameters. The methods evaluated, namely Contrast Stretching, Shock Filter, Histogram Equalization, and Contrast Limited Adaptive Histogram Equalization (CLAHE), are compared using two measurement criteria: Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE). This comparative analysis will be handy in identifying which method provides the best performance for brain MRI image analysis. Anchal Sharma | Mr. Mukesh Kumar Saini "Comparative Assessments of Contrast Enhancement Techniques in Brain MRI Images" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-3, April 2020, URL: https://www.ijtsrd.com/papers/ijtsrd30336.pdf Paper Url: https://www.ijtsrd.com/engineering/bio-mechanicaland-biomedical-engineering/30336/comparative-assessments-of-contrast-enhancement-techniques-in-brain-mri-images/anchal-sharma
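Of the compared techniques, global histogram equalization and the two criteria (PSNR and MSE) are simple to state concretely. The sketch below is a generic numpy version, not tied to the paper's implementation, and assumes an 8-bit image with more than one grey level:

```python
import numpy as np

def equalize(img):
    """Global histogram equalization for an 8-bit greyscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[np.nonzero(hist)[0][0]]      # cdf at the lowest occupied level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b):
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(255.0 ** 2 / m)

# Low-contrast ramp squeezed into [100, 140]; equalization spreads it to [0, 255].
img = np.tile(np.linspace(100, 140, 64).astype(np.uint8), (64, 1))
eq = equalize(img)
```

CLAHE differs from this global scheme by equalizing in local tiles with a clip limit, which is why it usually scores better on low-contrast MRI slices.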
NIRS-BASED CORTICAL ACTIVATION ANALYSIS BY TEMPORAL CROSS CORRELATIONsipij
In this study we present a signal processing method to determine dominant channels in near-infrared spectroscopy (NIRS). To compare measuring channels and identify delays between them, cross correlation is computed. Furthermore, to find possible dominant channels, a visual inspection was performed. The outcomes demonstrated that the visual inspection exhibited evoked activations in the primary somatosensory cortex (S1) after stimulation, which is consistent with comparable studies, and that the cross correlation analysis discovered dominant channels on both cerebral hemispheres. The analysis also showed a relationship between dominant channels and adjacent channels. Our results therefore present a new method to identify dominant regions in the cerebral cortex using near-infrared spectroscopy. These findings also have implications for reducing the number of channels by eliminating those irrelevant to the experiment.
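The core operation, estimating the delay between two channels from the peak of their cross-correlation, can be sketched as follows. The sampling rate and the synthetic "activation" are assumed values for illustration:

```python
import numpy as np

def lag_between(a, b, fs=10.0):
    """Delay of channel b relative to channel a, in seconds, via cross-correlation.
    A positive value means b lags behind a."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    xcorr = np.correlate(b, a, mode='full')
    lag_samples = int(np.argmax(xcorr)) - (len(a) - 1)
    return lag_samples / fs

# Two synthetic channels: the same evoked response, one delayed by 0.5 s.
fs = 10.0
t = np.arange(0, 20, 1.0 / fs)
resp = np.exp(-(t - 5.0) ** 2)        # Gaussian "activation" peaking at t = 5 s
shifted = np.exp(-(t - 5.5) ** 2)     # identical response arriving 0.5 s later
```

Applied pairwise over all channels, the peak correlation values (rather than the lags) would indicate which channels dominate.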
Mri brain tumour detection by histogram and segmentationiaemedu
This document summarizes a research paper on detecting brain tumors in MRI images using a combination of histogram thresholding, modified gradient vector field (GVF), and morphological operators. The non-brain regions are removed using morphological operators. Histogram thresholding is then used to detect if the brain is normal or abnormal/contains a tumor. If abnormal, the modified GVF is used to detect the tumor contour. The proposed method aims to be computationally efficient by only performing segmentation if a tumor is detected. It was tested on many MRI brain images and performance was validated against human expert segmentation.
This document summarizes an article that presents two methods for iris segmentation: 1) An automatic segmentation method that uses circular Hough transform to locate the iris boundaries and normalize the iris region using rubber sheet modeling. This method achieves 84% accuracy. 2) A window technique that first finds the pupil center using thresholding and chain coding, then extracts information along radial lines from the pupil center to locate edges, achieving 99% accuracy for pupil boundary detection and 79% for iris edges. The document provides details on each step of the automatic segmentation method and explains the window technique at a high level.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
IRJET- Brain Tumour Detection and ART Classification Technique in MR Brai...IRJET Journal
This document describes a proposed method for detecting and classifying brain tumors in MR brain images using robust principal component analysis (RPCA) and quad tree (QT) decomposition for image fusion. The method involves fusing T1 and T2 MRI images using RPCA and QT decomposition. The fused image is then segmented using level set segmentation. Features are extracted from the segmented image using complete local binary pattern (CLBP) and pyramid histogram of oriented gradients (PHOG) approaches. The features are then classified using an adaptive resonance theory (ART) classifier to classify the brain tumor as malignant or benign. The proposed method aims to efficiently fuse multi-modal MRI images for improved brain tumor detection and classification.
Proposition of local automatic algorithm for landmark detection in 3D cephalo...journalBEEI
This study proposes a new contribution to solving the problem of automatic landmark detection in three-dimensional cephalometry. 3D images obtained from CBCT (cone beam computed tomography) equipment were used for automatic identification of twelve landmarks. The proposed method is based on local geometry and intensity criteria of skull structures. After the preprocessing and binarization step, the algorithm segments the skull into three structures using the geometric information of the nasal cavity and the intensity information of the teeth. Each targeted landmark was detected using local geometric information of the volume of interest containing that landmark. The ICC and confidence interval (95% CI) for each direction were 0.91 (0.75 to 0.96) for the x-direction, 0.92 (0.83 to 0.97) for the y-direction, and 0.92 (0.79 to 0.97) for the z-direction. The mean detection error was calculated as the Euclidean distance between the 3D coordinates of manually and automatically detected landmarks. The overall mean error of the algorithm was 2.76 mm with a standard deviation of 1.43 mm. Our proposed approach was capable of detecting 12 landmarks on 3D CBCT images, which can facilitate the use of 3D cephalometry by orthodontists.
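The reported overall error is a mean Euclidean distance between manually and automatically placed 3D landmarks, which is straightforward to compute (the coordinates below are made up for illustration):

```python
import numpy as np

def mean_landmark_error(auto, manual):
    """Mean Euclidean distance between paired 3D landmarks (same units as input, e.g. mm)."""
    return float(np.mean(np.linalg.norm(auto - manual, axis=1)))

# Two hypothetical landmarks, each off by a known amount (2 mm and 3 mm).
auto = np.array([[1.0, 2.0, 3.0], [4.0, 4.0, 4.0]])
manual = np.array([[1.0, 2.0, 5.0], [4.0, 1.0, 4.0]])
```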
This document summarizes a research paper on face recognition using Gabor features and PCA. It begins with an introduction to face recognition and discusses challenges like lighting, pose, and orientation. It then describes how the proposed system uses Gabor wavelets for preprocessing to reduce variations from pose, lighting, etc. Principal component analysis (PCA) is used to extract low dimensional and discriminating feature vectors from the preprocessed images. These feature vectors are then used for classification with k-nearest neighbors. The proposed system was tested on the Yale face database containing 100 images of 10 subjects with variable illumination and expressions.
A novel equalization scheme for the selective enhancement of optical disc and...TELKOMNIKA JOURNAL
The ratio of the diameters of the Optic Cup (OC) and Optic Disc (OD), termed the 'Cup to Disc Ratio' (CDR), derived from fundus imagery, is a popular biomarker used for the diagnosis of glaucoma. Demarcation of the OC and OD, either manually or through automated image processing algorithms, is error prone because of poor grey-level contrast and their vague boundaries. A dedicated equalization which simultaneously compresses the dynamic range of the background and stretches the range of the OD is proposed in this paper. Unlike conventional GHE, in the proposed equalization the original histogram is inverted and weighted nonlinearly before computing the Cumulative Probability Density (CPD). The equalization scheme is compared with Adaptive Histogram Equalization (AHE), Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE) in terms of the difference between the mean grey levels of the OD and the background, using a quantitative metric known as the Contrast Improvement Index (CII). The CII exhibited by CLAHE, GHE and the proposed scheme are 1.1977 ± 0.0326, 1.0862 ± 0.0304 and 1.3312 ± 0.0486, respectively. The proposed method is observed to be superior to CLAHE, GHE and AHE, and it can be employed in Computerized Clinical Decision Support Systems (CCDSS) to improve the accuracy of localizing the OD and the computation of the CDR.
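The Contrast Improvement Index is commonly defined as the ratio of a contrast measure after and before enhancement. The sketch below assumes a Michelson-style contrast between OD and background mean grey levels; this is an assumption, since the abstract does not give the exact formula:

```python
import numpy as np

def contrast(img, od_mask):
    """Michelson-style contrast between OD pixels and background pixels."""
    f = img[od_mask].mean()
    b = img[~od_mask].mean()
    return (f - b) / (f + b)

def cii(enhanced, original, od_mask):
    """Contrast Improvement Index: ratio of contrasts; > 1 means the OD stands out more."""
    return contrast(enhanced, od_mask) / contrast(original, od_mask)

# Toy fundus patch: a bright disc on a dim background, then an enhancement
# that widens the grey-level gap between disc and background.
img = np.full((32, 32), 120.0)
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 12:20] = True
img[mask] = 150.0
stretched = np.where(mask, 200.0, 100.0)
```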
Extraction of respiratory rate from ppg signals using pca and emdeSAT Journals
Abstract: Photoplethysmography is a non-invasive electro-optic method developed by Hertzman which provides information on the blood volume flowing at a particular test site on the body close to the skin. The PPG waveform contains two components: one attributable to the pulsatile component in the vessels, i.e. the arterial pulse caused by the heartbeat, which gives a rapidly alternating signal (AC component); the second due to the blood volume in the skin and its slow changes, which gives a steady, slowly varying signal (DC component). The PPG signal contains not only heart-beat information but also a respiratory signal. Estimation of respiration rates from photoplethysmographic (PPG) signals would be an alternative approach for obtaining respiration-related information. There have been several efforts on PPG-Derived Respiration (PDR); these methods are based on different signal processing techniques like filtering, wavelets and other statistical methods, which work by extracting the respiratory trend embedded in various physiological signals. PCA identifies patterns in data and expresses the data in a way that highlights their similarities and differences. Since patterns can be hard to find in data of high dimension, where the luxury of graphical representation is not available, PCA is a powerful tool for analyzing such data. Due to external stimuli, biomedical signals are in general non-linear and non-stationary. Empirical Mode Decomposition is ideally suited to extract essential components which are characteristic of the underlying biological or physiological processes. The basis functions, called Intrinsic Mode Functions (IMFs), represent a complete set of locally orthogonal basis functions whose amplitude and frequency may vary over time. The contribution reviews the technique of EMD and related algorithms and discusses illustrative applications.
Test results on PPG signals of the well known MIMIC database from Physiobank archive reveal that the proposed EMD method has efficiently extracted respiratory information from PPG signals. The evaluated similarity parameters in both time and frequency domains for original and estimated respiratory rates have shown the superiority of the method. Index Terms: Respiratory signal, PPG signal, Principal Component Analysis, EMD, ECG
Powerful processing to three-dimensional facial recognition using triple info...IJAAS Team
Face detection plays a crucial role in identifying individuals and criminals in security, surveillance, and control systems. Human face recognition is remarkable: faces can be easily identified even after years of separation, and this ability persists through changes in facial appearance such as age, glasses, a beard, or small changes in the face. The method is based on 150 three-dimensional images from the Bosphorus database, acquired with a high-range laser scanner at Bogaziçi University in Turkey. This paper presents powerful processing for face recognition based on a combination of salient information and features of the face, such as the eyes and nose, detected in three-dimensional figures through analysis of surface curvature. The triplet of the nose and two eyes was selected, and the principal component analysis algorithm and a support vector machine were applied to detect and classify the difference between face and non-face. Results with different facial expressions, extracted from different angles, have indicated the efficiency of our processing.
An Ultrasound Image Despeckling Approach Based on Principle Component AnalysisCSCJournals
This document presents a new principal component analysis (PCA)-based approach for reducing speckle noise in ultrasound images. Speckle noise inherently degrades ultrasound image quality but also contains clinically useful textural information. The proposed method segments the image, calculates the covariance matrix for each segment, averages the covariance matrices to obtain a global covariance matrix, selects the dominant eigenvectors from this matrix to form a projection matrix, projects the image segments onto this matrix to denoise them, and recombines the denoised segments. When applied to simulated and real ultrasound images, it outperformed wavelet denoising, total variation filtering, and anisotropic diffusion filtering in terms of edge preservation and noise removal while maintaining textural information.
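The projection step described above, reconstructing segments from only the dominant eigenvectors of their covariance, can be sketched generically in numpy (illustrative rank-1 data, not the paper's pipeline):

```python
import numpy as np

def pca_denoise(segments, k):
    """Keep only the top-k eigenvectors of the segment covariance matrix."""
    mean = segments.mean(axis=0)
    Xc = segments - mean
    cov = Xc.T @ Xc / (len(segments) - 1)         # global covariance estimate
    vals, vecs = np.linalg.eigh(cov)              # ascending eigenvalues
    P = vecs[:, np.argsort(vals)[::-1][:k]]       # projection matrix of dominant modes
    return Xc @ P @ P.T + mean                    # project and reconstruct

# Toy data: 500 noisy copies of one underlying 16-sample profile (rank-1 signal).
rng = np.random.default_rng(0)
base = np.outer(np.ones(500), np.sin(np.linspace(0.0, np.pi, 16)))
noisy = base + 0.1 * rng.standard_normal(base.shape)
clean = pca_denoise(noisy, k=1)
```

Discarding the minor eigenvectors removes most of the (uncorrelated) noise while the dominant modes retain the shared structure, which is the mechanism that lets the method preserve texture.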
The document describes a new hierarchical deep learning algorithm for facial expression recognition (FER). The algorithm extracts appearance features from preprocessed LBP images using a convolutional neural network. It also extracts geometric features by tracking the coordinates of action unit landmarks, which are facial muscles involved in expressions. These two types of features are fused in a hierarchical structure. The algorithm combines the softmax outputs of each network by considering the second-highest predicted emotion. It also uses an autoencoder to generate neutral expression images to help extract dynamic features between neutral and emotional expressions. The algorithm achieved 96.46% accuracy on the CK+ dataset and 91.27% on the JAFFE dataset, outperforming other recent FER methods.
The document describes an algorithm for eye detection in face images. It begins with face detection using skin color detection in HSV color space. Then it finds the symmetric axis of the extracted face region using gradient orientation histograms to determine the location of the eyes. It further finds the symmetric axis within the eye region to locate the center of the eyes. The algorithm aims to accurately detect the eyes even when the face is rotated, which is important for applications like face recognition and gaze tracking.
PRE-PROCESSING TECHNIQUES FOR FACIAL EMOTION RECOGNITION SYSTEMIAEME Publication
Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions. Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, automatic recognition of facial expressions using image template matching techniques suffers from natural variability in facial features and recording conditions. In spite of the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature extraction and classification technique for emotion recognition is still an open problem. Image pre-processing and normalization is a significant part of face recognition systems, and changes in lighting conditions produce a dramatic decrease in recognition performance. In this paper, pre-processing techniques based on K-Nearest Neighbor, a Cultural Algorithm, and a Genetic Algorithm are used to remove noise in the facial image to enhance emotion recognition. The performance of the preprocessing techniques is evaluated with various performance metrics.
BRAIN PORTION EXTRACTION USING HYBRID CONTOUR TECHNIQUE USING SETS FOR T1 WEI...sipij
Brain portion extraction from magnetic resonance images (MRI) of human head scans is an important process in medical image analysis. In this paper, we propose a computationally simple and robust brain segmentation method. The method is based on forming a contour from the intensity values that satisfy a set property and detecting the boundary of the brain; after detecting the boundary, the brain portion is segmented. Experiments were conducted by applying the method on 3 volumes of T1 MRI data collected from the Internet Brain Segmentation Repository (IBSR) and comparing the results with those of the popular skull stripping method, the Brain Extraction Tool (BET). The experimental results show that the proposed algorithm gives better results than BET.
This document summarizes a student project on face recognition. It begins with an introduction to face recognition, its applications, and common challenges. It then reviews literature on existing face recognition methods and identifies problems related to tilted poses and variations in illumination and expression. The proposed method will work to improve recognition rates under these conditions in two phases - training and testing. The method aims to enhance the preprocessing and feature extraction steps to make the system more robust. A basic flowchart of the proposed approach is provided, and the document concludes with references.
Recognition of emotional states using EEG signals based on time-frequency ana...IJECEIAES
The recognition of emotions is of vast significance and a rapidly developing field of research in recent years. The applications of emotion recognition have left an exceptional mark in various fields including education and research. Traditional approaches used facial expressions or voice intonation to detect emotions; however, facial gestures and spoken language can lead to biased and ambiguous results. This is why researchers have started to use the electroencephalogram (EEG) technique, which is a well-defined method for emotion recognition. Some approaches used standard, pre-defined methods from the signal processing area, and some worked with either fewer channels or fewer subjects to record EEG signals. This paper proposes an emotion detection method based on time-frequency domain statistical features. A box-and-whisker plot is used to select the optimal features, which are later fed to an SVM classifier for training and testing on the DEAP dataset, where 32 participants of different genders and age groups are considered. The experimental results show that the proposed method exhibits 92.36% accuracy on our tested dataset. In addition, the proposed method outperforms state-of-the-art methods by exhibiting higher accuracy.
Human’s facial parts extraction to recognize facial expressionijitjournal
Real-time facial expression analysis is an important yet challenging task in human computer interaction. This paper proposes a real-time person-independent facial expression recognition system using a geometrical feature-based approach. The face geometry is extracted using the modified active shape model. Each part of the face geometry is effectively represented by the Census Transformation (CT) based feature histogram. The facial expression is classified by the SVM classifier with an exponential chi-square weighted merging kernel. The proposed method was evaluated on the JAFFE database and in a real-world environment. The experimental results show that the approach yields a high recognition rate and is applicable in real-time facial expression analysis.
This document presents a method for brain tumor segmentation using multi-level Otsu thresholding and the Chan-Vese active contour model. The method first applies multi-level Otsu thresholding to preprocessed MRI images to obtain the initialization area for the active contour model; specifically, it uses 3-level Otsu thresholding. It then uses the Chan-Vese active contour model to segment the tumor area, starting from the initialized area. The method is tested on 85 brain MRI images and achieves a Dice similarity coefficient of 0.7856, outperforming the other compared methods.
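The Dice similarity coefficient used to evaluate the segmentation above (0.7856) has a standard definition, 2|A∩B| / (|A|+|B|); a minimal numpy sketch with an illustrative function name:

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:        # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(seg, gt).sum() / denom
```

A value of 1.0 means the segmented mask and the ground truth coincide exactly; 0.0 means no overlap at all.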
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ...CSCJournals
In this paper we present a comparative study on the fusion of visual and thermal images using different wavelet transformations. Coefficients of discrete wavelet transforms from the visual and thermal images are computed separately and combined, and the inverse discrete wavelet transformation is then taken to obtain the fused face image. Both Haar and Daubechies (db2) wavelet transforms have been used to compare recognition results. For the experiments, the IRIS Thermal/Visual Face Database was used. Experimental results using Haar and Daubechies wavelets show that the presented approach achieves a maximum success rate of 100% in many cases.
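The pipeline described above (forward DWT on both images, combine coefficients, inverse DWT) can be sketched with a hand-rolled one-level Haar transform in numpy. The coefficient-combination rules below (average the approximation band, keep the larger-magnitude detail coefficient) are common fusion choices and only an assumption about the paper's exact scheme:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar DWT; image height and width must be even."""
    a = (img[0::2] + img[1::2]) / 2.0   # row-pair averages
    d = (img[0::2] - img[1::2]) / 2.0   # row-pair details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(visual, thermal):
    """Average approximation bands; keep the stronger detail coefficient."""
    cv, ct = haar2d(visual), haar2d(thermal)
    fused = [(cv[0] + ct[0]) / 2.0]           # LL: average
    for dv, dt in zip(cv[1:], ct[1:]):        # LH/HL/HH: max-magnitude rule
        fused.append(np.where(np.abs(dv) >= np.abs(dt), dv, dt))
    return ihaar2d(*fused)
```

A production version would use a wavelet library (e.g. PyWavelets) for multi-level and db2 transforms; the hand-rolled Haar here just keeps the sketch self-contained.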
Survey analysis for optimization algorithms applied to electroencephalogramIJECEIAES
This paper presents a survey of optimization approaches that analyze and classify electroencephalogram (EEG) signals. The automatic analysis of EEG presents a significant challenge due to the high-dimensional data volume. Optimization algorithms seek to achieve better accuracy by selecting practical features and reducing unwanted ones. Forty-seven reputable research papers are reviewed in this work, emphasizing the developed and executed techniques, divided into seven groups based on the applied optimization algorithm (particle swarm optimization (PSO), ant colony optimization (ACO), artificial bee colony (ABC), grey wolf optimizer (GWO), Bat, Firefly, and other optimizer approaches). The main measures used to analyze the papers are accuracy, precision, recall, and F1-score. Several datasets have been utilized in the included papers, such as EEG Bonn University, CHB-MIT, an electrocardiography (ECG) dataset, and others. The results show that the PSO and GWO algorithms have achieved the highest accuracy rates, around 99%, compared with other techniques.
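As a point of reference for the PSO family surveyed above, a minimal global-best particle swarm optimizer fits in a few lines of numpy. The inertia and acceleration constants below are conventional textbook values, not taken from any of the surveyed papers:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer (global-best topology), minimizing f."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest = x.copy()                               # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()           # global best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia / cognitive / social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

For EEG feature selection, the surveyed papers typically optimize over binary feature masks with a classifier-accuracy objective rather than a continuous test function as here.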
An Ear Recognition Method Based on Rotation Invariant Transformed DCT IJECEIAES
Human recognition systems have gained great importance recently in a wide range of applications such as access control, criminal investigation, and border security. The ear is an emerging biometric with a rich and stable structure that can potentially be implemented reliably and cost-efficiently; human ear recognition has therefore been researched widely and has made great progress. The high recognition rates reported by most existing methods can be reached only under closely controlled conditions: in practice, even a slight, unavoidable amount of rotation and translation can be injurious to system performance. In this paper, a method that uses a transformed type of DCT is implemented to extract meaningful features from ear images. The algorithm is quite robust to ear rotation, translation, and illumination. The proposed method is evaluated on two popular databases, USTB II and IIT Delhi II, and achieves a significant improvement in performance compared to methods based on LBP, DSIFT, and Gabor. Because only the important coefficients are considered, the method is also faster than the compared methods.
This document summarizes a study that used artificial neural networks (ANN) to segment MRI brain images into gray matter, white matter, and cerebrospinal fluid in order to analyze and classify three neurodegenerative diseases: Alzheimer's disease, Parkinson's disease, and epilepsy. Real MRI data from patients with these diseases was preprocessed, features were extracted using Gabor filters, and ANN was used to classify tissues. The ANN approach achieved 96.13% accuracy for Alzheimer's classification, 93.26% for Parkinson's, and 91.33% for epilepsy. The study demonstrated that ANN is effective for automated brain tissue segmentation and shows potential for assisting in diagnosis of neurological diseases.
Similar to Non-contact Heart Rate Monitoring Analysis from Various Distances with different Face Regions
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
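The accuracy, precision, and recall figures quoted above follow the standard confusion-matrix definitions; a small self-contained helper (illustrative only, not the paper's embedded code):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, and recall for binary labels.

    precision = TP / (TP + FP): of the events flagged positive, how many were.
    recall    = TP / (TP + FN): of the actual positives, how many were flagged.
    """
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```

For the paper's multi-class setting (several road events and driving actions), these would be computed per class and averaged.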
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control scheme for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used in wind power conversion systems. First, a doubly fed induction generator model was constructed. A control law is then formulated to govern the flow of energy between the stator of the DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC), and second order sliding mode controller (SOSMC). Their results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for this study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par...IJECEIAES
Wide application of the proportional-integral-differential (PID) regulator in industry requires constant improvement of methods for adjusting its parameters. The paper deals with the optimization of PID-regulator parameters using neural network methods. A methodology for choosing the architecture (structure) of the neural network optimizer is proposed, which consists of determining the number of layers, the number of neurons in each layer, and the form and type of activation function. Algorithms for training the neural network based on minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is used to select the optimal training rate of the neurons. The neural network optimizer, which is a superstructure over the linear PID controller, allows the regulation error to be reduced from 0.23 to 0.09, thus reducing the power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
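To make the role of the PID terms concrete, here is a toy discrete PID loop on a first-order plant (x' = -x + u); this is a generic illustration, not the paper's neural-network superstructure, and the gains are hand-picked for the toy plant:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, steps=300, dt=0.01):
    """Discrete PID driving a first-order plant x' = -x + u (Euler steps)."""
    x, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt                        # accumulated error
        deriv = (err - prev_err) / dt            # error rate of change
        u = kp * err + ki * integ + kd * deriv   # PID control signal
        x += (-x + u) * dt                       # Euler step of the plant
        prev_err = err
    return x
```

Dropping the integral term (ki = 0) leaves this plant with the classic steady-state offset, settling at kp/(1+kp) instead of the setpoint; removing that kind of residual regulation error is what the integral action, and tuning methods layered on top of the PID, are for.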
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. Through the amalgamation of sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, this controlling technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a decreased count of voltage references, thereby simplifying the control algorithm.
A review on features and methods of potential fishing zoneIJECEIAES
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features like sea surface temperature (SST) and sea surface height (SSH), along with the classification methods applied to them. This study underscores the importance of examining potential fishing zones using advanced analytical techniques and thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms to identifying potential fishing zones, whose prediction relies significantly on the effectiveness of those algorithms. Previous research has assessed the performance of models like support vector machines (SVM), naïve Bayes, and artificial neural networks (ANN); in one reported result, SVM classified fisheries test data with 97.6% accuracy, higher than naïve Bayes's 94.2%. Considering recent works in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f...IJECEIAES
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make small devices that boost productivity, with the goal of optimizing device density. Scientists are reducing connection delays to improve circuit performance, which led to three-dimensional integrated circuit (3D IC) concepts that stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical coupling is a big worry with 3D integrated circuits. Researchers have developed and tested through silicon via (TSV) and substrate models to decrease electrical wave coupling. This study illustrates a novel noise coupling reduction method using several electrical coupling models. The new paradigm achieves a 22% drop in coupling from the wave-carrying TSVs to the victim TSVs and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types; the advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. The paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows setups to advance their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The result of a case study representing a dairy milk farmer supports the theoretical work and highlights its benefits to existing plants. The short return on investment of the proposed approach supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using the switching strategy, the low-order harmonics in the output current and voltage are not produced, and consequently, the size of the output filter would be reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in the MG test systems, and the simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
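A NARX model regresses the current output on lagged outputs and lagged inputs through a nonlinear map. A minimal least-squares sketch follows; the quadratic output term used as the nonlinearity here is a hand-picked illustration, since the paper's actual regressor structure is not given in the abstract:

```python
import numpy as np

def narx_regressors(u, y, k, nu=2, ny=2):
    """Regressor vector at step k: lagged outputs, lagged inputs,
    a quadratic output term (the nonlinearity), and a bias."""
    phi = [y[k - i] for i in range(1, ny + 1)]
    phi += [u[k - i] for i in range(1, nu + 1)]
    phi += [y[k - 1] ** 2, 1.0]
    return phi

def fit_narx(u, y, nu=2, ny=2):
    """Fit y[k] ≈ θ·φ(k) over the whole record via least squares."""
    lag = max(nu, ny)
    rows = [narx_regressors(u, y, k, nu, ny) for k in range(lag, len(y))]
    targets = [y[k] for k in range(lag, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

def narx_predict(theta, u, y, nu=2, ny=2):
    """One-step-ahead predictions using the same regressor structure."""
    lag = max(nu, ny)
    return [float(np.dot(theta, narx_regressors(u, y, k, nu, ny)))
            for k in range(lag, len(y))]
```

For a battery, u would be the applied current and y the terminal voltage (or state-of-charge); richer nonlinearities, e.g. a neural network in place of the fixed polynomial terms, are the usual next step.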
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level about smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions are highlighted. To that effect, we searched the Web of Science (WoS) and the Scopus databases. We obtained 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. With the extraction limitation in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences on the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, which are divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed based on several factors, including implementation costs, non-detected zones, declining power quality, and response times, using the analytical hierarchy process (AHP). The multi-criteria decision-making analysis yields overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods when all criteria are compared together. Thus, it can be seen from the total weights that hybrid approaches are the least suitable choice, while signal processing-based methods are the most appropriate islanding detection method to select and implement in a power system with respect to the aforementioned factors. Using Expert Choice software, the proposed hierarchy model is studied and examined.
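The AHP weights quoted above come from pairwise comparison matrices. A minimal sketch using the common geometric-mean approximation of the principal eigenvector follows; the paper used Expert Choice software, not this code:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix,
    via the geometric mean of each row (an approximation of the
    principal eigenvector)."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[0])   # row geometric means
    return gm / gm.sum()                        # normalize to sum to 1

def consistency_index(pairwise, weights):
    """Saaty's consistency index (λ_max - n) / (n - 1); near 0 means
    the pairwise judgments are mutually consistent."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    lam = (A @ weights / weights).mean()        # estimate of λ_max
    return (lam - n) / (n - 1)
```

For a perfectly consistent matrix, where entry (i, j) equals w_i / w_j, the method recovers the weights exactly and the consistency index is zero.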
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by environmental factors. This variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV connected to the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods like incremental conductance (INC) in enhancing solar cell efficiency and minimizing the response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m², the results show that the proposed method reduces the total harmonic distortion (THD) of the injected current to the grid by approximately 46% and 38%, respectively, compared to conventional methods. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
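The THD figures above follow the standard definition: the RMS of the harmonic components relative to the fundamental. A minimal FFT-based estimator in numpy is shown below; the bin-picking assumes the analysis window spans an integer number of fundamental periods, and the function name is illustrative:

```python
import numpy as np

def thd(signal, fs, f0):
    """Total harmonic distortion: RMS of harmonics 2..N over the
    fundamental magnitude, estimated from the FFT of the waveform."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def mag_at(f):
        # magnitude of the spectral bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fund = mag_at(f0)
    harmonics = [mag_at(k * f0) for k in range(2, int(freqs[-1] // f0) + 1)]
    return np.sqrt(sum(h * h for h in harmonics)) / fund
```

For grid-injected current, f0 would be the 50 or 60 Hz line frequency, and the result is compared against limits such as those in IEEE 519.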
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and improve transient response, two traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
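The perturb and observe (P&O) method mentioned above is a simple hill climb on the PV power-voltage curve. A generic fixed-step sketch follows; the paper modifies the step size, which this minimal version deliberately does not:

```python
def perturb_and_observe(power_of, v0=10.0, step=0.2, iters=200):
    """Classic P&O hill-climb: keep perturbing the operating voltage
    in the direction that last increased the measured power."""
    v, direction = v0, +1.0
    p_prev = power_of(v)
    for _ in range(iters):
        v += direction * step
        p = power_of(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

A fixed step leaves the classic steady-state oscillation around the maximum power point, which is exactly the limitation that adaptive-step variants like the modified P&O above aim to reduce.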
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results. Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA designs. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapid and remote monitoring of solar cell system status parameters (solar irradiance, temperature, and humidity) is a critical issue in enhancing their efficiency. Hence, in the present article an improved smart internet of things (IoT) prototype based on an embedded NodeMCU ESP8266 (ESP-12E) was implemented experimentally. Three different regions in Egypt (the cities of Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profiles, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live via Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m², respectively, during the solar day. The accuracy and rapidity of obtaining monitoring results with the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. Moreover, the solar power radiation results for the three regions suggest Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell station.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. IoT security systems must therefore monitor and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived for identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention rely on supervised learning, which requires a large amount of labeled training data; such complex datasets are impossible to source in a large network like IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
IJECE ISSN: 2088-8708
Non-contact Heart Rate Monitoring Analysis from Various Distances with …. (Norwahidah Ibrahim)
3031
Instead of relying on three color channel traces, it has been reported in [4] that the green color channel provides the strongest PPG signal, since hemoglobin light absorption is more sensitive to oxygenation changes for green light than for blue or red. Recent work in [3] claimed that a single green trace yields a more accurate ICA outcome for separating source from noise, which contrasts with [5], which found that ICA performance dropped slightly when using the green trace alone. A head-motion-based method for pulse detection was proposed in [6], which showed a promising outcome for translating subtle motion into an HR estimate.
Most of the aforementioned methods work well in a controlled laboratory environment. However, their test subjects remain static at a near distance from the acquisition device. In a real scenario it is common for the subject to move and for the distance from the camera to vary. Recent work in [7-10] addresses the motion issue by combining facial landmark extraction, tracking and various illumination rectification schemes. However, these systems were not tested under varying acquisition distances, which is where the strength of remote HR resides, since that task is daunting for contact-based HR systems. In this paper, we investigate the capability of remote HR estimation to acquire data from various distances and provide an optimal framework that satisfies this constraint.
2. SYSTEM OVERVIEW
Figure 1 shows an overview of the proposed framework, which consists of five main steps, namely facial tracking, signal extraction, blind source separation (BSS) using ICA, filtering and, lastly, histogram analysis.
Figure 1. Overall system flow for the proposed framework
The input to this system is a face video recorded with a video camera, and the output is the estimated HR measured in beats per minute (bpm). To transform the input into the output, a facial tracker is first applied to the image sequence. The tracker follows rigid and non-rigid facial motion and outputs 49 important facial landmark points, which are used to construct the region of interest (ROI). Next, the PPG signal is extracted from the constructed ROI based on temporal random traces of the green color channel. Since the acquired signal usually contains noise generated by the environment, an ICA-based BSS strategy is used to separate the primary PPG signal from the unwanted information. The primary PPG signal then undergoes a sequence of filtering steps to obtain the optimal power spectral density (PSD) of the signal and, consequently, the HR. However, a single sequence of HR readings is still subject to measurement variation. To overcome this, a histogram of repeated HR readings is constructed from the same ROI with different random traces. Eventually, the HR is estimated as the average of the histogram bin with the lowest variation.
2.1. Facial Detection
The facial tracker is used in this system to detect the face and extract facial landmark information. To satisfy the face detection requirement, a set of validated face regions is extracted using an AdaBoost-based cascade with Haar-like features [11]. The classifier is constructed as a linear combination of weak classifiers, trained on positive (face) and negative (non-face) images. During detection, the series of classifiers is applied to every image sub-window at different scaling factors, and a region is considered valid only if it passes all the classifier stages.
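The Haar-like features at the heart of the cascade are cheap to evaluate thanks to the integral image. The following is a minimal sketch of that computation (the function names are illustrative, not taken from the paper or from [11]):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so that
    rect_sum below is a constant-time lookup."""
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left corner (x, y) and size w x h."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum,
    a cheap detector of horizontal intensity contrast."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Because every rectangle sum costs four lookups regardless of its size, the cascade can afford to scan every sub-window at every scale.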
From the detected face region, facial landmarks are located using a cascaded pose regression model [12]. The method starts from a raw initial guess of the landmark positions and uses a cascade of regressors to infer the shape as a whole, explicitly minimizing the alignment error over the training data. Let
S = (x1, x2, ..., xp) denote the coordinates of all p facial landmarks in a bounding box I, and rt(.,.) the t-th regressor in the cascade; the current shape estimate of the facial locations is then updated by the following formulation:

S(t+1) = S(t) + rt(I, S(t))     (1)
The critical point of the cascade is that each regressor makes its predictions from features, such as pixel intensity values, indexed relative to the current shape estimate. Each regressor in the cascade returns a vector that updates the current shape estimate in an additive manner. The method does not build any parametric model; instead, it merely learns the correlations between image features to infer a facial shape. To train the regressors, each ground-truth shape in the training set is transformed to the mean shape, which is rescaled and centered at the origin. A sample of the 49 detected facial landmarks is shown in Figure 2(a).
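The additive update in Equation (1) can be sketched in a few lines; the regressors below are hypothetical callables standing in for the trained cascade stages of [12]:

```python
import numpy as np

def run_cascade(features, s0, regressors):
    """Apply the update S(t+1) = S(t) + r_t(I, S(t)) for each regressor
    r_t in turn, starting from the initial shape guess s0."""
    s = np.array(s0, dtype=float)
    for r in regressors:
        s = s + r(features, s)   # each stage predicts a shape increment
    return s
```

For example, three stages that each predict a constant increment of 0.5 per coordinate move a zero-initialized shape to 1.5 per coordinate, illustrating how the increments accumulate.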
From the annotated facial landmark locations, an ROI on the face region is constructed. In this paper, four ROIs were investigated to determine the most suitable patch for the varying-distance requirement, as shown in Figure 2(b). The first patch lies on the right cheek, chosen because less non-rigid motion is generated in this area than in other regions. The second patch lies at the center of the face and includes the eyes and nose but excludes the forehead, since the forehead tends to be covered by hair; this area was chosen because a previous study [13] reported the center of the face to be the most suitable ROI for video-based HR. The third patch is the whole face since, in principle, the larger the region, the higher the possibility of extracting PPG information from a far distance. For the whole-face region, however, the horizontal dimension is reduced by 10% and the vertical dimension by 20% with respect to the center location, to exclude areas that plausibly belong to the background. Finally, the fourth patch lies on the lower part of the face and includes the nose and mouth but excludes the eyes and chin; this region was chosen because it exhibits less non-rigid motion and has a wider ROI dimension than the right-cheek patch. From the selected ROI, a random pair of temporal green color channel traces carrying the PPG signal is extracted.
Figure 2. Sample facial tracker results: (a) the 49 facial landmarks, (b) the ROIs for extracting the PPG signal
2.2. PPG Signal Extraction and Noise Removal
In each of the constructed ROI the PPG signal is obtained randomly pickup two different locations
inside the ROI, and consequently taps the reading of green color channel in the selected location over the
predefined video sampling time. The Green channel was selected for PPG extraction because it provides the
strongest PPG signal information compared to other color channel [4]. Sample of the extracted two traces of
PPG is shown in Figure 3. It can be seen from the figure that the obtained signals contaminated with noise
due to the effect from the environment and therefore need to undergo some filtering step prior to estimate the
HR value.
Figure 3. Sample of two raw PPG signals extracted from the average green trace in the face region
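A minimal sketch of the green-trace extraction follows; the frame layout (T, H, W, 3) and RGB channel order are assumptions for illustration:

```python
import numpy as np

def green_trace(frames, roi):
    """Temporal PPG trace: the mean green-channel intensity inside the ROI,
    one value per frame. roi = (x, y, w, h) in pixel coordinates."""
    x, y, w, h = roi
    return np.array([frame[y:y + h, x:x + w, 1].mean() for frame in frames])
```

Two calls with different randomly chosen ROI locations yield the pair of raw traces that is later fed to the ICA stage.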
2.3. Noise Removal using ICA
The raw PPG traces calculated from the green color channel of the tracked patch incorporate illumination variation and hence produce noisy traces. This section describes a mechanism to isolate the illumination effect that hinders accurate PPG signal extraction. Fundamentally, two factors affect the raw PPG reading: the blood volume variation caused by the cardiac pulse, and the illumination changes. The combination of both can be represented by the linear relation defined in Equation (2), in which s is the green channel signal and y is the illumination variation. Ideally, if the parameter y could be estimated, the pure cardiac pulse signal could be obtained. In practice, however, the signal y cannot be measured directly.
PPGraw = s + y (2)
In this paper, ICA is used to separate the green channel signal from the illumination variation effect. ICA is a BSS technique capable of recovering unobserved signals from a set of observed mixtures in which the mixing process is unknown and assumed to be linear. The raw PPG traces obtained previously are fed to the ICA under the assumption that the rectified PPG and illumination components are independent. After solving for the components, the ICA produces two separated signals, as depicted in Figure 4. From the figure, it can be clearly seen that the two input traces produce two components, one of which has a signal pattern quite similar to one of the raw traces. To determine explicitly which group each generated ICA signal belongs to, i.e., PPG or illumination, a nearest-average Euclidean distance measure is computed between the two components and the traces; the component with the lowest value is labeled as the rectified PPG signal, while the other is denoted the illumination variation.
Figure 4. Sample ICA implementation of BSS: original signals on the left and the two ICA components on the right
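The separation step can be sketched with scikit-learn's FastICA (the paper does not name a specific ICA implementation, so this choice is an assumption). The component closest to a raw trace in Euclidean distance is labeled as the rectified PPG, mirroring the rule described above:

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_ppg(trace_a, trace_b):
    """Unmix two raw green-channel traces and return the component
    labeled as the rectified PPG signal."""
    X = np.column_stack([trace_a, trace_b])
    comps = FastICA(n_components=2, random_state=0).fit_transform(X)
    # label the component nearest to the first raw trace as the PPG signal
    dists = [np.linalg.norm(comps[:, k] - trace_a) for k in range(2)]
    return comps[:, int(np.argmin(dists))]
```

Note that FastICA returns components with arbitrary sign and scale, so in a full implementation the distance comparison would typically be made after normalizing both components and traces.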
2.4. PPG Filtering
Once the pure PPG signal is in hand, the final step is to determine the corresponding heart rate. However, the obtained signal still contains noise, and further filtering is necessary. In this paper, two temporal filters, detrending [14] and a moving average, are implemented to reduce the slow, non-stationary trend of the signal and to smooth the random noise prior to the frequency-domain conversion. The filtered PPG signal is converted to the frequency domain to determine the power spectral density (PSD) using the Welch method [15], with the frequency spectrum constrained to the range of 0.7 Hz to 4 Hz, which represents HR values from 42 bpm to 240 bpm. Eventually, the HR is calculated by multiplying the frequency of the maximal PSD response by 60. The signals generated by this process are shown in Figure 5, which consists of the detrending result in (a), the moving average outcome in (b) and the corresponding PPG spectrum obtained with the Welch method in (c).
Figure 5. Filtering steps: (a) detrending, (b) moving average, (c) Welch PSD with frequency range 0.7 Hz - 4 Hz
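The filtering chain can be sketched with common stand-ins: scipy's linear `detrend` replaces the smoothness-priors detrending of [14], and a 5-point moving average (the window length is an assumption) smooths the trace before the Welch PSD peak search in the 0.7-4 Hz band:

```python
import numpy as np
from scipy.signal import detrend, welch

def estimate_hr(ppg, fs=60.0):
    """Return the heart rate in bpm from a raw PPG trace sampled at fs Hz."""
    x = detrend(ppg)                                  # remove slow trend
    x = np.convolve(x, np.ones(5) / 5, mode="same")   # 5-point moving average
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    band = (f >= 0.7) & (f <= 4.0)                    # 42-240 bpm
    return f[band][np.argmax(pxx[band])] * 60.0       # peak frequency -> bpm
```

With a 60 fps recording, a 10-second trace dominated by a 1.2 Hz pulse should yield an estimate near 72 bpm, to within the Welch frequency resolution.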
2.5. Histogram Analysis
The HR value obtained from a single sequence calculation is still subject to variation; hence, a histogram-based analysis is performed to determine a reading that is consistent across repetitions. In this paper, 10 repeated HR readings from the same image sequence are taken. Any HR reading that deviates significantly from the majority bin is eliminated, while the remaining readings are averaged to obtain an HR that is consistent over the sampled time period.
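The consensus step can be sketched as follows; the bin count (5) is an assumption, since the paper does not state how the bins are formed:

```python
import numpy as np

def consistent_hr(readings, bins=5):
    """Keep only readings that fall in the most populated histogram bin
    and return their average."""
    readings = np.asarray(readings, dtype=float)
    counts, edges = np.histogram(readings, bins=bins)
    k = int(np.argmax(counts))
    keep = readings[(readings >= edges[k]) & (readings <= edges[k + 1])]
    return keep.mean()
```

For example, a set of ten readings clustered near 72 bpm with one outlier at 120 bpm collapses to the average of the clustered values, discarding the outlier.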
3. RESULTS AND ANALYSIS
To analyze the performance of HR reading over various distances and different face regions, we conducted an experiment in an indoor environment under controlled lighting. In the experiment, videos of 1440x1080 pixel resolution at 60 fps were recorded with the participant at distance settings of 1 meter, 2 meters and 5 meters, respectively. During recording, the participant maintained a normal static posture throughout. Sample snapshots of the experimental setting, with facial landmarks detected, are shown in Figure 6.
The analysis determines the combination of distance and face ROI that yields the highest accuracy for the estimated HR. As mentioned previously, four ROIs were selected for this experiment: right cheek, center of face, whole face and lower face. Each patch was evaluated on the videos taken at the different distances explained above, and the results are shown in Table 1. From the table, for the 1 meter reading (Figure 6, top left), the most accurate HR was generated from the right cheek region, with 98.77% accuracy. This happens because the region generates less non-rigid movement, which reduces motion artifacts in the obtained PPG signal and consequently results in a more accurate HR reading.
Table 1. Accuracy for each patch (ROI) at each distance
(rows 1-2: near distance; row 5: farther distance)

Range (m)   Right Cheek (%)   Center of Face (%)   Lower Part of Face (%)   Whole Face (%)
1           98.77             77.78                87.65                    87.65
2           79.27             80.49                87.80                    87.65
5           93.99             91.57                97.60                    91.62
Average     92.27             89.22                93.61                    91.62
Figure 6. Experimental setup for determining the HR value, with detected facial landmarks, from distances of 1 m (top left), 2 m (top right) and 5 m (bottom)
Moving to the 2 meter case (Figure 6, top right), the most accurate HR was obtained from the lower part of the face, with 87.80% accuracy. Like the right cheek region, the lower part yields less non-rigid movement, and since its ROI dimension is larger, it can generate an accurate HR reading even from a far distance. For the 5 meter case, the farthest distance from the camera (Figure 6, bottom), the most accurate HR was again obtained from the lower part of the face, with 97.60% accuracy. Once more, this is due to the smaller amount of non-rigid movement in the area, hence less motion artifact, together with the larger ROI dimension mentioned previously, which makes the obtained PPG more accurate.
Since the analyses focus on heart rate at various distances, the distance in the experiments ranges up to five meters. Quantitative measures, namely accuracy and percent error, are calculated using the equations below, where HRest is the estimated HR and HRref is the reference reading:

Accuracy (%) = (1 - |HRest - HRref| / HRref) x 100 (3)

Error (%) = (|HRest - HRref| / HRref) x 100 (4)
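These two measures reduce to a pair of simple helpers (a straightforward reading of accuracy as 100 minus the percent error against a reference HR, e.g. from a contact device):

```python
def hr_error_pct(hr_est, hr_ref):
    """Percent error between the estimated and reference HR."""
    return abs(hr_est - hr_ref) / hr_ref * 100.0

def hr_accuracy_pct(hr_est, hr_ref):
    """Accuracy as the complement of the percent error."""
    return 100.0 - hr_error_pct(hr_est, hr_ref)
```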
From the obtained data, for a single-person HR reading over the 5 meter range, it can be deduced that the ROI producing the least accurate reading was the patch placed at the center of the face, with an average accuracy of 89.22%, while the optimal ROI was found to be the lower part of the face, with an average of 93.61% accurate HR estimation. One reason contributing to the inaccuracy of the center-of-face reading is that eye movement is included in the selected ROI. Generally, however, every selected patch is capable of producing around 80% accurate HR estimation at each of the distance variations.
4. CONCLUSION AND FUTURE WORKS
This paper investigated the effect of ROI variation over different distances for a non-contact HR estimation framework. The framework comprises five steps: video acquisition, facial detection, PPG signal extraction, PPG signal filtering and histogram analysis. From the experiments, we conclude that the system works properly at distances ranging from 1 to 5 meters, with an average accuracy of 92%. Among the evaluated ROIs, the lower part of the face produced the highest average accuracy over the distance variation, at 93.61%.
In the future, the system is expected to detect HR readings across a range of skin tones and to extract HR readings from multiple persons simultaneously. We are also interested in studying the effect of different signal analysis methods on the accuracy of the HR reading. Apart from that, the system can be improved by reducing the processing time, making it applicable to real-time HR measurement scenarios.
ACKNOWLEDGEMENTS
The authors would like to thank the Ministry of Education (MOE) and Universiti Tun Hussein Onn Malaysia (UTHM) for supporting this research under the Fundamental Research Grant Scheme (Vot. no. 1582).
REFERENCES
[1]. G. Cennini, J. Arguel, K. Aksit and A. van Leest, "Heart rate monitoring via remote photoplethysmography with motion artifact reduction", Optics Express, 18(5): 4867-4875, 2010.
[2]. C. Li, C. Xu, C. Gui and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation", IEEE Trans. on Image Processing, 2010.
[3]. M.-Z. Poh, D. J. McDuff and R. W. Picard, "Advancements in noncontact, multiparameter physiological measurements using a webcam", IEEE Trans. on Biomedical Engineering, 2011.
[4]. W. Verkruysse, L. O. Svaasand and J. S. Nelson, "Remote plethysmographic imaging using ambient light", Optics Express, 16(26): 21434-21445, Dec 2008.
[5]. S. Kwon, H. Kim and K. S. Park, "Validation of heart rate extraction using video imaging on a built-in camera system of a smartphone", in IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2174-2177, Aug 2012.
[6]. G. Balakrishnan, F. Durand and J. Guttag, "Detecting pulse from head motions in video", in IEEE Computer Vision and Pattern Recognition (CVPR), pp. 3430-3437, June 2013.
[7]. X. Li, J. Chen, G. Zhao and M. Pietikainen, "Remote heart rate measurement from face videos under realistic situations", in IEEE Computer Vision and Pattern Recognition (CVPR), pp. 4264-4271, 2014.
[8]. M. Kumar, A. Veeraraghavan and A. Sabharwal, "DistancePPG: Robust non-contact vital signs monitoring using a camera", Biomedical Optics Express, 6(5): 1565-1588, 2015.
[9]. A. Lam and Y. Kuno, "Robust heart rate measurement from video using select random patches", in IEEE International Conference on Computer Vision (ICCV), 2015.
[10]. R.-Y. Huang and L.-R. Dung, "Measurement of heart rate variability using off-the-shelf smart phones", Biomedical Engineering Online, 15:11, pp. 1-16, 2016.
[11]. P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features", in Proc. of Int. Conf. on Computer Vision and Pattern Recognition, pp. 511-518, 2001.
[12]. P. Dollar, P. Welinder and P. Perona, "Cascaded pose regression", in Proc. on Computer Vision and Pattern Recognition, 2010.
[13]. T. Pursche, J. Krajewski and R. Moeller, "Video-based heart rate measurement from human faces", in Proc. of IEEE International Conference on Consumer Electronics, pp. 544-545, 2012.
[14]. M. P. Tarvainen, P. O. Ranta-aho and P. A. Karjalainen, "An advanced detrending method with application to HRV analysis", IEEE Trans. on Biomedical Engineering, 2002.
[15]. P. Welch, "The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms", IEEE Trans. on Audio and Electroacoustics, 1967.
BIOGRAPHIES OF AUTHORS
Norwahidah Ibrahim was born in Johor, Malaysia, in 1993. She received the B.Eng degree in Electronics (Mechatronics) from Universiti Tun Hussein Onn Malaysia in 2016. She is currently pursuing her M.Eng degree in Electrical Engineering at the same university. Her current research interests are computer vision, image processing and signal processing.
Razali Tomari was born in Johor, Malaysia, in 1980. He received the B.Eng degree in Mechatronics from Universiti Teknologi Malaysia in 2003, the M.Sc in intelligent systems from Universiti Putra Malaysia in 2006, and the PhD degree in computer vision and robotics from Saitama University, Japan, in 2013. In 2003, he joined the Faculty of Electrical Engineering, Universiti Tun Hussein Onn Malaysia, as a tutor, and became a Senior Lecturer in 2013. His current research interests include computer vision, pattern recognition, smart wheelchairs and sensing technology.
WN Wan Zakaria received the B.Eng (2007) in Electronics and Mechanical Engineering from Chiba University, and the MSc (2008) and PhD (2012) from Newcastle University. She is currently a lecturer in the Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia. Her current interests include medical robotic systems, specifically the development of robot force control, image processing and computer-aided diagnosis, and the development of wearable devices. She is author and co-author of several journal papers and conference proceedings.
Nurmiza Binti Othman was born in Terengganu, Malaysia, in 1983. She received the M.Sc (Electrical and Electronic Engineering) from Utsunomiya University, Japan, in 2009, and the PhD (Electrical and Electronic Engineering) from Kyushu University, Japan, in 2014. In 2007, she joined the Department of Electronic Engineering, Universiti Tun Hussein Onn Malaysia, as a tutor, and since 2014 she has been appointed as a lecturer at the same institution. She is also a Senior Researcher at the UTHM Research Center for Applied Electromagnetics, Universiti Tun Hussein Onn Malaysia, and a member of the IEEE Engineering in Medicine and Biology Society. Her research interests include superconductor engineering, microfabrication, nanomagnetic materials, magnetic particle imaging and magnetic tomography.