The document describes a method for de-noising electrocardiogram (ECG) signals using empirical mode decomposition (EMD) combined with higher-order statistics (HOS). EMD decomposes the ECG signal into intrinsic mode functions (IMFs); HOS measures including kurtosis and the bispectrum are then applied to the IMFs to identify and remove Gaussian noise components. The algorithm is tested on ECG signals at different signal-to-noise ratios, and performance is measured using SNR improvement and percent root-mean-square difference. Results show the method effectively de-noises ECG signals.
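A minimal sketch of the HOS screening step, assuming the IMFs have already been produced by an EMD routine (EMD itself is not implemented here, and the 0.5 threshold is an illustrative choice, not a value from the document): a Gaussian component has excess kurtosis near zero, so only components far from zero are kept as signal-bearing.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis; approximately 0 for Gaussian data."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s2 = np.mean((x - m) ** 2)
    return np.mean((x - m) ** 4) / s2 ** 2 - 3.0

def screen_imfs(imfs, threshold=0.5):
    """Keep only components whose excess kurtosis is far from 0,
    i.e. those that look non-Gaussian (signal-bearing)."""
    return [imf for imf in imfs if abs(excess_kurtosis(imf)) > threshold]

rng = np.random.default_rng(0)
noise_imf = rng.standard_normal(5000)            # near-Gaussian: discarded
t = np.linspace(0, 1, 5000)
spike_imf = np.exp(-((t - 0.5) / 0.01) ** 2)     # ECG-like spike: kept

kept = screen_imfs([noise_imf, spike_imf])
denoised = np.sum(kept, axis=0)                  # reconstruct from kept IMFs
```

The reconstruction simply sums the retained IMFs; in the full method the bispectrum would provide an additional Gaussianity check.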
In many situations the electrocardiogram (ECG) is recorded during ambulatory or strenuous conditions, so the signal is corrupted by different types of noise, sometimes originating from another physiological process of the body. Noise removal is therefore an important aspect of signal processing. Five different filters, i.e. median, low-pass Butterworth, FIR, weighted moving average, and stationary wavelet transform (SWT), are presented together with their filtering effect on noisy ECG. Comparative analyses among these filtering techniques are described and statistical results are evaluated.
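As an illustration of two of the simpler filters in such a comparison, here is a hedged NumPy sketch of a median filter and a weighted moving average (window lengths and weights are illustrative choices, not values from the abstract):

```python
import numpy as np

def median_filter(x, k=5):
    """Sliding-window median; edges handled by reflection."""
    pad = k // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def weighted_moving_average(x, w):
    """Weighted moving average; w is normalized to sum to 1."""
    w = np.asarray(w, dtype=float)
    return np.convolve(x, w / w.sum(), mode="same")

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)                # ECG stand-in
noisy = clean + 0.3 * rng.standard_normal(t.size)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_median = np.mean((median_filter(noisy) - clean) ** 2)
mse_wma = np.mean((weighted_moving_average(noisy, [1, 2, 3, 2, 1]) - clean) ** 2)
```

Both filters should lower the mean squared error relative to the noisy input; which one wins depends on the noise type, which is the point of the comparison in the abstract.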
This document discusses modeling of biomedical signals. It introduces autoregressive (AR) and moving average (MA) modeling techniques. For AR modeling, it describes three methods for computing the model parameters: the least squares method, the autocorrelation method, and the covariance method. The least squares method minimizes the mean squared error between predicted and actual signal samples. The autocorrelation and covariance methods relate the AR model parameters to the autocorrelation function of the signal.
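The least-squares AR fit described above can be sketched directly with a linear solver; here an AR(2) process with known coefficients is simulated and the parameters recovered (the model order and coefficient values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a stable AR(2) process: x[n] = a1*x[n-1] + a2*x[n-2] + e[n]
a_true = np.array([0.75, -0.5])
n = 20000
x = np.zeros(n)
e = rng.standard_normal(n)
for i in range(2, n):
    x[i] = a_true[0] * x[i - 1] + a_true[1] * x[i - 2] + e[i]

# Least squares: minimize ||x[n] - (a1*x[n-1] + a2*x[n-2])||^2
p = 2
X = np.column_stack([x[p - 1:-1], x[p - 2:-2]])   # lagged regressors
y = x[p:]
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2_hat = np.mean((y - X @ a_hat) ** 2)        # driving-noise variance
```

The autocorrelation and covariance methods differ only in how the normal equations are assembled from the data; the least-squares route above solves them directly.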
Performance comparison of automatic peak detection for signal analyser (journal BEEI)
The aim of this paper is to propose a new peak detection method for a portable device, known as modified automatic threshold peak detection (M-ATPD). M-ATPD evolves out of ATPD with a focus on reducing computational time: it replaces the clustering threshold calculation in ATPD with a standard deviation threshold calculation. M-ATPD is about 2 times faster than ATPD for the control signal and 8.65 times faster for raw biosignals. It also shows a slight improvement in detection error, with a decrease of about 6.66% to 13.33% in peak detection on noisy signals, and it fixes the peak detection error on pulse control signals associated with ATPD. For raw biosignals, M-ATPD achieved a 19.41% lower detection error overall compared to ATPD.
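The standard-deviation thresholding idea can be sketched as follows; this is a generic illustration, not the paper's M-ATPD implementation, and the constants `k` and `min_dist` are assumed tuning values:

```python
import numpy as np

def std_threshold_peaks(x, k=2.0, min_dist=20):
    """Flag local maxima above mean + k*std; candidates closer than
    min_dist samples are merged, keeping the larger one."""
    x = np.asarray(x, dtype=float)
    thr = x.mean() + k * x.std()
    local_max = np.r_[False, (x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:]), False]
    cand = np.flatnonzero(local_max & (x > thr))
    peaks = []
    for i in cand:
        if peaks and i - peaks[-1] < min_dist:
            if x[i] > x[peaks[-1]]:   # keep the taller of two close peaks
                peaks[-1] = i
        else:
            peaks.append(i)
    return np.array(peaks)

rng = np.random.default_rng(3)
t = np.arange(1000)
sig = 0.1 * rng.standard_normal(t.size)
for c in (200, 500, 800):                  # three synthetic pulses
    sig = sig + 2.0 * np.exp(-0.5 * ((t - c) / 5.0) ** 2)

peaks = std_threshold_peaks(sig)
```

The threshold is recomputed from the whole record in one pass, which is where the claimed speed-up over a clustering-based threshold would come from.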
Estimation of Separation and Location of Wave Emitting Sources : A Comparison... (sipij)
This document compares the Burg method and Fourier transform method for estimating the separation and location of wave emitting sources. The Burg method provides higher resolution than the Fourier transform method and can better resolve adjacent sources. Simulation results show the Burg method more accurately estimates source separation and location, especially when the separation distance is small or noise is present. The percentage error is smaller for the Burg method compared to the Fourier transform method.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Filtering Electrocardiographic Signals using filtered-X LMS algorithm (IDES Editor)
This document presents a study on using a filtered-X least mean square (FXLMS) algorithm to remove various types of noise from electrocardiogram (ECG) signals. FXLMS is an adaptive noise cancellation technique. In simulations using a publicly available ECG database, it is shown to outperform the standard least mean square (LMS) algorithm in terms of signal-to-noise ratio when removing baseline wander, powerline interference, muscle artifacts, and motion artifacts from real ECG signals. The key aspects of the FXLMS algorithm and its application to adaptive noise cancellation in ECG signals are discussed.
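The LMS core that FXLMS builds on can be sketched as an adaptive noise canceller. Note this is the plain LMS update (FXLMS additionally filters the reference through a secondary-path model, which is omitted here), and the signal frequencies, step size, and tap count are illustrative assumptions:

```python
import numpy as np

def lms_cancel(d, ref, mu=0.01, taps=8):
    """Plain LMS canceller: adapt w so the filtered reference tracks the
    interference in d; the error e = d - w*ref is the cleaned output."""
    w = np.zeros(taps)
    e = np.zeros(len(d))
    for n in range(taps, len(d)):
        u = ref[n - taps:n][::-1]      # most recent reference samples
        y = w @ u                      # interference estimate
        e[n] = d[n] - y
        w += 2 * mu * e[n] * u         # LMS weight update
    return e

k = np.arange(5000)
clean = np.sin(2 * np.pi * k / 100)                      # ECG stand-in
interference = 0.8 * np.sin(2 * np.pi * k / 12.5 + 0.7)  # mains-like pickup
ref = np.sin(2 * np.pi * k / 12.5)                       # reference input
d = clean + interference                                 # primary input

e = lms_cancel(d, ref)
mse_before = np.mean((d[2500:] - clean[2500:]) ** 2)
mse_after = np.mean((e[2500:] - clean[2500:]) ** 2)
```

After convergence the error output tracks the clean component; the phase difference between the pickup and the reference is absorbed by the adaptive weights.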
The sequential inversion technique (SIT) and the differential coefficients method (DCM) are two methods for reconstructing true transient emission signals from measurements taken by analyzers, which introduce delays and dispersion. The SIT reconstructs the input second by second from the measured response and the dispersion characteristics. Testing with real data showed it can accurately reconstruct signals in the absence of noise; however, reconstruction fails if the dispersion characteristics change or the signal is noisy. The DCM defines the real input as a linear combination of the output and its derivatives, and was more accurate than the SIT when noise was present. Both methods aim to compensate for measurement delays and dispersion to obtain instantaneous emissions from analyzer readings.
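The DCM idea, input as a linear combination of the output and its derivatives, can be sketched for the simplest case of a first-order analyzer model (the first-order lag and the value of tau are assumptions for illustration, not from the document):

```python
import numpy as np

# Assume the analyzer behaves as a first-order lag with time constant tau:
#     tau * y'(t) + y(t) = u(t)
# DCM then recovers the input as a linear combination of the output and
# its derivative: u = y + tau * y'.
tau, dt = 2.0, 0.01
t = np.arange(0, 20, dt)
u = np.where((t > 5) & (t < 12), 1.0, 0.0)        # true transient input

y = np.zeros_like(u)                              # simulated analyzer response
for i in range(1, len(t)):
    y[i] = y[i - 1] + dt * (u[i - 1] - y[i - 1]) / tau

u_rec = y + tau * np.gradient(y, dt)              # DCM reconstruction
```

Away from the step instants the reconstruction is essentially exact, while the raw analyzer output lags badly; with measurement noise the derivative term amplifies it, matching the trade-off the document describes.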
The document reports on a subjective comparison of 13 speech enhancement algorithms conducted by Dynastat, Inc. using the ITU-T P.835 methodology. The algorithms encompassed four classes: spectral subtractive, subspace, statistical-model based, and Wiener. A noisy speech corpus called NOIZEUS was developed with IEEE sentences corrupted by 8 noises at varying SNRs. For the subjective tests, enhanced speech from 20 sentences in 4 noises at 2 SNRs was evaluated based on signal distortion, noise distortion, and overall quality. The statistical-model based methods performed best overall, followed by a multi-band spectral subtraction method. Incorporating noise estimation did not significantly improve performance for some algorithms.
Recovery of low frequency Signals from noisy data using Ensembled Empirical M... (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field of Engineering, Science and Technology, covering new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Ensemble Empirical Mode Decomposition: An adaptive method for noise reduction (IOSR Journals)
This document describes ensemble empirical mode decomposition (EEMD), an adaptive method for noise reduction in signals. EEMD is an improvement over empirical mode decomposition (EMD) that can overcome the problem of mode mixing. EEMD works by decomposing the signal into intrinsic mode functions (IMFs) in the presence of added white noise, which is then averaged out. The algorithm adds white noise to the target signal multiple times, applies EMD each time, and takes the mean of the IMFs as the final result. This process separates different scales present in the signal and reduces noise. The document evaluates EEMD on electrocardiogram and other non-stationary signals, demonstrating its effectiveness in noise reduction.
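The key mechanism, cancellation of the added white noise by ensemble averaging, can be sketched without an EMD implementation. In full EEMD each noisy copy would first be decomposed into IMFs and the IMFs averaged; that step is replaced here by direct averaging of the noisy copies, which is enough to show why the injected noise cancels:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 3 * t)

n_ensemble, eps = 100, 0.2
trials = [signal + eps * rng.standard_normal(t.size) for _ in range(n_ensemble)]
# In full EEMD: emd(trial) for each trial, then average the IMFs.
ensemble_mean = np.mean(trials, axis=0)

residual_one = np.std(trials[0] - signal)      # roughly eps
residual_avg = np.std(ensemble_mean - signal)  # roughly eps / sqrt(n_ensemble)
```

The residual of the ensemble mean shrinks by about 1/sqrt(N), which is why EEMD can afford to inject noise: it perturbs the sifting enough to break mode mixing, then averages itself away.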
Parametric method of power spectrum estimation (junjer)
The document discusses parametric methods of power spectrum estimation. It explains that parametric methods estimate the parameters of a mathematical model that describes the signal generation process. This involves selecting a model such as autoregressive (AR), moving average (MA), or autoregressive moving average (ARMA), estimating the model parameters from the data, and then using the estimated parameters to calculate the power spectrum. The document provides details on how to estimate the power spectrum using AR, MA, and ARMA models. It also discusses maximum entropy spectral estimation and high-resolution spectral estimation based on eigen-analysis.
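Once AR parameters are in hand, the AR power spectrum follows from a closed-form expression; the sketch below evaluates it for an illustrative AR(2) resonance (the pole radius and frequency are assumed values):

```python
import numpy as np

def ar_power_spectrum(a, sigma2, freqs):
    """AR(p) power spectrum P(f) = sigma^2 / |1 - sum_k a_k e^{-j2pi f k}|^2,
    with f in cycles/sample and x[n] = sum_k a_k x[n-k] + e[n]."""
    k = np.arange(1, len(a) + 1)
    denom = 1 - np.asarray(a) @ np.exp(-2j * np.pi * np.outer(k, freqs))
    return sigma2 / np.abs(denom) ** 2

# AR(2) with poles at radius r and angle 2*pi*f0: sharp spectral peak near f0
f0, r = 0.1, 0.95
a = [2 * r * np.cos(2 * np.pi * f0), -r ** 2]
freqs = np.linspace(0.0, 0.5, 501)
P = ar_power_spectrum(a, 1.0, freqs)
f_peak = freqs[np.argmax(P)]
```

MA and ARMA spectra follow the same pattern with the polynomial in the numerator (MA) or in both numerator and denominator (ARMA).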
A novel particle swarm optimization for PAPR reduction of OFDM systems (aliasghar1989)
This document summarizes a research paper that proposes a new particle swarm optimization (PPSO) technique to reduce the computational complexity of the original PSO (OPSO) method for phase optimization in partial transmit sequence (PTS) peak-to-average power ratio (PAPR) reduction schemes for orthogonal frequency division multiplexing (OFDM) systems. Simulation results show that PPSO achieves nearly the same PAPR performance as OPSO but with lower complexity, as it removes the need for random variables and exhaustive searching in phase factor selection for PTS. The complexity is reduced further as the number of particle generations and sub-blocks increases.
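For reference, the OPSO-style update that such papers start from can be sketched on a toy objective; this is a generic PSO, not the proposed PPSO, and all constants are conventional illustrative choices:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=6):
    """Minimal PSO: velocities blend inertia, a pull toward each particle's
    own best point, and a pull toward the swarm's best point."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, f(g)

best, best_val = pso(lambda p: np.sum(p ** 2), dim=4)   # sphere test function
```

In the PTS application the search space would be the discrete phase factors and the objective the PAPR of the combined signal; the proposed PPSO modifies this update to drop the random draws and the exhaustive phase search.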
This document discusses various methods for modeling signals, including deterministic and stochastic processes. It covers topics like the least mean square direct method, Pade approximation, Prony's method, Shanks method, and stochastic processes like ARMA, MA, and AR. It also discusses an application of signal modeling for designing a least squares inverse FIR filter. Model order estimation is noted as an important problem in signal modeling when the correct model order is unknown.
This paper presents a very-large-scale integration (VLSI) friendly electrocardiogram (ECG) QRS detector for body sensor networks. Baseline wander and background noise are removed from the original ECG signal by a mathematical morphology method. The performance of the algorithm is evaluated on the standard MIT-BIH arrhythmia database and on wearable exercise ECG data. The power and area of the corresponding VLSI architecture are reduced by replacing one of the ripple-carry adders in the carry-select adder with a binary-to-excess-1 converter.
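The morphological baseline removal step can be sketched with flat-structuring-element erosion and dilation; this is a generic illustration of opening/closing baseline estimation, not the paper's exact pipeline, and the element width is an assumed value chosen wider than the QRS complex:

```python
import numpy as np

def erode(x, k):
    xp = np.pad(x, k // 2, mode="edge")
    return np.array([xp[i:i + k].min() for i in range(len(x))])

def dilate(x, k):
    xp = np.pad(x, k // 2, mode="edge")
    return np.array([xp[i:i + k].max() for i in range(len(x))])

def morph_baseline(x, k):
    """Opening (erode, dilate) then closing (dilate, erode) with a flat
    structuring element wider than the QRS complex estimates the baseline."""
    opened = dilate(erode(x, k), k)
    return erode(dilate(opened, k), k)

t = np.arange(2000)
wander = 0.5 * np.sin(2 * np.pi * t / 1000)      # slow baseline wander
ecg = wander.copy()
ecg[100::400] += 1.0                             # narrow spikes as QRS stand-ins
corrected = ecg - morph_baseline(ecg, k=81)
```

Opening removes the narrow QRS spikes from the estimate while the slow wander passes through, so subtracting the estimate flattens the baseline but preserves the spikes.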
Method for Converter Synchronization with RF Injection (CSCJournals)
This paper presents an injection method for synchronizing analog-to-digital converters (ADCs). The approach can eliminate the need for the precision-routed discrete synchronization signals of current technologies, such as JESD204. By eliminating the setup and hold time requirements at (or near) the conversion clock rate, higher sample rate systems can be synchronized. Measured data from an existing multiple-ADC conversion system was used to evaluate the method, and coherent beams were simulated to measure its effectiveness. The results show near-theoretical coherent processing gain.
An agent based particle swarm optimization for PAPR reduction of OFDM systems (aliasghar1989)
This document proposes an agent-based particle swarm optimization (APSO) algorithm to reduce the computational complexity of the original particle swarm optimization (OPSO) technique for partial transmit sequence (PTS) peak-to-average power ratio (PAPR) reduction in orthogonal frequency division multiplexing (OFDM) systems. Simulation results show that APSO achieves nearly the same PAPR reduction performance as OPSO but with significantly lower complexity, as the number of additions and multiplications is reduced by setting the velocity of all particles equal to the velocity of the agent particle in each iteration. APSO is thus an effective method to solve the phase optimization problem in PTS with lower complexity than OPSO.
Chaotic signals denoising using empirical mode decomposition inspired by mult... (IJECEIAES)
The document describes a new method for denoising chaotic signals corrupted by additive noise using empirical mode decomposition (EMD) inspired by multivariate denoising. EMD is used to decompose the noisy chaotic signal into intrinsic mode functions (IMFs), which are then thresholded using a multivariate denoising algorithm combining wavelet transforms and principal component analysis. This proposed EMD-MD method is compared to other techniques using metrics like root mean square error and signal-to-noise ratio gain. Simulation results on Lorenz, Chen and Rossler chaotic systems show the EMD-MD method achieves the best denoising performance compared to conventional methods.
Research on Space Target Recognition Algorithm Based on Empirical Mode Decomp... (Nooria Sukmaningtyas)
The space target recognition algorithm, based on the time series of radar cross section (RCS), is proposed in this paper to solve the problems of space target recognition in active radar systems. In the algorithm, the EMD method is applied for the first time to extract features from the RCS time series. The normalized instantaneous frequencies of the high-frequency intrinsic mode functions obtained by EMD are used as the feature values for recognition, and an effective target recognition criterion is established. The effectiveness and stability of the algorithm are verified with both simulated and real data. In addition, the algorithm can reduce the estimation bias of RCS caused by inaccurate evaluation, which is of great significance in promoting the target recognition ability of narrow-band radar in practice.
Optimum range of angle tracking radars: a theoretical computing (IJECEIAES)
In this paper, we determine an optimal range for angle tracking radars (ATRs) based on evaluating the standard deviation of all kinds of errors in a tracking system. In the past, this optimal range has often been computed by simulating the total error components; here we introduce a closed form for this computation which allows us to obtain the optimal range directly. For this purpose, we first solve an optimization problem to achieve the closed form of the optimal range (Ropt.) and then compute it through a simple simulation. The results show that the theoretical and simulation-based computations agree.
This document summarizes a research paper that compares different digital filtering techniques for removing noise from electrocardiogram (ECG) signals. It describes how finite impulse response (FIR) filters were designed using various windowing techniques, including rectangular, Hamming, Hanning, and Blackman windows. Infinite impulse response (IIR) filters and wavelet transforms were also evaluated for denoising ECG signals. The performance of the different filtering approaches was compared based on the power spectral density and average power of the signals before and after filtering. The paper found that an FIR filter designed with the Kaiser window gave the best results for noise removal from ECG signals.
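The windowed-sinc construction behind these FIR designs can be sketched in NumPy; the tap count, cutoff, and Kaiser beta below are illustrative values, not the paper's, though the resulting stopband ordering (Kaiser deepest) is consistent with the paper's finding:

```python
import numpy as np

def fir_lowpass(numtaps, cutoff, window):
    """Windowed-sinc lowpass FIR; cutoff in cycles/sample (0 to 0.5)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * window
    return h / h.sum()                    # unity gain at DC

numtaps, cutoff = 101, 0.1
atten_db = {}
for name, win in [("rectangular", np.ones(numtaps)),
                  ("hamming", np.hamming(numtaps)),
                  ("kaiser", np.kaiser(numtaps, 8.6))]:
    h = fir_lowpass(numtaps, cutoff, win)
    H = np.abs(np.fft.rfft(h, 4096))
    stopband = H[int(0.2 / 0.5 * (len(H) - 1)):]   # frequencies above 0.2
    atten_db[name] = -20 * np.log10(stopband.max())
```

The window trades main-lobe width against sidelobe level: the rectangular window has the sharpest transition but the worst stopband, while the Kaiser window's beta parameter lets the designer dial that trade-off.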
The document provides an overview of adaptive filters. It discusses that adaptive filters are digital filters that have self-adjusting characteristics to changes in input signals. They have two main components: a digital filter with adjustable coefficients and an adaptive algorithm. Common adaptive algorithms are LMS and RLS. Adaptive filters are used for applications like noise cancellation, system identification, channel equalization, and signal prediction. The key aspects of adaptive filter theory and algorithms like LMS, RLS, Wiener filters are also covered.
PAPR reduction for OFDM/OQAM signals via alternative signal method (eSAT Journals)
Abstract
We consider the PAPR reduction problem for the OFDM/OQAM system. PAPR reduction is a serious problem for implementations of both OFDM and OFDM/OQAM systems due to their high PAPR. The OFDM/OQAM signal is generated by summing over M time-shifted OFDM/OQAM symbols, where successive symbols are interdependent. The alternative-signal (AS) method directly leads to the independent AS (AS-I) and joint AS (AS-J) algorithms. The AS-I algorithm reduces the PAPR symbol by symbol with low complexity, while AS-J applies optimal joint PAPR reduction among the M OFDM/OQAM symbols with much higher complexity. A sequential optimization procedure, denoted AS-S, is proposed in this paper to balance computational complexity and system performance; its complexity increases linearly with M. Simulation results are provided for a performance comparison of the AS-I, AS-J, and AS-S algorithms.
Keywords: peak-to-average power ratio (PAPR), orthogonal frequency division multiplexing with offset quadrature amplitude modulation (OFDM/OQAM), alternative signal (AS), cyclic prefix (CP).
ARRAY FACTOR OPTIMIZATION OF AN ACTIVE PLANAR PHASED ARRAY USING EVOLUTIONARY... (jantjournal)
Evolutionary algorithms (EAs) have the potential to handle complex, multi-dimensional optimization problems in the field of phased arrays. Among EAs, particle swarm optimization (PSO) is a popular choice. In a phased array, antenna element failure is a common phenomenon, and it leads to degradation of the array factor (AF) pattern, primarily in terms of increased side lobe levels (SLLs), displacement of nulls, and reduction in null depths. The recovery of a degraded pattern using a cost- and time-effective approach is in demand. In this context, an attempt is made to obtain an optimized AF pattern after a fault in a 49-element quasi-circular aperture equilateral triangular grid active planar phased array using PSO. The paper discusses multiple recovery cases with up to 20% element failure, and each recovery is further evaluated by different statistical analyses. A dedicated software tool was developed to carry out the work presented in this paper.
Implementation of Algorithms For Multi-Channel Digital Monitoring Receiver (IOSR Journals)
Abstract: Monitoring receivers form an important constituent of electronic support. With a monitoring receiver we can monitor, demodulate, or scan multiple channels. This project implements algorithms for a multi-channel digital monitoring receiver. The implementation carries out channelization by way of digital down converters (DDCs) and digital baseband demodulation. The intermediate frequency (IF) at 10.7 MHz is digitized using an analog-to-digital converter (ADC) with a sampling frequency of 52.5 MHz and further converted to baseband using DDCs. Virtually all digital receivers perform channel access using a DDC. The baseband data is streamed to the appropriate demodulators. Matlab Simulink is used to simulate the logic modules before implementation. The system is prototyped on an FPGA-based COTS (commercial-off-the-shelf) development board, with Xilinx System Generator used for the implementation of the algorithms.
Keywords: DDC, ADC, digital baseband demodulation, IF, monitoring receiver.
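The DDC stage can be sketched end to end with the sample rates quoted in the abstract (the test-tone frequency, filter length, and decimation factor are assumed illustrative values):

```python
import numpy as np

fs = 52.5e6                    # ADC sampling frequency (from the abstract)
f_if = 10.7e6                  # intermediate frequency (from the abstract)
f_tone = 10.2e6                # test tone 500 kHz below the IF (assumed)
n = np.arange(4096)
x = np.cos(2 * np.pi * f_tone / fs * n)          # digitized IF input

# DDC: mix with a complex NCO to shift the IF to baseband, lowpass filter
# (a simple moving average here), then decimate.
nco = np.exp(-2j * np.pi * f_if / fs * n)
mixed = x * nco                                  # wanted tone now at -500 kHz
taps, decim = 32, 8
lp = np.convolve(mixed, np.ones(taps) / taps, mode="same")
baseband = lp[::decim]

# Estimate the recovered baseband frequency from the FFT peak
spec = np.fft.fftshift(np.abs(np.fft.fft(baseband)))
freqs = np.fft.fftshift(np.fft.fftfreq(len(baseband), d=decim / fs))
f_est = freqs[np.argmax(spec)]
```

A production DDC would use a CIC or polyphase filter instead of the moving average, but the mix-filter-decimate structure is the same one the FPGA implementation realizes.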
The document analyzes soil samples collected around painted buildings in four local governments in Benue and Taraba States, Nigeria to determine heavy metal contamination levels. Testing found high levels of lead, cadmium, zinc and chromium contamination in the soils, with lead contamination being the highest. The order of contamination differed between locations but consistently included lead as the primary contaminant. The heavy metal levels pose health risks, especially to children, who are more vulnerable to negative impacts of heavy metal exposure through soil ingestion and dust inhalation near the painted buildings. Blood monitoring for heavy metals is recommended for individuals living near the painted structures.
Dust Interception Capacity And Alteration Of Various Biometric And Biochemica... (IOSR Journals)
The dust accumulation capacity of Ficus carica L. was evaluated at eight different sites in and around Multan. The impact of dust accumulation was observed via various biometric attributes (leaf area, leaf fresh and dry weights) and biochemical attributes (chlorophyll contents, carotenoids and ascorbic acid) from leaves of F. carica. The maximum dust accumulation occurred in plants growing at road sides, while the minimum was found on plants growing at Bahauddin Zakariya University. Dust accumulation had a significant effect on almost all foliage and biochemical attributes of F. carica. A positive correlation was found between dust accumulation and biometric attributes. Biochemical responses showed some inconsistency: chlorophylls (a, b and total) and carotenoids decreased while ascorbic acid contents increased with increasing dust accumulation. A negative correlation was found between dust deposition and chlorophyll contents, whereas accumulation of ascorbic acid was associated with a decline in pigment contents.
Low Power FPGA Based Elliptical Curve Cryptography (IOSR Journals)
Abstract: Cryptography is the study of techniques for ensuring the secrecy and authentication of information. The development of public-key cryptography is the greatest, and perhaps the only true, revolution in the entire history of cryptography. Elliptic curve cryptography (ECC) is one of the public-key cryptosystems appearing in standardization efforts, including the IEEE P1363 standard. The principal attraction of ECC compared to RSA is that it offers equal security for a smaller key size, thereby reducing the processing overhead. As a public-key cryptosystem, ECC has many advantages such as fast speed, high security, and short keys. It is well suited to hardware implementation, so ECC has received increasing attention in recent years. The hardware implementation of ECC on FPGA uses an arithmetic unit with small area, small storage, and fast speed, making it extremely suitable for systems with limited computation ability and storage space [1][2]. The modular division operations are carried out using conditional successive subtractions, thereby reducing the area. The system is implemented on a Virtex-Pro XCV1000 FPGA. Index Terms: VHDL, FSM, FPGA, elliptic curve cryptography.
Improving Sales in SME Using Internet Marketing (IOSR Journals)
Abstract: In Indonesia, SMEs are the backbone of the economy; by 2011 their number reached around 52 million. SMEs are very important to the Indonesian economy because they account for 60% of GDP and employ 97% of the workforce. But access to financial institutions is limited: only 25%, or 13 million SMEs, have such access. The Indonesian government supports SMEs through the Department of Cooperatives and SMEs in each province or regency/city.
Although small and medium enterprises (SMEs) drive the nation's economy, in reality many problems still entangle SMEs. The main issue is the ability of SMEs to access a wider market, because the ability to change and adapt to a changing environment determines the existence of small businesses in the nation's economy. In the end, small businesses with high competitiveness will strengthen the nation's economy as a whole. This study therefore uses appropriate technology tools to assist in introducing products through the internet and increasing sales in each SME. The study uses a sample of students at the State University of Malang who can make a significant contribution to the small and medium businesses being initiated by students.
Keywords: Small Medium Enterprise, Internet Marketing, Sales Improvement
Effective Leadership-Employee Retention-Work Life Balance: A Cyclical Continuum (IOSR Journals)
This document summarizes a study on the relationship between effective leadership, employee work-life balance, and employee retention. It discusses how leadership styles can help balance employees' work and personal lives, leading to improved employee retention. The study presents different leadership theories and styles, and analyzes how they may impact work-life balance and retention. Specifically, it suggests that participative leadership approaches that balance task and employee orientation can foster understanding and job satisfaction, while autocratic styles focused solely on tasks may increase productivity but not retention. The document concludes that synchronized leadership considering employees' capabilities can help identify and retain talented staff.
The Comparison of the Materials in Styles of Iranian Architecture and its Effe... (IOSR Journals)
Throughout history, different elements have been influential factors affecting architecture in different areas. Historical events and political alterations, as well as religious and economic changes, can directly shape architectural style. One historical country with a rich architectural history that has been increasingly exposed to such alterations is Iran. In general, the architecture styles in Iran can be categorised in six groups, divided into two periods: before and after Islam's emergence in Iran. The 'Parsi' and 'Parti' architecture styles belong to the former period, and 'Razi', 'Khorasani', 'Isfahani' and 'Azari' were common in the latter period, after Islam. Such alterations brought a variety of architecture styles to this country, due to theoretical alterations. Furthermore, some novel architectural styles resulted from a number of physical conditions which also had effects on the theoretical architecture. The current research places an emphasis on the alterations in materials used in two historical periods of Iran, the 'Achaemenid' Empire (550-330 BCE) and the 'Sassanid' Empire (224 CE to 651 CE), resulting in changes in Iranian architecture. It also aims to explore the differences, the reasons for changes in the materials used for construction, and the influence these changes had on the architectural style in the above-mentioned periods.
Recovery of low frequency Signals from noisy data using Ensembled Empirical M...inventionjournals
Β
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Ensemble Empirical Mode Decomposition: An adaptive method for noise reduction - IOSR Journals
This document describes ensemble empirical mode decomposition (EEMD), an adaptive method for noise reduction in signals. EEMD is an improvement over empirical mode decomposition (EMD) that can overcome the problem of mode mixing. EEMD works by decomposing the signal into intrinsic mode functions (IMFs) in the presence of added white noise, which is then averaged out. The algorithm adds white noise to the target signal multiple times, applies EMD each time, and takes the mean of the IMFs as the final result. This process separates different scales present in the signal and reduces noise. The document evaluates EEMD on electrocardiogram and other non-stationary signals, demonstrating its effectiveness in noise reduction.
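The ensemble loop can be sketched in Python. This is a simplified illustration only (fixed sifting count, cubic-spline envelopes, no stopping criterion or residue handling), not the exact algorithm evaluated in the document:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, n_sifts=10):
    """Extract one IMF with a fixed number of sifting iterations (simplified)."""
    h = x.astype(float).copy()
    for _ in range(n_sifts):
        d = np.diff(h)
        # interior local maxima / minima
        maxima = np.where((np.hstack([0.0, d]) > 0) & (np.hstack([d, 0.0]) < 0))[0]
        minima = np.where((np.hstack([0.0, d]) < 0) & (np.hstack([d, 0.0]) > 0))[0]
        if len(maxima) < 2 or len(minima) < 2:
            break  # too few extrema left to continue sifting
        t = np.arange(len(h))
        upper = CubicSpline(maxima, h[maxima])(t)   # upper envelope
        lower = CubicSpline(minima, h[minima])(t)   # lower envelope
        h = h - (upper + lower) / 2.0               # subtract the local mean
    return h

def emd(x, max_imfs=6):
    """Decompose x into at most max_imfs intrinsic mode functions."""
    imfs, residue = [], x.astype(float).copy()
    for _ in range(max_imfs):
        imf = sift(residue)
        imfs.append(imf)
        residue = residue - imf
    return imfs

def eemd(x, n_ensembles=50, noise_scale=0.2, max_imfs=6, seed=0):
    """EEMD: average the IMFs of many white-noise-perturbed copies of x."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((max_imfs, len(x)))
    for _ in range(n_ensembles):
        noisy = x + noise_scale * np.std(x) * rng.standard_normal(len(x))
        for k, imf in enumerate(emd(noisy, max_imfs)):
            acc[k] += imf
    return acc / n_ensembles
```

Because the added white noise is independent across ensemble members, it averages out of each IMF, which is what suppresses mode mixing.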
parametric method of power spectrum Estimation - junjer
The document discusses parametric methods of power spectrum estimation. It explains that parametric methods estimate the parameters of a mathematical model that describes the signal generation process. This involves selecting a model such as autoregressive (AR), moving average (MA), or autoregressive moving average (ARMA), estimating the model parameters from the data, and then using the estimated parameters to calculate the power spectrum. The document provides details on how to estimate the power spectrum using AR, MA, and ARMA models. It also discusses maximum entropy spectral estimation and high-resolution spectral estimation based on eigen-analysis.
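As a concrete instance of the AR route, a minimal Yule-Walker estimator might look like the sketch below (helper name and model order are illustrative assumptions, not the document's code):

```python
import numpy as np

def yule_walker_psd(x, order, nfft=512):
    """AR power spectrum via the Yule-Walker (autocorrelation) equations."""
    x = np.asarray(x, float) - np.mean(x)
    N = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(order + 1)])
    # solve the Toeplitz normal equations R a = -r[1:] for the AR coefficients
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, -r[1:])
    sigma2 = r[0] + np.dot(a, r[1:])          # driving-noise variance
    # PSD(f) = sigma2 / |1 + sum_k a_k e^{-j 2 pi f k}|^2, f in cycles/sample
    f = np.linspace(0.0, 0.5, nfft)
    k = np.arange(1, order + 1)
    A = 1.0 + np.exp(-2j * np.pi * np.outer(f, k)) @ a
    return f, sigma2 / np.abs(A) ** 2
```

For a narrowband process, the AR spectrum peaks near the pole angle, which is the usual motivation for preferring it over the periodogram at short data lengths.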
A novel particle swarm optimization for papr reduction of ofdm systems - aliasghar1989
This document summarizes a research paper that proposes a new particle swarm optimization (PPSO) technique to reduce the computational complexity of the original PSO (OPSO) method for phase optimization in partial transmit sequence (PTS) peak-to-average power ratio (PAPR) reduction schemes for orthogonal frequency division multiplexing (OFDM) systems. Simulation results show that PPSO achieves nearly the same PAPR performance as OPSO but with lower complexity, as it removes the need for random variables and exhaustive searching in phase factor selection for PTS. The complexity is reduced further as the number of particle generations and sub-blocks increases.
This document discusses various methods for modeling signals, including deterministic and stochastic processes. It covers topics like the least mean square direct method, Pade approximation, Prony's method, Shanks method, and stochastic processes like ARMA, MA, and AR. It also discusses an application of signal modeling for designing a least squares inverse FIR filter. Model order estimation is noted as an important problem in signal modeling when the correct model order is unknown.
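Of the deterministic techniques listed, Prony's method is compact enough to sketch. The toy version below (noise-free case, hypothetical function name) fits a signal as a sum of p damped exponentials:

```python
import numpy as np

def prony(x, p):
    """Toy Prony's method: fit x[n] ~= sum_i c_i * z_i**n with p modes."""
    N = len(x)
    # Step 1: linear prediction, x[n] = -a1*x[n-1] - ... - ap*x[n-p]
    A = np.column_stack([x[p - 1 - k:N - 1 - k] for k in range(p)])
    a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
    # Step 2: the modes z_i are the roots of the prediction polynomial
    z = np.asarray(np.roots(np.concatenate(([1.0], a))), dtype=complex)
    # Step 3: amplitudes c_i by least squares on the Vandermonde system
    V = z[None, :] ** np.arange(N)[:, None]
    c = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return z, c
```

The two least-squares solves make the connection to the chapter's theme explicit: signal modeling reduces to linear algebra once the model structure is fixed.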
This paper aims to present a very-large-scale integration (VLSI) friendly electrocardiogram (ECG) QRS detector for body sensor networks. Baseline wandering and background noise are removed from the original ECG signal by a mathematical morphology method. The performance of the algorithm is evaluated with the standard MIT-BIH arrhythmia database and wearable exercise ECG data. The power and area of the corresponding VLSI architecture are reduced by replacing one of the ripple-carry adders in the carry-select adder with a binary-to-excess-1 converter.
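A morphological baseline-removal step of this general kind can be sketched as follows. This is a generic opening/closing estimate of baseline wander (the window size is an assumed parameter), not the paper's exact operator:

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def remove_baseline(ecg, size=71):
    """Estimate baseline wander with a flat structuring element wider than
    the QRS complex: average the opening-closing and closing-opening
    results, then subtract the estimate from the signal."""
    oc = grey_closing(grey_opening(ecg, size=size), size=size)
    co = grey_opening(grey_closing(ecg, size=size), size=size)
    baseline = (oc + co) / 2.0
    return ecg - baseline
```

Because opening suppresses narrow positive peaks and closing suppresses narrow negative ones, a slowly varying drift survives both and is captured as the baseline.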
Method for Converter Synchronization with RF Injection - CSCJournals
This paper presents an injection method for synchronizing analog to digital converters (ADC). This approach can eliminate the need for precision routed discrete synchronization signals of current technologies, such as JESD204. By eliminating the setup and hold time requirements at the conversion (or near conversion) clock rate, higher sample rate systems can be synchronized. Measured data from an existing multiple ADC conversion system was used to evaluate the method. Coherent beams were simulated to measure the effectiveness of the method. The results show near theoretical coherent processing gain.
An agent based particle swarm optimization for papr reduction of ofdm systems - aliasghar1989
This document proposes an agent-based particle swarm optimization (APSO) algorithm to reduce the computational complexity of the original particle swarm optimization (OPSO) technique for partial transmit sequence (PTS) peak-to-average power ratio (PAPR) reduction in orthogonal frequency division multiplexing (OFDM) systems. Simulation results show that APSO achieves nearly the same PAPR reduction performance as OPSO but with significantly lower complexity, as the number of additions and multiplications is reduced by setting the velocity of all particles equal to the velocity of the agent particle in each iteration. APSO is thus an effective method to solve the phase optimization problem in PTS with lower complexity than OPSO.
Chaotic signals denoising using empirical mode decomposition inspired by mult... - IJECEIAES
The document describes a new method for denoising chaotic signals corrupted by additive noise using empirical mode decomposition (EMD) inspired by multivariate denoising. EMD is used to decompose the noisy chaotic signal into intrinsic mode functions (IMFs), which are then thresholded using a multivariate denoising algorithm combining wavelet transforms and principal component analysis. This proposed EMD-MD method is compared to other techniques using metrics like root mean square error and signal-to-noise ratio gain. Simulation results on Lorenz, Chen and Rossler chaotic systems show the EMD-MD method achieves the best denoising performance compared to conventional methods.
Research on Space Target Recognition Algorithm Based on Empirical Mode Decomp... - Nooria Sukmaningtyas
A space target recognition algorithm based on the time series of radar cross section (RCS) is proposed in this paper to solve the problems of space target recognition in active radar systems. In the algorithm, the EMD method is applied for the first time to extract features of the RCS time series. The normalized instantaneous frequencies of the high-frequency intrinsic mode functions obtained by EMD are used as the feature values for recognition, and an effective target recognition criterion is established. The effectiveness and stability of the algorithm are verified with both simulated and real data. In addition, the algorithm can reduce the estimation bias of RCS caused by inaccurate evaluation, and it is of great significance in promoting the target recognition ability of narrow-band radar in practice.
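The normalized instantaneous frequency of an IMF is typically obtained from the Hilbert transform of the analytic signal. A minimal sketch (not the paper's implementation):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, fs):
    """Instantaneous frequency (Hz) of an IMF via the analytic signal."""
    analytic = hilbert(imf)                      # x + j * Hilbert{x}
    phase = np.unwrap(np.angle(analytic))        # continuous phase
    return np.diff(phase) * fs / (2 * np.pi)     # phase derivative -> Hz

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
f_inst = instantaneous_frequency(np.cos(2 * np.pi * 80 * t), fs)
```

Dividing by the sampling rate (or Nyquist frequency) would give the normalized form used as a feature here.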
Optimum range of angle tracking radars: a theoretical computing - IJECEIAES
In this paper, we determine an optimal range for angle tracking radars (ATRs) based on evaluating the standard deviation of all kinds of errors in a tracking system. In the past, this optimal range has often been computed by the simulation of the total error components; however, we are going to introduce a closed form for this computation which allows us to obtain the optimal range directly. Thus, for this purpose, we firstly solve an optimization problem to achieve the closed form of the optimal range (Ropt.) and then, we compute it by doing a simple simulation. The results show that both theoretical and simulation-based computations are similar to each other.
This document summarizes a research paper that compares different digital filtering techniques for removing noise from electrocardiogram (ECG) signals. It describes how finite impulse response (FIR) filters were designed using various windowing techniques, including rectangular, Hamming, Hanning, and Blackman windows. Infinite impulse response (IIR) filters and wavelet transforms were also evaluated for denoising ECG signals. The performance of the different filtering approaches was compared based on the power spectral density and average power of the signals before and after filtering. The paper found that an FIR filter designed with the Kaiser window showed the best results for noise removal from ECG signals.
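A Kaiser-window FIR design of the kind the paper favours can be sketched with SciPy. The band edges, attenuation and sampling rate below are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.signal import firwin, freqz, kaiserord

# illustrative specs: keep the ECG band (below ~40 Hz), attenuate above 60 Hz
fs = 360.0                            # MIT-BIH sampling rate, assumed here
ripple_db, trans_width = 60.0, 20.0   # stopband attenuation (dB), transition (Hz)
numtaps, beta = kaiserord(ripple_db, trans_width / (fs / 2))
taps = firwin(numtaps, cutoff=50.0, window=("kaiser", beta), fs=fs)
w, h = freqz(taps, worN=2048, fs=fs)  # frequency response, w in Hz
```

The appeal of the Kaiser window is that `kaiserord` turns a ripple/transition specification directly into a filter length and shape parameter, which the fixed windows (Hamming, Blackman, etc.) cannot do.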
The document provides an overview of adaptive filters. It discusses that adaptive filters are digital filters that have self-adjusting characteristics to changes in input signals. They have two main components: a digital filter with adjustable coefficients and an adaptive algorithm. Common adaptive algorithms are LMS and RLS. Adaptive filters are used for applications like noise cancellation, system identification, channel equalization, and signal prediction. The key aspects of adaptive filter theory and algorithms like LMS, RLS, Wiener filters are also covered.
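The LMS algorithm mentioned above can be sketched in a few lines. This is a textbook noise-cancellation setup with assumed step size and filter order, not tied to a particular application in the document:

```python
import numpy as np

def lms_filter(d, x, order=8, mu=0.005):
    """LMS adaptive filter: predict d[n] from the reference x; in noise
    cancellation the error e[n] = d[n] - y[n] is the cleaned output."""
    w = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1]   # most recent reference samples
        y = w @ u                          # adjustable-coefficient filter
        e[n] = d[n] - y
        w += 2 * mu * e[n] * u             # stochastic-gradient update
    return e, w

# noise cancellation: primary = signal + correlated noise, reference = noise
rng = np.random.default_rng(0)
t = np.arange(4000)
s = np.sin(2 * np.pi * t / 50)             # desired signal
ref = rng.standard_normal(4000)            # noise reference input
d = s + 0.8 * ref                          # primary input
e, w = lms_filter(d, ref)
```

The adaptive algorithm drives the coefficients toward the noise path (here roughly `w[0] = 0.8`), so the error output converges to the clean signal.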
Papr reduction for ofdm oqam signals via alternative signal method - eSAT Journals
Abstract
We consider the PAPR reduction problem for the OFDM/OQAM system. PAPR reduction is a serious problem for implementations of both OFDM and OFDM/OQAM systems due to their high PAPR. The OFDM/OQAM signal is generated by summing over M time-shifted OFDM/OQAM symbols, where successive symbols are interdependent. The AS (Alternative-Signal) method directly leads to the independent AS (AS-I) and joint AS (AS-J) algorithms. The AS-I algorithm reduces the PAPR symbol by symbol with low complexity, while AS-J applies optimal joint PAPR reduction among M OFDM/OQAM symbols with much higher complexity. In this paper, a sequential optimization procedure denoted AS-S is proposed to balance computational complexity and system performance; its computational complexity increases only linearly with M. Simulation results are provided for performance comparison of the AS-I, AS-J, and AS-S algorithms.
Keywords: peak-to-average power ratio (PAPR), orthogonal frequency division multiplexing with offset quadrature amplitude modulation (OFDM/OQAM), alternative signal (AS), cyclic prefix (CP).
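For reference, PAPR is the ratio of peak to mean instantaneous power. A minimal computation on a plain OFDM symbol (illustrative parameters, not the paper's OFDM/OQAM setup):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a (complex) baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# one OFDM symbol: random QPSK on N subcarriers, 4x oversampled via a
# zero-padded IFFT so the continuous-time peak is captured
rng = np.random.default_rng(1)
N, L = 64, 4
X = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
x = np.fft.ifft(np.concatenate([X[:N // 2], np.zeros((L - 1) * N), X[N // 2:]]))
```

The high PAPR arises because N independent subcarriers occasionally add in phase; methods like AS and PTS search over alternative representations of the same data to avoid those coherent peaks.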
ARRAY FACTOR OPTIMIZATION OF AN ACTIVE PLANAR PHASED ARRAY USING EVOLUTIONARY... - jantjournal
Evolutionary algorithms (EAs) have the potential to handle complex, multi-dimensional optimization problems in the field of phased arrays. Among the different EAs, particle swarm optimization (PSO) is a popular choice. In a phased array, antenna element failure is a common phenomenon, and it leads to degradation of the array factor (AF) pattern, primarily in terms of increased side lobe levels (SLLs), displacement of nulls and reduction in null depths. The recovery of a degraded pattern using a cost- and time-effective approach is in demand. In this context, an attempt is made to obtain an optimized AF pattern after a fault in a 49-element quasi-circular-aperture equilateral-triangular-grid active planar phased array using PSO. In the paper, multiple recovery cases are discussed, with up to 20% element failure. Each recovery is also further evaluated by different statistical analyses. A dedicated software tool was developed to carry out the work presented in this paper.
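The array factor being optimized can be computed directly. A minimal sketch for an arbitrary planar layout with failed elements (uniform excitation and isotropic elements assumed; not the paper's 49-element triangular grid or its PSO cost function):

```python
import numpy as np

def array_factor(xy, phases, theta, phi, wavelength=1.0, failed=()):
    """|AF| of a planar array of isotropic elements at direction (theta, phi);
    theta is measured from broadside, and element n is skipped if n in failed."""
    k = 2 * np.pi / wavelength
    u = np.sin(theta) * np.cos(phi)
    v = np.sin(theta) * np.sin(phi)
    af = 0j
    for n, (x, y) in enumerate(xy):
        if n in failed:
            continue  # model element failure by dropping its contribution
        af += np.exp(1j * (k * (x * u + y * v) + phases[n]))
    return abs(af)
```

A PSO-based recovery would treat `phases` of the surviving elements as the particle position and score each particle by SLL and null-depth penalties evaluated from this pattern.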
Implementation of Algorithms For Multi-Channel Digital Monitoring Receiver - IOSR Journals
Abstract: Monitoring receivers form an important constituent of electronic support. In a monitoring receiver we can monitor, demodulate or scan multiple channels. This project implements algorithms for a multi-channel digital monitoring receiver. The implementation carries out channelization by way of digital down-converters (DDCs) and digital baseband demodulation. The intermediate frequency (IF) at 10.7 MHz is digitized using an analog-to-digital converter (ADC) with a sampling frequency of 52.5 MHz and further converted to baseband using DDCs. Virtually all digital receivers perform channel access using a DDC. The baseband data is streamed to the appropriate demodulators. Matlab Simulink is used to simulate the logic modules before the implementation. The system is prototyped on an FPGA-based COTS (commercial-off-the-shelf) development board, with Xilinx System Generator used for the implementation of the algorithms.
Keywords: DDC, ADC, digital baseband demodulation, IF, monitoring receiver.
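The DDC chain described (mix to baseband, low-pass filter, decimate) can be sketched as follows, reusing the abstract's 10.7 MHz IF and 52.5 MHz sampling rate; the filter length and decimation factor are illustrative assumptions:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def ddc(x, fs, f_if, decim=5):
    """Digital down-converter: complex NCO mix, anti-alias low-pass, decimate."""
    n = np.arange(len(x))
    bb = x * np.exp(-2j * np.pi * f_if / fs * n)              # mix IF down to 0 Hz
    taps = firwin(101, cutoff=0.8 * fs / (2 * decim), fs=fs)  # anti-alias LPF
    bb = lfilter(taps, 1.0, bb)
    return bb[::decim], fs / decim

# the abstract's numbers: 10.7 MHz IF sampled at 52.5 MHz
fs, f_if = 52.5e6, 10.7e6
n = np.arange(16384)
x = np.cos(2 * np.pi * (f_if + 50e3) / fs * n)   # test tone 50 kHz above the IF
y, fs_out = ddc(x, fs, f_if)
```

A channelizer instantiates one such DDC per channel, each with its own NCO frequency, before handing the decimated streams to the demodulators.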
The document analyzes soil samples collected around painted buildings in four local governments in Benue and Taraba States, Nigeria to determine heavy metal contamination levels. Testing found high levels of lead, cadmium, zinc and chromium contamination in the soils, with lead contamination being the highest. The order of contamination differed between locations but consistently included lead as the primary contaminant. The heavy metal levels pose health risks, especially to children, who are more vulnerable to negative impacts of heavy metal exposure through soil ingestion and dust inhalation near the painted buildings. Blood monitoring for heavy metals is recommended for individuals living near the painted structures.
Dust Interception Capacity And Alteration Of Various Biometric And Biochemica... - IOSR Journals
The dust accumulation capacity of Ficus carica L. was evaluated from eight different sites in and around Multan. The impact of dust accumulation was observed via various biometric attributes (leaf area, leaf fresh and dry weights) and biochemical attributes (chlorophyll contents, carotenoids and ascorbic acid) from leaves of F. carica. The maximum dust accumulation occurred in plants growing at road sides, while the minimum dust was found on plants growing at Bahauddin Zakariya University. Dust accumulation had a significant effect on almost all foliage and biochemical attributes of F. carica. A positive correlation was found between dust accumulation and biometric attributes in F. carica. Biochemical responses showed an inconsistency, as chlorophylls (a, b and total) and carotenoids decreased while ascorbic acid contents increased with an increase in dust accumulation. A negative correlation was found between dust deposition and chlorophyll contents, whereas accumulation of ascorbic acid was associated with a decline in pigment contents.
Low Power FPGA Based Elliptical Curve Cryptography - IOSR Journals
Abstract: Cryptography is the study of techniques for ensuring the secrecy and authentication of information. The development of public-key cryptography is the greatest and perhaps the only true revolution in the entire history of cryptography. Elliptic curve cryptography (ECC) is one of the public-key cryptosystems showing up in standardization efforts, including the IEEE P1363 standard. The principal attraction of ECC compared to RSA is that it offers equal security for a smaller key size, thereby reducing the processing overhead. As a public-key cryptosystem, ECC has many advantages such as fast speed, high security and short keys. It is suitable for hardware implementation, so ECC has received more and more attention in recent years. The hardware implementation of ECC on FPGA uses an arithmetic unit that has small area, small storage and fast speed, making it extremely suitable for systems with limited computation ability and storage space [1][2]. The modular division operations are carried out using conditional successive subtractions, thereby reducing the area. The system is implemented on a Virtex-Pro XCV1000 FPGA. Index Terms: VHDL, FSM, FPGA, Elliptic Curve Cryptography.
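For orientation, the group operations that such a hardware design accelerates can be sketched in affine coordinates over a prime field. This toy version uses Python's built-in modular inverse (`pow(x, -1, p)`, Python 3.8+) rather than the paper's successive-subtraction divider:

```python
def ec_add(P, Q, a, p):
    """Point addition on y^2 = x^3 + a*x + b over GF(p); None is infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p)         # chord slope
    lam %= p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R
```

The modular inverse inside the slope computation is exactly the division that the paper's FPGA replaces with conditional successive subtractions to save area.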
Improving Sales in SME Using Internet Marketing - IOSR Journals
Abstract: In Indonesia, SMEs are the backbone of the economy; the number of SMEs reached around 52 million by 2011. SMEs are very important for the Indonesian economy because they account for 60% of GDP and employ 97% of the workforce. But access to financial institutions is limited: only 25%, or 13 million, SMEs have access to them. The Indonesian government supports SMEs through the Department of Cooperatives and SMEs in each province or regency/city.
Although small and medium enterprises (SMEs) drive the nation's economy, in reality they are still entangled in many problems. The main thing to note is the ability of SMEs to access a wider market, because the ability to change and adapt to a changing environment will determine the existence of small businesses in the nation's economy. In the end, the existence of small businesses with high competitiveness will strengthen the nation's economy as a whole. Thus, this study uses appropriate technology tools that can provide assistance in introducing products through the internet and increasing sales in each SME. The study uses a sample of students at the State University of Malang who can make a significant contribution to the small and medium businesses being initiated by students.
Keywords: Small Medium Enterprise, Internet Marketing, Sales Improvement
Effective Leadership-Employee Retention-Work Life Balance: A Cyclical Continuum - IOSR Journals
This document summarizes a study on the relationship between effective leadership, employee work-life balance, and employee retention. It discusses how leadership styles can help balance employees' work and personal lives, leading to improved employee retention. The study presents different leadership theories and styles, and analyzes how they may impact work-life balance and retention. Specifically, it suggests that participative leadership approaches that balance task and employee orientation can foster understanding and job satisfaction, while autocratic styles focused solely on tasks may increase productivity but not retention. The document concludes that synchronized leadership considering employees' capabilities can help identify and retain talented staff.
The Comparison of the Materials in Styles of Iranian Architecture and its Effe... - IOSR Journals
During history, different elements have been identified as influential factors affecting architecture in different areas. Historical events and political alterations, as well as religious and economic changes, can directly shape the architectural style. One of the historical countries with a rich architectural history that has been increasingly exposed to such alterations is Iran. In general, the architectural styles in Iran can be categorised into six groups, which can be divided into two periods, before and after Islam's emergence in Iran. The 'Parsi' and 'Parti' architectural styles belong to the former period, while 'Razi', 'Khorasani', 'Isfahani' and 'Azari' were common in the latter period, after Islam. Such alterations brought a variety of architectural styles to this country due to theoretical changes. Furthermore, some novel architectural styles resulted from a number of physical conditions which also affected architectural theory. The current research puts an emphasis on the alterations in the materials used in two historical periods of Iran, the 'Achaemenid' Empire (550-330 BCE) and the 'Sassanid' Empire (224 CE to 651 CE), resulting in changes in Iranian architecture. It also aims to explore the differences and the reasons for changes in the materials used for construction and the influence that these changes had on the architectural style in the above-mentioned periods.
Transient Three-dimensional Numerical Analysis of Forced Convection Flow and ... - IOSR Journals
A three-dimensional transient numerical study of a constant-property Newtonian fluid in a curved pipe under laminar flow conditions is presented for a uniform wall temperature boundary condition. Numerical solutions were obtained using the control volume method described by Patankar for the range of parameters considered. The working fluid was water. The transient flow pattern and the temperature distribution on the tube section were derived for different values of the Reynolds number. Graphical results for velocity and temperature are presented and analyzed. Results show that the maximum velocity in the center of the velocity profile increases with increasing Reynolds number. In curved pipes, time-averaged results exhibited Dean circulation and a strong velocity and temperature stratification in the radial direction. Flow and heat transfer were strongly asymmetric, with higher values near the outer pipe bend.
An Experimental Study on Strength Properties of Concrete When Cement Is Parti... - IOSR Journals
This study investigated the strength properties of concrete when cement is partially replaced with two types of sugarcane bagasse ash (SBA): raw SBA (B.A.1) and SBA heated to 850 °C (B.A.2). Cubes, cylinders and beams were tested after 28 days of curing. For B.A.1, maximum strengths were achieved at 5% replacement, while for B.A.2 maximum compressive strength was at 15% replacement, tensile at 20%, and flexural at 30% replacement. Results showed that concrete with B.A.2 generally had higher strengths than with B.A.1, indicating that heating SBA improves its pozzolanic properties.
Formulation of an anti-inflammatory drug as fast dissolving tablets - IOSR Journals
The demand for mouth dissolving tablets has been growing during the last decade, especially for elderly patients and children who have difficulties in swallowing. Ketorolac tromethamine is an effective anti-inflammatory agent that has been extensively used for the prevention of pain and inflammation associated with a wide variety of causes. This study aimed to formulate Ketorolac tromethamine mouth dissolving tablets by direct compression using superdisintegrants such as crospovidone (CP), croscarmellose sodium (CCS), and sodium starch glycolate (SSG) at concentrations of 3%, 6%, 9%, and 12%. The physical mixtures of the drug and the used excipients were evaluated for their micromeritic properties such as angle of repose, particle size, Hausner's ratio and % compressibility. Also, FTIR spectroscopy and DSC calorimetry were performed to detect any possible interaction between the drug and the used excipients. All the prepared tablets were evaluated for their weight variation, thickness, hardness, wetting time, and disintegration time. An in-vitro release study was also done for all the prepared tablets using distilled deionized water as the dissolution medium at 37.5 ± 0.5 °C. Based on the in-vitro release and stability studies, G5 (containing 3% CCS) was found to be the most promising formulation and was subjected to further studies.
Analysis Of Lpg Cylinder Using Composite Materials - IOSR Journals
This paper aims at innovation in alternative materials for liquefied petroleum gas (LPG) cylinders. A finite element analysis of LPG cylinders made of steel and fiber reinforced plastic (FRP) composites has been carried out. Finite element analysis of a composite cylinder subjected to internal pressure is performed. A layered shell element of the versatile FE analysis package ANSYS (version 11.0) has been used to model the shell with FRP composites.
A number of cases are considered to study the stresses and deformations due to pressure loading inside the cylinder. First, the results of stresses and deformation for steel cylinders are compared with the analytical solution available in the literature in order to validate the model and the software. The weight savings are also presented for steel and glass fiber reinforced plastic (GFRP) composite LPG cylinders. Variations of stresses and deformations throughout cylinders made of steel and GFRP are studied.
Implications of Organisational Culture on Performance of Business Organisations - IOSR Journals
This article discusses the implications of organizational culture on the performance of business organizations in Nigeria. The first objective of the paper is to elaborate on organisational culture as a determinant of organisational performance. The second objective is to identify four dimensions of African culture as bases for more effective organization culture in the Nigerian context. Works of well known authors in the fields of Culture and Management were reviewed for perspectives and evidence. It is shown from the reviews that a relationship exists between organisational culture and performance of business organisations. In addition, the review suggests that relationship is mediated by reward perception and role perception. Social support, accommodation at work places, religious referencing, supervisor-subordinate age ratio and ethnic diversification are identified as culture factors likely to have positive impact on the performance of business organisations in Nigeria. It is recommended that business organizations in the country should give adequate attention to the development of corporate cultures that integrate the above factors in order to enhance their performance.
Cataloging Of Sessions in Genuine Traffic by Packet Size Distribution and Ses... - IOSR Journals
Abstract: Classifying traffic into specific network applications is vital for application-aware network management, and it becomes more challenging because modern applications obscure their network behaviors. Whereas port-number-based classifiers work only for a few well-known applications and signature-based classifiers do not apply to encrypted packet payloads, researchers tend to classify network traffic based on behaviors observed in network applications. In this paper, a session-level flow classification (SLFC) approach is proposed to classify network flows as a session, which comprises the flows in the same conversation. SLFC first classifies flows into the corresponding applications by packet size distribution (PSD) and then groups flows into sessions by port locality. With PSD, each flow is transformed into a set of points in a two-dimensional space, and the distances between each flow and the representatives of preselected applications are computed. The flow is predicted as the application with the minimum distance. Meanwhile, port locality is used to group flows into sessions, since an application often uses consecutive port numbers within a session. If the flows of a session are classified into different applications, an arbitration algorithm is invoked to make the correction.
Keywords: flow classification; session grouping; session classification; packet size distribution
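The PSD-plus-nearest-representative idea can be sketched as follows; the bins, the Euclidean distance, and the toy applications are illustrative assumptions, not SLFC's actual parameters:

```python
import numpy as np

BINS = np.arange(0, 1600, 100)   # packet-size bins in bytes (assumed)

def psd_vector(sizes, bins=BINS):
    """Packet-size distribution of one flow as a normalized histogram."""
    h, _ = np.histogram(sizes, bins=bins)
    return h / max(h.sum(), 1)

def centroid(training_flows, bins=BINS):
    """Representative PSD of an application: mean of its flows' PSDs."""
    return np.mean([psd_vector(f, bins) for f in training_flows], axis=0)

def classify(flow, centroids, bins=BINS):
    """Assign a flow to the application with the nearest PSD representative."""
    v = psd_vector(flow, bins)
    return min(centroids, key=lambda app: np.linalg.norm(v - centroids[app]))

# toy data: a small-packet chat-like app vs. a bulk-transfer app
centroids = {
    "chat": centroid([[64, 70, 80, 96], [60, 72, 88]]),
    "bulk": centroid([[1400, 1500, 1500], [1480, 1500, 1300]]),
}
```

Session grouping by port locality and the arbitration step would sit on top of this per-flow decision.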
This document proposes a modified Newton's method for solving nonlinear equations that uses harmonic mean. It begins by reviewing Newton's method and some existing variants that use arithmetic mean or other integration rules to modify Newton's method and achieve cubic convergence without using second derivatives. It then presents the new Harmonic-Simpson-Newton method, which replaces the arithmetic mean in an existing Simpson Newton's method with harmonic mean. The method is proven to have cubic convergence. Numerical examples are provided to compare the efficiency of the new method to other cubic convergent methods.
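One member of this family, the harmonic-mean variant of Newton's method, is easy to state. The sketch below uses the harmonic mean of two derivative evaluations; the document's Harmonic-Simpson-Newton scheme modifies a Simpson-rule denominator in the same spirit, but its exact formula is not reproduced here:

```python
def harmonic_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Cubically convergent Newton variant: replace f'(x_n) by the harmonic
    mean of f'(x_n) and f'(z_n), where z_n is the plain Newton step."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = df(x)
        z = x - fx / d                      # ordinary Newton predictor
        dz = df(z)
        x = x - fx * (d + dz) / (2 * d * dz)  # f(x) / harmonic_mean(d, dz)
    return x
```

Note that cubic convergence is bought with one extra derivative evaluation per step, with no second derivative required, which is the selling point of this whole family of methods.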
QSAR studies of some anilinoquinolines for their antitumor activity as EGFR i... - IOSR Journals
Quantitative structure-activity relationship (QSAR) studies have been performed on some anilinoquinolines. A variety of parameters including 2D-autocorrelation, RDF, 3D-MoRSE, WHIM and GETAWAY parameters have been chosen for modeling the antitumor activity of these compounds. The multiple regression analysis reveals that the seven-parametric model is the best for modeling the activity of the compounds under the present study. This model has been tested using cross-validated parameters. The results are also discussed on the basis of ridge regression.
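Ridge regression, mentioned at the end, has a simple closed form. A minimal sketch (synthetic data, not the study's actual descriptor matrix):

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

The penalty `lam` shrinks the coefficients, which stabilizes QSAR models whose molecular descriptors are strongly intercorrelated.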
Asif's Equation of Charge Variation and Special Relativity - IOSR Journals
The theory of special relativity plays an important role in the modern theory of classical electromagnetism. Considering deeply the effect of special relativity in electromagnetism, when a charged particle moves with a speed comparable to the speed of light in a vacuum tube or in space under the influence of an electromagnetic field, its mass varies under the Lorentz transformation [1]. The question arises: does its charge vary under the Lorentz transformation? In this paper, Asif's equation of charge variation demonstrates the variation of electric charge under the Lorentz transformation. A more sophisticated view of electromagnetism expressed by electromagnetic fields in a moving inertial frame can be achieved by considering some relativistic effects, including charge as well. One can easily obtain the mass-energy relation from Asif's equation of charge variation, as proved in this paper.
Enhancing Security of Multimodal Biometric Authentication System by Implement... - IOSR Journals
Abstract: Conventional personal identification techniques, for instance passwords, tokens, ID cards and PIN codes, are prone to theft or forgery, and biometrics is thus a solution to this. Biometrics is the way of recognizing and scrutinizing the physical traits of a person. Automated biometric verification serves as a convenient and legitimate method, but there must be an assurance of its cogency. Furthermore, in most cases unimodal biometric recognition is not able to meet the performance requirements of the applications. According to recent trends, recognition based on multimodal biometrics is emerging at a greater pace. Multimodal biometrics unifies two or more biometric traits, and thus the issues that emerge in unimodal recognition can be mitigated in multimodal biometric systems. But with the rapid growth of information technology, even biometric data is not secure. Digital watermarking is one technique that is implemented to secure biometric data from inadvertent or premeditated attacks. This paper propounds an approach that is projected in both directions: improving the performance of the biometric identification system by going multimodal, and increasing security through watermarking. The biometric traits are initially transformed using the Discrete Wavelet and Discrete Cosine Transformations and then watermarked using Singular Value Decomposition. The scheme depiction and presented outcomes justify the effectiveness of the scheme.
Keywords: Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), Multimodal biometrics,
Singular Value Decomposition, Watermarking
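The SVD watermarking stage can be illustrated in isolation (omitting the DWT/DCT transforms). Embedding in the singular values with a small strength `alpha` is a common scheme, though not necessarily the paper's exact one; it assumes the cover's singular-value gaps are much larger than the perturbation:

```python
import numpy as np

def svd_embed(cover, wm, alpha=0.01):
    """Embed a watermark by perturbing the cover's singular values."""
    U, S, Vt = np.linalg.svd(cover)
    marked = U @ np.diag(S + alpha * wm) @ Vt
    return marked, S                       # keep S as the extraction key

def svd_extract(marked, S, alpha=0.01):
    """Recover the watermark from the marked image's singular values."""
    Sm = np.linalg.svd(marked, compute_uv=False)
    return (Sm - S) / alpha

# toy cover with well-separated singular values (gaps >> alpha * |wm|)
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((8, 8)))
Q2, _ = np.linalg.qr(rng.standard_normal((8, 8)))
cover = Q1 @ np.diag(np.arange(8, 0, -1.0)) @ Q2.T
wm = rng.uniform(0, 1, 8)
marked, key = svd_embed(cover, wm)
recovered = svd_extract(marked, key)
```

Singular values are relatively stable under common image distortions, which is why SVD-domain embedding is popular for robust watermarking.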
Electrophoretic Patterns of Esterases in Eri silkworm Samia Cynthia ricini - IOSR Journals
The present study was carried out to investigate the patterns of esterase isozymes extracted from the silk gland, haemolymph and mid gut of Eri silkworm (Samia Cynthia ricini). The qualitative analysis of esterases was carried out by 7.5% of native Polyacrylamide Gel Electrophoresis (PAGE). The inhibitor sensitivity of the enzymes towards paraxon, eserine and pCMB was used to classify the individual zones of esterases. Three zones of esterases were observed in different tissues of Eri silkworm. Silk gland esterases were classified as CHsp (Cholinesterase like enzymes) esterases. The haemolymph and mid gut esterases were classified into Esdp (Enzyme inhibited by paraxon and pCMB).
Prevalence of Iron Deficiency Anaemia among Pregnant Women in Calabar, Cross ... - IOSR Journals
Iron is a component of a number of proteins including haemoglobin, myoglobin, cytochromes and enzymes involved in redox reactions. Inadequate iron intake can lead to varying degrees of deficiency, from low iron stores to early iron deficiency and iron-deficiency anaemia, and this is dangerous to both baby and mother. The objective of this study is to assess the prevalence of iron deficiency and iron deficiency anaemia among pregnant women in Calabar, Cross River State, Nigeria. Seventy pregnant women within the age range of 15-45 years from the University of Calabar Teaching Hospital were recruited as subjects. The controls consisted of fifty age-matched, apparently healthy non-pregnant women. The tests carried out using standard methods include full blood count (packed cell volume, haemoglobin, mean cell haemoglobin, mean cell haemoglobin concentration and red cell count), serum iron, total iron binding capacity, transferrin saturation, serum ferritin and soluble transferrin receptor. The prevalence of anaemia and iron deficiency anaemia was found to be significantly higher (p<0.05) in pregnant than in non-pregnant women. It was also shown that pregnant women in their third trimester and multigravidae had the highest prevalence of iron deficiency and iron deficiency anaemia, while pregnant women in their second trimester had the highest prevalence of anaemia. In conclusion, the study has shown that the prevalence of anaemia, iron deficiency and iron deficiency anaemia among pregnant women in the studied area is still high and can be considered a public health problem.
βComparitive Study of Prevalence of Hyperlactatemia in HIV / AIDS Patients re...IOSR Journals
Β
Hyperlactatemia is one of the important metabolic abnormalities in HIV infected patients. The
prevalence of hyperlactatemia in natural course of HIV disease is approximately about 2%. Aim of this study is
to estimate the prevalence of hyperlactatemia in HIV patients receiving two antiretroviral regimens, advocated
by NACO by monitoring the plasma lactate levels. This study was taken up with 200 patients to compare the
prevalence of hyperlactatemia of two commonly used NACO regimens (zidovudine+ lamivudine+ nevirapine)
Vs (stavudine+ lamivudine+ nevirapine). The plasma lactate levels were estimated between 9th to 18thmonth
after initiation of antiretroviral therapy. The comparision and correlation between plasma lactate levels, CD4
counts and haemoglobin percentage in both regimens was done. There was statistically significant rise in the
plasma lactate levels (p<0.05) in both regimens. The increase in plasma lactate levels is more in stavudine
group compared to zidovudine group. There was low degree of positive correlation between plasma lactate and
haemoglobin in Stavudine group but negative correlation between Plasma lactate and CD4 counts in both
groups. More focus is needed on Pharmacovigilance of NRTIs induced hyperlactatemia especially Stavudine.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A new approach for Reducing Noise in ECG signal employing Gradient Descent Me... (paperpublications3)
Abstract: The ECG is the main tool used by physicians for identifying and interpreting heart conditions, and it should be noise-free and of good quality for correct diagnosis. In real-time situations, however, ECGs are corrupted by many types of noise, high-frequency noise among them. In this thesis, the use of a neural network for de-noising the ECG signal is analyzed. A multilayer artificial neural network (ANN) is designed, and the gradient descent method (GDM) is used to train it. The noisy ECG signal is given as input to the network, the output is compared with the clean (original) ECG signal, and the root mean square error (RMSE) is computed. During training the weights are updated until the RMSE is minimized; several iterations are performed to reach the minimum mean square error (MMSE), at which point the network weights are finalized and used for noise reduction. Comparison with other techniques shows that the neural network method better preserves the signal waveform at the system output while reducing noise. The results show better accuracy in terms of root mean square error, signal-to-noise ratio and smoothness (RMSE, SNR and R) compared to GOWT [18]. The database was collected from the MIT-BIH arrhythmia database.
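As a rough illustration of that training loop (not the paper's network: the window length, hidden size, learning rate and the synthetic sinusoid-plus-noise signal below are all assumptions), a single-hidden-layer network can be trained by plain gradient descent to map a noisy sample window to the clean centre sample:

```python
import numpy as np

# Illustrative sketch: train a small network by gradient descent to minimize
# the mean squared error between its output and the clean signal.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 5 * t)             # stand-in for a clean ECG trace
noisy = clean + 0.3 * rng.standard_normal(t.size)

win = 9                                        # input window length (assumed)
X = np.array([noisy[i:i + win] for i in range(t.size - win)])
y = clean[win // 2: t.size - win + win // 2]   # target = clean centre sample

hidden = 16
W1 = 0.1 * rng.standard_normal((win, hidden))
W2 = 0.1 * rng.standard_normal(hidden)
lr = 0.05

for epoch in range(2000):                      # full-batch gradient descent
    h = np.tanh(X @ W1)                        # hidden-layer activations
    out = h @ W2                               # linear output layer
    err = out - y
    gW2 = h.T @ err / len(y)                   # gradient w.r.t. output weights
    gW1 = X.T @ ((err[:, None] * W2) * (1 - h ** 2)) / len(y)  # backprop
    W2 -= lr * gW2
    W1 -= lr * gW1

rmse_before = np.sqrt(np.mean((noisy[win // 2: t.size - win + win // 2] - y) ** 2))
rmse_after = np.sqrt(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
print(rmse_before, rmse_after)
```

After training, the network's RMSE against the clean signal drops below that of the raw noisy input, which is the behaviour the abstract describes.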
Suppression of power line interference, correction of baseline wanders and... (IAEME Publication)
This document summarizes a research paper that proposes a new method for enhancing electrocardiogram (ECG) signals based on the Constrained Stability Least Mean Square (CSLMS) algorithm. The CSLMS algorithm is applied in an adaptive noise cancellation filter to remove two dominant artifacts from ECG signals: high-frequency noise and baseline wander. Simulation results on ECG data from the MIT-BIH database show that the CSLMS method provides better denoising and artifact removal than the conventional LMS algorithm, improving the signal-to-noise ratio by 3-6 decibels. The CSLMS algorithm exhibits smaller excess mean squared error and faster convergence than LMS, resulting in less signal distortion.
Real time ECG signal analysis by using new data reduction algorithm for... (IAEME Publication)
This document summarizes a research paper that proposes a new method for compressing electrocardiogram (ECG) signals for transmission over wireless personal area networks (WPANs). The method uses curvature analysis to select feature points in the ECG signal, including the P, Q, R, S, and T waves, which are important for diagnosis. Additional points are then selected iteratively to minimize reconstruction errors when decompressing the signal. The researchers conclude that the curvature-based method is able to preserve all important diagnostic features of the ECG signal while significantly compressing the data size for transmission over bandwidth-limited WPANs.
Electrocardiogram Denoised Signal by Discrete Wavelet Transform and Continuou... (CSCJournals)
One of the commonest problems in electrocardiogram (ECG) signal processing is denoising. In this paper a denoising technique based on the discrete wavelet transform (DWT) has been developed. To evaluate the proposed technique, we compare it to the continuous wavelet transform (CWT). Performance evaluation using parameters such as mean square error (MSE) and signal-to-noise ratio (SNR) shows that the proposed technique outperforms the CWT.
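A minimal sketch of such wavelet denoising, using a single-level Haar transform and soft thresholding in place of the unspecified wavelet and decomposition level of the paper (the signal is a synthetic sinusoid, not ECG data):

```python
import numpy as np

# Single-level Haar DWT, soft thresholding of detail coefficients with
# Donoho's universal threshold, then inverse transform.
def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.2 * rng.standard_normal(512)

a, d = haar_dwt(noisy)
sigma = np.median(np.abs(d)) / 0.6745            # robust noise estimate (MAD)
thr = sigma * np.sqrt(2 * np.log(noisy.size))    # universal threshold
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0)  # soft thresholding
denoised = haar_idwt(a, d)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_noisy, mse_denoised)
```

The transform is orthonormal, so reconstruction without thresholding is exact; with thresholding, the MSE against the clean signal drops.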
Ensemble Empirical Mode Decomposition: An adaptive method for noise reduction (IOSR Journals)
Abstract: Empirical mode decomposition (EMD), a data analysis technique, is used to denoise non-stationary and non-linear processes. The method requires neither pre- nor post-processing of the signal nor the use of any specified basis functions, but it suffers from a problem called mode mixing. To overcome this problem, a new method known as Ensemble Empirical Mode Decomposition (EEMD) has been introduced. The present paper details EEMD and its applications in various fields. EEMD is a time-space analysis method in which added white noise is averaged out over a sufficient number of trials; the averaging process leaves only the component of the signal (the original data). EEMD is a truly noise-assisted data analysis (NADA) method and represents a substantial improvement over the original EMD. Keywords: data analysis, empirical mode decomposition, intrinsic mode function, mode mixing, NADA
This document proposes a peak detection algorithm for ECG and arterial blood pressure (ABP) signals based on empirical mode decomposition. It decomposes signals into intrinsic mode functions using the empirical mode decomposition technique. The algorithm was tested on various ECG and ABP datasets and implemented in MATLAB. It accurately detects peaks in ECG signals like the R-peak and features in ABP signals. The empirical mode decomposition approach adaptively decomposes biomedical signals in a data-driven manner for reliable peak detection.
This document presents an optimization of algorithms for real-time ECG beat classification. It compares algorithms using voltage values in the time domain versus those using Daubechies wavelet analysis. It extracts features around reference peaks within the QRS complex and uses clustering methods to classify beats in real-time as normal, premature ventricular contraction, or unclassified. Evaluating algorithms on 32 MIT-BIH records, the method using Daubechies wavelets and correlation measure achieved 93.25% sensitivity and 91.43% positive predictivity for premature ventricular contraction detection, making it suitable for real-time systems due to low computational cost.
ECG SIGNAL DENOISING USING EMPIRICAL MODE DECOMPOSITION (Sarang Joshi)
The document presents a method for denoising ECG signals corrupted with power line interference using empirical mode decomposition and thresholding. It provides background on sources of power line interference in ECG signals and existing approaches to remove it. The proposed approach decomposes noisy ECG signals into intrinsic mode functions using EMD, then applies various thresholding techniques to the IMFs to remove noise before reconstructing the signal. It tests the method on signals from the MIT-BIH Arrhythmia Database corrupted with 10-50% noise and evaluates performance based on correlation coefficient and SNR improvement. Results show that Donoho's thresholding and hard thresholding achieved the best denoising on these metrics.
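The thresholding operators mentioned (hard thresholding and Donoho-style soft thresholding with the universal threshold) can be sketched independently of the EMD step; the IMF array below is a toy example, not data from the paper:

```python
import numpy as np

# Hard vs. soft (Donoho) thresholding as applied sample-wise to an IMF;
# the EMD decomposition itself is omitted for brevity.
def hard_threshold(c, thr):
    return np.where(np.abs(c) > thr, c, 0.0)   # zero small, keep large intact

def soft_threshold(c, thr):
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)  # also shrink large

def universal_threshold(c):
    # Donoho's universal threshold with a robust (MAD-based) noise estimate
    sigma = np.median(np.abs(c)) / 0.6745
    return sigma * np.sqrt(2 * np.log(c.size))

imf = np.array([0.05, -0.8, 0.02, 1.3, -0.03, 0.6])   # toy IMF samples
thr = 0.5
print(hard_threshold(imf, thr))   # small samples zeroed, large kept as-is
print(soft_threshold(imf, thr))   # large samples additionally shrunk by thr
```

Hard thresholding preserves large excursions (such as QRS peaks) exactly, while soft thresholding trades a small bias for smoother output; the paper compares variants of both.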
ECG signal denoising using a novel approach of adaptive filters for real-time... (IJECEIAES)
The electrocardiogram (ECG) is considered the main signal that can be used to diagnose different kinds of diseases related to the human heart. During the recording process, it is usually contaminated with different kinds of noise, including power-line interference, baseline wandering and muscle contraction. To clean the ECG signal, several noise removal techniques have been used, such as adaptive filters, empirical mode decomposition, the Hilbert-Huang transform, wavelet-based algorithms, discrete wavelet transforms, the modulus maxima of the wavelet transform, patch-based methods, and many more. Unfortunately, these methods cannot be used for online processing since they take a long time to clean the ECG signal. The current research presents a unique method for ECG denoising using a novel approach to adaptive filtering. The suggested method was tested on a simulated signal using MATLAB software under different scenarios. Instead of using a reference signal for ECG denoising, the presented model uses a unit delay and the primary ECG signal itself. Least mean square (LMS), normalized least mean square (NLMS), and leaky LMS were used as adaptation algorithms in this paper.
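A minimal sketch of this delayed-input idea, with assumed details (16 taps, NLMS step size 0.05, and a synthetic sinusoid standing in for the ECG): the filter input is the primary signal delayed by one sample, so the adaptive filter learns to predict the correlated (signal) part while the uncorrelated noise remains in the prediction error.

```python
import numpy as np

# NLMS adaptive filter whose reference is the unit-delayed primary signal.
rng = np.random.default_rng(2)
n = 2000
t = np.arange(n) / 360.0                  # 360 Hz sampling rate (assumed)
clean = np.sin(2 * np.pi * 1.2 * t)       # stand-in for the ECG waveform
noisy = clean + 0.3 * rng.standard_normal(n)

taps, mu = 16, 0.05
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    x = noisy[i - taps:i][::-1]           # unit-delayed input vector
    y = w @ x                             # filter output = signal estimate
    e = noisy[i] - y                      # prediction error (mostly noise)
    w += mu * e * x / (x @ x + 1e-8)      # NLMS weight update
    out[i] = y

mse_noisy = np.mean((noisy[taps:] - clean[taps:]) ** 2)
mse_filtered = np.mean((out[taps:] - clean[taps:]) ** 2)
print(mse_noisy, mse_filtered)
```

The plain LMS and leaky LMS variants named in the abstract differ only in the weight-update line (unnormalized step, or an added weight-decay term).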
This document describes a study that uses Kohonen neural network (KNN) to automatically identify the cutoff frequency for denoising electrocardiogram (ECG) signals. The methodology involves collecting noisy ECG data, removing baseline wandering using empirical mode decomposition, transforming the signal to the frequency domain using fast Fourier transform, applying KNN to cluster the frequency coefficients and identify the cutoff frequency, and filtering the signal using a finite impulse response low pass filter with the identified cutoff frequency. The results show that the KNN approach more effectively denoises the ECG signals compared to conventional filtering methods by identifying a lower cutoff frequency that removes more noise.
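The final low-pass filtering step of the pipeline above can be sketched as follows; the windowed-sinc FIR design, sampling rate and cutoff below are illustrative assumptions, not values from the study (which obtains the cutoff from KNN clustering of FFT coefficients):

```python
import numpy as np

# Windowed-sinc FIR low-pass filter applied once a cutoff has been chosen.
fs = 256.0                 # sampling rate in Hz (assumed)
cutoff = 40.0              # cutoff frequency, e.g. as identified by clustering
numtaps = 51

n = np.arange(numtaps) - (numtaps - 1) / 2
h = 2 * cutoff / fs * np.sinc(2 * cutoff / fs * n)   # ideal low-pass response
h *= np.hamming(numtaps)                             # Hamming window
h /= h.sum()                                         # unity gain at DC

t = np.arange(1024) / fs
sig = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 100 * t)
filtered = np.convolve(sig, h, mode='same')

def tone_amp(x, f):
    # amplitude of the component at frequency f (integer cycles over the record)
    return 2 * np.abs(np.exp(-2j * np.pi * f * t) @ x) / x.size

print(tone_amp(filtered, 5.0), tone_amp(filtered, 100.0))
```

The 100 Hz component (above the cutoff) is strongly attenuated while the 5 Hz component passes nearly unchanged.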
This document summarizes a proposed FPGA-based ECG analysis system for arrhythmia detection. The system uses empirical mode decomposition (EMD) for ECG signal preprocessing to remove noise. EMD decomposes the noisy ECG signal into intrinsic mode functions (IMFs) and spectral flatness is used to identify noisy IMFs. After enhancement, R peak detection is performed using a threshold to extract heart rate for arrhythmia detection. The design was implemented on an FPGA board using Verilog and was able to detect arrhythmias through LEDs while using a small portion of the FPGA's resources.
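Spectral flatness, used above to flag noise-dominated IMFs, is the ratio of the geometric to the arithmetic mean of the power spectrum; a simple software sketch (the FPGA implementation details are omitted, and the two test signals are synthetic) is:

```python
import numpy as np

# Spectral flatness: near 1 for a flat, noise-like spectrum; near 0 for a
# tonal, signal-like spectrum. Simple single-periodogram version.
def spectral_flatness(x):
    psd = np.abs(np.fft.rfft(x)) ** 2 + 1e-12   # power spectrum; eps avoids log(0)
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

rng = np.random.default_rng(3)
noise = rng.standard_normal(1024)                         # noise-like "IMF"
tone = np.sin(2 * np.pi * 50 * np.arange(1024) / 360.0)   # tonal "IMF"
print(spectral_flatness(noise))   # markedly higher than for the tone
print(spectral_flatness(tone))
```

IMFs whose flatness exceeds a threshold would be treated as noise and discarded before reconstruction.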
Bio-medical (EMG) Signal Analysis and Feature Extraction Using Wavelet Transform (IJERA Editor)
This document summarizes research on analyzing electromyography (EMG) signals using wavelet transforms to extract features for classification of muscle activity. A multi-channel EMG acquisition system was developed using surface electrodes to measure forearm muscle signals. Different wavelet families were used to analyze the EMG signals. Features like root mean square, logarithm of root mean square, centroid frequency, and standard deviation were extracted. Root mean square feature extraction performed best. In the future, this method could be used to control prosthetic or robotic arms for real-time processing based on muscle activity.
ECG Signal Compression Technique Based on Discrete Wavelet Transform and QRS-... (CSCJournals)
The document presents a new ECG signal compression technique based on the discrete wavelet transform (DWT) and QRS-complex estimation. The technique first estimates the QRS-complex from the ECG signal, then subtracts it to form an error signal. This error signal is wavelet transformed, and the coefficients are thresholded based on energy packing efficiency to maximize the compression ratio and minimize distortion. Testing on MIT-BIH records showed the technique achieves a high compression ratio of 25.15 with a low distortion level of 0.7% PRD.
This document presents a method called the Hybrid Linearization Method for de-noising electrocardiogram (ECG) signals. The method combines Extended Kalman Filtering (EKF) with the Discrete Wavelet Transform (DWT): EKF is first used to de-noise the ECG signal, and DWT is then applied to further improve the quality of the de-noised signal. The algorithm and its steps are described. Results show that the proposed Hybrid Linearization Method achieves a lower root mean square error than EKF alone, demonstrating its effectiveness at de-noising ECG signals.
New Method of R-Wave Detection by Continuous Wavelet Transform (CSCJournals)
In this paper we employ a new method of R-peak detection in electrocardiogram (ECG) signals. The method is based on the application of the discretised Continuous Wavelet Transform (CWT) used for the Bionic Wavelet Transform (BWT); the mother wavelet associated with this transform is the Morlet wavelet. To evaluate the proposed method, we compare it to other methods based on the Discrete Wavelet Transform (DWT). In this evaluation, the ECG signals used are taken from the MIT-BIH database. The obtained results show that the proposed method outperforms the conventional techniques used in our evaluation.
Novel method to find the parameter for noise removal from multi channel ecg w... (eSAT Journals)
This document presents a novel method for removing noise from multi-channel electrocardiogram (ECG) waveforms using a multi-swarm optimization (MSO) approach. The method involves extracting features from ECG data, using MSO to identify an optimal cutoff frequency parameter for a finite impulse response (FIR) filter, and applying the FIR filter with the identified parameter to remove noise from the ECG signals. The MSO approach divides particles into multiple swarms that each focus on a region of the search space, helping to overcome the sensitivity to initial positions found in traditional particle swarm optimization. The resulting filtered ECG signals are evaluated against the original clean signals to validate the noise removal performance of the MSO-identified cutoff frequency parameter.
Rule Based Identification of Cardiac Arrhythmias from Enhanced ECG Signals Us... (CSCJournals)
The detection of abnormal cardiac rhythms, and their automatic discrimination from rhythmic heart activity, has become a thrust area in clinical research. Arrhythmia detection is possible by analyzing electrocardiogram (ECG) signal features. The presence of interference signals, such as power line interference (PLI), electromyogram (EMG) noise and baseline drift, can cause serious problems during the recording of ECG signals, and they often trouble modern control and signal processing applications by being narrow in-band interference near the frequencies carrying crucial information. This paper presents an approach for ECG signal enhancement that combines the attractive properties of principal component analysis (PCA) and wavelets, resulting in multi-scale PCA. Multi-Scale Principal Component Analysis (MSPCA) exploits PCA's ability to decorrelate the variables by extracting a linear relationship, together with wavelet analysis. The MSPCA method effectively processes the noisy ECG signal, and the enhanced signal features are used for clear identification of arrhythmias. In MSPCA, the principal components of the wavelet coefficients of the ECG data at each scale are computed first and are then combined at the relevant scales. Statistical measures computed in terms of root mean square deviation (RMSD), root mean square error (RMSE), root mean square variation (RMSV) and improvement in signal-to-noise ratio (SNRI) revealed that the Daubechies-based MSPCA outperformed basic wavelet-based processing for ECG signal enhancement. With the enhanced signal features obtained after MSPCA processing, the detectable measures, QRS duration and R-R interval, are evaluated. Using a rule-based technique that projects the detectable measures onto a two-dimensional area, various arrhythmias are detected depending on the region into which a beat falls.
Noise Cancellation in ECG Signals using Computationally... (CSCJournals)
Several signed-LMS-based adaptive filters, which are computationally superior in having multiplier-free weight update loops, are proposed for noise cancellation in the ECG signal. The adaptive filters essentially minimize the mean-squared error between a primary input, which is the noisy ECG, and a reference input, which is either noise correlated in some way with the noise in the primary input or a signal correlated only with the ECG in the primary input. Different filter structures are presented to eliminate the diverse forms of noise: 60 Hz power line interference, baseline wander, muscle noise and motion artifact. Finally, we apply these algorithms to real ECG signals obtained from the MIT-BIH database and compare their performance with the conventional LMS algorithm. The results show that the performance of the signed regressor LMS algorithm is superior to that of the conventional LMS algorithm, while the signed LMS and sign-sign LMS based realizations are comparable to LMS-based filtering in terms of signal-to-noise ratio and computational complexity.
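The signed variants differ from conventional LMS only in the weight-update term, which is what removes the multiplications from the update loop; a sketch of the four update rules (toy numbers, not ECG data):

```python
import numpy as np

# Weight-update rules for LMS and its three signed variants. In hardware,
# multiplying by a sign reduces to a conditional add/subtract.
def lms_update(w, x, e, mu):
    return w + mu * e * x                      # conventional LMS

def signed_regressor_update(w, x, e, mu):
    return w + mu * e * np.sign(x)             # sign applied to the input

def sign_update(w, x, e, mu):
    return w + mu * np.sign(e) * x             # sign applied to the error

def sign_sign_update(w, x, e, mu):
    return w + mu * np.sign(e) * np.sign(x)    # sign applied to both

w = np.zeros(3)
x = np.array([0.5, -2.0, 1.0])                 # toy input vector
e, mu = -0.4, 0.1                              # toy error and step size
print(signed_regressor_update(w, x, e, mu))    # [-0.04, 0.04, -0.04]
print(sign_sign_update(w, x, e, mu))           # [-0.1, 0.1, -0.1]
```

The sign-sign update needs no multiplications at all, which is why it is the cheapest to realize, at some cost in convergence behaviour.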
IOSR Journal of Applied Physics (IOSR-JAP)
e-ISSN: 2278-4861, Volume 4, Issue 1 (May.-Jun. 2013), PP 47-52
www.iosrjournals.org
De-Noising Corrupted ECG Signals By Empirical Mode
Decomposition (EMD) With Application of Higher Order
Statistics (HOS)
Mitra DJ¹, Shahjalal M², Kiber MA³
¹,³Department of Applied Physics, Electronics and Communication Engineering, University of Dhaka, Bangladesh
²Department of Basic Science, Primeasia University, Banani, Dhaka, Bangladesh
Abstract: The electrocardiogram (ECG) signals which are extensively used for heart disease diagnosis and
patient monitoring are usually corrupted with various sources of noise. In this paper, an algorithm is developed
to de-noise ECG signals based on Empirical Mode Decomposition (EMD) with application of Higher Order
Statistics (HOS). The algorithm is applied on several ECG signals for different levels of Signal to Noise Ratio
(SNR). The SNR improvement (SNRimp) and Percent Root mean square Difference (PRD (%)) are analyzed. The
results show that the developed algorithm de-noises ECG signals effectively.
Keywords: ECG, Empirical Mode Decomposition (EMD), Higher Order Statistics (HOS), Intrinsic Mode
Function (IMF)
I. Introduction
Biomedical signals reflect the nature and activities of physiological processes. The electrocardiogram
(ECG) is the electrical manifestation of the contractile activity of the heart. The ECG is essential for diagnosis,
and therefore management, of abnormal cardiovascular activity. Noise or unwanted signal components are always present in the ECG, making it difficult to analyze. Typical sources of noise are high-frequency noise, motion artifacts, maternal interference in fetal ECG, EMG noise, and instrumentation noise. De-noising the ECG signal is therefore a prerequisite to arriving at a proper diagnosis.
A number of methods have been applied to de-noise ECG signals, such as digital filters, ICA, PCA, adaptive filtering, and the wavelet transform, but the existing techniques have certain limitations. Filter-bank-based de-noising smoothes the P and R amplitudes of the ECG signal and is more sensitive to different levels of noise [1]. The statistical model derived in PCA and ICA is not only fairly arbitrary but also extremely sensitive to small changes in either the signal or the noise unless the basis functions are trained on a global set of ECG beat types; moreover, ICA does not allow prior information about the signals to be used for efficient filtering. Adaptive filtering requires reference signal information for effective filtering, and the reference signal has to be recorded in addition to the ECG [2]. Wavelets need a basis function to be specified; furthermore, hard-thresholding WT leads to oscillation of the reconstructed ECG signal, while the soft-thresholding method reduces the amplitudes of the ECG waveform, especially those of the R-waves, which are the most important for diagnosing heart disease [3].
In this paper, an algorithm has been developed to de-noise ECG signal based on Empirical Mode
Decomposition (EMD) along with application of Higher Order Statistics (HOS). The EMD method is totally
adaptive and data-driven. This method does not need a-priori basis function selection (i.e. a mother wavelet) for
signal decomposition. EMD is effectively used for signal de-noising in a wide range of applications, such as
acoustic signals, ionospheric signals, in the study of heart rate variability (HRV), analysis of respiratory
mechanomyographic signals, crackle sound analysis in lung sounds and enhancement of cardiograph signals.
The acceptance of the method as a processing tool is stressed by the large number of publications in diverse
areas of signal processing including financial applications, fluid dynamics, ocean engineering and
electromagnetic field time series analysis [4]. Thus EMD is a versatile method to de-noise and analyze non-
stationary signals.
II. Empirical Mode Decomposition (EMD)
The Empirical Mode Decomposition (EMD) method is a new non-linear technique, which was first
formulated by Dr. Norden Huang of NASA in 1996, for adaptively representing non-stationary signals as sums
of zero-mean AM-FM components [5]. This method gives very good results in the analysis of non-linear and non-stationary signals, especially the exact representation of the energy of the signal and its frequency content in relation to time. The main feature of the method is to analyze the signal in terms of structural components known as Intrinsic Mode Functions (IMFs), arising from the signal itself and not defined in advance, so that the
analysis can be considered not as a-priori, but instead a-posteriori. The final result is the ability to display the
spectrum of the signal as a function of time, much more accurately than traditional methods. An Intrinsic
Mode Function (IMF) represents the oscillating mode embedded in the original data. IMF is a function that
satisfies the following two conditions:
1. The total number of local extrema and the number of zero crossings should be equal to each other or differ by
at most 1.
2. At any point, the mean of the upper and lower envelopes, defined respectively by the local maxima and the local
minima, should be zero.
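These two conditions can be checked numerically. A minimal numpy sketch (the helper name and the mean-based approximation of condition 2 are illustrative, not from the paper):

```python
import numpy as np

def imf_conditions(h):
    """Numerically check the two IMF conditions (illustrative helper).

    Condition 1: the numbers of local extrema and zero crossings are equal
    or differ by at most one. Condition 2 (envelope mean ~ 0) is crudely
    approximated here by the overall signal mean."""
    h = np.asarray(h, dtype=float)
    zc = int(np.sum(np.diff(np.signbit(h)) != 0))            # zero crossings
    ext = int(np.sum(np.diff(np.signbit(np.diff(h))) != 0))  # local extrema
    return abs(ext - zc) <= 1, float(np.mean(h))

t = np.linspace(0, 1, 1000, endpoint=False)
ok, m = imf_conditions(np.sin(2 * np.pi * 5 * t))  # a pure tone is a valid IMF
```

A shifted tone such as sin(2π5t) + 2 never crosses zero but still has ten extrema, so it fails condition 1.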
2.1 Sifting Process: Extraction of IMF by EMD
The algorithm is as follows [5]:
Step 1: For a signal x(t), create upper envelope Emax(t) by local maxima and lower envelope Emin(t) by local
minima by using cubic splines interpolation.
Step 2: Calculate the mean of the upper and lower envelopes:
    m1(t) = [Emax(t) + Emin(t)] / 2    (1)
Step 3: Subtract the mean from the original data:
    h1(t) = x(t) - m1(t)    (2)
Step 4: Verify that h1(t) satisfies the conditions for an IMF. If not, repeat steps 1-4; the new candidate emerges as:
    h11(t) = h1(t) - m11(t)    (3)
where m11(t) is the mean value of the envelopes defined by the extrema of h1(t).
Step 5: Get the first IMF (after k iterations):
    h1k(t) = h1(k-1)(t) - m1k(t)    (4)
where k is the number of repetitions until the first IMF, h1k(t) = IMF1, occurs.
Step 6: Calculate the first residue:
    r1(t) = x(t) - h1k(t)    (5)
Step 7: Repeat the whole algorithm with r1(t), r2(t), ... until the residue is a monotonic function.
Step 8: After n iterations, x(t) is decomposed according to the equation:
    x(t) = sum_{i=1}^{n} IMFi(t) + rn(t)    (6)
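The sifting procedure of Steps 1-8 can be sketched in numpy. This is a simplified illustration, not the authors' implementation: linear interpolation stands in for the cubic-spline envelopes of Step 1, and a crude energy criterion stands in for the IMF check of Step 4; all names are illustrative:

```python
import numpy as np

def sift(x, n_imfs=3, max_iter=50, tol=1e-3):
    """Minimal EMD sketch of Steps 1-8 (illustrative simplification)."""
    x = np.asarray(x, dtype=float)
    idx = np.arange(len(x))
    imfs, r = [], x.copy()
    for _ in range(n_imfs):
        h = r.copy()
        for _ in range(max_iter):
            d = np.diff(h)
            up = np.where((np.r_[d, -1.0] < 0) & (np.r_[1.0, d] > 0))[0]  # maxima
            dn = np.where((np.r_[d, 1.0] > 0) & (np.r_[-1.0, d] < 0))[0]  # minima
            if len(up) < 2 or len(dn) < 2:       # Step 7: residue is monotonic
                return imfs, r
            emax = np.interp(idx, up, h[up])     # upper envelope (Step 1)
            emin = np.interp(idx, dn, h[dn])     # lower envelope (Step 1)
            m = 0.5 * (emax + emin)              # Eq (1)
            h = h - m                            # Eqs (2)-(4): sifting step
            if np.mean(m ** 2) < tol * np.mean(h ** 2):
                break                            # envelope mean is ~ zero
        imfs.append(h)                           # Step 5: next IMF
        r = r - h                                # Eq (5): residue
    return imfs, r

t = np.linspace(0, 1, 2000, endpoint=False)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 5 * t)  # fast + slow tone
imfs, r = sift(x, n_imfs=2)
# IMF1 should track the fast tone; IMF2 plus the residue, the slow one
```

By construction the IMFs and the residue sum back to the original signal exactly, mirroring Eq (6).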
Higher Order Statistics (HOS)
The use of higher-order statistics provides insight into signals which is not always available at lower
orders. Additionally, Gaussian-distributed signals have the interesting characteristic of disappearing at higher
orders. Because so much of the noise and interference environment is Gaussian-distributed, higher order
statistics thus offer the promise of a useful method of noise reduction [6]. ECG signal is easily contaminated by
different sources of noises, as mentioned previously. Different noise sources can be approximated by a white
Gaussian noise source [7]. So, after EMD is used to decompose the ECG signal into its IMFs, higher order statistics are a good choice to remove any Gaussian scales from the signal. In this paper, the Kurtosis and Bispectrum of every IMF are used as HOS parameters to check Gaussianity, followed by a bootstrap technique.
In probability theory and statistics, Kurtosis is a measure of the peakedness of the probability
distribution of a real-valued random variable. Kurtosis is defined as the normalized version of the fourth order
Cumulant. Assuming a zero-mean signal, the normalized Kurtosis is expressed as:
    K4 = E[x(t)^4] / (E[x(t)^2])^2 - 3 = [(1/N) sum_{n=1}^{N} x(n)^4] / [(1/N) sum_{n=1}^{N} x(n)^2]^2 - 3    (7)
where N is the number of signal samples and n = 1, 2, ..., N.
For a Gaussian signal, the Kurtosis estimator is bounded as:
    |K4| <= sqrt(24/N) * lambda_(1-a)    (8)
where a is the authorized confidence percentage, with a numerically estimated optimum equal to 90%, and lambda_(1-a) is the corresponding quantile.
However, even though the Kurtosis of a Gaussian signal is restricted by equation (8), in fact Kurtosis estimation
may still be invalid, especially when the samples of the signal are not numerous enough to ensure convergence.
In these cases, the solution comes in the form of the bootstrap technique.
Bootstrap is a statistical method to increase the accuracy of the estimator, and it is very effective in
cases when the available signal samples are limited. Bootstrap does exactly what a scientist would do in practice if it were possible: it repeats the experiment many times. Bootstrap randomly reassigns the observations, re-computes the
estimates many times and treats these reassignments as repeated experiments [8].
In this study Bootstrap is used to evaluate the Kurtosis of each of the signal IMFs right after EMD. The
number of the IMF samples reassignments is limited to 1000 for computational load purposes. The Bootstrap
algorithm gives maximum and minimum estimates of the Kurtosis value for each of the signal IMFs, which are in turn compared with the theoretical Kurtosis limit calculated by equation (8).
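A minimal sketch of the kurtosis estimate of Eq. (7) and a bootstrap of it (resampling with replacement, as described above; the sample sizes and seeds are arbitrary choices):

```python
import numpy as np

def excess_kurtosis(x):
    """Normalized fourth-order cumulant of Eq. (7) for a zero-mean signal."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

def bootstrap_kurtosis(x, n_boot=1000, seed=0):
    """Range of the kurtosis estimate over bootstrap resamples
    (random reassignment with replacement, repeated n_boot times)."""
    rng = np.random.default_rng(seed)
    ks = [excess_kurtosis(rng.choice(x, size=len(x), replace=True))
          for _ in range(n_boot)]
    return min(ks), max(ks)

rng = np.random.default_rng(1)
g = rng.standard_normal(2000)                  # Gaussian: kurtosis near 0
s = np.sin(np.linspace(0, 40 * np.pi, 2000))   # sinusoid: kurtosis near -1.5
k_lo, k_hi = bootstrap_kurtosis(g, n_boot=200)  # interval around the estimate
```

A Gaussian sample stays close to zero excess kurtosis while a sinusoid sits near -1.5, which is what makes the test discriminative.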
Signal Bispectrum is another candidate to test Gaussianity. The Bispectrum is defined as the third
order Spectrum of the signal and is calculated either as the Fourier transform of its third order Cumulant, or as
the triple product of its Fourier coefficients [9]. That is:
    B2x(ω1, ω2) = E[X(ω1) X(ω2) X*(ω1 + ω2)] = m3[X(ω1) X(ω2) X*(ω1 + ω2)]    (9)
where |ω1| <= π, |ω2| <= π, |ω1 + ω2| <= π,
X(ω) is the Fourier transform of the signal x(n), and m3 is the third-order moment.
The Bispectrum of a Gaussian process is zero. However, there exist statistical processes where the
Bispectrum is zero despite deviating from Gaussianity. In other words, a non-Gaussian signal may have zero Bispectrum, but if a signal is Gaussian its Bispectrum has to be zero. To this end, using the Bispectrum criterion after the signal is classified as Gaussian by the Kurtosis test appears to be a good choice. Although the computational cost increases by using two Gaussianity estimators, doing so ensures that the IMF under examination can be safely classified as Gaussian and excluded from the signal reconstruction process. After Gaussianity is checked, a thresholding rule is applied to the non-Gaussian IMFs.
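The triple-product form of Eq. (9) suggests a direct bispectrum estimate by averaging over signal segments. A sketch, not the authors' implementation; the segment length and the test signals are illustrative:

```python
import numpy as np

def bispectrum_point(x, k1, k2, nfft=128):
    """Direct bispectrum estimate at FFT-bin pair (k1, k2), per Eq. (9):
    the average of X[k1] X[k2] X*[k1+k2] over non-overlapping segments."""
    x = np.asarray(x, dtype=float)
    segs = x[: (len(x) // nfft) * nfft].reshape(-1, nfft)
    segs = segs - segs.mean(axis=1, keepdims=True)   # remove per-segment DC
    X = np.fft.fft(segs, axis=1)
    return np.mean(X[:, k1] * X[:, k2] * np.conj(X[:, k1 + k2]))

rng = np.random.default_rng(0)
n = 64 * 128
t = np.arange(n)
gauss = rng.standard_normal(n)                  # Gaussian noise
s = np.cos(2 * np.pi * 8 * t / 128)             # tone exactly on bin 8
coupled = s + s ** 2                            # quadratic phase coupling
b_gauss = abs(bispectrum_point(gauss, 8, 8))
b_coupled = abs(bispectrum_point(coupled, 8, 8))
# the coupled signal shows a strong bispectral peak at (8, 8); Gaussian noise does not
```

The squared tone couples the phases at bins 8, 8, and 16, so the triple product adds coherently; for the Gaussian input the random phases average the estimate toward zero.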
The threshold is a modified version of the universal threshold proposed by Donoho [1], expressed as:
    Ti = C sqrt(Vi * 2 ln(N))    (10)
where Vi is the noise variance estimated by the noise model for the i-th IMF (i >= 2), N is the number of signal samples, and C is a constant experimentally found to take values from 0.7 to 1 depending on the type of signal.
The assumption that the total noise energy is captured by the first IMF is not valid in the general case; therefore the noise variance of the first IMF is estimated using a better estimator as:
    V1 = [median(|IMF1|) / 0.6745]^2    (11)
An alternative approach for the noise variance estimator takes the absolute median deviation of the first IMF into account:
    V1 = [median(|IMF1 - median(IMF1)|) / 0.6745]^2    (12)
A series of simulations concluded that the second of these two versions of the noise variance estimator performs better for all types of signals.
Then the variance of each of the IMFs can be parameterized as a function of the first-IMF variance as:
    Vi = (V1 / βH) ρH^(-2(1-H)i),  i >= 2    (13)
where βH is experimentally estimated for three values of the Hurst exponent as (H=0.2, βH=0.487), (H=0.5, βH=0.719), (H=0.8, βH=1.025), and ρH ≈ 2.
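Equations (10)-(13) combine into a per-IMF threshold computation. A sketch under the stated constants (H = 0.5, βH = 0.719, ρH ≈ 2, C = 0.7); the synthetic IMFs are only for illustration:

```python
import numpy as np

def imf_thresholds(imfs, H=0.5, beta_H=0.719, rho_H=2.0, c=0.7):
    """Per-IMF thresholds following Eqs (10)-(13). Only IMF1 is inspected;
    the higher-order variances follow the noise model of Eq. (13)."""
    N = len(imfs[0])
    # Eq (12): robust noise variance of IMF1 from its absolute median deviation
    m = np.median(imfs[0])
    V1 = (np.median(np.abs(imfs[0] - m)) / 0.6745) ** 2
    thresholds = []
    for i in range(2, len(imfs) + 1):
        Vi = (V1 / beta_H) * rho_H ** (-2.0 * (1.0 - H) * i)   # Eq (13)
        thresholds.append(c * np.sqrt(Vi * 2.0 * np.log(N)))   # Eq (10)
    return V1, thresholds

rng = np.random.default_rng(0)
fake_imfs = [rng.standard_normal(4096) * s for s in (1.0, 0.7, 0.5, 0.3)]
V1, T = imf_thresholds(fake_imfs)
# thresholds shrink with IMF index because Vi decays geometrically in i
```

For unit-variance Gaussian noise in IMF1, Eq. (12) returns V1 close to 1 since the median absolute deviation of a Gaussian is 0.6745 times its standard deviation.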
Having now determined the thresholds for each IMF, the de-noising method would necessitate zeroing
the portion of the IMF which is below the threshold. However, the IMF nature requires setting to zero the IMF
portion between two adjacent zero crossings, when the absolute maximum of the IMF in this interval is below
the predefined threshold. This is based on the assumption that if the extremum lying inside an interval between two adjacent zero crossings exceeds the threshold, the interval is signal-dominant; otherwise it is noise-dominant [10].
The thresholding operation is applied, for the EMD case, to every interval between two successive zero crossings, z_i^(j) = [z_i^j, z_i^(j+1)]. The hard-thresholding can be expressed as:

    z_i^(j) = z_i^(j),  if |x_i^(j)| > T_i
    z_i^(j) = 0,        if |x_i^(j)| <= T_i    (14)

where z_i^(j) denotes the thresholding interval, i is the IMF order index, x_i^(j) is the j-th extremum of the i-th IMF, and j = 1, 2, ..., (M_i - 1), with M_i being the number of zero crossings of the i-th IMF.
For the soft-thresholding case, z_i^(j) is given as:

    z_i^(j) = z_i^(j) (|x_i^(j)| - T_i) / |x_i^(j)|,  if |x_i^(j)| > T_i
    z_i^(j) = 0,                                      if |x_i^(j)| <= T_i    (15)

The thresholded IMF is formed by concatenating the thresholded intervals as:

    IMF_i = [z_i^(1), z_i^(2), z_i^(3), ..., z_i^(M_i)]    (16)
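The interval-wise thresholding of Eqs. (14)-(16) can be sketched as follows (a simplified stand-in: the interval extremum is taken as the maximum absolute sample, and the test signal is synthetic):

```python
import numpy as np

def emd_interval_threshold(imf, T, soft=False):
    """Interval-wise thresholding of one IMF, Eqs (14)-(15): each interval
    between adjacent zero crossings is kept, shrunk (soft) or zeroed,
    depending on its absolute extremum versus the threshold T."""
    imf = np.asarray(imf, dtype=float)
    out = np.zeros_like(imf)
    zc = np.where(np.diff(np.signbit(imf)))[0] + 1   # interval boundaries
    bounds = np.r_[0, zc, len(imf)]
    for a, b in zip(bounds[:-1], bounds[1:]):
        if b <= a:
            continue
        seg = imf[a:b]
        peak = np.max(np.abs(seg))                   # the interval's extremum
        if peak > T:
            out[a:b] = seg * (peak - T) / peak if soft else seg
    return out

t = np.linspace(0, 1, 1000, endpoint=False)
imf = np.sin(2 * np.pi * 10 * t) * (1 + 2 * (t > 0.5))  # weak then strong half
hard = emd_interval_threshold(imf, T=1.5)
soft = emd_interval_threshold(imf, T=1.5, soft=True)
# intervals whose extremum is below 1.5 (the weak first half) are zeroed
```

Unlike sample-wise thresholding, zeroing whole zero-crossing intervals leaves no isolated spikes inside an interval, which matches the IMF-oriented rationale above.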
III. Proposed Algorithm to De-Noise ECG Signal
The main objectives in suppressing noise from an ECG signal are:
(a) Improving the signal-to-noise ratio (SNR), in order to unambiguously distinguish the characteristics of the signal.
(b) Non-alteration of the original waveform shape, especially that of the QRS complex, preventing deformation of the P and T waves, and maintaining a proper ST segment so that the T wave remains visible.
Regarding these facts, the steps to de-noise the ECG signal are as follows:
Step 1: Empirical Mode Decomposition (EMD) of the ECG signal
Step 2: Delineation and separation of QRS complex
Step 3: Preservation of the QRS complex by using proper window
Step 4: Suppression of noise from intermediate portions of QRS complexes
Step 5: Checking Gaussianity of IMFs by higher order statistics (HOS)
Step 6: Thresholding the non-Gaussian IMFs and reconstruction of the ECG signal
IV. Implementation of The Algorithm
The developed algorithm is applied on ECG signals, which were taken from the Department of
Biomedical Physics and Technology, University of Dhaka. To observe the versatility of the technique the
algorithm is implemented on ECG signals of all 12 leads and for different levels of input SNR. Figs. 1 and 2 show an uncorrupted signal and a corrupted noisy signal. The noisy signal is decomposed into IMFs, which are
shown in Fig. 3.
The basic principle of de-noising by EMD is to represent the de-noised signal with a partial sum of the
IMFs. Although various approaches have been proposed to identify whether a specific IMF contains useful
information or noise, their performances are not satisfactory when directly applied to the problem of ECG de-
noising, as discussed next.
Examining the IMFs in Fig. 3, it is easy to see that IMF1 contains almost nothing but high-frequency noise, and that the remaining IMFs can be considered to mainly contain useful information about the ECG components, except IMF2, which contains both high-frequency noise and components of the QRS complex. Here lies the dilemma. If IMF1 is simply discarded as noise, the output will still contain considerable noise, as illustrated in Fig. 4(a). If IMF2 is removed as well, the resultant ECG will have its R waves heavily distorted, as shown in Fig. 4(b). Therefore, neither result is satisfactory.
The rate of information change in the QRS complex is very high compared to that of the other parts
of an ECG signal. An analysis of the EMD on clean and noisy ECG indicates that the QRS information is
mainly embedded in the first three high frequency IMFs. As a consequence, in a noisy case, a desirable
approach to de-noise the corrupted ECG signal y[n] in the EMD domain would be to filter out the noisy parts of
the first three IMFs without discarding the IMFs completely thus preserving the QRS complex. Now, adding
first three IMFs: d[n] = IMF1 + IMF2+ IMF3 is obtained from the corresponding ECG signal. Fig. 5 presents
the original uncorrupted and noisy ECG signals and the respective plots of d[n] in each case. It is revealed from
this figure that the oscillatory pattern of the QRS complex, and that of the d[n] in the QRS complex region are
highly similar to each other. So, QRS complex portion is the least affected part of ECG by noise.
So, the algorithm to delineate the QRS complex is:
Step 1: Identify the fiducial points, which are the peaks of the R-waves
Step 2: Sum the first three IMFs to obtain d[n]
Step 3: Find the two nearest local minima on both sides of the fiducial point
Step 4: Detect the two closest zero-crossing points on the left-hand side of the left minimum and on the right-hand side of the right minimum. These two points are identified as the boundaries of the QRS complex.
Next, a window function is designed to preserve the QRS complex. The window function is a time-domain window applied to the sum of the first three IMFs, d[n]. A general design guideline for the QRS-preserving window function is that it should be flat over the duration of the QRS complex and decay gradually to zero, so that a smooth transition introduces minimal distortion. In this work, a Tukey window is used. Fig. 6 shows the signal d[n] after the windowing operation.
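Steps 3-4 and the Tukey windowing can be sketched in numpy (a hand-rolled Tukey window is used here instead of scipy.signal.windows.tukey only to stay dependency-free; the synthetic d[n], the fiducial index, and all widths are illustrative):

```python
import numpy as np

def tukey(n, alpha=0.5):
    """Tapered-cosine (Tukey) window: flat centre, cosine-tapered edges."""
    w = np.ones(n)
    taper = int(alpha * (n - 1) / 2)
    if taper == 0:
        return w
    ramp = 0.5 * (1 + np.cos(np.pi * (np.arange(taper + 1) / taper - 1)))
    w[:taper + 1] = ramp
    w[n - taper - 1:] = ramp[::-1]
    return w

def qrs_bounds(d, r):
    """Steps 3-4 sketch: from the fiducial R-peak index r in d[n], walk to
    the nearest local minima on both sides, then on to the zero crossings
    just outside them, taken as the QRS boundaries."""
    l = r
    while 0 < l < len(d) - 1 and not (d[l] < d[l - 1] and d[l] < d[l + 1]):
        l -= 1
    g = r
    while 0 < g < len(d) - 1 and not (d[g] < d[g - 1] and d[g] < d[g + 1]):
        g += 1
    while l > 0 and d[l - 1] * d[l] > 0:           # zero crossing past left min
        l -= 1
    while g < len(d) - 1 and d[g] * d[g + 1] > 0:  # zero crossing past right min
        g += 1
    return l, g

# synthetic d[n]: an oscillatory burst standing in for the QRS region
i = np.arange(1000)
d = np.cos(2 * np.pi * (i - 500) / 50) * np.exp(-((i - 500) / 60.0) ** 2)
lo, hi = qrs_bounds(d, 500)
dw = np.zeros_like(d)
dw[lo:hi + 1] = d[lo:hi + 1] * tukey(hi - lo + 1, alpha=0.4)  # preserve QRS, taper edges
```

The flat centre of the window leaves the QRS samples untouched while the cosine tapers avoid the sharp truncation edges that would otherwise leak distortion.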
Then the noise in the intermediate portions is suppressed by a Savitzky-Golay (S-G) filter, chosen for its reported good performance on white Gaussian noise. Next, the Gaussianity of the IMFs is checked according to the HOS parameters discussed earlier. The result of applying HOS is shown in Fig. 7. IMFs 4 and 5 pass both tests of Gaussianity and are therefore discarded. The non-Gaussian IMFs of index higher than 3 are then thresholded according to the rules discussed earlier.
The thresholded non-Gaussian IMFs of index higher than 3 are added to the filtered signal to reconstruct the ECG signal.
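For reference, Savitzky-Golay smoothing amounts to a least-squares polynomial fit over each sliding window, realized as a fixed convolution. A minimal numpy version (scipy.signal.savgol_filter is the standard implementation; the window and order values here are illustrative):

```python
import numpy as np

def savgol(x, window=21, order=3):
    """Minimal Savitzky-Golay smoother: fit an `order`-degree polynomial to
    each length-`window` neighbourhood and evaluate it at the centre."""
    half = window // 2
    # Vandermonde design matrix of the local polynomial fit
    A = np.vander(np.arange(-half, half + 1), order + 1, increasing=True)
    h = np.linalg.pinv(A)[0]       # kernel: evaluates the fit at the centre
    xp = np.pad(np.asarray(x, dtype=float), half, mode='edge')
    return np.convolve(xp, h[::-1], mode='valid')

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(1000)
smooth = savgol(noisy)
# the smoothed signal is closer to the clean one than the noisy input is
```

A useful sanity check on the design: any polynomial of degree up to `order` passes through the filter unchanged in the interior, which is why S-G preserves peak shapes better than a plain moving average.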
V. Result and Discussion
The performance measures are as following:
(a) Improvement in Signal-to-Noise Ratio:
    SNRimp = 10 log10 [ sum_{n=1}^{N} (y[n] - x[n])^2 / sum_{n=1}^{N} (x̂[n] - x[n])^2 ]
where x[n] denotes the original ECG signal, y[n] denotes the noisy ECG signal, and x̂[n] denotes the reconstructed de-noised ECG signal.
(b) Percent Root Mean Square Difference:
    PRD = sqrt[ sum_{n=1}^{N} (x[n] - x̂[n])^2 / sum_{n=1}^{N} x[n]^2 ] * 100
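Both performance measures are straightforward to compute; a sketch with synthetic stand-ins for the clean, noisy, and de-noised signals:

```python
import numpy as np

def snr_imp(x, y, x_hat):
    """SNR improvement (dB): noise energy before vs. after de-noising."""
    return 10.0 * np.log10(np.sum((y - x) ** 2) / np.sum((x_hat - x) ** 2))

def prd(x, x_hat):
    """Percent root-mean-square difference of the reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 1000))     # stand-in "clean" signal
y = x + 0.3 * rng.standard_normal(1000)         # noisy observation
x_hat = x + 0.1 * rng.standard_normal(1000)     # pretend de-noised output
# residual noise power drops ~9x here, so SNRimp lands near 10*log10(9) dB
```

A larger SNRimp and a smaller PRD both indicate a better reconstruction; PRD is normalized by the clean-signal energy, so it is comparable across leads.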
The mean values of these measures for different values of input SNR, for both soft- and hard-thresholding, across all 12 leads are shown in the table.
The results show that the output SNR of the ECG signals is reasonably improved by de-noising with the developed algorithm. The PRD is also decreased to a satisfactory value, which demonstrates the applicability of the algorithm in real-world environments.
VI. Conclusion
EMD is a very effective method to decompose non-stationary signals. However, when it comes to de-noising the ECG signal, direct de-noising cannot be done because it degrades the quality of the signal. In this paper, an algorithm is developed to de-noise ECG signals using EMD along with HOS, and it has proved to perform quite well. The next step is to apply the algorithm to signals recorded in real-world noisy environments. If the developed algorithm can de-noise such signals successfully, it could be implemented on micro-controller chips for use in ECG recorder machines through integrated circuits (ICs).
Figures and Table
Fig 1: Original uncorrupted signal    Fig 2: Signal corrupted by noise with SNR 10 dB
Fig 3: Noisy signal decomposed into IMFs
(a) (b)
Fig 4: Direct ECG de-noising (a) removing IMF1 (b) removing IMF1 and IMF2
Fig 5: Uncorrupted ECG, noisy ECG and d[n]    Fig 6: QRS complex preserved by windowing on d[n]
(a) (b)
Fig 7: Application of HOS (a) Kurtosis (b) Bispectrum
Table: Results
Input SNR | SNRimp (Soft) | SNRimp (Hard) | PRD (Soft) | PRD (Hard)
5 dB      | 3.3308        | 2.9626        | 3.9073     | 4.3065
10 dB     | 5.2309        | 3.877         | 2.6595     | 3.2624
15 dB     | 10.7506       | 7.7349        | 2.8792     | 3.2624
20 dB     | 15.5209       | 12.6355       | 2.7441     | 3.0307
REFERENCES
[1] L. Sharma, S. Dandapat, and A. Mahanta, Multiscale Wavelet Energies and Relative Energy Based De-noising of ECG Signal, in Communication Control and Computing Technologies, IEEE International Conference, 2010, 491-495.
[2] T. He, G. Clifford, and L. Tarassenko, Application of ICA in Removing Artifacts from the ECG, Neural Processing Letters, 2006, 105-116.
[3] G. U. Reddy, M. Muralidhar, and S. Varadarajan, EMD De-noising Using Improved Thresholding Based on Wavelet Transform, IJCSNS, 9(9), 2009.
[4] A. Karagiannis and Ph. Constantinou, On the Processing of White Gaussian Noise Biomedical Signals with the Empirical Mode Decomposition, Biosignal, 2010.
[5] N. Huang and N. O. Attoh-Okine, The Hilbert-Huang Transform in Engineering (CRC Press, Taylor & Francis Group, 6000 Broken Sound Parkway NW, Boca Raton, FL 33487-2742, ISBN-10: 0-8493-3422-5).
[6] D. R. Green, The Utility of Higher Order Statistics in Gaussian Noise Suppression, Naval Postgraduate School, 2003.
[7] P. Flandrin, G. Rilling, and P. Goncalves, Empirical Mode Decomposition as a Filter Bank.
[8] A. M. Zoubir and B. Boashash, The Bootstrap and its Application in Signal Processing, IEEE Signal Processing Magazine, 15(1), 1998, 56-76.
[9] A. Al-Smadi, Tests for Gaussianity of a Stationary Time Series, World Academy of Science, Engineering and Technology, 2005.
[10] Y. Kopsinis and S. McLaughlin, Development of EMD Based De-noising Methods Inspired by Wavelet Thresholding, IEEE Transactions on Signal Processing, 57, 2009, 1351-1362.