This document describes a program for analyzing high-speed camera images of turbulent flames. The program takes flame image files as input and generates a probability density function of radii of curvature along the flame boundary contour. It also tracks flame area and penetration over time. Key steps include thresholding images, finding boundary points, connecting points using Delaunay triangulation and nearest neighbors, fitting polynomials to contour segments, and calculating radii of curvature along the contour.
Final Report
North Carolina State University
College of Engineering
Department of Mechanical and Aerospace Engineering
MAE-704, Spring 2014
Fluid Dynamics of Combustion II
Image Analysis of Turbulent Flame Tomography
Aaron McCullough
Abstract
A program for the analysis of turbulent flame images is presented. Flame image files are
accepted as inputs, and a contour of each flame boundary is then generated to produce a
probability density function of radii of curvature along the contour. Once this has been achieved,
turbulent scales responsible for flame shape can be ascertained based on flame geometry.
Secondary outputs are flame area and flame penetration as a function of flame lifetime. The code
is demonstrated to be a useful tool for post-processing of high-speed camera imaging, and areas
of improvement and further study are presented.
1. Introduction
High-speed photography is one of the most widely used experimental tools for combustion
research. One area of particular interest in which this tool is utilized is that of turbulent flame
imaging. Most scholarly articles on this topic deal with laser diagnostic techniques of flame
experimentation, but the specific area to which this project seeks to contribute is that of turbulent
flame tomography. Reference [1] provides an excellent example of what is being done in this
field today. In that particular study, flame images were converted to binary images and Fourier
transformations were used to determine flame curvature distribution. The method presented in
the following pages takes a more visual approach by directly analyzing turbulent flame images.
Once image data is obtained, the necessity arises for programming aids which can quickly
analyze flame images and output parameters and graphical data to describe the nature of the
flame. One such code has been developed by the author to achieve this goal; the methods employed in its development and the resulting output are the subject of this report. Given a set
of frames from a high speed video of a turbulent flame in a combustion chamber, the primary
objective of this program is to determine the degree of wrinkling of the flame by generating a
probability density function of radius of curvature along the flame boundary. A secondary
objective is to track the flame area and flame penetration into the chamber over flame lifetime.
The code was written in MATLAB and is included in the appendix; however, the following sections describe the approach used in analyzing images rather than detailing MATLAB-specific functions.
2. Methods
A diesel flame was ignited in a combustion chamber with the following ambient conditions: a temperature of 1000 K, a pressure of 44 bar, a density of 15 kg/m³, and 21% oxygen by volume. The flame was photographed with a high-speed camera at a rate of 4500 frames per second. The resulting 400 x 128 pixel images were converted to .png files and were read into the program in this format, as shown below in figure 1. Using a reference image with a grid of known dimensions, it was determined that the image scale is approximately 35 mm per 148 pixels, or about 0.24 mm per pixel (roughly 0.056 mm² per pixel).
Figure 1. Flame Image
The first step in image analysis is to determine the boundary points of the flame. Each pixel has a grayscale value between 0 and 255. It was determined that the best results come from setting the threshold value at 40. The image is then scanned by the program: all pixels below 40 are set to zero, and all pixels at or above 40 are set to 200. This scan results in a matrix whose corresponding image is shown below in figure 2.
Figure 2. Flame Threshold
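As a brief illustration, this thresholding step can be written with logical indexing rather than the nested loops used in the appendix. The sketch below assumes f holds the cropped grayscale frame as a uint8 matrix; the variable name thresh is illustrative.
thresh = 40;                       % grayscale threshold described above
f(f <  thresh) = 0;                % pixels below the threshold become background
f(f >= thresh) = 200;              % remaining pixels are marked as flame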
The next step in processing is to scan this image for boundary points. The condition for a boundary point is a value of 200 at the point and at least one value of zero in the 4 locations adjacent to it. At this point in the program, the boundary is also converted from pixel locations to real x and y values in mm. The code generates x and y vectors, which are plotted in figure 3.
Figure 3. Flame Boundary Points
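A minimal sketch of this boundary test is given below. It assumes f is the thresholded matrix from the previous step and reuses the 35 mm per 148 pixel calibration; the variable names are illustrative and are not taken from the appendix code.
[Y, X] = size(f);                        % image dimensions in pixels
isBoundary = false(Y, X);
for x = 2:X-1
    for y = 2:Y-1
        neighbors = [f(y+1,x) f(y-1,x) f(y,x+1) f(y,x-1)];  % 4-connected neighborhood
        isBoundary(y,x) = f(y,x) == 200 && any(neighbors == 0);
    end
end
[yb, xb] = find(isBoundary);             % pixel coordinates of boundary points
xmm = xb*35/148;                         % convert columns to mm
ymm = (Y - yb)*35/148;                   % flip rows so y increases upward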
Once these boundary points have been determined, they must be connected by a smooth contour. This step represents the most challenging part of the program because an algorithm must be employed to sort the boundary points, each according to which point is nearest to it. It was ultimately found that a Delaunay triangulation of the point set is needed to determine each point's nearest neighbor. Delaunay triangulation is described in reference [2], and the nearest neighbor technique can be found in reference [3]. Figure 4 shows the Delaunay triangulation of the example image.
Figure 4. Delaunay Triangulation
There are several complications that must be dealt with when using this technique. The first is caused by the nearest neighbor function. If the point of interest is used as the query point, it must be taken out of the triangulation, or else it will be counted as its own nearest neighbor. For this reason, the code removes each point from the data before performing a triangulation. It then inserts the point as a query point to find its respective nearest neighbor. This complication could possibly be resolved by using a function which finds the 2nd nearest neighbor to a point included within the triangulation. Such a function was not researched because the method described above was found to work well, and because such a function
could bring with it other unforeseen complications. A second complication arises from the fact that any
data point can be counted as its nearest neighbor’s nearest neighbor. This would result in a matrix
composed of only 2 distinct points, repeating for the duration of the nearest neighbor loop. To remedy this
problem, each point was removed from subsequent triangulations once its nearest neighbor was found.
This can be thought of as a “Pac-Man” type of mechanism which follows the flame contour, eating the
used data points. This method also keeps the code from skipping any “bubbles” of flame disconnected
from the contour of the main flame body as seen at position (30,23) in figure 3, or from skipping internal
pockets as seen at position (75,12) in figure 3. The contour-sorting loop begins at the minimum x-value of
the data set. If the original points were put back at the left end of the flame in figure 3, the nearest
neighbor sort would find these points again at the close of the contour and re-trace the first section of the
contour rather than jumping back to the bubbles and pockets that were missed. The only unresolved
problem found with the “Pac-Man” method is that once the data matrix is reduced to less than 3 points,
triangulation is no longer possible. This means that the last 2 segments of the contour plot are not taken
into account for flame wrinkling. It is assumed that this is not of concern when analyzing the flame
because the distance represented by 2 contour segments is on the order of 0.5 – 0.7 millimeters, which is
close to the limit of resolution that can be achieved from the image. Figure 5 shows the resulting flame
contour generated by the Delaunay triangulation and nearest neighbor method.
Figure 5. Flame Contour
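The sorting procedure can be summarized by the following sketch, which assumes map is the N-by-2 list of boundary coordinates with the minimum-x point in row 1; it mirrors the appendix loop but uses simplified, illustrative names.
sorted = zeros(size(map));                 % boundary points in contour order
i = 1;                                     % start at the minimum-x point
for j = 1:size(map,1)-3                    % a triangulation needs at least 3 points
    qp = map(i,:);                         % current point becomes the query point
    map(i,:) = [];                         % "Pac-Man" removes it from the data set
    DT = delaunayTriangulation(map);       % re-triangulate the remaining points
    i  = nearestNeighbor(DT, qp);          % nearest remaining point continues the contour
    sorted(j,:) = qp;                      % store the visited point
end
sorted = sorted(1:j,:);                    % the last few points are dropped, as noted above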
The creation of a defined contour now allows curve fitting techniques to be employed. Ideally, a moving cubic spline would be used here to find the best-fit equation of the curve at each point. An unresolved error using the spline function precluded this method from being used; this part of the code represents a large opportunity for future improvement. Rather than a cubic spline, a 6th-order polynomial fit on 20-point contour segments was found to give good algebraic expressions for this curve. Figure 6 shows the actual flame contour in black and the polynomial fit curve in red.
Figure 6. 6th Order Polynomial Fit Curve
It can be seen in figure 6 that the polynomial fit has the most difficulty in areas where the contour doubles
back on itself and in regions of infinite slope. These areas specifically are examples of where a spline fit
would be more accurate. Other than these few discrepancies between the true contour and the polynomial
fit, the above figure shows a reasonably accurate algebraic representation of the flame boundary. Other
images within the video were also found to exhibit a similar degree of accuracy. Now that an algebraic
expression in terms of f(x) has been fitted to every segment of the flame boundary, the radius of curvature
at each point can be found by utilizing equation 1 below:
ρ = (1 + f'(x)²)^(3/2) / f''(x)     (1)
where ρ is the radius of curvature, f(x) is the position of the boundary point as a function of x, and f'(x) and f''(x) are the first and second derivatives of f with respect to x. When evaluated at all of the points
within the data set, this equation gives a distribution of all of the radii of curvature along the flame
boundary, which is ultimately the desired end output of this program. When this distribution is
normalized by the total number of radii measurements taken, the probability that a given radius will be
present on the contour is given by a probability density function shown below in figure 7.
Figure 7. PDF of Radii
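For a single contour segment, the fit and equation 1 reduce to a few lines. The sketch below assumes section is an n-by-2 matrix holding the x and y values of one sorted segment, and uses polyder and polyval in place of the hand-built power vector found in the appendix. Collecting the resulting radii over all segments and normalizing the histogram counts by the number of samples yields the distribution in figure 7.
p   = polyfit(section(:,1), section(:,2), 6);  % 6th-order fit to the segment
dp  = polyder(p);                              % coefficients of f'(x)
ddp = polyder(dp);                             % coefficients of f''(x)
x   = section(:,1);
rho = (1 + polyval(dp,x).^2).^(3/2) ./ polyval(ddp,x);  % radii of curvature (eq. 1)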
The secondary objectives of the program are to plot both flame area and flame penetration from the nozzle. Flame area was calculated by simply taking the sum of the white pixels in figure 2 and multiplying by the per-pixel area of (35/148)², or roughly 0.056 square mm per pixel. The flame penetration was calculated by taking the difference between the nozzle position and the x-position of the flame tip. From the experiment, the nozzle is known to be located 19 pixels from the right edge of figure 1, which equates to 4.5 mm. The outputs of the program for these parameters are shown below in figures 8 and 9.
Figure 8. Flame Area
Figure 9. Flame Penetration
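A minimal sketch of these two secondary outputs is shown below, assuming f is the thresholded frame and xvect holds the boundary x-positions in mm measured from the nozzle, as in the appendix.
pxArea = (35/148)^2;               % area of one pixel in mm^2 (about 0.056)
Area   = nnz(f == 200) * pxArea;   % flame area: number of flame pixels times pixel area
FP     = max(xvect);               % flame penetration: farthest boundary point from the nozzle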
3. Discussion
Two areas for improvement within this code are the nearest-neighbor triangulation method and the method of curve-fitting the boundary. The current methods described above give acceptable results that can be used with a reasonable degree of confidence. They are not, however, completely flawless, as can be seen in figure 6. Flaws inherent in the triangulation method become apparent when the data set for the flame boundary falls below 3 points, which results in an error due to an impossible triangulation. The consequence of this is that the short periods of ignition and extinction cannot be analyzed by this code, because the very small flame boundaries at those times cannot be resolved by the images. This flaw, however, is not of great concern because the boundary points plotted by the program, as seen in figure 3, correspond to individual pixels within the image. Three pixels will never be able to give any accurate measure of flame wrinkling, and therefore the code is ultimately limited by camera resolution. Future modifications to the code will allow all data points within the triangulation to be accounted for, but as it stands now the program outputs are essentially as accurate as the image resolution allows.
The second area of improvement, which is of greater concern, is that of curve fitting along the flame boundary. The 20-point segment fitted to a 6th-order polynomial seems to work fairly well for the example flame described above, but serious errors would occur if a vertical flame were to be analyzed by this code. This is because the boundary points are fitted to a curve written as f(x), and the polynomial is not able to handle more than one value of f(x) for a given x. Because of the program's inability to accurately fit segments in which the flame boundary doubles back upon itself (see figure 6), a new method needs to be researched to give better curve fits for the flame boundary. The most promising method seems to be a cubic spline technique. This will allow the curve to be broken into small enough segments that no double-valued solutions of f(x) would be necessary, and it will be implemented in the near future.
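One possible form of that spline technique, sketched below as a suggestion rather than implemented or validated code, is to parameterize the sorted contour map by cumulative chord length so that no single-valued f(x) is required; curvature then follows from the parametric derivatives.
s   = [0; cumsum(hypot(diff(map(:,1)), diff(map(:,2))))];  % cumulative chord length
ss  = linspace(0, s(end), 5*numel(s)).';                   % dense sample of the parameter
x   = ppval(spline(s, map(:,1)), ss);                      % x(s) cubic spline
y   = ppval(spline(s, map(:,2)), ss);                      % y(s) cubic spline
dx  = gradient(x, ss);   dy  = gradient(y, ss);            % first derivatives
ddx = gradient(dx, ss);  ddy = gradient(dy, ss);           % second derivatives
rho = (dx.^2 + dy.^2).^(3/2) ./ (dx.*ddy - dy.*ddx);       % signed radius of curvature
Because the parameterization is by arc length rather than x, vertical tangents and segments that double back on themselves pose no difficulty, provided the contour has been sorted without repeated points.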
Because the current 6th-order polynomial method sometimes gives erratic solutions near segment cut-offs or in areas of infinite slope, the radii of curvature calculated from equation 1 must be filtered to
throw out any extreme values that result from bad polynomial fits. Extremely small radii of less than 0.25
mm were also thrown out because the images tested did not allow for this type of resolution. It was found
that radii greater than 100 mm were generally the result of bad polynomial fits and could be neglected. As
can be seen in figure 7, there were very few radii found in the range beyond 20 mm. Based on the scale of
the radii, various transport or chemical mechanisms can be assumed to influence the flame shape. Larger
radii demonstrate larger eddies in the flow and smaller radii demonstrate smaller eddies. The most
influential mechanism was seen to create radii between 0 and 2 mm. Of course, only those radii greater than 0.25 mm were taken into account, since the pdf was cut off at 0.25 mm. The influential radii tend to
exponentially decrease in probability out to the 28-30 mm bin. After this, a few larger radii appear which
represent the largest aerodynamic scales influential upon flame shape. Once radii become larger than the
length of the flame itself, they do not give much useful information as to what scales are influential on
flame propagation. Figure 9 gives a useful reference for determining important time periods within the
flame’s lifetime. The time period of ignition and growth for this example is seen to occur from 1.5 ms to 6
ms. The flame then reaches a semi-steady state in which both flame length and area remain constant
between 6 ms and 14 ms. Global extinction then occurs rapidly in the time period between 14 ms and 15
ms.
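The filter itself reduces to a single logical-indexing statement, shown below as an equivalent sketch of the while loop used in the appendix.
radii = radii(radii >= 0.25 & radii <= 100);   % keep only 0.25 mm <= rho <= 100 mm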
Figure 10 shows PDFs of flame radii at various points within the flame's lifetime.
(a) Ignition (b) Growth
(c) Maximum Flame Area (d) Extinction
Figure 10. PDF of Curvature over Flame Lifetime
Figure 10 shows that flame curvature exhibits a fairly constant distribution throughout the flame's lifetime. Larger scales affecting curvature are seen to exist in a more substantial way during extinction, in the 58-60 mm bin, but even here the smaller scales represented by the 0 – 2 mm bin dominate curvature. This brings up another consideration: within the 0 – 2 mm bin, which radii occur most frequently? Figure 11 shows the
results of breaking the first bin into smaller 0.25 mm bins.
Figure 11. PDF of Radii between 0.25 and 2 mm
From this consideration, the conclusion can be made that no one particular scale represented by the above
bins seems to dominate curvature at the lower end of the radii spectrum. Because radii measurements are
limited by image resolution, a higher resolution camera is needed to study scales in this regime. The code
presented in the appendix is able to handle images of any size and resolution, and is only limited by
memory and processing speed. All that can conclusively be said of this analysis is that the turbulent scales
primarily responsible for flame curvature are represented by radii in the range of 0 – 10 mm. If the bins
from 0 – 10 mm are given the same width as those in figure 11, the distribution shown in figure 12
results.
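As a sketch of this re-binning (the bin centers are assumed, not taken from the appendix), the figure can be produced with the same hist call used elsewhere in the code but with 0.25 mm bins spanning 0 – 10 mm.
fine = radii(radii < 10);               % radii within the 0 - 10 mm range
bins = 0.125:0.25:9.875;                % centers of 0.25 mm wide bins
prob = hist(fine, bins) / numel(fine);  % normalized counts (PDF over this range)
bar(bins, prob)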
(a) Ignition (b) Growth
(c) Maximum Flame Area (d) Extinction
Figure 12. PDF of Curvature in the Range of 0 – 10 mm
Figure 12 shows that even within the bins from 0 to 10 mm, the most influential scales still exist at the
lower end of the spectrum of curvature. To study these scales, a higher-resolution image is needed. This
could be achieved by placing the camera closer to the flame, or using a lens capable of higher resolution.
Figures 8 and 9 give useful references for what is going on in the flame lifetime at a given instant.
These figures can be used in conjunction with curvature PDFs to give a better understanding of why the
flame may be exhibiting a certain radius-of-curvature probability distribution at a given time. Future work on the flame area and penetration outputs could include a layered image showing all parts of the chamber touched by the flame during its lifetime, as well as a calculated flame cone angle.
4. Conclusions
The method of image analysis described above represents an extremely useful tool for optical flame
analysis. The code can be easily modified to calculate other parameters of interest such as luminosity,
flame cone angle or any other geometric data that can be extracted from 2-D flame images. Based on the
outputs discussed above, various mechanisms responsible for flame curvature can be studied for turbulent
flames. The example flame video studied for this project demonstrated the highest probability for radii of
curvature within the 0 – 10 mm range. Future areas of study include smaller scale curvature regimes on
the order of 0 to 2 mm using higher resolution images, or other types of flames which may demonstrate
higher probability for larger radii of curvature. Major improvements on the code lie in the area of contour
generation and curve fitting techniques. There is also the possibility of inputting multiple images from the same instant to obtain a more complete picture of flame geometry. Photographs taken at 90° from the plane shown in figure 1 could be analyzed in the same way in order to generate a PDF with more data
points. After taking all of these considerations into account, the program as it now stands still represents a
highly useful tool in turbulent flame analysis, and has been proven to be a strong foundation for
improvement and future modifications.
5. References
[1] Chen, Y.-C., Kim, M., Han, J., Yun, S., and Yoon, Y. Analysis of flame surface normal and curvature measured in turbulent premixed stagnation-point flames with crossed-plane tomography.
[2] de Berg, M., Cheong, O., van Kreveld, M., and Overmars, M. (2008). Computational Geometry: Algorithms and Applications. Springer-Verlag. ISBN 978-3-540-77973-5.
[3] Moore, A. An introductory tutorial on kd-trees. Extract from A. Moore's PhD thesis, Efficient Memory-based Learning for Robot Control, Technical Report No. 209, Computer Laboratory, University of Cambridge.
Appendix: MATLAB Code
% Aaron McCullough
% MAE 704 Final Project
% Dr. Tarek Echekki
% Post-processing of Flame Imaging in a Combustion Chamber
clear all
close all
clc
% Ambient Conditions
% T = 1000; % Temp in Kelvin
% P = 44; % Pressure in Barr
% X02 = .21; % Ambient percent oxygen
%% Image Processing
start = input('Enter the first frame number ');
finish = input('Enter the last frame number ');
nframes = finish-start+1;
duration = nframes/4500; % length of video in seconds (each frame is 1/4500 sec)
t=(start:finish)/4500;
Area = zeros(1,nframes); FP = zeros(1,nframes); % Area and Flame Penetration data allocation
NozzlePos = 19*35/148; % Nozzle position in mm
time = 0;
% Loop to read specified frames within the video
for k = start:finish
% Individual Frame Work
clear frame; clear f; clear xvect; clear yvect; clear img;
img = sprintf('1_%i.png',k);
frame = (imread(img));
frame(:,:,1) = fliplr(frame(:,:,1)); % mirror each color channel left-right
frame(:,:,2) = fliplr(frame(:,:,2));
frame(:,:,3) = fliplr(frame(:,:,3));
f = frame(7:125,:,1); % Cut off top and bottom white bars
% Dimensions of image matrix X columns by Y rows
X = size(f,2); Y = size(f,1);
% Set threshold and redefine image (matrix f)
for y = 1:Y
for x = 1:X
if f(y,x)<40 % Threshold value
f(y,x)=0;
elseif f(y,x) >= 40
f(y,x) = 200;
end
end
end
Area(k-start+1) = sum(sum(f))/200*(35/148)^2; % flame area in mm^2
% find x and y values for boundary
i=1; % index for x and y vectors
xvect = zeros(1,length(f));
yvect = zeros(1,length(f));
for x = 2:X-1
for y = 2:Y-1
if f(y,x) == 200
m(1)= f(y+1,x);m(2)=f(y,x+1);m(3)=f(y-1,x);m(4)=f(y,x-1);
if length(find(m))~=length(m)
xvect(i) = x*35/148 - NozzlePos; % convert pixels to mm
yvect(i) = (Y - y)*35/148;
i=i+1;
end
end
end
end
lg = length(find(xvect));
xvect(lg+1:end)=[];
yvect(lg+1:end)=[];
xvect = xvect';yvect = yvect';
FP(k-start+1) = max(xvect);
% Nearest Neighbor Sort
map = [xvect yvect];
map2 = zeros(size(map));
lmp = length(map);
DT = delaunayTriangulation(map);
count = 1;
i=1; % index for old map
j=1; % index for new sorted map
while count < lmp-4
map1 = map;
qp = map(i,:); % Set query point
map1(i,:) = []; % Take out qp from map1 so that it won't count itself as nearest neighbor
DT1 = delaunayTriangulation(map1); % triangulate the new map without qp
vi1 = nearestNeighbor(DT1,qp); % Find index of the nearest neighbor to the query point
map2(j,:) = map(i,:);
map2(j+1,:) = map1(vi1,:);
map(i,:)=[]; % Pacman eats the used point ( < * * *
i = vi1; % next index in original map
j = j+1; % index for next slot in the sorted map
count = count + 1;
end
map = map2(1:end-4,:); % Redefine the map as an ordered set of data
% Fit the sorted map in groups of 20 points with 6th-order polynomials
% From polynomial, the radius of curvature is found at every point
lmp = length(map);
l = 1; % counter for while loop
spl = [];
radii = [];
while l <= lmp-20
section = map(l:l+20,:);
fit = polyfit(section(:,1),section(:,2),6); clc;
%pp = spline(section(:,1),section(:,2)); % attempt to use spline; gives a
%chckxy error. This is the biggest area for code improvement.
df = polyder(fit); % first derivative of polynomial
dff = polyder(df); % second derivative of polynomial
xs = section(:,1); % x values
for i = 1:length(xs)
xpowers = [xs(i).^5 xs(i).^4 xs(i).^3 xs(i).^2 xs(i) 1];
yprime = sum(df.*xpowers); % Value of y' at specific point
ydubprime = sum(dff.*xpowers(2:end)); % Value of y'' at specific point
ROCsect(i) = sqrt(1+yprime.^2).^3/ydubprime; % Radius of curvature at the point (equation 1)
end
ROCsect(1) = []; % erase the repeated point
radiisect = ROCsect';
pval = polyval(fit,section(:,1));
new = [section(:,1) pval];
spl = vertcat(spl,new); % spline made up of polynomial fits
radii = [radii;radiisect];
l = l+19; % overlap 2 points from last segment to avoid spikes.
end
% Repeat the loop once more to include the last segment
section = map(l:end,:);
fit = polyfit(section(:,1),section(:,2),6);clc;
df = polyder(fit); % first derivative of polynomial
dff = polyder(df); % second derivative of polynomial
xs = section(:,1); % x values
for i = 1:length(xs)
xpowers = [xs(i).^5 xs(i).^4 xs(i).^3 xs(i).^2 xs(i) 1];
yprime = sum(df.*xpowers); % Value of y' at specific point
ydubprime = sum(dff.*xpowers(2:end)); % Value of y'' at specific point
ROCsect(i) = sqrt(1+yprime.^2).^3/ydubprime; % Radius of curvature at the point (equation 1)
end
ROCsect(1) = []; % erase the repeated point
radiisect = ROCsect';
pval = polyval(fit,section(:,1));
new = [section(:,1) pval];
spl = vertcat(spl,new); % spline made up of polynomial fits
radii = [radii;radiisect];
% Filter Radii data
i = 1;
lrd = length(radii);
while i <= lrd
if or(radii(i)>100,radii(i)<.25)
radii(i)=[];
else
i = i + 1;
end
lrd = length(radii);
end
time = time + 1/4500;
xlbl = sprintf('radii (mm), time = %1.2e',time);
% Histogram of Radii
bins = 1:2:99;
N = hist(radii,bins);
prob = N./numel(radii);
figure(k)
subplot(3,1,1)
image(frame)
subplot(3,1,2)
plot(map(:,1),map(:,2),'k',spl(:,1),spl(:,2),'r'); title('Flame Boundary');
xlabel('Jet Longitudinal Axis (mm), Nozzle at x = 0'); ylabel('Jet Lateral Axis (mm)')
axis([0 100 0 30])
grid on;
subplot(3,1,3)
bar(bins,prob)
xlabel(xlbl); ylabel('probability');
axis([0 100 0 1])
%grid on
F(k-start+1)=getframe(figure(k));
%close(k)
end
%% Output
play = 'y';
while play == 'y'
fps = input('how many frames per second? '); %frames per second
figure(k+1)
movie(F,3,fps) % Plays movie of flame
play = input('play again? ','s');
end
close(k+1)
% Flame Area and Penetration Plots
figure(k+2)
subplot(2,1,1)
plot(t,Area); title('Flame Area vs Time'); xlabel('time (s)'); ylabel('Area (mm^2)')
grid on
subplot(2,1,2)
plot(t,FP); title('Flame Penetration vs Time'); xlabel('time (s)'); ylabel('Length (mm)')
grid on