Optical tomography provides a means for determining the spatial distribution of materials of different optical density in a volume by non-intrusive means. This paper presents results of concentration measurements of gas bubbles in a water column using an optical tomography system. A hydraulic flow rig is used to generate vertical air–water two-phase flows with a controllable bubble flow rate. Two approaches are investigated: the first aims to obtain an average gas concentration at the measurement section, while the second aims to obtain a gas distribution profile by tomographic imaging. A hybrid back-projection algorithm is used to calculate concentration profiles from measured sensor values and provide a tomographic image of the measurement cross-section. The algorithm combines the characteristics of optical sensors as hard-field sensors with the linear back-projection algorithm.
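As a rough illustration of the tomographic approach, the sketch below back-projects simulated optical attenuation measurements onto a cross-sectional grid. The two-orthogonal-array geometry, grid size, and normalization are illustrative assumptions, not the paper's actual sensor configuration.

```python
import numpy as np

# Simplified geometry: two orthogonal parallel-beam optical arrays viewing
# an N x N measurement cross-section (hypothetical setup, not the paper's).
N = 16
n_pixels = N * N

# Sensitivity matrix: ray r "sees" pixel p with weight 1 if the straight
# horizontal/vertical ray crosses it.
S = np.zeros((2 * N, n_pixels))
for i in range(N):
    for j in range(N):
        p = i * N + j
        S[i, p] = 1.0          # horizontal ray through row i
        S[N + j, p] = 1.0      # vertical ray through column j

# Synthetic "true" gas distribution: one bubble cluster.
x_true = np.zeros((N, N))
x_true[4:8, 9:13] = 1.0

m = S @ x_true.ravel()          # simulated sensor attenuations

# Linear back-projection: smear each measurement along its ray, then
# normalize to a 0..1 concentration profile (the tomogram).
x_lbp = (S.T @ m).reshape(N, N)
x_lbp = (x_lbp - x_lbp.min()) / (x_lbp.max() - x_lbp.min())

print("mean concentration (first approach):", x_true.mean())
print("peak of LBP tomogram at:", np.unravel_index(x_lbp.argmax(), x_lbp.shape))
```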
The objective is to analyze and propose a methodology for managing the attenuating effect of carbon dioxide (CO2) on the performance of ultrasonic flow meters in gas flaring applications. The methodology is based on experiments performed in a wind tunnel at a Reynolds number of about 10^4 and CO2 concentrations above 60%. The results indicate that the ultrasonic meter exhibited measurement reading failures, especially during stages of abrupt change in gas concentration with contents above 5%. It is also verified that moving the ultrasonic transducers closer together tends to reduce such measurement failures.
Air Quality Sampling and Monitoring: Stack sampling, instrumentation and methods of analysis of SO2, CO, etc.; legislation for the control of air pollution and automobile pollution.
Pairing AERMOD concentrations with the 50th percentile monitored value, by Sergio A. Guerra
Presentation delivered to the Background Concentrations Workgroup for Air Dispersion Modeling, organized by the Minnesota Pollution Control Agency, on May 29, 2014. The three topics covered are 1) screening monitoring data, 2) AERMOD's time-space mismatch, and 3) the proposed 50th percentile background method.
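As a sketch of what pairing modeled values with a median background might look like in practice (the file names, column names, and season-by-hour pairing granularity are assumptions, not the presenter's exact procedure):

```python
import pandas as pd

# Hypothetical inputs: hourly monitored background and hourly AERMOD output,
# each with a timestamp and a concentration column (names are assumptions).
monitor = pd.read_csv("background_monitor.csv", parse_dates=["timestamp"])
aermod = pd.read_csv("aermod_hourly.csv", parse_dates=["timestamp"])

# 50th percentile background by season and hour-of-day.
monitor["season"] = monitor["timestamp"].dt.month % 12 // 3
monitor["hour"] = monitor["timestamp"].dt.hour
bkg50 = monitor.groupby(["season", "hour"])["conc_ugm3"].median().rename("bkg50")

# Pair each modeled hour with the matching median background.
aermod["season"] = aermod["timestamp"].dt.month % 12 // 3
aermod["hour"] = aermod["timestamp"].dt.hour
paired = aermod.join(bkg50, on=["season", "hour"])
paired["total"] = paired["conc_ugm3"] + paired["bkg50"]

print(paired[["timestamp", "conc_ugm3", "bkg50", "total"]].head())
```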
Use of Probabilistic Statistical Techniques in AERMOD Modeling Evaluations, by Sergio A. Guerra
The advent of the short-term National Ambient Air Quality Standards (NAAQS) prompted modelers to reassess common practices in dispersion modeling analyses. The probabilistic nature of the new short-term standards also opens the door to alternative modeling techniques based on probability. One of these is the Monte Carlo technique, which can be used to account for emission variability in permit modeling.
Currently, it is assumed that a given emission unit operates at its maximum capacity every hour of the year. This assumption may be appropriate for facilities that operate at full capacity most of the time. In most cases, however, emission units operate at variable loads that produce variable emissions. Thus, assuming constant maximum emissions is overly conservative for facilities, such as power plants, that are not in operation all the time and exhibit high concentrations only during very short periods.
Another element of conservatism in NAAQS demonstrations relates to combining predicted concentrations from the AMS/EPA Regulatory Model (AERMOD) with observed (monitored) background concentrations. Normally, some of the highest monitored observations are added to the AERMOD results, yielding a very conservative combined concentration.
A case study is presented to evaluate the use of alternative probabilistic methods to address the shortcomings of current dispersion modeling practices. This case study includes the use of the Monte Carlo technique and of a reasonable background concentration to combine with the AERMOD predicted concentrations. The use of these methods is in harmony with the probabilistic nature of the NAAQS and can help demonstrate compliance through dispersion modeling analyses while still being protective of the NAAQS.
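A minimal sketch of the Monte Carlo idea, assuming hourly AERMOD concentrations modeled at a unit emission rate and an invented operating-load distribution; this is a conceptual sketch, not EMVAP's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hourly AERMOD concentrations modeled at a unit emission rate (1 g/s)
# for one receptor over a year (synthetic stand-in values here).
chi_q = rng.lognormal(mean=0.0, sigma=1.0, size=8760)  # ug/m3 per g/s

n_trials = 1000
design_values = np.empty(n_trials)
for t in range(n_trials):
    # Assumed operating profile: the unit runs ~30% of hours; when running,
    # load varies between 50% and 100% of maximum emissions (10 g/s).
    running = rng.random(8760) < 0.30
    load = rng.uniform(0.5, 1.0, size=8760)
    q = 10.0 * load * running
    conc = chi_q * q
    # Example design metric: a high-rank hourly value (8th-highest here).
    design_values[t] = np.sort(conc)[-8]

print("constant-max design value:", np.sort(chi_q * 10.0)[-8])
print("Monte Carlo 95th percentile design value:", np.percentile(design_values, 95))
```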
Advanced Modeling Techniques for Permit Modeling - Turning challenges into o..., by Sergio A. Guerra
Advanced modeling techniques can be used in AERMOD to refine the inputs entered into the model and obtain more accurate results. This presentation covers:
-AERMOD’s Temporal Mismatch Limitation
-Building Downwash Limitations in BPIP/PRIME
-Advanced Modeling Techniques to Overcome these Limitations
Solutions include:
Equivalent Building Dimensions (EBD)
Emission Variability Processor (EMVAP)
Updated ambient ratio method (ARM2)
Pairing AERMOD values with the 50th % background concentrations in cumulative analyses.
EFFECTS OF MET DATA PROCESSING IN AERMOD CONCENTRATIONS, by Sergio A. Guerra
The current study evaluates the effect that different parameters used to process meteorological data have on AERMOD concentrations. Specifically, it evaluates the effect of AERMET processed with: 1-minute wind data collected by the Automated Surface Observing System (ASOS) and pre-processed using AERMINUTE; refined National Climatic Data Center (NCDC) station location and anemometer height; surface moisture; and urban/rural options. In this evaluation, one year of meteorological data was processed with nine different sets of input parameters and then used in AERMOD to run a short, medium, and tall stack scenario for 1-hour, 24-hour, and annual averaging periods. Downwash and terrain effects were not considered in this study. The results indicate that the three stack scenarios are sensitive to the location used for the meteorological station. Anemometer height changes had a small effect on concentrations for all scenarios except the tall stack scenario, which produced a modest increase in concentrations for the annual averaging period. Surface moisture was not found to have a strong effect on the scenarios evaluated. The use of AERMINUTE data resulted in significantly higher concentrations for the 1-hour (85%), 24-hour (81%), and annual (88%) averaging periods. The ice-free wind group station option in AERMINUTE was also evaluated: when AERMINUTE was used without specifying that the station is part of the ice-free wind group, the concentrations obtained for the tall stack scenario were lower for the 1-hour (64%), 24-hour (68%), and annual (78%) averaging periods. Finally, in the urban/rural evaluation, the greatest effect is observed in the medium stack scenario, where concentrations double for the 1-hour scenario when using the rural option. In the tall stack scenario, however, significantly lower concentrations were obtained by using the urban parameter for the three averaging periods evaluated.
Presented at the 10th Conference on Air Quality Modeling, EPA Research Triangle Park, NC campus, on March 15, 2012; at the AWMA UMS Dispersion Modeling Workshop on May 15, 2012; and at the Annual AWMA Conference on June 20, 2012.
AERMOD Tiering Approach Case Study for 1-Hour NO2 (BREEZE Software)
This study reviews 1-hour NO2 concentrations predicted by AERMOD for a hypothetical source at four locations throughout the United States with hourly varying background ozone concentrations.
INNOVATIVE DISPERSION MODELING PRACTICES TO ACHIEVE A REASONABLE LEVEL OF CON..., by Sergio A. Guerra
Presentation delivered at the Annual Air and Waste Management Association conference in Long Beach, California on June 26, 2014.
Innovative dispersion modeling techniques are presented, including ARM2, EMVAP, and the 50th percentile background concentration. The case study involves peaking engines that are used 250 hours per year. These intermittent sources are required to undergo a modeling evaluation in many states, and current modeling techniques grossly overestimate the emissions from these sporadic sources.
INNOVATIVE DISPERSION MODELING PRACTICES TO ACHIEVE A REASONABLE LEVEL OF CON..., by Sergio A. Guerra
Presentation delivered at the Board meeting of the Upper Midwest section of the Air and Waste Management Association on September 16, 2014.
Innovative dispersion modeling techniques are presented, including ARM2, EMVAP, and the 50th percentile background concentration. The case study involves peaking engines that are used 250 hours per year. These intermittent sources are required to undergo a modeling evaluation in many states, and current modeling techniques grossly overestimate the emissions from these sporadic sources.
Dispersion modeling requirements are increasingly common in air permitting projects and in many cases become the bottleneck in permitting. Unlike any other consulting firm, CPP promotes cutting-edge techniques that can reduce the excessive conservatism in permit modeling to a reasonable level that still protects public health. At CPP we start with the standard modeling techniques and apply the following advanced analysis tools, as needed, to optimize your permitting strategy:
• Analysis of BPIP output to verify if AERMOD is overpredicting,
• Screening tool to assess the benefit of refining the BPIP building dimensions inputs,
• Use of Equivalent Building Dimension (EBD) studies to correct building wake effects in AERMOD,
• Evaluation of background concentrations to determine a reasonable value to combine with predicted concentrations,
• Use of the Monte Carlo approach (i.e., EMVAP) to address sources with variable emissions,
• Use of the adjusted friction velocity (u-star) option in AERMET to address AERMOD’s overestimation during low wind stable hours,
• Site analysis to determine whether stacks taller than formula GEP stack heights are justified,
• Site specific wind tunnel modeling to determine GEP stack heights and Equivalent Building Dimensions,
• Site-specific wind erosion inputs, and
• Area and volume source enhancements.
This guide covers the air sampling process, an essential step before proceeding with any type of research on air pollution, pollutants, and health effects.
New Guideline on Air Quality Models and the Electric Utility Industry, by Sergio A. Guerra
The revision of the Guideline on AQ Models (Appendix W) will prompt many changes in the way dispersion modeling is conducted for regulatory purposes. Some of the changes to the Guideline include enhancements and bug fixes to the AERMOD modeling system, new screening techniques to address ozone and secondary PM2.5, delisting CALPUFF as the preferred long-range transport model, and updates on the use of meteorological input data. These changes will have a significant impact on the regulated community. This presentation will cover the main highlights from this guidance and how the electric utility industry will be impacted. In addition, the latest information provided by EPA during the 2016 Regional, State, and Local Modelers' Workshop will also be presented.
Presentation includes information related to gently sloping terrain, AERMINUTE, and EPA formula height.
Presented at the 27th Annual Conference on the Environment on November 13, 2012.
Important theoretical issues that significantly affect the accuracy of predicted concentrations subject to downwash effects have been identified in AERMOD/PRIME. These issues have prompted a number of industry groups to fund new research aiming at overcoming these shortcomings. The Plume Rise Model Enhancements (PRIME) building downwash algorithms (Schulman et al. 2000) in AERMOD are being updated to address some of the most critical limitations in the current theory. These enhancements will incorporate the latest advancements related to building downwash effects. The technical aspects of these enhancements are discussed in more detail in a recent publication, "PRIME2: Development and Evaluation of Improved Building Downwash Algorithms for Solid and Streamlined Structures". The updates to the PRIME code include new equations to account for building wake effects that decay rapidly back to ambient levels above the top of the building; reduced wake effects for streamlined structures; and reduced wake effects for high approach roughness. A comparison with field data was conducted with the Bowline Point, Alaska North Slope, Millstone Nuclear Power Station, and Duane Arnold Energy Center databases. A new experimental BPIP-PRM version is also discussed.
NOVEL DATA ANALYSIS TECHNIQUE USED TO EVALUATE NOX AND CO2 CONTINUOUS EMISSIO..., by Sergio A. Guerra
The current study presents a new data analysis technique developed while evaluating continuous emission data collected from a trash compactor. The evaluation involved tailpipe sampling with a portable emission monitoring system (PEMS) on a diesel-fueled, 525-horsepower trash compactor. The sampling campaign was carried out by running the compactor on regular no. 2 diesel, B20, and ULSD fuels. The purpose was to determine the possible reductions in emissions of nitrogen oxides (NOx) and carbon dioxide (CO2) from the use of B20 and ULSD in an off-road vehicle. The results from the NOx analysis are discussed.
The initial data analysis identified two important issues. The first was a bias in the calculated F values due to the very large number of samples (N): the large N influenced the probability values and indicated a false statistical significance for all factors tested. Additionally, the data observations were found to be highly autocorrelated. Thus, a time-interval data reduction technique was used to address these two statistical limitations to the robustness of the analyses. The result in each case was a subset of quasi-independent observations sampled at an interval of 800 seconds. The autocorrelation and false statistical significance issues were promptly resolved by this technique. Since false statistical significance and autocorrelation are inherent in continuous data, the positive results obtained from the use of this technique can be far-reaching. The technique allowed for a valid use of the general linear model (GLM), with engine speed as the covariate, to test the day, fuel type, and compactor factors. It is most relevant given the advancements in data collection capabilities, which require data handling techniques that satisfy the statistical assumptions necessary for valid analyses.
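A sketch of the time-interval reduction step, assuming a 1 Hz PEMS record in a CSV file; the 800-second spacing comes from the abstract, while the column names and the lag-1 autocorrelation check are assumptions.

```python
import pandas as pd

# Hypothetical 1 Hz continuous NOx record (file and column names assumed);
# time_s is an integer elapsed-seconds column.
df = pd.read_csv("pems_nox.csv")   # columns: time_s, nox_ppm, rpm, fuel

# Keep one observation every 800 seconds to break serial correlation.
reduced = df[df["time_s"] % 800 == 0].reset_index(drop=True)

# Quick check: lag-1 autocorrelation before and after reduction.
print("full series lag-1 r:", round(df["nox_ppm"].autocorr(lag=1), 3))
print("reduced series lag-1 r:", round(reduced["nox_ppm"].autocorr(lag=1), 3))
print("N full:", len(df), "N reduced:", len(reduced))
```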
Using Physical Modeling to Refine Downwash Inputs to AERMOD, by Sergio A. Guerra
Achieving compliance in dispersion modeling can be quite challenging because of the stringent National Ambient Air Quality Standards (NAAQS). In addition, AERMOD's limitations can, in many cases, produce higher-than-normal concentrations due to the inherent assumptions and simplifications in its formulation. In the case of downwash, the theory used to estimate these effects was developed for a limited set of building types, yet these formulations are commonly applied indiscriminately to all types of buildings. This presentation covers how the basics of wind tunnel modeling can overcome some of these limitations and be used to mitigate downwash-induced overpredictions to achieve compliance.
PRIME2 Consequence Analysis and Model Evaluation, by Sergio A. Guerra
The Plume Rise Model Enhancements (PRIME) building downwash algorithms (Schulman et al. 2000) in AERMOD are being updated to address some of the most critical limitations in the current theory. These enhancements will incorporate the latest advancements related to building downwash effects. The technical aspects of these enhancements are discussed in more detail in a companion paper titled "PRIME2: Development and Evaluation of Improved Building Downwash Algorithms for Solid and Streamlined Structures (MO13)". The updates to the PRIME code include new equations to account for building wake effects that decay rapidly back to ambient levels above the top of the building; reduced wake effects for streamlined structures; and reduced wake effects for high approach roughness. A consequence analysis comparing the current AERMOD/PRIME model versus the new AERMOD/PRIME2 model was performed. Additionally, a field data evaluation was conducted with the Bowline Point database. The results from these analyses are discussed below.
Design of Electronic Nose System Using Gas Chromatography Principle and Surfa... (TELKOMNIKA JOURNAL)
Most gases are odorless, colorless, and hazardous for the human olfactory system to sense. Hence, an electronic nose system is required for the gas classification process. This study presents the design of an electronic nose system using a combination of a gas chromatography column and a surface acoustic wave (SAW) sensor. Gas chromatography is a technique based on compound partition at a certain temperature, whereas the SAW sensor works on the basis of resonant frequency change. In this study, gas samples including methanol, acetonitrile, and benzene are used to measure system performance. Each gas sample generates specific acoustic signal data in the form of a frequency change recorded by the SAW sensor. The acoustic signal data are then analyzed to obtain the acoustic features, i.e., the peak amplitude, the negative slope, the positive slope, and the length. The Support Vector Machine (SVM) method, using the acoustic features as its input parameters, is applied to classify the gas samples. A radial basis function is used to build the optimal hyperplane model, which is divided into two processes, i.e., the training process and the external validation process. The training process achieved an accuracy of 98.7% and the external validation process an accuracy of 93.3%. The electronic nose system has an average sensitivity of 51.43 Hz/mL in sensing the gas samples.
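A compact sketch of the classification stage, using scikit-learn's RBF-kernel SVM on the four acoustic features; the synthetic feature matrix and the train/validation split stand in for the paper's measured SAW data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: [peak_amplitude, neg_slope, pos_slope, length]
# per SAW response, with labels 0=methanol, 1=acetonitrile, 2=benzene.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 4)) for c in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 50)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM, as in the paper's classification stage.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

print("training accuracy:", clf.score(X_train, y_train))
print("external validation accuracy:", clf.score(X_val, y_val))
```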
Diffusers are extensively used in centrifugal compressors, axial flow compressors, ramjets, combustion chambers, and the inlet portions of jet engines. A small change in pressure recovery can increase efficiency significantly, so diffusers are essential for good turbomachinery performance. Geometric limitations in aircraft applications mean that diffusers must be specially designed to achieve maximum pressure recovery while avoiding flow separation.
This study investigates flow separation in a planar diffuser by varying the diffuser taper angle for axisymmetric expansion. The numerical solution of the 2D axisymmetric diffuser model is validated, for the skin friction coefficient and pressure coefficient along the upper and bottom wall surfaces, against the experimental results for a planar diffuser reported by Vance Dippold and Nicholas J. Georgiadis at the NASA research center [2]. The diffuser taper angle is then varied over a range of angles, and the results show at which angles flow separation is reduced and at which angle it is just avoided.
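For reference, the pressure recovery and skin friction coefficients used to judge diffuser performance are conventionally defined as follows (standard definitions, not taken from this abstract):

```latex
C_p = \frac{p_{\mathrm{exit}} - p_{\mathrm{inlet}}}{\tfrac{1}{2}\,\rho\, U_{\mathrm{inlet}}^{2}},
\qquad
C_f = \frac{\tau_w}{\tfrac{1}{2}\,\rho\, U_{\mathrm{inlet}}^{2}}
```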
The Claus process is the industry standard and thus the most significant gas desulfurizing process, recovering elemental sulfur from gaseous hydrogen sulfide. The process unit is commonly referred to as a sulfur recovery unit (SRU) and is very widely used to produce sulfur from the hydrogen sulfide found in raw natural gas and from the by-product sour gases containing hydrogen sulfide derived from refining petroleum crude oil and other industrial facilities. There are many hundreds of Claus sulfur recovery units in operation worldwide. In fact, the vast majority of the 68,000,000 metric tons of sulfur produced worldwide in one year is by-product sulfur from petroleum refining and natural gas processing plants.
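For context, the standard two-stage reaction scheme behind the Claus process (textbook chemistry, not stated in the abstract): roughly one-third of the H2S is burned to SO2 in the thermal stage, and the remainder reacts with that SO2 over a catalyst.

```latex
\mathrm{H_2S} + \tfrac{3}{2}\,\mathrm{O_2} \rightarrow \mathrm{SO_2} + \mathrm{H_2O} \quad \text{(thermal stage)} \\
2\,\mathrm{H_2S} + \mathrm{SO_2} \rightarrow 3\,\mathrm{S} + 2\,\mathrm{H_2O} \quad \text{(catalytic Claus stage)} \\
3\,\mathrm{H_2S} + \tfrac{3}{2}\,\mathrm{O_2} \rightarrow 3\,\mathrm{S} + 3\,\mathrm{H_2O} \quad \text{(overall)}
```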
An artificial intelligence based improved classification of two-phase flow patte... (ISA Interchange)
Flow pattern recognition is necessary to select design equations for finding operating details of the process and to perform computational simulations. Visual image processing can be used to automate the interpretation of patterns in two-phase flow. In this paper, an attempt has been made to improve the classification accuracy of the flow pattern of gas/liquid two-phase flow using fuzzy logic and a Support Vector Machine (SVM) with Principal Component Analysis (PCA). Videos of six different flow patterns, namely annular flow, bubble flow, churn flow, plug flow, slug flow, and stratified flow, are recorded for a period and converted to 2D images for processing. The textural and shape features extracted using image processing are applied as inputs to the classification schemes, namely fuzzy logic, SVM, and SVM with PCA, in order to identify the type of flow pattern. The results are compared, and it is observed that SVM with features reduced using PCA gives better classification accuracy and is computationally less intensive than the other two schemes. The results of this study address industrial application needs, including oil and gas and other gas-liquid two-phase flows.
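A minimal sketch of the favored scheme, SVM on PCA-reduced features, using scikit-learn; the feature matrix here is a random placeholder for the textural and shape features extracted from the flow images (so the printed accuracy is chance-level).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows are video frames, columns are textural/shape
# features; labels are the six flow patterns (annular, bubble, churn,
# plug, slug, stratified).
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 24))
y = rng.integers(0, 6, size=600)

# SVM with PCA-reduced features, mirroring the scheme the paper favors.
model = make_pipeline(StandardScaler(), PCA(n_components=8), SVC(kernel="rbf"))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```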
Image Reconstruction for Solid Profile Measurement in ERT Using Non-invasive ... (TELKOMNIKA JOURNAL)
Image reconstruction software and its image reconstruction algorithm are an important step towards constructing a tomography system. This paper demonstrates image reconstruction of a solid profile using the linear back projection (LBP) algorithm and a global threshold. A forward problem and an inverse problem are discussed. The modelling of sensitivity distributions using COMSOL proved that the system is able to detect the liquid-solid regime in a vertical pipe. Additionally, the location of the phantom can be easily distinguished using the LBP algorithm and the thresholding technique. The simulation and experiment results indicate that the sensitivity distribution of the non-invasive ERT system can be applied to obtain a tomogram of the medium of interest.
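One plausible reading of the LBP-plus-threshold step, as a small helper: back-project the measurements through the sensitivity matrix, normalize, and apply a single global cutoff (the 0.5 value is an illustrative choice, not the paper's).

```python
import numpy as np

def lbp_with_global_threshold(S, m, shape, cutoff=0.5):
    """Linear back-projection followed by a single global threshold.

    S: sensitivity matrix (n_measurements x n_pixels); m: measured vector.
    The 0.5 cutoff on the normalized tomogram is an illustrative choice.
    """
    tomogram = (S.T @ m).reshape(shape)
    lo, hi = tomogram.min(), tomogram.max()
    tomogram = (tomogram - lo) / (hi - lo + 1e-12)   # grey levels in [0, 1]
    return tomogram > cutoff   # True where the solid phase is inferred
```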
Development of Ammonia Gas Leak Detection and Location Method (TELKOMNIKA JOURNAL)
This paper proposes a diffusion model for industrial ammonia gas leaks and a Gaussian method of leakage localization. A wireless ammonia leak alarm system composed of sensor nodes, a network coordinator, and a host, for use in industrial fields, was developed; its purpose is to reduce the property loss caused by industrial ammonia leakage. Using the monitoring system to carry out an ammonia leak location simulation experiment, the results show that the relative positioning error of the monitoring system is about 12%, which meets the needs of industrial production safety monitoring. Using the wireless sensor network to monitor the ammonia gas concentration and locate the leakage source solves the problems of traditional wired alarm systems, such as difficult wiring and weak expandability, helps to find leaks in time, and provides a reference for emergency rescue work.
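A sketch of the localization idea: fit a source position and strength to node readings by nonlinear least squares under a Gaussian diffusion model. The isotropic plume form, node layout, and parameters below are invented for illustration and are simpler than the paper's model.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_conc(params, nodes):
    """Isotropic 2D Gaussian diffusion model (illustrative simplification)."""
    xs, ys, q = params
    sigma = 5.0   # assumed dispersion length scale, m
    d2 = (nodes[:, 0] - xs) ** 2 + (nodes[:, 1] - ys) ** 2
    return q / (2 * np.pi * sigma**2) * np.exp(-d2 / (2 * sigma**2))

# Sensor node positions (m) and their "measured" NH3 concentrations,
# generated here from a hidden source plus 5% noise.
nodes = np.array([[0, 0], [20, 0], [0, 20], [20, 20], [10, 30]], float)
true = np.array([12.0, 8.0, 50.0])   # hidden source x, y and strength
measured = gaussian_conc(true, nodes) * (
    1 + 0.05 * np.random.default_rng(3).normal(size=len(nodes)))

# Fit source position and strength to the node readings.
fit = least_squares(lambda p: gaussian_conc(p, nodes) - measured,
                    x0=[5.0, 5.0, 10.0])
print("estimated source location:", fit.x[:2].round(2), "true:", true[:2])
```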
A Novel Approach for Precise Motion Artefact Detection in Photoplethysmograph... (AM Publications)
The PPG signal is a useful tool for quick and critical diagnosis related to cardiovascular output via wearable or portable devices. Its drawback is that it is unreliable during non-stationary states due to frequency overlap between the desired signal and motion artifacts. An accelerometer is usually used to reflect the motion artifact when the adaptive noise cancellation technique is implemented to address this obstacle, but it fails to predict the value of the real induced noise accurately. In this work, we investigate a new concept capable of providing the entire motion artifact separately by recruiting twin photodetectors to formulate the influential signals. The main photodetector (MPD) generates the corrupted PPG signal, while the second photodetector (CPD), shielded from the light, reflects the corruption effect that exists in both sources simultaneously by counting the generated dark photocurrent (GDPC). To validate the GDPC approach, experiments were executed to analyze the response of the two methods during steady and motion states. Results showed similar responses for both methods regarding amplitude fluctuations, and high positive correlations in the time domain. Furthermore, the FFT peak plots in the frequency domain indicated the potential of the CPD to reflect all fundamental frequencies caused by motion, unlike the acceleration approach. Therefore, the proposed concept is a reliable method to obtain precise measurements at a lower cost.
Some possible interpretations from data of the CODALEMA experiment, by Ahmed Ammar Rebai, PhD
The purpose of the CODALEMA experiment, installed at the Nançay Radio Observatory (France), is to study the radio-detection of ultra-high energy cosmic rays in the energy range of 10^16-10^18 eV. Distributed over an area of 0.25 km^2, the original device uses, in coincidence, an array of particle detectors and an array of short antennas with a centralized acquisition. A new analysis of the radio energy observable from this system is presented, taking into account the geomagnetic effect. Since 2011, a new array of radio-detectors, consisting of 60 stand-alone and self-triggered stations, has been deployed over an area of 1.5 km^2 around the initial configuration. This new development leads to specific constraints, discussed in terms of cosmic ray recognition and wave-front analysis.
Similar to "Concentration measurements of bubbles in a water column using an optical tomography system" (20 items):
An optimal general type-2 fuzzy controller for Urban Traffic Network (ISA Interchange)
The urban traffic network model is illustrated by state-charts and an object-diagram. However, these have limitations in showing the behavioral perspective of the traffic information flow. Consequently, a state space model is used to calculate the half-value waiting time of vehicles. In this study, a combination of general type-2 fuzzy logic sets and the modified backtracking search algorithm (MBSA) is used to control the traffic signal scheduling and phase succession so as to guarantee a smooth flow of traffic with the least wait times and average queue length. The parameters of the input and output membership functions are optimized simultaneously by the novel heuristic MBSA. A comparison is made between the achieved results and those of optimal and conventional type-1 fuzzy logic controllers.
Embedded intelligent adaptive PI controller for an electromechanical system (ISA Interchange)
In this study, an intelligent adaptive controller approach using the interval type-2 fuzzy neural network (IT2FNN) is presented. The proposed controller consists of a lower-level proportional-integral (PI) controller, which is the main controller, and an upper-level IT2FNN, which tunes the parameters of the PI controller on-line. The proposed adaptive PI controller based on the IT2FNN (API-IT2FNN) is implemented practically using the Arduino DUE kit for controlling the speed of a nonlinear DC motor-generator system. The parameters of the IT2FNN are tuned on-line using the back-propagation algorithm. The Lyapunov theorem is used to derive the stability and convergence of the IT2FNN. The obtained experimental results, which are compared with those of other controllers, demonstrate that the proposed API-IT2FNN is able to improve the system response over a wide range of system uncertainties.
State of charge estimation of lithium-ion batteries using fractional order sl... (ISA Interchange)
This paper presents a state of charge (SOC) estimation method based on a fractional order sliding mode observer (SMO) for lithium-ion batteries. A fractional order RC equivalent circuit model (FORCECM) is first constructed to describe the charging and discharging dynamic characteristics of the battery. Then, based on the differential equations of the FORCECM, fractional order SMOs for SOC, polarization voltage, and terminal voltage estimation are designed. After that, the convergence of the proposed observers is analyzed by Lyapunov's stability theory. The framework of the designed observer system is simple and easy to implement. The SMOs can overcome uncertainties in parameters, modeling, and measurement errors, and present good robustness. Simulation results show that the presented estimation method is effective, and the designed observers have good performance.
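A heavily simplified, integer-order sketch of the idea (the paper's design is fractional-order and uses separate observers for each state): a first-order RC battery model with a sliding-mode correction term driven by the terminal-voltage error. All parameter values and the linear OCV curve are invented.

```python
import numpy as np

# First-order RC battery model (an integer-order simplification of the
# paper's fractional-order FORCECM); all parameter values are invented.
Cn = 36000.0                  # capacity in coulombs (10 Ah)
R0, R1, C1 = 0.05, 0.02, 2000.0
ocv = lambda soc: 3.2 + 0.8 * soc   # assumed linear open-circuit-voltage curve

dt, i = 1.0, 2.0              # 1 s steps, constant 2 A discharge
soc, v1 = 0.9, 0.0            # true plant states
soc_h, v1_h = 0.5, 0.0        # observer states (deliberately wrong initial SOC)
L = 0.001                     # sliding-mode injection gain (tuning assumption)

for _ in range(3600):
    # True plant step
    v_t = ocv(soc) - v1 - R0 * i
    soc += dt * (-i / Cn)
    v1 += dt * (-v1 / (R1 * C1) + i / C1)
    # Sliding-mode observer: model copy plus sign-of-error injection
    v_t_hat = ocv(soc_h) - v1_h - R0 * i
    soc_h += dt * (-i / Cn + L * np.sign(v_t - v_t_hat))
    v1_h += dt * (-v1_h / (R1 * C1) + i / C1)

print(f"true SOC: {soc:.3f}  estimated SOC: {soc_h:.3f}")
```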
Fractional order PID for tracking control of a parallel robotic manipulator t... (ISA Interchange)
This paper presents tracking control for a delta-type parallel robotic manipulator employing fractional order PID controllers with a computed torque control strategy, contrasted with an integer order PID controller with the same strategy. The mechanical structure, kinematics, and dynamic models of the delta robot are described. A SOLIDWORKS/MSC-ADAMS/MATLAB co-simulation model of the delta robot is built and employed for the identification, design, and validation stages of the control strategies. Identification of the dynamic model of the robot is performed using the least squares algorithm. A linearized model of the robotic system is obtained by employing the computed torque control strategy, resulting in a decoupled double integrating system. From the linearized model of the delta robot, fractional order PID and integer order PID controllers are designed, and their dynamical behavior is analyzed for many evaluation trajectories. Controller robustness is evaluated against external disturbances employing performance indexes for the joint and spatial error, applied torque in the joints, and trajectory tracking. Results show that fractional order PID with the computed torque control strategy has robust performance and active disturbance rejection when applied to parallel robotic manipulators on tracking tasks.
Fuzzy logic for plant-wide control of biological wastewater treatment process... (ISA Interchange)
Control strategies are increasingly applied in wastewater treatment plants with the aim of improving effluent quality and reducing operating costs. Due to concerns about the progressive growth of greenhouse gas (GHG) emissions, these are also currently being evaluated in wastewater treatment plants. The present article proposes a fuzzy controller for plant-wide control of the biological wastewater treatment process. Its design is based on 14 inputs and 6 outputs, with the aim of reducing GHG emissions, nutrient concentrations in the effluent, and operational costs. The article explains and shows the effect of each of the inputs and outputs of the fuzzy controller, as well as the relationships between them. Benchmark Simulation Model no. 2 Gas is used for testing the proposed control strategy. The simulation results show that the fuzzy controller is able to reduce GHG emissions while improving, at the same time, the common criteria of effluent quality and operational costs.
Design and implementation of a control structure for quality products in a cr... (ISA Interchange)
In recent years, interest in petrochemical processes has been increasing, especially in the refining area. However, the high variability of the dynamic characteristics present in the atmospheric distillation column poses a challenge to obtaining quality products. To improve distillate quality in spite of changes in the input crude oil composition, this paper details a new design of a control strategy for a conventional crude oil distillation plant, defined using formal interaction analysis tools. The process dynamics and control are simulated in the Aspen HYSYS dynamic environment under real operating conditions. The simulation results are compared against a typical control strategy commonly used in crude oil atmospheric distillation columns.
Model based PI power system stabilizer design for damping low frequency oscil... (ISA Interchange)
This paper explores a two-level control strategy that blends a local controller with a centralized controller for low frequency oscillations in a power system. The proposed control scheme stabilizes local modes using a local controller and minimizes the effect of sub-system interconnection on performance through centralized control. For designing the local controllers in the form of a proportional-integral power system stabilizer (PI-PSS), a simple and straightforward frequency domain direct synthesis method is considered, which relies on a suitable reference model based on the desired requirements. Several examples, on both one-machine infinite bus and multi-machine systems taken from the literature, illustrate the efficacy of the proposed PI-PSS. The effective damping of the systems is found to increase remarkably, which is reflected in the time responses; even unstable operation has been stabilized with improved damping after applying the proposed controller. The proposed controllers give remarkable improvement in damping the oscillations in all the illustrations considered here; for example, the damping factor increased from 0.0217 to 0.666 in Example 1. The simulation results obtained by the proposed control strategy compare favorably with those of controllers prevalent in the literature.
A comparison of a novel robust decentralized control strategy and MPC for ind... (ISA Interchange)
Abstract: In this work we have developed a novel, robust, practical control structure to regulate an industrial methanol distillation column. The proposed control scheme is based on an override control framework and can manage a non-key trace ethanol product impurity specification while maintaining high product recovery. For comparison purposes, an MPC with a discrete process model (based on step tests) was also developed and tested. The results from process disturbance testing show that both the MPC and the proposed controller were capable of maintaining both the trace-level ethanol specification in the distillate (XD) and high product recovery (β). Closer analysis revealed that the MPC has tighter XD control, while the proposed controller was tighter in β control. The tight XD control allowed the MPC to operate at a higher XD set point (closer to the 10 ppm AA grade methanol standard), allowing for savings in energy usage. Despite the energy savings of the MPC, the proposed control scheme has lower installation and running costs. An economic analysis revealed a multitude of other external economic and plant design factors that should be considered when deciding between the two controllers. In general, we found that relatively high energy costs favor MPC.
Fault detection of feed water treatment process using PCA-WD with parameter o... (ISA Interchange)
The feed water treatment process (FWTP) is an essential part of utility boilers, and fault detection is expected to improve its reliability. Classical principal component analysis (PCA) was applied to FWTPs in our previous work; however, the noise in the T2 and SPE statistics results in false detections and missed detections. In this paper, wavelet denoising (WD) is combined with PCA to form a new algorithm, PCA-WD, where WD is intentionally employed to deal with the noise. The parameter selection of PCA-WD is further formulated as an optimization problem, and PSO is employed for its solution. A FWTP sustaining two 1000 MW generation units in a coal-fired power plant is taken as a study case, and its operation data is collected for the verification study. The results show that the optimized WD is effective in restraining the noise in the T2 and SPE statistics, so as to improve the performance of the PCA-WD algorithm. The parameter optimization also enables PCA-WD to obtain its optimal parameters in an automatic way rather than from individual experience. The optimized PCA-WD is further compared with classical PCA and sliding window PCA (SWPCA) in terms of four cases: bias fault, drift fault, broken line fault, and normal condition. The advantages of the optimized PCA-WD over classical PCA and SWPCA are finally confirmed by the results.
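For context, the T2 and SPE statistics at the core of PCA monitoring can be computed as below (standard formulas; the wavelet-denoising step and PSO parameter optimization from the paper are omitted, and the data here is synthetic).

```python
import numpy as np

def pca_t2_spe(X_train, X_test, n_comp):
    """Classical PCA monitoring: Hotelling T2 and SPE (Q) statistics."""
    mu, sd = X_train.mean(0), X_train.std(0) + 1e-12
    Z = (X_train - mu) / sd
    # Principal directions from the training covariance.
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(eigval)[::-1]
    P = eigvec[:, order[:n_comp]]          # loadings
    lam = eigval[order[:n_comp]]           # retained variances

    Zt = (X_test - mu) / sd
    scores = Zt @ P
    t2 = np.sum(scores**2 / lam, axis=1)   # Hotelling T2
    resid = Zt - scores @ P.T              # part outside the PCA subspace
    spe = np.sum(resid**2, axis=1)         # SPE / Q statistic
    return t2, spe

rng = np.random.default_rng(7)
X_train = rng.normal(size=(500, 10))                          # normal operation
X_fault = X_train[:50] + np.r_[np.zeros(5), 3 * np.ones(5)]   # bias on 5 vars
t2, spe = pca_t2_spe(X_train, X_fault, n_comp=3)
print("mean T2:", t2.mean().round(2), "mean SPE:", spe.mean().round(2))
```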
Model-based adaptive sliding mode control of the subcritical boiler-turbine s...ISA Interchange
As higher requirements are imposed on load regulation and efficiency enhancement, the control performance of boiler-turbine systems has become much more important. In this paper, a novel robust control approach is proposed to improve the coordinated control performance of subcritical boiler-turbine units. To capture the key features of the boiler-turbine system, a nonlinear control-oriented model is established and validated against historical operating data of a 300 MW unit. To achieve system linearization and decoupling, an adaptive feedback linearization strategy is proposed, which can asymptotically eliminate the linearization error caused by model uncertainties. Based on the linearized boiler-turbine system, a second-order sliding mode controller is designed with the super-twisting algorithm. Moreover, the closed-loop system is proved robustly stable with respect to uncertainties and disturbances. Simulation results are presented to illustrate the effectiveness of the proposed control scheme, which achieves excellent tracking performance, strong robustness and chattering reduction.
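The super-twisting algorithm mentioned above has a compact standard form; a minimal discrete-time sketch is given below. The gains and step size are placeholders, and the sliding variable s would come from the linearized boiler-turbine error dynamics, which are not reproduced here.

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

def make_super_twisting(k1=1.5, k2=1.1, dt=0.01):
    """Standard super-twisting law: u = -k1*sqrt(|s|)*sign(s) + v, dv/dt = -k2*sign(s)."""
    v = 0.0
    def control(s):
        nonlocal v
        u = -k1 * math.sqrt(abs(s)) * sign(s) + v
        v -= k2 * sign(s) * dt   # Euler step of the discontinuous integral term
        return u
    return control

controller = make_super_twisting()
u = controller(0.3)  # control action for the current sliding-variable value
```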
A Proportional Integral Estimator-Based Clock Synchronization Protocol for Wi...ISA Interchange
Clock synchronization is an issue of vital importance in applications of wireless sensor networks (WSNs). This paper proposes a proportional integral estimator-based protocol (EBP) to achieve clock synchronization in wireless sensor networks. As each local clock skew gradually drifts, synchronization accuracy declines over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of the clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, as a fully synchronous protocol is unrealistic in practice. Numerical simulations illustrate the performance of the proposed protocol.
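A minimal sketch of the proportional-integral idea applied to clock correction is shown below: the proportional term removes the measured offset while the integral term learns the relative skew. The gains, update period and single-neighbour setting are illustrative assumptions; the paper's protocol operates over a whole network.

```python
def pi_clock_sync(measured_offsets, kp=0.6, ki=0.1, period=1.0):
    """measured_offsets: (neighbour clock - local clock) at each sync round."""
    skew_estimate = 0.0
    corrections = []
    for offset in measured_offsets:
        skew_estimate += ki * offset * period  # integral term tracks clock skew
        corrections.append(kp * offset + skew_estimate)
    return corrections

# Example: a decaying sequence of measured offsets is progressively absorbed.
print(pi_clock_sync([0.5, 0.35, 0.2, 0.1, 0.05]))
```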
New Method for Tuning PID Controllers Using a Symmetric Send-On-Delta Samplin...ISA Interchange
In this paper we present a new method for tuning PI controllers under a symmetric send-on-delta (SSOD) sampling strategy. First we analyze the conditions that produce oscillations in event-based systems with SSOD sampling, using the Describing Function as the tool to address the problem. Once the conditions for oscillations are established, a new robustness-to-oscillation performance measure is introduced, which builds on the concept of phase margin, one of the most traditional measures of relative stability in closed-loop control systems. The application of the proposed robustness measure is therefore easy and intuitive. The method is tested through both simulations and experiments. Additionally, a Java application has been developed to aid in the design according to the results presented in the paper.
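To make the sampling strategy concrete, here is a minimal sketch of an SSOD sampler: an event is generated only when the signal leaves the symmetric ±Δ band around the last transmitted level, and transmitted values move in steps of Δ. The threshold and test signal are illustrative.

```python
def ssod_sample(signal, delta):
    """Return (index, level) events from symmetric send-on-delta sampling."""
    last_level, events = 0.0, []
    for k, y in enumerate(signal):
        while y - last_level >= delta:   # crossed the band upwards
            last_level += delta
            events.append((k, last_level))
        while last_level - y >= delta:   # crossed the band downwards
            last_level -= delta
            events.append((k, last_level))
    return events

# A slow ramp only generates events each time it crosses a delta boundary.
ramp = [0.01 * k for k in range(300)]
print(ssod_sample(ramp, delta=0.5))  # events near k = 50, 100, 150, ...
```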
Load estimator-based hybrid controller design for two-interleaved boost conve...ISA Interchange
This paper is devoted to the development of a hybrid controller for a two-interleaved boost converter dedicated to renewable energy and automotive applications. The control requirements, summarized as fast transient response and low input current ripple, are formulated as a problem of fast stabilization of a predefined optimal limit cycle and solved using the hybrid automaton formalism. In addition, a real-time estimation of the load is developed using an algebraic approach for online adjustment of the hybrid controller. Mathematical proofs are provided, with simulations to illustrate the effectiveness and robustness of the proposed controller under different disturbances. Furthermore, a fuel cell system supplying a resistive load through a two-interleaved boost converter is also highlighted.
Effects of Wireless Packet Loss in Industrial Process Control SystemsISA Interchange
Timely and reliable sensing and actuation control are essential in networked control. This depends not only on the precision and quality of the sensors and actuators used but also on how well the communications links between the field instruments and the controller have been designed. Wireless networking offers simple deployment, reconfigurability, scalability, and reduced operational expenditure, and is easier to upgrade than wired solutions. However, the adoption of wireless networking has been slow in industrial process control due to the stochastic and less than 100% reliable nature of wireless communications and the lack of a model to evaluate the effects of such communications imperfections on the overall control performance. In this paper, we study how control performance is affected by wireless link quality, which in turn is adversely affected by severe propagation loss in harsh industrial environments, co-channel interference, and unintended interference from other devices. We select the Tennessee Eastman Challenge Model (TE) for our study. A decentralized process control system, first proposed by N. Ricker, is adopted that employs 41 sensors and 12 actuators to manage the production process in the TE plant. We consider the scenario where wireless links are used to periodically transmit essential sensor measurement data, such as pressure, temperature and chemical composition, to the controller, as well as control commands to manipulate the actuators according to predetermined setpoints. We consider two models for packet loss in the wireless links, namely, an independent and identically distributed (IID) packet loss model and the two-state Gilbert-Elliot (GE) channel model. While the former is a random loss model, the latter can model bursty losses. With each channel model, the performance of the simulated decentralized controller using wireless links is compared with the one using wired links providing instant and 100% reliable communications. The sensitivity of the controller to the burstiness of packet loss is also characterized in different process stages. The performance results indicate that wireless links with redundant bandwidth reservation can meet the requirements of the TE process model under normal operational conditions. When disturbances are introduced in the TE plant model, wireless packet loss during transitions between process stages needs further protection in severely impaired links. Techniques such as re-transmission scheduling, multi-path routing and enhanced physical layer design are discussed, and the latest industrial wireless protocols are compared.
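For reference, the two-state Gilbert-Elliot loss model used for bursty losses can be simulated in a few lines. The transition and per-state loss probabilities below are illustrative placeholders, not the values used in the study.

```python
import random

def gilbert_elliot_trace(n, p_g2b=0.05, p_b2g=0.30, loss_good=0.01, loss_bad=0.50):
    """Return a list of booleans, True meaning the packet was lost."""
    state_bad, trace = False, []
    for _ in range(n):
        loss_prob = loss_bad if state_bad else loss_good
        trace.append(random.random() < loss_prob)
        # Markov transition between the good and bad channel states.
        flip_prob = p_b2g if state_bad else p_g2b
        if random.random() < flip_prob:
            state_bad = not state_bad
    return trace

trace = gilbert_elliot_trace(10_000)
print(f"loss rate: {sum(trace) / len(trace):.3f}")
```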
Fault Detection in the Distillation Column ProcessISA Interchange
Chemical plants are complex large-scale systems which require robust fault detection schemes to ensure high product quality, reliability and safety under different operating conditions. The present paper is concerned with a feasibility study of the application of black-box modeling and the Kullback-Leibler divergence (KLD) to fault detection in a distillation column process. A Nonlinear Auto-Regressive Moving Average with eXogenous input (NARMAX) polynomial model is first developed to estimate the nonlinear behavior of the plant. The KLD is then applied to detect abnormal modes. The proposed FD method is implemented and validated experimentally using realistic faults on a laboratory-scale distillation plant. The experimental results clearly demonstrate that the proposed method is effective and gives early alarms to operators.
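As a sketch of the detection step, the snippet below computes the KLD between Gaussian fits of reference residuals (from the model under healthy operation) and a sliding window of current residuals, flagging a fault when the divergence exceeds a threshold. The Gaussian assumption and the threshold value are simplifications for illustration, not the paper's exact procedure.

```python
import numpy as np

def kld_gaussian(ref_residuals, window_residuals):
    """KL divergence D(N_window || N_ref) between two 1-D Gaussian fits."""
    m0, v0 = np.mean(ref_residuals), np.var(ref_residuals) + 1e-12
    m1, v1 = np.mean(window_residuals), np.var(window_residuals) + 1e-12
    return 0.5 * (np.log(v0 / v1) + (v1 + (m1 - m0) ** 2) / v0 - 1.0)

def is_faulty(ref_residuals, window_residuals, threshold=0.5):
    # Fault is declared when the divergence exceeds the chosen threshold.
    return kld_gaussian(ref_residuals, window_residuals) > threshold
```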
Neural Network-Based Actuator Fault Diagnosis for a Non-Linear Multi-Tank SystemISA Interchange
The paper is devoted to the problem of robust actuator fault diagnosis for dynamic non-linear systems. In the proposed method, it is assumed that the diagnosed system can be modelled by a recurrent neural network, which can be transformed into a linear parameter varying form. Such a system description allows developing a design scheme for a robust unknown input observer within the H∞ framework for a class of non-linear systems. The proposed approach is designed in such a way that a prescribed disturbance attenuation level is achieved with respect to the actuator fault estimation error, while guaranteeing the convergence of the observer. The application of the robust unknown input observer enables actuator fault estimation, which allows applying the developed approach to fault tolerant control tasks.
A KPI-based process monitoring and fault detection framework for large-scale ...ISA Interchange
Large-scale processes, consisting of multiple interconnected sub-processes, are commonly encountered in industrial systems, and their performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches have not been developed within a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each sub-process, an instrumental variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods.
An adaptive PID like controller using mix locally recurrent neural network fo...ISA Interchange
Being a complex, non-linear and coupled system, the robotic manipulator cannot be effectively controlled using a classical proportional integral derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for nonlinear and uncertain systems, the gains of the PID controller should be conservatively tuned and should adapt to process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller; it consists of at most three hidden nodes which act as proportional, integral and derivative nodes. The gains of the mix locally recurrent neural network based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than being assumed randomly. A sequential learning based least square algorithm is then investigated for the online adaptation of the gains of the MLRNNPID controller. The performance of the proposed controller scheme is tested against plant parameter uncertainties and external disturbances for both links of the two link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using the Lyapunov stability criteria. A performance comparison is carried out among the MLRNNPID controller, a CSA optimized NNPID (OPTNNPID) controller and a CSA optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller.
A method to remove chattering alarms using median filtersISA Interchange
Chattering alarms are the most common nuisance alarms; they reduce the usability of alarm systems and can result in a confidence crisis in alarm systems for industrial plants. This paper addresses chattering alarm reduction using median filters. Two rules are formulated to design the window size of the median filters. If the alarm probability can be estimated from process data, one rule based on the probability of alarms satisfies requirements on the false alarm rate or missed alarm rate. If only historical alarm data are available, the other rule is based on the percentage reduction of chattering alarms using the alarm duration distribution. Experimental results for industrial cases confirm that the proposed method is effective.
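The core operation is easy to state in code: sliding a median window over the binary alarm sequence removes short on/off bursts while preserving sustained alarms. In this sketch the window size is fixed; in the paper it would be chosen by the two design rules (from the alarm probability or the alarm duration distribution).

```python
from statistics import median

def median_filter_alarms(raw_alarms, window=5):
    """Median-filter a 0/1 alarm sequence to suppress chattering."""
    half = window // 2
    padded = [raw_alarms[0]] * half + list(raw_alarms) + [raw_alarms[-1]] * half
    return [int(median(padded[k:k + window])) for k in range(len(raw_alarms))]

# A chattering burst 1,0,1,0,1,1,1 becomes one sustained alarm.
print(median_filter_alarms([0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0]))
# -> [0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]
```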
Design of a new PID controller using predictive functional control optimizati...ISA Interchange
An improved proportional integral derivative (PID) controller based on predictive functional control (PFC) is proposed and tested on the chamber pressure in an industrial coke furnace. The proposed design is motivated by the fact that PID controllers for industrial processes with time delay may not achieve the desired control performance because of unavoidable model/plant mismatches, while model predictive control (MPC) is suitable for such situations. In this paper, PID control and the PFC algorithm are combined to form a new PID controller that has the basic characteristics of the PFC algorithm and, at the same time, the simple structure of a traditional PID controller. The proposed controller was tested in terms of set-point tracking and disturbance rejection, and the results showed that it had better overall performance than traditional PID controllers.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a PASSION for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk aims to encourage a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added as a quality characteristic and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
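If you want to try the Python binding before the webinar, a minimal session might look like the sketch below. It assumes pypowsybl is installed (pip install pypowsybl) and uses a bundled IEEE 14-bus example network rather than a real grid model.

```python
import pypowsybl as pp

network = pp.network.create_ieee14()    # bundled IEEE 14-bus example network
results = pp.loadflow.run_ac(network)   # run an AC power flow
print(results[0].status)                # convergence status of the main component
print(network.get_buses().head())       # bus data as a pandas DataFrame
```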
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many features that provide convenience and capability sacrifice security. This best practices guide outlines steps users can take to better protect personal devices and information.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and an overview of the platform. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Concentration measurements of bubbles in a water column using an optical tomography system
ISA Transactions 51 (2012) 821–826
Contents lists available at SciVerse ScienceDirect. Journal homepage: www.elsevier.com/locate/isatrans
doi: http://dx.doi.org/10.1016/j.isatra.2012.04.010

Concentration measurements of bubbles in a water column using an optical tomography system

S. Ibrahim (a,*), Mohd Amri Md Yunus (a), R.G. Green (b), K. Dutton (b)
(a) Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
(b) Materials and Engineering Research Institute, Sheffield Hallam University, City Campus, Sheffield, S1 1WB, United Kingdom

Article history: received 21 June 2011; received in revised form 27 April 2012; accepted 27 April 2012; available online 22 May 2012.

Abstract
Optical tomography provides a means for the determination of the spatial distribution of materials with different optical density in a volume by non-intrusive means. This paper presents results of concentration measurements of gas bubbles in a water column using an optical tomography system. A hydraulic flow rig is used to generate vertical air–water two-phase flows with controllable bubble flow rate. Two approaches are investigated. The first aims to obtain an average gas concentration at the measurement section, the second aims to obtain a gas distribution profile by using tomographic imaging. A hybrid back-projection algorithm is used to calculate concentration profiles from measured sensor values to provide a tomographic image of the measurement cross-section. The algorithm combines the characteristic of an optical sensor as a hard field sensor and the linear back projection algorithm.

© 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Keywords: Bubbles; Concentration; Optical tomography; Optical fibre sensors; Tomography
1. Introduction
In multi-phase flow measurement, both the phase distribution and the velocity profiles vary significantly in time and space. This is due to the different phases arranging themselves in various ways. For a multi-phase flow, the flow patterns are primarily functions of the volumetric fluxes of all phases. The flow patterns are functions of superficial velocities or pressure drops and are depicted in flow profiles. Hence it is important to have a knowledge of the flow profiles in order to design heat or heat and mass transfer equipment and to design fluid-based conveying processes [1].

Information on gas concentration is vital in various applications. In the medical field, it is important to have information on anaesthetic gas, oxygen or heliox concentration. In gas metering, it is vital to record the exact heating value of the gas in order to produce a precise bill. In gas burner appliances, flame optimisation is conducted for the purposes of efficiency and emission control; this is carried out by regulating the mixing ratio of air and gas. Generally, accurate gas concentration measurement is vital in many gas handling applications [2].

Many techniques have been employed for gas or bubble detection in two-phase flows; both intrusive and non-intrusive measurement techniques have been developed for this purpose. However, it is important that sensors being used for measurement purposes do not in any way perturb the quantity being measured. Point sensors are not generally suitable as they disturb the flow field. Non-intrusive techniques possess the advantage of not modifying the flow field and they are suitable for laboratory tests [3].

(* Corresponding author. Tel.: +60 19 7411 434; fax: +60 7 55 66 272. E-mail addresses: salleh@fke.utm.my (S. Ibrahim), mus_utm@yahoo.com (M.A.M. Yunus), r.g.green@shu.ac.uk (R.G. Green), k.dutton@shu.ac.uk (K. Dutton).)
The word tomography comes from the Greek words 'tomos', meaning a cut or slice, and 'graphein', meaning to write [4]. Process tomography is a methodology in which the internal characteristics of process vessel reactions or pipeline flows are acquired from measurements on or outside the domain of interest in a non-invasive fashion [5]. This paper describes an optical tomography system which is used to reconstruct an image from measurements obtained from several sensors placed around the measurement section of a hydraulic flow rig. Light travelling through a transparent medium suffers attenuation for various reasons, including scattering and absorption. Different materials cause varying levels of attenuation, and it is this phenomenon that forms the concept of optical tomography. The voltage generated by the optical sensors is proportional to the level of received light; it is related to the amount of attenuation in the path of the light beam caused by the flow regime [6]. Information about the optical characteristics of a flow can be obtained if a view consisting of an optical emitter and detector pair is positioned either side of the measurement section. A larger area can be interrogated if several views are combined to form a projection. The image of the flow can be reconstructed if several different projections are utilised [7].
2. The measurement system
In an optical tomography system, several groups of transmitter and receiver pairs are used to provide a better solution and to minimise the aliasing which occurs when two particles intercept the same view [8]. In this project, four dichroic halogen bulbs act as light projectors. The light receivers consist of an array of optical fibre sensors arranged in a combination of two orthogonal and two rectilinear projections (Fig. 1). The orthogonal projections each consist of an 8 × 8 array of optical fibre sensors, whereas the rectilinear projections consist of an 11 × 11 array of optical fibre sensors (the numbers of sensors are chosen so that they give a balanced sensitivity). Thus the total number of optical fibre sensors used is thirty-eight. Ideally the two orthogonal and two rectilinear projections should be in the same plane. However, they would then overlap each other, and so two of the projections have to be placed in a separate plane. These planes are separated by only a few mm, with the two orthogonal projections placed on top of the rectilinear projections with respect to the direction of flow. Each optical fibre receiver has a length of 200 cm from the flow pipe to the electronic circuit.

The receiver circuit is designed for signal conditioning using amplifiers and filters. The final outputs of the circuit are electrical signals consisting of a rectified voltage and an averaged voltage. The rectified voltage enables unipolar data acquisition and consequent signal processing using a PC. The output of the amplifier should be proportional to the gas flow rate passing the associated sensor. If all the averaged voltage amplifier outputs are summed, they should be proportional to the gas flow rate indicated by the gas rotameter. All the electronic circuits are placed in an earthed metal box to minimise electrical noise pick-up. The rectified analogue signals from an array of optical transducers, covering a cross-section of the pipe, are converted into digital form by the Keithley Instruments DAS-1800HC data acquisition system and passed into an image reconstruction system. For concentration measurements, a sampling frequency of 500 Hz per channel is chosen, as the velocities and flow rates associated with this project are relatively low (i.e. 0.2–0.3 m/s). This enables two hundred and twenty-two points to be collected for each of the thirty-eight channels, which allows 0.44 s of flow data to be obtained.
Data acquired by the receiver circuit is processed using the hybrid linear back projection algorithm, which has been described in a recent paper by Ibrahim et al. [9], in order to generate two-dimensional images of the bubbles in water. The algorithm incorporates both a priori knowledge and linear back projection (LBP) in order to improve the accuracy of the image reconstruction. Since optical sensors are hard field sensors, the material in the flow is assumed only to vary the intensity of the received signal. This enables a priori knowledge from the optical sensors to be used as a constraint in the reconstruction. The optical sensor signal is conditioned so that, when no objects block the path from light transmitter to receiver, the sensor produces a zero output value, neglecting the effect of noise inherent in the system.

Fig. 1. Arrangement of the projectors and optical fibres around the flow pipe (isometric view): four projectors and the optical fibre receiver arrays are mounted around the perspex flow pipe between top and bottom aluminium plates.
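The following minimal sketch (not the authors' code) shows the structure of such a hybrid reconstruction: plain LBP smears each measured attenuation back over its precomputed sensitivity map, and the hard-field a priori knowledge is applied as a mask that zeroes any pixel crossed by a view reporting no attenuation. The sensitivity maps, grid size and tolerance are illustrative assumptions.

```python
import numpy as np

def lbp(measurements, sensitivity_maps):
    """measurements: (n_views,); sensitivity_maps: (n_views, ny, nx)."""
    image = np.tensordot(measurements, sensitivity_maps, axes=1)
    # Normalise by summed sensitivity so well-covered pixels are not favoured.
    return image / np.maximum(sensitivity_maps.sum(axis=0), 1e-12)

def hybrid_lbp(measurements, sensitivity_maps, tol=1e-6):
    measurements = np.asarray(measurements, dtype=float)
    image = lbp(measurements, sensitivity_maps)
    covered = sensitivity_maps > 0                # which views cross each pixel
    active = measurements[:, None, None] > tol    # which views see attenuation
    # Hard-field constraint: keep a pixel only if every view crossing it
    # reports some attenuation (an unblocked view implies an empty pixel).
    mask = np.where(covered.any(axis=0), (active | ~covered).all(axis=0), False)
    return image * mask
```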
The system was tested on the hydraulic flow rig shown in Fig. 2. The measurement system is built around a vertical pipe 1.27 m long with circular cross-section. Control of the water flow is effected by the use of a pump and by various valves installed in the rig. Bubbles are injected into the measurement section through two bubble injectors placed at the base of the vertical section. The two small air injectors are utilised to blow different sized gas bubbles from the bottom of the pipe. Small bubbles are generated by a porous plug in the base of the flow rig, producing bubbles which visually appear to be of the order of 10 mm. Large bubbles are produced by direct gas injection into the flowing water; these bubbles are about 20 mm in diameter. When the bubbles rise up and pass through the imaging cross-section, the two-phase distribution over the imaging plane can be measured. The time history of bubbles rising up through the measurement section can be obtained in an off-line manner with the data stored on the hard disc. Control of bubble flow is achieved through the use of two valves linked to the two bubble injectors. The valves can control the size of the bubbles as well as generating various flow regimes. The air pressure supply to the bubble injectors can be varied from 0 to 420 kPa. Throughout the experiment, gas is injected at a constant pressure of 50 kPa. The bubbles collapse as they reach the surface of the water.

The flow pipe is made of perspex to enable visual observation of the flow. The lower measurement section, consisting of thirty-eight sensors, is placed 62 cm above the gas injection points, and the second sensing array, which also consists of thirty-eight sensors, is placed 15 cm downstream of the former. The measurement section is of modular construction and comprises a series of perspex blocks 90 mm square with an 80 mm diameter central bore, so that when bolted together they provide a continuous 80 mm diameter internal flow passage. In order to reduce optical distortion and to allow optical observation, a flat square perspex window is used [10]. The flow rig is equipped with two rotameters: a water rotameter (0–7 l/min) and a gas rotameter (0–7 l/min). Each rotameter provides direct readings of the total flow rate of water and bubbles respectively. For the experiments described in this paper, water always formed the continuous phase and the gas flow was always in the bubbly regime. The volumetric flow rate of bubbles ranged from 0 to 7 l/min.
3. Average concentration measurement
The measurements presented in this section consider the results from each sensor as a continuous sample of the gas concentration within its sensing field. The method was to obtain two hundred and twenty-two samples for all thirty-eight sensors; each sensor was sampled at 500 Hz. The individual sensor gains were compensated for in software. The mean value for each sensor was calculated at the specified flow rate and used with the standard linear back projection algorithm to produce an image consisting of grey level pixels; white represents maximum flow, black zero flow. These pixel grey levels are represented by pixel voltages. The pixel voltages are summed over the measurement cross-section. The values obtained for each flow rate for small bubbles are compared with the values obtained for large bubbles to observe the effect of flow rates and bubble size. Table 1 shows the sum of pixels for small bubbles and large bubbles corresponding to various volumetric flow rates. The results in Table 1 are shown graphically in Fig. 3 and discussed in Section 5.

Table 1. The flow rates of bubbles and the corresponding sum of pixel voltages for small bubbles and large bubbles.

Flow rate (l/min) | Sum of pixel voltages, small bubbles (V) | Error percentage, small bubbles (%) | Sum of pixel voltages, large bubbles (V) | Error percentage, large bubbles (%)
0.0 |   4.9 |  2.0 |   4.9 |  5.1
0.5 | 262.2 |  6.4 | 291.1 |  4.0
1.0 | 345.6 |  3.2 | 334.2 |  5.9
1.5 | 285.4 | 17.3 | 365.5 |  1.2
2.0 | 240.5 | 30.3 | 397.9 |  0.1
2.5 | 249.3 | 27.7 | 389.3 |  0.7
3.0 | 243.8 | 29.3 | 412.0 |  0.5
3.5 | 201.6 | 40.6 | 408.4 |  0.9
4.0 | 188.0 | 45.5 | 406.4 |  0.9
4.5 | 185.5 | 46.2 | 390.8 |  1.1
5.0 | 180.1 | 47.8 | 371.4 |  1.0
5.5 | 186.4 | 46.0 | 340.3 | 17.4
6.0 | 185.4 | 46.3 | 316.1 | 23.3
6.5 | 179.3 | 48.0 | 303.9 | 26.2
7.0 | 188.2 | 45.4 | 282.4 | 31.5

Fig. 2. The hydraulic flow rig.
Fig. 3. Gas flow rate calibration graph: sum of pixels (V) plotted against gas flow rate (l/min), showing the measured results and polynomial regressions for both small and big bubbles.
4. Concentration profiles

The measurement system for concentration consists of thirty-eight sensors. Ideally, with zero flow, all sensors should have zero output. In practice many of the sensors have an output voltage due to factors such as drift, intrinsic noise and offsets in the operational amplifiers. To reduce these errors, all the sensors were sampled at 500 Hz for 0.44 s with no gas flow at the start of each series of experiments. The root mean square voltage was then calculated for each sensor to provide a zero flow reference voltage. The zero flow reference voltages were used to correct the gas flow measurements for offset errors in each sensor.

The experiments were conducted with the laboratory lights switched off to ensure that the mains lighting did not affect the light receivers. Measurements were made by energising all thirty-eight optical sensors and monitoring their outputs at several gas flow rates ranging from 0 l/min to 7 l/min. Throughout the experiment, water flowed upwards at a volumetric flow rate of 3 l/min.
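In code, the zero-flow correction described above amounts to the following sketch. The array shapes follow the paper's capture of 222 samples for 38 sensors; clipping corrected readings at zero is an added assumption, not something the paper states.

```python
import numpy as np

def zero_flow_reference(no_flow_samples):
    """no_flow_samples: (222, 38) voltages captured with no gas flow."""
    return np.sqrt(np.mean(no_flow_samples**2, axis=0))  # per-sensor RMS, (38,)

def correct_offsets(readings, reference):
    """Subtract each sensor's zero-flow reference voltage from its readings."""
    return np.clip(readings - reference, 0.0, None)      # keep voltages >= 0
```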
The following results show a sequence of images representing the reconstructed fields of bubbles flowing in water, generated at selected volumetric flow rates. A sequence of images representing small bubbles flowing at a volumetric flow rate of 0.5 l/min is shown in Fig. 4a–d. A sequence of images representing large bubbles generated at a volumetric gas flow rate of 0.5 l/min is shown in Fig. 5a–d. In the matrices of Fig. 4a–d and Fig. 5a–d, any colour other than black indicates the location of bubbles; black indicates no flow. For example, in Fig. 4a the concentration profile shows the bubbles located at pixels (6,3), (4,5), (5,5), (2,6), (4,6), (5,6) and (6,6).

Fig. 4. (a) Matrix and concentration profile of the first sample representing small bubbles at 0.5 l/min. (b) Matrix and concentration profile of the second sample representing small bubbles at 0.5 l/min. (c) Matrix and concentration profile of the third sample representing small bubbles at 0.5 l/min. (d) Matrix and concentration profile of the fourth sample representing small bubbles at 0.5 l/min.
Fig. 5. (a) Matrix and concentration profile of the first sample representing large bubbles at 0.5 l/min. (b) Matrix and concentration profile of the second sample representing large bubbles at 0.5 l/min. (c) Matrix and concentration profile of the third sample representing large bubbles at 0.5 l/min. (d) Matrix and concentration profile of the fourth sample representing large bubbles at 0.5 l/min.
5. Discussion of results

The gas flow rate calibration graph (Fig. 3) shows the sum of voltages in all pixels within the flow pipe plotted against the volumetric flow rate of the bubbles, for both small and large bubbles. The results shown in Table 1 give a noise level of 4.9 V at zero flow, which corresponds to 1.4% of the maximum flow reading for small bubbles and 1.2% for large bubbles. The results obtained in Section 4 indicate that the system reacts to large and small bubbles in a similar manner. However, the peak of the graph occurs at a higher flow rate for large bubbles than for small bubbles: Fig. 3 shows the peak occurring at a gas flow rate of 1 l/min for small bubbles and 3 l/min for large bubbles. Empirical equations obtained using EXCEL software have been fitted to the results. For small bubbles the equation used is

Sum of pixels = -0.32x^6 + 7.76x^5 - 72.93x^4 + 340.26x^3 - 810.16x^2 + 860.21x    (1)

where x is the gas flow rate in l/min. The equation used for the large bubbles is

Sum of pixels = -0.31x^6 + 7.39x^5 - 67.63x^4 + 302.38x^3 - 694.29x^2 + 796.74x + 10.76    (2)

where x again is the gas flow rate in l/min.

The majority of measurements were made with circulating water to ensure that the bubbles flowed upwards in the pipe. However, the flowing water appeared to keep the small bubbles in the centre of the pipe. This means that the majority of small bubbles are confined to the central part of the measurement cross-section and only affect a few sensors. As the flow rate increases, the bubbles get closer together.

In the case of small bubbles, the gas flow rate calibration graph shows that from 0 to 1 l/min the sum of the pixels increases, but from 1.5 l/min to 4 l/min the sum of the pixels decreases as the flow rate increases. The small gas bubbles are generally confined to the centre of the pipe. As the volumetric flow rate increases, more bubbles are released, yet only a few sensors are affected, and as such the sensors as well as the electronics have little time to recover from bubbles that flowed previously. The signal conditioning therefore causes the signal to gradually reduce at high gas flow rates.

Large bubbles occupy a much larger cross-section of the conveyor than small bubbles, and so the majority of the sensors detect the presence of the bubbles. The sum of the pixels increases over the flow rate range 0–3 l/min, as shown in Fig. 3; however, beyond a volumetric flow rate of 3 l/min it begins to decrease.
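As a quick numerical check, Eqs. (1) and (2) can be evaluated against the Table 1 data with a few lines of Python; np.polyval takes the coefficients from the highest power down to the constant term. The comparison values quoted in the comments are taken from Table 1.

```python
import numpy as np

# Coefficients of Eqs. (1) and (2), listed from x^6 down to the constant term.
small = [-0.32, 7.76, -72.93, 340.26, -810.16, 860.21, 0.0]
large = [-0.31, 7.39, -67.63, 302.38, -694.29, 796.74, 10.76]

for x in (0.5, 1.0, 3.0):  # gas flow rate in l/min
    print(x, round(np.polyval(small, x), 1), round(np.polyval(large, x), 1))
# e.g. at 0.5 l/min: ~265.8 V vs 262.2 V measured (small bubbles);
#      at 3.0 l/min: ~408.4 V vs 412.0 V measured (large bubbles).
```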
6. Conclusions

In this paper, a non-intrusive concentration measurement of bubbles flowing in a water column has been presented. The results showed that the measurement system is able to fulfil the original objectives of obtaining an average gas concentration and gas distribution profiles. For future work, it is suggested that experiments be conducted over a larger range of flow regimes. Ideally, in an industrial environment, it is preferable to use a laser as the light source due to its monochromatic and coherent characteristics. The resolution can be increased by increasing the number of views per light source for each pixel. Further investigation using other types of reconstruction algorithms and different forms of filtering techniques should be performed. The use of multi-modality tomography should be investigated, in which the optical tomographic system can be combined with other types of sensing, with the aim of comparing the accuracy of the measurements and increasing the understanding of the flow process.
Acknowledgement

The authors wish to acknowledge the assistance of Universiti Teknologi Malaysia for providing the funds and resources to carry out this research.
References

[1] Dyakowski T. Process tomography applied to multi-phase flow measurement. Measurement Science & Technology 1996;7:343–53.
[2] http://www.sensirion.com/en/pdf/RD-notes/RD-note_February2012.pdf
[3] Rossi GL. Error analysis based development of a bubble velocity measurement chain. Flow Measurement and Instrumentation 1996;7(1):39–47.
[4] Northrop RB. Noninvasive instrumentation and measurement in medical diagnosis. Boca Raton, FL: CRC Press; 2002.
[5] Wang M. Process tomography (editorial). Measurement Science and Technology 2006;17.
[6] Abdul Rahim R, Kok San C. Optical tomography imaging in pneumatic conveyor. Sensors and Transducers Journal 2008;95(8):40–8.
[7] Daniels AR. Dual modality tomography for the monitoring of constituent volumes in multi-component flow. PhD thesis. Sheffield Hallam University; 1996.
[8] Abdul Rahim R, Fea PJ, Kok San C. Optical tomography sensor configuration using two orthogonal and two rectilinear projection arrays. Flow Measurement and Instrumentation 2005;16:327–40.
[9] Ibrahim S, Green RG, Dutton K, Evans K, Goude A, Abdul Rahim R. Optical sensor configurations for process tomography. Measurement Science & Technology 1999;10:1079–86.
[10] King NW, Purfit GL, Pursley WC. A laboratory facility for the study of mixing phenomena in water/oil flows. In: Proceedings of the symposium on flow metering and proving techniques in the offshore oil industry (Aberdeen); 1983. p. 233–41.