The document describes the development and training of an artificial neural network model to predict the performance of an adiabatic packed tower regenerator using lithium bromide as a desiccant. The neural network was trained using input parameters like temperature, flow rates, and humidity ratios of air and desiccant. The output parameters used for evaluation were moisture removal rate and regenerator effectiveness. The neural network was trained using an error backpropagation algorithm and was found to accurately predict the experimental moisture removal rate and effectiveness values from tests of the regenerator, with average differences between predicted and measured values being well below 5%.
Conference on the Environment - GUERRA presentation Nov 19, 2014 - Sergio A. Guerra
This document discusses innovative dispersion modeling practices to achieve reasonable conservatism in regulatory modeling demonstrations. It presents a case study evaluating the Emissions and Meteorological Variability Processor (EMVAP) and approaches to establish background concentrations. The case study models SO2 concentrations from a power plant using 1) constant emissions, 2) variable emissions, and 3) EMVAP. EMVAP provides more realistic concentrations while accounting for emission variability. Using the 50th percentile monitored background concentration when combining with modeled values provides statistical conservatism compared to using high percentile values.
IRJET- Application of Nanofluids to Improve Performance of a Flat Plate Solar... - IRJET Journal
This document reviews research on using nanofluids to improve the performance of flat plate solar collectors. Nanofluids are fluids containing nano-sized particles that can enhance heat transfer properties. Several studies have found that nanofluids can increase heat transfer coefficients and collector efficiencies compared to using plain water. Specifically, alumina, copper oxide, and carbon nanotubes dispersed in water have shown efficiency improvements of up to 29% relative to water alone. Higher nanofluid concentrations and lower flow rates tend to increase efficiency, up to an optimal point. Overall, the literature demonstrates that nanofluids have promising potential to enhance flat plate solar collector performance.
Innovative Dispersion Modeling Practices to Achieve a Reasonable Level of Con... - Sergio A. Guerra
The document discusses innovative modeling practices to achieve reasonable conservatism in AERMOD modeling demonstrations. It presents a case study evaluating three modeling techniques: EMVAP, which assigns random emission rates over iterations; ARM2, which calculates NOx to NO2 conversion based on plume entrapment; and using the 50th percentile monitored background concentration. The case study found lower modeled concentrations using EMVAP and ARM2 compared to current practices, demonstrating these techniques can provide more realistic results while still protecting air quality standards. Pairing the 98th percentile predicted concentration with the 50th percentile monitored background provided a statistically conservative but reasonable level of conservatism.
Using Physical Modeling to Evaluate Re-entrainment of Stack Emissions - Sergio A. Guerra
Fume re-entry is an important concern for many types of facilities, such as hospitals and laboratories, that emit pathogens and toxic chemicals which may impact public health by being re-entrained into the building through nearby air intakes. Numerical methods can be used to evaluate the dispersion of pollutants from stacks at sensitive receptors. However, numerical methods have limitations and simplifications that can significantly affect their predictions. An alternate way of analyzing stack re-entrainment is with physical modeling in a wind tunnel. In such a study, a scale model that accounts for buildings, topography, and vegetation is used with planned and alternate stack designs to determine the toxic emission impacts on air intakes and other sensitive locations. In a wind tunnel study, different stack designs and possible mitigation options can be evaluated. This method is superior to numerical methods (e.g., dispersion models) because it accounts for the immediate structures, topography, and vegetation that are often ignored or oversimplified in numerical methods.
This presentation will show a hypothetical case study evaluating a site with toxic air emissions using AERMOD and physical modeling.
INNOVATIVE DISPERSION MODELING PRACTICES TO ACHIEVE A REASONABLE LEVEL OF CON... - Sergio A. Guerra
Presentation delivered at the Board meeting for the Upper Midwest section of the Air and Waste Management Association meeting on September 16, 2014.
Innovative dispersion modeling techniques are presented, including ARM2, EMVAP, and the 50th percentile background concentration. The case study involves peaking engines that are used 250 hours per year. These intermittent sources are required to undergo a modeling evaluation in many states. Current modeling techniques grossly overestimate the impacts of these sporadic sources.
EFFECTS OF MET DATA PROCESSING IN AERMOD CONCENTRATIONS - Sergio A. Guerra
This document summarizes the results of a sensitivity analysis using AERMOD to model pollutant concentrations from three hypothetical emission sources under nine different meteorological data processing scenarios. The analysis found that: 1) Changing the meteorological station location had a modest effect on modeled concentrations for short and tall stacks. 2) Surface roughness category (urban vs. rural) had the largest effect on concentrations for tall stacks. 3) Varying the anemometer height resulted in small concentration changes, while surface moisture variation did not significantly affect outcomes. 4) For tall stacks, use of AERMINUTE data to include low wind hours led to much higher modeled concentrations compared to excluding this data.
CPP is an air quality and wind engineering consulting firm that provides air permitting and advanced dispersion modeling services. They have expertise in AERMOD modeling, wind tunnel modeling, and other advanced analysis methods like equivalent building dimensions and emission variability processing. Using these advanced methods, CPP can help optimize clients' emission control equipment and stack heights to make projects compliant with permitting requirements in cases where initial modeling shows exceedances.
This document summarizes the results of a study on quantifying ventilation effectiveness in plant and animal environments. The study involved laboratory experiments and field measurements to evaluate different methods for measuring ventilation effectiveness.
The laboratory experiments examined how ventilation parameters like the type of ventilation system, ventilation rate, and pollutant source location affect the spatial distribution of total suspended particulates and carbon dioxide concentrations. The field measurements evaluated these methods in a commercial swine building.
The results showed that the type of ventilation system and pollutant source location significantly impact pollutant concentrations, but ventilation rate did not substantially alter their spatial distribution. Based on these findings, the document provides recommendations for measuring ventilation effectiveness in animal buildings using contaminant concentration measurements at multiple sampling locations.
In this paper, a mathematical model is developed to study the performance of a parabolic trough collector (PTC). The proposed model consists of three parts. The first part is a solar radiation model, used to estimate the solar radiation incident upon the Earth from the geometric relationships between the sun and the Earth. The second part is the optical model, which determines the optical efficiency of the PTC throughout the daytime. The last part is the thermal model, which estimates the energy collected by different working fluids and calculates the heat losses, thermal efficiency, and outlet fluid temperature. All heat balance equations and heat transfer mechanisms (conduction, convection, and radiation) have been incorporated. The proposed model is implemented in MATLAB. New nanofluids (Water+PEO+1%CNT, PEO+1%CNT, and PEO+0.2%CuO) were tested and compared with conventional water and molten salt for winter and summer conditions in the city of Basra, and good improvements in collector performance were obtained. The results identify both the design and environmental parameters that affect the performance of the PTC. In summer, the thermal efficiency improved by 19.68%, 17.47%, and 15.1% for the three nanofluids respectively compared to water, and by 10.98%, 8.93%, and 6.7% respectively compared to molten salt. The heat losses through the vacuum space between the receiver tube and the glass envelope decreased by 86%, 76%, and 66% compared with water and by 79.15%, 64.34%, and 48.47% compared with molten salt. Overall, the Water+PEO+1%CNT nanofluid gives the best performance.
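The thermal-model bookkeeping described in this abstract reduces to a simple energy balance: useful heat gain over incident solar power. A minimal sketch follows; all fluid properties and operating numbers below are illustrative assumptions, not values from the paper.

```python
def thermal_efficiency(m_dot, cp, t_in, t_out, aperture_area, irradiance):
    """Collector thermal efficiency: useful heat gain over incident solar power."""
    q_useful = m_dot * cp * (t_out - t_in)      # W, absorbed by the working fluid
    q_incident = aperture_area * irradiance     # W, solar power on the aperture
    return q_useful / q_incident

# Hypothetical operating point for a small parabolic trough
eta = thermal_efficiency(
    m_dot=0.05,          # kg/s
    cp=4186.0,           # J/(kg K), water
    t_in=300.0,          # K
    t_out=312.0,         # K
    aperture_area=6.0,   # m^2
    irradiance=800.0,    # W/m^2
)
print(f"thermal efficiency: {eta:.3f}")
```

A nanofluid with a higher effective heat transfer coefficient raises the outlet temperature for the same inlet conditions, which is how the efficiency gains quoted above enter this balance.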
Advanced Modeling Techniques for Permit Modeling - Turning challenges into o... - Sergio A. Guerra
Advanced modeling techniques can be used in AERMOD to refine the inputs that are entered into the model to get more accurate results. This presentation covers:
-AERMOD’s Temporal Mismatch Limitation
-Building Downwash Limitations in BPIP/PRIME
-Advanced Modeling Techniques to Overcome these Limitations
Solutions include:
Equivalent Building Dimensions (EBD)
Emission Variability Processor (EMVAP)
Updated ambient ratio method (ARM2)
Pairing AERMOD values with the 50th percentile background concentration in cumulative analyses.
Using Physical Modeling to Refine Downwash Inputs to AERMOD - Sergio A. Guerra
Achieving compliance in dispersion modeling can be quite challenging because of the tight National Ambient Air Quality Standards (NAAQS). In addition, AERMOD’s limitations can, in many cases, produce higher than normal concentrations due to the inherent assumptions and simplifications in its formulation. In the case of downwash, the theory used to estimate these effects was developed for a limited set of building types. However, these formulations are commonly used indiscriminately for all types of buildings. This presentation will cover how the basics of wind tunnel modeling can overcome some of these limitations and be used to mitigate downwash induced overpredictions to achieve compliance.
INNOVATIVE DISPERSION MODELING PRACTICES TO ACHIEVE A REASONABLE LEVEL OF CON... - Sergio A. Guerra
Presentation delivered at the Annual Air and Waste Management Association conference in Long Beach, California on June 26, 2014.
Innovative dispersion modeling techniques are presented, including ARM2, EMVAP, and the 50th percentile background concentration. The case study involves peaking engines that are used 250 hours per year. These intermittent sources are required to undergo a modeling evaluation in many states. Current modeling techniques grossly overestimate the impacts of these sporadic sources.
PRIME2_consequence_analysis_and_model_evaluation - Sergio A. Guerra
The Plume Rise Model Enhancements (PRIME) building downwash algorithms (Schulman et al. 2000) in AERMOD are being updated to address some of the most critical limitations in the current theory. These enhancements will incorporate the latest advancements related to building downwash effects. The technical aspects of these enhancements are discussed in more detail in a companion paper titled “PRIME2: Development and Evaluation of Improved Building Downwash Algorithms for Solid and Streamlined Structures (MO13)”. The updates to the PRIME code include new equations to account for building wake effects that decay rapidly back to ambient levels above the top of the building; reduced wake effects for streamlined structures; and reduced wake effects for high approach roughness. A consequence analysis comparing the current AERMOD/PRIME model versus the new AERMOD/PRIME2 model was performed. Additionally, a field data evaluation was conducted with the Bowline Point database. The results from these analyses are discussed below.
Use of Probabilistic Statistical Techniques in AERMOD Modeling Evaluations - Sergio A. Guerra
The advent of the short term National Ambient Air Quality Standards (NAAQS) prompted modelers to reassess the common practices in dispersion modeling analyses. The probabilistic nature of the new short term standards also opens the door to alternative modeling techniques that are based on probability. One of these is the Monte Carlo technique that can be used to account for emission variability in permit modeling.
Currently, it is assumed that a given emission unit is in operation at its maximum capacity every hour of the year. This assumption may be appropriate for facilities that operate at full capacity most of the time. However, in most cases, emission units operate at variable loads that produce variable emissions. Thus, assuming constant maximum emissions is overly conservative for facilities such as power plants that are not in operation all the time and which exhibit high concentrations during very short periods of time.
Another element of conservatism in NAAQS demonstrations relates to combining predicted concentrations from the AMS/EPA Regulatory Model (AERMOD) with observed (monitored) background concentrations. Normally, some of the highest monitored observations are added to the AERMOD results yielding a very conservative combined concentration.
A case study is presented to evaluate the use of alternative probabilistic methods to complement the shortcomings of current dispersion modeling practices. This case study includes the use of the Monte Carlo technique and the use of a reasonable background concentration to combine with the AERMOD predicted concentrations. The use of these methods is in harmony with the probabilistic nature of the NAAQS and can help demonstrate compliance through dispersion modeling analyses, while still being protective of the NAAQS.
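The Monte Carlo idea described in this abstract can be sketched in a few lines: sample hourly emission rates from an operating profile instead of holding every hour at the permitted maximum, and compare the resulting peak concentrations. All numbers and distributions below are hypothetical illustrations, not values from the case study.

```python
import random

random.seed(42)

MAX_EMISSION = 100.0   # hypothetical permitted maximum emission rate, g/s
UNIT_CONC = 0.8        # hypothetical modeled concentration per unit emission
HOURS = 8760           # one year of hourly values

def sample_emission():
    """Draw an hourly emission rate from a hypothetical load profile:
    the unit is off 70% of the time and runs at 50-100% load otherwise."""
    if random.random() < 0.70:
        return 0.0
    return MAX_EMISSION * random.uniform(0.5, 1.0)

# Current practice: every hour assumed to be at the permitted maximum
constant_concs = [MAX_EMISSION * UNIT_CONC for _ in range(HOURS)]

# Monte Carlo: emissions vary hour to hour across iterations
variable_concs = [sample_emission() * UNIT_CONC for _ in range(HOURS)]

constant_peak = max(constant_concs)
variable_peak = max(variable_concs)
print(f"constant-emissions peak: {constant_peak:.1f}")
print(f"variable-emissions peak: {variable_peak:.1f}")
```

Because the sampled load never exceeds the maximum, the variable-emissions peak is bounded above by the constant-emissions result, which is the sense in which the constant-maximum assumption is conservative for intermittent sources.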
Energy and Exergy Analysis of Organic Rankine Cycle Using Alternative Working... - IOSR Journals
This document analyzes the energy and exergy efficiency of organic Rankine cycles (ORC) using different working fluids, including HFO-1234yf, HFC-134a, HFC-245fa, ethanol, and iso-pentane. The study models saturated and trilateral ORC cycles and compares the thermal and exergetic efficiency of the cycles using the different working fluids. The results show that HFO-1234yf and HFC-134a have the highest thermal and exergetic efficiencies. HFO-1234yf is identified as a promising working fluid for low- to medium-temperature ORC applications due to its low global warming potential, zero ozone depletion, and low evaporation temperature.
NOVEL DATA ANALYSIS TECHNIQUE USED TO EVALUATE NOX AND CO2 CONTINUOUS EMISSIO... - Sergio A. Guerra
The current study presents a new data analysis technique developed while evaluating continuous emission data collected from a trash compactor. The evaluation involved tailpipe sampling with a portable emission monitoring system (PEMS) from a diesel-fueled 525-horsepower trash compactor. The sampling campaign was conducted by running the compactor with regular No. 2 diesel, B20, and ULSD fuels. The purpose was to determine the possible emission reductions in nitrogen oxides (NOx) and carbon dioxide (CO2) from the use of B20 and ULSD in an off-road vehicle. The results from the NOx analysis are discussed.
The initial data analysis identified two important issues. The first concern was a bias in the calculated F values due to the very large number of samples (N): the large N inflated the probability values and indicated a false statistical significance for all factors tested. Additionally, the data observations were found to be highly autocorrelated. Thus, a time-interval data reduction technique was used to address these two statistical limitations to the robustness of the analyses. The result in each case was a subset of quasi-independent observations sampled at an interval of 800 seconds. The autocorrelation and false statistical significance issues were promptly resolved by this technique. Since false statistical significance and autocorrelation are inherent in continuous data, the positive results obtained here can be far-reaching. The technique allowed for a valid use of the general linear model (GLM), with engine speed as the covariate, to test the day, fuel type, and compactor factors. It is especially relevant given advancements in data collection capabilities that require data handling techniques to satisfy the statistical assumptions necessary for valid analyses.
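The time-interval reduction step lends itself to a short sketch: keep one observation per 800-second window and check the lag-1 autocorrelation before and after. The signal below is synthetic (a strongly persistent AR(1) process standing in for second-by-second PEMS data), not the study's dataset.

```python
import random

random.seed(7)

def lag1_autocorrelation(series):
    """Lag-1 autocorrelation coefficient of a time series."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + 1] - mean) for i in range(n - 1))
    return cov / var

def subsample(series, interval):
    """Keep one observation every `interval` points (e.g., one per 800 s
    for a 1 Hz record)."""
    return series[::interval]

# Synthetic 1 Hz signal with strong persistence: AR(1) with phi = 0.99
raw, x = [], 0.0
for _ in range(200_000):
    x = 0.99 * x + random.gauss(0.0, 1.0)
    raw.append(x)

reduced = subsample(raw, interval=800)  # quasi-independent subset

r_raw = lag1_autocorrelation(raw)
r_red = lag1_autocorrelation(reduced)
print(f"raw lag-1 autocorrelation:     {r_raw:.3f}")
print(f"reduced lag-1 autocorrelation: {r_red:.3f}")
```

The subset's autocorrelation falls to near zero, and its much smaller N also removes the inflated significance that a test on all 200,000 raw points would report.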
Investigating The Performance of A Steam Power Plant - IJMERJOURNAL
ABSTRACT: The performance analysis of the Shobra El-Khima power plant in Cairo, Egypt is presented based on energy and exergy analysis to determine the causes and sites of high exergy destruction and losses, and the possibilities of improving the plant performance. The performance of the plant was evaluated at different loads (full, 75%, and 50%). The calculated thermal efficiency based on the heat added to the steam was found to be 41.9%, 41.7%, and 43.9%, while the exergetic efficiency of the power cycle was found to be 44.8%, 45.5%, and 48.8% at maximum, 75%, and 50% load respectively. The condenser was found to have the largest energy losses, where 54.3%, 55.1%, and 56.3% of the energy added to the steam (at maximum, 75%, and 50% load respectively) is lost to the environment. The maximum exergy destruction was found in the turbine, where the percentage of exergy destruction was 42%, 59%, and 46.1% at maximum, 75%, and 50% load respectively. The pump was found to have the minimum exergy destruction. It was also found that the exergy destruction in the feed water heaters and the condenser together represents the maximum exergy destruction in the plant (about 52%). This means that the irreversibilities in the heat transfer devices have a significant role in the exergy destruction. Thus, improvement of the power plant will be limited by the heat transfer devices.
Performance prediction of a thermal system using Artificial Neural Networks - IJERD Editor
This document summarizes a study on using artificial neural networks (ANNs) to predict the performance of a condenser system and assess fouling over time. Experiments were conducted on an industrial condenser to collect temperature and flow rate data. An ANN model was developed and trained to predict the overall heat transfer coefficient of the clean condenser system based on the input parameters. The model was then used to calculate the fouling factor by comparing the predicted clean performance to the actual performance measured over time, indicating degradation due to fouling on the heat transfer surfaces. The developed system provides a method to monitor condenser performance and identify when cleaning is needed to improve efficiency.
Background Concentrations and the Need for a New System to Update AERMOD - Sergio A. Guerra
Presentation delivered at the EPA 11th Conference on Air Quality Modeling at RTP, NC.
Topics covered include background concentrations and the need for a new system to update AERMOD. An evaluation of what is being proposed in the draft guidance related to background concentrations and an alternative approach to determine background concentrations for dispersion modeling evaluations is presented. A review of the lessons learned from Appendix W and a proposed new method to incorporate science into the model.
Presentation includes information related to gently sloping terrain, AERMINUTE, and EPA formula height.
Presented at the 27th Annual Conference on the Environment on November 13, 2012.
Pairing AERMOD concentrations with the 50th percentile monitored value - Sergio A. Guerra
This document proposes a new method for combining modeled concentrations from AERMOD with monitored background concentrations.
The current practice of adding the maximum or 98th percentile monitored concentration is overly conservative. Instead, the document suggests using the 50th percentile (median) monitored concentration.
Pairing the 98th percentile modeled concentration with the 50th percentile monitored concentration results in a combined 99th percentile concentration. This provides a more conservative estimate than the form of the short-term air quality standards, while avoiding the mismatch of temporal pairing in AERMOD and the influence of exceptional events.
The proposed method is presented as a simple, protective approach for demonstrating compliance with air quality standards when considering both modeled and monitored background concentrations.
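The percentile arithmetic behind the pairing can be illustrated with a small simulation. Under independence, the chance that an hour exceeds both the 98th percentile modeled value and the median monitored value at once is 0.02 × 0.5 = 0.01, i.e., about 1% of hours, which is the sense in which the pairing yields a combined 99th-percentile-level estimate. The distributions below are synthetic stand-ins; real modeled and monitored series are temporally correlated, which is part of the paper's argument about temporal pairing.

```python
import random

random.seed(1)
N = 200_000

def percentile(values, p):
    """Empirical percentile via a sorted-index lookup (no interpolation)."""
    s = sorted(values)
    return s[min(int(p / 100 * len(s)), len(s) - 1)]

# Synthetic stand-ins for modeled and monitored hourly concentrations
modeled = [random.lognormvariate(0.0, 0.8) for _ in range(N)]
monitored = [random.lognormvariate(0.5, 0.4) for _ in range(N)]

x98 = percentile(modeled, 98)    # 98th percentile modeled value
y50 = percentile(monitored, 50)  # median monitored background

# Fraction of hours exceeding BOTH thresholds simultaneously
joint = sum(1 for x, y in zip(modeled, monitored) if x > x98 and y > y50) / N
print(f"fraction of hours exceeding both thresholds: {joint:.4f}")
```

The joint exceedance fraction comes out near 0.01, consistent with the 99th-percentile framing, while avoiding the much rarer coincidence implied by summing two high-percentile values.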
A study of Himreen Reservoir water quality using in situ - Alexander Decker
This document summarizes a study of the water quality of Himreen Reservoir in Iraq using remote sensing techniques and in situ measurements. Satellite images from 1989 and 2002 were analyzed to study parameters like turbidity and suspended sediments. Water samples from 12 locations were also tested. The results show a correlation between higher reflectance in images and higher turbidity. The reservoir water was found to be of medium salinity and good quality. A comparison of the two years showed less shallow water and suspended sediments in 2002, likely due to changes in climate and river flow.
This research poster presents a study on scaling up photocatalytic reactors. The objectives were to determine the relationship between photocatalysis parameters like reactor volume, hydrogen peroxide dosage, catalyst loading, and time. It also aimed to quantify the hydroxyl radicals required to treat methyl orange. A factorial design experiment was conducted varying the reactor volume from 200 to 1100 mL, the hydrogen peroxide dosage, and the catalyst loading. Results showed reactor volume was the most influential factor. A hydroxyl quantification study was then performed using 7-hydroxycoumarin to better understand parameter limits for scaling up from 200 mL to 5000 mL reactors.
Pairing AERMOD concentrations with the 50th percentile monitored value - Sergio A. Guerra
Presentation delivered to the Background Concentrations Workgroup for Air Dispersion Modeling, organized by the Minnesota Pollution Control Agency, on March 25, 2014. Three topics were covered: 1) screening monitoring data, 2) AERMOD’s time-space mismatch, and 3) the proposed 50th percentile background method.
New Guideline on Air Quality Models and the Electric Utility Industry - Sergio A. Guerra
The new EPA guideline on air quality models makes several changes, including adopting AERMOD version 16216r as the new default model. It establishes a two-tiered approach for modeling ozone and secondary PM2.5 formation, using existing empirical relationships (Tier 1) or chemical transport models (Tier 2). CALPUFF is no longer preferred for long-range transport modeling beyond 50 km. The guideline also allows the use of prognostic meteorological data in some cases. While the changes aim to promote consistency, the increased flexibility may lead to legal challenges and delays.
The Plume Rise Model Enhancements (PRIME) formulation in AERMOD has been updated based on new equations developed from wind tunnel measurements taken downwind of various solid and streamlined structures. These new equations, along with other building downwash improvements, have been included as alpha options in the upcoming new version of AERMOD. The PRIME2 options include:
- PRIME2UTurb, which enables enhanced calculations of turbulence and wind speed
- PRIME2Ueff, which defines the height used to compute the effective parameters Ueff, Sweff, Sveff, and Tgeff at plume height and at 30 m
- Streamline, which defines the set of constants for modeling all structures as streamlined; if omitted, rectangular building constants are used
The ORD options include:
- PRIMEUeff, which controls the heights for which the wind speed is calculated for the main plume concentrations; the average between plume height and receptor height is recommended in the ORD version, while the default is the current AERMOD method (stack height wind speed)
- PRIMETurb, which adjusts the vertical turbulence intensity wiz0 from 0.6 to 0.7
- PRIMECav, which modifies the cavity calculations
These improvements aim to address important theoretical issues that significantly affect the accuracy of predicted concentrations subject to downwash effects. This research effort was funded in part by the American Petroleum Institute, the Electric Power Research Institute, the Corn Refiners Association, and the American Forest & Paper Association. As part of it, the PRIME2 subcommittee under the A&WMA APM committee was formed to: (1) establish a mechanism to review, approve, and implement new science into the model for this and future improvements; and (2) provide a technical review forum to improve the PRIME building downwash algorithms.
Collaboration and cooperation from the EPA Office of Research and Development (ORD) has been ongoing during the research project, resulting in new alpha options aimed at solving known issues with the treatment of building downwash effects in AERMOD. The intent is that these experimental options will be tested by the user community to create enough justification to make them beta options (approved on a case-by-case basis) and eventually default options in AERMOD. A preliminary evaluation for the following four cases will be presented:
- Arconic - Davenport, IA (formerly Alcoa)
- Mirant Potomac River Generating Station - Alexandria, VA
- Basic American Foods - Blackfoot, ID
- Oakley Generating Station - Oakley, CA
The evaluation includes comparing 1-hr, 24-hr, and annual averages along with Q-Q plots and isopleths. A discussion of the results obtained will also be presented.
AIR DISPERSION MODELING HIGHLIGHTS FROM 2012 ACE - Sergio A. Guerra
Presentation includes some highlights from the dispersion modeling papers presented at the Annual AWMA conference in San Antonio, TX. Topics covered include EMVAP, distance limitations of AERMOD, and two case studies comparing predicted and monitored data.
Presented at the A&WMA UMS Board Meeting on August 21, 2012.
Wind power forecasting: A Case Study in Terrain using Artificial Intelligence - IRJET Journal
This document presents a study on using artificial neural networks to forecast wind power. Real-time data on wind speed, direction, temperature, humidity and pressure was collected from a measurement station. 100 artificial neural networks with different structures were trained and tested. The best performing network was a multilayer perceptron with 6 inputs, 24 hidden layers, exponential activation and identity output activation. This network achieved a 99% success rate in estimating wind power compared to real measured data. The study demonstrates that artificial neural networks can accurately estimate wind power for short-term forecasting.
The document summarizes a study that used artificial neural networks (ANN) to predict chemical oxygen demand (COD) levels in an anaerobic wastewater treatment system. Four ANN backpropagation training algorithms - Levenberg-Marquardt, gradient descent with adaptive learning, gradient descent with momentum, and resilient backpropagation - were tested on a model using COD input data. The Levenberg-Marquardt algorithm produced the best results with the lowest mean squared error of 0.533 and highest regression value of 0.991, accurately predicting COD levels. The study demonstrates ANNs can effectively model and predict values in nonlinear wastewater treatment processes.
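The backpropagation training the summary refers to can be illustrated with a minimal sketch. This is not the study's model or data: it is a one-hidden-layer network fitted to synthetic inputs (standing in for normalized COD measurements) with plain gradient-descent backpropagation, one of the four algorithm families compared, written in NumPy.

```python
import numpy as np

# Illustrative sketch only: synthetic data, hypothetical features,
# plain gradient-descent backpropagation (not Levenberg-Marquardt).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 3))          # 3 hypothetical input features
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 2]).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))
lr = 0.3
losses = []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)             # hidden layer, tanh activation
    out = h @ W2 + b2                    # linear output layer
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared error
    d_out = 2 * err / len(X)
    dW2 = h.T @ d_out; db2 = d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # tanh derivative
    dW1 = X.T @ d_h; db1 = d_h.sum(axis=0, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

The study's preferred Levenberg-Marquardt algorithm replaces the fixed-step update above with a damped Gauss-Newton step, which is why it typically converges in far fewer iterations on small networks like this.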
This document summarizes the results of a study on quantifying ventilation effectiveness in plant and animal environments. The study involved laboratory experiments and field measurements to evaluate different methods for measuring ventilation effectiveness.
The laboratory experiments examined how ventilation parameters like the type of ventilation system, ventilation rate, and pollutant source location affect the spatial distribution of total suspended particulates and carbon dioxide concentrations. The field measurements evaluated these methods in a commercial swine building.
The results showed that the type of ventilation system and pollutant source location significantly impact pollutant concentrations, but ventilation rate did not substantially alter their spatial distribution. Based on these findings, the document provides recommendations for measuring ventilation effectiveness in animal buildings using contaminant concentration measurements at multiple sampling locations.
In this paper, a mathematical model is developed to study the performance of a parabolic trough collector (PTC). The proposed model consists of three parts. The first part is a solar radiation model used to estimate the amount of solar radiation incident upon the Earth from equations and relationships between the sun and the Earth. The second part is the optical model, which determines the optical efficiency of the PTC throughout the daytime. The last part is the thermal model, which estimates the amount of energy collected by different types of fluids and calculates the heat losses, thermal efficiency and outlet temperature of the fluid. All heat balance equations and heat transfer mechanisms (conduction, convection, and radiation) have been incorporated. The proposed model is implemented in MATLAB. New nanofluids (Water+PEO+1%CNT, PEO+1%CNT and PEO+0.2%CuO) were tested and compared with conventional water and molten salt during winter and summer for the city of Basra, and good results were obtained in improving the performance of the solar collector. The results explain both the design and environmental parameters that affect the performance of the PTC. The percentage improvements in thermal efficiency in summer when using the nanofluids (Water+PEO+1%CNT, PEO+1%CNT and PEO+0.2%CuO) are 19.68%, 17.47% and 15.1% respectively compared to water, and 10.98%, 8.93% and 6.7% respectively compared to molten salt. The heat losses through the vacuum space between the receiver tube and the glass envelope decreased by 86%, 76% and 66% compared with water, and by 79.15%, 64.34% and 48.47% compared with molten salt. Overall, the Water+PEO+1%CNT nanofluid gives the best performance.
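The thermal part of a model like this ultimately reduces to the standard instantaneous thermal-efficiency relation: useful heat gained by the fluid divided by solar power on the aperture. The sketch below shows that relation with illustrative numbers; the values are hypothetical and not taken from the paper.

```python
def collector_thermal_efficiency(m_dot, cp, t_in, t_out, aperture_area, irradiance):
    """Instantaneous thermal efficiency of a solar collector:
    useful heat gained by the working fluid divided by the solar
    power incident on the aperture. All inputs are illustrative."""
    q_useful = m_dot * cp * (t_out - t_in)   # W, sensible heat gain
    q_solar = aperture_area * irradiance     # W, power on the aperture
    return q_useful / q_solar

# Hypothetical operating point for a small trough with water as the fluid
eta = collector_thermal_efficiency(
    m_dot=0.05,          # kg/s mass flow rate
    cp=4186.0,           # J/(kg K), water
    t_in=300.0,          # K inlet temperature
    t_out=305.0,         # K outlet temperature
    aperture_area=2.5,   # m^2
    irradiance=800.0,    # W/m^2 beam irradiance
)
# q_useful = 0.05 * 4186 * 5 = 1046.5 W; q_solar = 2000 W; eta ≈ 0.523
```

A nanofluid raises this ratio mainly through a higher effective cp and heat transfer coefficient, which is the mechanism behind the improvements the paper reports.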
Advanced Modeling Techniques for Permit Modeling - Turning challenges into o... - Sergio A. Guerra
Advanced modeling techniques can be used in AERMOD to refine the inputs entered into the model to get more accurate results. This presentation covers:
-AERMOD’s Temporal Mismatch Limitation
-Building Downwash Limitations in BPIP/PRIME
-Advanced Modeling Techniques to Overcome these Limitations
Solutions include:
-Equivalent Building Dimensions (EBD)
-Emission Variability Processor (EMVAP)
-Updated Ambient Ratio Method (ARM2)
-Pairing AERMOD values with the 50th percentile background concentrations in cumulative analyses
Using Physical Modeling to Refine Downwash Inputs to AERMOD - Sergio A. Guerra
Achieving compliance in dispersion modeling can be quite challenging because of the tight National Ambient Air Quality Standards (NAAQS). In addition, AERMOD’s limitations can, in many cases, produce higher than normal concentrations due to the inherent assumptions and simplifications in its formulation. In the case of downwash, the theory used to estimate these effects was developed for a limited set of building types. However, these formulations are commonly used indiscriminately for all types of buildings. This presentation will cover how the basics of wind tunnel modeling can overcome some of these limitations and be used to mitigate downwash induced overpredictions to achieve compliance.
INNOVATIVE DISPERSION MODELING PRACTICES TO ACHIEVE A REASONABLE LEVEL OF CON... - Sergio A. Guerra
Presentation delivered at the Annual Air and Waste Management Association conference in Long Beach, California on June 26, 2014.
Innovative dispersion modeling techniques are presented, including ARM2, EMVAP and the 50th percentile background concentration. The case study involves peaking engines that are used 250 hours per year. These intermittent sources are required to undergo a modeling evaluation in many states, and current modeling techniques grossly overestimate the emissions from these sporadic sources.
PRIME2_consequence_analysis_and_model_evaluation - Sergio A. Guerra
The Plume Rise Model Enhancements (PRIME) building downwash algorithms (Schulman et al. 2000) in AERMOD are being updated to address some of the most critical limitations in the current theory. These enhancements will incorporate the latest advancements related to building downwash effects. The technical aspects of these enhancements are discussed in more detail in a companion paper titled “PRIME2: Development and Evaluation of Improved Building Downwash Algorithms for Solid and Streamlined Structures (MO13)”. The updates to the PRIME code include new equations to account for building wake effects that decay rapidly back to ambient levels above the top of the building; reduced wake effects for streamlined structures; and reduced wake effects for high approach roughness. A consequence analysis comparing the current AERMOD/PRIME model versus the new AERMOD/PRIME2 model was performed. Additionally, a field data evaluation was conducted with the Bowline Point database. The results from these analyses are discussed below.
Use of Probabilistic Statistical Techniques in AERMOD Modeling Evaluations - Sergio A. Guerra
The advent of the short term National Ambient Air Quality Standards (NAAQS) prompted modelers to reassess the common practices in dispersion modeling analyses. The probabilistic nature of the new short term standards also opens the door to alternative modeling techniques that are based on probability. One of these is the Monte Carlo technique that can be used to account for emission variability in permit modeling.
Currently, it is assumed that a given emission unit is in operation at its maximum capacity every hour of the year. This assumption may be appropriate for facilities that operate at full capacity most of the time. However, in most cases, emission units operate at variable loads that produce variable emissions. Thus, assuming constant maximum emissions is overly conservative for facilities such as power plants that are not in operation all the time and which exhibit high concentrations during very short periods of time.
Another element of conservatism in NAAQS demonstrations relates to combining predicted concentrations from the AMS/EPA Regulatory Model (AERMOD) with observed (monitored) background concentrations. Normally, some of the highest monitored observations are added to the AERMOD results yielding a very conservative combined concentration.
A case study is presented to evaluate the use of alternative probabilistic methods to complement the shortcomings of current dispersion modeling practices. This case study includes the use of the Monte Carlo technique and the use of a reasonable background concentration to combine with the AERMOD predicted concentrations. The use of these methods is in harmony with the probabilistic nature of the NAAQS and can help demonstrate compliance through dispersion modeling analyses, while still being protective of the NAAQS.
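The core of the Monte Carlo idea can be shown in a few lines. The sketch below is a toy illustration, not EMVAP or the case study itself: concentrations are modeled as an emission rate times an hourly dispersion factor, and the 99th-percentile design value under sampled variable emissions is compared with the constant-maximum assumption. All distributions and numbers are hypothetical.

```python
import numpy as np

# Toy Monte Carlo illustration of emission variability in permit modeling.
rng = np.random.default_rng(42)
hours = 8760
# Hourly dispersion factors (chi/Q), arbitrary units, assumed lognormal
dispersion = rng.lognormal(mean=0.0, sigma=1.0, size=hours)
max_emission = 100.0   # g/s, hypothetical permitted maximum

# Constant-maximum assumption: every hour at full load
conc_const = max_emission * dispersion

# Variable emissions: hourly load factors between 20% and 100% of capacity
load = rng.uniform(0.2, 1.0, size=hours)
conc_var = load * max_emission * dispersion

design_const = np.percentile(conc_const, 99)
design_var = np.percentile(conc_var, 99)
# Because every sampled emission rate is below the maximum, the
# variable-emission design concentration never exceeds the constant one.
```

This hour-by-hour ordering is what makes the constant-maximum assumption conservative: the Monte Carlo result quantifies how much of that conservatism is attributable to load variability alone.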
Energy and Exergy Analysis of Organic Rankine Cycle Using Alternative Working... - IOSR Journals
This document analyzes the energy and exergy efficiency of organic Rankine cycles (ORC) using different working fluids, including HFO-1234yf, HFC-134a, HFC-245fa, ethanol, and iso-pentane. The study models saturated and trilateral ORC cycles and compares the thermal and exergetic efficiency of the cycles using different working fluids. The results show that HFO-1234yf and HFC-134a have the highest thermal and exergetic efficiencies. HFO-1234yf is identified as a promising working fluid for low to medium temperature ORC applications due to its low global warming potential, zero ozone depletion, and low evaporation temperature.
NOVEL DATA ANALYSIS TECHNIQUE USED TO EVALUATE NOX AND CO2 CONTINUOUS EMISSIO... - Sergio A. Guerra
The current study presents a new data analysis technique developed while evaluating continuous emission data collected from a trash compactor. The evaluation involved tailpipe sampling with a portable emission monitoring system (PEMS) from a diesel-fueled 525-horsepower trash compactor. The sampling campaign ran the compactor with regular No. 2 diesel, B20 and ULSD fuels. The purpose was to determine the possible emission reductions in nitrogen oxides (NOx) and carbon dioxide (CO2) from the use of B20 and ULSD in an off-road vehicle. The results from the NOx analysis are discussed.
The initial data analysis identified two important issues. The first concern related to a bias in the calculated F values due to the very large number of samples (N). The large N influenced the probability values and indicated a false statistical significance for all factors tested. Additionally, the data observations were found to be highly autocorrelated. Thus, a time-interval data reduction technique was used to address these two statistical limitations to the robustness of the analyses. The result in each case was a subset of quasi-independent observations sampled at an interval of 800 seconds. The autocorrelation and false statistical significance issues were promptly resolved by using this technique. Since the issues of false statistical significance and autocorrelation are inherent in continuous data, the positive results obtained from the use of this technique can be far-reaching. This technique allowed for a valid use of the general linear model (GLM) with engine speed as the covariate factor to test day, fuel type and compactor factors. The technique is most relevant given the advancements in data collection capabilities that require data handling techniques to satisfy the statistical assumptions necessary for valid analyses.
Investigating The Performance of A Steam Power Plant - IJMERJOURNAL
ABSTRACT: The performance analysis of the Shobra El-Khima power plant in Cairo, Egypt is presented based on energy and exergy analysis to determine the causes and sites of high exergy destruction and losses, and the possibilities of improving the plant performance. The performance of the plant was evaluated at different loads (full, 75% and 50%). The calculated thermal efficiency based on the heat added to the steam was found to be 41.9%, 41.7% and 43.9%, while the exergetic efficiency of the power cycle was found to be 44.8%, 45.5% and 48.8% at full, 75% and 50% load respectively. The condenser was found to have the largest energy losses, with 54.3%, 55.1% and 56.3% of the energy added to the steam (at full, 75% and 50% load respectively) lost to the environment. The maximum exergy destruction was found to be in the turbine, where the percentage of exergy destruction was 42%, 59% and 46.1% at full, 75% and 50% load respectively. The pump was found to have the minimum exergy destruction. It was also found that the exergy destruction in the feed water heaters and the condenser together represents the maximum exergy destruction in the plant (about 52%). This means that the irreversibilities in the heat transfer devices have a significant role in the plant's exergy destruction, so improvement of the power plant will be limited by the heat transfer devices.
Performance prediction of a thermal system using Artificial Neural Networks - IJERD Editor
This document summarizes a study on using artificial neural networks (ANNs) to predict the performance of a condenser system and assess fouling over time. Experiments were conducted on an industrial condenser to collect temperature and flow rate data. An ANN model was developed and trained to predict the overall heat transfer coefficient of the clean condenser system based on the input parameters. The model was then used to calculate the fouling factor by comparing the predicted clean performance to the actual performance measured over time, indicating degradation due to fouling on the heat transfer surfaces. The developed system provides a method to monitor condenser performance and identify when cleaning is needed to improve efficiency.
Background Concentrations and the Need for a New System to Update AERMOD - Sergio A. Guerra
Presentation delivered at the EPA 11th Conference on Air Quality Modeling at RTP, NC.
Topics covered include background concentrations and the need for a new system to update AERMOD. An evaluation of the draft guidance proposals related to background concentrations is presented, along with an alternative approach to determining background concentrations for dispersion modeling evaluations. The presentation also reviews the lessons learned from Appendix W and proposes a new method to incorporate science into the model.
Presentation includes information related to gently sloping terrain, AERMINUTE, and EPA formula height.
Presented at the 27th Annual Conference on the Environment on November 13, 2012.
Pairing AERMOD concentrations with the 50th percentile monitored value - Sergio A. Guerra
This document proposes a new method for combining modeled concentrations from AERMOD with monitored background concentrations.
The current practice of adding the maximum or 98th percentile monitored concentration is overly conservative. Instead, the document suggests using the 50th percentile (median) monitored concentration.
Pairing the 98th percentile modeled concentration with the 50th percentile monitored concentration results in a combined 99th percentile concentration. This provides a more conservative estimate than the form of the short-term air quality standards, while avoiding the mismatch of temporal pairing in AERMOD and the influence of exceptional events.
The proposed method is presented as a simple, protective approach for demonstrating compliance with air quality standards when considering both modeled and monitored background concentrations.
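The percentile arithmetic behind the proposal can be checked empirically. The sketch below assumes independent lognormal distributions for the modeled and monitored values (illustrative only, not data from any actual analysis) and locates the proposed paired value, 98th-percentile modeled plus median background, within the distribution of hourly combined totals.

```python
import numpy as np

# Empirical check of the pairing idea under an independence assumption.
# Distributions are illustrative lognormals, not real modeling data.
rng = np.random.default_rng(7)
n = 100_000
modeled = rng.lognormal(mean=1.0, sigma=1.0, size=n)      # AERMOD-like values
background = rng.lognormal(mean=0.5, sigma=0.5, size=n)   # monitored values

# Proposed design value: 98th-percentile modeled + 50th-percentile monitored
combined_design = np.percentile(modeled, 98) + np.percentile(background, 50)

# Where does that sum fall in the distribution of hourly totals?
total = modeled + background
rank = (total < combined_design).mean() * 100   # percentile rank of the sum
# For these assumed distributions the paired value lands in the upper tail
# of the combined distribution, well above the 90th percentile.
```

The exact rank depends on the shapes of the two distributions, but the qualitative point survives: adding a median background to an already-high modeled percentile still yields a value that hourly totals rarely exceed.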
A study of Himreen reservoir water quality using in situ - Alexander Decker
This document summarizes a study of the water quality of Himreen Reservoir in Iraq using remote sensing techniques and in situ measurements. Satellite images from 1989 and 2002 were analyzed to study parameters like turbidity and suspended sediments. Water samples from 12 locations were also tested. The results show a correlation between higher reflectance in images and higher turbidity. The reservoir water was found to be of medium salinity and good quality. A comparison of the two years showed less shallow water and suspended sediments in 2002, likely due to changes in climate and river flow.
This research poster presents a study on scaling up photocatalytic reactors. The objectives were to determine the relationship between photocatalysis parameters like reactor volume, hydrogen peroxide dosage, catalyst loading and time. It also aimed to quantify the hydroxyl radicals required to treat methyl orange. A factorial design experiment was conducted varying the reactor volume from 200 to 1100mL, hydrogen peroxide dosage and catalyst loading. Results showed reactor volume was the most influential factor. A hydroxyl quantification study was then performed using 7-hydroxycoumarin to better understand parameter limits for scaling up from 200mL to 5000mL reactors.
Pairing AERMOD concentrations with the 50th percentile monitored value - Sergio A. Guerra
Presentation delivered to the Background Concentrations Workgroup for Air Dispersion Modeling, organized by the Minnesota Pollution Control Agency, on March 25, 2014. Three topics are covered: 1) screening monitoring data, 2) AERMOD’s time-space mismatch, and 3) the proposed 50th percentile background method.
New Guideline on Air Quality Models and the Electric Utility Industry - Sergio A. Guerra
The new EPA guideline on air quality models makes several changes, including adopting AERMOD version 16216r as the new default model. It establishes a two-tiered approach for modeling ozone and secondary PM2.5 formation, using existing empirical relationships (Tier 1) or chemical transport models (Tier 2). CALPUFF is no longer preferred for long-range transport modeling beyond 50 km. The guideline also allows the use of prognostic meteorological data in some cases. While the changes aim to promote consistency, the increased flexibility may lead to legal challenges and delays.
STOCHASTIC GENERATION OF ARTIFICIAL WEATHER DATA FOR SUBTROPICAL CLIMATES USI... - IAEME Publication
Liquid desiccant air conditioning systems provide an efficient and less energy-intensive alternative to conventional vapour compression systems due to their ability to use low-grade energy provided by a hybrid photovoltaic and thermal solar power module. Air conditioning systems are major energy consumers in buildings, especially in extreme climatic conditions, and are therefore primary targets in so far as energy efficiency is concerned. Building energy performance has traditionally been simulated using typical meteorological year (TMY) and test reference year (TRY) weather tools. In both cases, the value allocation is pegged on the least nonconformity from the long-range data of the past 29 years. The extreme low and high points are successively disregarded, which means that the actual prevailing hourly mean settings are not precisely represented. The multivariate Markov chain provides flexibility for use in circumstances where dynamic sequential and categorical weather data for a given region is required. This study presents a simplified higher order multivariate Markov chain analysis founded on a combination of a mixture-transition and a stochastic technique to project the solar radiation, air humidity, ambient temperatures and wind speeds, and their interrelationships, in sub-tropical climates, typically the coastal regions of South Africa. The generic simulation of weather parameters is produced from 20 years of actual weather conditions using a stochastic technique. The series of weather parameters developed are then implemented in the simulation of solar powered air dehumidification and regeneration processes. The outcomes indicate that the model is devoid of constraints and more accurate in the estimation of variable parameters, implying that a properly designed solar-powered liquid desiccant air conditioning system is capable of supplying the majority of the latent cooling load.
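The Markov-chain machinery the abstract describes can be reduced to a minimal sketch. This is a deliberate simplification of the paper's method: a first-order, single-variable chain over discretized solar-radiation classes, fitted to a synthetic "historical" sequence, whereas the paper uses a higher-order multivariate chain. All states and data here are hypothetical.

```python
import numpy as np

# First-order, single-variable Markov chain sketch (the paper's method is
# higher-order and multivariate; this only illustrates the core mechanics).
rng = np.random.default_rng(1)
states = ["low", "medium", "high"]   # hypothetical solar-radiation classes

# Synthetic hourly state history standing in for 20 years of observations
history = rng.choice(3, size=5000, p=[0.3, 0.4, 0.3])

# Estimate the transition matrix by counting consecutive-state pairs
counts = np.zeros((3, 3))
for a, b in zip(history[:-1], history[1:]):
    counts[a, b] += 1
transition = counts / counts.sum(axis=1, keepdims=True)

# Generate a synthetic day (24 hourly states) from the fitted chain
state = history[-1]
generated = []
for _ in range(24):
    state = rng.choice(3, p=transition[state])
    generated.append(states[state])
```

Extending this toward the paper's approach means enlarging the state to a tuple of weather variables (radiation, humidity, temperature, wind) and conditioning on several preceding hours rather than one.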
This document provides a review of the historical development and present state of ejector systems. It discusses several topics related to ejector systems including:
1. Developments in ejector models from early single-phase models to more recent two-phase and computational fluid dynamics (CFD) models.
2. Applications of ejector systems including refrigeration, vacuum systems, and other pumping situations.
3. Research aimed at enhancing the performance of ejector systems, such as combining ejectors with other refrigeration cycles to improve overall system performance.
The review categorizes and summarizes the various modeling approaches and research works related to ejector systems, providing useful guidelines on background, operating principles, and areas where
CFD Simulation of Solar Air Heater having Inclined Discrete Rib Roughness wit... - IRJET Journal
This document presents a computational fluid dynamics (CFD) simulation of a solar air heater duct with inclined discrete rib roughness and a staggered convex element. The study aims to develop a 3D CFD model to analyze heat transfer and fluid flow performance. Boundary conditions and material properties are defined. A mesh is generated and the RNG k-epsilon turbulence model is used. Results for velocity, temperature, and turbulent kinetic energy contours are presented along with charts showing variations in Nusselt number with Reynolds number and roughness pitch. Inclined discrete ribs with a staggered convex element are found to improve heat transfer in the solar air heater duct.
The document describes a study that used artificial neural networks (ANN) to predict chemical oxygen demand (COD) levels in wastewater from an anaerobic reactor. Four different backpropagation algorithms - Levenberg-Marquardt, gradient descent with adaptive learning rate, gradient descent with momentum, and resilient backpropagation - were used to train a three-layer feedforward ANN model. The model trained with the Levenberg-Marquardt algorithm performed best, with a mean squared error of 0.533 and a regression coefficient of 0.991, accurately predicting COD levels. The Levenberg-Marquardt algorithm thus provided the most accurate ANN model for predicting COD in effluent from the anaerobic reactor.
Implementation of the characteristic equation method in quasi-dynamic.pdf - Alvaro Ochoa
This document presents a mathematical model for simulating the quasi-dynamic behavior of an absorption chiller that uses a lithium nitrate/ammonia working fluid pair. The model is based on the characteristic equation method and solves mass and energy balance equations using the first law of thermodynamics. It models the major system components as single thermal components that exchange heat between external water circuits and internal refrigerant/solution circuits. Validation against experimental data showed good agreement, with relative errors below 5%. A sensitivity analysis then examined the chiller's dynamic response to variations in operating temperatures.
This document discusses the CFD (computational fluid dynamics) analysis of a solar flat plate collector. It begins by introducing solar collectors and their importance. It then describes the objectives of performing CFD simulation on a flat plate collector to better understand flow and temperature distribution. The document outlines the 3D model created in ANSYS Workbench and simulation performed in ANSYS FLUENT. It validates the CFD results by comparing the outlet air temperature to experimental results, showing good agreement. The overall goal is to analyze the collector's heat transfer capability using CFD and gain insights that are difficult to obtain through experimentation alone.
This document presents a computational fluid dynamics (CFD) simulation of a domestic direct type multi-shelf solar dryer. The study aims to validate the design of this dryer and demonstrate its temperature distribution and radiation heat flux. The simulation is performed using ANSYS-Fluent software. The results show that the air temperature inside the dryer cabinet increases significantly to around 326K due to natural air circulation. Radiation heat flux contours indicate that the dryer shelves receive sufficient flux to validate the dryer's design for food drying applications.
Numerical Simulation of Solar Greenhouse Dryer Using Computational Fluid Dyna... - RSIS International
Moisture removal from crops and other food items is one of the ways to preserve them for a longer duration. Previously, drying openly in the sun was used to reduce moisture content, but it had disadvantages such as contamination by dirt and other unwanted elements, as well as attack by rodents and birds. Drying in a covered, closed space with vents helps overcome these problems. Solar greenhouse dryers are closed conduits in which crops can be dried without negatively affecting their nutritional value. The factors affecting crop drying are solar radiation, climatic conditions, the material of which the dryer is made, and the shape of the dryer. Many experimental investigations have been done to improve the drying rate. With advances in computational power and numerical techniques, Computational Fluid Dynamics (CFD) has emerged as a powerful tool to optimize any design. In the present study, simulations have been performed on a greenhouse dryer with modifications to identify the temperature distribution with variation in wind velocity. Radiation levels at different locations in the dryer have also been determined. The model of the dryer was created in CREO 5.0 and the analysis was performed using ANSYS 14.0. The simulation was done for both forced and natural convection, and the obtained results were validated against experimental work from previous studies. A better drying rate was obtained for forced circulation than for natural convection, which is in agreement with the available experimental results.
This document summarizes a study that used computational fluid dynamics (CFD) to analyze heat transfer and flow characteristics in a solar air heater duct with ribbed surfaces. CFD simulations were performed for ducts with different rib configurations and Reynolds numbers. It was found that roughening the duct surface increased heat transfer but also increased friction losses. An optimal rib design was identified that provided maximum heat transfer enhancement with minimum pressure drop. Turbulence kinetic energy and intensity contours helped explain the increased turbulence near ribs that augmented heat transfer.
Implementing Workload Postponing In CloudSim to Maximize Renewable Energy Uti... - IJERA Editor
Green datacenters have become a major research area in academia and industry. One recent approach attracting attention is supplying datacenters with renewable sources of energy, leading to cleaner and more sustainable datacenters. However, this path poses new challenges. The main problem with existing renewable energy technologies is high variability: the available energy fluctuates strongly across different periods of a day, month, or year. In this paper, we address the issue of better managing datacenter workload in order to achieve higher utilization of available renewable energy. We implement an algorithm in the CloudSim simulator that decides whether to postpone or immediately run a job requesting datacenter resources, based on the job's deadline and the available solar energy. The aim of this algorithm is to make the workload's energy consumption over 24 hours match the solar energy availability over those 24 hours as closely as possible. Two typical days, one clear and one cloudy, are considered in the simulation. The results of our experiments show that, for the chosen workload model, postponing or urgently running jobs manages them better in terms of leveraging the available solar energy, yielding up to 17% higher utilization of daily solar energy.
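The postpone-or-run decision described above can be sketched in a few lines. This is a hypothetical illustration of such a rule, not the authors' CloudSim implementation; the function name, signature, and units are assumptions.

```python
def schedule(job_deadline_h, now_h, runtime_h, solar_kw, demand_kw):
    """Return 'run' or 'postpone' for one job request.

    job_deadline_h: latest hour of day by which the job must finish
    now_h:          current hour of day
    runtime_h:      hours the job needs to complete
    solar_kw:       solar power currently available
    demand_kw:      power the job would draw
    """
    slack = job_deadline_h - (now_h + runtime_h)   # hours of waiting room left
    if solar_kw >= demand_kw:                      # enough green energy right now
        return "run"
    if slack > 0:                                  # can still wait for the sun
        return "postpone"
    return "run"                                   # deadline forces brown energy
```

A real scheduler would also account for queued demand and forecasted irradiance, but the core trade-off between deadline slack and solar availability is the one shown here.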
This document summarizes a study that evaluates the potential for optimizing the time response of HVAC control systems in smart buildings. The study proposes an integrated fuzzy logic controller that combines a Mamdani-type fuzzy PI-PD controller with a Takagi-Sugeno-Kang type cluster adaptive training controller. The fuzzy membership functions of the PI-PD controller are tuned online using a simplex search algorithm to minimize time response, while the cluster adaptive training controller is tuned offline and online using gradient descent to enhance stability and disturbance rejection. Simulation results showed the proposed integrated controller improved output accuracy, significantly reduced response time, and increased robustness of indoor conditions control for MIMO HVAC systems.
Optimal artificial neural network configurations for hourly solar irradiation...IJECEIAES
Solar energy is widely used to generate clean electric energy. However, due to its intermittent nature, this resource is only inserted in a limited way into electrical networks. To increase the share of solar energy in the energy balance and allow better management of its production, the available solar potential must be known precisely at a fine time step, taking all its stochastic variations into account. In this paper, different artificial neural network (ANN) configurations are compared for estimating hourly solar irradiation, and the optimal numbers of neurons and layers are investigated. To this end, feedforward, cascade forward, and fitting neural networks have been applied. Different meteorological parameters were used to estimate the hourly global solar irradiation in the region of Laghouat, Algeria. The validation process shows that the cascade forward neural network with two inputs gives an R2 value of 97.24% and a normalized root mean square error (NRMSE) of 0.1678, compared with three inputs, which gives an R2 of 95.54% and an NRMSE of 0.2252. Comparison with existing methods in the literature shows the strength of the proposed models.
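The validation metrics quoted above, R2 and NRMSE, are standard and can be computed as below. This is a minimal sketch; normalizing the RMSE by the observed range is one common convention, and this may differ from the exact normalization the authors used.

```python
import math

def r2_score(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def nrmse(y_true, y_pred):
    """Root mean square error normalized by the observed range."""
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse) / (max(y_true) - min(y_true))
```

A perfect model gives R2 = 1 and NRMSE = 0; the paper's best configuration (R2 = 97.24%, NRMSE = 0.1678) sits close to that ideal.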
This document discusses the development of a hybrid building model that uses both physical and empirical methods to model energy and moisture transfer. The model divides the building into four subsystems: 1) conditioned indoor air space, 2) opaque exterior surfaces, 3) transparent fenestration surfaces, and 4) slab floors. Conservation laws are applied to each subsystem to model heat and mass transfer. The model uses an empirical residential load factor method to relate indoor and outdoor temperatures and calculate cooling/heating loads for each room. Simulations using the model under various ventilation scenarios can help reduce energy usage for cooling while maintaining thermal comfort.
This document describes an evaluation of natural draft wet cooling tower (NDWCT) performance using different packing fills in Iraq via artificial neural networks (ANN). Experimental tests were conducted on a NDWCT rig using honeycomb, splash, and trickle fills under varying conditions. An ANN with 10 hidden neurons was developed using the Levenberg-Marquardt backpropagation algorithm in MATLAB to predict the experimental results. The ANN predictions showed good agreement with the experiments based on correlation coefficients above 0.994, low root mean square errors below 6%, and mean relative errors below 8.4%.
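For reference, a network of the kind described (a single hidden layer, here assumed to use tanh activations and a linear output) reduces to the forward pass below. The weights shown are illustrative placeholders, not the trained MATLAB model.

```python
import math

def ann_forward(x, w_hidden, b_hidden, w_out, b_out):
    """One-hidden-layer network: tanh hidden units, linear output."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# Tiny illustrative network: 2 inputs -> 2 hidden units -> 1 output.
w_h = [[0.5, -0.5], [1.0, 1.0]]   # hidden-layer weights (placeholders)
b_h = [0.0, 0.0]                  # hidden-layer biases
w_o = [1.0, 1.0]                  # output weights
b_o = 0.1                         # output bias
```

The Levenberg-Marquardt algorithm mentioned in the study only changes how such weights are fitted; the prediction step itself is exactly this composition of weighted sums and activations.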
Radial basis network estimator of oxygen content in the flue gas of debutani...IJECEIAES
The energy efficiency of the debutanizer reboiler combustion can be monitored from the oxygen content of the reboiler's flue gas. The oxygen content can be measured in situ using an oxygen sensor. However, soot produced by the combustion process in the debutanizer reboiler can obstruct the sensor's function. The unavailability of in-situ redundant sensors is a significant problem when the sensor is damaged, because measurements must then be taken directly by workers using portable devices, and worker safety is a primary concern in such high-risk work areas. In this paper, we propose a software-based measurement, or soft sensor, to overcome these problems. A radial basis function network model lets the soft sensor adapt to data updates because of its advantage as a universal approximator. The estimation of oxygen content with the soft sensor has been carried out successfully: it achieves an estimated mean square error of 0.216% with a standard deviation of 0.0242%. A stochastic gradient descent algorithm with momentum acceleration and dimension reduction using principal component analysis further improve the soft sensor's performance.
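A Gaussian radial basis function estimator of the sort described has a simple closed form, and momentum-accelerated gradient descent is a one-line update on top of it. The sketch below is a hypothetical illustration, not the deployed soft sensor; all names and parameter values are assumptions.

```python
import math

def rbf_predict(x, centers, widths, weights, bias):
    """Gaussian RBF network: y = bias + sum_j w_j * exp(-|x - c_j|^2 / (2 s_j^2))."""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-dist2 / (2.0 * s ** 2))
    return y

def sgd_momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    """One stochastic-gradient update with momentum on the output weights."""
    v_new = [beta * v - lr * g for v, g in zip(velocity, grad)]
    w_new = [wi + vi for wi, vi in zip(w, v_new)]
    return w_new, v_new
```

In the paper's pipeline, PCA would first reduce the input dimension, and the reduced vector would be passed as `x`; each basis function responds strongly only near its center, which is what makes the model easy to update as new data arrives.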
This document summarizes a study that used artificial neural networks to model and identify dynamic indoor thermal comfort based on the PMV index. The study developed equations to model thermal comfort based on factors like air temperature, humidity, clothing insulation, and metabolism. An artificial neural network was then trained using these equations to approximate the nonlinear relationship between inputs like temperature and outputs like predicted mean vote. Simulation results showed the neural network model could accurately track desired thermal sensations and matched existing fuzzy logic models of human thermal comfort. The neural network approach provides a practical method for real-time identification of thermal comfort that is better than traditional manual calculations.
Development and Performance of Solar based Air WasherIRJET Journal
This document summarizes the development and performance of a solar-powered air washer. It aims to address drawbacks of existing air washers by making it fully solar-powered using a blower, pumps, heating/cooling coils, and reservoir. The system uses water and air to remove dust and particles for cleaning purposes while maintaining the ideal temperature. It is more cost-efficient and compact than traditional air washers. The document reviews several existing studies on related technologies and provides details on the components involved in the solar-powered air washer system, including heating/cooling coils, nozzles, pumps, blowers, reservoirs, filtering plates, and a vapor compression cycle.
This document summarizes a research article that proposes a novel fuzzy model-based multivariable predictive control (FMBMPC) approach for heating, ventilation, and air conditioning (HVAC) systems. The control law is derived analytically in state-space without an optimization algorithm. The FMBMPC approach is tested on a real-world HVAC system. Results show the FMBMPC approach performs better than a classical PI controller due to its ability to handle the HVAC system's nonlinear dynamics and interactions. The FMBMPC approach also exhibits better reference tracking across a wider operating range and is more energy efficient.
Similar to PERFORMANCE PREDICTION OF AN ADIABATIC SOLAR LIQUID DESICCANT REGENERATOR USING ARTIFICIAL NEURAL NETWORK (20)
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...IAEME Publication
The white layer thickness (WLT) and surface roughness formed in wire electric discharge turning (WEDT) of a tungsten carbide composite have been modeled through response surface methodology (RSM). A standard Taguchi design of experiments involving five input variables at three levels was employed to establish a mathematical model between the input parameters and the responses. The percentage of cobalt content, spindle speed, pulse on-time, wire feed, and pulse off-time were varied during the experimental tests based on Taguchi's L27 (3^13) orthogonal array. Analysis of variance (ANOVA) revealed that the mathematical models obtained can adequately describe performance within the ranges of the factors considered. There was good agreement between the experimental and predicted values in this study.
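Response surface methodology ultimately fits a polynomial to the measured responses by least squares. The one-factor quadratic sketch below (with illustrative data, not the WEDT measurements) shows the normal-equations mechanics that generalize to the five-factor model in the study.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the normal equations."""
    rows = [[1.0, x, x * x] for x in xs]          # design matrix [1, x, x^2]
    # Normal equations: (X^T X) beta = X^T y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    m = [row + [b] for row, b in zip(xtx, xty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):   # back substitution
        beta[i] = (m[i][3] - sum(m[i][j] * beta[j] for j in range(i + 1, 3))) / m[i][i]
    return beta  # [a, b, c]
```

With several factors, the design matrix simply gains columns for each linear, quadratic, and interaction term, and ANOVA then tests which of those terms contribute significantly.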
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSIAEME Publication
The study explores the reasons for transgender persons to become entrepreneurs. In this study, being a transgender entrepreneur was taken as the independent variable and the reasons for becoming one as the dependent variable. Data were collected through a structured questionnaire containing a five-point Likert scale. The study examined data from 30 transgender entrepreneurs in the Salem Municipal Corporation of Tamil Nadu State, India, selected by simple random sampling. The Garrett ranking technique (percentile position, mean scores) was used to identify the top 13 stimulus factors for establishing a trans-entrepreneurial venture. The economic advancement of a nation is governed by the outcome of resolute entrepreneurial activity, and the conception of entrepreneurship has stretched and materialized to the socially marginalized, uncharted sections of the transgender community. Presently, transgenders have smashed their stereotypes and are making headlines of achievement in various fields of Indian society. The trans community is gradually being observed in a new light and has been trying to achieve prospective growth in entrepreneurship. The findings of the research reveal that optimistic changes are taking place toward an affirmative societal outlook on transgender entrepreneurial ventureship, and they encourage other transgenders to renovate their traditional living. The paper also highlights that legislators and supervisory bodies should endorse impartial canons and reforms in the Tamil Nadu Transgender Welfare Board Association.
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSIAEME Publication
Gender difference has for ages been a debatable theme, whether caused by nature, evolution, or environment. The birth of a transgender child is dreadful not only for the child but also for the parents. The pain of living in the wrong physique and being treated as a second-class, victimized citizen is outrageous and harboured with vicious, baseless negative scruples. For so long, social exclusion has perpetuated inequality and deprivation, with transgenders experiencing ingrained malign stigma and being besieged victims of crime or violence across their life spans. They are pushed into a murky way of life with a source of eternal disgust, bereft sexual potency, and perennial fear. Although they are highly visible, very little is known about them. The public needs to comprehend the arrogance heaped on these insensitive souls and assist in integrating them into the mainstream by offering equal opportunity, treating them with humanity, and respecting their dignity. Entrepreneurship in the current age is endorsing the gender-fairness movement. Unstable careers and economic inadequacy have inclined gender-variant people, transgenders, to become entrepreneurs. These tiny budding entrepreneurs have brought economic transition by means of employment, freedom from the clutches of stereotyped jobs, a raised standard of living, and a measure of financial empowerment. Despite all these inhibitions, they were able to find a platform for skill-set development that ignited them to enter the entrepreneurial domain. This paper epitomizes the skill sets of trans-entrepreneurs of the Thoothukudi Municipal Corporation of Tamil Nadu State and is a groundbreaking effort to explore the various skills incorporated and their impact on entrepreneurship.
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSIAEME Publication
The banking and financial services industries are experiencing increased technology penetration. Among them, the banking industry has made technological advancements to better serve the general populace, with the economy focused on transforming the banking system into a cashless, paperless, and faceless one. The researcher evaluates users' intention to use mobile banking applications and examines the variables affecting users' behavioural intention when selecting specific applications for financial transactions. A well-structured questionnaire and a descriptive study methodology were employed to gather primary data from respondents using the snowball sampling technique. The study includes variables such as performance expectations, effort expectations, social influence, enabling circumstances, and perceived risk, each of which has a major impact on how users adopt mobile banking applications. The outcome will assist service providers in comprehending users' experience with mobile banking applications.
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSIAEME Publication
Technology upgrades in the banking sector have moved the economy toward online payment using mobile applications. This system enables convenient connectivity between banks, merchants, and users. Various applications are used for online transactions, such as Google Pay, Paytm, FreeCharge, MobiKwik, Oxigen, and PhonePe, as well as mobile banking applications. The study aimed at evaluating users' predilection in adopting digital transactions. The study is descriptive in nature, and the researcher used random sampling techniques to collect the data. The findings reveal that the applications differ in the quality of service rendered by GPay and PhonePe. The researcher suggests that the PhonePe application should focus on a more user-friendly interface, and that GPay should motivate users to appreciate the request-money feature and the modes of payment in the application.
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOIAEME Publication
The prototype of a voice-based ATM for the visually impaired using Arduino is intended to help people who are blind. It uses RFID cards that contain the user's fingerprint encrypted on them and interacts with the user through voice commands. The ATM operates when a sensor detects the presence of one person in the cabin. After scanning the RFID card, it asks the user to select a mode, normal or blind, through voice input. If blind mode is selected, balance checks or cash withdrawals can be performed through voice input; the normal-mode procedure is the same as in existing ATMs.
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IAEME Publication
There is increasing acceptance of emotional intelligence as a major factor in personality assessment and effective human resource management. Emotional intelligence, as the ability to build capacity, empathize, co-operate, motivate, and develop others, cannot be divorced from either effective performance or human resource management systems. The human person is crucial in defining organizational leadership and fortunes in terms of challenges and opportunities and in navigating both multinational and bilateral relationships. The growing complexity of the business world requires a great deal of self-confidence, integrity, communication, and conflict and diversity management to keep the global enterprise on the path of productivity and sustainability. Using an exploratory research design with 255 participants, the results of this study indicate a strong positive correlation between emotional intelligence and effective human resource management. The paper offers suggestions for further studies on emotional intelligence and human capital development and recommends conflict management as an integral part of effective human resource management.
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYIAEME Publication
Our life journey, in general, is closely defined by the way we understand the meaning of why we coexist and deal with its challenges. As we develop the "inspiration economy", we could say that nearly all of the challenges we have faced are opportunities that help us to discover the rest of our journey. In this note paper, we explore how being faced with the opportunity of being a close carer for an aging parent with dementia brought intangible discoveries that changed our insight of the meaning of the rest of our life journey.
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...IAEME Publication
The main objective of this study is to analyze the impact of aspects of Organizational Culture on the Effectiveness of the Performance Management System (PMS) in the Health Care Organization at Thanjavur. Organizational Culture and PMS play a crucial role in present-day organizations in achieving their objectives. PMS needs employees’ cooperation to achieve its intended objectives. Employees' cooperation depends upon the organization’s culture. The present study uses exploratory research to examine the relationship between the Organization's culture and the Effectiveness of the Performance Management System. The study uses a Structured Questionnaire to collect the primary data. For this study, Thirty-six non-clinical employees were selected from twelve randomly selected Health Care organizations at Thanjavur. Thirty-two fully completed questionnaires were received.
Living in the 21st century reminds all of us of the necessity of the police and their administration. The more we enter into modern society and culture, the more we require the services of the so-called 'khaki-worthy' men, the police personnel. Whether we speak of the Indian police or another nation's police, they have the same recognition as they have in India. But, as already mentioned, their services and requirements changed after incidents like that of 26th November 2008, where they sacrificed themselves without hesitation and without regard for their own lives or for their family members and wards. In other words, they are like heroes and mentors who can guide us out of the darkness of fear, militancy, corruption, and the other dark sides of life. Now the question arises: if Gandhi were alive today, what would be his reaction to the police and its functioning? Would he have something different in mind now from what he had before Partition, or would he start some Satyagraha aimed at improving the functioning of the police administration? These questions, or rather nightmares, can come to anyone's mind when so much confusion prevails in our minds, when there is so much corruption in society, and when the police's working is under question because of one case or another throughout India. It is a matter of great concern that we have to rethink our administration and our practical approach, because police personnel are also like us; they are part and parcel of our society and among us, so why do we all point fingers at them?
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...IAEME Publication
The goal of this study was to see how talent management affected employee retention in selected IT organizations in Chennai. The fundamental issue was the difficulty of attracting, hiring, and retaining talented personnel who perform well, and the gap between supply and demand in acquiring talent and retaining it within firms. The study's main goals were to determine the impact of talent management on employee retention in IT companies in Chennai, to investigate talent management strategies that IT companies could use to improve talent acquisition, performance management, and career planning, and to formulate retention strategies the IT firms could use. The respondents were given a structured close-ended questionnaire with a 5-point Likert scale as part of the study's quantitative research design. The target population consisted of 289 IT professionals. The questionnaires were distributed and collected by the researcher directly, and the responses were analysed using the Statistical Package for the Social Sciences (SPSS). Hypotheses formulated for the various areas of the study were tested using a variety of statistical tests. The key findings suggested that talent management had an impact on employee retention and that there is a clear link between the implementation of talent management and retention measures. Management should provide enough training and development for employees, clarify job responsibilities, provide adequate remuneration packages, and recognise employees for exceptional performance.
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...IAEME Publication
Globally, organizations spend millions of dollars employing skilled Information Technology (IT) professionals, and it is costly to replace employees possessing the technical skills and competencies that interconnect business processes. Globalization and technological innovation have forced organizations to alter their employment tactics as they downsize to remain lean, outsource to concentrate on core competencies, and restructure or reallocate personnel for efficiency. As other jobs, organizations, or professions become comparatively more attractive in a shifting employment landscape, these alterations trigger both involuntary and voluntary turnover. Employees' views of their jobs have also been affected by the COVID-19 pandemic and the employee-driven labour market, so effective strategies are necessary to tackle employee withdrawal rates. This study analyzed the rise in attrition rate by associating Emotional Intelligence (EI) with Talent Management (TM) in the IT industry. Of the 350 participants to whom questionnaires were distributed, 303 responses were collected from employees of IT organizations located in Bangalore, India, using a simple random sampling methodology. Hypotheses were generated and tested, and the effect of EI and TM, along with a regression analysis between TM and EI, was analyzed. The outcomes indicated that effective EI and TM elevate employee and Organizational Performance (OP).
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...IAEME Publication
By implementing a talent management strategy, organizations can retain their skilled professionals while also improving their overall performance. Talent management is the process of properly deploying the right individuals, preparing them for future top positions, reviewing and managing their performance, and keeping them from leaving the organization. It is employee performance that determines the success of every organization: a firm quickly gains an advantage over its rivals if its employees have distinctive skills that competitors cannot duplicate. Thus, firms are focused on creating successful talent management practices and processes to manage their unique human resources. Firms also endeavour to keep their top and key staff, because if those staff leave, the whole store of knowledge leaves the firm's hands. The study's objective was to determine the impact of talent management on organizational performance among selected IT organizations in Chennai. The study finds that talent management has a limited effect on performance; if talent is properly managed and the strategy implemented well, organizations can make the most of their retained assets to support development and productivity, both monetarily and non-monetarily.
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...IAEME Publication
The Banking Regulation Act of India, 1949 defines banking as the "acceptance of deposits for the purpose of lending or investment from the public, repayable on demand or otherwise and withdrawable by cheque, draft, order or otherwise". The major participants in the Indian financial system are commercial banks, financial institutions encompassing term-lending institutions, investment institutions, specialized financial institutions and state-level development banks, non-banking financial companies (NBFCs), and other market intermediaries such as stock brokers and money lenders; certain variants of NBFCs are among the oldest market participants. The asset quality of banks is one of the most important indicators of their financial health. The Indian banking sector has been facing severe problems of increasing Non-Performing Assets (NPAs). NPA growth directly and indirectly affects the quality of assets and the profitability of banks; it also reflects the efficiency of banks' credit risk management and their recovery effectiveness. NPAs generate no income while the bank must still make provisions for such assets, which is why they are a double-edged weapon. This paper examines the quality of different types of bank loans, Housing, Agriculture, and MSME, of selected public and private sector banks in the state of Haryana. The study highlights problems associated with the role of commercial banks in financing Small and Medium Enterprises (SMEs). The overall objective of the research was to assess the effect of the financing provisions existing for the setting up and operation of MSMEs in the country and to generate recommendations for more robust financing mechanisms for the successful operation of MSMEs, in turn understanding the impact of MSME loans on financial institutions due to NPAs.
Much research has been conducted on Non-Performing Asset (NPA) management concerning particular banks, comparative studies of public and private banks, and so on. In this paper the researcher considers aggregate data of selected public and private sector banks and compares the NPAs of Housing, Agriculture, and MSME loans in Haryana. The tools used in the study are averages, variance, and the ANOVA test. The findings reveal that NPAs are a common problem for both public and private sector banks and are associated with all types of loans, whether housing loans, agriculture loans, or loans to SMEs. NPAs of both public and private sector banks show an increasing trend: in 2010-11 the GNPA of public and private sector banks was at the same level, about 2%, but it has since increased many-fold, and at present the GNPA in some cases exceeds 15%. This shows the dark side of the Indian banking sector.
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
The experiments in this study found that BaSO4 changes the mechanical properties of Nylon 6. Nylon-6/BaSO4 composites were prepared at varying weight ratios, and their hardness and wear behaviour were investigated. Experiments were designed using a Taguchi L9 array. The hardness of the Nylon-6/BaSO4 composites was measured with a Rockwell hardness testing apparatus, and wear behaviour was measured on a pin-on-disc wear monitor by varying reinforcement content, sliding speed, and sliding distance; the microstructure of the crack surfaces was observed by SEM. The study finds that increasing the BaSO4 content up to 16% contributes significantly to ultimate strength, and that sliding speed contributes 72.45% to the wear rate.
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
The majority of the population in India lives in villages, and the village is the backbone of the country. Village or rural industries play an important role in the national economy, particularly in rural development. Developing the rural economy is one of the key indicators of a country's success: whether it is the need to look after the welfare of farmers or to invest in rural infrastructure, governments have to ensure that rural development is not compromised. The economic development of our country largely depends on the progress of rural areas and the standard of living of the rural masses. Rural entrepreneurship is based on stimulating local entrepreneurial talent and the subsequent growth of indigenous enterprises. It recognizes opportunity in rural areas and accelerates a unique blend of resources either inside or outside agriculture, bringing economic value to the rural sector by creating new methods of production, new markets, and new products, and generating employment opportunities, thereby ensuring continuous rural development. Social entrepreneurship has the direct and primary objective of serving society along with earning profits. It differs from purely economic entrepreneurship in that its basic objective is not to earn profits but to provide innovative solutions to societal needs that most entrepreneurs do not address, since their sole objective is profit-making. Social entrepreneurs therefore have huge growth potential, particularly in developing countries like India, where there are huge societal disparities in the financial position of the population.
Still, 22 percent of the Indian population is below the poverty line, and there is disparity between the rural and urban populations in terms of families living below the poverty line (BPL): 25.7 percent of the rural population and 13.7 percent of the urban population is under the BPL, which clearly shows the concentration of poor people in rural areas. The need to develop social entrepreneurship in agriculture is dictated by a large number of social problems, including low living standards, unemployment, and social tension; these factors led to the emergence of the practice of social entrepreneurship. The research problem lies in disclosing the importance of the role of social entrepreneurship in the rural development of India. The paper examines the tendencies of social entrepreneurship in India and presents successful examples of such businesses, in order to provide recommendations on how to improve the situation in rural areas in terms of social entrepreneurship development. The Indian government has made some steps towards the development of social enterprises, social entrepreneurship, and social innovation, but a lot remains to be improved.
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...IAEME Publication
Distribution system is a critical link between the electric power distributor and the consumers. Most of the distribution networks commonly used by the electric utility is the radial distribution network. However in this type of network, it has technical issues such as enormous power losses which affect the quality of the supply. Nowadays, the introduction of Distributed Generation (DG) units in the system help improve and support the voltage profile of the network as well as the performance of the system components through power loss mitigation. In this study network reconfiguration was done using two meta-heuristic algorithms Particle Swarm Optimization and Gravitational Search Algorithm (PSO-GSA) to enhance power quality and voltage profile in the system when simultaneously applied with the DG units. Backward/Forward Sweep Method was used in the load flow analysis and simulated using the MATLAB program. Five cases were considered in the Reconfiguration based on the contribution of DG units. The proposed method was tested using IEEE 33 bus system. Based on the results, there was a voltage profile improvement in the system from 0.9038 p.u. to 0.9594 p.u.. The integration of DG in the network also reduced power losses from 210.98 kW to 69.3963 kW. Simulated results are drawn to show the performance of each case.
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...IAEME Publication
Manufacturing industries have witnessed an outburst in productivity. For productivity improvement manufacturing industries are taking various initiatives by using lean tools and techniques. However, in different manufacturing industries, frugal approach is applied in product design and services as a tool for improvement. Frugal approach contributed to prove less is more and seems indirectly contributing to improve productivity. Hence, there is need to understand status of frugal approach application in manufacturing industries. All manufacturing industries are trying hard and putting continuous efforts for competitive existence. For productivity improvements, manufacturing industries are coming up with different effective and efficient solutions in manufacturing processes and operations. To overcome current challenges, manufacturing industries have started using frugal approach in product design and services. For this study, methodology adopted with both primary and secondary sources of data. For primary source interview and observation technique is used and for secondary source review has done based on available literatures in website, printed magazines, manual etc. An attempt has made for understanding application of frugal approach with the study of manufacturing industry project. Manufacturing industry selected for this project study is Mahindra and Mahindra Ltd. This paper will help researcher to find the connections between the two concepts productivity improvement and frugal approach. This paper will help to understand significance of frugal approach for productivity improvement in manufacturing industry. This will also help to understand current scenario of frugal approach in manufacturing industry. In manufacturing industries various process are involved to deliver the final product. In the process of converting input in to output through manufacturing process productivity plays very critical role. 
Hence this study will help to evolve status of frugal approach in productivity improvement programme. The notion of frugal can be viewed as an approach towards productivity improvement in manufacturing industries.
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTIAEME Publication
In this paper, we investigated a queuing model of fuzzy environment-based a multiple channel queuing model (M/M/C) ( /FCFS) and study its performance under realistic conditions. It applies a nonagonal fuzzy number to analyse the relevant performance of a multiple channel queuing model (M/M/C) ( /FCFS). Based on the sub interval average ranking method for nonagonal fuzzy number, we convert fuzzy number to crisp one. Numerical results reveal that the efficiency of this method. Intuitively, the fuzzy environment adapts well to a multiple channel queuing models (M/M/C) ( /FCFS) are very well.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Performance Prediction of an Adiabatic Solar Liquid Desiccant Regenerator using Artificial Neural Network
http://www.iaeme.com/IJMET/index.asp 497 editor@iaeme.com
Keywords: Adiabatic regenerator, Liquid desiccant, Solar, Artificial neural network.
Cite this Article: Andrew Y. A. Oyieke and Freddie L. Inambao, Performance
Prediction of an Adiabatic Solar Liquid Desiccant Regenerator using Artificial Neural
Network, International Journal of Mechanical Engineering and Technology, 10(3),
2019, pp. 496-511.
http://www.iaeme.com/IJMET/issues.asp?JType=IJMET&VType=10&IType=3
1. INTRODUCTION
The application of desiccant materials in air conditioning systems has increasingly become
popular in built environments. Liquid desiccants such as lithium bromide, lithium chloride,
and calcium chloride among others have found application in most preferred systems due to
flexibility in operation, ability to neutralize both organic and inorganic contaminants, and
ability to work in the low regeneration temperatures provided by solar energy. The
regenerator is a vessel in which a heated dilute solution comes into contact with air in a
packed environment which enables heat and mass transfer phenomena to occur. This process drives evaporation of water from the desiccant into the atmospheric air and restores the desiccant to a strong solution near its initial concentration.
Evidence from literature shows that there has been a considerable amount of theoretical
modelling and practical experimental tests performed on these units with little reference to use
of artificial intelligence techniques. Even though considerable success has been recorded with some well-defined and formulated numerical and analytical models, they still do not offer the degree of flexibility required for performance outside the domain for which they were formulated. Drawing inspiration from biological neural networks, artificial neural networks (ANN) provide an excellent alternative: numerous interlinked neurons are stimulated to solve complex computational problems across a whole range of scenarios such as prediction, process optimization and control, associative memory, and pattern recognition. Other favourable benefits of ANN over other methods include distributed representation, learning and generalization capability, adaptability, fault tolerance, and inherent statistical processing with comparatively low energy consumption [1].
ANN research was pioneered by [2] in the 1940s, who suggested a binary threshold element computational model for an artificial neuron, with carefully selected weights in an organized array of neurons to execute widely accepted computations. Rosenblatt [3] introduced the perceptron convergence theorem in neurodynamics, which was later critically
analysed by [4] for shortcomings. Hopfield [5] further introduced the energy approach which
demonstrated innovative ANN computational capabilities. The perceptron multi-layered
algorithm-based back-propagation learning was first initiated by [6] and re-invented by [7]
through parallel distributed processing. Based on their ideas, modern ANN research has
metamorphosed into a state-of-the-art technology.
The application of ANN technology in heating, ventilation and air conditioning (HVAC)
systems is a fairly recent development involving the use of assorted parameters to study the
behaviour of liquid desiccant air conditioning systems (LDACS) at the regeneration stage.
Gandhidasan [8] predicted the vapour pressures of different aqueous desiccant solutions
(CaCl, LiCl and LiBr) applied in cooling using ANN. Later on, they developed and applied an
ANN model to analyze the connection between input and output parameters in an LiCl based
randomly packed liquid desiccant dehumidification system [9]. Mohammed et al. [10]
implemented and validated an ANN to predict the output of a triethylene glycol (TEG) based
liquid desiccant dehumidifier subjected to several input constraints. Still on the same subject,
Mohammed et al. [11] and [12] ran performance tests on a solar-hybrid air conditioning
system with LiCl desiccant solution in a packed regenerator using various ANN structures.
Using different input data, the outputs were obtained and compared with experimental data in
terms of moisture removal rate (MRR) and effectiveness. However, due to lack of extensive
experimental data for further training of the ANN, the accuracy of their model was not
guaranteed.
A summary of the respective relevant ANN literature reviewed is presented in Table 1 in
terms of process, type of liquid desiccant used, input and output parameters, applied ANN
structure and symbol. This classification forms the basis of distinguishing the relevance of the
present study as the parameters are listed in the last row for comparison. The present study
applies a supervised paradigm based on an error-correction learning rule to develop a multi-
layered perceptron and back-propagation algorithm for use in prediction of performance of
LDACS powered by solar energy.
Table 1 ANN modelling applications in air regeneration

Reference | Process | Liquid desiccant | Input parameters | Output parameters | Network structure | Structure symbol
[13] | Regeneration | CaCl2 | Air and desiccant temperature; air and desiccant flow rates; air humidity; desiccant concentration | Air and desiccant temperature; air and desiccant flow rates; air humidity ratio; desiccant concentration | Multiple hidden layer | 6-2-6
[12] | Regeneration | LiCl | Air and desiccant inlet humidity ratio; air and desiccant inlet temperature; air and desiccant flow rates | Temperature; humidity ratio; moisture removal rate (MRR); effectiveness | Single and multilayer | 5-5-5-1, 5-11-1
Current study | Regeneration | LiBr | Air inlet humidity ratio; air inlet temperature; air flow rates; desiccant concentration; desiccant inlet temperature; desiccant flow rates | Temperature; humidity ratio; moisture removal rate; effectiveness | Multilayer | 6-4-4-1, 6-14-1
2. REGENERATOR THEORY
The basic theoretical assessment of the functional response of the regenerator in an air conditioning system is essential before engaging in complex evaluation techniques. The functional capability of these vessels has most often been analysed using MRR and effectiveness. MRR is the amount of water transferred to and from the desiccant solution per given time in the dehumidifier and regenerator respectively.
From this definition, MRR is the product of inlet mass flow rate of dry air and the difference
in humidity ratios between inlet and outlet of the vessel. This is mathematically formulated in
terms of the air-side or liquid-side as follows:

$MRR = \dot{m}_a(\omega_o - \omega_i) = \dot{m}_d\left(1 - \frac{X_i}{X_o}\right)$  (1)

Where $\dot{m}_a$ and $\dot{m}_d$ are the inlet air and desiccant flow rates respectively; $\omega_i$ and $\omega_o$ are the inlet and outlet humidity ratios in kg/kgdryair respectively; while $X_i$ and $X_o$ are the desiccant concentrations at inlet and outlet conditions respectively. Effectiveness on the other hand is
4. Performance Prediction of an Adiabatic Solar Liquid Desiccant Regenerator using Artificial
Neural Network
http://www.iaeme.com/IJMET/index.asp 499 editor@iaeme.com
the ratio of real humidity change in air to the highest possible difference in humidity ratio, formulated as:

$\eta = \frac{\omega_o - \omega_i}{\omega_e - \omega_i}$  (2)

Where $\omega_e$ is the humidity ratio of air at equilibrium conditions expressed as:

$\omega_e = 0.622\,\frac{p_{v,o}}{P - p_{v,o}}$  (3)
Where P is the aggregate pressure in mmHg and $p_{v,o}$ is the outlet vapour pressure, given as a function of the desiccant outlet state:

$p_{v,o} = f(T_{d,o}, X_o)$  (4)
The rate at which water vapour evaporates in the regenerator is governed by the heat transfer occurring between the air and desiccant solution. An energy balance across the vessel gives:

$\Gamma\,\lambda = \dot{m}_d C_d (T_{d,i} - T_{d,o}) - \dot{m}_a C_a (T_{a,o} - T_{a,i})$  (5)

Where $\Gamma$ is the moisture condensation rate in kg/m-s; $\lambda$ is the latent heat of condensation in kJ/kg; $\dot{m}$ is the mass flux in kg/m-s; C is the specific heat capacity in kJ/kgK and T is the temperature in K. The subscripts i and o denote the inlet and outlet conditions respectively, while a and d stand for air and desiccant solution respectively. The desiccant concentration is one of the most essential parameters of consideration because it determines the rate and amount of water released to or absorbed from the air. Therefore, at the outlet state, the concentration can be found as follows:

$X_o = \frac{\dot{m}_d X_i}{\dot{m}_d - MRR}$  (6)
It should however be noted that the desiccant concentration at dehumidifier outlet was
considered to be the inlet concentration for the regenerator.
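The working formulations above can be sketched in Python. This is a minimal illustration of Equations (1) to (3) only; the function and variable names (m_a, w_in, w_out, p_total, p_v_out) are illustrative rather than the paper's nomenclature:

```python
def moisture_removal_rate(m_a, w_in, w_out):
    """Air-side MRR, Eq. (1): dry-air mass flow times humidity-ratio change."""
    return m_a * (w_out - w_in)

def equilibrium_humidity_ratio(p_total, p_v_out):
    """Eq. (3): humidity ratio of air in equilibrium with the desiccant;
    both pressures in the same units (e.g. mmHg)."""
    return 0.622 * p_v_out / (p_total - p_v_out)

def effectiveness(w_in, w_out, w_eq):
    """Eq. (2): actual humidity-ratio change over the maximum possible change."""
    return (w_out - w_in) / (w_eq - w_in)

# Example: air entering at 0.010 kg/kg leaves at 0.016 kg/kg, 0.5 kg/s dry air
mrr = moisture_removal_rate(0.5, 0.010, 0.016)   # about 0.003 kg/s (3 g/s)
eff = effectiveness(0.010, 0.016, 0.020)         # about 0.6
```

In the regenerator the outlet humidity ratio exceeds the inlet value, so the air-side MRR comes out positive.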
3. ARTIFICIAL NEURAL NETWORK MODEL
According to [9], the artificial neural network (ANN), as an upcoming machine learning
technique, applies the analogy of axon-like interconnected neurons for performance prediction
and estimations. These tasks are achieved by combining several neurons in a network capable
of being trained using examples and input data sets to produce desired results.
The interconnection provides a communication channel between successive neurons.
Depending on the complexity of the network, the main parts of a typical ANN include an input layer, an output layer and one or more hidden layers [11]. A feed-forward neural network generally consists of L layers, of which L-1 are hidden, ignoring the front layer of input nodes.
A classical neuron is characterized by sets of interconnecting links with defined weights, a
summing joint where all weighted inputs combine, and a stimulation function for controlling the magnitude of the outputs. The learning process iteratively updates the weights of neuron
connections to effectively accomplish a specific task. The capability of the ANN technique to
consistently learn from examples gives it an edge over other methods. Moreover, ANN
follows basic rules such as input-output interactions from an assortment of typical examples
contrary to traditional procedures decided by human specialists.
A reinforcement technique of supervised learning based on the error-correction principle
is best suited for application in LDACS due to its capability to formulate a system training
model and provide predictable output for each input configuration. The learning process
encompasses creating a learning paradigm, guides and steps for updating the network weights.
Hence, the ANN can predict the desired results with high precision. Based on the [3]
perceptron convergence theorem, the learning begins immediately an error occurs, thus the
perceptron learning process converges after a definite number of iterative steps. As earlier
enumerated, when dealing with the dehumidifier, each neuron possesses a net and activation function acting on the combination of network inputs in the form of {xj : 1 ≤ j ≤ n} inside the neuron. Assigning every link between neurons a variable weight factor, each neuron produces a sum of all inbound signal weights resulting in an internal activity level ai defined as:

$a_i = w_o + \sum_{j=1}^{n} w_{ij} x_j$  (7)

Where {wij : 1 ≤ j ≤ n} are the synaptic weights and wo is the bias used to model minimum or maximum conditions. The activation of the network solely depended on the applied threshold, mathematically represented as:

$y_i = f(a_i)$  (8)

For simplicity and convenience of this cluster of ANN, the logistic function shown in equation 9 was used for the activation:

$f(a) = \frac{1}{1 + e^{-a}}$  (9)
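Equations (7) and (9) together describe a single neuron: a weighted sum plus bias passed through the logistic activation. A minimal sketch, with illustrative names and values:

```python
import math

def neuron_output(weights, inputs, bias):
    """One neuron: internal activity level (Eq. 7) passed through the
    logistic activation function (Eq. 9)."""
    a = bias + sum(w * x for w, x in zip(weights, inputs))  # Eq. (7)
    return 1.0 / (1.0 + math.exp(-a))                       # Eq. (9)

y = neuron_output([0.5, -0.25], [1.0, 2.0], 0.0)  # a = 0, so output = 0.5
```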
The learning loop containing input formats, error calculation and adjustment was varied
using sets of various input-output examples until an acceptable response level of network sum
of square error was achieved. Knowing the technique of input data format, the expected
output and the type of modelling task, the number of nodes for input and output was easily
determined, though not fixed. For this study, the constructed general layout of the ANN configuration is presented in Figure 1, with six nodes on the input layer, 4 to 14 nodes on each of the two hidden layers, and a one-node output layer.
Figure 1 The artificial neural network structure
Whereas normalization, also known as scaling of input data, enables transposing of the inputs into the statistical range accommodated by the sigmoid stimulation function, it does not work well for, and tends to misrepresent, dynamic data, which formed the majority in this case. Therefore, an alternative was considered by linearly magnifying the data interval commensurate with the stimulation function. A linear scale was adopted by attaching a static linking weight to each neuron fed with a linear stimulation function and a 1:1 linkage to the input stratum. This enabled the calculation of regressions with the capability of transposing any input into any output range.
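The linear scaling described above can be sketched as follows; mapping onto [-1, 1] and the example temperature interval are illustrative assumptions, not the paper's settings:

```python
def linear_scale(x, lo, hi, a=-1.0, b=1.0):
    """Map x from its observed interval [lo, hi] onto [a, b], a target
    range suited to the sigmoid stimulation function."""
    return a + (x - lo) * (b - a) / (hi - lo)

# Example: inlet temperatures observed between 20 and 60 degrees C
scaled = [linear_scale(t, 20.0, 60.0) for t in (20.0, 40.0, 60.0)]
# -> [-1.0, 0.0, 1.0]
```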
4. ERROR BACK PROPAGATION TRAINING OF ANN
Optimized data weights were to be approximated and then trained to give desirable outcomes in the fewest number of whole iteration procedures of ANN training, also known as epochs. In batch training, a set of examples passes through the learning algorithm concurrently in a single epoch prior to reorganizing the respective weights. Alternatively, successive training
involves updating of weight at every instance the training vector passes over the training
algorithm. Whereas batch training enables fast processing of numerous non-zero input data, sequential training was preferred for this study because of its accuracy irrespective of whether the data is defined or undefined.
The same procedure as previously laid out by the authors in analyzing the dehumidifier
was followed. To establish the weight combination of each layer an error backpropagation
training (EBPT) technique was used. Taking a set of training examples in the form of
{x(j);1 j n}, all the n inputs in the neural network were initially entered and then the
expected outputs {z(j);1 j n} were calculated. The training data comprised N sets of
input-output trajectories defining the task. The algorithm minimized the mean square
variation between the actual and anticipated outcomes in a back-propagation scheme. The
performance of the back-propagation algorithm was geared towards a predetermined error function involving the general average of the variation between individual neurons in the output stratum and the anticipated result. The error function was formulated with the aim of varying the weight
matrix W in order to minimize error. Hence, the sum of square error E was then calculated as
follows:
$E = \sum_{j} [d_j - z_j]^2 = \sum_{j} [d_j - f(w_j \cdot x)]^2$  (10)

Where wji is the weight matrix [W0 W1 W2 ... Wn] and x is the input vector [X0 X1 X2 ... Xn]. With j
as the indexing constant for neurons in the output layer and dj as the constituent of the Nth
anticipated vector and f(wjxij) being the component of the output of N inputs, the minimization
of the objective function called for modifying instructions to change the weights of the neuron
linkages. Care was taken to avoid the occurrence of a linear least square optimization
problem, since lessening the error task gives rise to modification instructions to change the
neuron linkage weights. Therefore, to modify the link between two adjacent neurons in layers
L and L+1 respectively without oscillation, an iterative correction factor with a momentum
term was formulated as:
$\Delta w_{ji}(n) = \eta\,\delta_j z_i + \alpha\,\Delta w_{ji}(n-1)$  (11)

With n as the number of iterations, the correction factor was $\Delta w_{ji}(n)$. Index i represents the units in layer L, $\eta$ is the learning rate, zi is the output of the ith neuron in layer L, and $\delta_j$ is the error element transmitted from the preceding jth neuron in layer L+1, determined for the jth neuron in the output layer as $\delta_j = [d_j - z_j]\,z_j[1 - z_j]$, and as $\delta_j = z_j[1 - z_j]\sum_m \delta_m w_{mj}$ for the jth neuron in a hidden layer with m neurons in layer L+2. $\alpha$ is a real constant which checks
the influence of previous weight modifications on the current path of traffic in the weight
matrix. The feed-forward ANN algorithm is thus laid down as follows:
1. Start
2. Set the weights to small arbitrary values
3. Arbitrarily select an input pattern x
4. Disseminate the signal forward through the network
5. Calculate the error term for the output layer: $\delta_i^L = f'(h_i^L)[d_i - z_i]$, where $h_i^L$ is the net input to the ith neuron of the output layer while f' is the derivative of the stimulation function f
6. Repeat procedure 5 for the preceding levels by transmitting the error towards the back according to the expression $\delta_i^l = f'(h_i^l)\sum_j w_{ji}\,\delta_j^{l+1}$ for l = (L-1, ..., 1)
7. Modify the weights by the function $\Delta w_{ij} = \eta\,\delta_j z_i$
8. Go back to stage 3 and replicate the procedure until the total number of repetitions is achieved or the output layer displays an error under the specified threshold
9. End
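The feed-forward and error back-propagation steps above can be sketched end to end in Python. This is a minimal illustration on the XOR problem with a 2-2-1 sigmoid network trained by sequential updates with a momentum term as in Equation (11); the data set, learning rate, momentum value and epoch count are illustrative choices, not the paper's settings:

```python
import math, random

random.seed(0)
sigmoid = lambda a: 1.0 / (1.0 + math.exp(-a))

# 2-2-1 network: the last weight in each row is the bias (input fixed at 1.0)
w_h = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-0.5, 0.5) for _ in range(3)]
dw_h = [[0.0] * 3 for _ in range(2)]   # previous hidden-layer corrections
dw_o = [0.0] * 3                        # previous output-layer corrections
eta, alpha = 0.5, 0.9                   # learning rate and momentum constant
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]  # XOR training pairs

for epoch in range(5000):
    for x, d in data:                   # sequential (per-pattern) updates
        # steps 3-4: select a pattern and propagate the signal forward
        z_h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        z_o = sigmoid(w_o[0] * z_h[0] + w_o[1] * z_h[1] + w_o[2])
        # step 5: output-layer error term, delta = (d - z) z (1 - z)
        d_o = (d - z_o) * z_o * (1.0 - z_o)
        # step 6: back-propagate the error to the hidden layer
        d_h = [z_h[j] * (1.0 - z_h[j]) * w_o[j] * d_o for j in range(2)]
        # step 7 / Eq. (11): weight corrections with a momentum term
        for j, inp in enumerate(z_h + [1.0]):
            dw_o[j] = eta * d_o * inp + alpha * dw_o[j]
            w_o[j] += dw_o[j]
        for j in range(2):
            for i, inp in enumerate(list(x) + [1.0]):
                dw_h[j][i] = eta * d_h[j] * inp + alpha * dw_h[j][i]
                w_h[j][i] += dw_h[j][i]
```

After training, the sum of squared errors over the four patterns should be far below its initial value, mirroring the stopping criterion of step 8.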
A combination of the parameters summarized in Table 2 and the feedforward algorithm constituted the ANN model logic procedure and the final decision on the output.
Table 2 ANN modelling parameters

Item | Parameter
Liquid desiccant | Lithium bromide
Inputs | Air: inlet humidity ratio, inlet temperature and flow rates; Desiccant: concentration, inlet temperature and flow rates
Outputs | Temperature, humidity ratio, moisture removal rate and effectiveness
Network structures | 6-4-1, 6-6-1, 6-12-1, 6-14-1, 6-4-4-1
Number of hidden-layer nodes | 4, 6, 12, 14
Training technique | Feedforward - error back propagation algorithm
Training ratio | 70% of data = 60
Testing ratio | 30% of data = 25
Training function | traingdm
Learning function | learngdm
Performance function | $E = \sum_j [d_j - z_j]^2 = \sum_j [d_j - f(w_j \cdot x)]^2$
Decision logic | If (calculated value - assigned value) < 1 x 10^-3, then lowest error; accept output
5. RESULTS AND DISCUSSION
Supervised learning based on the reinforcement technique involving the error-correction rule and perceptron convergence theorem was applied to develop an ANN algorithm in MATLAB. The choice of an appropriate number of training arrangements offering effective generalization was critical for the computational accuracy of the ANN algorithm. To determine the ANN configuration which would give the best training outcomes, various structures were considered for both moisture removal rate and effectiveness. The ensuing coefficient of determination (R²) values during training, validation and testing were used to choose the most suitable structure. The overall values were then obtained for each combination and the best chosen.
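As a sketch of this selection statistic, the coefficient of determination can be computed directly from measured and predicted values; the data below are hypothetical, not the paper's results:

```python
def r_squared(measured, predicted):
    """Coefficient of determination R^2 between measured and predicted values:
    one minus the ratio of residual to total sum of squares."""
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

r2 = r_squared([1.0, 2.0, 3.0], [1.1, 1.9, 3.0])  # about 0.99
```

Perfect prediction gives R² = 1; the configuration with the R² closest to 1 across training, validation and testing is retained.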
A summary of the respective patterns and their corresponding R² values during the regeneration process is presented in Table 2. Based on the respective outcomes of the numerous combinations tested and analysed, configurations 6-4-4-1 and 6-14-1 demonstrated the best performance levels for moisture removal rate and effectiveness respectively for the
regenerator. The results informed the decision for choice of these configurations for
comparison of various parameters.
The R² values for the regenerator MRR model ranged from 0.82 to 0.985, 0.82 to 0.991, 0.78 to 0.991 and 0.78 to 0.975 during training, validation, testing and overall respectively. It was noted that the finest MRR performance prediction was achieved by configuration 6-4-4-1 at 0.975, validating at epoch 8 with a value of 1.7735 x 10^-8 as shown in Figure 3a. In a similar sequence, the regenerator effectiveness was predicted within the ranges 0.83-0.999, 0.82-0.999, 0.85-0.993 and 0.82-0.991 respectively. Structure 6-14-1 produced the finest results at 0.991, attaining an optimum performance prediction level of 3.3323 x 10^-7 at epoch 5 as seen in Figure 3b.
Figure 3 The best-fit validation outcome for the regenerator (a) MRR (b) effectiveness
The regenerator MRR and effectiveness were best predicted at training output settings of 1*target+0.0034 and 1*target-0.000057 respectively. Further detailed presentations of the testing, validation and overall outputs are shown in Figures 4 and 5. With the training target being the experimental data, validation stopped at epochs 3 and 5 respectively, at which point the corresponding R² values were 0.984 and 0.999.
Figure 4 The 6-4-4-1 ANN structure training regression validation halt at epoch 3 for MRR
Figure 5 The 6-14-1 ANN structure training regression validation halt at epoch 5 for effectiveness
6. MODEL AND EXPERIMENTAL RESULTS COMPARISON
The regenerator performance was characterized by MRR and effectiveness subjected to
varying inlet temperatures of air and desiccant solution as well as inlet air humidity ratio. The
training process was terminated when the iterations peaked at the defined total epochs of 25
000 or upon attainment of the least error on validation procedure, whichever came first. As a
result, based on the comparison between the experimental and predicted results for MRR and
effectiveness, the following findings were made.
The experimental and predicted regenerator MRR were plotted side by side against the
test number in Figure 6 for structure 6-4-4-1. The highest MRR occurred at the point of
highest desiccant temperature, as dictated by the solar radiation. On evaluation, the
predicted and experimental profiles aligned closely, with the maximum and mean
differences being 0.18 g/s and 0.11 g/s respectively. As presented in Figure 7, the
regenerator effectiveness was also computed and plotted for structure 6-14-1. Again, the
profiles agreed well, with only negligible disparities: a mean and maximum difference of
0.6 % and 1 % respectively.
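The maximum and mean differences quoted here follow directly from the paired predicted and measured series. A short sketch, with purely illustrative numbers in place of the paper's data set:

```python
import numpy as np

def prediction_errors(measured, predicted):
    """Maximum and mean absolute difference between paired measured
    and ANN-predicted values."""
    diff = np.abs(np.asarray(predicted) - np.asarray(measured))
    return diff.max(), diff.mean()

# Illustrative MRR values in g/s, not the paper's data:
max_d, mean_d = prediction_errors([1.0, 1.2, 1.5], [1.1, 1.15, 1.45])
```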
Figure 6 The degree of accuracy between experimental and ANN predicted MRR values
Figure 7 The degree of accuracy between experimental and ANN predicted effectiveness
Since the humidity ratio (HR) of inlet air is essential in the design of LDAC systems, the
humidity ratio at inlet conditions was monitored, recorded and used to train the neural
network algorithm to mimic the experimental outcomes. The predicted parameters were
then compared to those obtained experimentally. The variation of MRR and effectiveness
against inlet air HR is plotted for the regenerator in Figure 8. The MRR increased with HR
up to a maximum value of 1.47 g/s at 0.03 kg H2O/kg dry air, then declined slightly. The
algorithm predicted the experimental values to maximum and mean accuracies of 0.0925 %
and -0.012 % respectively. For effectiveness, higher values were initially recorded up to an
HR of 0.018 kg H2O/kg dry air, after which a steady decline began. Maximum and mean
accuracies of 4.14 % and 0.53 % respectively were realized in the prediction of the
experimental results. The highest effectiveness obtained was 70 %, occurring below an HR
of 0.03 kg H2O/kg dry air.
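The peak MRR and the HR at which it occurs can be read off the recorded series with a simple argmax. The (HR, MRR) samples below are hypothetical values shaped like the trend described for Figure 8, not the measured data:

```python
import numpy as np

# Hypothetical samples mimicking the reported trend:
hr  = np.array([0.015, 0.020, 0.025, 0.030, 0.035])  # kg H2O / kg dry air
mrr = np.array([0.60,  0.95,  1.30,  1.47,  1.40])   # g/s

peak = np.argmax(mrr)                 # index of the highest MRR sample
peak_mrr, peak_hr = mrr[peak], hr[peak]
```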
Figure 8 The variation of MRR and effectiveness in relation to humidity ratio of air at inlet conditions
6.1. Effect of inlet desiccant temperature
The effect of varying the inlet desiccant temperature on the regenerator is plotted in
Figure 9. The MRR displayed low sensitivity to changes in desiccant temperature at entry
to the regenerator; however, beyond 32 °C a diminishing trend was realized, i.e. MRR
reduced as temperature increased beyond this point. The highest difference between the
predicted and measured values was 3.496 %. From these findings, it can be concluded that
the ANN model precisely predicted the experimental MRR with respect to inlet desiccant
temperature, with an average deviation of -0.5290 %. However, some see-saw variations
were observed where the model did not come close; these were attributed to minor
discrepancies in the experiments and oversimplification of the algorithm.
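Deviations such as the 3.496 % maximum and -0.5290 % mean quoted above are signed percentage differences between predicted and measured values. A minimal sketch (function name and sample values are illustrative):

```python
import numpy as np

def percent_deviation(measured, predicted):
    """Signed percentage deviation of each prediction from its measured
    value, plus the worst-case (largest magnitude) and mean deviations."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    dev = 100.0 * (predicted - measured) / measured
    return dev, dev[np.abs(dev).argmax()], dev.mean()

# Illustrative values only:
dev, worst, mean = percent_deviation([1.0, 1.25], [1.02, 1.24])
```

Note that a signed mean lets over- and under-predictions cancel, which is why the reported mean deviations can be small and negative even when individual deviations reach a few percent.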
Of more interest was how the regenerator effectiveness varied with change in inlet
desiccant temperature, the driver of heat and mass transfer. An increase in desiccant
temperature resulted in improved regenerator effectiveness. This implied that desiccant at
elevated temperature readily lost water vapour to the atmospheric air, resulting in
re-concentration to near initial conditions in readiness for re-circulation to the
dehumidifier. This temperature increase could be provided by any renewable source or
waste heat; in this case a hybrid PV/T was used. The variation of regenerator effectiveness
is evident in Figure 10, which shows a side-by-side comparison of the ANN-generated
values with those from the experiment. The maximum and mean deviations attained were
2.61 % and 0.21 % respectively, implying a near-perfect fit.
Figure 9 The effect of inlet desiccant temperature on the moisture removal rate (MRR) of the regenerator
Figure 10 Effect of inlet desiccant temperature on the effectiveness of the regenerator
6.2. Effect of inlet air temperature
In the regenerator, water vapour is expelled from the desiccant and absorbed into the air,
which is then exhausted to the atmosphere. The regenerator moisture removal rate varied
proportionally with the air temperature, as depicted in Figure 11: as the temperature rose,
the moisture removal rate trended upward. This trend continued up to 30 °C, after which a
slight reduction ensued; however, up to the 40 °C mark the MRR was still well over 1 g/s.
Again, the predicted MRR values matched the values calculated from measured data
closely. Although there were some negligible variations, the highest MRR was 1.5 g/s,
with a mean and highest difference of -0.12 % and 3.2 % respectively. The deviations were
insignificant compared to the maximum allowable value of 20 %, hence the algorithm was
deemed a success in this case.
The highest regenerator effectiveness achieved was 70 %, with air at a room temperature
of 25 °C, as shown in Figure 12; beyond this point the effectiveness reduced significantly.
The effectiveness outcomes of the ANN model were matched with the experimental data
and found to be within a mean and maximum deviation of -0.23 % and 2.1 % respectively.
The insensitivity of effectiveness to air temperature was generally due to the air properties
at room temperature, which favoured water vapour absorption by the liquid desiccant. In
contrast, for the regeneration process, higher desiccant temperatures resulted in higher
effectiveness and hence better performance.
Figure 11 Effect of the regenerator inlet air temperature on MRR
Figure 12 The effect of inlet air temperature on effectiveness of the regenerator
7. CONCLUSION
Moisture removal rate and effectiveness were used as the performance analysis parameters
for a solar adiabatic liquid desiccant regenerator. Using supervised learning with error
correction and the perceptron convergence theorem, an ANN algorithm was developed and
implemented in MATLAB. A regression analysis was performed on various ANN
structures during training and the respective coefficients of determination (R²) established,
which then formed the basis for choosing the best-fit combination. Data from previous
experimental results were used to train, test and validate the ANN
algorithm. To avoid oversimplification and/or over-complication of the model, the number
of neurons and layers were carefully chosen for accuracy of the algorithm. From the
respective outcomes, the regenerator performance was best predicted by structures 6-4-4-1
and 6-14-1 for MRR and effectiveness respectively; hence the results and comparisons
discussed were based on these configurations. From an in-depth analysis of the algorithm
performance, and upon comparison of the ANN-generated results with those from
experiments, a number of conclusions were drawn, as presented below.
The predicted and experimental regenerator MRR profiles aligned closely, with the
maximum and mean differences being 0.18 g/s and 0.11 g/s respectively. The regenerator
effectiveness profiles agreed well, with a few negligible disparities and a mean and
maximum difference of 0.6 % and 1 % respectively. The algorithm predicted the
experimental MRR values to maximum and mean accuracies of 0.0925 % and -0.012 %
respectively, while maximum and mean accuracies of 4.14 % and 0.53 % respectively
were realized in the prediction of experimental regenerator effectiveness. Overall, the
prediction was deemed accurate since deviations were negligible and within acceptable
limits. The ANN model precisely predicted the experimental regenerator MRR with
respect to inlet desiccant temperature, with an average deviation of -0.5290 % and a
highest difference of 3.496 % between predicted and measured values.
As inlet desiccant temperature is the driver of heat and mass transfer in the regenerator,
the effectiveness varied with changes in it. The side-by-side comparison of the general
trends predicted by the ANN algorithm against the experimental values revealed maximum
and mean deviations of 2.61 % and 0.21 % respectively. While the regenerator moisture
removal rate varied proportionally with the air temperature, the predicted MRR values
matched the values calculated from measured data closely, with the mean and highest
differences being -0.12 % and 3.2 % respectively.
The regenerator effectiveness outcomes of the ANN model were matched with the
experimental data and found to be within a mean and maximum deviation of -0.23 % and
2.1 % respectively. In all the aforementioned cases, the mean and maximum differences
between the ANN model and the experimental values were well below the allowable limit
of 5 %, hence the algorithm was deemed successful and could find use in air-conditioning
scenarios. The ANN algorithm also proved capable and flexible in processing unforeseen
inputs, predicting the regenerator effectiveness and MRR accurately, with negligible
deviations, across all ranges of temperature and concentration.
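The selected 6-4-4-1 topology (six inputs, two hidden layers of four neurons, one output) can be sketched as a simple feedforward pass. The weights below are random and the tanh-hidden/linear-output activation choice is an assumption, since this excerpt does not state the transfer functions used:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weight matrix and bias vector for one fully connected layer."""
    return rng.standard_normal((n_out, n_in)), rng.standard_normal(n_out)

# 6-4-4-1: six inputs, two hidden layers of four neurons, one output (e.g. MRR)
layers = [layer(6, 4), layer(4, 4), layer(4, 1)]

def forward(x):
    """Feedforward pass: tanh hidden layers, linear output (assumed)."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:   # apply the nonlinearity to hidden layers only
            x = np.tanh(x)
    return x

y = forward(np.zeros(6))   # one prediction from the (untrained) network
```

Training with error backpropagation, as done in the paper via MATLAB, would then adjust these weights and biases to minimize the error against the experimental targets.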