Estimation of global solar radiation by using machine learning methods – Mehmet Şahin
In this study, global solar radiation (GSR) was estimated for 53 locations using the ELM, SVR, k-NN, LR and NU-SVR methods. The methods were trained on a two-year data set, and their accuracy was tested on a one-year data set; each year's data comprised 12 monthly values. The values of month, altitude, latitude, longitude, vapour pressure deficit and land surface temperature were used as model inputs, and GSR was obtained as the output. Vapour pressure deficit and land surface temperature were taken from NOAA-AVHRR satellite radiometry. The estimated solar radiation data were compared with actual data obtained from meteorological stations. According to the statistical results, the most successful method was NU-SVR, with RMSE and MBE values of 1.4972 MJ/m² and 0.2652 MJ/m², respectively, and an R value of 0.9728. The worst-performing method was LR. For the other methods, RMSE values ranged between 1.7746 MJ/m² and 2.4546 MJ/m². The statistical results show that the ELM, SVR, k-NN and NU-SVR methods can be used to estimate GSR.
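The error statistics quoted above (RMSE, MBE and R) can be computed as in the sketch below. The monthly GSR values are hypothetical illustrations, and the sign convention for MBE (actual minus predicted) is an assumption, since conventions vary between studies:

```python
import math

def rmse(actual, predicted):
    # Root-mean-square error: overall magnitude of the estimation error
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mbe(actual, predicted):
    # Mean bias error: systematic over- or under-estimation
    # (sign convention here: actual minus predicted, an assumption)
    return sum(a - p for a, p in zip(actual, predicted)) / len(actual)

def pearson_r(actual, predicted):
    # Pearson correlation coefficient between measured and estimated GSR
    n = len(actual)
    ma, mp = sum(actual) / n, sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return cov / (sa * sp)

# Hypothetical monthly GSR values in MJ/m^2, for illustration only
measured = [8.1, 11.4, 15.9, 20.3, 24.6, 27.0]
estimated = [8.5, 11.0, 16.4, 19.8, 25.1, 26.2]
print(rmse(measured, estimated), mbe(measured, estimated), pearson_r(measured, estimated))
```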
Ill-posedness formulation of the emission source localization in the radio-d... – Ahmed Ammar Rebai, PhD
To contact the authors: tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown that the solution of the radio-transient source localization problem (reconstructing the source from the radio-shower arrival times on the antennas) depends strongly on the formulation, and that some solutions are purely numerical artifacts. Based on a detailed analysis of already published results from radio-detection experiments such as CODALEMA 3 in France, AERA in Argentina and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard. Two approaches are used: exhibiting the degeneracy of the set of solutions, and showing the bad conditioning of the mathematical formulation of the problem. A comparison between experimental results and simulations is made to support the mathematical analysis. Several properties of the non-linear least-squares objective function are discussed, such as the configuration of the set of solutions and the bias.
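The bad conditioning mentioned above can be illustrated numerically: for a small ground array, the least-squares Jacobian of the antenna-to-source distances becomes nearly rank-deficient as the source recedes, so its condition number grows. The array geometry and the simple distance-residual model below are illustrative assumptions, not the actual configurations of the experiments:

```python
import numpy as np

# Hypothetical ground antenna array (x, y, z) in metres, for illustration only
antennas = np.array([[0., 0., 0.], [100., 0., 0.], [0., 100., 0.],
                     [100., 100., 0.], [50., 50., 0.]])

def jacobian(source):
    # Jacobian of the residuals r_i = ||x_i - s|| w.r.t. the source position s:
    # each row is the unit vector pointing from the source towards antenna i.
    d = antennas - source
    return -d / np.linalg.norm(d, axis=1, keepdims=True)

# As the source moves away, the rows become nearly parallel (a quasi-planar
# wavefront), so the condition number of the localization problem blows up.
for height in (200.0, 2000.0, 20000.0):
    J = jacobian(np.array([50.0, 50.0, height]))
    print(height, np.linalg.cond(J))
```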
LES Analysis on Confined Swirling Flow in a Gas Turbine Swirl Burner – Roshan Sah
This presentation describes a Large Eddy Simulation (LES) investigation of the flow fields in a model gas turbine combustor equipped with a swirl burner. A probability density function approach was used to describe the interaction between chemical reaction and turbulent flow as liquid fuel was injected directly into the combustion chamber and rapidly mixed with the swirling air. Simulation results showed that heat release during combustion accelerated the axial flow and made the recirculation zone more compact.
Climate model parameterizations of cumulus convection and other clouds that form due to small-scale turbulent eddies are a leading source of uncertainty in predicting the sensitivity of global warming to greenhouse gas increases. Even though we can write down equations governing the physics of cloud formation and fluid motion, these cloud-forming eddies are not resolved by the grid of a climate model, so the subgrid covariability of cloud processes and turbulence must be parameterized. Many approaches are used, all involving numerous subjective assumptions. Even when optimized to match present-day climate, these approaches produce a broad range of predictions about how clouds will change in a future climate.
High-resolution models that explicitly simulate clouds and turbulence on a very fine computational grid reproduce observed cloud formation more realistically. But it has proved challenging to translate this skill into better climate model parameterizations.
We will present one naturally stochastic approach to this, the computationally expensive technique called 'superparameterization', and then lay out a vision for how machine learning could be used to perform this translation, which amounts to a form of stochastic coarse-graining. Developing the statistical and computational methods to realize this vision is a good challenge for this SAMSI year.
AGU A31G-2917: Retrieving temperature and relative humidity profiles fro... – Maosi Chen
Atmospheric temperature and relative humidity profiles are fundamental for atmospheric research such as numerical weather prediction and climate change assessment. Hyperspectral satellite data contain a wealth of relevant information and have been used in many algorithms (e.g. regression-based methods) to retrieve these profiles. Deep Learning, or Deep Neural Networks (DNNs), can find complex relationships (functions) between pairs of input and output variables by assembling many simple non-linear modules and learning the parameters therein from large amounts of observations. DNNs have been successfully applied in many fields (such as image classification, object detection and language translation). In this study, we explored the potential of retrieving atmospheric profiles from hyperspectral satellite radiation data using a DNN. The requirement for applying the DNN technique is satisfied by the large amount of hyperspectral radiance data provided by the United States Suomi National Polar-orbiting Partnership (NPP) Cross-track Infrared Sounder (CrIS) and the reanalysis atmospheric profile data provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). The proposed DNN consists of two consecutive parts. In the first part, the first 1245 bands of the NPP CrIS hyperspectral radiance data (648.75 to 2555 cm⁻¹) are compressed by stacked AutoEncoders into a 300-element vector representing their key features. Then, in the second part, a multi-layer Self-Normalizing Neural Network (SNN) maps the compressed 300-element vector into 55-layer temperature and relative humidity profiles. The DNN's trainable variables are optimized by minimizing the difference between its predictions and the matched ECMWF temperature and humidity profiles (53230 samples).
Finally, the DNN-retrieved atmospheric temperature and relative humidity profiles, and those provided by the NOAA Unique Combined Atmospheric Processing System (NUCAPS, the official retrieval products for CrIS), are compared with matched radiosonde observations at one location.
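A minimal sketch of the two-part architecture described above: a 1245-band spectrum compressed to a 300-element code, then mapped to the two 55-level profiles. The weights are random, the hidden-layer width is an assumption, and concatenating temperature and humidity into one 110-element output is an assumption; this shows shapes and the SELU activation only, not the trained retrieval:

```python
import numpy as np

rng = np.random.default_rng(0)

def selu(x):
    # SELU activation used by self-normalizing networks
    alpha, scale = 1.6732632423543772, 1.0507009873554805
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def dense(x, n_out):
    # Untrained layer with random weights: illustrates shapes, not a trained model
    w = rng.normal(0.0, 1.0 / np.sqrt(x.shape[-1]), (x.shape[-1], n_out))
    return x @ w

spectrum = rng.normal(size=(1, 1245))   # one CrIS radiance vector (1245 bands)
code = selu(dense(spectrum, 300))       # stacked-autoencoder bottleneck: 300 features
hidden = selu(dense(code, 256))         # SNN hidden layer (width is an assumption)
profiles = dense(hidden, 110)           # 55 temperature + 55 relative-humidity levels
print(spectrum.shape, code.shape, profiles.shape)
```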
Stochastic runoff forecasting and real time control of urban drainage systems – EVAnet Denmark
Modelling of drainage systems, uncertainties, stochastic models, receiving waters and health.
• Development of methods to assimilate measurements from the drainage system into physically based models such as Mike Urban, and to investigate how the assimilated measurements are affected by various error sources.
• Stochastic forecasts of stormwater volumes in connection with drainage modelling, including real-time control. This is done with grey-box modelling. Presentation of the ambitious research project Storm- and Waste water Informatics (SWI).
Multivariable Parametric Modeling of a Greenhouse by Minimizing the Quadratic... – TELKOMNIKA JOURNAL
This paper concerns the identification of a greenhouse described as a multivariable linear system with two inputs and two outputs (TITO). The proposed method is based on least squares identification and, without losing efficiency, uses an iterative calculation algorithm with a reduced computational cost. Moreover, its recursive character allows it, with a good initialization, to track slight parameter variations, which are inevitable in a real multivariable process. A comparison with other methods recently proposed in the literature demonstrates the advantage of this method. Simulations are presented to show the effectiveness and applicability of the method on multivariable systems.
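The recursive character of the least-squares identification described above can be sketched as a standard recursive-least-squares (RLS) update. The single-output form and the parameter values below are illustrative assumptions (a TITO system would run one such recursion per output channel):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    # One recursive-least-squares step: update the parameter estimate theta
    # and covariance P from the regressor phi and a new output sample y.
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    theta = theta + k * (y - phi @ theta)  # correct by the prediction error
    P = (P - np.outer(k, Pphi)) / lam      # covariance downdate
    return theta, P

rng = np.random.default_rng(1)
true_theta = np.array([0.8, 0.3, 0.5])     # illustrative parameters, not the paper's
theta = np.zeros(3)
P = 1e3 * np.eye(3)                        # large P encodes an uninformative start
for _ in range(500):
    phi = rng.normal(size=3)               # regressor: past inputs/outputs in an ARX model
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)  # converges close to true_theta
```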
A Novel Technique in Software Engineering for Building Scalable Large Paralle... – Eswar Publications
Parallel processing is the only alternative for meeting the computational demands of scientific and technological advancement. Yet the first few parallelized versions of a large application code (in the present case, a meteorological Global Circulation Model) are not usually optimal or efficient. The large size and complexity of the code make changes for efficient parallelization, and their subsequent validation, difficult. The paper presents some novel techniques that enable the parallelization strategy to be changed while keeping the correctness of the code under control throughout the modification.
Mathematical Calculation to Find the Best Chamber and Detector Radii Used for Mea... – theijes
Generating and Using Meteorological Data in AERMOD BREEZE Software
AERMOD, the preferred model of the U.S. EPA for near-field air dispersion modeling, requires the use of two meteorological files: the surface (.SFC) and profile (.PFL) files.
Energy lifetime control algorithm for variable target load demands of ad hoc ... – INFOGAIN PUBLICATION
The energy and lifetime of ad hoc wireless sensor-target networks are improved using a load control algorithm with different parameters, coverage load demands, and sensor-target configurations. The main goal is to increase the lifetime of the sensors by selecting appropriate sensor subsets that satisfy the minimum required value of the overall coverage failure probability. The algorithm investigates the different sensor subsets according to their coverage failure probabilities and varying intervals of target load demands.
Physical processes in the earth system are modeled with mathematical representations called parameterizations. This talk will describe some of the conceptual approaches and mathematics used to describe physical parameterizations, focusing on cloud parameterizations. This includes tracing physical laws to discrete representations in coarse-scale models. Clouds illustrate several of the complexities and techniques common to many physical parameterizations, including the problem of different scales and sub-grid-scale variability. Mathematical methods for dealing with the sub-grid scale will be discussed, along with inexactness and indeterminacy in both weather and climate, including the problems of indeterminate parameterizations and inexact initial conditions. Different mathematical methods, including stochastic methods, will be described and discussed, with examples from contemporary earth system models.
Improved Kalman Filtered Neuro-Fuzzy Wind Speed Predictor For Real Data Set ... – IJMER
Wind energy plays an important role as a contributing source of energy, both now and in the future. It has become very important to predict wind speed and direction in wind farms. Effective wind prediction has always been challenged by the nonlinear and non-stationary characteristics of the wind stream. This paper presents three new models for wind speed forecasting, a day ahead, for the Egyptian North-Western Mediterranean coast. These wind speed models are based on an adaptive neuro-fuzzy inference system (ANFIS) estimation scheme. The first proposed model predicts wind speed for the next twenty-four hours based on the same month of real data in seven consecutive years. The second proposed model predicts twenty-four hours ahead based on only one month of data, using a time series prediction scheme. The third proposed model is also based on one month of data to predict twenty-four hours ahead; the data are initially passed through a discrete Kalman filter (KF) to minimize the noise content resulting from the uncertainties encountered during wind speed measurement. The Kalman-filtered data used by the third model showed better estimation results than the other two models, and decreased the mean absolute percentage error by approximately 64% relative to the first model.
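The Kalman pre-filtering step of the third model can be sketched with a scalar discrete Kalman filter. The random-walk state model, the noise variances and the synthetic wind-speed series below are assumptions for illustration, not the paper's tuned values or data:

```python
import numpy as np

def kalman_smooth(z, q=0.05, r=0.25):
    # Scalar discrete Kalman filter: random-walk state model x_k = x_{k-1} + w,
    # measurement z_k = x_k + v, with process/measurement noise variances q, r
    # (values here are assumptions; in practice they are tuned per data set).
    x, p = z[0], 1.0
    out = []
    for zk in z:
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        x = x + k * (zk - x)           # update with the new measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
t = np.linspace(0, 24, 145)                          # one synthetic day, 10-min steps
true_speed = 6.0 + 2.0 * np.sin(2 * np.pi * t / 24)  # hypothetical diurnal pattern
noisy = true_speed + 0.5 * rng.normal(size=t.size)   # measurement noise
filtered = kalman_smooth(noisy)
print(np.mean((noisy - true_speed) ** 2), np.mean((filtered - true_speed) ** 2))
```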
A NEW METHOD OF SMALL-SIGNAL CALIBRATION BASED ON KALMAN FILTER – ijcseit
The basic principle of the Kalman filter (KF) is introduced in this paper, and based on it a new method is presented for high-precision measurement of small signals in place of unreliable direct measurement. We have designed a method of multi-meter information fusion. With this method, we filter the measured values from a type of special equipment and extract the optimal estimate of the true value. Experimental results show that this method can effectively eliminate the random error of the measurement process. The optimal estimate error meets the basic requirements of conformity assessment, 3U95 ≤ MPEV. This method can provide an algorithmic reference for the design of automatic calibration equipment.
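The idea of fusing repeated meter readings to suppress random error can be sketched as follows: for a constant-state model, the Kalman filter reduces to a recursively weighted average whose estimate variance shrinks as sigma²/n. The signal level, noise sigma and reading count below are hypothetical, not the paper's equipment values:

```python
import numpy as np

rng = np.random.default_rng(3)
true_value = 1.000e-3                    # hypothetical 1 mV reference signal
sigma = 5e-6                             # per-reading random error (assumption)
readings = true_value + sigma * rng.normal(size=200)   # pooled multi-meter readings

# Kalman filter with constant state (no process noise): each update optimally
# weights the new reading against the running estimate.
x, p = readings[0], sigma ** 2
for z in readings[1:]:
    k = p / (p + sigma ** 2)             # gain: weight given to the new reading
    x = x + k * (z - x)                  # refine the estimate of the true value
    p = (1.0 - k) * p                    # estimate variance falls to sigma^2 / n
print(x, np.sqrt(p))                     # optimal estimate and its standard error
```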
Extreme weather events pose great potential risks to ecosystems, infrastructure and human health. Analyzing extreme weather in the observed record (satellite, reanalysis products) and characterizing changes in extremes in simulations of future climate regimes is an important task. Thus far, extreme weather events have typically been specified by the community through hand-coded, multi-variate threshold conditions. Such criteria are usually subjective, and often there is little agreement in the community on the specific algorithm that should be used. We propose a different approach: machine learning (and in particular deep learning) for solving this important problem. If human experts can provide spatio-temporal patches of a climate dataset, and associated labels, we can turn to a machine learning system to learn the underlying feature representation. The trained machine learning (ML) system can then be applied to novel datasets, thereby automating the pattern detection step. Summary statistics, such as the location, intensity and frequency of such events, can easily be computed as a post-process.
We will report compelling results from our investigations of Deep Learning for the tasks of classifying tropical cyclones, atmospheric rivers and weather front events. For all of these events, we observe 90-99% classification accuracy. We will also report on progress in localizing such events: namely, drawing a bounding box (of the correct size and scale) around the weather pattern of interest. Both tasks currently utilize multi-layer convolutional networks in conjunction with hyper-parameter optimization. We utilize HPC systems at NERSC to perform the optimization across multiple nodes, and highly tuned libraries to exploit multiple cores on a single node. We will conclude with thoughts on the frontier of Deep Learning and the role of humans (vis-à-vis AI) in the scientific discovery process.
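A forward pass of the kind of convolutional patch classifier described above can be sketched in a few lines. The kernel size, pooling choice and patch contents are illustrative assumptions, and the weights are untrained; a real detector would be a trained multi-layer network:

```python
import numpy as np

rng = np.random.default_rng(4)

def conv2d(img, kernel):
    # 'valid' 2-D convolution (cross-correlation, as in most DL frameworks)
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def classify_patch(patch, kernel, w_out):
    # One conv layer + ReLU + global average pooling + logistic output:
    # probability that the patch contains the weather pattern of interest.
    feat = np.maximum(conv2d(patch, kernel), 0.0).mean()
    return 1.0 / (1.0 + np.exp(-(w_out[0] * feat + w_out[1])))

patch = rng.normal(size=(16, 16))   # e.g. a vorticity patch around a candidate cyclone
p = classify_patch(patch, rng.normal(size=(3, 3)), np.array([1.0, 0.0]))
print(p)  # untrained weights: an architecture sketch, not a detector
```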
Adaptive Hyper-Parameter Tuning for Black-box LiDAR Odometry [IROS2021] – Kenji Koide
Adaptive Hyper-Parameter Tuning for Black-box LiDAR Odometry
Kenji Koide, Masashi Yokozuka, Shuji Oishi, and Atsuhiko Banno
Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2021), pp. 7708-7714, Prague, Czech Republic, Sep., 2021
https://staff.aist.go.jp/k.koide/
Agu chen a31_g-2917_retrieving temperature and relative humidity profiles fro...Maosi Chen
Atmospheric temperature and relative humidity profiles are fundamental for atmospheric research such as numerical weather prediction and climate change assessment. Hyperspectral satellite data contain a wealth of relevant information and have been used in many algorithms (e.g. regression-based methods) to retrieve these profiles. Deep Learning or Deep Neural Network (DNN) is capable of finding complex relationships (functions) between pairs of input and output variables by assembling many simple non-linear modules together and learning the parameters therein from large amounts of observations. DNN has been successfully applied in many fields (such as image classification, object detection, language translation). In this study, we explored the potential of retrieving atmospheric profiles from hyperspectral satellite radiation data using DNN. The requirement for applying the DNN technique is satisfied with large amount of hyperspectral radiance data provided by United States Suomi National Polar (NPP) Cross-track Infrared Sounder (CrIS) and the reanalyzed atmospheric profiles data provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). The proposed DNN consists of two consecutive parts. In the first part, the first 1245 bands of the NPP CrIS hyperspectral radiance data (648.75 to 2555 cm-1) are compressed into a 300-element vector representing their key features by stacked AutoEncoders. Then, in the second part, the multi-layer Self-Normalizing Neural Network (SNN) is used to map the compressed vector (of 300 elements) into 55-layer temperature and relative humidity profiles. The DNN trainable variables are optimized by minimizing the difference of its predictions and the matched ECMWF temperature and humidity profiles (53230 samples). 
Finally, the DNN retrieved atmospheric temperature and relative humidity profiles and those provided by the NOAA Unique Combined Atmospheric Processing System (NUCAPS, the official retrieval products for CrIS) are compared with the matched radiosonde observations at one location.
Stochastic runoff forecasting and real time control of urban drainage systemsEVAnetDenmark
Modellering af afløbssystemer, Usikkerheder, stokastiske modeller ecipient og Sundhed
• Udvikling af metoder til at assimilere målinger fra afløbssystemet i fysisk baserede modeller som Mike Urban samt at undersøge, hvorledes de assimilerede målinger påvirkes af diverse fejlkilder.
• Stokastisk forecasts af regnvandsmængder i forbindelse med afløbsmodellering, herunder real time control. Dette gøres med Greybox modellering. ræsentation af det ambitiøse forskningsprojekt Storm- and Waste water Informatics SWI..
Multivariable Parametric Modeling of a Greenhouse by Minimizing the Quadratic...TELKOMNIKA JOURNAL
This paper concerns the identification of a greenhouse described in a multivariable linear system
with two inputs and two outputs (TITO). The method proposed is based on the least squares identification
method, without being less efficient, presents an iterative calculation algorithm with a reduced
computational cost. Moreover, its recursive character allows it to overcome, with a good initialization, slight
variations of parameters, inevitable in a real multivariable process. A comparison with other method s
recently proposed in the literature demonstrates the advantage of this method. Simulations obtained will be
exposed to showthe effectiveness and application of the method on multivariable systems.
A Novel Technique in Software Engineering for Building Scalable Large Paralle...Eswar Publications
Parallel processing is the only alternative for meeting computational demand of scientific and technological advancement. Yet first few parallelized versions of a large application code- in the present case-a meteorological Global Circulation Model- are not usually optimal or efficient. Large size and complexity of the code cause making changes for efficient parallelization and further validation difficult. The paper presents some novel techniques to enable change of parallelization strategy keeping the correctness of the code under control throughout the modification.
Mathematical Calculation toFindtheBest Chamber andDetector Radii Used for Mea...theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering& Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
Theoretical work submitted to the Journal should be original in its motivation or modeling structure. Empirical analysis should be based on a theoretical framework and should be capable of replication. It is expected that all materials required for replication (including computer programs and data sets) should be available upon request to the authors.
The International Journal of Engineering & Science would take much care in making your article published without much delay with your kind cooperation
Generating and Using Meteorological Data in AERMOD BREEZE Software
AERMOD, the preferred model of the U.S. EPA for near-field air dispersion modeling, requires the use of two meteorological files: the surface (.SFC) and profile (.PFL) files.
energy lifetime control algorithm for variable target load demands of ad hoc ...INFOGAIN PUBLICATION
The energy and lifetime of Ad hoc wireless sensor-target networks are improved using load control algorithm with different parameters and coverage load in demand, as well as sensor-target configurations. The main goal is to increase the lifetime of sensors by selecting appropriate sensor subsets to satisfy the minimum required value of overall coverage failure probability. The algorithm investigates the different sensor subsets, according to their coverage failure probabilities, and varying intervals of target load demands
Physical processes in the earth system are modeled with mathematical representations called parameterizations. This talk will describe some of the conceptual approaches and mathematics used do describe physical parameterizations focusing on cloud parameterizations. This includes tracing physical laws to discrete representations in coarse scale models. Clouds illustrate several of the complexities and techniques common to many physical parameterizations. This includes the problem of different scales, sub-grid scale variability. Discussions of mathematical methods for dealing with the sub-grid scale will be discussed. In-exactness or indeterminate problems for both weather and climate will be discussed, including the problems of indeterminate parameterizations, and inexact initial conditions. Different mathematical methods, including the use of stochastic methods, will be described and discussed, with examples from contemporary earth system models.
Improved Kalman Filtered Neuro-Fuzzy Wind Speed Predictor For Real Data Set ...IJMER
Wind energy plays an important role as a contributing source of energy, as well as, and in
future. It has become very important to predict the speed and direction in wind farms. Effective wind
prediction has always been challenged by the nonlinear and non-stationary characteristics of the wind
stream. This paper presents three new models for wind speed forecasting, a day ahead, for Egyptian
North-Western Mediterranean coast. These wind speed models are based on adaptive neuro-fuzzy
inference system (ANFIS) estimation scheme. The first proposed model predicts wind speed for one
day ahead twenty four hours based on same month of real data in seven consecutive years. The second
proposed model predicts twenty four hours ahead based only one month of data using a time series
predication schemes. The third proposed model is based on one month of data to predict twenty four
hours ahead; the data initially passed through discrete Kalman filter (KF) for the purpose of
minimizing the noise contents that resulted from the uncertainties encountered during the wind speed
measurement. Kalman filtered data manipulated by the third model showed better estimation results
over the other two models, and decreased the mean absolute percentage error by approximately 64 %
over the first model.
A NEW METHOD OF SMALL-SIGNAL CALIBRATION BASED ON KALMAN FILTERijcseit
The basic principle of Kalman filter (KF) is introduced in this paper, based on which, it presents a new
method for high precision measurement of small-signal instead of the unreal direct one. We have designed a
method of multi-meter information infusion. With this method, we filter the measured value of a type of
special equipment and extract the optimal estimate for true value. Experimental results show that this
method can effectively eliminate the random error of the measurement process. The optimal estimate error
meets the basic requirements of conformity assessment, 3푈95 ≤ 푀푃퐸푉. This method can provide an
algorithm reference for the design of automatic calibration equipment.
Extreme weather events pose great potential risk on ecosystem, infrastructure and human health. Analyzing extreme weather in the observed record (satellite, reanalysis products) and characterizing changes in extremes in simulations of future climate regimes is an important task. Thus far, extreme weather events have been typically specified by the community through hand-coded, multi-variate threshold conditions. Such criteria are usually subjective, and often there is little agreement in the community on the specific algorithm that should be used. We propose the use of a different approach: machine learning (and in particular deep learning) for solving this important problem. If human experts can provide spatio-temporal patches of a climate dataset, and associated labels, we can turn to a machine learning system to learn the underlying feature representation. The trained Machine Learning (ML) system can then be applied to novel datasets, thereby automating the pattern detection step. Summary statistics, such as location, intensity and frequency of such events can be easily computed as a post-process.
We will report compelling results from our investigations of Deep Learning for the tasks of classifying tropical cyclones, atmospheric rivers and weather front events. For all of these events, we observe 90-99% classification accuracy. We will also report on progress in localizing such events: namely drawing a bounding box (of the correct size and scale) around the weather pattern of interest. Both tasks currently utilize multi-layer convolutional networks in conjunction with hyper-parameter optimization. We utilize HPC systems at NERSC to perform the optimization across multiple nodes, and utilize highly-tuned libraries to utilize multiple cores on a single node. We will conclude with thoughts on the frontier of Deep Learning and the role of humans (vis-a-vis AI) in the scientific discovery process.
Adaptive Hyper-Parameter Tuning for Black-box LiDAR Odometry [IROS2021] Kenji Koide
Adaptive Hyper-Parameter Tuning for Black-box LiDAR Odometry
Kenji Koide, Masashi Yokozuka, Shuji Oishi, and Atsuhiko Banno
Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2021), pp. 7708-7714, Prague, Czech Republic, Sep., 2021
https://staff.aist.go.jp/k.koide/
This presentation on Process Analysis using 3D plots was given at Emerson Exchange, 2010. Details are provided on a field trial in which the DeltaV historian was modified to support array parameters and a web-enabled interface was used to provide a 3D plot of array data. Information is provided on how this was used to look at absorber column and stripper column temperature profiles.
University of Victoria Talk - Metocean analysis and Machine Learning for Impr...Aaron Barker
A presentation given on Metocean analysis and Machine Learning for improved estimates of energy production in WECs by Aaron Barker at the University of Victoria on the 7th of December 2017
The concepts related to the new model of the River Adige, and especially an analysis of the existing, ready-to-use OMS components and their interpretation on the basis of travel-time approaches.
Backscatter Working Group Software Inter-comparison ProjectRequesting and Co...Giuseppe Masetti
Backscatter mosaics of the seafloor are now routinely produced from multibeam sonar data, and used in a wide range of marine applications. However, significant differences (up to 5 dB) have been observed between the levels of mosaics produced by different software processing the same dataset. This is a major detriment to several possible uses of backscatter mosaics, including quantitative analysis, monitoring seafloor change over time, and combining mosaics. A recently concluded international Backscatter Working Group (BSWG) identified this issue and recommended that “to check the consistency of the processing results provided by various software suites, initiatives promoting comparative tests on common data sets should be encouraged […]”. However, backscatter data processing is a complex (and often proprietary) sequence of steps, so simply comparing end-results between software does not provide much information as to the root cause of the differences between results.
In order to pinpoint the source(s) of inconsistency between software, it is necessary to understand at which stage(s) of the data processing chain the differences become substantial. We invited willing software developers to discuss this framework and collectively adopt a list of intermediate processing steps. We provided a small dataset consisting of various seafloor types surveyed with the same multibeam sonar system, using constant acquisition settings and sea conditions, and asked the software developers to generate these intermediate processing results, to be eventually compared. If the experiment proves fruitful, we may extend it to more datasets, software and intermediate results. Eventually, software developers may consider making the results from intermediate stages a standard output, as well as adhering to a consistent terminology, as advocated by Schimel et al. (2018). To date, the developers of four software packages (Sonarscope, QPS FMGT, CARIS SIPS, MB Process) have expressed their interest in collaborating on this project.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The calculation HTML code is included.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue of reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Welcome to WIPAC Monthly, the magazine brought to you by the LinkedIn group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news, and to celebrate the 13 years since the group was created, we have articles including:
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim on the adoption of digital transformation in the water industry.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks (CNNs), to adversarial attacks and presents a proactive training technique designed to counter them. We introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations. When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10 and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing accuracy improvements over previous techniques. The results indicate that the combination of the volumetric input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating adversarial training.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner, and customers increasingly expect to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows users to view the various products available, enables registered users to purchase desired products instantly using the Paytm or UPI payment processors (Instant Pay), and also lets them place an order using the Cash on Delivery (Pay Later) option. This project provides easy access for administrators and managers to view orders placed using the Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Student information management system project report ii.pdfKamal Acharya
Our project explains about student management. It mainly covers the various actions related to student details: it makes it easy to add, edit and delete student details, and it provides a less time-consuming process for viewing, adding, editing and deleting the marks of the students.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The system includes various function programs to carry out the tasks mentioned above.
Data file handling has been used effectively in the program.
The automated cosmetic shop management system should deal with the automation of the general workflow and administration process of the shop. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It should help the employees to quickly identify the cosmetic products that have reached their minimum quantity, keep track of the expiry date of each cosmetic product, and find the rack number in which a product is placed. It is also a faster and more efficient way of working.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical and economic.
Parameter estimation of distributed hydrological model using polynomial chaos expansion
1. Study Report
M2 - Putika Ashfar Khoiri
Water Engineering Laboratory
Department of Civil Engineering
April 26th, 2018
2. Contents - Plan for Master's Thesis
Master's thesis title: A Study of Parameter Estimation of a Distributed Hydrological Model (DHM) to Increase Simulation Accuracy using Polynomial Chaos Expansion (PCE)
Chapter 1: Introduction to research background, objectives and methodology
Chapter 2: Data and Study Area
  1. Conditions of the Ibo River
  2. Description of the precipitation data used (AMeDAS and X-RAIN)
Chapter 3: Parameter Optimization of the Distributed Hydrological Model
  1. Outline of the distributed hydrological model
  2. Parameter estimation of the hydrological model
  3. Selection of distributed hydrological model parameters
  4. PCE setup for the DHM
  5. Calculation conditions and calculation period
Chapter 4: Parameter Optimization Results
  1. Sensitivity analysis results
  2. Parameter optimization results from Polynomial Chaos Expansion (PCE)
  3. Reproduced calculation from PCE
  4. Validation results
Chapter 5: Results Analysis and Discussion
  1. Period dependence of optimal parameter values
  2. Spatial differences in optimal parameter values
Chapter 6: Conclusion and Recommendations
3. Background, objectives and methodology
Background: because of the spatial heterogeneity of a distributed hydrological model, determining its inputs and parameter settings is difficult.
Simulation flow: the inputs and parameters x = [x1, x2, ..., xn] are fed into the distributed hydrological model (set-up and calculation conditions), which produces the simulated discharge y = f(x) = [y1, y2, ..., yn]. The result is validated against the observed discharge, and parameter optimization improves the result.
Data assimilation improves model reproducibility by reducing the uncertainty in the model inputs and model parameters, and it is required to produce better model forecast ability.
4. Background, objectives and methodology
Research objective: construct a parameter estimation system using PCE for a DHM and evaluate its applicability in the Ibo River watershed.
Background: there are three broad categories of data assimilation:
1. Variational techniques (3D-Var, 4D-Var)
2. Monte-Carlo-based techniques (EnKF, particle filter, Markov chain Monte Carlo, etc.)
3. Emulator techniques (Polynomial Chaos Expansion (PCE))
Advantage of PCE: less costly and more effective than Monte-Carlo techniques, which rely on random sampling.
Disadvantage of PCE: only two parameters can be optimized per calculation condition, so the optimum may vary with spatial differences and time differences.
The PCE approach is computationally cheaper than Monte Carlo, but it is rarely used for parameter optimization in hydrological simulation.
[Figure: example of a Monte Carlo simulation shown as a contour plot of the Nash-Sutcliffe efficiency response (Khu and Werner, 2003)]
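The emulator idea can be illustrated with a minimal sketch (the quadratic stand-in for the model and all names below are hypothetical, not the thesis model): evaluate the expensive model at a small grid of points in a two-parameter space, fit a low-order polynomial surrogate to the resulting RMSE values, and then minimize the cheap surrogate instead of re-running the model.

```python
import numpy as np

# Hypothetical stand-in for an expensive hydrological run: returns an
# RMSE-like score for a two-parameter setting (K3, S1). In practice this
# would call the distributed hydrological model.
def model_rmse(k3, s1):
    return (k3 - 5.0) ** 2 / 4.0 + (s1 - 1.7) ** 2 + 1.5

# Sample the parameter space on a small tensor grid (quadrature-style nodes).
k3_nodes = np.linspace(1.5, 12.0, 5)
s1_nodes = np.linspace(0.36, 2.88, 5)
K3, S1 = np.meshgrid(k3_nodes, s1_nodes)
Z = model_rmse(K3, S1)

# Fit a degree-2 polynomial surrogate:
# rmse ~ c0 + c1*k3 + c2*s1 + c3*k3^2 + c4*k3*s1 + c5*s1^2
A = np.column_stack([np.ones(K3.size), K3.ravel(), S1.ravel(),
                     K3.ravel() ** 2, K3.ravel() * S1.ravel(), S1.ravel() ** 2])
coef, *_ = np.linalg.lstsq(A, Z.ravel(), rcond=None)

# Minimize the surrogate on a fine grid (cheap compared to re-running the model).
k3_fine = np.linspace(1.5, 12.0, 200)
s1_fine = np.linspace(0.36, 2.88, 200)
Kf, Sf = np.meshgrid(k3_fine, s1_fine)
Af = np.column_stack([np.ones(Kf.size), Kf.ravel(), Sf.ravel(),
                      Kf.ravel() ** 2, Kf.ravel() * Sf.ravel(), Sf.ravel() ** 2])
surrogate = (Af @ coef).reshape(Kf.shape)
i, j = np.unravel_index(np.argmin(surrogate), surrogate.shape)
print(round(Kf[i, j], 2), round(Sf[i, j], 2))  # close to the true optimum (5.0, 1.7)
```

The same pattern scales to proper PCE bases and quadrature nodes; the surrogate is only trustworthy inside the sampled parameter ranges.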
5. Background, objectives and methodology
Methodology:
1. List the DHM parameters and their ranges.
2. Conduct a sensitivity analysis for those parameters, based on each model efficiency criterion (RMSE, NSE, R2).
3. Select the parameters.
4. Put the parameters into the PCE.
5. Check the reproducibility of the calculation at the observation discharge points and at the discharge peaks.
6. Analyse the results by spatial difference and by calculation-period difference.
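The sensitivity-analysis step above can be sketched as a one-at-a-time screening loop (the parameter subset and the scoring function below are placeholders; a real run would invoke the DHM and score it against observed discharge):

```python
# One-at-a-time sensitivity screening: perturb each parameter to 0.5x and
# 1.5x its original value while keeping the others fixed, and record the
# model efficiency score for each perturbed run.
original = {"K3": 6.0, "S1": 1.44, "P3": 0.6}  # subset of DHM parameters

def run_model_rmse(params):
    # Placeholder for a full DHM simulation scored against observed discharge.
    return sum((v - original[k]) ** 2 for k, v in params.items())

sensitivity = {}
for name in original:
    for label, factor in (("lower", 0.5), ("upper", 1.5)):
        params = dict(original)          # all other parameters stay at defaults
        params[name] = factor * original[name]
        sensitivity[(name, label)] = run_model_rmse(params)

# Parameters whose perturbation changes the score the most are the most sensitive.
ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
print(ranked[0])
```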
6. Selection of DHM Parameters
Parameter candidates that I want to estimate.
Lower boundary value = 0.5 x original parameter value
Upper boundary value = 1.5 x original parameter value

Parameter name                                                   Symbol   Units                Range value     Original value
林内雨量比率 (rate of rainfall in the forest)                     α1       -                    0.46-1.215      0.81
樹幹流比率 (rate of stem flow)                                   α2       -                    0.055-0.137     0.11
タンクI(樹冠部)の最大貯留水深 (max. storage of TANK 1, canopy)   S1max    mm                   0.72-2.16       1.44
タンクII(樹幹部)の最大貯留水深 (max. storage of TANK 2, trunk)   S2max    mm                   0.265-0.795     0.53
タンクIII(林地系表層部)の貯留定数 (storage constant of TANK 3)   K3       hr                   3.00-9.00       6.00
タンクIV(林地系下層部)の貯留定数 (storage constant of TANK 4)    K4       mm^(23/25)·h^(2/25)  50.00-150.00    100.00
タンクIIIの貯留べき定数 (storage power exponent of TANK 3)       P3       -                    0.50-1.50       0.6
タンクIVの貯留べき定数 (storage power exponent of TANK 4)        P4       -                    0.04-0.12       0.08
流出寄与率16%となる有効土層深 (effective soil depth at 16% runoff contribution)  D16  mm       5.00-15.00      10.00
流出寄与率50%となる有効土層深 (effective soil depth at 50% runoff contribution)  D50  mm       25.00-75.00     50.00
A層の透水係数 (hydraulic conductivity of layer A)                k        cm/sec               0.15-0.45       0.30
A層の有効間隙率 (effective porosity of layer A)                  γ        -                    0.10-0.30       0.20
A層の厚さ (layer A thickness)                                    D        mm                   100.00-300.00   200.00
7. Selection of DHM Parameters (1)
Calculation conditions for the initial calculation and sensitivity analysis:
- Model domain: Ibo River watershed in Hyogo
- Grid resolution: 2 km x 2 km (307 grids)
- Spin-up calculation: Jan 1, 2015 to April 30, 2015
- Calculation period (1): May 1, 2015 to July 15, 2015
- Rainfall data: AMeDAS, X-RAIN
- Temperature data: JMA data (monthly data, 3 points)
- Observed discharge: at Kamigawara (上川原) station (hourly observation)
- Time step (Δt): 0.0005

The statistical metrics I used for model efficiency evaluation:

Nash-Sutcliffe coefficient:
NS = 1 - \frac{\sum_{i=1}^{n} (Q_{o,i} - Q_{s,i})^2}{\sum_{i=1}^{n} (Q_{o,i} - \bar{Q}_o)^2}

Root mean square error:
RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (Q_{s,i} - Q_{o,i})^2}

Coefficient of determination:
R^2 = \frac{\left[ \sum_{i=1}^{n} (Q_{o,i} - \bar{Q}_o)(Q_{s,i} - \bar{Q}_s) \right]^2}{\sum_{i=1}^{n} (Q_{o,i} - \bar{Q}_o)^2 \sum_{i=1}^{n} (Q_{s,i} - \bar{Q}_s)^2}

where Q_{s,i} is the simulated discharge and Q_{o,i} the observed discharge. I used these performance criteria because they are commonly used to evaluate model runoff behaviour in hydrological models.

[Map: observation stations: 上川原 (Kamigawara), 構 (Kamae), 龍野 (Tatsuno), 東栗栖 (Higashi-Kurisu), 山崎 (Yamazaki), 塩野 (Shiono), 曲里 (Magari)]
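The three efficiency criteria above translate directly into code; here is a minimal NumPy sketch of the formulas (an illustration, not code from the study):

```python
import numpy as np

def nse(q_obs, q_sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def rmse(q_obs, q_sim):
    """Root mean square error, in the units of discharge."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return np.sqrt(np.mean((q_sim - q_obs) ** 2))

def r2(q_obs, q_sim):
    """Coefficient of determination as the squared correlation of obs and sim."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    do, ds = q_obs - q_obs.mean(), q_sim - q_sim.mean()
    return np.sum(do * ds) ** 2 / (np.sum(do ** 2) * np.sum(ds ** 2))

# Small made-up hydrographs for illustration.
obs = [10.0, 25.0, 60.0, 30.0, 12.0]
sim = [12.0, 22.0, 55.0, 33.0, 14.0]
print(nse(obs, sim), rmse(obs, sim), r2(obs, sim))
```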
8. Annual average data of AmeDAS and X-RAIN

[Maps: annual average precipitation from X-RAIN and AmeDAS; AMeDAS rainfall observation points: 若桜 (Wakasa), 佐用 (Sayou), 上郡 (Kamigori), 一宮 (Ichinomiya), 姫路 (Himeji)]

Average distance weighting:

X = \frac{\sum_i w_i x_i}{\sum_i w_i}

where X is the annual average precipitation (mm/y), w_i is the number of grids in area i, and x_i is the precipitation data in grid area i (mm/y).
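The grid-count weighting reduces to a standard weighted mean; a small sketch follows (the station precipitation values and grid counts below are invented for illustration):

```python
# Area-weighted annual precipitation: each AMeDAS station represents the
# grid cells closest to it, so its weight is the number of grids in its area.
stations = {           # station: (annual precipitation x_i in mm/y, grid count w_i)
    "Wakasa":     (2100.0, 40),
    "Sayou":      (1600.0, 80),
    "Kamigori":   (1400.0, 60),
    "Ichinomiya": (1900.0, 70),
    "Himeji":     (1300.0, 57),
}

total_w = sum(w for _, w in stations.values())
X = sum(x * w for x, w in stations.values()) / total_w  # weighted annual average (mm/y)
print(round(X, 1))
```

The invented grid counts sum to 307, matching the model grid, so every grid cell contributes exactly once to the average.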
9. Selection of DHM Parameters (1)
Comparison of simulation results using X-RAIN and AmeDAS data at Kamigawara station without any parameter changes (May 1, 2015 to July 15, 2015):

        X-RAIN    AmeDAS
NSE     0.675     0.772
RMSE    13.373    15.961
R2      0.829     0.897

In terms of RMSE, the simulated discharge from the X-RAIN data has a lower value than that from the AmeDAS data. Since the statistical metric we will use in the PCE is the RMSE, I want to use the X-RAIN data for the next sensitivity analysis.
10. Selection of DHM Parameters (1)
Sensitivity analysis results (May 1, 2015 to July 15, 2015, Kamigawara station):
For the upper boundary, S1 has the lowest RMSE and the highest NSE and R2 in the sensitivity analysis using X-RAIN data at Kamigawara station; for the lower boundary, K3 does.
The result of this sensitivity analysis may change with a different assimilation period and different observation points.
[Bar charts: NSE, RMSE and R2 for the upper and lower boundary values of each parameter (α1, α2, S1, S2, K3, K4, P3, P4, D16, D50, k, γ, D)]
11. Selection of DHM Parameters (1)
Comparison of simulation results using X-RAIN and AmeDAS data at Kamigawara station without any parameter changes:

Largest discharge period (June 20, 2015 to August 15, 2015):
        X-RAIN    AmeDAS
NSE     0.927     0.840
RMSE    22.661    23.137
R2      0.974     0.948

Medium discharge period (August 20, 2015 to Sept 20, 2015):
        X-RAIN    AmeDAS
NSE     0.495     0.521
RMSE    17.512    16.261
R2      0.720     0.782

Each efficiency criterion has a specific effect on the simulation result for high-flow and low-flow conditions.
12. Selection of DHM Parameters (1)
Sensitivity analysis results (June 20, 2015 to August 15, 2015, Kamigawara station):
[Bar charts: NSE, RMSE and R2 for the upper and lower boundary values of each parameter (α1, α2, S1, S2, K3, K4, P3, P4, D16, D50, k, γ, D)]
13. Selection of DHM Parameters (1)
Sensitivity analysis results (August 20, 2015 to Sept 20, 2015, Kamigawara station):
[Bar charts: NSE, RMSE and R2 for the upper and lower boundary values of each parameter (α1, α2, S1, S2, K3, K4, P3, P4, D16, D50, k, γ, D)]
14. PCE setup for parameter optimization (1)
- Calculation period (1): May 1, 2015 to July 15, 2015
- Precipitation data: X-RAIN data
- Observed discharge: at Kamigawara (上川原) station (hourly observation)

Parameter name                 Range value     Original value
storage constant TANK 3 (K3)   1.5 to 12       6.00
max. storage TANK 1 (S1)       0.36 to 2.88    1.44

[Plots: RMSE response surfaces over the (K3, S1) parameter space]

Wider changes in the parameter ranges can have a strong effect on the polynomial interpolation, which in turn affects the layout of the quadrature points in parameter space.
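The remark about quadrature-point layout can be made concrete: PCE collocation typically places Gauss-Legendre nodes inside each parameter range, so widening a range stretches the same nodes over a larger interval. A sketch using NumPy's Gauss-Legendre rule (the node count of 5 is an arbitrary choice, not from the study):

```python
import numpy as np

def gauss_legendre_nodes(lo, hi, n):
    """Map n Gauss-Legendre nodes from [-1, 1] onto the parameter range [lo, hi]."""
    xi, _ = np.polynomial.legendre.leggauss(n)
    return 0.5 * (hi - lo) * xi + 0.5 * (hi + lo)

# Nodes for the two parameters optimized here.
k3_nodes = gauss_legendre_nodes(1.5, 12.0, 5)   # storage constant TANK 3
s1_nodes = gauss_legendre_nodes(0.36, 2.88, 5)  # max. storage TANK 1

# Widening the K3 range spreads the same five nodes over a larger interval,
# which changes where the expensive model is evaluated.
k3_wide = gauss_legendre_nodes(0.5, 24.0, 5)
print(np.round(k3_nodes, 2))
print(np.round(k3_wide, 2))
```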
15. PCE optimization results (1)
- Calculation period (1): May 1, 2015 to July 15, 2015
- Precipitation data: X-RAIN data
- Observed discharge: at Kamigawara (上川原) station (hourly observation)

Optimum parameters (RMSE-based):
Parameter name                 Value
storage constant TANK 3 (K3)   4.97
max. storage TANK 1 (S1)       1.75

[Plots: RMSE response surface and the calculated discharge using the optimum parameters (uniform distribution)]
16. Selection of DHM Parameters (2)
Sensitivity analysis results (May 1, 2015 to July 15, 2015, Tatsuno station):
[Bar charts: NSE, RMSE and R2 for the upper and lower boundary values of each parameter (α1, α2, S1, S2, K3, K4, P3, P4, D16, D50, k, γ, D)]
The upper-boundary simulation result for parameter K3 indicates that the simulation fits the observation data well, with a high coefficient of determination. Other sensitivity-analysis results, for NSE and R2, may have a different impact on the simulation result.
17. PCE optimization results (2)
- Calculation period (1): May 1, 2015 to July 15, 2015
- Precipitation data: X-RAIN data
- Observed discharge: at 龍野 (Tatsuno) station (hourly observation)

Parameter name                         Range value   Original value   Optimum value
storage constant TANK 4 (K4)           25 to 400     100              170
16% particle size distribution (D16)   2.5 to 40     10               21.00

[Plots: RMSE response surface and the calculated discharge using the optimum parameters]
18. Considerations
1. The most frequently used efficiency coefficients for hydrological models are the Nash-Sutcliffe efficiency and the coefficient of determination (R2), which may give different performance relative to peak-flow or low-flow conditions.
2. An efficiency coefficient may work differently with the different precipitation data used for the simulation.
3. The sensitivity-analysis result changes with every simulation setting (location of the observation data and assimilation period), so it will not lead to a parameter value of general use for forecasting.

Planned PCE runs:
- Same points, different periods: PCE 1, PCE 2 and PCE 3 evaluate NSE, R2 and RMSE for periods 1, 2 and 3, respectively.
- Same period, different points: PCE 4, PCE 5 and PCE 6 evaluate NSE, R2 and RMSE for points 1, 2 and 3, respectively.
19. Future Tasks
- Find the difference between the X-RAIN and AmeDAS data.
- Write chapters 1-2 of the master's thesis.
- Check the PCE results for other observation points and other assimilation periods.
- For the assimilation period May 1, 2015 to July 15, 2015, perform PCE for the NSE value and the coefficient of determination from the sensitivity analysis at all observation points.
- Determine the calculation settings for simulation validation.