This document provides guidance on compiling rainfall data from various time intervals into longer standardized durations. It discusses aggregating hourly data into daily totals, daily data into weekly, ten-daily, monthly, and yearly totals. Methods are presented for arithmetic averaging and Thiessen polygons to estimate areal rainfall from point measurements. Guidance is also given on transforming non-equidistant time series into equidistant series and compiling extreme rainfall statistics. The document aims to help hydrologists properly process rainfall observations for validation, reporting, and analysis.
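Two of the compilation steps described above can be sketched in a few lines, assuming invented station names and data: aggregating hourly rainfall to daily totals, and estimating areal rainfall as the arithmetic average of the point measurements.

```python
import numpy as np
import pandas as pd

# Hourly point rainfall (mm) at three hypothetical stations.
rng = pd.date_range("2020-07-01", periods=48, freq="h")
rain = pd.DataFrame(
    np.random.default_rng(0).gamma(0.3, 2.0, size=(48, 3)),
    index=rng, columns=["STN_A", "STN_B", "STN_C"],
)

# Hourly -> daily totals: rainfall is an accumulated quantity, so we sum.
daily = rain.resample("D").sum()

# Areal rainfall by simple arithmetic averaging of the point values.
areal_daily = daily.mean(axis=1)
print(daily.round(1))
print(areal_daily.round(1))
```

The same `resample` pattern extends to weekly, ten-daily, monthly, and yearly totals; Thiessen polygons would replace the plain mean with area-weighted coefficients.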
This document describes a training module on how to carry out secondary validation of rainfall data. It discusses various methods for validating rainfall data, including screening data series, scrutinizing multiple time series graphs and tabulations, checking against data limits for longer durations, spatial homogeneity testing, and identifying common errors. The module provides examples of applying these validation methods to rainfall data from the Kheda catchment in India. It aims to teach participants how to perform secondary validation of rainfall data to identify suspect values by making comparisons with neighboring stations.
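The spatial homogeneity test mentioned above can be sketched as follows: estimate a test station's rainfall from its neighbours by inverse-distance weighting and flag the observation when it departs too far from that estimate. The station values, distances, and thresholds here are illustrative, not taken from the module.

```python
import numpy as np

def idw_estimate(neighbor_values, distances, power=2.0):
    """Inverse-distance-weighted estimate from neighbour observations."""
    w = 1.0 / np.asarray(distances, dtype=float) ** power
    return float(np.sum(w * np.asarray(neighbor_values, dtype=float)) / np.sum(w))

def is_suspect(observed, estimated, abs_tol=25.0, rel_tol=0.5):
    """Flag a value whose deviation from the spatial estimate is large
    both absolutely (mm) and relative to the estimate."""
    dev = abs(observed - estimated)
    return dev > abs_tol and dev > rel_tol * max(estimated, 1.0)

# Neighbouring stations reported 42, 38 and 45 mm at 12, 18 and 25 km.
est = idw_estimate([42.0, 38.0, 45.0], [12.0, 18.0, 25.0])
print(round(est, 1), is_suspect(120.0, est))
```

A reading of 120 mm against a spatial estimate near 41 mm would be flagged as suspect and set aside for further scrutiny rather than automatically rejected.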
This document provides guidance on entering water level data into a hydrological data processing system. It describes how staff gauge readings, readings from autographic charts, and digital records are checked before being entered. Data can be entered for multiple daily readings, or hourly readings. Formats are provided for entering this data along with date, station information, and data limits. Graphs can then be generated to check the data entry against original records. The system also checks that computed statistics match those entered from source documents.
This document provides guidance on how to compile discharge data, including:
1. Aggregating data to longer time intervals through arithmetic averaging or summation.
2. Calculating volumes in cubic meters and runoff depth in millimeters from discharge data and catchment area.
3. Extracting maximum and minimum values over various time periods like days, months, or years for analyses.
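Items 2 and 3 above can be illustrated with a short sketch, using an invented discharge series and catchment area: convert a mean discharge to a volume and a runoff depth, then extract the period extremes.

```python
import numpy as np

def runoff_volume_m3(mean_discharge_m3s, seconds):
    """Volume (m3) = mean discharge (m3/s) x duration (s)."""
    return mean_discharge_m3s * seconds

def runoff_depth_mm(volume_m3, catchment_area_km2):
    """Depth (mm) = volume (m3) / area (m2), converted to millimetres."""
    return volume_m3 / (catchment_area_km2 * 1e6) * 1000.0

daily_q = np.array([12.0, 35.0, 80.0, 22.0, 9.0])  # mean daily discharge, m3/s
volume = runoff_volume_m3(daily_q.mean(), 5 * 86400)
depth = runoff_depth_mm(volume, catchment_area_km2=450.0)
print(round(volume), round(depth, 1), daily_q.max(), daily_q.min())
```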
Presentation of Four Centennial-long Global Gridded Datasets of the Standardized Precipitation Index (SPI) (Agriculture Journal IJOEAR)
Abstract— In this article four global gridded datasets of the Standardized Precipitation Index (SPI) are presented. They are computed from four different data sources: UDEL/GEOG/CCR v3.02, GPCC/ v7.0, NOAA-CIRES 20CR v2c and ECMWF ERA-20C, each covering more than a century-long period. The SPI is calculated for the most frequently used time windows of 1, 3, 6, and 12 months. UDEL/GEOG/CCR v3.02 and GPCC/ v7.0 are used in their highest native resolution of 0.5×0.5°, whilst NOAA-CIRES 20CR v2c and ECMWF ERA-20C are interpolated to 1.5×1.5° and 0.5×0.5° respectively. In contrast to some other indices, for example the popular Palmer Drought Severity Index (PDSI), the SPI has significant advantages such as simplicity, suitability for variable time scales, and robustness rooted in a solid theoretical development. The SPI has been selected by the World Meteorological Organization (WMO) as a key indicator for monitoring drought (the 'Lincoln declaration'). As a result, drought monitoring centres worldwide are effectively exploiting this index, and the National Meteorological and Hydrological Services (NMHSs) are encouraged to use it for monitoring meteorological droughts. These facts, and the strong conviction of the authors that the free exchange of data and software services is a basis of effective scientific collaboration, are the main motivators for providing these datasets free of charge at ftp://xeo.cfd.meteo.bg/SPI/. The paper briefly presents some possible applications of the SPI data, revealing its suitability for various objective long-term drought studies at any geographical location.
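The standard SPI recipe the abstract refers to can be sketched briefly: fit a gamma distribution to a sample of accumulated precipitation totals and map each value through the fitted CDF to a standard normal quantile. Operational SPI implementations also treat zero-precipitation periods separately; this minimal sketch assumes strictly positive totals and synthetic data.

```python
import numpy as np
from scipy import stats

def spi(precip_totals):
    """SPI values for a sample of (positive) n-month precipitation totals."""
    a, loc, scale = stats.gamma.fit(precip_totals, floc=0)  # fix location at 0
    cdf = stats.gamma.cdf(precip_totals, a, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)  # equi-probability transform to N(0, 1)

rng = np.random.default_rng(42)
sample = rng.gamma(shape=2.0, scale=40.0, size=360)  # ~30 years of monthly totals
z = spi(sample)
print(z.mean().round(2), z.std().round(2))  # near 0 and 1 by construction
```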
Presentation given by Peter Gibbs, Met Office and BBC broadcast meteorologist, as part of the EDINA Geoforum 2014 event on Thursday 19th June 2014 at the Informatics Forum, University of Edinburgh.
Presentation given by Darius Bazazi, GeoPlace, as part of the EDINA Geoforum 2014 event on Thursday 19th June 2014 at the Informatics Forum, University of Edinburgh.
1. The study develops intensity-duration-frequency (IDF) curves for Lahore City, Pakistan using rainfall data from 1960-2014. Four probability distributions (Normal, Log-normal, Gumbel, and Log-Pearson) are used to derive the IDF curves.
2. Rainfall intensities are calculated for durations of 1 to 24 hours using an Indian meteorological formula. IDF curves are plotted for return periods of 2, 5, 10, 25, and 50 years.
3. Empirical formulas are derived for each distribution by converting the rainfall intensity equation to linear form and calculating parameters. Goodness of fit is tested using chi-square values between empirical formulas and distribution values. The Log-normal
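Two ingredients of such an IDF study can be sketched with invented sample statistics: the Gumbel frequency-factor quantile for a given return period, and the commonly cited IMD empirical reduction formula for short-duration rainfall, P_t = P_24 (t/24)^(1/3).

```python
import math

def gumbel_quantile(mean, std, return_period_yr):
    """x_T = mean + K_T * std with the Gumbel frequency factor K_T."""
    k = -(math.sqrt(6.0) / math.pi) * (
        0.5772 + math.log(math.log(return_period_yr / (return_period_yr - 1.0)))
    )
    return mean + k * std

def imd_short_duration(p24_mm, t_hours):
    """IMD empirical reduction of a 24 h depth to a shorter duration."""
    return p24_mm * (t_hours / 24.0) ** (1.0 / 3.0)

# Example: annual-maximum 24 h rainfall with mean 85 mm and std 30 mm.
for T in (2, 5, 10, 25, 50):
    print(T, round(gumbel_quantile(85.0, 30.0, T), 1))
print(round(imd_short_duration(100.0, 3.0), 1))  # 3 h depth from a 100 mm day
```

Dividing each reduced depth by its duration gives the intensity, from which the IDF curve for each return period is plotted.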
Climate Change Impact Assessment on Hydrological Regime of Kali Gandaki Basin (HI-AWARE)
The presentation focuses on findings on the impact of climate change on the hydrological regime and water balance components of the Kali Gandaki basin in Nepal. The Soil and Water Assessment Tool (SWAT) has been used to generate the future projections.
Progressive Improvements in basic Intensity-Duration-Frequency Curves Derivin... (IRJET Journal)
This document reviews different methods used to develop intensity-duration-frequency (IDF) curves, which relate rainfall intensity, duration, and return period. Early methods treated IDF relationships as "black boxes" without physical understanding. Later approaches model intensity as the single important variable or model both intensity and duration jointly using copula statistics. The copula method facilitates analysis of multiple random variables and their joint effects. It has been shown to better explain the physical rainfall phenomenon by incorporating the negative correlation between intensity and duration. Overall, the reviewed methods have progressively improved the physical basis and statistical representation of IDF curves over time.
This document summarizes a study on the impact of climate change on water availability in the Oebobolili Bawatershed in Kupang City, Indonesia under two climate change scenarios, RCP 2.6 and RCP 8.5. The study finds that under the RCP 2.6 scenario, temperatures are projected to increase by 0.86°C from 2046-2065 and 2.25°C from 2081-2100, leading to higher evapotranspiration and reduced rainfall runoff. Under the RCP 8.5 scenario, temperatures are projected to increase by 0.83°C and 2.13°C in the same time periods, resulting in even lower rainfall runoff.
Revised Intensity-Frequency-Duration (IFD) design rainfall estimates for wa... (Engineers Australia)
This document summarizes the revision of Intensity-Frequency-Duration (IFD) design rainfalls for Australia. A team updated the IFDs using a larger database that includes sub-daily rainfall data. They quality controlled the data, tested different frequency distributions, and derived short duration estimates using Bayesian modeling. The revised IFDs will be disseminated online and provide rainfall depths for standard durations, exceedance probabilities, and incorporate climate change considerations through ongoing research.
This document discusses geodetic control in Wisconsin. It provides information on the National Geodetic Survey's (NGS) mission to define and maintain the National Spatial Reference System. The current horizontal and vertical datums are NAD 83 and NAVD 88, respectively. The accuracy of positional data has improved over time through adjustments to these datums. The document demonstrates obtaining geodetic control data using NGS's DSWorld software and Google Earth. It notes that a new NAD 83 adjustment, NAD 83(2011), will be released by the end of 2011.
On March 11, 2016, ICLR held a Friday Forum workshop entitled 'Mapping extreme rainfall statistics for Canada', led by Dr. Slobodan Simonovic of Western University.
Climate change is expected to increase the frequency and intensity of extreme rainfall events, affecting the rainfall intensity-duration-frequency (IDF) curve information used in the design, maintenance and operation of water infrastructure in Canada. This lecture presents analyses of precipitation data from 567 Environment Canada hydro-meteorological stations using the IDF_CC tool. Results for the year 2100 have been generated with the Canadian climate model and with an ensemble of 22 GCMs. A spatial interpolation method was used to produce Canadian precipitation maps for events of various return periods. Results based on the Canadian climate model indicate a reduction in extreme precipitation in the central regions of Canada and increases elsewhere. Relative to the ensemble approach, the Canadian climate model results show more spatial variability in the change of IDFs and are generally higher, the ensemble approach producing lower values.
Dr. Simonovic has extensive research, teaching and consulting experience in water resources systems engineering. He teaches courses in water resources and civil engineering systems. He actively works for national and international professional organizations. Dr. Simonovic’s primary research interest focuses on the application of systems approach to management of complex water and environmental systems. Most of his work is related to the integration of risk, reliability, and uncertainty in hydrology and water resources management. He has received a number of awards for excellence in teaching, research and outreach. He has published over 450 professional publications and three major textbooks. He was inducted to the Canadian Academy of Engineering in June of 2013.
This document outlines plans for the Showcase Climate project, which aims to expand current weather and climate services with seasonal forecast information from Copernicus. It will develop these services for sectors like climate, energy, forestry, urban resilience, transport, and tourism. Key activities include improving global carbon information, developing services on the WekEO DIAS platform using Copernicus data, and operationalizing user interfaces. The document describes several pilot projects covering topics like urban resilience, forestry conditions, hydropower, and seasonal preparedness. It provides timelines and key performance indicators for tracking the pilots' success.
In the first part of the talk, we present a sensitivity analysis of a novel sea ice model. neXtSIM is a continuous Lagrangian numerical model that uses an elasto-brittle rheology to simulate the ice response to external forces. The response of the model is evaluated in terms of simulated ice drift distances from its initial position and from the mean position of the ensemble. The simulated ice drift is decomposed into advective and diffusive parts that are characterized separately, both spatially and temporally, and compared with what is obtained from a free-drift model, i.e. one in which the ice rheology plays no role. Overall, the large-scale response of neXtSIM is correlated with the ice thickness and wind velocity fields, while the free-drift model response is mostly correlated with the wind velocity pattern only. The seasonal variability of the model sensitivity shows the role of ice compactness and rheology at both local and Arctic scales. Indeed, the ice drift simulated by neXtSIM in summer is close to that of the free-drift model, while the more compact and solid ice pack shows significantly different mechanical and drift behaviour in winter. In contrast to the free-drift model, neXtSIM reproduces the sea ice Lagrangian diffusion regimes found in observed trajectories. The forecast capability of neXtSIM is also evaluated using a large set of real buoy trajectories. We found that neXtSIM performs better in simulating sea ice drift, both in terms of forecast error and as a tool to assist search-and-rescue operations. Adaptive meshes, such as the one used in neXtSIM, are used to model a wide variety of physical phenomena. Some of these models, in particular those of sea ice movement, use a remeshing process to remove and insert mesh points at various points in their evolution.
This represents a challenge in developing compatible data assimilation schemes, as the dimension of the state space we wish to estimate can change over time when these remeshings occur.
In the second part of the talk, we highlight the challenges that such a modeling framework represents for data assimilation setup. We then describe a remeshing scheme for an adaptive mesh in one dimension. The development of advanced data assimilation methods that are appropriate for such a moving and remeshed grid is presented. Finally we discuss the extension of these techniques to two-dimensional models, like neXtSIM.
The document discusses the impact of climate change on snowmelt runoff in the Tamakoshi River basin in Nepal. It summarizes that rising temperatures due to climate change are causing Himalayan glaciers to retreat faster than the global average. This study uses a snowmelt runoff model to simulate snowmelt and runoff in the Tamakoshi basin, finding that stream flow is increasing with higher snowmelt contributions from rising temperatures. The model accurately simulates observed discharge data. Climate change simulations show stream flow and winter flow increasing approximately 3% and 8% respectively for every 1 degree Celsius of warming from increased melting of snow and glaciers in the basin.
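Snowmelt runoff models of the kind referenced above rest on the degree-day idea: daily melt depth is proportional to positive air temperature. A minimal, hedged sketch with an invented degree-day factor:

```python
def degree_day_melt(temp_c, ddf_mm_per_degc=4.0, base_temp_c=0.0):
    """Daily melt (mm) = degree-day factor x degree-days above the base."""
    return ddf_mm_per_degc * max(temp_c - base_temp_c, 0.0)

# Melt response of the index model to a warming sequence of daily temperatures.
temps = [-2.0, 1.5, 4.0, 6.5]
melt = [degree_day_melt(t) for t in temps]
print(melt)  # [0.0, 6.0, 16.0, 26.0]
```

The sensitivity the study reports (stream flow rising with each degree of warming) follows directly from this linear dependence of melt on temperature.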
This document discusses challenges and opportunities for using machine learning and data mining techniques on big climate data. It describes various types of climate and Earth observation data available from satellites and models. Research highlights are presented on using pattern mining to track ocean eddies, extreme value theory to study heatwaves and rainfall, and relationship mining to study seasonal hurricane activity. The challenges of analyzing multi-scale, heterogeneous climate data are also discussed.
We present a survey of computational and applied mathematical techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties.
Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
This problem represents an interesting opportunity for scientists and statisticians to collaborate since the problem is too big for either community. The science is not well established, although fairly sophisticated ice flow models exist. They are even becoming relevant to explain some of the complexity seen in observational data. At the same time, the complex phenomena we see in observations may not be particularly relevant to assessing the risks of significant increases in sea level rise over the near future. The talk will review what we have learned about this problem through the PISCEES SciDAC project. This problem is rich with challenges and opportunities, particularly for realigning how our two communities engage each other. The talk will review the computational, scientific, and mathematical "reality checks" that might stop any reasonable person from considering this topic further. I then will point out how each of these challenges could be mitigated if these different perspectives were better integrated.
Intensity-Duration-Frequency Curves and Regionalisation (AM Publications)
Storm sewers make up a large percentage of the drainage system in an urban setup. The design of these components is based on rainfall intensities of a specific design period for that location, which can be derived from the intensity-duration-frequency (IDF) relationship. These IDF relationships are derived from historical rainfall using an extreme value distribution for maximum rainfall intensity. In the present study, the IDF curves and parameter regionalisation were studied for various kinds of basins. The equation parameters can then be used to understand the spatial variation of rainfall intensity in the study area. The parameter contour maps subsequently generated using various interpolation methods are then used for plotting IDF curves for any ungauged station in the basin.
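The parameter-fitting step behind such IDF equations can be sketched with invented data: assume the common empirical form i = a * T**m / d**n, take logarithms so the relation becomes linear in log(a), m and n, and solve by least squares.

```python
import numpy as np

# Hypothetical intensities (mm/h) for pairs of return period T (yr) and
# duration d (h), generated from known parameters so the fit can be checked.
T = np.array([2, 2, 2, 10, 10, 10, 50, 50, 50], dtype=float)
d = np.array([1, 6, 24, 1, 6, 24, 1, 6, 24], dtype=float)
i_obs = 40.0 * T**0.20 / d**0.70

# log(i) = log(a) + m*log(T) - n*log(d): linear in the unknowns.
A = np.column_stack([np.ones_like(T), np.log(T), -np.log(d)])
coef, *_ = np.linalg.lstsq(A, np.log(i_obs), rcond=None)
a, m, n = np.exp(coef[0]), coef[1], coef[2]
print(round(a, 1), round(m, 2), round(n, 2))  # recovers 40.0, 0.20, 0.70
```

Mapping the fitted a, m, n across gauged stations and contouring them is what allows IDF curves to be read off for ungauged locations.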
The document provides a mid-term progress report on activities related to hydropower development and environmental flows in the Tamor River basin in Nepal. It summarizes the status of various sub-activities conducted by Nepal Engineering College, including collecting precipitation and temperature data, identifying hydropower projects, preparing maps, establishing hydrological monitoring equipment, developing an environmental flows syllabus, and conducting training programs. Many activities were partially completed or delayed due to issues like data limitations, field work cancellations from natural disasters, and developing partnerships with other organizations. The report concludes with a planned time schedule of remaining activities for 2015.
The document provides an overview of the World Bank Monitoring Mission for the Hydrology Project Phase II in India from May 06-09, 2014. It summarizes the key achievements and post-project plans for each of the implementing agencies. The agencies include 13 state organizations and 8 central agencies. The objectives of HP-II were to extend and promote the sustained use of hydrological information systems to improve water resources planning and management. The estimated cost was Rs. 631.83 crore with funding from the World Bank. Several agencies had completed construction of data centers, monitoring equipment installations, and pilot studies. Plans after the project included continuing maintenance and operations, staff training, and further developing applications.
This document provides operational details for groundwater data processing and analysis in India. It outlines the monitoring networks for water levels, quality, and hydro-meteorology. It describes the geological structures, soil types, typical groundwater issues, and the organizational setup of the responsible groundwater agency. The agency collects various dynamic data through monitoring networks to estimate groundwater resources and inform management recommendations in an annual groundwater yearbook.
The document provides information on the financial targets and achievements of a hydrological project in India. It summarizes that as of March 2014, expenditure was Rs. 304.959 crores out of the revised target of Rs. 399.808 crores. It also describes various components of the project including institutional strengthening activities conducted, the development of decision support systems and real-time data systems for river basins, and studies carried out on optimizing monitoring networks and evaluating the impacts of water allocation changes. Lessons learned included the need for stronger central-state linkages and continued consultant support to meet project goals.
The document provides information on study tour sites related to surface water, groundwater, hydrometeorology, and water quality in India. Sites are listed for Andhra Pradesh, Gujarat, and Karnataka and include river gauging stations, groundwater observation wells, and water resource management offices. Contact details and logistical information are provided for each site.
This document summarizes the progress and completion of the Odisha Hydrology Project-II. The key points are:
1) The project had a total revised cost of Rs. 13.46 crore and ran from April 2006 to May 2014 to strengthen surface water data collection and decision support systems in Odisha.
2) Financial progress shows that Rs. 891.04 crore was spent out of the total revised cost of Rs. 1346 crore. Major components included installing a real-time data acquisition system and developing decision support systems for drought monitoring and conjunctive surface and groundwater use.
3) Key achievements were establishing the concept for a real-time data acquisition system,
The document summarizes a review meeting for the Hydrology Project Phase II in Madhya Pradesh, India. The project involves establishing surface water and groundwater monitoring stations. For surface water, 24 river gauge stations and 52 meteorological stations were set up across three river basins. For groundwater, 3750 observation wells and 625 piezometer wells were established. The project period was from 2004-2014 with a total cost of Rs. 24.67 crores. Major achievements included upgrading monitoring stations, establishing new stations, and developing decision support systems for reservoir management and groundwater planning. Lessons learned and plans for continuing activities after the project are also discussed.
This document provides guidance on using regression analysis for data validation in hydrological data processing. It discusses simple linear regression, multiple linear regression, and stepwise regression. Regression analysis can be used to validate and fill in missing water level, rainfall, and discharge data. It establishes relationships between dependent and independent variables. Both linear and nonlinear regression models are used in hydrological applications. Key applications mentioned include rating curves, spatial interpolation of rainfall, and validating station data against nearby stations.
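The infilling use-case described above can be sketched minimally: regress a station's record on a neighbouring station and use the fitted line to estimate a missing value. The station data here are invented.

```python
import numpy as np

x = np.array([10.0, 25.0, 40.0, 55.0, 70.0])  # neighbour station rainfall, mm
y = np.array([12.0, 27.0, 44.0, 58.0, 75.0])  # target station rainfall, mm

slope, intercept = np.polyfit(x, y, 1)  # simple linear regression
r = np.corrcoef(x, y)[0, 1]             # check the relation is strong enough

estimate = slope * 32.0 + intercept     # fill a gap given neighbour read 32 mm
print(round(slope, 2), round(intercept, 2), round(r, 3), round(estimate, 1))
```

In practice the regression would only be applied when the correlation coefficient clears an acceptance threshold, and infilled values would be flagged as estimated rather than observed.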
This document provides guidance on validating rainfall data from different measurement instruments. It describes common rainfall measurement tools like daily rain gauges, autographic rain gauges, and tipping bucket rain gauges. It outlines potential errors from each tool and recommends comparing daily time series between tools to identify discrepancies. Discrepancies over 5% should be investigated further. Likely error sources are diagnosed based on patterns of discrepancies. The guidance aims to help validate rainfall data and make corrections to measurement tools or recorded values when necessary.
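The inter-instrument comparison described above can be sketched as follows: compute the relative discrepancy between the daily totals of two co-located instruments and flag days exceeding the 5% threshold. The values are invented.

```python
import numpy as np

daily_gauge = np.array([10.0, 0.0, 25.0, 40.0, 5.0])     # standard raingauge, mm
tipping_bucket = np.array([10.2, 0.0, 22.0, 39.5, 5.1])  # recording gauge, mm

with np.errstate(divide="ignore", invalid="ignore"):
    rel = np.abs(tipping_bucket - daily_gauge) / daily_gauge
rel = np.where(daily_gauge == 0, 0.0, rel)  # skip dry days

suspect_days = np.flatnonzero(rel > 0.05)
print(suspect_days)  # day index 2: |22 - 25| / 25 = 12% > 5%
```

Whether the tipping bucket or the daily gauge is at fault on the flagged day would then be diagnosed from the pattern of discrepancies, as the guidance describes.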
Progressive Improvements in basic Intensity-Duration-Frequency Curves Derivin...IRJET Journal
This document reviews different methods used to develop intensity-duration-frequency (IDF) curves, which relate rainfall intensity, duration, and return period. Early methods treated IDF relationships as "black boxes" without physical understanding. Later approaches model intensity as the single important variable or model both intensity and duration jointly using copula statistics. The copula method facilitates analysis of multiple random variables and their joint effects. It has been shown to better explain the physical rainfall phenomenon by incorporating the negative correlation between intensity and duration. Overall, the reviewed methods have progressively improved the physical basis and statistical representation of IDF curves over time.
This document summarizes a study on the impact of climate change on water availability in the Oebobolili Bawatershed in Kupang City, Indonesia under two climate change scenarios, RCP 2.6 and RCP 8.5. The study finds that under the RCP 2.6 scenario, temperatures are projected to increase by 0.86°C from 2046-2065 and 2.25°C from 2081-2100, leading to higher evapotranspiration and reduced rainfall runoff. Under the RCP 8.5 scenario, temperatures are projected to increase by 0.83°C and 2.13°C in the same time periods, resulting in even lower rainfall runoff.
Revised intensity frequency-duration (ifd) design rainfalls estimates for wa ...Engineers Australia
This document summarizes the revision of Intensity-Frequency-Duration (IFD) design rainfalls for Australia. A team updated the IFDs using a larger database that includes sub-daily rainfall data. They quality controlled the data, tested different frequency distributions, and derived short duration estimates using Bayesian modeling. The revised IFDs will be disseminated online and provide rainfall depths for standard durations, exceedance probabilities, and incorporate climate change considerations through ongoing research.
This document discusses geodetic control in Wisconsin. It provides information on the National Geodetic Survey's (NGS) mission to define and maintain the National Spatial Reference System. The current horizontal and vertical datums are NAD 83 and NAVD 88, respectively. The accuracy of positional data has improved over time through adjustments to these datums. The document demonstrates obtaining geodetic control data using NGS's DSWorld software and Google Earth. It notes that a new NAD 83 adjustment, NAD 83(2011), will be released by the end of 2011.
On March 11, 2016, ICLR held a Friday Forum workshop entitled 'Mapping extreme rainfall statistics for Canada', led by Dr. Slobodan Simonovic of Western University.
Climate change is expected to increase the frequency and intensity of extreme rainfall events, affecting rainfall intensity-duration-frequency (IDF) curve information used in the design, maintenance and operation of water infrastructure in Canada. Presented in this lecture are analyses of precipitation data from 567 Environment Canada hydro-meteorological stations using the IDF_CC tool. Results for the year 2100 based on Canadian climate model and an ensemble of 22 GCMs have been generated. A spatial interpolation method was used to produce Canadian precipitation maps for events of various return periods. Results based on the Canadian climate model indicate a reduction in extreme precipitation in central regions of Canada and increases in other regions. Relative to the ensemble approach, the Canadian climate model results (a) suggest more spatial variability in change of IDFs, and (b) the ensemble approach generated generally lower values than the Canadian climate model.
Dr. Simonovic has extensive research, teaching and consulting experience in water resources systems engineering. He teaches courses in water resources and civil engineering systems. He actively works for national and international professional organizations. Dr. Simonovic’s primary research interest focuses on the application of systems approach to management of complex water and environmental systems. Most of his work is related to the integration of risk, reliability, and uncertainty in hydrology and water resources management. He has received a number of awards for excellence in teaching, research and outreach. He has published over 450 professional publications and three major textbooks. He was inducted to the Canadian Academy of Engineering in June of 2013.
This document outlines plans for the Showcase Climate project, which aims to expand current weather and climate services with seasonal forecast information from Copernicus. It will develop these services for sectors like climate, energy, forestry, urban resilience, transport, and tourism. Key activities include improving global carbon information, developing services on the WekEO DIAS platform using Copernicus data, and operationalizing user interfaces. The document describes several pilot projects covering topics like urban resilience, forestry conditions, hydropower, and seasonal preparedness. It provides timelines and key performance indicators for tracking the pilots' success.
In the first part of the talk, we will present a sensitivity analysis of a novel sea ice model. neXtSIM is a continuous Lagrangian numerical model that uses an elastobrittle rheology to simulate the ice response to external forces. The response of the model is evaluated in terms of simulated ice drift distances from its initial position and from the mean position of the ensemble. The simulated ice drift is decomposed into advective and diffusive parts that are characterized separately, both spatially and temporally, and compared to what is obtained with a free-drift model, i.e. when the ice rheology does not play any role. Overall, the large-scale response of neXtSIM is correlated to the ice thickness and wind velocity fields, while the free-drift model response is mostly correlated to the wind velocity pattern only. The seasonal variability of the model sensitivity shows the role of ice compactness and rheology at both local and Arctic scales. Indeed, the ice drift simulated by neXtSIM in summer is close to that of the free-drift model, while the more compact and solid ice pack shows a significantly different mechanical and drift behavior in winter. In contrast to the free-drift model, neXtSIM reproduces the sea ice Lagrangian diffusion regimes found in observed trajectories. The forecast capability of neXtSIM is also evaluated using a large set of real buoy trajectories. We found that neXtSIM performs better in simulating sea ice drift, both in terms of forecast error and as a tool to assist search-and-rescue operations. Adaptive meshes, such as the one used in neXtSIM, are used to model a wide variety of physical phenomena. Some of these models, in particular those of sea ice movement, use a remeshing process to remove and insert mesh points at various points in their evolution.
This represents a challenge in developing compatible data assimilation schemes, as the dimension of the state space we wish to estimate can change over time when these remeshings occur.
In the second part of the talk, we highlight the challenges that such a modeling framework represents for data assimilation setup. We then describe a remeshing scheme for an adaptive mesh in one dimension. The development of advanced data assimilation methods that are appropriate for such a moving and remeshed grid is presented. Finally we discuss the extension of these techniques to two-dimensional models, like neXtSIM.
The document discusses the impact of climate change on snowmelt runoff in the Tamakoshi River basin in Nepal. It summarizes that rising temperatures due to climate change are causing Himalayan glaciers to retreat faster than the global average. This study uses a snowmelt runoff model to simulate snowmelt and runoff in the Tamakoshi basin, finding that stream flow is increasing with higher snowmelt contributions from rising temperatures. The model accurately simulates observed discharge data. Climate change simulations show stream flow and winter flow increasing approximately 3% and 8% respectively for every 1 degree Celsius of warming from increased melting of snow and glaciers in the basin.
This document discusses challenges and opportunities for using machine learning and data mining techniques on big climate data. It describes various types of climate and Earth observation data available from satellites and models. Research highlights are presented on using pattern mining to track ocean eddies, extreme value theory to study heatwaves and rainfall, and relationship mining to study seasonal hurricane activity. The challenges of analyzing multi-scale, heterogeneous climate data are also discussed.
We present a survey of computational and applied mathematical techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties.
Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
This problem represents an interesting opportunity for scientists and statisticians to collaborate since the problem is too big for either community. The science is not well established, although fairly sophisticated ice flow models exist. They are even becoming relevant to explain some of the complexity seen in observational data. At the same time, the complex phenomena we see in observations may not be particularly relevant to assessing the risks of significant increases in sea level rise over the near future. The talk will review what we have learned about this problem through the PISCEES SciDAC project. This problem is rich with challenges and opportunities, particularly for realigning how our two communities engage each other. The talk will review the computational, scientific, and mathematical "reality checks" that might stop any reasonable person from considering this topic further. I then will point out how each of these challenges could be mitigated if these different perspectives were better integrated.
Intensity-Duration-Frequency Curves and Regionalisation (AM Publications)
Storm sewers make up a large percentage of the drainage system in an urban setup. The design of these components is based on rainfall intensities of a specific design period for that location, which can be derived from the intensity-duration-frequency (IDF) relationship. IDF relationships are derived from historical rainfall using an extreme value distribution for maximum rainfall intensity. In the present study, IDF curves and parameter regionalisation were studied for various kinds of basins. The equation parameters can then be used to understand the spatial variation of rainfall intensity in the study area. Parameter contour maps subsequently generated using various interpolation methods are then used for plotting IDF curves for any ungauged station in the basin.
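As a sketch of the extreme-value step described above, the following fits a Gumbel (EV1) distribution to an annual maximum intensity series by the method of moments and evaluates intensities for several return periods. The station values and the choice of EV1 are illustrative assumptions, not data or settings from the study.

```python
import math

def gumbel_mom(series):
    """Fit a Gumbel (EV1) distribution by the method of moments."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi    # scale parameter
    mu = mean - 0.5772 * beta                # location (Euler-Mascheroni constant)
    return mu, beta

def quantile(mu, beta, T):
    """Intensity for return period T years (annual maximum series)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Hypothetical annual maximum 60-minute intensities (mm/h) at one station
annual_max = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1, 58.6, 40.9, 49.5]
mu, beta = gumbel_mom(annual_max)
for T in (2, 10, 25, 100):
    print(f"T = {T:3d} yr: {quantile(mu, beta, T):6.1f} mm/h")
```

Repeating the fit for each duration gives the points through which an IDF curve is drawn.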
The document provides a mid-term progress report on activities related to hydropower development and environmental flows in the Tamor River basin in Nepal. It summarizes the status of various sub-activities conducted by Nepal Engineering College, including collecting precipitation and temperature data, identifying hydropower projects, preparing maps, establishing hydrological monitoring equipment, developing an environmental flows syllabus, and conducting training programs. Many activities were partially completed or delayed due to issues like data limitations, field work cancellations from natural disasters, and developing partnerships with other organizations. The report concludes with a planned time schedule of remaining activities for 2015.
The document provides an overview of the World Bank Monitoring Mission for the Hydrology Project Phase II in India from May 06-09, 2014. It summarizes the key achievements and post-project plans for each of the implementing agencies. The agencies include 13 state organizations and 8 central agencies. The objectives of HP-II were to extend and promote the sustained use of hydrological information systems to improve water resources planning and management. The estimated cost was Rs. 631.83 crore with funding from the World Bank. Several agencies had completed construction of data centers, monitoring equipment installations, and pilot studies. Plans after the project included continuing maintenance and operations, staff training, and further developing applications.
This document provides operational details for groundwater data processing and analysis in India. It outlines the monitoring networks for water levels, quality, and hydro-meteorology. It describes the geological structures, soil types, typical groundwater issues, and the organizational setup of the responsible groundwater agency. The agency collects various dynamic data through monitoring networks to estimate groundwater resources and inform management recommendations in an annual groundwater yearbook.
The document provides information on the financial targets and achievements of a hydrological project in India. It summarizes that as of March 2014, expenditure was Rs. 304.959 crores out of the revised target of Rs. 399.808 crores. It also describes various components of the project including institutional strengthening activities conducted, the development of decision support systems and real-time data systems for river basins, and studies carried out on optimizing monitoring networks and evaluating the impacts of water allocation changes. Lessons learned included the need for stronger central-state linkages and continued consultant support to meet project goals.
The document provides information on study tour sites related to surface water, groundwater, hydrometeorology, and water quality in India. Sites are listed for Andhra Pradesh, Gujarat, and Karnataka and include river gauging stations, groundwater observation wells, and water resource management offices. Contact details and logistical information are provided for each site.
This document summarizes the progress and completion of the Odisha Hydrology Project-II. The key points are:
1) The project had a total revised cost of Rs. 13.46 crore and ran from April 2006 to May 2014 to strengthen surface water data collection and decision support systems in Odisha.
2) Financial progress shows that Rs. 891.04 lakh was spent out of the total revised cost of Rs. 1346 lakh (Rs. 13.46 crore). Major components included installing a real-time data acquisition system and developing decision support systems for drought monitoring and conjunctive surface and groundwater use.
3) Key achievements were establishing the concept for a real-time data acquisition system,
The document summarizes a review meeting for the Hydrology Project Phase II in Madhya Pradesh, India. The project involves establishing surface water and groundwater monitoring stations. For surface water, 24 river gauge stations and 52 meteorological stations were set up across three river basins. For groundwater, 3750 observation wells and 625 piezometer wells were established. The project period was from 2004-2014 with a total cost of Rs. 24.67 crores. Major achievements included upgrading monitoring stations, establishing new stations, and developing decision support systems for reservoir management and groundwater planning. Lessons learned and plans for continuing activities after the project are also discussed.
This document provides guidance on using regression analysis for data validation in hydrological data processing. It discusses simple linear regression, multiple linear regression, and stepwise regression. Regression analysis can be used to validate and fill in missing water level, rainfall, and discharge data. It establishes relationships between dependent and independent variables. Both linear and nonlinear regression models are used in hydrological applications. Key applications mentioned include rating curves, spatial interpolation of rainfall, and validating station data against nearby stations.
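A minimal sketch of the simple linear regression case: one station is regressed on a neighbouring station, the coefficient of determination is checked, and the fit is used to estimate a missing value. The water level pairs below are illustrative, not data from the document.

```python
import numpy as np

# Paired daily water levels (m) at a neighbour (x) and target (y) station
# -- illustrative values only.
x = np.array([2.1, 2.4, 2.9, 3.3, 3.8, 4.1, 4.6])
y = np.array([1.8, 2.0, 2.6, 2.9, 3.5, 3.7, 4.2])

# Least-squares fit y = a + b*x
b, a = np.polyfit(x, y, 1)

# Coefficient of determination, to judge whether infilling is defensible
y_hat = a + b * x
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

# Estimate the target station on a day it missed, given the neighbour read 3.0 m
estimate = a + b * 3.0
print(f"y = {a:.3f} + {b:.3f} x, r^2 = {r2:.3f}, estimate = {estimate:.2f} m")
```

In practice the fit would only be accepted for infilling when r-squared is high and the relationship is physically plausible.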
This document provides guidance on validating rainfall data from different measurement instruments. It describes common rainfall measurement tools like daily rain gauges, autographic rain gauges, and tipping bucket rain gauges. It outlines potential errors from each tool and recommends comparing daily time series between tools to identify discrepancies. Discrepancies over 5% should be investigated further. Likely error sources are diagnosed based on patterns of discrepancies. The guidance aims to help validate rainfall data and make corrections to measurement tools or recorded values when necessary.
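The daily comparison with a 5% threshold described above might be implemented along these lines; the gauge readings and data layout are illustrative assumptions.

```python
def flag_discrepancies(srg_daily, arg_hourly, tol=0.05):
    """Compare SRG daily totals with ARG hourly data aggregated to days.

    srg_daily  : {date: daily total in mm}
    arg_hourly : {date: list of 24 hourly values in mm}
    Returns dates whose relative difference exceeds `tol` (default 5 %).
    """
    flagged = []
    for date, srg in srg_daily.items():
        arg = sum(arg_hourly.get(date, []))
        ref = max(srg, arg)
        if ref > 0 and abs(srg - arg) / ref > tol:
            flagged.append((date, srg, round(arg, 1)))
    return flagged

# Illustrative data: on 2 July the ARG under-records by roughly 13 %
srg = {"2023-07-01": 24.0, "2023-07-02": 40.0}
arg = {"2023-07-01": [1.0] * 24,                 # sums to 24.0 mm
       "2023-07-02": [1.5] * 20 + [1.2] * 4}     # sums to 34.8 mm
flags = flag_discrepancies(srg, arg)
print(flags)   # only 2 July is flagged
```

The pattern of flagged days (systematic versus occasional, wet days only, etc.) then guides the diagnosis of the likely error source.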
The document summarizes the Hydrology Project-II being implemented in Punjab, India. Key points:
- The Rs. 46.65 crore project aims to improve water resource data collection and management. About 80% of the work has been completed and about 80% of the funding utilized.
- Networks to monitor groundwater, surface water, and rainfall have been installed across 700, 25, and 81 stations respectively. Digital equipment transmits data in real time.
- Three data centers have been constructed to store and analyze water data. A state data center in Mohali will house various water resource offices and laboratories.
- Observed hydrological data will be shared with state agencies, CGWB, and other users to inform water
The World Bank conducted a final supervision mission in May 2014 to review a water resources project in Chhattisgarh, India. The project aimed to strengthen water resource management institutions and expand hydrological monitoring networks. Over 90% of allocated funds had been spent as of March 2014, with additional expenditures expected through May 2014. Key achievements included upgrading data centers, installing rain and groundwater monitoring equipment, conducting trainings, and publishing water resources data. The project improved availability of hydrological data for use in planning irrigation projects, infrastructure design, and other development activities in Chhattisgarh.
The document describes a training module on analyzing rainfall data. It includes sessions on checking data homogeneity, computing basic statistics, fitting frequency distributions, and deriving frequency-duration and intensity-duration-frequency curves. Exercises are provided for trainees to practice analyzing monthly and daily rainfall series, fitting distributions, and deriving curves for different durations and return periods. Case studies from India are referenced as examples throughout the training material.
This document provides guidance on analyzing rainfall data. It discusses checking data homogeneity, computing basic statistics, developing annual exceedance rainfall series, fitting frequency distributions, and deriving frequency-duration and intensity-duration-frequency curves. The document includes examples demonstrating how to calculate statistics for a monthly rainfall series and develop frequency curves. It also outlines computational procedures and examples for depth-area-duration analysis. Key steps in the rainfall data analysis process are presented along with example results and figures.
This document describes a training module on how to carry out secondary validation of rainfall data. It includes the following key points:
1. Secondary validation involves comparing rainfall data to neighboring stations to identify suspect values, taking into account spatial correlation which depends on duration, distance, precipitation type, and physiography.
2. Validation methods described include screening data against limits, scrutinizing multiple time series graphs and tabulations, checking against data limits for longer durations, spatial homogeneity testing, and double mass analysis.
3. Examples demonstrate how spatial correlation varies with duration and distance, and how physiography affects correlation. Screening listings with basic statistics are used to flag suspect data values.
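One possible form of the spatial homogeneity test mentioned above compares the observed value with an inverse-distance-weighted estimate from neighbouring stations. The thresholds and station values below are illustrative assumptions, not the module's prescribed settings.

```python
def neighbour_estimate(neighbours, power=2.0):
    """Inverse-distance-weighted estimate from (rainfall_mm, distance_km) pairs."""
    w = [1.0 / d ** power for _, d in neighbours]
    return sum(wi * p for wi, (p, _) in zip(w, neighbours)) / sum(w)

def is_suspect(observed, neighbours, abs_tol=25.0, rel_tol=0.5):
    """Flag the value when it departs from the neighbour estimate by more than
    `abs_tol` mm and more than `rel_tol` (50 %) of the estimate.  Both
    thresholds are illustrative, not recommended settings."""
    est = neighbour_estimate(neighbours)
    return abs(observed - est) > max(abs_tol, rel_tol * est), est

# Hypothetical daily storm totals at neighbours within 15 km of the test station
neigh = [(62.0, 4.0), (58.5, 7.5), (66.0, 10.0), (60.0, 14.0)]
suspect, est = is_suspect(150.0, neigh)
print(f"estimate {est:.1f} mm -> suspect: {suspect}")
```

A flagged value is only marked suspect, not rejected; it still has to be checked against the field record before any correction is made.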
This document provides guidance on how to report rainfall data in yearly and periodic reports. It outlines the typical contents and structure of annual reports including descriptive summaries of rainfall patterns, comparisons to long-term averages, basic statistics, and descriptions of major storms. Periodic reports produced every 10 years would include long-term statistics updated over the previous decade as well as frequency analysis of rainfall data. The reports aim to inform stakeholders of rainfall patterns and data availability as well as validate and improve the quality of data collection.
This document provides guidance on correcting and completing rainfall data. It discusses using autographic rain gauge (ARG) and standard rain gauge (SRG) data to correct errors. When the SRG is faulty but ARG is available, the SRG can be corrected to match the ARG totals. When the ARG is faulty but SRG is available, hourly distributions from neighboring stations can be used to estimate hourly totals for the station based on its daily SRG total. The document also discusses correcting time shifts, apportioning partial daily accumulations, adjusting for systematic shifts using double mass analysis, and using spatial interpolation methods to estimate missing values. Examples are provided to demonstrate each technique.
This document provides guidance on correcting and completing rainfall data. It discusses using autographic rain gauge (ARG) and standard rain gauge (SRG) data to correct errors when one instrument fails. When the SRG fails but ARG data is available, the SRG data can be replaced with totals from the ARG record. When the ARG fails, hourly distributions from neighboring stations can be used to estimate missing hourly values based on the daily total from the station's SRG. The document also discusses correcting errors like wrong dates and apportioning partial daily accumulations. It describes using double mass analysis to adjust for systematic shifts and spatial interpolation methods to estimate missing values using data from surrounding stations. Examples are provided to demonstrate the techniques.
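Apportioning a partial daily accumulation, one of the corrections described above, can be sketched as distributing the accumulated total over the missing days in proportion to a neighbouring station's daily totals. The values are illustrative.

```python
def apportion(accumulated_total, neighbour_daily):
    """Split an accumulated rainfall total over the missing days in
    proportion to the daily totals observed at a neighbouring station."""
    ref_sum = sum(neighbour_daily)
    if ref_sum == 0:                 # neighbour dry: spread evenly
        n = len(neighbour_daily)
        return [accumulated_total / n] * n
    return [accumulated_total * p / ref_sum for p in neighbour_daily]

# Observer was away for 3 days and entered 45.0 mm as one accumulated reading;
# a nearby station recorded 10, 25 and 15 mm on those days (illustrative values).
daily = apportion(45.0, [10.0, 25.0, 15.0])
print([round(d, 1) for d in daily])   # [9.0, 22.5, 13.5]
```

The apportioned values sum back to the accumulated reading, so the period total is preserved.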
This document provides guidance on reporting climatic data in India. It discusses the purpose and contents of annual reports on climatic data, including evaporation data. Key points covered include:
- Annual reports summarize evaporation data for the reporting year and compare to long-term statistics.
- Reports include details on the observational network, basic evaporation statistics, data validation processes.
- Network maps and station listings provide details of monitoring locations. Statistics include monthly and annual evaporation amounts for the current year and historical averages.
- Reports aim to inform water resource planning, acknowledge data collection efforts, and provide access to climatic data records.
This document provides guidance on reporting climatic data in India. It discusses the purpose and contents of annual reports on climatic data, including evaporation data. Key points covered include:
- Annual reports summarize evaporation data for the reporting year and compare to long-term statistics.
- Reports include details on the observational network, basic evaporation statistics, data validation processes.
- Network maps and station listings provide details on locations and recorded variables. Statistics include monthly and annual summaries for the current year and historical averages.
- Reports aim to inform users and support planning, while also recognizing data producers and maintaining the climatic observation system.
This document provides guidance on how to carry out primary validation of rainfall data. It discusses comparing daily rainfall measurements from a standard raingauge to those from an autographic or digital raingauge. Differences greater than 5% between the two measurements would be further investigated. Likely sources of error are outlined for each type of raingauge. The validation can be done graphically or tabularly by aggregating hourly rainfall data to daily totals and comparing. Actions are suggested based on the patterns of discrepancies found.
This document provides guidance on reporting discharge data from hydrological monitoring stations. It outlines the contents and purpose of yearly reports, including descriptive summaries of streamflow patterns, basic statistics for selected stations, and comparisons to long-term averages. Periodic long-term reports every 5-10 years are also recommended to analyze trends over longer time periods. The reports aim to inform water resource planning and make hydrological data more accessible and understandable for users.
This document provides guidance on entering climatic data into a hydrological data processing software called SWDES. It describes the various types of climatic data that can be entered, including daily, twice daily, hourly, and sunshine duration data. Instructions are provided on inspecting paper records, setting up data entry screens, entering values, and performing basic data validation checks. The overall aim is to make climatic data available electronically using SWDES in order to facilitate validation, processing, and reporting of the data.
This document outlines the design of an active control outlet for a stormwater drainage basin. It provides background on climate change, increasing impervious surfaces, and the rationale for small-scale stormwater solutions. Traditional static outlets are discussed alongside the potential benefits of active control outlets, which can adjust outlet conditions based on factors like weather forecasts and pond water levels. The objective, approaches, deliverables, timeline, and materials/methods are presented for a project to design and evaluate an adaptive control structure for a pond in Pelzer, SC. Literature on programming, instrumentation, and regulations is also reviewed.
This document provides guidance on entering rainfall data into a dedicated hydrological data processing software (SWDES). It discusses entering daily rainfall data, twice daily rainfall data, and hourly rainfall data from manual records or digital loggers. The key steps described are manually inspecting field records before entry, using customized SWDES forms that match field sheets, entering values and computing totals, plotting time series graphs, and performing data validation checks. The overall aim is to efficiently digitize rainfall observations near monitoring stations for further validation and analysis.
This document provides guidance on entering rainfall data into a dedicated hydrological data processing software (SWDES). It discusses entering daily rainfall data, twice daily rainfall data, and hourly rainfall data from manual records or digital loggers. The key steps are:
1. Manually inspecting field records for completeness and errors before data entry.
2. Entering data into customized SWDES forms that match field observation sheets. This allows direct data transfer with minimal risk of errors.
3. Performing automated checks of the entered data against limits and computed totals to ensure accuracy. Any errors are flagged for further inspection.
4. Graphing the entered time series data during the entry process as an additional validation check.
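The automated checks in step 3 might look like this in outline: each entered value is screened against an upper limit, and the recomputed total is compared with the total copied from the field sheet. The limit, tolerance, and sample values are illustrative assumptions, not SWDES internals.

```python
def check_entry(values, recorded_total, max_limit=500.0, tol=0.1):
    """Basic entry checks: each value against an upper limit, and the
    recomputed total against the total copied from the field sheet.
    `max_limit` (mm) and `tol` (mm) are illustrative settings."""
    problems = []
    for i, v in enumerate(values):
        if v < 0 or v > max_limit:
            problems.append(f"value {v} at position {i} outside [0, {max_limit}]")
    computed = sum(values)
    if abs(computed - recorded_total) > tol:
        problems.append(f"computed total {computed:.1f} != recorded {recorded_total:.1f}")
    return problems

# One impossible daily value, and a total that no longer matches the field sheet
problems = check_entry([12.0, 0.0, 650.0, 3.5], recorded_total=25.5)
print(problems)
```

Flagged entries are then rechecked against the original paper record rather than silently corrected.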
This document provides guidance on entering rainfall data into a dedicated hydrological data processing software (SWDES). It discusses entering daily rainfall data, twice daily rainfall data, and hourly rainfall data from manual records or digital loggers. The key steps are:
1. Manually inspecting field records for completeness and errors before data entry.
2. Entering data into customized SWDES forms that match field observation sheets. This allows direct data transfer with minimal risk of errors.
3. Performing automated checks of the entered data to flag errors and ensure consistency between computed and recorded totals.
4. Providing graphical outputs of the rainfall time series to aid validation of complete and correct data entry.
This document provides information and guidance on analyzing climatic data to estimate evaporation and evapotranspiration rates. It discusses the use of evaporation pans and appropriate pan coefficients to estimate open water evaporation from lakes and reservoirs. It also describes the Penman method for estimating potential evapotranspiration using standard climatological measurements. The Penman method combines the energy budget and mass transfer approaches and provides formulas for calculating evapotranspiration based on climatic variables like temperature, humidity, wind speed, and solar radiation. Substitutions are suggested when some climatic variables are not directly measured.
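The Penman combination described above can be sketched as follows. The saturation vapour pressure and slope formulas are standard; the wind function coefficients and psychrometric constant are common textbook values and may differ from those the document prescribes.

```python
import math

def penman_open_water(t_mean, rh, u2, rn):
    """Penman combination estimate of open-water evaporation, mm/day.

    t_mean : mean air temperature, deg C
    rh     : relative humidity, fraction (0-1)
    u2     : wind speed at 2 m, m/s
    rn     : net radiation expressed as mm/day of evaporation equivalent

    Constants below are typical textbook values, not necessarily the
    operational manual's.
    """
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))   # sat. vap. pressure, kPa
    ea = rh * es                                                # actual vap. pressure, kPa
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                 # slope, kPa/degC
    gamma = 0.066                                               # psychrometric const., kPa/degC
    # Aerodynamic term: wind function with the deficit converted from kPa to hPa
    e_aero = 0.26 * (1.0 + 0.54 * u2) * (es - ea) * 10.0        # mm/day
    return (delta * rn + gamma * e_aero) / (delta + gamma)

e_example = penman_open_water(25.0, 0.6, 2.0, 6.0)
print(f"open-water evaporation: {e_example:.1f} mm/day")
```

When radiation is not measured directly, it would be estimated from sunshine duration, as the substitutions mentioned above suggest.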
This document provides information and guidance on analyzing climatic data for hydrological purposes. It discusses analyzing pan evaporation data and estimating potential evapotranspiration using methods like the Penman equation. The document includes the module context, profile, session plan, overhead masters, handouts, and main text on analyzing pan evaporation, estimating potential evapotranspiration, and references.
This document discusses using a seasonal autoregressive integrated moving average (SARIMA) model to forecast precipitation in Mt. Kenya region. It fits various SARIMA models to monthly precipitation data from 1970 to 2011 and selects the best model with the lowest AIC and BIC values. The best model was found to be SARIMA(1,0,1)x(1,0,0)12, which had two statistically significant variables and passed diagnostic checks. Forecast accuracy statistics for this model, including ME, MSE, RMSE and MAE, indicated the SARIMA model provides a good method for precipitation forecasting in Mt. Kenya region.
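As a simplified stand-in for the SARIMA fitting described above, the following fits a purely autoregressive seasonal model by least squares and reports an AIC for model comparison; a full SARIMA(1,0,1)x(1,0,0)12 fit would also include the MA term, for example via statsmodels' SARIMAX. The synthetic monthly series is illustrative.

```python
import numpy as np

def fit_seasonal_ar(x, s=12):
    """Fit x_t = c + phi*x_{t-1} + Phi*x_{t-s} by least squares and return
    the coefficients and an AIC value.  A simplified stand-in for SARIMA
    model selection: the MA terms are omitted."""
    y = x[s:]
    X = np.column_stack([np.ones(len(y)),   # intercept c
                         x[s - 1:-1],       # lag-1 term
                         x[:-s]])           # seasonal lag-s term
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    n, k = len(y), X.shape[1]
    sigma2 = np.sum(resid ** 2) / n
    aic = n * np.log(sigma2) + 2 * k        # up to an additive constant
    return coef, aic

# Synthetic monthly series with an annual cycle plus noise (20 years)
rng = np.random.default_rng(0)
t = np.arange(240)
x = 50 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 240)
coef, aic = fit_seasonal_ar(x)
print(f"c={coef[0]:.2f} phi={coef[1]:.2f} Phi={coef[2]:.2f} AIC={aic:.1f}")
```

Candidate models are compared by refitting with different lag structures and keeping the one with the lowest AIC, mirroring the selection procedure the study applies.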
This document provides guidance on working with map layers and network layers in HYMOS, a hydrological modeling software. It describes how to obtain map layers from digitized topographic maps and remotely sensed data. It also explains how to create network layers by manually adding observation stations or importing them from another database. The document outlines how to manage and set properties for map layers and network layers within HYMOS to control visibility, styling, and other display options.
This document contains information about receiving hydrological data at different levels in India, including:
1. Data is transferred from field stations to subdivisional offices, then to divisional offices and state/regional data processing centers in stages. Target dates are set for receipt and transmission at each level to ensure smooth processing.
2. Records of receipt are maintained at each office to track data and identify delays, with feedback provided if data is not received by targets.
3. Original paper records are filed by station for easy retrieval, while digital copies are stored for long-term archiving.
The document describes a training module on understanding different types and forms of data in hydrological information systems (HIS). It was developed with funding from the World Bank and Government of the Netherlands. The module provides an overview of the session plan and covers various types of data in HIS, including space-oriented data like catchment maps, time-oriented data such as meteorological observations, and relation-oriented data like stage-discharge relationships. The goal is for participants to learn about all the different types and forms of data managed in HIS.
The document provides details on a surface water data processing plan for India. It discusses distributing data processing activities across three levels - sub-divisional, divisional, and state data processing centers. It outlines the activities, computing facilities, staffing, and time schedules needed at each level to efficiently manage the large volume of hydrological data. The plan aims to ensure data is properly validated and processed within time limits while not overwhelming staff.
This document outlines the stages of surface water data processing under the Hydrological Information System (HIS) in India. It discusses: 1) Receipt of data from field stations and storage of raw records; 2) Data entry at sub-divisional offices; 3) Validation of data through primary, secondary, and hydrological checks; 4) Completion and correction of missing or erroneous data; 5) Compilation, analysis, and reporting of validated data; 6) Transfer of data between processing levels from sub-division to division to state centers. The overall goal is to process field data in a systematic series of steps to produce quality-controlled hydrological information.
This document provides information on a training module for understanding hydrological information system (HIS) concepts and setup. It includes an introduction to HIS, why it is needed, and how it was set up under the Hydrology Project. It also discusses who the key users of hydrological data are and how computers are used in hydrological data processing. The training module contains session plans, presentations, handouts, and text to educate participants on HIS objectives, components, and how they provide reliable hydrological data to various end users.
This document provides guidance on how to carry out secondary validation of climatic data. It describes various methods for validating data spatially using multiple station comparisons, including comparison plots, balance series, regression analysis, and double mass curves. It also describes single station validation tests for homogeneity, including mass curves and tests of differences in means. The document is part of a training module on secondary validation of climatic data funded by the World Bank and Government of the Netherlands. It provides context for the training and outlines the session plan, materials, and main validation methods to be covered.
This document provides guidance on how to carry out primary validation of climatic data. It discusses validating temperature, humidity, wind speed, atmospheric pressure, sunshine duration, and pan evaporation data. For each variable, it describes typical variations and measurement methods, potential errors, and approaches to error detection such as setting maximum/minimum limits. The goal of primary validation is to check for errors by comparing individual observations to physical limits and sequential observations for unacceptable changes.
This document provides guidance on compiling rainfall data from various time intervals into longer standardized durations. It discusses aggregating hourly data into daily totals, daily data into weekly, ten-daily, monthly, and yearly totals. Methods are presented for arithmetic averaging and Thiessen polygons to estimate areal rainfall from point measurements. Guidance is also given on transforming non-equidistant time series into equidistant series and compiling extreme rainfall statistics. Examples demonstrate compiling hourly rainfall from an autographic rain gauge into daily totals and further aggregating daily point rainfall into areal averages and statistics for various durations.
The document provides guidance on sampling surface waters for water quality analysis. It discusses selecting sampling sites that are representative of the waterbody and safely accessible. It describes three types of samples - grab samples, composite samples, and integrated samples - and when each would be used. It also outlines appropriate sampling devices and containers for different analyses, as well as procedures for sample handling, preservation, and identification. The overall aim is to collect samples that accurately represent water quality without significant changes prior to analysis.
The document describes methods for hydrological observations including rainfall, water level, discharge, and inspection of observation stations. It contains sections on ordinary and recording rainfall observation, ordinary and recording water level observation, observation of discharge using current meters and floats, and inspection of rainfall and water level observation stations. The document was produced by the Ministry of Construction in Japan.
This document provides guidance on how to review monitoring networks. It begins with an introduction on the objectives and physical characteristics that networks are based on. It then discusses the types of networks, including basic, secondary, dedicated, and representative networks. The document outlines the steps in network design, which include assessing data needs, setting objectives, determining required network density, reviewing the existing network, and conducting a cost-effectiveness analysis. Specific guidance is given on reviewing rainfall and hydrometric networks.
This document provides information on how to carry out correlation and spectral analysis. It discusses autocovariance and autocorrelation functions, cross-covariance and cross-correlation functions, and various spectrum and spectral density functions. The document includes examples and explanations of how to estimate these functions from time series data and interpret the results. It also discusses how these analysis techniques can be used to identify periodicities and correlations in hydrological time series data.
This document provides guidance on statistical analysis of rainfall and discharge data. It discusses graphical representation of data including histograms, line diagrams, and cumulative frequency diagrams. It also covers measures of central tendency, dispersion, skewness, kurtosis, and percentiles. The document emphasizes that hydrological time series must meet stationarity conditions to be suitable for statistical analysis and discusses evaluating and accounting for trends and periodic components when analyzing rainfall and discharge data.
This document provides guidance on analyzing discharge data. It discusses computing basic statistics, constructing flow duration curves to analyze variability, fitting theoretical frequency distributions, and other time series analysis techniques like moving averages and mass curves. The main text provides detailed explanations of these methods and their uses in hydrological analysis, data validation, and reporting. It is intended to train hydrologists and data managers on effectively analyzing discharge data.
This document provides guidance on how to correct and complete discharge data records. It discusses several methods for estimating missing or incorrect discharge values, including interpolation during short gaps or recessions, regression analysis using data from neighboring stations, flow routing to ensure water balance, and rainfall-runoff simulation with a calibrated hydrologic model. The Muskingum method for flow routing between stations is presented as an example. The key is to select the most appropriate technique depending on the type, duration and location of the missing data, while ensuring continuity and physical realism in the corrected or completed record.
This document provides guidance on using hydrological models to validate hydrological data and fill in missing data. It describes a training module on hydrological data validation using the Sacramento hydrological rainfall-runoff model. The module includes an introduction to hydrological models, the conceptualization and components of the Sacramento model, and case studies of applying the model. The overall aim is to teach participants how to carry out hydrological data validation and fill in missing data by calibrating the Sacramento model using measured rainfall, evapotranspiration, and runoff time series from catchments.
This document provides guidance on using regression analysis to validate hydrological data. It discusses using simple linear regression to establish relationships between variables like rainfall and runoff. Key steps covered include estimating regression coefficients to minimize the error variance, measuring the goodness of fit using the coefficient of determination, and examining residuals over time and versus other variables to evaluate changes in the rainfall-runoff relationship. The overall aim is to detect errors in discharge data by comparing observed and computed runoff derived from regression models.
This document provides guidance on how to carry out secondary validation of discharge data. It discusses validating a single station's data against limits and expected behavior through graphical inspection. It also describes validating multiple stations by comparing their time series plots and residual series, as well as comparing streamflow and rainfall data. The overall goal of secondary validation is to identify potential errors or anomalies in discharge data for further investigation and correction.
World Bank & Government of The Netherlands funded
Training module # SWDP - 11
How to compile rainfall data
New Delhi, February 2002
CSMRS Building, 4th Floor, Olof Palme Marg, Hauz Khas,
New Delhi – 11 00 16 India
Tel: 68 61 681 / 84 Fax: (+ 91 11) 68 61 685
E-Mail: dhvdelft@del2.vsnl.net.in
DHV Consultants BV & DELFT HYDRAULICS
with
HALCROW, TAHAL, CES, ORG & JPS
Hydrology Project Training Module File: “11 How to compile rainfall data.doc” Version: Feb. 2002
Table of contents

1. Module context
2. Module profile
3. Session plan
4. Overhead/flipchart master
5. Handout
6. Additional handout
7. Main text
1. Module context
When designing a training course, the relationship between this module and the others should be maintained by keeping them close together in the syllabus and placing them in a logical sequence. The actual selection of topics and the depth of training will, of course, depend on the training needs of the participants, i.e. their knowledge level and skills performance at the start of the course.
2. Module profile

Title: How to compile rainfall data

Target group: Assistant Hydrologists, Hydrologists, Data Processing Centre Managers

Duration: Five sessions of 60 minutes each

Objectives: After the training the participants will be able to:
• Compile rainfall data for different durations
• Estimate areal rainfall by different methods
• Draw isohyets

Key concepts:
• Aggregation of data to longer durations
• Areal rainfall
• Arithmetic average
• Weighted average
• Thiessen method
• Kriging method
• Inverse distance method

Training methods: Lecture, exercises, software

Training tools required: Board, OHS, computers

Handouts: As provided in this module

Further reading and references:
3. Session plan

1. General (5 min; OHS 1)
   • Important points

2. Aggregation of data to longer durations (5 min; OHS 2-10)
   • Objectives
   • Plot of hourly data
   • Plot of compiled daily data
   • Plot of weekly data
   • Plot of ten-daily data
   • Plot of monthly data
   • Plot of yearly data
   • Multiple plots for various intervals (a)
   • Multiple plots for various intervals (b)
   Working with HYMOS

3. Estimation of areal rainfall (1) (15 min; OHS 11-18)
   • Objective & definition
   • Various methods
   • Arithmetic & weighted average
   • Example 3.1 - Arithmetic average
   • Thiessen polygon method
   • Example 3.2 (a) - Thiessen polygons
   • Example 3.2 (b) - Thiessen weights & plot of areal average series
   • Comparison of results from two methods
   Working with HYMOS

4. Estimation of areal rainfall (2) (30 min; OHS 23-30)
   • Procedure in Isohyetal Method, flat terrain
   • Example Isohyetal Method
   • Procedure Isohyetal Method, mountainous terrain
   • Procedure Isopercental Method
   • Combining isopercentals with normals
   • Drawing isohyets with additional data from normals
   • Procedure Hypsometric Method
   • Hypsometric Method application

5. Estimation of areal rainfall (3) (90 min; OHS 31-54)
   • Rainfall interpolation by Kriging and Inverse Distance Methods
   • Estimation of values on a grid
   • Rainfall interpolation by kriging (1)
   • Assumption for ordinary kriging
   • Kriging: unbiasedness and variance minimisation
   • Kriging equations
   • Exponential spatial correlation function
   • Exponential co-variance function
   • Exponential semi-variogram
   • Possible semi-variogram models in HYMOS
   • Sensitivity analysis on variogram parameters (1)-(5)
   • Application of kriging and inverse distance method
   • Example Bilodra: spatial correlation
   • Example Bilodra: fit of semi-variance to spherical model (1)-(2)
   • Example Bilodra: fit of semi-variance to exponential model
   • Example Bilodra: isohyets June 1984 using kriging
   • Example Bilodra: estimation variance June 1984
   • Example Bilodra: isohyets June 1984 using inverse distance

6. Transformation of non-equidistant to equidistant series (3 min; OHS 19)
   • General

7. Compilation of maximum and minimum series (2 min; OHS 20-22)
   • Statistical inferences
   • Example 5.1 (a) - Min., max., mean etc.
   • Example 5.1 (b) - Tabular results

8. Exercise (150 min in total)
   • Compilation of hourly rainfall data to daily interval, and of observed daily data to ten-daily, monthly and yearly intervals (30 min)
   • Estimation of areal average using arithmetic and Thiessen polygon methods (30 min)
   • Compilation of extremes for ten-daily rainfall data series for part of the year (July 1 - Sept. 30) (30 min)
   • Fitting of semi-variograms for monthly rainfall in the Bilodra catchment (1960-2000), based on aggregated daily rainfall series; fit different semi-variance models and compare the results in a spreadsheet (30 min)
   • Application of semi-variance models to selected months and comparison of the results of different models (interpolations and variances) (20 min)
   • Estimation of monthly isohyets by the Inverse Distance Method using different powers, and comparison of results, also with kriging results (10 min)
4. Overhead/flipchart master
5. Handout
Add a copy of the main text (chapter 7) for all participants.
6. Additional handout
These handouts are distributed during delivery and contain test questions, answers to questions, special worksheets, optional information, and other material that should not appear in the regular handouts.

It is good practice to pre-punch these additional handouts, so that participants can easily insert them in the main handout folder.
7. Main text
Contents

1. General
2. Aggregation of data to longer durations
3. Estimation of areal rainfall
4. Transformation of non-equidistant to equidistant series
5. Compilation of minimum, maximum and mean series
How to compile rainfall data
1. General
• Rainfall compilation is the process by which observed rainfall is transformed:
  - from one time interval to another
  - from one unit of measurement to another
  - from point to areal values
  - from non-equidistant to equidistant series
• Compilation is required for validation, reporting and analysis
• Compilation is carried out at the State Data Processing Centre; it is done prior to validation if required, but final compilation is carried out after correction and ‘completion’.
2. Aggregation of data to longer durations
Rainfall from different sources is observed at different time intervals, but these are generally one day or less. For the standard raingauge, rainfall is measured once or twice daily. For autographic records, a continuous trace is produced from which hourly rainfall is extracted. For digital rainfall recorders, rainfall is recorded at variable intervals, with each tip of the tipping bucket. Hourly data are typically aggregated to daily; daily data are typically aggregated to weekly, ten-daily, 15-daily, monthly, seasonal or yearly time intervals.

Aggregation to longer time intervals is required for validation and analysis. For validation, small persistent errors may not be detected at the short time interval of observation, but may be detected more readily at longer time intervals.
2.1 Aggregation of daily to weekly
Aggregation from daily to a weekly time interval is usually done by considering the first 51 weeks of equal length (i.e. 7 days) and the last, 52nd week of either 8 or 9 days, according to whether the year is a non-leap year or a leap year respectively. The rainfall for such weekly periods is obtained by simple summation of consecutive sets of seven daily rainfall values; the last week’s rainfall is obtained by summing the last 8 or 9 daily values.

For some applications the weekly compilation may be required for exact calendar weeks (Monday to Sunday). In that case the first week of any year starts on the first Monday of that year, so there will be 51 or 52 full weeks in the year and one or more days left over at the beginning and/or end of the year. The days left at the end of a year or the beginning of the next year can be assigned to the 52nd week of the year under consideration. There will also be a 53rd week when the 1st day of the year is also the first day of the week (for non-leap years), or when the 1st or 2nd day of the year is the first day of the week (for leap years).
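The 51 × 7 + remainder rule above can be sketched in a few lines of Python. This is an illustrative helper only (the function name and plain-list interface are assumptions, not part of HYMOS):

```python
import calendar

def weekly_totals(daily, year):
    """Aggregate one year of daily rainfall (mm, one value per day)
    into 52 'weeks': weeks 1-51 are 7 days each, and the 52nd week
    absorbs the final 8 days (non-leap year) or 9 days (leap year)."""
    expected = 366 if calendar.isleap(year) else 365
    if len(daily) != expected:
        raise ValueError("need one value per calendar day")
    weeks = [sum(daily[7 * i:7 * (i + 1)]) for i in range(51)]
    weeks.append(sum(daily[7 * 51:]))  # last 8- or 9-day 'week'
    return weeks
```

For a non-leap year the last “week” sums 8 daily values; for a leap year, 9.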
2.2 Aggregation of daily to ten daily
Aggregation from daily to a ten-daily time interval is usually done by dividing each month into three ten-daily periods. Every month thus has two ten-daily periods of ten days each, and a last period of either 8, 9, 10 or 11 days according to the month and the year. Rainfall data for such ten-daily periods are obtained by summing the corresponding daily rainfall data. Rainfall data for 15-daily periods are obtained in a similar manner for each of the two parts of every month.
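The ten-daily split described above admits a similar sketch (again a hypothetical helper, not the HYMOS routine):

```python
import calendar

def ten_daily_totals(daily, year, month):
    """Split one month of daily rainfall (mm) into the three
    ten-daily periods: days 1-10, 11-20 and 21 to end of month
    (the last period has 8, 9, 10 or 11 days)."""
    ndays = calendar.monthrange(year, month)[1]
    if len(daily) != ndays:
        raise ValueError("need one value per day of the month")
    return [sum(daily[:10]), sum(daily[10:20]), sum(daily[20:])]
```

For February of a non-leap year the third period covers 8 days; for a 31-day month it covers 11.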
2.3 Aggregation from daily to monthly
Monthly data are obtained from daily data by summing the daily rainfall data for the calendar
months. Thus, the number of daily data to be summed up will be 28, 29, 30 or 31 according
to the month and year under consideration. Similarly, yearly rainfall data are obtained by
either summing the corresponding daily data or monthly data, if available.
2.4 Hourly to other intervals
From rainfall data at hourly or shorter time intervals, it may be desired to obtain rainfall data for every 2 hours, 3 hours, 6 hours, 12 hours etc. for specific requirements. Such compilations are carried out by simply adding up the corresponding rainfall data at the available smaller time interval.
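As a minimal illustration of such compilation, the sketch below groups hourly totals into observation days ending at 08:30 hrs, the synoptic convention used in Example 2.1. The dictionary interface and the handling of the boundary hour are assumptions for illustration:

```python
from datetime import timedelta

def daily_0830(hourly):
    """Aggregate hourly rainfall (dict: timestamp at the END of each
    hour -> mm) into observation days ending at 08:30 hrs.  Each
    whole-hour total is booked to the 24 h window it falls in; the
    window (08:30 day D, 08:30 day D+1] is labelled with date D.
    The 08:00-09:00 hour, which straddles the boundary, is booked
    whole to the new window (an approximation of the chart split)."""
    days = {}
    for t, mm in hourly.items():
        day = (t - timedelta(hours=8, minutes=30)).date()
        days[day] = days.get(day, 0.0) + mm
    return days
```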
Example 2.1:
Daily rainfall at ANIOR station (KHEDA catchment) is observed with a Standard Raingauge (SRG). An Autographic Raingauge (ARG) is also available at the same station for recording rainfall continuously, and hourly rainfall data are obtained by tabulating information from the chart records.

It is required that the hourly data be compiled to the daily interval corresponding to the synoptic observations at 0830 hrs. This compilation is done using the aggregation option and choosing to convert from hourly to daily interval. The observed hourly data and compiled daily data are shown in Fig. 2.1 and Fig. 2.2 respectively.
Fig. 2.1: Plot of observed hourly rainfall data
[Bar plot: hourly rainfall (mm) at ANIOR, 01/09/1994-10/09/1994; values range 0-35 mm]
Fig. 2.2: Compiled daily rainfall from hourly data tabulated from ARG charts
Similarly, daily data observed using the SRG are required to be compiled at weekly, ten-daily, monthly and/or yearly intervals for various applications and for the purpose of data validation. For this, the daily data obtained using the SRG are taken as the basic data and compilation is done to weekly, ten-daily, monthly and yearly intervals. These are illustrated in Fig. 2.3, Fig. 2.4, Fig. 2.5 and Fig. 2.6 respectively.
Fig. 2.3: Compiled weekly rainfall from daily data obtained from SRG records
[Bar plots: daily rainfall (mm) at ANIOR, Jul-Sep 1994, 0-150 mm; weekly rainfall (mm) at ANIOR, Jul 1994-Oct 1995, 0-300 mm]
Fig. 2.4: Compiled ten-daily data from daily data obtained from SRG records
Fig. 2.5: Compiled monthly data from daily data obtained from SRG records
Fig. 2.6: Compiled yearly data from daily data obtained from SRG records
[Bar plots: ten-daily rainfall (mm) at ANIOR, Jul 1994-Oct 1995, 0-350 mm; monthly rainfall (mm), 1991-1997, 0-800 mm; yearly rainfall (mm), 1981-1997, 0-2,000 mm]
3. Estimation of areal rainfall
3.1 General description
Raingauges generally measure rainfall at individual points. However, many hydrological applications require the average depth of rainfall occurring over an area, which can then be compared directly with runoff from that area. The area under consideration can be a principal river basin or a component sub-basin. Occasionally, average areal rainfall is required for a country, state or other administrative unit, and the areal average is obtained within the appropriate political or administrative boundary.

Since rainfall is spatially variable and the spatial distribution varies between events, point rainfall does not provide a precise estimate or representation of the areal rainfall. The areal rainfall will always be an estimate and not the true rainfall depth, irrespective of the method used.
There are a number of methods which can be employed for estimation of the areal rainfall, including:

• the arithmetic average method
• the weighted average method
• the Thiessen polygon method
• kriging techniques

All these methods compute a weighted average of the point rainfall values; the difference between the various methods lies only in how the weights are assigned to the individual point rainfall values, the weights being primarily based on the proportional area represented by a point gauge. The methods are outlined below:
3.2 Arithmetic average
This is the simplest of all the methods and as the name suggests the areal average
rainfall depth is estimated by simple averaging of all selected point rainfall values for
the area under consideration. That is:
Where:
Pat = estimated average areal rainfall depth at time t
Pit = individual point rainfall value at station i (i = 1, N) at time t
N = total number of point rainfall stations considered
In this case, all point rainfall stations are allocated weights of equal magnitude, equal to the
reciprocal of the total number of stations considered. Generally, stations located within the
area under consideration are taken into account. However, it is good practice also to include
such stations which are outside but close to the areal boundary and thus to represent some
part of the areal rainfall within the boundary. This method is also sometimes called the
unweighted average method, since all stations are given the same weight irrespective of
their location.
Pat = (1/N)·(P1t + P2t + P3t + ... + PNt) = (1/N) Σ (i = 1 to N) Pit
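As a minimal sketch, the arithmetic areal average can be computed as follows (the four station values are hypothetical):

```python
def arithmetic_areal_average(point_rainfall):
    """Pat = (1/N) * sum of Pit: every station gets the same weight 1/N."""
    return sum(point_rainfall) / len(point_rainfall)

# Hypothetical daily point rainfall (mm) at four stations:
print(arithmetic_areal_average([12.0, 8.0, 10.0, 6.0]))  # 9.0
```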
This method gives satisfactory estimates and is recommended where the area under
consideration is flat, the spatial distribution of rainfall is fairly uniform, and the
variation of individual gauge records from the mean is not great.
3.3 Weighted average using user defined weights
In the arithmetic averaging method, all rainfall stations are assigned equal weights. To
account for orographic effects and especially where raingauges are predominantly
located in the lower rainfall valleys, it is sometimes required to weight the stations
differently. In this case, instead of equal weights, user defined weights can be
assigned to the stations under consideration. The estimation of areal average rainfall
depth can be made as follows:
Where:
ci = weight assigned to individual raingauge station i (i = 1,N).
To account for under-representation by gauges located in valleys the weights do not
necessarily need to add up to 1.
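The same computation with user-defined weights ci is a one-line change; the rainfall values and weights below are hypothetical:

```python
def weighted_areal_average(point_rainfall, weights):
    """Pwt = (1/N) * sum of ci * Pit; the user-defined ci need not sum to 1."""
    n = len(point_rainfall)
    return sum(c * p for c, p in zip(weights, point_rainfall)) / n

# Hypothetical rainfall (mm) with weights favouring under-represented gauges:
print(round(weighted_areal_average([12.0, 8.0, 10.0, 6.0],
                                   [1.2, 1.0, 0.8, 1.0]), 2))  # 9.1
```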
3.4 Thiessen polygon method
This widely-used method was proposed by A.M. Thiessen in 1911. The Thiessen polygon
method accounts for the variability in spatial distribution of gauges and the
consequent variable area
which each gauge represents. The areas representing each gauge are defined by drawing
lines between adjacent stations on a map. The perpendicular bisectors of these lines form a
pattern of polygons (the Thiessen polygons) with one station in each polygon (see Fig. 3.1).
Stations outside the basin boundary should be included in the analysis as they may have
polygons which extend into the basin area. The area of a polygon for an individual station as
a proportion of the total basin area represents the Thiessen weight for that station. Areal
rainfall is thus estimated by first multiplying individual station totals by their Thiessen weights
and then summing the weighted totals as follows:
where:
Ai = the area of Thiessen polygon for station i
A = total area under consideration
Pat = (A1/A)·P1t + (A2/A)·P2t + (A3/A)·P3t + ... + (AN/A)·PNt = Σ (i = 1 to N) (Ai/A)·Pit
The weighted average with user-defined weights (Section 3.3) reads correspondingly:

Pwt = (1/N)·(c1·P1t + c2·P2t + c3·P3t + ... + cN·PNt) = (1/N) Σ (i = 1 to N) ci·Pit
Fig. 3.1: Small basin up to BILODRA gauging site (portion shown with Thiessen polygons)
The Thiessen method is objective and readily computerised but is not ideal for mountainous
areas where orographic effects are significant or where raingauges are predominantly
located at lower elevations of the basin. Altitude weighted polygons (including altitude as
well as areal effects) have been devised but are not widely used.
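Thiessen weights can be approximated numerically without constructing the polygons explicitly: lay a fine grid over the area, assign each grid cell to its nearest station, and take each station's weight as its share of the cells. A sketch with hypothetical station coordinates and rainfall values:

```python
import math

# Hypothetical station coordinates (km) and one day's point rainfall (mm):
stations = {"A": (2.0, 8.0), "B": (7.0, 7.0), "C": (4.0, 2.0)}
rainfall = {"A": 25.0, "B": 10.0, "C": 40.0}

def thiessen_weights(stations, xmin, xmax, ymin, ymax, step=0.1):
    """Fraction of grid cells nearest to each station approximates Ai/A."""
    counts = dict.fromkeys(stations, 0)
    nx, ny = round((xmax - xmin) / step), round((ymax - ymin) / step)
    for i in range(nx):
        x = xmin + (i + 0.5) * step
        for j in range(ny):
            y = ymin + (j + 0.5) * step
            nearest = min(stations,
                          key=lambda s: math.hypot(x - stations[s][0],
                                                   y - stations[s][1]))
            counts[nearest] += 1
    total = nx * ny
    return {s: counts[s] / total for s in stations}

w = thiessen_weights(stations, 0.0, 10.0, 0.0, 10.0)
areal = sum(w[s] * rainfall[s] for s in stations)
print({s: round(w[s], 3) for s in w}, round(areal, 1))
```

For a real basin the grid cells would first be clipped to the catchment boundary, and stations outside but near the boundary would be included, exactly as the text recommends.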
Example 3.1
Areal average rainfall for a small basin up to the BILODRA gauging site (shown highlighted in
Fig. 3.1) in the KHEDA catchment is to be compiled on the basis of daily rainfall data
observed at a number of raingauges in and around the region. The areal average is worked out
using two methods: (a) arithmetic average and (b) Thiessen method.
(a) Arithmetic Average
For the arithmetic average method, rainfall stations located inside and very near to the
catchment boundary are considered and equal weights are assigned to all of them. Since
11 stations are considered, the individual station weights work out to 0.0909, as given
in Table 3.1 below. On the basis of these equal station weights the daily areal average is
computed. The compiled areal daily rainfall obtained with the arithmetic average method is
shown for the year 1994 in Fig. 3.2.
Table 3.1: List of stations and corresponding weights for arithmetic average method
Areal computation – Arithmetic Average
Areal series: BILODRA MA1
Station weights
BALASINOR = 0.0909
DAKOR = 0.0909
KAPADWANJ = 0.0909
BAYAD = 0.0909
MAHISA = 0.0909
MAHUDHA = 0.0909
SAVLITANK = 0.0909
THASARA = 0.0909
VAGHAROLI = 0.0909
VADOL = 0.0909
KATHLAL = 0.0909
Sum = 0.9999
Fig. 3.2: Plot of areal daily rainfall for BILODRA catchment using arithmetic
average method
(b) Thiessen polygon method
Computation of areal average using Thiessen method is accomplished by first getting the
Thiessen polygon layer (defining the boundary of Thiessen polygon for each contributing
point rainfall station). The station weights are automatically worked out on the basis of areas
of these polygons with respect to the total area of the catchment. The layout of the Thiessen
polygons as worked out by the system is graphically shown in Fig. 3.1 and the
corresponding station weights are as given in Table 3.2. On the basis of these Thiessen
polygon weights the areal average of the basin is computed and this is shown in Fig. 3.3 for
the year 1994. In this case it may be noticed that there is no significant change in the values
of the areal rainfall obtained by the two methods, primarily because of the limited variation in
rainfall from station to station.
[Plot: areal average daily rainfall (arithmetic average), BILODRA catchment rainfall, 15/06/94 to 15/10/94, daily values up to about 250 mm]
Table 3.2: List of stations and corresponding weights as per Thiessen polygon
method
Areal computation – Thiessen Polygon Method
Areal series: BILODRA MA3
Station weights
ANIOR = 0.0127
BALASINOR = 0.0556
BAYAD = 0.1785
DAKOR = 0.0659
KAPADWANJ = 0.1369
KATHLAL = 0.0763
MAHISA = 0.0969
MAHUDHA = 0.0755
SAVLITANK = 0.0724
THASARA = 0.0348
VADOL = 0.1329
VAGHAROLI = 0.0610
Sum = 1.00
Fig. 3.3: Plot of areal daily rainfall for BILODRA catchment using Thiessen polygon method
3.5 Isohyetal and related methods
The main difficulty with the Thiessen method is its inability to deal with orographic effects
on rainfall. A method which can incorporate such effects is the isohyetal method, in which
lines of equal rainfall (= isohyets) are drawn by interpolation between point rainfall
stations, taking orographic effects into account.
In flat areas, where no orographic effects are present, the method simply interpolates linearly
between the point rainfall stations. Manually, the procedure is as follows. On a basin map the
locations of the rainfall stations within the basin, and outside but near the basin boundary,
are first plotted. Next, the stations are connected with their neighbouring stations by straight
lines. Depending on the rain depths for which isohyets are to be shown, the positions of the
isohyet(s) on these connecting lines are marked by linear interpolation between the two
neighbouring stations. After this has been completed for all connected stations, smooth curves are
[Plot: areal average daily rainfall (Thiessen weights), BILODRA catchment rainfall, 15/06/94 to 15/10/94, daily values up to about 250 mm]
drawn through the points marked on the straight lines, joining the points of equal rainfall for
which isohyets are to be shown; see Figure 3.5. In drawing the isohyets, personal experience
with local conditions and information on storm orientation can be taken into account.
Subsequently, the area between two adjacent isohyets and the catchment boundary is
planimetered. The average rainfall of two adjacent isohyets is assumed to have occurred
over the entire inter-isohyet area. Hence, if the isohyets are indicated by P1, P2, ..., Pn with
inter-isohyet areas a1, a2, ..., an-1, the mean precipitation over the catchment is computed
from equation (3.4):
It is noted that if the maximum and/or minimum point rainfall values lie within the
catchment boundaries, then P1 and/or Pn should be replaced by the highest and/or lowest
point rainfall value. A slightly biased result will be obtained if, for example, the lowest
(highest) isohyet is located outside the catchment area, since averaging over two successive
isohyets will then underestimate (overestimate) the average rainfall in the area bounded by
the catchment boundary and the first isohyet inside it.
Figure 3.5: Example of drawing isohyets using linear interpolation
For flat areas the isohyetal method is superior to the Thiessen method if individual storms
are considered, as it allows incorporation of storm features like orientation; for monthly,
seasonal or annual values no such preference applies. Its added value is greatest when
topographically induced meteorological features such as orographic effects are present in
the catchment rainfall. In such cases the above procedure is executed with the catchment
map overlaying a topographical map, so that the isohyets can be drawn parallel to the
contour lines. The extent of rain shadow areas on the leeward side of mountain chains can
also easily be identified from topographical maps. The computations are again carried out
with the aid of equation 3.4. In such situations the isohyetal method is likely to be superior
to the Thiessen method.
The isopercental method is very well suited to incorporate long term seasonal orographical
patterns in drawing isohyets for individual storms or seasons. The assumption is that the
long term seasonal orographical effect as displayed in the isohyets of season normals
applies for individual storms and seasons as well. The procedure involves the following
steps, and is worked out in Example 3.2:
1. compute point rainfall as percentage of seasonal normal for all point rainfall stations
P = [ a1·(P1 + P2)/2 + a2·(P2 + P3)/2 + ... + an-1·(Pn-1 + Pn)/2 ] / A        (3.4)
[Figure 3.5 shows six stations with observed rainfall values 12.3, 9.2, 9.1, 7.2, 7.0 and 4.0 mm, connected by straight lines on which the positions of the 4, 6, 8, 10 and 12 mm isohyets are found by interpolation.]
2. draw isopercentals (= lines of equal ratio of actual point rainfall to station normal rainfall)
on a transparent overlay
3. superimpose the overlay on the seasonal isohyetal map
4. mark each crossing of seasonal isohyets with isopercentals
5. multiply, for each crossing, the isohyet value by the isopercental value and add the result
to the map with the observed rainfall values; the data set is thus extended with the rainfall
estimates derived in this step
6. draw isohyets using linear interpolation while making use of all data points, i.e. observed
and estimated data (see step 5).
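Step 5 in numbers, with hypothetical values: at a crossing where the normal-rainfall isohyet reads 1400 mm and the isopercental reads 20%, the estimated storm rainfall is:

```python
def storm_estimate(normal_isohyet_mm, isopercental_pct):
    # isohyet value multiplied by the isopercental (a percentage of the normal)
    return normal_isohyet_mm * isopercental_pct / 100.0

print(storm_estimate(1400.0, 20.0))  # 280.0
```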
Special attention must be paid to situations where no raingauge stations exist at the higher
elevations. The orographic effect then has to be extrapolated from the lower reaches
of the mountains by estimating a relation between rainfall and elevation, which is assumed to
be valid for the higher elevations as well. Using this rainfall-elevation curve, a number of
points in the ungauged upper reaches are added to the point rainfall data to guide the
interpolation process.
Figure 3.6:
Principle of hypsometric method
A simple technique to deal with such situations is the hypsometric method (see e.g.
Dingman, 1994), in which a precipitation-elevation curve is combined with an area-elevation
curve (the hypsometric curve) to determine the areal rainfall. This method avoids
recurrent planimetering of inter-isohyet areas, while the results will be similar to those of the
isohyetal method. The precipitation-elevation curve has to be prepared for each storm,
month, season or year, but its development will be guided by the rainfall normal-elevation
curve also called the orographic equation. Often the orographic equation can be
approximated by a simple linear relation of the form:
P(z) = a + bz (3.5)
This relation may vary systematically in a region (e.g. the windward side of a mountain range
may have a more rapid increase of precipitation with elevation than the leeward side). In
such cases separate hypsometric curves and orographic equations are established for the
distinguished sub-regions. The areal rainfall is estimated by:
(3.6)
where: P = areal rainfall
P(zi) = rainfall read from precipitation-elevation curve at elevation zi
∆A(zi) = percentage of basin area contained within elevation zi ± 1/2∆zi
n = number of elevation intervals into which the hypsometric curve has been divided.
[Figure 3.6 shows a precipitation-elevation curve (rainfall in mm versus elevation in m+MSL) next to a hypsometric curve (percentage of basin area above a given elevation), indicating how P(zi) and ∆A(zi) are read off for an elevation band ∆z around zi.]
P = Σ (i = 1 to n) P(zi)·∆A(zi)        (3.6)
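A sketch of equation (3.6), with a hypothetical precipitation-elevation curve sampled in four elevation bands:

```python
# P(zi) read from the precipitation-elevation curve (mm) per elevation band,
# and dA(zi), the fraction of basin area in each band (hypothetical values):
p_of_z = [900.0, 1050.0, 1200.0, 1400.0]
dA     = [0.40, 0.30, 0.20, 0.10]   # area fractions, must sum to 1

P = sum(p * a for p, a in zip(p_of_z, dA))
print(round(P, 1))  # 1055.0
```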
Example 3.2
In this example the application of the isopercental method is demonstrated (NIH, 1988). The
areal rainfall for the storm of 30 August 1982 has to be determined for the catchment shown
in Figure 3.7a. The total catchment area amounts to 5,600 km². The observed and normal
annual rainfall amounts for the point rainfall stations in the area are given in Table 3.3.
Station        30 Aug 1982 storm (mm)   Normal annual rainfall (mm)   Storm as % of normal
1. Paikmal     338.0                    1728                          19.6
2. Padampur    177.0                    1302                          13.6
3. Bijepur     521.0                    1237                          42.1
4. Sohela      262.0                    1247                          21.0
5. Binka       158.0                    1493                          10.6
6. Bolangir    401.6                    1440                          27.9
Table 3.3: Storm rainfall and annual normals
For each station the point rainfall as a percentage of the annual normal is displayed in the last
column of Table 3.3. Based on this information isopercentals are drawn on a transparent
overlay, which is subsequently superimposed on the annual normal isohyetal map. The
intersections of the isopercentals and isohyets are identified, and for each intersection the
isopercental is multiplied by the isohyet value to get an estimate of the storm rainfall at that
point. These estimates are then added to the point rainfall observations to draw the isohyets;
see Figure 3.7b. The inter-isohyet areas are planimetered and the areal rainfall is subsequently
computed with the aid of equation 3.4, as shown in Table 3.4.
Isohyetal range (mm)   Mean rainfall (mm)   Area (km²)   Volume (km²·mm)
110-150                130                  80           10400
150-200                175                  600          105000
177-200                188.5                600          113100
200-250                225                  3370         758250
250-300                275                  620          170500
300-400                350                  230          80500
400-500                450                  90           40500
500-521                510.5                10           5105
Total                                       5600         1283355
Average = 1283355/5600 = 229.2 mm
Table 3.4: Computation of areal rainfall by isohyetal/isopercental method
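The computation of Table 3.4 can be reproduced directly from equation (3.4):

```python
# Mean inter-isohyet rainfall (mm) and planimetered areas (km2) from Table 3.4:
mean_rain = [130, 175, 188.5, 225, 275, 350, 450, 510.5]
area      = [80, 600, 600, 3370, 620, 230, 90, 10]

volume = sum(p * a for p, a in zip(mean_rain, area))  # km2.mm
areal  = volume / sum(area)                           # mm over the 5,600 km2
print(volume, round(areal, 1))  # 1283355.0 229.2
```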
Figure 3.7a:
Isopercental map
Figure 3.7b:
Isohyetal map drawn by
isopercental method
3.6 Kriging method
General
The Kriging Method is an interpolation method. It provides rainfall estimates (or estimates of
any other variable) at points (point-kriging) or blocks (block-kriging) based on a weighted
average of observations made at surrounding stations. In this section point-kriging will be
discussed. In the application of the kriging method for areal rainfall estimation and drawing of
isohyets a dense grid is put over the catchment. By estimating the rainfall for the gridpoints
the areal rainfall is simply determined as the average rainfall of all grid points within the
catchment. In addition, in view of the dense grid, it is very easy to draw isohyets based on
the rainfall values at the grid points.
At each gridpoint the rainfall is estimated from:
(3.7)
where: Pe0 = rainfall estimate at some gridpoint “0”
w0,k = weight of station k in the estimate of the rainfall at point “0”
Pk = rainfall observed at station k
N = number of stations considered in the estimation of Pe0
The weights are different for each grid point and observation station. The weight given to a
particular observation station k in estimating the rainfall at gridpoint “0” depends on the
gridpoint-station distance and the spatial correlation structure of the rainfall field. The kriging
method provides weights, which have the following properties:
• the weights are linear, i.e. the estimates are weighted linear combinations of the
available observations
• the weights lead to unbiased estimates of the rainfall at the grid points, i.e. the expected
estimation error at all grid points is zero
• the weights minimise the error variance at all grid points.
Pe0 = Σ (k = 1 to N) w0,k·Pk        (3.7)
The error variance minimisation in particular distinguishes the kriging method from other
methods such as inverse distance weighting. The advantage of the kriging method over
other methods is that, besides the best linear estimate of rainfall at a grid point, it also
provides the uncertainty of that estimate. The latter property makes the method useful when
locations for additional stations have to be selected to upgrade the network, because the
new locations can then be chosen such that the overall error variance is reduced most.
Bias elimination and error variance minimisation
The claims of unbiasedness and minimum error variance require further explanation. Let the
true rainfall at location 0 be indicated by P0 then the estimation error at “0” becomes:
e0 = Pe0 – P0 (3.8)
with Pe0 estimated by (3.7). It is clear from (3.8) that any statement about the mean and
variance of the estimation error requires knowledge about the true behaviour of the rainfall at
unmeasured locations, which is not known. This problem is solved by hypothesising:
• that the rainfall in the catchment is statistically homogeneous so that the rainfall at all
observation stations is governed by the same probability distribution
• consequently, under the above assumption also the rainfall at the unmeasured locations
in the catchment follows the same probability distribution as applicable to the
observation sites.
Hence, any pair of locations within the catchment (measured or unmeasured) has a joint
probability distribution that depends only on the distance between the locations and not on
their locations. So:
• at all locations E[P] is the same and hence E[P(x1)] – E[P(x1-d)] = 0, where d refers to
distance
• the covariance between any pair of locations is only a function of the distance d between
the locations and does not depend on the locations themselves: C(d).
The unbiasedness implies:
Hence for each and every grid point the sum of the weights should be 1 to ensure
unbiasedness:
(3.9)
The error variance can be shown to be (see e.g. Isaaks and Srivastava, 1989):
(3.10)
where 0 refers to the site with unknown rainfall and i,j to the observation station locations.
Minimising the error variance implies equating the N first partial derivatives of σe² with
respect to the w0,i to zero and solving for the w0,i. In doing so the weights w0,i will not
necessarily sum to 1, as required to ensure unbiasedness. Therefore, in the computational
process one more equation is added to the set of equations to solve for the w0,i, which
introduces a Lagrangian multiplier µ. The set of equations to solve for the station weights,
also called the ordinary kriging system, then reads:
E[e0] = 0, so: E[ Σ (k = 1 to N) w0,k·Pk ] − E[P0] = 0, or: E[P0]·( Σ (k = 1 to N) w0,k − 1 ) = 0

Σ (k = 1 to N) w0,k = 1        (3.9)

σe² = E[(Pe0 − P0)²] = σP² + Σ (i = 1 to N) Σ (j = 1 to N) w0,i·w0,j·Ci,j − 2 Σ (i = 1 to N) w0,i·C0,i        (3.10)
C . w = D (3.11)
where:
Note that the last column and row in C are added because of the introduction of the
Lagrangian multiplier µ in the set of N+1 equations. By inverting the covariance matrix, the
station weights to estimate the rainfall at location 0 follow from (3.11) as:
w = C⁻¹ · D        (3.12)
The error variance is then determined from:
σe² = σP² − wᵀ · D        (3.13)
From the above equations it is observed that C⁻¹ needs to be determined only once, as it
depends solely on the covariances between the observation stations, which are a function of
the inter-station distances only. Matrix D differs for every grid point, as the distances
between location "0" and the gauging stations vary from grid point to grid point.
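The ordinary kriging system (3.11)-(3.13) can be solved with a few lines of linear algebra. The sketch below uses pure Python, hypothetical station coordinates, and an assumed exponential covariance model (C0 = 0, C1 = 10, a = 10):

```python
import math

# Assumed exponential covariance model (cf. equation 3.15):
def cov(d, c0=0.0, c1=10.0, a=10.0):
    return c0 + c1 if d == 0 else c1 * math.exp(-3.0 * d / a)

def solve(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical station coordinates and the grid point "0" to estimate:
pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
target = (1.0, 1.0)
n = len(pts)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Ordinary kriging system C.w = D (equation 3.11); the extra row/column of
# ones carries the Lagrangian multiplier mu enforcing sum(weights) = 1.
C = [[cov(dist(pts[i], pts[j])) for j in range(n)] + [1.0] for i in range(n)]
C.append([1.0] * n + [0.0])
D = [cov(dist(p, target)) for p in pts] + [1.0]

w = solve(C, D)   # w[:n] = station weights, w[n] = mu
# Error variance (equation 3.13): sigma_e^2 = sigma_P^2 - w^T.D
err_var = cov(0.0) - sum(wi * di for wi, di in zip(w, D))
print(round(sum(w[:n]), 6), round(err_var, 2))
```

The station weights sum to 1, as equation (3.9) requires, and the error variance lies between 0 and the process variance σP² = C(0).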
Covariance and variogram models
To actually solve above equations a function is required which describes the covariance of
the rainfall field as a function of distance. For this we recall the correlation structure between
the rainfall stations discussed in module 9. The spatial correlation structure is usually well
described by an exponential relation of the following type:
r(d) = r0 exp(-d/d0) (3.14)
where: r(d) = correlation coefficient as a function of distance
r0 = correlation coefficient at small distance, with r0 ≤ 1
d0 = characteristic correlation distance.
Two features of this function are of importance:
• r0 ≤ 1, where values < 1 are usually found in practice due to measurement errors or
micro-climatic variations
• the characteristic correlation distance d0, i.e. the distance at which r(d) reduces to 0.37·r0.
It is a measure of the spatial extent of the correlation; e.g. d0 for daily rainfall is much
smaller than d0 for monthly rainfall. Note that for d = 3·d0 the correlation has effectively
vanished (only 5% of the correlation at d = 0 is left).
The exponential correlation function is shown in Figure 3.8.
The covariance function of the exponential model is generally expressed as:
(3.15)
        | C11  ...  C1N  1 |          | w0,1 |          | C0,1 |
        |  .   ...   .   1 |          |  .   |          |  .   |
C  =    | CN1  ...  CNN  1 |    w  =  | w0,N |    D  =  | C0,N |
        | 1    ...   1   0 |          |  µ   |          |  1   |
C(d) = C0 + C1                 for d = 0
C(d) = C1·exp(−3d/a)           for d > 0        (3.15)
Since by definition C(d) = r(d)·σP², the coefficients C0 and C1 in (3.15) can be
related to those of the exponential correlation model in (3.14) as follows:

C0 = σP²·(1 − r0);   C1 = σP²·r0;   a = 3·d0        (3.16)
Figure 3.8 Spatial correlation structure of rainfall field
In the kriging literature, instead of the covariance function C(d), often the semi-variogram
γ(d) is used, which is half the expected squared difference between the rainfall at locations
a distance d apart; γ(d) is easily shown to be related to C(d) as:

γ(d) = ½·E[{P(x1) − P(x1−d)}²] = σP² − C(d)        (3.17)
Hence the (semi-)variogram of the exponential model reads:
(3.18)
Figure 3.9:
Exponential covariance model
[Figure 3.8 plots the exponential spatial correlation function r(d) = r0·exp(−d/d0): the curve starts at r0 near d = 0 and has decayed to 0.37·r0 at d = d0.]
γ(d) = 0                               for d = 0
γ(d) = C0 + C1·(1 − exp(−3d/a))        for d > 0        (3.18)
[Figure 3.9 plots the exponential covariance function: C(d) = C0 + C1 at d = 0 (with nugget effect C0) and C(d) = C1·exp(−3d/a) for d > 0; at the range d = a, C(a) = 0.05·C1 ≈ 0.]
Figure 3.10:
Exponential variogram model
Features of the exponential model are the following:
• C0, called the nugget effect, introduces a discontinuity at the origin; according to (3.16),
C0 = σP²·(1 − r0), hence in most applications of this model to rainfall data a small nugget
effect will be present
• the distance 'a' in the covariance function and variogram is called the range and refers
to the distance beyond which the functions are essentially constant; for the exponential
model a = 3·d0 can be applied
• C0 + C1 is called the sill of the variogram; it is the limiting value of γ(d) for large
distances and equals σP²; it also gives the covariance C(d) at d = 0.
Other Covariance and semi-variogram models
Beside the exponential model other models are in use for ordinary kriging, viz:
• Spherical model, and
• Gaussian model
These models have the following forms:
Spherical:

γ(d) = C0 + C1·( (3/2)·(d/a) − (1/2)·(d/a)³ )        for d ≤ a
γ(d) = C0 + C1                                       for d > a        (3.19)

Gaussian:

γ(d) = C0 + C1·( 1 − exp(−3d²/a²) )        (3.20)

The Spherical and Gaussian models are shown with the Exponential model in Figure 3.11.
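The three models can be written down directly; with the Figure 3.11 parameters (C0 = 0, C1 = 1, a = 10) the spherical model reaches the sill exactly at the range, while the exponential and Gaussian models reach 95% of it there:

```python
import math

# Variogram models of equations (3.18)-(3.20), Figure 3.11 parameters as defaults.
def exponential(d, c0=0.0, c1=1.0, a=10.0):
    return 0.0 if d == 0 else c0 + c1 * (1.0 - math.exp(-3.0 * d / a))

def spherical(d, c0=0.0, c1=1.0, a=10.0):
    if d == 0:
        return 0.0
    if d <= a:
        return c0 + c1 * (1.5 * d / a - 0.5 * (d / a) ** 3)
    return c0 + c1   # constant at the sill beyond the range

def gaussian(d, c0=0.0, c1=1.0, a=10.0):
    return 0.0 if d == 0 else c0 + c1 * (1.0 - math.exp(-3.0 * d ** 2 / a ** 2))

# At the range d = a: sill exactly (spherical) vs 95% of the sill (other two):
print(spherical(10.0), round(exponential(10.0), 3), round(gaussian(10.0), 3))  # 1.0 0.95 0.95
```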
[Figure 3.10 plots the exponential variogram γ(d) = C0 + C1·(1 − exp(−3d/a)) for d > 0: it rises from the nugget C0 near the origin towards the sill C0 + C1 = σP², reached approximately at the range d = a.]
Figure 3.11:
Example of Spherical,
Gaussian and Exponential
type of variogram models,
with C0=0,C1=1 and a = 10
The spherical model behaves linearly at small separation distances near the origin, with the
tangent at the origin intersecting the sill at about 2/3 of the range "a". The model reaches
the sill exactly at the range. The Gaussian model suits extremely continuous phenomena,
with only gradually diminishing correlation near the origin, much smoother than the other
two models. Its range "a" is the distance at which the variogram value is 95% of the sill.
The exponential model rises more sharply than the other two but flattens out more gradually
at larger distances; its tangent at the origin reaches the sill at about 1/5 of the range.
Sensitivity analysis of variogram model parameters
To show the effect of variation in the covariance or variogram models on the weights
attributed to the observation stations to estimate the value at a grid point an example
presented by Isaaks and Srivastava (1989) is presented. Observations made at the stations
as shown in Figure 3.12 are used. Some 7 stations are available to estimate the value at
point ‘0’ (65,137).
Figure 3.12:
Layout of network with
location of stations 1, …., 7
Observations:
Station 1: 477
Station 2: 696
Station 3: 227
Station 4: 646
Station 5: 606
Station 6: 791
Station 7: 783
The following models (cases) have been applied to estimate the value for “0”:
The covariance functions and variograms for the five cases are shown in Figures 3.13 and 3.14.
Figure 3.13:
Covariance models for the
various cases
[Figure 3.12 plots stations 1 to 7 and the point to be estimated in the x-y plane, with x ranging from 60 to 80 and y from 126 to 146.]
Case 1: γ(d) = 10·(1 − exp(−3d/10));                                    C0 = 0, C1 = 10, a = 10
Case 2: γ(d) = 20·(1 − exp(−3d/10)) = 2·γCase 1(d);                     C0 = 0, C1 = 20, a = 10
Case 3: γ(d) = 10·(1 − exp(−3·(d/10)²));                                C0 = 0, C1 = 10, a = 10 (Gaussian)
Case 4: γ(d) = 0 for d = 0; γ(d) = 5 + 5·(1 − exp(−3d/10)) for d > 0;   C0 = 5, C1 = 5, a = 10
Case 5: γ(d) = 10·(1 − exp(−3d/20));                                    C0 = 0, C1 = 10, a = 20
Figure 3.14:
Semi-variograms for the
various cases
The results of the estimate and variance at point “0” as well as the weights of the stations
computed with the models in estimating point “0” are presented in Table 3.5.
Stations 1 2 3 4 5 6 7Case Estimate
at “0”
Error
variance Distance to “0” 4.47 3.61 8.06 9.49 6.71 8.94 13.45
(mm) (mm
2
) weights
1 593 8.86 0.17 0.32 0.13 0.09 0.15 0.06 0.09
2 593 17.91 0.17 0.32 0.13 0.09 0.15 0.06 0.09
3 559 4.78 -0.02 0.68 0.17 -0.01 0.44 -0.29 0.04
4 603 11.23 0.15 0.18 0.14 0.14 0.13 0.13 0.14
5 572 5.76 0.18 0.38 0.14 0.07 0.20 0.00 0.03
ID 590 - 0.44 0.49 0.02 0.01 0.02 0.01 0.01
Table 3.5: Results of computations for Cases 1 to 5 and ID
(=Inverse Distance Method) with p = 2
From the results the following can be concluded:
• Effect of scale: compare Case 1 with Case 2
In Case 2 the process variance, i.e. the sill, is twice as large as in Case 1. The only
effect this has on the result is a doubling of the error variance at "0". The weights, and
therefore also the estimate, remain unchanged. This result is easily confirmed from
equations (3.12) and (3.13), as C, D and σP² are all multiplied by a factor 2 in the second case.
• Effect of shape: compare Case 1 with Case 3
In Case 3 the spatial continuity near the origin is much larger than in Case 1, but the sill
is the same in both cases. It is observed that in Case 3 the estimate for “0” is almost
entirely determined by the three nearest stations. Note that kriging does cope with
clustered stations; even negative weights are generated by stations in the clusters of
stations (5, 6) and (1, 2) to reduce the effect of a particular cluster. Note also that the
estimate has changed and that the error variance has reduced as more weight is given
to stations at small distance. It shows that due attention is to be given to the correlation
structure at small distances as it affects the outcome significantly.
• The nugget effect: compare Case 1 with Case 4
In Case 4, which shows a strong nugget effect, the spatial correlation has substantially
been reduced near the origin compared to Case 1. As a result the model discriminates
less among the stations. This is reflected in the weights given to the stations. It is
observed that almost equal weight is given to the stations in Case 4. Had the correlation
been zero, the weights would have been exactly equal.
0
2
4
6
8
10
12
14
16
18
20
0 2 4 6 8 10 12 14 16 18 20
Distance (d)
CovarianceC(d)
F2
F1
F3
F4
F5
• Effect of range: compare Case 1 with Case 5
The range in Case 5 is twice as large as in Case 1. It means that the spatial correlation
is more pronounced than in Case 1. Hence one would expect more weight to the
nearest stations and a reduced error variance, which is indeed the case as can be
observed from Table 3.5. Cases 1 and 5 are basically representative of rainfall at a low
and a high aggregation level, respectively (e.g. daily data and monthly data).
There are more effects to be concerned about like effects of anisotropy (spatial covariance
being direction dependent) and spatial inhomogeneity (like trends due to orographic effects).
The latter can be dealt with by normalising or detrending the data prior to the application of
kriging, and denormalising or re-invoking the trend after the computations. In case of
anisotropy the contour map of the covariance surface will be elliptic rather than circular.
Anisotropy requires variograms to be developed separately for the two main axes of the ellipse.
Estimation of the spatial covariance function or variogram.
Generally the spatial correlation (and hence the spatial covariance) as a function of distance
will show a huge scatter, as shown in Figures 1.1 to 1.4 of Module 9. To reduce the scatter
the variogram is estimated from average values per distance interval. The distance
intervals are equal and should be selected such that sufficient data points fall in each
interval, while the true nature of the spatial correlation is still reflected in the estimated
variogram.
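A sketch of this binned estimation, with hypothetical station coordinates and rainfall values: semi-variances 0.5·(Pi − Pj)² are averaged per lag-distance interval.

```python
import math
from collections import defaultdict

def empirical_variogram(coords, values, lag=10.0):
    """Average semi-variance 0.5*(Pi - Pj)^2 per lag-distance bin."""
    sums, counts = defaultdict(float), defaultdict(int)
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.hypot(coords[i][0] - coords[j][0],
                           coords[i][1] - coords[j][1])
            b = int(d // lag)                   # bin index for this pair
            sums[b] += 0.5 * (values[i] - values[j]) ** 2
            counts[b] += 1
    # map each bin centre to the averaged semi-variance
    return {(b + 0.5) * lag: sums[b] / counts[b] for b in sorted(sums)}

# Hypothetical station coordinates (km) and monthly rainfall (mm):
coords = [(0, 0), (8, 0), (0, 7), (15, 12), (22, 3)]
values = [120.0, 110.0, 132.0, 95.0, 88.0]
print(empirical_variogram(coords, values))
```

A variogram model such as (3.18)-(3.20) would then be fitted through these binned points.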
Alternative to kriging
HYMOS offers an alternative to station weight determination by kriging through the inverse
distance method. In this method the station weights and estimate are determined by:
(3.21)
It is observed that the weights are proportional to the distance between “0” and station j to
some power p. For rainfall estimation often p = 2 is applied.
Unlike kriging, the inverse distance method does not take account of station
clusters, which is convincingly shown in the last row of Table 3.5: the estimate for “0” is seen
to be almost entirely determined by the cluster (1, 2) nearest to “0”. Hence, this
method should be applied only when the stations are more or less evenly distributed and
no clusters exist.
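The inverse distance computation of equation (3.21) can be sketched in a few lines. This is a minimal sketch, not the HYMOS code; the function name and arguments are illustrative:

```python
import numpy as np

def idw_estimate(coords, values, target, p=2):
    """Inverse distance weighted estimate at 'target' (equation 3.21):
    w_k = (1/d_0k^p) / sum_j (1/d_0j^p),  Pe_0 = sum_k w_k * P_k.
    """
    coords = np.asarray(coords, dtype=float)
    d = np.hypot(*(coords - np.asarray(target, dtype=float)).T)
    if np.any(d == 0):                  # target coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d ** p
    w /= w.sum()                        # weights sum to one (unbiasedness)
    return float(np.dot(w, values))
```

Because the weights sum to one, the estimate always lies between the smallest and largest observed value, but a nearby cluster of stations dominates the result, which is exactly the drawback discussed above.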
Example 3.3: Application of Kriging Method
The kriging method has been applied to monthly rainfall in the BILODRA catchment, i.e. the
south-eastern part of the KHEDA basin in Gujarat. Daily rainfall for the years 1960-2000
has been aggregated to monthly totals. The spatial correlation structure of the monthly
rainfall for values > 10 mm (to eliminate the dry periods) is shown in Figure 3.15.
33. Hydrology Project Training Module File: “ 11 How to compile rainfall data.doc” Version: Feb. 2002 Page 22
Figure 3.15: Spatial correlation structure of monthly rainfall data in and around Bilodra catchment (values > 10 mm)
From Figure 3.15 it is observed that the correlation decays only slowly. Fitting an exponential
correlation model to the monthly data gives: r0 ≈ 0.8 and d0 = 410 km. The average variance
of the monthly point rainfall data (> 10 mm) amounts to approximately 27,000 mm2. This
implies that the sill of the semi-variogram will be 27,000 mm2 and the range approximately
1,200 km (≈ 3 d0). The nugget is theoretically σP²(1 - r0), but in practice it is obtained by
fitting the semi-variogram model to the plot of semi-variance versus distance. In making this
plot a lag-distance is to be applied, i.e. a distance interval over which the semi-variances are
averaged to reduce the spread in the plot. In the example a lag-distance of 10 km has been
applied. The results of the fit, overall and in detail, to a spherical semi-variogram model are
shown in Figure 3.16. A nugget effect (C0) of 2,000 mm2 is observed.
Figure 3.16: Fit of spherical model to semi-variance (overall and detail), monthly rainfall Bilodra
Similarly, the semi-variance was modelled by the exponential model, which in Figure 3.17 is
seen to fit equally well in this case, with parameters C0 = 2,000 mm2, sill = 27,000 mm2 and
range = 800 km. Note that C0 is considerably smaller than one would expect based on the
spatial correlation function shown in Figure 3.15: to arrive at the nugget value of 2,000 mm2,
an r0 value of 0.93 would be needed. Important for fitting the semi-variogram model is to
apply an appropriate value for the lag-distance, such that the noise in the semi-variance is
substantially reduced.
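The two semi-variogram models used in these fits can be written as short functions, here with the parameter values from the example. This is a sketch under one common convention; note that for the exponential model conventions differ, and the form below treats the range parameter as the effective range at which about 95% of the sill is reached:

```python
import numpy as np

def spherical(h, c0=2000.0, c1=25000.0, a=1200.0):
    """Spherical semi-variogram: nugget c0, partial sill c1, range a,
    so that gamma reaches the sill c0 + c1 at h = a."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < a,
                 c0 + c1 * (1.5 * h / a - 0.5 * (h / a) ** 3),
                 c0 + c1)
    return np.where(h == 0, 0.0, g)     # gamma(0) = 0 by definition

def exponential(h, c0=2000.0, c1=25000.0, a=800.0):
    """Exponential semi-variogram; approaches the sill asymptotically,
    reaching ~95% of it at the effective range h = a."""
    h = np.asarray(h, dtype=float)
    g = c0 + c1 * (1.0 - np.exp(-3.0 * h / a))
    return np.where(h == 0, 0.0, g)
```

With c0 = 2,000 mm2 and c1 = 25,000 mm2 both models reproduce the sill of 27,000 mm2 found above.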
[Charts of Figure 3.16: semi-variance (mm2) versus distance (km); the overall plot (0 to 1,200 km) is annotated with the nugget (C0), the sill (equal to the variance) and the range; the detail plot covers 0 to 100 km.]
The results with the spherical model applied to the rainfall of June 1984 in the BILODRA
catchment are shown in Figure 3.18. A grid size of 500 m has been applied. The variance of
the estimates is shown in Figure 3.19. It is observed that the estimation variance at the
observation points is zero; further away from the observation stations the variance is seen
to increase considerably. Reference is made to Table 3.6 for a tabular output.
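The kriging computation behind these maps can be sketched as follows. This is a simplified point-kriging sketch under the fitted spherical model, not the HYMOS grid implementation; function names are illustrative. It also shows the property noted above, namely that the estimation variance is zero at the observation points:

```python
import numpy as np

def spherical_gamma(h, c0=2000.0, c1=25000.0, a=1200.0):
    """Spherical semi-variogram with the parameters fitted in the example."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < a, c0 + c1 * (1.5 * h / a - 0.5 * (h / a) ** 3), c0 + c1)
    return np.where(h == 0, 0.0, g)

def ordinary_kriging(coords, values, target):
    """Ordinary kriging estimate and estimation variance at 'target'."""
    coords = np.asarray(coords, dtype=float)
    n = len(values)
    # distances between stations and from the stations to the target
    d = np.hypot(coords[:, None, 0] - coords[None, :, 0],
                 coords[:, None, 1] - coords[None, :, 1])
    d0 = np.hypot(coords[:, 0] - target[0], coords[:, 1] - target[1])
    # kriging system with a Lagrange multiplier enforcing sum(w) = 1
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(d)
    A[n, n] = 0.0
    b = np.append(spherical_gamma(d0), 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = float(np.dot(w, values))
    variance = float(np.dot(w, spherical_gamma(d0)) + mu)
    return estimate, variance
```

At a station location the system is solved exactly by giving that station a weight of one, so the estimate equals the observed value and the variance vanishes, as seen in Figure 3.19.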
For comparison, the isohyets derived by the inverse distance method are also shown,
see Figure 3.20. The pattern deviates from the kriging results in the sense that the
isohyets are pulled more strongly towards the observation stations: as was shown in the
sensitivity analysis, the nearest station(s) carry more weight than in the kriging method.
Figure 3.17: Fit of exponential model to semi-variogram, monthly data Bilodra
Figures 3.18 and 3.19: Isohyets of June 1984 rainfall in Bilodra catchment using the spherical semi-variogram model (Figure 3.18) and the variance of the estimates at the grid points (Figure 3.19).
Figure 3.20: Isohyets derived for June 1984 rainfall in Bilodra catchment using
inverse distance weighting (compare with Figure 3.18)
Table 3.6: Example output of interpolation by kriging
Variogram parameters
Nugget (C0): 2000.000000
Sill (C1): 25000.000000
Range (a): 1200.000000
Grid characteristics:
Number of cells in X, Y: 200 200
Origin of X and Y Blocks: 0.000000E+00 0.000000E+00
Size of X and Y Blocks: 5.000000E-01 5.000000E-01
Search Radius: 1.000000E+10
Minimum number of samples: 4
Maximum number of samples: 15
Data: ANIOR MP2 1 at 65.473 63.440 value: 10.40000
Data: BALASINOR MP2 1 at 59.317 21.981 value: .00000
Data: BAYAD MP2 1 at 49.430 52.552 value: 1.00000
Data: BHEMPODA MP2 1 at 70.017 63.390 value: 18.70000
Data: DAKOR MP2 1 at 39.945 -.552 value: 176.00000
Data: KAPADWANJ MP2 1 at 32.921 29.687 value: 11.00000
Data: KATHLAL MP2 1 at 24.756 15.644 value: .00000
Data: MAHEMDABAD MP2 1 at .122 7.998 value: 68.20000
Data: MAHISA MP2 1 at 31.242 10.327 value: .00000
Data: MAHUDHA MP2 1 at 18.654 7.421 value: .00000
Data: SAVLITANK MP2 1 at 36.817 22.560 value: 54.00000
Data: THASARA MP2 1 at 46.848 3.977 value: 22.00000
Data: VADAGAM MP2 1 at 43.604 64.007 value: .00000
Data: VADOL MP2 1 at 45.951 23.984 value: .00000
Data: VAGHAROLI MP2 1 at 52.382 13.755 value: 5.00000
Estimated 40000 blocks
average 17.581280
variance 101.393300
Column Row Estimate Variance
1 1 45.806480 2685.862000
1 2 45.719250 2680.906000
1 3 45.625660 2676.289000
1 4 45.525860 2672.001000
1 5 45.420020 2668.018000
etc.
4. Transformation of non-equidistant to equidistant series
Data obtained from digital raingauges based on the tipping bucket principle may sometimes
be recorded as the time of each tip of the bucket, i.e. as a non-equidistant series.
HYMOS provides a means of transforming such a non-equidistant series into an equidistant
series by accumulating the individual tip measurements into the corresponding time intervals.
All time intervals for which no tip has been recorded are filled with zero values.
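This accumulation can be sketched as follows. This is a minimal sketch, not the HYMOS routine; the depth per tip of 0.5 mm and the hourly interval are assumed values for illustration only:

```python
from datetime import datetime, timedelta

def tips_to_equidistant(tip_times, start, end, interval_minutes=60,
                        mm_per_tip=0.5):
    """Accumulate tipping-bucket tip times into an equidistant series.
    Intervals in which no tip occurred keep a zero value."""
    step = timedelta(minutes=interval_minutes)
    n = int((end - start) / step)
    series = [0.0] * n                  # one value per time interval
    for t in tip_times:
        if start <= t < end:
            k = int((t - start) / step)  # index of the interval holding t
            series[k] += mm_per_tip
    return series
```

For example, three tips at 00:10, 00:50 and 02:05 accumulated into hourly intervals give rainfall in the first and third hour and zero in the second.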
5. Compilation of minimum, maximum and mean series
The annual, seasonal or monthly maximum series of rainfall is frequently required for flood
analysis, whilst minimum series may be required for drought analysis. Options are available
in HYMOS for the extraction of minimum, maximum, mean, median and any two user-
defined percentile values (at a time) for any defined period within the year or for the
complete year.
For example, if the selected time period is ‘monsoon months’ (say July to October) and the
time interval of the series to be analysed is ‘ten-daily’, then the above statistics are extracted
for every monsoon period between a specified start and end date.
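The extraction of these statistics for one period can be sketched as follows. This is a minimal sketch, not the HYMOS option; the simple linear-interpolation percentile used here is one of several common conventions:

```python
import statistics

def period_statistics(values, percentiles=(25, 90)):
    """Extract min, max, mean, median and two user-defined percentile
    values from the series of one period (e.g. one monsoon season)."""
    v = sorted(values)
    n = len(v)

    def pctl(p):
        # linear interpolation between the closest ranks
        rank = (p / 100.0) * (n - 1)
        lo = int(rank)
        hi = min(lo + 1, n - 1)
        return v[lo] + (rank - lo) * (v[hi] - v[lo])

    return {
        "min": v[0],
        "max": v[-1],
        "mean": sum(v) / n,
        "median": statistics.median(v),
        f"p{percentiles[0]}": pctl(percentiles[0]),
        f"p{percentiles[1]}": pctl(percentiles[1]),
    }
```

Applied to the ten-daily values of one monsoon period, this yields one row of the kind of table produced in Example 5.1.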
Example 5.1
From the daily rainfall records available for MEGHARAJ station (KHEDA catchment), a ten-daily
data series is compiled. For this ten-daily series covering the period 1961 to 1997, a few
statistics such as minimum, maximum, mean, median and the 25 and 90 percentile values are
compiled specifically for the period between 1st July and 30th September of every year.
These statistics are shown graphically in Fig. 5.1 and are listed in tabular form in Table 5.1.
Data for one year (1975) are not available and are thus missing. Many inferences may be
drawn from a plot of such statistics. Different patterns of variation between the 25 and 90
percentile values may be noticed for similar ranges of values in a year. The median is
consistently lower than the mean, suggesting a strong positive skew in the ten-daily data
(which is to be expected owing to the many zero or low values). A few extreme values have
been highlighted in the table for general observation.