This document analyzes the time complexity and accuracy of various algorithms used for depression filling in digital elevation models (DEMs). It discusses conventional algorithms such as Jenson and Domingue (O(n^2) time complexity) and Planchon and Darboux (O(n log n) average time complexity), as well as more recent approaches such as the priority-flood algorithm (O(n log n) time complexity) and the quantile classification method (roughly a thousand times faster than Jenson and Domingue). The document evaluates the performance of these algorithms in terms of processing time and the number of DEM cell modifications required, concluding that approaches such as priority-flood and quantile classification are better suited to large, high-resolution DEMs.
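The priority-flood idea can be illustrated with a short sketch. The following is a minimal Python rendering of the general approach (seed a min-heap with the DEM border, then raise interior pit cells to their spill elevation as the flood advances inward); it is illustrative only, not the reference implementation from any of the cited papers.

```python
import heapq

def priority_flood_fill(dem):
    """Fill depressions in a DEM given as a list of lists of elevations.

    Sketch of the priority-flood approach: seed a min-heap with the
    border cells, then repeatedly pop the lowest-elevation cell and
    raise any unvisited neighbour to at least that elevation.
    """
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    visited = [[False] * cols for _ in range(rows)]
    heap = []
    # Seed the queue with all edge cells: water can only drain off the border.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (filled[r][c], r, c))
                visited[r][c] = True
    while heap:
        elev, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr][nc]:
                visited[nr][nc] = True
                # A cell inside a pit is raised to the spill elevation.
                filled[nr][nc] = max(filled[nr][nc], elev)
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled
```

On a DEM with a single interior pit, the pit cell ends up raised to the elevation of the lowest spill path on the border, which is exactly the "filled" surface a depression-filling step should produce.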
Modelling An Overland Water Flow Path In An Urban Catchment Using GIS - Waqas Tariq
This document describes a GIS-based method for modeling overland water flow paths in an urban catchment area. The method incorporates subsurface drainage networks into the overland flow modeling. It first reviews existing DEM analysis and flow path determination methods. It then describes the authors' approach, which uses GIS tools like Arc Hydro to remove spurious sinks and flats from DEM data, assign flow directions, and delineate a final overland flow path model. This customized urban flow path method could help local governments better calculate flooding risks in developing areas.
This document provides an analysis of different approaches to computing the watershed transform in digital images. It discusses watershed transforms based on flooding, path-cost minimization, topology preservation, local conditions, and minimum spanning forests. For each approach, it describes the processing procedure, mathematical foundations, and classic algorithms. It aims to classify watershed transform algorithms according to criteria like solution uniqueness, topology preservation, and complexity. The document concludes by summarizing the approaches and discussing future work.
The document summarizes the HortonMachine, a spatial analysis package integrated within the JGrass GIS system. It began as standalone routines in C and has been rewritten in Java and integrated into JGrass. The HortonMachine contains tools for DEM manipulation, geomorphology, network analysis, hillslope analysis, basin attributes, statistics, and hydrogeomorphology modeling. It implements hydrological analyses and aims to provide tools adhering to standards of the scientific community.
Changes in dam break hydrodynamic modelling practice - Suter et al - Stephen Flood
Abstract: Today, many organisations rely on hydrodynamic modelling to assess the consequences of dam break failure on downstream populations and infrastructure. The availability of finite volume shock-capturing schemes and flexible mesh schematisations in widely used software platforms implies that dam break modelling projects will be carried out differently in the future: finite volume platforms allow widespread application of shock-capturing methods, while flexible mesh platforms can represent features in the study area more realistically and offer greater flexibility through varying mesh resolutions. Furthermore, the recent adoption of Graphics Processing Unit (GPU) technology in mainstream scientific and engineering computing will significantly decrease computation times at relatively low cost.
This paper examines the application of finite volume, flexible mesh and GPU technologies to dam break modelling. One-dimensional (1D) modelling results are compared to those from two-dimensional (2D) finite difference and finite volume approaches. The results demonstrate that there are differences between modelling approaches and that the computation times of 2D simulations can be significantly reduced by the use of GPU processors.
The document summarizes a presentation on geoinformatics in hydrology and water resources. It discusses watershed analysis using GRASS GIS, including delineating watershed boundaries and factors that influence the analysis. It also covers groundwater modeling in GRASS, including defining initial conditions, parameters for modeling flow and solute transport, and a case study applying the techniques. Remote sensing and field data can be used to generate accurate modeling inputs. The presentation provides an overview of conducting watershed and groundwater analyses using open-source GIS tools in GRASS.
Optimal design of storm sewer networks - Bhanu Pratap
This document provides a review of past, present and future approaches to optimal design of storm sewer networks. It discusses how optimization techniques have been used since the 1960s to minimize construction costs while ensuring system performance, moving from linear programming and non-linear programming to more advanced techniques like dynamic programming and discrete differential dynamic programming. The document also outlines key advantages of optimal design over traditional design methods.
This presentation gives a brief introduction to the concept of coupled CFD-DEM Modeling.
Link to file: https://drive.google.com/open?id=1nO2n49BwhzBtT6NnvpxADG5WsC9uMJ-i
SIBE2013-Paper-A 3-Dimensional Numerical Study Of Flow Patterns Around Three ... - Fitra Adinata
This document summarizes the results of a 3D numerical study of flow patterns around three types of drop spillways (I-Type, T-Type, and U-Type) using computational fluid dynamics software. The spillways are intended for use in small embung dams in Indonesia and were modeled in two stages. Simulation results including velocities, water levels, and discharge rates are presented and compared for the different spillway types and construction stages. Preliminary numerical modeling found that drop spillways can reduce construction costs compared to conventional designs and allow expansion without interrupting dam operations.
NUMERICAL MODELS AS MITIGATION MEASURES FOR RIVERINE AND URBAN FLOOD - Amaljit Bharali
The document discusses numerical modeling tools for riverine and urban flood mitigation. It provides an overview of the MIKE 11 and MIKE 21 modeling suites, including their strengths and limitations. MIKE 11 is useful for one-dimensional river modeling while MIKE 21 enables two-dimensional modeling of floodplains and urban areas. The document outlines the basic file types and steps to set up simulations in MIKE 11 and MIKE 21, including network, cross-section, boundary, bathymetry, and parameter files. Dynamically coupling MIKE 11 and MIKE 21 in MIKE FLOOD allows for integrated one-dimensional and two-dimensional flood modeling.
Gis based-hydrological-modeling.-a-comparative-study-of-hec-hms-and-the-xinan... - Sikandar Ali
This document compares the HEC-HMS and Xinanjiang hydrological models. HEC-HMS and the GIS extension HEC-GeoHMS were used to preprocess data and develop a rainfall-runoff model for the Wanjiabu catchment in China. Both HEC-HMS and the Xinanjiang model were applied to the catchment. The results showed that HEC-HMS was more convenient for parameter optimization but less accurate than the Xinanjiang model, possibly because Xinanjiang has more parameters. The document concludes by comparing the simulated and observed hydrographs from both models.
AIAA Technical Paper, Reduction of Semi-Truck Aerodynamic Drag - Sifat Syed
The document summarizes a study that tested methods to reduce aerodynamic drag on semi-trucks. Wind tunnel testing and computational fluid dynamics (CFD) analysis were conducted on a 1/34th scale semi-truck model to analyze the effects of lowering the freight container between the truck's axles and adding fairings around the gap between the container and cab. Wind tunnel results showed a 24% reduction in drag from lowering the container and a total 38% reduction when fairings were added. CFD analysis confirmed these results and provided additional flow visualization.
Cost Optimization of Elevated Circular Water Storage Tank - theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.
This document summarizes a computational fluid dynamics (CFD) simulation of flow over an Ahmed body using Reynolds-averaged Navier-Stokes (RANS) turbulence modelling. Three grids with different refinements were used. The Realizable k-epsilon turbulence model was chosen. The simulation results showed improved prediction of drag coefficient with finer grids but poorer prediction of velocity profiles compared to experimental data. Flow analysis identified two main vortices in the wake, with higher turbulence kinetic energy around the lower vortex, consistent with experiments.
Study of Velocity and Pressure Distribution Characteristics Inside Of Catalyt... - ijceronline
This document summarizes a study of velocity and pressure distribution inside two catalytic converter geometries - cylindrical and convergent-divergent shapes - using computational fluid dynamics (CFD) modeling and simulation software. The simulations found that the cylindrical catalytic converter creates more pressure drop compared to the convergent-divergent shape. Velocity contours, pressure contours, and plots of velocity and pressure values along the axial length were generated from the simulations. The results show differences in flow patterns and pressure drops between the empty and porous substrate configurations within each geometry.
This document discusses unstructured grid generation for reservoir modeling and simulation that considers reservoir properties. It presents two approaches for designing unstructured grids for flow simulators using tetrahedral grids. One technique directly uses reservoir properties to define grid element volumes, while the other uses flow-based gridding with a permeability model. The research aims to develop grid generation methods that integrate geological and flow modeling requirements. It also outlines a modified reservoir modeling workflow where grid design considers both modeling heterogeneity and flow simulation needs simultaneously.
2016 conservation track: automated river classification using gis delineated ... - GIS in the Rockies
The document describes an automated GIS tool called RESonate that is used to classify river systems into functional process zones (FPZs) based on hydrogeomorphic characteristics. The tool extracts over a dozen variables like elevation, slope, and width from geospatial datasets. It then uses these variables to generate sample points and calculate additional metrics. Statistical analysis is applied to cluster sample segments into distinct FPZ classes. The tool was tested on the Carson River where it identified 5 FPZ classes. The goal of the tool is to provide a consistent classification method that can enhance compatibility between river analyses and improve communication among scientists.
Implementation of Inverse Distance Weighting, Local Polynomial Interpolation, a... - Sachin Mehta
The general purpose of this project is to discuss the interpolation of a set of points to create four predicted surfaces. The points used represent pollution samples taken along the Maas River, measured in parts per million (ppm). The four surfaces will be created in ArcMap using the tools found in the Geostatistical Analyst. The created surfaces will then be used to predict the occurrence of a specified pollutant along the flood plain of the Maas River. For this exercise I chose to look at the spatial variation of mercury along the flood plain of the Maas River.
Sachin Mehta Reno, Nevada
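Inverse Distance Weighting, as used for the Maas River surfaces above, estimates a value at a prediction location by weighting each sample by an inverse power of its distance. A minimal sketch of that calculation in plain Python (outside ArcMap; `power=2` mirrors the common IDW default, and the sample tuples are hypothetical):

```python
def idw_predict(samples, x, y, power=2.0):
    """Inverse Distance Weighting: estimate a value at (x, y) from
    measured points. `samples` is a list of (px, py, value) tuples;
    nearer samples get larger weights, 1 / distance**power."""
    num = 0.0
    den = 0.0
    for px, py, value in samples:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0.0:
            return value  # prediction point coincides with a sample
        w = d2 ** (-power / 2.0)  # equals 1 / distance**power
        num += w * value
        den += w
    return num / den
```

For a point equidistant from two samples of 10 and 20 ppm, the estimate is their mean, 15 ppm, regardless of the power chosen.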
Hec ras flood modeling little river newburyport - William Mullen
This document describes a HEC-RAS 2D flood modeling case study of the Little River in Newburyport and Newbury, Massachusetts. It summarizes the advantages of 2D modeling, details the HEC-RAS model setup including terrain and hydrologic inputs, and presents calibration results from a historic 2006 rainfall event. Next steps include running additional storm simulations and using the model to evaluate potential flood mitigation measures under future sea level and climate change scenarios.
Presented in the ASEAN Cooperation on Utilization of Space Technology for Disaster Management Seminar, 11th Aug 2010 at Miracle Grand Convention Hotel, Thailand. Hosted by GISTDA
This document is a career portfolio for Imtiaz Ahmed Taher that includes samples of his work in wastewater collection network modelling and design, water distribution network modelling and design, drainage network modelling and design, and GIS mapping. It provides details on several projects he worked on in roles like Sewer System Engineer, Senior Network Modeller, Water Infrastructure Specialist, and Cost Estimator. The portfolio highlights his skills in areas like network modelling using Bentley SewerCad and WaterCad, hydrologic modelling using HEC-HMS, DEM and remote sensing analysis, and GIS mapping. It includes summaries and visualizations of the technical work conducted as part of various water, wastewater and drainage projects in countries like
Performances evaluation of surface water areas extraction techniques using l... - Abdelazim Negm
This presentation was presented at:
9th International Conference Interdisciplinarity in Engineering, INTER-ENG 2015, 8-9 October 2015, Tirgu-Mures, Romania
The complete paper will be published in Procedia Technology Journal soon.
Two Dimensional Flood Inundation Modelling In Urban Area Using WMS, HEC-RAS a... - Amro Elfeki
This research presents two-dimensional flood inundation modelling in urbanized areas, where features such as roads, buildings, and fences have a great effect on flood propagation. Wadi Qows, located in Jeddah City, Saudi Arabia, was chosen as the case study area because of the 2009 flood, which caused heavy economic losses and loss of life. The WMS and HEC-RAS programs were used for hydraulic simulation based on channel geometry built by incorporating urban features into the DEM using GIS. The DEM was resampled from 90 × 90 m to 10 × 10 m grid cells to produce a higher-resolution DEM suitable for urban flood inundation modelling. The results show that a higher resolution leads to an increase in average flood depth and a decrease in flood extent. Although changing the grid cell size does not affect the elevation values, this approach is helpful for performing flood simulations in urban areas when a high-resolution DEM is not available. In addition, the integration of WMS, HEC-RAS and GIS is a powerful toolset for flood modelling in rural, mountainous and urban areas.
https://www.researchgate.net/publication/330004725_Two_Dimensional_Flood_Inundation_Modelling_in_Urban_Areas_Using_WMS_HEC-RAS_and_GIS_Case_Study_in_Jeddah_City_Saudi_Arabia_IEREK_Interdisciplinary_Series_for_Sustainable_Development
OSPCV: Off-line Signature Verification using Principal Component Variances - IOSR Journals
This document presents an offline signature verification system called OSPCV (Offline Signature Verification using Principal Component Variances) that analyzes two features - pixel density and center of gravity distance. It describes the related work in signature verification, the proposed OSPCV algorithm, and experimental results showing it provides a notable improvement over existing systems. The OSPCV system overcomes intra-signature and inter-signature variations to produce a better equal error rate for differentiating genuine versus forged signatures.
Toxicological Effect of Effluents from Indomie Plc on Some Biochemical Parame... - IOSR Journals
1) The study examined the effect of effluent from an Indomie food company on biochemical parameters of fish in the New Calabar River in Nigeria.
2) Fish and water samples were collected from four stations - a non-point control station, the effluent discharge point, and stations 10m upstream and downstream.
3) Analysis found higher levels of potassium, sodium, urea and creatinine in the blood, gills, liver and muscles of fish sampled closest to the discharge point, indicating pollution has the greatest effect near the source of the effluent.
This document compares the advanced manufacturing technology strategies, policies, and performances of China and the United States. It finds that China has risen rapidly in this area in recent decades through large government investments in education, research, and advanced manufacturing. While the US used to lead in areas like R&D spending and talent-driven innovation, China has progressed up the maturity path through reforms and is now the most competitive global manufacturer, outpacing countries like Germany and the US. The strategies, policies, and priorities around manufacturing process, technology adoption, new product development, and supply chain management in Asia including China show higher levels of implementation and focus on continuous improvement compared to North America.
This document presents a histogram-based approach for automatic number plate recognition (ANPR) using MATLAB. The proposed methodology involves 4 main steps: 1) image acquisition and pre-processing, 2) dilation, 3) horizontal and vertical edge processing to generate histograms, and 4) filtering, segmentation, and region of interest extraction to isolate the number plate region. The algorithm is tested on images with different lighting conditions, tilts, and backgrounds, and is able to accurately recognize plates in all cases. The histogram-based approach aims to reduce computational complexity compared to other ANPR methods.
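The horizontal and vertical edge-processing step (step 3) amounts to projecting an edge map onto each axis and thresholding the resulting histograms; peaks mark the edge-dense band where a plate is likely. A minimal sketch of that projection, assuming a binary edge map in plain Python rather than the document's actual MATLAB pipeline:

```python
def projection_histograms(edge_map):
    """Row and column projection histograms of a binary edge map
    (list of lists of 0/1): sum each row and each column.
    Peaks indicate the edge-dense band where a plate is likely."""
    row_hist = [sum(row) for row in edge_map]
    col_hist = [sum(col) for col in zip(*edge_map)]
    return row_hist, col_hist

def band_of_interest(hist, threshold):
    """Indices whose histogram value exceeds the threshold: a crude
    region-of-interest filter along one axis (hypothetical helper)."""
    return [i for i, v in enumerate(hist) if v > threshold]
```

Intersecting the row band with the column band gives a rectangular candidate region, which the full method would then refine by segmentation.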
This document presents a study that aims to numerically model and predict the compression index of clay soils based on soil index properties. Twenty clay soil samples were collected from various locations in Coimbatore, India and tested in the laboratory to determine their liquid limit, plastic limit, compression index and other properties. Correlations between compression index and liquid limit and plasticity index were developed using regression analysis and artificial neural network modeling. The artificial neural network approach produced a higher accuracy correlation compared to regression analysis. This correlation could help geotechnical engineers predict compression index from common soil index tests and reduce the need for time-consuming consolidation testing.
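The regression half of such a correlation can be sketched with ordinary least squares. The liquid-limit and compression-index values below are hypothetical stand-ins, not the Coimbatore laboratory data:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b).
    Used here to correlate compression index Cc with liquid limit LL."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical lab data: liquid limit (%) vs measured compression index.
ll = [30.0, 40.0, 50.0, 60.0]
cc = [0.18, 0.27, 0.36, 0.45]
slope, intercept = fit_linear(ll, cc)
# Predict Cc for a new sample from its liquid limit alone.
cc_predicted = slope * 45.0 + intercept
```

An artificial neural network replaces the single straight line with a learned nonlinear mapping over several index properties, which is why the document reports it fitting the data more closely.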
This document summarizes research on q-analogues of classical numerical methods for finding solutions to algebraic and transcendental equations. It discusses how classical methods like Newton's method can fail under certain conditions, such as when the derivative is zero at the root or an inappropriate starting point is chosen. The document then proposes q-analogues of classical methods and discusses how q-methods may perform better than classical approaches in some cases where the classical methods fail. It provides background on the history and development of research on q-functions and hypergeometric series.
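The failure mode discussed here, Newton's method dividing by a vanishing derivative, is easy to demonstrate. A minimal sketch of the classical iteration (the baseline that the q-analogue methods modify):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=100):
    """Classical Newton iteration x_{k+1} = x_k - f(x_k) / f'(x_k).
    Raises ZeroDivisionError when the derivative vanishes at an
    iterate, illustrating the failure mode the text describes."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fprime(x)
    return x

# Converges to sqrt(2) for f(x) = x^2 - 2 from a good starting point.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)

# Fails from x0 = 0, where f'(0) = 0: the update divides by zero.
try:
    newton(lambda x: x * x - 2, lambda x: 2 * x, 0.0)
    derivative_vanished = False
except ZeroDivisionError:
    derivative_vanished = True
```

A q-analogue replaces the ordinary derivative with a q-difference quotient, which can remain nonzero in some of these degenerate cases; the sketch above only shows the classical behaviour being improved upon.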
NUMERICAL MODELS AS MITIGATION MEASURES FOR RIVERINE AND URBAN FLOODAmaljit Bharali
Â
The document discusses numerical modeling tools for riverine and urban flood mitigation. It provides an overview of the MIKE 11 and MIKE 21 modeling suites, including their strengths and limitations. MIKE 11 is useful for one-dimensional river modeling while MIKE 21 enables two-dimensional modeling of floodplains and urban areas. The document outlines the basic file types and steps to set up simulations in MIKE 11 and MIKE 21, including network, cross-section, boundary, bathymetry, and parameter files. Dynamically coupling MIKE 11 and MIKE 21 in MIKE FLOOD allows for integrated one-dimensional and two-dimensional flood modeling.
Gis based-hydrological-modeling.-a-comparative-study-of-hec-hms-and-the-xinan...Sikandar Ali
Â
This document compares the HEC-HMS and Xinanjiang hydrological models. HEC-HMS and the GIS extension HEC-GeoHMS were used to preprocess data and develop a rainfall-runoff model for the Wanjiabu catchment in China. Both HEC-HMS and the Xinanjiang model were applied to the catchment. The results showed that HEC-HMS was more convenient for parameter optimization but less accurate than the Xinanjiang model, possibly because Xinanjiang has more parameters. The document concludes by comparing the simulated and observed hydrographs from both models.
AIAA Technical Paper, Reduction of Semi-Truck Aerodynamic Drag Sifat Syed
Â
The document summarizes a study that tested methods to reduce aerodynamic drag on semi-trucks. Wind tunnel testing and computational fluid dynamics (CFD) analysis were conducted on a 1/34th scale semi-truck model to analyze the effects of lowering the freight container between the truck's axles and adding fairings around the gap between the container and cab. Wind tunnel results showed a 24% reduction in drag from lowering the container and a total 38% reduction when fairings were added. CFD analysis confirmed these results and provided additional flow visualization.
Cost Optimization of Elevated Circular Water Storage Tanktheijes
Â
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering& Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
This document summarizes a computational fluid dynamics (CFD) simulation of flow over an Ahmed body using Reynolds-averaged Navier-Stokes (RANS) turbulence modelling. Three grids with different refinements were used. The Realizable k-epsilon turbulence model was chosen. The simulation results showed improved prediction of drag coefficient with finer grids but poorer prediction of velocity profiles compared to experimental data. Flow analysis identified two main vortices in the wake, with higher turbulence kinetic energy around the lower vortex, consistent with experiments.
Study of Velocity and Pressure Distribution Characteristics Inside Of Catalyt...ijceronline
Â
This document summarizes a study of velocity and pressure distribution inside two catalytic converter geometries - cylindrical and convergent-divergent shapes - using computational fluid dynamics (CFD) modeling and simulation software. The simulations found that the cylindrical catalytic converter creates more pressure drop compared to the convergent-divergent shape. Velocity contours, pressure contours, and plots of velocity and pressure values along the axial length were generated from the simulations. The results show differences in flow patterns and pressure drops between the empty and porous substrate configurations within each geometry.
This document discusses unstructured grid generation for reservoir modeling and simulation that considers reservoir properties. It presents two approaches for designing unstructured grids for flow simulators using tetrahedral grids. One technique directly uses reservoir properties to define grid element volumes, while the other uses flow-based gridding with a permeability model. The research aims to develop grid generation methods that integrate geological and flow modeling requirements. It also outlines a modified reservoir modeling workflow where grid design considers both modeling heterogeneity and flow simulation needs simultaneously.
2016 conservation track: automated river classification using gis delineated ...GIS in the Rockies
Â
The document describes an automated GIS tool called RESonate that is used to classify river systems into functional process zones (FPZs) based on hydrogeomorphic characteristics. The tool extracts over a dozen variables like elevation, slope, and width from geospatial datasets. It then uses these variables to generate sample points and calculate additional metrics. Statistical analysis is applied to cluster sample segments into distinct FPZ classes. The tool was tested on the Carson River where it identified 5 FPZ classes. The goal of the tool is to provide a consistent classification method that can enhance compatibility between river analyses and improve communication among scientists.
Implentation of Inverse Distance Weighting, Local Polynomial Interpolation, a...Sachin Mehta
Â
The general purpose of this project is to discuss the interpolation a set of points to create four predicted surfaces. The points that were used represent pollution samples taken along the Maas River measured in parts per million (ppm). The four surfaces will be created in Arc Map using the tools found in the Geo-statistical Analyst. The created surfaces will then be used to predict the occurrence of a specified pollutant along the flood plain of the Maas River. For this exercise I chose to look at the spatial variation of Mercury along the flood plain of the Maas River.
Sachin Mehta Reno, Nevada
Hec ras flood modeling little river newburyportWilliam Mullen
Â
This document describes a HEC-RAS 2D flood modeling case study of the Little River in Newburyport and Newbury, Massachusetts. It summarizes the advantages of 2D modeling, details the HEC-RAS model setup including terrain and hydrologic inputs, and presents calibration results from a historic 2006 rainfall event. Next steps include running additional storm simulations and using the model to evaluate potential flood mitigation measures under future sea level and climate change scenarios.
Presented in the ASEAN Cooperation on Utilization of Space Technology for Disaster Management Seminar, 11th Aug 2010 at Miracle Grand Convention Hotel, Thailand. Hosted by GISTDA
This document is a career portfolio for Imtiaz Ahmed Taher that includes samples of his work in wastewater collection network modelling and design, water distribution network modelling and design, drainage network modelling and design, and GIS mapping. It provides details on several projects he worked on in roles like Sewer System Engineer, Senior Network Modeller, Water Infrastructure Specialist, and Cost Estimator. The portfolio highlights his skills in areas like network modelling using Bentley SewerCad and WaterCad, hydrologic modelling using HEC-HMS, DEM and remote sensing analysis, and GIS mapping. It includes summaries and visualizations of the technical work conducted as part of various water, wastewater and drainage projects in countries like
Performances evaluation of surface water areas extraction techniques using l... (Abdelazim Negm)
This presentation was presented at:
9th International Conference Interdisciplinarity in Engineering, INTER-ENG 2015, 8-9 October 2015, Tirgu-Mures, Romania
The complete paper will be published in Procedia Technology Journal soon.
Two Dimensional Flood Inundation Modelling In Urban Area Using WMS, HEC-RAS a... (Amro Elfeki)
This research presents two-dimensional flood inundation modelling in urbanized areas, where features such as roads, buildings, and fences have a great effect on flood propagation. Wadi Qows, located in Jeddah City, Saudi Arabia, was chosen as the case study area because of the 2009 flood there, which caused heavy economic losses and loss of life. The WMS and HEC-RAS programs were used for hydraulic simulation based on channel geometry built by incorporating urban features into the DEM using GIS. The 90 × 90 m DEM was resampled to a 10 × 10 m grid cell size to produce a higher-resolution DEM suitable for urban flood inundation modelling. The results show that a higher resolution leads to an increase in average flood depth and a decrease in flood extent. Although changing the grid cell size does not affect the elevation values, this approach is helpful for performing flood simulations in urban areas when a high-resolution DEM is not available. In addition, the integration of WMS, HEC-RAS and GIS is a powerful tool for flood modelling in rural, mountainous and urban areas.
https://www.researchgate.net/publication/330004725_Two_Dimensional_Flood_Inundation_Modelling_in_Urban_Areas_Using_WMS_HEC-RAS_and_GIS_Case_Study_in_Jeddah_City_Saudi_Arabia_IEREK_Interdisciplinary_Series_for_Sustainable_Development
OSPCV: Off-line Signature Verification using Principal Component Variances (IOSR Journals)
This document presents an offline signature verification system called OSPCV (Offline Signature Verification using Principal Component Variances) that analyzes two features - pixel density and center of gravity distance. It describes the related work in signature verification, the proposed OSPCV algorithm, and experimental results showing it provides a notable improvement over existing systems. The OSPCV system overcomes intra-signature and inter-signature variations to produce a better equal error rate for differentiating genuine versus forged signatures.
Toxicological Effect of Effluents from Indomie Plc on Some Biochemical Parame... (IOSR Journals)
1) The study examined the effect of effluent from an Indomie food company on biochemical parameters of fish in the New Calabar River in Nigeria.
2) Fish and water samples were collected from four stations - a non-point control station, the effluent discharge point, and stations 10m upstream and downstream.
3) Analysis found higher levels of potassium, sodium, urea and creatinine in the blood, gills, liver and muscles of fish sampled closest to the discharge point, indicating pollution has the greatest effect near the source of the effluent.
This document compares the advanced manufacturing technology strategies, policies, and performances of China and the United States. It finds that China has risen rapidly in this area in recent decades through large government investments in education, research, and advanced manufacturing. While the US used to lead in areas like R&D spending and talent-driven innovation, China has progressed up the maturity path through reforms and is now the most competitive global manufacturer, outpacing countries like Germany and the US. The strategies, policies, and priorities around manufacturing process, technology adoption, new product development, and supply chain management in Asia including China show higher levels of implementation and focus on continuous improvement compared to North America.
This document presents a histogram-based approach for automatic number plate recognition (ANPR) using MATLAB. The proposed methodology involves 4 main steps: 1) image acquisition and pre-processing, 2) dilation, 3) horizontal and vertical edge processing to generate histograms, and 4) filtering, segmentation, and region of interest extraction to isolate the number plate region. The algorithm is tested on images with different lighting conditions, tilts, and backgrounds, and is able to accurately recognize plates in all cases. The histogram-based approach aims to reduce computational complexity compared to other ANPR methods.
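The histogram step in the ANPR pipeline above amounts to summing an edge map along its rows and columns and picking the edge-dense band as the candidate plate region. A minimal sketch, using a tiny hand-made binary edge map rather than a real plate image:

```python
def edge_histograms(edge_map):
    """Row and column sums of a binary edge map; plate regions are edge-dense."""
    rows = [sum(r) for r in edge_map]
    cols = [sum(c) for c in zip(*edge_map)]
    return rows, cols

# toy 4x6 edge map: the dense band mimics a number-plate region
edge_map = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
rows, cols = edge_histograms(edge_map)
band = rows.index(max(rows))  # row index of the strongest horizontal band
```

The real method filters and smooths these histograms before thresholding, but the row/column-sum core is the same.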
This document presents a study that aims to numerically model and predict the compression index of clay soils based on soil index properties. Twenty clay soil samples were collected from various locations in Coimbatore, India and tested in the laboratory to determine their liquid limit, plastic limit, compression index and other properties. Correlations between compression index and liquid limit and plasticity index were developed using regression analysis and artificial neural network modeling. The artificial neural network approach produced a higher accuracy correlation compared to regression analysis. This correlation could help geotechnical engineers predict compression index from common soil index tests and reduce the need for time-consuming consolidation testing.
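The regression side of such a correlation can be sketched with ordinary least squares. The four (liquid limit, compression index) pairs below are synthetic, generated from the classical Terzaghi–Peck relation Cc = 0.009(LL − 10), not from the paper's Coimbatore samples:

```python
def fit_line(x, y):
    """Ordinary least squares fit of y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# synthetic pairs generated from Cc = 0.009 * (LL - 10)
ll = [35, 45, 55, 65]               # liquid limit, %
cc = [0.225, 0.315, 0.405, 0.495]   # compression index
a, b = fit_line(ll, cc)             # recovers slope ~0.009, intercept ~-0.09
```

On real lab data the fit would carry scatter, which is exactly what the paper's ANN model attempts to capture better than a straight line.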
This document summarizes research on q-analogues of classical numerical methods for finding solutions to algebraic and transcendental equations. It discusses how classical methods like Newton's method can fail under certain conditions, such as when the derivative is zero at the root or an inappropriate starting point is chosen. The document then proposes q-analogues of classical methods and discusses how q-methods may perform better than classical approaches in some cases where the classical methods fail. It provides background on the history and development of research on q-functions and hypergeometric series.
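A q-analogue of Newton's method can be sketched by replacing the ordinary derivative with the Jackson q-derivative D_q f(x) = (f(qx) − f(x)) / ((q − 1)x). The test function and parameter values below are illustrative, not taken from the paper:

```python
def q_derivative(f, x, q=0.9):
    """Jackson q-derivative: D_q f(x) = (f(qx) - f(x)) / ((q - 1) x)."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_newton(f, x0, q=0.9, tol=1e-10, max_iter=200):
    """Newton-style iteration with f' replaced by the q-derivative."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / q_derivative(f, x, q)
        x -= step
        if abs(step) < tol:
            break
    return x

root = q_newton(lambda x: x * x - 2, 1.5)  # converges to sqrt(2)
```

As q → 1 the q-derivative tends to f′, recovering the classical method; away from q = 1 the iteration avoids evaluating f′ directly, which is the property the q-methods exploit when classical Newton fails.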
This document describes the implementation of a DDR SDRAM controller using Verilog HDL. It begins with background on DDR SDRAM and its advantages over SDR SDRAM. It then discusses the design of the DDR SDRAM controller, including its main functional blocks - the control interface module, command module, and data path module. The control interface module contains a finite state machine to generate control signals. The command module contains registers and multiplexers to handle commands. The data path module interfaces with the processor and SDRAM. The controller was simulated and synthesized using Modelsim and Xilinx ISE, with the results shown. In conclusion, the DDR SDRAM controller takes advantage of the high speed and pipelined architecture of DDR SDRAM.
Non-Newtonian behavior of blood in very narrow vessels (IOSR Journals)
The purpose of the study is to gain some qualitative and quantitative insight into the problem of flow in the vessels under consideration, where a lubricating film of plasma is present between each red cell and the tube wall. This film is potentially important in relation to mass transfer and hydraulic resistance, as well as to the relative residence times of red cells and plasma in the vessel network.
This document discusses sentiment analysis of online reviews using a hybrid polarity detection system. It first provides background on sentiment analysis and different levels of analysis (document, sentence, aspect). It then describes related work on techniques like Naive Bayes, maximum entropy, and support vector machines. The hybrid system is described as having three modules: 1) data preprocessing, 2) sentiment feature generation that extracts 14 features, and 3) an SVM classifier. Experimental results on movie, hotel, and mobile phone data show the proposed system with two additional features achieves slightly better accuracy than existing approaches. The document concludes that sentiment-based features may provide promising outcomes for sentiment analysis tasks.
Perceived Effects of Facebook on Academic Activities of Agricultural Students... (IOSR Journals)
1) The study assessed the perceived effects of Facebook usage on the academic activities of 80 agricultural students at the University of Port Harcourt in Nigeria.
2) It found that the most frequently used social media by students were Facebook (94%), Blackberry Messenger (90%), and WhatsApp (72.5%). Most students visited Facebook once every 3 days and spent 1 hour or less on the site daily, mainly for chatting.
3) Students agreed that Facebook had positive effects by facilitating networking with other agricultural students, encouraging collaboration, and easing information flow. However, it was also found to distract students from academic assignments. The overall rating showed Facebook had a positive effect on students' academic activities.
Old wine in new wineskins: Revisiting counselling in traditional Ndebele and ... (IOSR Journals)
The document summarizes counselling in traditional Ndebele and Shona societies in Zimbabwe before the advent of western formal counselling. It discusses how counselling was an integral part of communities with various roles providing advice, such as close family friends, aunts, uncles, grandparents, traditional healers, and elders. Folklore and proverbs were also important means of imparting guidance. Counselling emphasized prevention and was provided informally and freely within a holistic community approach, rather than a professionalized, crisis-focused model as seen today. The document argues counselling is not new but an old practice now in a modernized form, representing "old wine in new wineskins."
This document discusses the design of a fiber optic security sensor based on monitoring speckle patterns in multimode optical fibers. The sensor is designed to detect vibrations on perimeters or fences by observing changes in the output speckle pattern from the fiber. An experimental model was built using readily available components - a CCD camera, multimode laser light source, length of optical fiber, and MATLAB software. The sensor is low-cost, lightweight, and can potentially be used to monitor large structures. When disturbances occur on the fiber, it causes changes in the propagation constants of fiber modes, altering the output speckle pattern in a way that can be analyzed to detect vibrations.
Modeling & Testing Of Hybrid Composite Laminate (IOSR Journals)
This document summarizes research conducted on modeling and testing a hybrid composite laminate made of glass wool and epoxy resin. Four different types of laminates were prepared by varying the fiber orientation between layers. The laminates were tested for mechanical properties according to ASTM standards. Results showed that increasing cure time improved bonding strength. Changing fiber orientation between layers decreased strength due to delamination. The hybrid laminates were found to have higher strength to weight ratios than steel, indicating their usefulness in applications requiring both strength and light weight. Finite element analysis using ANSYS was also conducted to model stresses in the laminate.
The document describes the implementation of a high-speed and power-efficient reliable multiplier using an adaptive hold technique (AHT). Traditional multipliers experience performance degradation over time due to negative bias temperature instability (NBTI) and positive bias temperature instability (PBTI) effects. The proposed design uses an AHT circuit with a column-bypassing or row-bypassing multiplier, Razor flip-flops, and other components to reduce delays, power consumption, and eliminate timing violations caused by aging effects. Simulation results show the AHT multipliers have significantly lower delay and power compared to traditional array, row-bypassing, and column-bypassing multipliers for both 16-bit and 32-bit designs.
This document discusses the development of protective relaying in Nigeria's power system automation. It describes how relay technology has progressed from electromechanical relays to solid-state relays to microprocessor-based relays. The implementation of microprocessor-based relays in substation automation has improved performance over electromechanical relays by eliminating manually reading meters, providing more precise load data, enabling wider communication of information, and enhancing control functions. Substation automation now includes supervisory control and data acquisition systems for remote monitoring and control of substations.
Measurement of Efficiency Level in Nigerian Seaport after Reform Policy Imple... (IOSR Journals)
This paper focuses on the impact of reforms on port performance, using the Onne and Rivers ports as a reference point. It analyses the pre- and post-reform eras of the ports in terms of their performance. The reforms took effect from 1996, after the Federal Government of Nigeria concessioned the ports to private investors. Parameters such as ship traffic, cargo throughput, ship turnaround time, berth occupancy and personnel were used as variables for the assessment. Secondary data were collected from the Nigerian Ports Authority and Integrated Logistic Services Nigeria (Intels) for the period 2001 to 2010 and analyzed using Data Envelopment Analysis to assess the efficiency of the ports. The analysis revealed a continuous improvement in the overall efficiency of both ports since 2006, when the new measure was introduced. Average ship turnaround time improved in the ports due to the modern, fast cargo-handling equipment and additional cargo-handling space that were provided. There is an increase in ship traffic calling at the ports, resulting in increased cargo throughput and berth occupancy rates at the ports of Onne and Rivers. The reform also led to more private investment in the ports' existing and new facilities and the introduction of world-class service in port operation. This study concludes that the ports of Onne and Rivers are performing better under the reform programme of the Federal Government of Nigeria. It finally recommends the urgent need for a regulator to appraise the performance of the reform programme from time to time, as provided by the agreement, and the full adoption and utilization of a management information system (MIS) to aid performance efficiency.
Chebyshev Collocation Approach for a Continuous Formulation of Implicit Hybri... (IOSR Journals)
In this paper, an implicit one-step method for the numerical solution of second-order Initial Value Problems of Ordinary Differential Equations has been developed by a collocation and interpolation technique. The one-step method was developed using the Chebyshev polynomial as basis function, and the method was augmented by the introduction of off-step points in order to bring about zero stability and upgrade the order of consistency of the new method. An advantage of the derived continuous scheme is that it can produce several outputs of the solution at the off-grid points without requiring additional interpolation. Numerical examples are presented to portray the applicability and the efficiency of the method.
Determining Tax Literacy of Salaried Individuals - An Empirical Analysis (IOSR Journals)
In personal financial planning, tax management plays a very important role. An individual should have thorough knowledge of various aspects of taxes and tax policies, which would help them understand how much they can save even after paying taxes. People who have not taken any formal course on taxation find it difficult to understand and comprehend the issues related to determination of tax liability, tax filing and tax saving. An attempt has been made through this paper to determine the tax literacy level of salaried individuals based on various demographic and socio-economic factors. Findings of the study suggest that the overall tax literacy level of respondents is not very high and that the level of tax literacy varies significantly among respondents. Tax literacy is affected by gender, age, education, income, nature of employment and place of work, whereas it is not affected by geographic region. The findings suggest that the government should adopt more aggressive approaches to educate taxpayers, thereby raising their level of tax literacy.
This document presents an analytical model to simulate a single story brick masonry in-filled frame strengthened with carbon fiber reinforced polymer (CFRP) to resist lateral loads. The model is based on experimental testing of half-scale in-filled frame specimens with different CFRP strengthening techniques. The analytical model represents the in-filled frame as diagonal struts acting in compression. Based on this model, the document derives two formulas to determine the required amount of CFRP to resist lateral loads - an accurate solution and a simplified empirical design equation. Both formulas showed good agreement with results from the experimental testing.
This document summarizes a study that evaluates the seismic performance of a 10-story reinforced concrete frame building using pushover analysis and the performance-based seismic design procedures from the first, second, and next generations. The building is modeled in SAP 2000 software and subjected to pushover analysis. Performance levels are evaluated based on deformation and damage criteria from each generation of procedures. The study aims to compare the seismic evaluation and performance level results from the different performance-based seismic design procedures.
The document discusses various geomorphological analysis tools available in the open-source software HortonMachine. It describes how HortonMachine can be used to analyze digital elevation models (DEMs), calculate terrain attributes, extract stream networks, and delineate catchment boundaries. Specific commands are mentioned for calculating flow directions, drainage networks, slope, curvature, catchment attributes and more. The goal is to provide quantitative and qualitative tools for understanding catchment morphology.
HortonMachine is a software tool integrated with JGrass that provides quantitative and qualitative analysis of catchment morphology using hydro-geomorphological methods. It allows users to analyze erosion processes, network incision, and landslide potential in alpine catchments of various sizes. The tool calculates various topographic attributes, derives flow networks, and extracts channel networks to facilitate geomorphological analysis of catchment basins. It has evolved from standalone routines to being integrated within GRASS and now JGrass for improved interoperability and interface.
This document summarizes a numerical study on free-surface flow conducted using a computational fluid dynamics (CFD) solver. The study examines the wave profile generated by a submerged hydrofoil through several test cases varying parameters like the turbulence model, grid resolution, and hydrofoil depth. The document provides background on the governing equations solved by the CFD solver and the interface capturing technique used to model the free surface. Five test cases are described that investigate grid convergence, the impact of laminar vs turbulent models, the relationship between hydrofoil depth and wave height, and the effect of discretization schemes.
ODDLS: Overlapping domain decomposition Level Set Method (Aleix Valls)
The document presents a new method called ODDLS for simulating free surface flows. ODDLS uses domain decomposition combined with level set and stabilized finite element methods. It increases accuracy of free surface capturing and governing equation solutions at fluid interfaces. The method can also solve for a single fluid, improving efficiency for many naval applications. Example applications demonstrate the method's capability to accurately simulate green water flows and ship motions in waves.
Groundwater is one of the leachate-generation components in landfills, so controlling the groundwater level below the base level of a landfill is very important both for decreasing the rate of leachate generation and for minimizing the potential for groundwater contamination. The main aim of this study is to control the pollution problem at a landfill site using an improved dewatering system. The use of double drainage pipes as a protective system to control pollution in a landfill in the case of a rising groundwater level is examined. Flow patterns are investigated for models representing dewatering of groundwater flowing toward a landfill site that has a geomembrane liner, using the double drainage pipes. The double drainage pipes are designed with various parameters for each model, and all investigated models assume isotropic soil. A numerical model was used to construct the flow pattern (flow net) for the models. The solution was used to study the effect of the depth of, and the distance between, the pipes of the single drainage system on the depression of the groundwater level, as well as the influence of the horizontal and vertical distances between the perforated pipes in the double drainage system.
This document discusses computational fluid dynamics (CFD). CFD uses numerical analysis and algorithms to solve and analyze fluid flow problems. It can be used at various stages of engineering to study designs, develop products, optimize designs, troubleshoot issues, and aid redesign. CFD complements experimental testing by reducing costs and effort required for data acquisition. It involves discretizing the fluid domain, applying boundary conditions, solving equations for conservation of properties, and interpolating results. Turbulence models and discretization methods like finite volume are discussed. The CFD process involves pre-processing the problem, solving it, and post-processing the results.
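The discretise-and-solve loop described above can be illustrated with the simplest possible case: an explicit finite-volume update for 1-D diffusion. This is a toy heat-conduction problem chosen for brevity, not any specific solver mentioned in the document:

```python
def diffuse_1d(u, alpha, dx, dt, steps):
    """Explicit finite-volume update for 1-D diffusion, ends held at fixed values."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        u = ([u[0]]
             + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, len(u) - 1)]
             + [u[-1]])
    return u

bar = [0.0] * 9 + [100.0]   # cold bar with one end held hot
out = diffuse_1d(bar, alpha=1.0, dx=1.0, dt=0.25, steps=400)
```

After enough steps the solution relaxes toward the linear steady-state profile between the two fixed ends; the `r <= 0.5` guard is the classic stability limit of the explicit scheme, a small example of the discretization constraints the text alludes to.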
This document summarizes a large eddy simulation of flow around a sharp-edged surface-mounted cube. The simulation was performed using the Petsc-Fem code developed at CIMEC. The flow conditions matched published benchmarks, with a Reynolds number of 40,000. An upstream channel flow was first simulated to provide turbulent inflow conditions. The simulation results are analyzed to validate the LES implementation and identify areas for improving turbulence modeling.
An efficient image segmentation approach through enhanced watershed algorithm (Alexander Decker)
This document proposes an efficient image segmentation approach combining an enhanced watershed algorithm and color histogram analysis. The watershed algorithm is applied to preprocessed images after merging the results with an enhanced edge detection. Over-segmentation issues are addressed through a post-processing step applying color histogram analysis to each segmented region, improving overall performance. The document provides background on image segmentation techniques, reviews related work applying watershed algorithms, and discusses challenges like over-segmentation that watershed approaches can face.
TELEMAC is hydrodynamic modeling software that can:
1. Solve the shallow water equations using finite element or finite volume methods on an unstructured triangular grid.
2. Perform simulations of free surface flows, accounting for effects like turbulence, temperature/salinity gradients, and dry areas.
3. Include pre- and post-processing tools for generating grids, visualizing results, and exchanging data with GIS software.
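For reference, a 1-D form of the shallow water (Saint-Venant) equations that solvers of this class discretise, written in conservative form with water depth h, velocity u, gravity g, and bed elevation z_b:

```latex
\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0,
\qquad
\frac{\partial (hu)}{\partial t}
  + \frac{\partial}{\partial x}\!\left( h u^{2} + \tfrac{1}{2}\, g\, h^{2} \right)
  = -\, g\, h\, \frac{\partial z_b}{\partial x}.
```

The 2-D system solved on the unstructured triangular grid adds the second momentum component plus the turbulence and density-gradient source terms mentioned above.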
The document discusses the chimera grid method for computational fluid dynamics simulations of complex geometries. It has two main elements: (1) decomposition of the computational domain into sub-domains that are each gridded independently, and (2) communication of solution data between sub-domains through interpolation. Overlapping grids allow each sub-domain to be gridded with structured grids while handling interfaces through hole and outer boundaries. The chimera grid method makes it possible to model problems with complex geometries using easier-to-generate body-fitted grids. It has been used successfully for simulations of configurations like the integrated space shuttle.
This document summarizes a study that uses computational fluid dynamics (CFD) to model flow through a screen and validate an existing correlation for calculating pressure drop. The study models a screen with 25% porosity and runs simulations over a wide range of Reynolds numbers, from 0.1 to 10^5, for both incompressible (water) and compressible (air) fluids. Discharge coefficients calculated from the simulations are compared to published experimental values and correlations. The results show the effect of compressibility on discharge coefficients and validate the existing correlation for incompressible flow.
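The discharge coefficient compared in that study is simply the ratio of the measured flow rate to the ideal frictionless (Bernoulli) flow through the open area. A one-line sketch, with all numerical values chosen for illustration rather than taken from the paper:

```python
import math

def discharge_coefficient(q, area, dp, rho):
    """Cd = measured flow rate / ideal Bernoulli flow through the open area."""
    return q / (area * math.sqrt(2.0 * dp / rho))

# illustrative: 2 L/s of water through 10 cm^2 open area at a 5 kPa pressure drop
cd = discharge_coefficient(q=2e-3, area=10e-4, dp=5000.0, rho=1000.0)
```

For compressible flow the ideal-flow denominator must account for density change across the screen, which is exactly the effect the study's air simulations probe.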
This study used computational fluid dynamics (CFD) software to analyze airflow patterns in a theoretical city model at two wind velocities. A 3D city model was created in SolidWorks and a grid was generated in Pointwise. Simulations using the Euler equations were run in Cobalt at 4.6 m/s and 30 m/s wind speeds. Results showed increased airflow velocities in streets parallel to wind direction and vortex formation in perpendicular streets. Pressure on building surfaces was highest in windward sides. The study demonstrated CFD's usefulness for urban planning by simulating wind flow effects of building placement and layout. Future work could examine angled wind directions and different building patterns.
This document summarizes a computational fluid dynamics (CFD) simulation of airflow around a simplified pickup van model. The study used a commercial CFD software to perform a 3D, steady-state simulation using the Reynolds-averaged Navier-Stokes equations and a k-ε turbulence model. The simulation was run at a Reynolds number of 3×10^5 and results were validated against experimental wind tunnel data. Key findings included pressure and velocity distributions that matched experimental data well, indicating CFD can be an effective alternative to wind tunnel testing for pickup van aerodynamic analysis.
Simulations Of Unsteady Flow Around A Generic Pickup Truck Using Reynolds Ave... (Abhishek Jain)
The research paper above can be downloaded from www.zeusnumerix.com.
The paper aims to replicate the wind tunnel test of a General Motors pick-up truck using CFD analysis. The pickup is a blunt body, and the simulation reveals vortex shedding from the edges of the vehicle downstream. The unsteadiness of this phenomenon is seen in the oscillation of the residuals. The paper shows good agreement of the velocity magnitude downstream of the vortex. Authors: Bahram Khalighi (GM, USA), Basant Gupta et al., Zeus Numerix.
The Effect of Geometry Parameters and Flow Characteristics on Erosion and Sed... (Dr. Amarjeet Singh)
One of the most critical problems in the river engineering field is the scouring, sedimentation and morphology of a river bed. In this paper, a finite volume method FORTRAN code is provided and used; the code is able to model sedimentation. The flow and sediment were modeled at the intersection of two channels, and an experimental model was used to evaluate the results. With the numerical model, the effects of geometry parameters, such as the ratio of secondary-channel to main-channel width and the intersection angle, as well as hydraulic conditions such as the secondary-to-main-channel discharge ratio and the inlet flow Froude number, were studied with respect to bed topography and flow pattern. The numerical results show that the maximum bed height increases by 32 percent, on average, as the discharge ratio reaches 51 percent. It is observed that the maximum height of sedimentation decreases with a declining main-channel to secondary-channel Froude number ratio. In the assessment of channel width, the velocity and final bed height variations followed the same trend at all ratios. Also, increasing the intersection angle is accompanied by decreasing flow velocity variations along the channel. The patterns of velocity and bed topography variations are also consistent across all studied angles.
1) The study evaluates the impacts of implementing low impact development (LID) techniques on peak discharge and runoff volume in an urban watershed in Washington D.C. using the Storm Water Management Model.
2) Three stormwater models (Rational Method, HEC-HMS, and SWMM) were used to simulate rainfall-runoff processes and estimate peak flows and volumes in the watershed.
3) The results found that LIDs can significantly reduce runoff volume by over 30% but have a negligible impact on peak discharge reduction. Integrating LIDs provides both environmental and economic benefits through reduced flooding and infrastructure costs.
This paper investigates the turbulent flow around a marine propeller used to actuate an unmanned underwater vehicle (UUV) using computational fluid dynamics (CFD). The propeller often operates at low Reynolds numbers and off-design conditions in UUV applications. A RANS solver with a k-ω turbulence model was used to simulate the 3D flow field for the propeller operating at different advance ratios, representing forward, hovering, and crashback conditions. Pressure and velocity distributions were analyzed, and thrust/torque coefficients were compared to experimental data, showing good agreement. The results provide insight into the complex low-Reynolds-number flow for propeller design and UUV control.
The document discusses different models for simulating debris flows, focusing on two models: the Trent2D hydraulic model and a cell model. The Trent2D model numerically integrates the depth-averaged mass and momentum balance equations using closure equations for solid concentration and bottom shear stress. The cell model represents the flow field as interacting cells linked by uniform flow or weir flow equations, and simulates erosion and deposition based on velocity and slope thresholds. Both models make simplifying assumptions that could lead to incorrect results if not physically suitable for the problem being modeled.
CFD and Artificial Neural Networks Analysis of Plane Sudden Expansion Flows (CSCJournals)
It has been clearly established that the reattachment length for laminar flow depends on two non-dimensional parameters, the Reynolds number and the expansion ratio; therefore, in this work an ANN model that predicts reattachment positions for expansion ratios of 2, 3 and 5 based on these two parameters has been developed. The R² values of the testing-set outputs Xr1, Xr2, Xr3 and Xr4 were 0.9383, 0.8577, 0.997 and 0.999 respectively. These results indicate that the network model produced reattachment positions in close agreement with the actual values. For the reattachment length of plane sudden expansions, the judicious combination of CFD-calculated solutions with an ANN results in a considerable saving in computing and turnaround time: CFD can be used in the first instance to obtain reattachment lengths for a limited choice of Reynolds numbers, and the ANN can then predict reattachment lengths for intermediate Reynolds number values. The CFD calculations concern unsteady laminar flow through a plane sudden expansion and are performed using the commercial CFD code STAR-CD, while the training of the corresponding ANN model was performed using the NeuroShell™ simulator.
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
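The SNR-versus-bits-per-symbol trade-off driving that result can be seen in the standard Gray-coded M-QAM BER approximation for AWGN. This is the textbook formula, not the paper's fading-channel model, so treat it only as an indication of why higher orders demand more SNR:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mqam_ber(ebn0_db, m):
    """Textbook approximation for Gray-coded M-QAM bit error rate in AWGN."""
    k = math.log2(m)                      # bits per symbol
    ebn0 = 10 ** (ebn0_db / 10)
    return (4 / k) * (1 - 1 / math.sqrt(m)) \
        * q_func(math.sqrt(3 * k * ebn0 / (m - 1)))
```

At a fixed Eb/N0, BER rises sharply with M; the paper's point is that at short range the circuit-energy saving from sending more bits per symbol can still outweigh the extra transmit power this penalty demands.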
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probe-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 17, Issue 3, Ver. V (May - Jun. 2015), PP 26-34
www.iosrjournals.org
DOI: 10.9790/0661-17352634
Analysis of Time Complexities and Accuracy of Depression Filling
Algorithms in DEM
Kritika Pathak and Praveen Kaushik
Department of Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, M.P., India
Abstract: The recent development of digital terrain representation has stimulated the automatic
extraction of topographic and hydrologic information from Digital Elevation Models (DEMs). A DEM is used to
create hydrologic models which can be used for various purposes such as predicting stream discharges, river
network definition, estimating flood extent and timing, locating areas contributing pollutants to a stream, and
simulating the effects of landscape alterations on surface water runoff. DEM is processed to produce accurate
stream delineation. The process includes filling of sinks and depressions, treating flat areas and determining
flow direction at every pixel. Depressions (or pits) and flat surfaces (or flats) are general types of terrain in
raster digital elevation models. Depressions are lower areas surrounded by terrain without outlets and flat
surfaces are areas with no local gradient. The problem with these pits and depressions is that they interrupt
continuous flow paths in DEMs. To avoid these problems, all pits have to be rectified to create a
depressionless DEM before calculating flow directions or any related topographic parameters. Various
algorithms, such as Jenson and Domingue, Planchon and Darboux, and Carving, have been developed to treat the
sinks, depressions and flat areas. The conventional methods are computationally intensive and time consuming.
Moreover, they are inadequate for high-resolution DEMs. The conventional algorithms for creating a
depressionless DEM have a time complexity of O(n^2), where n is the number of cells.
Drainage networks obtained after processing the DEM should be accurate. At the same time, it is desirable to
simplify the automatic extraction procedure with minimum modification to the DEM so as to retain its originality.
Recent improvements have succeeded in reducing the complexity to O(n log n) while maintaining accuracy, with
minimal space and time requirements and fewer modifications to pixels. This survey analyses various conventional
approaches to fill depressions, their time complexities, advantages and limitations, and the evolution of modern
methodologies with improved time requirements and accuracy.
I. Introduction:
A DEM is a digital representation of a continuous terrain surface and consists of a two-dimensional
array of elevation values at regularly spaced ground positions. To create a fully connected and fully labeled
drainage network and watershed partition, water outflow at every grid cell of the DEM needs to be routed to an
outlet on the border of the DEM[3][6][7][18]. Nevertheless, the frequent presence of surface depressions in the
DEM prevents simulated water flow from draining into outlets, resulting in disconnected stream-flow patterns
and spurious interior sub-watersheds pouring into these depressions. Due to the undesirable results, surface
depressions in DEMs are treated as nuisance features in hydrologic modelling[11]. The common practice is to
locate and remove surface depressions in the DEM at the very first step of hydrologic analysis; only then can the
flow directions be determined[1][2][8]. Marks et al. and O'Callaghan in 1984 were the first to propose a solution
to this depression problem. They developed a method to fill surface depressions based on the identification of
the pour point for each depression. Their algorithm applies a smoothing filter to remove the problematic features,
but this causes information loss in non-problematic regions and hence interferes with the originality of the DEM.
Jenson and Domingue[16] in 1988 developed a method which is faster and more operationally viable. The J&D
algorithm consists of two steps to handle depressions. The first step fills all depressions containing a single cell
by raising each cell's elevation to the lowest elevation of its neighbors. The second step fills complex
depressions containing more than one cell. Its time complexity is O(n^2); therefore it is inadequate for large data
sets.
Planchon and Darboux (2001)[5] proposed an algorithm which is faster than Jenson and Domingue's.
The P&D algorithm first inundates the surface by assigning a maximal water surface elevation to all DEM cells,
and then iteratively drains excess water from every cell. Despite its conceptual simplicity, this algorithm
remains difficult to understand due to its three complex subroutines and its recursive execution.
An alternative approach, Carving[10], was proposed by Soille (2004). It suppresses each pit by
creating a descending path from it to the nearest point having a lower elevation value. This method is easy to
implement, but it significantly reduces accuracy by interfering with the originality of the DEM. Therefore, a new
hybrid approach[23] was introduced which combines both approaches of pit filling and carving. It minimizes the
cost necessary for transforming an input digital elevation model into a pitless digital elevation model.
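The basic carving step described above can be sketched in Python as follows. This is an illustrative version only (the function name, the 4-connected breadth-first search, and the uniform lowering of the path are our own simplifications; Soille's actual implementation differs in detail):

```python
from collections import deque

def carve_pit(dem, pit):
    """Illustrative carving step: breadth-first search outward from a pit
    until a strictly lower cell is found, then lower every cell on the
    path back down to the pit's elevation, creating a descending outlet."""
    rows, cols = len(dem), len(dem[0])
    pz = dem[pit[0]][pit[1]]
    parent = {pit: None}        # records the BFS tree for path reconstruction
    q = deque([pit])
    while q:
        r, c = q.popleft()
        if dem[r][c] < pz:                      # nearest lower cell reached
            cell = parent[(r, c)]
            while cell is not None and cell != pit:
                cr, cc = cell
                dem[cr][cc] = pz                # lower the path to pit level
                cell = parent[cell]
            return dem
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in parent:
                parent[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return dem

dem = [[5, 5, 5, 0],
       [5, 1, 5, 5],
       [5, 5, 5, 5]]
carve_pit(dem, (1, 1))
print(dem[0])  # -> [5, 1, 1, 0]: a descending path now leaves the pit
```

Because carving lowers cells rather than raising them, it modifies fewer cells than filling when depressions are deep but narrow, which is exactly the trade-off the hybrid approach balances.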
While searching for the outlet of a pit or flat, the methods described above check only the eight cells
adjacent to the pit or flat and do not consider the general trend of the DEM; as a result, the actual drainage
lines and the stream network obtained by these methods differ. A heuristic approach was introduced later in which
the direction of flow is determined by finding the direction of maximum drop from each cell to its eight adjacent
neighbors. This algorithm is based on dynamic programming and is comparatively slow.
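The maximum-drop rule itself is straightforward. A sketch in Python (illustrative only; the function name is ours, and we apply the conventional sqrt(2) distance weighting for diagonal neighbours) might look like:

```python
import math

def d8_direction(dem, r, c):
    """Return the (dr, dc) offset of the steepest-descent neighbour of
    cell (r, c), or None if the cell is a pit or flat (no lower neighbour).
    Drop is elevation difference divided by distance, so diagonal
    neighbours are penalized by a factor of sqrt(2)."""
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(dem) and 0 <= nc < len(dem[0]):
                dist = math.sqrt(2) if dr and dc else 1.0
                drop = (dem[r][c] - dem[nr][nc]) / dist
                if drop > best_drop:
                    best, best_drop = (dr, dc), drop
    return best

dem = [[9, 8, 7],
       [8, 5, 6],
       [7, 6, 1]]
print(d8_direction(dem, 1, 1))  # -> (1, 1): flow goes to the corner cell of value 1
```

A return value of None is precisely the situation the filling algorithms exist to eliminate: a cell from which no downslope flow direction can be assigned.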
Wang and Liu presented a Priority-Flood algorithm[9][17][24]. All the edge cells of the DEM are
pushed into a priority queue (which internally forms a min-heap) for processing. This algorithm processes
depressions from the edge cells of the DEM towards the interior cells. The worst-case time complexity of this
algorithm is O(n log n). It is faster than all other filling algorithms.
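The priority-flood procedure can be sketched in a few lines of Python. This is a minimal illustrative version using the standard-library heapq module (the function name and the simple 8-neighbour scan are our own; practical implementations add a plain FIFO queue to handle flat regions more efficiently):

```python
import heapq

def priority_flood_fill(dem):
    """Fill depressions in-place: seed the priority queue (a min-heap)
    with all edge cells, then repeatedly pop the globally lowest
    unprocessed cell and raise any lower unprocessed neighbour to its
    level, so water can always drain to the DEM border."""
    rows, cols = len(dem), len(dem[0])
    seen = [[False] * cols for _ in range(rows)]
    heap = []
    # Push every border cell; water can only leave the DEM through its edges.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (dem[r][c], r, c))
                seen[r][c] = True
    while heap:
        z, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, -1), (-1, 0), (-1, 1), (0, -1),
                       (0, 1), (1, -1), (1, 0), (1, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not seen[nr][nc]:
                seen[nr][nc] = True
                dem[nr][nc] = max(dem[nr][nc], z)  # raise cells inside a depression
                heapq.heappush(heap, (dem[nr][nc], nr, nc))
    return dem

dem = [[3, 3, 3, 3],
       [3, 1, 1, 3],
       [3, 1, 2, 3],
       [3, 3, 3, 3]]
print(priority_flood_fill(dem)[1])  # -> [3, 3, 3, 3]: the interior pit is filled
```

Each cell is pushed and popped at most once, and each heap operation costs O(log n), which is where the O(n log n) bound comes from.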
A new approach, unlike all existing methods, was introduced to reduce the processing time from
O(n log n) to O(n). This is called the Quantile Classification method[14]. The DEM is classified into eight groups
and then the algorithm is applied. It is a thousand times faster than Jenson and Domingue's method and seven
times faster than Planchon and Darboux's method.
A mathematical approach includes the Linear Interpolation method[22], which treats flat areas and
depressions. It provides a natural way to scale elevation adjustments with minimum modification, which is an
advantage over conventional approaches.
II. Creating a Depressionless DEM and Removing Pits and Flats:
A depression is also known as a sink or pit. It is a local minimum that does not have a downslope flow
path to any adjacent cell in a DEM. A surface depression may consist of one cell or a group of spatially connected
cells of the same elevation that are completely surrounded by other cells at a higher elevation. Depressions in
DEMs can be natural, real landscape features or spurious artefacts. Spurious depressions represent imperfections
in DEMs. They may arise from input-data errors, interpolation defects during DEM generation, truncation or
rounding of interpolated values to lower precision, or averaging of elevation values within grid cells.
In fact, depressions within DEMs are often a combination of actual topographic depressions and
artificial depressions caused by various data collection and processing. Research has shown that depressions
interrupt overland flow routing in a DEM and significantly alter defined flow directions used for hydrological
parameter extraction.
For spurious depressions, a smoothing filter can be applied to remove the problematic features. The
smoothing operation is effective in removing artificial depressions, but it usually causes information loss in
non-problematic regions of the DEM. To preserve the original information of the DEM, over-smoothing should be
avoided; however, this has proven difficult to control.
A number of algorithms are available to deal with pits in DEMs. They can be categorized into four main classes
according to their approach of rectifying pits[4][19]:
a. Incremental methods: fill pits by increasing their elevation value until their lowest pour point is reached.
b. Decremental methods: values along a path starting from the bottom of the pit and reaching a pixel of
lower elevation are decremented by setting their elevation value to that of the bottom of the pit.
c. Hybrid methods: combine incremental and decremental methods.
d. Heuristic methods: use a dynamic programming approach.
2.1 Incremental methods:
2.1.1 Jenson and Domingue algorithm (J&D): The J&D algorithm [16] handles depressions in two steps. The
first step fills all depressions containing a single cell by raising each cell's elevation to the lowest
elevation of its neighbors (its pour point). The second step fills complex depressions containing more than
one cell. This is achieved by identifying and labeling the interior catchments of depressions by calculating the
flow directions for every cell in the DEM. Instead of filling depressions one by one, a table of pour points is
built for the interior catchments adjacent to depressions. The path of the pour points for the adjacent depressions is
traced until it reaches the border of the DEM. Among all the pour points on the path, the one with the highest
elevation is selected as the threshold, and all cells in the interior catchments of the depressions that are
lower than this threshold are raised to its value. After the depressions are filled, an iterative
process identifies the drainage directions of flat surfaces. Although the J&D algorithm can
accommodate complex depressions and flat surfaces, its time complexity is O(n^2), which makes it
inadequate for large data sets.
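The first J&D step can be sketched as follows. This is a minimal illustration in Python, not the published implementation; the grid is a plain list-of-lists DEM and the helper names are hypothetical.

```python
# Sketch of J&D's first step: fill single-cell pits by raising each pit
# cell to the lowest elevation among its eight neighbours (its pour point).

def neighbours(dem, r, c):
    """Yield the elevations of the 8-connected neighbours of cell (r, c)."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(dem) and 0 <= cc < len(dem[0]):
                yield dem[rr][cc]

def fill_single_cell_pits(dem):
    """Raise every interior cell that is lower than all of its neighbours."""
    filled = [row[:] for row in dem]
    for r in range(1, len(dem) - 1):
        for c in range(1, len(dem[0]) - 1):
            lowest = min(neighbours(dem, r, c))
            if dem[r][c] < lowest:          # a one-cell depression
                filled[r][c] = lowest       # raise it to the pour point
    return filled

dem = [[5, 5, 5],
       [5, 1, 5],
       [5, 5, 5]]
print(fill_single_cell_pits(dem)[1][1])  # the pit rises to 5
```

Complex, multi-cell depressions are not handled by this pass; they require the catchment-labeling second step described above.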
2.1.2 Planchon and Darboux Method: Planchon and Darboux [5] introduced a pit-filling algorithm which
reduces the time complexity of the existing algorithms. The new method involves two basic stages.
Analysis of Time Complexities and Accuracy of Depression Filling Algorithms in DEM. DOI: 10.9790/0661-17352634. www.iosrjournals.org
The P&D algorithm first inundates the surface by assigning a maximal water surface elevation to all DEM cells, i.e. the
surface W is initialized with infinite altitudes everywhere except the boundaries. It then iteratively drains excess water from
every cell, i.e. the altitudes of the surface W are decreased step by step. With a seed cell (a cell that is used to generate
a dependence graph) as the root, an upstream tree is progressively searched by following the dependence links,
and excess water is removed from all cells on the tree. During the final stage, the water in depressions is drained to
the level of the highest pour point on the flow path to an outlet on the border of the DEM, resulting in flat
depression surfaces. Water on the cells outside the interior catchments of depressions is completely drained out,
and their final elevation values remain the same as before inundation. The DEM produced by the P&D algorithm has
neither depressions nor flat surfaces (depressions have been filled and small increments added to the flat surfaces);
therefore, it is easy to extract flow directions from it. The terrain makes the time complexity of the P&D
algorithm unstable, ranging from O(n^1.2) on average to O(n^1.5) in the worst case. In addition, the P&D
algorithm adds increments directly to the DEM's elevations, which might entail a significant alteration of the
DEM. Figure 1 shows how much time is saved by using this algorithm instead of J&D.
Figure 1
Analysis of Figure 1: The earlier J&D method is very time-consuming and inefficient for large
DEMs compared with the P&D algorithm. For small DEMs, either of the two algorithms can be used; the
performance difference amounts to a standard deviation of only 0.581. But as the DEM size grows from
0.5×10^6 to 4×10^6 pixels, the standard deviation of the J&D timings shows the steepest increase, from 2.51 to 42.06,
with a very large variance of 1769.04. For the P&D algorithm, the deviation grows only from 2 to 5.301, which is
acceptable for DEMs that keep growing in size with upcoming technologies, and the variance is a modest 28.1. Hence
the P&D algorithm is more suitable for large DEMs of high resolution.
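The two P&D stages described above can be sketched as a direct iteration. This is a simplified illustration, assuming the classic "inundate then drain" formulation rather than the optimised dependence-tree version; the function name and the epsilon value are illustrative choices.

```python
# Stage 1: flood the surface W to a very large value everywhere except the
# border. Stage 2: repeatedly drain each cell down toward max(z, lowest
# neighbouring W + eps) until no cell can be lowered further. The small
# eps increment is what removes flat surfaces along with the depressions.

INF = float("inf")

def planchon_darboux_fill(z, eps=0.01):
    rows, cols = len(z), len(z[0])
    w = [[INF] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                w[r][c] = z[r][c]          # border cells keep their elevation
    changed = True
    while changed:
        changed = False
        for r in range(rows):
            for c in range(cols):
                for rr, cc in ((r + dr, c + dc)
                               for dr in (-1, 0, 1)
                               for dc in (-1, 0, 1) if dr or dc):
                    if w[r][c] <= z[r][c]:
                        break              # already fully drained
                    if not (0 <= rr < rows and 0 <= cc < cols):
                        continue
                    if z[r][c] >= w[rr][cc] + eps:
                        w[r][c] = z[r][c]            # drain completely
                        changed = True
                    elif w[r][c] > w[rr][cc] + eps:
                        w[r][c] = w[rr][cc] + eps    # partial drain
                        changed = True
    return w

z = [[9, 9, 9, 9],
     [9, 1, 2, 9],
     [9, 9, 9, 9]]
filled = planchon_darboux_fill(z)   # pit cells rise to just above 9
```

The repeated full-grid sweeps in the while-loop are what make the direct iteration's running time terrain-dependent, as noted above.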
2.1.3 Priority-flood algorithm (Wang and Liu): The priority-flood, or W&L, algorithm [9][17][24] has become
the most prominent of these algorithms; it has been continuously improved, with an optimal formulation published in
2013. The W&L algorithm consists of two parts. The first part initializes the algorithm by pushing
all of the edge cells of the DEM into a priority queue and marking these cells in a Boolean array. In the
priority queue, a cell with lower elevation has greater priority, so the cell with the lowest elevation in the
queue is always popped first. The second part is an iterative process. In each step, a cell
is popped from the priority queue (this cell is called the center cell, and its elevation is the current lowest spill
elevation), and its neighbors are traversed. If an adjacent cell is unprocessed and lower than the center cell
(i.e. inside a depression), the adjacent cell's elevation is raised to that of the center cell, and it is pushed into the
priority queue. If an adjacent cell is unprocessed and not lower than the center cell, it is pushed
into the priority queue directly. The W&L algorithm finishes when the priority queue is empty. It reduces the
time complexity to O(n log n).
The reduced time complexity comes with some limitations. Every cell is sorted by its elevation in
the priority queue to determine its priority, but the cells of depressions or flat surfaces share the same elevation
(depressions become flat surfaces once filled), so there is no need to sort these cells. Moreover, the W&L
algorithm only fills depressions; it does not add increments to flat surfaces.
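The two parts of the W&L procedure map naturally onto Python's heap-based priority queue. The sketch below is an illustrative implementation under that assumption; function and variable names are our own.

```python
import heapq

# Priority-flood sketch: seed the priority queue with all border cells, then
# repeatedly pop the lowest cell (the current spill elevation) and raise any
# unprocessed lower neighbour up to its level before queueing it.

def priority_flood_fill(z):
    rows, cols = len(z), len(z[0])
    filled = [row[:] for row in z]
    seen = [[False] * cols for _ in range(rows)]   # the Boolean array
    pq = []
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(pq, (filled[r][c], r, c))
                seen[r][c] = True
    while pq:
        elev, r, c = heapq.heappop(pq)   # center cell, lowest spill elevation
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols \
                        and not seen[rr][cc]:
                    seen[rr][cc] = True
                    if filled[rr][cc] < elev:   # inside a depression
                        filled[rr][cc] = elev   # raise to the spill level
                    heapq.heappush(pq, (filled[rr][cc], rr, cc))
    return filled

z = [[9, 9, 9, 9],
     [9, 1, 2, 9],
     [9, 8, 9, 9],
     [9, 9, 9, 9]]
out = priority_flood_fill(z)   # every interior pit cell rises to 9
```

Each cell enters and leaves the heap exactly once, which is where the O(n log n) bound comes from.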
Figure 2
Analysis of Figure 2: The figure shows that the priority-flood algorithm outperforms the P&D
method for small as well as large DEMs. For small DEMs of up to 4000×4000 pixels, the time
difference between the two is quite small, lying between 1.38 and 1.91 seconds. As the DEM size
increases to 7000×5500 pixels, the execution time of the P&D algorithm rises rapidly from 5.75 to
12.67 seconds, while that of priority-flood grows much less, from 3.1 to 6.75 seconds. The deviation is much higher for the
P&D method (4.0743); therefore it is less suitable for large DEMs.
2.1.4 Quantile classification Method: Unlike all the existing methods, which process the DEM data
directly without exploiting the topographic features implied in them, this innovative approach [14] was proposed
to improve DEM-processing efficiency. First, the initial DEM data are classified into eight groups according to their
elevation values using the quantile classification method; each cell is assigned its quantile class, which is
stored in a transient matrix. Second, the transient matrix is scanned from the minimum class to the
maximum one: a cell's initial value is restored if it is larger than or equal to those of its neighbors; otherwise, if its
quantile class is larger than a neighbor's, its value is set to the minimum of its neighbors' values. This process is repeated until all the
depressions are filled.
Furthermore, unlike the traditional methods, which have to scan all the DEM cells in each loop, the new
one scans only the data that still need processing, storing their location information in two stacks. As a
result, the number of cells visited and the number of scans are dramatically decreased. This improves accuracy and also
makes fewer modifications to the DEM, preserving its original character.
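The first step, quantile binning, can be illustrated briefly. This sketch covers only the classification into eight elevation classes; the class-by-class scan and the two location stacks of the full method [14] are omitted, and the function name is hypothetical.

```python
# Bin DEM elevations into eight quantile classes and record each cell's
# class index in a "transient matrix". Break points are taken from the
# sorted elevations at the (n*i/8)-th positions, i = 1..7.

def quantile_classes(z, k=8):
    flat = sorted(v for row in z for v in row)
    n = len(flat)
    breaks = [flat[(n * i) // k] for i in range(1, k)]
    def cls(v):
        for i, b in enumerate(breaks):
            if v < b:
                return i
        return k - 1
    return [[cls(v) for v in row] for row in z]

z = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
classes = quantile_classes(z)   # each cell lands in its own class here
```

Because the subsequent scan proceeds class by class, only cells whose class disagrees with their neighbours' need revisiting, which is the source of the reduced scanning reported above.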
Figure 3
Analysis of Figure 3: More alterations are required to execute the P&D algorithm than the J&D algorithm
for a DEM of the same size. For the J&D method, the deviation over the first two points is 177528401.48; as the size of the
DEM increases to 8×1042441 cells, the deviation rises steeply to 3095436120.77. The P&D
algorithm, however, performs far more modifications: its deviation varies from 694088130.811 for small DEMs to
5298336085.73 for large DEMs, which is much higher than for the J&D algorithm.
Figure 4
Analysis of Figure 4: With the quantile classification approach, the number of modified cells increases
from 229 for small DEMs of size 1×1042441 to 2440 for large DEMs of size 8×1042441. The curve shows a
steady increase in deviation up to 683.91 as the DEM size reaches 3×1042441. A slight decrease in the
number of modified cells reduces the overall deviation to 661.25 for a DEM of size 4×1042441, after which the
number of modified cells rises linearly with the deviation up to 785.8019, which is far less than for the
conventional methods.
Figure 5
Analysis of Figure 5: The number of comparisons increases roughly linearly with the size of the DEM. A steep rise is seen,
from 62800000 comparisons for small DEMs to 578000000 for large DEMs, while the standard deviation
grows from 40446507.88 to 184404276.4 as the DEM grows large.
Figure 6
Analysis of Figure 6: The figure shows that the number of comparisons increases with the
size of the DEM. The P&D method grows roughly linearly from 1.8×10^9 to 7.45×10^10 comparisons, which is much better than
the conventional J&D method. The curve for the J&D method shows a drastic
increase in the number of comparisons once the DEM grows beyond 4×1042441 pixels; its deviation,
from 2.8×10^10 (for small DEMs) to 6.39×10^11 (for large DEMs), is much higher than that of the P&D method, whose
deviation rises from 2.7×10^9 to 3.45×10^10. The more comparisons an algorithm performs, the higher its
complexity.
Figure 7
Analysis of Figure 7: The figure shows the execution times of the three algorithms. The time
required by the J&D method increases drastically from 4.7 minutes for small DEMs (1×1042441) to 15.2
hours for large DEMs (8×1042441), which is very inefficient. The P&D method reduces the time to some
extent: for small DEMs it takes 17-100 seconds, while for large DEMs it takes 55.71 minutes (about an
hour). Quantile classification dramatically improves efficiency, executing in 1-10 seconds for small as
well as large DEMs. The deviation shows a drastic decrease from 18664.30 (J&D method) to 1303.66 (P&D
method) and further to 5.89 (quantile classification method).
Analysis of Incremental Methods: Incremental methods are widely used in depression filling and flat-surface
processing. Conventional incremental methods involve a great many comparisons and modifications, resulting in
very high time complexity and alteration of non-problematic regions. Figures 1 and 2 show how the priority-flood
algorithm improves efficiency, reduces the time complexity, and surpasses the performance of the older depression-filling
methods.
From Figures 3 and 4, it is clear that the P&D algorithm scans many more cells than the J&D algorithm, owing to three
complex subroutines and a recursive routine in its procedure. An improved direct
implementation reduced its time complexity to O(n^1.2), but the number of cells to be scanned
could not be brought below that of the J&D method despite every improvement. The new quantile
classification approach [14] has significant advantages. Unlike the traditional methods, which have to scan all
the DEM cells in each loop, it scans only the data that still need processing by storing their
location information in two stacks. As a result, the number of cells visited and the number of scans are dramatically decreased
thanks to the stack's first-in-last-out nature, and the location information of the depressions all ends up in one of the
two stacks, which facilitates the subsequent hydrologic analysis.
From Figures 5 and 6, we can conclude that the number of comparisons required by the J&D
algorithm is very large, which has a direct impact on its time complexity; it is somewhat lower for the P&D algorithm and much
lower for the quantile classification approach. From Figure 7 it is clear that the new (quantile) method is thousands of
times faster than the well-known Jenson and Domingue (1988) method and over seven times faster than
Planchon and Darboux's on average.
2.2 Decremental methods
2.2.1 Carving: Spurious pits in grid digital elevation models are often removed by filling them up to the level of
their outflow point [10]. An alternative approach called carving has been proposed to suppress each
pit by creating a descending path from it to the nearest point having a lower elevation value. The cost of
this transformation is defined as the sum of the altitude differences between the input and output pitless digital
elevation models. In contrast to the pit-filling procedure, which acts on two-dimensional regions, carving acts
along one-dimensional paths; therefore carving usually leads to smaller transformation costs than pit filling.
Although carving leads to smaller transformation costs than pit filling for any DEM of reasonable size, some
specific pits may be removed at a lower cost by pit filling. The carving procedure, its application to adaptive
drainage enforcement, and an enhanced algorithm for determining flow directions on plateaus are detailed by
Soille et al. in 2003.
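The core carving move for a single pit can be sketched as a breadth-first search. This is a toy illustration, not Soille's algorithm: it lowers the path cells flat to the pit's elevation, whereas a practical implementation carves a strictly descending ramp; the function name is hypothetical.

```python
from collections import deque

# Carve one pit: search outward from the pit to the nearest cell strictly
# lower than it, then lower every cell on the connecting path to the pit's
# elevation, creating an outflow channel instead of filling the pit.

def carve_pit(z, pit):
    rows, cols = len(z), len(z[0])
    pit_elev = z[pit[0]][pit[1]]
    prev = {pit: None}                 # BFS parents, also the visited set
    q = deque([pit])
    while q:
        r, c = q.popleft()
        if z[r][c] < pit_elev:         # found the outflow target
            cell = prev[(r, c)]        # lower the path, not the target
            while cell is not None and cell != pit:
                z[cell[0]][cell[1]] = pit_elev
                cell = prev[cell]
            return z
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols \
                        and (rr, cc) not in prev:
                    prev[(rr, cc)] = (r, c)
                    q.append((rr, cc))
    return z                           # no lower cell: nothing to carve

z = [[5, 9, 9, 0],
     [9, 2, 9, 9],
     [9, 9, 9, 9]]
carve_pit(z, (1, 1))   # a single barrier cell is lowered toward the 0
```

Only the one-dimensional path is altered, which is why carving typically costs less than filling the whole two-dimensional depression.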
Decremental methods are not very popular, since they cause unnecessary modification in
non-problematic regions. They are either not used at all or used only in combination with incremental approaches.
2.3 Hybrid Approach: A hybrid of carving and pit filling can be used to alter the DEM as little as
possible [23]. It was proposed as an improvement to carving, minimizing the modification of the original DEM
while preserving accuracy. Rather than suppressing a pit by either pit filling or carving, whichever individual procedure
is cheaper, the hybrid approach goes one step further by allowing
combinations of both procedures: pits are filled up to a certain level, and carving proceeds from this
level. The cost of transforming an input DEM into an output pitless DEM is defined as the sum of the absolute altitude
differences between the input DEM, DEMi (before processing), and the
output DEM, DEMo (after processing):
C = Σ_{x=1..n} | DEMi(x) − DEMo(x) |
where n is the number of cells and C is the cost of transformation.
Although carving leads to smaller transformation costs than pit filling for any DEM of reasonable size,
some specific pits may be removed at a lower cost by pit filling. Pit filling has the drawback of creating
artificial flat areas, while carving alters the original character of the DEM. A hybrid implementation combining
pit filling and carving can solve many of these problems, and new algorithms can be devised to mix the two
more efficiently. Table 1 gives the cost of transformation and the number of modified cells for a DEM of
250×250 resolution.
Method used               Cost of transformation (m)   Cells modified
Plain pit filling         1.7×10^6                     2.0×10^5
Plain carving             4.4×10^5                     7.3×10^4
Hybrid implementation     3.4×10^5                     7.9×10^4
  filling part            2.1×10^5                     5.2×10^4
  carving part            1.3×10^5                     2.6×10^4
Table 1
Analysis of Table 1: The cost of transformation decreases when the hybrid approach is used instead of
filling or carving alone, and the number of modified cells stays far below that of plain pit filling. This improves accuracy.
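The cost bookkeeping behind Table 1 can be made concrete. A minimal sketch of the transformation-cost formula, assuming plain list-of-lists DEMs; the function name is our own.

```python
# C is the sum of absolute elevation differences between the input DEM
# (before processing) and the output, pitless DEM (after processing).
# Filling raises cells and carving lowers them, so both contribute to C.

def transformation_cost(dem_in, dem_out):
    cost = 0.0
    modified = 0
    for row_in, row_out in zip(dem_in, dem_out):
        for a, b in zip(row_in, row_out):
            if a != b:
                modified += 1
                cost += abs(b - a)
    return cost, modified

before = [[9, 9, 9], [9, 1, 9], [9, 9, 9]]
after  = [[9, 9, 9], [9, 9, 9], [9, 9, 9]]   # the pit filled to its pour point
cost, cells = transformation_cost(before, after)  # cost 8.0, one cell changed
```

Comparing this cost for a filled, a carved, and a hybrid output of the same input DEM reproduces the kind of comparison shown in Table 1.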
2.4 Heuristic algorithm: While searching for the outlet of a pit or flat, the methods described above only check the
eight cells adjacent to the pit or flat and do not consider the general trend of the DEM. In other words, they have no
information about states beyond that provided in the problem definition; all they can do is generate
successors and distinguish a goal state from a non-goal state. These methods can find the outlet, but they are
unreliable and inefficient in most cases, and unrealistic parallel drainage lines, unreal drainage patterns, and spurious
terrain features are likely to be generated.
This heuristic method [12][20][21] contains two steps. First, the incipient flow direction is calculated
using the basic algorithm: the direction of flow is determined by finding the direction of maximum drop from
each cell to its eight adjacent neighbors (the D-8 algorithm). The second stage is the main stage, which finds the
optimal outlet of pits or flats. The reconditioned DEM is scanned and all sink and depression nodes are placed in a
stack called the marked stack. For every node which is a sink, its eight adjacent neighbors are put in an OPEN
list and a heuristic function is applied.
The heuristic information consists of two elements, the actual cost and the estimated cost:
f(n) = g(n) + h(n)
where f(n) is the heuristic information of node n, g(n) is the actual cost, i.e. the difference in elevation
between the starting node and node n, and h(n) is the estimated cost of going from this node to the outlet. The
proposed algorithm always selects the node in the open list with the
maximum heuristic information as the next node to be checked. This heuristic information ensures that the
algorithm tries only the nodes most likely to lead toward the outlet. The main
limitation is the need to develop an efficient heuristic function, and research on its
development is ongoing.
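The open-list selection rule can be sketched as a best-first search. This is a schematic illustration only: the g() and h() functions below are hypothetical placeholders (designing an effective h() is precisely the open problem noted above), and the 1-D "terrain" is a toy example.

```python
import heapq

# Best-first outlet search: nodes in the OPEN list carry f(n) = g(n) + h(n),
# and the node with the maximum heuristic information is expanded next.
# A max-heap is simulated by pushing negated f values onto heapq's min-heap.

def heuristic_outlet_search(start, neighbours, is_outlet, g, h):
    open_list = [(-(g(start) + h(start)), start)]
    closed = {start}
    while open_list:
        neg_f, node = heapq.heappop(open_list)   # max-f node
        if is_outlet(node):
            return node
        for nxt in neighbours(node):
            if nxt not in closed:
                closed.add(nxt)
                heapq.heappush(open_list, (-(g(nxt) + h(nxt)), nxt))
    return None

# Toy 1-D terrain: walk from the pit at index 2 to the first strictly
# lower cell, guided by placeholder cost functions.
elev = [4, 3, 1, 3, 2, 0]
outlet = heuristic_outlet_search(
    2,
    neighbours=lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(elev)],
    is_outlet=lambda i: elev[i] < elev[2],
    g=lambda i: elev[2] - elev[i],   # elevation drop from the starting node
    h=lambda i: -abs(elev[i]),       # toy estimate favouring low cells
)
```

With these placeholder costs the search is steered downslope toward the lower ground at the right end of the array instead of expanding every neighbour blindly.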
Figure 8 shows how accuracy is improved by the heuristic algorithm, yielding near-exact drainage lines [12]:
i) drainage lines delineated using ArcGIS 9.2 (J&D pit filling with D-8 and D-infinity);
ii) drainage lines delineated using the heuristic pit-filling algorithm;
iii) the river geospatial data.
The figure shows that the drainage lines obtained with the heuristic approach are far closer to the real river
network than those from ArcGIS, which uses the J&D approach with the D-8 method.
III. Conclusion
The main aim of this survey is to enrich the information content of digital elevation data through automatic
sink removal, treatment of flat areas, and accurate flow-field derivation. Various flow determination and pit-filling
algorithms have been introduced, each new one reducing time complexity or improving accuracy. The
conventional approaches, which started in the early 1980s, could solve the problem, but they are ineffective
for large, high-resolution DEMs: they take hours or days to complete, and the resulting drainage lines are far
from reality. Modern methods, especially priority-flood and quantile classification, are much more effective.
The heuristic approach improves accuracy, but an efficient heuristic function still needs to be built, and it
remains to be decided which heuristic information a node should contain. The decremental methods introduced
in 2003 are not very efficient, but there is scope for new hybrid approaches that intermix the incremental and
decremental families. Earlier we had limited sources for obtaining DEMs, mainly SRTM (Shuttle Radar
Topography Mission), which is now obsolete. With new technologies such as LIDAR-based elevation models and
the ASTER DEM, we need to devise algorithms that remove terrain depressions with minimal alteration and
reduced time complexity. While devising an algorithm, a trade-off must be maintained between complexity and
the number of modifications, since the aim is accuracy with low time requirements. Simple, efficient, and less
complex algorithms are the ones implemented as tools in GIS processing.
References:
[1]. Quinn, P.F., Beven, K., Chevallier, P., Planchon, O., "The prediction of hillslope flow paths for distributed hydrological modelling using digital terrain models," Hydrological Processes, vol. 5, no. 1, pp. 59-79, 1991.
[2]. Tarboton, D.G., "A new method for the determination of flow directions and upslope areas in grid digital elevation models," Water Resources Research, vol. 33, no. 2, pp. 309-319, February 1997.
[3]. Kun Hou, Jigui Sun, Wei Yang, Tieli Sun, Ying Wang, and Sijun Ma, "Extraction Algorithms for Using a Regular Grid DEMs," Jinggangshan, P. R. China, 2-4 April 2010, pp. 112-115.
[4]. N. Senevirathne and G. Willgoose, "A Comparison of the Performance of Digital Elevation Model Pit Filling Algorithms for Hydrology," 20th International Congress on Modelling and Simulation, Adelaide, Australia, 1-6 December 2013.
[5]. O. Planchon and F. Darboux, "A fast, simple and versatile algorithm to fill the depressions of digital elevation models," Catena, vol. 46, no. 2-3, pp. 159-176, 2001.
[6]. D. M. Mark, "Automatic detection of drainage networks from digital elevation models," Cartographica, vol. 21, pp. 168-178, 1983.
[7]. Q. Zhu, X. Tian, and Y. Zhang, "The Extraction of Catchment and Subcatchment from Regular Grid DEMs," Acta Geodaetica et Cartographica Sinica, vol. 34, no. 2, pp. 129-133, May 2005.
[8]. Jan Seibert and Brian L. McGlynn, "A new triangular multiple flow direction algorithm for computing upslope areas from gridded digital elevation models," Water Resources Research, vol. 43, W04501, doi:10.1029/2006WR005128, 2007.
[9]. Richard Barnes, Clarence Lehman, and David Mulla, "Priority-Flood: An Optimal Depression-Filling and Watershed-Labeling Algorithm for Digital Elevation Models," July 2013.
[10]. Pierre Soille, Jürgen Vogt, and Roberto Colombo, "Carving and adaptive drainage enforcement of grid digital elevation models," Water Resources Research, vol. 39, no. 12, December 2003.
[11]. Ao Tianqi, Kuniyoshi Takeuchi, Hiroshi Ishidaira, Junich Yoshitani, and Kazuhiko Fukami, "Development and application of a new algorithm for automated pit removal for grid DEMs," Hydrological Sciences Journal, 48:6, pp. 985-997, doi:10.1623/hysj.48.6.985.51423.
[12]. Wei Yang, Tieli Sun, Kun Hou, Fanhua Yu, and Zhiming Liu, "An adaptive approach for extraction of drainage network from shuttle radar topography mission and satellite imagery," International Journal of Innovative Computing, Information and Control, vol. 7, no. 12, pp. 6965-6978, December 2011.
[13]. Jan Seibert and Brian L. McGlynn, "A new triangular multiple flow direction algorithm for computing upslope areas from gridded digital elevation models," Water Resources Research, vol. 43, no. 4, April 2007.
[14]. Jingwen Xu, Wanchang Zhang, and Chuansheng Liu, "A novel method for Filling the Depressions in Massive DEM Data," IEEE Geoscience and Remote Sensing Symposium, pp. 4080-4083, 2007.
[15]. Wei-Bin Yu, Cheng Su, Chun-Na Yu, Xi-Zhi Wang, Cun-Jun Feng, and Xiao-Can Zhang, "An Efficient Algorithm for Depression Filling and Flat-Surface Processing in Raster DEMs," IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 12, December 2014.
[16]. S. K. Jenson and J. O. Domingue, "Extracting Topographic Structure from Digital Elevation Data for Geographic Information System Analysis," Photogramm. Eng. Remote Sens., vol. 54, no. 11, pp. 1593-1600, Nov. 1988.
[17]. L. Wang and H. Liu, "An efficient method for identifying and filling surface depressions in digital elevation models for hydrologic analysis and modeling," Int. J. Geogr. Inf. Sci., vol. 20, no. 2, pp. 193-213, Feb. 2006.
[18]. J. F. O'Callaghan and D. M. Mark, "The extraction of drainage networks from digital elevation data," Comput. Vis. Graph. Image Process., vol. 28, no. 3, pp. 323-344, Dec. 1984.
[19]. Robert H. Erskine, Timothy R. Green, Jorge A. Ramirez, and Lee H. MacDonald, "Comparison of grid-based algorithms for computing upslope contributing area," Water Resources Research, vol. 42, W09416, doi:10.1029/2005WR004648, 2006.
[20]. W. Yang, K. Hou, F. Yu, Z. Liu, and T. Sun, "A novel algorithm with heuristic information for extracting drainage networks from raster DEMs," Hydrol. Earth Syst. Sci. Discuss., 7, pp. 441-459, 2010, www.hydrol-earth-syst-sci-discuss.net/7/441/2010.
[21]. W. Yang, K. Hou, F. Yu, Z. Liu, and T. Sun, "Extraction Algorithms for Using a Regular Gridded DEMs," Jinggangshan, P. R. China, 2-4 April 2010, pp. 112-115.
[22]. Feifei Pan, Marc Stieglitz, and Robert B. McKane, "An algorithm for treating flat areas and depressions in digital elevation models using linear interpolation," Water Resources Research, vol. 48, W00L10, 2012.
[23]. Pierre Soille, "Optimal removal of spurious pits in grid digital elevation models," Water Resources Research, vol. 40, W12509, doi:10.1029/2004WR003060, 2004.
[24]. Liu Yong-He, Zhang Wan-Chang, and Xu Jing-Wen, "Another Fast and Simple DEM Depression-Filling Algorithm Based on Priority Queue Structure," Atmospheric and Oceanic Science Letters, vol. 2, no. 4, pp. 214-219, 2009.