IMPROVING REAL-TIME TASKS AND HARNESSING ENERGY USING CSBTS IN A VIRTUALIZED CLOUD - ijcax
Cloud computing lets business customers scale their resource usage up and down on demand, a capability made possible by virtualization technology. The scheduling objectives are to improve the system's schedulability for real-time tasks and to save energy. To achieve these objectives, we employ virtualization and rolling-horizon optimization with a vertical scheduling operation.
The project presents the Cluster Scoring Based Task Scheduling (CSBTS) algorithm, which aims to decrease task completion time, together with policies for VM creation, migration, and cancellation that dynamically adjust the scale of the cloud while meeting real-time requirements and saving energy.
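As a rough illustration of the scheduling loop such an algorithm implies, the sketch below scores candidate clusters for each deadline-ordered task and places the task on the best-scoring one. The data model and scoring weights are invented for illustration; they are not taken from the paper.

```python
# A minimal, hypothetical sketch of cluster-scoring task dispatch.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    free_cpu: float      # normalized spare CPU capacity, 0..1
    energy_cost: float   # relative energy cost of adding load, 0..1

@dataclass
class Task:
    name: str
    cpu_demand: float    # normalized CPU demand
    deadline: float      # seconds until the deadline

def score(cluster: Cluster, task: Task) -> float:
    """Higher is better: favor spare capacity, penalize energy cost (made-up weights)."""
    if cluster.free_cpu < task.cpu_demand:
        return float("-inf")   # infeasible: would miss the real-time requirement
    return cluster.free_cpu - 0.5 * cluster.energy_cost

def dispatch(tasks, clusters):
    # Rolling-horizon flavor: handle the most urgent task first and
    # rescore the clusters after every placement.
    for task in sorted(tasks, key=lambda t: t.deadline):
        best = max(clusters, key=lambda c: score(c, task))
        if score(best, task) == float("-inf"):
            print(f"{task.name}: rejected (no feasible cluster)")
            continue
        best.free_cpu -= task.cpu_demand
        print(f"{task.name} -> {best.name}")

if __name__ == "__main__":
    clusters = [Cluster("c1", 0.8, 0.6), Cluster("c2", 0.5, 0.2)]
    tasks = [Task("t1", 0.4, 5.0), Task("t2", 0.3, 2.0)]
    dispatch(tasks, clusters)
```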
Advanced Automated Analytics Using OSS Tools, GA Tech FDA Conference 2016 - Grid Protection Alliance
The exponential increase in data available to analyze power system events is universally recognized, but in many cases the approach to using this data is to do what we already do but do it faster, or get more people to do it. Unfortunately, spinning the hamster wheel faster is not keeping up with the demand to make decisions faster in support of grid modernization. Open source software (OSS) tools offer tremendous opportunity for collaboration that encourages innovation, and the speed and flexibility of development to keep pace with these demands.
How to expand the Galaxy from genes to Earth in six simple steps (and live sm...) - Raffaele Montella
FACE-IT is an effort to develop a new IT infrastructure to accelerate existing disciplinary research and enable information transfer among traditionally separate fields. At present, finding data and processing it into usable form can dominate research efforts. By providing ready access not only to data but also to the software tools used to process it for specific uses (e.g., climate impact and economic model inputs), FACE-IT allows researchers to concentrate their efforts on analysis. Lowering barriers to data access lets researchers stretch in new directions and learn from and respond to the needs of other fields. FACE-IT builds on the Globus Galaxies platform, which has been developed over the past several years at the University of Chicago. FACE-IT also benefits from substantial software development undertaken by the communities that have developed most of the domain-specific tools required to populate FACE-IT with useful capabilities. The FACE-IT Galaxy manages earth-system datatypes (such as NetCDF), new tool parameters (dates, map, OPeNDAP), aggregated datatypes (RAFT), service providers, and map visualizers.
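Since the platform's earth-system datatypes are NetCDF-based, a minimal sketch of opening such a file with the standard netCDF4 Python library may help ground the idea. The file name and the variable name "tas" are placeholders, not FACE-IT identifiers.

```python
# Illustrative only: inspect an earth-system NetCDF file of the kind the
# FACE-IT Galaxy datatypes wrap. "sample.nc" and "tas" are placeholders.
from netCDF4 import Dataset

with Dataset("sample.nc") as ds:           # Dataset also accepts an OPeNDAP URL
    print(list(ds.dimensions))             # e.g. time, lat, lon
    tas = ds.variables["tas"]              # near-surface air temperature (CF name)
    print(tas.shape, getattr(tas, "units", "unknown units"))
```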
Abstract: With the development of information technology, the scale of data is increasing quickly. This massive data poses a great challenge for data processing and classification. Several algorithms have been proposed to cluster data efficiently; one of them is the random forest algorithm, which is used here for feature subset selection. Feature selection involves identifying a subset of the most useful features that produces results compatible with the original, entire feature set, and it is achieved by classifying the given data. Efficiency is measured by the time required to find a subset of features, while effectiveness relates to the quality of that subset. The existing system uses a fast clustering-based feature selection algorithm, which has proven powerful, but as dataset sizes grow rapidly it becomes less efficient because clustering the datasets takes considerably more time. Hence this project proposes a new implementation that clusters the data efficiently and persists it in a back-end database to reduce processing time, achieved via a scalable random forest algorithm. The scalable random forest is implemented using MapReduce programming (an implementation of Big Data). It works in two phases: the first gathers the datasets and persists them in the datastore, and the second performs clustering and classification of the data. The process is implemented on Google App Engine's Hadoop platform, a widely used open-source implementation of Google's distributed file system with the MapReduce framework for scalable distributed or cloud computing. The MapReduce programming model provides an efficient framework for processing large datasets in a massively parallel fashion and has become the most popular parallel model for data processing on cloud computing platforms. Adapting traditional machine learning algorithms to the MapReduce programming framework is therefore essential when dealing with massive datasets.
Keywords: Data mining, Hadoop, MapReduce, Clustering Tree.
Title: Big Data on Implementation of Many to Many Clustering
Authors: Ravi R., Michael G.
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
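To make the two-phase map/reduce idea in the abstract above concrete, here is a toy, single-process imitation: each "map" task fits a weak classifier on one data split, and the "reduce" step aggregates their votes, which is the core of the random-forest approach. The stump learner and synthetic data are invented for illustration; the paper's actual system runs on Hadoop.

```python
# Toy in-process imitation of MapReduce-style random-forest training.
from collections import Counter
import random

def train_stump(split):
    """'Map' task: fit a one-threshold classifier to one data split."""
    best_t, best_acc = None, -1.0
    for t, _ in split:                     # candidate thresholds = observed x values
        acc = sum((x > t) == bool(y) for x, y in split) / len(split)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict(forest, x):
    """'Reduce' step: majority vote over the stumps from all map tasks."""
    votes = Counter(int(x > t) for t in forest)
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    random.seed(0)
    data = [(random.random(), 0) for _ in range(300)]
    data = [(x, int(x > 0.6)) for x, _ in data]       # label: 1 when x > 0.6
    splits = [data[i::3] for i in range(3)]           # one split per map task
    forest = [train_stump(s) for s in splits]         # map phase
    print("thresholds:", [round(t, 2) for t in forest])
    print(predict(forest, 0.8), predict(forest, 0.2)) # expect 1 and 0
```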
DSD-INT 2023 Deltares Hydrology Suite - An introduction - Slootjes - Deltares
Presentation by Nadine Slootjes and Timo Kroon (Deltares, Netherlands) at the Hydrology Suite User Days (Day 1) - Hydrology Suite introduction and River Basin Management software (RIBASIM), during the Delft Software Days - Edition 2023 (DSD-INT 2023). Tuesday, 28 November 2023, Delft.
A CLOUD BASED ARCHITECTURE FOR WORKING ON BIG DATA WITH WORKFLOW MANAGEMENT - IJwest
Real environments produce large collections of noisy, vague data, called Big Data. Middleware for working on such data has been developed and is now very widely used, but the challenge of working on Big Data is its processing and management. An integrated management system is required to integrate data from multiple sensors and maximize target success, in a setting where the system has hard time constraints on processing and real-time decision-making. A reliable data fusion model must meet this requirement and let the user steadily monitor the data stream. With the widespread use of workflow interfaces, this requirement can be addressed, but working with Big Data remains challenging. We provide a multi-agent cloud-based architecture as a higher-level vision for solving this problem. The architecture enables Big Data fusion through a workflow management interface, and the proposed system is capable of self-repair in the presence of risks while keeping its own risk low.
Towards an Infrastructure for Enabling Systematic Development and Research of... - Rafael Ferreira da Silva
Presentation held at the 17th IEEE eScience Conference
Scientific workflows have been used almost universally across scientific domains and have underpinned some of the most significant discoveries of the past several decades. Many of these workflows have high computational, storage, and/or communication demands, and thus must execute on a wide range of large-scale platforms, from large clouds to upcoming exascale high-performance computing (HPC) platforms. These executions must be managed using some software infrastructure. Due to the popularity of workflows, workflow management systems (WMSs) have been developed to provide abstractions for creating and executing workflows conveniently, efficiently, and portably. While these efforts are all worthwhile, there are now hundreds of independent WMSs, many of which are moribund. As a result, the WMS landscape is segmented and presents significant barriers to entry due to the hundreds of seemingly comparable, yet incompatible, systems that exist. Consequently, many teams, small and large, still elect to build their own custom workflow solution rather than adopt, or build upon, existing WMSs. This current state of the WMS landscape negatively impacts workflow users, developers, and researchers. In this talk, I will provide a view of the state of the art and some of my previous research and technical contributions, and identify crucial research challenges in the workflow community.
The assets of the remote-sensing digital world generate massive volumes of real-time data daily, in which insight has potential significance if collected and aggregated effectively. We propose a real-time Big Data analytical architecture for remote sensing satellite applications that accommodates both online and offline data processing.
HW/SW Partitioning Approach on Reconfigurable Multimedia System on Chip - CSCJournals
Due to the complexity and high performance requirements of multimedia applications, the design of embedded systems is subject to various design constraints such as execution time, time to market, and energy consumption. Several joint software/hardware design (co-design) approaches have been proposed to help the designer find a match between application and architecture that satisfies these constraints. This paper presents a new methodology for hardware/software partitioning on a reconfigurable multimedia system on chip, based on a dynamic step and a static step: the first uses dynamic profiling and the second uses the Design Trotter tools. Our approach is validated through a 3D image synthesis application.
TASK-DECOMPOSITION BASED ANOMALY DETECTION OF MASSIVE AND HIGH-VOLATILITY SES... - ijdpsjournal
The Science Information Network (SINET) is a Japanese academic backbone network serving more than 800 universities and research institutions. SINET traffic is characteristically enormous and highly variable. In this paper, we present a task-decomposition based anomaly detection of massive and high-volatility session data of SINET. Three main features are discussed: task scheduling, traffic discrimination, and histogramming. We adopt a task-decomposition based dynamic scheduling method to handle SINET's massive session data stream. In the experiment, we analysed SINET traffic from 2/27 to 3/8 and detected several anomalies using LSTM-based time-series data processing.
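A minimal sketch of the LSTM forecast-and-threshold pattern such detection typically uses, written in PyTorch: train a small LSTM to predict the next traffic value and flag points whose prediction error is unusually large. The model size, window length, threshold, and synthetic series are all illustrative assumptions, not details from the paper.

```python
# Forecast-and-threshold anomaly detection with a small LSTM (illustrative).
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # predict the value after the window

def make_windows(series, w=20):
    xs = torch.stack([series[i:i + w] for i in range(len(series) - w)])
    return xs.unsqueeze(-1), series[w:].unsqueeze(-1)

if __name__ == "__main__":
    torch.manual_seed(0)
    t = torch.arange(0, 400, dtype=torch.float32)
    series = torch.sin(t / 10) + 0.05 * torch.randn_like(t)
    series[300] += 3.0                       # injected spike to detect
    x, y = make_windows(series)
    model = Forecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(100):                     # short full-batch training loop
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        err = (model(x) - y).abs().squeeze()
    threshold = err.mean() + 4 * err.std()   # simple statistical threshold
    print("anomalies at:", ((err > threshold).nonzero().squeeze(-1) + 20).tolist())
```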
Taking advantage of state-of-the-art underwater vehicles and current networking capabilities, the visionary double objective of this work is to "open to people connected to the Internet an access to ocean depths, anytime, anywhere." Today, these people can only perceive the changing surface of the sea from the shore and know almost nothing of what lies hidden below. If they could explore the seabed and become knowledgeable about it, they would get involved in finding alternative solutions to our vital terrestrial problems: pollution, climate change, destruction of biodiversity, and exhaustion of Earth's resources. The second objective is to assist professionals of the underwater world in performing their tasks by augmenting the perception of the scene and offering automated actions such as wildlife monitoring and counting. The introduction of Mixed Reality and the Internet into aquatic activities constitutes a technological breakthrough compared with the status of existing related technologies. Through the Internet, anyone, anywhere, at any moment will naturally be able to dive in real time using a Remotely Operated Vehicle (ROV) in the most remarkable sites around the world. The heart of this work is Mixed Reality. The main challenge is to achieve real-time display of a digital video stream to web users by mixing 3D entities (objects or pre-processed underwater terrain surfaces) with 2D video of live images collected in real time by a teleoperated ROV.
A Reconfigurable Component-Based Problem Solving Environment
1. Development of a Hydrologic Community Modeling System Using a Workflow Engine. Bo Lu, Department of Civil, Architectural & Environmental Engineering, Drexel University, June 2, 2011. Committee: Dr. Michael Piasecki, Dr. Jonathan Goodall, Dr. Franco Montalto, Dr. Mira Olson, Dr. Ilya Zaslavsky.
2.-7. Let's imagine... a landscape of many data sets and data-access services, many models/modules, and many tools for transformation, analysis, display, etc. Objective: Develop a Hydrologic Community Modeling System (HCMS) that allows constructing seamlessly integrated hydrologic models with swappable and portable modules.
9. Lack of credibility of the algorithms or methods encapsulated in the codes
34. Why use TRIDENT in hydrologic modeling? Flexible model setup: compose workflows with swappable activities via drag-and-drop on a GUI. Interactive/non-interactive execution: fully automatic, holistic execution without external intervention, or interactive execution under user control. High-performance computing: parallel or concurrent execution and distributed computation in a Grid environment. Provenance capture: recording who, how, what, and which resources are used in a workflow, plus the derivation flow of data products, which ensures repeatability of model executions. Easy to share: workflows can be shared through publication mechanisms or repositories.
35. Introduction to the HCMS libraries: Data Access Library, Data Processing Library, Hydrologic Model Library, Post-Analysis & Utilities Library.
36. 1. Data Access Library. Data sources: data is retrieved from the following sources using SOAP/FTP protocols.
38. NLCD: 30 m x 30 m, GeoTIFF. [Activity 1] Access NED or NLCD data within a specified area via Application Services. [Activity 2] Decompress the downloaded data files.
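A hypothetical rendering of these two activities in Python: fetch a zipped NED/NLCD tile and unpack it. The URL is a placeholder; the real activities go through the Application Services mentioned above.

```python
# Placeholder sketch of the download-and-decompress pair of activities.
import io
import urllib.request
import zipfile

URL = "https://example.org/nlcd_tile.zip"    # placeholder, not a real endpoint

def fetch_and_unpack(url, dest="./nlcd"):
    with urllib.request.urlopen(url) as resp:             # Activity 1: download
        archive = zipfile.ZipFile(io.BytesIO(resp.read()))
    archive.extractall(dest)                              # Activity 2: decompress
    return archive.namelist()

# fetch_and_unpack(URL)  # would return the GeoTIFF members of the archive
```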
44. Temperature, precipitation, long-wave/short-wave radiation, pressure, vertical/horizontal wind speed, etc. [Activity 1] Download hourly data files (GRIB) from the NLDAS-2 data server: ftp://hydro1.sci.gsfc.nasa.gov/data/s4pa/NLDAS/NLDAS_FORA0125_H.002/ [Activity 2] Choose fields from a given field list; the activity then extracts data for the selected fields from the downloaded files via the "WGRIB" decoder. [Activity 3] Cut the gridded data set to a specified geospatial extent.
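The three activities could look roughly like the sketch below, which assumes the wgrib decoder is on PATH and that the FTP endpoint above is still reachable; the field name and file names are examples only.

```python
# Sketch of the NLDAS-2 activities: FTP download, then field extraction via wgrib.
import subprocess
from ftplib import FTP

HOST = "hydro1.sci.gsfc.nasa.gov"
PATH = "/data/s4pa/NLDAS/NLDAS_FORA0125_H.002/"

def download(remote_name, local_name):
    """Activity 1: pull one hourly GRIB file from the NLDAS-2 server."""
    with FTP(HOST) as ftp:
        ftp.login()                                   # anonymous login
        ftp.cwd(PATH)
        with open(local_name, "wb") as f:
            ftp.retrbinary(f"RETR {remote_name}", f.write)

def extract_field(grib_file, field="APCP", out="field.bin"):
    """Activity 2: select one field (here precipitation) and decode it with wgrib."""
    inventory = subprocess.run(["wgrib", grib_file],
                               capture_output=True, text=True).stdout
    record = next(line.split(":")[0] for line in inventory.splitlines()
                  if f":{field}:" in line)
    subprocess.run(["wgrib", grib_file, "-d", record, "-bin", "-o", out],
                   check=True)

# Activity 3 (clipping to a geospatial extent) would slice the decoded grid
# using the grid metadata from the inventory.
```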
49. It facilitates retrieving hydrologic and meteorological observation time series from a central metadata catalogue (HIS Central, located at the San Diego Supercomputer Center), which holds the world's richest metadata for water data. Inputs: variable name (e.g., precipitation), service ID (optional), geographical extent (watershed boundary or latitude/longitude), and temporal extent. [Flowchart: Get Web Services, semantic checking against an ontology dictionary, Get Sites, Get Variables, verify variable catalog, Get Time Series Data, and parse the returned WaterML time series data/metadata.]
50. Get data via WaterOneFlow web services in TRIDENT. [Activity 1] Get web services within a specified geospatial extent. [Activity 2] Get site and variable metadata for a given variable name. [Activity 3] Get time series data of the given variable within the given geospatial extent.
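A rough sketch of the corresponding calls with the zeep SOAP client. The WSDL URL, site code, and variable code are placeholders; GetSites and GetValues are standard WaterOneFlow operations, but exact argument shapes vary across service versions, so treat this as an assumption-laden outline.

```python
# Hypothetical WaterOneFlow calls via the zeep SOAP client.
from zeep import Client

WSDL = "https://example.org/cuahsi_1_1.asmx?WSDL"     # placeholder endpoint

client = Client(WSDL)
# Activities 1-2: discover sites and their variables (WaterML is returned)
sites_xml = client.service.GetSites([""], "")         # all sites, no auth token
# Activity 3: time series for one site/variable over a date range
values_xml = client.service.GetValues(
    "NWISDV:01474500",     # site code (hypothetical example)
    "NWISDV:00060",        # variable code (discharge, hypothetical)
    "2010-01-01",
    "2010-01-31",
    "")                    # auth token
print(values_xml[:200])
```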
53. Delineate watershed/sub-watershed boundaries; generate the river network; create a Triangulated Irregular Network (TIN); process soil and land cover data; create Hydrologic Response Units (HRUs).
63. Creating Hydrologic Response Units. Step 2: process land cover data. Step 3: create HRUs.
64. Processing time series data.
72. A physically based, semi-distributed watershed model that simulates hydrologic fluxes.
73. The VB version, converted from the 9502 FORTRAN version, is migrated into the following workflow. [Activity 1] Compute the topographic index histogram for the whole watershed or for each sub-basin. [Activity 2] Compute the area-distance histogram for routing flow. [Activity 3] Interactive activity for inputting/modifying initial conditions and parameters. [Activity 4] The TOPMODEL computation kernel.
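Activity 1 in miniature: the TOPMODEL topographic index is ln(a / tan(beta)), where a is the upslope contributing area per unit contour length and beta is the local slope. The sketch below computes it and its histogram over small synthetic rasters that stand in for real catchment grids.

```python
# Topographic index ln(a / tan(beta)) and its histogram over synthetic rasters.
import numpy as np

def topographic_index(upslope_area_m2, slope_rad, cellsize_m=30.0):
    a = upslope_area_m2 / cellsize_m          # area per unit contour length
    return np.log(a / np.tan(slope_rad))

rng = np.random.default_rng(0)
area = rng.uniform(900.0, 9.0e5, size=(50, 50))   # synthetic contributing areas
slope = rng.uniform(0.01, 0.5, size=(50, 50))     # synthetic slopes, radians
ti = topographic_index(area, slope)
hist, edges = np.histogram(ti, bins=20)           # the histogram Activity 1 builds
print(hist.tolist())
```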
79. Encoded in C# and compiled into Dynamic Link Libraries (DLL).
93. Estimate potential evapotranspiration using activities that encapsulate different approaches.
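The slides do not say which PET methods HCMS encapsulates, so as one plausible example here is the Hargreaves equation, PET = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin), in a small sketch.

```python
# One illustrative PET activity: the Hargreaves equation.
import math

def hargreaves_pet(tmin_c, tmax_c, ra_mm_per_day):
    """Daily PET in mm; ra is extraterrestrial radiation as equivalent evaporation."""
    tmean = (tmin_c + tmax_c) / 2.0
    return 0.0023 * ra_mm_per_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

print(round(hargreaves_pet(12.0, 28.0, 15.0), 2))   # about 5.2 mm/day
```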
95. Workflows: 1) step-by-step workflow; 2) terrain processing workflow; 3) web-service-based workflow.
96. Delineation: 1) 7 sub-basins with a 500,000-cell threshold; 2) 33 sub-basins with a 100,000-cell threshold. [Flowchart nodes: Raw DEM, Sink-Filled DEM, Flow Direction, Flow Accumulation, Total Flow Path, Stream Raster, Stream Order, Watershed Grid, Watershed and River Network (.shp).]
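The Flow Direction step of that chain is commonly computed with the D8 rule, where each cell drains to the steepest-descent neighbor of its eight; a toy sketch of that rule follows, not the tooling the slides actually used.

```python
# D8 flow direction over a toy DEM (illustrative, not the slides' tooling).
import math
import numpy as np

# the eight neighbor offsets of the D8 convention
D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_direction(dem):
    """Per cell, return the index into D8 of the steepest downslope neighbor
    (-1 where no neighbor is lower, i.e. a pit or flat)."""
    rows, cols = dem.shape
    fdir = np.full(dem.shape, -1, dtype=int)
    for r in range(rows):
        for c in range(cols):
            drops = []
            for k, (dr, dc) in enumerate(D8):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dist = math.hypot(dr, dc)          # 1 or sqrt(2)
                    drops.append(((dem[r, c] - dem[rr, cc]) / dist, k))
            drop, k = max(drops)
            if drop > 0:
                fdir[r, c] = k
    return fdir

dem = np.array([[5.0, 4.0, 3.0],
                [4.0, 3.0, 2.0],
                [3.0, 2.0, 1.0]])
print(d8_flow_direction(dem))    # every interior cell drains toward the corner
```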
125. The TRIDENT workflow system provides a platform for designing the HCMS and for assembling hydrologic models as workflow sequences.
126. The HCMS was tested by carrying out several typical hydrologic modeling studies over the Schuylkill watershed and proved to work well as a modeling platform. While it is not free of computational cost due to the middleware layer, the additional time consumption is "affordable", especially in the lengthy data-preparation arena.
127.-129. Future Work. [Slides show the Data / Model / Tool diagram growing to many more data sets, models, and tools.]
Editor's Notes
SCE-UA: Shuffled Complex Evolution - University of Arizona.