The document describes the OGCE WorkflowSuite, which provides tools for composing and executing scientific workflows. It includes the Generic Service Toolkit for wrapping applications as web services, the XRegistry for information sharing, and XBaya for graphical workflow composition and monitoring. Workflows can integrate various resources and be made flexible, dynamic, and interoperable. Example applications discussed are weather forecasting, genome analysis, and cyberinfrastructure evaluation.
HP Microsoft SQL Server Data Management Solutions (Eduardo Castro)
This presentation was used in the MSDN WebCast; it covers the hardware offerings for running a SQL Server data warehouse, with some detail about HP hardware.
Best Regards,
Ing. Eduardo Castro Martinez
http://ecastrom.blogspot.com
The HDFS NameNode is a robust and reliable service as seen in practice in production at Yahoo and other customers. However, the NameNode does not have automatic failover support. A hot failover solution called HA NameNode is currently under active development (HDFS-1623). This talk will cover the architecture, design and setup. We will also discuss the future direction for HA NameNode.
With Biocep-R, we propose to build, on top of the mainstream statistical and scientific computing environments (R, Scilab, Matlab, SAS, ...), a federative and user-centric OSS platform for high performance computing, data analysis and collaboration. Biocep-R computational engines can run locally or remotely (on servers/clusters/grids/clouds) and can be accessed from the researcher's laptop. The researcher can use an extensible cross-platform workbench to pilot the engines and can also control them programmatically.
The workbench includes highly programmable server-side spreadsheets, fully integrated with the SCEs' functions and data, that can be mirrored to Excel spreadsheets.
Multiple researchers can connect simultaneously to the same remote computational engine and use it collaboratively via a set of broadcast views.
The researcher can easily create or connect to multiple engines running on one or multiple heterogeneous infrastructures and use them for parallel computing. The plug-in architecture offers a highly innovative way to produce and distribute SCE-based user interfaces for academia (science gateways) and industry (financial dashboards, what-if-analysis user interfaces, analytical applications, ...). Biocep-R on local virtual appliances opens new perspectives for reproducible computational research.
Virtual machines with R, Scilab and Biocep are publicly available on Amazon's Elastic Compute Cloud and can be run on demand to perform statistical/numerical computing using "unlimited" computational and storage resources. The presentation will give an overview of this new platform, and the main usage scenarios will be demonstrated.
An explanation of the Distributed Annotation System (DAS) with a worked example of how to attach an RNA-Seq DAS source to the VectorBase genome browser.
Taverna workflows: provenance and reproducibility - STFC/NERC workshop 2013 (anpawlik)
Slides on Taverna (www.taverna.org.uk) from the talk given at the STFC/NERC workshop "Workflow approaches to investigation of biological complexity", 15-16 October 2013.
OpenStack Collaboration made in heaven with Heat, Mistral, Neutron and more (Trinath Somanchi)
Cross-project collaboration is something the OpenStack community has embraced for a long time. Common libraries like Oslo reduce the time and effort needed to build a new service. Another way this manifests is in new OpenStack services being built using existing services to solve a higher-level use case.
In this talk we present how the band of projects comprising Mistral, Tacker, Neutron, Heat, TOSCA-parser and Barbican came together to build an industry-leading ETSI NFV Orchestrator that leverages the best of these projects. Each of these projects brought critical functionality needed for the final product. You will learn how, when strung together, this solution follows the classic microservices design pattern that the industry is rapidly adopting.
Opal: Simple Web Services Wrappers for Scientific Applications (Sriram Krishnan)
Grid-based infrastructure enables large-scale scientific applications to be run on distributed resources and coupled in innovative ways. In practice, however, grid resources are not easy to use for end-users, who have to learn how to generate security credentials, stage inputs and outputs, access grid-based schedulers, and install complex client software. There is a pressing need to provide transparent access to these resources so that end-users are shielded from the complicated details and free to concentrate on their domain science. Scientific applications wrapped as Web services alleviate some of these problems by hiding the complexities of the back-end security and computational infrastructure, exposing only a simple SOAP API that can be accessed programmatically by application-specific user interfaces. However, writing the application services that access grid resources can be quite complicated, especially if the work has to be replicated for every application. In this presentation, we present Opal, a toolkit for wrapping scientific applications as Web services in a matter of hours, providing features such as scheduling, standards-based grid security, and data management in an easy-to-use and configurable manner.
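The core idea Opal automates (stage inputs, run a command-line application, return outputs through a service interface) can be sketched in a few lines. This is a hedged, self-contained illustration of the wrapping pattern, not Opal's actual SOAP API; the `wrap_application` helper and its template syntax are invented for the example.

```python
import shlex
import subprocess
import tempfile
from pathlib import Path

def wrap_application(command_template):
    """Return a callable 'service' for a command-line application.

    The wrapper stages the caller's input to a scratch directory, runs
    the command, and returns stdout -- a local stand-in for the
    stage-in / execute / stage-out cycle a toolkit like Opal automates.
    """
    def service(input_text):
        with tempfile.TemporaryDirectory() as workdir:
            infile = Path(workdir) / "input.txt"
            infile.write_text(input_text)
            cmd = shlex.split(command_template.format(input=infile))
            result = subprocess.run(cmd, capture_output=True, text=True, check=True)
            return result.stdout
    return service

# Wrap `wc -w` (word count) as a stand-in "scientific application".
word_count = wrap_application("wc -w {input}")
print(word_count("three little words").split()[0])  # → 3
```

A real toolkit layers scheduling, security, and remote data movement on top of exactly this cycle.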
Team-Based Approach to Deploying VDI in Learning Environments (Jeremy Anderson)
A presentation delivered at the 2013 NERCOMP annual conference. Main focus revolves around building teams to deconstruct silos within IS, academic schools, and between IS and clients.
Cloud-Native Apache Spark Scheduling with YuniKorn Scheduler (Databricks)
Kubernetes is the most popular container orchestration system and is natively designed for the cloud. At Lyft and Cloudera, we have each built next-generation, cloud-native infrastructure based on Kubernetes that supports various distributed workloads.
Big Data Streams Architectures. Why? What? How? (Anton Nazaruk)
With the current zoo of technologies and the different ways they interact, it is a big challenge to architect a system (or adapt an existing one) that meets low-latency big data analysis requirements. Apache Kafka, and the Kappa Architecture in particular, are attracting more and more attention compared with the classic Hadoop-centric technology stack. The new Consumer API has given a significant boost in this direction. Microservices-based stream processing and the new Kafka Streams are proving to be a synergy in the big data world.
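The Kappa idea alluded to above, a single replayable log that every consumer reads at its own offset, can be sketched without any Kafka dependency. The in-memory `Log` class below is an invented toy, not Kafka's API:

```python
class Log:
    """Append-only log with per-consumer offsets (a Kappa-style toy)."""

    def __init__(self):
        self.records = []   # the single source of truth
        self.offsets = {}   # consumer name -> next index to read

    def append(self, record):
        self.records.append(record)

    def poll(self, consumer):
        """Return records the consumer has not seen yet, advancing its offset."""
        start = self.offsets.get(consumer, 0)
        self.offsets[consumer] = len(self.records)
        return self.records[start:]

log = Log()
log.append({"user": "a", "clicks": 1})
log.append({"user": "b", "clicks": 2})
print(len(log.poll("dashboard")))  # → 2  (everything so far)
log.append({"user": "a", "clicks": 3})
print(len(log.poll("dashboard")))  # → 1  (only the new record)
log.offsets["dashboard"] = 0       # rewind: reprocess history, Kappa-style
print(len(log.poll("dashboard")))  # → 3
```

The rewind step is what distinguishes Kappa from batch-plus-stream Lambda stacks: reprocessing is just reading the log again from offset zero.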
Cyberinfrastructure Experiences with Apache Airavata (smarru)
In this short presentation, we summarize Apache Airavata's use of component-based architecture to encompass major gateway capabilities (such as metadata management, meta-scheduling, execution management, and messaging).
RESTLess Design with Apache Thrift: Experiences from Apache Airavata (smarru)
Apache Airavata is software for providing services to manage scientific applications on a wide range of remote computing resources. Airavata can be used by both individual scientists to run scientific workflows as well as communities of scientists through Web browser interfaces. It is a challenge to bring all of Airavata’s capabilities together in the single API layer that is our prerequisite for a 1.0 release. To support our diverse use cases, we have developed a rich data model and messaging format that we need to expose to client developers using many programming languages. We do not believe this is a good match for REST style services. In this presentation, we present our use and evaluation of Apache Thrift as an interface and data model definition tool, its use internally in Airavata, and its use to deliver and distribute client development kits.
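As a rough illustration of why a typed interface definition appeals over ad-hoc REST payloads, the sketch below defines one typed record and compares a length-prefixed binary framing with its JSON rendering. The `Experiment` type and the framing are invented for this example; this is not Thrift's real compact protocol:

```python
import json
import struct
from dataclasses import dataclass

@dataclass
class Experiment:
    """One record of the kind a Thrift-style IDL would define once and
    generate client code for in many languages (invented for this sketch)."""
    id: int
    name: str

    def to_json(self):
        return json.dumps({"id": self.id, "name": self.name})

    def to_binary(self):
        # Length-prefixed binary framing: 8-byte id, 4-byte length, raw bytes.
        # This is *not* Thrift's real wire format, just the flavor of it.
        name_bytes = self.name.encode("utf-8")
        return struct.pack(f">qI{len(name_bytes)}s", self.id, len(name_bytes), name_bytes)

e = Experiment(id=42, name="wrf-run")
print(len(e.to_binary()), len(e.to_json().encode()))  # → 19 29
```

With a real IDL the schema lives in one file and the serializers and client stubs for every language are generated, which is the property the abstract argues REST lacks for a rich data model.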
The goal of this talk is to highlight open source opportunities for students, especially the opportunity to earn $5000 through the Google Summer of Code program. I will discuss tips on how to engage with open source communities and the benefits of contributing. I will provide motivating examples of how students can gain significant experience by contributing to challenging distributed-systems problems while impacting scientific research. I will focus on a concrete example, the Apache Airavata software suite for Web-based science gateways. I will list some example GSoC topics of interest and provide some recipes for getting accepted and navigating the program successfully.
The success of the Google Summer of Code program within the ASF demonstrates the interest in, and potential impact of, Apache projects grooming the next generation of software developers. Many projects have benefited from GSoC contributions, and some have succeeded in retaining the students as active PMC members. While GSoC is a good vehicle for potential student committers, we could extend the impact and broaden the reach. Beyond GSoC, there is currently no compelling mechanism for interested students to venture into the 150+ Apache project issue trackers and find an interesting topic to contribute to. We propose to build on the GSoC success and create a common forum for PMCs to propose topics and volunteer to mentor well-defined and suitably scoped student research projects. These student projects create a win-win situation for both the Apache projects and the students.
As an exemplar, we will discuss the Apache Airavata project's engagement with student academic projects. The globally distributed locations of the Apache Airavata PMC members have resulted in the successful launch of many student research projects in the US, India and Sri Lanka. Brief descriptions of the projects, their inclusion within existing university curricula, and their successes and challenges will be presented. We will then elaborate on how these experiences can be generalized and modeled as a systematic mechanism to catalyze student research projects. While particularly sharing experiences from developing countries, we discuss how these ideas are globally applicable in exposing students to the ASF model, enabling them to discuss their ideas and work with leading researchers and open source developers around the world, motivating them through virtual hackathons, and eventually creating potential pathways to Apache committership.
The proposed effort raises many open questions. However, initiated through this talk, we would like to hear feedback from Apache projects and the user community and take the idea further with the Apache Community Development PMC.
This talk introduces the Apache Airavata software for executing and managing computational jobs on distributed computing resources, including local clusters, supercomputers, national grids, and academic and commercial clouds. Airavata is currently used to build Web-based science gateways and to assist in composing, managing, executing, and monitoring large-scale applications and workflows composed of these services.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
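The write path such an integration relies on can be illustrated with InfluxDB's line protocol, which encodes each point as `measurement,tags fields timestamp`. The sketch below builds that string by hand; the `jmeter` measurement name and tags are assumptions for the example, and numeric fields are emitted as bare numbers (InfluxDB would treat them as floats; its integer syntax adds an `i` suffix):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one point as InfluxDB line protocol:
    measurement,tag=val,... field=val,... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# One JMeter sample rendered as a point Grafana could later chart.
line = to_line_protocol(
    "jmeter", {"label": "login"}, {"latency_ms": 42}, 1700000000000000000
)
print(line)  # → jmeter,label=login latency_ms=42 1700000000000000000
```

JMeter's Backend Listener emits points of this shape for InfluxDB; Grafana then queries the database to render the dashboards demonstrated in the webinar.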
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
Ogce Workflow Suite Tg09
1. OGCE WorkflowSuite for Science Gateways
Suresh Marru, Raminder Singh, Chathura Herath & Marlon Pierce
Indiana University
2. OGCE
[Diagram: gateways (LEAD, GridChem, …) and the TeraGrid User Portal feed into the TG GIG, which generalizes, hardens, builds and tests software for the gateways/e-science community.]
3. Requirements from gateways
• Gateways demand scientific workflow systems to be:
– Flexible
– Dynamic
– Interactive
– Technology Adaptive
– Interoperable with emerging computational resources and their job management interfaces
4. OGCE Workflow Suite
• Generic Service Toolkit
– Tool to wrap command-line applications as web services
– Handles file staging & job submissions
– Extensible runtime for security, resource brokering & urgent computing
– Generic Factory service for on-demand creation of application services
• XRegistry
– Information repository for the OGCE workflow suite
– Register, search, retrieve & share XML documents
– User & hierarchical group based authorization
• XBaya
– GUI based tool to compose & monitor workflows
– Extensible support for compiler plug-ins like BPEL & Jython
– Dynamic workflow execution support to start, pause, resume, rewind workflow executions
OGCE Workflow Tutorial
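As a hedged sketch of the XRegistry idea described above (register, search and retrieve documents under user- and group-based authorization), the toy class below keeps everything in memory; it is invented for illustration and is not the XRegistry API:

```python
class ToyRegistry:
    """In-memory document registry with user/group authorization,
    loosely modeled on the XRegistry bullet points above."""

    def __init__(self):
        self.documents = {}  # name -> (owner, allowed groups, xml text)
        self.groups = {}     # group name -> set of members

    def add_member(self, group, user):
        self.groups.setdefault(group, set()).add(user)

    def register(self, name, owner, allowed_groups, xml):
        self.documents[name] = (owner, set(allowed_groups), xml)

    def retrieve(self, name, user):
        owner, allowed, xml = self.documents[name]
        if user == owner or any(user in self.groups.get(g, set()) for g in allowed):
            return xml
        raise PermissionError(f"{user} may not read {name}")

    def search(self, keyword, user):
        """Names matching the keyword that the user is allowed to read."""
        return [n for n in self.documents if keyword in n and self._readable(n, user)]

    def _readable(self, name, user):
        try:
            self.retrieve(name, user)
            return True
        except PermissionError:
            return False

reg = ToyRegistry()
reg.add_member("lead", "alice")
reg.register("wrf-service.wsdl", "bob", ["lead"], "<wsdl/>")
print(reg.retrieve("wrf-service.wsdl", "alice"))  # → <wsdl/>
print(reg.search("wrf", "carol"))                 # → []
```

The point of the group check is that search results are filtered by what the caller may read, so two users querying the same keyword can see different documents.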
5. Features
• Security
– Authentication and authorization
– Secure invocations between services
– Support for gateway community accounts
– Support for multiple user accounts
• Reliability
– Retry job submissions and file staging
– Fault Tolerance and Recovery service
• Over-provisioning and migration
• Compatibility
– Taverna, Kepler and Triana
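The retry behavior listed under Reliability can be sketched as a small exponential-backoff wrapper. The `submit_with_retry` helper and the `flaky_submit` stub are invented for the example; the slides do not show the toolkit's actual retry logic:

```python
import time

def submit_with_retry(submit, attempts=3, base_delay=0.01):
    """Call a flaky submission function, retrying with exponential backoff."""
    for attempt in range(attempts):
        try:
            return submit()
        except RuntimeError:
            if attempt == attempts - 1:
                raise            # out of retries: surface the failure
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}

def flaky_submit():
    """Stub gatekeeper that fails twice, then accepts the job."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("gatekeeper busy")
    return "job-1234"

print(submit_with_retry(flaky_submit))  # → job-1234 (on the third attempt)
```

The same pattern applies to file staging: transient failures are absorbed, while a persistent failure is re-raised for a fault-tolerance service to handle.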
6. Application Services
• Workflows are built by composing web services
– Fortran applications are "wrapped" by an Application Factory, which generates a web service for the app
• Registers WSDL for the service with a registry
– Each service generates a stream of notifications that log the service actions back to the XMC Cat Metadata Catalog
[Diagram: the Application Factory creates an App Service instance, which runs the program and publishes events.]
7. Workflow Composition, Execution & Monitoring
XBaya enables users to construct, share, execute and monitor sequences of tasks executing on resources ranging from their local workstations to high-end compute resources.
8. Service Monitoring via Events
• The service output is a stream of events:
– I am running your request
– I have started to move your input files
– I have all the files
– I am running your application
– The application is finished
– I am moving the output to your file space
– I am done
• These are automatically generated by the service using a distributed event system (WS-Eventing / WS-Notification)
– Topic-based pub-sub system with a well-known "channel"
[Diagram: the Application Service Instance publishes numbered events to a Notification Channel; a listener subscribes to topic x and receives what the publisher sends on that topic.]
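The topic-based pub-sub channel sketched in the slide can be modeled in a few lines. This in-process toy stands in for WS-Eventing / WS-Notification; it is an illustration, not either protocol:

```python
class NotificationChannel:
    """Minimal topic-based publish/subscribe channel (in-process toy)."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of listener callbacks

    def subscribe(self, topic, listener):
        self.subscribers.setdefault(topic, []).append(listener)

    def publish(self, topic, event):
        for listener in self.subscribers.get(topic, []):
            listener(event)

received = []
channel = NotificationChannel()
channel.subscribe("workflow-42", received.append)    # e.g. the monitoring GUI
channel.publish("workflow-42", "I am running your application.")
channel.publish("workflow-42", "The application is finished.")
channel.publish("workflow-99", "unrelated event")    # no subscriber: dropped
print(len(received))  # → 2
```

Because publisher and listener share only a topic name, the service never needs to know who is monitoring it, which is what lets XBaya attach to a running workflow.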
11. XML Metadata Catalog (XMC Cat)
Taming Complex Scientific Metadata Schemas
"A significant need exists in many disciplines for long-term, distributed, and stable data and metadata repositories"
– NSF Blue-Ribbon Advisory Panel on Cyberinfrastructure
"Metadata is key to being able to share results"
– UK e-Science Core Programme Study
[Diagram: workflows publish notifications over a message bus; workflow configurations, inputs, intermediate results and outputs are recorded in the Metadata Catalog, which the Workflow Composer and a search portal query for workflow records.]
More Info: Scott Jensen
12. Applications
• LEAD
– Lower entry barrier to using weather analysis tools
– Improve detection, analysis & prediction of mesoscale weather
• Motif-Network
– Transformation of sequenced genomes to "domain-space"
• Cyber-Infrastructure Evaluation
– Performance evaluation of future supercomputer architectures
• ADAM
– Algorithms for feature extraction, data normalization and classification
• GridChem
– Molecular Chemistry Grid helping researchers run chemistry applications in a Grid environment
13. LEAD: A Weather Forecasting Workflow (1/2)
[Workflow diagram: terrain and surface data pass through the Terrain Preprocessor and WRF Static Preprocessor; NAM, RUC and GFS model data feed 3D Model Data Interpolators for initial and lateral boundary conditions; 88D (Level II) and NIDS (Level III) radar data are re-mapped and, together with satellite data, assimilated by ADAS; ARPS-to-WRF and WRF-to-ARPS interpolators connect the analysis to a WRF Ensemble Generator; outputs feed the ADAM data-mining step, which looks for storm signatures, as well as the ARPS plotting program and IDV visualization on user request. Static steps run once per forecast region; real-time steps repeat periodically for new data; visualization is triggered if a storm is detected.]
14. LEAD: A Weather Forecasting Workflow (2/2)
[Screenshot: WRF-Static running on Tungsten]
15. Motif-Network: Whole Genome Workflow
• Domain webs of large genomes
– Input list of amino acid sequences
– Identify all known domains
– Construct webs
[Diagram: ensemble-type processing with minimal network requirements (capacity-type computing) followed by parallel processing (capability-type computing).]
Jeff Tilson, RENCI
16. CI: Execute Sub-Workflow
• Input a campaign step filename
• Execute GAMESS per step specification
Jeff Tilson, RENCI
17. Example: "Optimal" Weather Prediction Using Dynamic Adaptivity
[Diagram: streaming observations feed data mining, which detects storms forming, steers instruments, refines the forecast grid, and triggers on-demand grid computing to run the forecast model.]