Presentation by Hidde Elzinga, lead software developer, at Deltares, at the webinar Towards a Deltares cloud computing service, during Delft Software Days - Edition 2021. Friday, 5 November 2021.
Presentation by Rob Brinkman, Senior Advisor Safe and Resilient Infrastructure, at Deltares, at the webinar Probabilistic Toolkit (PTK), during Delft Software Days - Edition 2021. Tuesday, 2 November 2021.
DSD-INT 2021 TVA and MongoDB Archive - Miller | Deltares
Presentation by Gabriel Miller (Tennessee Valley Authority), at the Delft-FEWS User Days (Day 1), during Delft Software Days - Edition 2021. Monday, 8 November 2021.
OSMC 2021 | Handling 250K flows per second with OpenNMS: a case study | NETWAYS
What does it take to go from no flow support, to handling huge volumes of heterogeneous flow data in a 100% open-source monitoring stack, in a real-world environment? Expect a brief refresher on flows, an overview of the customer environment, and discussion of the engineering challenges faced. A medium dive follows into the movement of flow data from ingest to query and display, the solution architecture as it exists today, and lessons learned and their application to the project roadmap.
This document summarizes the International Delft-FEWS User Days 2014 conference. It introduces several of the FEWS developers, the features they focused on in 2014, and challenges for 2015. These include improving integration with archives, models, and metadata standards. It also discusses the organization of FEWS development and support teams.
COOL WAYS TO GET STARTED
Join us for a live InfluxDB training to learn how to easily ingest data at scale in a matter of seconds, helping you build powerful time-series-based applications. Join our 45-minute demos with experts who will showcase key InfluxDB features and answer questions live from the audience.
After attending this training, attendees will be able to:
Use sample data sets to try out various visualization options
Utilize the available data ingestion methods to construct a data pipeline to InfluxDB
Leverage Notebooks to collaborate with team members
Gain best practices for InfluxDB, Telegraf and Flux
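On the ingestion side mentioned above, InfluxDB's write API accepts points in line protocol (`measurement,tag_set field_set timestamp`). A minimal sketch of formatting one point, with hypothetical measurement and tag names (not from the training itself):

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one point as InfluxDB line protocol:
    measurement,tag_set field_set timestamp (nanoseconds)."""
    tag_set = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_set = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_set} {field_set} {ts_ns}"

line = to_line_protocol("cpu", {"host": "server01"}, {"usage": 12.5}, 1636100000000000000)
# cpu,host=server01 usage=12.5 1636100000000000000
```

In practice a collector like Telegraf emits these lines for you; the point of the sketch is only to show what crosses the wire.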
Intro to open source observability with Grafana, Prometheus, Loki, and Tempo (... | Libby Schulze
This document provides an introduction to open source observability tools including Grafana, Prometheus, Loki, and Tempo. It summarizes each tool and how they work together. Prometheus is introduced as a time series database that collects metrics. Loki is described as a log aggregation system that handles logs at scale without high costs. Tempo is explained as a tracing system that allows tracing from logs, metrics, and between services. The document emphasizes that these tools can be run together to gain observability across an entire system from logs to metrics to traces.
InfluxEnterprise Architecture Patterns by Tim Hall & Sam Dillard | InfluxData
1. The document provides an overview of InfluxEnterprise, including its core open source functionality, high availability features, scalability, fine-grained authorization, support options, and on-premise or cloud deployment options.
2. It discusses signs that an organization may be ready for InfluxEnterprise, such as high CPU usage, issues with single node deployments, and needing improved data durability or throughput.
3. The document covers InfluxEnterprise cluster architecture including meta nodes, data nodes, replication patterns, ingestion and query rates for different replication configurations, and examples for mothership, durable data ingest, and integrating with ElasticSearch deployments.
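The relationship between replication configuration and ingest rate that the slides cover can be approximated with simple arithmetic: each point is persisted once per replica, so net cluster capacity scales with node count divided by replication factor. A rough illustration (the numbers are hypothetical, not InfluxEnterprise benchmarks):

```python
def effective_ingest(points_per_node_sec, data_nodes, replication_factor):
    # Each point is written replication_factor times across the cluster,
    # so raw aggregate capacity is divided by the replication factor.
    return points_per_node_sec * data_nodes // replication_factor

# 4 data nodes at 500k points/s each, RF=2 -> ~1M points/s net ingest
capacity = effective_ingest(500_000, 4, 2)
```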
stackconf 2020 | Ignite talk: Opensource in Advanced Research Computing, How ... | NETWAYS
Open source software is becoming a pillar of our everyday lives, powering our cell phones, our transportation systems, and the websites we visit. In this quick talk, we will look at how Canada’s Advanced Research Computing (“ARC”) organizations use open source software to deploy and operate some of the largest supercomputers and cloud deployments on Earth. We will briefly introduce the systems and dig deeper into the open source technologies that together make the magic happen!
Introduction to Stream Processing using Kafka Streams | Confluent
Matías Cascallares, Confluent, Customer Success Architect
Streams is one of the buzzwords of the moment! In this presentation, we will see how to implement stream processing with Kafka Streams, which considerations to keep in mind, and take a short tour of ksqlDB as a tool.
https://www.meetup.com/Mexico-Kafka/events/276717476/
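Kafka Streams itself is a Java library, but the core idea the talk covers — stateless transforms plus stateful aggregation over an unbounded stream — can be sketched in plain Python. This mirrors the canonical word-count topology, not the actual Kafka Streams API:

```python
from collections import Counter

def word_count(stream):
    """Consume an (unbounded) iterable of text records and yield the
    running count table after each record: a stateful aggregation."""
    counts = Counter()                 # the 'state store'
    for record in stream:              # flatMap: record -> words
        for word in record.lower().split():
            counts[word] += 1          # groupByKey().count()
        yield dict(counts)

events = ["hello streams", "hello kafka"]
final = None
for table in word_count(events):
    final = table
# final == {"hello": 2, "streams": 1, "kafka": 1}
```

The real library adds what this sketch omits: fault-tolerant state stores, repartitioning, and exactly-once processing.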
Watch this talk here: https://www.confluent.io/online-talks/apache-kafka-architecture-and-fundamentals-explained-on-demand
This session explains Apache Kafka’s internal design and architecture. Companies like LinkedIn are now sending more than 1 trillion messages per day to Apache Kafka. Learn about the underlying design in Kafka that leads to such high throughput.
This talk provides a comprehensive overview of Kafka architecture and internal functions, including:
-Topics, partitions and segments
-The commit log and streams
-Brokers and broker replication
-Producer basics
-Consumers, consumer groups and offsets
This session is part 2 of 4 in our Fundamentals for Apache Kafka series.
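The topics-and-partitions material above can be made concrete: Kafka's default partitioner routes a keyed record by hashing its key modulo the partition count, which is why all records with the same key land in the same partition and keep their order. A sketch of the idea (Kafka actually uses murmur2; the CRC here is a stand-in):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Stand-in for Kafka's murmur2-based default partitioner:
    # the same key always maps to the same partition, so per-key
    # ordering is preserved within that partition.
    return zlib.crc32(key) % num_partitions

p1 = partition_for(b"user-42", 6)
p2 = partition_for(b"user-42", 6)
# p1 == p2 for identical keys
```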
Setting Up InfluxDB for IoT by David G Simmons | InfluxData
David will be walking you through a typical data architecture for an IoT device. Then, it will be a hands-on workshop to gather data from the device, display it on a dashboard and trigger alerts based on thresholds that you set. View this InfluxDays NYC 2019 presentation to learn about setting up InfluxDB for IoT.
OpenNebulaconf2017EU: OpenNebula 5.4 and Beyond by Tino Vázquez and Ruben S. ... | OpenNebula Project
In this talk, Rubén and Tino will lay out the novelties (not all of them, there are many!) present in 5.4, ranging from new core functionality to the big changes in vCenter. The roadmap for 5.6 and future versions will also be laid out, as far as it is consolidated (it won't be closed yet, but nearly so).
It would also be the perfect session for feature requests, so don't miss it!
YouTube: https://youtu.be/Czzm2EimayY
Database ingest with Apache NiFi and MiNiFi | Lucian Neghina
This document discusses data ingestion using Apache NiFi and MiNiFi. Apache NiFi is a dataflow system that allows for reliable and secure transfer of data between systems. It is used for ingesting data from sources into analytic platforms and preparing data through actions like format conversion and parsing. Apache MiNiFi is a subproject that deploys smaller agents to collect data at the edge and push it to NiFi in the data center. The document provides an overview of NiFi's capabilities like guaranteed delivery, buffering, security, and clustering support.
MSF: Sync your Data On-Premises And To The Cloud - dotNetwork Gathering, Oct ... | sameh samir
The document discusses Microsoft Sync Framework, which allows for synchronizing data both on-premises and to the cloud. It describes the framework components, responsibilities, participants in synchronization, and application scenarios for offline and collaboration synchronization. It explains how synchronization works, including change tracking, conflict resolution, and key concepts. It demonstrates synchronization in two-tier and multi-tier architectures and discusses choosing primary keys and enabling tracing.
Phil Day [Configured Things] | Policy-Driven Real-Time Data Filtering from Io... | InfluxData
Phil Day presented on Configured Things' data gateway for filtering real-time IoT sensor data using declarative policies. The gateway ingests data through MQTT or HTTPS and applies user-defined policies in Flux to control data visibility, quality, and filtering. Policies can dynamically define data scopes, quality metrics like aggregation, and filters. The policies are mapped to Flux queries to process the data in InfluxDB. This allows different stakeholders to securely access customized data streams from sensors in applications like smart cities.
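The gateway pattern described here — declarative policies compiled into queries — can be sketched with a policy dict applied to sensor readings. The field names and policy shape below are hypothetical; the real system expresses its policies in Flux:

```python
def apply_policy(readings, policy):
    """Filter a list of sensor readings according to a declarative
    policy: which sensor types this stakeholder may see, and a
    minimum-value quality threshold."""
    return [
        r for r in readings
        if r["type"] in policy["allowed_types"]
        and r["value"] >= policy.get("min_value", float("-inf"))
    ]

readings = [
    {"type": "air_quality", "value": 42},
    {"type": "camera", "value": 1},       # type not visible to this stakeholder
    {"type": "air_quality", "value": 3},  # below the quality threshold
]
policy = {"allowed_types": {"air_quality"}, "min_value": 10}
visible = apply_policy(readings, policy)
# visible == [{"type": "air_quality", "value": 42}]
```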
How to Store and Visualize CAN Bus Telematic Data with InfluxDB Cloud and Gra... | InfluxData
The document discusses how CSS Electronics stores and visualizes telematic data collected from CAN bus networks using InfluxDB and Grafana. Specifically, it summarizes:
1) CSS Electronics develops CAN bus data loggers and sensors to collect data from vehicles and industrial equipment. They implemented InfluxDB to store the decoded telematic data and Grafana to visualize it in customizable dashboards.
2) The process involves logging raw CAN bus data, decoding it using Python scripts, and writing it to InfluxDB. Grafana then pulls the data from InfluxDB to generate intuitive dashboards without coding.
3) Examples are shown of heavy vehicle, automotive, maritime, and sensor fusion dashboards. The solution is open source.
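The decode step in the pipeline above typically applies a DBC-defined scale and offset to raw bytes from a CAN frame. A minimal Python sketch of extracting one signal — the byte positions, scale, and signal name here are made up for illustration, not taken from a real DBC file:

```python
def decode_signal(frame: bytes, start_byte: int, length: int,
                  scale: float, offset: float) -> float:
    # Extract a little-endian unsigned raw value from the frame payload,
    # then apply the DBC-style linear conversion: physical = raw*scale + offset.
    raw = int.from_bytes(frame[start_byte:start_byte + length], "little")
    return raw * scale + offset

# Hypothetical engine-speed signal: 2 bytes at offset 2, 0.25 rpm/bit
frame = bytes([0x00, 0x00, 0x10, 0x27, 0x00, 0x00, 0x00, 0x00])
rpm = decode_signal(frame, 2, 2, 0.25, 0.0)
# 0x2710 = 10000 raw -> 2500.0 rpm
```

Real decoders handle signed signals, bit-level start positions, and big-endian (Motorola) byte order as well.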
InfluxData builds a time series platform primarily deployed for DevOps and IoT monitoring. This talk presents several lessons learned while scaling the platform across a large number of deployments—from single server open source instances to highly available high-throughput clusters.
This talk presents a number of failure conditions that informed subsequent design choices. Ryan Betts (Director of Engineering at InfluxData) will discuss designing backpressure in an AP system with tens of thousands of resource-limited writers; trade-offs between monolithic and service-oriented database implementations; and lessons learned implementing multiple query processing systems.
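The backpressure problem described above — many resource-limited writers pushing into a system that must shed or slow load rather than buffer without bound — reduces to a bounded buffer that rejects when full. A toy sketch of the reject-and-retry flavor, not InfluxData's actual implementation:

```python
from collections import deque

class BoundedIngestQueue:
    """Accept writes up to a fixed capacity; signal backpressure
    (return False) instead of buffering unboundedly when full."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buf = deque()

    def offer(self, point) -> bool:
        if len(self.buf) >= self.capacity:
            return False          # caller should back off and retry
        self.buf.append(point)
        return True

    def drain(self, n: int):
        return [self.buf.popleft() for _ in range(min(n, len(self.buf)))]

q = BoundedIngestQueue(capacity=2)
accepted = [q.offer(i) for i in range(3)]
# accepted == [True, True, False]: the third writer sees backpressure
```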
Introducing Confluent labs Parallel Consumer client | Anthony Stubbes, Confluent | HostedbyConfluent
Consuming messages in parallel is what Apache Kafka® is all about, so you may well wonder, why would we want anything else? It turns out that, in practice, there are a number of situations where Kafka’s partition-level parallelism gets in the way of optimal design.
This session will go over some of these types of situations that can benefit from parallel message processing within a single application instance (aka slow consumers or competing consumers), and then introduce the new Parallel Consumer labs project from Confluent, which can improve functionality and massively improve performance in such situations.
It will cover the following:
- Different ordering modes of the client
- Relative performance improvements
- Usage with other components like Kafka Streams
- An introduction to the internal architecture of the project
- How it can achieve all this in a reassignment friendly manner
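The key-ordering mode in the list above can be illustrated without Kafka at all: records from one partition are fanned out by key, so distinct keys may be processed concurrently while each key's records stay in order. A single-threaded sketch of just the grouping step (the real client schedules these per-key queues onto a thread pool):

```python
from collections import defaultdict

def group_by_key(records):
    """Partition a stream of (key, value) records into per-key queues.
    Each queue preserves arrival order; different queues can then be
    handed to different workers and processed concurrently."""
    queues = defaultdict(list)
    for key, value in records:
        queues[key].append(value)
    return dict(queues)

records = [("a", 1), ("b", 2), ("a", 3), ("b", 4)]
queues = group_by_key(records)
# queues == {"a": [1, 3], "b": [2, 4]}: per-key order intact
```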
PMIx provides standardized APIs to enable portability of applications across systems and support interactions between applications and resource managers. The document discusses integrating PMIx with tiered storage systems to allow applications to query storage resources, pre-stage files, and coordinate caching across different storage tiers. This integration could improve launch times and support dynamic data movement in response to faults or changing workloads. The document outlines initial plans to leverage existing PMIx functions and consider new APIs to enable these capabilities.
[WSO2Con USA 2018] Deploying Applications in K8S and Docker | WSO2
In this slide deck, Lakmal discusses best practices for deploying applications in Docker and Kubernetes while discussing Docker and Kubernetes concepts.
Apache Kafka has evolved from an enterprise messaging system into a fully distributed streaming data platform (Kafka Core + Kafka Connect + Kafka Streams) for building streaming data pipelines and streaming data applications.
This talk, which I gave at the Chicago Java Users Group (CJUG) on June 8th, 2017, focuses mainly on Kafka Streams, a lightweight open source Java library for building stream processing applications on top of Kafka, using Kafka topics as input/output.
You will learn more about the following:
1. Apache Kafka: a Streaming Data Platform
2. Overview of Kafka Streams: Before Kafka Streams? What is Kafka Streams? Why Kafka Streams? What are Kafka Streams key concepts? Kafka Streams APIs and code examples?
3. Writing, deploying and running your first Kafka Streams application
4. Code and Demo of an end-to-end Kafka-based Streaming Data Application
5. Where to go from here?
Jess Ingrassellino [InfluxData] | How to Get Data Into InfluxDB | InfluxDays ... | InfluxData
There are many ways to collect and store data in InfluxDB. Learn more about Telegraf, Client SDKs, Direct API, CLI, UI Uploader and Flux. InfluxDB's CLI can handle CSVs and Line Protocol. Discover how to use Flux to pull data into InfluxDB.
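Of the ingest paths listed, the CSV route can be sketched end to end with the standard library: parse rows and emit line protocol for the write API. The column names below are hypothetical:

```python
import csv
import io

def csv_to_line_protocol(text, measurement, tag_cols, field_col, time_col):
    """Turn CSV rows into InfluxDB line protocol lines:
    measurement,tag_set field_set timestamp."""
    out = []
    for row in csv.DictReader(io.StringIO(text)):
        tags = ",".join(f"{c}={row[c]}" for c in tag_cols)
        out.append(f"{measurement},{tags} {field_col}={row[field_col]} {row[time_col]}")
    return out

text = "host,usage,ts\nserver01,12.5,1636100000000000000\n"
lines = csv_to_line_protocol(text, "cpu", ["host"], "usage", "ts")
# lines == ["cpu,host=server01 usage=12.5 1636100000000000000"]
```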
Apache Kafka Fundamentals for Architects, Admins and Developers | Confluent
This document summarizes a presentation about Apache Kafka. It introduces Apache Kafka as a modern, distributed platform for data streams made up of distributed, immutable, append-only commit logs. It describes Kafka's scalability similar to a filesystem and guarantees similar to a database, with the ability to rewind and replay data. The document discusses Kafka topics and partitions, partition leadership and replication, and provides resources for further information.
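The "rewind and replay" ability mentioned above falls out of the commit-log structure itself: an append-only sequence addressed by offset. A minimal sketch of the idea, not Kafka's actual storage engine:

```python
class CommitLog:
    """Append-only log: records are immutable once written, and any
    consumer can re-read from an arbitrary offset (replay)."""
    def __init__(self):
        self._records = []

    def append(self, record) -> int:
        self._records.append(record)
        return len(self._records) - 1     # offset of the new record

    def read_from(self, offset: int):
        return self._records[offset:]     # replay everything since offset

log = CommitLog()
for r in ["a", "b", "c"]:
    log.append(r)
# log.read_from(1) == ["b", "c"]: rewind to offset 1 and replay
```

Because reads are just offset lookups, multiple consumers at different offsets share one log without coordinating — the property Kafka builds consumer groups on.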
Spring Boot+Kafka: the New Enterprise Platform | VMware Tanzu
This document discusses how Spring Boot and Kafka can form the basis of a new enterprise application platform focused on continuous delivery, event-driven architectures, and streaming data. It provides examples of companies that have successfully adopted this approach, such as Netflix transitioning to Spring Boot and a banking brand building a new core banking system using Spring Streams and Kafka. The document advocates an "event-first" and microservices-oriented mindset enabled by a streaming data platform and suggests that Spring Boot, Kafka, and related technologies provide a turnkey solution for implementing this new application development approach at large enterprises.
Javier Lopez_Mihail Vieru - Flink in Zalando's World of Microservices - Flink... | Flink Forward
http://flink-forward.org/kb_sessions/flink-in-zalandos-world-of-microservices/
In this talk we present Zalando’s microservices architecture, introduce Saiki – our next generation data integration and distribution platform on AWS and show how we employ stream processing with Apache Flink for near-real time business intelligence.
Zalando is one of the largest online fashion retailers in Europe. In order to secure our future growth and remain competitive in this dynamic market, we are transitioning from a monolithic to a microservices architecture and from a hierarchical to an agile organization.
We first look at how business intelligence processes have worked inside Zalando in recent years and present our current approach – Saiki. It is a scalable, cloud-based data integration and distribution infrastructure that makes data from our many microservices readily available for analytical teams.
We no longer live in a world of static data sets, but are instead confronted with endless streams of events that constantly inform us about relevant happenings from all over the enterprise. Processing these event streams enables us to do near-real time business intelligence. In this context we evaluated Apache Flink vs. Apache Spark in order to choose the right stream processing framework. Given our requirements, we decided to use Flink as part of our technology stack, alongside Kafka and Elasticsearch.
With these technologies we are currently working on two use cases: a near real-time business process monitoring solution and streaming ETL.
Monitoring our business processes enables us to check if technically the Zalando platform works. It also helps us analyze data streams on the fly, e.g. order velocities, delivery velocities and to control service level agreements.
On the other hand, streaming ETL is used to offload work from our relational data warehouse, which struggles with increasingly high loads. It also reduces latency and improves platform scalability.
Finally, we have an outlook on our future use cases, e.g. near-real time sales and price monitoring. Another aspect to be addressed is to lower the entry barrier of stream processing for our colleagues coming from a relational database background.
This document summarizes Work Package 4 (WP4) which focuses on deploying a "smart" services toolkit. WP4 is led by MOSS and involves 9 tasks from July 2012 to January 2014. It aims to develop smart city services, platforms, and clients. Current achievements include establishing the 3D technology platform and prototypes for solar energy potential assessment and noise mapping services. Next steps involve further integrating core platform infrastructure, services, and clients.
EzBake is a secure application engine for DoDIIS that provides an integrated platform for building applications. It allows developers to focus on application logic while leveraging distributed frameworks for data ingestion, storage, querying and elastic deployment. The platform uses open source components like Frack for streaming data, common services for reusable logic, datasets for standardized data access, and OpenShift for deployment. Security is built into all components. EzBake aims to reduce costs by consolidating functionality and enabling data and resource sharing across applications. Future enhancements include distributed querying with Impala and integration with Apache Spark and Titan graph.
Cloud architecture, conception and computing PPT | NangVictorin
These platforms hide the complexity and details of the underlying infrastructure from users and applications by providing very simple graphical interface or API (Applications Programming Interface). Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
EzBake is a secure application engine for DoDIIS that provides an integrated platform for building applications. It allows developers to focus on application logic while leveraging distributed frameworks for data ingestion, storage, querying and elastic deployment. The platform uses open source components like Frack for streaming data, common services for reusable logic, datasets for standardized data access, and OpenShift for deployment. Security is built into all components. EzBake aims to reduce costs by consolidating functionality and enabling data and resource sharing across applications. Future enhancements include distributed querying with Impala and integration with Apache Spark and Titan graph.
Cloud architecture, conception and computing PPTNangVictorin
These platforms hide the complexity and details of the underlying infrastructure from users and applications by providing very simple graphical interface or API (Applications Programming Interface). Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Cloud computing refers to applications and services delivered over the Internet. It provides on-demand access to shared computing resources like servers, storage, databases and software that can be provisioned with minimal management effort. Major cloud service models include SaaS, PaaS and IaaS. The cloud computing market is growing rapidly with major players like Amazon, Microsoft and Google dominating different segments. Emerging services like STaaS, Daas and Caas are facilitating wider cloud adoption.
DSD-INT 2018 Delft-FEWS new features - Boot VerversDeltares
Presentation by Gerben Boot & Marcel Ververs (Deltares) at the Delft-FEWS International User Days 2018, during the Delft Software Days - Edition 2018. 7 & 8 November 2018, Delft.
Deploying a Modern Data Stack by Lasse Benninga - GoDataFest 2022GoDataDriven
Deploy your own modern data stack using open-source components and cloud-agnostic tooling such as Terraform. By leveraging open-source components you can deploy a state-of-the-art modern data platform in a day. What are the pros and cons of "build-it-yourself" in the data and analytics space?
Logic Apps: El Poder de la nueva Integración (por Félix Mondelo) Jorge Millán Cabrera
This document summarizes a presentation given by Félix Mondelo, an Integration Solution's Architect at Kabel, at the 2017 IBM Madrid conference. The presentation covered the evolution of integration from on-premises to cloud-based approaches using Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and serverless computing. It discussed how Logic Apps can be used for serverless integration and orchestration of workflows. It also provided an overview of how the Enterprise Integration Pack in Azure can be used to implement enterprise integration patterns like B2B using standards like AS2 and EDI. The presentation concluded with a demo of using Logic Apps along with an Integration Account to receive an XML order request
Big Data Quickstart Series 3: Perform Data IntegrationAlibaba Cloud
This document summarizes Derek Meng's presentation on data integration using Alibaba Cloud's MaxCompute big data platform. It discusses the general process of data integration including data acquisition, transformation, and governance. It provides an overview of MaxCompute basics, including its architecture, basic concepts such as projects and tables, and how to use MaxCompute's data channel and SQL. The document concludes with a brief introduction to DataWorks for data integration and a demo.
Cloud computing is a model for enabling ubiquitous and convenient access to shared pools of configurable computing resources via the internet. It provides hardware, software, storage and networking services that can be rapidly provisioned with minimal management effort. Key characteristics include rapid elasticity, broad network access, resource pooling, on-demand self-service and measured service. While cloud computing provides opportunities to lower costs and improve access to resources and collaboration, it also poses security, performance and connectivity reliance disadvantages that must be addressed.
Pranav Vashistha presented on cloud computing. He discussed basic concepts like traditional on-premise computing versus cloud computing. He covered first movers in cloud like Amazon, Google, and Microsoft. Pranav defined cloud computing and explained its components including clients, data centers, distributed servers. He described the three main cloud service models - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Pranav also covered types of cloud, benefits like scalability and cost savings, and applications like storage and databases.
Designing and Implementing a cloud-hosted SaaS for data movement and Sharing ...Haripds Shrestha
SlapOS (Simple Language for Accounting and Provisioning Operating System) aims to hide the complexity of software deployment on IT infrastructures from users. Through a software-as-a-service (SaaS) solution, users can automatically request and install data movement and sharing tools such as Stork and Bitdew without any intervention from a system administrator.
Here's all you want to know about cloud computing: why it is used, its advantages, its structure, and more. All common questions about cloud computing are addressed in this presentation. For a demo of such software in the accounting field, visit www.arcus-universe.com
1) Cloud computing involves sharing computing resources over the Internet rather than having local servers or devices. It allows users access to software, storage, databases, analytics and more without managing physical hardware.
2) The main benefits of cloud computing include lower costs by paying only for what is used, flexibility to quickly scale resources up or down as needed, global access to services, and increased productivity by eliminating local management of infrastructure.
3) The main types of cloud computing models are public clouds (owned by third parties), private clouds (for exclusive single organization use), hybrid clouds (combining public and private), and community clouds (shared by organizations with common interests).
Cloud computing provides utility computing resources and applications over the Internet. It has various deployment and service models including public, private, hybrid, infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Cloud computing offers advantages like lower costs, improved performance, unlimited storage, and device independence but also disadvantages such as requiring an Internet connection and potential security issues.
Maximizing Data Lake ROI with Data Virtualization: A Technical DemonstrationDenodo
Watch full webinar here: https://bit.ly/3ohtRqm
Companies with corporate data lakes also need a strategy for how to best integrate them with their overall data fabric. To take full advantage of a data lake, data architects must determine what data belongs in the Lake vs. other sources, how end users are going to find and connect to the data they need as well as the best way to leverage the processing power of the data lake. This webinar will provide you with a deep dive look at how the Denodo Platform for data virtualization enables companies to maximize their investment in their corporate data lake.
Watch on-demand this webinar to learn:
- How to create a logical data fabric with Denodo
- How to leverage a data lake for MPP acceleration and summary views
- How to leverage Presto with Denodo for file-based data lakes (e.g. S3, ADLS, HDFS)
Introduction to Cloud Computing CA03.pptxabcxyz1337
Cloud computing allows users to access computing resources like servers, storage, databases, networking, software, analytics and more over the Internet. It provides on-demand services that are scalable, available anywhere, and users only pay for what they use. There are different deployment models like public, private, hybrid and community clouds. The main service models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Cloud computing provides benefits like cost savings, flexibility and scalability to users, but also has disadvantages like reliance on internet and potential security and data loss issues if providers experience problems.
Lift Your Legacy UNIX Applications & Databases into the Cloud Fadi Semaan
Unlock efficiency and innovation while reducing costs. In this presentation we will address:
1) Legacy pain overview
2) Dell application modernization services
3) UNIX to Linux migration
4) Case studies
Presented by Rich Cronheim
Executive Director, Dell Application Modernization Services
DSD-INT 2023 Hydrology User Days - Intro - Day 3 - KroonDeltares
Presentation by Timo Kroon and Nadine Slootjes (Deltares, Netherlands) at the Hydrology Suite User Days (Day 3) - Groundwater modelling, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Thursday, 30 November 2023, Delft.
Presentation by Sabrina Couvin Rodriguez (Deltares, Netherlands) at the Climate Adaptation Symposium 2023, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Wednesday, 29 November 2023, Delft.
Presentation by Umit Taner (Deltares, Netherlands) at the Climate Adaptation Symposium 2023, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Wednesday, 29 November 2023, Delft.
Presentation by Daan Rooze (Deltares, Netherlands) at the Climate Adaptation Symposium 2023, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Wednesday, 29 November 2023, Delft.
DSD-INT 2023 Approaches for assessing multi-hazard risk - WardDeltares
Presentation by Philip Ward (Deltares and IVM VU Amsterdam) at the Climate Adaptation Symposium 2023, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Wednesday, 29 November 2023, Delft.
Presentation by Andrew Warren (Deltares, Netherlands) at the Climate Adaptation Symposium 2023, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Wednesday, 29 November 2023, Delft.
DSD-INT 2023 Global hydrological modelling to support worldwide water assessm...Deltares
Presentation by Marc Bierkens (Utrecht University and Deltares, Netherlands) at the Climate Adaptation Symposium 2023, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Wednesday, 29 November 2023, Delft.
DSD-INT 2023 Modelling implications - IPCC Working Group II - From AR6 to AR7...Deltares
Presentation by Bart van den Hurk (WGII Co-Chair, IPCC AR7, Deltares) at the Climate Adaptation Symposium 2023, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Wednesday, 29 November 2023, Delft.
DSD-INT 2023 Knowledge and tools for Climate Adaptation - JeukenDeltares
Presentation by Ad Jeuken (Deltares, Netherlands) at the Climate Adaptation Symposium 2023, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Wednesday, 29 November 2023, Delft.
DSD-INT 2023 Coupling RIBASIM to a MODFLOW groundwater model - BootsmaDeltares
Presentation by Huite Bootsma (Deltares, Netherlands) at the Hydrology Suite User Days (Day 3) - Groundwater modelling, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Thursday, 30 November 2023, Delft.
DSD-INT 2023 Create your own MODFLOW 6 sub-variant - MullerDeltares
Presentation by Mike Muller (hydrocomputing GmbH & Co. KG, Germany) at the Hydrology Suite User Days (Day 3) - Groundwater modelling, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Thursday, 30 November 2023, Delft.
DSD-INT 2023 Example of unstructured MODFLOW 6 modelling in California - RomeroDeltares
Presentation by Betsy Romero Verástegui (Deltares, Netherlands) at the Hydrology Suite User Days (Day 3) - Groundwater modelling, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Thursday, 30 November 2023, Delft.
DSD-INT 2023 Challenges and developments in groundwater modeling - BakkerDeltares
Presentation by Mark Bakker (Delft University of Technology, Netherlands) at the Hydrology Suite User Days (Day 3) - Groundwater modelling, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Thursday, 30 November 2023, Delft.
DSD-INT 2023 Demo new features iMOD Suite - van EngelenDeltares
Presentation by Joeri van Engelen (Deltares, Netherlands) at the Hydrology Suite User Days (Day 3) - Groundwater modelling, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Thursday, 30 November 2023, Delft.
DSD-INT 2023 iMOD and new developments - DavidsDeltares
Presentation by Tess Davids (Deltares, Netherlands) at the Hydrology Suite User Days (Day 3) - Groundwater modelling, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Thursday, 30 November 2023, Delft.
Presentation by Christian Langevin (U.S. Geological Survey (USGS), USA) at the Hydrology Suite User Days (Day 3) - Groundwater modelling, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Thursday, 30 November 2023, Delft.
DSD-INT 2023 Hydrology User Days - Presentations - Day 2Deltares
Presentation by several speakers at the Hydrology Suite User Days (Day 2) - wflow and HydroMT, during the Delft Software Days - Edition 2023 (DSD-INT 2023). Wednesday, 29 November 2023, Delft.
DSD-INT 2023 Needs related to user interfaces - SnippenDeltares
Presentation by Edwin Snippen (Deltares, Netherlands) at the Hydrology Suite User Days (Day 1) - Hydrology Suite introduction and River Basin Management software (RIBASIM), during the Delft Software Days - Edition 2023 (DSD-INT 2023). Tuesday, 28 November 2023, Delft.
DSD-INT 2023 Coupling RIBASIM to a MODFLOW groundwater model - BootsmaDeltares
Presentation by Huite Bootsma (Deltares, Netherlands) at the Hydrology Suite User Days (Day 1) - Hydrology Suite introduction and River Basin Management software (RIBASIM), during the Delft Software Days - Edition 2023 (DSD-INT 2023). Tuesday, 28 November 2023, Delft.
DSD-INT 2023 Parameterization of a RIBASIM model and the network lumping appr...Deltares
Presentation by Harm Nomden (SWECO, Netherlands) at the Hydrology Suite User Days (Day 1) - Hydrology Suite introduction and River Basin Management software (RIBASIM), during the Delft Software Days - Edition 2023 (DSD-INT 2023). Tuesday, 28 November 2023, Delft.
Zoom is a comprehensive platform designed to connect individuals and teams efficiently. With its user-friendly interface and powerful features, Zoom has become a go-to solution for virtual communication and collaboration. It offers a range of tools, including virtual meetings, team chat, VoIP phone systems, online whiteboards, and AI companions, to streamline workflows and enhance productivity.
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
Takashi Kobayashi and Hironori Washizaki, "SWEBOK Guide and Future of SE Education," First International Symposium on the Future of Software Engineering (FUSE), June 3-6, 2024, Okinawa, Japan
A Study of Variable-Role-based Feature Enrichment in Neural Models of CodeAftab Hussain
Understanding variable roles in code has been found to help students learn programming -- could variable roles also help deep neural models perform coding tasks? We conduct an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne Australia
Workshop - Innovating with Generative AI and Knowledge GraphsNeo4j
Go beyond the hype around AI and discover practical techniques for using AI responsibly across your organization's data. Explore how knowledge graphs can be used to increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships and LLMs to bring domain-specific context and improve reasoning.
Bring your laptop and we will guide you through setting up your own generative AI stack, with practical, coded examples to get you started in minutes.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Microservice Teams - How the cloud changes the way we workSven Peters
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
OpenMetadata Community Meeting - 5th June 2024OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed about the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
Most important new features of Oracle 23c for DBAs and developers. You can learn more from my YouTube channel video at https://youtu.be/XvL5WtaC20A
What is Augmented Reality Image Trackingpavan998932
Augmented Reality (AR) Image Tracking is a technology that enables AR applications to recognize and track images in the real world, overlaying digital content onto them. This enhances the user's interaction with their environment by providing additional information and interactive elements directly tied to physical images.
Artificia Intellicence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
Introducing Crescat - Event Management Software for Venues, Festivals and Eve...Crescat
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
2. Table of contents
• Introduction
• Overview
• Computing service
• Current developments
• How to use it / Demo
• Questions from us (Mentimeter)
• Questions
Delft Software Days - Edition 2021 (DSD-INT 2021)
3. Introduction
Why we started with cloud services
• We see a growing demand from our users for online services
• We want scalable solutions when it comes to large models, large numbers of model runs, and handling large datasets
• Offer a different way of running models using a Web API
• Make it easy to incorporate cloud services into our users' workflows
• Offer more automation in building models based on data
• Make it easy for us to deliver the latest versions of our software and to add new functionalities (modularity)
4. Overview
How do we want to make the cloud services available?
Deltares Marketplace: Software, Data, Compute, Model Training, Support, Events, Research Communities
5. Overview
• Single Sign-On
• Register for events/courses
• Links to communities
• Central point to get to Deltares services
Example portal menu: Sign on, Services and products, Courses and events, News, Profile, Communities
6. Computing service
• Generic computing service (supporting all kinds of models, scripts, etc.)
• Available as a Web API, so it can easily be integrated and used as a building block
• It supports multiple (internal) steps
• Modular (built up from separate (micro)services)
• Uses MyDeltares for access
9. Current developments
• 2020: We implemented a first working version (using Delft3D FM, HydroMT workflows)
• 2021: We added a payment module and are making a test version
• 2022: We aim to launch a first externally available version
10. How to use it (Delft3D FM workflow)
To be able to use it, we need the following:
• A valid MyDeltares login
• Credit available (to pay for used resources)
11. Starting a run
• Start by logging in with MyDeltares credentials
• Upload a zip file with the DIMR files (containing your Delft3D FM schematization)
• Start a run, providing:
− The workflow (process id) you would like to run (Delft3D FM in this case)
− Name of the run
− Description
− The location of the DIMR files (zip file)
− The length the model can run for (maximum time)
− Which credit to use (credit number)
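The run parameters listed above can be sketched as a small request body. This is a minimal Python sketch under stated assumptions: the field names, the `delft3d-fm` process id, and the upload path are all illustrative, not the actual Web API contract.

```python
import json

def build_run_request(process_id, name, description, dimr_zip_path,
                      max_runtime_hours, credit_number):
    """Assemble a JSON body for starting a run.

    All field names here are assumptions for illustration; consult the
    actual computing-service API documentation for the real contract.
    """
    return {
        "processId": process_id,               # workflow to run, e.g. Delft3D FM
        "name": name,                          # name of the run
        "description": description,
        "inputPath": dimr_zip_path,            # location of the uploaded DIMR zip
        "maxRuntimeHours": max_runtime_hours,  # maximum time the model may run
        "creditNumber": credit_number,         # which credit to charge
    }

body = build_run_request(
    process_id="delft3d-fm",
    name="demo-run",
    description="Test run of a Delft3D FM schematization",
    dimr_zip_path="uploads/model.zip",
    max_runtime_hours=4,
    credit_number="C-001",
)
print(json.dumps(body, indent=2))
```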
12. What happens in the background when the run is requested
• The maximum run costs are determined based on the maximum time
• A claim is added on the selected credit
• Computation starts
• After the computation finishes, the real costs are determined
• The costs are deducted from the claim
• The remainder of the claimed amount is freed
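The claim-and-settle flow described above is simple arithmetic. A minimal sketch, assuming an hourly rate and a flat per-hour cost model (both assumptions for illustration):

```python
def claim(credit_balance, max_hours, rate_per_hour):
    """Reserve the maximum possible cost on a credit before the run starts."""
    claimed = max_hours * rate_per_hour
    if claimed > credit_balance:
        raise ValueError("insufficient credit for the maximum run cost")
    return credit_balance - claimed, claimed

def settle(credit_balance, claimed, actual_hours, rate_per_hour):
    """After the run, deduct the real cost and free the remainder of the claim."""
    actual_cost = actual_hours * rate_per_hour
    refund = claimed - actual_cost
    return credit_balance + refund, actual_cost

# A run claimed for a 4-hour maximum finishes after 2.5 hours.
balance, claimed = claim(100.0, max_hours=4, rate_per_hour=10.0)
balance, cost = settle(balance, claimed, actual_hours=2.5, rate_per_hour=10.0)
print(balance, cost)  # 75.0 25.0
```

Only the actual cost (25.0) is deducted; the unused 15.0 of the 40.0 claim is returned to the credit balance.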
13. Retrieving results when a run is finished
• Using the run id (job id), query the run details
• In the retrieved details the output path is specified
• Download the results from the output path
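The retrieval steps above boil down to reading the output path from the run details once the run has finished. A hedged sketch, assuming a JSON response with `status` and `outputPath` keys (both assumed names, not the documented API shape):

```python
def output_path_from_details(details):
    """Extract the download location from the run details returned by the API.

    The 'status' and 'outputPath' keys are assumptions about the response
    shape; check the actual Web API documentation.
    """
    if details.get("status") != "finished":
        return None  # run still in progress; query again later
    return details["outputPath"]

details = {"jobId": "run-42", "status": "finished",
           "outputPath": "results/run-42/output.zip"}
print(output_path_from_details(details))  # results/run-42/output.zip
```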
14. Managing runs
• The computing service will provide an overview of the runs:
− Showing their status (running/finished)
− What input was used
− Where the output is (or will be)
− Allowing you to stop running models and delete old runs
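A run overview like the one above lends itself to simple filtering, for instance selecting old finished runs to delete. A sketch under assumptions: the run dictionaries with `jobId`, `status`, and `finishedAt` keys are a hypothetical shape, not the service's actual data model.

```python
from datetime import datetime, timedelta

def runs_to_clean_up(runs, older_than_days, now):
    """Select finished runs older than a threshold, as a manage-runs view might.

    The run dict shape ('jobId', 'status', 'finishedAt') is assumed here.
    """
    cutoff = now - timedelta(days=older_than_days)
    return [r["jobId"] for r in runs
            if r["status"] == "finished" and r["finishedAt"] < cutoff]

now = datetime(2021, 11, 5)
runs = [
    {"jobId": "run-1", "status": "finished", "finishedAt": datetime(2021, 9, 1)},
    {"jobId": "run-2", "status": "running",  "finishedAt": None},
    {"jobId": "run-3", "status": "finished", "finishedAt": datetime(2021, 11, 1)},
]
print(runs_to_clean_up(runs, older_than_days=30, now=now))  # ['run-1']
```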
15. Demo
− API
− Web frontend
− Desktop frontend
16. Questions
If you want to help us test the service or have feedback/suggestions, please let me know:
Hidde.Elzinga@Deltares.nl
We'd love to hear what you think!