Ellip Studio - A workspace for developing Cloud-ready Earth Observation Appl... - terradue
A JupyterLab Environment for developing Cloud-ready Earth Observation Applications compliant with the Open Geospatial Consortium's Best Practice for Earth Observation Application Package https://docs.ogc.org/bp/20-089r1.html
A performance analysis of OpenStack Cloud vs Real System on Hadoop Clusters - Kumari Surabhi
It presents a performance analysis of OpenStack Cloud versus commodity computers in big data environments. It concludes that data storage and analysis in a Hadoop cluster in the cloud are more flexible and more easily scalable than in a physical cluster, while clusters built on commodity computers are faster than cloud clusters.
Cloud Computing:
Cloud computing is the delivery of different services through the Internet. These resources include tools and applications like data storage, servers, databases, networking, and software.
Devoxx 2015 - Building the Internet of Things with Eclipse IoT - Benjamin Cabé
Eclipse is much more than an IDE. Repeat after me: "Eclipse is much more than just an IDE! Eclipse has a lot of cool projects that can get me started with the Internet of Things!". So whether or not you are using Eclipse as your IDE, this session will give you a crash course on the available technologies to build the Internet of Things on top of Java. You will learn how protocols like MQTT, CoAP or LwM2M and embedded frameworks like Kura help solve classical IoT issues, and you will get useful tips to move from "yay, I blinked an LED!" to more useful industrial IoT scenarios.
Workload Automation for Cloud Migration and Machine Learning Platform - Activeeon
Activeeon has developed two Innovative Solutions based on workflows for:
1. Workload Automation for Cloud Migration
2. Data Science and Machine Learning Platform
These slides are from my Ph.D. defense at the University of California, Santa Barbara, discussing how we contribute research tools to forward how science is performed with cloud systems.
OSDC 2016 | rkt and Kubernetes: What’s new with Container Runtimes and Orches... - NETWAYS
Application containers are changing some of the fundamentals of how Linux is used in the server environment. rkt is a daemon-free container runtime with a focus on security. rkt is also an implementation of the App Container (appc) runtime specification, which defines the concept of a pod: a grouping of multiple containerized applications in a single execution unit. Pods are also used as the abstraction within Kubernetes, and having rkt work natively with pods makes it uniquely suited as a Kubernetes container runtime engine. With different application container runtimes on Linux to choose from (including Docker, kurma and rkt) this session will cover the differences. It will also dive into use cases for rkt under Kubernetes.
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam... - NETWAYS
Kishore works with the engineering team building the open source product, with a future-focused cloud technical strategy, for “Megam – Cloud Automation Platform” (http://gomegam.com). In a prior incarnation, Kishore worked as an Architect on complex, high-availability system integration projects for airport systems. Kishore has extensive experience architecting large-scale build and packaging tools for the mainframe platform, integrated via thin clients and the Eclipse IDE.
OpenStack and Kubernetes - A match made for Telco Heaven - Trinath Somanchi
With the advent of containerization of Telco clouds for NFV- and SDN-based deployments, OpenStack with Kubernetes is a strong option for building a containerized Telco cloud. This involves "Kubernetes in OpenStack", "OpenStack in Kubernetes" and "Independent OpenStack and Kubernetes". With this complementary collaboration, within the stadium of OpenStack's Open Infrastructure, telecom giants are developing cloud-native solutions to fit next-generation networking deployments. In this presentation, we talk about containerization and its benefits, the OpenStack and Kubernetes matchmaking, and give a brief overview of the Airship and Kata Containers projects.
Introduction to containers, k8s, Microservices & Cloud Native - Terry Wang
Slides built to upskill and enable internal team and/or partners on foundational infra skills to work in a containerized world.
Topics covered
- Container / Containerization
- Docker
- k8s / container orchestration
- Microservices
- Service Mesh / Serverless
- Cloud Native (apps & infra)
- Relationship between Kubernetes and Runtime Fabric
Audiences: MuleSoft internal technical team, partners, Runtime Fabric users.
At Opendoor, we do a lot of big data processing and use Spark and Dask clusters for the computations. Our machine learning platform is written in Dask, and we are actively moving data ingestion pipelines and geo computations to PySpark. The biggest challenge is that jobs vary in memory and CPU needs, and the load is not evenly distributed over time, which causes our workers and clusters to be over-provisioned. In addition, we need to enable data scientists and engineers to run their code without having to upgrade the cluster for every request or deal with dependency hell.
To solve all of these problems, we introduce a lightweight integration across some popular tools like Kubernetes, Docker, Airflow and Spark. Using a combination of these tools, we are able to spin up on-demand Spark and Dask clusters for our computing jobs, bring down the cost using autoscaling and spot pricing, unify DAGs across many teams with different stacks on the single Airflow instance, and all of it at minimal cost.
Safer Commutes & Streaming Data | George Padavick, Ohio Department of Transpo... - HostedbyConfluent
The Ohio Department of Transportation has adopted Confluent as the event driven enabler of DriveOhio, a modern Intelligent Transportation System. DriveOhio digitally links sensors, cameras, speed monitoring equipment, and smart highway assets in real time, to dynamically adjust the surface road network to maximize the safety and efficiency for travelers. Over the past 24 months the team has increased the number and types of devices within the DriveOhio environment, while also working to see their vendors adopt Kafka to better participate in data sharing.
Revolutionary container based hybrid cloud solution for ML - Platform
Ness' data science platform, NextGenML, puts the entire machine learning process (modelling, execution and deployment) in the hands of data science teams.
The whole paradigm revolves around collaboration on AI/ML, implemented with full respect for best practices and a commitment to innovation.
Kubernetes (onPrem) + Docker, Azure Kubernetes Cluster (AKS), Nexus, Azure Container Registry(ACR), GlusterFS
Workflow
Argo->Kubeflow
DevOps
Helm, kSonnet, Kustomize, Azure DevOps
Code Management & CI/CD
Git, TeamCity, SonarQube, Jenkins
Security
MS Active Directory, Azure VPN, Dex (K8s) integrated with GitLab
Machine Learning
TensorFlow (model training, boarding, serving), Keras, Seldon
Storage (Azure)
Storage Gen1 & Gen2, Data Lake, File Storage
ETL (Azure)
Databricks, Spark on K8, Data Factory (ADF), HDInsight (Kafka and Spark), Service Bus (ASB)
Lambda functions & VMs, Cache for Redis
Monitoring and Logging
Grafana, Prometheus, Graylog
NoR Webinar 2024 - Introduction to GEP.pdf - terradue
The Geohazards Thematic Exploitation Platform is an open and collaborative platform designed for the analysis of earthquakes, volcanoes and landslides, using satellite earth observations. It supports users monitoring terrain deformation and hazard prone land surfaces, with dedicated algorithm hosting solutions, data analytics and processing services.
NoR Webinar 2024 - Introduction to Ellip.pdf - terradue
Ellip is a global collaborative platform prepared for the data-driven economy. It connects consumers and producers of resources, interacting on the platform to produce software items, service endpoints, data assets. Ellip is moreover a work environment leveraging earth observation standards and an “Open Cloud” strategy.
Ellip Studio implements the OGC Best Practice for Earth Observation Application Package, a set of recommendations for application design patterns, package encoding, container and data interfaces
We present Terradue's move to a Platform-centered business model, with the Ellip solutions providing a collaborative workplace for value adders to interact & co-create in building data processing applications and deploying processing Software-as-a-Service for their user communities.
After a short introduction on the API landscape within Cloud Computing services & its operational impacts for the NextGEOSS users, we’ll go through the EGI Federated Cloud journey in improving and harmonizing its federation and interoperability services. The EGI Cloud resources are supporting the NextGEOSS Pilot applications needs for compute and storage. These Pilot applications are empowered for Cloud Computing thanks to some NextGEOSS Platform services operated by the NextGEOSS partner Terradue. These Cloud Integration and Cloud Bursting services are based on Cloud APIs, and both deliver vendor-agnostic capabilities for the management and deployment at scale of a range of NextGEOSS EO data processing applications.
Recorded Webinar:
https://youtu.be/peqOhar0OcA
GEO Expert Advisory Group - ESA Thematic Exploitation Platforms - Geohazards - terradue
The second EAG (Expert Advisory Group) meeting was held on February 5th, 2019 in Geneva. Terradue as EAG member was invited to present on solutions supporting the GEO vision for Knowledge Hubs
NextGEOSS Cloud Computing needs managed by Terradue: key benefits of the new ... - terradue
NextGEOSS has a number of pilots running in production on EGI.eu cloud resources. Furthermore, the NextGEOSS core services (e.g. user management, DataHub) are also hosted in the EGI cloud.
Terradue presents the foreseen key benefits about using the new EGI Marketplace as part of the NextGEOSS project, emphasizing one of the pilots as an example use case.
EOSCpilot Mission:
- Facilitate access of researchers across all scientific disciplines to data
- Establish a governance and business model that sets the rules for the use of EOSC
- Create a cross-border and multi-disciplinary open innovation environment for research data, knowledge and services
- Establish global standards for interoperability for scientific data
Terradue's Lightning talk at the 2nd EOSC Stakeholders Forum in Vienna, November 22nd 2018
DI4R 2018 - Ellip: a collaborative workplace for EO Open Science - terradue
Digital Infrastructures for Research 2018
https://www.digitalinfrastructures.eu/
Topic "Business Models, Sustainability and Policies"
Session "Open Science: Skills and Credits"
ISCTE, Lisbon, Tuesday 9th October 2018
Geohazards Exploitation Platform (GEP) at EuroGEOSS Workshop 2018 - terradue
GEP provides large scale processing of Earth Observation data.
Designed in the context of the Geohazards Supersite initiative (GSNL) and the CEOS Disasters Working Group which address a Task of the Disaster Societal Benefit Area of the intergovernmental Group on Earth Observations (GEO).
A model for partnership and community building that is user driven. Started from the International Forum on Satellite EO and Geohazards organised by ESA and GEO in Santorini in 2012 (140+ participants from 20 countries, 70+ organisations incl. international organisations, public institutes, space agencies, universities & private sector).
What does a Platform mean nowadays?
▪ A lever of Web and Cloud technologies
▪ A business model for value co-creation
▪ A framework to bring innovation to new or larger communities
Building earth observation applications with NextGEOSS - webinar - terradue
Training taster for the NextGEOSS Workshop to be held in Geneva on September 11th, 2018.
A review of the NextGEOSS components and services available to partners for the integration of their applications on the NextGEOSS Platform.
Announcement: https://nextgeoss.eu/second-nextgeoss-training/
Application packaging and systematic processing in earth observation exploita... - terradue
An overview of Terradue's solutions supporting Earth Observations (EO) Exploitation Platforms across multiple domains.
Presentation done as part of the Open Geospatial Consortium (OGC) Technical Committee ad-hoc meeting for the setup of a new domain working group on EO Exploitation Platforms.
Advancing Earth Science with Elasticsearch at Terradue - terradue
Elastic{ON} User Conference 2017
March 7 - 9, 2017 | San Francisco, CA
https://www.elastic.co/elasticon/conf/2017/sf/agenda?sess=advancing-earth-science-with-elasticsearch-at-terradue
Terradue uses Elasticsearch in conjunction with the Elasticsearch.Net client to tackle scientific challenges in several domains, such as geohazards for rapid response monitoring of volcanoes or earthquakes, hydrology by mapping flood extent, and urban by detecting settlement footprint.
The NextGEOSS project, a European contribution to GEOSS (Global Earth Observation System of Systems), proposes to develop the next generation data hub for Earth Observations, where the users can connect to access data and deploy data-driven applications.
Engaging earth observation in the platform economy - terradue
With the MELODIES project, Terradue invested a lot in collaborations to understand where EO data can meet a workplace able to accelerate the time to market of environmental applications, as well as provide better support to Open Science practitioners. This is where Earth observation would meet the Platform economy, defined as the new oil since the advent of disrupting & successful Platform-based business models, most of them enabled by Web and Cloud technologies.
Processing Open Data using Terradue Cloud Platform - terradue
We gave this talk at the "Open Data Projects cluster meeting" organised by the European Commission on 07-08th September 2015, in Brussels.
It was part of the Session IV: Sustainability and business strategies.
Presentation at the European Grid Infrastructure (EGI) Conference 2015, Business Track "Examples of SMEs as consumers and providers".
EGI is focused on supporting SMEs and the full innovation chain between business and academia to create opportunities of economic impact through open data generated and the technical services both offered and required to support research and innovation.
Terradue Cloud Platform delivers Cloud bursting for Earth Science Applications and Services, illustrated in this presentation by many use cases and collaborations fostering the reuse of Earth Observation open data and open services.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
2. About Us
● Terradue is an ESA spin-off based in Rome
○ with a subsidiary in the UK (Oxford)
● Support to application builders in Earth sciences to use satellite data as their information source
○ Algorithms in different data programming models & languages
○ Producing Analysis Ready Data and Added Value Products
● Deploy & operate Earth observation data processing applications in multiple Clouds without lock-in
○ Processed near the EO data holdings
○ Fixed time period or continuous operations
3. EO Application Package
● An Earth Observation Application is a set of command-line tools with numeric, textual and EO data parameters, organized as a computational workflow.
● An Application Package uses an explicit language that describes the input and output interface of the computational workflow and the orchestration of its command-line tools.
● The Application Package guarantees the automation, scalability, reusability and portability of the Application, while remaining workflow-engine and vendor neutral.
4. EO Application Package
● The command-line tools (e.g. Python, shell script, C++) and their dependencies are containerized and registered in a container registry.
● The computational workflow input and output interfaces and the orchestration of its command-line tools are described with the Common Workflow Language (CWL).
The Common Workflow Language (CWL) is an open standard for describing analysis workflows and tools in a way that makes them portable and scalable across a variety of software and hardware environments, from workstations to cluster, cloud, and high performance computing environments.
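As a sketch of what such a package can look like, here is a hypothetical minimal CWL Application Package: a Workflow wrapping a single containerized command-line tool. The tool name, container image and parameters are illustrative, not taken from the deck.

```yaml
# Hypothetical minimal EO Application Package (CWL v1.2).
cwlVersion: v1.2
$graph:
- class: Workflow
  id: main
  label: Hypothetical NDVI application
  inputs:
    stac_item:
      type: string
      doc: Reference to the input EO product (STAC Item)
  outputs:
    result:
      type: Directory
      outputSource: step/results
  steps:
    step:
      run: '#ndvi'
      in:
        item: stac_item
      out: [results]
- class: CommandLineTool
  id: ndvi
  requirements:
    DockerRequirement:
      dockerPull: docker.io/example/ndvi:1.0   # hypothetical image
  baseCommand: ndvi-tool
  inputs:
    item:
      type: string
      inputBinding:
        prefix: --input-item
  outputs:
    results:
      type: Directory
      outputBinding:
        glob: .
```

A CWL runner such as cwltool would execute this workflow with the `stac_item` input bound from a parameters file or the command line.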
5. EO Application Package
● The computational workflow data interfaces use the SpatioTemporal Asset Catalog (STAC) to describe the EO data inputs and generated results.
Stage-in → App → Stage-out
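To illustrate the stage-out side, here is a minimal sketch in plain Python of the kind of STAC Item an application could emit to describe a generated product. All identifiers, paths and values are hypothetical, and a real application would typically use a library such as pystac rather than raw dicts.

```python
import json

# Minimal sketch of a STAC Item describing one generated product.
# Plain dicts are used here; all identifiers and asset paths are hypothetical.
def make_result_item(item_id, bbox, datetime_str, asset_href):
    """Build a minimal STAC Item for a single raster result."""
    west, south, east, north = bbox
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": item_id,
        "geometry": {  # polygon ring following the bounding box
            "type": "Polygon",
            "coordinates": [[
                [west, south], [east, south], [east, north],
                [west, north], [west, south],
            ]],
        },
        "bbox": bbox,
        "properties": {"datetime": datetime_str},
        "assets": {
            "data": {
                "href": asset_href,
                "type": "image/tiff; application=geotiff",
                "roles": ["data"],
            }
        },
        "links": [],
    }

item = make_result_item(
    "ndvi-result-001", [10.0, 42.0, 11.0, 43.0],
    "2024-01-01T00:00:00Z", "./ndvi.tif")
item_json = json.dumps(item, indent=2)
```

The stage-out step would write such an Item (plus a Catalog) next to the generated assets, so the platform can publish the results.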
6. EO Application Package
● The Platform takes the CWL application package and exposes an OGC API Processes processing service.
● The Platform provides the automation, scalability, reusability and portability by converting the OGC API Processes execution request into a CWL execution, using a runner and the computing resources of the selected provider.
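As an illustration of the entry point of this conversion, the following sketch assembles the JSON body a client might POST to the platform's OGC API Processes execution endpoint. The process inputs and URLs are hypothetical.

```python
import json

# Sketch of the execute request a client would POST to the platform's
# OGC API Processes endpoint (POST /processes/{processId}/execution).
# The input names and values below are hypothetical.
def build_execute_request(stac_item_url, aoi):
    """Assemble an OGC API Processes execute request body."""
    return {
        "inputs": {
            "stac_item": stac_item_url,  # staged in by the platform
            "aoi": aoi,                  # area of interest as a bbox string
        },
        "outputs": {
            # ask for the result by reference (a URL), not by value
            "result": {"transmissionMode": "reference"},
        },
        "response": "document",
    }

body = build_execute_request(
    "https://catalog.example/collections/s2/items/abc",  # hypothetical URL
    "10.0,42.0,11.0,43.0")
payload = json.dumps(body)
```

The platform then maps these inputs onto the CWL workflow's input parameters and dispatches the run to a CWL runner.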
8. Skills and tooling
Tooling
● A container engine: Docker or Podman
● A CWL runner: cwltool, calrissian (k8s)
● An IDE: VS Code or Theia/Coder (in the Cloud)
● An object storage (S3)
● Access to a container registry (e.g. docker.io, quay.io, GitLab, GitHub)
● Access to a Continuous Integration service (e.g. GitLab CI, GitHub Actions, Jenkins, etc.)
● Access to a Package Registry (e.g. GitLab, GitHub, Artifactory)
Skills
● YAML
● Containers (Dockerfiles, docker build, tags, etc.)
9. Hands-on on Binder
https://github.com/Terradue/ogc-eo-application-package-hands-on
Additional resources:
● CWL specification: https://www.commonwl.org/v1.2/
● OGC Best Practice for Earth Observation Application Package: https://docs.ogc.org/bp/20-089r1.html
● CWL for EO guide (work in progress): https://cwl-for-eo.github.io/guide/
● CWL runners:
○ https://github.com/common-workflow-language/cwltool
○ https://github.com/Duke-GCB/calrissian
● From zero to CWL on Kubernetes: https://github.com/Terradue/calrissian-session