An API for standardised access to Big Earth Observation data in a landscape of a growing number of EO cloud providers
16/11/2019, 11:40
With the opening of global archives of Earth Observation data streams from satellites, we have arrived at a richness of operationally available observations over the whole globe that has never existed before, starting with the Landsat series of satellites and continuing with the plethora of data now coming from Copernicus and its series of Sentinel satellites. This creates huge opportunities for research and business, since the temporal domain of these observations can be exploited in powerful ways, but it also poses challenges in terms of data management and processing capacity. As a consequence, a growing number of cloud services and customised solutions have been developed at various research centres, leading to processing workflows optimised for specific system architectures and back-end infrastructures. This limits the portability and reproducibility of workflows across back ends, for both science and business applications. (...)
IJCAR 2018 keynote: Industrial Data Access (Martin Giese)
Optique (http://optique-project.eu) was an EU FP7 project that ran from November 2012 to October 2016. The main objective was to test the idea of “Ontology Based Data Access” (OBDA) on real industrial applications. Concretely: to support the work of geologists and geophysicists at the oil & gas company Statoil, and the work of turbine engineers at Siemens AG. This line of work now continues in the nationally funded ‘Centre for Research-based Innovation’ SIRIUS (http://sirius-labs.no) at the University of Oslo, with participation from the Universities of Oxford and Trondheim, as well as a large number of participating companies.
The software produced by the project features elaborate user interfaces, and no ∀ or ∃ can be seen on the surface. Still, most of the functionality is controlled by an ontology, which is nothing more than a set of axioms in a particular description logic. As a consequence, a variety of reasoning tasks take place under the hood, all the way from query optimisation, via entity alignment, up to the user interface control code. This talk presents a selection of these problems, both solved and as-yet unsolved.
Though logic and reasoning are close to the hearts of many of the researchers involved, the success of the project also depended on other factors: interdisciplinary communication, usability considerations, and many pragmatic compromises, to name some. And sometimes, these would again lead to ‘nice’ research. The talk also covers some of these extra-logical aspects of the project.
EDINA is a national data center in the UK that delivers geospatial data and services using open standards and open source software. It provides access to collections like Digimap and OpenBoundaries through web mapping applications and data downloads. By combining open standards like OGC with open source software from OSGeo projects, EDINA builds robust, interoperable, and resilient systems while reducing costs and increasing flexibility. This hybrid approach provides flexible and innovative services to users while meeting the needs of funders.
This document provides an overview of ONOS (Open Network Operating System) including:
- What ONOS is and its architectural tenets of high availability, scalability, and modularity
- ONOS's distributed architecture with core subsystems and components running on multiple nodes
- The SDN-IP application which allows ONOS to communicate with external IP networks
- Guidelines for deploying SDN-IP including physical setup and basic workflow
- Using SDN-IP and ONOS for an SDX use case including route validation with RPKI
- A tutorial demonstrating how to set up an SDN-IP environment with Mininet and ONOS
Use Cases of the CPaaS.io project as presented at the first year review meeting in Tokyo on October 5, 2017.
Disclaimer:
This document has been produced in the context of the CPaaS.io project which is jointly funded by the European Commission (grant agreement n° 723076) and NICT from Japan (management number 18302). All information provided in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability. For the avoidance of all doubts, the European Commission and NICT have no liability in respect of this document, which is merely representing the view of the project consortium. This document is subject to change without notice.
RootedCON 2014 - Kicking around SCADA!
Slides of the SCADA security talk presented at RootedCON 2014 by Juan Vazquez (Rapid7) and Julian Vilas (independent security researcher): "Kicking around SCADA"
Stay up-to-date on the latest news, research and resources. This month's edition covers the Georgia Tech Open Hackathon, milestones in OpenACC development, upcoming Open Hackathons and Bootcamps, NVIDIA's developer program, and more!
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month’s highlights cover the on-demand sessions from the OpenACC Summit 2020, upcoming GPU Hackathons and Bootcamps, an OpenACC-to-FPGA framework, the NERSC GPU Hackathon, new resources and more!
This document discusses Oracle Reports & Dashboards and introduces analytics. It provides examples of project hierarchies and timelines. Specifically, it shows a sample project dealing with Sysaid migration from July 29th to August 31st. It also displays a sample EBS data divestiture project timeline from May 14, 2019 to January 22, 2020 with tasks broken down into inception, elaboration, execution and post go-live activities phases. Finally, it discusses how Oracle Application Express can be used for database-centric application development using a browser-based tool to develop desktop, mobile and cloud applications.
AnalogIST/ezPAARSE: Analysing Locally Gathered Logfiles to Determine Users’ Accesses to Subscribed e-Resources (LIBER Europe)
AnalogIST/ezPAARSE: Analysing Locally Gathered Logfiles to Determine Users’ Accesses to Subscribed e-Resources (Thomas Jouneau, Université de Lorraine, France). This presentation was one of the 10 most highly ranked at LIBER's Annual Conference 2014 in Riga, Latvia. Learn more: www.libereurope.eu
Hosting open data endpoints at IRCEL-CELINE serving air quality data from the... (Open Knowledge Belgium)
Presentation by Olav Peeters at the OpenDataDay event 'Towards Clean Air with Open Data'. The event took place at BeCentral in Brussels on Saturday 3 March 2018.
This document provides a monthly highlights summary of OpenACC:
- OpenACC is a programming model for parallel computing on CPUs and GPUs using compiler directives to add parallelism to existing serial code.
- OpenACC is seeing wide adoption across major HPC applications and allows performance portability between CPU and GPU.
- The document highlights recent optimizations, events, publications and resources around OpenACC programming.
Model-driven Telemetry: The Foundation of Big Data Analytics (Cisco Canada)
This document discusses model-driven telemetry. It begins by explaining the origins of telemetry, noting its use in applications like military, medical, and networking. It then discusses telemetry use cases like network health monitoring, troubleshooting, and capacity planning. Next, it covers challenges with traditional telemetry methods like SNMP and syslog being too slow, incomplete, and hard to operationalize. The document then introduces the concepts of streaming telemetry and model-driven telemetry as an improved approach, discussing how it is based on open standards like YANG data models, gRPC protocol, and protocol buffer encodings. It provides examples of configuring sensors, destinations, and subscriptions on Cisco networking devices.
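The sensor/destination/subscription model described above can be sketched as device configuration. The following is indicative IOS-XR-style CLI for dial-out model-driven telemetry; the group names, collector address, port, and sampling interval are placeholders, and the sensor path is one commonly used example (interface generic counters):

```
telemetry model-driven
 destination-group COLLECTORS
  address-family ipv4 192.0.2.10 port 57500
   encoding self-describing-gpb
   protocol grpc no-tls
 !
 sensor-group IF-COUNTERS
  sensor-path Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters
 !
 subscription SUB1
  sensor-group-id IF-COUNTERS sample-interval 30000
  destination-id COLLECTORS
```

The sensor group names what YANG-modelled data to stream, the destination group names where and how (gRPC with protobuf encoding), and the subscription ties the two together with a cadence.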
An overview of the Open Grid Forum's Open Cloud Computing Interface standards effort and the (non-OGF) CloudAudit ("A6") working group. Presented at CloudConnect on 17 March 2010.
Mark Hughes Annual Seminar Presentation on Open Source (Tracy Kent)
VuFind was chosen as the discovery system to integrate the catalogs of three different library management systems used by the academic libraries in South West Wales. It required overcoming challenges like hosting multiple instances, merging data from different sources and standards, designing a dual language interface, and developing drivers to connect to each library system. Lessons learned include that open source solutions can work well but require significant staff time and resources, and collaboration is key to success. Future plans include sustaining and mainstreaming the system, exploring additional shared services, and investigating other open source library systems like Evergreen.
The document discusses model-driven telemetry as an approach to network visibility and monitoring. It describes some of the challenges with traditional monitoring approaches like SNMP polling. Model-driven telemetry uses data models to push analytics-ready data from network devices to collectors. Key aspects covered include using YANG models to map native device data, encoding the data using protocols like gRPC and Google Protocol Buffers, and configuring subscriptions to stream telemetry data from sensors to destinations.
How to Share and Reuse Learning Resources: the ARIADNE Experience (Joris Klerkx)
This document discusses the ARIADNE experience in sharing and reusing learning resources. It describes the key technologies used in ARIADNE including metadata standards, harvesting protocols, federated search, and publishing interfaces. It provides examples of metadata validation, transformation, and harvesting workflows. ARIADNE services including repository software, validation tools, harvesting, and federated search are presented. Software quality attributes and performance results are evaluated. The conclusion discusses the open, standards-based architecture and its ability to integrate and manage learning objects across applications.
The document provides details about an OpenPOWER and AI workshop being held on June 18-19, 2018 at the Barcelona Supercomputing Center.
Day 1 will provide an introduction to AI and cover topics like Power9 and PowerAI features, large model support, and use case demonstrations. Day 2 will focus on deeper learning exercises and industry use cases using Power9 features like distributed deep learning.
The agenda lists out the schedule and topics to be covered each day, including welcome sessions, technical presentations, breaks and wrap-up discussions.
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month’s highlights cover the upcoming OpenACC Summit and GPU Bootcamp, a complete schedule of upcoming events, OpenACC and base language parallelism, FortranCon2020, VASP 6, the OmpSs-2@OpenACC version of the ZIPC application, new resources and more!
The document proposes designing an open IoT ecosystem to provide interoperability among existing and new IoT systems. Currently, developers must build all components of an IoT application from end to end. In the future, sensing and actuation systems will already exist. The open ecosystem would allow new systems to utilize existing components. The SWAMP project provides an example of an open IoT ecosystem for smart irrigation applications. Open source code, platforms, services, data, and knowledge are key enablers of such an ecosystem by allowing components and information to be shared.
Present and future of unified, portable, and efficient data processing with Apache Beam (DataWorks Summit)
The world of big data involves an ever-changing field of players. Much as SQL stands as a lingua franca for declarative data analysis, Apache Beam aims to provide a portable standard for expressing robust, out-of-order data processing pipelines in a variety of languages across a variety of platforms. In a way, Apache Beam is a glue that can connect the big data ecosystem together; it enables users to "run any data processing pipeline anywhere."
This talk will briefly cover the capabilities of the Beam model for data processing and discuss its architecture, including the portability model. We’ll focus on the present state of the community and the current status of the Beam ecosystem. We’ll cover the state of the art in data processing and discuss where Beam is going next, including completion of the portability framework and Streaming SQL. Finally, we’ll discuss areas of improvement and how anybody can join us on the path of creating the glue that interconnects the big data ecosystem.
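The Beam model sketched above (element-wise transforms plus grouping over immutable collections) can be illustrated with a toy word count in plain Python. This is not the Apache Beam SDK: `pmap` and `group_by_key` are illustrative stand-ins for the `Map` and `GroupByKey` primitives, showing the logical shape of a pipeline that any Beam runner would execute in a distributed fashion.

```python
from collections import defaultdict

def pmap(pcollection, fn):
    """Element-wise transform, analogous to Beam's Map."""
    return [fn(x) for x in pcollection]

def group_by_key(pcollection):
    """Group (key, value) pairs by key, analogous to Beam's GroupByKey."""
    groups = defaultdict(list)
    for key, value in pcollection:
        groups[key].append(value)
    return sorted(groups.items())

# A minimal word-count "pipeline": the same logical steps would run
# unchanged on different runners (Flink, Spark, Dataflow, ...).
lines = ["big data", "data processing", "big pipelines"]
words = [w for line in lines for w in line.split()]
pairs = pmap(words, lambda w: (w, 1))            # (word, 1) per occurrence
counts = pmap(group_by_key(pairs), lambda kv: (kv[0], sum(kv[1])))
print(counts)
```

The point of the real SDK is that such a pipeline is expressed once against the model and then handed to a portable runner, rather than being rewritten per engine.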
Speaker
Davor Bonaci, Apache Software Foundation; Simbly, V.P. of Apache Beam; Founder/CEO at Operiant
This document provides an overview of Oracle's core technology stack and evolution from mainframe to multi-tier architectures. It discusses Oracle database, middleware, and development products. Case studies on Amazon.com and GE Power Systems are presented showing migrations to multi-tier environments. Job roles that interact with Oracle technologies are defined, including administrators, developers, and end users. Product families and typical career paths for different roles are outlined.
The document discusses deployment scenarios for the ARCHIVER project, which aims to demonstrate long-term preservation and archiving services for scientific data. It outlines scenarios for high energy physics, life sciences, astronomy, and photon science. These scenarios involve data volumes ranging from terabytes to petabytes and retention periods ranging from less than 5 years to over 25 years. The document also provides information on data ingest rates, which range from gigabytes per second to tens of gigabytes per second. It concludes with a summary of next steps for the project.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems (University of Maribor)
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
At Splunk, we have decided to deprecate the home-brewed platform that powers DSP's (Data Stream Processor) connector framework in favor of a framework powered by Pulsar IO.
In this talk, I will go over our evaluation and decision process in choosing the Pulsar IO framework. I will also discuss how Splunk's DSP product leverages the Pulsar IO framework, especially the batch sources that were recently added to Pulsar IO. I will conclude by discussing the various improvements we at Splunk have contributed to the Pulsar Functions/IO framework to increase scalability and stability. In my final remarks, I will also discuss how we intend to leverage Pulsar IO/Functions further at Splunk in the future.
The complexity of agricultural droughts requires a consistent, reliable, and systematic method for monitoring and reporting. Amongst the various indices used to monitor this phenomenon, the soil moisture anomaly has been proven to be a more reliable predictor. However, the datasets required for computing this index are often large and computationally demanding. To address this challenge, we have developed SMODEX, a Python package that enables scalable, fast, and open-source standard-compliant computation and visualization of soil moisture anomalies.
SMODEX simplifies the computation and visualization of time series of soil moisture and soil moisture anomalies from high-dimensional climate datasets. It allows for quick and easy parallelization of the computation on daily, weekly, and monthly timescales. Additionally, SMODEX implements a straightforward workflow for applying the FAIR (Findable, Accessible, Interoperable, and Reusable) principles in producing and sharing outputs by leveraging the open source STAC API. The package is extensible, and the repository provides contribution guidelines, test suites, test coverage, and a use case for the South Tyrol region. In the future, additional agricultural drought indices and indicators will be included to serve an even larger community of researchers, policy makers, and individual users.
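The soil moisture anomaly described above is, in essence, a standardised deviation of an observation from its long-term climatology for the same period. A minimal sketch, assuming a simple z-score formulation; the function name and sample values are illustrative, not the SMODEX API:

```python
import statistics

def soil_moisture_anomaly(observation, climatology):
    """Z-score of an observation against a reference climatology:
    (observation - climatological mean) / climatological std."""
    mean = statistics.fmean(climatology)
    std = statistics.stdev(climatology)
    return (observation - mean) / std

# Ten reference years of soil moisture for one calendar week (volumetric %)
reference = [22.1, 24.3, 23.8, 21.9, 25.0, 23.2, 22.7, 24.1, 23.5, 22.9]
anomaly = soil_moisture_anomaly(18.4, reference)
print(round(anomaly, 2))  # strongly negative => much drier than usual
```

In a real workflow this calculation is applied per grid cell and per time step over large climate data cubes, which is where the parallelization SMODEX provides becomes necessary.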
The Open Hardware PowerPC Notebook, designed around GNU/Linux, will be shown at NOI Techpark. We presented its motherboard design here in 2018. We will give updates on the latest developments of the u-boot AMD video drivers, the redesign of the heat pipes, and the CE test certification process. We will present future availability milestones for this notebook and details on the GNU/Linux distributions and other OSes that can run on it.
Similar to SFScon19 - Alexander Jacob - openEO (20)
The complexity of agricultural droughts requires a consistent, reliable, and systematic method for monitoring and reporting. Amongst the various indices used to monitor this phenomenon, the soil moisture anomaly has been proven to be a more reliable predictor. However, the datasets required for computing this index are often large and computationally demanding. To address this challenge, we have developed SMODEX, a Python package that enables scalable, fast, and open-source standard-compliant computation and visualization of soil moisture anomalies.
SMODEX simplifies the computation and visualization of time series of soil moisture and soil moisture anomalies from high-dimensional climate datasets. It allows for quick and easy parallelization of the computation on daily, weekly, and monthly timescales. Additionally, SMODEX implements a straightforward workflow for automating the application of the FAIR (Findable, Accessible, Interoperable, and Reusable) principles in producing and sharing outputs by leveraging the open-source STAC API. The package is extensible, and its repository provides contribution guidelines, test suites, test coverage, and a use case for the South Tyrol region. In the future, additional agricultural drought indices and indicators will be included to serve an even larger community of researchers, policy makers, and individual users.
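As a minimal sketch of the kind of index described above, a standardized soil moisture anomaly can be computed as a z-score of one observation against a multi-year climatology for the same period of the year. The function and sample values below are illustrative only, not SMODEX's actual API:

```python
# Sketch: soil-moisture anomaly as a z-score against a multi-year climatology.
# Values and function names are illustrative; SMODEX's real implementation
# (and its parallelization over daily/weekly/monthly timescales) lives in the
# project repository.
from statistics import mean, stdev
from typing import Sequence

def soil_moisture_anomaly(value: float, climatology: Sequence[float]) -> float:
    """Standardized anomaly of one observation vs. its historical distribution
    (e.g. all values for the same week of year across the reference years)."""
    mu = mean(climatology)
    sigma = stdev(climatology)
    return (value - mu) / sigma

# Example: this week's soil moisture vs. the same week in the reference period
history = [0.30, 0.32, 0.28, 0.31, 0.29]  # illustrative volumetric values
today = 0.22
anomaly = soil_moisture_anomaly(today, history)  # negative => drier than usual
```

In practice the same computation is vectorized over whole datacubes rather than applied point by point.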
The Open Hardware PowerPC Notebook designed around GNU/Linux will be shown at NOI Techpark. We presented its motherboard design here in 2018. We will give updates on the latest developments for u-boot, the AMD video drivers, the re-design of the heat pipes, and the CE test certification process. We will give future availability milestones for this notebook and details on the GNU/Linux distributions and other OSes that can run on it.
RMBtec proposes using the Open Data Hub as the real-time data backbone for a start-up creating an aircraft tracking application. The Open Data Hub offers data providers visibility, infrastructure to publish data, documentation, analytics and support. It describes how aircraft position data captured by sensors could be preprocessed and sent to the Open Data Hub via MQTT, then streamed in real-time and stored in a data mart via transformers. The Open Data Hub provides an alternative to cloud services and has capabilities that could support publishing and consuming aircraft tracking data.
The transition from Web 2.0 to Web 3.0 has fueled the need for a secure and decentralized cloud storage solution for digital assets. Web 2.0 was characterized by centralized platforms where user data was under the control of companies. In contrast, Web 3.0 aims to empower individuals and foster a decentralized web that supports and benefits the Free Software and Open Data Communities.
Blockchain technologies facilitate seamless collaboration and interoperability among diverse stakeholders in the Free Software and Open Data communities. Developers can establish open and transparent ecosystems where data can be shared, verified, and integrated across multiple platforms.
Beez, with its own blockchain infrastructure, offers a secure and transparent platform for digital asset exchanges, bolstering transaction integrity and trust. By distributing data across a network of nodes, Beez ensures security and mitigates the risk of single points of failure. Users retain control over their data, safeguard their privacy, and can take advantage of the incentive mechanisms offered by blockchain networks.
During our presentation, we will explore the role of AI within Beez's ecosystem, facilitating accelerated data processing, correlation, and intelligent automation. AI unlocks valuable insights from blockchain data, and we will touch upon the use of Inductive Logic Programming (ILP) to enhance programming performance.
The integration of Blockchain and AI technologies holds great potential for advancing the safety and efficiency of the Open Data ecosystem. By combining decentralized data storage, trust-building mechanisms, and intelligent data processing, Beez is paving the way for a more secure, transparent, and user-centric digital landscape.
We are becoming more and more dependent on the Internet for our work, education, communication, personal relations and entertainment. Our digital devices conquered an unprecedented level of importance in our life.
However, we are facing a loss of control over our smartphones, tablets and other devices for internet access. It's time to dissolve monopolies and re-establish democratic control over the technology we most depend upon.
This talk will present the challenges end-users face in getting more control over their devices and how Free Software is key to consumer re-empowerment.
The talk will present real-life examples of policy demands against gatekeepers on digital markets, such as the struggle for Router Freedom in the last years and how Device Neutrality can serve as an important instrument for pushing forward end-user-oriented digital policies.
MOSH and MOAH are the abbreviations of two groups of chemical compounds found in mineral oils: “MOSH” stands for Mineral Oil Saturated Hydrocarbons, and “MOAH” for Mineral Oil Aromatic Hydrocarbons. Both are under in-depth European evaluation because they are food contaminants. According to the current state of scientific knowledge, there is no sufficient toxicological evidence to prove a health risk to humans from saturated mineral oil fractions (MOSH). Meanwhile, MOAH are suspected to be carcinogenic (especially PAH-like compounds with 3-7 ring systems), therefore their levels in food should be reduced according to the ALARA principle (as low as reasonably achievable). Gruppo FOS, with CNR (Consiglio Nazionale delle Ricerche), Santagata 1907 and Enginius, is researching a system to detect and trace their presence in virgin and extra virgin olive oils using open fingerprint methods, open hardware, and open-source blockchain and AI technologies.
Up-to date measurements of surface meteorological variables are essential to monitor weather conditions, their spatio-temporal variability and the potential effects on a wide range of sectors and applications. Moreover, when included in continuous records of long historical observations spanning several decades, they become essential for assessing long-term climate variability and change locally and on a regional level.
Automated pipelines capable of retrieving and processing near-real time meteorological data satisfy the primary prerequisites towards the development and advancement of effective and operational climate services.
With a public and operational near-real-time monitoring web platform in mind, we present automated pipelines to collect and process up-to-date daily temperature and precipitation records for Trentino-South Tyrol (Italy) and surrounding areas, and to derive their spatially interpolated fields at sub-km scale. Our pipelines are composed of multiple steps, including data download, sanity checks, reconstruction of missing daily records, integration into the historical archive, spatial interpolation, and publication onto online FAIR catalogues as (openEO) “datacubes”. The different APIs, data formats and structures across the various data sources, and the need to merge the data into harmonized meteorological layers, make this a typical case of a so-called Extract, Transform and Load (ETL) pipeline. To follow the principles of data reproducibility and Open Science, we embraced open-source automated workflow management through GitLab's Continuous Integration / Continuous Delivery (CI/CD) capabilities.
CI/CD workflows greatly help the management of the relatively complex graphs of tasks required for our climate application, ensuring seamless orchestration with thorough flow monitoring, application logs, transaction rollbacks, and exception handling in general. Native pipeline-oriented software development also fosters a clean separation of roles among the tasks and a more modular architecture. This effectively reduces barriers to collaborative development and paves the way for robust operational climate services for researchers and decision makers in the face of the changing climate.
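The pipeline steps described above (download, sanity checks, gap-filling, publication) can be sketched as a chain of small single-purpose tasks; in GitLab CI/CD each task would map naturally to one job. All names and the toy gap-filling strategy below are illustrative, not the project's actual code:

```python
# Sketch of an ETL pipeline as a chain of single-purpose tasks.
# Each function stands in for one pipeline stage; the data are placeholders.
from typing import Callable, Dict, List

def download(state: Dict) -> Dict:
    state["raw"] = [1.0, None, 3.0]  # stand-in for fetched daily records
    return state

def sanity_check(state: Dict) -> Dict:
    state["checked"] = list(state["raw"])  # e.g. range and format checks
    return state

def fill_missing(state: Dict) -> Dict:
    # toy gap-filling: replace missing values with the series mean
    vals = [v for v in state["checked"] if v is not None]
    m = sum(vals) / len(vals)
    state["filled"] = [m if v is None else v for v in state["checked"]]
    return state

def publish(state: Dict) -> Dict:
    state["published"] = True  # e.g. push to an online FAIR catalogue
    return state

STEPS: List[Callable[[Dict], Dict]] = [download, sanity_check, fill_missing, publish]

def run_pipeline() -> Dict:
    state: Dict = {}
    for step in STEPS:  # in CI/CD, each step corresponds to one job in a stage
        state = step(state)
    return state
```

Separating stages this way is what makes the rollback, logging and monitoring described above attachable per task rather than per monolith.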
The Open Science movement aims to increase the transparency, reproducibility and inclusiveness of academic research. One of its central goals is therefore to make research outputs broadly available, e.g., manuscripts (Open Access) or research data (Open Data). While software/code created in the course of scientific research is a key research artifact clearly distinct from the latter two, it has until recently not received the same attention as manuscripts or data, although it follows its own set of paradigms.
In this talk I will present an overview of how the core concepts of Free Software and the FAIR (Findable, Accessible, Interoperable, Reusable) Principles intersect, what this means for managing code as a research output, and recent initiatives on the European level that will provide support for these issues.
Software freedom can be defined in many ways, but in legal terms it is squarely defined by a set of approved FSF and OSI software licenses. Yet everyone realizes that beyond these licenses the goal of software freedom and digital sovereignty cannot be achieved without the ability to master and create hardware components and systems, and beyond that, to rely on open digital infrastructure (servers, datacenters, and resources). This talk will present the challenges around these topics and what we in Europe, collectively, already do and can do to ensure our independence and our freedoms.
This document discusses efforts to make environmental data from the Environmental Data Platform (EDP) more FAIR (Findable, Accessible, Interoperable, Reusable) compliant. It describes the components of EDP including data repositories, a metadata catalog, web portal, and tools. It outlines work done to improve metadata quality for humans and interoperability for machines through signposting. This includes adding DOIs, citations, and linked data/linkset JSON files. The document emphasizes that maintaining high quality metadata requires significant ongoing effort.
The document discusses how artificial intelligence (AI) and the Internet of Things (IoT) are revolutionizing mass customization. Traditional mass customization allows customers to customize products online through selection of options. AI enables more intuitive customization by learning customer preferences and offering dynamic, individual recommendations. IoT connects devices to customize the usage experience based on a person's context and needs. The integration of these technologies has the potential to transform mass customization by enabling mood-based and natural language processing for customization as well as products designed through AI with minimal user input.
Since 2020, Stadtwerke Meran has realized five use cases:
- Control of the control cabinets of public lighting
- Optimizing the service on waste press containers
- Bike boxes
- Just Nature project: temperature measurement over LoRaWAN
- Smart lighting: communication with single light points over LoRaWAN
As open source software becomes the foundation to build digital products, to run the backbones of ICT infrastructure and to ensure digital sovereignty and cyber resilience, both the technology as well as the communities that develop it inevitably move into the focus of regulators. The European Union is advancing a number of policy initiatives that regulate liability, cyber security, data handling and AI applications in digital products, among others. This is a challenge for the still quite decentralised and globally operating open source community. How could the open source community participate in legislative processes, and what may be the potential impacts of the upcoming regulation on the open source development process and community dynamics?
The public transport system in South Tyrol is going through a huge transformation: new investments, many new green vehicles and brand-new software. The transition will take time, so how do we develop a fleet monitoring system to use during the transition without spending a fortune? Maybe with Free Software!
AICS is the Italian Agency for Development Cooperation, which started operating in 2016 with the ambition of aligning Italy with the main European and international partners in the commitment to development. KNOWAGE Labs is developing for AICS a platform that is probably unique in the world and will allow both the Agency and the public to access all the major indicators on the UN Sustainable Development Goals provided by international sources (World Bank, WTO, ILO, etc.) and easily compare them. The solution will allow analysis to start from three different touch points: the infographic of SDG goals, the advanced search criteria, and the virtual assistant. A customized dashboard will then be provided to the user, allowing them to further expand the analysis by interacting with charts, maps, tables, etc. This talk will show the state of the art of the solution, highlighting objectives and expected results of the project, but also the new developments of KNOWAGE related to AI.
Interoperability is a core element of the ongoing digitalisation of Europe. With the Interoperable Europe Act, the EU is aiming to create a dedicated legal framework for interoperability and to enhance cross-border digital public services across the European Union. This talk will give an overview of the state of play of this proposed regulation in the ongoing EU legislative process, some of its flaws, and the important role that Free Software and its community can play in it.
How to sharpen the demand for public code across Europe and monitor progress with TEDective
For six years, the Free Software Foundation Europe has been calling with a broad alliance for publicly funded software to be published as Free Software. This initiative has become a great success: Our demand "Public Money? Public Code!" has found its way into government strategy papers, party programs, as well as coalition treaties, and is being discussed in public administrations across Europe.
At the same time, we see less progress than expected, and vendor lock-ins remain a crucial issue. Digital sovereignty is being redefined while bypassing Free Software. There is openwashing in publicly funded companies, and government projects in favour of Free Software remain empty words. Public statistics on the procurement of Free Software are largely unavailable.
It is therefore no longer enough to promote the idea of "Public Money? Public Code!". We as the Free Software community should be even more vigilant than before – continuing to praise small steps in the right direction, but pointing out and criticising omissions and lack of implementation. We should become more like watchdogs.
In the talk we will look at some examples of lack of implementation of Free Software policies. We will discuss how we, as civil society, can identify such shortcomings and how to deal with them. We will present our initiative TEDective – a free-software solution that makes European public procurement data explorable for non-experts, aiming to provide you with a powerful tool to keep an eye on real progress towards "Public Money? Public Code!" across Europe.
The Internet today forms the backbone of the digitisation of our society and economy. As connectivity increases, the boundaries between the real and digital world get increasingly blurred. However, there has been an erosion of trust in the Internet following revelations about the exploitation of personal data, large-scale cybersecurity and data breaches, and growing awareness of the proliferation and impacts of online disinformation.
What can be done to improve the Internet as a platform for future generations? What initiatives are currently in place to build key technological blocks of an Internet that supports human-centric values, such as privacy, security, and inclusion, while reflecting the values and norms all citizens should enjoy in Europe?
This talk will explore why the current state of the internet must be re-imagined and re-engineered in order to support healthy societies, the existing European Commission initiative to work towards doing so, and the role of Free Software in accomplishing these goals.
2023 saw the launch of the new version of the integrated geographic data management system IGis Maps, after a long and well-structured revision and development process based on a fruitful collaboration between several departments of the Autonomous Province of Bolzano, most of the townships in South Tyrol, Informatica Alto Adige (SIAG, the technical partner) and the Consortium of Municipalities of the Province of Bolzano. In use for years in South Tyrol, IGis Maps has in the Consortium one of its most enthusiastic contributors and supporters.
The very first version was released about eight years ago and its implementation was based on the idea of creating a multi-purpose GIS management system that could support different types of users, that was highly customizable, and, above all, that could be widely shared among the various management entities, both public and private, present within our territory.
After years of use and ad-hoc developments, we can finally present the new version of the IGis Maps system, which incorporates all the technical and technological improvements we realized the system needed.
It was not just a major update with new functionalities grafted onto the previous software structure, but a true re-engineering that led, among other things, to a new and more efficient user interface, a major advancement in internal security, and an optimization and improvement of the entire editing section as well as of the automatic geo-processing section.
A mobile version is currently under development to better support field activities. It will include a very powerful option: the possibility of creating special work sessions in off-line mode, so as to be able to operate even in areas without proper cellular network coverage.
Other very important characteristics are that the system is developed using entirely free software code and infrastructure; that detailed documentation has been produced to ensure the sustainability of any future evolution, even in case of technical-partner turnover; and finally that, thanks to high standards and levels of security, access can be guaranteed to any type of user: from professional users, with dedicated access and qualifications, to private citizens using the ordinary SPID.
We will show examples of how different types of users and stakeholders now permanently use the system for the management of a variety of tasks related to their activities, and how it was possible to customize IGis Maps to create visualization and data management contexts that best meet their needs.
We will also present a related project concerning the updating and correction of the new basic technical cartography, built upon the new Basic Core specification and achieved through the automatic conversion implemented by the SIAG team starting from the previous National Core cartography. With the new IGis Maps it was possible to create an a (...)
KNOWAGE is the open-source analytics and business intelligence suite made in Italy. KNOWAGE aims to provide companies and organizations with analytical capabilities to exploit data to increase their efficiency and sustainability. Thanks also to the support of the open-source community, the suite is constantly evolving, combining the reliability of the most popular business intelligence solutions with the security and transparency guaranteed by open source.
This talk will show the last year's advancements and new features towards a more mobile, accessible and user-friendly product, focusing on the newly rewritten dashboarding tool.
SFScon19 - Alexander Jacob - openEO
1. An API for Standardised Access to Big Earth Observation Data in a Landscape of a Growing Number of EO Cloud Providers
Authors: Alexander Jacob, Jeroen Dries, Matthias Mohr, Luca Foresta, Markus Neteler, Edzer Pebesma, Prateek Budhwar, Simone Tritini, Peter Zellner, Armin Costa, Matthias Schramm
Contact: alexander.jacob@eurac.edu, openEO@list.tuwien.ac.at
2. Why do we need openEO?
openEO – SFSCon – 2019-11-16
Eurac Research 2019, contains modified Copernicus Sentinel data [2016], processed by ESA.
(Slides 3–6 repeat this image slide.)
7. Why do we need openEO?
openEO – SFSCon – 2019-11-16
[Back-end architecture diagram: a layered stack from Hardware (1.4 PB storage, 2 x 40 Gb/s network, 336 cores, 3 TB RAM) through File Systems (CEPH_FS, CEPH_RBD), Cluster Orchestration & Virtual Environments (~50 cores, ~200 MB RAM, e.g. pre-processing of S1/S2), Data Models & Data Bases, up to Applications exposing WCS, WCPS and SOS services.]
(Slides 8–10 repeat this diagram.)
11. Why do we need openEO?
openEO – SFSCon – 2019-11-16
(Slides 11–18 are image-only slides with this title.)
19. Why do we need openEO?
API
openEO – SFSCon – 2019-11-16
(Slide 20 repeats this slide.)
21. Getting started…
• First check for existing drivers @ https://github.com/Open-EO
RESTful implementation using OpenAPI Specification Version 3.0.1
https://open-eo.github.io/openeo-api/gettingstarted-backends/
openEO – SFSCon – 2019-11-16
(Slide 22 repeats this slide.)
23. 6
Getting started…
• First check for existing drivers @
https://github.com/Open-EO
• If an own implementation is needed:
• You can still rely on some base functionality in
the existing implementations
• Or start with openAPI code generator @
https://github.com/OpenAPITools/openapi-
generator
• Start with implementing the essential
endpoints
https://open-eo.github.io/openeo-api/gettingstarted-backends/
RESTful implementation using
OpenAPI Specification Version 3.0.1
openEO – SFSCon – 2019-11-16
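One of those essential endpoints is the well-known versioning document. The sketch below shows what a client-side helper for it might look like: the response structure (a `versions` array whose entries carry `url`, `api_version` and `production`) follows the openEO API specification, while the back-end URL and the version-selection helper are illustrative assumptions.

```python
import json

# Example /.well-known/openeo response; each entry advertises one API
# version served by the back-end (structure per the openEO API spec,
# the backend.example URLs are made up).
WELL_KNOWN = json.loads("""
{
  "versions": [
    {"url": "https://backend.example/api/v0.4", "api_version": "0.4.2", "production": true},
    {"url": "https://backend.example/api/v1.0", "api_version": "1.0.0-rc.2", "production": false}
  ]
}
""")

def pick_version(doc, allow_unstable=False):
    """Pick the highest advertised API version, preferring production ones."""
    candidates = [v for v in doc["versions"] if v.get("production") or allow_unstable]

    def key(v):
        # Compare on the dotted numeric core; pre-release tags are ignored
        # for brevity -- a real client should do a full semver comparison.
        core = v["api_version"].split("-")[0]
        return tuple(int(p) for p in core.split("."))

    return max(candidates, key=key)

print(pick_version(WELL_KNOWN)["url"])  # → https://backend.example/api/v0.4
```

A client would fetch this document once, pick a version it supports, and use the returned `url` as the base for all further endpoint calls.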
The Endpoints of openEO

RESTful implementation using OpenAPI Specification Version 3.0.1

Root slash for capabilities, and a well-known document for versioning:
/
/.well-known/openeo
/output_formats

openEO strives for compatibility with STAC and the OGC APIs as far as possible for data discovery:
/collections
/collections/{collectionid}

The basis for all computation are processes:
/processes

Processes can be chained into process graphs:
/process_graphs
/process_graphs/{graphID}

Results can be processed and downloaded synchronously:
/results

Process graphs can be submitted as batch jobs, queued for processing, and the results can be downloaded upon completion:
/jobs
/jobs/{jobid}
/jobs/{jobid}/results

Results can be consumed as secondary web services (e.g. WMS, XYZ, WCS):
/service_types/{serviceid}

Handling of user authentication and billing:
/credentials/basic
/credentials/oidc

Users can upload their own files:
/files/{userid}

Users can create and integrate user-defined functions into process graphs:
/udf_runtimes
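A process graph chains processes by letting one node reference the output of another. A minimal sketch, built as a plain Python dict: the process names (`load_collection`, `reduce_dimension`, `max`, `save_result`) come from the openEO process catalogue, while the collection id, extent and node names are made-up examples.

```python
import json

# Hypothetical collection id and spatial/temporal extent.
process_graph = {
    "load": {
        "process_id": "load_collection",
        "arguments": {
            "id": "SENTINEL2_L2A",
            "spatial_extent": {"west": 11.2, "south": 46.4, "east": 11.4, "north": 46.6},
            "temporal_extent": ["2019-06-01", "2019-08-31"],
            "bands": ["B04", "B08"],
        },
    },
    "reduce": {
        "process_id": "reduce_dimension",
        "arguments": {
            "data": {"from_node": "load"},  # chain: consume the output of "load"
            "dimension": "t",
            "reducer": {
                "process_graph": {
                    "max": {
                        "process_id": "max",
                        "arguments": {"data": {"from_parameter": "data"}},
                        "result": True,
                    }
                }
            },
        },
    },
    "save": {
        "process_id": "save_result",
        "arguments": {"data": {"from_node": "reduce"}, "format": "GTiff"},
        "result": True,  # marks the node whose output is returned
    },
}

# The JSON body a client would send, e.g. for synchronous processing
# or when creating a batch job.
body = json.dumps({"process": {"process_graph": process_graph}})
```

Because the graph is plain JSON, the same description can be generated by any client (Python, R, JavaScript, …) and executed unchanged on any compliant back-end.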
WCPS backend (EURAC): https://openeo.eurac.edu
https://github.com/Open-EO/openeo-wcps-driver

[Infrastructure stack diagram: Applications (WCS, WCPS, SOS services) on top of Data Models & Data Bases, Cluster Orchestration & Virtual Environments (~50 cores, ~200 MB RAM per task, e.g. pre-processing of S1/S2), File Systems (CEPH_FS, CEPH_RBD) and Hardware (1.4 PB storage, 2 × 40 Gb/s network, 336 cores, 3 TB RAM)]
WCPS backend (EURAC)
https://github.com/Open-EO/openeo-wcps-driver

• Based on swagger-jersey2-jaxrs for the REST API implementation
• SQLite for the openEO-related DB => batch-job management, storing of process graphs
• GDAL for image operations and coordinate transformations
• JJWT for the OpenID Connect implementation => linked to Microsoft Azure for authentication
• Packaged as a web archive using Maven => deployable on any Java-capable web container (e.g. Tomcat or Jetty)

Configuration:
• Properties file
• WCPS endpoint
• openEO endpoint
• Authentication endpoint (for OIDC)
• DB location
• TMP location
• Session timings (auth expiry, tmp duration, etc.)
• Setup of host environment
• CentOS 7 or Ubuntu 18.04
• Install Tomcat 7 or later
• Configure for HTTPS
• Install SQLite (v3) & GDAL (v2.4)
• Deploy openEO.war
• Set up a proxy server for public access
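A properties file covering the configuration items above could look roughly like the following sketch. All key names and values are hypothetical; the actual keys are defined by the openeo-wcps-driver itself.

```properties
# Hypothetical example -- the real key names are defined by openeo-wcps-driver
wcps-endpoint        = https://backend.example/rasdaman/ows
openeo-endpoint      = https://backend.example/openeo
oidc-auth-endpoint   = https://login.example/oauth2/v2.0
db-location          = /var/lib/openeo/openeo.db
tmp-location         = /var/tmp/openeo
auth-expiry-minutes  = 60
tmp-retention-hours  = 24
```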
Conclusions
• A RESTful API has been defined
• Interface is human- and machine-readable (JSON)
• Process catalogue
• Extendable by user-defined functions
• Standardized process graph for the abstract description of EO computational workflows
• A number of reference implementations are currently in development
• Based on different programming languages: Python, Java, R, JavaScript
• Based on different existing hardware and software infrastructures
• Can be used as a starting point for your own implementation
• Together with extensive documentation and guides
• All of this is open source
Thank you for your attention!

H2020 openEO
EO-2-2017: EO Big Data Shift
Grant agreement No 776242

Contact:
http://openeo.org/
openEO@list.tuwien.ac.at
https://github.com/Open-EO
@open_EO
https://www.youtube.com/channel/UCMJQil8j9sHBQkcSlSaEsvQ
https://www.researchgate.net/project/openEO
https://openeo-chat.eodc.eu/channel/public
https://zenodo.org/communities/openeo
alexander.jacob@eurac.edu