Toulouse, CNES - 2013.10.01
RTS Techniques de l'Information et de la Communication
Synthesis of R&T projects on Cloud Computing and WPS, i.e. an OpenStack cloud-based solution for processing SRTM data, followed by Land Cover classification through WPS
On-demand EO Processing Services in a federation of European Ground Segments
Centralisation vs Federation - make things work
Limit data transfer between clouds as much as possible
« Bring process to data » vs « bring data to process » when possible
Processing results should be downloaded only if needed
A good option is to offer a Web service to visualize/manipulate results
Interoperability layers between clouds: authentication, catalogue search, virtualization
High-speed network between clouds
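The « bring process to data » rule above can be made concrete with a toy cost comparison: count what must cross the inter-cloud link under each strategy. This is only a sketch; all sizes below are hypothetical placeholder values, not figures from the project.

```python
# Sketch: compare "bring process to data" vs "bring data to process" by
# the volume that must cross the inter-cloud link in each case.
# All sizes are hypothetical placeholder values.

def bytes_to_move(data_size_gb: float, code_size_gb: float,
                  result_size_gb: float) -> dict:
    """Volume (GB) crossing the cloud boundary under each strategy."""
    return {
        # ship only the processing code/VM image next to the data,
        # then download results if (and only if) they are needed
        "bring_process_to_data": code_size_gb + result_size_gb,
        # ship the whole input data set to where the process runs
        "bring_data_to_process": data_size_gb + result_size_gb,
    }

def cheaper_strategy(data_size_gb, code_size_gb, result_size_gb):
    moves = bytes_to_move(data_size_gb, code_size_gb, result_size_gb)
    return min(moves, key=moves.get)

# Example: 2 TB of SRTM tiles, a 5 GB processing image, 20 GB of results.
print(cheaper_strategy(2000, 5, 20))  # -> bring_process_to_data
```

For imagery archives, the input data almost always dwarfs the processing code, which is why the federation guidance favors moving the process.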
Using A100 MIG to Scale Astronomy Scientific Output - Igor Sfiligoi
Presented at GTC21.
The raw computing power of GPUs has been steadily increasing, significantly outpacing the CPU gains. This poses a problem for many GPU-enabled scientific applications that use CPU code paths to feed data to the GPU code, resulting in lower GPU utilization, and thus reduced gains in scientific output. Applications that are high-throughput in nature, such as astronomy-focused IceCube and LIGO, can partially work around the problem by running several instances of the executable on the same GPU. This approach, however, is sub-optimal both in terms of application performance and workflow management complexity. The recently introduced Multi-Instance GPU (MIG) capability, available on the NVIDIA A100 GPU, provides a much cleaner and easier-to-use alternative by allowing the logical slicing of the powerful GPU and assigning different slices to different applications. And at least in the case of IceCube, it can provide over 3x more scientific output on the same hardware.
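The logical slicing described above is driven through `nvidia-smi`. The following is an illustrative configuration sketch only: exact profile IDs are hardware- and driver-dependent (the IDs shown are typical for a 40 GB A100), and the commands require root on a machine with an A100.

```shell
# 1. Enable MIG mode on GPU 0 (may require a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# 2. List the GPU instance profiles the device supports
sudo nvidia-smi mig -lgip

# 3. Slice the GPU into two 3g.20gb instances (profile id 9 on a 40 GB
#    A100) and create a compute instance on each (-C), yielding two
#    independent logical devices
sudo nvidia-smi mig -i 0 -cgi 9,9 -C

# 4. Each slice now appears as its own device; schedule one
#    application instance per slice
nvidia-smi -L
```

Unlike running several processes on one undivided GPU, each MIG slice gets dedicated compute and memory, so the applications cannot interfere with each other.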
Using commercial Clouds to process IceCube jobs - Igor Sfiligoi
Presented at EDUCAUSE CCCG March 2021.
The IceCube Neutrino Observatory is the world’s premier facility to detect neutrinos.
Built at the South Pole in natural ice, it requires extensive and expensive calibration to properly track the neutrinos.
Most of the required compute power comes from on-prem resources through the Open Science Grid,
but IceCube can easily harness Cloud compute at any scale, too, as demonstrated by a series of Cloud bursts.
This talk provides details of the performed Cloud bursts, as well as some insight into the science itself.
Managing Cloud networking costs for data-intensive applications by provisioni... - Igor Sfiligoi
Presented at PEARC21.
Many scientific high-throughput applications can benefit from the elastic nature of Cloud resources, especially when there is a need to reduce time to completion. Cost considerations are usually a major issue in such endeavors, with networking often a major component; for data-intensive applications, egress networking costs can exceed the compute costs. Dedicated network links provide a way to lower the networking costs, but they do add complexity. In this paper we describe a 100 fp32 PFLOPS Cloud burst in support of IceCube production compute, which used the Internet2 Cloud Connect service to provision several logically dedicated network links from the three major Cloud providers, namely Amazon Web Services, Microsoft Azure and Google Cloud Platform, which in aggregate enabled approximately 100 Gbps of egress capability to on-prem storage. We provide technical details about the provisioning process, the benefits and limitations of such a setup, and an analysis of the costs incurred.
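For a sense of scale, the aggregate egress figure translates into data volume with simple unit arithmetic (decimal units, assuming full sustained utilisation):

```python
# Convert a sustained line rate in gigabits/s to terabytes/hour,
# to show what "approximately 100 Gbps egress" means in data volume.

def gbps_to_tb_per_hour(gbps: float) -> float:
    bytes_per_sec = gbps * 1e9 / 8       # bits/s -> bytes/s
    return bytes_per_sec * 3600 / 1e12   # bytes/s -> TB/hour

print(gbps_to_tb_per_hour(100))  # 45.0 TB per hour at full utilisation
```

At that rate, per-gigabyte egress pricing accumulates quickly, which is why dedicated links that lower the egress rate matter for bursts like this one.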
Demonstrating 100 Gbps in and out of the Clouds - Igor Sfiligoi
In this presentation, which was supposed to be presented at the cancelled CENIC 2020 Annual Conference, I present an overview of what is possible to achieve in terms of networking inside the Clouds and when exchanging data between cloud resources and on-prem equipment, with an emphasis on research-hosted hardware.
There is increased awareness and recognition that public cloud providers do provide capabilities not found elsewhere, with elasticity being a major driver, and funding agencies are taking an increasingly positive stance toward public clouds.
The value of elastic scaling is, however, tightly coupled to the capabilities of the networks that connect all involved resources, both in the public clouds and at the various research institutions.
This presentation tries to shed some light on what is possible today.
AWS October Webinar Series - Introducing AWS Import/Export Snowball - Amazon Web Services
AWS Import/Export Snowball uses secure appliances and the Snowball client to help accelerate petabyte-scale data transfers into and out of AWS. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.
Serverless Comparison: AWS vs Azure vs Google vs IBM - RightScale
Serverless computing (sometimes called function-as-a-service) was the top-growing cloud service in 2018 compared to 2017, according to the RightScale State of the Cloud Survey. Serverless is appropriate for a variety of different use cases. We share how serverless offerings and pricing compare across cloud providers.
(STG202) AWS Import/Export Snowball: Large-Scale Data Ingest into AWS - Amazon Web Services
Moving terabyte and petabyte volumes of data into the cloud can be a challenge for many businesses. Come learn how you can use Snowball, a new AWS feature, to move large-scale (terabyte and petabyte) data to AWS storage services.
Best Practices for Genomic and Bioinformatics Analysis Pipelines on AWS - Amazon Web Services
AWS is a great fit for both steady-state and episodic computational workloads. Here we present some common architecture patterns for analyzing genomic and other biomedical data on scalable high-throughput computational clusters on AWS. This talk covers bootstrapping a traditional Beowulf compute cluster on AWS EC2, as well as data transfer and storage strategies for S3.
Empowering Admins by taking away root (Improving platform visibility in Horizon) - David Lapsley
OpenStack as a Service enables Administrators to move up the stack and concentrate more on helping their users and less on the low-level details of managing their cloud hardware and software. However, this can come at a perceived cost: Cloud Administrators are used to being able to log in to devices in their networks and see exactly what is going on.
In this presentation, we show how providing Admins with enhanced platform visibility through features like LiveStats and Historical Metrics can obsolete the requirement that Admins have root access to every device in their clouds, and enable them to invest their time and energy in areas where their users will benefit the most.
PeopleSoft Cloud Architecture - OpenWorld 2016 - Graham Smith
Oracle’s PeopleSoft PeopleTools 8.55 saw the introduction of PeopleSoft’s cloud architecture: a platform and set of tools for solving many of the issues associated with effectively running PeopleSoft applications in the cloud. This session explores how you can take advantage of this exciting innovation in PeopleSoft, describes practical use cases for making PeopleSoft’s cloud architecture work for you, and discusses how Oracle Compute Cloud Service can play a key part in this.
FME World Tour 2015 - FME & LIDAR - Glen Bambrick - IMGS
FME 2015 has notably added functionality for working with point clouds: there are two new formats (Caris Spatial Archive and Minecraft) and three transformers - PointCloudStatisticsCalculator, PointCloudSorter, and PointCloudMerger. This presentation concentrates on using these transformers in four scenarios: Classification, Rasterization for biomass calculation, Vectorization (feature extraction), and Building Height Calculation.
Improve Page Render Time with Amazon CloudFront - Polyvore
Amazon's CloudFront provides a self-served CDN solution without a contract. In this talk we will walk through the steps to set up CloudFront. We will also talk about how we measure page render time, and take a look at how using CloudFront affects page render time for Polyvore in different countries.
Slides from Polyvore Tech Talk #1
Is Orchestration the Next Big Thing in DevOps? - Nati Shalom
DevOps processes (such as continuous deployment and delivery) often involve writing many custom scripts that are triggered by the build system. With that approach, it is relatively hard to trace the deployment process and troubleshoot when something goes wrong. Additionally, custom scripts are often not written in an easily understood manner. In this session we will walk through specific DevOps workflows (such as install, update, etc.) using Riemann as the framework in question and see the steps required to automate those processes. We will also discuss how Cloudify uses Riemann to provide simple execution and monitoring of those workflow processes. We will share how one customer, PaddyPower, was able to leverage Cloudify to transition their traditional IT into a DevOps environment, bridging the gap between the two.
Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing - Markus Klems
My slides from the WeB 2008 workshop on e-Business in Paris. The framework described in the presentation helps decision makers understand the value proposition of Cloud Computing technology.
This session gives a clear picture of a new trend in the IT world: Cloud Computing. We start with a broad overview of what this new technology has to offer and how it relates to "traditional" infrastructure and development. The focus then shifts to Microsoft's implementation of Cloud Computing: Windows Azure. Combined with a number of real-world examples, this session will clear up the fog around Cloud Computing.
Slides for an introductory workshop on cloud computing for a web app developer audience at FOWA Miami 09 (http://events.carsonified.com/fowa/2009/miami/workshops#workshop_36)
Toulouse, France - 2013.05.30
Centre de Compétences Techniques "Cloud Computing et Big Data"
WPS is an OGC standard that defines interfaces to publish, describe, and execute geospatial processes.
The Orfeo Toolbox (OTB - http://www.orfeo-toolbox.org/) is an Open Source remote sensing image processing software library developed by CNES. The aim of the toolbox is to gather a large number of state-of-the-art algorithms for building processing chains for satellite images. Using the constellation server (http://www.constellation-sdi.org/), we exposed the main OTB processing chains as Web Processing Services (WPS). The WPS provides rules for standardizing inputs and outputs for invoking geospatial processing services. These services are managed from a web browser using the mapshup web client (http://mapshup.info). mapshup supports both synchronous and asynchronous processes and offers direct visualisation of results. The whole system provides users with a complete and comprehensive image processing chain to produce land cover classification from satellite orthoimagery.
With an update to WPS 2.0, this chain should fit well into a Cloud architecture.
GoGrid/AppZero: "Moving Windows Server Applications to the Cloud in 3 Easy St..." - GoGrid Cloud Hosting
Learn how to take the headaches and heartaches out of Windows Server application hosting and migration using GoGrid Cloud Hosting and AppZero. If you answer "Yes" to any of the following questions, then you should review this slide show:
* Are you are interested in learning about the cost-effective flexibility of Cloud Computing?
* Do you develop Windows Server Applications?
* Are you hosting with other Cloud Computing providers?
* Do you want to migrate your Windows Applications from a different cloud or data center?
* Are you an Enterprise customer looking to test your application in the cloud?
* Are you afraid of having to re-engineer all of your Applications because you have been told you must move to the cloud?
* Do you want to learn 3 easy steps to move Windows server applications to the cloud?
* Are you afraid of vendor lock-in?
Azure Integration in Production with Logic Apps and more - BizTalk360
In this session we will share our experience in using different Azure Integration components in a Production environment with Logic Apps. The Why? The How? And What Next?
Understanding the network's role in cloud computing requires understanding the effect of cloud computing on networking. The end result is five key trends in cloud networking, as presented by James Urquhart of Cisco Systems, author of CNET's The Wisdom of Clouds.
OpenStack Summit Tokyo 2015 - Building a private cloud to efficiently handle ... - Pierre GRANDIN
What do you do when your usual setup or turnkey solution isn’t suited for your workload?
Most of the documentation and user feedback that you can find about OpenStack is written for the use case of running a public-facing cloud serving several external customers. When you want to host a single tenant with a single application, the problem is completely different: you don't want publicly exposed APIs; you want to ensure optimal resource allocation to maximize your application performance; and you want to leverage the fact that you own the infrastructure layer to optimize your instance placement strategy, to get the best latency, and to avoid creating SPOFs, using affinity (or anti-affinity) rules.
This talk will focus on what we learned during a two-year journey: from getting OpenStack up and running reliably, to investigating performance bottlenecks, to maximizing the performance of our private cloud.
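The anti-affinity idea above can be sketched as a tiny greedy placement rule: never co-locate two instances of the same application on one hypervisor, so a single host failure cannot take out every replica. This is an illustration only, not OpenStack's actual scheduler filter; the host and application names are invented.

```python
# Sketch of an anti-affinity placement rule: spread instances of the
# same application across distinct hypervisors (no SPOF).
# Host names and applications are made up for illustration.

def place_with_anti_affinity(instances, hosts):
    """Greedily assign each (instance, app) pair to the least-loaded
    host that does not already run an instance of the same app."""
    placement = {h: [] for h in hosts}
    for inst, app in instances:
        candidates = [h for h in hosts
                      if app not in (a for _, a in placement[h])]
        if not candidates:
            raise RuntimeError(f"anti-affinity unsatisfiable for {inst}")
        target = min(candidates, key=lambda h: len(placement[h]))
        placement[target].append((inst, app))
    return placement

hosts = ["hv-01", "hv-02", "hv-03"]
instances = [("web-1", "web"), ("web-2", "web"),
             ("db-1", "db"), ("db-2", "db")]
result = place_with_anti_affinity(instances, hosts)
```

In a real private cloud the same constraint is expressed declaratively (e.g. server groups with an anti-affinity policy) rather than coded by hand.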
Java Web Apps and Services on Oracle Java Cloud Service - Andreas Koop
The Oracle Cloud Services bring an additional degree of agility to Java EE based project delivery. The sophisticated environment, including database, WebLogic Server, and Identity Domain, makes it possible to develop Java EE applications in a very short time and roll them out without operating your own infrastructure. The seamless integration of the Oracle Cloud SDK into Eclipse, NetBeans, and JDeveloper ensures efficient handling in the development environment of your choice. Thanks to Ant, Maven, and command-line support, use within a Continuous Integration environment is covered as well.
This talk explains the concepts of the Oracle Cloud Services and demonstrates all the essential steps for bringing a Java application, as well as services (e.g. for mobile apps), from a local environment into the Oracle Cloud. In addition to the necessary tips and tricks for developing, configuring, and deploying the application, best practices for updating cloud database objects and data are given.
In this keynote presentation, Matthew will review the workflow process DeWalt used to capture an entire 200,000 sq. ft. building in just 2 days with the use of a FARO Focus3D Laser Scanner, register the entire project without the use of any scan targets using SCENE, and upload the data using a new web-based application known as Web Share Cloud. This workflow solved many of the complications in the beginning stages of this project, and will continue to help the project through its lifecycle over the next couple of years.
Netcetera consultants Ronnie Brunner and Jason Brazile present the results of a year-long study of existing and potential uses of cloud computing at the European Space Agency. Some unpublished internal material was removed. Queries can be directed to the contract's Technical Officer at ESA ESRIN.
Introduction to Microsoft Azure. Covers the change to a cloud development paradigm. Motivations for the change, Pricing structures, and an exercise in IT portfolio evaluation.
Smuggling Multi-Cloud Support into Cloud-native Applications using Elastic Co... - Nane Kratzke
Elastic container platforms (like Kubernetes, Docker Swarm, Apache Mesos) fit very well with existing cloud-native application architecture approaches. It is therefore astonishing that these existing, open-source elastic platforms are not considered more consistently for multi-cloud approaches. Elastic container platforms provide inherent multi-cloud support that can be easily accessed. We present a proposal for a control process which is able to scale (and, as a side effect, migrate) elastic container platforms across different public and private cloud-service providers. This control loop can be used in the execution phase of self-adaptive auto-scaling MAPE loops (monitoring, analysis, planning, execution). Additionally, we present several lessons learned from our prototype implementation which might be of general interest to researchers and practitioners. For instance, describing only the intended state of an elastic platform and letting a single control process take care of reaching this intended state is far less complex than defining the many specific multi-cloud-aware workflows needed to deploy, migrate, terminate, and scale elastic platforms or applications.
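The "describe only the intended state and let one control process converge to it" idea can be sketched as follows. The provider API is faked with in-memory counters, and all names (providers, methods) are illustrative, not taken from the paper.

```python
# Minimal sketch of a declarative reconcile step: compare intended vs
# actual node counts per provider and issue only the scaling actions
# needed to close the gap. Real code would call cloud provider APIs
# in scale_up/scale_down; here they just adjust counters.

class PlatformController:
    def __init__(self, providers):
        self.actual = {p: 0 for p in providers}  # stands in for the real API

    def scale_up(self, provider, n):
        self.actual[provider] += n   # real code: launch n VMs, join cluster

    def scale_down(self, provider, n):
        self.actual[provider] -= n   # real code: drain and terminate n VMs

    def reconcile(self, intended):
        """One execution step of a MAPE loop."""
        for provider, want in intended.items():
            diff = want - self.actual.get(provider, 0)
            if diff > 0:
                self.scale_up(provider, diff)
            elif diff < 0:
                self.scale_down(provider, -diff)
        return dict(self.actual)

ctl = PlatformController(["aws", "gce", "openstack"])
ctl.reconcile({"aws": 5, "gce": 3, "openstack": 0})
# migrating away from one provider is just a new intended state:
state = ctl.reconcile({"aws": 0, "gce": 3, "openstack": 5})
```

Note how migration needs no dedicated workflow: it falls out of repeatedly reconciling toward a new intended state.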
The Copernicus programme entered its operational phase in 2014. The space component comprises the Sentinel missions developed by ESA which, for the first time, offer free access to very high quality multi-sensor data. Making these data, and value-added services built on them, available will stimulate research and the development of the downstream sector. Two Sentinel missions are already in orbit: Sentinel-1A (radar imager) was launched on 3 April 2014 and Sentinel-2A (optical imager) on 23 June 2015. The launch of Sentinel-3A (wide-swath imager and altimeter) is currently scheduled for February 2016.
Sentinel data are intended to be distributed to all user communities, in Europe and worldwide. Eventually they will generate 13 TB/day, i.e. almost 5 PB of data per year.
The PEPS platform (Plateforme d'Exploitation des Produits Sentinel) distributes Sentinel products at the French national level to support the establishment and monitoring of environmental policies, foster industrial development and the emergence of downstream services, and meet the expectations of the scientific community. PEPS is a platform designed to offer national users improved performance in accessing the very large volumes of Sentinel data.
2016.02.18 Big Data from Space - Toulouse Data Science - Gasperi Jerome
The European Copernicus programme aims to give Europe an operational, autonomous Earth observation capability as "services of general European interest, with free, full and open access". To this end, ESA is developing 6 families of satellites dedicated to Earth observation - the Sentinels. By 2020, the volume of data acquired by these satellites will be on the order of 20 petabytes. This avalanche of data offers significant opportunities, notably in research, services, and innovation. It also raises technical challenges: how to store these data and, beyond that, how to search, distribute, and process them in order to provide users with the service or information they need.
Presented at Toulouse Data Science on 18.02.2016 - http://www.meetup.com/fr-FR/Tlse-Data-Science/events/228423095/
2015.11.12 Big Data from Space - CUSI Toulouse - Gasperi Jerome
The stakes around Big Data, and the technologies tied to its emergence, are numerous in the fields of space and geographic information. After a review of the core "Big Data" concepts, the presentation builds on concrete examples of systems using spatial data, uses that could not have come into existence without these technologies.
Big Data - Access and Processing of Earth Observation Data - Gasperi Jerome
The success of the Copernicus programme, led by the European Union in coordination with ESA and the member states, rests on the programme's support for European public policies and its ability to foster innovation through the development of value-added services in Europe. A "free and open" data policy has been promoted with this in mind, with two essential conditions: first, easy access to the data for public and private users, and second, an action plan to stimulate the downstream sector.
To this end, CNES, acting as a "collaborative ground segment", complements broad access to the Sentinel satellite products through the PEPS project. With an additional 4 TB of data every day, for a total volume of 20 PB by 2020, the challenge for PEPS is to guarantee high-performance access to the data, notably by offering co-located processing capabilities, thereby fostering the adoption and use of Earth Observation data.
Semantic search within Earth Observation products databases based on automati... - Gasperi Jerome
Since 1972 and the launch of Landsat 1 - the first civilian Earth Observation satellite - millions of images have been acquired all over the Earth by a constantly growing fleet of more and more sophisticated satellites. Generally, searching within this huge amount of Earth Observation (EO) images is limited to the description of the acquisition conditions stored in the related metadata files, i.e. Where (footprint), When (time of acquisition), and How (viewing angles, instrument, etc.). Thus the larger community of end users misses the What filter, i.e. a way to filter searches in terms of image content. RESTo [1] uses the iTag [2] footprint-based tagging system to enhance image metadata and thereby provides a way to express semantic queries on image content in terms of land use. We investigated the performance of RESTo against a database of 12 million simulated Sentinel-2 granules, representative of the forthcoming French national mirror site of Sentinel products (PEPS).
WPS is an OGC standard that defines how to publish and execute geospatial processes.
The WPS interface standardizes the way processes and their inputs and outputs are described, how a client can request the execution of a process, and how the output from a process is handled.
This keynote introduces WPS 1.0 and 2.0, with a focus on process chaining within a cloud federation.
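The WPS 1.0 operations (GetCapabilities, DescribeProcess, Execute) can all be exercised with plain key-value-pair requests. A minimal sketch of building such request URLs; the endpoint and the process identifier are illustrative assumptions:

```python
from urllib.parse import urlencode

WPS_ENDPOINT = "https://example.org/wps"  # hypothetical server

def wps_request(request, **params):
    """Build a WPS 1.0.0 KVP request URL for the given operation."""
    kvp = {"Service": "WPS", "Version": "1.0.0", "Request": request}
    kvp.update(params)
    return f"{WPS_ENDPOINT}?{urlencode(kvp)}"

get_caps = wps_request("GetCapabilities")
describe = wps_request("DescribeProcess", Identifier="OTB.Orthorectification")
execute = wps_request(
    "Execute",
    Identifier="OTB.Orthorectification",
    DataInputs="image=SCENE_ID",  # inputs passed as 'key=value' pairs
)
```

In practice a client first calls DescribeProcess to learn the expected inputs and outputs, then builds the Execute request from that description.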
RESTo - restful semantic search tool for geospatialGasperi Jerome
RESTo implements a search service with semantic query analysis on an Earth Observation metadata database. It conforms to the OGC 13-026 standard - OpenSearch Extension for Earth Observation.
CEOS WGISS 36 - Frascati, Italy - 2013.09.19
Single Sign-On with OAuth and OpenID, used for the Kalideos project and to be used within the French Land Surface Thematic Center
On-demand data processing - An introduction to the Web Processing ServiceGasperi Jerome
Toulouse, France - 2013.01.10
Technical Competence Centre "Telemetry data extraction and deferred-time exploitation"
WPS (Web Processing Service) is an OGC standard that defines interfaces to facilitate the publication, description and execution of processes.
We exposed the image processing chains of the ORFEO Toolbox library (OTB - http://www.orfeo-toolbox.org/), developed by CNES, as WPS processes. For this purpose we used the Constellation server (http://www.constellation-sdi.org/).
Once exposed, the processes can be driven from a Web browser using the mapshup Web client (http://mapshup.info). mapshup supports both synchronous and asynchronous processes and offers direct visualization of the results.
Data access and data extraction services within the Land Imagery PortalGasperi Jerome
Models for scientific exploitation of EO Data - Frascati - October 12th 2012
Presentation of the data access architecture of the French Land Imagery portal
Semantic search applied to Earth Observation productsGasperi Jerome
Seoul, Korea - 2012.10.10
82nd OGC Technical Committee
How to search for Earth Observation imagery that contains coastal cultivated areas?
Semantic content extraction from images is a complex and time-consuming task. A simpler approach is to use the metadata footprint against exogenous data to perform image characterization.
SLACkER (SimpLe Automated Characterization of EaRth observation products) uses the Global Land Cover 2000 classification to perform such characterization automatically.
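This footprint-based characterization can be sketched as a simple overlay of the image footprint on a land-cover grid. The grid representation and the class names below are illustrative assumptions, not the actual SLACkER implementation:

```python
def characterize(footprint, landcover):
    """Return the share of each land-cover class under an image footprint.

    footprint: set of (x, y) grid cells covered by the image
    landcover: dict mapping (x, y) cells to a class name (GLC2000-like)
    """
    counts = {}
    for cell in footprint:
        cls = landcover.get(cell, "unknown")
        counts[cls] = counts.get(cls, 0) + 1
    total = len(footprint)
    return {cls: n / total for cls, n in counts.items()}

# Toy example: a 2x2 footprint over a coastal strip.
landcover = {(0, 0): "water", (1, 0): "water",
             (0, 1): "cultivated", (1, 1): "cultivated"}
tags = characterize({(0, 0), (1, 0), (0, 1), (1, 1)}, landcover)
```

An image tagged roughly 50% "cultivated" while also touching "water" cells is exactly what a "coastal cultivated areas" query would retrieve.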
Access to satellite information in the reactive context of a natural disasterGasperi Jerome
Les rencontres de SIG-la-lettre - Paris, 5 April 2012
Satellite imagery is a decisive source of information in the event of a natural disaster. All the processes involved in such situations are subject to tight time constraints. From the acquisition of the image data to the production of maps for the teams in the field, a race against the clock begins in which the choice of satellite sources is paramount. Today, several dozen image sources are accessible, and users must be able to identify very quickly the sources best suited to their mapping needs.
In this context, the catalogue of the International Charter on Space and Major Disasters (http://www.disasterschartercatalog.org) provides access to the data acquired within this framework. By following interoperability standards and offering an innovative search interface, this service addresses the two main challenges of data dissemination: accessibility and usability.
Experimenting a cloud based solution for image processing and data accessGasperi Jerome
Toulouse, CNES - 2012.02.29
GSCB - Cloud Computing Workshop (http://earth.esa.int/gscb/)
Presentation of an OpenStack cloud based solution for processing SRTM data
Cloud computing and web processing services
1. Cloud Computing & Web Processing Services
https://speakerdeck.com/jjrom/cloud-computing-and-web-processing-services
Jérôme Gasperi
Jerome.Gasperi@cnes.fr
RTS Techniques de l'Information et de la Communication
CNES - Toulouse, France - October 1st, 2013
2. Cloud Computing
Introduction
What we have done
Issues
Web Processing Services
Introduction
What we have done
Issues
What's next ?
WPS on the cloud
3. R&T Cloud Computing (2011)
Use a cloud infrastructure to process Earth Observation data
10. Data security and user privacy cannot be guaranteed in public clouds
The majority of cloud providers are subject to governmental law (e.g. the US Patriot Act)
19. Private cloud technologies are quite simple to implement. They reduce the cost of operation and maintenance by sharing a common infrastructure across multiple projects
27. ...so
Data and processes should be colocated
Processing results should be downloaded only if needed
Better to offer a Web service to visualize/manipulate results
Standardize processes inputs/outputs description (e.g. WPS)
36. Processing
Supervised learning
Based on SVM
(http://en.wikipedia.org/wiki/Support_vector_machine)
(land cover is computed from a set of "well-known areas" given by the user)
Orfeo Toolbox
More than 70 high-level processing chains
orthorectification
segmentation
classification
etc.
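The supervised learning step above can be sketched with a linear SVM, here using scikit-learn as a stand-in for the OTB implementation; the two-band pixel values and user labels are toy data:

```python
from sklearn import svm

# Toy training set: two-band pixel values sampled from the user's
# "well-known areas" (0 = water, 1 = vegetation).
X_train = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
y_train = [0, 0, 1, 1]

clf = svm.SVC(kernel="linear")
clf.fit(X_train, y_train)

# Classify the remaining pixels of the scene band by band.
labels = clf.predict([[0.15, 0.15], [0.85, 0.85]])
```

On a real scene the same fit/predict pattern runs over every pixel vector, producing the land-cover raster.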
47. However...
WPS 2.0 defines a set of process management operations - GetStatus,
Delete, Pause and Resume
This is a must-have to deploy asynchronous WPS on the cloud
Should be an official OGC standard by the end of 2013
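With GetStatus, an asynchronous client reduces to a simple polling loop. A sketch with a stubbed status function standing in for a real WPS 2.0 endpoint; the job identifier and the state names (Accepted, Running, Succeeded) follow the draft specification:

```python
import itertools

def poll_job(get_status, job_id, max_polls=10):
    """Poll a WPS 2.0 job until it leaves the Accepted/Running states."""
    for _ in range(max_polls):
        status = get_status(job_id)
        if status not in ("Accepted", "Running"):
            return status
    return "TimedOut"

# Stub simulating a job that is queued, runs once, then succeeds.
states = itertools.chain(["Accepted", "Running"], itertools.repeat("Succeeded"))
status = poll_job(lambda job_id: next(states), "job-42")
```

A real client would of course sleep between polls (or back off exponentially) rather than hammer the server.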
49. Orthorectifying images hosted by a cloud infrastructure using
WPS standard
In-kind contribution to the «Open Mobility» thread of the OGC OWS-10 Testbed
Final delivery and demonstration April/May 2014
51. Demonstration workflow
1. Select the raw Landsat image to orthorectify within a CSW catalog located on the cloud
2. Click on '+' to process a new orthorectification. Process parameters are set by the user and sent to an asynchronous WPS orthorectification process located on the cloud
3. The result is displayed within the map through a WMS
4. The orthorectified image quality can be checked through an "Assess Quality" WPS process located on the cloud; the result is displayed within the map as a WMS quality layer stored on the cloud
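The workflow on this slide chains three OGC services: CSW for discovery, WPS for processing and WMS for visualization. The final step can be sketched as building the WMS GetMap URL that displays the orthorectified result in the map; the endpoint and layer name are illustrative:

```python
from urllib.parse import urlencode

def getmap_url(wms_endpoint, layer, bbox, width=512, height=512):
    """Build a WMS 1.1.1 GetMap URL to display a processing result."""
    params = {
        "service": "WMS", "version": "1.1.1", "request": "GetMap",
        "layers": layer, "srs": "EPSG:4326",
        "bbox": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "width": width, "height": height,
        "format": "image/png", "transparent": "true",
    }
    return f"{wms_endpoint}?{urlencode(params)}"

url = getmap_url("https://example.org/wms", "ortho_result_job42",
                 (1.0, 43.0, 2.0, 44.0))
```

Serving results as WMS layers is what avoids downloading the full orthorectified product: only the rendered tiles for the current map view leave the cloud.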
52. Cloud Computing & Web Processing Services