This document summarizes a presentation given at the ONE Conference 2013 about using cloud computing for Earth observation ground segments. It describes four cases where the European Space Agency used cloud computing:
1. Mass re-processing of satellite data on Amazon Web Services for validation purposes, allowing processing of 30,000 products in 5 weeks.
2. Coupling large data dissemination and processing capabilities on dedicated servers from Hetzner for analyzing 38,000 satellite images and serving 3,000 users.
3. A collaborative exploitation platform using multiple cloud providers through Helix Nebula for exploiting Earth observation data from various sources and making it available to over 200 users.
4. Plans for a sandbox service providing researchers and service providers
Top Ten Security Considerations when Setting up your OpenNebula Cloud (NETWAYS)
Creating new nodes in your cloud environment has never been easier: just a few clicks and system engineers can create new virtual machines, assign network environments to them, and deploy software components. Sound security engineering has always been a key task in ensuring your data’s confidentiality, integrity, and availability. Beyond hardening your operating systems and wisely designing your applications, cloud computing introduces a new challenge for engineers who are responsible for security.
A breach in the perimeter of one of your central components threatens the overall security of all systems in the environment. This talk discusses predominant attack patterns that system engineers and security officers should consider. The top 10 threats are paired with practical suggestions to improve data center security in the cloud.
Enabling Scientific Workflows on FermiCloud using OpenNebula (NETWAYS)
The FermiCloud Project has been operating an Infrastructure-as-a-Service private Cloud using OpenNebula since the fall of 2010. FermiCloud has made significant contributions in X.509-based authentication and authorization, accounting, fabric deployment and high-availability cloud infrastructure. Our current program of work, carried out jointly with KISTI, focuses on interoperability and federation with the goal of running scientific cloud-based workflows across multiple clouds. I will identify some of the technical challenges that remain to be solved in widespread cloud deployment, as well as lessons that we have learned from grid computing and applied to the cloud environment.
Superfluidity, Infrastructure for mixed workloads in Mobile Edge Computing - ... (Cloud Native Day Tel Aviv)
Superfluidity is a European Union research project that innovates in the 5G networks domain. Its goal is to design a converged cloud-based 5G architecture that would enable instantiating services on the fly, running them anywhere in the network (core or edge), and shifting them transparently to different locations. 18 partners are contributing to the project, including leaders from the IT and telco industries and from academia.
In this session we'll examine the MEC architecture, which supports mixed workloads of VMs and containers sharing one networking infrastructure. We’ll discuss deploying OpenStack and Kubernetes side by side, leveraging Kuryr to build that single networking layer.
BBC Research & Development are in the process of deploying a department wide virtualization solution, catering for use cases including web development, machine learning, transcoding, media ingress and system testing. This talk discusses the implementation of a high performance Ceph storage backend and the challenges of virtualization in a broadcast research and development environment.
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ... (OpenStack)
Audience Level
Intermediate
Synopsis
M3 is the latest generation system of the MASSIVE project, an HPC facility specializing in characterization science (imaging and visualization). Using OpenStack as the compute provisioning layer, M3 is a hybrid HPC/cloud system, custom-integrated by Monash’s R@CMon Research Cloud team. Built to support Monash University’s next-gen high-throughput instrument processing requirements, M3 is half-half GPU-accelerated and CPU-only.
We’ll discuss the design and tech used to build this innovative platform as well as detailing approaches and challenges to building GPU-enabled and HPC clouds. We’ll also discuss some of the software and processing pipelines that this system supports and highlight the importance of tuning for these workloads.
Speaker Bio
Blair Bethwaite: Blair has worked in distributed computing at Monash University for 10 years, with OpenStack for half of that. Having served as team lead, architect, administrator, user, researcher, and occasional hacker, Blair’s unique perspective as a science power-user, developer, and system architect has helped guide the evolution of the research computing engine central to Monash’s 21st Century Microscope.
Lance Wilson: Lance is a mechanical engineer, who has been making tools to break things for the last 20 years. His career has moved through a number of engineering subdisciplines from manufacturing to bioengineering. Now he supports the national characterisation research community in Melbourne, Australia using OpenStack to create HPC systems solving problems too large for your laptop.
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks (OpenStack)
Audience Level
Beginner
Synopsis
Layer 2 versus Layer 3, MLAG, Spanning-Tree, switch mechanism drivers, overlays and routing-on-the-host — What scales and what does not? The underlying plumbing of an OpenStack network is something you’d rather not have to think about. This presentation examines the network architectures of web-scale and large enterprise OpenStack users and how those same efficiencies can be used in deployments of all sizes.
Speaker Bio:
Scott is a Member of Technical Staff at Cumulus Networks where he designs, supports and deploys web-scale technologies and architectures in enterprise networks globally. Prior to becoming a founding member of the Cumulus office in Australia, Scott started his career as a network administrator before joining Cisco Systems to support their data centre products.
OpenStack Australia Day Melbourne 2017
https://events.aptira.com/openstack-australia-day-melbourne-2017/
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te... (OpenStack)
Audience Level
Intermediate
Synopsis
The latest SDN revolution is centered on creating efficient virtualized data center networks using VXLAN & EVPN. We will talk about the scale, performance, and cost advantages of using a modern controller-free virtualized network solution built on 100 Gigabit Ethernet switches with hardware-based VXLAN routing. We will explore the ease of automating such a network in an OpenStack environment and take you through a real-world use case of using an OpenStack Network Node to bridge between a bare metal cloud (EVPN) and a fully virtualized cloud environment (orchestrated by Neutron).
Speaker Bio:
David has held leadership roles at 3COM, Cisco Systems, Nortel Networks, and IBM where he promoted advanced network technologies including High Speed Ethernet, Layer 4-7 switching, Virtual Machine-aware networking, and Software Defined Networking.
David’s current focus is on the evolving landscape of data center networking, scale out storage, Open Networking, and cloud computing.
Adam Dagnall: Advanced S3 compatible storage integration in CloudStack (ShapeBlue)
Adam's slides from his talk at the CloudStack European User group meetup, March 13, London. To provide tighter integration between the S3 compatible object store and CloudStack, Cloudian has developed a connector to allow users and their applications to utilize the object store directly from within the CloudStack platform in a single sign-on manner with self-service provisioning. Additionally, CloudStack templates and snapshots are centrally stored within the object store and managed through the CloudStack service. The object store offers protection of these templates and snapshots across data centres using replication or erasure coding.
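In practice, "S3 compatible" means the object store speaks the S3 REST API, including AWS Signature Version 4 request authentication, so existing S3 clients work against it unchanged. As a minimal sketch of the mechanism (the credentials and date below are made-up values, not anything from the talk), here is the SigV4 signing-key derivation using only the Python standard library:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    """Derive the AWS Signature V4 signing key via the HMAC-SHA256 chain."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

def sign(secret_key: str, date: str, region: str, string_to_sign: str) -> str:
    """Sign the canonical string-to-sign; the hex digest goes in the Authorization header."""
    key = sigv4_signing_key(secret_key, date, region)
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

# Hypothetical credentials against an on-premises S3-compatible endpoint.
signature = sign("ExampleSecretKey", "20180313", "eu-west-1",
                 "AWS4-HMAC-SHA256\n20180313T000000Z\nexample-scope\nexample-hash")
```

Because the signing scheme is identical, pointing a stock S3 SDK at the on-premises endpoint URL is usually all the client-side change that's needed.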
This talk describes the Fermilab Virtual Facility project, which incorporates bare-metal machines, our OpenNebula-based private cloud, and commercial clouds. After a number of years of research and development we are now doing stable production of data-intensive analysis and simulation for High Energy Experiments on the cloud.
I will pay special attention to the auxiliary services such as code caching, data caching, job submission, autoscaling, and load balancing that we are launching in the cloud. I will also review other significant developments by others in the field with which Fermilab is not directly involved.
Author Biography
Steven Timm has worked on cloud and virtualization issues for the Scientific Computing Division at Fermilab. The new Virtual Facility Project is a way to transparently extend Fermilab’s facility onto commercial and community clouds.
How to Survive an OpenStack Cloud Meltdown with Ceph (Sean Cohen)
What if you lost your datacenter completely in a catastrophe, but your users hardly noticed? Sounds like a mirage, but it’s absolutely possible.
This talk will showcase OpenStack features enabling multisite and disaster recovery functionality. We’ll present the latest capabilities of OpenStack and Ceph for volume and image replication, using Ceph Block and Object as the backend storage solution, and look at the future developments being driven to improve and simplify the relevant architecture use cases. One such use case is Distributed NFV, an emerging pattern that rationalises your IT by using fewer control planes and lets you spread your VNFs across multiple datacenters and edge deployments.
In this session you will learn about new OpenStack features enabling multisite and distributed deployments, and review key use cases, architecture designs, and best practices to help operators avoid the OpenStack cloud meltdown nightmare.
https://youtu.be/n2S7uNC_KMw
https://goo.gl/cRNGBK
Audience Level
Intermediate
Synopsis
In this presentation, Shunde will show you how to simplify the migration process with a workload migration engine, making the move to OpenStack easy. This talk will address the various difficulties operators and administrators face when migrating workloads and resources between various cloud platforms, including removing time consuming, repetitive and complicated steps.
This tool can be applied to many cloud migrations, including between Virtual Machines and OpenStack, between Public and Private clouds, as well as between OpenStack and OpenStack. This tool integrates completely with other OpenStack projects minimising deployment and maintenance efforts. So whether you’re looking to upgrade from your existing traditional virtualisation platform, setup a new OpenStack instance, or upgrade to a newer version of OpenStack, we will show you how to simplify this process using GUTS.
Speaker Bio
Shunde is a senior software developer in Aptira with over 15 years experience in software development, automation and system administration. He has worked with OpenStack since the Diablo cycle and has been involved in projects from OpenStack infrastructure to distributed systems running on top of OpenStack.
Here is the slide deck presented at our March 16, 2016 Kubernetes meetup by Aniket Daptari, Sr. Product Manager of Cloud Networking, Juniper Networks. It covers OpenContrail with Kubernetes. Sponsored by StackPointCloud and Concur.
Agenda:
What is Software Defined Storage?
What is Ceph?
What is Rook?
Storage for Kubernetes
Storage Classes
Storage on Kubernetes
Operator Pattern
Custom Resource Definition
Rook Operator
Rook architecture
Ceph on Kubernetes with Rook
Demo
Rook Framework for Storage solutions
How to Get Involved?
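The "Storage Classes" item in the agenda maps to a Kubernetes StorageClass object pointing at the Rook-Ceph provisioner. A minimal sketch, built as a plain Python dict so it serializes to JSON or YAML; the provisioner name, cluster ID, and pool are typical Rook-Ceph defaults but should be treated as assumptions for any given cluster:

```python
import json

# Hypothetical StorageClass for a Rook-managed Ceph RBD pool; field names follow
# the storage.k8s.io/v1 schema, values are illustrative.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "rook-ceph-block"},
    "provisioner": "rook-ceph.rbd.csi.ceph.com",  # assumed CSI provisioner name
    "parameters": {
        "clusterID": "rook-ceph",   # namespace of the Rook operator (assumption)
        "pool": "replicapool",      # Ceph pool backing the volumes (assumption)
    },
    "reclaimPolicy": "Delete",
    "allowVolumeExpansion": True,
}

print(json.dumps(storage_class, indent=2))
```

A PersistentVolumeClaim that names `storageClassName: rook-ceph-block` would then have an RBD image provisioned for it dynamically by the operator.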
OpenStack and Red Hat: How we learned to adapt with our customers in a maturi... (OpenStack)
Audience Level
All levels
Synopsis
Peter has been involved in the OpenStack community since its B-release, and he has been enabling and helping customers across various industries adopt OpenStack in strategic ways. In this session, you will learn from his experience what Red Hat’s perspective is on the current state of affairs in the OpenStack community and the path ahead that Red Hat is putting its efforts into. OpenStack is not a product that tries to solve any one business problem in particular, but a technology that aims to be usable for many; we will cover the steps required to make sure that your organisation is ready for OpenStack-based cloudification and transformation.
Speaker Bio:
Peter Jung is a Senior Business Development Manager at Red Hat where he leads the practice in the areas of Cloud, SDN/NFV and IoT across Australia and New Zealand. He is passionate about open innovation and open source software development model as the foundation for next generation society and ICT systems. Prior to Red Hat, he had various roles at Cisco and Dell for 15 years. He holds a BSEE and an MBA.
OpenStack Australia Day Melbourne 2017
https://events.aptira.com/openstack-australia-day-melbourne-2017/
Introduction to Container Storage Interface (CSI) (Idan Atias)
Among the cool stuff we do at Silk, my colleagues and I develop the Silk CSI Plugin for customers who use our system as the storage layer for their Kubernetes workloads.
Before deep diving into the code and as part of my ramp-up on this subject I prepared some slides that cover some basic and important information on this topic.
These slides start by recapping some basic storage principles in containers and Kubernetes, continue with some more advanced use cases (including an "offline demo" of persisting Redis data on EBS volumes), and end with detailed information on the CSI solution itself.
IMHO, reviewing these slides can improve your understanding of the subject and get you started implementing your own CSI plugin.
The main sources of information I used for preparing these slides are:
* Official CSI docs
* Kubernetes Storage Lingo 101 - Saad Ali, Google
* Container Storage Interface: Present and Future - Jie Yu, Mesosphere, Inc.
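A CSI plugin is, at its core, an implementation of three gRPC services: Identity, Controller, and Node. As a language-agnostic sketch of that call surface, here are hypothetical Python stubs (a real plugin generates these from the CSI protobuf definitions and serves them over a Unix domain socket; the method names below follow the CSI spec, everything else is illustrative):

```python
class IdentityService:
    """Answers 'who are you' probes from the container orchestrator."""
    def GetPluginInfo(self):
        return {"name": "example.csi.vendor.com", "vendor_version": "0.1"}  # hypothetical driver name

    def Probe(self):
        return {"ready": True}

class ControllerService:
    """Cluster-side operations: create/delete volumes, attach/detach."""
    def CreateVolume(self, name, capacity_bytes):
        # A real driver would call the storage backend's API here.
        return {"volume_id": f"vol-{name}", "capacity_bytes": capacity_bytes}

class NodeService:
    """Node-side operations: format, mount, and unmount on the host."""
    def NodePublishVolume(self, volume_id, target_path):
        # A real driver would mount the backing device at target_path.
        return {}
```

The orchestrator never touches the storage backend directly; it only calls this interface, which is what lets one plugin serve Kubernetes, Mesos, and other CSI-aware orchestrators alike.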
CERN, the European Organization for Nuclear Research, is one of the world’s largest centres for scientific research. Its business is fundamental physics: finding out what the universe is made of and how it works. At CERN, accelerators, such as the 27 km Large Hadron Collider, are used to study the basic constituents of matter. This talk reviews the challenges of recording and analysing the 25 Petabytes/year produced by the experiments, and the investigations into how OpenStack could help to deliver a more agile computing infrastructure.
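To put 25 PB/year into perspective, the sustained ingest rate works out to roughly 0.8 GB/s around the clock:

```python
# Back-of-the-envelope sustained ingest rate for 25 PB/year (decimal petabytes).
PETABYTE = 10**15
SECONDS_PER_YEAR = 365 * 24 * 3600

yearly_bytes = 25 * PETABYTE
sustained_rate = yearly_bytes / SECONDS_PER_YEAR  # bytes per second

print(f"{sustained_rate / 10**9:.2f} GB/s sustained")
```

And that is only the average; recording happens in bursts while the accelerator runs, so peak rates the infrastructure must absorb are considerably higher.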
How Cloud Native VNFs Deployed on OpenStack Will Change the Telecom Industry ... (Cloud Native Day Tel Aviv)
Many of the existing network functions, such as routers, firewalls, load balancers and such, have undergone the initial transition from a physical appliance to a virtual appliance. That transition required mostly performance optimization to accommodate the additional I/O overhead of the hypervisor and some configuration changes to accommodate the fact that a VM can be more dynamic in nature.
This shift to NFV, which is basically a cloud-based data center, has revolutionized the way network functions can be delivered. The transition to a cloud native world is considered far more disruptive as it touches changes in both the architecture, to accommodate hyper-scale and multi-tenancy, as well as the business model, which needs to be more consumption based, rather than fixed.
This talk will dive into the main requirements that differentiate a cloud native network function from the traditional network function, and, after making the leap from non-virtualized to virtualized network functions, what is then required to achieve cloud native capabilities, along with the challenges and benefits of this transition.
Paul Angus - CloudStack Container Service (ShapeBlue)
A walkthrough of the recently released update to ShapeBlue’s CloudStack Container Service (CCS). This update brings CCS bang up-to-date by running the latest version of Kubernetes (v1.11.3) on the latest version of Container Linux. CCS also now makes use of CloudStack’s new CA framework to automatically secure the Kubernetes environments it creates.
Using Kubernetes and TensorFlow to build a fog computing platform that can dynamically deploy deep learning applications onto IoT devices (Raspberry Pi).
Optimising NFV Service Chains on OpenStack using Docker (Ananth Padmanabhan)
Slides presented at the OpenStack Summit in Austin, April 2016. Here is the link to the video:
https://www.openstack.org/videos/video/optimising-nfv-service-chains-on-openstack-using-docker
Andre Paul: Importing VMware infrastructures into CloudStack (ShapeBlue)
The talk will show how the VMware ingestion feature uses existing VMware Zones and ‘imports’ them into CloudStack. We will describe the process by which database entries are created for the components of an already existing virtual machine, and how this enables CloudStack to safely manage such instances even though they were not initially set up by CloudStack.
OpenNebula is a great cloud orchestration and management tool, characterised by its flexibility, durability and focused feature matrix: a feature matrix driven largely by real-world problem scenarios and real-world feedback, something that has been the focus of the CentOS Project as well. From large-scale deployment automation to patch management and state control, the CentOS Project aims to solve real-world problems faced by the people who run infrastructure: the sysadmins and operations teams.
During this talk, I will share why OpenNebula and CentOS Linux are a perfect match and go through some user stories that demonstrate this relationship's success in real-world scenarios.
Monitoring Large-scale Cloud Infrastructures with OpenNebula (NETWAYS)
Efficient monitoring is crucial when managing your cloud infrastructure. The metrics collected by OpenNebula can be used to trigger automatic scaling, or to quickly detect failures and automatically restart virtual machines. During this talk, I will show how OpenNebula can be used to efficiently monitor thousands of virtual machines at sub-one-minute intervals. I will show how OpenNebula can be enhanced and optimized, and how different metrics collection tools such as Ganglia and Host sFlow can be used with OpenNebula to monitor large-scale cloud infrastructures.
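The scale claim is easy to quantify. Probing, say, 5,000 VMs every 30 seconds means the monitoring pipeline must absorb on the order of 170 reports per second before counting individual metrics; all the figures below are assumed for illustration, not taken from the talk:

```python
# Back-of-the-envelope load for sub-minute monitoring of thousands of VMs.
num_vms = 5_000          # assumed fleet size
interval_s = 30          # sub-one-minute polling interval (assumption)
metrics_per_vm = 10      # e.g. CPU, memory, net/disk counters (assumption)

reports_per_second = num_vms / interval_s
datapoints_per_second = reports_per_second * metrics_per_vm

print(f"{reports_per_second:.0f} reports/s, {datapoints_per_second:.0f} datapoints/s")
```

Numbers like these explain why push-based collectors such as Host sFlow, which spread load across the hypervisors, scale better at this interval than a central poller contacting every host in turn.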
Adam Dagnall: Advanced S3 compatible storage integration in CloudStackShapeBlue
Adam's slides from his talk at the CloudStack European User group meetup, March 13, London. To provide tighter integration between the S3 compatible object store and CloudStack, Cloudian has developed a connector to allow users and their applications to utilize the object store directly from within the CloudStack platform in a single sign-on manner with self-service provisioning. Additionally, CloudStack templates and snapshots are centrally stored within the object store and managed through the CloudStack service. The object store offers protection of these templates and snapshots across data centres using replication or erasure coding.
This talk describes the Fermilab Virtual Facility project, which incorporates bare-metal machines, our OpenNebula-based private cloud, and commercial clouds. After a number of years of research and development we are now doing stable production of data-intensive analysis and simulation for High Energy Experiments on the cloud.
I will pay special attention to the auxiliary services such as code caching, data caching, job submission, autoscaling, and load balancing that we are launching in the cloud. I will also review other significant developments by others in the field with which Fermilab is not directly involved.
Author Biography
Steven Timm has worked on cloud and virtualization issues for the Scientific Computing Division at Fermilab. The new Virtual Facility Project is a way to transparently extend Fermilab’s facility onto commercial and community clouds.
How to Survive an OpenStack Cloud Meltdown with CephSean Cohen
What if you lost your datacenter completely in a catastrophe, but your users hardly noticed? Sounds like a mirage, but it’s absolutely possible.
This talk will showcase OpenStack features enabling multisite and disaster recovery functionalities. We’ll present the latest capabilities of OpenStack and Ceph for Volume and Image Replication using Ceph Block and Object as the backend storage solution, as well as look at the future developments they are driving to improve and simplify the relevant architecture use cases, such as Distributed NFV, an emerging use case that rationalizes your IT by using less control planes and allows you to spread your VNF on multiple datacenters and edge deployments.
In this session you will learn about wew OpenStack features enabling Multisite and distributed deployments, as well as review key use cases, architecture design and best practices to help operations avoid the OpenStack cloud Meltdown nightmare.
https://youtu.be/n2S7uNC_KMw
https://goo.gl/cRNGBK
Audience Level
Intermediate
Synopsis
In this presentation, Shunde will show you how to simplify the migration process with a workload migration engine, making the move to OpenStack easy. This talk will address the various difficulties operators and administrators face when migrating workloads and resources between various cloud platforms, including removing time consuming, repetitive and complicated steps.
This tool can be applied to many cloud migrations, including between Virtual Machines and OpenStack, between Public and Private clouds, as well as between OpenStack and OpenStack. This tool integrates completely with other OpenStack projects minimising deployment and maintenance efforts. So whether you’re looking to upgrade from your existing traditional virtualisation platform, setup a new OpenStack instance, or upgrade to a newer version of OpenStack, we will show you how to simplify this process using GUTS.
Speaker Bio
Shunde is a senior software developer in Aptira with over 15 years experience in software development, automation and system administration. He has worked with OpenStack since the Diablo cycle and has been involved in projects from OpenStack infrastructure to distributed systems running on top of OpenStack.
Here is the slide deck presented at our March 16, 2016 Kubernetes meetup by Aniket Daptari, Sr. Product Manager of Cloud Networking, Juniper Networks. It covers OpenContrail with Kubernetes. Sponsored by StackPointCloud and Concur.
Agenda:
What is Software Defined Storage?
What is Ceph?
What is Rook?
Storage for Kubernetes
Storage Classes
Storage on Kubernetes
Operator Pattern
Custom Resource Definition
Rook Operator
Rook architecture
Ceph on Kubernetes with Rook
Demo
Rook Framework for Storage solutions
How to Get Involved?
OpenStack and Red Hat: How we learned to adapt with our customers in a maturi...OpenStack
Audience Level
All levels
Synopsis
Peter has been involved in OpenStack community since its B-release, and he has been enabling and helping customers across various industries adopt OpenStack in strategic ways. In this session, you will learn from his experience what Red Hat’s perspective is on the current state of affairs in the OpenStack community and the path we see ahead that Red Hat is putting its efforts in. OpenStack is not a product that tries to solve any one business problem in particular, but a technology that aims to be usable for many – what are the required steps to make sure that your organisation is ready for the OpenStack-based cloudification and transformation.
Speaker Bio:
Peter Jung is a Senior Business Development Manager at Red Hat where he leads the practice in the areas of Cloud, SDN/NFV and IoT across Australia and New Zealand. He is passionate about open innovation and open source software development model as the foundation for next generation society and ICT systems. Prior to Red Hat, he had various roles at Cisco and Dell for 15 years. He holds a BSEE and an MBA.
OpenStack Australia Day Melbourne 2017
https://events.aptira.com/openstack-australia-day-melbourne-2017/
Introduction to Container Storage Interface (CSI)Idan Atias
Among the cool stuff we do at Silk, my colleagues and I develop the Silk CSI Plugin for customers who use our system as the storage layer for their Kubernetes workloads.
Before deep diving into the code and as part of my ramp-up on this subject I prepared some slides that cover some basic and important information on this topic.
These slides start by recapping some basic storage principals in containers and Kubernetes, continues with some more advanced use cases (including an "offline demo" of persisting Redis data on EBS volumes), and ends with a detailed information on the CSI solution itself.
IMHO, reviewing these slides can improve your understanding on this matter and can get you started implementing your own CSI plugin.
The main sources of information I used for preparing these slides are:
* Official CSI docs
* Kubernetes Storage Lingo 101 - Saad Ali, Google
* Container Storage Interface: Present and Future - Jie Yu, Mesosphere, Inc.
CERN, the European Organization for Nuclear Research, is one of the world’s largest centres for scientific research. Its business is fundamental physics, finding out what the universe is made of and how it works. At CERN, accelerators such as the 27 km Large Hadron Collider are used to study the basic constituents of matter. This talk reviews the challenges of recording and analysing the 25 Petabytes per year produced by the experiments, and the investigations into how OpenStack could help to deliver a more agile computing infrastructure.
How Cloud Native VNFs Deployed on OpenStack Will Change the Telecom Industry ...Cloud Native Day Tel Aviv
Many of the existing network functions, such as routers, firewalls, load balancers and such, have undergone the initial transition from a physical appliance to a virtual appliance. That transition required mostly performance optimization to accommodate the additional I/O overhead of the hypervisor and some configuration changes to accommodate the fact that a VM can be more dynamic in nature.
This shift to NFV, which is basically a cloud-based data center, has revolutionized the way network functions can be delivered. The transition to a cloud native world is considered far more disruptive as it touches changes in both the architecture, to accommodate hyper-scale and multi-tenancy, as well as the business model, which needs to be more consumption based, rather than fixed.
This talk will dive into the main requirements that differentiate a cloud native network function from a traditional one: after making the leap from non-virtualized to virtualized network functions, what is then required to achieve cloud native capabilities, along with the challenges and benefits of this transition.
Paul Angus - CloudStack Container ServiceShapeBlue
A walkthrough of the recently released update to ShapeBlue’s CloudStack Container Service (CCS). This update brings CCS bang up-to-date by running the latest version of Kubernetes (v1.11.3) on the latest version of Container Linux. CCS also now makes use of CloudStack’s new CA framework to automatically secure the Kubernetes environments it creates.
Using Kubernetes and TensorFlow to build a Fog Computing platform that can dynamically deploy deep learning applications onto IoT devices (Raspberry Pi).
Optimising nfv service chains on open stack using dockerAnanth Padmanabhan
Uploading slides presented at the OpenStack Summit in Austin in April 2016. Here is the link to the video:
https://www.openstack.org/videos/video/optimising-nfv-service-chains-on-openstack-using-docker
Andre Paul: Importing VMware infrastructures into CloudStackShapeBlue
The talk will show how the VMware ingestion feature takes existing VMware Zones and ‘imports’ them into CloudStack. We will describe the process by which database entries for the components of an existing virtual machine are created, and how this enables CloudStack to safely manage such instances even though they were not initially set up by CloudStack.
OpenNebula is a great cloud orchestration and management tool, characterised by its flexibility, durability and focused feature matrix, one driven largely by real-world problem scenarios and real-world feedback: something that has been a focus of the CentOS Project as well. From large-scale deployment automation to patch management and state control, the CentOS Project aims to solve real-world problems faced by the people who run infrastructure: systems administrators and operations teams.
During this talk, I will share why OpenNebula and CentOS Linux are a perfect match and go into some user stories that demonstrate this relationship's success in real-world scenarios.
Monitoring Large-scale Cloud Infrastructures with OpenNebulaNETWAYS
Efficient monitoring is crucial when managing your Cloud infrastructure. The metrics collected by OpenNebula can be used to trigger automatic scaling, or quickly detect failures to automatically restart virtual machines. During this talk, I will show how OpenNebula can be used to efficiently monitor thousands of virtual machines at sub-1 minute interval. I will show how OpenNebula can be enhanced and optimized, and how different metrics collection tools such as Ganglia and Host-sFlow can be used with OpenNebula to monitor large-scale Cloud infrastructures.
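To make the metrics pipeline concrete, here is a small sketch (not OpenNebula's actual collector code) of extracting per-VM CPU and memory readings from a VM pool document, of the kind the `one.vmpool.info` XML-RPC call returns. The XML layout below is a simplified toy; the exact field names vary across OpenNebula versions.

```python
import xml.etree.ElementTree as ET

# Toy VM_POOL response; a real one comes back from the one.vmpool.info
# XML-RPC call against the OpenNebula daemon.
SAMPLE = """
<VM_POOL>
  <VM><ID>1</ID><NAME>web-1</NAME>
      <MONITORING><CPU>0.5</CPU><MEMORY>524288</MEMORY></MONITORING></VM>
  <VM><ID>2</ID><NAME>db-1</NAME>
      <MONITORING><CPU>1.2</CPU><MEMORY>1048576</MEMORY></MONITORING></VM>
</VM_POOL>
"""

def collect_metrics(pool_xml):
    """Extract per-VM CPU and memory readings from a VM_POOL document."""
    metrics = {}
    for vm in ET.fromstring(pool_xml).findall("VM"):
        mon = vm.find("MONITORING")
        metrics[vm.findtext("NAME")] = {
            "cpu": float(mon.findtext("CPU")),
            "mem_kb": int(mon.findtext("MEMORY")),
        }
    return metrics

vm_metrics = collect_metrics(SAMPLE)
```

A scaling or failure-recovery policy would then run over `vm_metrics` on each sub-minute polling cycle.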
Welcome talk unleashing the future of open-source enterprise cloud computingNETWAYS
The OpenNebula Project has come a long way since the first “technology preview” of OpenNebula almost six years ago. During these years we’ve witnessed the rise and hype of the Cloud, the birth and decline of several virtualization technologies, but especially the encouraging and exciting growth of OpenNebula, both as a technology and as an active and engaged community. As a meeting point for OpenNebula users, developers, administrators, builders, integrators and researchers, this Conference represents an opportunity to look back at how the project has grown in the last six years, and to give a peek at what to expect from the project in the near future.
How Can OpenNebula Fit Your Needs: A European Project FeedbackNETWAYS
BonFIRE is a European project which aims at providing a ”multi-site cloud facility for applications, services and systems research and experimentation”. Grouping different research cloud providers behind a common set of tools, APIs and services, it enables users to run their experiments against a heterogeneous set of infrastructures, hypervisors, networks, etc.
BonFIRE, and thus the (OpenNebula) testbeds, provide a relatively small set of images used to boot VMs. However, the experimental nature of BonFIRE projects results in a high ”turnover” of running VMs. Many VMs are used for a period of between a few hours and a few days, and an experiment startup can trigger the deployment of many VMs at the same time on a small set of OpenNebula workers, which does not correspond to the usual Cloud workflow.
A default OpenNebula is not optimized for such a use case (a small number of worker nodes, high VM turnover). However, thanks to its ability to be easily modified at each level of a Cloud deployment workflow, OpenNebula has been tuned to fit better with the BonFIRE deployment process. This presentation will explain how to change the OpenNebula TM and VMM drivers to improve the parallel deployment of many VMs in a short amount of time, reducing the time needed to deploy an experiment to a minimum without a lot of expensive hardware.
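The gain from parallelising deployments can be sketched in a few lines. This is an illustrative concurrency pattern, not the actual TM/VMM driver changes made for BonFIRE; `deploy_vm` stands in for the expensive per-VM work such as the transfer-manager image copy.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def deploy_vm(vm_id):
    """Stand-in for deploying one VM; in BonFIRE's case the dominant cost
    is copying the base image to the worker node."""
    time.sleep(0.1)  # pretend this is the image copy
    return vm_id

def deploy_experiment(vm_ids, workers=8):
    """Deploy many VMs concurrently instead of strictly one after another."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(deploy_vm, vm_ids))

start = time.time()
done = deploy_experiment(range(16))
elapsed = time.time() - start  # roughly 0.2 s here, vs ~1.6 s serially
```

The same shape applies at driver level: as long as per-VM steps are independent, experiment startup time shrinks roughly with the number of concurrent transfers the workers can sustain.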
The complexity of a typical OpenNebula installation brings a special set of challenges on the monitoring side. In this talk, I will show monitoring of the full stack, from the physical servers to the storage layer and the ONE daemon. Providing an aggregated view of this information allows you to see the real impact of a given failure. I would also like to present a use case for a “closed-loop” setup where new VMs are automatically added to the monitoring without human intervention, allowing for an efficient approach to monitoring the services an OpenNebula setup provides.
High Performance Computing Cloud at SURFsara: Experiences with OpenNebula 3.xNETWAYS
SURFsara (previously SARA) has operated a High Performance Computing Cloud as a service for Dutch academic use since October 2011. The cloud is offered as IaaS, so the users of the OpenNebula web interface are academics who control their own virtual clusters, without being IT professionals. We operate a hybrid cloud with 29 nodes totaling 688 cores and 400 TB of storage. The High Performance and Big Data requirements led to non-obvious design choices and unexpected system behavior. We would like to share our challenges, solutions and user experiences with OpenNebula 3.x and look ahead to the possibilities in OpenNebula 4.0.
The Contrail Virtual Execution Platform (VEP) allows Cloud administrators to manage data centers and monitor the usage of resources. Users can manage their distributed applications on IaaS Cloud providers under the control of Service Level Agreements (SLAs). VEP applications are packaged in the standard OVF format and are deployed inside Constrained Execution Environments (CEEs) derived from the SLA, to support the specification of SLA contracts between users and providers.
These CEEs make it possible to define constraints concerning virtual hardware performance, localization and affinity, allowing the administrator to configure the monitoring system to feed external SLA enforcement services. VEP integrates elasticity management capabilities which can be controlled by external SLA enforcement services. A resource allocator service is integrated to dispatch the virtual components onto the physical resources of the provider in accordance with the SLA terms.
The first version of VEP is currently implemented on OpenNebula. This talk presents the implementation of VEP on OpenNebula and discusses some implementation choices, such as the resource allocator.
Cloud Computing represents a radical change in the way we organize and use computing resources and storage. The scientific and academic communities face the challenge of not only adapting their procedures to this new paradigm, but also contributing to Cloud Computing development and leading its evolution towards open, secure and interoperable computing infrastructures, which will play a key role in the community clouds paradigm.
The Spanish MEGHA initiative promotes and coordinates contributions to cloud computing R&D, education and management made by institutions affiliated with RedIRIS [7] in Spain. In the first phase (2010–2012), MEGHA validated federated cloud platforms using OpenNebula and OCCI [10] to streamline the use of cloud technologies among R&E service centers. Representative infrastructure providers (CESCA, CESGA, PIC), middleware providers (OpenNebula, RedIRIS, OSAmI-Commons) and users (UAB, UOC, UM), together with intermediary/identity/broker resources (RedIRIS), joined efforts to demonstrate the viability of this approach.
The results stimulated the development of use cases including e-learning platforms on demand (Learning Apps project), a distributed HPC platform (e-Science), and Virtual Labs (VDI) in a hybrid scenario (Academic services).
Next Steps?
As a next goal, the Spanish research and academic community is working to assess the possibilities of creating a production Infrastructure Cloud Computing service within member institutions. With this new approach, new challenges appear:
Federated user authentication and authorization mechanisms.
Brokering architecture scenario.
Secure VM image distribution and validation.
A federated cloud accounting system integrating the accounting records of multiple cloud managers and supporting federated cloud governance.
Monitoring and notification of unpredictable changes in availability and reachability status.
Security Policies and Service Level Agreements (SLAs).
rOCCI – Providing Interoperability through OCCI 1.1 Support for OpenNebulaNETWAYS
OCCI (Open Cloud Computing Interface) [1] is an open protocol for management tasks in the cloud environment focused on integration, portability and interoperability with a high degree of extensibility. It is designed to bridge differences between various cloud platforms (or cloud middleware) and provide common ground for users and developers alike.
The rOCCI framework [2], originally developed by GWDG [3], was written to simplify the implementation of the OCCI 1.1 protocol in Ruby and later provided the base for a working client and server implementation targeting OpenNebula as its primary back-end cloud platform. The initial server-side implementation provided basic functionality and served as a proof of concept when it was adopted by the EGI Federated Cloud Task Force [4] and chosen to act as the designated VM management interface. This led to further funding from EGI-InSPIRE [5] and involvement of CESNET [6].
This talk aims to provide basic information about the OCCI protocol, introduce its implementation in rOCCI, and describe and/or demonstrate some of the functionality provided by the rOCCI client and rOCCI-server in concert with OpenNebula. It also briefly examines their use in the EGI FedCloud environment and explores the possibility of further integration with OpenNebula as a part of the ON ecosystem, or even as an integral part of OpenNebula itself in the future. All this with interoperability in mind.
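To give a flavour of what the OCCI 1.1 text rendering looks like on the wire, here is a sketch of the headers a client might send to create a compute resource. The attribute values are illustrative, and rOCCI of course builds this rendering for you; the point is only the `Category` term/scheme/class triple that the spec defines.

```python
def occi_category(term, scheme, cls):
    """Render one OCCI Category header value (OCCI 1.1 text rendering)."""
    return f'{term}; scheme="{scheme}"; class="{cls}"'

def create_compute_headers(cores, memory_gb):
    """Headers a client could send with POST /compute/ to request a new
    compute resource; the attribute set here is illustrative."""
    return {
        "Category": occi_category(
            "compute", "http://schemas.ogf.org/occi/infrastructure#", "kind"),
        "X-OCCI-Attribute": f"occi.compute.cores={cores}, "
                            f"occi.compute.memory={memory_gb}",
    }

headers = create_compute_headers(2, 4)
```

On the server side, rOCCI-server translates such a request into the corresponding OpenNebula template and XML-RPC calls.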
Making Clouds: Turning OpenNebula into a ProductNETWAYS
What does it take to bring innovations like private clouds to small and medium enterprises? In the course of this talk, we will present our experience in creating a self-service toolkit for building a complete virtualization and cloud platform based on OpenNebula, as well as the experience gathered in tens of installations of all sizes. From scalable storage (with benchmarks!) to autonomic optimization, we will present what in our view is needed to bring private clouds to everyone, what components and additions we created to better solve our customers’ problems (from replacing industrial control systems to medium-scale virtual desktop infrastructures), and why OpenNebula was chosen over other competing cloud toolkits.
A cloud to compare our genes with images of the brain. The late database pioneer Jim Gray announced in 2007 the emergence of a fourth scientific paradigm: digital scientific research driven entirely by the exploration of massive data. That vision is today an everyday reality in scientific research laboratories, and it goes well beyond what is commonly called "Big Data". In 2010, Microsoft Research and Inria started a project called Azure-Brain (or A-Brain), whose originality lies both in building, on top of Windows Azure, a new platform for scientific applications to access massive data, and in confronting the reality of scientific research. In this session we will first put into perspective the research challenges around managing massive data in the cloud, then present the "TOMUS Blob" cloud storage platform optimised for Azure. Finally, we will present the A-Brain project and the results we have obtained. Neuroimaging contributes to the diagnosis of certain diseases of the nervous system, but all of our brains turn out to be a little different from one another. This variability complicates medical interpretation. Hence the idea of correlating MRI images of the brain with each patient's genetic heritage, in order to better delimit the brain regions of symptomatic interest. The high-definition MRI images for this project are produced by the Neurospin platform at CEA (Saclay). The problem for the researchers is the mass of information to process: an individual's genetic profile comprises around one million data points, to which must be added equally colossal volumes of 3D pixels describing the images. A data deluge: petabytes of data and potentially years of computation.
This is where the cloud comes into play, with a platform optimised on Azure to run massively parallel applications on massive data. As its lead, Gabriel Antoniu, explains, this Rennes-based research team has developed "efficient storage mechanisms to improve access to this massive data and optimise its processing. Our developments meet the application needs of our colleagues in Saclay."
Towards a Lightweight Multi-Cloud DSL for Elastic and Transferable Cloud-nati...Nane Kratzke
Cloud-native applications are intentionally designed for the cloud in order to leverage cloud platform features like horizontal scaling and elasticity – benefits coming along with cloud platforms. In addition to classical (and very often static) multi-tier deployment scenarios, cloud-native applications are typically operated on much more complex but elastic infrastructures. Furthermore, there is a trend to use elastic container platforms like Kubernetes, Docker Swarm or Apache Mesos. However, especially multi-cloud use cases are astonishingly complex to handle. In consequence, cloud-native applications are prone to vendor lock-in. Very often TOSCA-based approaches are used to tackle this aspect. But these application-topology-defining approaches are limited in supporting multi-cloud adaption of a cloud-native application at runtime. In this paper, we analyzed several approaches to define cloud-native applications being multi-cloud transferable at runtime. We have not found an approach that fully satisfies all of our requirements. Therefore we introduce a solution proposal that separates elastic platform definition from cloud application definition. We present first considerations for a domain specific language for application definition and demonstrate evaluation results on the platform level showing that a cloud-native application can be transferred between different cloud service providers like Azure and Google within minutes and without downtime. The evaluation covers public and private cloud service infrastructures provided by Amazon Web Services, Microsoft Azure, Google Compute Engine and OpenStack.
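The paper's central idea, separating the elastic platform definition from the application definition, can be caricatured in a few lines of Python. All names and fields here are invented for illustration; this is not the proposed DSL itself.

```python
# Platform definition: provider-specific, says where the elastic container
# platform's nodes run (names and fields invented for illustration).
platform = {"aws": {"nodes": 3}, "azure": {"nodes": 0}}

# Application definition: provider-agnostic, speaks only in the platform's
# own abstractions (services, replicas), never of a cloud provider.
application = {"services": {"web": {"replicas": 4}, "db": {"replicas": 1}}}

def transfer(platform, src, dst):
    """Move the platform's capacity from one provider to another. The
    application definition is untouched, which is exactly what makes the
    application transferable at runtime."""
    moved = dict(platform)
    moved[dst] = {"nodes": moved[src]["nodes"]}
    moved[src] = {"nodes": 0}
    return moved

moved = transfer(platform, "aws", "azure")
```

In the real system, the container platform would reschedule the unchanged application onto the new nodes while old and new nodes briefly coexist, which is how the transfer avoids downtime.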
Open Science and GEOSS: the Cloud Sandbox enablersterradue
As part of the European project GEOWOW, Terradue was invited to present views at the GEO-X event on future endeavors to serve data democracy & science literacy in GEOSS (http://www.earthobservations.org/geoss.shtml)
Cloud Testbeds for Standards Development and InnovationAlan Sill
Invited talk given at the 2014 Chip-to-Cloud Security Forum "Advances in Securing Embedded, Mobile and Cloud Services and Ecosystems" in the seminar session on "Procurement, SLAs, and Standardisation on a Global Scale." In this talk, Dr. Sill reviews the history of cloud and grid computing, the formation and charter description for Phases I and II of the US National Institute of Standards and Technology (NIST) "SAJACC" working group, and brings the discussion up to date with an overview of current "DevOps"-oriented cloud standards and software interoperability hands-on testing efforts worldwide.
Improved Utilization of Infrastructure of Clouds by using Upgraded Functional...AM Publications
This paper proposes a cloud infrastructure that combines on-demand allocation of resources with improved utilization, opportunistically provisioning cycles from idle cloud nodes to other processes. It is very difficult for cloud computing to provide all demanded services to cloud consumers, and meeting consumers’ requirements is a major issue. Hence, an on-demand cloud infrastructure using a Hadoop configuration with improved CPU and storage utilization is proposed, based on a splitting algorithm using Map-Reduce. All cloud nodes that would otherwise remain idle are put to use, security challenges are addressed, and load balancing and fast processing of large data in less time are achieved. We compare FTP and HDFS for file uploading and downloading, and enhance CPU and storage utilization. Cloud computing moves application software and databases to large data centres, where the management of the data and services may not be fully trustworthy. This security problem is therefore solved by encrypting the data using an encryption/decryption algorithm together with a Map-Reduce algorithm, which also addresses the utilization of idle cloud nodes for larger data.
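A toy, single-process sketch of the split-then-MapReduce pattern the paper relies on (a real deployment would run the mappers and reducers on Hadoop nodes, not in one Python process; the word-count job is a stand-in workload):

```python
def split(records, chunk_size):
    """Record-aligned splitting: each chunk would go to one worker node
    (possibly an otherwise idle one) in the paper's scheme."""
    return [records[i:i + chunk_size]
            for i in range(0, len(records), chunk_size)]

def map_phase(chunk):
    # Mapper: emit (word, 1) pairs for its chunk.
    return [(word, 1) for word in chunk]

def reduce_phase(pairs):
    # Reducer: sum the counts per word.
    totals = {}
    for word, n in pairs:
        totals[word] = totals.get(word, 0) + n
    return totals

records = "the quick brown fox jumps over the lazy dog".split()
pairs = [pair for chunk in split(records, 3) for pair in map_phase(chunk)]
word_counts = reduce_phase(pairs)  # {'the': 2, 'quick': 1, ...}
```

Splitting on record boundaries (rather than raw byte offsets) matters: it keeps each mapper's input self-contained, which is what lets chunks be processed independently and in parallel.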
OCCIware@CloudExpoLondon2017 - an extensible, standard XaaS Cloud consumer pl...Marc Dutoo
Who uses multi-cloud today? Everybody. Alas, this leads to a lot of "technical glue". Enter OCCIware's Studio and Runtime: manage all layers and domains of the Cloud (XaaS) in a uniform, standard, extensible way, the Cloud consumer platform. With demos of the Docker & Linked Data Studios and the OCCInterface playground.
Extensible and Standard-based XaaS Platform To Manage Everything in The Cloud...OCCIware
Who uses multi-cloud today? Everybody. Docker AND VMs, scaling internally AND bursting to Amazon, storing on a public cloud except for data legally required to stay within the country: different solutions for different needs, but more often than not used at the same time. Alas, this leads to a "noodle plate" architecture where a lot of "technical glue" for the various, incompatible clouds creeps in and makes it impossible to evolve.
To solve this problem, the OCCIware project builds on the Open Cloud Computing Interface (OCCI) standard's unified, uniform architectural approach and provides a platform to manage all layers and domains of the Cloud (XaaS), with two main components: the OCCIware Studio Factory and the OCCIware Runtime. The talk includes a demonstration of the Docker connector and of how to use the OCCIware Cloud Designer to configure the business, platform and infrastructure layers of a real-life, SmartCity-themed Cloud application (a Java API server on top of a MongoDB cluster) seamlessly on both VirtualBox and OW2's OpenStack infrastructure.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of the CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
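The core idea, removing seed bytes that cannot influence program behaviour, can be illustrated with a greedy trimming loop. This is a deliberately simplified stand-in (closer to classic seed minimisation than to DIAR's actual byte analysis), with a toy coverage function whose result depends only on a magic prefix:

```python
def trim_seed(seed, coverage):
    """Greedily drop bytes whose removal leaves the observed coverage
    unchanged: mutating such bytes during fuzzing is wasted effort."""
    baseline = coverage(seed)
    trimmed = bytearray(seed)
    i = 0
    while i < len(trimmed):
        candidate = trimmed[:i] + trimmed[i + 1:]
        if coverage(bytes(candidate)) == baseline:
            trimmed = bytearray(candidate)   # byte i was uninteresting
        else:
            i += 1                           # byte i matters, keep it
    return bytes(trimmed)

# Toy target: coverage depends only on the two-byte magic prefix b"EL".
toy_coverage = lambda data: ("magic" if data[:2] == b"EL" else "plain",)
lean = trim_seed(b"ELF\x00padding", toy_coverage)  # -> b"EL"
```

With the bloated padding gone, every mutation the fuzzer spends lands on bytes that can actually steer execution, which is the speedup DIAR is after.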
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing was discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
20240605 QFM017 Machine Intelligence Reading List May 2024
Opening the Path to Technical Excellence
1. Page 1
ONE Conference 2013 - 26th September 2013
J.Farres – European Space Agency
R&D for Earth Observation Ground Segment
OpenNebula Conference, Berlin
26/9/2013
Cloud Computing in Space:
Opening the path to technical excellence
2. Page 2
Agenda
1.Background and Objectives
2.ESA Experiences
1. EO Re-processing on Amazon
2. Dissemination and Processing on Hetzner
3. SuperSites Exploitation platform with Helix Nebula
4. A sandbox service for Science
3.Summary of lessons learnt
4.Future prospects
3. Page 3
Objectives (1)
1- ICT cost savings
2- Dissemination peaks
3- Processing bursting
4- Collaboration platform
Cloud Computing
IaaS
SaaS
Hosting
(VPS, Rental)
CDN
PaaS
A model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [NIST]
4. Page 4
Background: ESA Mission Statement
ESA's purpose shall be to provide for, and to promote, for
exclusively peaceful purposes, cooperation among European
States in space research and technology and their space
applications, with a view to their being used for scientific
purposes and for operational space applications systems:
• by elaborating and implementing a long-term European
space policy, by recommending space objectives to the
Member States, and by concerting the policies of the
Member States with respect to other national and
international organisations and institutions;
• by elaborating and implementing activities and programmes
in the space field;
• by coordinating the European space programme and national
programmes, and by integrating the latter progressively and
as completely as possible into the European space
programme, in particular as regards the development of
applications satellites;
• by elaborating and implementing the industrial policy
appropriate to its programme and by recommending a
coherent industrial policy to the Member States.
5. Page 5
Objectives (1)
1- ICT cost savings
2- Dissemination peaks
3- Processing bursting
4- Collaboration platform
Cloud Computing
IaaS
SaaS
Hosting
(VPS, Rental)
CDN
PaaS
A model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [NIST]
5- Lead effective use of modern computing infrastructures by European industry
6. Page 6
Agenda
1.Background and Objectives
2.ESA Experiences
1. Mass processing on Amazon
2. Dissemination and Processing on Hetzner
3. SuperSites Exploitation platform with Helix Nebula
4. A sandbox service for Science
3.Summary of lessons learnt
4.Future prospects
7. Page 7
Case 1: Mass processing on Amazon (1)
Purpose
•Fast re-processing of large EO product collections for CalVal (calibration/validation) purposes.
Project / Service
•Timeframe: 2009 and 2011
•Provider: Amazon, EC2, S3
•Data: ERS SAR Wave, MIPAS (30,000 products)
•System: 200 Virtual Servers
configured as Working Nodes to an ESA grid.
•Usage: 11 CPU years of processing in 5 weeks
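The usage figures above can be sanity-checked with a quick back-of-the-envelope calculation (a sketch; the 200-server and 5-week figures come from the slide, the utilization estimate is derived here assuming one CPU per virtual server):

```python
# Sanity-check the Case 1 throughput figures:
# 11 CPU-years of processing delivered by 200 virtual servers in 5 weeks.
WEEKS_PER_YEAR = 52

cpu_years = 11
servers = 200
weeks = 5

work_done_cpu_weeks = cpu_years * WEEKS_PER_YEAR        # 572 CPU-weeks of work
capacity_cpu_weeks = servers * weeks                    # 1000 CPU-weeks available
utilization = work_done_cpu_weeks / capacity_cpu_weeks  # implied average load

print(f"Work done: {work_done_cpu_weeks} CPU-weeks")
print(f"Capacity:  {capacity_cpu_weeks} CPU-weeks")
print(f"Implied average utilization: {utilization:.0%}")  # → 57%
```

The implied ~57% average utilization is only a rough upper-level estimate; staging of input products and per-node configuration would account for part of the gap.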
8. Page 8
https://earth.esa.int/web/guest/missions/esa-operational-eo-missions/envisat/instruments/mipas
9. Page 9
Case 1: Mass processing on Amazon (2)
Pros
1. Excellent processing scalability
2. Efficient bulk data import/export service (via shipped hard disks)
3. “The faster the cheaper”: shorter runs reduce storage costs
4. Good application portability for
gridified applications
Cons
1. Complex and changing pricing,
e.g. periods with cheaper hosts
and free data upload.
2. In-bound / Out-bound costs
3. Ad-hoc scripts to command
provisioning
4. AMI format portability
10. Page 10
Agenda
1.Background and Objectives
2.ESA Experiences
1. Mass processing on Amazon
2. Dissemination and Processing on Hetzner
3. SuperSites Exploitation platform with Helix Nebula
4. A sandbox service for Science
3.Summary of lessons learnt
4.Future prospects
11. Page 11
Case 2: Dissemination and Processing on Hetzner (1)
Purpose
•Couple large processing and dissemination capabilities at low cost.
Project / Service
•Timeframe: 2011
•Provider: Hetzner
•Data: 60TB
•System: 1 Head: Catalogue, Processor Register
n Nodes: Data Dissemination, Hadoop Processing Cluster
Packaged as a back-end for web portal services
•Usage: GeoHazards SuperSites (38,000 SAR images and 3,000 users)
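Since the Hetzner servers were virtualized with KVM and OpenNebula (see the cons on the next slide), a worker node of this kind could be described with a ONE template along these lines (a hypothetical sketch: names, sizes and the image/network identifiers are illustrative, not taken from the actual deployment):

```
NAME    = "supersites-worker"
CPU     = 4
VCPU    = 4
MEMORY  = 8192            # MiB

# System disk cloned from a registered KVM image
DISK    = [ IMAGE = "debian-worker-base" ]

# Scratch space for SAR processing / Hadoop data (volatile disk, size in MB)
DISK    = [ TYPE = fs, SIZE = 500000, FORMAT = ext4 ]

# Attach to the virtual network used by the dissemination portal
NIC     = [ NETWORK = "supersites-net" ]

GRAPHICS = [ TYPE = vnc, LISTEN = "0.0.0.0" ]
```

Instantiating n copies of such a template gives the "1 head + n nodes" layout described above.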
13. Page 13
Case 2: Dissemination and Processing on Hetzner (2)
Pros
1. Good archive scalability (chunks
of 8TB)
2. Synergy of processing -
dissemination: processing peaks
followed by dissemination peaks
3. Much cheaper than Amazon services; no in-bound/out-bound costs
4. Physical dedicated servers
enabled easier security
Cons
1. Storage, dissemination and processing capabilities can only be scaled simultaneously, not independently.
2. Lower service levels than Level
3 or Amazon
3. Virtualization layer had to be
deployed (KVM and ONE)
14. Page 14
Agenda
1.Background and Objectives
2.ESA Experiences
1. Mass processing on Amazon
2. Dissemination and Processing on Hetzner
3. SuperSites Exploitation platform with Helix Nebula
4. A sandbox service for Science
3.Summary of lessons learnt
4.Future prospects
15. Page 15
Case 3: Exploitation platform with Helix Nebula (1)
Purpose
•Pilot a collaborative platform for EO exploitation using multi-sourced cloud provisioning.
Project / Service
•Timeframe: 2012-2013
•Providers: ATOS, CloudSigma, Interoute, T-Systems, EGI
•Cloud Brokers: SlipStream, enStratius
•Data: GeoHazards data (ESA, CNES, DLR, …)
•Processors: ESA, CNR, Gamma, …
•System: 15 TB of raw data, 4 processing services,
dedicated VM for selected users, >200 users
18. Page 18
ESA PoC on EGI Federated Cloud
The ESA Proof of Concept on the EGI Federated Cloud focuses on demonstrating the possibility of providing Processing Services to ESA scientists using EGI Federated Cloud resources.
• Participants:
– User community
• ESA Research and Service Support section: Configuration and execution of the
tests
– Technology providers
• SixSq: Provided the open-source SlipStream software and an OCCI connector
– Resource providers
• CESNET (OpenNebula): Performance tests and multi-site tests
• GRNET (synnefo): Multi-site tests
• CESGA (OpenNebula): Hosting of the SlipStream server
19. Page 19
Case 3: Exploitation platform with Helix Nebula (2)
Pros
1. High performance storage and
dissemination from cloud
2. Scalable processing co-located
with data (same VDC)
3. Multi-sourced via cloud
brokering services
4. Direct provision of virtual hosts to users in the cloud, via ESA
5. Easy application deployment via
grid controller
Cons
1. Frequent platform upgrades
2. Need to distribute processing
resources near distributed data
3. No Cloud federation. Limitations
of brokers
4. COTS licensing
5. Grid-on-cloud approach is only a workaround for the high effort of application “cloudification”
20. Page 20
Agenda
1.Background and Objectives
2.ESA Experiences
1. Mass processing on Amazon
2. Dissemination and Processing on Hetzner
3. SuperSites Exploitation platform with Helix Nebula
4. A sandbox service for Science
3.Summary of lessons learnt
4.Future prospects
21. Page 21
Case 4: A sandbox service for Science (1)
Purpose
• Provide researchers and service providers with a development environment for cloudifying and exploiting their algorithms/services.
Project / Service
•Timeframe: 2013-2014
•Providers: Private cloud + Helix Nebula
•Data: Multiple reference data sets from ESA archives
•Processors: Those developed by the users
22. Page 22
Case 4: A sandbox service for Science (2)
[Diagram: many service providers]
23. Page 23
Case 4: A sandbox service for Science (3)
Pros
1. Hybrid cloud model in support of
development (private) ->
deployment (public)
2. Deployment model via PaaS and
SaaS
3. Simplified “cloudification” via
Cloudera + supporting tools (for
SPMD paradigm)
Cons
1. Need for new CSP drivers for
Helix Nebula: T-Systems
(Zimory), Interoute (Jclouds)
2. Limited CSP support to PaaS
and SaaS services.
3. Slow adoption of cloud-reduce
paradigm among application
developers in Remote Sensing
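The SPMD ("same program, multiple data") paradigm mentioned in the pros is the core pattern users had to adopt: the same processor runs independently over many scenes or tiles. A minimal, hypothetical Python sketch of the idea (the tile list and the processing function are illustrative stand-ins, not the actual ESA tooling):

```python
from multiprocessing import Pool


def process_tile(tile_id: int) -> tuple[int, int]:
    """Stand-in for an EO processor run on one tile/scene.

    In the real setting this would invoke the user's algorithm on one
    satellite product; here it just returns a dummy "measurement".
    """
    return tile_id, tile_id * tile_id


if __name__ == "__main__":
    tiles = range(8)  # one work item per scene/tile
    # Each worker runs the *same program* on *different data* (SPMD);
    # on a cloud this maps naturally to one VM or container per worker.
    with Pool(processes=4) as pool:
        results = dict(pool.map(process_tile, tiles))
    print(results[3])  # → 9
```

The slow part for remote-sensing developers was not this skeleton but restructuring monolithic processors so each work item really is independent, which is what the "cloudification" effort in the cons refers to.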
24. Page 24
Role of OpenNebula on the four Cases
Case 1, Mass processing on Amazon: use of GridWay to command IaaS provisioning from Amazon.
Case 2, Dissemination and Processing on Hetzner: use of ONE to host a private IaaS on dedicated hosting services.
Case 3, Exploitation platform with Helix Nebula: use of ONE by resource providers in the EGI Federated Cloud.
Case 4, A sandbox service for Science: use of ONE for hybrid cloud provision (ESA + Helix Nebula); development of drivers for Zimory and Jclouds.
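For the hybrid provisioning of Case 4, OpenNebula's hybrid drivers extend a normal VM template with a provider-specific section; the stock EC2 driver illustrates the pattern that custom drivers such as those for Zimory and Jclouds follow (a sketch against recent OpenNebula versions; the AMI id, instance type, keypair and names are placeholders):

```
NAME   = "sandbox-worker"
CPU    = 1
MEMORY = 1024

# Local (private cloud) incarnation of the VM
DISK = [ IMAGE = "sandbox-base" ]
NIC  = [ NETWORK = "sandbox-net" ]

# Hybrid incarnation: how the same VM is instantiated when the
# scheduler places it on the public provider (all values placeholders)
PUBLIC_CLOUD = [
  TYPE         = "EC2",
  AMI          = "ami-00000000",
  INSTANCETYPE = "m1.small",
  KEYPAIR      = "sandbox-key"
]
```

The same template can thus be deployed privately during development and burst to the public provider for exploitation, which is exactly the hybrid model listed under the Case 4 pros.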
25. Page 25
Agenda
1.Background and Objectives
2.ESA Experiences
1. EO Re-processing on Amazon
2. Dissemination and Processing on Hetzner
3. SuperSites Exploitation platform with Helix Nebula
4. A sandbox service for Science
3.Summary of lessons learnt
4.Future prospects
26. Page 26
Lessons Learnt: ICT provisioning
1. As soon as ICT needs can be predicted and planned, IaaS is more expensive than other hosting solutions (rental, dedicated hosting).
2. The flexibility of public IaaS is less appealing when internal resources are pooled, virtualized and managed as an internal cloud.
3. On the other hand, IaaS services allow internal ICT resources to be sized down to the “fixed” need and kept at maximum utilization, using external provisioning for the “variable” need.
Hybrid ICT provisioning
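The hybrid argument of lesson 3 can be made concrete with a toy cost model (a sketch: all server counts and prices are invented assumptions, not ESA or provider figures):

```python
# Toy comparison of three provisioning strategies for a workload with a
# fixed baseline load plus occasional peaks. All figures are invented.
BASE_SERVERS = 10   # servers needed all year round ("fixed" need)
PEAK_SERVERS = 40   # extra servers needed during peaks ("variable" need)
PEAK_WEEKS = 6      # weeks per year the peaks last

DEDICATED_PER_SERVER_WEEK = 20.0  # rental/dedicated hosting (assumed)
IAAS_PER_SERVER_WEEK = 35.0       # public IaaS on-demand (assumed)


def all_iaas() -> float:
    # Everything on-demand: pay IaaS rates for baseline and peaks.
    return IAAS_PER_SERVER_WEEK * (BASE_SERVERS * 52 + PEAK_SERVERS * PEAK_WEEKS)


def all_dedicated() -> float:
    # Size dedicated capacity for the peak; it idles the rest of the year.
    return DEDICATED_PER_SERVER_WEEK * (BASE_SERVERS + PEAK_SERVERS) * 52


def hybrid() -> float:
    # Dedicated capacity sized to the fixed need; burst peaks to IaaS.
    return (DEDICATED_PER_SERVER_WEEK * BASE_SERVERS * 52
            + IAAS_PER_SERVER_WEEK * PEAK_SERVERS * PEAK_WEEKS)


for name, cost in [("all IaaS", all_iaas()),
                   ("all dedicated", all_dedicated()),
                   ("hybrid", hybrid())]:
    print(f"{name:14s}: {cost:8.0f} / year")
```

With these assumed prices the hybrid strategy is the cheapest of the three, matching lessons 1 and 3: dedicated hosting wins for the predictable baseline, IaaS wins for the short peaks.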
27. Page 27
Lessons Learnt: Service Levels
• Terms & Conditions in public clouds express surprisingly low commitment.
• Cloud opportunities can become risks when applied to critical
systems.
Develop multi-sourcing
Plan contingency scenarios for services hosted in Public
Clouds
28. Page 28
Lessons Learnt: Application Areas
• Dissemination and on-demand processing
– because they are highly variable (depending on user demand)
• Secondary archive and re-processing
– because they are limited in time
• Temporary resources for integration, testing and demonstration
– because they are limited in time
• System sizing
– because needs are unknown
Important areas where remote sensing services can gain from Cloud Computing
29. Page 29
Lessons learnt: User expectations
• Open Data
– All data are discoverable, accessible online and free
– Data is arranged in long time series of coherent data from different providers.
• Open Computing
– Users will be able to perform processing directly on the cloud using virtual servers.
– Users can choose their preferred cloud provider
• Open Source Software
– All basic/platform software is open and freely available
– Application can be easily ported across clouds
• Open Collaboration
– Data and applications can be easily shared with other users
Be up to the users’ expectations
30. Page 30
Lessons learnt and OpenNebula
Hybrid cloud: OK, Case 4 (A sandbox service for Science)
Cloud multi-sourcing: OK, Case 3 (Exploitation platform with Helix Nebula)
Application Areas: N/A
User Expectations: OK (Open Source Software)
31. Page 31
Agenda
1.Background and Objectives
2.ESA Experiences
1. EO Re-processing on Amazon
2. Dissemination and Processing on Hetzner
3. SuperSites Exploitation platform with Helix Nebula
4. A sandbox service for Science
3.Summary of lessons learnt
4.Future prospects
32. Page 32
Future Prospects (1)
ICT
• Set up mid-term relations with 2–4 cloud providers, similar to current agreements with network providers.
• Cloudify present corporate computing resources
• Establish a common ICT provisioning service based on:
– Hybrid and multi-sourced resources
– An in-house brokering layer
33. Page 33
Future Prospects (2)
POLICY
• Mandate cloud hosting for specific activities: Integration
& Validation, Demonstrators, R&D …
• Promote the use of cloud computing solutions among ESA service providers, to the benefit of their competitiveness
• Continue to launch specific flagship projects based on
public clouds
– SuperSites exploitation platform (continuation)
– Thematic Exploitation platforms
34. Page 34
Future Prospects (3)
1. Adopt and promote Open data policy
– ESA Data Policy
– Data access agreements
2. Adopt and promote interoperability Standards
– Data discovery & ordering
– Processing discovery & ordering
– Data / Results access
– RAAA – Registration, Authentication, Authorisation, Accounting
3. Provide Open Source Software
– EO Toolboxes
– Data discovery/catalogue tools
– EO Data management and access
4. Promote Open computing infrastructures
– Cloud computing paradigm
– Standards for IaaS and brokering services
– Software Licensing Agreements
35. Page 35
In summary: Manifesto
+ Open Data
+ Open Software
+ Open Computing
=
Increased market development &
industrial competitiveness
Editor's Notes
Explain briefly what EO Products, Re-processing and CalVal mean. Amazon: market leader. Data: large data and expensive processing. System: Amazon defaults to a ceiling of 50 VMs, but this was very easily extended. Usage: very satisfied users, as they could have their results very early and identify processor anomalies, which could then be resolved in a “relatively” fast engineering process.
Technical pre-requisites for deployment: application developed in line with a specific development environment and/or compliant with established interfaces; embedded software licensed to run on the cloud platform.