Learn about the exciting integration work that has been done with YARN, Red Hat OpenShift, and Kubernetes Docker container orchestration. During this presentation we will cover the basics of this YARN integration effort and then launch into a demo. You won’t want to miss seeing a web application Docker container, Storm, and Hive SQL queries all running in the same HDP cluster!
Kubernetes Operators are control plane agents that know how to manage the entire lifecycle of stateful, complex, or specialized applications. With an Operator, you can extend the Kubernetes API to encode domain-specific knowledge about running, scaling, recovering, and monitoring your applications. This workshop will guide you through the steps of creating and deploying an Operator using the Operator Framework and SDK, open-source tools from Red Hat that simplify the process of making an Operator to package, deliver, and manage your applications on Kubernetes.
This document summarizes a presentation about installation, upgrades, and migrations of OpenStack clouds in the enterprise. It discusses definitions of upgrades and updates, challenges with OpenStack deployments, and introduces the Cloud Director tool for deploying and managing OpenStack clouds on Red Hat Enterprise Linux in a simplified manner. It provides examples of using Cloud Director to deploy OpenStack infrastructure and manage the life cycle of OpenStack clouds.
Level-up your gaming telemetry using Kafka Streams | DevNation Tech Talk - Red Hat Developers
Many modern video games are constantly evolving post-release. New maps, game modes, and game balancing adjustments are rolled out, often on a weekly basis. This continuous iteration to improve player engagement and satisfaction requires data-driven decision making based on events and telemetry captured during gameplay, and from community forums and discussions.
In this session you will learn how OpenShift Streams for Apache Kafka and Kafka Streams can be used to analyze real-time events and telemetry reported by a game server, using a practical example that encourages audience participation. Specifically you’ll learn how to:
Provision Kafka clusters on OpenShift Streams for Apache Kafka.
Develop a Java application that uses Kafka Streams and Quarkus to process event data.
Deploy the application locally or on OpenShift, and connect it to your OpenShift Streams for Apache Kafka cluster.
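The session's stream-processing application is written in Java with Kafka Streams and Quarkus; as a rough stdlib Python sketch of just the aggregation idea (the event shapes and player names here are invented for illustration), grouping telemetry events per player looks like:

```python
from collections import Counter

# Hypothetical game-server telemetry events; in the talk these would
# arrive on a Kafka topic and be aggregated by a Kafka Streams topology.
events = [
    {"player": "alice", "type": "kill"},
    {"player": "bob", "type": "death"},
    {"player": "alice", "type": "kill"},
    {"player": "alice", "type": "death"},
]

def aggregate_by_player(events, event_type):
    """Count events of one type per player (a groupByKey().count() analogue)."""
    return Counter(e["player"] for e in events if e["type"] == event_type)

kills = aggregate_by_player(events, "kill")
print(kills["alice"])  # 2
```

In the real topology the same grouping would run continuously over an unbounded stream, with state held in a Kafka Streams state store rather than an in-memory Counter.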
We believe that the popularity of Kubernetes derives from its ability to adapt and improve the infrastructure in which it is deployed. I'll explain how this is done.
LinuxCon 2013 Steven Dake on Using Heat for autoscaling OpenShift on OpenStack - OpenShift Origin
OpenStack Heat allows modeling relationships between OpenStack resources and managing infrastructure resources throughout application lifecycles. The presentation discusses Heat architecture, autoscaling workflows using Heat and Ceilometer, and demonstrates an OpenShift autoscaling workflow on OpenStack using Heat templates, DIB elements, and CloudWatch alarms. Future work may expand autoscaling to other resources and integrate it more fully across OpenStack projects.
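An autoscaling group in Heat is declared directly in the template; the 2013 talk used CloudWatch-compatible alarms, but the same idea in the later native HOT form looks roughly like this (the resource types are real Heat resources, while the image and flavor names are hypothetical):

```yaml
heat_template_version: 2013-05-23
resources:
  scaling_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: openshift-node   # hypothetical image name
          flavor: m1.medium
  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scaling_group }
      scaling_adjustment: 1
```

A Ceilometer alarm would then invoke the scaling policy's signal URL when, for example, average CPU utilization crosses a threshold.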
During the Casablanca release, contributions to ONAP included developing test services and plugins for multi-cloud and Kubernetes environments. Some key contributions include:
1. A MultiCloud/K8S plugin written in Go that offers an API for interacting with cloud regions supporting Kubernetes.
2. A Kubernetes Reference Deployment (KRD) that provides a reference for deploying Kubernetes clusters satisfying ONAP requirements through Ansible playbooks.
3. Work on OVN4NFVK8S and a virtual firewall use case composed of packet generator, firewall, and traffic sink virtual functions to report traffic volumes to ONAP.
This document discusses transforming monolithic applications into microservices using Red Hat OpenShift. It provides an overview of OpenShift capabilities like application lifecycle management, container orchestration, security and monitoring. It then describes a hands-on lab where developers will learn OpenShift concepts, efficient development workflows and promoting applications between environments using CI/CD pipelines.
Exploring Kubeflow on Kubernetes for AI/ML | DevNation Tech Talk - Red Hat Developers
The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable by leveraging best-of-breed open source projects. These include Jupyter Notebooks, TensorFlow, and PyTorch for training; Seldon and KFServing for serving; and Kubeflow Pipelines. These are all wrapped up neatly in an easy-to-use portal so developers and data scientists can easily collaborate and deliver production-ready AI/ML workloads.
Nova Update - OpenStack Ops Midcycle, Manchester, Feb 2016 - John Garbutt
A quick update on what's happening in Nova, covering API v2.1, Cells v2, the Scheduler, and much more.
It was presented at the Ops Midcycle meetup in Manchester, UK, in Feb 2016.
The Analytic Platform behind IBM’s Watson Data Platform - Big Data Spain 2017 - Luciano Resende
IBM has built a “Data Science Experience” cloud service that exposes Notebook services at web scale. Behind this service are various components that power the platform, including Jupyter Notebooks, an enterprise gateway that manages the execution of the Jupyter kernels, and an Apache Spark cluster that powers the computation. In this session we will describe our experience and best practices in putting together this analytics platform as a service based on Jupyter Notebooks and Apache Spark, in particular how we built the Enterprise Gateway that enables all the Notebooks to share the Spark cluster's computational resources.
An intro to Helm's capabilities and how it makes upgrades and rollbacks in Kubernetes, packaging and sharing, and managing complex dependencies for K8s applications easier.
Machine learning with Apache Spark on Kubernetes | DevNation Tech Talk - Red Hat Developers
The first challenge for an AI/ML practitioner is to gather the data inputs needed to feed a learning model. This is where a solution such as Apache Spark’s unified DataFrame API and a scale-out compute model allows you to execute parallelized queries against SQL, Kafka, and S3. In this session, we are going to explore the use of https://radanalytics.io/ and https://opendatahub.io/ on top of Kubernetes/OpenShift to demonstrate a dynamically scalable ETL pipeline for federated data ingestion.
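In Spark, the federated ingestion described above would build DataFrames from SQL, Kafka, and S3 sources and union them. As a toy stdlib stand-in for that idea (both sources and their schemas are invented), normalizing two heterogeneous sources into one table looks like:

```python
import csv
import io

# A CSV dump standing in for an S3 object, plus records standing in
# for a consumed Kafka stream -- both hypothetical.
s3_csv = "user,score\nalice,10\nbob,7\n"
kafka_records = [{"user": "carol", "score": 12}]

def ingest(csv_text, records):
    """Normalize both sources into one list of typed rows (a union analogue)."""
    rows = [{"user": r["user"], "score": int(r["score"])}
            for r in csv.DictReader(io.StringIO(csv_text))]
    rows += [{"user": r["user"], "score": int(r["score"])} for r in records]
    return rows

table = ingest(s3_csv, kafka_records)
print(len(table))  # 3
```

Spark's DataFrame API does the same normalization declaratively and distributes the work across executors, which is what makes the pipeline dynamically scalable.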
Serverless frameworks are changing the way we do computing. In the open source container world, Kubernetes is playing a pivotal role in manifesting this. This presentation will go deep into various features of Kubernetes to create serverless functions.
It also includes a comparative study of various serverless frameworks, such as Kubeless, Fission, and Funktion, that are available in the open source world. It will conclude with an implementation demo and some real-world use cases.
Presented at Serverless Summit 2017: www.inserverless.com
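Frameworks such as Kubeless or Fission essentially map a route or event to a user-supplied function running in a container. The following stdlib sketch (all names are hypothetical) shows only that registration-and-invocation idea, not any real framework's API:

```python
# Toy function-as-a-service registry: map a name to a handler,
# then dispatch an incoming event to it.
functions = {}

def register(name):
    """Decorator that records a handler under a route name."""
    def wrap(fn):
        functions[name] = fn
        return fn
    return wrap

@register("hello")
def hello(event):
    return f"Hello, {event.get('name', 'world')}!"

def invoke(name, event):
    """Dispatch an event to the registered handler, as a gateway would."""
    if name not in functions:
        raise KeyError(f"no function registered for {name!r}")
    return functions[name](event)

print(invoke("hello", {"name": "k8s"}))  # Hello, k8s!
```

In a real deployment the registry is a Kubernetes custom resource and the dispatch happens through an HTTP gateway in the cluster rather than an in-process dict.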
Kubernetes Helm makes application deployment easy, standardized and reusable. Use of Kubernetes Helm leads to better developer productivity, reduced Kubernetes deployment complexity and enhanced enterprise production readiness.
Enterprises using Kubernetes Helm can speed up the adoption of cloud-native applications. These applications can be sourced from open-source, community-provided repositories or from an organization’s internal repository of customized application blueprints.
Developers can use Kubernetes Helm as a vehicle for packaging their applications and sharing them with the Kubernetes community. Kubernetes Helm also allows software vendors to offer their containerized applications at “the push of a button.” Through a single command or a few mouse clicks, users can install Kubernetes apps for dev-test or production environments.
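At its core, Helm renders Go-templated manifests from a chart's values file. This stdlib Python sketch imitates only that substitution step (the template and values are hypothetical, and real Helm templates use `{{ .Values.x }}` Go-template syntax, not `$` placeholders):

```python
from string import Template

# A toy "chart template": a Deployment manifest with two placeholders.
manifest_template = Template(
    "apiVersion: apps/v1\n"
    "kind: Deployment\n"
    "metadata:\n"
    "  name: $release-web\n"
    "spec:\n"
    "  replicas: $replicas\n"
)

# A toy stand-in for values.yaml.
values = {"release": "demo", "replicas": 2}

manifest = manifest_template.substitute(values)
print("name: demo-web" in manifest)  # True
```

Helm layers release tracking, upgrades, and rollbacks on top of this render step, which is what turns a pile of templated YAML into an installable, versioned package.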
OpenStack - Tzu-Mainn Chen, Marek Aufart, Petr Blaho - ManageIQ Design Summit... - ManageIQ
This document discusses ManageIQ's integration with OpenStack. It provides an overview of the OpenStack TripleO deployment model and describes how ManageIQ implements OpenStack Cloud and Infrastructure providers. It also outlines future work, including improving dashboard and topology views, adding more cloud features like segregation and backups, and using Mistral workflows to simplify exposing TripleO deployment logic through ManageIQ.
OpenStack Nova Liberty focused on maintaining stability while increasing velocity. Key priorities included improving the API, ensuring reliability, enabling live upgrades and scaling. The architecture evolved to separate the data and control planes to reduce downtime during upgrades. Future releases will focus on continued architecture evolution, reducing scope creep, and improving the user experience.
This document summarizes a presentation about Spinnaker on Kubernetes. It introduces Spinnaker as an open source multi-cloud continuous delivery platform initially developed by Netflix. It describes how Spinnaker can be used to manage Kubernetes clusters and deployments through concepts like accounts, server groups, load balancers and pipelines. The document also compares Spinnaker to alternatives like Jenkins and discusses best practices for productionizing Spinnaker on Kubernetes.
This document discusses Red Hat's cloud platforms, including Infrastructure as a Service (OpenStack), Platform as a Service (OpenShift), and container technologies. It notes that business demands are driving IT transformation toward cloud-based architectures using open source technologies. Red Hat is a top contributor to OpenStack and OpenShift and offers integrated products like Red Hat Atomic Enterprise and OpenShift Enterprise to help customers deploy and manage container-based applications at scale across hybrid cloud environments.
Watch the videos at http://cloudify.co/webinars/tosca-training-videos
Getting up to speed with TOSCA simple profile in YAML and its ARIA implementation.
This work is part of the open source testbed setup for cloud interoperability and portability. The Cloud Security Workgroup will further review it and generate a complete working set as we move along. This is part I of the effort.
OpenShift on OpenStack: Deploying With Heat - Alex Baretto
This session illustrates how devops can use Heat to orchestrate the deployment and scaling of complex applications on top of OpenStack. Starting with a walk-through of the example Heat templates for deploying OpenShift Origin (available in the OpenStack GitHub repository), I’ll review the existing templates and enhance them to provide additional functionality such as positioning alarms, responding to alarms, adding instances, and auto-scaling.
Project Gardener - EclipseCon Europe - 2018-10-23 - msohn
Open Source project Gardener (https://gardener.cloud) is a production-grade Kubernetes-as-a-Service management tool that works across various cloud platforms (e.g., AWS, Azure, GCP, Alibaba, and SAP data centers) and on-premises (e.g., with OpenStack).
Are We Done Yet? Testing Your OpenStack Deployment - Ken Pepple
After constructing your OpenStack cloud, it can be difficult to determine whether you've actually configured all the components correctly. OpenStack Rally and Tempest have been created to run verification and benchmarking tests for you, but they themselves are difficult to configure and use. This session will explore creating an easy and repeatable verification and benchmarking process for your OpenStack cloud. Drawing on experience from numerous customer installations, it will delve into the benefits and pitfalls of using specific tools and technologies to achieve your testing goals.
This presentation was given at the 2014 Fall OpenStack Summit. A recording of the presentation is available at https://www.openstack.org/summit/openstack-paris-summit-2014/session-videos/presentation/are-we-done-yet-testing-openstack-deployments .
The document outlines an agenda for a virtual meetup on event-driven integration with Salesforce and CI/CD with GitHub, Maven, and Jenkins. The meetup will include presentations on event-driven integration using Salesforce events and streaming API, and implementing CI/CD pipelines with GitHub, Maven, and Jenkins. It provides details on creating Jenkinsfiles, modifying Maven POM files, and setting up webhooks for automated deployments.
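The CI/CD portion centers on a Jenkinsfile driving a Maven build; a minimal declarative pipeline of the kind such a walkthrough covers might look like this (the stage names and the deployment script are illustrative assumptions, not taken from the meetup):

```groovy
// Minimal declarative Jenkinsfile: build the Maven project, then deploy.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh'   // hypothetical deployment step
            }
        }
    }
}
```

A GitHub webhook pointed at the Jenkins server then triggers this pipeline on every push, which is the automated-deployment loop the agenda describes.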
This document describes a 25-hour course on staying informed through digital press and radio. It explains the advantages of accessing news and information online quickly and conveniently. The course covers how to read the digital press, listen to digital radio, identify formats such as RSS and podcasts, and subscribe to news feeds. It provides links to several resources on digital press, digital radio, and definitions of RSS and podcasting.
This document describes a 10-hour training activity on using Google Earth. Participants will learn to use this satellite-imagery and mapping program to explore any place on Earth. The activity includes instructions on how to download and install Google Earth, watch tutorials on its use, and search for geographic points using coordinates.
This customer/service satisfaction questionnaire aims to assess customer satisfaction with the organization in order to identify areas for improvement. It covers topics such as performance, image, customer involvement, accessibility, and products/services, and asks for a rating from 1 to 5 for each item.
This document is an index of evidence with 20 numbered items, probably referring to documents related to a court case or investigation.
The document summarizes a presentation about the 5 commandments of social media strategy, tools, and culture. It discusses (1) listening on social media to understand audiences, (2) engaging with audiences by adding value and being conversational, (3) using social content like user-generated content, (4) generating buzz through multiple channels, and (5) building communities around shared interests. It emphasizes developing a strategic social media plan by identifying goals and audiences, and measuring success both quantitatively and qualitatively.
An introduction to the big picture of the Appcelerator Platform and the architecture and principles behind Titanium and Alloy to get you started. Created and presented by Pierre van de Velde and me at meetup.com/TitaniumNL.
Diseases are the limiting factor in all livestock production. Ideally they should be prevented; however, if a disease does appear, there is no choice but to identify and control it.
All living things are made of cells, which are the basic units of structure and function. Cells can be as tiny as bacteria or joined together to form complex multicellular organisms across different kingdoms. Living things must carry out vital functions like nutrition, response to the environment, and reproduction to survive and thrive as individuals or populations.
Unit 6. Randomized Complete Block Design - Verónica Taipe
This document describes a randomized block design to compare four soil moisture levels (0.40, 0.45, and 0.50 bar, plus no irrigation) and their effect on plantain yield. The experiment included four treatments and six replications in randomized blocks. The results showed significant differences among treatments, requiring a comparison of means to identify the most efficient moisture level.
Nutritional requirements in goats, by Diego Suarez
This document describes the nutritional requirements of goats. It states that dairy goats consume between 3-6% of their live weight in dry matter, with Alpine-type goats consuming up to 5 kg of dry matter per 100 kg of body weight. It also provides tables with the recommended daily requirements of energy, protein, calcium, phosphorus, and other minerals for goats at different productive stages such as maintenance, pregnancy, and lactation. Finally, it details the values
This document is a photo album of a visit by first- and second-grade primary school students to a fish farm in Saro, Spain, in June 2016. It consists of more than 100 pages of photos of the students observing and learning about the fish-farming process.
The document is a report on the field trip of the 5-year-old pupils of Colegio Santa María Micaela Santander to the Delicatessen La Ermita in February 2016. The report repeats the trip information 20 times.
The document provides information about Colegio Santa María Micaela Santander and its 1st-year ESO students for the 2015-2016 school year. It was created by the Centro Meteorológico in May 2016 and contains details about first-year compulsory secondary education students at that school during that academic year.
Manila, an update from Liberty, OpenStack Summit - Tokyo - Sean Cohen
Manila is a community-driven project that presents the management of file shares (e.g. NFS, CIFS, HDFS) as a core service to OpenStack. Manila currently works with a variety of storage platforms, as well as a reference implementation based on a Linux NFS server.
Manila is exploding with new features, use cases, and deployers. In this session, we'll give an update on the new capabilities added in the Liberty release:
• Integration with OpenStack Sahara
• Migration of shares across different storage back-ends
• Support for availability zones (AZs) and share replication across these AZs
• The ability to grow and shrink file shares on demand
• New mount automation framework
• and much more…
We'll also provide a quick look at what's coming up in the Mitaka release, with a Share Replication demo.
3-2-1 Action! Running OpenStack Shared File System Service in Production - Sean Cohen
As OpenStack’s Shared File System Service gains more and more adoption as one of the top emerging projects in OpenStack deployments (according to the latest OpenStack Foundation user survey), we would like to share some key customer use cases, such as DevOps, containers, and enterprise applications, as well as review the latest Newton release project updates toward delivering production-grade deployments.
Slides from OpenStack Summit Barcelona, October 25, 2016
Session video: https://www.youtube.com/watch?v=F5o-EbESNr8
OpenStack Manila is a project that provisions and manages shared file systems across storage systems through a REST API. It is based on OpenStack Cinder but addresses managing file shares rather than block storage volumes. The latest Train release of Manila introduced improvements to share networking, replication, and types as well as new drivers and enhancements. Looking ahead, the Ussuri release will focus on scalability, resilience, manageability, modularity, and other areas to further Manila's capabilities in large deployments and at the edge.
Presentation of the StratusLab cloud distribution at FOSDEM'13. It summarizes the current cloud services and their implementations. It concludes with the roadmap for upcoming releases.
This document provides an overview of Container as a Service (CaaS) with Docker. It discusses key concepts like Docker containers, images, and orchestration tools. It also covers DevOps practices like continuous delivery that are enabled by Docker. Specific topics covered include Docker networking, volumes, and orchestration with Docker Swarm and compose files. Examples are provided of building and deploying Java applications with Docker, including Spring Boot apps, Java EE apps, and using Docker for builds. Security features of Docker like content trust and scanning are summarized. The document concludes by discussing Docker use cases across different industries and how Docker enables critical transformations around cloud, DevOps, and application modernization.
Delivering IaaS with Open Source SoftwareMark Hinkle
Mark Hinkle presented on delivering Infrastructure-as-a-Service (IaaS) using open source software. He discussed various open source tools for building cloud computing including hypervisors like KVM and Xen, object storage solutions like OpenStack Swift, and automation/orchestration tools like CloudStack and OpenStack. Hinkle emphasized that open source solutions provide many advantages for cloud computing including lower costs, collaboration, and avoidance of vendor lock-in. He also covered management tools for private clouds and highlighted the importance of automation.
Zero to 1000+ Applications - Large Scale CD Adoption at Cisco with Spinnaker ...DevOps.com
As part of its Cloud-native transformation, Cisco needed to modernize its software delivery process. Scalability, multi-cloud deployment to its OpenShift environment and public clouds, and the ability to support Cisco’s extensive policy, compliance, and security requirements made open source Spinnaker a logical choice for a modern continuous delivery platform.
As one of the world’s top technology providers with one of the largest and most diverse software development organizations, Cisco had to overcome some unique challenges to be able to onboard 10,000+ developers, 1000+ monolithic and non-cloud native applications, and achieve the high availability and reliability needed to support mission-critical production applications.
Join us for this new webinar as Balaji Siva, VP of Products at OpsMx engages Anil Anaberumutt, IT architect at Cisco, and Red Hat Sr. Solutions Architect, Vikas Grover, in a discussion about Cisco’s CD challenges and the lessons learned, best practices implemented, and key results achieved on their CD transformation journey from zero to over 1000 applications.
To facilitate a variety of usage scenarios and gradually scale to a larger number of users, Galaxy supports deployment on systems ranging from a laptop to a supercomputer to clouds. In this talk, real-world examples of two different models for harnessing a variety of resources will be presented: (1) a centralized Galaxy utilizing a set of geographically distributed resources in support of a large user base, and (2) a model of easily deploying multiple standalone instances of Galaxy to support high resource demands or customizations by smaller groups. Together, these models showcase the capacity of Galaxy to support a variety of usage scenarios and a variable number of users with a variety of needs.
This document provides an overview of open source cloud computing presented by Mark R. Hinkle. It discusses key cloud concepts like virtualization formats, hypervisors, compute clouds, storage, platforms as a service, APIs, private cloud architecture, provisioning tools, configuration management, monitoring, and automation/orchestration tools. The presentation aims to educate about building clouds with open source software and managing them using open source management tools. Contact information is provided for Mark R. Hinkle for any additional questions.
The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...OpenStack
Audience Level
Intermediate
Synopsis
High performance computing and cloud computing have traditionally been seen as separate solutions to separate problems, dealing with issues of performance and flexibility respectively. In a diverse research environment however, both sets of compute requirements can occur. In addition to the administrative benefits in combining both requirements into a single unified system, opportunities are provided for incremental expansion.
The deployment of the Spartan cloud-HPC hybrid system at the University of Melbourne last year is an example of such a design. Despite its small size, it has attracted international attention due to its design features. This presentation, in addition to providing a grounding on why one would wish to build an HPC-cloud hybrid system and the results of the deployment, provides a complete technical overview of the design from the ground up, as well as problems encountered and planned future developments.
Speaker Bio
Lev Lafayette is the HPC and Training Officer at the University of Melbourne. Prior to that he worked at the Victorian Partnership for Advanced Computing for several years in a similar role.
CAPS: What's best for deploying and managing OpenStack? Chef vs. Ansible vs. ...Daniel Krook
Presentation at the OpenStack Summit in Tokyo, Japan on October 29, 2015.
http://sched.co/49vI
This talk will cover the pros and cons of four different OpenStack deployment mechanisms. Puppet, Chef, Ansible, and Salt for OpenStack all claim to make it much easier to configure and maintain hundreds of OpenStack deployment resources. With the advent of large-scale, highly available OpenStack deployments spread across multiple global regions, the choice of which deployment methodology to use has become more and more relevant.
Beyond the initial day-one deployment, when it comes to the day-two and beyond questions of updating and upgrading existing OpenStack deployments, it becomes all the more important to choose the right tool.
Come join the Bluebox and IBM team to discuss the pros and cons of these approaches. We look at each of these four tools in depth, explore their design and function, and determine which scores higher than others to address your particular deployment needs.
Daniel Krook - Senior Software Engineer, Cloud and Open Source Technologies, IBM
Paul Czarkowski - Cloud Engineer at Blue Box, an IBM company
CAPS: What's best for deploying and managing OpenStack? Chef vs. Ansible vs. ...Animesh Singh
Chef, Puppet, Ansible, and Salt are popular configuration management tools for deploying and managing OpenStack. Each tool has its own strengths and weaknesses. Chef focuses on infrastructure automation and uses a Ruby DSL. Puppet uses a custom DSL and is focused on compliance. Ansible emphasizes orchestration and uses YAML playbooks. Salt uses a Python-based interface and focuses on remote execution and data collection at scale. All four tools provide options for deploying and managing OpenStack, with varying levels of documentation and community support.
Linux-Stammtisch Juli 2019, Munich: Talk by Mario-Leander Reimer (@LeanderReimer, Principal Software Architect at QAware)
=== Please download slides if blurred! ===
Abstract: Only a few years ago the move towards microservice architecture was the first big disruption in software engineering: instead of running monoliths, systems were now built, composed, and run as autonomous services. But this came at the price of added development and infrastructure complexity. Serverless and FaaS seem to be the next disruption; they are the logical evolution, trying to address some of the inherent technology complexity we currently face when building cloud native apps.
FaaS frameworks are currently popping up like mushrooms: Knative, Kubeless, OpenFn, Fission, OpenFaaS, and OpenWhisk, to name just a few. But which of these is safe to pick and use in your next project? Let's find out. This session will start off by briefly explaining the essence of serverless application architecture. We will then define a criteria catalog for FaaS frameworks and continue by comparing and showcasing the most promising ones.
This document provides an overview of Kubernetes and containerization concepts including Docker containers, container orchestration with Kubernetes, deploying and managing applications on Kubernetes, and using Helm to package and deploy applications to Kubernetes. Key terms like pods, deployments, services, configmaps and secrets are defined. Popular container registries, orchestrators and cloud offerings are also mentioned.
Kubernetes is exploding in popularity right now and has all the buzz and cargo-culting that Docker enjoyed just a few years ago. But what even is Kubernetes? How do I run my PHP apps in it? Should I run my PHP apps in it?
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 PreviewChip Childers
Chip Childers is the VP of Apache CloudStack and Principal Engineer at SunGard Availability Services.
Apache CloudStack is open source software that can deploy and manage large networks of virtual machines as a scalable IaaS cloud platform. It is a top-level project at the Apache Software Foundation.
CloudStack enables cloud operators to design, install, support, upgrade and scale diverse cloud environments. It also allows application owners to easily consume infrastructure services so that infrastructure does not get in the way of delivering applications to end users.
Oscon 2017: Build your own container-based system with the Moby projectPatrick Chanezon
Build your own container-based system
with the Moby project
Docker Community Edition—an open source product that lets you build, ship, and run containers—is an assembly of modular components built from an upstream open source project called Moby. Moby provides a “Lego set” of dozens of components, the framework for assembling them into specialized container-based systems, and a place for all container enthusiasts to experiment and exchange ideas.
Patrick Chanezon and Mindy Preston explain how you can leverage the Moby project to assemble your own specialized container-based system, whether for IoT, cloud, or bare-metal scenarios. Patrick and Mindy explore Moby’s framework, components, and tooling, focusing on two components: LinuxKit, a toolkit to build container-based Linux subsystems that are secure, lean, and portable, and InfraKit, a toolkit for creating and managing declarative, self-healing infrastructure. Along the way, they demo how to use Moby, LinuxKit, InfraKit, and other components to quickly assemble full-blown container-based systems for several use cases and deploy them on various infrastructures.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
3. Agenda
• What is Manila?
• Why use Manila?
• Use Cases
• Sahara
• Containers
• Liberty Updates
• Distributions Integration
• SUSE Demo
• Red Hat Demo
• Upcoming in Mitaka
• Share Replication Demo
• Q+A
4. Manila: The OpenStack Shared File Service Program
Bringing self-service, shared file services to the cloud
5. Manila History
Beginnings: Community Inception
Juno: Incubated Project, Puppet Support, Share Servers, Tempest Integration
Kilo: Driver Modes, DevStack Plug-in, Storage Pools, Default Share Type, Manage/Unmanage, Manila UI
6. Manila Today - Production Ready
Number of Drivers: 14
Blueprints Completed: 51
Major Blueprints: Share Instances, REST API Microversions, Experimental APIs, Extend & Shrink, Consistency Groups (CGs), Share Migrations, etc.
7. Manila Deployment Options and Benefits
Driver modes: Single Storage Virtual Machine (SVM) or Multi SVM, selected with driver_handles_share_servers=False/True
Network Plugins: Standalone Network Plugin, Nova Network Plugin, Neutron Network Plugin
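For concreteness, a minimal manila.conf backend stanza showing where driver_handles_share_servers lives. This is an illustrative, untested fragment: the option names follow the Manila documentation, but the backend name and driver path are placeholders, not a recommendation.

```ini
# Illustrative fragment only - backend name and driver are examples.
[DEFAULT]
enabled_share_backends = backend1

[backend1]
share_backend_name = BACKEND1
share_driver = manila.share.drivers.generic.GenericShareDriver
# True:  Manila creates and manages share servers itself (multi-SVM mode)
# False: shares are exported from a pre-configured share server (single SVM)
driver_handles_share_servers = True
```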
17. Experimental APIs
Expected to change at any time; can be removed without a deprecation period.
Usage: the request needs to set the header "X-OpenStack-Manila-API-Experimental: true"
http://docs.openstack.org/developer/manila/devref/experimental_apis.html
http://developer.openstack.org/api-ref-share-v2.html

@api_version(min_version='2.1', max_version='2.9')
def show(self, req, id):
    .... stuff ....

@api_version(min_version='2.4', experimental=True)
def my_api_method(self, req, id):
    .... stuff ....
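To make the opt-in mechanics concrete, here is a small self-contained sketch — not Manila source code; beyond the two headers documented above, the function and variable names are invented for illustration — of how a client selects a microversion and how an experimental method is gated:

```python
# Sketch only: mimics Manila's microversion + experimental gating logic.
EXPERIMENTAL_HEADER = "X-OpenStack-Manila-API-Experimental"
VERSION_HEADER = "X-OpenStack-Manila-API-Version"

def request_headers(version, experimental=False):
    """Headers a client sends to select an API microversion."""
    headers = {VERSION_HEADER: version}
    if experimental:
        headers[EXPERIMENTAL_HEADER] = "true"
    return headers

def method_available(requested, min_version, experimental_api, opted_in):
    """Roughly what @api_version checks: version range plus the opt-in flag."""
    def parse(v):
        major, minor = v.split(".")
        return int(major), int(minor)
    if parse(requested) < parse(min_version):
        return False
    # Experimental methods additionally require the opt-in header.
    return not experimental_api or opted_in

# my_api_method (min 2.4, experimental) needs both the version and the header:
hdrs = request_headers("2.4", experimental=True)
print(method_available("2.4", "2.4", True, EXPERIMENTAL_HEADER in hdrs))
```

The two-part gate mirrors the decorator above: the requested microversion must fall in the method's range, and experimental methods are hidden unless the caller explicitly opts in.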
19. Consistency Groups (CGs)
Grouping different shares together for the purpose of application data protection (focus: snapshots for disaster recovery).
Example use case: database data and log files live on different shares; when snapshotting both shares, the data on them must be consistent.
Supported actions: create a CG with multiple shares; create a snapshot of a CG; create a CG from a snapshot.
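The use case above can be sketched in a few lines. This is a toy model, not the Manila API — the class, field, and share names are invented for illustration:

```python
# Toy model of a consistency group: one snapshot call captures every member
# share at the same point in time, keeping e.g. db data and db logs consistent.
from dataclasses import dataclass, field

@dataclass
class ConsistencyGroup:
    name: str
    shares: list = field(default_factory=list)

    def snapshot(self, snap_name):
        # All members are snapshotted together as a single unit.
        return {"cg": self.name,
                "name": snap_name,
                "members": [f"{share}@{snap_name}" for share in self.shares]}

cg = ConsistencyGroup("db-cg", ["db-data", "db-logs"])
snap = cg.snapshot("pre-upgrade")
print(snap["members"])
```

The point of the grouping is that a member share can never be snapshotted at a different moment than its siblings, which is exactly what the database data/log example requires.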
21. Oversubscription
A tunable was added for setting provisioned capacity and a subscription ratio.
Addresses 'infinite' and 'unknown' driver-reported capacities that may lead to oversubscription.
thin_provisioning support is needed.
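The arithmetic behind the ratio is simple. A hedged sketch follows — the function and parameter names are illustrative, though max_over_subscription_ratio is a real Manila configuration option:

```python
# Thin provisioning lets a pool promise more than its physical capacity,
# capped at total_gb * max_over_subscription_ratio.
def can_host(share_size_gb, total_gb, provisioned_gb, max_over_subscription_ratio):
    """Would adding this share keep the pool within its oversubscription cap?"""
    return provisioned_gb + share_size_gb <= total_gb * max_over_subscription_ratio

# A 100 GB pool at a 20x ratio may promise up to 2000 GB in total:
print(can_host(100, 100, 1900, 20.0))   # 1900 + 100 <= 2000
print(can_host(200, 100, 1900, 20.0))   # 2100 > 2000
```

This is also why 'infinite' or 'unknown' reported capacities are a problem: without a concrete total, the cap on promised capacity cannot be computed.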
22. Share Migrations
Share Migration allows a share to be migrated from one host pool to another through the "manila migrate <share> <host#pool>" command, and also allows migration between different backends.
Basic implementation: the fallback approach for migration is rsync (slow, inefficient).
Vendors can utilize the API for optimized migration.
23. Availability Zones
The availability zone support inherited from Cinder was reworked this cycle:
Added a public API extension.
Allow preserving the AZ when creating a share from a snapshot, and setting the AZ in the Share API or Share Manager.
AZs will benefit share replication and also give end users control over the locality of their data w.r.t. the consumers of that data.
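As a sketch of what AZ-aware placement means in practice, here is an illustrative scheduler-style filter — not Manila code; the pool dictionaries and function name are invented:

```python
# Keep only the pools located in the requested availability zone, e.g. so a
# share created from a snapshot can land in the snapshot's AZ.
def pools_in_az(pools, az):
    return [pool for pool in pools if pool["availability_zone"] == az]

pools = [{"name": "pool1", "availability_zone": "az1"},
         {"name": "pool2", "availability_zone": "az2"}]
print(pools_in_az(pools, "az1"))  # only pool1 qualifies
```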
24. Sahara Integration
Use cases:
Storing binaries for job templates - NFS is ideal for this case
Input and output data sources - Manila-provisioned HDFS and NFS offer more options
Mount NFS share API: binaries and data I/O from an NFS share path
New development in Sahara this cycle:
Mount shares at cluster creation, or auto-mount when a share is used for EDP
Manila-provisioned HDFS
Data sources and data processing on Sahara-external clusters
Testing and process verification of extant Manila features this cycle
Coming soon - NFS Hadoop driver (run jobs on your NFS shares)
25. Sahara Integration - Current Implementations
Data sources on Manila-provisioned HDFS
API to mount NFS shares to clusters (job binaries and data sources)
Images by Weiting Chen (Intel)
26. Manila + Containers Ceph Example
Simply mount --bind the share into the container namespace.
NFS re-export from the host: mount and export the filesystem on the host; a private host/guest network avoids the network hop from the NFS service VM.
Host mounts CephFS; bind the Manila share/volume into the container.
Further integration requires working with both Nova and Manila to manage the attach/detach process.
[Diagram: a container on a host, with the host mounting native CephFS (ceph.ko) from a RADOS cluster; Manila and Nova manage the share and container attachment.]
28. Manila in SUSE OpenStack Cloud
The Manila service is tech preview in SUSE Cloud 5
Fully supported in SUSE OpenStack Cloud 6
Crowbar deployment tool integration
Controller HA
NetApp driver
Custom driver possible
30. Manila in RHEL OpenStack Platform 7
The Manila service is tech preview in RHEL OpenStack Platform 7
RHEL OpenStack director deployment tool
Offers integration with GlusterFS native, Gluster NFS, and NetApp drivers
Manila certification program in RHEL OpenStack Platform 8
Introducing NFS-Ganesha and Gluster Automated Volume Management (based on Heketi)
33. Upcoming in Mitaka
Mount Automation
Rolling Updates
Export Location Metadata
Manila QoS
Capability Lists
Interaction Between New Features
Share Migration Loose-ends
Remove All Extensions
Architectural Directions For New 1st-party Drivers
Share Replication
34. Non-Disruptive Operations / High Availability
Availability Zones:
Failures within an AZ - High Availability Solution: Clustered Storage
Failure of an AZ - High Availability Solution: Share Replication
Manila State of the Art: Share Replication
37. GET INVOLVED WITH MANILA!
Manila Resources
https://github.com/openstack/manila
https://github.com/openstack/python-manilaclient
https://github.com/openstack/manila-ui
https://github.com/openstack/manila-image-elements
https://wiki.openstack.org/wiki/Manila
https://launchpad.net/manila
#openstack-manila on IRC (Freenode)
Weekly meetings @ Thursday, 15:00 UTC
NetApp: http://netapp.github.io
Red Hat: https://www.redhat.com/en/insights/openstack
Suse: https://www.suse.com/products/suse-cloud/
38. MANILA RELATED SESSIONS
MANILA GENERAL SESSIONS
Manila and Sahara: Crossing the Desert to the Big Data Oasis: Tuesday, Oct 27 12:05pm
OpenStack Manila Hands-on Lab Session: Tuesday, Oct 27 2:00pm
The State of Ceph, Manila, and Containers in OpenStack: Wednesday, Oct 28 4:40pm
UPCOMING MANILA SESSION
Manila contributors meetup: Friday, Oct 30, 9:00am
Welcome to this session at the end of the day on Thursday at OpenStack Summit - Tokyo. We hope you had a great Summit and want to thank you for being here with us this late on Thursday. This session is Manila – An Update from Liberty.
I am Akshai Parthasrathy, Technical Marketing Engineer for all things Cloud Computing and OpenStack at NetApp. Here with me are Sean Cohen, Principal Product Manager from Red Hat, and Tom Bechtold, OpenStack Cloud Engineer from SUSE.
Let’s take a look at the agenda for this talk. Some of you may recognize Manila as a city in the Philippines or relate it to those manila folders in your filing cabinet. We will introduce Manila to you in the context of OpenStack. Next, we’ll cover why you want to use Manila - the advantage, or value. There are many use cases for Manila, and we’ll talk about some of them shortly. We’ll then jump into the main topic of this session - the work that was done for Liberty. Sean and Tom will take you through Manila integration into RHEL OSP and SUSE Cloud, along with two demos. We then talk about features in Mitaka, and I’ll close off with a third demo of a state-of-the-art feature called Manila replicas.
Very simply, Manila is for file shares what Cinder is for block storage.
Through a self-service, open REST API, Manila dispenses shared file systems to the tenants of a cloud. Using Manila, we can get a 1GB NFS share and specify the network range that should have access to it. Or we can provision a 1TB CIFS share, authenticate with Active Directory, and ensure specific tenant networks have access to the share.
So we’re dealing with shared file-systems in Manila. Therefore, in Manila, unlike Cinder, we have a networking component. We may want to export an NFS share only to that particular Neutron or Nova network for a tenant. So, there is a little bit more magic behind the scenes to make sure that the storage that’s behind that filesystem can access that network and do it securely.
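To ground the self-service REST API described above: a share-create call is a POST whose JSON body looks roughly like the following. The {"share": {...}} shape and field names follow the published Manila v2 API reference, but treat this helper as a sketch, not a verified request:

```python
import json

def share_create_body(share_proto, size_gb, name=None):
    """Build the JSON body for POST /v2/{project_id}/shares (sketch)."""
    share = {"share_proto": share_proto, "size": size_gb}
    if name is not None:
        share["name"] = name
    return json.dumps({"share": share})

# The 1 GB NFS share from the example above:
body = share_create_body("NFS", 1, name="demo-share")
print(body)
```

Access rules (the network range or Active Directory identity mentioned above) are granted in a separate call after the share exists, which is part of the extra networking "magic" Manila handles compared to Cinder.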
The first time people started hearing about Manila was at the OpenStack Atlanta Summit, back in May 2014. At that time, the program was overflowing with interest and we were extremely motivated to continue working in the 6-month cadence. We introduced the Manila capability in Juno and submitted it for consideration as a core service in Kilo. Manila went through cycles of continuous development and rigorous testing, and we persisted in putting out a significant number of new feature releases and bug fixes in Juno and Kilo.
So, let’s get to where we’re at today. Most importantly, Manila is production ready. We have a total of 14 storage drivers for Manila. There were 51 total features (or blueprints) completed. We went down slightly on this number from Kilo, but that was because we spent more time on bug fixes this time around. We had a total of 184 completed bug fixes. The trends are on the right.
We have rolled out a number of new features for you to take advantage of. They include share instances, REST API Microversions, Experimental APIs, and others. Tom and Sean will lead you through these soon.
Today, a Manila share driver may be configured in one of two modes. We can either use Manila to manage the lifecycle of share servers on its own or use Manila to merely provide storage resources on a pre-configured share server. This mode is defined using the boolean option driver_handles_share_servers in the Manila configuration file. It provides flexibility to deploy Manila shares the way you would like and is available with NetApp Clustered Data ONTAP today.
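As a sketch, that mode selection lives in the backend section of the Manila configuration file. The option name `driver_handles_share_servers` comes from the text above; the section name and other options here are illustrative:

```ini
[netapp_backend]                       ; illustrative backend section name
share_backend_name = netapp_backend
; True:  Manila manages the lifecycle of share servers itself
; False: Manila only provisions shares on a pre-configured share server
driver_handles_share_servers = True
```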
The Manila architecture also includes three concrete network plugins. This allows operators to choose from a variety of options for how network resources are assigned to their tenants’ networked storage. The three plugins are the Standalone Network Plugin for pre-configured networks, the Nova Network plugin for Nova networks, and most importantly, the Neutron Network plugin for Neutron networks. Each of these plugins supports a variety of segmentation options.
Here’s the breakdown of contributions to Manila, as of Liberty. NetApp, Red Hat, and SUSE are major contributors to the program, and we welcome new contributors from the community. It should also be pointed out that NetApp has been a pioneer in the Manila project and a leader through all its releases.
For the Liberty release, we would like to thank the following members of the community: CloudBase, Fujitsu, Scality, NEC, NTT, and Letv Cloud Computing.
You’ve had a look at what Manila is. Let us take a look now at why you want to be using Manila.
There is an explosion of data today. Estimates for spending on file-based storage solutions suggest it will reach north of $34.6 billion in 2016. A diverse range of applications depends on performance, scalability, and simplicity of management. OpenStack, as the leading open source IaaS capability, with Manila as a service, is a production-ready option for deploying infrastructure with file-based services.
So the question really becomes why not use Manila with OpenStack?
There are numerous use cases for Manila, the file share services project. These include: Standalone File Services Management, Enterprise Applications, DevOps, Sahara, Containers, Database as a Service, Automation and Integration with the Manila API, Heat, Hybrid Cloud Shares, and much more. Let’s dive into some of these now.
The first use case we’ll cover is Standalone File Services Management. One of the things we hear as we talk to NetApp customers is that a lot of them have someone in a back office who has written a set of Perl scripts to dispense shares. He’ll go in and run a Perl script to dispense a CIFS share when a new request comes in. There are probably one or two people in a company who know how the script works, and if they decide to take another job, or you switch to a different technology, you’ve got to re-invent that entire infrastructure. Not to mention that such changes break the consumer interface. So, we see a lot of interest in Manila for replacing those home-grown legacy systems with an open, standard API that is production ready. Manila provides the same level of self-service - to create a share, delete a share, take snapshots, and take other actions - in a completely vendor-agnostic framework.
Another use case we want to address is the movement of existing enterprise applications to OpenStack.
One of the common things we hear from customers is: “I’m using a virtualization technology that works great and has a lot of good functionality, but is really expensive.” A big reason to move to OpenStack is simply cost. IT budgets aren’t growing much, you want to do more with less, and OpenStack provides a pretty compelling value proposition to make that a reality. Manila ensures that workloads built assuming the existence of a shared file system can move over to OpenStack and get all the cost benefits that OpenStack or KVM might afford us.
Another reason is that I don’t have to re-write those apps to use an object store, for example. Perhaps over the next 5 or 10 years I will rewrite those apps to a Swift or S3 interface. But frankly, there are a lot of apps that work today with file-based services, and people don’t want to fix what ain’t broke. Manila provides a way to move those apps to the OpenStack infrastructure and leverage the cost benefits.
The #1 use case for Manila is DevOps. You can have clones of Manila shares in a snap, and this dramatically speeds up the lifecycle in a fast-paced DevOps environment.
OpenStack is all about being a pluggable infrastructure. We can take any type of backend and expose it using share types. The use of the storage service catalog allows you to separate the needs of your file-share workloads. You can direct your IT archives to a different share type using the Manila API – you don’t need SSDs for your archives. We can provide you the storage you need when you need it, but also make sure that it is stored on the right back-end.
For analytics, you have a HDFS driver since the Kilo release. You can scale out your analytics workloads using Manila.
You can even have Manila file shares mapped to containers. Sean will cover both Analytics with Sahara and Containers later in this talk.
A new feature we introduced in Liberty is Consistency Groups, or CGs. With CGs, you can take snapshots of applications and databases that are consistent and taken at the same point in time. This is a great feature to have for multi-tiered applications.
If you bring it all together - Manila File Services Management, Enterprise Applications, DevOps, Continuous Integration, Analytics, and Containers, we can rock with OpenStack Manila.
Let me now pass it over to Tom and Sean to lead us through important features implemented in Liberty.
Thanks, Akshai. I’m going to talk about new features together with Sean.
A lot happened during the Liberty cycle!
For example: oversubscription, consistency groups, driver hooks, and more.
We also adapted and integrated more with other components. For example, we use a Tempest plugin, microversions, and diskimage-builder elements.
So, in general, a lot of useful features for users, administrators, and driver developers.
New concept in Liberty
Needed for share migration and replication
Only visible to administrators; users don’t notice this change
The main goal of share instances is to decouple the share UUIDs, which are visible to the user, from the UUIDs of share instances, which are visible to drivers.
That way the driver can create, delete, or switch between share instances without changing what the user sees.
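The decoupling can be illustrated with a small sketch (plain Python, not Manila code): the user-facing share keeps one stable UUID while the driver-facing instances behind it come and go.

```python
import uuid

class Share:
    """User-facing share: its UUID never changes."""
    def __init__(self):
        self.id = str(uuid.uuid4())   # what the user sees
        self.instances = []           # what the drivers see
        self.active = None

    def add_instance(self):
        inst = str(uuid.uuid4())      # driver-side UUID, independent of self.id
        self.instances.append(inst)
        if self.active is None:
            self.active = inst
        return inst

    def switch_active(self, inst):
        """E.g. after a migration or replica failover."""
        assert inst in self.instances
        self.active = inst

share = Share()
user_view = share.id
first = share.add_instance()
second = share.add_instance()
share.switch_active(second)           # driver-side connection changed
print(share.id == user_view)          # True: the user never noticed
```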
Implemented in Nova during the Kilo cycle
Adopted by Manila, and other projects will adopt the concept too
Makes it possible to evolve the API incrementally, even with backwards-incompatible changes
How does it work:
The client sends the highest version it supports
The server uses that API version and returns the expected return values
This gives developers the ability to try things out, and lets APIs evolve
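A toy model of that negotiation, simplifying the real microversion scheme to its core rule (the version-selection logic here is an illustration, not the actual spec):

```python
# Simplified sketch of microversion negotiation: the client advertises the
# highest version it speaks; the server answers at that version, capped at
# its own maximum, or refuses if the client is below its minimum.

def negotiate(client_max, server_min, server_max):
    """Return the API version the server will use, or None if incompatible."""
    def key(v):  # turn "2.7" into (2, 7) so "2.15" sorts above "2.7"
        major, minor = v.split(".")
        return (int(major), int(minor))
    if key(client_max) < key(server_min):
        return None                      # client too old for this server
    return min(client_max, server_max, key=key)

print(negotiate("2.7", "2.0", "2.15"))   # 2.7  (client's cap wins)
print(negotiate("2.99", "2.0", "2.15"))  # 2.15 (server's cap wins)
print(negotiate("1.0", "2.0", "2.15"))   # None (incompatible)
```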
Users can play with new APIs now
Slides with the blue test tube are talking about experimental APIs
The plan for the Mitaka release is to remove all extensions and fold them into the core API
Shrink is only supported via the CLI
Not all drivers support shrinking
Extend-share support for the HDFS_native driver is planned for Mitaka
With CGs you can group different shares into a consistency group. This is needed for data protection and data consistency.
Example: database data and application data or logfiles live on different shares. Snapshots of both shares need to be taken at the same point in time, when the data is consistent. CGs do exactly that.
So you can now: group shares into a CG, snapshot a CG, and create a new CG from a snapshot for data recovery.
new capability: “consistency_group_support”. Possible values:
None - No support for CGs
host - shares in a CG must be on pool(s) on the same host that also match the CG share type
pool - shares in a CG must live in the same pool as the CG
Unlike Cinder, snapshots in a CGsnapshot are not the same as a normal snapshot. A CGsnapshot is treated as a single unit instead of a collection of snapshots.
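Conceptually, a CG snapshot pins every member share to one point in time and is handled as one unit. A toy sketch of that idea (not Manila’s implementation):

```python
import time

def snapshot_cg(shares):
    """Snapshot every member share at one logical point in time."""
    point_in_time = time.time()          # a single timestamp for the group
    return {
        "taken_at": point_in_time,
        "members": [{"share": s, "taken_at": point_in_time} for s in shares],
    }

# The database/application example from above: both shares land in one
# CG snapshot sharing the same point in time.
snap = snapshot_cg(["db-data", "app-logs"])
print(len({m["taken_at"] for m in snap["members"]}))  # 1: consistent group
```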
The API is experimental
We now have external CI for all our drivers
This was a huge effort by all the vendors - so thanks a lot to all driver vendors for the hard work
Tempest tests are now running for all the drivers for every changeset
Minimum CI requirements come into effect during the Mitaka cycle
Removed drivers:
Hitachi HDS SOP (Scale out platform)
You can now oversubscribe your available capacity. For that, the backend needs to support thin provisioning.
max_over_subscription_ratio
The default is 20 (provisioned capacity can be 20 times the total physical capacity)
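The ratio acts as a simple admission check, roughly like this sketch (not the actual scheduler code):

```python
# Oversubscription check implied by max_over_subscription_ratio: with thin
# provisioning, total provisioned capacity may exceed physical capacity,
# but only up to the configured ratio.

def can_provision(requested_gb, provisioned_gb, physical_gb,
                  max_over_subscription_ratio=20.0):
    """True if the new share still fits under the oversubscription cap."""
    cap = physical_gb * max_over_subscription_ratio
    return provisioned_gb + requested_gb <= cap

print(can_provision(100, 1800, 100))   # True: 1900 GB <= 2000 GB cap
print(can_provision(300, 1800, 100))   # False: 2100 GB exceeds the cap
```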
The API is experimental
Example use case: maintenance on one backend, so move the share to another backend
Should work with any backend that doesn’t have share servers
Share retype is not yet implemented
Manila, at its core, provides basic provisioning and management of file shares to users of an OpenStack cloud. The Sahara project provides a framework to expose Big Data services such as Spark and Hadoop. Together these two projects create a solution that is greater than the sum of its parts.
Natural synergy and popular demand led these three teams to develop a joint solution exposing Manila file shares within the Sahara construct to solve real Big Data challenges.
The share mount API does require a network copy into local HDFS for data sources.
Manila-provisioned HDFS: insecure cluster integration only at this time.
** Make reference to the Tuesday demo showing a Sahara data processing job running with binaries, data sets, and results hosted in Manila file shares mounted on a Sahara cluster.
The current target is to use the new zero-configuration VSOCK sockets, which require no configuration on the guest (so we can continue to treat it as a black box) and only a simple network ID assignment on the host.
SUSE OpenStack Cloud 5 has Manila as tech preview
Version 6, expected in January, will have Manila fully supported
I’m going to show a demo video of the deployment integration in the current Cloud 6 beta version
Crowbar used as deployment tool
Chef
Barclamps
HA integration
Using GlusterFS via NFS-Ganesha, the Manila file shares are abstracted from underlying hardware and can be grown, shrunk, or migrated across physical systems as necessary.
Storage servers can be added or removed from the system dynamically with data rebalanced across the trusted pool of servers, where the data is always online – this addresses the File Share Elasticity required to provide a Scale-out / Scale-down NAS on demand.
It also delivers File Share Availability as GlusterFS enables you to replicate the whole storage system between different data centers and across geographic locations.
Let us now talk about the upcoming features in Mitaka
Note: don’t deep dive into each one. Choose 1 or 2 and briefly cover others.
So here is a list of upcoming features we have for you in mid 2016.
Mount Automation: there has been a lot of interest in having Manila shares automatically mounted on instances, and a significant amount of work is already in place. We’re going to complete it for Mitaka.
Rolling upgrades, to spin up new API services without any disruption, are also scheduled for the next release.
I’ll leave this up there for a few seconds. If you have questions about any of these, feel free to see me after this session.
As with any service in an enterprise or the Cloud, we’re always looking for non-disruptive operations. The way we provide this is through high availability or HA.
In OpenStack we have the concept of Availability Zones. We already have the technology to provide HA within an AZ today. This is achieved through clustering of storage controllers, including Clustered Data ONTAP. Ok, so you can tolerate a malfunctioning storage unit or loss of network connectivity to a storage controller using clustering magic.
But, what if the entire AZ goes down, say due to loss of power? You want to have failover and failback from one data center or Availability Zone to another. This is a really hot, state-of-the-art feature (it was just finalized a week ago for Mitaka), and I’ll walk you through a demo.
Let me show you state of the art for Manila - Share Replication.
Get involved with Manila today. We always welcome new members to the community. Here are all the links for Manila, NetApp, Red Hat, and SUSE. The NetApp website is netapp.github.io, and the Red Hat and SUSE links are here as well. Please visit them to learn more.
Here are the Manila-related sessions that you can refer to on YouTube. There is also a meetup tomorrow - please attend.
With that, I would like to call this talk to a close and open up the floor for any questions you may have. Please provide any feedback you may have through the OpenStack Summit App. Thank you for staying with us till the end of this Tokyo Summit.