HPCNow! outlines their dynamic provisioning of Hybrid nodes, used primarily for HPC. OpenNebula is a fundamental component, offering the desired flexibility and ease.
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebula Project
At Iguane Solutions, many of our "DevOps" tools are developed in Go, and we have solid experience contributing to Goca, the OpenNebula Go client library. I'll review the contributions we have made, as well as how we use Goca with different tools, on a daily basis, to manage and monitor our OpenNebula cloud.
I will delve into the concept of Infrastructure as Code (deploying VM instances in the cloud) and also address metrics collection for deployed VMs. Finally, I will present how we can abstract VM management with automation tools thanks to Goca.
Our take on centralized and controlled VM image backups that deal with both Ceph and local QCOW2 datastores. As OpenNebula provides no built-in mechanism for image backups, I'd like to share our perspective on how we do it.
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...OpenNebula Project
The document discusses disaggregated data centers using OpenNebula. It describes how OpenNebula allows for scalability through elasticity and avoids issues from human/configuration errors. It discusses types of scalability like predictable, mixed/emergency, and unpredictable scalability. It also briefly discusses provisioning tools like Oneprovision and using provision templates in YAML format.
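To make the last point concrete, a provision template of the general shape used by Oneprovision might look like the sketch below. This is an illustrative assumption only: the field names and provider values follow the rough structure of OpenNebula provision templates, but the exact schema varies by OpenNebula version and provider.

```yaml
# Hypothetical Oneprovision template sketch (keys and values are illustrative)
name: edge-cluster
playbook: default          # post-provision configuration playbook
defaults:
  provision:
    driver: packet         # assumed bare-metal provider driver
    facility: ams1         # provider-specific location
    plan: baremetal_0      # provider-specific hardware plan
hosts:
  - im_mad: kvm            # information/monitoring driver
    vm_mad: kvm            # virtualization driver
    provision:
      hostname: "edge-host-1"
```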
NetApp’s Hybrid Cloud Infrastructure manages to leverage Kubernetes to a Hybrid Multi Cloud use case where OpenNebula integrates seamlessly. A technical deep dive in how NTS and NetApp integrated NTS Captain into NetApp’s DataFabric world on top of NetApp HC
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...OpenNebula Project
I will be presenting the ongoing advances of the OnLife Networks project across Spain and Brazil, with a focus on use cases we have implemented in the Central Offices, which serve as the edge resources closest to the end user. I will share a synopsis of the project's evolution, as well as several lessons learned.
This document provides an introduction to OpenNebula, an open-source software solution for building and managing private and hybrid clouds. Some key points:
- OpenNebula has been downloaded over 210,000 times in the last two years and is used to power over 3,000 clouds, including the largest with 270,000 cores.
- It provides a turnkey solution for building clouds and virtualizing data centers with a single, upgradable package that is lightweight, flexible, robust, and powerful.
- Features include virtual infrastructure management, cloud orchestration, multi-tenancy, elastic provisioning, and hybrid cloud integration.
- The OpenNebula community has over 1,000 members.
A deep insight into a project with codename "TARDIS" at HAUFE Lexware with the purpose to replace vCloud with OpenNebula. A technical deep dive into a focussed project done by real DevOps experts.
This document provides an overview of cloud native storage. It discusses how storage is a key component of cloud native reference architectures and how container-based applications require persistent storage volumes. It introduces the concept of out-of-tree storage plugins that allow various storage platforms to integrate with container orchestrators. The document also outlines common cloud native storage patterns, such as giving containers persistent volumes, and how this enables portability across infrastructure providers. Finally, it provides examples of how storage classes, persistent volumes, and persistent volume claims can be used to provision storage for pods running in containers.
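The StorageClass / PersistentVolume / PersistentVolumeClaim pattern described above can be sketched with standard Kubernetes objects. The names and the provisioner below are illustrative assumptions, not taken from any specific storage platform:

```yaml
# A StorageClass advertises how volumes are provisioned.
# The provisioner here stands in for an out-of-tree storage plugin.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: example.com/nvme   # hypothetical out-of-tree provisioner
---
# A PersistentVolumeClaim requests storage from that class;
# the matching PersistentVolume is created dynamically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi
---
# A Pod mounts the claim, decoupling the app from the storage backend.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```

Because the Pod references only the claim, the same manifest can be moved to another cluster whose StorageClass is backed by a different provider, which is the portability argument the document makes.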
OpenNebulaConf2017US: Welcome and project update by Ignacio M. Llorente and R...OpenNebula Project
We're moving into a world of open cloud, where each organization can find the right cloud for its unique needs. A single cloud management platform cannot be all things to all people; there will be a cloud space with several offerings focused on different environments and/or industries. The OpenNebula commitment to the open cloud flows directly out of its mission, to become the simplest cloud-enabling platform, and its purpose, to bring simplicity to the private and hybrid enterprise cloud. OpenNebula exists to help companies build simple, cost-effective, reliable, open enterprise clouds on existing IT infrastructure. The OpenNebula Conference will be a great opportunity to revisit our vision and commitment, to look back at how the project has grown in the last 8 years, and to give a peek at what to expect from the project in the near future.
OpenNebula is an open-source cloud computing tool for managing virtualized infrastructure in a data center. It allows for both private and hybrid cloud deployments. The presentation provided an overview of OpenNebula's architecture and components, how to develop drivers to integrate different technologies, and ways to interact programmatically through APIs and scripting. It also discussed how OpenNebula is used by hosting companies, technology providers, and research organizations to deploy cloud services.
Welcome talk unleashing the future of open-source enterprise cloud computingNETWAYS
The OpenNebula Project has come a long way since the first “technology preview” of OpenNebula almost six years ago. During these years we’ve witnessed the rise and hype of the Cloud, the birth and decline of several virtualization technologies, but especially the encouraging and exciting growth of OpenNebula, both as a technology and as an active and engaged community. As a meeting point for OpenNebula users, developers, administrators, builders, integrators and researchers, this Conference represents an opportunity to look back at how the project has grown in the last six years, and to give a peek at what to expect from the project in the near future.
OpenNebula Conf 2014 | The rOCCI project - a year later - alias OpenNebula in...NETWAYS
Last year, during the first OpenNebula Conference we briefly talked about interoperability in the cloud, introduced the OCCI standard/protocol and focused on one of its implementations — The rOCCI framework. We positioned this framework as the go-to interface for providing interoperability in OpenNebula with significant plans for future development and improvement.
OpenNebula Conf 2014 | Practical experiences with OpenNebula for cloudifying ...NETWAYS
Our team manages a SaaS platform for Business Intelligence and Analytics applications using a diverse set of middleware (mostly IBM). However, the original setup of this platform did not use a cloud architecture. In this talk we describe the reasons for selecting OpenNebula, the architecture of the new setup, the process of migrating to that new setup, and the lessons we learned during that process and in the daily operation of the platform. Finally, this talk will also cover our vision for the next step: moving towards a hybrid cloud setup.
The Nuage Networks Virtualized Cloud Services solution integrates seamlessly with CloudStack’s advanced networking, supporting Shared Networks, Isolated Networks and VPCs for KVM and ESXi hypervisors. The integration is bidirectional: networks can be provisioned in CloudStack and programmed into Nuage, or provisioned as advanced networking topologies within Nuage and consumed from CloudStack. Empowered with SDN, operators can boost their clouds with networking scalability and performance that native VR networking cannot match.
OpenNebula Conf 2014 | State and future of OpenNebula - Ignacio LlorenteNETWAYS
This document summarizes the keynote presentation given at the OpenNebulaConf 2014 conference. It discusses the history and growth of OpenNebula from its beginnings in 2005 to the present, including major releases that have addressed users' needs and increasing numbers of downloads, deployments, and community contributors. It highlights OpenNebula's commitments to openness, simplicity, reliability, and flexibility for users.
OpenNebula Conf 2014 | From private cloud to laaS public services for Catalan...NETWAYS
Nowadays, Catalan academic and research institutions can enjoy self-service cloud infrastructure to meet their application needs in a flexible pay-per-use mode. A self-service platform is available for managing servers, networks, and assigned storage from the Universities Consortium data centers, giving users access to a customizable infrastructure oriented towards the so-called virtual DC.
Intro to Project Calico: a pure layer 3 approach to scale-out networkingPacket
Slide presentation from the April 16th, 2015 Downtown NY Tech Meetup hosted at Control Group and presented by Christopher Liljenstolpe from Project Calico (www.projectcalico.org)
Project Calico is a scale-out networking fabric for bare metal, container, VM, and hybrid environments. Project Calico leverages the same networking techniques used to scale out the Internet to present a highly scalable L3 network for those environments without the use of tunnels, overlays, or other complex constructs. We'll also do a demo of a Calico-enabled Docker environment, and have plenty of time for Q&A during and after.
About Christopher Liljenstolpe
Christopher is the original architect of Project Calico and one of the project's evangelists. In his day job, he's the director of solutions architecture at Metaswitch Networks. Prior to Calico/Metaswitch, he designed and ran some bio-informatics OpenStack clusters, did SDN architecture work at Big Switch Networks, ran architecture at two large carriers (Telstra - AS1221, and Cable & Wireless/iMCI - AS3561), and was the IP CTO for Alcatel in Asia. He's also run networks in Antarctica (hint: bend radius becomes REALLY important at -50°C), and been foolish enough to do a stint as a working-group co-chair in the IETF. Occasionally you can have the (mis-)fortune of hearing him speak at conferences and the like.
OpenNebula TechDay Boston 2015 - An introduction to OpenNebulaOpenNebula Project
OpenNebula is an open-source tool for building private clouds on existing infrastructure that focuses on flexibility, simplicity, and being sysadmin-centric. It provides a single platform for automating and orchestrating enterprise clouds. OpenNebula has been downloaded over 150,000 times and is used to run over 3,000 production clouds, including the largest with 270,000 cores. It has been under development as an open community for over 7 years.
rOCCI – Providing Interoperability through OCCI 1.1 Support for OpenNebulaNETWAYS
OCCI (Open Cloud Computing Interface) [1] is an open protocol for management tasks in the cloud environment focused on integration, portability and interoperability with a high degree of extensibility. It is designed to bridge differences between various cloud platforms (or cloud middleware) and provide common ground for users and developers alike.
The rOCCI framework [2], originally developed by GWDG [3], was written to simplify the implementation of the OCCI 1.1 protocol in Ruby and later provided the base for a working client and server implementation targeting OpenNebula as its primary back-end cloud platform. The initial server-side implementation provided basic functionality and served as a proof of concept when it was adopted by the EGI Federated Cloud Task Force [4] and chosen to act as the designated VM management interface. This led to further funding from EGI-InSPIRE [5] and involvement of CESNET [6].
This talk aims to provide basic information about the OCCI protocol, introduce its implementation in rOCCI, and describe and/or demonstrate some of the functionality provided by the rOCCI client and rOCCI-server in concert with OpenNebula. It also briefly examines its use in the EGI FedCloud environment and explores the possibility of further integration with OpenNebula, as part of the ON ecosystem or even as an integral part of OpenNebula itself in the future. All this with interoperability in mind.
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...NETWAYS
Kishore works with the engineering team building the open-source product, with a future-focused cloud technical strategy, for “Megam – Cloud Automation Platform” (http://gomegam.com). In a prior incarnation, Kishore worked as an architect on complex system-integration projects for high-availability airport systems. He also has extensive experience architecting large-scale build and packaging tools for the mainframe platform, integrated via thin clients and the Eclipse IDE.
OpenNebulaConf2017EU: IPP Cloud by Jimmy Goffaux, IPPONOpenNebula Project
This document summarizes a demo of using Terraform to provision resources on an OpenNebula infrastructure. It describes the OpenNebula architecture which includes 400 VM instances across 7 nodes with 3TB of RAM, 250 cores, and a CephFS datastore. It also provides links to two Git repositories - one for an OpenNebula API and one for a Terraform provider that uses the API - that can be used to try provisioning VMs, templates, networks and more via Terraform.
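As a rough sketch of what such Terraform-driven provisioning can look like, the fragment below follows the general shape of community OpenNebula providers. The provider name, resource type, and attribute names are assumptions for illustration and may not match the repositories linked from the talk:

```hcl
# Illustrative only: provider and resource names are assumed, not confirmed.
provider "opennebula" {
  endpoint = "https://one.example.com:2633/RPC2"  # OpenNebula XML-RPC endpoint
  username = "oneadmin"
  password = var.one_password
}

resource "opennebula_virtual_machine" "web" {
  name        = "web-01"
  cpu         = 1
  vcpu        = 2
  memory      = 2048   # MB
  template_id = 42     # hypothetical VM template ID
}
```

The point of the demo is that VM lifecycle, templates, and networks become declarative files under version control, with `terraform plan`/`terraform apply` reconciling the OpenNebula cloud against them.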
This document summarizes key points from an OpenStack Days Israel keynote presentation. It discusses how OpenStack can drive infrastructure innovation by providing an open, common, and composable platform for multiple VMs and containers. It also notes how OpenStack is evolving to think beyond Nova and serve as a standard API for other clouds. Several Israeli companies are highlighted that are leveraging OpenStack for private clouds, NFV, hybrid cloud, and Kubernetes including Bezeq, IDF, Cloudify, Amdocs, Mellanox, Kenshoo, and LivePerson. The presentation concludes by thanking OpenStack contributors.
Performant and Resilient Storage: The Open Source & Linux WayOpenNebula Project
OpenNebula users have a range of storage options available to them, including proprietary appliances, proprietary software, and open-source software projects. This session will present a fully open-source approach that tightly integrates with Linux, makes full use of the mature building blocks within the Linux kernel (LVM, software RAID, dm-crypt, NVMe-oF target, DRBD, etc.), and delivers one of the highest-performance open-source storage stacks currently available. The core goal is to expose the improved performance of NVMe storage devices to VMs and containers. The solution covers both local NVMe drives and NVMe-oF. For interacting with NVMe-oF targets it supports the Swordfish API as well as LVM and Linux's software NVMe-oF target. The solution includes a storage addon for OpenNebula.
OpenNebulaconf2017US: Software defined networking with OpenNebula by Roy Keen...OpenNebula Project
We have created a virtual switch appliance that is extremely low resource utilization and managed entirely through OpenNebula to provide software defined networking solutions within our cluster. This talk will detail how it operates, when it is useful, and give concrete examples of it in use.
This document summarizes what's new in Ceph. Key updates include improved management and usability features like simplified configuration, hands-off operation, and device health tracking. It also covers new orchestrator capabilities for Kubernetes and container platforms, continued performance optimizations, and multi-cloud capabilities like object storage federation across data centers and clouds.
Ceph is an open-source distributed storage system that provides object, block, and file storage on commodity hardware. It uses a pseudo-random placement algorithm called CRUSH to distribute data across a cluster in a fault-tolerant manner without single points of failure. Ceph has various applications including a RADOS Gateway for S3/Swift compatibility, RADOS Block Device for virtual machine images, and a CephFS for a POSIX-compliant distributed file system.
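The key idea behind CRUSH, computing an object's placement from a hash rather than looking it up in a central table, can be illustrated with a minimal Python sketch. This is not the real CRUSH algorithm (which walks a weighted, hierarchical cluster map with failure-domain rules and a different hash); the function and device names here are hypothetical:

```python
import hashlib

def place_object(obj_id: str, osds: list[str], replicas: int = 3) -> list[str]:
    """Deterministically rank storage devices (OSDs) for an object by hashing
    (object id, OSD) pairs, in the spirit of CRUSH's lookup-free placement.
    Any client with the same device list computes the same answer, so no
    central metadata server is needed to locate data."""
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.sha256(f"{obj_id}:{osd}".encode()).hexdigest(),
    )
    return ranked[:replicas]

osds = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]
print(place_object("rbd_data.1234", osds))
```

Because placement is a pure function of the object ID and the cluster map, adding or removing a device changes the mapping for only a fraction of objects, which is how Ceph rebalances without a single point of failure.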
Session ID: SFO17-509
Session Name: Deep Learning on ARM Platforms
- SFO17-509
Speaker: Jammy Zhou
Track:
★ Session Summary ★
A new era of deep learning is coming with algorithm evolvement, powerful computing platforms and large dataset availability. This session will focus on existing and potential heterogeneous accelerator solutions (GPU, FPGA, DSP, and etc) for ARM platforms and the work ahead from platform perspective.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/sfo17/sfo17-509/
Presentation:
Video:
---------------------------------------------------
★ Event Details ★
Linaro Connect San Francisco 2017 (SFO17)
25-29 September 2017
Hyatt Regency San Francisco Airport
---------------------------------------------------
Keyword:
http://www.linaro.org
http://connect.linaro.org
---------------------------------------------------
Follow us on Social Media
https://www.facebook.com/LinaroOrg
https://twitter.com/linaroorg
https://www.youtube.com/user/linaroorg?sub_confirmation=1
https://www.linkedin.com/company/1026961
Ceph Pacific is a major release of the Ceph distributed storage system scheduled for March 2021. It focuses on five key themes: usability, performance, ecosystem integration, multi-site capabilities, and quality. New features in Pacific include automated upgrades, improved dashboard functionality, snapshot-based CephFS mirroring, per-bucket replication in RGW, and expanded telemetry collection. Looking ahead, the Quincy release will focus on continued improvements in these areas such as resource-aware scheduling in cephadm and multi-site monitoring capabilities.
Ceph began as a research project in 2005 to create a scalable object storage system. It was incubated at DreamHost from 2007-2012 and spun out as an independent company called Inktank in 2012. Key developments included the RADOS distributed storage cluster, erasure coding, and the Ceph filesystem. The project has grown a large community and is used in many production deployments, focusing on areas like tiering, erasure coding, replication, and integrating with the Linux kernel. Future plans include improving CephFS, expanding the ecosystem through different storage backends, strengthening governance, and targeting new use cases in big data and the enterprise.
ISC Cloud'13 - Hands-On Tutorial on “Building Your Cloud for HPC, Here & Now,...OpenNebula Project
The document provides an overview of installing and using basic features of OpenNebula. It discusses planning an OpenNebula environment, installing required software on frontend and worker nodes, and demonstrates how to add hosts, images, networks, templates and instantiate VMs using both the Sunstone GUI and OpenNebula CLI.
Introduction to HPC & Supercomputing in AITyrone Systems
Catch up with our live webinar on Natural Language Processing! Learn about how it works and how it applies to you. We have provided all the information in our video recording you would not miss out on.
Watch the Natural Language Processing webinar here!
OpenStack Best Practices and Considerations - terasky tech dayArthur Berezin
- Arthur Berezin presented on best practices for deploying enterprise-grade OpenStack implementations. The presentation covered OpenStack architecture, layout considerations including high availability, and best practices for compute, storage, and networking deployments. It provided guidance on choosing backend drivers, overcommitting resources, and networking designs.
Bitfusion Nimbix Dev Summit Heterogeneous Architectures Subbu Rama
This document provides an overview of heterogeneous architectures and the challenges they present for developers. It discusses how hardware is becoming more specialized and complex as Moore's Law slows. This leads to difficulties delivering high performance and efficiency in applications. The document then summarizes several available compute devices from easiest to hardest to program, including GPUs, MICs, FPGAs, and automata. It proposes that software and tools are needed to abstract this complexity and automatically realize performance gains across heterogeneous systems. Bifusion technology aims to do this through remote virtualization that scales applications horizontally, vertically, and across different device types in a transparent manner.
In this deck, Paul Isaacs from Linaro presents: State of ARM-based HPC. This talk provides an overview of applications and infrastructure services successfully ported to Aarch64 and benefiting from scale.
"With its debut on the TOP500, the 125,000-core Astra supercomputer at New Mexico’s Sandia Labs uses Cavium ThunderX2 chips to mark Arm’s entry into the petascale world. In Japan, the Fujitsu A64FX Arm-based CPU in the pending Fugaku supercomputer has been optimized to achieve high-level, real-world application performance, anticipating up to one hundred times the application execution performance of the K computer. K was the first computer to top 10 petaflops in 2011."
Watch the video: https://wp.me/p3RLHQ-lIT
Learn more: https://www.linaro.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
CompatibleOne is a open source cloud computing platform with several sub-projects including Infrastructure as a Service, Platform as a Service, and security/management tools. It uses a meta-model based on grid computing concepts to manage and exchange resources. The document discusses CompatibleOne's architecture and components, as well as related tools for cloud modeling, configuration management, and resource description.
Microsoft Project Olympus AI Accelerator Chassis (HGX-1)inside-BigData.com
In this video from the Open Compute Summit, Siamak Tavallaei from Microsoft presents an overview of the Microsoft Project Olympus AI Accelerator Chassis, also known as the HGX-1.
Watch the presentation video: http://wp.me/p3RLHQ-guX
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Hungarian ClusterGrid and its applicationsFerenc Szalai
This document summarizes the Hungarian ClusterGrid and its applications. ClusterGrid integrates individual clusters and supercomputers with grid middleware. It currently includes 1000 nodes across 32 cluster sites providing 22 TB of distributed storage. A new generation middleware called Grid Underground has been developed which uses a pure web services approach. Core services include security, job management, and storage. One application involved virtual screening of 8 million molecules to identify potential new medicines.
Ceph: A decade in the making and still going strongPatrick McGarry
Ceph is an open source distributed storage system that has been in development for over a decade. It started as a research project at UC Santa Cruz to build scalable object storage. Over the years, it has grown to include distributed block storage, file storage and an S3-compatible object store. Ceph is now used in many production deployments and has a thriving developer community, though continued work is needed to improve areas like CephFS and add new features around erasure coding, tiering and replication. The future of Ceph involves strengthening governance, expanding the ecosystem, improving performance and gaining more adoption in enterprise storage environments.
What's New with Ceph - Ceph Day Silicon ValleyCeph Community
This document discusses what's new in Ceph, including priorities around community, management/usability, performance of core Ceph components like RADOS, RBD, RGW and CephFS, and container platforms. Specific updates mentioned include centralized configuration in Mimic, Project Crimson reimplementing the OSD data path, Msgr2 network protocol, automated management features, telemetry/insights, performance optimizations, and the continued development of the Ceph dashboard.
This document provides an overview of getting started with AMD GPUs, including information about the upcoming LUMI supercomputer, ROCm software stack, porting codes to HIP, benchmarking, and considerations for Fortran and tuning codes. It discusses installing ROCm, differences between CUDA and HIP APIs, using Hipify tools to convert CUDA to HIP, compiling with hipcc, benchmarking a matrix multiplication example, porting an N-body simulation to HIP, options for porting Fortran codes, and considerations for profiling, debugging and tuning codes on AMD GPUs.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/dec-2016-member-meeting-khronos
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Peter McGuinness, representing the Khronos Group, delivers the presentation "New Standards for Embedded Vision and Neural Networks" at the December 2016 Embedded Vision Alliance Member Meeting. McGuinness discusses new standardization work for embedded neural network and vision software.
Build Your Own PaaS, Just like Red Hat's OpenShift from LinuxCon 2013 New Orl...OpenShift Origin
Learn how to build your platform as a service just like RedHat's OpenShift PaaS - covers all the architecture & internals of OpenShift Origin OpenSource project, how to deploy it & configure it for bare metal, AWS, OpenStack, CloudStack or any IaaS, and the community that's collaborating on the project to deliver the next-generation of secure, scale-able PaaS visit: openshift.com for more information
presented at LinuxCon by Diane Mueller in the CloudOpen track
NVIDIA GTC 2019: Red Hat and the NVIDIA DGX: Tried, Tested, TrustedJeremy Eder
Red Hat and NVIDIA collaborated to bring together two of the technology industry's most popular products: Red Hat Enterprise Linux 7 and the NVIDIA DGX system. This talk will cover how the combination of RHELs rock-solid stability with the incredible DGX hardware can deliver tremendous value to enterprise data scientists. We will also show how to leverage NVIDIA GPU Cloud container images with Kubernetes and RHEL to reap maximum benefits from this incredible hardware.
Similar to Deploying OpenNebula in an HPC environment (20)
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...OpenNebula Project
We've made our way into the world of open cloud — where each organization can find the right cloud for its unique needs. A single cloud management platform cannot be all things to all people. There will be a cloud space with several offerings focused on different environments and/or industries. The OpenNebula commitment to the open cloud is at the very base of its mission — to become the simplest cloud enabling platform — and its purpose — to bring simplicity to the private and hybrid enterprise cloud. OpenNebula exists to help companies build simple, cost-effective, reliable, open enterprise clouds on existing IT infrastructure. The OpenNebula Conference will be a great opportunity to communicate and share our vision and commitment, to look back at how the project has grown in the last 9 years, and to shed some insight into what to expect from the project in the near future.
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...OpenNebula Project
Computer networks are undergoing a phenomenal growth, driven by the rapidly increasing number of nodes constituting the networks. At the same time, the number of security threats on Internet and intranet networks is constantly increasing, and the testing and experimentation of cyber defense solutions require the availability of separate, test environments that best reflect the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios, and cyber security training activities, thus enabling enterprises to study cyber defense strategies and allowing security researchers to evaluate their algorithms at scale.
The main objective is delivering to researchers and practitioners an overview of the technological means and the practical steps to setup a private cloud platform based on OpenNebula for the creation and management of virtual environments that support cyber-security activities of training and testing, as well as an overview of its possible applications in the cyber security domain.
In particular:
1. We describe our infrastructure based on OpenNebula
2. We overview our application, sitting on top of OpenNebula, as well as the technological tools involved in the management of its lifecycle (e.g., Ansible) .
3. We show how the platform can support various examples of security research activities
[References] Building an emulation environment for cyber security analyses of complex networked systems, Tanasache, Florin Dragos and Sorella, Mara and Bonomi, Silvia and Rapone, Raniero and Meacci, Davide, ICDCN '19, ACM, 2019
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...OpenNebula Project
Insight into more than 6 years experience with OpenNebula from different perspectives: ISP & Datacenter Provider and Consultant / System Integrator
Lessons learned, "the dos and don'ts" and how we convince and enable customers with OpenNebula - and the NTS ecosystem.
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebula Project
OpenNebula users have a range of storage options available to them, including proprietary appliances, proprietary software and Open Source software projects. This session will present a fully Open Source approach, that tightly integrates with Linux, and makes full use of the mature building blocks within the Linux kernel (LVM, Software RAID, DM-crypt, NVMe-oF Target, DRBD, etc...), and delivers one of the highest performance open source storage stacks currently available.
The core goal is to expose the improved performance of NVMe storage devices to VMs and containers. The solution covers both local NVMe drives and NVMe-oF. For interacting with NVMe-oF targets it supports the Swordfish-API and LVM & Linux’s software NVMe-oF target. The solution contains a storage addon for OpenNebula.
How and what we do with OpenNebula to enable our customers for a completely new way how it is consumed in a modern, service orientated IT. We will also talk about the question, why we have chosen OpenNebula and how deep is the level - and ability - of integration of the NTS CAPTAIN into existing 2nd and 3rd party tools like IPAM, CMDBs, backup, monitoring, approval processes and much more...
TeleData operates a purpose build IaaS enterprise ready cloud plattfom in the region of lake constance. OpenNebula is used in production since several years. TeleData will share an insight into the "Lessons learned" and a brief summary how to operate a public cloud, built on top of OpenNebula. Content is subject to change!
This document provides information about using OpenNebula's oneprovision tool to provision clusters in a cloud environment. It includes an example of a provision template that specifies details like the driver, project, OS, and networking. It outlines commands for creating and managing provisioned clusters, hosts, datastores and networks. These include listing, deleting, power control and SSH access for hosts. The goal is to demonstrate how to provision resources and hosts on demand using OpenNebula.
Alejandro Huertas Herrero discusses cloud disaggregation with OpenNebula which enables building OpenNebula clouds on public cloud providers and across various data centers in a flexible, easy, fast and compatible way that is transparent to end users. The new Disaggregated Data Centers feature in OpenNebula 5.8.1 uses the oneprovision command and provision drivers to deploy hosts on cloud providers like EC2 and Packet and fully configure them as KVM hypervisors or LXD containers. Provision templates in YAML format are used to describe the new provision including cloud credentials, hardware configuration, resources to create, and connection details.
This document discusses using OpenNebula and StorPool to build powerful clouds. StorPool is a software-defined storage system used by managed service providers, cloud providers, and for private clouds. It integrates deeply with OpenNebula, OpenStack, and other platforms. When used with hyper-converged infrastructure and KVM virtualization, StorPool and OpenNebula can provide a scalable, high-performance solution for private or public clouds. StorPool uses a small percentage of server resources but provides high IOPS performance suitable for demanding workloads.
This document discusses nested virtualization and PCI pass-through, which allow testing virtualized environments without requiring physical hardware. It provides information on host system configuration, including enabling VMCS shadowing and assigning devices to VMs. Configuration for OpenNebula management is also covered, such as modifying device filters and deployment tweaks to interface host devices. Resources for further information on related topics from Intel, Red Hat, and StorPool are included. The document concludes by thanking the audience and providing contact details for StorPool.
Serendipity is an artificial intelligence created by Adata that can automate media monitoring and analysis through three proprietary algorithms for clustering, classification, and named entity recognition. It uses context-aware classifiers and can identify entities, categorize articles, determine sentiment, and find similar documents. Adata has the infrastructure to support Serendipity through 400+ CPUs, 2TB RAM, 10TB storage, and GPU compute nodes.
This document provides an overview of Kubernetes and Rancher. It discusses that Kubernetes is an open-source platform for automating deployment, scaling, and management of containerized applications. It is developed by Google and has a large ecosystem. The document then summarizes Rancher, stating that it is an enterprise container management platform that makes it easy to deploy, manage and secure any Kubernetes deployment. Rancher supports over 5,000 organizations and provides centralized policy, security and workload management across multiple Kubernetes clusters.
Huawei's all-flash storage solution, OceanStor Dorado, provides up to 3x improved application performance and 75% savings in operational expenses compared to conventional storage. It offers lightning fast performance with sub-millisecond latency and rock solid reliability of 99.9999% availability through its intelligent data management features and flash-native design. Huawei has over 10 years of experience developing solid state drives and all-flash arrays and their latest Dorado18000 V3 is positioned as the highest-end solution supporting NVMe protocol for the most demanding workloads.
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Mobile App Development Company In Noida | Drona InfotechDrona Infotech
Drona Infotech is a premier mobile app development company in Noida, providing cutting-edge solutions for businesses.
Visit Us For : https://www.dronainfotech.com/mobile-application-development/
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
Artificia Intellicence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
14 th Edition of International conference on computer visionShulagnaSarkar2
About the event
14th Edition of International conference on computer vision
Computer conferences organized by ScienceFather group. ScienceFather takes the privilege to invite speakers participants students delegates and exhibitors from across the globe to its International Conference on computer conferences to be held in the Various Beautiful cites of the world. computer conferences are a discussion of common Inventions-related issues and additionally trade information share proof thoughts and insight into advanced developments in the science inventions service system. New technology may create many materials and devices with a vast range of applications such as in Science medicine electronics biomaterials energy production and consumer products.
Nomination are Open!! Don't Miss it
Visit: computer.scifat.com
Award Nomination: https://x-i.me/ishnom
Conference Submission: https://x-i.me/anicon
For Enquiry: Computer@scifat.com
Liberarsi dai framework con i Web Component.pptxMassimo Artizzu
In Italian
Presentazione sulle feature e l'utilizzo dei Web Component nell sviluppo di pagine e applicazioni web. Racconto delle ragioni storiche dell'avvento dei Web Component. Evidenziazione dei vantaggi e delle sfide poste, indicazione delle best practices, con particolare accento sulla possibilità di usare web component per facilitare la migrazione delle proprie applicazioni verso nuovi stack tecnologici.
Consistent toolbox talks are critical for maintaining workplace safety, as they provide regular opportunities to address specific hazards and reinforce safe practices.
These brief, focused sessions ensure that safety is a continual conversation rather than a one-time event, which helps keep safety protocols fresh in employees' minds. Studies have shown that shorter, more frequent training sessions are more effective for retention and behavior change compared to longer, infrequent sessions.
Engaging workers regularly, toolbox talks promote a culture of safety, empower employees to voice concerns, and ultimately reduce the likelihood of accidents and injuries on site.
The traditional method of conducting safety talks with paper documents and lengthy meetings is not only time-consuming but also less effective. Manual tracking of attendance and compliance is prone to errors and inconsistencies, leading to gaps in safety communication and potential non-compliance with OSHA regulations. Switching to a digital solution like Safelyio offers significant advantages.
Safelyio automates the delivery and documentation of safety talks, ensuring consistency and accessibility. The microlearning approach breaks down complex safety protocols into manageable, bite-sized pieces, making it easier for employees to absorb and retain information.
This method minimizes disruptions to work schedules, eliminates the hassle of paperwork, and ensures that all safety communications are tracked and recorded accurately. Ultimately, using a digital platform like Safelyio enhances engagement, compliance, and overall safety performance on site. https://safelyio.com/
3. Quick introduction to HPCNow!
● Global HPC consulting company
● IT + scientific background
● HPC services and solutions
● User-oriented company
● Hardware agnostic
Company overview
7. User environment
● User libraries, Modules, EasyBuild, Spack
● Development tools: compilers (GNU, Intel, PGI, IBM XL); debuggers and profilers (V-Tune, DDT, GDB)
● Scientific and engineering applications: more than 100 references. Contact us to know more.
13. What is High Performance Computing?
Many tasks and/or threads working together to solve different parts of a single larger problem. This is achieved with parallel programming, which usually requires large shared-memory systems or low-latency, high-bandwidth networks.
Motivation
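The "many workers, one problem" idea from the slide can be sketched in a few lines. This is only a toy Python illustration of the concept (real HPC codes would use MPI or OpenMP over shared memory or a low-latency interconnect); all names here are made up for the example:

```python
# Toy illustration of parallel decomposition: several workers each solve
# part of a single larger problem (summing squares over a range), and the
# partial results are combined into one answer.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    # Each worker handles one slice of the full range.
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Split [0, n) into roughly equal chunks, one per worker.
    step = (n + workers - 1) // workers
    bounds = [(k * step, min((k + 1) * step, n)) for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: partial_sum(*b), bounds))

print(parallel_sum_of_squares(10_000))  # same result as the serial sum
```

The decomposition, not the thread pool, is the point: an HPC workload splits one large problem into parts that are solved concurrently and then combined.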
14. HPC users need more than just a compute solution
❅ Workflow: Pre-processing and post-processing, workflow frameworks,...
❅ Web services: RStudio, Galaxy, Jupyter notebook, JMS,...
❅ Software managers: Anaconda, EasyBuild, Spack,...
❅ Prebuilt software: Docker, Singularity, VM images (NeuroDebian, ...),...
15. Convergence Solution
HPC Cluster, Singularity, Docker Swarm, OpenNebula
Allows us to dynamically re-architect and re-purpose the HPC solution to accommodate different roles and user needs.
20. Global configuration
● OpenNebula v5.6.0
● Ceph v13.2.1 (Mimic)
● Datastores
○ standard Ceph configuration
■ cephds, type Image
■ ceph_system, type System
● Nodes with the KVM hypervisor
● NICs with the virtio model
Architecture
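A "standard Ceph configuration" datastore pair like the one above is typically described with OpenNebula templates along these lines. This is a minimal sketch only: the pool, CephX user, monitor hosts and bridge host are illustrative assumptions, not the actual HPCNow! values.

```
# cephds.tmpl -- Image datastore backed by Ceph RBD (illustrative values)
NAME        = cephds
TYPE        = IMAGE_DS
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one             # Ceph pool used by OpenNebula
CEPH_USER   = libvirt         # CephX user configured for libvirt
CEPH_HOST   = "mon1 mon2"     # Ceph monitor hosts
BRIDGE_LIST = "kvm-node1"     # node used for image staging operations

# ceph_system.tmpl -- matching System datastore
NAME      = ceph_system
TYPE      = SYSTEM_DS
TM_MAD    = ceph
POOL_NAME = one
CEPH_HOST = "mon1 mon2"
```

Each template would then be registered with onedatastore create &lt;file&gt; on the frontend.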
22. Stumbling blocks along the way
● Snapshots
○ datastore for images configured as raw
■ recommended for Ceph when using RBD
○ images are stored as raw, even when created as qcow2
○ snapshot of the system disk, and recovery from Ceph
■ rbd ls -l -p one
● Bridge destroyed when no virtual NIC is linked
○ switch keep_empty_bridge to true in /var/lib/one/remotes/etc/vnm/OpenNebulaNetwork.conf
■ a bug prevents transferring the config to the hypervisors at /var/tmp/one/etc/vnm/OpenNebulaNetwork.conf
○ create the virtual network with PHYDEV unset
Example output of rbd ls -l -p one:
one-2-103-0
one-2-103-0@0
one-2-104-0
Implementation
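The keep_empty_bridge change above is a one-line edit in the network driver configuration on the frontend, sketched below (the exact file syntax may differ slightly between OpenNebula versions). Because of the bug noted on the slide, the file may also need to be copied by hand to /var/tmp/one/etc/vnm/OpenNebulaNetwork.conf on each hypervisor rather than relying on onehost sync.

```
# /var/lib/one/remotes/etc/vnm/OpenNebulaNetwork.conf (frontend copy)
# Do not remove a bridge when its last virtual NIC goes away.
:keep_empty_bridge: true
```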
23. Stumbling blocks along the way
● VMs could not communicate with each other
○ switch the net.bridge.bridge-nf-call-iptables parameter to 0
○ tried to make it persistent in /etc/sysctl.d/bridge-nf-call.conf and /usr/lib/sysctl.d/00-system.conf
■ a bug prevents this from working: when sysctl runs, the bridge kernel module is not loaded yet
○ fixed by modifying /usr/lib/systemd/system/libvirtd.service
Type=notify
EnvironmentFile=-/etc/sysconfig/libvirtd
ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS
+ExecStartPost=/usr/bin/sleep 30s
+ExecStartPost=/usr/sbin/sysctl -w net.bridge.bridge-nf-call-iptables=0
+ExecStartPost=/usr/sbin/sysctl -p
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
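An alternative to patching the libvirtd unit (an edit a package update can silently overwrite) is to make sure the bridge netfilter module is loaded before sysctl settings are applied at boot. On systemd distributions where systemd-sysctl.service is ordered after systemd-modules-load.service, a sketch like the following should work; the file names, and the module name (br_netfilter on recent kernels, built into bridge on older ones), are assumptions to verify on the target OS.

```
# /etc/modules-load.d/br_netfilter.conf
# Load the bridge netfilter module early at boot.
br_netfilter

# /etc/sysctl.d/90-bridge-nf.conf
# By the time systemd-sysctl runs, the key exists and can be set.
net.bridge.bridge-nf-call-iptables = 0
```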
24. Stumbling blocks along the way
● VM creation from Sunstone ended with a FAILED status
○ error: Cannot check QEMU binary /usr/bin/qemu-system-x86_64: No such file or directory
■ ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
26. Conclusions
● We architected and implemented a solution deploying nodes with a hybrid role.
● This solution allows us to dynamically re-purpose the cluster to accommodate user needs.
● OpenNebula has proven to be a really easy tool to install, deploy and manage.
● We found useful tips and collaboration in the forum to troubleshoot issues.
Conclusions