This document provides an overview of OpenNebula configuration at Harvard FAS Research Computing using Puppet. It describes the hardware setup, including OpenNebula and Ceph nodes; network configuration using VLANs across multiple datacenters; and configuration of OpenNebula and associated services such as Ceph and MySQL using Puppet modules, roles, profiles, Hiera data and exported resources. The goal is automated provisioning and configuration management of OpenNebula and its associated infrastructure at scale.
OpenNebulaConf 2013 - Hands-on Tutorial: 1. Introduction and ArchitectureOpenNebula Project
This document provides an introduction to cloud computing with OpenNebula. It discusses infrastructure as a service (IaaS) and the different types of cloud deployments including public, private and hybrid clouds. It then describes the challenges of managing an IaaS cloud, and how OpenNebula provides a uniform management layer to address these challenges. The document outlines the key aspects of the OpenNebula model and architecture, including its open source nature, enterprise features, and ability to manage different infrastructure technologies. It concludes with a basic overview of an OpenNebula deployment.
The document discusses OpenNebula, an open-source tool for building private and hybrid clouds. It provides tips for installing and configuring OpenNebula on CentOS 7, including disabling the firewall, using qemu instead of KVM for testing, allowing access to host devices from LXC containers, handling temporary directories, and using virtio for better performance. The document aims to help users get started with OpenNebula on CentOS 7.
Installing OpenNebula involves planning the installation environment, installing packages on frontend and worker nodes, configuring passwordless SSH access, adding hosts, creating images, networks and templates, and instantiating VMs. Basic usage involves managing these resources through the CLI and Sunstone interface, including performing actions on and monitoring VMs, creating and managing users, and viewing logs to debug issues.
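The resource-creation steps listed above can be sketched with the standard OpenNebula CLI. This is a hedged outline, not a verbatim recipe: host, image, network and template names are placeholders, and exact flags vary between OpenNebula versions. The `run` helper only echoes each command, so the sequence can be reviewed on a machine without OpenNebula installed.

```shell
#!/bin/sh
# Sketch of adding hosts, images, networks and templates, then instantiating
# a VM. All names/paths are hypothetical; `run` echoes instead of executing.
run() { echo "+ $*"; }

# Register a KVM hypervisor node to be monitored by the frontend.
run onehost create node01 -i kvm -v kvm

# Register a disk image in the default datastore.
run oneimage create --name ttylinux --path /tmp/ttylinux.img --datastore default

# Define a virtual network from a template file (BRIDGE/AR definitions).
run onevnet create private.net

# Create a VM template and instantiate it, then list running VMs.
run onetemplate create --name tiny --cpu 1 --memory 128 --disk ttylinux --nic private
run onetemplate instantiate tiny
run onevm list
```

Debugging the resulting VM typically means checking `onevm show` and the logs under `/var/log/one/`, as the summary notes.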
This document discusses various tools and techniques for customizing and optimizing virtual machine images in qcow2 format. It covers mounting qcow2 images, using libguestfs and virt-customize to modify files and install packages, creating CDROMs to package customization scripts, optimizing images with virt-sparsify using normal and in-place sparsification as well as compression, and tips for using qemu-img.
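The customization-and-optimization pipeline described above can be sketched as a short sequence of libguestfs commands. The image path is a placeholder and the exact options depend on the libguestfs version; the `run` helper only echoes each command so the sketch is reviewable without libguestfs installed.

```shell
#!/bin/sh
# Sketch of a qcow2 image-prep pipeline: customize, sparsify, inspect.
# Hypothetical image name; `run` echoes the commands instead of executing.
run() { echo "+ $*"; }

IMG=centos7.qcow2

# Install a package and reset machine-id inside the image (virt-customize).
run virt-customize -a "$IMG" --install cloud-init --run-command 'truncate -s0 /etc/machine-id'

# Reclaim unused blocks and compress; writes a new, smaller image.
run virt-sparsify --compress "$IMG" "${IMG%.qcow2}-small.qcow2"

# Inspect the result (virtual vs. actual size, compression).
run qemu-img info "${IMG%.qcow2}-small.qcow2"
```

In-place sparsification (`virt-sparsify --in-place`) avoids the temporary copy but cannot compress, which is the trade-off the talk covers.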
OpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebulaOpenNebula Project
In this 90-minute hands-on workshop, some of the key contributors to OpenNebula will walk attendees through the configuration and integration aspects of the storage subsystem in OpenNebula. The session will also include lightning talks by community members describing aspects related to Storage with OpenNebula:
Deployment scenarios
Integration
Tuning & debugging
Best practices
This document discusses various techniques for customizing and optimizing virtual machine images in QCOW2 format. These include mounting images to modify files, using libguestfs and virt-tools, creating CDROMs and scripts for customization, and optimizing images with virt-sparsify to remove unused data and compress images.
The document discusses different approaches to integrating Docker with OpenNebula including using Docker as a hypervisor, distributing OpenNebula in Docker containers, and integrating Docker Machine with OpenNebula. It recommends integrating Docker Machine with OpenNebula to allow deploying and managing Docker hosts transparently on OpenNebula. Details are provided on requirements, available images, and usage of the Docker Machine OpenNebula driver plugin to deploy and switch between Docker hosts on OpenNebula. A demo is available and OneFlow integration with Docker Swarm clusters is also mentioned as a work in progress. The presentation concludes by asking for feedback on envisioned Docker and OpenNebula integration approaches.
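The recommended Docker Machine workflow might look like the sketch below. Flag names follow the docker-machine-opennebula driver plugin and may differ by plugin version; the endpoint, network and image names are placeholders. The `run` helper only echoes the commands.

```shell
#!/bin/sh
# Sketch: deploy and use a Docker host on OpenNebula via Docker Machine.
# Flags/names are assumptions from the plugin docs; `run` echoes only.
run() { echo "+ $*"; }

export ONE_AUTH=~/.one/one_auth                 # OpenNebula credentials file
export ONE_XMLRPC=http://frontend:2633/RPC2     # placeholder frontend endpoint

# Deploy a new Docker host as an OpenNebula VM.
run docker-machine create --driver opennebula \
      --opennebula-network-name private \
      --opennebula-image-name boot2docker \
      docker-host-1

# Print the env vars that point the local Docker client at the new host
# (then: eval "$(docker-machine env docker-host-1)").
run docker-machine env docker-host-1
run docker run -d nginx
```

Switching between hosts is then just another `docker-machine env`, which is what makes the hosts manageable "transparently" from the client side.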
Practical information on how to Optimize Virtual Machines for High Performance by Boyan Krosnov, Chief Product Officer at StorPool Storage
Presentation delivered at OpenNebula TechDay Sofia on 25 February 2016
This document provides an overview of installing and using OpenNebula. It describes setting up a typical OpenNebula environment with multiple backends and a hypervisor. It then walks through installing OpenNebula on two nodes, configuring passwordless SSH, adding hosts, images, networks, templates, and instantiating VMs. It also covers basic VM actions, contextualization, permissions, groups, and the different views in OpenNebula. Finally, it introduces OneFlow for managing multi-tier applications and services, including templates, deployment strategies, scaling, and auto-scaling based on metrics and schedules.
The document discusses various approaches to integrating Docker with OpenNebula including using Docker as a hypervisor, distributing OpenNebula in Docker containers, and integrating Docker Machine with OpenNebula. It recommends integrating Docker Machine with OpenNebula to allow deploying and managing Docker hosts using OpenNebula transparently. A demo of using Docker Swarm with OpenNebula's OneFlow for elasticity policies is also proposed, while distributing OpenNebula in Docker containers and using Docker as a hypervisor are not recommended.
OpenNebula can provide virtual infrastructure for virtual machines (VMs) and consists of a typical environment with:
- A frontend node running OpenNebula services and a hypervisor for VMs.
- Additional backend nodes running just hypervisors for VMs and sharing storage and networks.
- VMs communicate via a shared bridge and private network.
The tutorial covers installing OpenNebula in a lab environment with two backend nodes and configuring hosts, images, networks and templates. It then demonstrates basic usage: deploying VMs, managing their life cycle, and contextualization, along with groups, quotas and the different user views.
OpenNebulaConf2015 2.03 Docker-Machine and OpenNebula - Jaime MelisOpenNebula Project
Introduction to OpenNebula’s integration with Docker-Machine, or how to run dockers in your Cloud without breaking a sweat. Open discussion about what the future awaits for Docker in OpenNebula.
Build a private cloud – prototype and test with open nebulaA B M Moniruzzaman
The document provides step-by-step instructions for installing and configuring OpenNebula to build a private cloud using VMware ESXi 5.0 as the hypervisor. Key steps include downloading and installing VMware Workstation 9.0 and ESXi 5.0, importing the OpenNebula sandbox virtual appliance, powering on the sandbox VM, and logging into the OpenNebula interface using Sunstone. The fully configured private cloud can then be used to deploy and test virtual machines.
Fuze is an enterprise cloud communications company that provides voice, video conferencing, and real-time content sharing solutions. It has over 1,000 customers, 1 million users, and 700 employees across 8 global offices. Fuze uses OpenNebula as its infrastructure cloud orchestrator to provide self-service provisioning and management of virtual machine resources on VMware ESXi. OpenNebula allows Fuze to automate server builds, support multi-tenancy, and burst workloads into the public cloud when needed while providing accounting of usage.
OpenNebulaConf 2016 - ONEDock: Docker as a hypervisor in ONE by Carlos de Alf...OpenNebula Project
ONEDock extends OpenNebula to use Docker containers as virtual machines. When OpenNebula requests a new virtual machine, ONEDock delivers a Docker container instead. ONEDock manages the lifecycle of the containers, such as creating, destroying, and migrating them, similar to how OpenNebula manages virtual machines. ONEDock addresses challenges in mapping Docker concepts like containers and images to the concepts of long-lasting virtual machines used in OpenNebula.
OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M...OpenNebula Project
In this 90-minute hands-on workshop, some of the key contributors to OpenNebula will walk attendees through the configuration and integration aspects of the computing subsystem in OpenNebula. The session will also include lightning talks by community members describing aspects related to Hypervisors and Containers with OpenNebula:
Deployment scenarios
Integration
Tuning & debugging
Best practices
OpenNebulaConf 2016 - Networking, NFVs and SDNs Hands-on Workshop by Rubén S....OpenNebula Project
In this 90-minute hands-on workshop, some of the key contributors to OpenNebula will walk attendees through the configuration and integration aspects of the networking subsystem in OpenNebula. The session will also include lightning talks by community members describing aspects related to Networking, NFVs and SDNs with OpenNebula:
- Deployment scenarios
- Integration
- Tuning & debugging
- Best practices
OpenNebulaConf 2016 - Building a GNU/Linux Distribution by Daniel Dehennin, M...OpenNebula Project
How does OpenNebula ease the development and testing of our GNU/Linux distribution?
We have been building a turnkey GNU/Linux distribution for the Ministère de l’Éducation nationale (France) since 2001, and we started using OpenNebula 3 years ago to smooth the development and testing of our solutions. We will follow our agile team in their day-to-day use of OpenNebula.
OpenNebulaConf 2016 - Evolution of OpenNebula at Netways by Sebastian Saemann...OpenNebula Project
We at Netways have been using OpenNebula in production for more than 4 years now. I will talk about the evolution of our cloud infrastructure from the early days to now, with a focus on the current setup and its components, including Ceph, Puppet/Foreman and Fog.
D’une infrastructure de virtualisation scripté à un cloud privé OpenNebulaOpenNebula Project
The IT department of the Université de Strasbourg runs a virtualization environment of 700 virtual machines hosted on around a hundred hypervisors. Administration is done with virt-manager and Python scripts developed in-house. Following new requests from its users, the IT department decided to deploy a private cloud solution. The choice naturally went to a tool flexible, customizable and simple enough to integrate the existing infrastructure and meet tomorrow's needs.
Talk given by Guillaume Oberlé from Université de Strasbourg (unistra.fr) during Paris Techday 2015
http://opennebula.org/community/techdays/techday-paris-2015/
OpenNebula TechDay Waterloo 2015 - Open nebula hands on workshopOpenNebula Project
This document provides an overview of installing and using OpenNebula. It discusses planning an OpenNebula environment including repositories, backends, and physical resources. It then covers installing OpenNebula on two nodes, configuring passwordless SSH, and starting services. The document demonstrates adding hosts, images, networks, templates, and instantiating VMs through the CLI and Sunstone interface. It also covers groups, quotas, contextualization, and the different views in OpenNebula including the administrator, VDC administrator, and cloud user perspectives.
OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological...OpenNebula Project
Cloud providers constantly face technology limitations in their infrastructures that must be overcome to meet customer needs. In this presentation, we demonstrate how the technological agnosticism and management flexibility of OpenNebula have allowed Todoencloud to provide the most efficient open-source solution to its customers' needs, choosing the most appropriate virtualization technology (Xen and KVM), storage approach (ZFS vs. Ceph), cloud-bursting solutions (Azure, Amazon) and customized networking topologies.
The document provides information about an OpenNebula tutorial being given at Loadays 2013 in Brussels, Belgium on April 8th. The tutorial will cover OpenNebula fundamentals and include hands-on exercises using VirtualBox, KVM, or VMware. OpenNebula is an open-source cloud management platform that provides tools and APIs to deploy and manage virtual infrastructure in both private and public clouds.
OpenNebula Conf 2014: CentOS, QA an OpenNebula - Christoph GaluschkaNETWAYS
CentOS, the Community Enterprise OS, uses OpenNebula as the virtualization platform for its automated QA process. The OpenNebula setup consists of 3 nodes, all running CentOS-6, which handle the following tasks:
– Sunstone as cloud controller
– local mirror/DNS server/HTTP server for the VMs to pull in packages
– one VM running a Jenkins instance to launch the various tests (ci.de.centos.org)
– nginx on the cloud controller to forward HTTP traffic to the Jenkins VM
A public git repository (http://www.gitorious.org/testautomation) allows anyone who wants to contribute to pull the current test suite – t_functional, a series of bash scripts used to run functional tests of various applications, binaries, configuration files and trademark issues. As new tests are added to the repo via personal clones and merge requests, those tests first need to complete a test run via Jenkins. Each test run currently consists of 4 VMs (one for each arch for C5 and C6 – C7 to come), which run the complete test suite. All VMs used for these tests are instantiated and torn down on demand, whenever the call to test-run a personal clone is issued (via IRC).
Once completed successfully, the request is merged into the main repo. The Jenkins node monitors this repository and automatically triggers another complete test run.
Besides these triggered test runs, the test suite also runs automatically every day. This is used to verify the functionality of published updates – a handful of faulty updates has already been discovered this way.
Besides t_functional, the Linux Test Project suite of tests is also run daily, likewise to verify the functionality of the OS and all updates.
A third setup is used to test the availability and functional integrity of published Docker images for CentOS.
All these tests are later – during the QA phase of a point release – used to verify the functionality of new packages inside the CentOS QA setup.
Rudder continuously manages the configuration of all systems, checking every 5 minutes. To interface OpenNebula VMs with Rudder, install the Rudder agent on the VMs and add a configuration file that points the VM agents at the Rudder server. Rudder then takes over configuration of the new VMs.
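The hand-off described above can be sketched as a few commands run at VM contextualization time. Package names and file paths are assumptions that vary by Rudder version and distribution, and the policy server hostname is a placeholder; the `run` helper only echoes the commands.

```shell
#!/bin/sh
# Sketch: enroll a freshly contextualized OpenNebula VM into Rudder.
# Paths/packages are assumptions; `run` echoes instead of executing.
run() { echo "+ $*"; }

RUDDER_SERVER=rudder.example.com   # placeholder policy server

# Install the agent (package name depends on distro/repo setup).
run yum install -y rudder-agent

# Point the agent at its policy server.
run sh -c "echo $RUDDER_SERVER > /var/rudder/cfengine-community/policy_server.dat"

# Send the first inventory; once the node is accepted on the server,
# the periodic agent runs take over configuration.
run rudder agent inventory
run rudder agent run
```

Baking these steps into the VM's contextualization script is what lets Rudder "take over" new VMs without manual intervention.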
This document introduces OpenNebula, an open-source software for building and managing private, public, and hybrid clouds. Some key points:
- OpenNebula has been downloaded over 210,000 times in the last two years and is used to power over 3,000 cloud deployments, including some with over 270,000 cores.
- It provides a turnkey solution for data center virtualization, with a single package that is lightweight, flexible, robust, and powerful.
- Features include virtual infrastructure management, cloud orchestration, multi-tenancy, elastic provisioning, and integration with technologies like KVM, Xen, and vCenter.
- The OpenNebula project and community promote open development and
TechDay - Cambridge 2016 - OpenNebula at Knight Point SystemsOpenNebula Project
OpenNebula is compared to OpenStack, with the document noting some exceptions. It discusses OpenNebula as "The Integrator's Story" and focuses on two approaches to cloud management: central planning vs autonomous. It provides examples of each approach and notes that the goal is to integrate OpenNebula, not have it dictate terms.
This document discusses the use of Docker containers to deploy an OpenNebula cloud (Corona). It summarizes the different node types used, including controllers, hypervisors, ONE servers, NFS, and Sunstone. It describes challenges with configuring containers that require host privileges or access to resources like cgroups and devices. Systemd and Supervisord are compared for managing processes. Configuring and managing the oneadmin token/SSH keys across dynamic nodes is challenging. Overall, the document evaluates approaches to deploying OpenNebula components in Docker containers for scalability, automation, and manageability.
This document provides an overview of installing and using OpenNebula. It describes setting up a typical OpenNebula environment with multiple backends and a hypervisor. It then walks through installing OpenNebula on two nodes, configuring passwordless SSH, adding hosts, images, networks, templates, and instantiating VMs. It also covers basic VM actions, contextualization, permissions, groups, and the different views in OpenNebula. Finally, it introduces OneFlow for managing multi-tier applications and services, including life cycle strategies, auto-scaling based on metrics and schedules, and manually scaling services.
OpenNebulaConf 2016 - Measuring and tuning VM performance by Boyan Krosnov, S... (OpenNebula Project)
In this session we'll explore measuring VM performance and evaluating changes to settings or infrastructure which can affect performance positively. We'll also share the best current practice for architecture for high performance clouds from our experience.
TechDay - Toronto 2016 - Hyperconvergence and OpenNebula (OpenNebula Project)
Hyperconvergence integrates compute, storage, networking and virtualization resources from scratch in a commodity hardware box supported by a single vendor. It offers scalability, performance, centralized management and reliability, and is software-focused. StorPool is storage software that can be installed on servers to pool and aggregate the capacity and performance of their drives. It provides standard block devices and replicates data across drives and servers for redundancy. StorPool integrates fully with OpenNebula to provide a robust hyperconverged infrastructure on commodity hardware using distributed storage.
OpenNebulaConf 2016 - VTastic: Akamai Innovations for Distributed System Test... (OpenNebula Project)
The document discusses Akamai's system for testing distributed systems at massive scale. It describes Akamai's global content delivery network and the challenges of testing a system as large as Akamai's, with thousands of servers worldwide. It then introduces Vtastic, Akamai's solution for distributed testing, which involves cloning virtual test environments from a master testnet and running automated tests in parallel across the cloned environments.
OpenNebula TechDay Boston 2015 - Bringing Private Cloud Computing to HPC and ... (OpenNebula Project)
This document discusses bringing private cloud computing to high-performance computing (HPC) and science. It outlines the challenges of using cloud infrastructure for HPC workloads, including performance penalties from virtualization and input/output overhead. It then describes OpenNebula, an open-source tool for managing private clouds that addresses these challenges. Finally, it presents several case studies of research institutions that have implemented private HPC clouds using OpenNebula to gain efficiencies while supporting a variety of applications and user groups.
OpenNebula TechDay Boston 2015 - HA HPC with OpenNebula (OpenNebula Project)
This document discusses high performance computing (HPC) and its uses. It provides examples of how HPC is used for physics simulations like lattice quantum chromodynamics and supernovae, planetary science like hurricane modeling, life sciences like molecular dynamics, engineering applications, machine learning, and big data. It then describes Microway's test drive HPC cluster that uses OpenNebula for infrastructure management across CPU and GPU nodes with InfiniBand networking. Virtualizing the cluster provides flexibility for administrators and users while incurring minimal performance penalties.
OpenNebula TechDay Boston 2015 - introduction and architecture (OpenNebula Project)
This document provides an overview of OpenNebula, an open-source tool for building and managing clouds on existing infrastructure. OpenNebula provides a simple yet powerful unified management layer that allows administrators to pool distributed physical resources and virtualize them for on-demand provisioning. It describes key features for cloud management, virtual infrastructure management, and its benefits for cloud consumers, administrators and architects. The document also discusses OpenNebula's origins as a research project, its growth as an open-source community project used by many large organizations, and its vision of providing flexibility, simplicity and control for managing private and hybrid clouds.
OpenNebula TechDay Boston 2015 - Hyperconvergence and OpenNebula (OpenNebula Project)
This document discusses hyperconvergence and compares the hyperconverged infrastructure solution StorPool to Ceph. It defines hyperconvergence as integrating compute, storage, networking and other resources from commodity hardware supported by a single vendor. It explains that StorPool uses commodity hardware and its software controls the drives to aggregate capacity and performance across servers. The document demonstrates that StorPool outperforms Ceph on CPU usage and I/O performance. It also summarizes StorPool's integration with OpenNebula, allowing common image and VM operations on StorPool block devices.
OpenNebula TechDay Boston 2015 - Future of Information Storage with ISS Super... (OpenNebula Project)
The document introduces ISS SuperCore and Ceph as solutions for future data storage challenges. Ceph originated as a PhD thesis in 2007 and uses pseudo-random data distribution via the CRUSH algorithm to store data without relying on parity calculations or centralized controllers. It provides a reliable, autonomic, distributed object store without limits on size or scalability. SuperCore delivers Ceph through an active storage cluster that provides resilience, replication optimization, and economy compared to traditional RAID storage systems. It allows limitless expansion, instant rebuilds, self-healing from faults, flexible data placement rules, and on-demand provisioning to economize storage use by up to 40%.
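The lookup-free, pseudo-random placement idea behind CRUSH can be illustrated with a small sketch. Note this is rendezvous hashing, not the actual CRUSH algorithm (which also weighs devices and walks a failure-domain hierarchy); it only shows the core property that any client can compute an object's replica locations from the object name alone, with no central controller.

```python
import hashlib

def place(obj_name, osds, replicas=3):
    """Toy CRUSH-like placement: rank every OSD by a deterministic
    pseudo-random draw keyed on (object, osd) and keep the top ranks.
    Every client computes the same answer without a metadata lookup."""
    def draw(osd):
        digest = hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(osds, key=draw, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(8)]
print(place("rbd/volume-42/block-7", osds))  # same 3 OSDs on every call
```

Because each (object, OSD) pair gets an independent score, removing an OSD only moves the data that actually lived on it, which is the stability property that lets Ceph rebuild instantly and self-heal without reshuffling the whole cluster.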
OpenNebula TechDay Boston 2015 - An introduction to OpenNebula (OpenNebula Project)
OpenNebula is an open-source tool for building private clouds on existing infrastructure that focuses on flexibility, simplicity, and being sysadmin-centric. It provides a single platform for automating and orchestrating enterprise clouds. OpenNebula has been downloaded over 150,000 times and is used to run over 3,000 production clouds, including the largest with 270,000 cores. It has been under development as an open community for over 7 years.
OpenNebula TechDay Boston 2015 - installing and basic usage (OpenNebula Project)
This document provides an overview of installing and using OpenNebula. It discusses planning an OpenNebula environment including repositories, backends, and physical resources. It then covers installing OpenNebula on two nodes, configuring passwordless SSH, and starting services. The document demonstrates adding hosts, images, networks, templates, and instantiating VMs through the CLI and Sunstone interface. It also covers groups, permissions, contextualization, and the different views in OpenNebula including the administrator, VDC administrator, and cloud user perspectives.
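The CLI workflow walked through in decks like this one can be sketched roughly as follows. This is a hedged sketch run as the oneadmin user on the frontend; the host name, image path, and template values are illustrative, and exact option spellings vary between OpenNebula versions.

```shell
# Register a KVM hypervisor node (monitored and managed over passwordless SSH)
onehost create node01 --im kvm --vm kvm

# Register a disk image in the default datastore (path is illustrative)
oneimage create --name ttylinux --path /tmp/ttylinux.img --datastore default

# Create a virtual network from a definition file, then a template using both
onevnet create private.net
onetemplate create --name tiny --cpu 1 --memory 128 \
    --disk ttylinux --nic private

# Instantiate the template and watch the VM come up
onetemplate instantiate tiny
onevm list
```

The same resources can then be inspected or managed from Sunstone, and `onevm show <id>` plus the logs under `/var/log/one/` are the usual first stops when debugging a failed deployment.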
OpenNebulaConf 2016 - The Lightweight Approach to Build Cloud CyberSecurity E... (OpenNebula Project)
In the era of cloud services and the Internet of Things, information security has become a transnational issue. In recent years, large-scale cyber attacks launched through botnets have become a thorny problem for global information security. Taiwan is a frequent target of international hackers due to its high density of information devices, and campus computers are a favorite target. To help tackle this issue, Ezilla, a private cloud toolkit integrated with OpenNebula, has been implemented by the cybersecurity research team at the National Center for High-performance Computing (NCHC), Taiwan. Through Ezilla, which leverages OpenNebula and cybersecurity techniques, cloud users can easily customize and configure a specific cloud security training environment. It is an extremely lightweight approach that helps users access virtual computing resources. The main feature of this project is simplifying the use of clouds. Our goal is to make it painless for cloud security scientists and users to run their own cybersecurity jobs on cloud platforms, including cyber defense exercises, a malware knowledge base, and more. Based on the proposed cybersecurity exercise platform, we have also developed new functions: a private cloud information security training service, a Capture the Flag (CTF) competition service, and a virtual networking service for enterprises.
OpenNebulaConf 2016 - LAB ONE - Vagrant running on OpenNebula? by Florian Heigl (OpenNebula Project)
Do you remember Vagrant? It was that last hipster thing before Docker turned into the most recent hipster thing! It's also still really helpful for software evaluations or lab environments. Normally, it works with VirtualBox on your laptop, but this approach can be too limiting. Even running just 10 VMs becomes a stretch on a laptop. It burns through your battery, SSD lifetime, disk space and threatens how many dozen browser tabs you can open... Enter the Vagrant OpenNebula providers! You can actually control Vagrant on your workstation but have the VMs running on your cloud. There are multiple ways to do that, and also limitations. In the workshop, we'll look at what is possible and how you can best benefit from - oh right! - your cloud!
OpenNebulaConf 2016 - Fast Prototyping of a C.O. into a Micro Data Center - A... (OpenNebula Project)
The document describes Telefónica's OnLife network architecture, which simplifies the network through the use of edge computing. OpenNebula is deployed to provide elastic capacity in the Distributed Processing Centers (CPDs) at the network edge. The document also describes a proof of concept with virtualized nodes running network applications such as ONOS and OpenNebula.
OpenNebulaConf 2016 - Network automation with VR by Karsten Nielsen, Unity Te... (OpenNebula Project)
When you look at automation, there is a difference between automated and automatic. Which would you rather have in your infrastructure, especially when it comes to networking? I would prefer automatic, and the new Virtual Router (VR) in OpenNebula 5 helps us get there.
OpenNebulaconf2017US: Paying down technical debt with "one" dollar bills by ... (OpenNebula Project)
In addition to providing bare-metal access to large amounts of compute, FAS Research Computing (FASRC) at Harvard also builds and fully maintains custom virtual machines tailored to faculty and researcher needs, including lab websites, portals, databases, project development environments, and more, both locally and on public clouds. Recently FASRC converted its internal VM infrastructure from a completely home-made KVM cluster to a more robust and reliable system powered by OpenNebula and Ceph, configured with public cloud integration. Over the years, as the number of VMs grew, our home-made solution started to show signs of wear and tear with respect to scheduling, provisioning, management, inventory, and performance. Our new deployment improves on all of these areas and provides APIs and features that both help us serve clients more efficiently and improve our internal processes for testing new system configurations and dynamically spinning up resources for continuous integration and deployment. Our new VM infrastructure deployment is fully automated via Puppet and has been used to provision a multi-datacenter, fault-tolerant VM infrastructure with a multi-tiered backup system and robust VM and virtual disk monitoring. We will describe our internal system architecture and deployment, challenges we faced, and innovations we made along the way while deploying OpenNebula and Ceph. We will also discuss a new client-facing OpenNebula cloud deployment we’re currently beta testing with select users, where users have full control over the creation and configuration of their VMs on FASRC compute resources via the OpenNebula dashboard and APIs.
Intel's Out of the Box Network Developers Ireland Meetup on March 29 2017 - ... (Haidee McMahon)
For details on Intel's Out of The Box Network Developers Ireland meetup, goto https://www.meetup.com/Out-of-the-Box-Network-Developers-Ireland/events/237726826/
Intel Talk: Enhanced Platform Awareness for OpenStack to increase NFV performance
By Andrew Duignan
Bio: Andrew Duignan is an Electronic Engineering graduate from University College Dublin, Ireland. He has worked as a software engineer in Motorola and now at Intel Corporation. He is now in a Platform Applications Engineering role, supporting technologies such as DPDK and virtualization on Intel CPUs. He is based in the Intel Shannon site in Ireland.
Dockerizing the Hard Services: Neutron and Nova (clayton_oneill)
Talk about the benefits and pitfalls involved in successfully running complex services like Neutron and Nova inside of Docker containers.
Topics include:
* What magic incantations are needed to run these services at all?
* How to prevent HA router failover on service restarts.
* How to prevent network namespaces from breaking everything.
* Bonus: How network namespace fixes also helped fix Cinder NFS backend
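The namespace problem flagged above stems from Docker's private mount namespace hiding the bind mounts that `ip netns` creates under `/run/netns`. A commonly used workaround, sketched here rather than taken from the talk itself (the image name and config mount are hypothetical), is to run the agent container with host networking and shared mount propagation on that path:

```shell
# Hypothetical neutron-l3-agent container: host network and PID namespaces
# plus a *shared* bind mount of /run/netns, so network namespaces created
# inside the container are visible to the host and other containers, and
# survive a restart of this container without orphaning HA routers.
docker run -d --name neutron-l3-agent \
    --privileged --net=host --pid=host \
    -v /run/netns:/run/netns:shared \
    -v /etc/neutron:/etc/neutron:ro \
    my-registry/neutron-l3-agent:latest
```

The same shared-propagation trick applies to any service that manipulates network namespaces from inside a container, which is why it also helped the Cinder NFS backend mentioned in the bonus item.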
Sanger OpenStack presentation March 2017 (Dave Holland)
A description of the Sanger Institute's journey with OpenStack to date, covering RHOSP, Ceph, S3, user applications, and future plans. Given at the Sanger Institute's OpenStack Day.
Cumulus Linux supports great networking; what's next? Matt Peterson (@dorkmatt), our resident expert from the office of the CTO, shares his previous experience, his views on devops, and how Cumulus Networks makes it easier to manage networks with ONIE, ZTP and no CLI! "Devops is a lifestyle, shared responsibility." With Linux as the network OS, "it's all just one apt-get away!"
Bare Metal to OpenStack with Razor and Chef (Matt Ray)
Razor is an open source provisioning tool that was originally developed by EMC and Puppet Labs. It can discover hardware, select images to deploy, and provision nodes using model-based provisioning. The demo showed setting up a Razor appliance, adding images, models, policies, and brokers. It then deployed an OpenStack all-in-one environment to a new VM using Razor and Chef. The OpenStack cookbook walkthrough explained the roles, environments, and cookbooks used to deploy and configure OpenStack components using Chef.
Presentation at the March 2019 Dutch Postgres User Group Meetup on lessons learned while migrating from Oracle to Postgres, demoed via Vagrant test environments and using generic pgbench datasets.
#OktoCampus - Workshop: An introduction to Ansible (Cédric Delgehier)
- A playbook is defined to check if a pattern is present in the content of a web page retrieved from localhost. The playbook registers the content and fails if the defined pattern is not found.
- The playbook is modified to define different patterns for different host groups - the groups "prod" and "recette" would each have their own unique pattern to check for.
- The playbook uses Ansible modules like uri to retrieve a web page, register to store the content, and fail if a registered pattern is not found in the content. Variables and conditionals allow defining patterns dynamically based on host groups.
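The playbook described above might look roughly like this. It is a sketch under stated assumptions: the group names `prod` and `recette` come from the text, while the variable name `expected_pattern` and the URL are illustrative.

```yaml
# group_vars/prod.yml would set:    expected_pattern: "Production OK"
# group_vars/recette.yml would set: expected_pattern: "Staging OK"
- hosts: all
  tasks:
    - name: Retrieve the web page from localhost
      uri:
        url: http://localhost/
        return_content: yes
      register: page

    - name: Fail when the group's pattern is missing
      fail:
        msg: "Pattern '{{ expected_pattern }}' not found in page"
      when: expected_pattern not in page.content
```

Putting the per-group pattern in `group_vars/` keeps the play itself identical for both groups; only the data varies, which is the dynamic-variable idea the workshop demonstrates.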
Cloud Computing in practice with OpenNebula ~ Develer workshop 2012 (Giovanni Toraldo)
This document provides an overview of Cloud Computing using OpenNebula. It discusses OpenNebula's history and features, including virtual infrastructure management, external cloud connectors, monitoring, accounting, and quotas. It also covers OpenNebula's architecture, shared storage options, and monitoring tools like Ganglia and Check_mk. Finally, it provides an overview of OpenNebula's command line interface.
Cloud computing, in practice ~ develer workshop (Develer S.r.l.)
This document summarizes OpenStack Compute features related to the Libvirt/KVM driver, including updates in Kilo and predictions for Liberty. Key Kilo features discussed include CPU pinning for performance, huge page support, and I/O-based NUMA scheduling. Predictions for Liberty include improved hardware policy configuration, post-plug networking scripts, further SR-IOV support, and hot resize capability. The document provides examples of how these features can be configured and their impact on guest virtual machine configuration and performance.
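In Kilo-era Nova these features are requested per flavor through extra specs. A hedged example follows; the flavor name and sizes are illustrative, while the property keys are those documented for the libvirt driver:

```shell
# Pin guest vCPUs to dedicated host cores, back guest RAM with huge pages,
# and confine the guest to a single NUMA node for predictable performance.
openstack flavor create pinned.large --vcpus 4 --ram 8192 --disk 40
openstack flavor set pinned.large \
    --property hw:cpu_policy=dedicated \
    --property hw:mem_page_size=large \
    --property hw:numa_nodes=1
```

The scheduler then only places such instances on hosts whose NUMA topology and free huge pages can satisfy the request, which is where the I/O-based NUMA scheduling discussed in the deck comes into play.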
This presentation was shown at the OpenStack Online Meetup session on August 28, 2014. It is an update to the 2013 sessions, and adds content on Services Plugin, Modular plugins, as well as an Outlook to some Juno features like DVR, HA and IPv6 Support
This document summarizes what's new in Ceph. Key updates include improved management and usability features like simplified configuration, hands-off operation, and device health tracking. It also covers new orchestrator capabilities for Kubernetes and container platforms, continued performance optimizations, and multi-cloud capabilities like object storage federation across data centers and clouds.
Enabling ceph-mgr to control Ceph services via Kubernetes (mountpoint.io)
The document discusses enabling Ceph management services through Kubernetes using Rook and Ceph-mgr. Rook allows deploying Ceph in a containerized way on Kubernetes for simplified management. Ceph-mgr allows controlling Ceph services and integrating with Kubernetes through Rook. This provides multiple ways to consume Ceph based on needs, from simple storage with Rook to full control with Ceph tools. Upcoming improvements will reduce management complexity through automation.
CERN OpenStack Cloud Control Plane - From VMs to K8s (Belmiro Moreira)
CERN is the home of the Large Hadron Collider (LHC), a 27km circular proton accelerator that generates petabytes of physics data every year. To process all this data, CERN runs an OpenStack Cloud (>300K cores) that helps scientists all around the world to unveil the mysteries of the Universe. The Infrastructure is also used to run all the IT services of the Organization.
Delivering these services with high performance and reliable service levels has been one of the major challenges for the CERN Cloud engineering team. We have been constantly iterating on the architecture and deployment model of the Cloud control plane.
In this presentation we will describe the different control plane architecture models that we have relied on over the years. Finally, we will describe all the work done to move the OpenStack Cloud control plane from VMs into a Kubernetes cluster. We will report on our experience running this architecture at scale, and on its advantages and challenges.
This document introduces Docker and provides an overview of its key features and benefits. It explains that Docker allows developers to package applications into lightweight containers that can run on any Linux server. Containers deploy instantly and consistently across environments due to their isolation via namespaces and cgroups. The document also summarizes Docker's architecture including storage drivers, images, and the Dockerfile for building images.
Docker Introduction, and what's new in 0.9 — Docker Palo Alto at RelateIQ (Jérôme Petazzoni)
Docker is the Open Source container engine. This is an introduction to Docker, what it is, how it works, and some material presenting the new features in versions 0.8 and 0.9.
Tuesday, July 30th session of the vBrownBag OpenStack Sack Lunch Series: Couch to OpenStack. We cover Nova, the Compute Service that deploys and runs VMs.
Similar to TechDay - Cambridge 2016 - OpenNebula at Harvard University (20)
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ... (OpenNebula Project)
We've made our way into the world of open cloud — where each organization can find the right cloud for its unique needs. A single cloud management platform cannot be all things to all people. There will be a cloud space with several offerings focused on different environments and/or industries. The OpenNebula commitment to the open cloud is at the very base of its mission — to become the simplest cloud enabling platform — and its purpose — to bring simplicity to the private and hybrid enterprise cloud. OpenNebula exists to help companies build simple, cost-effective, reliable, open enterprise clouds on existing IT infrastructure. The OpenNebula Conference will be a great opportunity to communicate and share our vision and commitment, to look back at how the project has grown in the last 9 years, and to shed some insight into what to expect from the project in the near future.
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C... (OpenNebula Project)
Computer networks are undergoing phenomenal growth, driven by the rapidly increasing number of nodes constituting them. At the same time, the number of security threats on Internet and intranet networks is constantly increasing, and the testing and experimentation of cyber defense solutions require separate test environments that best reflect the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios and cyber security training activities, thus enabling enterprises to study cyber defense strategies and allowing security researchers to evaluate their algorithms at scale.
The main objective is delivering to researchers and practitioners an overview of the technological means and the practical steps to setup a private cloud platform based on OpenNebula for the creation and management of virtual environments that support cyber-security activities of training and testing, as well as an overview of its possible applications in the cyber security domain.
In particular:
1. We describe our infrastructure based on OpenNebula
2. We overview our application, sitting on top of OpenNebula, as well as the technological tools involved in the management of its lifecycle (e.g., Ansible).
3. We show how the platform can support various examples of security research activities
[References] Building an emulation environment for cyber security analyses of complex networked systems, Tanasache, Florin Dragos and Sorella, Mara and Bonomi, Silvia and Rapone, Raniero and Meacci, Davide, ICDCN '19, ACM, 2019
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli... (OpenNebula Project)
I will be presenting the ongoing advances of the OnLife Networks project across Spain and Brazil, with a focus on use cases we have implemented in the Central Offices, which serve as the edge resources closest to the end user. I will share an interesting synopsis of the project's evolution, as well as several lessons learned.
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man... (OpenNebula Project)
Insight into more than 6 years experience with OpenNebula from different perspectives: ISP & Datacenter Provider and Consultant / System Integrator
Lessons learned, "the dos and don'ts" and how we convince and enable customers with OpenNebula - and the NTS ecosystem.
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux... (OpenNebula Project)
OpenNebula users have a range of storage options available to them, including proprietary appliances, proprietary software and Open Source software projects. This session will present a fully Open Source approach, that tightly integrates with Linux, and makes full use of the mature building blocks within the Linux kernel (LVM, Software RAID, DM-crypt, NVMe-oF Target, DRBD, etc...), and delivers one of the highest performance open source storage stacks currently available.
The core goal is to expose the improved performance of NVMe storage devices to VMs and containers. The solution covers both local NVMe drives and NVMe-oF. For interacting with NVMe-oF targets it supports the Swordfish-API and LVM & Linux’s software NVMe-oF target. The solution contains a storage addon for OpenNebula.
Our take on centralized and controlled VM image backups that deals with both Ceph and local QCOW2 datastores. As there are no default means of executing image backups in OpenNebula, I'd like to share our perspective on how we do it.
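As a rough illustration of what such a backup can look like for the two datastore types (a sketch with illustrative pool, image, and path names, not the presenter's actual tooling):

```shell
# Ceph datastore: snapshot the RBD volume for a consistent point in time,
# export that snapshot, then drop the snapshot again.
rbd snap create one/one-42-disk-0@backup
rbd export one/one-42-disk-0@backup /backups/one-42-disk0.raw
rbd snap rm one/one-42-disk-0@backup

# Local QCOW2 datastore: copy with qemu-img to get a standalone image
# (collapses any backing-file chain into a single file).
qemu-img convert -O qcow2 \
    /var/lib/one/datastores/0/42/disk.0 /backups/one-42-disk0.qcow2
```

Centralizing this per-VM logic in one driver-aware script is what makes the backups "controlled": the same entry point works regardless of which datastore backs a given disk.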
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph... (OpenNebula Project)
At Iguane Solutions, many of our "DevOps" tools are developed in Golang, and we have a good amount of experience contributing to GOCA. I'll review the contributions we make, as well as how we use GOCA with different tools, on a daily basis, to manage and monitor our OpenNebula cloud.
I will delve into the concept of Infrastructure as Code (deployment of VM instances on the cloud) and also address metrics collection for deployed VMs. Finally, I will present how we can abstract VM management with automation tools thanks to GOCA.
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul... (OpenNebula Project)
The document discusses disaggregated data centers using OpenNebula. It describes how OpenNebula allows for scalability through elasticity and avoids issues from human/configuration errors. It discusses types of scalability like predictable, mixed/emergency, and unpredictable scalability. It also briefly discusses provisioning tools like Oneprovision and using provision templates in YAML format.
A deep insight into a project with the codename "TARDIS" at HAUFE Lexware, whose purpose is to replace vCloud with OpenNebula. A technical deep dive into a focused project done by real DevOps experts.
How and what we do with OpenNebula to enable our customers for a completely new way of consuming IT in a modern, service-oriented environment. We will also discuss why we chose OpenNebula, and how deep the level (and ability) of integration of the NTS CAPTAIN is into existing 2nd- and 3rd-party tools like IPAM, CMDBs, backup, monitoring, approval processes, and much more...
TeleData operates a purpose-built, enterprise-ready IaaS cloud platform in the Lake Constance region. OpenNebula has been used in production for several years. TeleData will share insight into the lessons learned and a brief summary of how to operate a public cloud built on top of OpenNebula. Content is subject to change!
Performant and Resilient Storage: The Open Source & Linux Way (OpenNebula Project)
OpenNebula users have a range of storage options available to them, including proprietary appliances, proprietary software and Open Source software projects. This session will present a fully Open Source approach, that tightly integrates with Linux, and makes full use of the mature building blocks within the Linux kernel (LVM, Software RAID, DM-crypt, NVMe-oF Target, DRBD, etc...), and delivers one of the highest performance open source storage stacks currently available. The core goal is to expose the improved performance of NVMe storage devices to VMs and containers. The solution covers both local NVMe drives and NVMe-oF. For interacting with NVMe-oF targets it supports the Swordfish-API and LVM & Linux’s software NVMe-oF target. The solution contains a storage addon for OpenNebula.
NetApp’s Hybrid Cloud Infrastructure leverages Kubernetes for a hybrid multi-cloud use case into which OpenNebula integrates seamlessly. A technical deep dive into how NTS and NetApp integrated NTS Captain into NetApp’s Data Fabric world on top of NetApp HC
This document provides information about using OpenNebula's oneprovision tool to provision clusters in a cloud environment. It includes an example of a provision template that specifies details like the driver, project, OS, and networking. It outlines commands for creating and managing provisioned clusters, hosts, datastores and networks. These include listing, deleting, power control and SSH access for hosts. The goal is to demonstrate how to provision resources and hosts on demand using OpenNebula.
Alejandro Huertas Herrero discusses cloud disaggregation with OpenNebula which enables building OpenNebula clouds on public cloud providers and across various data centers in a flexible, easy, fast and compatible way that is transparent to end users. The new Disaggregated Data Centers feature in OpenNebula 5.8.1 uses the oneprovision command and provision drivers to deploy hosts on cloud providers like EC2 and Packet and fully configure them as KVM hypervisors or LXD containers. Provision templates in YAML format are used to describe the new provision including cloud credentials, hardware configuration, resources to create, and connection details.
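A provision template of the kind described might look roughly like this. It is a sketch modeled on the 5.8-era oneprovision format; the provider credentials, playbook name, and instance details are placeholders, not values from the talk.

```yaml
---
name: edge-cluster
playbook: static_vms          # Ansible playbook used to configure the hosts
defaults:
  provision:
    driver: ec2               # provision driver: ec2, packet, ...
    access_key: "REPLACE_ME"
    secret_key: "REPLACE_ME"
    region: us-east-1
hosts:
  - im_mad: kvm               # monitor the new host as a KVM hypervisor
    vm_mad: kvm
    provision:
      hostname: edge-kvm-1
      instance_type: i3.metal # bare-metal instance so KVM can run natively
```

Feeding a file like this to `oneprovision create` deploys the remote host, configures it as a hypervisor, and registers the host, datastores, and networks in OpenNebula in one step.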
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
The Microsoft 365 Migration Tutorial For Beginner.pptx (operationspcvita)
This presentation will help you understand the power of Microsoft 365. We cover every productivity app included in Office 365, describe common Office 365 migration scenarios, and explain how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022. We'll see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the experience in Ukraine.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe (Precisely)
2. About FAS RC
Our OpenNebula setup:
- OpenNebula and Ceph hardware
- Network setup
Our configuration with puppet:
- opennebula-puppet-module
- roles/profiles
- Config within OpenNebula
Context scripts / load testing
Use cases for OpenNebula at RC
Things we’d love to see
Agenda
4. Overview of Odyssey
•150 racks spanning 3 data centers across 100 miles using 1 MW power
•60k CPU cores, 1M+ GPU Cores
•25 PB (Lustre, NFS, Isilon, Gluster)
•10 miles of cat 5/6 + IB cabling
•300k lines of Puppet code
•300+ VMs
•2015: 25.7 million jobs, 240 million CPU hours
6. Where we’re coming from
● Previous kvm infrastructure:
○ One datacenter
○ 4 C6145s (8 blades, 48 core/ 64 core, 256GB ram)
○ 2 10GbE switches, but active-passive rather than 802.3ad LACP
○ 2 R515 replicated gluster
● VM provisioning process very manual
○ add to dns
○ add to cobbler for dhcp
○ edit in cobbler web GUI if changing disk, ram, or cpu
○ run virt-builder script to provision on a hypervisor (manually selected for load-balancing)
■ Full OS install, and puppet run from scratch - takes a long time
● Issues:
○ Storage issues with gluster: heals run client-side (on the kvm hypervisors), VMs going read-only
○ Management very manual: changing capacity, etc
7. Hardware Setup - OpenNebula
● Hypervisors (nodes):
○ 8 Dell R815
■ 4 each in 2 datacenters
○ 64 core, 256GB ram
○ Intel X520 2-port 10GbE, LACP
● Controller:
○ Currently one node serves as both controller and hypervisor; the controller function can be moved to a different node manually if the db is on replicated mysql (tested using galera)
8. Hardware Setup - Ceph
● OSDs:
○ 10 Dell R515
■ 5 each in 2 primary datacenters
○ 16 core, 32GB ram
○ 12x 4TB
○ Intel X520 2-port 10GbE, LACP
● Mon:
○ 5 Dell R415
■ 2 each in 2 primary datacenters
■ 1 in a 3rd datacenter as a tie-breaker
○ 8 core, 32GB ram
○ 2x 120GB SSD, raid1 for mon data device
○ Intel X520 2-port 10GbE, LACP
● MDS
○ Currently using cephfs for opennebula system datastore mount
○ MDS running on one of the mons
9. Network Setup
2x Dell Force10 S4810 10GbE switches in each of the 2 primary datacenters (with 2x 10Gb between datacenters)
2x twinax (one from each switch) to each of the opennebula and ceph nodes, bonded LACP (802.3ad)
Tagged 802.1Q vlans for:
1. Admin (ssh, opennebula communication, sunstone, puppet, nagios monitoring, etc; MTU 1500)
2. Ceph-client network (used by clients, i.e. the opennebula hypervisors, to access ceph; routes only to the ceph-client vlans in other datacenters; MTU 9000)
3. Ceph-cluster network (backend ceph network; routes only to the ceph-cluster vlans in other datacenters; only on ceph OSDs; MTU 9000)
4. Opennebula guest vm networks
a. Some in one datacenter only, some span both datacenters
Note that vlan (1) still needs to be tagged even though it carries a normal MTU of 1500: the bond itself must have MTU 9000 so that (2) and (3) can run at 9000, and tagging (1) is what lets its interface be pinned back down to 1500
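Concretely, on a RHEL-style node the bond and the pinned-down admin vlan can be sketched as ifcfg files like these (interface names, vlan id 100, and addressing are hypothetical):

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- LACP bond, jumbo frames
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=802.3ad miimon=100"
MTU=9000
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.100 -- tagged admin vlan, MTU back at 1500
DEVICE=bond0.100
VLAN=yes
MTU=1500
ONBOOT=yes
BOOTPROTO=none
```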
14. Configuring OpenNebula with puppet
Installation:
● PXE boot - OS installation, runs puppet
● Puppet - bond configuration, tagged vlans, yum repos, opennebula and sunstone passenger installation and configuration
○ Combination of local modules and upstream (mysql, apache, galera, opennebula)
● Puppetdb - exported resources to add newly-built hypervisors as onehosts on the controller and, if using nfs for the system datastore, to add them to /etc/exports on the controller and to pick up the mount of /one
Ongoing config management:
● Puppet - adding vnets, address ranges, security groups, datastores (for various ceph pools, etc)
● Can also create onetemplates and onevms
15. OpenNebula puppet module
Source: https://github.com/epost-dev/opennebula-puppet-module
Or: https://forge.puppet.com/epostdev/one (not up-to-date currently)
(Deutsche Post E-Post Development)
Puppet module to install and manage opennebula:
● Installs and configures opennebula controller and hypervisors
○ Takes care of package installs
○ Takes care of adding hypervisor as onehost on controller (using puppetdb)
● Can also be used for ongoing configuration management of resources inside opennebula: configuring onevnets, onesecgroups, etc
16. Minimum code to set up opennebula with puppet:
package { 'rubygem-nokogiri':
  ensure => installed,
} ->
class { '::one':
  oned               => true,
  sunstone           => true,
  sunstone_listen_ip => '0.0.0.0',
  one_version        => '4.14',
  # can be encrypted using eyaml if passing this via hiera
  ssh_priv_key_param => '-----BEGIN RSA PRIVATE KEY-----...',
  ssh_pub_key        => 'ssh-rsa...',
} ->
# this onehost resource is only needed if not using puppetdb
onehost { $::fqdn :
  im_mad => 'kvm',
  vm_mad => 'kvm',
  vn_mad => '802.1Q',
}
18. Puppet Roles/Profiles
Puppet roles/profiles provide a framework to group technology-specific configuration (modules, groups of modules, etc) into profiles, and then combine profiles into a role for each server or type of server.
- http://www.craigdunn.org/2012/05/239/
- http://garylarizza.com/blog/2014/02/17/puppet-workflow-part-2/
- https://puppet.com/podcasts/podcast-getting-organized-roles-and-profiles
19. OpenNebula roles
# opennebula base role
class roles::opennebula::base inherits roles::base {
include ::profiles::storage::ceph::client
include ::profiles::opennebula::base
}
# opennebula hypervisor node
class roles::opennebula::hypervisor inherits roles::opennebula::base {
include ::profiles::opennebula::hypervisor
include ::profiles::opennebula::hypervisor::nfs_mount
}
# opennebula controller node
class roles::opennebula::controller inherits roles::opennebula::base {
include ::profiles::opennebula::controller
include ::profiles::opennebula::controller::nfs_export
include ::profiles::opennebula::controller::local_mysql
include ::profiles::opennebula::controller::mysql_db
include ::profiles::opennebula::controller::sunstone_passenger
}
21. OpenNebula profiles: NFS mount on hypervisors
class profiles::opennebula::hypervisor::nfs_mount (
  $oneid    = $::one::oneid,
  $puppetdb = $::one::puppetdb,
) {
  # exported resource to add myself to /etc/exports on the controller
  @@concat::fragment { "export_${oneid}_to_${::fqdn}":
    tag     => $oneid,
    target  => '/etc/exports',
    content => "/one ${::fqdn}(rw,sync,no_subtree_check,root_squash)\n",
  }
  # set up the mount of /one from the head node
  if $::one::oned != true {
    # not on the head node, so mount it:
    # pull in the mount that the head node exported
    Mount <<| tag == $oneid and title == "${oneid}_one_mount" |>>
  }
}
(The @@concat::fragment is exported to the controller; the Mount is collected from the controller. Note that the collected mount has a 2-run dependence before completing successfully, but puppet will continue past the error on the first run.)
22. OpenNebula profiles: NFS export on controller node
class profiles::opennebula::controller::nfs_export (
  $oneid = $::one::oneid,
) {
  concat { '/etc/exports':
    ensure  => present,
    owner   => root,
    group   => root,
    require => File['/one'],
    notify  => Exec['exportfs'],
  }
  # collect the fragments that have been exported by the hypervisors
  Concat::Fragment <<| tag == $oneid and target == '/etc/exports' |>>
  # export a mount that the hypervisors will pick up
  @@mount { "${oneid}_one_mount":
    ensure  => 'mounted',
    name    => '/one',
    tag     => $oneid,
    device  => "${::fqdn}:/one",
    fstype  => 'nfs',
    options => 'soft,intr,rsize=8192,wsize=8192',
    atboot  => true,
    require => File['/one'],
  }
}
(The Concat::Fragment resources are collected from the hypervisors; the @@mount is exported for the hypervisors to pick up.)
28. Other puppetized configs: XMLRPC SSL
one::oned_port: 2634
profiles::web::apache::vhosts:
  opennebula-xmlrpc-proxy:
    vhost_name: <fqdn>
    docroot: /var/www/html/  # doesn’t matter, just needs to be there for the vhost
    port: 2633
    ssl: true
    ssl_cert: "/etc/pki/tls/certs/%{hiera('one::oneid')}_xmlrpc_cert.cer"
    ssl_key: "/etc/pki/tls/private/%{hiera('one::oneid')}_xmlrpc.key"
    proxy_pass:
      path: '/'
      url: 'http://localhost:2634/'

file { '/var/lib/one/.one/one_endpoint':
  ensure  => file,
  owner   => 'oneadmin',
  group   => 'oneadmin',
  mode    => '0644',
  content => "http://localhost:${oned_port}/RPC2\n", # localhost doesn't use the ssl port
  require => Package['opennebula-server'],
  before  => Class['one::oned::service'],
}

ONE_XMLRPC=https://<fqdn of controller>:2633/RPC2  # for end user CLI access
30. Configuration inside OpenNebula once it’s running
Types provided by opennebula-puppet-module:
onecluster
onedatastore
onehost
oneimage
onesecgroup
onetemplate
onevm
onevnet
onevnet_addressrange
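For example, adding a cluster and a host with these types might look like the following (a sketch only: the onehost parameters match the earlier minimum-code slide, but `ensure` and the onecluster usage are assumptions; check the module's type documentation for exact attributes):

```puppet
# Hypothetical names; onehost parameters as shown earlier in this deck.
onecluster { 'holyoke':
  ensure => present,
}

onehost { 'hypervisor01.example.org':
  ensure => present,
  im_mad => 'kvm',
  vm_mad => 'kvm',
  vn_mad => '802.1Q',
}
```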
34. Context scripts for load testing
Graphite/Grafana vm
Diamond, bonnie++, dd, etc for the load test vms:
35. Context script to configure diamond and load tests
#!/bin/bash
source /mnt/context.sh
cd /root
yum install -y puppet
puppet module install garethr-diamond
puppet module install stahnma-epel
...
cat > diamond.pp <<EOF
class { 'diamond':
graphite_host => "$GRAPHITE_HOST",
...
EOF
puppet apply diamond.pp
diamond
if echo "$LOAD_TESTS" | grep -q dd ; then
dd if=/dev/urandom of=/tmp/random_file bs=$DD_BLOCKSIZE count=$DD_COUNT
for i in $(seq 1 $DD_REPEATS); do
date >> ddlog
sync; { time { time dd if=/tmp/random_file of=/tmp/random_file_copy ; sync ; } ; } 2>> ddlog
...
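A trimmed, self-contained stand-in for the dd timing loop above (tiny sizes so it runs anywhere; the real test drives sizes and repeat counts from the $DD_* context variables):

```shell
#!/bin/bash
# Stand-in for the context-script load test: time copying a file, logging each run.
random_file=$(mktemp)
copy_file=$(mktemp)
ddlog=$(mktemp)
dd if=/dev/urandom of="$random_file" bs=4096 count=16 2>/dev/null
for i in 1 2 3; do
  date >> "$ddlog"
  # bash's `time` keyword reports on the copy plus the sync that flushes it
  sync; { time { dd if="$random_file" of="$copy_file" 2>/dev/null; sync; } ; } 2>> "$ddlog"
done
grep -c real "$ddlog"   # one "real" line per timed run
```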
39. Use cases in RC
● Streamlining and centralizing management of VMs
● Creating test vms: with OpenNebula it is much easier to create and manage the one-off vms needed to test something out (which makes it less likely that testing happens in production)
● Automatically spinning up vms to test code: when making a change in puppet, have a git hook first do a test run, in temporary opennebula vms, on each category of system we have
● Oneflow templates, and HA for client applications by leveraging two datacenters
● Elastic HPC: spin up and down compute nodes as needed
41. Things we’d love to see
● Confining certain vlans to certain hosts without segmenting into clusters (vlans and datastores can be in multiple clusters in 5.0)
● Folders or other groupings on the vm list, template list, security groups, etc, to organize large numbers of them in the sunstone view (labels coming in 5.0)
● Image resize, not just when launching a VM (coming in 5.0)
● Oneimage upload from the CLI, not just specifying a path local to the frontend
● Onefile update from the CLI
● Dynamic security groups with auto commit (coming in 5.0)
● Private vlan / router handling (with certain 802.1Q vlan ids trunked to hypervisors; coming in 5.0)
● Changelog on onetemplates, onevm actions, etc (the user is visible in oned.log, but not the changes)
● Sunstone: show the VM name, not just the ID, when taking an action such as shutdown
● Sunstone: change the name of “shutdown” to describe what will actually happen for non-persistent VMs
● Sunstone: show the eth0 IP on the vm info page, or add a copy button for the IP on the vm list page
● Move the Ctrl-Alt-Del button away from the X button that closes VNC (or prompt for confirmation)