Hyperconvergence integrates compute, storage, networking and virtualization resources from scratch in a commodity hardware box supported by a single vendor. It is software-focused and offers scalability, performance, centralized management and reliability. StorPool is storage software that can be installed on servers to pool and aggregate the capacity and performance of their drives. It provides standard block devices and replicates data across drives and servers for redundancy. StorPool integrates fully with OpenNebula to provide a robust hyperconverged infrastructure on commodity hardware using distributed storage.
OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M...OpenNebula Project
In this 90-minute hands-on workshop, some of the key contributors to OpenNebula will walk attendees through the configuration and integration aspects of the computing subsystem in OpenNebula. The session will also include lightning talks by community members describing aspects related to Hypervisors and Containers with OpenNebula:
Deployment scenarios
Integration
Tuning & debugging
Best practices
OpenNebulaConf 2016 - The DRBD SDS for OpenNebula by Philipp Reisner, LINBITOpenNebula Project
You will learn what DRBD is and where it came from over its 15 years of existence, how it evolved into a software-defined storage solution of interest to OpenNebula users, and why it is very well suited for hyperconverged deployment architectures. The presentation will contain IO performance results and (if time permits) a live demo.
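For context on what DRBD replication looks like in practice, a minimal two-node resource definition might read as follows (DRBD 8-style syntax; host names, backing devices and addresses are placeholders, not from the talk):

```
resource r0 {
  protocol  C;              # synchronous replication
  device    /dev/drbd0;
  disk      /dev/sdb1;      # placeholder backing device
  meta-disk internal;
  on node-a {
    address 10.0.0.1:7789;  # placeholder replication link
  }
  on node-b {
    address 10.0.0.2:7789;
  }
}
```

With a definition like this in place on both nodes, the replicated `/dev/drbd0` block device can back VM images in a hyperconverged pair.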
Practical information on how to Optimize Virtual Machines for High Performance by Boyan Krosnov, Chief Product Officer at StorPool Storage
Presentation delivered at OpenNebula TechDay Sofia on February 25th, 2016
OpenNebulaConf 2016 - Building a GNU/Linux Distribution by Daniel Dehennin, M...OpenNebula Project
How does OpenNebula ease the development and testing of our GNU/Linux distribution?
We have been building a turnkey GNU/Linux distribution for the Ministère de l’Éducation nationale (France) since 2001, and we started using OpenNebula 3 years ago to smooth the development and testing of our solutions. We will follow our agile team in their day-to-day use of OpenNebula.
OpenNebulaConf 2016 - Measuring and tuning VM performance by Boyan Krosnov, S...OpenNebula Project
In this session we'll explore measuring VM performance and evaluating changes to settings or infrastructure that can affect performance positively. We'll also share current best practices for high-performance cloud architecture from our experience.
OpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebulaOpenNebula Project
In this 90-minute hands-on workshop, some of the key contributors to OpenNebula will walk attendees through the configuration and integration aspects of the storage subsystem in OpenNebula. The session will also include lightning talks by community members describing aspects related to Storage with OpenNebula:
Deployment scenarios
Integration
Tuning & debugging
Best practices
Talk held by Javier Fontan at the CentOS Dojo in Paris, August 25th (http://wiki.centos.org/Events/Dojo/Paris2014)
In this talk we discuss OpenNebula from the perspective of CentOS, explaining tips and considerations for power users.
In this tutorial you will learn the basics of an OpenNebula deployment. To follow this tutorial you will need this OVA: https://s3.amazonaws.com/one-tutorials/OpenNebula-Tutorial-4.14.2.ova
OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological...OpenNebula Project
Cloud providers are constantly addressing the technology limitations of their infrastructures, which must be overcome to meet customer needs. In this presentation, we will demonstrate how the technological agnosticism and management flexibility of OpenNebula have allowed Todoencloud to provide the most efficient open source solution to the needs of its customers, choosing the most appropriate virtualization technology (Xen and KVM), storage approach (ZFS vs Ceph), cloud bursting solutions (Azure, Amazon) and customized networking topologies.
Red Hat Enterprise Linux: Open, hyperconverged infrastructureRed_Hat_Storage
The next generation of IT will be built around flexible infrastructures and operational efficiencies, lowering costs and increasing overall business value in the organization.
A hyperconverged infrastructure built on Red Hat supported technologies, including Linux, Gluster storage, and the oVirt virtualization manager, will run on commodity x86 servers using the performance of local storage to deliver a cost-effective, modular, highly scalable, and secure hyperconverged solution.
OpenNebulaConf 2016 - Networking, NFVs and SDNs Hands-on Workshop by Rubén S....OpenNebula Project
In this 90-minute hands-on workshop, some of the key contributors to OpenNebula will walk attendees through the configuration and integration aspects of the networking subsystem in OpenNebula. The session will also include lightning talks by community members describing aspects related to Networking, NFVs and SDNs with OpenNebula:
- Deployment scenarios
- Integration
- Tuning & debugging
- Best practices
CloudStack Automated Integration Testing with Marvin NetApp
Integration testing can be an intimidating task to tackle. Where do you start? What is Marvin? What is Jenkins and how does it apply to my testing efforts? How can I leverage a virtual infrastructure to minimize the number of physical hosts that are required? This presentation discusses approaches to leveraging these tools and building an automated regression test suite for a collection of ever-growing features.
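To illustrate the shape of such an automated regression test, here is a generic unittest sketch of the deploy/assert/teardown pattern the talk describes. This is not Marvin's actual API; `FakeCloudClient` is an invented in-memory stand-in so the sketch runs without a real cloud:

```python
import unittest


class FakeCloudClient:
    """In-memory stand-in for a cloud API client (invented for this sketch)."""

    def __init__(self):
        self.vms = []

    def deploy_vm(self, template, offering):
        # A real client would call the cloud API; here we just record the request.
        vm = {"template": template, "offering": offering, "state": "Running"}
        self.vms.append(vm)
        return vm

    def cleanup(self):
        # Tear down everything the test created.
        self.vms.clear()


class DeployVmRegressionTest(unittest.TestCase):
    """Deploy a resource, assert on its state, tear it down."""

    def setUp(self):
        self.client = FakeCloudClient()

    def test_deploy_vm_reaches_running_state(self):
        vm = self.client.deploy_vm(template="centos-6", offering="small")
        self.assertEqual(vm["state"], "Running")

    def tearDown(self):
        self.client.cleanup()
```

In a Jenkins-driven setup like the one described, a runner would collect and execute tests of this shape against a freshly provisioned virtual infrastructure on every commit.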
OpenNebulaConf2015 2.03 Docker-Machine and OpenNebula - Jaime MelisOpenNebula Project
Introduction to OpenNebula’s integration with Docker Machine, or how to run Docker containers in your cloud without breaking a sweat, followed by an open discussion about what the future holds for Docker in OpenNebula.
Konrad Wilk is a Software Development Manager at Oracle. His group’s mission is to make Linux and Xen Project virtualization better and faster. As part of this work, Konrad has been the maintainer of the Xen Project subsystem in Linux, Xen Project maintainer and now also Release Manager for the 4.5 release of the Xen Project Hypervisor. Konrad has been active in the Linux and Xen Project communities for more than 6 years and was instrumental in adding Xen Project support to the Linux Kernel.
An introduction to the basics of primary storage in CloudStack, including a discussion of the challenges of guaranteeing storage performance in a cloud. Learn how to leverage the latest enhancements to CloudStack to enable storage administrators to deliver consistent, repeatable performance to 10s, 100s or 1,000s of application workloads in parallel. View now for a detailed look at CloudStack enhancements, the management benefits they provide, and common go-to-market approaches.
OpenNebula Conf 2014 | Using Ceph to provide scalable storage for OpenNebula ...NETWAYS
Ceph is an open source distributed storage system that provides object, block and file interfaces. The Ceph block device interface (RBD) and object interface (RGW) are popular building blocks in private cloud deployments, and OpenNebula includes a datastore driver for Ceph.
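As a rough illustration of that datastore driver, an OpenNebula Ceph datastore is registered from a template along these lines (pool, monitor hosts, user and bridge host are placeholder values; check the OpenNebula Ceph datastore guide for your version):

```
NAME        = ceph_ds
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one            # placeholder Ceph pool
CEPH_HOST   = "mon1 mon2"    # placeholder monitor hosts
CEPH_USER   = libvirt        # placeholder cephx user
BRIDGE_LIST = "ceph-bridge"  # placeholder storage bridge host
```

Saved as, say, `ceph.ds`, the template would be registered with `onedatastore create ceph.ds`, after which images in that datastore are stored as RBD volumes.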
KVM (Kernel-based Virtual Machine) is a full virtualization solution built into the Linux kernel. OpenStack Foundation user surveys consistently indicate that KVM is the most commonly used hypervisor for OpenStack deployments, managed using the Libvirt driver for OpenStack Compute (Nova). Despite this sustained popularity, development of the driver, and indeed of the underlying hypervisor itself, continues at a frantic pace.
This presentation will help you make sense of it all, starting with an overview of the way Nova, Libvirt, and KVM interact, before analysing progress made in Kilo on utilizing key Libvirt/KVM features in Nova, including:
Instance vCPU pinning
Huge page backed instances
Enhanced NUMA topology awareness
...and more! The session will close with a discussion of how, in addition to exposing existing Libvirt/KVM features, emerging OpenStack use cases - such as Network Function Virtualization (NFV) and High Performance Computing (HPC) - are driving open innovation in the Libvirt, QEMU, and KVM projects themselves.
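For reference, the three features listed above surface in the libvirt domain XML roughly as follows (a trimmed sketch; CPU, core and node numbers are illustrative):

```xml
<domain type='kvm'>
  <!-- Instance vCPU pinning: pin 2 vCPUs to specific host cores -->
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
  </cputune>
  <!-- Huge page backed instances: back guest RAM with huge pages -->
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <!-- NUMA topology awareness: constrain guest memory to one host node -->
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
</domain>
```

Nova generates XML of this shape from flavor extra specs, so operators usually never edit it by hand.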
OpenNebula Conf 2014: CentOS, QA an OpenNebula - Christoph GaluschkaNETWAYS
CentOS, the Community Enterprise OS, uses OpenNebula as the virtualization platform for its automated QA process. The OpenNebula setup consists of 3 nodes, all running CentOS 6, which handle the following tasks:
– sunstone as cloud controller
– local mirror/DNS-Server/http-Server for the VMs to pull in packages
– one VM to run a jenkins instance to launch the various tests (ci.de.centos.org)
– nginx on the cloud controller to forward http traffic to the jenkins VM
A public git repository (http://www.gitorious.org/testautomation) is used to allow whoever wants to contribute to pull the current test suite – t_functional, a series of bash scripts used to do functional tests of various applications, binaries, configuration files and trademark issues. As new tests are added to the repo via personal clones and merge requests, those tests first need to complete a test run via Jenkins. Each test run currently consists of 4 VMs (one for each arch for C5 and C6 – C7 to come), which run the complete test suite. All VMs used for these tests are instantiated and torn down on demand, whenever the call to test-run a personal clone is issued (via IRC).
Once a run completes successfully, the request is merged into the main repo. The Jenkins node monitors this repository and automatically triggers another complete test run.
Besides these triggered test runs, the test suite also runs automatically every day. This is used to verify the functionality of published updates – a handful of faulty updates have already been discovered this way.
Besides t_functional, the Linux Test Project Suite of tests is also run on a daily basis, also to verify functionality of the OS and all updates.
The third setup is used to test the availability and functional integrity of published Docker images for CentOS.
All these tests are later – during the QA-phase of a point release – used to verify functionality of new packages inside the CentOS QA-Setup.
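A t_functional-style check boils down to a small bash script that runs a command and reports pass/fail. A minimal sketch of the pattern (helper names are illustrative, not the exact t_functional API):

```shell
#!/bin/bash
# Sketch of a functional check in the t_functional style:
# run a probe, log the outcome, return 0 on pass and 1 on fail.

t_log() { echo "[$(date +%H:%M:%S)] $*"; }

check_binary() {
    # Pass if the named binary is on PATH, fail otherwise.
    if command -v "$1" >/dev/null 2>&1; then
        t_log "PASS: $1 present"
        return 0
    else
        t_log "FAIL: $1 missing"
        return 1
    fi
}

check_binary bash
```

A suite is then just a directory of such scripts, executed in sequence by the Jenkins-driven VMs described above.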
OpenNebulaConf 2016 - Evolution of OpenNebula at Netways by Sebastian Saemann...OpenNebula Project
We at Netways have been using OpenNebula in production for more than 4 years now. I will talk about the evolution of our cloud infrastructure from the early days to now, with a focus on the current setup and its components, including Ceph, Puppet/Foreman and Fog.
OpenNebulaConf 2016 - VTastic: Akamai Innovations for Distributed System Test...OpenNebula Project
VTastic: Akamai Innovations for Distributed System Testing - Jack Wadden, Akamai
Akamai Technologies’ CDN platform is a complex, highly integrated distributed system consisting of over 200,000 servers in over 120 countries. Processing over 3 trillion web requests per day, the Akamai platform regularly serves over 30 Tbps of traffic to end users around the world. Setup and maintenance of Akamai integration test environments involves a significant investment of hardware, time and subject matter expertise. As a result, these environments are a scarce resource. Using OpenNebula, Akamai has developed a system for saving and cloning multi-node integration test environments on demand. The system is succeeding and has the potential to revolutionize Akamai’s approach to software development and testing. After exploring Akamai’s platform architecture and testing challenges, we will describe the key innovations that enabled the VTastic solution, challenges we faced in implementing a reliable system, and future capabilities the system can offer.
OpenNebulaConf 2016 - Network automation with VR by Karsten Nielsen, Unity Te...OpenNebula Project
When you look at automation, there is automation and there is automatic. Which would you rather have in your infrastructure, also when it comes to networking? I would prefer automatic, and the new Virtual Router (VR) in OpenNebula 5 helps us get there.
OpenNebulaConf 2016 - The Lightweight Approach to Build Cloud CyberSecurity E...OpenNebula Project
In the era of cloud services and the Internet of Things, information security has become a transnational issue. In recent years, large-scale cyber attacks launched through botnets have become a thorny issue for global information security. Taiwan is a frequent target of international hackers due to its high density of information devices, and campus computers are a favorite of attackers. To help tackle this issue, Ezilla, a private cloud toolkit integrated with OpenNebula, has been implemented by the cybersecurity research team at the National Center for High-performance Computing (NCHC), Taiwan. Through Ezilla, which leverages OpenNebula and cybersecurity techniques, cloud users can easily customize and configure a specific cloud security training environment. It is an extremely lightweight approach that helps users access virtual computing resources. The main feature of this project is simplifying the utilization of clouds. Our goal is to let cloud security scientists and users painlessly run their own cybersecurity jobs on cloud platforms, including Cyber Defense Exercises, a Malware Knowledge Base, etc. Based on the proposed CyberSecurity Exercise Platform, we have also developed new functions: a private cloud information security training service, a Capture the Flag (CTF) competition service, and a virtual networking service for enterprises.
OpenNebulaConf 2016 - LAB ONE - Vagrant running on OpenNebula? by Florian HeiglOpenNebula Project
Do you remember Vagrant? It was that last hipster thing before Docker turned into the most recent hipster thing! It's also still really helpful for software evaluations or lab environments. Normally, it works with VirtualBox on your laptop, but this approach can be too limiting. Even running just 10 VMs becomes a stretch on a laptop. It burns through your battery, SSD lifetime, disk space and threatens how many dozen browser tabs you can open... Enter the Vagrant OpenNebula providers! You can actually control Vagrant on your workstation but have the VMs running on your cloud. There are multiple ways to do that, and also limitations. In the workshop, we'll look at what is possible and how you can best benefit from - oh right! - your cloud!
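To give a flavour of the provider approach described above, a Vagrantfile using an OpenNebula provider plugin might look roughly like this. The provider attribute names below are assumptions for illustration, not the verified API of any specific plugin; check the plugin's README for the real ones:

```ruby
# Hypothetical sketch: run Vagrant-managed VMs on an OpenNebula cloud
# instead of local VirtualBox. Attribute names are assumed, not verified.
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"
  config.vm.provider :opennebula do |one|
    one.endpoint    = "http://frontend:2633/RPC2"  # XML-RPC endpoint (assumed name)
    one.credentials = "user:password"              # assumed name
    one.template_id = 42                           # assumed name
  end
end
```

The point of the workshop is exactly this split: `vagrant up` stays on your workstation while the VMs consume your cloud's capacity rather than your laptop's.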
OpenNebulaConf 2016 - Sunstone integration with FreeIPA using Single Sign by ...OpenNebula Project
FreeIPA is an integrated identity and authentication solution for Linux and Unix environments. It provides centralized authentication and authorization information, and it also stores user data such as user names, groups, hosts and many other objects used to manage the security aspects of a network of computers. FreeIPA uses different technologies, but the core of the authentication system is based on MIT Kerberos. Thanks to this technology, authentication works on the basis of tickets, allowing users or nodes communicating over a non-secure network to prove their identity and get access to different services. In this talk we will show how it is possible to integrate Sunstone authentication with FreeIPA SSO thanks to the new Sunstone remote authentication plugin provided by OpenNebula. We will describe how to set up Sunstone in an easy way to include Kerberos authentication using Apache and the Phusion Passenger module. This configuration approach also changes the security mechanism used by libvirt to establish the connection between hypervisors. We will explain how it is possible, using the host keytabs generated by FreeIPA, to improve security between the hypervisors when we have to migrate virtual machines over an insecure network.
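A minimal sketch of the Apache side of such a Kerberos setup, using mod_auth_kerb (the realm, keytab path and protected location are placeholders; the exact directives depend on the auth module in use):

```
<Location /auth>
  AuthType            Kerberos
  KrbAuthRealms       EXAMPLE.COM             # placeholder realm
  Krb5KeyTab          /etc/httpd/http.keytab  # placeholder keytab from FreeIPA
  KrbMethodNegotiate  On
  KrbMethodK5Passwd   Off
  Require             valid-user
</Location>
```

With Apache handling the ticket exchange, the remote authentication plugin only needs to trust the authenticated user name Apache passes through.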
Intel IT Open Cloud - What's under the Hood and How do we Drive it?Odinot Stanislas
Intel IT is reinventing itself and setting out to act as a "Cloud Service Provider". The transformation is under way, with a federated, interoperable and open cloud on the agenda, as well as a maturity framework, DevOps and risk-taking. In short, really interesting.
This talk will introduce MICHAL, a flexible accounting framework with data visualization capabilities that we at CESNET developed for our infrastructure. The framework is able to gather data from multiple sources, OpenNebula being one of them, process it and present the results in the form of charts. MICHAL isn't bound to only one platform and can be easily extended to support accounting of multiple parts of the infrastructure. As part of the presentation, we will discuss our data gathering techniques, MICHAL's design and functionality, currently available data processing modules for IaaS cloud and plans for future development.
Unveiling the Evolution: Proprietary Hardware to Agile Software-Defined Solut...MaryJWilliams2
Embark on a captivating journey through the evolution of data center technology. Our webinar delves deep into the transformative shift from traditional proprietary hardware setups to dynamic, software-defined solutions. Join us as we unravel the convergence of compute virtualization, Software-Defined Networking (SDN), and Software-Defined Storage (SDS), reshaping the very foundations of modern data infrastructure. Explore how this revolution is empowering businesses with unparalleled flexibility, scalability, and efficiency, and gain insights into navigating the rapidly evolving landscape of data center architecture. Whether you're a seasoned IT professional or an enthusiast eager to embrace the future of technology, this webinar promises to enlighten and inspire. For more information you can visit here: https://stonefly.com/white-papers/software-defined-data-center-sddc/#wpcf7-f206423-p263417-o2
Virtual SAN - A Deep Dive into Converged Storage (technical whitepaper)DataCore APAC
DataCore™ Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.
DataCore Virtual SAN virtualizes the local storage on two or more physical x86-64 servers. It can leverage any combination of magnetic disks (SAS, SATA) and optionally flash, to provide persistent storage services as close to the application as possible without having to go out over the wire (network or fabric). Virtual disks provisioned from DataCore Virtual SAN can also be shared across the cluster to support the dynamic migration and failover of applications between hosts.
DataCore Virtual SAN addresses the challenges that exist today within many IT organisations such as single points of failure, poor application performance (particularly within virtualized environments), low storage efficiency and utilisation, and high infrastructure costs.
HPC and cloud distributed computing, as a journeyPeter Clapham
Introducing an internal cloud brings new paradigms, tools and infrastructure management. When placed alongside traditional HPC, the new opportunities are significant. But getting to the new world with micro-services, autoscaling and autodialing is a journey that cannot be achieved in a single step.
2689 - Exploring IBM PureApplication System and IBM Workload Deployer Best Pr...Hendrik van Run
IBM IMPACT 2013 presentation
This lecture will provide an overview of a combination of design, development, configuration and deployment best practices for IBM PureApplication System and IBM Workload Deployer captured from customer engagement experiences.
Maginatics @ SDC 2013: Architecting An Enterprise Storage Platform Using Obje...Maginatics
How did Maginatics build a strongly consistent and secure distributed file system? Niraj Tolia, Chief Architect at Maginatics, gave this presentation on the design of MagFS at the Storage Developer Conference on September 16, 2013.
For more information about MagFS—The File System for the Cloud, visit maginatics.com or contact us directly at info@maginatics.com.
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...OpenNebula Project
We've made our way into the world of open cloud — where each organization can find the right cloud for its unique needs. A single cloud management platform cannot be all things to all people. There will be a cloud space with several offerings focused on different environments and/or industries. The OpenNebula commitment to the open cloud is at the very base of its mission — to become the simplest cloud enabling platform — and its purpose — to bring simplicity to the private and hybrid enterprise cloud. OpenNebula exists to help companies build simple, cost-effective, reliable, open enterprise clouds on existing IT infrastructure. The OpenNebula Conference will be a great opportunity to communicate and share our vision and commitment, to look back at how the project has grown in the last 9 years, and to shed some insight into what to expect from the project in the near future.
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...OpenNebula Project
Computer networks are undergoing a phenomenal growth, driven by the rapidly increasing number of nodes constituting the networks. At the same time, the number of security threats on Internet and intranet networks is constantly increasing, and the testing and experimentation of cyber defense solutions require the availability of separate, test environments that best reflect the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios, and cyber security training activities, thus enabling enterprises to study cyber defense strategies and allowing security researchers to evaluate their algorithms at scale.
The main objective is delivering to researchers and practitioners an overview of the technological means and the practical steps to setup a private cloud platform based on OpenNebula for the creation and management of virtual environments that support cyber-security activities of training and testing, as well as an overview of its possible applications in the cyber security domain.
In particular:
1. We describe our infrastructure based on OpenNebula
2. We overview our application, sitting on top of OpenNebula, as well as the technological tools involved in the management of its lifecycle (e.g., Ansible).
3. We show how the platform can support various examples of security research activities
[References] Building an emulation environment for cyber security analyses of complex networked systems, Tanasache, Florin Dragos and Sorella, Mara and Bonomi, Silvia and Rapone, Raniero and Meacci, Davide, ICDCN '19, ACM, 2019
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...OpenNebula Project
I will be presenting the ongoing advances of the OnLife Networks project across Spain and Brazil, with a focus on use cases we have implemented in the Central Offices, which serve as the edge resources closest to the end user. I will share an interesting synopsis of the project's evolution, as well as several lessons learned.
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...OpenNebula Project
Insight into more than 6 years experience with OpenNebula from different perspectives: ISP & Datacenter Provider and Consultant / System Integrator
Lessons learned, "the dos and don'ts" and how we convince and enable customers with OpenNebula - and the NTS ecosystem.
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebula Project
OpenNebula users have a range of storage options available to them, including proprietary appliances, proprietary software and Open Source software projects. This session will present a fully Open Source approach, that tightly integrates with Linux, and makes full use of the mature building blocks within the Linux kernel (LVM, Software RAID, DM-crypt, NVMe-oF Target, DRBD, etc...), and delivers one of the highest performance open source storage stacks currently available.
The core goal is to expose the improved performance of NVMe storage devices to VMs and containers. The solution covers both local NVMe drives and NVMe-oF. For interacting with NVMe-oF targets it supports the Swordfish-API and LVM & Linux’s software NVMe-oF target. The solution contains a storage addon for OpenNebula.
Our take on centralized and controlled VM image backups that deal with both CEPH and local QCOW2 datastores. As there are no default means of executing image backups in OpenNebula, I'd like to share our perspective on how we do it.
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebula Project
At Iguane Solutions, a lot of our "DevOps" tools are developed in Golang, and we have a good amount of experience contributing to Goca. I'll review the contributions we make, as well as how we use Goca with different tools, on a daily basis, to manage and monitor our OpenNebula cloud.
I will delve into the concept of Infrastructure as Code - deployment of VM instances on the cloud - and also address the metrics collection of deployed VMs. Finally, I will present how we can abstract VM management with automation tools thanks to GOCA.
A deep insight into a project with codename "TARDIS" at HAUFE Lexware, with the purpose of replacing vCloud with OpenNebula. A technical deep dive into a focused project done by real DevOps experts.
How and what we do with OpenNebula to enable our customers for a completely new way of consuming IT in a modern, service-oriented world. We will also talk about why we chose OpenNebula, and how deep the level - and ability - of integration of the NTS CAPTAIN is into existing 2nd- and 3rd-party tools like IPAM, CMDBs, backup, monitoring, approval processes and much more...
TeleData operates a purpose-built, enterprise-ready IaaS cloud platform in the Lake Constance region. OpenNebula has been used in production for several years. TeleData will share insight into the lessons learned and a brief summary of how to operate a public cloud built on top of OpenNebula. Content is subject to change!
NetApp’s Hybrid Cloud Infrastructure leverages Kubernetes for a hybrid multi-cloud use case into which OpenNebula integrates seamlessly. A technical deep dive into how NTS and NetApp integrated NTS Captain into NetApp’s DataFabric world on top of NetApp HC.
2. “Hyperconvergence is a type of infrastructure system with a software-defined architecture which integrates compute, storage, networking, virtualization and other resources from scratch in a commodity hardware box supported by a single vendor”
Hyperconvergence - Definition
3. ● Use of commodity X86 hardware
● Scalability
● Enhanced performance
● Centralised management
● Reliability
● Software focused
● VM focused
● Shared resources
● Data protection
HC - What does it offer?
5. It is a regular server with CPU, RAM, network interfaces, disk controllers and drives. As far as drives are concerned, there are only three manufacturers in the world. There is really nothing special about the hardware.
It is all about software….
There is nothing special about storage servers
6. ● Scale out - add compute + storage nodes
● Asymmetric scaling - add only compute nodes
● Asymmetric scaling - add only storage nodes
● Fault tolerance and high availability
● Add / remove drives on the fly
● Take out drives and insert them in any other server
● Drive agnostic - any mix of drives, SSD or spinning
● Add servers on the fly; servers need not be identical
● Performance increases as capacity increases
● Handles the IO blender effect - any application on any server
● No special skills required to manage
Mission impossible?
8. A robust hyperconverged infrastructure can be created by using OpenNebula as the virtualization platform and combining it with high-availability solutions like DRBD and/or a fault-tolerant distributed storage system.
Hyperconvergence
10. StorPool
It pools the attached storage (hard disks or SSDs) of standard servers to create a single pool of shared block storage. StorPool works on a cluster of servers in a distributed, shared-nothing architecture. All functions are performed by all of the servers on an equal peer basis. It works on standard off-the-shelf servers running GNU/Linux.
11. The software consists of two parts: a storage server (target) and a storage client (driver, initiator), which are installed on each physical server (host, node). Each host can be a storage server, a storage client, or both (i.e. a converged setup, converged infrastructure). To storage clients, StorPool volumes appear as local block devices under /dev/storpool/*. Data on volumes can be read and written by all clients simultaneously, and consistency is guaranteed through a synchronous replication protocol. The StorPool client communicates in parallel with all of the StorPool servers.
StorPool
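Because StorPool volumes surface as ordinary block devices, a client-side script can check for an attached volume the same way it would for any other disk. A minimal sketch; the volume name below is made up for illustration:

```python
import os
import stat

def is_block_device(path):
    """True if path exists and is a block device, as StorPool volumes
    appear to clients under /dev/storpool/* (the volume name used in
    the example call is hypothetical)."""
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return False
    return stat.S_ISBLK(mode)

# On a host without StorPool this simply reports False.
print(is_block_device("/dev/storpool/vm-disk-0"))
```

The same check works for any block device, which is exactly the point of the slide: to the client, a distributed volume is indistinguishable from a local disk.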
12. ● StorPool is storage software installed on every server; it controls the drives (both hard disks and SSDs) in the server.
● Servers communicate with each other to aggregate the capacity and performance of all the drives.
● StorPool provides standard block devices.
● Users can create one or more volumes through its volume manager.
● Data is replicated and striped across all drives in all servers to provide redundancy and performance.
● The replication level can be chosen by the user.
● There are no central management or metadata servers.
● The cluster uses a shared-nothing architecture. Performance scales with every added server or drive.
● The system is managed through a CLI and a JSON API.
StorPool overview
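To illustrate the JSON API mentioned in the last bullet, here is a sketch that builds a volume-creation request. The method name and parameter names are assumptions made for illustration, not StorPool's actual schema; consult the real API reference before use.

```python
import json

def make_volume_request(name, size_gb, replication=3):
    """Build a JSON payload for creating a volume. 'VolumeCreate' and
    the field names are hypothetical placeholders, not the real
    StorPool API schema."""
    return json.dumps({
        "method": "VolumeCreate",        # hypothetical method name
        "params": {
            "name": name,
            "size": size_gb * 2**30,     # size in bytes
            "replication": replication,  # user-chosen replication level
        },
    })

print(make_volume_request("vm-disk-0", 40))
```

A management tool would POST such a payload to the cluster's API endpoint; the user-selectable replication level from the bullet list appears as an ordinary request parameter.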
13. ● Fully integrated with OpenNebula
● Runs on commodity hardware
● Clean code built from the ground up, not a fork of an existing project
● End-to-end data integrity with a 64-bit checksum for each sector
● No metadata servers to slow down operations
● Own network protocol designed for efficiency and performance
● Suitable for hyperconvergence, as it uses only ~10% of the resources of a typical server
● Shared-nothing architecture for maximum scalability and performance
● SSD support
● In-service rolling upgrades
● Snapshots, clones, QoS, synchronous replication
StorPool - Features
14. ● StorPool uses a patented 64-bit end-to-end data integrity checksum to protect the customer's data. From the moment an application sends data to StorPool, a checksum is calculated, which is then stored with the data itself. This covers not only the storage system but also network transfers, and protects against software stack bugs and misplaced/phantom/partial writes performed by the hard disks or SSDs.
● StorPool keeps 2 or 3 copies of the data on different servers or racks in order to provide data redundancy. StorPool uses a replication algorithm because, unlike erasure-coded (RAID 5, 6) systems, which heavily tax the CPU and use a lot of RAM, replication does not have a large performance impact on the system.
Data redundancy
Data redundancy
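The end-to-end checksum idea can be sketched in a few lines: compute a checksum when data enters the system, keep it with the data, and verify it on every read. StorPool's actual (patented) algorithm is not public, so a blake2b digest truncated to 64 bits stands in here purely for illustration:

```python
import hashlib

SECTOR = 4096  # illustrative sector size

def checksum64(data):
    """A 64-bit per-sector checksum. StorPool's exact algorithm is not
    public; blake2b with an 8-byte digest is only a stand-in."""
    return hashlib.blake2b(data, digest_size=8).digest()

def write_sector(store, lba, data):
    # The checksum is computed when the application writes and travels
    # with the data end to end.
    store[lba] = (data, checksum64(data))

def read_sector(store, lba):
    data, stored = store[lba]
    if checksum64(data) != stored:
        raise IOError(f"checksum mismatch at LBA {lba}")
    return data

store = {}
write_sector(store, 0, b"x" * SECTOR)

# Simulate a misplaced/phantom write: the data changes, the checksum does not.
store[0] = (b"y" * SECTOR, store[0][1])
try:
    read_sector(store, 0)
except IOError as e:
    print("detected:", e)
```

Because verification happens at read time against a checksum created at write time, corruption introduced anywhere in between (network, software stack, drive firmware) is caught.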
16. StorPool has built-in thin provisioning. StorPool also makes copies of the data and stripes them across the cluster of servers for redundancy and performance; however, any stripe which does not hold data takes zero space on the drives. Only when data appears is space allocated. Thin provisioning basically allows you to provision more storage visible to users than is physically available on the system.
Thin provisioning
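The allocate-on-first-write behaviour described above can be modelled in a few lines. The stripe size and data structure are invented for the sketch:

```python
class ThinVolume:
    """Thin-provisioned volume sketch: the logical size is promised up
    front, but physical space is consumed only when a stripe is first
    written (structure and stripe size are illustrative)."""
    def __init__(self, logical_size, stripe_size=2**20):
        self.logical_size = logical_size
        self.stripe_size = stripe_size
        self.stripes = {}  # stripe index -> data; empty stripes take no space

    def write(self, offset, data):
        # Space for a stripe is allocated only when data appears in it.
        self.stripes[offset // self.stripe_size] = data

    def physical_usage(self):
        return len(self.stripes) * self.stripe_size

vol = ThinVolume(logical_size=100 * 2**30)  # the user sees 100 GiB
vol.write(0, b"boot block")
print(vol.physical_usage())  # only the one written stripe is allocated
```

The gap between `logical_size` and `physical_usage()` is exactly the over-provisioning the slide describes: more storage is visible to users than physically exists.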
17. The StorPool solution is fundamentally about scaling out (i.e. scaling by adding more drives or nodes), rather than scaling up (i.e. adding capacity by buying a larger storage box and then migrating all of the data to it). This means StorPool can scale storage independently by IOPS, storage space and bandwidth. There is no bottleneck or single point of failure. StorPool can grow seamlessly, without interruption and in small steps – be it one disk drive, one server or one network interface at a time. Not only is the scale-out approach simpler and less risky, it is also a far more economical method.
Scale out vs Scale up
18. QoS (Quality of Service) ensures that the required level of storage performance and the SLAs are met. StorPool has built-in QoS between volumes, with user-configurable limits on IOPS and MB/s per volume. In this way no single user can take over the resources of the entire storage system. This feature can also be used to guarantee a particular user a certain level of service.
QoS
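A per-volume IOPS cap like the one described is commonly implemented as a token bucket. The sketch below is a generic illustration of that idea, not StorPool's internal implementation:

```python
import time

class IopsLimiter:
    """Token-bucket sketch of a per-volume IOPS limit (a generic
    illustration of the QoS idea, not StorPool's actual code)."""
    def __init__(self, iops_limit):
        self.rate = iops_limit          # tokens refilled per second
        self.tokens = float(iops_limit)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the limit.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # I/O must wait: the volume is at its IOPS cap

limiter = IopsLimiter(iops_limit=1000)
granted = sum(limiter.allow() for _ in range(2000))
print(granted)  # roughly the bucket depth: about 1000 pass immediately
```

Giving each volume its own bucket is what prevents one noisy user from taking over the whole system, while a guaranteed refill rate doubles as a service-level floor.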
19. StorPool offers data tiering functionality. It is implemented by allowing users to make groups (pools) of drives on which to place data. For example, the user can make a pool of SSDs and place important data with high performance requirements on this set of drives, and then make another pool of HDDs and place data with lower performance requirements on those drives. The user can then live-migrate data between the pools.
The data tiering functionality also allows the building of hybrid pools, where a copy of the data is stored on SSDs and the redundant copies are stored on HDDs. This hybrid system delivers near all-SSD performance, but at a much lower cost. This functionality also allows customers to deliver “data locality”, by placing data on the particular compute node which is going to access it locally.
Flexible Data tiering
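The pool-based placement and live migration described above reduces to a mapping from volumes to named drive groups. A toy sketch with invented pool names:

```python
class TieredStorage:
    """Sketch of pool-based tiering: each volume lives in a named pool
    of drives and can be migrated between pools (the pool names and
    structure are illustrative, not StorPool's data model)."""
    def __init__(self):
        self.pools = {"ssd": set(), "hdd": set()}

    def create(self, volume, pool):
        self.pools[pool].add(volume)

    def migrate(self, volume, src, dst):
        # Live migration on the slide: data moves between drive groups
        # while the volume stays online.
        self.pools[src].remove(volume)
        self.pools[dst].add(volume)

t = TieredStorage()
t.create("logs", "ssd")          # hot data starts on the SSD pool
t.migrate("logs", "ssd", "hdd")  # demote cold data to the HDD pool
print(sorted(t.pools["hdd"]))
```

A hybrid pool would place one copy of each stripe in the SSD group and the redundant copies in the HDD group, which is how near-SSD read performance is achieved at HDD cost.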
21. Write-back cache (WBC) is a technology that significantly increases the speed at which storage write operations are performed. In StorPool’s case it is used to cache writes to hard disks, since they are the slowest type of drive. StorPool has proprietary write-back cache technology which enables sub-millisecond write latencies on hard disks. It allows write operations to be stashed in memory and immediately acknowledged by the storage system, and then flushed to the hard disk at a later stage. The benefits of this feature are a solid increase in performance and a sizable cost reduction, as customers no longer need a RAID controller.
Write back cache (WBC)
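The acknowledge-from-memory-then-flush cycle can be sketched as follows. A dict stands in for the hard disk; this is a conceptual model, not StorPool's implementation:

```python
class WriteBackCache:
    """Write-back cache sketch: writes are acknowledged from memory
    immediately and flushed to the slow backing store later."""
    def __init__(self, backing):
        self.backing = backing  # dict standing in for a hard disk
        self.dirty = {}

    def write(self, lba, data):
        self.dirty[lba] = data  # stashed in memory ...
        return "ack"            # ... and acknowledged at once

    def flush(self):
        # Later, dirty blocks are written out to the hard disk in bulk.
        self.backing.update(self.dirty)
        self.dirty.clear()

disk = {}
cache = WriteBackCache(disk)
cache.write(0, b"journal")
assert 0 not in disk  # the hard disk has not been touched yet
cache.flush()
print(disk[0])
```

The latency the application sees is the in-memory write, not the disk seek, which is where the sub-millisecond figure on the slide comes from; in a real system the cached writes must of course be protected against power loss before they are flushed.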
22. Datastore – all common image operations, including: define the datastore; create and import images from the Marketplace; clone images.
Virtual Machines – instantiate a VM with raw disk images on an attached StorPool block device; stop, suspend, start, migrate, migrate-live; hot snapshot of a VM disk.
The add-on is implemented by writing StorPool drivers for datastore_mad and tm_mad, plus a patch to Sunstone's datastores-tab.js for the UI.
OpenNebula - StorPool
24. Colocation or on-premise?
Hyperconverged infrastructure can be on-premise or colocated. A "micro data center" is a stand-alone housing which replicates all the cooling, security and power capability of a traditional data center. Thus it is possible to seamlessly integrate and manage on-premise and colocated infrastructure.