Scale your OpenNebula cloud into the petabyte range with LizardFS. Let us show you how to get from a small hyperconverged setup to a petabyte-scale cloud system utilising LizardFS with very little effort.
YouTube: https://youtu.be/T-6GMwjgQjs
OpenNebulaConf2017EU: Welcome Talk State and Future of OpenNebula by Ignacio ...OpenNebula Project
We’re moving into a world of open cloud — where each organization can find the right cloud for its unique needs. A single cloud management platform cannot be all things to all people; there will be a cloud space with several offerings focused on different environments and/or industries. The OpenNebula commitment to the open cloud flows directly out of its mission — to become the simplest cloud enabling platform — and its purpose — to bring simplicity to the private and hybrid enterprise cloud. OpenNebula exists to help companies build simple, cost-effective, reliable, open enterprise clouds on existing IT infrastructure. The OpenNebula Conference will be a great opportunity to restate our vision and commitment, to look back at how the project has grown in the last 8 years, and to give a peek at what to expect from the project in the near future.
YouTube: https://youtu.be/evzy5bLwDSM
OpenNebulaConf2017EU: Enabling Dev and Infra teams by Lodewijk De Schuyter,De...OpenNebula Project
At the Department of Environment and Spatial Planning we started two projects. The first was to replace our VMware-based hosting environment with an open, hardware-vendor-neutral hypervisor environment. The second project's goal was to further enable our dev teams. This is the story of the second project: what we built and how it works using OpenNebula, Ceph and our existing tooling.
At the time of writing this abstract, our OpenNebula environment is used by four dev teams (almost 30 developers) and an infra team, hosting 700 virtual servers and counting. We are executing 300 deploys (as part of the development cycle) per week and counting…
I will be talking about the setup we realized, the choices we made and the deployment tool we ended up with, integrating the toolset we already used, i.e. SVN, Ansible, OpenNebula, F5, JFrog, Ubuntu/CentOS, Zabbix, Bareos, Barman, …
YouTube: https://youtu.be/OEftbpJ_lSY
OpenNebulaConf2017EU: Transforming an Old Supercomputer into a Cloud Platform...OpenNebula Project
Currently, typical supercomputers have an expected useful life of 3 or 4 years. One way or another, after this time period, infrastructure is typically replaced or upgraded to face the increasing resource demand from users and companies. This always gives rise to the same question: what should be done with the old hardware once it has been replaced? Possible solutions come in the form of decommissioning, splitting it up for spare parts, or donating it, but in several cases the hardware can still provide value when used for different tasks. In this talk we will describe how we have converted the old Tier1 Flemish supercomputer (https://www.ugent.be/hpc/en/infrastructure/tier1) into a cloud platform using OpenNebula. During this conversion process, we faced several technical challenges. The first and foremost of these was how to recycle hardware that was designed for a classical HPC environment for use in a private cloud. We will describe the steps taken to isolate VM traffic over the existing InfiniBand interconnect using the VXLAN network technology. We will also address how we managed to map our internal (university) and external (industry) users using the OpenNebula “remote” authentication plugin. Finally, we will discuss how we used the InfiniBand interconnect to share the Ceph storage backend and VM traffic in a secure manner. After a testbed phase, in which only pilot users are given access and provide feedback, the new UGent HPC Cloud platform, called “Grimer”, will be available in production.
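As a rough illustration of the VM-traffic isolation described in the abstract above, the following sketch builds the iproute2 commands that a VXLAN network driver issues on top of an IPoIB device. This is not the UGent tooling: the interface name (ib0), the VNI offset and the multicast base are illustrative assumptions, loosely modelled on how OpenNebula's VXLAN driver derives one overlay per virtual network.

```python
def vxlan_commands(vnet_id, phys_dev="ib0", start_vni=2, mc_base="239.0.0.0"):
    """Build iproute2 commands to create an isolated VXLAN overlay for one
    virtual network. The VNI is derived from the network ID plus an offset
    (a hypothetical scheme; OpenNebula pools VNIs similarly)."""
    vni = start_vni + vnet_id
    vxlan_if = f"vxlan{vni}"
    bridge = f"onebr{vni}"
    # One multicast group per VNI keeps the broadcast domains separate.
    mc_group = mc_base.rsplit(".", 1)[0] + f".{vni % 256}"
    return [
        f"ip link add {vxlan_if} type vxlan id {vni} group {mc_group} dev {phys_dev}",
        f"ip link add name {bridge} type bridge",
        f"ip link set {vxlan_if} master {bridge}",
        f"ip link set {vxlan_if} up",
        f"ip link set {bridge} up",
    ]

for cmd in vxlan_commands(vnet_id=7):
    print(cmd)
```

Running the VXLAN over an IPoIB interface is what lets the old InfiniBand fabric carry isolated per-tenant Ethernet segments without new hardware.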
YouTube: https://youtu.be/jHchktxIZnM
OpenNebulaConf2017EU: Elastic Clusters for Data Analysis by Carlos de Alfonso...OpenNebula Project
EUBra-BIGSEA is a European project aiming to develop cloud services for Big Data analysis in the context of traffic recommendation. EUBra-BIGSEA leverages EC3 (Elastic Compute Cloud Clusters - ref) to deploy virtual elastic clusters, managed by Apache Mesos, on top of an OpenNebula site.
YouTube: https://youtu.be/xShKKrEMTuQ
OpenNebulaConf2017EU: Hyper converged infrastructure with OpenNebula and Ceph...OpenNebula Project
Hyperconvergence is one of the big topics in datacenters at the moment. But is it more than old wine in new bottles? Learn why we at Runtastic built a hyperconverged datacenter based on OpenNebula with Ceph, and what we learned.
YouTube: https://youtu.be/50Z4bmevTpg
OpenNebula Conf 2014 | Bootstrapping a virtual infrastructure using OpenNebul...NETWAYS
This talk shows how to set up a virtual infrastructure using OpenNebula as the cloud management platform, SaltStack for configuration management and Foreman for bare-metal/virtual host provisioning. You will see how to combine OpenNebula with bare-metal deployment on standard server hardware using non-shared storage, in an environment without physical access to the hardware and with no existing base infrastructure like DNS, NTP, DHCP, VPN or others. The infrastructure installation has been done automatically using public code and free Open Source software.
rOCCI – Providing Interoperability through OCCI 1.1 Support for OpenNebulaNETWAYS
OCCI (Open Cloud Computing Interface) [1] is an open protocol for management tasks in the cloud environment, focused on integration, portability and interoperability with a high degree of extensibility. It is designed to bridge differences between various cloud platforms (or cloud middleware) and provide common ground for users and developers alike.
The rOCCI framework [2], originally developed by GWDG [3], was written to simplify the implementation of the OCCI 1.1 protocol in Ruby and later provided the base for a working client and server implementation targeting OpenNebula as its primary back-end cloud platform. The initial server-side implementation provided basic functionality and served as a proof of concept when it was adopted by the EGI Federated Cloud Task Force [4] and chosen to act as the designated VM management interface. This led to further funding from EGI-InSPIRE [5] and involvement of CESNET [6].
This talk aims to provide basic information about the OCCI protocol, introduce its implementation in rOCCI, and describe and/or demonstrate some of the functionality provided by the rOCCI client and rOCCI-server in concert with OpenNebula. It also briefly examines its use in the EGI FedCloud environment and explores the possibility of further integration with OpenNebula as a part of the ON ecosystem, or even as an integral part of OpenNebula itself in the future. All this with interoperability in mind.
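To make the protocol concrete, here is a sketch of how an OCCI 1.1 client renders a compute-creation request in the text/occi format that rOCCI-server accepts. The category scheme and the occi.compute.* attribute names follow the OCCI infrastructure specification; the helper function and the concrete values are illustrative, not part of rOCCI's actual API.

```python
def occi_create_compute(cores, memory_gb, title):
    """Render the HTTP headers for an OCCI 1.1 'create compute' request
    using the text/occi rendering."""
    scheme = "http://schemas.ogf.org/occi/infrastructure#"
    return {
        "Content-Type": "text/occi",
        # The Category header identifies the kind of resource being created.
        "Category": f'compute; scheme="{scheme}"; class="kind"',
        # Attributes travel as X-OCCI-Attribute headers.
        "X-OCCI-Attribute": ", ".join([
            f"occi.compute.cores={cores}",
            f"occi.compute.memory={memory_gb}",
            f'occi.core.title="{title}"',
        ]),
    }

headers = occi_create_compute(2, 4, "test-vm")
print(headers["Category"])
```

Because everything is carried in plain headers and registered category schemes, any OCCI-compliant server (rOCCI-server over OpenNebula, or another middleware) can interpret the same request — which is the interoperability point the talk makes.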
OpenNebula Conf 2014: Expanding OpenNebula´s support for Cloud Bursting - Emm...NETWAYS
The platform currently runs several solutions for Earth Sciences Researchers: Developer Cloud Sandboxes for scalable scientific processors integration, Virtual Archives federating distributed data repositories, Data Challenges for Earth Observation contests, and Digital Marketplaces for reproducible scientific experiments.
OpenNebula is also powering our Cloud development environment, which has been enhanced during the past year. Terradue’s Cloud development environment is both our laboratory to test the latest developments from OpenNebula, especially for the integration of our own OpenNebula extensions, and our Engineering team's facility to provision servers supporting project-based software developments. OpenNebula provides the virtualization and management of the hardware clusters that we rent from commercial ‘bare-metal’ providers.
We have recently further developed several specific drivers for Multi-Cloud bursting, in order to provision virtual machines over public commercial clouds.
When their processor integration and validation phase concludes, our researcher users can seamlessly burst their applications at scale, leveraging OpenNebula drivers for on-demand processing tasks.
OpenNebulaconf2017EU: OpenNebula 5.4 and Beyond by Tino Vázquez and Ruben S. ...OpenNebula Project
In this talk, Rubén and Tino will lay out the novelties (not all of them, there are many!) present in 5.4, ranging from new core functionality to the big changes in vCenter. The roadmap for 5.6 and future versions will also be laid out, as far as it is consolidated (it won't be closed yet, but nearly so).
It would also be the perfect session for feature requests, so don't miss it!
YouTube: https://youtu.be/Czzm2EimayY
OpenNebula Conf 2014 | The rOCCI project - a year later - alias OpenNebula in...NETWAYS
Last year, during the first OpenNebula Conference we briefly talked about interoperability in the cloud, introduced the OCCI standard/protocol and focused on one of its implementations — The rOCCI framework. We positioned this framework as the go-to interface for providing interoperability in OpenNebula with significant plans for future development and improvement.
Cloud database vendors tend to report performance numbers for the sweet spot, or for runs on highly optimized hardware with specific workload parameters.
Moreover, many of these systems are not tested under different failure scenarios that may appear in the public cloud.
At Netflix, as a cloud-native enterprise, our focus is on high availability. We achieve high availability by deploying at multiple regions.
Hence, our data store system performance is highly affected by our global deployment model, instance types and workload patterns.
We were therefore interested in a cloud database benchmark tool that could be deployed in a loosely-coupled fashion, as a microservice, with the ability to dynamically change configuration parameters at run time. In this paper, we present Netflix Data Benchmark (NDBench). NDBench offers pluggable patterns and loads, and support for different client APIs. It offers the ability to deploy, manage and monitor multiple instances from a single point. NDBench was designed to run indefinitely, which gave us the ability to test long-running database maintenance jobs, to test database systems under conditions that affect performance (such as compactions, repairs, etc.), and to get a view of client-side issues like memory leaks and heap pressure. We have been running NDBench for almost 3 years, having validated multiple database versions, tested numerous NoSQL systems running on the Cloud, and tested new functionalities. NDBench is a major component of our testing and validation pipelines.
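The pluggable, runtime-tunable design described above can be sketched in a few lines. This is not NDBench's actual API (NDBench is a Java service); the sketch only illustrates the pattern of a pluggable client driver driven by a mutable load configuration, so the read/write mix, keyspace and payload size can change while the benchmark keeps running.

```python
import random

class BenchmarkDriver:
    """Minimal pluggable driver interface: subclass per target database."""
    def read(self, key): raise NotImplementedError
    def write(self, key, value): raise NotImplementedError

class InMemoryDriver(BenchmarkDriver):
    """Toy stand-in for a real database client plugin."""
    def __init__(self):
        self.store = {}
    def read(self, key):
        return self.store.get(key)
    def write(self, key, value):
        self.store[key] = value

def run(driver, ops, config):
    """Issue a read/write mix against the driver. 'config' can be mutated
    between calls, mimicking runtime-tunable load parameters."""
    stats = {"reads": 0, "writes": 0}
    for i in range(ops):
        key = f"key{i % config['keyspace']}"
        if random.random() < config["read_ratio"]:
            driver.read(key)
            stats["reads"] += 1
        else:
            driver.write(key, "x" * config["payload"])
            stats["writes"] += 1
    return stats

config = {"read_ratio": 0.8, "keyspace": 100, "payload": 64}
print(run(InMemoryDriver(), 1000, config))
```

Because the loop never terminates on its own in the real system, long-running effects (compactions, repairs, client-side memory leaks) have time to surface — the property the abstract highlights.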
DevOps Fest 2020. Даніель Яворович. Data pipelines: building an efficient ins...DevOps_Fest
I will talk about the experience of building a system for working with big data, based on the open-source technologies Apache NiFi and Kubernetes, using the analysis of news sources with NLP as an example.
A recent presentation on Deeplearning4j's new features, as well as some underused features of the AI framework such as Arbiter, DataVec's transform process, and libnd4j.
Hopsworks at Google AI Huddle, SunnyvaleJim Dowling
Hopsworks is a platform for designing and operating end-to-end machine learning pipelines using PySpark and TensorFlow/PyTorch. Early access is now available on GCP. Hopsworks includes the industry's first Feature Store and is open source.
Talk given at the first OmniSci user conference, where I discuss cooperating with open-source communities to ensure you get useful answers quickly from your data. I also get a chance to introduce OpenTeams in this talk and discuss how it can help companies cooperate with communities.
AIDevWorld 23 Apache NiFi 101 Introduction and Best Practices
https://sched.co/1RoAO
Timothy Spann, Cloudera, Principal Developer Advocate
In this talk, we will walk step by step through Apache NiFi, from first load to first application. I will include slides, articles and examples to take away as a Quick Start to using Apache NiFi in your real-time dataflows. I will help you get up and running locally on your laptop, in Docker, or in CDP Public Cloud.
Wednesday November 1, 2023 12:00pm - 12:25pm PDT
VIRTUAL AI DevWorld -- Main Stage https://app.hopin.com/events/api-world-2023-ai-devworld/stages
Session Type: OPEN TALK
Track or Conference: Retail & E-Commerce AI (Industry AI Conference), Industry AI Conference, VIRTUAL, Tensorflow & PyTorch & Open Source Frameworks (AI/ML Engineering Conference), AI/ML Engineering Conference, AI DevWorld
In-Person/Virtual: Virtual, Virtual Exclusive
Tim Spann is the Principal Developer Advocate for Data in Motion @ Cloudera, where he works with Apache Kafka, Apache Flink, Apache NiFi, Apache Iceberg, TensorFlow, Apache Spark, big data, the IoT, machine learning, and deep learning. Tim has over a decade of experience with the IoT, big data, distributed computing, streaming technologies, and Java programming. Previously, he was a Developer Advocate at StreamNative, a Principal Field Engineer at Cloudera, a Senior Solutions Architect at AirisData, and a Senior Field Engineer at Pivotal. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton on big data, the IoT, deep learning, streaming, NiFi, the blockchain, and Spark. Tim is a frequent speaker at conferences such as IoT Fusion, Strata, ApacheCon, DataWorks Summit Berlin, DataWorks Summit Sydney, and Oracle Code NYC. He holds a BS and MS in computer science.
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...OpenNebula Project
We've made our way into the world of open cloud — where each organization can find the right cloud for its unique needs. A single cloud management platform cannot be all things to all people. There will be a cloud space with several offerings focused on different environments and/or industries. The OpenNebula commitment to the open cloud is at the very base of its mission — to become the simplest cloud enabling platform — and its purpose — to bring simplicity to the private and hybrid enterprise cloud. OpenNebula exists to help companies build simple, cost-effective, reliable, open enterprise clouds on existing IT infrastructure. The OpenNebula Conference will be a great opportunity to communicate and share our vision and commitment, to look back at how the project has grown in the last 9 years, and to shed some insight into what to expect from the project in the near future.
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...OpenNebula Project
Computer networks are undergoing phenomenal growth, driven by the rapidly increasing number of nodes constituting them. At the same time, the number of security threats on Internet and intranet networks is constantly increasing, and the testing and experimentation of cyber defense solutions require the availability of separate test environments that best reflect the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios and cyber security training activities, thus enabling enterprises to study cyber defense strategies and allowing security researchers to evaluate their algorithms at scale.
The main objective is to deliver to researchers and practitioners an overview of the technological means and the practical steps needed to set up a private cloud platform based on OpenNebula for the creation and management of virtual environments that support cyber-security training and testing activities, as well as an overview of its possible applications in the cyber security domain.
In particular:
1. We describe our infrastructure based on OpenNebula
2. We overview our application, sitting on top of OpenNebula, as well as the technological tools involved in the management of its lifecycle (e.g., Ansible).
3. We show how the platform can support various examples of security research activities
[Reference] F. D. Tanasache, M. Sorella, S. Bonomi, R. Rapone and D. Meacci: “Building an emulation environment for cyber security analyses of complex networked systems”, ICDCN '19, ACM, 2019.
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...OpenNebula Project
I will be presenting the ongoing advances of the OnLife Networks project across Spain and Brazil, with a focus on use cases we have implemented in the Central Offices, which serve as the edge resources closest to the end user. I will share an interesting synopsis of the project's evolution, as well as provide several lessons learned.
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...OpenNebula Project
Insight into more than six years of experience with OpenNebula from different perspectives: ISP & datacenter provider, and consultant / system integrator.
Lessons learned, "the dos and don'ts" and how we convince and enable customers with OpenNebula - and the NTS ecosystem.
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebula Project
OpenNebula users have a range of storage options available to them, including proprietary appliances, proprietary software and Open Source software projects. This session will present a fully Open Source approach that tightly integrates with Linux, makes full use of the mature building blocks within the Linux kernel (LVM, software RAID, dm-crypt, the NVMe-oF target, DRBD, etc.), and delivers one of the highest-performance open source storage stacks currently available.
The core goal is to expose the improved performance of NVMe storage devices to VMs and containers. The solution covers both local NVMe drives and NVMe-oF. For interacting with NVMe-oF targets it supports the Swordfish API as well as LVM and Linux's software NVMe-oF target. The solution contains a storage addon for OpenNebula.
Our take on centralized and controlled VM image backups that deal with both Ceph and local qcow2 datastores. As there are no default means of executing image backups in OpenNebula, I'd like to share our perspective on how we do it.
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebula Project
At Iguane Solutions, a lot of our "DevOps" tools are developed in Golang, and we have a good amount of experience contributing to GOCA. I'll review the contributions we have made, as well as how we use GOCA with different tools, on a daily basis, to manage and monitor our OpenNebula cloud.
I will delve into the concept of Infrastructure as Code (deployment of VM instances on the cloud), and also address the metrics collection of deployed VMs. Finally, I will present how we can abstract VM management with automation tools thanks to GOCA.
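For readers unfamiliar with GOCA: the abstraction it provides sits on top of OpenNebula's XML-RPC API. The sketch below (in Python rather than Go, for brevity) shows the kind of call such a client library wraps; the endpoint and credentials are placeholders, and the argument meanings follow the documented one.vmpool.info signature.

```python
import xmlrpc.client

def session_token(user, password):
    """OpenNebula authenticates every RPC call with a 'user:password' string."""
    return f"{user}:{password}"

def list_vms(endpoint, user, password):
    """Return the raw XML describing the VMs visible to this user."""
    proxy = xmlrpc.client.ServerProxy(endpoint)
    # one.vmpool.info arguments: session, owner filter (-2 = any owner),
    # ID range start/end (-1/-1 = all), VM state filter (-1 = any active state).
    resp = proxy.one.vmpool.info(session_token(user, password), -2, -1, -1, -1)
    success, body = resp[0], resp[1]
    if not success:
        raise RuntimeError(body)
    return body

# Example (requires a reachable OpenNebula frontend; values are placeholders):
# print(list_vms("http://frontend:2633/RPC2", "oneadmin", "secret"))
```

Client libraries like GOCA add typed structures and retries on top of this raw call, which is what makes the Infrastructure-as-Code and metrics-collection workflows in the talk practical.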
A deep insight into a project with codename "TARDIS" at HAUFE Lexware, with the purpose of replacing vCloud with OpenNebula. A technical deep dive into a focused project done by real DevOps experts.
How and what we do with OpenNebula to enable our customers for a completely new way of consuming IT in a modern, service-oriented environment. We will also talk about why we have chosen OpenNebula, and about how deep the level - and ability - of integration of the NTS CAPTAIN is into existing 2nd- and 3rd-party tools like IPAM, CMDBs, backup, monitoring, approval processes and much more...
TeleData operates a purpose-built, enterprise-ready IaaS cloud platform in the Lake Constance region. OpenNebula has been used in production for several years. TeleData will share an insight into the lessons learned and a brief summary of how to operate a public cloud built on top of OpenNebula. Content is subject to change!
Performant and Resilient Storage: The Open Source & Linux WayOpenNebula Project
OpenNebula users have a range of storage options available to them, including proprietary appliances, proprietary software and Open Source software projects. This session will present a fully Open Source approach that tightly integrates with Linux, makes full use of the mature building blocks within the Linux kernel (LVM, software RAID, dm-crypt, the NVMe-oF target, DRBD, etc.), and delivers one of the highest-performance open source storage stacks currently available. The core goal is to expose the improved performance of NVMe storage devices to VMs and containers. The solution covers both local NVMe drives and NVMe-oF. For interacting with NVMe-oF targets it supports the Swordfish API as well as LVM and Linux's software NVMe-oF target. The solution contains a storage addon for OpenNebula.
NetApp’s Hybrid Cloud Infrastructure leverages Kubernetes for a hybrid multi-cloud use case into which OpenNebula integrates seamlessly. A technical deep dive into how NTS and NetApp integrated NTS Captain into NetApp’s DataFabric world on top of NetApp HC.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes hard work. It takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only be realised when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
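For reference, the glue between JMeter and InfluxDB is JMeter's built-in Backend Listener. A typical configuration looks like the fragment below; the URL, database and application names are placeholders for your own environment:

```
Backend Listener implementation:
  org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient

Parameters (example values):
  influxdbMetricsSender = org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
  influxdbUrl           = http://localhost:8086/write?db=jmeter
  application           = my_web_app
  measurement           = jmeter
  summaryOnly           = false
  samplersRegex         = .*
  percentiles           = 90;95;99
  testTitle             = Load test
```

Grafana then connects to InfluxDB as a data source and charts the `jmeter` measurement in real time while the test runs.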
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to part 4 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
2. Scale out to petabytes with LizardFS
Sixth OpenNebula Conference Madrid, October 2017
3. Think LizardFS ...
● Just add chunkservers …
● HW can be anything if used with replication goals
o Use your old or spare systems as chunkservers
o You can even use them temporarily for a start and move to faster systems later
● Use different chunkserver groups for different performance requirements …
o Old hardware for archives
o Hyperfast new rocket-speed systems for the performance tier
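A hedged sketch of how such chunkserver groups can be expressed: each chunkserver is tagged with a label in its own config, and custom replication goals referencing those labels are defined on the master. The file paths, goal IDs and label names below are examples, not a prescription:

```
# /etc/mfs/mfschunkserver.cfg on a fast chunkserver (label name is an example)
LABEL = fast

# /etc/mfs/mfsgoals.cfg on the master: "id name : list of labels"
3 3        : _ _ _                      # 3 copies on any chunkservers
8 fast2    : fast fast                  # 2 copies, only on "fast" servers
9 archive3 : archive archive archive    # 3 copies on the archive tier
```

A goal is then applied per file or directory, e.g. `lizardfs setgoal -r fast2 /mnt/lizardfs/vmstore`.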
4. And more ...
● Multi Datacenter Support
● NFS Support
● Erasure Coding
● Horizontally and vertically growable and shrinkable
● Transparent trash
● Many more community and
commercial features
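Erasure coding is configured through the same goal mechanism: a goal can be declared as `$ec(k,m)`, meaning k data parts plus m parity parts. The entry below is a sketch assuming the standard mfsgoals.cfg syntax; the goal ID and name are arbitrary:

```
# /etc/mfs/mfsgoals.cfg: an erasure-coded goal with 3 data + 2 parity parts
10 ec32 : $ec(3,2)
```

Applied with `lizardfs setgoal -r ec32 /mnt/lizardfs/archive`, this stores roughly 1.67x the data size instead of the 3x needed for triple replication, while still tolerating the loss of any two parts.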