Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster. Now, take that one step further with all of those images being updatable through a single upload to the registry, and with zero downtime. In this session, Steve Gordon of the Red Hat OpenStack Platform team will show you just that. Steve will walk through a recent benchmarking deployment using the Cloud Native Computing Foundation’s (CNCF) new 1,000 node cluster with OpenStack and Red Hat’s OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
A Container Stack for OpenStack - OpenStack Silicon Valley (Stephen Gordon)
OpenStack is an Infrastructure as a Service offering that provides a powerful abstraction layer for interacting with your datacenter infrastructure, supported by a wide array of pluggable drivers for existing physical and virtual infrastructure investments. In this session, you’ll learn how OpenStack is evolving to integrate with the Linux, Docker, Kubernetes stack to provide the ideal infrastructure platform for modern containerized applications. You’ll learn how you can modernize application delivery using the Linux, Docker, Kubernetes stack provided by Red Hat while seamlessly using the authentication, network, and storage infrastructure services provided by an underlying OpenStack cloud.
Compute 101 - OpenStack Summit Vancouver 2015 (Stephen Gordon)
OpenStack Compute (Nova) has been a core component of OpenStack since the original Austin release in 2010. In the intervening years, development has proceeded at a rapid pace, adding support for new virtualization technologies and exposing additional features. Learn how Compute fits into the OpenStack architecture, and how it interacts with other OpenStack components and the hypervisors it manages.
Deploying Containers at Scale on OpenStack (Stephen Gordon)
Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster. Now, take that one step further with all of those images being updatable through a single upload to the registry, and with zero downtime. In this session, Steve Gordon of the Red Hat OpenStack Platform team will show you just that. Steve will walk through a recent benchmarking deployment using the Cloud Native Computing Foundation’s (CNCF) new 1,000 node cluster with OpenStack and Red Hat’s OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
Kubernetes and OpenStack at Scale at OpenStack Summit Boston 2017
Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster and elastic infrastructure. Now, take that one step further - all of those images being updatable through a single upload to the registry, and with zero downtime. In this session, you will see just that.
In this presentation, we will walk through a recent benchmarking deployment of Kubernetes and OpenStack on the Cloud Native Computing Foundation’s (CNCF's) 1,000-node cluster, using Red Hat’s OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
You'll also learn what's been happening in subsequent rounds of testing in Red Hat's own SCALE lab and the CNCF cluster, and how we are working with the relevant open source communities, including OpenStack, Kubernetes, and Ansible, to continue to raise the bar for horizontal scaling of these platforms via community-powered innovation.
Beyond its huge success in mobile, ARM is also ambitious in the server field. The software ecosystem is currently a barrier to wide deployment of ARM servers in the data center. The ARM Shanghai Workloads team is working on cloud and big data software enablement and optimization on the ARM64 platform.
In this presentation, Yibo Cai will introduce the status and challenges of running OpenStack on ARM servers, with emphasis on OpenStack compute, storage and networking.
The demand for managing large amounts of data in a scalable yet reliable and cost-effective way has become more and more relevant in this day and age. Ceph, a software-defined storage system, provides an original solution for this problem and guarantees a resilient and self-healing way to manage large amounts of data up to the exabyte level. In this session I will talk about a new feature introduced in oVirt 3.6 which provides the ability to integrate with Red Hat Ceph storage using Cinder, a storage service used mainly for OpenStack. This integration reveals new opportunities and tools for storage management in a scalable and virtualized way, and also opens the door for interesting future integrations with other storage providers.
In this session I will describe how oVirt, an open source virtualization management platform, has extended and elevated its storage virtualization management capabilities by integrating with Cinder, a storage service, to manage resources from Ceph storage. oVirt 3.6 revolutionizes the way it manages virtualized storage to be much more scalable and flexible, and opens the door for future integrations with well-known storage providers such as NetApp, EMC, HP, and more.
Presentation delivered at LinuxCon China 2017
Real-time systems are used for deadline-oriented applications and time-sensitive workloads. Real-Time KVM is an extension of KVM (the Linux Kernel-based Virtual Machine) that allows virtual machines (VMs) to run as truly real-time operating systems. Users sometimes need to run low-latency applications (such as audio/video streaming, highly interactive systems, etc.) to meet their requirements in clouds. NFV is a new network concept which uses virtualization and software instead of dedicated network appliances. For some telecommunications use cases, network latency must stay within a certain range of values. Real-Time KVM can help NFV meet these requirements.
In this presentation, Pei Zhang will talk about:
(1) Real-Time KVM introduction
(2) Real-Time cloud building
(3) Real-Time KVM in NFV: VMs with Open vSwitch, DPDK, and QEMU’s vhost-user
(4) Performance testing results
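As a rough illustration of the latency such talks are concerned with, the minimal sketch below (an illustrative example, not from the presentation) measures how late a 1 ms sleep actually wakes up in user space, a crude proxy for the scheduling latency that Real-Time KVM tries to bound inside a guest:

```python
import time
import statistics

def measure_wakeup_overshoot(iterations=200, sleep_us=1000):
    """Measure how late time.sleep() wakes up, in microseconds.

    The overshoot is a rough user-space proxy for scheduling latency;
    real-time tuning aims to keep the worst case (max) small and bounded.
    """
    overshoots = []
    for _ in range(iterations):
        start = time.monotonic_ns()
        time.sleep(sleep_us / 1_000_000)  # request a 1 ms nap
        elapsed_us = (time.monotonic_ns() - start) / 1000
        overshoots.append(elapsed_us - sleep_us)
    return {
        "min_us": min(overshoots),
        "avg_us": statistics.mean(overshoots),
        "max_us": max(overshoots),
    }

stats = measure_wakeup_overshoot()
print(f"min {stats['min_us']:.0f}us  avg {stats['avg_us']:.0f}us  max {stats['max_us']:.0f}us")
```

On an untuned host the max value typically varies widely between runs; tools like cyclictest do the equivalent measurement with much higher precision.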
OpenNebulaConf 2016 - Measuring and tuning VM performance by Boyan Krosnov, S... (OpenNebula Project)
In this session we'll explore measuring VM performance and evaluating changes to settings or infrastructure which can affect performance positively. We'll also share the best current practice for architecture for high performance clouds from our experience.
- What is Nova?
- Nova architecture
- How instances are spawned in OpenStack
- Interaction of Nova with other OpenStack projects such as Neutron, Glance, and Cinder.
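The interactions listed above can be sketched as a toy orchestration. The classes and methods below are illustrative stand-ins for the real Glance, Neutron, and Cinder APIs, not actual OpenStack code:

```python
# Conceptual sketch (not real Nova code) of how Nova coordinates with
# other OpenStack services when spawning an instance.

class Glance:          # image service stand-in
    def get_image(self, name):
        return {"name": name, "disk_format": "qcow2"}

class Neutron:         # networking service stand-in
    def create_port(self, network):
        return {"network": network, "ip": "10.0.0.5"}

class Cinder:          # block storage service stand-in
    def create_volume(self, size_gb):
        return {"size_gb": size_gb, "status": "available"}

class Nova:
    """Orchestrates the spawn: image -> port -> volume -> boot."""
    def __init__(self, glance, neutron, cinder):
        self.glance, self.neutron, self.cinder = glance, neutron, cinder

    def boot(self, image_name, network, volume_gb):
        image = self.glance.get_image(image_name)      # 1. fetch image metadata
        port = self.neutron.create_port(network)       # 2. wire up networking
        volume = self.cinder.create_volume(volume_gb)  # 3. provision block storage
        return {"image": image, "port": port,          # 4. hand off to the hypervisor
                "volume": volume, "status": "ACTIVE"}

server = Nova(Glance(), Neutron(), Cinder()).boot("cirros", "private", 10)
print(server["status"])  # ACTIVE
```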
OpenNebula Conf 2014: CentOS, QA and OpenNebula - Christoph Galuschka (NETWAYS)
CentOS, the Community Enterprise OS, uses OpenNebula as the virtualization platform for its automated QA process. The OpenNebula setup consists of 3 nodes, all running CentOS-6, which handle the following tasks:
– Sunstone as the cloud controller
– a local mirror/DNS server/HTTP server for the VMs to pull in packages
– one VM running a Jenkins instance to launch the various tests (ci.de.centos.org)
– nginx on the cloud controller to forward HTTP traffic to the Jenkins VM
A public git repository (http://www.gitorious.org/testautomation) allows whoever wants to contribute to pull the current test suite – t_functional, a series of bash scripts used to do functional tests of various applications, binaries, configuration files, and trademark issues. As new tests are added to the repo via personal clones and merge requests, those tests first need to complete a test run via Jenkins. Each test run currently consists of 4 VMs (one for each arch for C5 and C6 – C7 to come), which run the complete test suite. All VMs used for these tests are instantiated and torn down on demand, whenever the call to test-run a personal clone is issued (via IRC).
Once completed successfully, the request is merged into the main repo. The Jenkins node monitors this repository and automatically triggers another complete test run.
Besides these triggered runs, the test suite also runs automatically every day. This is used to verify the functionality of published updates – a handful of faulty updates have already been discovered this way.
Besides t_functional, the Linux Test Project suite of tests is also run daily to verify the functionality of the OS and all updates.
The third setup is used to test the availability and functional integrity of published Docker images for CentOS.
All these tests are later – during the QA phase of a point release – used to verify the functionality of new packages inside the CentOS QA setup.
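To give a flavor of what such functional checks look like, here is a minimal check-runner sketch. t_functional itself is a bash suite; the runner structure and the specific checks below are illustrative, not taken from it:

```python
# Minimal Python analogue of a functional test suite: each check
# returns True/False and the runner reports a pass/fail summary.
import os
import shutil

def check_binary_present(name):
    """Pass if an executable is on PATH (e.g. a shipped binary)."""
    return shutil.which(name) is not None

def check_file_readable(path):
    """Pass if a configuration file exists and is readable."""
    return os.path.isfile(path) and os.access(path, os.R_OK)

def run_suite(checks):
    """Run (description, callable) pairs; return results and failures."""
    results = {desc: fn() for desc, fn in checks}
    failed = [desc for desc, ok in results.items() if not ok]
    return results, failed

checks = [
    ("python on PATH", lambda: check_binary_present("python3") or check_binary_present("python")),
    ("hosts file readable", lambda: check_file_readable("/etc/hosts")),
]
results, failed = run_suite(checks)
print(f"{len(results) - len(failed)}/{len(results)} checks passed")
```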
Presentation delivered at LinuxCon China 2017. Rethinking the Operating System.
A new wave of operating systems optimized for containers has appeared on the horizon, making us excited and puzzled at the same time.
"Why do we need anything different for containers when traditional OSs have served us well for the last 25+ years?" "Isn't Kubernetes just another package to install on top of my favorite distro?" "Will this obsolete my whole infrastructure?" These are some of the questions this talk will shed light on.
Explore the journey SUSE made in rethinking the OS: from a conservative Linux distribution to a platform that goes hand in hand with the needs of microservices.
You will gain insight into the lessons learned during the intense development effort that led to the SUSE Containers as a Service Platform, how the obstacles along the way were overcome, and why "upstream first" is - and should always be - the rule.
Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph (Sean Cohen)
IT organizations require a disaster recovery strategy addressing outages involving loss of storage, or extended loss of availability at the primary site. Applications need to rapidly migrate to the secondary site and transition with little or no impact to their availability. This talk will cover the various architectural options and levels of maturity in OpenStack services for building multi-site configurations using the Mitaka release. We’ll present the latest capabilities for Volume, Image, and Object Storage with Ceph as the backend storage solution, and look at the future developments the OpenStack and Ceph communities are driving to improve and simplify the relevant use cases.
Slides from OpenStack Austin Summit 2016 session: http://alturl.com/hpesz
Container runtimes and tooling have matured since Docker brought them to the mainstream a decade ago. There are multiple options for building and running containers available to developers and system administrators. Oleg Chunikhin, CTO at Kublr, will provide a review and analysis of the popular options.
Can we leverage the resources of the public cloud for gaming, streaming, transcoding, machine learning, and visualized CAD applications on demand? Yes, if it provides the capability and infrastructure to utilize GPUs. Can we get networking in the cloud as performant as in a bare-metal environment? Yes, with SR-IOV. How do we achieve this? In this presentation we describe Discrete Device Assignment (also known as PCI pass-through) support for GPUs and network adapters in Linux guests, and SR-IOV architectures for Linux guests with a near-native performance profile running on Hyper-V. We will also share how accelerated graphics and networking capabilities are integrated into the Microsoft Azure infrastructure.
Presentation slides from DevConf.cz 2017
Challenges, take-aways and recommendations on scaling up OpenShift's logging and metrics stack.
Authors:
Ricardo Lourenço:
https://www.linkedin.com/in/ricardopereira4it/
Elvir Kuric
https://www.linkedin.com/in/elvirkuric/
Stacks and Layers: Integrating P4, C, OVS and OpenStack (Open-NFP)
Smart Network Interface Cards (SmartNICs) are increasingly being deployed in cloud data centers to offload inline network processing tasks from server CPUs, thereby improving system throughput while freeing up server CPU cycles for application processing. The match/action and tunnel handling semantics of SmartNIC datapaths can be expressed directly in the P4 language, be defined by virtual switching software like Open vSwitch (implementing the semantics of a specification like OpenFlow), or by using a combination of these. This presentation compares these approaches, considering aspects like the expressiveness and performance of the resulting datapath, as well as how these datapath variants can be integrated into existing cloud management systems (e.g. OpenStack).
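The match/action semantics described above can be modeled with a toy table: rules are tried in order, a rule fires when every specified field matches the packet, and an action rewrites or drops it. The fields and actions below are simplified illustrations, not a real P4 program or OVS flow table:

```python
# Toy model of the match/action semantics a SmartNIC datapath exposes,
# whether programmed in P4 or via Open vSwitch flow rules.

class MatchActionTable:
    def __init__(self, default_action):
        self.rules = []                 # ordered (match_fields, action) pairs
        self.default_action = default_action

    def add_rule(self, match_fields, action):
        self.rules.append((match_fields, action))

    def apply(self, packet):
        for match_fields, action in self.rules:
            # a rule matches if every specified field equals the packet's value
            if all(packet.get(k) == v for k, v in match_fields.items()):
                return action(packet)
        return self.default_action(packet)

def forward(port):
    return lambda pkt: {**pkt, "out_port": port}

def drop(pkt):
    return {**pkt, "out_port": None}

table = MatchActionTable(default_action=drop)
table.add_rule({"dst_ip": "10.0.0.2"}, forward(2))
table.add_rule({"vlan": 100}, forward(3))

print(table.apply({"dst_ip": "10.0.0.2"}))   # forwarded to port 2
print(table.apply({"dst_ip": "192.0.2.9"}))  # no rule matches: dropped
```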
Johann Tönsing
Chief Architect & SVP, Software, Netronome
Johann is a recognized industry expert in SDN, Linux-based networking technologies, network virtualization, security, and NFV. Johann has been an active contributing member and has been nominated to leadership roles in multiple standards bodies related to SDN and NFV. As Netronome’s Chief Architect, Johann leads all aspects of Netronome’s product design and development, with heavy emphasis on advanced and open server-based networking technologies, where he also holds multiple patents. He holds a Master of Engineering in Electronics.
Intel's Out of the Box Network Developers Ireland Meetup on March 29 2017 - ... (Haidee McMahon)
For details on Intel's Out of The Box Network Developers Ireland meetup, go to https://www.meetup.com/Out-of-the-Box-Network-Developers-Ireland/events/237726826/
Intel Talk : Enhanced Platform Awareness for Openstack to increase NFV performance
By Andrew Duignan
Bio: Andrew Duignan is an Electronic Engineering graduate from University College Dublin, Ireland. He has worked as a software engineer in Motorola and now at Intel Corporation. He is now in a Platform Applications Engineering role, supporting technologies such as DPDK and virtualization on Intel CPUs. He is based in the Intel Shannon site in Ireland.
Presentation written by Sean Cohen and Steve Gordon at Red Hat covering the highlights of the OpenStack Liberty release.
Presented to the Atlanta OpenStack Meetup on October 15th.
Disaster Recovery Options Running Apache Kafka in Kubernetes with Rema Subra... (HostedbyConfluent)
Active-Active, Active-Passive, and stretch clusters are hallmark patterns that have been the gold standard in Apache Kafka® disaster recovery architectures for years. Moving to Kubernetes requires unpacking these patterns and choosing a configuration that allows you to meet the same RTO and RPO requirements.
In this talk, we will cover how Active-Active/Active-Passive modes for disaster recovery have worked in the past and how the architecture evolves with deploying Apache Kafka on Kubernetes. We'll also look at how stretch clusters sitting on this architecture give a disaster recovery solution that's built-in!
Armed with this information, you will be able to architect your new Apache Kafka Kubernetes deployment (or retool your existing one) to achieve the resilience you require.
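A minimal sketch of the durability reasoning behind these architectures: with acks="all", a write is only accepted while enough replicas are in sync, which is what bounds data loss (RPO) during a site failure. The setting names (acks, replication.factor, min.insync.replicas) are real Kafka configurations; the cluster and replica counts below are made-up examples:

```python
# Model of when a Kafka produce request is accepted durably.

def write_is_durable(acks, in_sync_replicas, min_insync_replicas):
    """With acks="all", the broker rejects writes once the in-sync
    replica set shrinks below min.insync.replicas; acks=0/1 trade
    durability for availability."""
    if acks == "all":
        return in_sync_replicas >= min_insync_replicas
    return True

# A stretch cluster spanning two sites with replication.factor=4:
topic = {"replication.factor": 4, "min.insync.replicas": 3}

# Both sites healthy: 4 in-sync replicas, writes accepted.
print(write_is_durable("all", 4, topic["min.insync.replicas"]))  # True
# One site lost: only 2 replicas remain in sync, so writes are
# rejected rather than silently risking data loss.
print(write_is_durable("all", 2, topic["min.insync.replicas"]))  # False
```

Choosing replication.factor and min.insync.replicas per site is exactly the trade-off that distinguishes Active-Active, Active-Passive, and stretch-cluster designs.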
Presentation given at the Melbourne Docker Meetup on container-related projects within OpenStack, specifically looking at Project Magnum and Project Kolla and how they leverage technologies like Docker, Kubernetes, and Atomic.
About the speaker: Pramod is a software developer on OpenStack and OpenDaylight, working for OTC, SSG at Intel. His area of interest is cloud networking and applications. He has prior experience in databases, and his current focus is on developing features of a cloud networking platform. He holds a Master's degree from San Jose State University.
Similar to Containers for the Enterprise: Delivering OpenShift on OpenStack for Performance and Scale
Toronto RHUG: Container-native virtualization (Stephen Gordon)
November 2018 presentation covering Container-native virtualization, enabling OpenShift/Kubernetes as a common platform for application containers and virtual machines.
KubeVirt (Kubernetes and Cloud Native Toronto) - Stephen Gordon
In this session Stephen will present the use cases for and current state of the KubeVirt project (http://www.kubevirt.io/), which aims to build a virtualization API for Kubernetes in order to manage virtual machines which themselves run in Kubernetes pods.
You will also hear how this project differs from, and is complementary to, the recently announced Kata Containers (https://katacontainers.io/) project.
OpenStackTO: Friendly coexistence of Virtual Machines and Containers on Kuber... (Stephen Gordon)
KubeVirt is intended to provide a convergence point for the data center of the future using Kubernetes as an infrastructure fabric for both application container and virtual machine workloads. Using a unified management approach simplifies deployments, allows for better resource utilization, and supports different workloads in a more optimal way. This session will outline how the Kubevirt project seeks to achieve this while using the extensible nature of Kubernetes in a way that provides a developer workflow that is as consistent as possible with the same patterns used for working with application containers.
OpenStack “Liberty,” due for imminent release, represents the 12th release of the open source computing platform for public and private clouds. Recent OpenStack releases have focused on improving stability and enhancing the operator experience. This is still the case with Liberty, but there are still new features to consider.
Join Sean Cohen and Steve Gordon to review notable features of this new OpenStack release, including:
Network quality of service (QoS) support via a new extensible API for dynamically defining per-port and per-network QoS policies.
Mark host down API enhancement in support of external high-availability solutions, including Pacemaker, providing resilient instances in the event of compute node failure.
Enhanced Security Assertion Markup Language (SAML) support including dashboard integration, Ipsilon, and OpenID Connect support.
Role-based access control (RBAC) for networks, providing fine-grained permissions for sharing networks between tenants.
Dashboard support for database-as-a-service (Trove), subnet allocation, floating IP assignment, and volume migration.
Generic volume migration—adding the ability to migrate workloads from iSCSI to non-iSCSI back ends.
New Cinder replication API to allow block level replication between back ends.
Nondisruptive backup to allow backup while the volume is still attached, by performing backup from a temporary attached snapshot.
New image signing and encryption to guarantee integrity by supporting signing and signature validation of bootable images.
In addition we’ll discuss the state of emerging projects including Manila and Zaqar.
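The image signing feature above follows a verify-before-boot flow: recompute the signature over the image bytes and refuse to boot on a mismatch. Glance's actual implementation uses public-key signatures (via the cryptography library), so the stdlib HMAC below is only a stand-in to show the shape of the check:

```python
# Sketch of verify-before-boot, with HMAC standing in for the
# public-key signatures Glance actually uses.
import hashlib
import hmac

def sign_image(image_bytes, key):
    """Compute a signature over the image contents."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes, key, signature):
    """Recompute the signature and compare in constant time."""
    expected = sign_image(image_bytes, key)
    return hmac.compare_digest(expected, signature)

key = b"shared-secret"           # stand-in for the uploader's signing key
image = b"...bootable image contents..."
sig = sign_image(image, key)

print(verify_image(image, key, sig))                # True: boot allowed
print(verify_image(image + b"tampered", key, sig))  # False: boot refused
```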
KVM (Kernel-based Virtual Machine) is a full virtualization solution built into the Linux kernel. OpenStack Foundation user surveys consistently indicate that KVM is the most commonly used hypervisor for OpenStack deployments, managed using the Libvirt driver for OpenStack Compute (Nova). Despite this sustained popularity, development of the driver, and indeed the underlying hypervisor itself, continues at a frantic pace.
This presentation will help you make sense of it all starting with an overview of the way Nova, Libvirt, and KVM interact before analysing progress made in Kilo on utilizing key Libvirt/KVM features in Nova including:
Instance vCPU pinning
Huge page backed instances
Enhanced NUMA topology awareness
...and more! The session will close with a discussion of how in addition to exposing existing Libvirt/KVM features emerging OpenStack use cases - such as Network Function Virtualization (NFV) and High Performance Computing (HPC) - are driving open innovation in the Libvirt, QEMU, and KVM projects themselves.
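The pinning, huge page, and NUMA features above are requested through flavor "extra specs". The keys shown below (hw:cpu_policy, hw:mem_page_size, hw:numa_nodes) are real Nova extra specs; the validation helper is an illustrative sketch, not Nova code:

```python
# Flavor extra specs an operator might set for an NFV workload.
NFV_FLAVOR_EXTRA_SPECS = {
    "hw:cpu_policy": "dedicated",   # pin each vCPU to a host core
    "hw:mem_page_size": "large",    # back guest RAM with huge pages
    "hw:numa_nodes": "1",           # confine the guest to one NUMA node
}

# Subset of recognized values, for illustration only.
ALLOWED = {
    "hw:cpu_policy": {"dedicated", "shared"},
    "hw:mem_page_size": {"small", "large", "any", "2048", "1048576"},
}

def validate_extra_specs(specs):
    """Return (key, value) pairs whose value is not recognized."""
    errors = []
    for key, value in specs.items():
        if key in ALLOWED and value not in ALLOWED[key]:
            errors.append((key, value))
    return errors

print(validate_extra_specs(NFV_FLAVOR_EXTRA_SPECS))       # []
print(validate_extra_specs({"hw:cpu_policy": "pinned"}))  # [('hw:cpu_policy', 'pinned')]
```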
Deep Dive: OpenStack Summit (Red Hat Summit 2014)Stephen Gordon
This deck begins with a high-level overview of where OpenStack Compute (Nova) fits into the overall OpenStack architecture, as demonstrated in Red Hat Enterprise Linux OpenStack Platform, before illustrating how OpenStack Compute interacts with other OpenStack components.
The session will also provide a grounding in some common Compute terminology and a deep-dive look into key areas of OpenStack Compute, including the:
Compute APIs.
Compute Scheduler.
Compute Conductor.
Compute Service.
Compute Instance lifecycle.
Intertwined with the architectural information are details on horizontally scaling and dividing compute resources as well as customization of the Compute scheduler. You’ll also learn valuable insights into key OpenStack Compute features present in OpenStack Icehouse.
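The scheduler mentioned above follows a filter-and-weigh design: filters eliminate hosts that cannot fit the request, then a weigher ranks the survivors. The sketch below models that design with made-up host data; it is a conceptual illustration, not Nova's actual implementation:

```python
# Toy filter-and-weigh scheduler in the spirit of Nova's filter scheduler.

def ram_filter(host, req):
    return host["free_ram_mb"] >= req["ram_mb"]

def core_filter(host, req):
    return host["free_vcpus"] >= req["vcpus"]

def ram_weigher(host):
    # prefer the host with the most free RAM (spread instances out)
    return host["free_ram_mb"]

def schedule(hosts, req, filters, weigher):
    """Filter out unfit hosts, then pick the highest-weighted survivor."""
    candidates = [h for h in hosts if all(f(h, req) for f in filters)]
    if not candidates:
        raise RuntimeError("No valid host was found")
    return max(candidates, key=weigher)

hosts = [
    {"name": "node1", "free_ram_mb": 4096, "free_vcpus": 2},
    {"name": "node2", "free_ram_mb": 16384, "free_vcpus": 8},
    {"name": "node3", "free_ram_mb": 8192, "free_vcpus": 0},
]
req = {"ram_mb": 2048, "vcpus": 2}

chosen = schedule(hosts, req, [ram_filter, core_filter], ram_weigher)
print(chosen["name"])  # node2: node3 fails the core filter, node2 wins on RAM
```

Customizing the real scheduler amounts to supplying different filters and weighers, which is why the deep dive covers it as its own component.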
A brief introduction to Publican for members of the OpenStack documentation community. Originally presented at the OpenStack documentation bootcamp on the 10th of September 2013.
Top Nidhi software solution free download (vrstrong314)
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Anthony Dahanne
Les Buildpacks existent depuis plus de 10 ans ! D’abord, ils étaient utilisés pour détecter et construire une application avant de la déployer sur certains PaaS. Ensuite, nous avons pu créer des images Docker (OCI) avec leur dernière génération, les Cloud Native Buildpacks (CNCF en incubation). Sont-ils une bonne alternative au Dockerfile ? Que sont les buildpacks Paketo ? Quelles communautés les soutiennent et comment ?
Venez le découvrir lors de cette session ignite
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge on how to organize and improve your code review proces
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Enterprise Resource Planning System includes various modules that reduce any business's workload. Additionally, it organizes the workflows, which drives towards enhancing productivity. Here are a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing the work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
AI Pilot Review: The World’s First Virtual Assistant Marketing SuiteGoogle
AI Pilot Review: The World’s First Virtual Assistant Marketing Suite
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
Accelerate Enterprise Software Engineering with PlatformlessWSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
8. WHERE TO TEST?
The CNCF cluster is made up of 1000 nodes deployed at Switch, Las Vegas by Intel for the use of the CNCF community:
Compute Node Spec
● 2x Intel E5-2680v3 12-core
● 256GB RAM
● 2x Intel S3610 400GB SSD
● 1x Intel P3700 800GB NVMe PCIe SSD
● 1x QP Intel X710
Storage Node Spec
● 2x Intel E5-2680v3 12-core
● 128GB RAM
● 2x Intel S3610 400GB SSD
● 10x Intel 2TB NL-SAS HDD
● 1x QP Intel X710
Got a nefarious plan for taking over the world using CNCF-related open source projects?
Head to https://github.com/cncf/cluster.
9. WHAT TO TEST?
Goal: 1000 OpenShift Container Platform nodes on 300 Red Hat OpenStack Platform nodes
● Push the deployment to its limits and identify:
○ Bottlenecks,
○ Config changes, and
○ Best practices
● Document, file, and fix issues as appropriate.
16. SYSTEM VERIFICATION TEST SUITE
● Red Hat OpenShift Performance and Scalability team’s upstream test suites
● Main tests are:
○ cluster-loader
○ Networking/synthetic
○ Workload generator
○ Reliability/longevity
● https://github.com/openshift/svt
17. CLUSTER-LOADER ARCHITECTURE
[Flowchart: cluster-loader starts by parsing its arguments and configuration into config objects. For each configured item it creates the target namespace, then checks whether object X already exists; if not, it creates X, iterating the item count until N items exist, at which point the run ends.]
X can be:
● Quota
● Template
● Service
● User
● Pod
● RC
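The control flow on this slide can be sketched as a short loop. This is a hypothetical simplification for illustration only, not the actual openshift/svt implementation; the config schema, function name, and in-memory "cluster" dictionary are assumptions made for the example.

```python
# Minimal sketch of the cluster-loader control flow from the slide
# (hypothetical simplification, not the real openshift/svt code).
# For each project in the parsed config: create the namespace if needed,
# then for each object type X (quota, template, service, user, pod, RC)
# create instances until the configured count N is reached, skipping
# any object that already exists.

def run_cluster_loader(config, cluster):
    for project in config["projects"]:
        ns = project["namespace"]
        if ns not in cluster:                       # Create Namespace
            cluster[ns] = {}
        for obj in project["objects"]:              # X: quota, pod, RC, ...
            kind, count = obj["kind"], obj["count"]
            existing = cluster[ns].setdefault(kind, set())
            for i in range(count):                  # Iterate item count < N
                name = f"{kind}-{i}"
                if name in existing:                # Exists? -> skip
                    continue
                existing.add(name)                  # Create X
    return cluster
```

Because existing objects are skipped, re-running the loader against the same cluster state is idempotent, which matches how repeated scale-test runs are driven from one config.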
21. FUTURE OPPORTUNITIES
● Reference architecture available at red.ht/2ibNmvX
● Containers and OpenStack: A Platform for Distributed Applications, paper available at http://red.ht/2hSfIPs
● Learn more about Red Hat Summit at redhat.com/summit