Bloomberg's Chris Jones and Chris Morgan joined Red Hat Storage Day New York on 1/19/16 to explain how Red Hat Ceph Storage helps the financial giant tackle its data storage challenges.
BlueStore: a new, faster storage backend for Ceph, by Sage Weil
Traditionally Ceph has made use of local file systems like XFS or btrfs to store its data. However, the mismatch between the OSD's requirements and the POSIX interface provided by kernel file systems has a huge performance cost and requires a lot of complexity. BlueStore, an entirely new OSD storage backend, utilizes block devices directly, doubling performance for most workloads. This talk will cover the motivation for a new backend, the design and implementation, the improved performance on HDDs, SSDs, and NVMe, and discuss some of the thornier issues we had to overcome when replacing tried-and-true kernel file systems with entirely new code running in userspace.
Storage tiering and erasure coding in Ceph (SCaLE13x), by Sage Weil
Ceph is designed around the assumption that all components of the system (disks, hosts, networks) can fail, and has traditionally leveraged replication to provide data durability and reliability. The CRUSH placement algorithm is used to allow failure domains to be defined across hosts, racks, rows, or datacenters, depending on the deployment scale and requirements.
Recent releases have added support for erasure coding, which can provide much higher data durability and lower storage overheads. However, in practice erasure codes have different performance characteristics than traditional replication and, under some workloads, come at some expense. At the same time, we have introduced a storage tiering infrastructure and cache pools that allow alternate hardware backends (like high-end flash) to be leveraged for active data sets while cold data are transparently migrated to slower backends. The combination of these two features enables a surprisingly broad range of new applications and deployment configurations.
This talk will cover a few Ceph fundamentals, discuss the new tiering and erasure coding features, and then discuss a variety of ways that the new capabilities can be leveraged.
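To make the combination of those two features concrete, here is a minimal sketch (not from the talk) of creating an erasure-coded base pool fronted by a replicated cache tier with the ceph CLI, driven from Python. The pool names, PG counts, and k/m values are illustrative assumptions.

```python
import subprocess

def ceph(*args):
    """Run a ceph CLI command; assumes an admin keyring is available."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# Erasure-code profile: k data chunks + m coding chunks (values are illustrative).
ceph("osd", "erasure-code-profile", "set", "ec-profile", "k=4", "m=2",
     "crush-failure-domain=host")

# Cold, erasure-coded base pool and a replicated pool to act as the cache tier.
ceph("osd", "pool", "create", "cold-ec", "128", "128", "erasure", "ec-profile")
ceph("osd", "pool", "create", "hot-cache", "128", "128", "replicated")

# Attach the cache tier in writeback mode and route client I/O through it.
ceph("osd", "tier", "add", "cold-ec", "hot-cache")
ceph("osd", "tier", "cache-mode", "hot-cache", "writeback")
ceph("osd", "tier", "set-overlay", "cold-ec", "hot-cache")
```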
Ceph data services in a multi- and hybrid cloud world, by Sage Weil
IT organizations of the future (and present) are faced with managing infrastructure that spans multiple private data centers and multiple public clouds. Emerging tools and operational patterns like Kubernetes and microservices are easing the process of deploying applications across multiple environments, but the Achilles' heel of such efforts remains that most applications require large quantities of state, whether in databases, object stores, or file systems. Unlike stateless microservices, state is hard to move.
Ceph is known for providing scale-out file, block, and object storage within a single data center, but it also includes a robust set of multi-cluster federation capabilities. This talk will cover how Ceph's underlying multi-site capabilities complement and enable true portability across cloud footprints, public and private, and how viewing Ceph from a multi-cloud perspective has fundamentally shifted our data services roadmap, especially for Ceph object storage.
Ceph, Now and Later: Our Plan for Open Unified Cloud Storage, by Sage Weil
Ceph is a highly scalable open source distributed storage system that provides object, block, and file interfaces on a single platform. Although Ceph RBD block storage has dominated OpenStack deployments for several years, maturing object (S3, Swift, and librados) interfaces and stable CephFS (file) interfaces now make Ceph the only fully open source unified storage platform.
This talk will cover Ceph's architectural vision and project mission and how our approach differs from alternative approaches to storage in the OpenStack ecosystem. In particular, we will look at how our open development model dovetails well with OpenStack, how major contributors are advancing Ceph capabilities and performance at a rapid pace to adapt to new hardware types and deployment models, and what major features we are prioritizing for the next few years to meet the needs of expanding cloud workloads.
Ceph is an open source distributed storage system that provides scalable object, block, and file interfaces on commodity hardware. Luminous, the latest stable release of Ceph, was just released in August. This talk will cover all that is new in Luminous (there is a lot!) and provide a sneak peek at the roadmap for Mimic, which is due out in the spring.
The State of Ceph, Manila, and Containers in OpenStack, by Sage Weil
OpenStack users deploying Ceph for block (Cinder) and object (S3/Swift) are unsurprisingly looking at Manila and CephFS to round out a unified storage solution. Ceph is based on a low-level object storage layer called RADOS that serves as the foundation for its object, block, and file services. Manila's file-as-a-service in OpenStack enables a range of container-based use cases with Docker and Kubernetes, but a variety of deployment architectures are possible.
This talk will cover the current state of CephFS support in Manila, including upstream Manila support, Manila works in progress, a progress update on CephFS itself (including new multi-tenancy support to facilitate cloud deployments), and a discussion of how this impacts container deployment scenarios in an OpenStack cloud.
Distributed Storage and Compute With Ceph's librados (Vault 2015), by Sage Weil
The Ceph distributed storage system sports object, block, and file interfaces to a single storage cluster. These interfaces are built on a distributed object storage and compute platform called RADOS, which exports a conceptually simple yet powerful interface for storing and processing large amounts of data and is well-suited for backing web-scale applications and data analytics. It features a rich object model, efficient key/value storage, atomic transactions (including efficient compare-and-swap semantics), object cloning and other primitives for supporting snapshots, simple inter-client communication and coordination (a la ZooKeeper), and the ability to extend the object interface using arbitrary code executed on the storage node. This talk will focus on the librados API, how it is used, the security model, and some examples of RADOS classes implementing interesting functionality.
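For a feel of what the librados interface looks like in practice, here is a minimal sketch using the python-rados binding; the pool name, object name, and config path are assumptions, and the cluster must already be reachable with a valid keyring.

```python
import rados  # python-rados binding shipped with Ceph

# Connect to the cluster using the local ceph.conf and the default keyring.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Open an I/O context on an existing pool (pool name is an assumption).
    ioctx = cluster.open_ioctx("rbd")
    try:
        # Write a whole object, attach an extended attribute, and read it back.
        ioctx.write_full("hello-object", b"hello from librados")
        ioctx.set_xattr("hello-object", "owner", b"demo")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```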
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C..., by Odinot Stanislas
After a short introduction to distributed storage and a description of Ceph, Jian Zhang presents some interesting benchmarks in this deck: sequential tests, random tests, and above all a comparison of results before and after optimization. The configuration parameters and optimizations involved (large page numbers, omap data on a separate disk, ...) yield at least a 2x performance gain.
HKG15-401: Ceph and Software Defined Storage on ARM servers, by Linaro
HKG15-401: Ceph and Software Defined Storage on ARM servers
---------------------------------------------------
Speakers: Yazen Ghannam, Steve Capper
Date: February 12, 2015
---------------------------------------------------
★ Session Summary ★
Running Ceph in colocation; ongoing optimizations
--------------------------------------------------
★ Resources ★
Pathable: https://hkg15.pathable.com/meetings/250828
Video: https://www.youtube.com/watch?v=RdZojLL7ttk
Etherpad: http://pad.linaro.org/p/hkg15-401
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2015 - #HKG15
February 9-13th, 2015
Regal Airport Hotel Hong Kong Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
Keeping OpenStack storage trendy with Ceph and containers, by Sage Weil
The conventional approach to deploying applications on OpenStack uses virtual machines (usually KVM) backed by block devices (usually Ceph RBD). As interest increases in container-based application deployment models like Docker, it is worth looking at what alternatives exist for combining compute and storage (both shared and non-shared). Mapping RBD block devices directly to host kernels trades isolation for performance and may be appropriate for many private clouds without significant changes to the infrastructure. More importantly, moving away from virtualization allows for non-block interfaces and a range of alternative models based on file or object.
Attendees will leave this talk with a basic understanding of the storage components and services available to both virtual machines and Linux containers, a view of several ways they can be combined along with the performance, reliability, and security trade-offs associated with those possibilities, and several proposals for how the relevant OpenStack projects (Nova, Cinder, Manila) can work together to make it easy.
Making distributed storage easy: usability in Ceph Luminous and beyond, by Sage Weil
Distributed storage is complicated, and historically Ceph hasn't spent a lot of time trying to hide that complexity, instead focusing on correctness, features, and flexibility. There has been a recent shift in focus to simplifying and streamlining the user/operator experience so that the information that is actually important is available without the noise of irrelevant details. Recent feature work has also focused on simplifying configurations that were previously possible but required tedious configuration steps to manage.
This talk will cover the key new efforts in Ceph Luminous that aim to simplify and automate cluster management, as well as the plans for upcoming releases to address longstanding Cephisms that make it "hard" (e.g., choosing PG counts).
Ceph, being a distributed storage system, is highly reliant on the network for resiliency and performance. In addition, it is crucial that the network topology beneath a Ceph cluster be designed in such a way to facilitate easy scaling without service disruption. After an introduction to Ceph itself this talk will dive into the design of Ceph client and cluster network topologies.
CRUSH is the powerful, highly configurable algorithm Red Hat Ceph Storage uses to determine how data is stored across the many servers in a cluster. A healthy Red Hat Ceph Storage deployment depends on a properly configured CRUSH map. In this session, we will review the Red Hat Ceph Storage architecture and explain the purpose of CRUSH. Using example CRUSH maps, we will show you what works and what does not, and explain why.
Presented at Red Hat Summit 2016-06-29.
Brent Compton and Kyle Bader of Red Hat took the stage at Red Hat Storage Day New York on 1/19/16 to share with attendees best practices and lessons learned for architecting solutions with Red Hat Ceph Storage.
At Spreadshirt we have built up our own Ceph cluster(s) to spread customer images around the world. This presentation gives an overview on the setup and how things work (and how some things don't work as expected).
At the Public Sector Red Hat Storage Days on 1/20/16 and 1/21/16, Jason Calloway walked attendees through the basics of scalable POSIX file systems in the cloud.
How to Build Highly Available Shared Storage on Microsoft Azure, by Buurst
Learn how to quickly build Enterprise-Class, Highly Available, Network Attached Storage on Microsoft Azure. Presented by Bruno Terkaly, Principal Software Engineer/Developer Experience at Microsoft and Mark Bichlmeier, IT expert and Principal Solutions Architect for SoftNAS (a top-selling NAS application on leading cloud platforms). Live Q & A afterwards.
You will learn:
- How to migrate on-premise applications to the Microsoft Azure cloud and use cloud NAS storage
- How to build highly available cloud NAS storage on Microsoft Azure
- How to configure CIFS, NFS and iSCSI
- How to configure Active Directory on Microsoft Azure for cloud NAS storage
Cloud organizational impacts: big data on-premise vs. off-premise, by John Sing
Internet-scale cloud data centers and cloud technology have fundamentally changed the IT and Internet landscape. What is less apparent, but absolutely essential, is the very different IT organizational structure that must exist in order to properly implement, manage, support, and scale a cloud IT infrastructure. This extensive chart deck, provided in full PowerPoint format, explains these significant and unavoidable IT organizational changes. Bottom line: it is (unfortunately) impossible for a traditional IT organization to provide a true modern autonomically managed, scalable, cost-effective cloud infrastructure.
TUT18972: Unleash the power of Ceph across the Data Center, by Ettore Simone
From SUSECon 2015: smooth integration of emerging software-defined storage technologies into the traditional data center, using Fibre Channel and iSCSI as key values for success.
Benchmarking your cloud performance with top 4 global public clouds, by data://disrupted®
In this presentation, we will present the performance measurement metrics of leading cloud providers - AWS, Google Cloud, Microsoft Azure, and Digital Ocean. We’ll give you useful tools to measure your own cloud performance and a handy guide on how to calculate cloud TCO (total cost of ownership). In addition, you’ll learn how to estimate correctly your market positioning and perform better than the cloud giants.
Boyan Krosnov is a Co-Founder and Chief Product Officer of StorPool Storage. He has been part of the technical teams building 5 service providers from scratch in 4 countries. In most of these projects, he has designed the architecture, led the technical teams, and managed the implementation of projects in the millions.
Running OpenStack in Production - Barcamp Saigon 2016, by Thang Man
My talk at http://www.barcampsaigon.com (2016) about how we architected and configured the OpenStack-based private cloud running in production at FimPlus.vn
Ariel Waizel discusses the Data Plane Development Kit (DPDK), an API for developing fast packet processing code in user space.
* Who needs this library? Why bypass the kernel?
* How does it work?
* How good is it? What are the benchmarks?
* Pros and cons
Ariel worked on kernel development at the IDF, Ben Gurion University, and several companies. He is interested in networking, security, machine learning, and basically everything except UI development. Currently a Solution Architect at ConteXtream (an HPE company), which specializes in SDN solutions for the telecom industry.
Ceph at Work in Bloomberg: Object Store, RBD and OpenStack
1. CEPH AT WORK
IN BLOOMBERG
Object Store, RBD and OpenStack
January 19, 2016
By: Chris Jones & Chris Morgan
2. BLOOMBERG
2
30 Years in under 30 Seconds
● Subscriber based financial provider (Bloomberg Terminal)
● Online, TV, print, real-time streaming information
● Offices and customers in every major financial market and institution
worldwide
3. BLOOMBERG
3
Primary product - Information
● Bloomberg Terminal
− Approximately 60,000 features/functions. For example, the ability to track oil tankers in real time via satellite feeds
− Note: Exact numbers are not specified. Contact media relations for specifics and other important information.
5. CLOUD INFRASTRUCTURE GROUP
5
Primary customers
– Developers
– Product Groups
● Many different development groups throughout our organization
● Currently about 3,000 R&D developers
● Every one of them wants and needs resources
6. CLOUD INFRASTRUCTURE GROUP
6
Resource Challenges
● Developers
− Development
− Testing
− Automation (Cattle vs. Pets)
● Organizations
− POC
− Products in production
− Automation
● Security/Networking
− Compliance
7. USER BASE (EXAMPLES)
7
Resources and Use cases
● Multiple Data Centers
− Each DC contains *many* Network Tiers which include a DMZ for public-facing Bloomberg assets
− There is at least one Ceph/OpenStack Cluster per Network Tier
● Developer Community Supported
− Public facing Bloomberg products
− Machine learning backend for smart apps
− Compliance-based resources
− Use cases continue to climb as Devs need more storage and compute capacity
9. USED IN BLOOMBERG
9
● Ceph – RGW (Object Store)
● Ceph – Block/Volume
● OpenStack
─ Different flavors of compute
─ Ephemeral storage
● Object Store is becoming one of the most popular items
● OpenStack compute with Ceph-backed block store volumes are very popular (a short RBD sketch follows this slide)
● We introduced ephemeral compute storage
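As referenced above, here is a rough sketch of the RBD block path using the python-rbd binding (independent of Cinder, which does essentially the same thing under the hood). The pool name, image name, and size are illustrative assumptions.

```python
import rados
import rbd  # python-rbd, shipped alongside python-rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")  # pool name is an assumption
    try:
        # Create a 10 GiB image, then open it and report its size.
        rbd.RBD().create(ioctx, "demo-volume", 10 * 1024 ** 3)
        image = rbd.Image(ioctx, "demo-volume")
        try:
            print("image size (bytes):", image.size())
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```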
11. SUPER HYPER-CONVERGED STACK
11
(Original) Converged Architecture Rack Layout
● 3 Head Nodes (Controller Nodes)
− Ceph Monitor
− Ceph OSD
− OpenStack Controllers (All of them!)
− HAProxy
● 1 Bootstrap Node
− Cobbler (PXE Boot)
− Repos
− Chef Server
− Rally/Tempest
● Remaining Nodes
− Nova Compute
− Ceph OSDs
− RGW – Apache
● Ubuntu
● Shared spine with Hadoop resources
Sliced view of the rack (diagram): a Bootstrap Node at the top, with the remaining stack made up of Compute/Ceph OSD/RGW/Apache nodes.
12. NEW POD ARCHITECTURE
12
Diagram (illustrative only, not representative): separate PODs, each behind its own top-of-rack (TOR) switch. One POD holds the OpenStack control and compute services (HAProxy, Nova, RabbitMQ, database) plus bootstrap and monitoring nodes; another POD holds the Ceph cluster (three monitors plus OSD nodes) serving RBD only. Ephemeral storage is fast but dangerous, is not Ceph backed, and is exposed through host aggregates and flavors (a short sketch of that mechanism follows). A number of large providers have taken similar approaches.
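The host-aggregates-and-flavors note above refers to the standard Nova mechanism for steering instances onto specific hypervisors. Below is a hedged sketch of what such a setup might look like with the OpenStack CLI, driven from Python; the aggregate, property, and flavor names are assumptions, not Bloomberg's actual configuration.

```python
import subprocess

def openstack(*args):
    """Run an OpenStack CLI command (assumes admin credentials in the environment)."""
    subprocess.run(["openstack", *args], check=True)

# Group the hypervisors that expose local (ephemeral) flash into an aggregate.
openstack("aggregate", "create", "--property", "local_flash=true", "ephemeral-hosts")
openstack("aggregate", "add", "host", "ephemeral-hosts", "compute-node-01")

# Flavors carrying the matching extra spec will only land on those hosts
# (requires the AggregateInstanceExtraSpecsFilter scheduler filter).
openstack("flavor", "create", "--vcpus", "8", "--ram", "16384",
          "--ephemeral", "200", "ephemeral.large")
openstack("flavor", "set", "--property",
          "aggregate_instance_extra_specs:local_flash=true", "ephemeral.large")
```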
14. EPHEMERAL VS. CEPH BLOCK STORAGE
14
Numbers will vary in different environments. Illustrations are simplified.
Diagram comparing the Ceph and ephemeral storage paths. Ephemeral is a new feature option added to address high-IOPS applications.
15. EPHEMERAL VS. CEPH BLOCK STORAGE
15
Numbers will vary in different environments. Illustrations are simplified.
Ceph – Advantages
● All data is replicated at least 3 ways across the cluster
● Ceph RBD volumes can be created, attached and detached from any hypervisor
● Very fast provisioning using COW (copy-on-write) images
● Allows easy instance re-launch in the event of hypervisor failure
● High read performance
Ephemeral – Advantages
● Offers read/write speeds that can be 3-4 times faster than Ceph with lower latency
● Can provide fairly large volumes for cheap
Ceph – Disadvantages
● All writes must be acknowledged by multiple nodes before being considered as committed (tradeoff for reliability)
● Higher latency due to Ceph being network based instead of local
Ephemeral – Disadvantages
● Trades data integrity for speed: with RAID 0, if one drive fails then all data on that node is lost
● May be difficult to add more capacity (depends on the type of RAID)
● Running in JBOD/LVM mode without RAID, performance was not as good as Ceph
● Less important: with RAID, your drives need to be the same size or you lose capacity
16. EPHEMERAL VS. CEPH BLOCK STORAGE
16
Numbers will vary in different environments. Illustrations are simplified.
EPHEMERAL CEPH
Block write bandwidth (MB/s) 1,094.02 642.15
Block read bandwidth (MB/s) 1,826.43 639.47
Character read bandwidth (MB/s) 4.93 4.31
Character write bandwidth (MB/s) 0.83 0.75
Block write latency (ms) 9.502 37.096
Block read latency (ms) 8.121 4.941
Character read latency (ms) 2.395 3.322
Character write latency (ms) 11.052 13.587
Note: Ephemeral in JBOD/LVM mode is not as fast as Ceph
Numbers can also increase with additional tuning and different devices
17. CHALLENGES – LESSONS LEARNED
17
Network
● It’s all about the network.
− Changed MTU from 1500 to 9000 on certain interfaces (Float interface – Storage interface)
− Hardware Load Balancers – keep an eye on performance
● Hardware
− Moving to a more commodity driven hardware
− All flash storage in compute cluster (high cost, good for block and ephemeral)
Costs
● Storage costs are very high in a converged compute cluster for Object Store
Analytics
● Need to know how the cluster is being used
● Need to know if the tps meets the SLA
● Test going directly against nodes and then layer in network components until you can verify all choke points in the data flow path
● Monitor and test always
19. OBJECT STORE STACK (RACK CONFIG)
19
RedHat 7.1
● 1 TOR and 1 Rack Mgt Node
● 3 1U nodes (Mon, RGW, Util)
● 17 2U Ceph OSD nodes
● 2x or 3x Replication depending on need (3x default)
● Secondary RGW (may coexist with OSD Node)
● 10g Cluster interface
● 10g Public interface
● 1 IPMI interface
● OSD Nodes (high density server nodes)
− 6TB HDD x 12 – Journal partitions on SSD
− No RAID1 OS drives – instead we partitioned off a small amount of SSD1 for OS and swap, with the remainder of SSD1 used for some journals and SSD2 used for the remaining journals (a short journal-placement sketch follows this rack summary)
− Failure domain is a node
Rack diagram: TOR/IPMI switches at the top, three 1U nodes, and converged storage nodes (2U each) filling the rest of the rack.
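As mentioned in the rack summary, journals live on SSD partitions rather than on the data disks. A hedged sketch of how a single OSD of that era might be prepared with ceph-disk, driven from Python; the device paths are assumptions, and ceph-disk has since been superseded by ceph-volume.

```python
import subprocess

# One spinning data device plus a pre-created SSD partition for the journal.
# Device paths are illustrative only.
DATA_DEV = "/dev/sdb"
JOURNAL_PART = "/dev/sdm1"

# Prepare the OSD with its journal on the SSD partition, then activate it.
subprocess.run(["ceph-disk", "prepare", DATA_DEV, JOURNAL_PART], check=True)
subprocess.run(["ceph-disk", "activate", DATA_DEV + "1"], check=True)
```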
20. OBJECT STORE STACK (ARCHITECTURE)
20
Architecture diagram: one Mon/RGW node per rack, storage nodes behind TOR (leaf) switches, with redundant spine switches and load balancers above.
21. OBJECT STORE STACK
21
Standard configuration (Archive Cluster)
● Min of 3 Racks = Cluster
● OS – Redhat 7.1
● Cluster Network: Bonded 10g or higher depending on size of cluster
● Public Network: Bonded 10g for RGW interfaces
● 1 Ceph Mon node per rack, except on clusters with more than 3 racks: we need to keep an odd number of Mons, so some racks may not have Mons. On larger clusters we try to keep racks & Mons in different power zones
● We have developed a healthy “Pain” tolerance. We mainly see drive failures and some node failures.
● Min 1 RGW (dedicated Node) per rack (may want more)
● Hardware load balancers to RGWs with redundancy
● Erasure coded pools (no cache tiers at present – testing). We also use a host profile with 8/3 (k/m)
● Near full and full ratios are .75/.85 respectively (a configuration sketch follows this list)
● Index sharding
● Federated (regions/zones)
● All server nodes, no JBOD expansions
● S3 only at present but we do have a few requests for Swift
● Fully AUTOMATED – Chef cookbooks to configure and manage cluster (some Ansible)
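A hedged sketch of the kind of commands and options behind the bullets above: the k=8/m=3 profile and the .75/.85 ratios come from the slide, while the pool name, PG counts, and shard count are illustrative assumptions.

```python
import subprocess

def ceph(*args):
    """Run a ceph CLI command; assumes an admin keyring is available."""
    subprocess.run(["ceph", *args], check=True)

# Erasure-code profile matching the 8 data / 3 coding chunk layout from the slide,
# with the failure domain set to host per the stated host profile.
ceph("osd", "erasure-code-profile", "set", "archive-8-3",
     "k=8", "m=3", "crush-failure-domain=host")
ceph("osd", "pool", "create", ".rgw.buckets", "4096", "4096",
     "erasure", "archive-8-3")  # pool name and PG count are illustrative

# Equivalent ceph.conf options for the stated ratios and RGW index sharding
# (option names are standard; the shard count here is only an example):
#   [mon]
#   mon osd nearfull ratio = 0.75
#   mon osd full ratio     = 0.85
#   [client.radosgw.gateway]
#   rgw override bucket index max shards = 16
```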
22. AUTOMATION
22
All of what we do only happens because of automation
● Company policy – Chef
● Cloud Infrastructure Group uses Chef and Ansible. We use Ansible for orchestration and maintenance
● Bloomberg Github: https://github.com/bloomberg/bcpc
● Ceph specific options
− Ceph Chef: https://github.com/ceph/ceph-chef
− Bloomberg Object Store: https://github.com/bloomberg/chef-bcs
− Ceph Deploy: https://github.com/ceph/ceph-deploy
− Ceph Ansible: https://github.com/ceph/ceph-ansible
● Our bootstrap server is our Chef server per cluster
23. TESTING
23
Testing is critical. We use different strategies for the different parts of OpenStack and Ceph we test
● OpenStack
− Tempest – We currently only use this for patches we make. We plan to use this more in our DevOps pipeline
− Rally – Can’t do distributed testing but we use it to test bottlenecks in OpenStack itself
● Ceph
− RADOS Bench (a short sketch follows this list)
− COS Bench – Going to try this with CBT
− CBT – Ceph Benchmark Testing
− Bonnie++
− FIO
● Ceph – RGW
− JMeter – Need to test load at scale. It takes a cloud to test a cloud
● A lot of the time you find it's your network, load balancers, etc.
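A minimal sketch of the RADOS Bench step referenced in the list above, driven from Python; the pool name and run length are assumptions, and the pool should be a throwaway pool dedicated to benchmarking.

```python
import subprocess

POOL = "bench-test"  # illustrative pool dedicated to benchmarking
SECONDS = "60"

# Write phase (objects are kept thanks to --no-cleanup), then sequential reads.
subprocess.run(["rados", "bench", "-p", POOL, SECONDS, "write", "--no-cleanup"],
               check=True)
subprocess.run(["rados", "bench", "-p", POOL, SECONDS, "seq"], check=True)

# Remove the benchmark objects afterwards.
subprocess.run(["rados", "-p", POOL, "cleanup"], check=True)
```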
24. CEPH USE CASE DEMAND – GROWING!
24
Diagram: Ceph at the center of growing demand from OpenStack, object, and immutable-storage use cases, with real-time* and big data*? workloads on the horizon (*possible use cases if performance is enhanced).
25. WHAT’S NEXT?
25
Continue to evolve our POD architecture
● OpenStack
− Work on performance improvements and track stats on usage for departments
− Better monitoring
− LBaaS, Neutron
● Containers and PaaS
− We’re currently evaluating PaaS software and container strategies now
● Better DevOps Pipelining
− GO CD and/or Jenkins improved strategies
− Continue to enhance automation and re-provisioning
− Add testing to automation
● Ceph
− New Block Storage Cluster
− Super Cluster design
− Performance improvements – testing Jewel
− RGW Multi-Master (multi-sync) between datacenters
− Enhanced security – encryption at rest (can already do) but with better key management
− NVMe for Journals and maybe for high IOP block devices
− Cache Tier (need validation tests)
27. ADDITIONAL RESOURCES
27
● Chris Jones: cjones303@bloomberg.net
− Github: cloudm2
● Chris Morgan: cmorgan2@bloomberg.net
− Github: mihalis68
Cookbooks:
● BCC: https://github.com/bloomberg/bcpc
− Current repo for Bloomberg’s Converged OpenStack and Ceph cluster
● BCS: https://github.com/bloomberg/chef-bcs
● Ceph-Chef: https://github.com/ceph/ceph-chef
The last two repos make up the Ceph Object Store and full Ceph Chef cookbooks.