Cephalocon APAC 2018
March 22-23, 2018 - Beijing, China
Lars Marowsky-Brée, SUSE Distinguished Engineer and Ceph Advisory Board member
Marc Koderer, SAP OpenStack Evangelist
Brent Compton and Kyle Bader of Red Hat took the stage at Red Hat Storage Day New York on 1/19/16 to share with attendees best practices and lessons learned for architecting solutions with Red Hat Ceph Storage.
Speakers:
Jeff Chu (Director of Enterprise Solutions, ARM)
Kan Yan Rong (Technical Expert in Storage and Application Technology, WDC/SanDisk)
Abstract:
Jeff from ARM will provide a brief update on activities furthering Ceph on ARM, including recent progress from ARM as well as increased community activity. After that, Chris and Yan from Western Digital/SanDisk will present on Ceph block performance on Cavium ARM and SATA SSDs.
Red Hat Storage Day New York - New Reference Architectures (Red_Hat_Storage)
The document provides an overview and summary of Red Hat's reference architecture work including MySQL and Hadoop, software-defined NAS, and digital media repositories. It discusses trends toward disaggregating Hadoop compute and storage and various data flow options. It also summarizes performance testing Red Hat conducted comparing AWS EBS and Ceph for MySQL workloads, and analyzing factors like IOPS/GB ratios, core-to-flash ratios, and pricing. Server categories and vendor examples are defined. Comparisons of throughput and costs at scale between software-defined scale-out storage and traditional enterprise NAS solutions are also presented.
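The sizing ratios mentioned above (IOPS/GB and core-to-flash) can be illustrated with rough arithmetic. A minimal sketch; all cluster figures below are hypothetical examples, not numbers from the Red Hat reference architectures.

```python
# Illustrative sizing arithmetic for the ratios discussed above.
# All inputs are hypothetical example values, not published benchmark numbers.

def iops_per_gb(cluster_iops: float, usable_capacity_gb: float) -> float:
    """Delivered random IOPS per usable gigabyte of the cluster."""
    return cluster_iops / usable_capacity_gb

def core_to_flash_ratio(cpu_cores: int, flash_devices: int) -> float:
    """CPU cores available per flash device (OSD drive)."""
    return cpu_cores / flash_devices

# Example: a hypothetical 3-node cluster.
cluster_iops = 300_000   # aggregate 4K random read IOPS
usable_gb = 60_000       # usable capacity after replication
cores = 3 * 24           # 24 cores per node
nvme_drives = 3 * 4      # 4 NVMe drives per node

print(f"IOPS/GB: {iops_per_gb(cluster_iops, usable_gb):.1f}")
print(f"cores per flash device: {core_to_flash_ratio(cores, nvme_drives):.1f}")
```

Comparing these two ratios across candidate configurations is what lets a design be matched to a workload (IOPS-heavy vs. capacity-heavy) before any hardware is purchased.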
Walk Through a Software Defined Everything PoC (Ceph Community)
This document summarizes a proof of concept for a software defined data center using OpenStack and Midokura MidoNet software defined networking. The POC used 4 controllers, 8 Ceph storage nodes, and 16 compute nodes with Midokura providing logical layer 2-4 networking services. Key lessons learned included planning the underlay network configuration, optimizing Zookeeper connections, and improving OpenStack deployment processes which can be complex. Performance testing showed Ceph throughput was higher for reads than writes and SSD journaling improved IOPS. The streamlined workflow provided by the software defined infrastructure could help reduce costs and management complexity for organizations.
Red Hat Storage Day New York - Performance Intensive Workloads with Samsung NV... (Red_Hat_Storage)
This document discusses using Samsung NVMe SSDs and Red Hat Ceph storage to create a high performance storage tier for OpenStack environments. It presents a reference architecture using a 3-node Ceph cluster with Samsung NVMe SSDs that achieved over 28GB/s for sequential reads. This architecture provides scalable, open source storage optimized for performance-intensive workloads like databases, analytics, and networking. Future work is discussed to develop a similar architecture using GlusterFS storage.
Ceph optimized Storage / Global HW solutions for SDS, David Alvarez (Ceph Community)
This document discusses Supermicro's portfolio of scale-out optimized storage nodes and Ceph-ready hardware solutions. It presents several models of storage nodes that support high density and ultra dense storage and are optimized for the Red Hat Ceph storage platform. The document also covers Supermicro's modular LAN switching I/O modules that provide flexible networking connectivity including 10GbE, 25GbE, and InfiniBand options.
Red Hat Storage Day Seattle: Why Software-Defined Storage Matters (Red_Hat_Storage)
The document discusses the benefits of software-defined storage over traditional storage approaches. It argues that software-defined storage uses standard hardware and open source software, providing flexibility, scalability, and lower costs compared to proprietary appliances or public cloud storage. It also describes Red Hat's portfolio of software-defined storage solutions, including Ceph and Gluster, which leverage open source technologies to power a variety of enterprise workloads.
Red Hat Storage Day Seattle: Stabilizing Petabyte Ceph Cluster in OpenStack C... (Red_Hat_Storage)
Cisco uses Ceph for storage in its OpenStack cloud platform. The initial Ceph cluster design used HDDs which caused stability issues as the cluster grew to petabytes in size. Improvements included throttling client IO, upgrading Ceph versions, moving MON metadata to SSDs, and retrofitting journals to NVMe SSDs. These steps stabilized performance and reduced recovery times. Lessons included having clear stability goals and automating testing to prevent technical debt from shortcuts.
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ... (Red_Hat_Storage)
Red Hat Gluster Storage provides a software-defined storage solution that is more cost efficient and flexible than traditional storage appliances. It leverages standard x86 hardware and has open source architecture with no vendor lock-in. A comparison shows Gluster Storage outperforms EMC Isilon on factors like cost, scalability, data protection methods, access protocols, and management capabilities. Gluster Storage is positioned to go beyond traditional storage by supporting containers, disaster recovery in cloud environments, and its roadmap includes additional advanced features.
ThunderX ARMV8 Servers: Disruption and Innovation in the Server Market (Red_Hat_Storage)
Cavium joined Red Hat Storage Day New York on 1/19/16 to give the history of the ARM server ecosystem, explain the innovation of ThunderX, and describe scale out's influence on target workloads.
Red Hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware (Red_Hat_Storage)
This document discusses how data growth driven by mobile, social media, IoT, and big data/cloud is requiring a fundamental shift in storage cost structures from scale-up to scale-out architectures. It provides an overview of key storage technologies and workloads driving public cloud storage, and how Ceph can help deliver on the promise of the cloud by providing next generation storage architectures with flash to enable new capabilities in small footprints. It also illustrates the wide performance range Ceph can provide for different workloads and hardware configurations.
Intel - Optimizing Ceph Performance by Leveraging Intel® Optane™ and 3D NAND... (inwin stack)
Kenny Chang (張任伯) (Storage Solution Architect, Intel)
As solid-state drives (SSDs) become more affordable, more and more cloud providers are trying to provide high-performance, highly reliable storage for their customers with SSDs. Ceph is becoming one of the most popular open-source scale-out storage solutions in the worldwide market, and more and more customers have a strong demand for using SSDs in Ceph to build high-performance storage solutions for their OpenStack clouds.
The disruptive Intel® Optane SSDs, based on 3D XPoint technology, fill the performance gap between DRAM and NAND-based SSDs, while Intel® 3D NAND TLC is reducing the cost gap between SSDs and traditional spinning hard drives, making all-flash storage possible. In this session, we will:
1) Discuss an OpenStack Ceph storage reference design for the first Intel Optane (3D XPoint) and P4500 TLC NAND based all-flash Ceph cluster, which delivers multi-million IOPS with extremely low latency while increasing storage density at competitive dollar-per-gigabyte costs.
2) Share Ceph BlueStore tunings and optimizations, latency analysis, a TCO model, and IOPS/TB and IOPS/$ figures based on the reference architecture, demonstrating a high-performance, cost-effective solution.
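The efficiency metrics named above (IOPS/TB and IOPS/$) are simple ratios over the whole cluster. A minimal sketch of how they are computed; the cluster specs and price below are made-up placeholders, not figures from the Intel reference architecture.

```python
# Sketch of the IOPS/TB and IOPS/$ metrics used to compare all-flash designs.
# Hardware specs and the price below are hypothetical placeholders.

def iops_per_tb(cluster_iops: float, usable_tb: float) -> float:
    """Delivered IOPS per terabyte of usable capacity."""
    return cluster_iops / usable_tb

def iops_per_dollar(cluster_iops: float, cost_usd: float) -> float:
    """Delivered IOPS per dollar of acquisition cost."""
    return cluster_iops / cost_usd

cluster = {"iops": 2_800_000, "usable_tb": 100.0, "cost_usd": 350_000.0}
print("IOPS/TB:", iops_per_tb(cluster["iops"], cluster["usable_tb"]))
print("IOPS/$: ", iops_per_dollar(cluster["iops"], cluster["cost_usd"]))
```

Holding these two ratios side by side is what makes "high performance" and "cost effective" comparable across otherwise very different hardware configurations.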
Red Hat Ceph Storage Acceleration Utilizing Flash Technology (Red_Hat_Storage)
Red Hat Ceph Storage can utilize flash technology to accelerate applications in three ways: 1) use all-flash storage for the highest performance, 2) use a hybrid configuration with performance-critical data on a flash tier and colder data on an HDD tier, or 3) utilize host caching of critical data on flash. Benchmark results showed that using NVMe SSDs in Ceph provided much higher performance than SATA SSDs, with speed increases of up to 8x for some workloads. However, testing also showed that Ceph may not be well suited for OLTP MySQL workloads due to small random reads/writes, as local SSD storage outperformed the Ceph cluster. Proper Linux tuning is also needed to maximize SSD performance within the cluster.
Ceph Day San Jose - All-Flash Ceph on NUMA-Balanced Server (Ceph Community)
The document discusses optimizing Ceph storage performance on QCT servers using NUMA-balanced hardware and tuning. It provides details on QCT hardware configurations for throughput, capacity and IOPS-optimized Ceph storage. It also describes testing done in QCT labs using a 5-node all-NVMe Ceph cluster that showed significant performance gains from software tuning and using multiple OSD partitions per SSD.
This document discusses disk health prediction for Ceph storage clusters. It describes current pain points like performance degradation during rebalancing and lack of predictive analytics. The DiskProphet solution uses machine learning to predict future disk failures proactively, reducing performance impacts by 90%. It integrates with Ceph through plugins to provide disk health monitoring and predictions to optimize the cluster.
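The failure-prediction idea above can be sketched with a toy scoring model over SMART counters. This is purely illustrative: DiskProphet's actual models are proprietary machine-learning pipelines, and the weights and attribute names below are hand-picked assumptions.

```python
# Toy sketch of score-based disk failure prediction from SMART attributes.
# Illustrative only; not DiskProphet's actual model. Weights are invented.
import math

# Hypothetical weights: higher reallocated/pending sector counts and
# command timeouts push the failure score up.
WEIGHTS = {
    "reallocated_sectors": 0.05,
    "pending_sectors": 0.08,
    "command_timeouts": 0.02,
}
BIAS = -4.0  # healthy disks start with a low score

def failure_probability(smart: dict) -> float:
    """Logistic score in [0, 1] computed from raw SMART counter values."""
    z = BIAS + sum(WEIGHTS[k] * smart.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

healthy = {"reallocated_sectors": 0, "pending_sectors": 0, "command_timeouts": 1}
failing = {"reallocated_sectors": 120, "pending_sectors": 30, "command_timeouts": 50}

print(f"healthy disk: {failure_probability(healthy):.3f}")
print(f"failing disk: {failure_probability(failing):.3f}")
```

In a real deployment the weights would be learned from fleet failure history rather than set by hand, and the score would feed the cluster manager so that data can be migrated off a disk before it fails rather than rebalanced after.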
Red Hat Storage Day Boston - Supermicro Super Storage (Red_Hat_Storage)
The document discusses Supermicro's evolution from server and storage innovation to total solution innovation. It provides examples of their all-flash storage servers and Red Hat Ceph reference architectures using Supermicro hardware. The document also discusses optimizing hardware configurations for different workloads and summarizes Supermicro's portfolio of Ceph-ready nodes and turnkey storage solutions.
CEPH DAY BERLIN - DISK HEALTH PREDICTION AND RESOURCE ALLOCATION FOR CEPH BY ... (Ceph Community)
Ceph is intelligent. However, users usually make resource requests with no guarantees, because they have no visibility into underlying disk health, no idea of resource availability, and no prediction of future demands. Machine learning can change that. We'll present how machine-learning technologies help predict Ceph OSD health, the predicted impact on clusters, and possible resolutions, taking Kubernetes working with Ceph as an example.
This document outlines an agenda for a conference on MySQL and Ceph storage solutions. The agenda includes sessions on MySQL performance on Ceph versus AWS, a head-to-head performance lab comparing the two platforms, and architectural considerations for optimizing MySQL on Ceph. Specific topics covered are MySQL and Ceph capabilities like live migration and snapshots, ensuring a consistent developer experience between private Ceph and public cloud, results from sysbench tests showing Ceph can match or exceed AWS performance on price per IOPS, and how Ceph node configuration like CPU cores and flash storage affect MySQL workload performance.
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha... (Red_Hat_Storage)
This document discusses Supermicro's evolution from server and storage innovation to total solutions innovation. It provides examples of their all-flash storage servers and Red Hat Ceph testing results. Finally, it outlines their approach to providing optimized, turnkey storage solutions based on workload requirements and best practices learned from customer deployments and testing.
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc... (Red_Hat_Storage)
At Red Hat Storage Day New York on 1/19/16, Red Hat partner Seagate presented on how to implement dense storage using HDDs with SSDs and PCIe flash accelerator cards.
Red Hat Storage, based on the upstream GlusterFS project, was developed as a distributed file system in the oil-and-gas, high-performance compute arena. With Red Hat Storage, you can easily set up flexible distributed storage using commodity x86 hardware.
Simple, inexpensive internal or JBOD storage can be linked across multiple physical servers and presented as a single storage namespace. This storage can be used for log files, web content, virtual machine images, home directories, and other storage use cases.
In this session, we’ll demonstrate how to:
Install Red Hat Gluster Storage
Configure disks
Link the storage nodes
Define storage bricks
Present storage to clients
We'll talk about tips and tricks, best practices, backup and recovery, and other storage-related topics.
Ceph Day San Jose - Object Storage for Big Data (Ceph Community)
This document discusses using object storage for big data. It outlines key stakeholders in big data projects and what they want from object storage solutions. It then discusses using the Ceph object store to provide an elastic data lake that can disaggregate compute resources from storage. This allows analytics to be performed directly on the object store without expensive ETL processes. It also describes testing various analytics use cases and workloads with the Ceph object store.
Webinar: All-Flash For Databases: 5 Reasons Why Current Systems Are Off Target (Storage Switzerland)
In this webinar, join Storage Switzerland’s founder and lead analyst George Crump and Vexata’s VP of Products and Solutions Rick Walsworth as they explain how all-flash systems have fallen short and how IT can realize the full potential of flash-based storage without the compromises. Learn five areas where all-flash arrays miss the database performance mark.
NetApp provides an enterprise-grade all-flash storage solution called AFF (All Flash FAS) that delivers flash performance and data services. SolidFire is another all-flash storage platform in NetApp's portfolio that is designed for large-scale infrastructure and can guarantee performance to thousands of applications through its quality of service features. The document discusses the benefits of flash storage and how NetApp's solutions help customers transform their data centers and lower costs through flash innovation like inline data compaction in ONTAP 9.
Lessons learned processing 70 billion data points a day using the hybrid cloud (DataWorks Summit)
NetApp receives 70 billion data points of telemetry information each day from its customers' storage systems. This telemetry data contains configuration information, performance counters, and logs. All of this data is processed using multiple Hadoop clusters, and it feeds a machine-learning pipeline and a data-serving infrastructure that produces insights for customers via an application called Active IQ. We describe the evolution of our Hadoop infrastructure from a traditional on-premises architecture to the hybrid cloud, and the lessons learned.
We’ll discuss the insights we are able to produce for our customers, and the techniques used. Finally, we describe the data management challenges with our multi-petabyte Hadoop data lake. We solved these problems by building a unified data lake on-premises and using the NetApp Data Fabric to seamlessly connect to public clouds for data science and machine learning compute resources.
Architecting a truly hybrid cloud implementation allowed NetApp to free up its data scientists to use any software on any cloud and kept the customer log data safe on NetApp Private Storage in Equinix. It enabled faster innovation and code releases, and provided the flexibility to use any public cloud while keeping the data on NetApp storage in Equinix.
Speaker
Pranoop Erasani, NetApp, Senior Technical Director, ONTAP
Shankar Pasupathy, NetApp, Technical Director, ACE Engineering
EMEA TechTalk – The NetApp Flash Optimized Portfolio (NetApp)
This document summarizes NetApp's flash optimized storage portfolio. It discusses NetApp's leadership in flash technology and its hybrid arrays that leverage flash media to provide good performance and capacity. It also covers NetApp's all-flash arrays, including the EF-Series optimized for performance and density and the All-Flash FAS that provides robust data management. The document concludes by looking ahead at NetApp's FlashRay storage system designed from the ground up to maximize flash benefits.
This document discusses NetApp's integration with OpenStack. It begins with an introduction to NetApp, describing it as a global Fortune 500 company and leader in data management solutions. It then covers basic OpenStack and NetApp terminology. The remainder summarizes NetApp's storage portfolio for OpenStack and how its different solutions provide capabilities like snapshots, cloning, and quality of service controls when used with OpenStack interfaces like Cinder and Manila. It concludes with a demonstration of provisioning OpenStack volumes using NetApp storage.
The document discusses in-memory computing and emerging technologies. It describes how in-memory applications are driving new storage class memory like 3D XPoint that has lower latency than NAND but higher capacity than DRAM. The document also discusses how in-memory solutions are using tiering of memory and storage like DRAM, 3D XPoint, NVM, and NAND to handle larger datasets. Emerging high speed fabrics and disaggregated storage are enabling more efficient scaling of memory and storage tiers independent of compute.
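The tiering idea described above (DRAM, 3D XPoint, NAND, each trading latency for capacity) can be sketched as a simple placement policy: hottest data on the fastest tier until it fills. A minimal sketch under assumed tier sizes and commonly cited order-of-magnitude latencies; the object names and capacities are invented.

```python
# Illustrative sketch of memory/storage tiering: place the hottest objects
# on the fastest tier that still has room. Latencies are rough orders of
# magnitude; capacities and object names are hypothetical.
TIERS = [  # (name, approx latency, capacity in object slots)
    ("DRAM", "~100 ns", 2),
    ("3D XPoint", "~10 us", 8),
    ("NAND SSD", "~100 us", 64),
]

def place(objects: dict) -> dict:
    """Assign objects (name -> access heat) to tiers, hottest first."""
    placement = {}
    budgets = [cap for _, _, cap in TIERS]
    for name, _heat in sorted(objects.items(), key=lambda kv: -kv[1]):
        for i, (tier, _, _) in enumerate(TIERS):
            if budgets[i] > 0:          # fastest tier with free slots wins
                budgets[i] -= 1
                placement[name] = tier
                break
    return placement

objs = {"index": 0.9, "hot_table": 0.8, "warm_log": 0.4, "cold_archive": 0.1}
print(place(objs))
```

Real tiering engines also migrate data as heat changes and weigh write endurance, but the core decision, latency versus capacity per tier, is the same.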
Flash is a game-changing technology, or at least that is what the market would like you to believe: after all, it enables predictable, consistent performance and I/O efficiency. But…
- Microseconds make the difference, yet the rules of the game do not change.
- Not all flash was created equal.
- Disk is not dead, even though some vendors would like you to believe it is.
View this presentation for a sober look at flash and to find out what its real impact is on your datacenter infrastructure.
This document provides an overview of HPE solutions for challenges in AI and big data. It discusses HPE storage solutions including aggregated storage-in-compute using NVMe devices, tiered storage using flash, disk, and object storage, and zero watt storage to reduce power usage. It also covers the Scality object storage platform and WekaIO parallel file system for all-flash environments. The document aims to illustrate how HPE technologies can provide efficient, scalable storage for challenging AI and big data workloads.
IBM Power Systems - enabling cloud solutions (David Spurway)
This document discusses IBM Power Systems and their ability to enable cloud solutions. It provides an overview of Power8 architecture and performance advantages over Intel systems. It also discusses how Power Systems can be used to build hybrid cloud infrastructures with on-premises and off-premises components using technologies like PowerVC and Bluemix. Case studies on Oracle and SAP workloads show Power Systems provide better performance and lower TCO compared to x86 servers.
The Consequences of Infinite Storage Bandwidth: Allen Samuels, SanDisk (OpenStack)
Audience: Beginner to Intermediate
About: Overall increases in CPU and DRAM processing power are falling behind the massive acceleration in available storage and network bandwidth. Storage management services are emerging as a serious bottleneck. What does this imply for the datacenter of the future? How will it affect the physical network and storage topologies? And how will storage software need to change to meet these new realities?
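The imbalance the abstract describes can be made concrete with back-of-envelope arithmetic. The device and CPU figures below are illustrative assumptions, not numbers from the talk:

```python
# Illustrative figures (assumptions, not from the talk):
cpu_cores = 32
cycles_per_second_per_core = 3.0e9      # 3 GHz cores
nvme_devices = 10
bytes_per_second_per_device = 3.0e9     # ~3 GB/s per NVMe device

total_cycles = cpu_cores * cycles_per_second_per_core
total_bytes = nvme_devices * bytes_per_second_per_device

# CPU cycles available to storage software per byte moved,
# if the devices are kept saturated:
cycles_per_byte = total_cycles / total_bytes
print(round(cycles_per_byte, 1))        # → 3.2
```

A budget of a few cycles per byte leaves little headroom for checksumming, replication, and compression, which is exactly the bottleneck the talk argues will reshape storage software and datacenter topologies.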
Speaker Bio: Allen joined SanDisk in 2013 as an Engineering Fellow, he is responsible for directing software development for SanDisk’s system level products. He has previously served as Chief Architect at Weitek Corp. and Citrix, and founded several companies including AMKAR Consulting, Orbital Data Corporation, and Cirtas Systems. Allen has a Bachelor of Science in Electrical Engineering from Rice University.
OpenStack Australia Day - Sydney 2016
https://events.aptira.com/openstack-australia-day-sydney-2016
The flash market started out monolithically. Flash was a single media type (high performance, high endurance SLC flash). Flash systems also had a single purpose of accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high performance flash or highly dense, medium performance flash systems. At the same time, high capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
Comparing the TCO of HP NonStop with Oracle RAC (Thomas Burg)
HP NonStop is often (wrongly!) perceived as "expensive", specifically compared with the combination of "vanilla X86 hardware" and the Oracle RAC DB offering.
This presentation talks about an in-depth analysis HP did to compare the two offerings fair and square. You might be surprised at the results ...
HP Converged Systems and Hortonworks - Webinar Slides (Hortonworks)
Our experts will walk you through some key design considerations when deploying a Hadoop cluster in production. We'll also share practical best practices around HP and Hortonworks Data Platform to get you started on building your modern data architecture.
Learn how to:
- Leverage best practices for deployment
- Choose a deployment model
- Design your Hadoop cluster
- Build a Modern Data Architecture and vision for the Data Lake
Many companies have discovered that there is “gold” in their server log files and machine data. Closely monitoring this data can improve security, help prevent costly outages and reduce the time it takes to recover from a problem. In this presentation, GTRI’s Micah Montgomery explains how operational intelligence can be gained from machine data, and how Splunk Enterprise can turn this data into actionable insights. Also presenting was NetApp’s Steve Fritzinger, who discussed how to manage the challenges of capturing and storing a flood of data without breaking the bank.
Presented at "Denver Big Data Analytics Day" on May 18, 2016 at GTRI.
Is your flash system up to the challenge? Attend this webinar and learn how you can optimize your SQL Server performance. Hear how the pros pinpoint performance bottlenecks and leverage the latest advancements in storage technology to decrease access latency and IO wait times. By the end of the webinar you’ll have the tools and information you need to recommend the best approach for your SQL Server environment.
This document discusses the benefits of using Linux on IBM Power systems servers. It claims that Power systems can reduce costs through higher performance, consolidation, and open source software like KVM and OpenStack. It seeks to dispel myths that Power systems are expensive, that virtualization is different, and that the architecture is closed. It provides examples of using Power systems with Linux to gain performance advantages for applications like SAP and databases through higher core counts, memory and bandwidth compared to x86 servers.
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
What is an RPA CoE? Session 2 – CoE Roles (DianaGray10)
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect, Anika Systems
GlobalLogic Java Community Webinar #18 “How to Improve Web Application Perfor...” (GlobalLogic Ukraine)
During the talk we will answer why application performance needs to be improved and what the most effective ways to do it are. We will also talk about what a cache is, what kinds of caches exist, and, most importantly, how to find a performance bottleneck.
Video and event details: https://bit.ly/45tILxj
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors (DianaGray10)
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
- Creating a compelling user experience for any software, without the limitations of APIs
- Accelerating the app creation process, saving time and effort
- Enjoying high-performance CRUD (create, read, update, delete) operations, for seamless data management
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
ScyllaDB is making a major architecture shift. We’re moving from vNode replication to tablets – fragments of tables that are distributed independently, enabling dynamic data distribution and extreme elasticity. In this keynote, ScyllaDB co-founder and CTO Avi Kivity explains the reason for this shift, provides a look at the implementation and roadmap, and shares how this shift benefits ScyllaDB users.
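The tablet idea can be sketched in miniature (illustrative only, not ScyllaDB's implementation): tables are broken into key-range fragments that split independently as they grow, so data can be rebalanced at tablet granularity rather than per vNode:

```python
class Tablet:
    """A key-range fragment of a table (illustrative sketch)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.rows = {}

def split_if_needed(tablets, max_rows=2):
    """Split any tablet exceeding max_rows at its key-range midpoint.
    A real system would pick the split point from the data and recurse."""
    out = []
    for t in tablets:
        if len(t.rows) > max_rows:
            mid = (t.lo + t.hi) // 2
            left, right = Tablet(t.lo, mid), Tablet(mid, t.hi)
            for k, v in t.rows.items():
                (left if k < mid else right).rows[k] = v
            out += [left, right]
        else:
            out.append(t)
    return out

t = Tablet(0, 100)
for k in (5, 60, 70, 90):
    t.rows[k] = "row"
tablets = split_if_needed([t])
print([(x.lo, x.hi, len(x.rows)) for x in tablets])
```

Because each tablet is an independent unit, a cluster can move just the overloaded fragments to new nodes, which is the elasticity benefit the keynote describes.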
Northern Engraving | Nameplate Manufacturing Process - 2024 (Northern Engraving)
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Dandelion Hashtable: beyond billion requests per second on a commodity server (Antonios Katsarakis)
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite optimization efforts that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
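The bounded cache-line-chaining layout can be sketched as follows. This is a simplified, single-threaded illustration only: each bucket holds a fixed number of slots, modeling one cache line of entries, and overflows into a chain of equally sized buckets. DLHT itself adds lock-free operations, prefetching, and non-blocking resizes on top of this layout:

```python
SLOTS_PER_BUCKET = 8   # models one cache line of entries

class Bucket:
    def __init__(self):
        self.entries = []          # up to SLOTS_PER_BUCKET (key, value) pairs
        self.next = None           # overflow bucket (bounded chaining)

class ChainedTable:
    """Simplified closed-addressing table with cache-line-sized buckets.
    Single-threaded sketch of the layout only, not DLHT's algorithm."""

    def __init__(self, n_buckets=16):
        self.buckets = [Bucket() for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        b = self._bucket(key)
        while True:
            for i, (k, _) in enumerate(b.entries):
                if k == key:               # update in place
                    b.entries[i] = (key, value)
                    return
            if len(b.entries) < SLOTS_PER_BUCKET:
                b.entries.append((key, value))
                return
            if b.next is None:             # bucket full: chain a new one
                b.next = Bucket()
            b = b.next

    def get(self, key):
        b = self._bucket(key)
        while b is not None:
            for k, v in b.entries:
                if k == key:
                    return v
            b = b.next
        return None

    def delete(self, key):
        """Deletes free the slot instantly, unlike tombstoning in
        open-addressing designs."""
        b = self._bucket(key)
        while b is not None:
            for i, (k, _) in enumerate(b.entries):
                if k == key:
                    b.entries.pop(i)
                    return True
            b = b.next
        return False

t = ChainedTable()
t.put("a", 1); t.put("b", 2)
t.delete("a")
print(t.get("a"), t.get("b"))   # → None 2
```

The layout keeps a lookup to one chain of cache-line-sized reads, which is what allows most requests to complete with a single memory access in the real design.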
inQuba Webinar: Mastering Customer Journey Management with Dr Graham Hill (LizaNolte)
HERE IS YOUR WEBINAR CONTENT! 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find the webinar recording both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
"What does it really mean for your system to be available, or how to define w..." (Fwdays)
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
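One concrete piece of the SLO exercise mentioned above is turning an availability target into an error budget. The 99.9% target below is just an example, not a figure from the talk:

```python
def error_budget_minutes(slo_percent, window_days=30):
    """Minutes of allowed downtime in the window for a given
    availability SLO (illustrative helper, not from the talk)."""
    window_minutes = window_days * 24 * 60
    return window_minutes * (1 - slo_percent / 100.0)

print(round(error_budget_minutes(99.9), 1))    # → 43.2 minutes per 30 days
print(round(error_budget_minutes(99.99), 2))   # → 4.32 minutes per 30 days
```

Tying the budget back to the business is the hard part the talk emphasizes: whether 43 minutes a month is acceptable depends on what the system is for, not on the arithmetic.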
MySQL InnoDB Storage Engine: Deep Dive (Mydbops)
This presentation, titled "MySQL - InnoDB" and delivered by Mayank Prasad at the Mydbops Open Source Database Meetup 16 on June 8th, 2024, covers dynamic configuration of REDO logs and instant ADD/DROP columns in InnoDB.
This presentation dives deep into the world of InnoDB, exploring two ground-breaking features introduced in MySQL 8.0:
• Dynamic Configuration of REDO Logs: Enhance your database's performance and flexibility with on-the-fly adjustments to REDO log capacity. Unleash the power of the snake metaphor to visualize how InnoDB manages REDO log files.
• Instant ADD/DROP Columns: Say goodbye to costly table rebuilds! This presentation unveils how InnoDB now enables seamless addition and removal of columns without compromising data integrity or incurring downtime.
Key Learnings:
• Grasp the concept of REDO logs and their significance in InnoDB's transaction management.
• Discover the advantages of dynamic REDO log configuration and how to leverage it for optimal performance.
• Understand the inner workings of instant ADD/DROP columns and their impact on database operations.
• Gain valuable insights into the row versioning mechanism that empowers instant column modifications.
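The row-versioning mechanism behind instant ADD/DROP can be illustrated with a small sketch. This is illustrative only, not InnoDB's on-disk format: existing rows keep the schema version they were written under, and columns added later are materialized at read time from defaults:

```python
class VersionedTable:
    """Sketch of instant ADD COLUMN: old rows are never rewritten; reads
    fill in columns added after the row was written using defaults."""

    def __init__(self, columns):
        self.schemas = [list(columns)]   # one column list per schema version
        self.defaults = {}
        self.rows = []                   # (schema_version, values) pairs

    def insert(self, values):
        self.rows.append((len(self.schemas) - 1, list(values)))

    def add_column(self, name, default):
        """Instant: only metadata changes; no stored row is touched."""
        self.schemas.append(self.schemas[-1] + [name])
        self.defaults[name] = default

    def select_all(self):
        current = self.schemas[-1]
        out = []
        for version, values in self.rows:
            known = dict(zip(self.schemas[version], values))
            out.append({c: known.get(c, self.defaults.get(c)) for c in current})
        return out

t = VersionedTable(["id", "name"])
t.insert([1, "ada"])
t.add_column("email", default=None)   # instant: the old row is not rewritten
t.insert([2, "bob", "bob@example.com"])
print(t.select_all())
```

The key property is visible in the sketch: `add_column` is O(1) metadata work regardless of table size, which is why the real feature avoids costly table rebuilds.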
"Choosing proper type of scaling", Olena Syrota (Fwdays)
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
Lee Barnes - Path to Becoming an Effective Test Automation Engineer.pdf (leebarnesutopia)
So… you want to become a Test Automation Engineer (or hire and develop one)? While there’s quite a bit of information available about important technical and tool skills to master, there’s not enough discussion around the path to becoming an effective Test Automation Engineer that knows how to add VALUE. In my experience this has led to a proliferation of engineers who are proficient with tools and building frameworks but have skill and knowledge gaps, especially in software testing, that reduce the value they deliver with test automation.
In this talk, Lee will share his lessons learned from over 30 years of working with, and mentoring, hundreds of Test Automation Engineers. Whether you’re looking to get started in test automation or just want to improve your trade, this talk will give you a solid foundation and roadmap for ensuring your test automation efforts continuously add value. This talk is equally valuable for both aspiring Test Automation Engineers and those managing them! All attendees will take away a set of key foundational knowledge and a high-level learning path for leveling up test automation skills and ensuring they add value to their organizations.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
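The mutation-testing idea can be sketched in miniature (illustrative; these are not the paper's operators or tooling): a mutation operator perturbs a chatbot's intent definitions, and a test scenario is strong only if it "kills" the mutant by observing a changed reply:

```python
import copy

# A toy chatbot design: intents map trigger phrases to replies.
design = {
    "book_flight": {"triggers": ["book a flight"], "reply": "Where to?"},
    "support":     {"triggers": ["help"],          "reply": "How can I help?"},
}

def respond(design, utterance):
    for intent in design.values():
        if any(t in utterance for t in intent["triggers"]):
            return intent["reply"]
    return "Sorry, I did not understand."

def delete_trigger_mutant(design, intent_name):
    """Toy mutation operator: drop an intent's trigger phrases,
    emulating a designer forgetting a training phrase."""
    mutant = copy.deepcopy(design)
    mutant[intent_name]["triggers"] = []
    return mutant

# A test scenario: a sequence of (user utterance, expected reply) pairs.
scenario = [("book a flight", "Where to?")]

def kills(mutant, scenario):
    return any(respond(mutant, u) != expected for u, expected in scenario)

mutant = delete_trigger_mutant(design, "book_flight")
print(kills(mutant, scenario))   # → True: the scenario detects this fault
```

The mutation score, the fraction of mutants killed, then quantifies scenario strength, which is the gap the paper sets out to fill.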
From Natural Language to Structured Solr Queries using LLMsSease
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or “cognitive” gap) remains between the data user needs and the data producer constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. This natural language, conversational engine could facilitate access and usage of the data leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
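The final step of the pipeline described above, turning the LLM's structured interpretation into a Solr query, can be sketched as follows. The field names and the intermediate JSON shape are assumptions for illustration, and the LLM call itself is omitted:

```python
def to_solr_query(parsed):
    """Render a structured interpretation (as an LLM might emit) into
    Solr standard query syntax. Illustrative sketch: the field names
    and input shape are assumed, not from the talk."""
    clauses = []
    for field, value in parsed.get("filters", {}).items():
        clauses.append(f'{field}:"{value}"')
    for field, (lo, hi) in parsed.get("ranges", {}).items():
        clauses.append(f"{field}:[{lo} TO {hi}]")
    return " AND ".join(clauses) if clauses else "*:*"

# e.g. the LLM interprets "reports about storage from 2016 to 2018" as:
parsed = {
    "filters": {"topic": "storage", "doc_type": "report"},
    "ranges": {"year": (2016, 2018)},
}
print(to_solr_query(parsed))
```

Keeping the LLM's output structured and rendering the query deterministically, rather than letting the model emit raw Solr syntax, makes the translation auditable against the index's metadata.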
"Scaling RAG Applications to serve millions of users", Kevin Goedecke (Fwdays)
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAGs and Vector databases.
4. Common Limitations of Traditional Enterprise Storage
- Unable to scale and manage data growth
- Expensive
- Won't extend to the software-defined data center
5. Data Continues to Grow Very Fast
- Medical data
- Emails
- Videos
- Mobile data
- IoT data
- Transactional data
6. Data Protection Problem is Getting Compounded
- Inability to keep enough data online
- Not being able to recover lost data fast enough
- Migrations to larger disk or dedupe appliances every year
Increasing volumes of data compound the data protection problem.

Customers face numerous backup challenges. Top 5 data management challenges*:
- Cost of storage: 45%
- Performance / availability: 46%
- Challenges with backup, disaster recovery and archiving: 46%
- Increasing volumes of data: 54%
- Security and data governance: 56%

*SUSE Software Defined Storage Study 2016
9. SUSE's History with Ceph
- August 2012: SUSE Cloud 1 (Argonaut)
- February 2015: SUSE Enterprise Storage 1 (Firefly)
- October 2015: SUSE Enterprise Storage 2 (Hammer)
- June 2016: SUSE Enterprise Storage 3 (Jewel)
- October 2016: openATTIC team joins SUSE!
- November 2016: SUSE Enterprise Storage 4 (Jewel)
- October 2017: SUSE Enterprise Storage 5 (Luminous)
- Later 2018: SUSE Enterprise Storage 6 (Mimic)
10. Vendor Value Add
● Curate open source solutions: projects and features
● Track, test, and manage software dependencies
● Incorporate and provide patches and backports
● Advise customers on recommended practices and hardware
● Represent customers and partners in the community
● Bridge the worlds of community and IHVs/ISVs
● Provide high-quality support
11. Major SUSE Contributions
● Strong contributor to Ceph community & Ceph Advisory Board
● True open source: "Upstream first"
● iSCSI with multipathing/VMware support
● First supported Ceph distribution for ARM64
● First to support CephFS for production deployments
● Salt-based orchestration for upgrade and FileStore/BlueStore migration
● Lead on openATTIC, Prometheus & Grafana, now merged into core Ceph management functionality
16. Use Case Focused Solutions
Example SUSE Enterprise Storage partners address both Mode 1 and Mode 2 customers with use-case-focused solutions:
- Backup-to-disk solution
- Compliant archives
- SAP HANA storage solution appliance
- HPC archives
- Certified reference architectures (RAs)
- Cloud configuration (SOC + SES)
- IoT configuration (CSPs + SES)
30. SUSE Enterprise Storage 6 – based on Mimic
• Incorporates Ceph Mimic
• Based on SUSE Linux Enterprise 15
• Improved interoperability
• Internationalization and localization
• Improved scale-out user experience
• Eventing and alerting
• Metric reporting and telemetry
33. openATTIC <3 Ceph's Dashboard
- Provide a better user experience
- Make complex tasks easier
- Assist users along the way of sizing, deploying, and managing a Ceph cluster
35. Ceph? Not if. When.
Open source cloud operating systems and software-defined storage platforms are based on the Linux operating system. SUSE is a Linux OS pioneer and successful software vendor with thousands of installations. Customers should expect to receive nothing less than expert support for their software-based storage.
Learn more at suse.com/storage/