This document summarizes SUSE Enterprise Cloud Storage and Docker running on Cavium ThunderX servers. Key points include:
- SUSE Enterprise Storage 3.0 and Docker 1.9.1 were tested on ThunderX servers with 384 cores, 1TB RAM, and 4TB SSD storage. Over 300 containers were run.
- SUSE has collaborated with Cavium since 2014 to bring ARM64-based software defined storage to cloud and enterprise using SUSE Enterprise Storage.
- Benefits of running Docker on ThunderX include lower overhead, latency, and startup/shutdown times compared to virtualization.
- Various storage pricing points are calculated based on ThunderX configurations, showing how costs can be reduced through…
This document provides an overview of the openSUSE Cloud Storage Workshop presented by AvengerMoJo in November 2016. It covers introductory topics on traditional and cloud storage and the key components of Ceph, including MON, OSD, MDS, and the CRUSH map. It also discusses features like thin provisioning, cache tiering, erasure coding, and self-management/repair. Development topics covered include the Ceph source code, the use of Salt for configuration and deployment, and SUSE's software lifecycle process.
The document discusses performance analysis of Ceph storage clusters. It begins by providing context on SUSE Enterprise Storage 5 and why performance analysis is important. It then describes how to analyze performance using tools like Ceph commands, FIO, LTTNG, and Iperf. Example results are shown from testing network performance, disk performance, and cluster-level benchmarks on an HPE Apollo storage cluster. Integration with Salt is also discussed for automating performance testing across a Ceph cluster.
SUSE Enterprise Storage 3 provides iSCSI access to connect to Ceph storage remotely over TCP/IP, allowing clients to access Ceph storage using the iSCSI protocol. The iSCSI target driver in SES3 provides access to RADOS block devices. This allows any iSCSI initiator to connect to SES3 over the network. SES3 also includes optimizations for iSCSI gateways, like offloading operations to object storage devices to reduce locking on gateway nodes.
This document provides an introduction to cloud storage, including trends driving increased data storage needs, how cloud storage is tiered and used, how software-defined storage works, advantages of using cloud storage, example hardware configurations and costs for setting up a small private cloud storage cluster using Ceph, and basic management of the Ceph cluster, pools, and RBD block storage. It demonstrates configuring a 3-node Ceph cluster on inexpensive hardware that can provide over 30TB of storage, costing about the same or half of 1 year of a commercial cloud storage service.
This presentation provides an overview of the Dell PowerEdge R730xd server performance results with Red Hat Ceph Storage. It covers the advantages of using Red Hat Ceph Storage on Dell servers with their proven hardware components that provide high scalability, enhanced ROI cost benefits, and support of unstructured data.
iSCSI provides a standard way to access Ceph block storage remotely over TCP/IP. SUSE Enterprise Storage 3 includes an iSCSI target driver that allows any iSCSI initiator to connect to Ceph storage. This provides multiple platforms with standardized access to Ceph without needing to join the cluster. Optimizations are made in iSCSI to efficiently handle SCSI operations by offloading work to OSDs.
openATTIC provides a web-based interface for managing Ceph and other storage. It currently allows pool, OSD, and RBD management along with cluster monitoring. Future plans include extended pool and OSD management, CephFS and RGW integration, and deployment/configuration of Ceph nodes via Salt.
i. SUSE Enterprise Storage 3 provides iSCSI access to connect remotely to Ceph storage over TCP/IP, allowing any iSCSI initiator to access the storage over a network. The iSCSI target driver sits on top of RBD (RADOS block device) to enable this access.
ii. Configuring the lrbd package simplifies setting up an iSCSI gateway to Ceph. Multiple gateways can be configured for high availability using the targetcli utility.
iii. Optimizations have been made to the iSCSI gateway to efficiently handle certain SCSI operations like atomic compare and write by offloading work to OSDs to avoid locking on gateway nodes.
London Ceph Day: Unified Cloud Storage with Synnefo + Ceph + Ganeti (Ceph Community)
Vangelis Koukis presented on the Greek Research and Technology Network's (GRNET) public cloud service called Okeanos, which uses Synnefo, Ganeti, and Ceph to provide a production-quality IaaS cloud. Okeanos has been in production since 2011, currently supports over 3,500 users and 5,500 active VMs after initially spawning over 160,000 VMs. The presentation discussed the architecture, challenges of operating a public cloud with persistent VMs, and experiences with rolling upgrades, live migrations, and scaling the cloud infrastructure.
This document discusses high availability (HA) features in SUSE Linux Enterprise Server 12 SP2, including:
- A policy-driven HA cluster with continuous data replication across nodes and simple setup/installation.
- Key HA concepts like resources, constraints, and STONITH (shoot the other node in the head) fencing mechanisms.
- The new Hawk2 web console for managing HA clusters.
- Support for geo-clustering across data centers with concepts like tickets, boothd, and arbitrators.
- Options for maintenance and standby modes, new Cluster-MD software RAID, DRBD replication, OCFS2 and GFS2 cluster filesystems, and easy HA…
Ceph Day Beijing - Our journey to high performance large scale Ceph cluster a... (Danielle Womboldt)
This document discusses optimizing performance in large scale CEPH clusters at Alibaba. It describes two use models for writing data in CEPH and improvements made to recovery performance by implementing partial and asynchronous recovery. It also details fixes made to bugs that caused data loss or inconsistency. Additionally, it proposes offloading transaction queueing from PG workers to improve performance by leveraging asynchronous transaction workers and evaluating this approach through bandwidth testing.
Ceph Day KL - Ceph Tiering with High Performance Architecture (Ceph Community)
Ceph can provide storage tiering with different performance levels. It allows combining SSD, SAS, and SATA disks from multiple nodes into pools to provide tiered storage. Performance testing showed that for reads, Ceph provided good performance across all tiers, while for writes, NVMe disks had the best performance compared to SSD, SAS, and SATA disks. FIO, IOmeter, and IOzone were some of the tools used to measure throughput and IOPS.
The document discusses Ceph, an open-source distributed storage system. It provides an overview of Ceph's architecture and components, how it works, and considerations for setting up a Ceph cluster. Key points include: Ceph provides unified block, file and object storage interfaces and can scale exponentially. It uses CRUSH to deterministically map data across a cluster for redundancy. Setup choices like network, storage nodes, disks, caching and placement groups impact performance and must be tuned for the workload.
Ceph Day Chicago - Ceph Deployment at Target: Best Practices and Lessons Learned (Ceph Community)
This document summarizes lessons learned from Target's initial Ceph deployment and subsequent improvements. The initial deployment suffered from poor performance due to using unreliable SATA drives without caching. Instrumentation would have revealed issues sooner. The redesigned deployment used SSD journals and improved hardware, increasing performance 10x. Key lessons are to understand objectives, select suitable hardware, monitor metrics, and not assume Ceph can overcome poor hardware choices. Future work includes all-SSD testing and automating deployments.
This document provides an introduction to PowerShell and the Dell Command | PowerShell Provider (DCPP). It discusses the history and versions of PowerShell, how to get help, and how to use the Integrated Scripting Environment. It also covers the basics of PowerShell cmdlet structure and aliases. The document then introduces DCPP, which can be used to configure BIOS settings on Dell devices, and provides instructions for installing DCPP either from a zip file or from the PowerShell Gallery.
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture (Danielle Womboldt)
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, QCT's lab test environment, Ceph tuning recommendations, and benefits of using multi-partitioned NVMe SSDs for Ceph OSDs.
This document outlines an agenda for a presentation on running MySQL on Ceph storage. It includes a comparison of MySQL on Ceph versus AWS, results from a head-to-head performance lab test between the two platforms, and considerations for hardware architectures and configurations optimized for MySQL workloads on Ceph. The lab tests showed that Ceph could match or exceed AWS on both performance metrics like IOPS/GB and price/performance metrics like storage cost per IOP.
This document discusses using iSCSI to provide access to Ceph RADOS Block Device (RBD) images from heterogeneous operating systems and applications. It describes how the Linux IO Target (LIO) can be configured as an iSCSI target with the RBD storage backend to export Ceph RBD images. This allows standard iSCSI initiators to access RBD images without requiring Ceph-aware clients. It also explains how LIO and Lrbd can be used to configure multiple iSCSI gateways for high availability and redundancy.
Ceph Day San Jose - From Zero to Ceph in One Minute (Ceph Community)
Croit is a new startup that aims to simplify Ceph management. Their solution involves live booting Ceph nodes without installing an operating system, managing the entire cluster from a web interface, and allowing any employee to perform basic tasks. Croit was founded by people experienced with Ceph who encountered common problems like complex management scripts and hardware issues. Their goal is to eliminate the need for specialists by automating tasks and enabling easy scaling through a diskless architecture and centralized management portal.
The document discusses Ceph storage performance on all-flash storage systems. It describes how SanDisk optimized Ceph for all-flash environments by tuning the OSD to handle the high performance of flash drives. The optimizations allowed over 200,000 IOPS per OSD using 12 CPU cores. Testing on SanDisk's InfiniFlash storage system showed it achieving over 1.5 million random read IOPS and 200,000 random write IOPS at 64KB block size. Latency was also very low, with 99% of operations under 5ms for reads. The document outlines reference configurations for the InfiniFlash system optimized for small, medium and large workloads.
This document summarizes BlueStore, a new storage backend for Ceph that provides faster performance compared to the existing FileStore backend. BlueStore manages metadata and data separately, with metadata stored in a key-value database (RocksDB) and data written directly to block devices. This avoids issues with POSIX filesystem transactions and enables more efficient features like checksumming, compression, and cloning. BlueStore addresses consistency and performance problems that arose with previous approaches like FileStore and NewStore.
Ceph is an open-source distributed storage system that provides object, block, and file storage. The document discusses optimizing Ceph for an all-flash configuration and analyzing performance issues when using Ceph on all-flash storage. It describes SK Telecom's testing of Ceph performance on VMs using all-flash SSDs and compares the results to a community Ceph version. SK Telecom also proposes their all-flash Ceph solution with custom hardware configurations and monitoring software.
The document provides recommendations for optimizing an OpenStack cloud environment using Ceph storage. It discusses configuring Glance, Cinder, and Nova to integrate with Ceph, as well as recommendations for the Ceph cluster itself regarding OSDs, journals, networking, and failure domains. Performance was improved by converting image formats to raw, enabling SSD journals, bonding network interfaces, and adjusting scrubbing settings.
Ceph is evolving its network stack to improve performance. It is moving from AsyncMessenger to using RDMA for better scalability and lower latency. RDMA support is now built into Ceph and provides native RDMA using verbs or RDMA-CM. This allows using InfiniBand or RoCE networks with Ceph. Work continues to fully leverage RDMA for features like zero-copy replication and erasure coding offload.
Practical advice on how to achieve persistence in Redis. A detailed overview of all the pros and cons of RDB snapshots and AOF logging. Tips and tricks for proper persistence configuration with Redis pools and master/slave replication.
Red Hat Enterprise Linux: Open, hyperconverged infrastructure (Red_Hat_Storage)
The next generation of IT will be built around flexible infrastructures and operational efficiencies, lowering costs and increasing overall business value in the organization.
A hyperconverged infrastructure that's built on Red Hat supported technologies--including Linux, Gluster storage, and oVirt virtualization manager--will run on commodity x86 servers using the performance of local storage, to deliver a cost-effective, modular, highly scalable, and secure hyperconverged solution.
This document is a presentation about why openSUSE matters. It discusses how openSUSE provides pre-compiled packages so users do not have to wait years to get answers. It also talks about how openSUSE is forever trendy and up-to-date with the latest Tumbleweed release. The presentation discusses how openSUSE provides tools like Kiwi, YaST, and SUSE Studio to help users customize their systems. It focuses on openSUSE's importance in areas like cloud computing, containers, big data, and how the future is in users' hands.
This document discusses how principles of being a good scout, such as leaving places better than you found them, can be applied to software development. Some ways to "be a good scout" in code include improving documentation, adding comments, refactoring code, writing tests, and cleaning up unused files or code. Being a good scout takes hard work, determination, ingenuity, and tenacity - qualities that also make a developer excellent at continually improving code quality.
This document provides an overview of storage best practices in oVirt, including oVirt storage domains, manual tiering across different storage types, volume types and allocation policies, and single disk snapshots. It discusses using different storage domains like NFS, iSCSI, Fibre Channel for manual tiering to choose the best storage. It also covers volume types, allocation policies of preallocated vs thin provisioning, and using QCOW2 format for snapshots. Finally, it describes how oVirt implements single disk snapshots using logical volume manager (LVM).
OpenStack Overview: Deployments and the Big Tent, Toronto 2016 (Jonathan Le Lous)
Where are we with OpenStack deployments worldwide and in Canada?
- This presentation is based on the latest OpenStack User Surveys and information collected from the OpenStack ecosystem in Canada.
- We will also talk about the Big Tent, the new OpenStack project governance model.
Build an High-Performance and High-Durable Block Storage Service Based on Ceph (Rongze Zhu)
This document discusses building a high-performance and durable block storage service using Ceph. It describes the architecture, including a minimum deployment of 12 OSD nodes and 3 monitor nodes. It outlines optimizations made to Ceph, Qemu, and the operating system configuration to achieve high performance, including 6000 IOPS and 170MB/s throughput. It also discusses how the CRUSH map can be optimized to reduce recovery times and number of copysets to improve durability to 99.99999999%.
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
The document describes Linux containerization and virtualization technologies including containers, control groups (cgroups), namespaces, and backups. It discusses:
1) How cgroups isolate and limit system resources for containers through mechanisms like cpuset, cpuacct, cpu, memory, blkio, and freezer.
2) How namespaces isolate processes by ID, mounting, networking, IPC, and other resources to separate environments for containers.
3) The new backup system which uses thin provisioning and snapshotting to efficiently backup container environments to backup servers and restore individual accounts or full servers as needed.
Oracle database and hardware were reaching end of support and needed to be migrated from an on-premise HP-UX server to AWS RDS. Key considerations for the migration included verifying Oracle license types supported on AWS RDS, supported database versions, available migration methods like Data Pump and Export/Import, storage space needed for data dumps, and potential downtime. The document outlined the steps to configure GoldenGate for a zero downtime migration of the 300GB Oracle database to AWS RDS, including installing and configuring GoldenGate on the on-premise and EC2 systems, setting up the extract and manager processes, and replicating the initial data.
Deep Dive on Amazon EC2 Instances (March 2017) (Julien SIMON)
This document provides an overview of Amazon EC2 instance types and performance optimization best practices. It discusses the factors that go into choosing an EC2 instance, how instance performance is characterized, and how to optimize workloads through choices like instance type, operating system, and configuration settings. Specific tips are provided around topics like timekeeping, CPU credit monitoring, NUMA, and kernel optimizations. The goal is to help users make the most of their EC2 experience through understanding instance internals and performance tradeoffs.
SRV402 Deep Dive on Amazon EC2 Instances, Featuring Performance Optimization ... (Amazon Web Services)
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and Accelerated Computing (GPU and FPGA) instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Presented at LISA18: https://www.usenix.org/conference/lisa18/presentation/babrou
This is a technical dive into how we used eBPF to solve real-world issues uncovered during an innocent OS upgrade. We'll see how we debugged a 10x CPU increase in Kafka after a Debian upgrade and what lessons we learned. We'll go from high-level effects like increased CPU, to flamegraphs showing us where the problem lies, to tracing timers and function calls in the Linux kernel.
The focus is on tools that operational engineers can use to debug performance issues in production. This particular issue happened at Cloudflare on a Kafka cluster doing 100 Gbps of ingress and many multiples of that egress.
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture (Ceph Community)
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, and techniques for configuring and optimizing all-flash Ceph performance.
Best Practices & Performance Tuning - OpenStack Cloud Storage with Ceph - In this presentation, we discuss best practices and performance tuning for OpenStack cloud storage with Ceph to achieve high availability, durability, reliability and scalability at any point of time. Also discuss best practices for failure domain, recovery, rebalancing, backfilling, scrubbing, deep-scrubbing and operations
This document provides instructions for various system administration tasks on Sun Solaris systems, including:
- Installing and configuring NFS, DNS, FTP, and other network services.
- Configuring devices like SCSI disks, modems, and tape drives.
- Performing backups, installing software packages, and viewing system information.
- Troubleshooting tips, monitoring performance, and debugging syslog.
It covers topics ranging from low-level kernel configuration to high-level network administration. The document serves as a reference guide for Solaris system administrators to complete common system management and maintenance activities.
The document provides tips for optimizing PostgreSQL performance on hardware and configuration settings. It recommends starting with hard drive optimization using RAID 1 or RAID 10 configurations on an SSD or SAS drive array. It also recommends optimizing memory settings like shared_buffers, work_mem and maintenance_work_mem as well as I/O settings like checkpoint_timeout. The document emphasizes the importance of hardware specifications and configuration tuning to improve PostgreSQL performance.
While probably the most prominent, Docker is not the only tool for building and managing containers. Originally meant to be a "chroot on steroids" to help debug systemd, systemd-nspawn provides a fairly uncomplicated approach to work with containers. Being part of systemd, it is available on most recent distributions out-of-the-box and requires no additional dependencies.
This deck will introduce a few concepts involved in containers and will guide you through the steps of building a container from scratch. The payload will be a simple service, which will be automatically activated by systemd when the first request arrives.
This document provides information on various debugging and profiling tools that can be used for Ruby including:
- lsof to list open files for a process
- strace to trace system calls and signals
- tcpdump to dump network traffic
- google perftools profiler for CPU profiling
- pprof to analyze profiling data
It also discusses how some of these tools have helped identify specific performance issues with Ruby like excessive calls to sigprocmask and memcpy calls slowing down EventMachine with threads.
This document discusses DM Multipath, which provides multipathing functionality in Linux. It describes DM Multipath components like the dm_multipath kernel module and multipathd daemon. It also provides instructions on setting up DM Multipath, including installing packages, configuring multipath.conf, and starting the multipath daemon. Examples are given of multipath devices being accessed, partitioned using LVM, and mounted. Paths and devices in a multipath configuration are shown.
The document discusses hacking the Swisscom modem by exploiting default credentials to gain access. Upon login, the author runs commands to investigate the system such as viewing configuration files and mapping the internal network. Various system details are discovered including the Linux kernel version and software components.
Aquarium is a SUSE-sponsored Open Source project to build an easy-to-use, rock-solid appliance wrapped around the Ceph project. The project started development in January 2021, and has become a passion project for the storage team at SUSE.
openATTIC is the web UI for managing Ceph storage in SUSE Linux, and it has also been accepted upstream as part of the default management UI. openATTIC incorporates many different projects to provide all of its functions, including Salt, Grafana, and Prometheus.
This will not be a Ceph- or openATTIC-focused talk, even though we use them as the example of how everything works together. Instead, we will look into openATTIC to see how each project works with the others. We will mainly focus on Grafana and Prometheus: they are not only very useful for Ceph and openATTIC, but equally powerful for monitoring the status of your clusters, VMs, or cloud/container deployments.
Both Grafana and Prometheus are very easy to extend, allowing users or administrators to build dashboards that fit their own needs. Even though the presentation examples are based on Ceph/storage, participants can apply the same ideas to monitor any system status they want once they understand how these tools work.
This document discusses Internet of Things (IoT) technology and provides an overview of the IoT market, key players, communication protocols, and various IoT hardware and software solutions. It examines the projected growth of connected devices, current investment areas, major technology companies, open communication standards, and examples of IoT hardware platforms, cloud services, and consumer products from around the world. The document also reviews VIA's VAB-1000 development board and provides suggestions for strengthening its position in the IoT field.
An overview of open source business models and companies in China, helping open source developers and students evaluate what has been done and how to pick their career path accordingly.
3. Storage Trend
> Data size and capacity
– Multimedia content
– Large demo binaries, detailed graphics / photos, audio and video, etc.
> Data functional needs
– Different business requirements
– More data-driven processes
– More applications with data
– More e-commerce
> Data backup for a longer period
– Legislation and compliance
– Business analysis
5. Software-Defined Storage
> High extensibility:
– Distributed over multiple nodes in a cluster
> High availability:
– No single point of failure
> High flexibility:
– API, block device and cloud-supported architecture
> Pure software-defined architecture
> Self-monitoring and self-repairing
7. Why use Cloud Storage?
> Very high ROI compared to traditional hardware storage solution vendors
> Cloud ready and S3 supported
> Thin provisioning
> Remote replication
> Cache tiering
> Erasure coding
> Self-managing and self-repairing with continuous monitoring
8. Other Key Features
> Supports clients on multiple operating systems
> Data encryption on the physical disks (more CPU needed)
> On-the-fly data compression
> Virtually unlimited extensibility
> Copy-on-write (clones and snapshots)
> iSCSI support (VMs, thin clients, etc.)
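As an illustration of the copy-on-write feature, here is a minimal sketch of snapshotting and cloning an RBD image with the stock rbd tool (the pool name rbd and the image names base/child are illustrative assumptions, not from the slides):
> rbd snap create rbd/base@snap1 # take a snapshot of the base image
> rbd snap protect rbd/base@snap1 # a snapshot must be protected before it can be cloned
> rbd clone rbd/base@snap1 rbd/child # copy-on-write clone; data is shared until written
> rbd flatten rbd/child # optional: detach the clone from its parent
Note that on older Ceph releases cloning requires the base image to have been created with --image-format 2.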
14. HTPC AMD (A8-5545M)
Form factor:
– 29.9 mm x 107.6 mm x 114.4 mm
CPU:
– AMD A8-5545M (clocks up to 2.7 GHz / 4 MB cache, 4 cores)
RAM:
– 8 GB DDR3-1600 Kingston (up to 16 GB SO-DIMM)
Storage:
– mS200 120 GB / mSATA / read: 550 MB/s, write: 520 MB/s
LAN:
– Gigabit LAN (Realtek RTL8111G)
Connectivity:
– 4 x USB 3.0
Price:
– $6,980 (NTD)
15. Enclosure
Form factor:
– 215 (D) x 126 (W) x 166 (H) mm
Storage:
– Supports any brand of 3.5" SATA I / II / III hard disk drive; 4 x 8 TB = 32 TB
Connectivity:
– USB 3.0 or eSATA interface
Price:
– $3,000 (NTD)
16. AMD (A8-5545M)
> Node = 6,980
> 512 GB SSD + 4 TB + 6 TB + enclosure = 5,000 + 4,000 + 7,000 = 16,000
> 30 TB total = (6,980 + 16,000) x 3 ≈ 69,000
> That is about half the cost of 30 TB on Amazon's cloud over 1 year
18. QUICK 3 NODE SETUP
Demo: basic setup of a small cluster
19. Ceph Cluster Requirements
> At least 3 MONs
> At least 3 OSDs
– At least 15 GB per OSD
– Journals are better placed on SSD
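Once a cluster is up, these requirements are easy to verify with the standard Ceph status commands (not shown on the slide):
> ceph -s # overall health, MON quorum and OSD count
> ceph quorum_status # details of the MON quorum
> ceph osd tree # OSD layout across the hosts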
20. ceph-deploy
> A passwordless SSH key needs to be distributed to all cluster nodes
> Each node's ceph user needs sudo rights for root permission
> ceph-deploy new <node1> <node2> <node3>
– Defines the initial MONs for the new cluster
> A ceph.conf file will be created in the current directory for you to build your cluster configuration
> Each cluster node should have an identical ceph.conf file
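A minimal sketch of that flow on the admin host (node1-node3 are the slide's example hostnames; exact steps vary slightly by ceph-deploy version):
> ssh-keygen # generate the passwordless key once
> for n in node1 node2 node3; do ssh-copy-id ceph@$n; done
> ceph-deploy new node1 node2 node3 # writes ceph.conf and a bootstrap keyring
> ceph-deploy mon create-initial # actually deploys and starts the MONs
> ceph-deploy admin node1 node2 node3 # pushes ceph.conf and the admin keyring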
25. RBD Management
> rbd --pool ssd create --size 10000 ssd_block
– Creates a ~10 GB RBD image in the ssd pool (--size is given in MB)
> rbd map ssd/ssd_block (on the client)
– It should show up as /dev/rbd/<pool-name>/<block-name>
> Then you can use it like any other block device
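For example, a minimal sketch of putting a filesystem on the mapped device (assumes the client has the cluster's ceph.conf plus a keyring with access to the ssd pool):
> sudo rbd map ssd/ssd_block
> sudo mkfs.xfs /dev/rbd/ssd/ssd_block # any filesystem works; xfs is just an example
> sudo mount /dev/rbd/ssd/ssd_block /mnt
> sudo umount /mnt && sudo rbd unmap /dev/rbd/ssd/ssd_block # clean up when done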
27. Files prepared for this demo
Kiwi Image SLE12 + SES2
> https://files.secureserver.net/0fCLysbi0hb8cr
Git Salt Stack repo
> https://github.com/AvengerMoJo/Ceph-Saltstack
28. USB install, then prepare the Salt minions
> # accept all node* keys from the minions
> salt-key -a 'node*'
> # copy all the modules and _systemd to /srv/salt/
> sudo salt 'node*' saltutil.sync_all
> # benchmark (get baseline I/O numbers for all the disks)
> sudo salt "node*" ceph_sles.bench_disk /dev/sda /dev/sdb /dev/sdc /dev/sdd
> # get all the disk information
> sudo salt "node*" ceph_sles.disk_info
> # get all the networking information
> sudo salt -L "salt-master node1 node2 node3 node4 node5" ceph_sles.bench_network salt-master node1 node2 node3 node4 node5
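Before calling into the custom ceph_sles module, it is worth confirming that every minion responds (standard Salt commands, not from the slides):
> sudo salt 'node*' test.ping # each minion should return True
> sudo salt 'node*' saltutil.running # make sure nothing is stuck mid-job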
29. Prepare and Create the Cluster MONs
> # create the salt-master SSH key
> sudo salt "salt-master" ceph_sles.keygen
> # send the key over to the nodes
> sudo salt "salt-master" ceph_sles.send_key node1 node2 node3
> # create a new cluster with the new MONs
> sudo salt "salt-master" ceph_sles.new_mon node1 node2 node3
> # send the cluster conf and keys over to the nodes
> sudo salt "salt-master" ceph_sles.push_conf salt-master node1 node2 node3
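At this point, a quick sanity check that the three MONs have formed a quorum can be run through plain Salt (cmd.run wrapping stock Ceph commands; this is not part of the ceph_sles module):
> sudo salt 'node1' cmd.run 'ceph mon stat' # expect 3 mons with node1,node2,node3 in quorum
> sudo salt 'node1' cmd.run 'ceph -s'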
30. Create Journals and OSDs
> # create the OSD journal partition
> # we can combine this with get_disk_info for SSD auto-assignment
> sudo salt -L "node1 node2 node3" ceph_sles.prep_osd_journal /dev/sda 40G
> # clean all the OSD disk partitions first
> sudo salt 'salt-master' ceph_sles.clean_disk_partition "node1,node2,node3" "/dev/sdb,/dev/sdc,/dev/sdd"
> # prepare the list of OSDs for the cluster
> sudo salt "salt-master" ceph_sles.prep_osd "node1,node2,node3" "/dev/sdb,/dev/sdc,/dev/sdd"
31. Update the CRUSH Map and Run a rados Benchmark
> # CRUSH map update for the benchmark
> sudo salt "salt-master" ceph_sles.crushmap_update_disktype_ssd_hdd node1 node2 node3
> # rados bench
> sudo salt "salt-master" ceph_sles.bench_rados
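After the CRUSH update, ceph osd tree should show the OSDs regrouped by disk type, and the benchmark can also be run directly with the stock rados tool (the pool name here is an illustrative assumption; ceph_sles.bench_rados presumably wraps something similar):
> ceph osd tree # verify the ssd/hdd CRUSH layout and that all OSDs are up
> rados bench -p samba_hdd_pool 60 write --no-cleanup # 60-second write benchmark, keep the objects
> rados bench -p samba_hdd_pool 60 seq # sequential read benchmark over those objects
> rados -p samba_hdd_pool cleanup # remove the benchmark objects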
32. Cache Tier Setup
> sudo salt "salt-master" ceph_sles.create_pool samba_ssd_pool 100 2 ssd_replicated
> sudo salt "salt-master" ceph_sles.create_pool samba_hdd_pool 100 3 hdd_replicated
> ceph osd tier add samba_hdd_pool samba_ssd_pool
> ceph osd tier cache-mode samba_ssd_pool writeback
> ceph osd tier set-overlay samba_hdd_pool samba_ssd_pool
> ceph osd pool set samba_ssd_pool hit_set_type bloom
> ceph osd pool set samba_ssd_pool hit_set_count 2
> ceph osd pool set samba_ssd_pool hit_set_period 300
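One caveat worth adding: a writeback cache tier only starts flushing and evicting once it knows how large it may grow, so size targets are normally set on the cache pool as well (the values below are illustrative assumptions, not from the slides):
> ceph osd pool set samba_ssd_pool target_max_bytes 100000000000 # ~100 GB ceiling for the cache
> ceph osd pool set samba_ssd_pool cache_target_dirty_ratio 0.4 # start flushing dirty objects at 40%
> ceph osd pool set samba_ssd_pool cache_target_full_ratio 0.8 # start evicting at 80%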