VSM (Virtual Storage Manager) is an open-source tool developed by Intel to simplify Ceph storage cluster management and foster greater adoption of Ceph. It lets administrators deploy, manage, monitor, and integrate Ceph clusters with cloud orchestrators such as OpenStack through features like automated deployment, visual cluster management, health and performance monitoring, and OpenStack integration. Written in Python, VSM is released under the Apache 2.0 license and supports both Ceph and OpenStack on various Linux distributions.
This document provides background information on Wellgreen Platinum Ltd. and its Wellgreen platinum project. It discusses the large-scale polymetallic deposit containing platinum group metals (PGMs), nickel, copper, cobalt, and gold. The 2014 mineral resource estimate outlines 330 million tonnes of measured and indicated resources containing over 5.5 million ounces of PGMs and 4.9 billion pounds of nickel and copper. A 2015 preliminary economic assessment outlined an open-pit mine with average annual production of 209,000 ounces of PGMs and 128 million pounds of nickel and copper over a 16-year mine life, at costs in the lowest quartile. The project benefits from excellent infrastructure and year-round operations in Canada's Yukon.
The document is a presentation by Wellgreen Platinum Ltd providing information on its Wellgreen platinum project in Yukon, Canada. It summarizes key details of the project including its large scale polymetallic deposit containing nickel, copper, platinum group metals and other metals, the 2014 mineral resource estimate showing over 5 billion pounds of nickel and copper in the measured and indicated categories, and the positive 2015 preliminary economic assessment showing over $1.2 billion post-tax NPV with a 3.1 year payback. The presentation also outlines the company's strong financial position, management team with extensive experience in project development and operations, and district scale exploration potential.
The document provides background information on Wellgreen Platinum Ltd., a company developing the Wellgreen platinum group metals (PGM) project in Canada. It discusses the large-scale polymetallic deposit, noting its significant PGM component with a platinum to palladium ratio of 1:1 that is open pittable. The document also highlights the project's proximity to LNG power, lack of endangered species, strong government and First Nations support, and the critical nature and growing demand for its metals including nickel, PGMs, copper, gold, cobalt, and lithium, which are used in batteries, aerospace, power plants, stainless steel, construction, electronics, green technology, and jewelry.
Ceph Day Shanghai - On the Productization Practice of Ceph Ceph Community
H3C has requirements for a distributed storage solution for productization including cost, ease of use, reliability, availability, and maintainability. Ceph was chosen due to its scalability, ease of maintenance, unified storage, ability to combine with cloud, and use of commodity hardware. H3C has developed a Ceph-based product with a web UI and automated deployment. There are still technical issues to address like suboptimal CRUSH solutions, OSD flapping, and high availability of iSCSI. Future plans include participating in the open source Ceph community, cooperating with other manufacturers, addressing customer issues, and contributing improvements in reliability, availability and maintainability.
Reference Architecture: Architecting Ceph Storage Solutions Ceph Community
This document discusses reference architectures for Ceph storage solutions. It provides guidance on key design considerations for Ceph clusters, including workload profiling, storage access methods, capacity planning, fault tolerance, and data protection schemes. Example hardware configurations are also presented for different performance and cost optimization targets.
London Ceph Day: Erasure Coding: Purpose and Progress Ceph Community
The document discusses erasure coding in Ceph, which can store 1 petabyte of data in roughly 1.3 petabytes of raw space (for example with a 10+3 code), versus 3 petabytes under triple replication. Erasure coding saves space compared to replication but makes object mutations and recovery harder. The document also mentions plans to refactor erasure coding in Ceph and test with alpha users, with the goal of completing the work by February 2014, and provides contact information for the lead developer on erasure coding in Ceph.
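The replication-versus-erasure-coding space arithmetic can be sketched as follows; the 10+3 profile is an illustrative choice, not necessarily the one used in the talk.

```python
def ec_raw_space(usable_pb, k, m):
    """Raw capacity needed to store `usable_pb` of data with a k+m
    erasure code: each object is split into k data chunks plus m
    coding chunks, so the overhead factor is (k + m) / k."""
    return usable_pb * (k + m) / k

def replicated_raw_space(usable_pb, copies=3):
    """Raw capacity under plain replication: one full copy per replica."""
    return usable_pb * copies

# Storing 1 PB of user data:
print(replicated_raw_space(1.0))     # 3.0 PB with 3x replication
print(ec_raw_space(1.0, k=10, m=3))  # 1.3 PB with a 10+3 erasure code
```

The same ratio is why erasure-coded pools trade CPU cost (encoding, and reconstruction on recovery) for a much lower raw-to-usable multiplier than replication.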
This document summarizes Haomai Wang's presentation on containers and Ceph. Some key points include:
- Ceph can provide block, file, and object storage for containers and virtual machines. Block storage via RBD is commonly used today but file storage via CephFS may be better suited for containers.
- CephFS provides POSIX file sharing across containers and clients, with improvements in snapshotting and statistics capabilities. It inherits Ceph's scalability and resilience.
- Orchestration tools like Kubernetes can integrate Ceph storage, either using existing volume plugins or new plugins being developed for Ceph block and file storage. This allows containers to easily share storage.
London Ceph Day: Ceph Performance and Optimization Ceph Community
This document discusses tools for analyzing Ceph performance. It begins by describing common performance issues users encounter with Ceph and potential solutions like tuning configuration values or benchmarking. The rest of the document details various monitoring and benchmarking tools that can help identify bottlenecks like the dispatch layer, object store, or hardware. It provides examples of using tools like dstat, iostat, perf, systemtap, ceph perf dump, and benchmarking tools like Fio, rbd-replay and ceph_perf_local. It concludes with a case study where unaligned partitions and a driver bug were causing low IOPS that were resolved by fixing the partition alignment and downgrading the NVMe driver.
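As a small illustration of the `ceph perf dump` side of this tooling, the sketch below pulls an average latency out of a perf-counter dump. The JSON excerpt is invented for the example (counter names and layout vary across Ceph releases), but the avgcount/sum pair is the shape Ceph uses for latency counters.

```python
import json

# Invented excerpt in the style of `ceph daemon osd.0 perf dump`;
# real dumps are much larger and release-dependent.
sample = """
{"osd": {"op_r": 1200,
         "op_w": 800,
         "op_latency": {"avgcount": 2000, "sum": 5.0}}}
"""

osd = json.loads(sample)["osd"]
ops = osd["op_r"] + osd["op_w"]
lat = osd["op_latency"]
# Latency counters are cumulative: divide total seconds by event count.
avg_ms = 1000.0 * lat["sum"] / lat["avgcount"]
print(f"{ops} ops observed, average op latency {avg_ms:.1f} ms")
```

Sampling such a dump twice and differencing the counters gives rates over the interval, which is how these numbers are usually turned into a bottleneck diagnosis.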
Ceph Day KL - Ceph Tiering with High Performance Architecture Ceph Community
Ceph can provide storage tiering with different performance levels, combining SSD, SAS, and SATA disks from multiple nodes into pools to provide tiered storage. Performance testing showed that for reads Ceph performed well across all tiers, while for writes NVMe disks had the best performance compared to SSD, SAS, and SATA disks. FIO, IOmeter, and IOzone were among the tools used to measure throughput and IOPS.
This document summarizes a presentation on scaling academic clouds with Ceph. It discusses the software-defined datacenter and how Ceph can manage cloud resources and work with OpenStack. Two university Ceph deployments are described: a collaboration between 4 UK universities using Ceph for a microbial bioinformatics cloud, with 6.9PB of total capacity across the sites; and a deployment at the University of Zurich with 4.2PB of high-capacity tier storage and 112TB of high-performance SSD storage to provide block access for scientific applications. The presentation covers considerations for infrastructure like storage nodes, networks, and datacenters, and references a Dell/Inktank Ceph reference architecture providing hardware, software, and services.
This document discusses erasure coding in Ceph, including how it saves storage space compared to replication, improvements made over time like faster recovery and locally recoverable codes, and various contributors to erasure coding in Ceph from companies like Red Hat, Fujitsu, Intel, and ARM. It also mentions releases from Firefly to Infernalis where different erasure coding features were added.
Ceph Day Beijing: Experience Sharing and OpenStack and Ceph Integration Ceph Community
This document discusses OpenStack and Ceph integration. It provides an introduction to AWcloud, describes benefits and challenges of using Ceph with OpenStack including performance tuning, high concurrency workload handling, and Cinder backup functionality. Specific topics covered include optimizing the Ceph configuration, customizing the cluster layout, removing underperforming OSDs, adding Cinder volume workers, and using Ceph diff snapshots to backup and restore volumes.
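The diff-snapshot backup idea works because only extents changed since a base snapshot need to be shipped, which is what `rbd export-diff`/`rbd import-diff` do at the RBD level. Below is a toy model of that mechanism (invented block size and data, no real RBD involved):

```python
def changed_extents(base: bytes, current: bytes, block: int = 4):
    """Return (offset, data) pairs for blocks that differ from `base`,
    i.e. the only data an incremental backup has to transfer."""
    extents = []
    for off in range(0, len(current), block):
        if current[off:off + block] != base[off:off + block]:
            extents.append((off, current[off:off + block]))
    return extents

def apply_extents(base: bytes, extents):
    """Replay a diff onto a copy of the base image (the restore side)."""
    img = bytearray(base)
    for off, data in extents:
        img[off:off + len(data)] = data
    return bytes(img)

base = b"AAAABBBBCCCC"
current = b"AAAAXXXXCCCC"          # only the middle block changed
diff = changed_extents(base, current)
print(diff)                        # [(4, b'XXXX')]
assert apply_extents(base, diff) == current
```

Restoring a volume then amounts to importing the full base snapshot once and replaying each incremental diff in order.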
Ceph Day New York 2014: Distributed OLAP queries in seconds using CephFS Ceph Community
AdFin is a company that provides analytics tools for programmatic advertising markets to bring transparency. They developed PetaBucket, a distributed, relational OLAP database that can query a petabyte dataset in seconds. AdFin uses CephFS for scalable storage across petabyte datasets and nodes. They contributed code to add local caching support to the Ceph kernel client to improve performance for their workload of querying recent time-series data more frequently.
Ceph Day Beijing: Ceph-Dokan: A Native Windows Ceph Client Ceph Community
Ceph-Dokan is a native Windows client for Ceph that allows mounting Ceph filesystems (CephFS) and block devices (RBD) directly on Windows. It uses Dokan, which implements the Windows file system API, to provide a FUSE-like interface. This allows direct access to Ceph storage on Windows without translation layers. The document outlines the development of Ceph-Dokan, how to compile and use it, and future plans to integrate authentication and develop DLLs for the Ceph libraries.
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph Ceph Community
Hyper Converged PLCloud with Ceph
This document discusses PowerLeader Cloud (PLCloud), a cloud computing platform that uses a hyper-converged infrastructure with OpenStack, Docker, and Ceph. It provides an overview of PLCloud and how it has adopted OpenStack, Ceph, and other open source technologies. It then describes PLCloud's hyper-converged architecture and how it leverages OpenStack, Docker, and Ceph. Finally, it discusses a specific use case where Ceph RADOS Gateway is used for media storage and access in PLCloud.
This document discusses using iSCSI to provide access to Ceph RADOS Block Device (RBD) images from heterogeneous operating systems and applications. It describes how the Linux IO Target (LIO) can be configured as an iSCSI target with the RBD storage backend to export Ceph RBD images. This allows standard iSCSI initiators to access RBD images without requiring Ceph-aware clients. It also explains how LIO and lrbd can be used to configure multiple iSCSI gateways for high availability and redundancy.
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...Ceph Community
This document discusses best practices for implementing Ceph-powered storage as a service. It covers planning a Ceph implementation based on business and technical requirements. Various use cases for Ceph are described, including OpenStack, cloud storage, web-scale applications, high performance block storage, archive/cold storage, databases and Hadoop. Architectural considerations for redundancy, servers, networking are also discussed. The document concludes with a case study of a university implementing a Ceph-based storage cloud to address storage needs for cancer and genomic research data.
This document provides troubleshooting guidance for issues with Ceph. It begins by suggesting identifying the problem domain as either performance, hang, crash, or unexpected behavior. For each problem, it recommends tools and techniques for further investigation such as debugging logs, profiling tools, and source code analysis. Debugging steps include establishing baselines, identifying implicated hosts or subsystems, increasing log verbosity, and tracing transactions through logs. The document emphasizes starting at the user end and working back towards Ceph to isolate issues.
Ceph Day Melbourne - Scale and performance: Servicing the Fabric and the Work...Ceph Community
The document discusses scale and performance challenges in providing storage infrastructure for research computing. It describes Monash University's implementation of the Ceph distributed storage system across multiple clusters to provide a "fabric" for researchers' storage needs in a flexible, scalable way. Key points include:
- Ceph provides software-defined storage that is scalable and can integrate with other systems like OpenStack.
- Multiple Ceph clusters have been implemented at Monash of varying sizes and purposes, including dedicated clusters for research data storage.
- The infrastructure provides different "tiers" of storage with varying performance and cost characteristics to meet different research needs.
- Ongoing work involves expanding capacity and upgrading hardware to improve performance.
Ceph Day Beijing: Big Data Analytics on Ceph Object Store Ceph Community
Big Data Analytics on Ceph Object Storage
The document discusses using Ceph object storage for big data analytics workloads on OpenStack. It covers deployment considerations for analytics clusters using options like VMs, containers, or bare metal. It details the design of using Ceph RADOS Gateway (RGW) with an SSD cache tier for storage, and developing an RGW file system adapter and proxy for scheduling. Sample performance testing showed container overhead of 1.46x and VM overhead of 2.19x compared to bare metal. The next steps are to complete development and performance testing of the Ceph/RGW solution.
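Read as runtime multipliers, those overhead figures translate into relative throughput as below; interpreting "overhead" as a slowdown factor is an assumption here, since the deck itself would define the metric precisely.

```python
def relative_throughput(slowdown):
    """If a deployment option takes `slowdown` times as long as bare
    metal, it delivers 1/slowdown of the bare-metal throughput."""
    return 1.0 / slowdown

# Figures quoted in the summary: 1.46x for containers, 2.19x for VMs.
for option, factor in (("container", 1.46), ("vm", 2.19)):
    print(f"{option}: {relative_throughput(factor):.0%} of bare metal")
```

By this reading, containers keep roughly two-thirds of bare-metal throughput while full VMs keep under half, which is the usual argument for container-based analytics nodes.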
Ceph Day Berlin: Measuring and predicting performance of Ceph clusters Ceph Community
This document provides a summary of a presentation about modeling, estimating, and predicting performance for Ceph storage clusters. The presentation discusses the challenges of predicting SDS (software-defined storage) performance due to the large number of configurable options. It proposes collecting standardized benchmark and configuration data from production systems to build a dataset that can provide better performance insights and predictions through analysis. The goal is to develop a benchmark suite to holistically evaluate Ceph performance and address common customer questions about how storage systems with different configurations may perform.
This document summarizes recent developments in the Ceph ecosystem. It discusses upcoming Ceph events like hackathons and developer summits. It provides metrics on Ceph usage and lists recent releases. It also describes work on CephFS, contributions to the Hammer release, and the growing use of the librados library.
Ceph Day New York 2014: Ceph, a physical perspective Ceph Community
The document summarizes the results of testing a Ceph storage cluster configuration using Supermicro hardware. Key findings include:
- Using SSDs for journals improved sequential write bandwidth significantly.
- Erasure coded pools provided reasonable performance at a lower cost compared to replicated pools.
- A single client could saturate the network connection with two 36-bay OSD nodes.
- Network performance was critical as the cluster scaled to support more clients and objects.
- Further testing was needed on erasure coded performance under failure conditions and using newer Ceph and Linux versions.
Transforming the Ceph Integration Tests with OpenStack Ceph Community
This document discusses transforming Ceph tests to use OpenStack. It describes running unit tests locally but running integration tests on OpenStack instances. Developers can now run integration tests on their own OpenStack tenant without waiting for resources. Specifying resources for the OpenStack machines makes the tests more self-service. Future improvements include better multi-cloud support and making archival and setup more convenient.
Elijah Charles from Intel presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
"The Exascale computing challenge is the current Holy Grail for high performance computing. It envisages building HPC systems capable of 10^18 floating point operations under a power input in the range of 20-40 MW. To achieve this feat, several barriers need to be overcome. These barriers or “walls” are not completely independent of each other, but present a lens through which HPC system design can be viewed as a whole, and its composing sub-systems optimized to overcome the persistent bottlenecks."
Watch the video presentation: http://wp.me/p3RLHQ-f7X
See more talks in the Switzerland HPC Conference Video Gallery: http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Improving the performance of OpenSubdiv on Intel Architecture Intel® Software
This document discusses improving the performance of OpenSubdiv on Intel architecture processors. It provides copyright information and legal disclaimers related to using Intel products. The document contains information about optimizing OpenSubdiv to take advantage of features in Intel processors that can improve performance.
DreamWorks Animation achieved a 4x speedup in skin deformation for character animation by utilizing SIMD (Single Instruction, Multiple Data) instructions. Anson Chu, Alex Wells, and Martin Watt of DreamWorks Animation presented on how they optimized their skinning deformation algorithm using SIMD at an Intel conference in August 2015. The presentation covered legal disclaimers and information regarding the use of Intel products and benchmarks.
Transforming Business with Advanced AnalyticsIntel IT Center
The document discusses Intel's new Xeon E7 v2 processor family, which provides significantly better performance and lower total cost of ownership for in-memory analytics workloads compared to IBM Power processors. Key capabilities of the Xeon E7 v2 include up to 2x higher performance, 3x greater memory capacity, 4x higher I/O bandwidth, and reliability features. It established 20 new world records across various analytics benchmarks for 2-socket, 4-socket, and 8-socket configurations. The processor is aimed at transforming businesses through advanced analytics in industries like retail, healthcare, and manufacturing.
DreamWorks Animation was working to optimize the performance of its 3D character animation system. The system relies heavily on hierarchical transformations represented by matrices to define the positions and orientations of character skeleton joints. Evaluating these hierarchical transformations was found to be on the critical path and not well suited to parallelization. DreamWorks investigated representing transformations differently using "X-Form blocks" to allow for deferred evaluation, achieving a 1.6x speedup in the motion system.
Advancing Science in Alternative Energy and Bioengineering with Many-Core Pro...inside-BigData.com
In this deck from the 2014 HPC User Forum in Seattle, Michael Brown from Intel presents: Advancing Science in Alternative Energy and Bioengineering with Many-Core Processors.
Watch the video presentation: http://wp.me/p3RLHQ-d1C
Shifting market forces are disrupting entire industries, forcing product managers to think about APIs and services more strategically to remain nimble and relevant. Thinking about the products you manage in a platform way allows you extend the reach of your connected products -- in ways you may never been able to predict.
Yocto Project Open Source Build System and Collaboration InitiativeMarcelo Sanz
The Yocto Project creates a custom embedded Linux distribution for a device by building recipes that define how to obtain, patch, compile and package software, rather than using an existing Linux distribution. It provides a common build environment and tools to configure, build and test embedded systems across different processor architectures.
Intel: мобильность и трансформация рабочего местаExpolink
The document discusses the transition to the third technological platform of ICT and the impact on workplaces and mobility. It notes that millions of applications, services, information sources, content and user experiences will be available to billions of users across thousands of applications on hundreds of platforms. Charts show growth in tablet PC shipments, CPU shipments and Intel's revenue and investment returns. The strategic directions of integration and common development are discussed.
TwilioCon 2013 API Panel with Capital One, ESPN, Accenture, MasheryDelyn Simons
Seems like most companies are exploring what it means to build an API and open that to the developer community. But what does it mean to have an API? How should you be thinking about your API strategically for your business? Hear from companies who have successfully launched APIs for their business and how they approach their API from strategy to execution.
This document contains forward-looking statements and risk factors for Intel's presentation on innovating the future of mobile computing. It notes uncertainties in global economic conditions could impact demand. It also cites competitive risks and challenges forecasting variable demand. The document states Intel's results could be impacted by the timing of acquisitions and divestitures, product introductions and demand, actions by competitors, ability to respond to technological changes, and supply availability.
AI & Computer Vision (OpenVINO) - CPBR12Jomar Silva
This document discusses Intel's compiler optimizations. It states that Intel's compilers may optimize Intel microprocessors differently than non-Intel microprocessors. Some optimizations like SSE2, SSE3, and SSSE3 instructions are designed for Intel microprocessors. Intel does not guarantee the availability, functionality, effectiveness of optimizations on non-Intel microprocessors. The document advises checking product guides for specific instruction set coverage. It provides a notice revision date of August 4, 2011.
- Intel lowered its first-quarter gross margin forecast from 56% to 54% due to lower than expected prices for NAND flash memory chips.
- All other expectations for the first quarter remain unchanged from the previous business outlook published in Intel's fourth quarter earnings release.
- Intel will observe a "Quiet Period" from March 7 until its first-quarter earnings release where it will not update its business outlook.
Intel announced that its fourth-quarter business will be below previous expectations, with revenue expected to be $9 billion, lower than the $10.1-10.9 billion expectation. Gross margin is also expected to be lower at 55% due to lower revenue and other charges from weaker demand. Spending is projected to be $2.8 billion compared to $2.9 billion expected previously. Risk factors that could further impact results include continued uncertainty in global economic conditions, competition, manufacturing costs and yields, and impairment charges.
Como criar um mundo autônomo e conectado - Jomar SilvaiMasters
Jomar Silva - Technical Evangelist, Intel
A evolução das tecnologias de hardware, software e comunicação nos últimos anos permite projetar um novo mundo digital, autônomo e conectado.
A Internet das Coisas quando utilizada em conjunto com Inteligência Artificial propiciam um novo patamar de aplicações autônomas e conectadas, que serão a base para a criação deste novo mundo digital.
O grande desafio neste novo cenário é o imenso volume de dados que precisa ser capturado e processado em tempo real para permitir o desenvolvimento de soluções como carros autônomos e sistemas automáticos de segurança baseado em monitoramento por vídeo.
Na palestra iremos abordar estes desafios técnicos, Internet das Coisas, Inteligência Artificial, Visão Computacional, arquiteturas base para o desenvolvimento de soluções autônomas end-to-end e sobre tecnologias e produtos de hardware e software da Intel que podem te ajudar a enfrentar estes desafios de forma otimizada.
Serão abordados diversos projetos de software Open Source, bem como repositórios de soluções de código aberto que poderão ser utilizados para acelerar o aprendizado do desenvolvedor neste novo mundo digital, autônomo e conectado.
Apresentado no InterCon 2018 - https://eventos.imasters.com.br/intercon
Intel has updated its third-quarter revenue and gross margin expectations. Revenue is now expected to be between $9.4 billion and $9.8 billion, compared to the previous range of $9.0 billion to $9.6 billion. Gross margin is expected to be in the upper half of the previous range of 52 percent plus or minus a couple points. All other expectations remain unchanged and Intel will report third-quarter financial results on October 16. The document outlines various risk factors that could affect Intel's actual results.
The document discusses the future of storage technologies for cloud computing. It notes that cloud adoption is driving significant business opportunities but also increasing complexity. Intel's strategy is to build an open ecosystem, reduce complexity, and enable massive compute capabilities. New storage technologies like SSDs and NVMe can help optimize performance by providing much higher bandwidth and lower latency compared to hard disk drives. For example, using Intel SSDs with NVMe instead of HDDs can provide over 100x cost savings and 1400x power savings while also improving performance for database restart tasks by over 30 times.
3. Risk Factors
The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "plans," "believes," "seeks," "estimates," "may," "will," "should" and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect Intel's actual results, and variances from Intel's current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be important factors that could cause actual results to differ materially from the company's expectations.

Demand for Intel’s products is highly variable and could differ from expectations due to factors including changes in the business and economic conditions; consumer confidence or income levels; customer acceptance of Intel’s and competitors’ products; competitive and pricing pressures, including actions taken by competitors; supply constraints and other disruptions affecting customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Intel’s gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; and product manufacturing quality/yields. Variations in gross margin may also be caused by the timing of Intel product introductions and related expenses, including marketing expenses, and Intel’s ability to respond quickly to technological developments and to introduce new features into existing products, which may result in restructuring and asset impairment charges. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Results may also be affected by the formal or informal imposition by countries of new or revised export and/or import and doing-business regulations, which could be changed without prior notice. Intel operates in highly competitive industries and its operations have high costs that are either fixed or difficult to reduce in the short term. The amount, timing and execution of Intel’s stock repurchase program and dividend program could be affected by changes in Intel’s priorities for the use of cash, such as operational spending, capital spending, acquisitions, and as a result of changes to Intel’s cash flows and changes in tax laws. Product defects or errata (deviations from published specifications) may adversely impact our expenses, revenues and reputation. Intel’s results could be affected by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues. An unfavorable ruling could include monetary damages or an injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. Intel’s results may be affected by the timing of closing of acquisitions, divestitures and other significant transactions. A detailed discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Form 10-Q, Form 10-K and earnings release.
Rev. 1/15/15
5. General Information
VSM (Virtual Storage Manager) is an open source Ceph management tool developed by Intel and announced at the OpenStack summit in Paris in November 2014. It is positioned as a Ceph cluster management tool for administrators, lowering the barrier to Ceph adoption. The project has drawn considerable interest from the community, and a few companies have decided to adopt it in production.
[Diagram: VSM project resources (code repo, home page, issue tracker, mailing list, …) and adoption in CD 10000; annotated "In 2 weeks".]
Overview
6. What it is …
VSM Controller
• Connects to Ceph cluster through VSM agent
• Connects to OpenStack Nova controller (optional) via SSH
[Architecture diagram: the VSM controller, reached over an administration GbE network, manages a set of server nodes and also connects to the OpenStack admin node via SSH over the OpenStack-administered network(s). Each server node runs a VSM agent, several OSDs with SSD journals, and on some nodes a Ceph monitor. OpenStack compute nodes access RADOS over a client-facing 10GbE or InfiniBand network, while replication traffic travels on a separate cluster-facing 10GbE or InfiniBand network.]
VSM Agent
• Runs on every server in the Ceph cluster
• Relays server configuration & status information to VSM controller
7. Organizing Storage with Storage Groups
Storage Groups (Concept)
[Diagram: disks are organized into named storage groups by device class: "High Performance" (SSDs), "Performance" (15K RPM HDDs), and "Capacity" (7200 RPM HDDs), each group spanning several servers (Server 01 through Server 04).]
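The storage-group idea above (tagging each disk with a device class and carving pools out of one class) can be sketched in a few lines. The group names mirror the slide; the function and device-class labels are illustrative, not VSM code:

```python
# Illustrative sketch of VSM's storage-group concept: each disk carries a
# device class, and disks of one class form a named storage group that pools
# are later created from. Mapping follows the slide; code is not from VSM.
STORAGE_GROUPS = {
    "ssd": "High Performance",
    "15k_rpm_hdd": "Performance",
    "7200_rpm_hdd": "Capacity",
}

def group_devices(devices):
    """Group (device_name, device_class) pairs into storage groups."""
    groups = {}
    for name, dev_class in devices:
        # Unknown classes fall back to Capacity (an assumption for the sketch).
        group = STORAGE_GROUPS.get(dev_class, "Capacity")
        groups.setdefault(group, []).append(name)
    return groups

layout = group_devices([
    ("/dev/sdb", "ssd"),
    ("/dev/sdc", "7200_rpm_hdd"),
    ("/dev/sdd", "15k_rpm_hdd"),
])
print(layout["High Performance"])  # ['/dev/sdb']
```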
8. Managing Failure Domains with Zones
Zones (Concept)
[Diagram: within each storage group (Performance, High Performance), servers are partitioned into failure-domain zones, e.g. Server 01/02 in Zone 1 and Server 03/04 in Zone 2.]
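Zones act as failure domains: replicas of the same data should never share a zone. The sketch below only illustrates that invariant; in a real cluster VSM maps zones onto CRUSH buckets and Ceph's CRUSH algorithm performs the actual placement. The function and zone names here are made up:

```python
# Toy illustration of the zone invariant: replicas land in distinct zones.
# Not VSM or CRUSH code; CRUSH does the real (pseudo-random, weighted) work.
import zlib

def place_replicas(zones, obj_id, replicas=2):
    """Pick one server from each of `replicas` distinct zones,
    starting at a deterministic offset derived from the object id."""
    zone_names = sorted(zones)
    if replicas > len(zone_names):
        raise ValueError("need at least as many zones as replicas")
    start = zlib.crc32(obj_id.encode()) % len(zone_names)
    placement = []
    for i in range(replicas):
        zone = zone_names[(start + i) % len(zone_names)]
        placement.append((zone, zones[zone][0]))  # first server in that zone
    return placement

zones = {"Zone 1": ["Server 01", "Server 02"],
         "Zone 2": ["Server 03", "Server 04"]}
print(place_replicas(zones, "obj-42"))
```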
9. Architecture
[Diagram: the Web UI (dashboard) and python-vsmclient drive the API service, which communicates with the scheduler and conductor over RabbitMQ; the conductor in turn manages a storage agent on each server node.]
python-vsmclient
• A client for the VSM API, consisting of a Python API (the vsmclient module) and a command-line script (vsm); each implements 100% of the VSM API.
vsm
• The major module for Ceph management.
vsm-dashboard
• The web-based management interface for VSM.
vsm-deploy
• The Ceph deployment tool kit provided by VSM.
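The API / RabbitMQ / agent split above is a standard message-queue RPC layout. Here is a toy sketch of the pattern, with Python's `queue.Queue` standing in for RabbitMQ and an invented `get_osd_status` method; none of this is VSM's actual RPC API:

```python
# Minimal message-queue RPC sketch: the API side publishes a request, an
# agent consumes it and publishes a reply. queue.Queue plays RabbitMQ here.
import queue

request_q, reply_q = queue.Queue(), queue.Queue()

def api_call(method, **kwargs):
    """API side: publish a request onto the queue (fire-and-forget here)."""
    request_q.put({"method": method, "args": kwargs})

def agent_step():
    """Agent side: consume one request and publish a reply.
    (In a real service the agent loops in its own process.)"""
    msg = request_q.get(timeout=1)
    if msg["method"] == "get_osd_status":  # invented method name
        reply_q.put({"osd": msg["args"]["osd_id"], "status": "up"})

api_call("get_osd_status", osd_id=3)
agent_step()
print(reply_q.get_nowait())  # {'osd': 3, 'status': 'up'}
```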
12. Major Features
VSM & Ceph Deployment
• Automatically deploy controller and agents
• Visually deploy the initial Ceph cluster
Cluster Management & Maintenance
• Add/remove monitors
• Add/remove OSD servers
• Bring servers up/down
• Replace failed disks/servers
• Manage failure domains
• Add/modify storage groups
• Add/remove replicated/erasure-coded/cache-tier pools
• Support a separate storage group at pool creation, and pool quotas
Cluster Monitoring
• Monitor cluster health
• Monitor capacity and performance
Orchestration
• Present storage pools to a cloud orchestrator (OpenStack)
Self Management
• Secure access to the VSM Web UI
• Secure communication channel between VSM processes
• Backup and restore VSM
Extensibility
• Exposed REST API for integration with 3rd parties
• CLI-based processes enable use of automation tools
13. Getting Started
Operations in one page:
• Log In / Navigation
• Dashboard Overview: Cluster Health, OSD Status, Monitor Status, PG Status, MDS Status, RBD Status, Storage Group Status
• Create Cluster
• Managing Servers: Add & Remove Servers, Add & Remove Monitors, Stop & Start Servers
• Managing Capacity: Creating Storage Pools, Manage Pools
• Managing Storage Devices: Restart OSDs, Remove OSDs, Restore OSDs, Manage Devices
• Working with OpenStack: OpenStack Access, Managing Pools
• Managing VSM: Manage VSM Users, Manage VSM Configuration
14. How VSM can help: Easy deployment
Story: A storage administrator who knows little about Ceph wants a quick way to try deploying a Ceph cluster.
Step 1: The administrator prepares a few storage nodes and devices.
Step 2: The administrator defines the intended storage layout (e.g. OSDs) in manifest files.
Step 3: The administrator installs VSM.
Step 4: The administrator deploys the Ceph cluster through the VSM UI.
Objective: demonstrate the ability to easily deploy a Ceph cluster from the ground up.
Prepare H/W → Define Manifest → Deploy VSM → Create Ceph Cluster
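The manifest files in Step 2 describe the intended storage layout. The fragment below sketches a plausible server-manifest shape (a section per storage class, one data-device/journal-device pair per line); the exact format VSM expects is defined in its install guide, so treat this layout as an assumption:

```python
# Hypothetical server-manifest layout: section headers name a storage class,
# each following line maps an OSD data device to its journal device.
# The parser is a sketch of the idea, not VSM's actual manifest reader.
MANIFEST = """\
[7200_rpm_hdd]
/dev/sdb /dev/sdf1
/dev/sdc /dev/sdf2
[ssd]
/dev/sdd /dev/sdd2
"""

def parse_manifest(text):
    layout, section = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1]          # start a new storage-class section
            layout[section] = []
        elif section:
            data_dev, journal_dev = line.split()
            layout[section].append((data_dev, journal_dev))
    return layout

print(parse_manifest(MANIFEST)["ssd"])  # [('/dev/sdd', '/dev/sdd2')]
```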
15. How VSM can help:
- Easy management & monitoring
Story: A storage operator wants to know the cluster health status and handle disk/server failures.
Step 1: The operator looks at the dashboard to identify down OSDs.
Step 2: The operator identifies the failed physical disks and isolates them for replacement.
Step 3: The operator replaces the failed disks in the servers.
Step 4: The operator brings the OSDs back into the cluster, triggering data recovery.
Objective: to demonstrate the cluster diagnosis capabilities.
Flow: Identify Issue → Locate Root Cause → Fix Issue → Resume Health
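Steps 1 and 2 of the diagnosis flow can be sketched in a few lines: given OSD status records (the record shape here is an assumption for illustration, not VSM's actual data model), pick out the down OSDs and the disks behind them so the operator knows what to replace.

```python
# Sketch of the diagnosis flow: map down OSDs back to physical disks.
# The record shape is assumed for illustration, not VSM's data model.
def find_failed_disks(osd_status):
    """Return (osd_id, server, device) for every OSD not reported up."""
    return [
        (osd["id"], osd["server"], osd["device"])
        for osd in osd_status
        if osd["state"] != "up"
    ]

status = [
    {"id": 0, "server": "node-1", "device": "/dev/sdb", "state": "up"},
    {"id": 1, "server": "node-2", "device": "/dev/sdc", "state": "down"},
]
# find_failed_disks(status) → [(1, "node-2", "/dev/sdc")]
```

VSM's dashboard performs this correlation visually; the point is that each down OSD resolves to a specific server and device for replacement.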
16. How VSM can help:
- Scaling the cluster
Story: As the business grows, a storage operator needs to scale the storage cluster.
Step 1: The operator prepares a node with a standard OS installed.
Step 2: The operator runs a node-provisioning tool to deploy the VSM agent on the node.
Step 3: The operator goes to the VSM controller console to identify the node, then adds it to the cluster.
Step 4: After data is rebalanced, the operator can work with the expanded cluster.
Objective: to demonstrate the ease of scaling a Ceph cluster.
Flow: Prepare H/W → Define Manifest → Deploy VSM Agent → Add Into Cluster
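The rebalancing in Step 4 is driven by the CRUSH weights of the new node's OSDs. By Ceph convention, an OSD's default CRUSH weight equals its capacity in TiB; the helper below illustrates that convention and is not part of VSM.

```python
# Illustrative helper (not part of VSM): Ceph's convention is that an
# OSD's default CRUSH weight is its capacity in TiB, which determines
# how much data rebalances onto a newly added node.
def crush_weight(size_bytes: int) -> float:
    """Default CRUSH weight: device capacity expressed in TiB."""
    return round(size_bytes / 2**40, 4)

# A 4 TiB disk gets weight 4.0; a 512 GiB disk gets 0.5.
```

Larger devices therefore attract proportionally more placement groups during the rebalance.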
17. How VSM can help:
- Orchestrating with OpenStack
Story: A cloud platform operator wants to provide a storage volume for VM use.
Step 1: The operator grants VSM permission to access the OpenStack controller node.
Step 2: The operator sets the OpenStack controller address in the VSM controller console.
Step 3: The operator creates pools in VSM, then presents the pools to OpenStack.
Step 4: The operator can now see those pools in OpenStack for VM use.
Objective: to demonstrate storage provisioning between Ceph and OpenStack.
Flow: Grant Permission → Define OpenStack Access (VSM) → Present Pool (VSM) → Create Volume (OpenStack)
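Behind the scenes, "presenting a pool" amounts to OpenStack Cinder gaining an RBD backend that points at that Ceph pool. The renderer below is a sketch of the resulting configuration section, not VSM's actual mechanism; the option names are standard Cinder RBD driver settings.

```python
# Sketch (not VSM's actual mechanism): render the Cinder backend
# section that a presented Ceph pool corresponds to. The option names
# are standard Cinder RBD driver settings.
def render_cinder_backend(pool: str, user: str = "cinder") -> str:
    """Return a cinder.conf backend section for an RBD pool."""
    return (
        f"[{pool}]\n"
        "volume_driver = cinder.volume.drivers.rbd.RBDDriver\n"
        f"rbd_pool = {pool}\n"
        f"rbd_user = {user}\n"
    )

# render_cinder_backend("volumes") yields a [volumes] section whose
# rbd_pool is "volumes".
```

Once Cinder is restarted with such a backend, volumes created in OpenStack land in the presented Ceph pool.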
18. New Features in 2.0
Environment
- Support for multiple OSes: CentOS 7, Ubuntu 14.04 LTS
- Support Ceph Hammer
- Support OpenStack Juno
Deployment
- Auto-deployment tool for VSM (deploy with a single command)
- Provision new storage nodes on demand
Management & Maintenance
- Add new disks into the cluster
- Ceph upgrade
Monitoring
- Visual dashboard
- OSD device health status check
- Disk-space-full monitoring
- Disk health status check
- Ceph performance metrics monitoring
Orchestration
- Present pools to a multi-node OpenStack cluster
- Share one Keystone instance with the OpenStack cluster
Misc.
NOTE: 2.0 was released at the end of September.
Status