STO7534 VSAN Day 2 Operations (VMworld 2016) – Cormac Hogan
This document discusses day-to-day Virtual SAN operations and troubleshooting. It begins with an introduction and agenda for the presentation. The presentation then covers monitoring Virtual SAN with tools like logging, trace files, and core dumps. It discusses alerting options like vSphere alarms, vRealize Operations, and vRealize Log Insight. A section covers Virtual SAN upgrades, including prerequisites, the multi-phase process, and potential issues. It ends with a demo of how to handle a Virtual SAN failure using the various monitoring and troubleshooting tools.
This presentation discusses networking design and configuration considerations for VMware vSAN. It provides an overview of vSAN networking components and traffic, requirements for ports and firewalls, and considerations for multicast, unicast, NIC teaming and load balancing. It also reviews supported network topologies like single site, stretched and 2-node clusters and discusses performance considerations.
VMworld 2017 – Top 10 things to know about vSAN – Duncan Epping
In this session Cormac Hogan and I go over the top 10 things to know about vSAN. This is based on two years of questions/answers from our field and customers. Useful for any VMware vSAN customer!
#STO1264BU #STO1264BE
A look at the new enhancements to core storage in vSphere 6.5, including VMFS6, Automated UNMAP, I/O Filters, and much more, as delivered by Cormac Hogan and Cody Hosterman
VMware Virtual SAN 6.0 includes the following new features and improvements:
1. Increased performance and scalability with support for up to 64 hosts and 9,000 components per host. Virtual machines can now have VMDKs up to 62TB in size.
2. Enhanced all-flash and hybrid architectures with new caching architectures that deliver up to 90,000 IOPS per host.
3. Usability improvements like default storage policies, visualization of storage utilization in policies, and a resynchronization status dashboard.
4. Failure resilience enhancements such as fault domains that account for failures across racks, and proactive rebalancing to leverage new nodes.
This document provides an overview and introduction to VMware Virtual SAN (VSAN). It discusses the VSAN architecture which uses SSDs for caching and HDDs for storage. It also covers how VSAN can be configured through storage policies assigned at the VM level. The document outlines how VSAN provides a software-defined storage solution that is hardware agnostic and can elastically scale storage performance and capacity by adding servers and disks.
What is coming for VMware vSphere?
Delivered at VMUG DK/UK/BE in November 2014. Session is all about vSphere futures, what can be expected in the near future.
VMware VSAN Technical Deep Dive – March 2014 – David Davis
Virtual SAN 5.5 provides a software-defined storage solution that is integrated with VMware vSphere. It allows storage resources on standard servers to be pooled into a shared datastore. Virtual SAN uses SSDs to provide flash-accelerated performance and HDDs for capacity. It delivers high performance scaling linearly with the addition of servers. Storage policies can be set on a per-VM basis to control capacity, performance and availability without using LUNs or volumes. Virtual SAN simplifies storage management and provides resilience, flexibility and savings over external storage arrays.
VMworld 2014: Virtual SAN Architecture Deep Dive – VMworld
This document provides an overview of VMware's Virtual SAN architecture. It discusses Virtual SAN's goals of being easy to manage, providing compelling TCO, and being strongly integrated with VMware products. It describes how Virtual SAN aggregates local flash and HDDs to provide a shared datastore. It also covers topics like Virtual SAN's distributed architecture, scaling capabilities, storage policies, deployment considerations, resiliency features, and monitoring tools.
vSAN provides software-defined storage that pools server storage resources and delivers them as a shared datastore for VMs. It integrates deeply with VMware stacks for simplified management and supports a variety of use cases. vSAN leverages new hardware technologies to provide high performance at low cost through space efficiency techniques and storage policies that control availability, capacity reservation, and QoS.
This document provides an overview of the MRSCAPS design framework and how it can be applied to analyze VMware Virtual SAN (VSAN). It discusses VSAN considerations for each element of MRSCAPS: manageability using the vSphere console and health check plugin; recoverability through backups and replication; security with additional encryption options; cost based on licensing models; availability leveraged through storage policies and HA; performance through hardware optimizations and flash configurations; and scalability to large clusters and additional hosts. The presentation includes screenshots and concludes with a Q&A session.
Five common customer use cases for Virtual SAN – VMworld US / 2015 – Duncan Epping
This session was presented by Lee Dilworth and Duncan Epping at VMworld in the US in 2015. Five common customer use cases of the last 12-18 months are discussed in this deck.
Virtual SAN (VSAN) is a hypervisor-converged storage solution from VMware that radically simplifies storage. It pools server-attached flash, SSD, and HDD storage and manages it through storage policies from the vSphere client. VSAN is integrated with vSphere and provides high performance, resilience against hardware failures, and linear scalability. It can reduce both capital and operating expenses compared to traditional external storage arrays.
Virtual SAN is VMware's hyper-converged infrastructure storage solution that is integrated with vSphere. It provides a software-defined, distributed storage platform that offers policy-based placement and management of virtual machine storage. Version 6.1 introduced new features like stretched clusters for disaster recovery between sites, support for high-density flash devices, and health monitoring and troubleshooting tools through integration with vRealize Operations. Future enhancements may include RAID 5 and 6 functionality over the network to improve storage efficiency as well as data deduplication and compression.
STO7535 Virtual SAN Proof of Concept (VMworld 2016) – Cormac Hogan
This document provides an overview of tools that can help administrators successfully conduct a Virtual SAN proof of concept. It discusses the Virtual SAN Health Check plugin, capacity views, performance service, HCIbench, and Virtual SAN Observer for monitoring and validating Virtual SAN configurations. Validation scenarios covered include successfully deploying Virtual SAN, deploying VMs on VSAN storage, VM availability during host and storage failures, and measuring rebuild activity.
A day in the life of a VSAN I/O – STO7875 – Duncan Epping
This document provides an overview and summary of a VMworld session about Virtual SAN I/O. The session covers Virtual SAN concepts, the I/O flow of reads and writes in Virtual SAN, failure scenarios and how Virtual SAN handles them, and new features like deduplication and compression. The document includes diagrams demonstrating how data is distributed and replicated across hosts in a Virtual SAN cluster. It also provides details on how reads, writes, and failures are handled at a technical level in Virtual SAN. In the conclusion, it recommends three ways for attendees to get started with Virtual SAN: a hands-on lab, 60-day free evaluation, or working with a VMware partner on an assessment.
VMware – Virtual SAN – IT Changes Everything – VMUG IT
Virtual SAN is a hyper-converged storage platform that is built into the ESXi hypervisor. It aggregates locally attached flash and disk drives from each ESXi host in a cluster to provide a shared datastore. Virtual SAN provides dynamic capacity and performance scaling. It utilizes storage policies to provide per-VM storage service levels from the single shared datastore. Virtual SAN simplifies storage management by automating control of storage capacity, performance, and availability based on application needs.
VMworld 2013: VMware Virtual SAN Technical Best Practices – VMworld
This document provides an overview of VMware Virtual SAN (VSAN) technical best practices. It discusses VSAN's key components, hardware considerations, use cases, management, and demo. VSAN is a software-defined storage solution that clusters direct-attached host storage and provides a virtual SAN datastore. It has integrated management with vSphere and uses capabilities and policies to enable VM-centric storage provisioning and automation. The document demonstrates how to configure VSAN, create VM storage policies, and deploy VMs according to policies and capabilities.
Virtual SAN allows for the creation of a shared storage pool using local disks within an ESXi cluster. It requires a minimum of 3 ESXi hosts and uses a RAIN architecture with no additional virtual appliances. Setup and management can be done in minutes, without agents, through storage profiles assigned on a per-VM/VMDK basis. Virtual SAN scales storage capacity by adding more disks, disk groups, or hosts. It has limitations such as a maximum VMDK size of 2TB minus 512 bytes, and it does not support Fault Tolerance or Storage I/O Control.
This document summarizes a technical deep dive presentation on vSphere Distributed Switches. It discusses the requirements, construction, alternatives, tips and real world use cases of vSphere Distributed Switches. The presenters were Jason Nash from Varrow and Chris Wahl from AHEAD, and they covered topics such as migration from standard to distributed switches, mixing 1Gb and 10Gb networking, and techniques for bandwidth management.
This document provides an overview of VMware Virtual SAN 6.0, including:
- Virtual SAN can be deployed with a hybrid or all-flash architecture to provide high performance.
- Virtual SAN is embedded in the vSphere kernel for simple management and integration.
- Virtual SAN 6.0 provides 4x performance, 2x scale, and new features like snapshots and encryption.
- Case studies show Virtual SAN can reduce storage costs by 60% and management time by 90%.
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V... – VMworld
VMworld 2013
Jad Chamcham, VMware
Narasimha Krishnakumar, VMware, view, vsan, tco
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
This document provides an overview of Virtual SAN design and architecture. It discusses Virtual SAN components such as disk groups, datastores, and objects. It describes how data is distributed across disks groups and hosts using techniques like striping and mirroring. It also covers storage policies and how they determine the layout and number of components for distributed objects. Use cases like all-flash configurations, ROBO solutions, and stretched clusters are explained at a high level.
A stretched cluster connects data centers across different sites with shared storage and live migration capabilities. It provides both disaster avoidance and recovery benefits. Key requirements include low latency storage replication, sufficient network bandwidth for vMotion, and considerations for split-brain scenarios. While it improves availability during localized failures, a stretched cluster has limitations compared to independent disaster recovery sites. Additional sites or a traditional DR configuration provide multiple levels of protection.
VMworld 2016: Virtual Volumes Technical Deep Dive – VMworld
Virtual Volumes provide a more efficient operational model for external storage management in vSphere. They integrate storage capabilities directly into virtual machines at the individual disk level through Storage Policy-Based Management. This simplifies operations by removing the need for static LUN/volume provisioning and allows storage services to be applied non-disruptively on a per-virtual machine basis according to policies. A key component is the VASA Provider, which is used to publish an array's storage capabilities and manage the creation of VM-level objects called Virtual Volumes on behalf of vSphere.
vSphere Virtual Volumes technical overview – solarisyougood
This document discusses VMware vSphere Virtual Volumes (VVols), which provide a management and integration framework for external storage. VVols virtualize SAN and NAS devices by representing virtual disks natively on arrays. Key points include:
VVols are enabled through a VASA provider that communicates between vSphere and storage arrays. Storage containers on arrays house VVols and can apply storage policies. Protocol endpoints provide access between ESXi hosts and arrays. Operations like provisioning and migration can be offloaded to arrays for improved efficiency. Snapshots create point-in-time copies of VVols for tasks like backup and testing.
VMworld 2015: Virtual Volumes Technical Deep Dive – VMworld
This document provides a technical deep dive on virtual volumes. It begins with an overview of the challenges with today's LUN-centric storage architectures, such as complex provisioning, wasted resources, and lack of granular control. It then introduces an application-centric model using virtual volumes that provides dynamic storage service levels, fine-grained control at the VM level, and common management across arrays. The rest of the document details the management plane, data plane, consumption model using storage policy-based management, virtual machine lifecycles, snapshots, and offloading operations with virtual volumes.
VMware introduced several new features in vSphere 6 including increased scalability limits, usability improvements to the vSphere Web Client, enhanced vMotion capabilities such as cross-vCenter and long distance vMotion, expanded fault tolerance support, and the introduction of vSphere Virtual Volumes and its policy-based management framework. Key networking updates included Network I/O Control version 3 and multiple TCP/IP stacks. Storage features focused on Virtual SAN enhancements, Storage DRS integration, and support for VASA 2.0 storage capabilities.
Not content to simply describe the Virtual Volume (VVOL) framework, this session instead examines practical use cases: How different configurations and workloads benefit from VVOLs. Learn how Storage Policy Based Management (SPBM) couples with VVOLs to provide VM configuration options not previously available. We demonstrate a handful of real-life scenarios, specifically covering how VVOLs benefits oversubscribed systems, disaster recovery preparation and multi-tenant requirements for customers. Specific configuration options and constraints are covered in detail, including how they work with underlying storage.
VMworld Europe 2014: Virtual SAN Architecture Deep Dive – VMworld
This document provides an overview of Virtual SAN (VSAN) including:
- VSAN aggregates local flash and HDDs across ESXi hosts into a shared datastore for VMs. It provides software-defined storage that is integrated with VMware's stack.
- VSAN's goals are to provide compelling TCO through reduced CAPEX/OPEX and be the software-defined storage for all VMware products through strong integration.
- The document discusses VSAN architecture, deployment, scaling, performance, resiliency, and management.
VMware: Enabling Software-Defined Storage Using Virtual SAN (Technical Decisi... – VMware
VMware Virtual SAN is a software-defined storage solution that is built into vSphere and pools flash-based devices and magnetic disks from standard servers into a shared datastore. It delivers high performance, is highly resilient with zero data loss even during hardware failures, and provides a simplified storage management experience through storage policies applied at the virtual machine level. Virtual SAN supports a variety of use cases including virtual desktop infrastructure, test/development environments, and business critical applications through its scale, performance, integration with VMware technologies, and interoperability with solutions such as Horizon View, vSphere Replication, and OpenStack.
VMworld 2015: Advanced SQL Server on vSphere – VMworld
Microsoft SQL Server is one of the most widely deployed “apps” in the market today and is used as the database layer for a myriad of applications, ranging from departmental content repositories to large enterprise OLTP systems. Typical SQL Server workloads are somewhat trivial to virtualize; however, business critical SQL Servers require careful planning to satisfy performance, high availability, and disaster recovery requirements. It is the design of these business critical databases that will be the focus of this breakout session. You will learn how build high-performance SQL Server virtual machines through proper resource allocation, database file management, and use of all-flash storage like XtremIO. You will also learn how to protect these critical systems using a combination of SQL Server and vSphere high availability features. For example, did you know you can vMotion shared-disk Windows Failover Cluster nodes? You can in vSphere 6! Finally, you will learn techniques for rapid deployment, backup, and recovery of SQL Server virtual machines using an all-flash array.
VMworld 2014: Virtual Volumes Technical Deep Dive – VMworld
This document provides an overview of virtual volumes (VVols) presented in session STO1965. It begins with an introduction to VVols and the high-level architecture, including storage containers, protocol endpoints, and the VASA provider. The document then covers managing storage capacity with storage containers, ensuring service level objectives through storage policies, and the different types of virtual machine objects that can be VVols. It concludes by discussing data services like snapshots and replication that can be offloaded to arrays, and the transition process from traditional storage to VVols.
VMworld 2013
Christos Karamanolis, VMware
Kiran Madnani, VMware
James Streit, Thomson Reuters
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
This document provides an overview and best practices for running Microsoft Exchange 2010 in a virtualized environment using VMware vSphere.
Key points include:
- Performance testing shows Exchange 2010 performs within 5% of physical hardware when virtualized. Storage protocol performance is comparable between Fibre Channel, iSCSI, and NFS.
- Enabling features like DRS and VMotion can increase performance by up to 18% by load balancing VMs across hosts.
- Best practices include proper sizing of virtual memory, using shared storage, multipathing, and dedicating sufficient resources to Exchange VMs.
- vSphere 5.0 introduces several new platform enhancements including support for 2TB of host memory, 160 logical CPUs, and 512 VMs per host. ESXi now runs exclusively as the hypervisor.
- Storage features are improved with VMFS-5, which supports volumes over 2TB and faster operations. Storage DRS allows for initial placement and load balancing of VMs across datastores.
- Networking features include support for multiple vMotion NICs for faster migration. The new web client allows remote administration from any browser.
VMworld 2013: IBM Solutions for VMware Virtual SAN – VMworld
VMworld 2013
Eric Deadwyler, IBM
Joseph Russell, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
VMworld 2013: Storage IO Control: Concepts, Configuration and Best Practices ... – VMworld
VMworld 2013
Sachin Manpathak, VMware
Mustafa Uysal, VMware
Sunil Muralidhar, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The Pendulum Swings Back - Understanding Converged and Hyperconverged Integrated Systems, presented Oct 17, 2017 at IBM Systems Technical University, New Orleans LA
VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract... – VMworld
This document provides an overview and agenda for a presentation on virtualizing SQL Server workloads on VMware vSphere. The presentation will cover designing SQL Server virtual machines for performance in production environments, consolidating multiple SQL Server workloads, and ensuring SQL Server availability using vSphere features. It emphasizes understanding the workload, optimizing for storage and network performance, avoiding swapping, using large memory pages, and accounting for NUMA when configuring SQL Server virtual machines.
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices – VMworld
This document provides an overview of advanced SQL Server techniques and best practices when running SQL Server in a virtualized environment on vSphere. It covers topics such as storage configuration including VMFS, block alignment, and I/O profiling. Networking techniques like jumbo frames and guest tuning are discussed. The document also reviews memory management and optimization, CPU sizing considerations, workload consolidation strategies, and high availability options for SQL Server on vSphere.
VMware: Enabling Software-Defined Storage Using Virtual SAN (Business Decisio... – VMware
VMware's Virtual SAN 6.0 software enables software-defined storage using the hypervisor. It provides a simplified storage solution that pools server-side storage and manages it through storage policies at the virtual machine level. Virtual SAN delivers high performance, scale, and availability while reducing costs through server-side economics and linear scalability. It is well-integrated with the VMware software stack and supports a variety of use cases including virtual desktop infrastructure, test/development environments, and disaster recovery.
4. How do you manage all of that data?
How do you keep it safe?
How can you choose data services, such as replication and encryption, on a per-application, per-VM, or per-virtual-disk basis?
Storage Policy Based Management
5. Agenda
• Introduction
– vSphere APIs for Storage Awareness (VASA)
– Storage Policy Based Management (SPBM)
• SPBM and vSAN
• SPBM and Virtual Volumes (VVols)
• SPBM and VAIO (IO Filters)
– Host-based data services, 3rd parties as well as VMware provided
• SPBM integration with other VMware products
– with vRealize Automation / vRealize Orchestration
– with VMware Horizon View
• Q&A
7. VASA – vSphere APIs for Storage Awareness
• VASA – vSphere APIs for Storage Awareness – gives vSphere insight into data services, either on storage systems or on hosts.
• VASA providers publish storage capabilities to vSphere.
• With Virtual Volumes, VASA is also used to initiate certain operations on the array from vSphere
  – e.g. create a VVol, delete a VVol, take a snapshot
9. The Storage Policy Based Management (SPBM) Paradigm
• SPBM is the foundation of VMware's Software-Defined Storage vision
• A common framework that allows storage- and host-related capabilities to be consumed via policies
• Applies data services (e.g. replication, encryption, performance) on a per-VM, or even per-VMDK, level through policies
10. Creating Policies via Rules and Rule Sets
• Rule
  – A Rule references a combination of a metadata tag and a related value, indicating the quality or quantity of the capability that is desired.
  – These two items act as a key and a value that, when referenced together through a Rule, become a condition that must be met for compliance.
    • E.g. place the VM on a datastore where Encryption = True
• Rule Sets
  – A Rule Set is comprised of one or more Rules.
  – Multiple Rule Sets can be leveraged to allow a single storage policy to define alternative selection parameters, even from several storage providers (see the sketch after this list).
    • E.g. place the VM on a vSAN datastore where Deduplication = On OR on a VVol datastore where Deduplication = On.
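The AND-within-a-Rule-Set, OR-across-Rule-Sets semantics can be captured in a few lines. This is a conceptual sketch in plain Python, not the vSphere SPBM API; the function names and capability keys are hypothetical.

```python
# Conceptual model of Rules and Rule Sets, not the SPBM API.

def rule_matches(capabilities: dict, key: str, value) -> bool:
    """A Rule is a key/value condition a datastore must satisfy."""
    return capabilities.get(key) == value

def rule_set_matches(capabilities: dict, rule_set: dict) -> bool:
    """A Rule Set is satisfied only when ALL of its Rules match (AND)."""
    return all(rule_matches(capabilities, k, v) for k, v in rule_set.items())

def policy_matches(capabilities: dict, rule_sets: list) -> bool:
    """A policy with multiple Rule Sets is satisfied when ANY one
    Rule Set matches (OR), allowing alternative placement targets."""
    return any(rule_set_matches(capabilities, rs) for rs in rule_sets)

# The slide's example: a vSAN datastore with deduplication on,
# OR a VVol datastore with deduplication on.
policy = [
    {"type": "vSAN", "Deduplication": "On"},
    {"type": "VVol", "Deduplication": "On"},
]

vsan_ds = {"type": "vSAN", "Deduplication": "On", "Encryption": True}
print(policy_matches(vsan_ds, policy))  # True: the first Rule Set matches
```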
14. VMware vSAN
• Storage scale-out architecture built into the hypervisor
• Aggregates locally attached storage from each ESXi host in a cluster
• Dynamic capacity and performance scalability
• Flash-optimized storage solution
• Fully integrated with vSphere: vCenter, vMotion, Storage vMotion, DRS, HA, FT, …
• VM-centric data operations through SPBM (policies)
[Diagram: esxi-01, esxi-02 and esxi-03 in a vSAN and HA/DRS cluster, connected over a 10GbE vSAN network and presenting a single vSAN shared datastore]
16. Storage policy rules available in vSAN 6.6.1 (a sketch of one such policy, expressed as data, follows this list)
• Primary level of Failures To Tolerate (Primary FTT, for cross-site stretched cluster protection)
• Secondary level of Failures To Tolerate (Secondary FTT, for local stretched cluster protection)
• Failure Tolerance Method (Mirroring [RAID-1, the default] or Erasure Coding [RAID-5/RAID-6])
• IOPS limit for object
• Disable object checksum
• Force provisioning
• Number of disk stripes per object
• Flash read cache reservation (%)
• Object space reservation (%)
• Affinity (when PFTT=0 in stretched clusters)
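A minimal sketch of what one such policy might look like, modeled as a plain dictionary keyed by the rule names above. The validate() helper and the 0-means-unlimited IOPS convention are illustrative assumptions, not part of any VMware API.

```python
# Illustrative only: a vSAN storage policy as plain data.
policy = {
    "Primary level of Failures To Tolerate": 1,
    "Failure Tolerance Method": "Erasure Coding",  # RAID-5/6 instead of RAID-1
    "Number of disk stripes per object": 1,
    "IOPS limit for object": 0,                    # 0 treated as "no limit" here
    "Flash read cache reservation (%)": 0,
    "Object space reservation (%)": 0,
    "Disable object checksum": False,
    "Force provisioning": False,
}

def validate(p: dict) -> None:
    """Hypothetical sanity check based on the FTM slides later in this deck."""
    ftt = p["Primary level of Failures To Tolerate"]
    if p["Failure Tolerance Method"] == "Erasure Coding":
        # RAID-5 maps to FTT=1 (3+1); RAID-6 maps to FTT=2 (4+2).
        assert ftt in (1, 2), "erasure coding supports FTT=1 or FTT=2 only"

validate(policy)  # passes: FTT=1 with erasure coding corresponds to RAID-5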
17. Defining a policy for vSAN
• Policies define levels of protection and performance
• Applied at a per-VM level, or a per-VMDK level
• vSAN currently provides 10 unique storage capabilities to vCenter Server
[Screenshot: policy creation wizard, with “What If” APIs]
18. Assign it to a new or existing VM, or VMDK
• When the policy is selected, vSAN uses it to place/distribute the VM/VMDK to guarantee availability and performance
• Policies can be changed on-the-fly
  – In some cases, 2x space may be temporarily required to change it (see the sizing sketch below)
  – May also introduce rebuild/resync traffic, so the advice is to treat an on-the-fly policy change as a maintenance task
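As a back-of-the-envelope illustration of the 2x-space note, a minimal sketch assuming the old and new component layouts coexist for the duration of the resync; the multipliers are taken from the FTM slides that follow. Illustrative arithmetic, not an official sizing formula.

```python
# Rough transient-capacity estimate for an on-the-fly policy change:
# while vSAN resynchronizes an object to its new layout, the old and
# new footprints can exist side by side.

vmdk_gb = 100
old_footprint = vmdk_gb * 2.0     # current policy: RAID-1, FTT=1 -> 2x
new_footprint = vmdk_gb * 4 / 3   # target policy: RAID-5, FTT=1 -> 1.33x

peak = old_footprint + new_footprint
print(f"peak during resync: ~{peak:.0f} GB; steady state: ~{new_footprint:.0f} GB")
```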
19. Policy Setting – Number of Failures to Tolerate (FTT)
• “FTT” defines the number of failures a VM/VMDK can tolerate.
• For RAID-1, “n” failures tolerated means “n+1” copies of the object are created and “2n+1” hosts contributing storage are required (a short sketch of this arithmetic follows).
[Diagram: FTT=1 RAID-1 object with vmdk replicas on esxi-01 and esxi-03, each serving ~50% of the I/O, and a witness component on esxi-04]
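The slide's arithmetic as a tiny sketch, in plain Python with nothing VMware-specific:

```python
# RAID-1 mirroring math: tolerating n failures (FTT=n) needs n+1 full
# copies of the object and 2n+1 hosts contributing storage, with the
# extra hosts holding witness components for quorum.

def raid1_requirements(ftt: int) -> tuple[int, int]:
    copies = ftt + 1       # full replicas of the object
    hosts = 2 * ftt + 1    # minimum hosts contributing storage
    return copies, hosts

for ftt in (1, 2, 3):
    copies, hosts = raid1_requirements(ftt)
    print(f"FTT={ftt}: {copies} copies, minimum {hosts} hosts")
# FTT=1: 2 copies, minimum 3 hosts
# FTT=2: 3 copies, minimum 5 hosts
# FTT=3: 4 copies, minimum 7 hosts
```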
20. Policy Setting – Number of Disk Stripes Per Object
• Defines the minimum number of capacity devices across which each replica of a storage object is distributed.
• Higher values may result in better performance. Stripe width can improve the performance of write destaging and the fetching of reads.
• Higher values may put more constraints on the flexibility of meeting storage compliance policies.
• Primarily used to achieve the highest performance, even at the expense of flexibility (a component-count sketch follows).
[Diagram: FTT=1 with stripe width 2: a RAID-1 mirror of two RAID-0 stripes (stripe-1a/1b and stripe-2a/2b) spread across esxi-01 to esxi-03, plus a witness on esxi-04]
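A component-count sketch matching the diagram above. The single-witness assumption is a simplification; in practice vSAN may place additional witnesses to maintain quorum.

```python
# For RAID-1 mirroring with striping, each of the FTT+1 replicas is a
# RAID-0 stripe across `stripe_width` capacity devices. Illustrative
# arithmetic only; witness placement is simplified to one witness.

def raid1_component_count(ftt: int, stripe_width: int) -> int:
    data_components = (ftt + 1) * stripe_width
    witnesses = 1  # simplification; real layouts may need more
    return data_components + witnesses

# The diagram's case: FTT=1, stripe width 2 -> 4 stripe components + 1 witness
print(raid1_component_count(ftt=1, stripe_width=2))  # 5
```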
21. Policy Setting – Fault Tolerance Method (FTM) – RAID-5
• Available in all-flash configurations only
• Example: FTT = 1 with FTM = RAID-5
  – 3+1 (4-host minimum; 1 host can fail without data loss)
  – 5 hosts would tolerate 1 host failure or maintenance-mode state, and still maintain redundancy
  – 1.33x instead of 2x overhead
  – ~33% savings (a 20GB disk consumes 40GB with RAID-1, but only ~27GB with RAID-5)
[Diagram: RAID-5 3+1 layout across four ESXi hosts, with parity rotated so each host holds three data components and one parity component]
22. Policy Setting – Fault Tolerance Method (FTM) – RAID-6
• Available in all-flash configurations only
• Example: FTT = 2 with FTM = RAID-6
  – 4+2 (6-host minimum; 2 hosts can fail without data loss)
  – 7 hosts would tolerate 1 host failure or maintenance-mode state, and still maintain full redundancy
  – 1.5x instead of 3x overhead
  – 50% savings (a 20GB disk consumes 60GB with RAID-1, but only ~30GB with RAID-6)
A sketch comparing these overheads follows.
[Diagram: RAID-6 4+2 layout across six ESXi hosts, with double parity rotated across the hosts]
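The capacity figures on the two FTM slides reduce to simple multipliers: RAID-1 consumes (FTT+1)x, RAID-5 (3+1) consumes 4/3x, and RAID-6 (4+2) consumes 1.5x. A small sketch reproducing the 20GB examples:

```python
# Capacity consumed on the vSAN datastore for a given VMDK size,
# failure tolerance method and FTT, using the overheads from the slides.

def consumed_gb(vmdk_gb: float, ftm: str, ftt: int = 1) -> float:
    if ftm == "RAID-1":
        return vmdk_gb * (ftt + 1)  # one extra full copy per failure tolerated
    if ftm == "RAID-5":             # 3 data + 1 parity; FTT=1, all-flash only
        return vmdk_gb * 4 / 3
    if ftm == "RAID-6":             # 4 data + 2 parity; FTT=2, all-flash only
        return vmdk_gb * 6 / 4
    raise ValueError(f"unknown FTM: {ftm}")

print(consumed_gb(20, "RAID-1", ftt=1))  # 40.0 GB
print(consumed_gb(20, "RAID-5"))         # ~26.7 GB (the slide's ~27GB)
print(consumed_gb(20, "RAID-1", ftt=2))  # 60.0 GB
print(consumed_gb(20, "RAID-6", ftt=2))  # 30.0 GB
```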
23. Sky’s the limit for expansion on an agile cloud
• Europe’s leading media brand
• 22 million subscribers
• Pay TV, on-demand Internet streaming, broadband, mobile
• Always looking for new markets and new revenue streams
• Challenge: bring new services online, cost-effectively, without impacting existing services; avoid creating expensive silos per service
• vSAN enabled Sky to scale out its video service on time and on budget, delivering a fast, cost-effective and reliable platform for video transport
25. Why VVols?
Typical SAN:
• Lots of paths to manage
• Lots of devices to manage
• Risk of hitting path/device limits
• The I/O blender effect
26. VVols are first-class citizens on the storage array
Data services on the array are consumed on a per-VM/VMDK basis via SPBM:
• Fewer paths/devices to manage
• The array appears as a volume
• More scalable than LUNs
• 1:1 relationship between VM and storage
[Diagram: ESXi hosts reaching the array through a Protocol Endpoint (PE)]
27. High-Level Architecture Overview
• No filesystem
• ESXi manages the array through the VASA APIs
• Arrays are logically partitioned into containers, called Storage Containers
• NO LUNs
• VM files, called Virtual Volumes, are stored natively on the Storage Containers
• I/O from ESXi to the array is addressed through an access point called a Protocol Endpoint
• Data services (snapshots, etc.) are offloaded to the array
• Managed through SPBM
[Diagram: vSphere and Storage Policy-Based Management consuming Virtual Volumes through Protocol Endpoints; the storage policy covers capacity, availability, performance, data protection and security, while the VASA Provider publishes capabilities such as snapshot, replication, deduplication and encryption]
28. VASA Provider (VP)
Characteristics:
• A software component developed by storage array vendors
• Provides “storage awareness” of the array’s data services
• The VASA Provider can be implemented within the array’s management firmware, in the array controller, or as a virtual appliance
• Responsible for the creation and deletion of Virtual Volumes (VMs, clones, snapshots)
29. Protocol Endpoints (PE)
What are Protocol Endpoints?
• Access points that enable communication between ESXi hosts and storage array systems
• A SCSI T10 secondary addressing scheme is used to access a VVol (PE + VVol offset)
Why Protocol Endpoints?
• Separate the access points from the storage itself
• Allow for fewer access points (compared to the LUN approach)
30. Protocol Endpoints (PE)
Scope of Protocol Endpoints:
• Compatible with all SAN and NAS protocols: iSCSI, NFS, FC, FCoE
• Existing multipath policies and NFS topology requirements can be applied to the PE
• NFS v3 and v4.1 are supported
31. Storage Container (SC)
What are storage containers?
• Logical storage constructs for the grouping of virtual volumes
• Set up by the storage administrator
• Capacity is based on the physical storage
• Logically partition or isolate VMs with diverse storage needs and requirements
• Minimum of one storage container per array; the maximum depends on the array
• A single Storage Container can be simultaneously accessed via multiple Protocol Endpoints
• It is NOT a LUN
33. Nimble Storage [now HPE]
[Screenshots: populating vCenter info on the storage array, and adding Nimble info directly into vSphere]
34. Full visibility into the VM: Home, Swap and VMDK objects
Storage Container setup on the array:
• Create a folder
• Set the management type to VMware Virtual Volumes
• Set a capacity limit
45. Two new features introduced with vSphere 6.5:
• Encryption
• Storage I/O Control v2
Implementation is done via I/O Filters.
46. Introduced in vSphere 6.5 – Storage I/O Control v2
• VM Storage Policies in vSphere 6.5 have a new option called “Common Rules”.
• These are used for configuring data services provided by hosts, such as Storage I/O Control and Encryption. It is the same mechanism used for VAIO/IO Filters.
[Screenshot: QoS is now managed via policy rather than set on a per-VM basis, reducing operational overhead]
47. Introduced in vSphere 6.5 – vSphere VM Encryption
• A new VM encryption mechanism
• Implemented in the hypervisor, making vSphere VM Encryption agnostic to the guest OS
• This is not just data-at-rest encryption; it also encrypts data in flight
• vSphere VM Encryption in vSphere 6.5 is policy-driven
• Requires an external Key Management Server (KMS), which is not provided by VMware
48. 3rd Party and vSphere IO Filters can co-exist
There are 3 I/O Filters on these hosts:
- VM Encryption
- Storage I/O Control
- Cache Accelerator from Infinio
49. Case Study from Infinio – VAIO Cache Acceleration
• The University of Georgia Center for Continuing Education and Hotel
– Conference center located in Athens, Georgia, USA
• Using a DELL Compellent All-Flash Array
• Pilot on a vSphere cluster running over 50 VMs
– file and print services
– digital signage applications
– back office applications like SQL and QuickBooks
“Response times were fast – as low as 170 microseconds – which is even faster than our all-flash array!”
51. vRealize Automation 7.3 + vRealize Orchestrator 6.5 and SPBM
• vRealize Automation (vRA) 7.3 enables SPBM through vRealize Orchestrator (vRO)
– vRA itself does not know about SPBM, so it relies on vRO
– SPBM policies must be preconfigured
– SPBM policies can be changed on-the-fly (day #2 operation)
• Leverages the latest vCenter Server (6.5) plug-in shipped with vRO out-of-the-box
• All SPBM policies are accessible through the API in vRO/vRA
56. Summary
• The amount of data in the world is exploding!
• Data is critical to your organization, and in many cases how you innovate with this data keeps you ahead of your competitors.
• Managing that data, keeping it safe, and providing the appropriate data services at the granularity of an application can be complex.
• Storage Policy Based Management, a fundamental building block of VMware’s Software-Defined Storage, achieves this.
• SPBM is integrated with all vSphere storage technologies, from vSAN to VVols to VAIO.
• With SPBM, data services (e.g. deduplication, encryption, replication, RAID level) can be assigned to your data on a per-VM or per-VMDK basis.
Data, and most especially what you do with it to offer new and better experiences for your customers, is going to be the key differentiator between you and your competition.
Self-driving cars – some projections state that they will generate 1GB of data per second.
Equifax – personal data from 143 million US citizens was breached. It cost CxOs their jobs.
Hurricane Irma in the US – are you prepared for Disaster Recovery?
Now, what if you put these two together? What if someone hacked a self-driving car?
VVOLS KB - https://kb.vmware.com/kb/2113013
Storage providers inform vCenter Server about specific storage devices, and present characteristics of the devices and datastores (as storage capabilities).
Storage Policy-Based Management (SPBM) is the foundation of the VMware SDS Control Plane and enables vSphere administrators to overcome upfront storage provisioning challenges, such as capacity planning, differentiated service levels, and managing capacity headroom, whether using vSAN or Virtual Volumes (VVols) on external storage arrays. SPBM provides a single unified control plane across a broad range of data services and storage solutions. The framework helps to align storage with the application demands of your virtual machines.
SPBM is about ease and agility. Traditional architectural models relied heavily on the capabilities of an independent storage system to meet the protection and performance requirements of workloads. Unfortunately, the traditional model was overly restrictive, in part because standalone hardware-based storage solutions were not VM-aware and were limited in their ability to apply unique settings to different workloads. Storage Policy Based Management (SPBM) lets you define requirements for a VM or a collection of VMs. This SPBM framework is the same framework used for storage arrays supporting VVols. Therefore, a common approach to managing and protecting data can be employed, regardless of the backing storage.
----------------------------------
Overview:
Key to software defined storage (SDS) architectural model
SPBM is the common framework to abstract traditional storage related settings away from hardware, and into hypervisor
Applies storage related settings for protection and performance on a per VM, or even per VMDK level
----------------------------------
Defining a policy will let vSAN use “what if” APIs so that you can see the “result” of having such a policy applied to a VM of a certain size. Very useful as it gives you an idea of what the “cost” is of certain attributes.
Mirroring = RAID-1
Erasure Coding = RAID-5/RAID-6
Key Message/Talk track:
Creating a storage policy is nothing more than defining what your requirements are for a VM, or a collection of VMs. These requirements are typically around protection and performance of the VM. A new policy can be created and applied to a VM, or an existing policy can be adjusted. The VM will adopt the new performance and protection settings without any down time.
----------------------------------
Overview:
Policies define levels of protection and performance
Applied at a per VM level, or per vmdk level
vSAN currently provides five unique storage capabilities to vCenter Server
----------------------------------
Details:
Storage policy rules available (in 6.6) are:
Number of disk stripes per object
Flash read cache reservation (%)
Primary level of failures to tolerate (PFTT - for stretched clusters)
Secondary level of failures to tolerate (SFTT – for local protection)
Failure Tolerance method
Affinity
IOPS limit for object
Disable object checksum
Force provisioning
Object space reservation (%)
Defining a policy will let vSAN use “what if” APIs so that you can see the “result” of having such a policy applied to a VM of a certain size. Very useful as it gives you an idea of what the “cost” is of certain attributes.
----------------------------------
Key Message/Talk track:
After a policy is created, it can easily be applied to an individual VMDK of a VM, an entire VM, or a collection of VMs in the data center. Applying at a VMDK level can be useful for applications that have different needs within defined drives of the guest OS. For instance, a drive dedicated to the database may have different requirements than the drive dedicated to transaction logs.
----------------------------------
Overview:
When the policy is selected, vSAN uses it to place/distribute the VM to guarantee availability and Performance
Policies can be changed without any interruption to the VM
----------------------------------
Details:
Only one SPBM policy can be applied at a time; vSAN does not support appending multiple SPBM policies.
Policies can also be assigned by rules or tags, as the sketch below shows. An example might be that all VMs with “Prod-SQL” in the VM name or resource group are set to RAID-1 with FTT=2. A VM named “Test-Web” would never match this SPBM policy, and would adopt the default policy for the environment.
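The naming-rule idea above can be sketched as simple matching logic. This is a toy illustration only; the function and policy dictionaries are my own, not a vSphere API.

# Toy sketch of rule-based policy assignment as described above: VMs whose
# name contains "Prod-SQL" get the RAID-1/FTT=2 policy; anything else
# falls back to the environment's default policy. Not a vSphere API.

def pick_policy(vm_name):
    if "Prod-SQL" in vm_name:
        return {"policy": "Prod-SQL", "ftm": "RAID-1", "ftt": 2}
    return {"policy": "Default", "ftm": "RAID-1", "ftt": 1}

print(pick_policy("Prod-SQL-01"))  # matches the Prod-SQL rule
print(pick_policy("Test-Web"))     # adopts the default policy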
----------------------------------
Key Message/Talk track:
Failures to Tolerate (FTT) is a rule that defines how many failures can be tolerated to let the VM or other object continue to run in the event of a failure. This is one of the key pillars behind vSAN’s ability to protect a VM from failure of a fault domain (disk, disk group, host, defined fault domain, or site)
----------------------------------
Overview:
“FTT” defines the number of hosts, disk or network failures a storage object can tolerate.
For “n” failures tolerated, “n+1” copies of the object are created and “2n+1” hosts contributing storage are required!
Primary Failures to Tolerate (PFTT) defines the number of sites that can accept failure. (0, 1)
Secondary Failures to Tolerate (SFTT) defines the number within a site that can accept failure (0, 1, 2, 3)
----------------------------------
Details:
FTT can and will be dependent on a number of factors. A few important factors include:
The number of hosts in the vSAN cluster
The Failure Tolerance Method (FTM) that is defined for the object.
Using a RAID-1 (mirroring) Fault Tolerance Method (FTM), an FTT of 2 would mean that a minimum number of hosts in a cluster would be 5.
FTT=3 would require 7 hosts.
Number of Failures   Mirror Copies   Witnesses   Min. Hosts   Hosts + Maintenance
0                    1               0           1 host       n/a
1                    2               1           3 hosts      4 hosts
2                    3               2           5 hosts      6 hosts
3                    4               3           7 hosts      8 hosts
----------------------------------------------------------------------------------------------
There are also Primary and Secondary Failures to Tolerate (PFTT and SFTT) for vSAN stretched clusters:
PFTT defines the number of site failures tolerated (0, 1)
SFTT defines the number of failures tolerated within a site (0, 1, 2, 3)
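As a quick illustration of the arithmetic behind the table above, here is a minimal Python sketch; the function name and structure are mine, not a vSAN API.

# Minimal sketch of the RAID-1 (mirroring) FTT arithmetic from the notes
# above: n failures tolerated means n+1 mirror copies, n witnesses, and
# 2n+1 hosts contributing storage. Illustrative only; not a vSAN API.

def raid1_requirements(ftt):
    copies = ftt + 1                 # full mirror copies of the object
    witnesses = ftt                  # witness components for quorum
    min_hosts = 2 * ftt + 1          # hosts contributing storage
    # one extra host keeps the cluster compliant during maintenance mode
    with_maintenance = min_hosts + 1 if ftt > 0 else None
    return copies, witnesses, min_hosts, with_maintenance

for ftt in range(4):
    print(ftt, raid1_requirements(ftt))
# FTT=2 -> (3, 2, 5, 6): 3 copies, 2 witnesses, 5 hosts, 6 with maintenance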
Key Message/Talk track:
This policy, sometimes known as “stripe width” defines the minimum number of capacity devices across which each replica of a storage object is distributed. Increasing the predefined number of stripes per object beyond 1 is intended to help performance.
----------------------------------
Overview:
Defines the minimum number of capacity devices across which each replica of a storage object is distributed.
Higher values may result in better performance. Stripe width can improve the performance of write destaging and the fetching of uncached reads
Higher values may put more constraints on flexibility of meeting storage compliance policies
To be used only if performance is an issue
----------------------------------
Details:
Most beneficial in the following scenarios:
A non-cached read on a hybrid configuration, where one is typically reliant on the rotational latency of a single spinning disk.
Reads on an all-flash configuration, where fetching I/O may be improved in some situations.
Destaging buffered writes to the persistent tier (all-flash or hybrid). This relieves some of the backpressure that can be induced by a large amount of write activity, whether sequential or random in nature.
vSAN may create more stripes than what is defined.
With DD&C, component A with a stripe width of 1 will not necessarily live just on disk 1, but rather be sprinkled around the various capacity disks of a disk group. It becomes an implicit stripe width setting, but will not show up in the UI as a traditional change in stripe width.
Component size can also impact stripe width: an object over 255GB will be split into two components, which could end up on the same disk or in a different disk group. The sketch below illustrates the arithmetic.
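A rough way to think about the component math described above. This is a simplified sketch under the assumption that components are capped at 255GB; actual placement is decided by vSAN and may create more components than this estimate.

# Rough estimate of components per replica: a component is capped at
# 255GB, so large objects split regardless of stripe width, and the
# stripe width policy forces at least that many components. Simplified
# illustration only; real vSAN placement decisions may differ.

import math

def components_per_replica(object_size_gb, stripe_width=1):
    forced_by_size = math.ceil(object_size_gb / 255)
    return max(stripe_width, forced_by_size)

print(components_per_replica(200))                  # 1
print(components_per_replica(400))                  # 2 (split at 255GB)
print(components_per_replica(400, stripe_width=3))  # 3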
----------------------------------
Key Message/Talk track:
A failure tolerance method (FTM) is the way data will maintain redundancy. The simplest FTM is a RAID-1 mirror. This would have a mirror copy of objects/components across multiple hosts. Another FTM is RAID-5/RAID-6, where data is striped across multiple hosts with parity information written to provide tolerance of a failure. Parity is striped across all hosts. When done over the network using software only, this is sometimes referred to as erasure coding. This is done inline; there is no post-processing required. VMware’s implementation of erasure coding stripes the data with parity across the minimum number of hosts in order to comply with the policy. RAID-5 will offer a guaranteed 30% savings in capacity overhead compared to RAID-1
----------------------------------
Overview:
Available in all-flash configurations only
Example: FTT = 1 with FTM = RAID-5
3+1 (4 host minimum, 1 host can fail without data loss)
5 hosts would tolerate 1 host failure or maintenance mode state, and still maintain redundancy
1.33x instead of 2x overhead. 30% savings
20GB disk consumes 40GB with RAID-1, now consumes ~27GB with RAID-5
----------------------------------
Details:
RAID-5/6 does have I/O amplification on writes (only).
RAID-5. Single write operation results in 2 reads and 2 writes
RAID-6. Single write operation results in 3 reads and 3 writes (due to double parity)
RAID-5/6 only supports FTT=1 or FTT=2 (implied by choosing RAID-5 or RAID-6); FTT=0 and FTT=3 are not supported.
The realized dedup & compression ratios will differ when employing RAID-5/6 versus RAID-1 mirroring. Space efficiency using erasure coding is more of a guaranteed space reduction, because there are no implied multiple full copies. Even if the DD&C ratio is lower on objects that use RAID-5/6, the effective overall capacity used will be equal to, if not better than, RAID-1 with DD&C.
FTM can and will be dependent on a number of factors. A few important factors include:
The number of hosts in the vSAN cluster
Stripe width defined for the objects
Using a RAID-5, and an implied FTT of 1 would mean that a minimum number of hosts in a cluster would be 4. With 4 hosts, 1 host can fail without data loss (but will lose redundancy). To maintain full redundancy with a single host in maintenance mode, the minimum would be 5 hosts.
Cluster sizes for RAID-5 need to be 4 or more hosts. Not multiples of 4 hosts.
Since VMware has a design goal of not relying on data locality, this implementation of erasure coding does not bring any negative results by distributing the RAID-5/6 stripe across multiple hosts.
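The capacity overhead figures quoted in these notes can be reproduced with a little arithmetic. An illustrative sketch follows; the function is mine, not a vSAN API.

# Illustrative capacity arithmetic for the FTM examples in these notes.
# RAID-1 stores FTT+1 full copies; RAID-5 is 3 data + 1 parity (1.33x);
# RAID-6 is 4 data + 2 parity (1.5x). Not a vSAN API.

def capacity_consumed_gb(vmdk_gb, ftm, ftt=1):
    if ftm == "RAID-1":
        return vmdk_gb * (ftt + 1)
    if ftm == "RAID-5":          # FTT=1 only
        return vmdk_gb * 4 / 3
    if ftm == "RAID-6":          # FTT=2 only
        return vmdk_gb * 6 / 4
    raise ValueError("unknown FTM: " + ftm)

print(capacity_consumed_gb(20, "RAID-1", ftt=1))   # 40.0 GB
print(round(capacity_consumed_gb(20, "RAID-5")))   # ~27 GB, 30% savings
print(capacity_consumed_gb(20, "RAID-1", ftt=2))   # 60.0 GB
print(capacity_consumed_gb(20, "RAID-6", ftt=2))   # 30.0 GB, 50% savings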
----------------------------------
Internal:
Key Message/Talk track:
VMware’s RAID-6 is a dual-parity version of the erasure coding scheme used in the RAID-5 FTM. An FTM of RAID-6 implies the ability to tolerate 2 failures (i.e. FTT=2) and maintain operation. Just as with RAID-5 erasure coding, this is all done inline, with no post-processing required. Parity is striped across all hosts. VMware’s implementation of erasure coding stripes the data with parity across the minimum number of hosts needed to comply with the policy. RAID-6 offers a guaranteed 50% savings in capacity overhead compared to RAID-1 with an FTT of 2.
----------------------------------
Overview:
Available in all-flash configurations only
Example: FTT = 2 with FTM = RAID-6
4+2 (6 host minimum; 2 hosts can fail without data loss)
7 hosts would tolerate 1 host failure or maintenance mode state, and still maintain redundancy
1.5x instead of 3x overhead. 50% savings
20GB disk consumes 60GB with RAID-1, now consumes ~30GB with RAID-6
----------------------------------
Details:
RAID-5/6 write I/O amplification, the supported FTT values, and the DD&C considerations are the same as described for RAID-5 above.
FTM can and will be dependent on a number of factors. A few important factors include:
The number of hosts in the vSAN cluster
Stripe width defined for the objects
Using a RAID-6, and an implied FTT of 2 would mean that a minimum number of hosts in a cluster would be 6. With 6 hosts, 2 hosts can fail without data loss (but will lose redundancy). To maintain full redundancy with a single host in maintenance mode, the minimum would be 7 hosts.
Cluster sizes for RAID-6 need to be 6 or more hosts. Not multiples of 6 hosts.
Since VMware has a design goal of not relying on data locality, this implementation of erasure coding does not bring any negative results by distributing the RAID-5/6 stripe across multiple hosts.
----------------------------------
We get a lot of questions about whether vSAN is available for prime-time production use. With 10,000 customers, vSAN is now used everywhere for all manner of applications. Here is one such example, where vSAN is used in a mission critical role.
VVol 2.0" refers to additional functionality supported in vSphere for VVol targets written specifically for it, notably replication.
Many VVol solutions still offer only what you might call "VVol 1.0".
Regardless, the vSphere Compatibility Guide will tell you whether a given VVol storage system is certified to work with vSphere 6.5, which could be "VVol 1.0" or "VVol 2.0".
To be clear, vSphere 6.5 does NOT REQUIRE "VVol 2.0" on the storage side.
The IO Blender effect – lots of different I/O types – random/sequential, read/write, different block sizes, being handled by the same LUN.
All sorts of mechanisms were introduced to alleviate this situation, such as RAID, wide-striping, QoS, etc.
On the vSphere side of things, we introduced SIOC, SDRS, etc.
Many customers kept spreadsheets of what VMs were supposed to be on which LUNs for performance and data service purposes.
The VASA Provider provides the Control Plane.
PEs provide the Data Plane.
https://blogs.vmware.com/virtualblocks/2016/11/30/vasa-provider-considerations-controller-embedded-vs-virtual-appliance/
VASA Provider in VVols:
Provides storage awareness services
Centralized connectivity for ESXi hosts and vCenter Servers
Responsible for creating Virtual Volumes (VVols)
Provides support for the VASA APIs used by ESXi
Responsible for defining binding operations
Offloads VM-related operations directly to the array
Why the concept of a PE?
In today’s LUN-datastore world, the datastore has two purposes: it serves as the access point to which ESXi sends IO, and it serves as the storage container holding many VM files (VMDKs). The dual-purpose nature of this entity poses several challenges.
It should not be necessary to have so many access points to the storage.
Because of the rigid size of a datastore, and the relatively small number of datastores, multiple VMs are stored together in the same datastore even if the VMs have different requirements. This leads to the so-called IO blender effect.
So, how about we separate the concept of the access point from the storage? That way, a small number of access points can serve a large number of storage entities. Hence the introduction of the PE.
NFS v4.1 support statement: https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-AAA99054-4D81-49F8-9927-65E9B08577AD.html
During a rescan, ESXi will identify PEs and maintain them in its databases.
Multi-pathing on the PE ensures high availability.
Is there a concept of queue depth on a PE? Yes, PEs are given a queue depth of 128. Compare this with a LUN, which only had a queue depth of 32 or 64, and consider how many VMs shared each LUN.
You need at least 1 SC per array; you can have as many as the array can support.
An SC cannot span across arrays.
Log in to the UI.
Select Administration.
Select vSphere Integration.
Populate the VC info.
Select plugins – in this case, the web client and the VASA Provider.
Note that not all VASA implementations give you this level of detail.
Also, others may take a different approach to configuring PEs and Storage Containers.
Octo is the name of a “group” on the Nimble Array which I provided as part of the registration – it could be anything.
Storage = Nimble Storage
Add a rule, e.g. encryption
Add another rule, e.g. protection
Compatible = Nimble
Other refs: https://www.hpe.com/h20195/v2/getpdf.aspx/4AA5-6907ENW.pdf (HPE and VVols)
Figures provided by HPE – August 2017 (VMworld 2017 Las Vegas)
https://code.vmware.com/programs/vsphere-apis-for-io-filtering
IO requests moving between the guest operating system (the initiator), located in the Virtual Machine Monitor (VMM), and a virtual disk (the consumer) are filtered through a series of IO Filters (per disk), one filter per filter class, invoked in filter class order. For example, a replication filter executes before a cache filter.
Once the IO request has been filtered by all the filters for the particular disk, the IO request moves on to its destination, either the VM or the virtual disk.
Partners develop IO Filter plug-ins to provide filtering for virtual machines. Each IO Filter registers a set of callbacks with the Filter Framework, pertaining to different disk operations. If a filter fails an operation, only the filters prior to it are informed of the failure.
Any filter can complete, fail, pass, or defer an IO request. A filter will defer an IO if it has to do a blocking operation, like sending the data over the network, but wants to allow further IOs to be processed as well. If a filter performed a blocking operation in the regular IO processing path, it would affect the IOPS of the virtual disk, since no further IOs would be processed until the blocking operation completed. If a filter defers an IO request, the Filter Framework will not pass the request to subsequent filters in the class order until the filter completes the request and notifies the Filter Framework that the IO may proceed.
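The complete/fail/pass/defer semantics described above can be modelled with a small toy chain. This is purely illustrative; the class names, verdict values, and ordering scheme are simplified assumptions, not the real VAIO plug-in API.

# Toy model of an I/O filter chain following the description above: filters
# run in filter-class order (replication before cache), and any filter can
# COMPLETE, FAIL, PASS, or DEFER an IO. Names and structure are simplified
# assumptions for illustration; not the real VAIO plug-in API.

from enum import Enum

class Verdict(Enum):
    COMPLETE = 1   # filter satisfied the IO itself (e.g. cache hit)
    FAIL = 2       # filter rejected the IO; earlier filters are informed
    PASS = 3       # filter is done; hand the IO to the next filter
    DEFER = 4      # blocking work (e.g. network send); IO held until done

class ReplicationFilter:
    filter_class = 0                      # replication runs before caching
    def handle(self, io):
        # pretend the write must be shipped to a remote site asynchronously
        return Verdict.DEFER if io["op"] == "write" else Verdict.PASS

class CacheFilter:
    filter_class = 1
    def __init__(self):
        self.cache = {}
    def handle(self, io):
        if io["op"] == "read" and io["lba"] in self.cache:
            return Verdict.COMPLETE       # served straight from cache
        if io["op"] == "write":
            self.cache[io["lba"]] = io["data"]
        return Verdict.PASS

def run_chain(filters, io):
    # invoke filters in class order; stop on COMPLETE/FAIL/DEFER, since the
    # framework will not pass a deferred IO to subsequent filters
    for f in sorted(filters, key=lambda f: f.filter_class):
        verdict = f.handle(io)
        if verdict is not Verdict.PASS:
            return verdict, type(f).__name__
    return Verdict.PASS, "virtual disk"   # IO reaches its destination

filters = [CacheFilter(), ReplicationFilter()]
print(run_chain(filters, {"op": "write", "lba": 10, "data": b"x"}))
print(run_chain(filters, {"op": "read", "lba": 10}))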
Available since vSphere 6.5.
https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vaio
6 certified partner VAIO products, out of which 3 are Cache and 3 are Replication.
Cache accelerators using local flash devices (or some memory) to accelerate reads, and sometimes writes.
Available since vSphere 6.5.
This is before I added the I/O Accelerator from Infinio.
These are provided by default in vSphere.
When the policy has been created, it may be assigned to newly deployed VMs during provisioning, or to already existing VMs by assigning this new policy to the whole VM (or just an individual VMDK) by editing its settings.
What is the relationship between vCenter Server and KMS Server?
VMware vCenter now contains a KMIP client, which works with many common KMIP key managers (KMS).
VMware does not own the KMS.
Plan for backup, DR, recovery, etc., with your KMS provider. You must be able to retrieve the encryption keys in the event of a failure, or you may render your VMs unusable.
Administrators should not encrypt their vCenter Server.
There is a possible “chicken-and-egg” situation where you need vCenter (the KMS client) to boot so it can get the key from the KMS to decrypt its files, but it will not be able to boot as its files are encrypted.
vCenter Server does not manage encryption. It is only a client of the KMS.
With VM Home encrypted, only administrators with ‘encryption privileges’ can access the console of the virtual machine.
One misconception: the VM Home folder is not encrypted in its entirety. Only some files in the VM Home folder are encrypted; some (non-sensitive) VM files and log files are not encrypted.
Core dumps are encrypted on ESXi hosts with encrypted VMs.
Encrypted virtual machines cannot be exported to an OVF, nor can they be suspended.
The VM Encryption and SIOC I/O Filters are available by default.
Infinio is a third party plugin for cache acceleration - I installed this separately.
Screenshots courtesy of http://www.virtualjad.com/2017/05/scoop-vrealize-automation-7-3.html
https://blogs.vmware.com/virtualblocks/2017/05/23/storage-policy-based-management-vrealize-automation/
I don’t know much about this, but I believe that changing the policy will also Storage vMotion the VM to another datastore that meets the policy requirements – checking with Jad.
When you use Virtual SAN, Horizon defines four virtual machine storage requirements, such as capacity, performance, and availability, in the form of default storage policy profiles and automatically deploys them for virtual desktops onto vCenter Server.
The policies are automatically and individually applied per disk (Virtual SAN objects) and maintained throughout the lifecycle of the virtual desktop.
Storage is provisioned and automatically configured according to the assigned policies.
You can modify these policies in vCenter.
Horizon creates vSAN policies for linked-clone desktop pools, instant-clone desktop pools, full-clone desktop pools, or an automated farm per Horizon cluster.