Dell Technologies is a unique family of companies that provides organizations with the infrastructure they need to build their digital future, drive IT Transformation, and protect their most important asset: information.
For the higher-education sector in particular, Dell EMC has developed a catalog of solutions in areas such as:
Converged Infrastructure
Data storage and protection
Digital learning services
In this webinar series we will present the most advanced Dell EMC solutions, currently under evaluation by the Fondazione CRUI for a possible framework agreement.
Virtual SAN 6.2, hyper-converged infrastructure software
1. VMware Virtual SAN
Duncan Epping
Chief Technologist
Office of the CTO
Storage & Availability
Hyper-converged infrastructure software
2. Agenda
1 Introduction
2 Virtual SAN, what is it?
3 Virtual SAN, a bit of a deeper dive
4 Virtual SAN Recent Enhancements
5 Wrapping up
2
3. The Software Defined Data Center
Compute Networking Storage
Management
• All infrastructure services virtualized: compute, networking, storage
• Underlying hardware abstracted, resources are pooled
• Control of data center automated by software (management, security)
• Virtual Machines are first class citizens of the SDDC
• Today’s session will focus on one aspect of the SDDC: storage
3
6. The Hypervisor is the Strategic High Ground
VMware vSphere
SAN/NAS
x86 - HCI
Object Storage
Cloud Storage
6
7. Storage Policy-Based Management – App centric automation
Overview
• Intelligent placement
• Fine control of services at VM level
• Automation at scale through policy
• Need new services for VM?
• Change current policy on-the-fly
• Attach new policy on-the-fly
Virtual Machine Storage policy
Reserve Capacity 40GB
Availability 2 Failures to tolerate
Read Cache 50%
Stripe Width 6
Storage Policy-Based Management
vSphere
Virtual SAN Virtual Volumes
Virtual Datastore
7
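The policy shown on the slide can be pictured as a small key/value document. Below is a hypothetical sketch (the field names are invented for clarity, not the real SPBM capability names published via VASA) of how a policy's attributes could be changed "on-the-fly":

```python
# Hypothetical representation of the slide's example VM storage policy.
# Field names are illustrative only; real SPBM capabilities are consumed
# through the vSphere APIs.
example_policy = {
    "reserveCapacityGB": 40,   # Reserve Capacity: 40GB
    "failuresToTolerate": 2,   # Availability: 2 failures to tolerate
    "readCachePercent": 50,    # Read Cache: 50%
    "stripeWidth": 6,          # Stripe Width: 6
}

def change_policy(policy, **changes):
    """Return a new policy with some attributes changed 'on-the-fly'."""
    return {**policy, **changes}

tightened = change_policy(example_policy, failuresToTolerate=1)
print(tightened["failuresToTolerate"])  # 1
```

Because the change produces a new policy document rather than mutating the old one, the original policy can stay attached to other VMs untouched.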
8. Storage Policy Based Management – What does it look like?
If the storage can satisfy the VM
Storage Policy, the VM Summary tab
in the vSphere client will display the
VM as compliant.
If not, either due to failures, lack of
resources or other reasons, the VM
will be shown as non-compliant.
8
10. Virtual SAN, what is it?
Hyper-Converged Infrastructure
Distributed, Scale-out Architecture
Integrated with vSphere platform
Ready for today’s vSphere use cases
Software-Defined Storage
vSphere & Virtual SAN
10
11. But what does that really mean?
VMware vSphere & Virtual SAN
Integrated with your hypervisor
Leveraging local storage resources
Exposing a single shared datastore
Generic x86 hardware
VSAN network
Virtual SAN
11
12. VSAN is the Most Widely Adopted HCI Product
Simplicity is key: on an oil platform there are no virtualization, storage or network admins. The infrastructure is managed over a satellite link via a centralized vCenter Server. Reliability, availability and predictability are key.
12
13. Virtual SAN Use Cases
VMware vSphere + Virtual SAN
End User Computing
Test/Dev
ROBO
Staging
Management
DMZ
Business Critical Apps
DR / DA
13
14. Broadest Deployment Options from HCI to SDDC
Built on Industry-Leading VMware Hyper-Converged Software (HCS): Virtual SAN + vSphere + vCenter
Certified Solutions: Virtual SAN Ready Nodes on certified partner hardware
Engineered Appliances: EMC Federation HCI Appliance
EVO SDDC: NSX, vRealize, Lifecycle Management, EVO SDDC Manager
14
15. Tiered Hybrid vs All-Flash
Hybrid
40K IOPS per Host
Caching: read and write cache on flash devices (SSD / PCIe / Ultra DIMM)
Capacity Tier (data persistence): SAS / NL-SAS / SATA
All-Flash
100K IOPS per Host + sub-millisecond latency
Caching: writes cached first on flash devices (SSD / PCIe / Ultra DIMM); reads go directly to the capacity tier
Capacity Tier (data persistence): flash devices
Virtual SAN
15
18. Virtual Machine as a set of Objects on VSAN
• VM Home Namespace
• VM Swap Object
• Virtual Disk (VMDK) Object
• Snapshot (delta) Object
• Snapshot (delta) Memory Object
VM Home
VM Swap
VMDK
Snap delta
Snap memory
Snapshot
18
19. Define a policy first…
Virtual SAN currently surfaces multiple storage capabilities to vCenter Server
What If APIs
New capabilities in VSAN 6.2
19
20. Virtual SAN Objects and Components
ESXi Host
VSAN is an object store!
• Object tree with branches
• Each object has multiple components
– This allows you to meet availability and performance requirements
• Here is one example of “Distributed RAID” using 2 techniques:
– Striping (RAID-0)
– Mirroring (RAID-1)
• Data is distributed based on VM Storage Policy
ESXi Host ESXi Host
Mirror Copy
stripe-2b
stripe-2a
RAID-0
Mirror Copy
stripe-1b
stripe-1a
RAID-0
witness
VMDK Object
RAID-1
20
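The "Distributed RAID" example above — one VMDK object mirrored (RAID-1) across two RAID-0 stripe sets plus a witness for quorum — can be sketched as a small tree. This is a minimal illustration of the layout on the slide, not VSAN's actual on-disk metadata:

```python
# Object tree for the slide's example: RAID-1 mirror over two RAID-0
# stripe sets, plus a witness component used for quorum.
vmdk_object = {
    "type": "RAID-1",
    "children": [
        {"type": "RAID-0", "components": ["stripe-1a", "stripe-1b"]},
        {"type": "RAID-0", "components": ["stripe-2a", "stripe-2b"]},
        {"type": "witness", "components": ["witness"]},
    ],
}

def all_components(obj):
    """Flatten the object tree into the list of components placed on hosts."""
    return [c for child in obj["children"] for c in child["components"]]

print(all_components(vmdk_object))
# ['stripe-1a', 'stripe-1b', 'stripe-2a', 'stripe-2b', 'witness']
```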
21. Number of Failures to Tolerate/Failure Tolerance Method
• Defines the number of hosts, disk or network failures a storage object can tolerate.
• RAID-1 Mirroring used when Failure Tolerance Method set to Performance (default).
• For “n” failures tolerated, “n+1” copies of the object are created and “2n+1” hosts contributing storage are required!
esxi-01 esxi-02 esxi-03 esxi-04
Virtual SAN Policy: “Number of failures to tolerate = 1”
vmdk
~50% of I/O
vmdk witness
~50% of I/O
RAID-1
21
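The n+1 / 2n+1 rule above translates directly into a back-of-the-envelope sizing helper (a hypothetical function, not part of any VMware tooling):

```python
def ftt_raid1_requirements(n):
    """For 'Number of Failures to Tolerate = n' with RAID-1 mirroring:
    n+1 copies of the object, 2n+1 hosts contributing storage."""
    return {"copies": n + 1, "min_hosts": 2 * n + 1}

print(ftt_raid1_requirements(1))  # {'copies': 2, 'min_hosts': 3}
print(ftt_raid1_requirements(2))  # {'copies': 3, 'min_hosts': 5}
```

The extra hosts beyond the copy count hold witness components, which is why FTT=1 needs three hosts even though only two hold data.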
22. Assign it to a new or existing VM
When the policy is selected, Virtual SAN uses it to place / distribute the VM to guarantee availability and performance
22
23. Fault Domains, increasing availability through rack awareness
• Create fault domains to increase availability
• 8 node cluster with 4 defined fault domains (2 nodes in each)
FD1 = esxi-01, esxi-02 FD3 = esxi-05, esxi-06
FD2 = esxi-03, esxi-04 FD4 = esxi-07, esxi-08
• To protect against a single rack failure, only 2 replicas plus a witness across 3 fault domains are required!
FD2 FD3 FD4
esxi-01
esxi-02
esxi-03
esxi-04
esxi-05
esxi-06
esxi-07
esxi-08
FD1
vmdk vmdk witness
RAID-1
23
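The 8-node / 4-fault-domain layout above can be modelled in a few lines. This is a hypothetical placement sketch — real VSAN selects fault domains itself — showing why 3 fault domains suffice for FTT=1 rack awareness:

```python
# The slide's 8-node cluster split into 4 fault domains (2 hosts each).
fault_domains = {
    "FD1": ["esxi-01", "esxi-02"], "FD2": ["esxi-03", "esxi-04"],
    "FD3": ["esxi-05", "esxi-06"], "FD4": ["esxi-07", "esxi-08"],
}

def place_ftt1(fds):
    """Place 2 replicas + 1 witness in 3 distinct fault domains, so that
    losing any single rack (fault domain) still leaves a quorum."""
    chosen = sorted(fds)[:3]                  # illustrative choice of 3 FDs
    roles = ["replica", "replica", "witness"]
    return dict(zip(chosen, roles))

print(place_ftt1(fault_domains))
# {'FD1': 'replica', 'FD2': 'replica', 'FD3': 'witness'}
```

If any one fault domain fails, the two surviving components (replica + replica, or replica + witness) still form a majority, so the object stays available.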
25. VSAN 5.5
March 2014
VSAN 6.0
March 2015
All Flash Configuration
64 node VSAN cluster
x2 Hybrid Performance
VSAN Snapshots/Clones
Health UI
Rack Awareness
VSAN 6.2
March 2016
Deduplication and Compression
RAID 5/6 support
Software Checksum
QoS via IOPS Limits
IPv6
Performance Service
Enhanced Capacity Views
VSAN 6.1
September 2015
Stretched Cluster
Replication - 5 Minutes RPO
2-node ROBO
Health Monitoring & Remediation
25
26. Virtual SAN – Stretched Cluster
Active-Active data centers
• Virtual SAN cluster split across 2 sites!
• Each site is a Fault Domain (FD)
• Site-level protection with zero data loss and near-instantaneous recovery
• Support for up to 5ms RTT latency between data sites
– 10Gbps bandwidth expectation
• Witness VM can reside anywhere
– 200ms RTT latency
– At most 100Mbps bandwidth required
• Automated failover
5ms RTT, 10GbE
Today
VMware vSphere & Virtual SAN
vSphere
witness
vmdk vmdk
witness
26
vSphere & Virtual SAN
Site Recovery Manager
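The network requirements stated on the slide (5ms RTT and 10Gbps between data sites; 200ms RTT and 100Mbps to the witness) can be captured as a simple pre-deployment check. A hypothetical helper, not a VMware tool:

```python
def validate_stretched_cluster(data_rtt_ms, data_gbps, witness_rtt_ms, witness_mbps):
    """Check the slide's stated network requirements for a VSAN
    stretched cluster; returns a list of issues, or ['ok']."""
    issues = []
    if data_rtt_ms > 5:
        issues.append("data-site RTT must be <= 5 ms")
    if data_gbps < 10:
        issues.append("10 Gbps expected between data sites")
    if witness_rtt_ms > 200:
        issues.append("witness RTT must be <= 200 ms")
    if witness_mbps < 100:
        issues.append("100 Mbps required to the witness")
    return issues or ["ok"]

print(validate_stretched_cluster(3, 10, 150, 100))  # ['ok']
print(validate_stretched_cluster(8, 10, 150, 100))  # data-site RTT too high
```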
27. Advanced Troubleshooting with VSAN Health Check
• Cluster Health
• Network Health
• Data Health
• Limits Health
• Physical Disk Health
• Stretched Cluster
• Proactive Tests
27
29. Deduplication and Compression for Space Efficiency
• Nearline deduplication and compression at the disk group level
– Enabled at the cluster level
– Deduplicated when de-staging from the cache tier to the capacity tier
– Fixed block length deduplication (4KB blocks)
• Compression after deduplication
– If the block compresses to <= 2KB, the compressed block is stored
– Otherwise the full 4KB block is stored
Beta
esxi-01 esxi-02 esxi-03
vmdk vmdk
vSphere & Virtual SAN
vmdk
All Flash Only
Significant space savings achievable, making the economics of an all-flash VSAN very attractive
29
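The dedup-then-compress decision described above — fingerprint each 4KB block on de-stage, and keep the compressed form only if it fits in 2KB — can be sketched as follows. This is an illustrative model only: the hash and storage layout are stand-ins, not VSAN's actual implementation.

```python
import zlib

def destage_4k_block(block, dedup_index, store):
    """Nearline dedup-then-compress sketch: dedupe 4KB blocks when
    de-staging from cache tier to capacity tier; keep the compressed
    form only if it fits in <= 2KB, otherwise store the raw block."""
    key = zlib.crc32(block)            # stand-in for the real fingerprint
    if key in dedup_index:
        dedup_index[key] += 1          # duplicate: bump refcount, write nothing
        return "deduplicated"
    dedup_index[key] = 1
    packed = zlib.compress(block)
    if len(packed) <= 2048:
        store[key] = packed
        return "compressed"
    store[key] = block                 # did not compress enough: store full 4KB
    return "stored-raw"

idx, store = {}, {}
print(destage_4k_block(b"\x00" * 4096, idx, store))  # compressed
print(destage_4k_block(b"\x00" * 4096, idx, store))  # deduplicated
```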
30. RAID-5/6 (Inline Erasure Coding)
• When Number of Failures to Tolerate = 1 and Failure Tolerance Method = Capacity: RAID-5
– 3+1 (4 host minimum)
– 1.33x overhead for RAID-5 instead of 2x compared to FTT=1 with RAID-1
• When Number of Failures to Tolerate = 2 and Failure Tolerance Method = Capacity: RAID-6
– 4+2 (6 host minimum)
– 1.5x overhead for RAID-6 instead of 3x compared to FTT=2 with RAID-1
RAID-5
ESXi Host
parity
data
data
data
ESXi Host
data
parity
data
data
ESXi Host
data
data
parity
data
ESXi Host
data
data
data
parity
All Flash Only
30
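The overhead figures above (2x/3x for mirroring, 1.33x/1.5x for erasure coding) make the capacity trade-off easy to compute. A hypothetical sizing helper based on the slide's numbers:

```python
def raw_needed(usable_gb, ftt, method):
    """Raw capacity needed for a given usable capacity, per the slide:
    method='performance' -> RAID-1 mirroring, method='capacity' -> erasure coding."""
    if method == "performance":
        return usable_gb * (ftt + 1)   # n+1 full copies
    if method == "capacity" and ftt == 1:
        return usable_gb * 4 / 3       # RAID-5, 3+1: 1.33x overhead
    if method == "capacity" and ftt == 2:
        return usable_gb * 1.5         # RAID-6, 4+2: 1.5x overhead
    raise ValueError("unsupported FTT/method combination")

print(raw_needed(100, 1, "performance"))  # 200
print(raw_needed(100, 1, "capacity"))     # ~133
print(raw_needed(100, 2, "capacity"))     # 150.0
```

For the same protection level, erasure coding saves a third (FTT=1) to half (FTT=2) of the raw capacity, at the cost of parity computation — which is why it is an all-flash-only feature.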
31. Software Checksum and disk scrubbing
Overview
• End-to-end checksum of the data to detect and resolve silent disk errors due to faulty hardware/firmware
• Checksum is enabled by default (policy driven)
• If checksum verification fails on a read:
– VSAN fetches the data from another copy in RAID-1
– VSAN recreates the data from the other components in the RAID-5/6 stripe
• Disk scrubbing is run in the background
Benefits
• Provide additional level of data integrity
• Automatic detection and resolution of silent disk errors
Virtual SAN Datastore
31
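The read path described above — verify the checksum, and on a mismatch fetch a good copy and repair the bad component — can be sketched for the RAID-1 case. This is an illustrative model using CRC32 as a stand-in checksum, not VSAN's actual algorithm:

```python
import zlib

def read_with_checksum(component, mirror):
    """Verify a stored [data, checksum] pair on read; on a silent-corruption
    hit, fall back to the RAID-1 mirror copy and repair the bad component."""
    data, stored_crc = component
    if zlib.crc32(data) == stored_crc:
        return data                        # checksum matches: normal read
    good, good_crc = mirror                # fetch the data from another copy...
    assert zlib.crc32(good) == good_crc
    component[:] = [good, good_crc]        # ...and rewrite the bad component
    return good

blk = b"payload"
bad = [b"payl0ad", zlib.crc32(blk)]        # bit-rotted data, original checksum
print(read_with_checksum(bad, [blk, zlib.crc32(blk)]))  # b'payload'
```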
32. Other new improvements
Client Cache
• In-memory write-through read cache
– 0.4% of total host memory, up to 1GB per host
• “Local” to the virtual machine
• Low overhead, big impact!
Sparse Swap
• Reclaim Space used by memory swap
• Host advanced option enables setting the swap policy to no space reservation
IOPS limit on object
• Policy driven capability
• Limit IOPS per VM/Virtual Disk
• Eliminate noisy neighbor issues
• Manage performance SLAs
32
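The client cache sizing rule above (0.4% of host memory, capped at 1GB per host) is simple enough to compute directly. A hypothetical helper, working in binary units:

```python
def client_cache_bytes(host_memory_gb):
    """Client read cache size per the slide: 0.4% of host memory,
    capped at 1 GB per host."""
    gib = 1024 ** 3
    return min(int(host_memory_gb * gib * 0.004), 1 * gib)

print(client_cache_bytes(128) / 1024 ** 2)   # ~524 MB on a 128 GB host
print(client_cache_bytes(512) / 1024 ** 3)   # 1.0 (the cap kicks in at 250 GB+)
```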
33. Enhanced Virtual SAN Management with New Health Service
Built-in performance monitoring
Health and performance APIs and SDK
Storage capacity reporting
And many more health checks…
Performance Monitoring Capacity Monitoring
33
34. Performance, Scale and Availability for Any Application
Business-Critical Applications
SAP: SAP Core ready; testing and validated deployments
Horizon: tightly integrated cloud management; bundles Virtual SAN licenses for lowest-cost VDI storage
Oracle: Oracle RAC supported; testing and validated deployments
34
The Software Defined Data Center
In the SDDC, all three core infrastructure components (compute, storage and networking) are virtualized.
Virtualization software abstracts underlying hardware, while pooling compute, network and storage resources to deliver better utilization, faster provisioning and simpler operations.
The VM becomes the centerpiece of the operational model, providing automation and agility to repurpose infrastructure according to business needs.
Today we will focus on storage, which has been growing at an extremely rapid pace and is a fast-changing aspect of the datacenter!
What we are trying to achieve is to simplify datacenter operations, and our primary focus will be storage and availability. Storage, as we all know, has traditionally been a pain point in many data centers: high cost, and it usually does not provide the performance and scalability one would want. By offering our customers choice we aim to change the world of IT and start a new revolution. But we cannot do this by ourselves; we need the help of you, the consultant / admin / architect.
vSphere is perfectly positioned for this as it abstracts physical resources and can provide them as a shared pooled construct to the administrator.
Because it sits directly in the I/O path, the hypervisor (through the notion of policies associated with virtual machines) has the unique ability to make optimal decisions around matching the demands of virtualized applications with the supply of underlying physical infrastructure.
On top of that, the platform provides you the ability to assign service level agreements to workloads, which reduces operational complexity and as such significantly reduces the chance of making mistakes.
This is where it all starts; without Storage Policy Based Management many of the products and features we are about to talk about would not be possible! If there is one thing you need to remember when you walk away today, it is Storage Policy Based Management: it is the key enabler for Software-Defined Storage and Availability!
Storage Policy Based Management is composed of the following:
Common Policy framework Across Virtual Volumes, Virtual SAN and VMFS-based Storage
Common API Layer for Cloud Management Frameworks (vRealize Automation, OpenStack), Scripting users (PowerShell, JavaScript, Python, etc.) and Orchestration Platforms (vCO)
Represents Application and VM Level Requirements
Consumes Capabilities Published via VASA
SPBM provides the following benefits for customers:
Stable, Robust Automation Platform
Intelligent placement and fine control of services at the VM level
Shields Automation and Orchestration Platforms from infrastructure changes by abstracting the Underlying Storage Implementation
When you deploy a virtual machine using the SPBM framework, VMs will show up as either compliant or non-compliant.
If a failure has occurred and one of the VMs is impacted you can easily see this as the VM will show up as non-compliant.
Of course if there are sufficient hosts available and there is sufficient disk space then the VM will be re-protected (self-healing) by Virtual SAN.
What is VSAN in a nutshell…
So, it follows a hyper-converged architecture for easy, streamlined management and scaling of both compute and storage. Hyper-converged represents a system architecture – one where compute and persistence are co-located. This system architecture is enabled by software.
It is a SDS product. A layer of software that runs on every ESXi host. It aggregates the local storage devices on ESX hosts (SSD and magnetic disks) and makes them look like a single pool of shared storage across all the hosts.
VSAN has a distributed architecture with no single point of failure.
VSAN goes a step further than other HCI products – VMware owns the most popular hypervisor in the industry. Strong integration of VSAN in the hypervisor means that we can optimize the data path and we ensure optimal resource scheduling (compute, network, storage) according to the needs of each application. At the end, better resource utilization means better consolidation ratios, more bang for your buck! Resource utilization is one part of the story. The other part is the Operational aspects of the product.
VSAN has been designed as a storage product to be used primarily by vSphere admins. So, we put a lot of effort in packaging the product in a way that is ideal for today’s use cases of virtualized environments. Specifically, the VSAN configuration and management workflows have been designed as extensions of the existing host and cluster management features of vSphere. That means easy, intuitive operational experience for vSphere admins. It also means native integration with key vSphere features unlike any other storage product out there, HCI or not.
VSAN is widely adopted: over 3,000 customers since launch, with some very interesting use cases ranging from oil platforms to trains, and it is now being planned for deployment on submarines and mobile deployment units out in the field.
The oil platform scenario is a ROBO deployment managed through a central vCenter Server over a satellite connection.
As for the submarines and mobile deployment units, I can’t reveal who this is, but it is very real. Dual datacenter setups on a ship are not uncommon, and Virtual SAN is a natural fit here.
We were very conservative when we initially launched VSAN – after all, this was customers’ data we were talking about.
However, even though we were conservative, our customers were not.
There are plenty of other use cases. The ones listed on the slide are the most commonly used. It is fair to say that Virtual SAN fits in most scenarios:
Of course customers started with test/dev workloads, just like they did when virtualization was first introduced.
Business Critical Apps – we have customers running Exchange, SQL, SAP and billing systems on Virtual SAN.
Virtual SAN is included with Horizon Advanced and Enterprise, so VDI/EUC is a natural fit.
As a DR destination VSAN is also commonly used, as you can scale out and the cost is relatively low compared to a traditional storage system.
Isolation workloads are also something VSAN is often used for; both DMZ and Management clusters fit this bill.
And of course there is ROBO: VSAN can start small and grow when desired, both scale-out and scale-up, and with 6.1 we even made things better by introducing a 2-node option – but we will get back to that!
When it comes to deploying VSAN there are 3 options. By far the most popular option is the VSAN Ready Node - pre-installed and configured ready nodes (Ready to Run).
These are pre-configured server models which have been fully certified for and tested with VSAN.
Another option is an integrated out of box experience - HCI nodes from EMC offer an “on rails” solutions.
Lastly, EVO:SDDC (not yet released) offers the capability to deploy VSAN, NSX, vRO and other VMware solutions end to end: an SDDC in a rack, which scales from half a rack to many.
Virtual SAN enables both hybrid and all-flash architectures.
Irrespective of the architecture, there is a flash-based caching tier which can be configured out of flash devices like SSDs, PCIe cards, Ultra DIMMs etc. The flash caching tier acts as the read cache/write buffer that dramatically improves the performance of storage operations.
In the hybrid architecture, server-attached magnetic disks are pooled to create a distributed shared datastore that persists the data. In this type of architecture, you can get up to 40K IOPS per server host.
In the all-flash architecture, the flash-based caching tier is intelligently used as a write buffer only, while another set of SSDs forms the persistence tier to store data. Since this architecture utilizes only flash devices, it delivers extremely high IOPS of up to 90K per host, with predictably low latencies.
Deployed, configured and managed from vCenter through the vSphere Web Client
Radically simple
Configure VMkernel interface for Virtual SAN
Enable Virtual SAN by clicking Turn On
Objects are divided and distributed into components based on policies. Components and policies will be covered shortly. VMs are no longer based on a set of files, like we have on traditional storage.
The first thing you do before you deploy a VM is define a policy. VSAN has what-if APIs, so it will show what the "result" would be of applying such a policy to a VM of a certain size. Very useful, as it gives you an idea of the "cost" of certain attributes.
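To make the idea of a policy's "cost" concrete, here is a minimal Python sketch of the kind of calculation a what-if view surfaces, assuming simple RAID-1 mirroring where each failure to tolerate adds one full replica. The function name is hypothetical, not part of any VSAN API, and tiny witness components are ignored.

```python
def raid1_capacity_cost(vmdk_gb, ftt=1):
    """Rough raw-capacity 'cost' of a RAID-1 policy: each failure
    to tolerate (FTT) adds one full replica of the object.
    Witness components are small and ignored in this sketch."""
    return vmdk_gb * (ftt + 1)

# A 100GB VMDK with FTT=1 consumes roughly 200GB of raw capacity.
print(raid1_capacity_cost(100, ftt=1))  # 200
```

This is exactly why the what-if view is useful: attributes like FTT translate directly into multiples of raw datastore capacity.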
Also note that a number of new capabilities were introduced in VSAN 6.2; these will be discussed in more detail later on.
RAID-0 and RAID-1 were the only distributed RAID options up to and including version 6.1.
New techniques introduced in VSAN 6.2 will be discussed shortly.
RAID-5/6 used when Fault Tolerance Method set to Capacity
Note that in order to protect against a rack failure, the minimum required number of failure domains is 3; this is similar to protecting against a host failure using FTT=1, where the minimum number of hosts is 3.
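The rule behind that minimum can be sketched as follows, assuming mirroring where tolerating n failures needs n+1 replicas plus witnesses to keep a quorum, giving 2n+1 fault domains. The function name is illustrative only.

```python
def min_fault_domains(ftt):
    """Minimum fault domains (hosts or racks) needed to tolerate
    `ftt` failures with mirroring: ftt+1 data replicas plus ftt
    witnesses to maintain quorum, i.e. 2*ftt + 1."""
    return 2 * ftt + 1

print(min_fault_domains(1))  # 3, matching the minimum above
```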
Stretched Cluster
Support for ROBO
Enhanced Replication
Support for SMP-FT
Support for Oracle RAC
Support for Windows Server Failover Clustering
New SSD HW options:
Intel NVMe
Diablo Ultra DIMM
Solution Deployment options
Virtual SAN On-Disk Format Upgrade
Disk Group Bulk Claiming
Disk Claiming per Tier
Stretched Cluster Configuration
Stretched Cluster Health Monitoring
Health Check Plug-in in-box
vRealize Operations Manager Integration
Global data visualization
Capacity planning
Root-Cause analysis
Stretched storage with Virtual SAN will allow you to split the Virtual SAN cluster across 2 sites, so that if a site fails, you can seamlessly fail over to the other site without any loss of data. Virtual SAN in a stretched storage deployment accomplishes this by synchronously mirroring data across the 2 sites. The failover will be initiated by a witness VM that resides in a central location accessible by both sites.
Bandwidth to the witness is 10Mbps, or 2MB per 1000 components (worst-case scenario: very little traffic is observed during steady state, but we need to plan for owner migration or site failure)
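The 2MB-per-1000-components rule of thumb above is easy to turn into a sizing sketch. This is only the slide's rule of thumb expressed as code; the function name is made up for illustration.

```python
def witness_traffic_mb(components):
    """Worst-case witness traffic per the rule of thumb above:
    ~2MB per 1000 components (owner migration or site failure;
    steady-state traffic is much lower)."""
    return 2 * components / 1000

# A stretched cluster with 5000 components plans for ~10MB of
# witness traffic in the worst case.
print(witness_traffic_mb(5000))  # 10.0
```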
Point-in-time view of the state of the cluster
Geared to hardware – ensuring that everything is functioning as expected (disks, network, objects, components)
All Flash Only.
“High level description”
Dedupe and compression happen during destaging from the caching tier to the capacity tier. You enable the feature at the cluster level, and deduplication/compression happens on a per-disk-group basis; bigger disk groups will result in a higher deduplication ratio. After the blocks are deduplicated they are compressed. Compression alone is a significant saving; combined with deduplication, up to 7x space reduction can be achieved, of course fully dependent on the workload and type of VMs.
“Lower level description”
Compression (LZ4) is performed during destaging from the caching tier to the capacity tier. 4KB is the block size for deduplication. For each unique 4K block, compression is performed, and if the output block size is less than or equal to 2KB, the compressed block is saved in place of the 4K block. If the output block size is greater than 2KB, the block is written uncompressed and tracked as such. The reason is to avoid block alignment issues, as well as to reduce the CPU cost of decompression, which exceeds that of compression for data with low compression ratios. All of this data reduction happens after the write acknowledgement.
Deduplication domains are within each disk group. This avoids needing a global lookup table (significant resource overhead), and allows us to put those resources towards tracking a smaller and more meaningful block size. By purposefully avoiding deduplication of "write hot" data in the cache and compression of incompressible data, significant CPU/memory resources are saved.
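The destage path described above can be sketched in a few lines of Python. This is a toy model, not VSAN's implementation: `zlib` stands in for LZ4, SHA-256 stands in for whatever content hashing the product uses, and the in-memory dict stands in for the per-disk-group dedup table.

```python
import hashlib
import zlib

DEDUP_BLOCK = 4096          # dedup granularity: 4KB blocks
COMPRESS_THRESHOLD = 2048   # keep compressed form only if <= 2KB

def destage_block(block, dedup_table):
    """Toy model of the destage decision: dedup first on 4KB
    blocks, then compress, storing the compressed form only if
    it fits in 2KB (otherwise write the raw 4KB block)."""
    assert len(block) == DEDUP_BLOCK
    digest = hashlib.sha256(block).digest()
    if digest in dedup_table:
        dedup_table[digest] += 1     # duplicate: bump refcount only
        return "deduped"
    dedup_table[digest] = 1
    compressed = zlib.compress(block)
    if len(compressed) <= COMPRESS_THRESHOLD:
        return "compressed"          # stored as a <=2KB block
    return "uncompressed"            # stored raw, tracked as such

table = {}
zeros = bytes(DEDUP_BLOCK)
print(destage_block(zeros, table))   # compressed (zeros compress well)
print(destage_block(zeros, table))   # deduped (same content seen again)
```

Note how the 2KB threshold in the sketch mirrors the alignment reasoning above: a block that only shrinks slightly is not worth the decompression cost on every read.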
Note: the feature is supported with stretched clusters and the ROBO edition
Sometimes RAID-5 and RAID-6 over the network are also referred to as erasure coding. This is done inline; no post-processing is required.
Since VMware has a design goal of not relying on data locality, this implementation of erasure coding does not introduce any drawbacks by distributing the RAID-5/6 stripe across multiple hosts.
In this case RAID-5 requires a minimum of 4 hosts, as it uses 3+1 logic. With 4 hosts, 1 can fail without data loss. This results in a significant reduction of required disk capacity: normally a 20GB disk would require 40GB of disk capacity, but in the case of RAID-5 over the network the requirement is only ~27GB. There is another option, RAID-6, if higher availability is desired.
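The capacity math above can be sketched as a comparison of protection schemes. This is just arithmetic matching the slide's 3+1 (RAID-5) and 4+2 (RAID-6) layouts; the function name is illustrative.

```python
def capacity_required_gb(size_gb, scheme):
    """Raw capacity needed to protect `size_gb` of data:
    RAID-1 (FTT=1) doubles it, RAID-5 (3+1) adds 1/3 parity,
    RAID-6 (4+2) adds 1/2 parity."""
    factor = {"raid1": 2.0, "raid5": 4 / 3, "raid6": 6 / 4}[scheme]
    return size_gb * factor

print(capacity_required_gb(20, "raid1"))            # 40.0
print(round(capacity_required_gb(20, "raid5"), 1))  # 26.7, the ~27GB above
print(capacity_required_gb(20, "raid6"))            # 30.0
```

That gap between 40GB and ~27GB for the same protection level is where the "guaranteed capacity reduction" claim in the next slide comes from.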
Use case Information:
Erasure codes offer "guaranteed" capacity reduction, unlike deduplication and compression. For customers who have no-thin-provisioning policies, have data that is already compressed and deduplicated, or have encrypted data, this offers "known/fixed" capacity gains.
This can be applied on a granular basis (Per VMDK) using the Storage Policy Based Management system.
30% Savings.
Note: All Flash VSAN only.
Note: Not supported with stretched clusters
Note: this does not require the cluster size to be a multiple of 4, just 4 or more.
Cluster wide setting (Default is on). Can be disabled on a per object basis using storage policies.
Software checksums will enable customers to detect corruptions caused by hardware/software components, including memory, drives, etc., during read or write operations. For drives, there are two basic kinds of corruption. The first is latent sector errors, which are typically the result of a physical disk drive malfunction. The other type is silent corruption, which can happen without warning (typically called silent data corruption). Undetected or completely silent errors can lead to lost or inaccurate data and significant downtime; there is no effective means of detection without end-to-end integrity checking.
During read/write operations, VSAN will check the validity of the data based on its checksum. If the data is not valid, it takes the necessary steps to either correct the data or report it to the user to take action. These actions could be:
Fetch the data from another copy (RAID-1, RAID-5/6, etc.)
This is what we call recoverable data.
If there is no valid copy of the data, the error SHALL be returned
This is what we call non-recoverable errors
Reporting:
In case of errors the issues will be reported in the UI and logs. This will include impacted blocks and their associated VMs.
A customer will be able to see the list of the VMs/Blocks that are hit by non-recoverable errors.
A customer will be able to see the historical/trending errors on each drive
CRC32 is the algorithm used (CPU offload support reduces overhead)
There will be two levels of scrubbing:
Component-level scrubbing: every block of each component is checked. On a checksum mismatch, the scrubber tries to repair the block by reading the other components.
Object level scrubbing: for every block of the object, data of each mirror (or the parity blocks in RAID-5/6) is read and checked. For inconsistent data, mark all data in this stripe as bad.
Repair can happen during normal I/O at DOM Owner or by scrubber.
The repair path for mirror and RAID-5/6 are different. When checksum verification fails, the scrubber or DOM Owner will read the other copy of the data (or other data in the same stripe in case of RAID-5/6), rebuild the correct data and write it out to the bad location.
End-to-end checksum of the data to prevent data integrity issues that could be caused by silent disk errors ( checksum is calculated and stored on the write path )
Detect silent corruptions when reading the data through checksum data
When checksum verification fails, VSAN will read the other copy of the data (or other data in the same stripe in case of RAID-5/6), rebuild the correct data and write it out to the bad location
It is based on 4K block size
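The write-path checksum and read-path repair described above can be sketched with CRC32 (the algorithm named earlier) over 4K blocks. This is a toy single-mirror model, not VSAN code: the dicts stand in for the component stores and the checksum table, and the function names are made up for illustration.

```python
import zlib

BLOCK = 4096  # checksums are kept at 4K block granularity

def write_block(store, sums, addr, data):
    """Checksum is calculated and stored on the write path."""
    store[addr] = data
    sums[addr] = zlib.crc32(data)

def read_block(store, sums, addr, mirror):
    """On read, verify the checksum; on mismatch, repair from the
    mirror copy and write the correct data back to the bad location
    (for RAID-5/6 the rebuild would use the rest of the stripe)."""
    data = store.get(addr)
    if data is not None and zlib.crc32(data) == sums[addr]:
        return data
    good = mirror.get(addr)
    if good is not None and zlib.crc32(good) == sums[addr]:
        store[addr] = good  # repair the bad location
        return good
    raise IOError("non-recoverable checksum error at block %d" % addr)

store, mirror, sums = {}, {}, {}
payload = b"x" * BLOCK
write_block(store, sums, 0, payload)
mirror[0] = payload
store[0] = b"y" * BLOCK  # simulate silent corruption of one copy
print(read_block(store, sums, 0, mirror) == payload)  # True, repaired
```

The non-recoverable branch at the end corresponds to the case above where no valid copy exists and the error must be surfaced to the user.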
This will replace the 1MB cache lines used for read-ahead with a larger cache (0.4% of host memory, up to 1GB). Preliminary testing with VDI shows some impressive numbers, and this will complement CBRC. Data locality will be used for the memory cache (as we do with CBRC), as this is a read-only cache (so no need for a network ACK). Memory latency is low enough not to be a concern. Cache granularity is 4KB.
Sparse swap will be an advanced host-level option (swap is not managed by SPBM but by the kernel). This will enable the reclaiming of space dedicated to memory swap. On a cluster with 256GB per host, this would yield TBs of capacity savings at scale. This should benefit linked-clone VDI storage utilization.
The Performance Monitoring Service allows existing workloads to be monitored from vCenter.
Customers needing access to tactical performance information will not need to go to vRO.
Performance monitor includes macro level views (Cluster latency, throughput, IOPS) as well as granular views (per disk, cache hit ratios, per disk group stats) without needing to leave vCenter.
The performance monitor allows aggregation of states across the cluster into a “quick view” to see what load and latency look like as well as share that information externally directly to 3rd party monitoring solutions by API.
The Performance monitoring service runs on a distributed database that is stored on VSAN and NOT vCenter (will use up to ~255GB, which is why it will ask for a policy).
Work is being done on SAP HANA. This may not make launch, but PE is working with SAP on this.
SAP Core apps are ready to be supported.
“Horizon should be deployed with VSAN”
Exchange DAG and Microsoft Always On were already supported.
PE team has put together some impressive transaction numbers for Oracle.
Of course we have a vision, and the vision isn't too far out; it is just ahead.
As we are about to wrap up this session, I want to leave you with one more thing. VSAN is being extended to serve as a generic storage platform. In addition to the traditional virtualization use cases of VMs and VSCSI disks, VSAN can also serve storage through new abstractions: lightweight block drivers (perhaps using the NVMe protocol), files, and REST APIs. That is storage that can be made available to individual hosts or be shared, according to the protocol semantics, across many hosts and application instances in the infrastructure. Besides that, VMware has been prototyping a distributed file system which leverages Virtual SAN as its core storage provider and serves storage capacity in an easy, distributed fashion to thousands of clients. Yes, the future is bright, and this is just the beginning.
Icons: openstack – pivotal cloud foundry, nginx, mesos , docker
With that I would (click) like to thank you and open the floor for questions