The Storage for Virtual Environments seminar focuses on the challenges of backup and recovery in a virtual infrastructure, the various solutions users are now adopting to solve those challenges, and a roadmap for making the most of an organization’s virtualization initiatives.
This slide deck was used by Stephen Foskett for his
Veeam Backup & Replication v8 for VMware — General Overview (Veeam Software)
Veeam Backup & Replication is much more than backup – it provides fast, flexible, and reliable recovery of virtualized applications and data. We bring backup and replication together in a single solution to reinvent data protection and deliver the #1 VM backup for VMware vSphere and Hyper-V environments. This poster provides a high-level overview of the architecture of Veeam Backup & Replication running in a VMware environment.
This technical paper discusses the deployment of a VMware environment and best practices for using IBM Scale Out Network Attached Storage (SONAS) as its primary storage. To learn more about Network Attached Storage, visit http://ibm.co/SH8WJo.
White paper: IBM FlashSystem in VMware Environments (thinkASG)
Drive performance in VMware environments with IBM FlashSystem. IBM flash storage delivers extreme, scalable performance for virtualized infrastructure.
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage (VMworld)
VMworld 2013
Greg Loughmiller, NetApp
Kannan Mani, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Strong Oracle Database 12c performance is vital to the state of your business. Virtualizing such important workloads requires a reliable and high-performing virtualization platform, along with the right servers and storage. EMC, Cisco and VMware offer proven technologies to meet this need. In addition, newer technologies like vFRC can have a positive impact on database performance by offloading some of the storage I/O onto the local server. This can be beneficial to the intended application and has the potential to improve all applications in a mixed workload environment over time by relieving pressure on shared storage resources.
In our tests, we found that the new release of VMware vSphere 5.5 provided a new feature, vSphere Flash Read Cache, that decreased TPC-H-like OLAP workload processing time by 14 percent. We also found that running these workloads on Oracle Database 12c with the new feature didn’t affect the ability of administrators to complete routine vMotion tasks; with vSphere Flash Read Cache enabled during a vMotion, the migration went smoothly and vFRC continued to cache after the migration completed. This means that the combination of the VMware vSphere 5.5 platform, Cisco UCS B200 M3 servers, and EMC VMAX 10K storage was able to provide improved Oracle Database 12c performance using the new vSphere Flash Read Cache feature, which improves the reliability and database response times you deliver for customers and employees alike.
Efficient Data Protection in VMware environments. You will learn about the basics of data protection in VMware environments and you will also find sample configurations and recommendations including Symantec Backup Exec / NetBackup, Fujitsu ETERNUS LT and Fujitsu ETERNUS CS800.
VMworld 2013: Part 1: Getting Started with vCenter Orchestrator (VMworld)
VMworld 2013
James Bowling, General Datatech, LP
Savina Ilieva, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
On the very day the new Veeam Availability Suite v9 was released, we were able to hold our "What's new" presentation, so attendees on January 12 received red-hot news about the latest features.
Our Veeam Certified Trainer Rinon Belegu showed that evening why he believes the Veeam Availability Suite is the best product for high availability in virtual environments and continues its winning run with version 9.
Rinon Belegu presented the following features:
- Veeam integration with EMC Storage Snapshots
- Veeam Cloud Connect Replication
- Primary Storage Integration
- Veeam Explorer™ for Oracle and other Explorer enhancements
- Enterprise Enhancements
- Backup Storage Integration
- Scale-out Backup Repository™
VMworld 2013: VMware vSphere High Availability - What's New and Best Practices (VMworld)
VMworld 2013
Keith Farkas, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Jeff Hunter, VMware
It's the End of Data Storage As We Know It (And I Feel Fine)Stephen Foskett
Technological change is finally coming to storage, and it will wipe away the architecture we've come to know over the last few decades. Say goodbye to the "do it all" Fibre Channel SAN storage array and get ready for converged infrastructure, distributed storage, alternative attachments like PCIe, and top-of-rack flash! In this session, Stephen Foskett will explain why this change is inevitable and how it will shake out. You won't recognize what's coming, but it will be faster, cheaper, and more integrated than ever! Delivered at
This is a presentation on storage-related changes in VMware vSphere 4.1. I gave this presentation at the Triad VMUG meeting in Greensboro, NC on January 28, 2011.
A Winning Combination: IBM Storage and VMware (Paula Koziol)
Together IBM and VMware are uniquely positioned to help you rapidly progress along your virtualization journey, from the desktop to the datacenter to the cloud. Discover the synergies and benefits of leveraging IBM Storage in your VMware deployment.
Originally presented at Think 2019.
http://ibm.com/storage
This is the presentation on VMware integration points, given on October 26, 2010, to the Eastern TN VMUG/EMC User Group at their meeting in Knoxville, TN.
Stephen Foskett presents Five Truths of Storage:
- Preventing Data Loss is All that Matters
- Storage Metrics are Blind
- The Need for Storage is Endless
- Protocols are Irrelevant
- Storage Features are System Features
Stephen then applies these truths to the new architectures that are appearing across IT:
- Converged and Hyper-Converged Infrastructure
- Hybrid Cloud
- True Cloud Computing
- Future Computing Systems
In this keynote from Deltaware Data Solutions' 2016 Emerging Technology Summit, Stephen Foskett gives essential background on the emerging trend of containerization of enterprise applications. What are containers and how will they affect enterprise IT? Why is Docker so important? Foskett addresses both the technical and architectural questions, discussing which applications will be containerized, the benefits and costs, and what it means for IT operations.
Out of the Lab and Into the Datacenter - Which Technologies Are Ready? (Stephen Foskett)
Enterprise IT has long been a conservative field, with many promising technologies and products skipped in favor of a safer-seeming choice. But technology development continues at a rapid pace, and many IT professionals are seeing the need to transform IT or be left in the dust! In this keynote, datacenter expert Stephen Foskett will share his views on technology adoption: How to judge which products and technologies will sink and which will soar, and which trends are worth betting a career on.
The Four Horsemen of Storage System Performance (Stephen Foskett)
Why do some data storage solutions perform better than others? What tradeoffs are made for economy and how do they affect the system as a whole? These questions can be puzzling, but there are core truths that are difficult to avoid. Mechanical disk drives can only move a certain amount of data. RAM caching can improve performance, but only until it runs out. I/O channels can be overwhelmed with data. And above all, a system must be smart to maximize the potential of these components. These are the four horsemen of storage system performance, and they cannot be denied.
Gestalt IT - Why It's Time to Stop Thinking In Terms of Silos (Stephen Foskett)
This is the presentation given by Stephen Foskett at the Chicago VMUG User Conference on September 23, 2015. I present the case for "gestalt" IT that takes a holistic view of the datacenter. It includes discussion of commodity hardware, server virtualization, converged infrastructure, software-defined, cloud, and Chris Wahl.
"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011 (Stephen Foskett)
The notion that Fibre Channel is for data centers and iSCSI is for SMBs and workgroups is outdated. Increases in LAN speeds and the coming of lossless Ethernet position iSCSI as a good fit for the data center. Whether your organization adopts FC or iSCSI depends on many factors, such as the current product set, future application demands, organizational skill set, and budget. In this session we discuss the conditions under which FC or iSCSI is the right fit, why you should use one, and when to kick either to the curb.
5. This Hour's Focus: What Virtualization Does
- Introducing storage and server virtualization
- The future of virtualization
- The virtual datacenter
- Virtualization confounds storage
- Three pillars of performance
- Other issues
- Storage features for virtualization
- What's new in VMware
6. Virtualization of Storage, Server and Network
- Storage has been stuck in the Stone Age since the Stone Age! Fake disks, fake file systems, fixed allocation; little integration and no communication
- Virtualization is a bridge to the future: it maintains functionality for existing apps while improving flexibility and efficiency
8. Server Virtualization is On the Rise (Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010)
9. Server Virtualization is a Pile of Lies! (Diagram contrasting what the OS thinks it is running on with what it is actually running on: the guest OS sees virtual devices (vNIC, vSCSI/PV, VMDK), which the VMkernel's scheduler, memory allocator, vSwitch, VMFS and I/O drivers map onto physical hardware via binary translation, paravirtualization, or hardware assist.)
11. The Virtual Data Center of Tomorrow (Diagram: management, legacy and cloud applications sharing pooled CPU, network, backup and storage resources.)
14. Confounding Storage Presentation
- Storage virtualization is nothing new: RAID and NAS virtualized disks; caching arrays and SANs masked volumes; newer tricks include thin provisioning, automated tiering, and array virtualization
- But we wrongly assume this is where it ends: volume managers, file systems, and databases virtualize storage too
- Now we have hypervisors virtualizing storage: VMFS/VMDK = storage array? Virtual storage appliances (VSAs)
15. Begging for Converged I/O
- How many I/O ports and cables does a server need? A typical server has 4 ports with 2 used; application servers have 4-8 ports in use! (Example configuration: 4G FC storage, 1 GbE network, 1 GbE cluster.)
- Do FC and InfiniBand make sense with 10/40/100 GbE? When does commoditization hit I/O? Ethernet momentum is unbeatable
- Blades and hypervisors demand greater I/O integration and flexibility
- The other side of the coin: the need to virtualize I/O
16. Driving Storage Virtualization
- Server virtualization demands storage features: data protection with snapshots and replication, allocation efficiency with thin provisioning, performance and cost tweaking with automated sub-LUN tiering, improved locking and resource sharing
- Flexibility is the big one: you must be able to create, use, modify and destroy storage on demand; move storage logically and physically; and allow the OS to move too
17. "The I/O Blender" Demands New Architectures
- Shared storage is challenging to implement
- Storage arrays "guess" what's coming next based on allocation (LUN), taking advantage of sequential performance
- Server virtualization throws I/O into a blender: all I/O is now random I/O! (See the toy sketch below.)
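To make the blender effect concrete, here is a toy Python model (the VM count, block addresses, and stream lengths are invented): each guest issues perfectly sequential reads, but the shared datastore sees them interleaved and therefore effectively random.

```python
import random

def guest_stream(start_lba, length):
    """One guest's perfectly sequential run of block addresses."""
    return [start_lba + i for i in range(length)]

# Four hypothetical VMs, each reading sequentially from its own region.
streams = [guest_stream(base, 8) for base in (0, 10_000, 20_000, 30_000)]

blended = []
while any(streams):
    s = random.choice([s for s in streams if s])  # whichever VM happens to issue next
    blended.append(s.pop(0))

# Count how often the next request actually follows the previous one on disk.
sequential = sum(1 for a, b in zip(blended, blended[1:]) if b == a + 1)
print("array-visible stream:", blended)
print(f"sequential transitions: {sequential}/{len(blended) - 1}")
```

Each per-VM stream is 100% sequential, yet the array-visible stream has very few sequential transitions, which is exactly why prefetch and LUN-based "guessing" break down under virtualization.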
18. Server Virtualization Requires SAN and NAS
- Server virtualization has transformed the data center and storage requirements; VMware is the #1 driver of SAN adoption today!
- 60% of virtual server storage is on SAN or NAS; 86% have implemented some server virtualization
- Server virtualization has enabled, and demanded, centralization and sharing of storage on arrays like never before! (Source: ESG, 2008)
19. Keys to the Future For Storage Folks Ye Olde Seminar Content!
20. Primary Production Virtualization Platform Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
22. Which Features Are People Using? Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
23. What's New in vSphere 4 and 4.1
- VMware vSphere 4 (AKA ESX/ESXi 4) is a major upgrade for storage: lots of new features such as thin provisioning, PSA, any-to-any Storage VMotion, and PVSCSI, plus a massive performance upgrade (400k IOPS!)
- vSphere 4.1 is equally huge for storage: boot from SAN, vStorage APIs for Array Integration (VAAI), Storage I/O Control (SIOC)
24. What's New in vSphere 5
- VMFS-5 – Scalability and efficiency improvements
- Storage DRS – Datastore clusters and improved load balancing
- Storage I/O Control – Cluster-wide and NFS support
- Profile-Driven Storage – Provisioning, compliance and monitoring
- FCoE Software Initiator
- iSCSI Initiator GUI
- Storage APIs – Storage Awareness (VASA)
- Storage APIs – Array Integration (VAAI 2) – Thin Stun, NFS, T10
- Storage vMotion – Enhanced with mirror mode
- vSphere Storage Appliance (VSA)
- vSphere Replication – New in SRM
25. And Then, There's VDI…
Virtual desktop infrastructure (VDI) takes everything we just worried about and amplifies it:
- Massive I/O crunches
- Huge duplication of data
- More wasted capacity
- More user visibility
- More backup trouble
27. Technical Considerations - Configuring Storage for VMs The mechanics of presenting and using storage in virtualized environments
28. This Hour's Focus: Hypervisor Storage Features
- Storage vMotion
- VMFS
- Storage presentation: shared, raw, NFS, etc.
- Thin provisioning
- Multipathing (VMware Pluggable Storage Architecture)
- VAAI and VASA
- Storage I/O Control and Storage DRS
29. Storage vMotion
- Introduced in ESX 3 as "Upgrade vMotion"
- ESX 3.5 used a snapshot while the datastore was in motion
- vSphere 4 used changed-block tracking (CBT) and recursive passes
- vSphere 5 Mirror Mode mirrors writes to in-progress vMotions and also supports migration of vSphere snapshots and Linked Clones
- Can be offloaded for VAAI-Block (but not NFS); a programmatic sketch follows
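For the curious, a relocation can also be driven through the vSphere API. This is only a minimal pyVmomi sketch of submitting a Storage vMotion, assuming a reachable vCenter; the hostname, credentials, VM name, datastore name, and the find_by_name helper are all hypothetical placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Walk the inventory and return the first managed object with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab convenience only; use real certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "oracle-db-01")
    target = find_by_name(content, vim.Datastore, "gold-datastore")

    # A RelocateSpec with only the datastore set (host unchanged) is a Storage vMotion.
    spec = vim.vm.RelocateSpec(datastore=target)
    task = vm.RelocateVM_Task(spec=spec)
    print("Storage vMotion submitted:", task.info.key)
finally:
    Disconnect(si)
```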
30. vSphere 5: What's New in VMFS-5
- Max VMDK size is still 2 TB – 512 bytes
- Virtual (non-passthru) RDM is still limited to 2 TB
- Max LUNs per host is still 256
31. Hypervisor Storage Options: Shared Storage
- The common/workstation approach: VMware stores a VMDK image in a VMFS datastore; Hyper-V stores a VHD image in a CSV datastore; block storage (direct or FC/iSCSI SAN)
- Why? Traditional, familiar, common (~90%); prime features (Storage VMotion, etc.); multipathing, load balancing, failover*
- But… overhead of two storage stacks (5-8%); harder to leverage storage features; often shares a storage LUN and queue; difficult storage management
(Diagram: guest OS in a VM, VMDK in VMFS, on DAS or SAN storage.)
32. Hypervisor Storage Options: Shared Storage on NAS
- Skip VMFS and use NAS: NFS or SMB is the datastore
- Wow! Simple (no SAN); multiple queues; flexible (on-the-fly changes); simple snap and replicate*; enables full VMotion; link aggregation (trunking) is possible
- But… less familiar (ESX 3.0+); CPU load questions; limited to 8 NFS datastores (ESX default); snapshot consistency for multiple VMDKs
(Diagram: guest OS in a VM, VMDK on NAS storage.)
33. Hypervisor Storage Options: Guest iSCSI
- Skip VMFS and use iSCSI directly: access a LUN just like any physical server; VMware ESX can even boot from iSCSI!
- Ok… storage folks love it; can be faster than ESX iSCSI; very flexible (on-the-fly changes); the guest can move and still access its storage
- But… less common to VM folks; CPU load questions; no Storage VMotion (but doesn't need it)
(Diagram: guest OS in a VM connecting directly to an iSCSI LUN.)
34. Hypervisor Storage Options: Raw Device Mapping (RDM)
- Guest VMs access storage directly over iSCSI or FC; VMs can even boot from raw devices; a Hyper-V pass-through LUN is similar
- Great! Per-server queues for performance; easier measurement; the only method for clustering; supports LUNs larger than 2 TB (60 TB passthru in vSphere 5!)
- But… tricky VMotion and dynamic resource scheduling (DRS); no Storage VMotion; more management overhead; limited to 256 LUNs per data center
(Diagram: guest OS in a VM, an RDM mapping file pointing at a SAN LUN. A sketch for telling RDMs from VMDKs programmatically follows.)
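If you need to audit which presentation method a VM is actually using, the disk backing type tells you. A hedged pyVmomi sketch; it assumes a vm object obtained as in the earlier connection example, and the describe_disks helper name is made up.

```python
from pyVmomi import vim

def describe_disks(vm):
    """Print each virtual disk and whether it is a VMFS-backed VMDK or an RDM."""
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        backing = dev.backing
        if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            kind = f"RDM ({backing.compatibilityMode})"   # physicalMode or virtualMode
        elif isinstance(backing, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
            kind = "VMDK (thin)" if backing.thinProvisioned else "VMDK (thick)"
        else:
            kind = type(backing).__name__                 # some other backing type
        gib = dev.capacityInKB / (1024 * 1024)
        name = getattr(backing, "fileName", "n/a")
        print(f"{dev.deviceInfo.label}: {gib:.0f} GiB, {kind}, file={name}")
```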
35. Hypervisor Storage Options: Direct I/O
- VMware ESX VMDirectPath: guest VMs access I/O hardware directly, leveraging AMD IOMMU or Intel VT-d
- Great! Potential for native performance; just like RDM but better!
- But… no VMotion or Storage VMotion; no ESX fault tolerance (FT); no ESX snapshots or VM suspend; no device hot-add; and no performance benefit in the real world!
(Diagram: guest OS in a VM driving SAN storage through a directly assigned adapter.)
36. Which VMware Storage Method Performs Best? (Charts: mixed random I/O and CPU cost per I/O for VMFS, RDM (physical), and RDM (virtual). Source: "Performance Characterization of VMFS and RDM Using a SAN", VMware Inc., ESX 3.5, 2008.)
37. vSphere 5: Policy or Profile-Driven Storage
- Allows storage tiers to be defined in vCenter based on SLA, performance, etc.
- Used during provisioning, cloning, Storage vMotion, and Storage DRS
- Leverages VASA for metrics and characterization
- All HCL arrays and types (NFS, iSCSI, FC)
- Custom descriptions and tagging for tiers
- Compliance status is a simple binary report
38. Native VMware Thin Provisioning
- VMware ESX 4 allocates storage in 1 MB chunks as capacity is used
- Similar support was enabled for virtual disks on NFS in VI 3; thin provisioning existed for block and could be enabled on the command line in VI 3; it is also present in VMware desktop products
- vSphere 4 fully supports and integrates thin provisioning: every version/license includes it, and it allows thick-to-thin conversion during Storage VMotion
- In-array thin provisioning is also supported (we'll get to that…)
39. Four Types of VMware ESX Volumes (Table comparing the volume types. Note: FT is not supported. What will your array do? VAAI helps… Friendly to on-array thin provisioning.)
40. Storage Allocation and Thin Provisioning VMware tests show no performance impact from thin provisioning after zeroing
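One way to see thin provisioning at work is to compare what a VM has actually committed on its datastore with what it is provisioned to grow into. A small pyVmomi sketch, again assuming a connected session and a vm object from the earlier example; the thin_savings helper name is made up.

```python
def thin_savings(vm):
    """Compare space committed on the datastore with the VM's full provisioned size,
    using the per-VM storage summary (committed + uncommitted)."""
    s = vm.summary.storage
    committed_gib = s.committed / 2**30
    provisioned_gib = (s.committed + s.uncommitted) / 2**30
    saved_gib = provisioned_gib - committed_gib
    print(f"{vm.name}: provisioned {provisioned_gib:.1f} GiB, "
          f"committed {committed_gib:.1f} GiB, thin savings {saved_gib:.1f} GiB")
```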
41. Pluggable Storage Architecture: Native Multipathing
- VMware ESX includes multipathing built in
- Basic native multipathing (NMP) is round-robin fail-over only – it will not load balance I/O across multiple paths or make more intelligent decisions about which paths to use
(Diagram: the Pluggable Storage Architecture (PSA), with VMware NMP or a third-party MPP layered over VMware or third-party SATPs and PSPs.)
42. Pluggable Storage Architecture: PSP and SATP
- The vSphere 4 Pluggable Storage Architecture allows third-party developers to replace ESX's storage I/O stack (ESX Enterprise+ only)
- There are two classes of third-party plug-ins: path-selection plug-ins (PSPs) optimize the choice of which path to use, ideal for active/passive type arrays; storage array type plug-ins (SATPs) allow load balancing across multiple paths in addition to path selection for active/active arrays
- EMC PowerPath/VE for vSphere does everything
43. Storage Array Type Plug-ins (SATP)
ESX native approaches: Active/Passive, Active/Active, Pseudo Active. Storage Array Type Plug-Ins:
- VMW_SATP_LOCAL – Generic local direct-attached storage
- VMW_SATP_DEFAULT_AA – Generic for active/active arrays
- VMW_SATP_DEFAULT_AP – Generic for active/passive arrays
- VMW_SATP_LSI – LSI/NetApp arrays from Dell, HDS, IBM, Oracle, SGI
- VMW_SATP_SVC – IBM SVC-based systems (SVC, V7000, Actifio)
- VMW_SATP_ALUA – Asymmetric Logical Unit Access-compliant arrays
- VMW_SATP_CX – EMC/Dell CLARiiON and Celerra (also VMW_SATP_ALUA_CX)
- VMW_SATP_SYMM – EMC Symmetrix DMX-3/DMX-4/VMAX, Invista
- VMW_SATP_INV – EMC Invista and VPLEX
- VMW_SATP_EQL – Dell EqualLogic systems
Also, EMC PowerPath, HDS HDLM, and vendor-unique plug-ins not detailed in the HCL.
44. Path Selection Plug-ins (PSP)
- VMW_PSP_MRU – Most-Recently Used (MRU) – supports hundreds of storage arrays
- VMW_PSP_FIXED – Fixed – supports hundreds of storage arrays
- VMW_PSP_RR – Round-Robin – supports dozens of storage arrays
- DELL_PSP_EQL_ROUTED – Dell EqualLogic iSCSI arrays
Also, EMC PowerPath and other vendor-unique plug-ins. (A toy comparison of fixed vs. round-robin path use follows.)
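The practical difference between the fixed/MRU plug-ins and round-robin is simply how I/Os are spread across paths. A toy, non-VMware illustration in Python (the path names and request counts are invented):

```python
from collections import Counter
from itertools import cycle

paths = ["vmhba1:C0:T0:L5", "vmhba2:C0:T0:L5"]  # two hypothetical paths to one LUN

def fixed(requests, preferred=0):
    """Fixed/MRU style: every I/O rides the preferred path until it fails."""
    return Counter(paths[preferred] for _ in range(requests))

def round_robin(requests):
    """Round-robin style: alternate I/Os across all active paths."""
    rr = cycle(paths)
    return Counter(next(rr) for _ in range(requests))

print("fixed:      ", dict(fixed(1000)))        # all 1000 I/Os on one path
print("round-robin:", dict(round_robin(1000)))  # 500 I/Os on each path
```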
45. vStorage APIs for Array Integration (VAAI)
- VAAI integrates advanced storage features with VMware
- Basic requirements: a capable storage array, ESX 4.1+, and a software plug-in for ESX
- Not every implementation is equal: block zeroing can be very demanding for some arrays, and zeroing might conflict with full copy
47. vSphere 5: VAAI 2
- Block (FC/iSCSI): T10 compliance is improved, so no plug-in is needed for many arrays
- File (NFS): NAS plug-ins come from vendors, not VMware
48. vSphere 5: vSphere Storage APIs – Storage Awareness (VASA)
- VASA is a communication mechanism for vCenter to detect array capabilities: RAID level, thin provisioning state, replication state, etc.
- Two locations in vCenter Server: "System-Defined Capabilities" (per-datastore descriptors) and storage views / SMS APIs
49. Storage I/O Control (SIOC)
- Storage I/O Control (SIOC) is all about fairness: prioritization and QoS for VMFS, re-distributing unused I/O resources and minimizing "noisy neighbor" issues
- ESX can provide quality of service for storage access to virtual machines; enabled per-datastore
- When a pre-defined latency level is exceeded (default 30 ms), it begins to throttle VM I/O; it monitors queues on storage arrays and per-VM I/O latency
- But: requires vSphere 4.1 with Enterprise Plus; disabled by default but highly recommended; block storage only (FC or iSCSI); whole-LUN only (no extents); no RDM
(A toy sketch of the throttling idea follows.)
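SIOC's behaviour amounts to share-proportional queue throttling once datastore latency crosses the congestion threshold. A rough toy model only (the VM names, share values, and 64-slot device queue are made up; this is not VMware's actual algorithm):

```python
LATENCY_THRESHOLD_MS = 30   # SIOC's default congestion threshold
vms = {"db-vm": 1000, "web-vm": 500, "batch-vm": 500}   # hypothetical share values

def queue_slots(observed_latency_ms, total_slots=64):
    """Below the threshold, nobody is throttled; above it, hand out the device
    queue in proportion to each VM's shares."""
    if observed_latency_ms <= LATENCY_THRESHOLD_MS:
        return {vm: total_slots for vm in vms}          # no contention, no throttling
    total_shares = sum(vms.values())
    return {vm: max(1, total_slots * share // total_shares) for vm, share in vms.items()}

print(queue_slots(12))   # healthy latency: every VM sees the full queue
print(queue_slots(45))   # congested: slots split 32/16/16 according to shares
```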
51. Virtual Machine Mobility
- Moving virtual machines is the next big challenge: physical servers are difficult to move around and between data centers, and there is pent-up desire to move virtual machines from host to host and even to different physical locations
- VMware DRS moves live VMs around the data center, the "Holy Grail" for server managers
- Requires networked storage (SAN/NAS)
52. vSphere 5: Storage DRS
- Datastore clusters aggregate multiple datastores
- VM and VMDK placement metrics: space (capacity utilization and availability, 80% default) and performance (I/O latency, 15 ms default)
- When thresholds are crossed, vSphere rebalances all VMs and VMDKs according to affinity rules
- Storage DRS works with either VMFS/block or NFS datastores
- Maintenance Mode evacuates a datastore
(A toy placement sketch follows.)
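Storage DRS placement is driven by those two thresholds. A toy sketch of threshold-based initial placement (the datastore names, sizes, and latencies are invented, and the real Storage DRS uses far more sophisticated modeling):

```python
SPACE_THRESHOLD = 0.80       # rebalance when a datastore is more than 80% full
LATENCY_THRESHOLD_MS = 15    # or when observed I/O latency exceeds 15 ms

# Hypothetical datastore cluster: name -> (used GB, capacity GB, observed latency ms)
cluster = {
    "ds-gold-01": (900, 1000, 22.0),
    "ds-gold-02": (400, 1000, 6.5),
    "ds-gold-03": (780, 1000, 11.0),
}

def violates_thresholds(used, capacity, latency_ms):
    return used / capacity > SPACE_THRESHOLD or latency_ms > LATENCY_THRESHOLD_MS

def place_new_vmdk(size_gb):
    """Pick the healthiest datastore with room: lowest latency, then most free space."""
    candidates = [(name, used, cap, lat) for name, (used, cap, lat) in cluster.items()
                  if not violates_thresholds(used + size_gb, cap, lat)]
    if not candidates:
        raise RuntimeError("no datastore satisfies the Storage DRS thresholds")
    return min(candidates, key=lambda c: (c[3], -(c[2] - c[1])))[0]

print(place_new_vmdk(100))   # -> ds-gold-02 in this made-up cluster
```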
55. This Hour's Focus: Non-Hypervisor Storage Features
- Converged networking
- Storage protocols (FC, iSCSI, NFS)
- Enhanced Ethernet (DCB, CNA, FCoE)
- I/O virtualization
- Storage for virtual storage
- Tiered storage and SSD/flash
- Specialized arrays
- Virtual storage appliances (VSA)
56. Introduction: Converging on Convergence
- Data centers rely more and more on standard ingredients
- What will connect these systems together? IP and Ethernet are the logical choices
58. Which Storage Protocol to Use?
- Server admins don't know or care about storage protocols and will want whatever they are familiar with
- Storage admins have preconceived notions about the merits of the various options: FC is fast, low-latency, low-CPU, expensive; NFS is slow, high-latency, high-CPU, cheap; iSCSI is medium, medium, medium, medium
63. Which Storage Protocols Do People Use? Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
64. The Upshot: It Doesn't Matter
- Use what you have and are familiar with! FC, iSCSI and NFS all work well
- Most enterprise production VM data is on FC; many smaller shops use iSCSI or NFS; either/or? 50% use a combination
- For IP storage: network hardware and configuration matter more than the protocol (NFS, iSCSI, FC); use a separate network or VLAN; use a fast switch and consider jumbo frames
- For FC storage: 8 Gb FC/FCoE is awesome for VMs; look into NPIV; look for VAAI
66. Serious Performance 10 GbE is faster than most storage interconnects iSCSI and FCoE both can perform at wire-rate
67. Latency is Critical Too Latency is even more critical in shared storage FCoE with 10 GbE can achieve well over 500,000 4K IOPS (if the array and client can handle it!)
68. Benefits Beyond Speed
- 10 GbE takes performance off the table (for now…), but performance is only half the story: simplified connectivity, new network architecture, virtual machine mobility
(Diagram: 1 GbE cluster, 4G FC storage and 1 GbE network links consolidated onto a single 10 GbE link, plus 6 Gbps of extra capacity.)
70. SCSI expects a lossless transport with guaranteed delivery
77. QCN (802.1Qau) is still not ready
Data Center Bridging building blocks:
- Priority Flow Control (PFC) – 802.1Qbb
- Congestion Management (QCN) – 802.1Qau
- Bandwidth Management (ETS) – 802.1Qaz
- PAUSE – 802.3x
- Data Center Bridging Exchange Protocol (DCBX)
- Traffic Classes – 802.1p/Q
78. FCoE CNAs for VMware ESX No Intel (OpenFCoE) or Broadcom support in vSphere 4…
79. vSphere 5: FCoE Software Initiator Dramatically expands the FCoE footprint from just a few CNAs Based on Intel OpenFCoE? – Shows as “Intel Corporation FCoE Adapter”
80. I/O Virtualization: Virtual I/O Extends I/O capabilities beyond physical connections (PCIe slots, etc) Increases flexibility and mobility of VMs and blades Reduces hardware, cabling, and cost for high-I/O machines Increases density of blades and VMs
81. I/O Virtualization: IOMMU (Intel VT-d)
- An IOMMU gives devices direct access to system memory (AMD IOMMU or Intel VT-d), similar to the AGP GART
- VMware VMDirectPath leverages the IOMMU to let VMs access devices directly, but it may not improve real-world performance
(Diagram: I/O devices reaching system memory through the IOMMU, just as the CPU does through the MMU.)
82. Does SSD Change the Equation? RAM and flash promise high performance… But you have to use it right
83. Flash is Not A Disk Flash must be carefully engineered and integrated Cache and intelligence to offset write penalty Automatic block-level data placement to maximize ROI IF a system can do this, everything else improves Overall system performance Utilization of disk capacity Space and power efficiency Even system cost can improve!
86. Three Approaches to SSD For VM EMC Project Lightning promises to deliver all three!
87. Storage for Virtual Servers (Only!)
- A new breed of storage solutions just for virtual servers: highly integrated (vCenter, VMkernel drivers, etc.), high-performance (SSD cache), mostly from startups (for now)
- Tintri – NFS-based caching array
- Virsto+EvoStor – Hyper-V software, moving to VMware
88. Virtual Storage Appliances (VSA)
- What if the SAN was pulled inside the hypervisor? A VSA is a virtual storage array running as a guest VM
- Great for lab or proof-of-concept work; some are not for production
- You can build a whole data center in a hypervisor, including LAN, SAN, clusters, etc.
(Diagram: a virtual storage appliance guest serving a virtual SAN and virtual LAN to other guest VMs from the physical server's CPU, RAM and disk.)
89. vSphere 5: vSphere Storage Appliance (VSA)
- Aimed at the SMB market
- Two deployment options: 2x replicates storage 4:2; 3x replicates round-robin 6:3
- Uses local (DAS) storage; enables HA and vMotion with no SAN or NAS
- Uses NFS for storage access; also manages IP addresses for HA
(A toy capacity calculation follows.)
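The 4:2 and 6:3 figures are just mirroring math: every datastore is kept on two hosts, so usable capacity is half of raw. A toy calculation, assuming a made-up 2 TB of local storage per host:

```python
def vsa_usable(per_host_tb, hosts):
    """VSA mirrors each datastore to one other host, so usable capacity is half
    of the raw local capacity (4:2 with 2 hosts, 6:3 with 3 hosts)."""
    if hosts not in (2, 3):
        raise ValueError("VSA supports 2- or 3-node deployments")
    raw = per_host_tb * hosts
    return raw, raw / 2

for hosts in (2, 3):
    raw, usable = vsa_usable(2.0, hosts)   # assume 2 TB of DAS per host
    print(f"{hosts} hosts: {raw:.0f} TB raw -> {usable:.0f} TB usable")
```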
91. Whew! Let's Sum Up
- Server virtualization changes everything: throw your old assumptions about storage workloads and presentation out the window
- We (storage folks) have some work to do: new ways of presenting storage to the server, converged I/O (Ethernet!), new demand for storage virtualization features, new architectural assumptions
- Up to 256 FC or iSCSI LUNs
- ESX multipathing: load balancing, failover, failover between FC and iSCSI*
- Beware of block sizes greater than 256 KB! If you want virtual disks greater than 256 GB, you must use a VMFS block size larger than 1 MB
- Align your virtual disk starting offset to your array (by booting the VM and using diskpart, Windows PE, or UNIX fdisk)* (see the alignment sketch below)
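The alignment advice is plain arithmetic: the partition's starting byte offset should be an even multiple of the array's chunk/stripe size, so that guest I/O does not straddle two back-end blocks. A small sketch (the 64 KB chunk size and the example start sectors are purely illustrative):

```python
SECTOR = 512                 # bytes per sector as seen by the guest
ARRAY_CHUNK = 64 * 1024      # hypothetical array stripe/chunk size in bytes

def is_aligned(start_sector, chunk=ARRAY_CHUNK):
    """A partition is aligned if its starting byte offset is a multiple of the chunk size."""
    return (start_sector * SECTOR) % chunk == 0

# Classic misalignment: older Windows partitions started at sector 63 (31.5 KB offset).
for start in (63, 128, 2048):
    state = "aligned" if is_aligned(start) else "MISALIGNED"
    print(f"start sector {start:5d} -> {state}")
```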
- Link Aggregation Control Protocol (LACP) for trunking/EtherChannel; use the "fixed" path policy, not MRU
- Up to 8 (or 32) NFS mount points
- Turn off access time updates
- Thin provisioning? Turn on AutoSize and watch out
http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_SIOC.pdf
- For FC storage the recommended latency threshold is 20-30 ms
- For SAS storage the recommended latency threshold is 20-30 ms
- For SATA storage the recommended latency threshold is 30-50 ms
- For SSD storage the recommended latency threshold is 15-20 ms
http://www.yellow-bricks.com/2010/10/19/storage-io-control-best-practices/