Before you take the plunge, consider every aspect of your VDI project: user experience, admin time, storage capacity, and ongoing costs related to datacenter space. Our experiences with Dell EMC PowerEdge FX2s enclosures outfitted with PowerEdge FC630 compute modules and Dell EMC XtremIO arrays show that this solution is a compelling one for VDI deployments. The Dell EMC XtremIO solution supported 6,000 virtual desktops with a good user experience, offered flexibility by supporting both full and linked clones, recomposed the desktops quickly and easily, and reduced data dramatically through inline deduplication and compression. And it did all this in less than a single rack of datacenter space, keeping server sprawl in check and costs down.
VMworld 2013
John Dodge, VMware
Andre Leibovici, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Real world experience with provisioning services (Citrix)
If you use Citrix NetScaler for secure remote access to your Citrix XenApp/Citrix XenDesktop deployment, you may be wondering if there’s more that it can do. You are correct! NetScaler also offers load balancing, global server load balancing, web interface integration, HDX traffic inspection and much more. It can enhance Citrix ShareFile StorageZones and Citrix mobile deployments. Join this session for a quick NetScaler refresher.
On the most suitable storage architecture for virtualization (Jordi Moles Blanco)
This is a paper I wrote on the most suitable storage architecture for virtualization, which solved some of the problems we had with shared storage at CDmon. The paper discusses the pros and cons of both iSCSI and NFS and aims to find the most stable and best-performing storage solution with the tools we had available at the time.
This document describes how XenServer provides and keeps track of the storage supplied to its guests. The first section is a reminder of how Linux looks at storage, and the second section builds on that to explain XenServer storage. Basic knowledge of Linux is required, as some standard tools are used.
SYN 219: Getting Up Close and Personal With MCS and PVS (Citrix)
In SYN 219, take a closer look at:
- Provisioning in a virtual desktop environment.
- Machine creation services and provisioning services.
- Getting the most out of MCS and PVS.
Watch the whole session on SynergyTV and follow along with these slides.
http://live.citrixsynergy.com/2016/player/ondemandplayer.php?presentation_id=17298efa-f6d3-4c15-b1cc-b2f0bb0b65dc
Data has become an enterprise’s most important asset. How is data stored on the cloud? How is this different from the way it is stored with traditional IT? This chapter will answer these questions.
Hyper-V® 2012 vs. vSphere™ 5.1: understanding the differences (SolarWinds)
With Hyper-V 2012, Microsoft® has closed many of the gaps it previously had with VMware®, and this webcast will walk through a comparison of the scalability and features of both hypervisors:
· Architecture & footprint
· CPU & memory management
· Storage capabilities
· Mobility & availability
The discussion will provide a technical basis for understanding the pros and cons of both platforms, whether you are looking to choose one or considering using both.
Drive new initiatives with a powerful Dell EMC, Nutanix, and Toshiba solution (Principled Technologies)
A Dell EMC XC Series cluster featuring Nutanix software and powered by Toshiba PX05S SAS SSDs delivered strong database performance with a blend of structured and unstructured data
LIQUID: A Scalable Deduplication File System For Virtual Machine Images (fabna benz)
INTRODUCTION: Cloud computing means storing and accessing data and programs over the Internet instead of on your computer's hard drive.
A virtual machine is software that creates a virtualized environment between the computer platform and the end user, in which the end user can operate software.
Data deduplication is a data compression technique that eliminates duplicate copies of repeating data: a redundant data block is replaced with a reference to a single stored copy instead of being stored multiple times, which improves storage utilization.
ADVANTAGES OF LIQUID:
*Fast virtual machine deployment with peer-to-peer data transfer.
*Low storage consumption by means of deduplication.
*Instant cloning of virtual machine images.
*On-demand fetching over the network, with caching on local disks.
*LIQUID files have no specific size limit.
CONCLUSION:
LIQUID is a deduplication file system with good I/O performance, achieved by caching frequently accessed data blocks in a memory cache, which avoids additional disk operations.
Deduplication of VM images proved to be effective.
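The block-level deduplication described above can be sketched in a few lines of Python. This is an illustrative model of content-addressed block storage, not LIQUID's actual implementation; the block size, class, and method names are assumptions for the example.

```python
import hashlib

# Illustrative sketch of block-level deduplication: each fixed-size block
# is fingerprinted, and a block whose fingerprint has already been seen is
# stored as a reference to the existing copy rather than stored again.
BLOCK_SIZE = 4096  # assumed block size for the example

class DedupStore:
    def __init__(self):
        self.blocks = {}   # fingerprint -> block data (each unique block stored once)
        self.images = {}   # image name -> ordered list of block fingerprints

    def add_image(self, name, data):
        fingerprints = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            # Store the block only if this fingerprint is new; a redundant
            # block is replaced by a reference to the single stored copy.
            self.blocks.setdefault(fp, block)
            fingerprints.append(fp)
        self.images[name] = fingerprints

    def read_image(self, name):
        # Reassemble the image by following its fingerprint list.
        return b"".join(self.blocks[fp] for fp in self.images[name])

    def stored_bytes(self):
        return sum(len(b) for b in self.blocks.values())
```

Adding two identical VM images to this store consumes the storage of only the unique blocks, which is the effect the LIQUID summary attributes to deduplication.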
Webinar agenda:
##What is Storage Spaces Direct?
##Usage scenarios for Storage Spaces.
##Minimum requirements for Storage Spaces.
##How to configure Windows Server 2016 Storage Spaces Direct to work with a server's local disks?
##What is Storage Replica?
##The difference between synchronous and asynchronous replication approaches.
##Which replication technologies to use for which tasks (DFS-R, Hyper-V Replica, SQL AlwaysOn, Exchange DAG), and how they combine with the new capabilities of Windows Server 2016.
##What is ReFS, and how does it differ in Server 2016 from previous editions of the OS?
##What ReFS offers for Hyper-V virtual machines: scenarios and capabilities.
##General changes to storage technologies in Windows Server 2016.
Comparing performance and cost: Dell PowerEdge VRTX with one Dell PowerEdge M... (Principled Technologies)
Keeping a legacy disparate hardware solution composed of nine older servers instead of choosing the new Dell PowerEdge VRTX powered by the Intel Xeon processor E5-4650 v3 family may cost more than one would expect. We found that the Dell PowerEdge VRTX with an Intel Xeon processor E5-4650 v3-powered Dell PowerEdge M830 server could do the work of nine legacy servers running email, database, and file/print server workloads. The VRTX ran all nine workloads in VMs, achieving a slight performance boost on the database and file/print workloads while using much less datacenter space and reducing power consumption by 38.4 percent.
The VRTX achieved these savings using 88.6 percent less rack-equivalent space than the legacy disparate hardware solution and with one-third as many cables, to reduce complexity and reduce the burden of space in small offices.
Despite a larger initial investment, the Dell PowerEdge VRTX with an Intel Xeon processor E5-4650 v3-powered Dell PowerEdge M830 server could actually lower the total cost of ownership over five years by as much as 48.5 percent, delivering a solid return on investment in less than two years.
As our test results show, investing in the Dell PowerEdge VRTX solution powered by the Intel Xeon processor E5-4600 v3 family could provide a compact solution to optimize application performance and reduce complexity at a lower lifetime cost than a legacy solution composed of nine older servers.
The Dell PowerEdge VRTX is an all-inclusive platform suitable for rapid deployment of a virtual environment such as Citrix XenDesktop 7.5. The integrated components of the VRTX mean your business has a centralized management console for the data center components that support VDI environments. We found that setting up and configuring the Dell PowerEdge VRTX and XenDesktop and deploying VDI users was easy. The addition of Dell Wyse terminals demonstrates how your end users can access your XenDesktop VDI environment with efficient hardware and little administrative effort. The combination of Dell PowerEdge VRTX and Citrix XenDesktop 7.5 can offer a unified, efficient, and simple enterprise-value VDI solution for your business, without the resources and commitment needed to support an enterprise data center.
Prepare images for machine learning faster with servers powered by AMD EPYC 7... (Principled Technologies)
A server cluster with 3rd Gen AMD EPYC processors achieved higher throughput and took less time to prepare images for classification than a server cluster with 3rd Gen Intel Xeon Platinum 8380 processors
Boosting performance with the Dell Acceleration Appliance for Databases (Principled Technologies)
If your business is expanding and you need to support more users accessing your databases, it’s time to act. Upgrading your database infrastructure with a flash storage-based solution is a smart way to improve performance without adding more servers or taking up very much rack space, which comes at a premium. The Dell Acceleration Appliance for Databases addresses this by providing strong performance when combined with your existing infrastructure or on its own.
We found that adding a highly available DAAD solution to our database application provided up to 3.01 times the Oracle Database 12c performance, which can make a big difference to your bottom line. Additionally, the DAAD delivered 3.14 times the database performance when replacing traditional storage completely, which could enable your infrastructure to keep up with your growing business’ needs.
An original training course from Microsoft Architect Alexey Kibkalo.
What is Hyper-V
Editions of Windows Server 2012 Hyper-V
Hardware requirements for Windows Server 2012 Hyper-V
Installing Hyper-V
Networking capabilities of Windows Server 2012 Hyper-V
What is Live Migration
Highly available Windows Server 2012 Hyper-V clusters
Disaster recovery and Hyper-V Replica
The basics of management with System Center
Supported by "Звезды и С", www.stars-s.ru
Windows Server 2012 R2 Software-Defined Storage (Aidan Finn)
In this presentation I taught attendees how to build a Scale-Out File Server (SOFS) using Windows Server 2012 R2, JBODs, Storage Spaces, Failover Clustering, and SMB 3.0 Networking, suitable for storing application data such as Hyper-V and SQL Server.
Update your private cloud with 14th generation Dell EMC PowerEdge FC640 serve... (Principled Technologies)
Critical Apache Cassandra NoSQL databases can offer reliability and flexibility for workloads like media streaming or social media. Running these databases in a private cloud can let you maintain control of your data while giving you the agility and flexibility the cloud provides.
In our datacenter, the Dell EMC PowerEdge FC640 solution powered by Intel Xeon Gold 5120 processors dramatically increased performance for Apache Cassandra workloads compared to a legacy solution. By choosing a solution that can do up to 4.7 times the work of the legacy solution, your infrastructure could handle more requests at a time—and we found that the Dell EMC PowerEdge FC640 solution could do all this additional work in less space, which could let you hold off on renting more datacenter space or on building out your existing space as your business grows.
Comparing Dell Compellent network-attached storage to an industry-leading NAS... (Principled Technologies)
A flexible NAS solution addresses many organizational challenges from server backup to hosting production applications and databases. Advanced NAS solutions such as the Intel Xeon processor-based Dell Compellent FS8600 NAS provide flexibility and scalability, allowing various use options as well as drive options throughout its lifecycle. This scale and flexibility enables an organization to alleviate performance bottlenecks anywhere in the organization simply by reallocating or adding more disk resources.
We found that the Intel Xeon processor-based Dell Compellent FS8600 NAS solution backed up a small-file corpus up to 15.9 percent faster and a large-file corpus up to 17.1 percent faster than a similarly configured, industry-leading NAS solution. This means that selecting the Dell Compellent FS8600 NAS has the potential to help optimize an organization’s infrastructure.
Upgrading to Windows Server 2019 on Dell EMC PowerEdge servers: A simple proc... (Principled Technologies)
Using Dell EMC PowerEdge R740xd servers with Intel Xeon Scalable processors, we upgraded from Windows Server 2016 and saw data compression ratios of up to 9.8:1 thanks to new Storage Spaces Direct features
A Dell PowerEdge MX environment using OpenManage Enterprise and OpenManage En... (Principled Technologies)
Compared to a Cisco UCS-X environment using Intersight, the Dell environment streamlined making changes to VLANs and helped avoid interventions during scheduled firmware updates
Conclusion
We executed two management scenarios in a Dell PowerEdge MX environment with Dell OpenManage Enterprise and OpenManage Enterprise Modular and a Cisco UCS X-Series chassis environment with Cisco Intersight. We learned that the Dell solution’s single-part profile modification for performing VLAN updates was quicker and simpler than the Cisco solution’s two-part profile deployment, requiring 40 percent less time and two-thirds as many steps. We also compared the firmware updating process on the solutions. Being able to schedule these updates to occur automatically from the online Dell repository offered an advantage over having to manually execute the same tasks from the Cisco Intersight repositories. Namely, administrators do not need to take action during maintenance windows but can instead schedule them ahead of time. Saving time on routine tasks frees administrators to pursue innovation, and being able to avoid middle-of-the-night duties helps companies provide a better work experience for admins. Together, these advantages help make Dell PowerEdge MX servers a good candidate for companies considering upgrading the older Cisco UCS servers in their data centers.
Citrix VDI on FlexPod with Microsoft Private Cloud (NetApp)
FlexPod with Microsoft Private Cloud is a prevalidated, best-in-class private cloud reference implementation from Microsoft's partners. It is a joint collaboration between NetApp, Cisco, and Microsoft to accelerate private cloud adoption, resulting in a single orderable part and unified support.
Ensure greater uptime and boost VMware vSAN cluster performance with the Del... (Principled Technologies)
The Dell EMC PowerEdge MX with VMware vSAN Ready Nodes delivered a 55.9% faster response time than a Cisco UCS solution and a 41.3% faster response time than an HPE Synergy solution
VxRail Appliance - Modernize your infrastructure and accelerate IT transforma... (Maichino Sepede)
An overview of the VxRail Appliance, including what’s new with VxRail on the 14th generation PowerEdge server, and advancements in the VxRail 4.5 software.
A company’s success depends on critical application performance and availability. Upgrades and patches can improve application efficiency and user experience, but making the necessary changes requires resource-intensive environments to test updates before deploying them. What’s more, these applications need to continue accessing data even in the event of an on-premises crisis.
Our Dell EMC VMAX 250F and PowerEdge server solution supported test/dev environments and production database applications simultaneously without affecting the production applications’ performance. Storage latency for the VMAX 250F peaked at a millisecond in our testing while IOPS stayed within an acceptable range. The solution also kept data highly available with no downtime or performance drop when we initiated a lost host connection for the primary storage. Consider the Dell EMC VMAX 250F array for your datacenter to support the critical database applications that drive your company.
Investing in GenAI: Cost‑benefit analysis of Dell on‑premises deployments vs.... (Principled Technologies)
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration of how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel... (Principled Technologies)
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
Enable security features with no impact to OLTP performance with Dell PowerEd... (Principled Technologies)
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security without paying a large performance cost, consider enabling AMD SME and AMD SEV-ES in your Dell PowerEdge servers.
Improving energy efficiency in the data center: Endure higher temperatures wi...Principled Technologies
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS‑621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621C-TN12R in three scenario types simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenario types and component failures in the latter two tests, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server’s motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe...Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Conclusion
Incorporating GenAI into your organization’s operations likely holds great appeal. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then modified the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with the relevant Dell documentation and some flexibility, you could be well on your way to your next GenAI breakthrough.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware...Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation (VCF)
For organizations running clusters of moderately configured, older Dell PowerEdge servers with a previous version of VCF, upgrading to better-configured modern servers can provide a significant performance boost and more.
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back...Principled Technologies
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be: you could see better performance per watt with these AMD EPYC processor-based server clusters and potentially get more from your Redis or other data-intensive applications and workloads while reducing data center power costs.
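The performance-per-watt metric the study describes is simply throughput divided by power draw. The sketch below illustrates the calculation; the request rates and wattages are hypothetical placeholders, not the study's measured results.

```python
# Sketch of a performance-per-watt comparison between two server clusters.
# All figures below are hypothetical placeholders, not measured results.

def perf_per_watt(requests_per_sec: float, avg_power_watts: float) -> float:
    """Requests served per second for each watt of power consumed."""
    return requests_per_sec / avg_power_watts

# Hypothetical Redis throughput and average power draw for two clusters
cluster_a = perf_per_watt(requests_per_sec=2_100_000, avg_power_watts=800)
cluster_b = perf_per_watt(requests_per_sec=1_000_000, avg_power_watts=1000)

print(f"Cluster A: {cluster_a:.0f} req/s per watt")
print(f"Cluster B: {cluster_b:.0f} req/s per watt")
print(f"Advantage: {cluster_a / cluster_b:.1f}x")
```

Comparing clusters on this ratio, rather than on raw throughput alone, captures both halves of the claim: more work done and less power consumed.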
Improve performance and gain room to grow by easily migrating to a modern Ope...Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Principled Technologies
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware...Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9ms; for reference, the PowerEdge R760 cluster clocked in at 3.8ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operations adapter provided useful infrastructure insights.
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man...Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than the Supermicro tools did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ...Principled Technologies
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a fully NVMe-backed configuration, but Vendor A doesn’t (its solution uses EBS for storage capacity and NVMe as an extended read cache), which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWSPrincipled Technologies
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes (up to 512 storage nodes with up to 8 PB of capacity) enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
Get in and stay in the productivity zone with the HP Z2 G9 Tower WorkstationPrincipled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to Dell Precision 3660 and 5860 tower workstations in optimized performance modes
Conclusion
HP Z2 G9 Tower Workstation users can change the BIOS settings to dial in the performance mode that best suits their needs: High Performance Mode, Performance Mode, or Quiet Mode. In good news for both creative and technical professionals, we found that an Intel Core i9-13900 processor-powered HP Z2 G9 Tower Workstation set to High Performance Mode received higher CPU-based benchmark scores than both a similarly configured Dell Precision 3660 and a Dell Precision 5860 equipped with an Intel Xeon w5-2455X processor. Plus, the HP Z2 G9 Tower Workstation was quieter while running CPU-intensive Cinebench 2024 and SPECapc for SolidWorks 2022 workloads than both Dell Precision tower workstations. This means HP Z2 G9 Tower Workstation users who prize performance over everything else can do so without sacrificing a quiet workspace.
Open up new possibilities with higher transactional database performance from...Principled Technologies
In our PostgreSQL tests, R7i instances boosted performance over R6i instances with previous-gen processors
If you use the open-source PostgreSQL database to run your critical business operations, you have many cloud options from which to choose. While many of these instances can do the job, some can deliver stronger performance, which can mean getting a greater return on your cloud investment.
We conducted hands-on testing with the HammerDB TPROC-C benchmark to see how the PostgreSQL performance of Amazon EC2 R7i instances, enabled by 4th Gen Intel Xeon Scalable processors, stacked up to that of R6i instances with previous-generation processors. We learned that small, medium-sized, and large R7i instances with the newer processors delivered better OLTP performance, with improvements as high as 13.8 percent. By choosing the R7i instances, your organization has the potential to support more users, deliver a better experience to those users, and even lower your cloud operating expenditures by requiring fewer instances to get the job done.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Make room for more virtual desktops with fast storage
May 2017
Dell EMC XtremIO storage with PowerEdge servers can support a large number of desktops in a small footprint while saving storage space
For virtual desktop infrastructure (VDI) to be worth your while, it should give a large number of users a responsive experience, be easy for IT staff to administer, and minimize ongoing costs.
The Dell EMC™ XtremIO® solution (Dell EMC PowerEdge™ FX2s modular enclosures with FC630 server modules, Dell Networking 4048-ON switch modules, Brocade® Connectrix® DS-6620B switches, Emulex® LPe31000-series HBAs by Broadcom®, and Dell EMC XtremIO storage) is a robust VDI platform that can support a large number of virtual desktops while taking up little datacenter space, which can help keep ongoing datacenter expenses low. It offers the flexibility to host either full or linked clones thanks to inline compression and deduplication technologies that help maximize available storage space. Our hands-on tests showed that the Dell EMC XtremIO solution sped deploying and recomposing operations, which refresh desktops to reflect changes from a master VM, saving admins time and getting desktops to users more quickly.
The Dell EMC XtremIO solution offers a powerful VDI platform that can meet the needs of virtual desktop users and admins alike.
Dell EMC XtremIO solution:
• 6,000 users for either linked or full clones
• Up to 16:1 deduplication ratio to maximize storage space
• Fits in just 35U of rack space
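A deduplication ratio like the 16:1 figure above is the logical capacity hosts have written divided by the unique data physically stored on flash. A minimal sketch of the arithmetic follows; the desktop count matches the report, but the per-desktop image size is a hypothetical illustration.

```python
# Deduplication ratio = logical capacity written / physical capacity consumed.
# The desktop count matches the report; the 40 GB image size is hypothetical.

def dedup_ratio(logical_gb: float, physical_gb: float) -> float:
    return logical_gb / physical_gb

desktops = 6000
image_gb = 40  # hypothetical full-clone image size per desktop

logical_gb = desktops * image_gb  # total data written by hosts
physical_gb = logical_gb / 16     # what a 16:1 ratio would leave on flash

print(f"Logical: {logical_gb:,} GB, physical: {physical_gb:,.0f} GB")
print(f"Deduplication ratio: {dedup_ratio(logical_gb, physical_gb):.0f}:1")
```

Because full-clone desktops are largely identical copies of the same master image, most of their blocks deduplicate away, which is why the array can host full clones at the same density as linked clones.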
A Principled Technologies report: Hands-on testing. Real-world results.
The power of the Dell EMC XtremIO solution for VDI
A powerful VDI solution based on strong storage infrastructure can help eliminate many of the problems that can plague VDI in sub-par deployments. Users demand a snappy desktop experience and also have large capacity requirements, both of which stress the underlying storage and bog down the user experience if it's not up to the task.
We put the Dell EMC XtremIO solution to the test deploying both linked and full clones to see how it would fare in a large VDI deployment of 6,000 users. For detailed information about our test hardware, see Appendix A. See Appendix B for more detailed test results. For our step-by-step test methodology, see Appendix C.
The Dell EMC XtremIO solution supported 6,000 users with ease
Get your linked clone or full clone desktops up and running quickly
At the start of a VDI deployment, administrators must create the desktops for end users. In large deployments like the 6,000-user case we tested, storage can sometimes lag behind and create huge delays during such a daunting task. But not the Dell EMC XtremIO solution. We found that it handled initial deployment of 6,000 virtual desktops smoothly and quickly, with high bandwidth, high input/output operations per second (IOPS), and low latency, for full and linked clones, which means it can help get your desktop users up and running fast.
Two ways to deliver desktops: Linked clones vs. full clones
Organizations can deploy either full or linked clones to their users. Linked clones, which remain linked to a parent VM, conserve disk space but don't offer users as much autonomy. Full clones, which become independent VMs after the initial cloning operation, give users full control over their desktops but take up more storage space. Some organizations deploy a mix of these options to meet the needs of a varied user base. The Dell EMC XtremIO array's deduplication technology means that it can support either, or a mix, with the same desktop density, for maximum flexibility in one small package.
Figure: Bandwidth during full clone deployment. Bandwidth (Gbps) over time (hours) from 0:00 to 4:50, showing write, read, and total bandwidth.
It took only 4 hours 50 minutes to deploy 6,000 full-clone desktops. High bandwidth, which peaked at 123.4 Gbps, shows that the XtremIO storage can deliver advantages for VM cloning operations in VDI deployments. XtremIO clones from metadata stored in-memory, which can enable this fast virtual desktop deployment.
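To put the deployment times in perspective, the average provisioning rate works out as a simple back-of-the-envelope calculation using the durations the report gives (4 hours 50 minutes for full clones, 2 hours 49 minutes for linked clones):

```python
# Average desktop-provisioning rate from the deployment times in the report.

def desktops_per_minute(desktops: int, hours: int, minutes: int) -> float:
    return desktops / (hours * 60 + minutes)

full_rate = desktops_per_minute(6000, 4, 50)    # full clones: 4 h 50 m
linked_rate = desktops_per_minute(6000, 2, 49)  # linked clones: 2 h 49 m

print(f"Full clones:   {full_rate:.1f} desktops/minute")
print(f"Linked clones: {linked_rate:.1f} desktops/minute")
```

That is roughly 20 full-clone desktops or 35 linked-clone desktops provisioned per minute, sustained across the entire run.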
IOPS measure storage performance and show how many reads and writes the disks can perform per second. Both a high number of IOPS and low latency (wait times) indicate that the storage can handle a given workload with solid performance for end users. During deployment, the solution handled consistently high IOPS, and latency averaged just 2.63ms for full clones.
The Dell EMC XtremIO solution deployed 6,000 linked clones in only 2 hours 49 minutes. As with the full-clone deployment, the linked-clone deployment had high bandwidth (a peak of 31.0 Gbps), high IOPS, and a low average latency of just 1.16ms.
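The pairing of IOPS and latency that the report emphasizes is connected by Little's law: the average number of I/O requests in flight equals throughput multiplied by latency. The sketch below uses the reported 1.16ms average latency; the 100,000 IOPS figure is a hypothetical placeholder for illustration.

```python
# Little's law for storage: average outstanding I/Os = IOPS * latency (seconds).
# The latency is the reported 1.16 ms average; the IOPS value is hypothetical.

def outstanding_ios(iops: float, latency_ms: float) -> float:
    return iops * (latency_ms / 1000.0)

qd = outstanding_ios(iops=100_000, latency_ms=1.16)
print(f"Average I/Os in flight: {qd:.0f}")
```

The practical reading: an array that sustains high IOPS while keeping latency near a millisecond is absorbing a large concurrent I/O load without letting requests queue up.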
Figure: IOPS during full clone deployment. IOPS (thousands) over time (hours) from 0:00 to 4:50, showing write, read, and total IOPS.
Figure: Bandwidth during linked clones deployment. Bandwidth (Gbps) over time (hours) from 0:00 to 2:49, showing write, read, and total bandwidth.
Figure: IOPS during linked clones deployment. IOPS (thousands) over time (hours) from 0:00 to 2:49, showing write, read, and total IOPS.
Responsive storage with high IOPS even under stress
Another task that stresses a VDI platform is a boot storm. This is where a large number of users boot their desktops around the same time, likely at the start of a typical working day. Handling 6,000 users logging on at once was no problem for the Dell EMC XtremIO solution, which processed a large number of IOPS at low latencies to get users to their desktops, linked or full, quickly. During the boot storm, latencies averaged 0.70ms for linked clones and 0.80ms for full clones.
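One way to reason about boot-storm sizing is per-desktop I/O demand: the aggregate IOPS the array must absorb divided by the number of desktops booting. A minimal sketch follows; the aggregate IOPS value is a hypothetical placeholder chosen only to illustrate the arithmetic, not a measured peak.

```python
# Per-desktop I/O demand during a boot storm = aggregate IOPS / desktop count.
# The aggregate figure is a hypothetical illustration, not a measured result.

def per_desktop_iops(aggregate_iops: float, desktops: int) -> float:
    return aggregate_iops / desktops

demand = per_desktop_iops(aggregate_iops=240_000, desktops=6000)
print(f"Per-desktop demand during boot: {demand:.0f} IOPS")
```

Scaling this the other way (per-desktop boot demand times the number of simultaneous users) is a quick sanity check when estimating whether an array can ride out a morning logon rush.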
Figure: IOPS during linked clones boot storm and Login VSI test run. IOPS (thousands) over time (hours) from 0:00 to 3:21; the boot storm precedes the Login VSI run.
Figure: IOPS during full clones boot storm and Login VSI test run. IOPS (thousands) over time (hours) from 0:00 to 3:37; the boot storm precedes the Login VSI run.
The Login VSI benchmark reports IOPS during a boot storm for 6,000 users.
See Appendix C for details.
Speedy storage for VDI
The Dell EMC XtremIO is an all-flash array that supports
up to twenty-five 400GB SSDs in a single X-Brick® to handle heavy
workloads that require fast performance. According to
Dell EMC, the XtremIO array can improve operational
efficiency, encourage business agility, and scale linearly
with both performance and capacity to let you easily add
storage as your VDI environment grows. It uses the latest
compression and deduplication technologies to help keep
your data footprint manageable.
XtremIO performs deduplication and compression inline
and in memory, which speeds up both processes; because
only unique data is stored on the SSDs, no additional
space is consumed on disk. This approach can also speed
metadata-based cloning operations. High bandwidth numbers, such
as a peak of 123.4 Gbps during full clone deployment,
illustrate the advantage of VM cloning in VDI environments
running on XtremIO.
To learn more, visit www.emc.com/en-us/storage/xtremio/benefits.htm
Save admin time with quick desktop recomposition for linked clones
Linked clones give administrators the
ability to make changes to all desktops
at once (the nature of full clones
does not allow this). If a new software
version is available, for example, an
administrator can make that change
to the parent VM and replicate that
change to all linked clones. This
simultaneous update is called desktop
recomposition, and we found that the
Dell EMC XtremIO solution could let
admins recompose 6,000 desktops
quickly and easily so they can move on
to other tasks. During recomposition,
latency averaged just 0.80ms.
Cut down your use of expensive datacenter space
Datacenter space is expensive, and is often charged per square foot. The Dell EMC
XtremIO solution’s small footprint can help keep these costs low, which is no small feat
for large VDI deployments. Built with Dell EMC PowerEdge FX2s modular enclosures
with FC630 compute and networking modules, the Dell EMC XtremIO solution fits
into just 35U of datacenter space. That means it can support an impressive 6,000
users without taking up even one full rack in the datacenter. The Dell EMC solution
has the power to support a large VDI user base without contributing to costly server
sprawl. Save that space for your other datacenter initiatives.
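As a quick sanity check on that claim (our arithmetic, assuming a standard 42U rack height, which the report does not state), the density works out as follows:

```python
# Rack-density arithmetic from the figures in this report.
# The 42U rack height is a common industry standard, assumed here.

USERS = 6_000
SOLUTION_U = 35
RACK_U = 42            # assumed standard full-rack height

users_per_u = USERS / SOLUTION_U
rack_fraction = SOLUTION_U / RACK_U

print(f"{users_per_u:.0f} users per rack unit")      # ~171
print(f"{rack_fraction:.0%} of a standard rack")     # ~83%
```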
Compact performance
The Dell EMC PowerEdge FX2s is a modular server platform that can
combine servers, storage, and networking into a single 2U chassis.
The two-socket FC630 server block packs up to
• 24 memory DIMMs
• 8 SSDs
• dual- and quad-port 10Gb and quad-port 1Gb select network adapters
into a half-width sled, providing the power to run virtual desktop platforms
such as the one we tested.
To learn more, visit www.dell.com/us/business/p/poweredge-fx/pd
[Graphic: the solution fits in 35U of rack space]
[Chart: IOPS during linked clones recomposition, showing write, read, and total IOPS (thousands) from 0:00 to 5:20]
Save storage space through deduplication
Giving users their own centrally located desktops means
they have to be stored somewhere. This task falls to the
storage solution. Whether you want to use linked or full
clones, Dell EMC XtremIO can help reduce storage space
for your large VDI deployment by using inline deduplication
and compression. Inline deduplication dramatically reduces
the amount of data you need to store on your array
by eliminating redundant data in real time and instead
saving only new and changed data. By completing the
process inline, meaning before the data gets to the array,
Dell EMC XtremIO storage can also reduce the heavy
bandwidth usage typically associated with backing up data.
Deduplication is what allows the Dell EMC XtremIO solution
to support full clones, which take up significantly more
storage space than linked clones. The Dell EMC XtremIO
arrays delivered impressive deduplication ratios for both
linked and full clones. Inline compression reduces the size of
data blocks before they are written to storage by applying
mathematical algorithms to encode their information more
efficiently. Some data can be compressed dramatically,
while other data cannot be compressed at all. In our tests,
the overall compression ratio we observed was 1.5:1 for full
clones and 1.6:1 for linked clones.
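Because deduplication and compression apply one after the other, to a first approximation their ratios multiply into an overall data-reduction ratio. This sketch (our arithmetic, using the ratios reported above) shows the combined effect:

```python
# Overall data reduction ~= deduplication ratio x compression ratio,
# since the two stages are applied sequentially to the same data.
# The input ratios below are the ones measured in this report.

def overall_reduction(dedup: float, compression: float) -> float:
    """Combined logical-to-physical reduction from both stages."""
    return dedup * compression

full_clones = overall_reduction(16.0, 1.5)     # 24.0x
linked_clones = overall_reduction(10.0, 1.6)   # 16.0x

print(f"Full clones:   {full_clones:.1f}:1 overall reduction")
print(f"Linked clones: {linked_clones:.1f}:1 overall reduction")
```

The multiplication assumes the two stages act independently, which real workloads only approximate, but it explains how full clones remain practical despite their much larger logical footprint.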
Deduplication ratios show how much space an array saves by eliminating redundant data:
10:1 for linked clones and 16:1 for full clones.
[Graphic: binary-data illustration comparing actual storage used thanks to deduplication with total storage used according to VMware vSphere®]
Storage fabric using Brocade
Gen 6 hardware
In our tests, we used Connectrix DS-6620B
switches, built on Brocade Gen 6 hardware,
known as the Dell EMC Connectrix B-Series
Gen 6 Fibre Channel by Brocade. The
Connectrix B Series provides out-of-the box
tools for SAN monitoring, management, and
diagnostics that simplify administration and
troubleshooting for administrators.
Brocade Gen 6 Fibre Channel offers Brocade Fabric Vision™
Technology, which can provide further visibility
into the storage network with monitoring and
diagnostic tools. With Monitoring and Alerting
Policy Suite (MAPS), admins can proactively
monitor the health of all connected storage
using policy-based monitoring.
Brocade offers another tool to simplify SAN
management for Gen 6 hardware: Connectrix
Manager Converged Network Edition (CMCNE).
This tool uses an intuitive GUI to help admins
automate repetitive tasks and further simplify
SAN fabric management in the datacenter.
To learn more about what Brocade Gen 6 has
to offer, visit www.brocade.com
Storage networking adapters
using Emulex Gen 6 HBAs
We used Emulex-branded Fibre Channel Host
Bus Adapters (HBAs) by Broadcom in our
testing. According to Broadcom, Emulex HBAs
are designed to address the performance,
reliability, and management requirements of
modern networked storage systems that utilize
high performance and low latency SSDs.
The Emulex Dynamic Multi-core architecture
applies all ASIC resources to any port that
needs it, which can improve performance and
port utilization efficiency. The adapters are
NVMe over Fabrics-enabled and support NVMe
over Fabrics and SCSI concurrently.
To learn more about Emulex Gen 6 Host Bus
Adapters, visit www.broadcom.com/products/
storage/fibre-channel-host-bus-adapters/
For a large VDI deployment, the Dell EMC XtremIO solution delivered
Before you make the plunge, take into consideration every aspect of your VDI project: user experience, admin
time, storage capacity, and ongoing costs related to datacenter space. Our experiences with Dell EMC PowerEdge
FX2s enclosures outfitted with PowerEdge FC630 compute modules and Dell EMC XtremIO arrays show that
this solution is a compelling one for VDI deployments. The Dell EMC XtremIO solution supported 6,000 virtual
desktops with a good user experience, offered flexibility by supporting both full and linked clones, recomposed
the desktops quickly and easily, and reduced data dramatically through inline deduplication and compression.
And it did all this in less than a single rack of datacenter space, to keep server sprawl in check and costs down.
On February 5, 2017, we finalized the hardware and software configurations we tested. Updates for current and
recently released hardware and software appear often, so unavoidably these configurations may not represent
the latest versions available when this report appears. For older systems, we chose configurations representative
of typical purchases of those systems. We concluded hands-on testing on February 14, 2017.
Appendix A: System configuration information
Servers under test
Server configuration information Dell EMC PowerEdge FC630
BIOS name and version Dell 2.2.5
Operating system name and version/build number VMware® ESXi™ 6.5.0 build 4564106
Date of last OS updates/patches applied 01/01/17
Power management policy Maximum Performance
Processor
Number of processors 2
Vendor and model Intel® Xeon® E5-2698 v4
Core count (per processor) 20
Core frequency (GHz) 2.20
Stepping B0
Memory module(s)
Total memory in system (GB) 512
Number of memory modules 16
Vendor and model Samsung® M393A4K40BB1-CRC
Size (GB) 32
Type PC4-19200
Speed (MHz) 2,400
Speed running in the server (MHz) 2,400
Local storage
Number of drives 1
Drive vendor and model Dell G9917 SD card
Drive size (GB) 16
Drive information (speed, interface, type) Class 10 SD
Network adapter #1
Vendor and model Cavium NetXtreme® II BCM57810
Number and type of ports 2 x 10GbE
Driver version bnx2x
Server configuration information Dell EMC PowerEdge FC630
Network adapter #2
Vendor and model Emulex LPe31002-M6
Number and type of ports 2 x 16Gb Fibre Channel
Driver version lpfc
Server enclosure
Server enclosure configuration information Dell EMC PowerEdge FX2s
Number of management modules 1
Management module firmware revision 1.40
KVM module firmware 1.0
Midplane version 1.0
First type of I/O module
Vendor and model number Dell EMC PowerEdge FN 410S IOM
I/O module firmware revision 9.10(0.1)
Number of modules 2
Occupied bay(s) A1, A2
Power supplies
Vendor and model number Dell 0W1R7VA00
Number of power supplies 2
Wattage of each (W) 2,000
Login VSI Launcher servers
Server configuration information Dell EMC PowerEdge M620
BIOS name and version Dell 2.5.4
Operating system name and version/build number VMware ESXi 6.5.0 4564106
Date of last OS updates/patches applied 01/01/17
Power management policy Maximum Performance
Processor
Number of processors 2
Vendor and model Intel Xeon E5-2660
Core count (per processor) 8
Core frequency (GHz) 2.20
Stepping C2
Server configuration information Dell EMC PowerEdge M620
Memory module(s)
Total memory in system (GB) 192
Number of memory modules 12
Vendor and model Samsung M393B2G70BH0-YH9
Size (GB) 16
Type PC3-10600
Speed (MHz) 1,333
Speed running in the server (MHz) 1,333
Local storage
Number of drives 2
Drive vendor and model Seagate® ST9146852SS
Drive size (GB) 146
Drive information (speed, interface, type) 15K, SAS, HDD
Network adapter
Vendor and model Cavium NetXtreme II BCM57810
Number and type of ports 2 x 10GbE
Driver version bnx2x
Login VSI Launcher server enclosure
Server enclosure configuration information Dell EMC PowerEdge M1000e
Number of management modules 1
Management module firmware revision 5.20
KVM module firmware 01.00.01.01
First type of I/O module
Vendor and model number Force10 MXL 10/40GbE
I/O module firmware revision 1.0
Number of modules 4
Occupied bay(s) A1, B1, A2, B2
Second type of I/O module
Vendor and model number Brocade M5424
I/O module firmware revision 11
Number of modules 2
Occupied bay(s) B1, B2
Server enclosure configuration information Dell EMC PowerEdge M1000e
Power supplies
Vendor and model number Dell 0W1R7VA00
Number of power supplies 2
Wattage of each (W) 2,000
Cooling fans
Vendor and model number Dell YK776 Rev. A00
Number of fans 8
Storage solution
Storage configuration information Dell EMC XtremIO X-Brick
Controller firmware revision 4.0.4-41
Number of storage controllers 4
Number of storage shelves 2
Number of drives per shelf 25
Drive vendor and model number Hitachi HUSMM114 CLAR400
Drive size (GB) 400
Drive information (speed, interface, type) 12Gb SAS, SSD
FC SAN storage network infrastructure
Switch configuration information Dell EMC Connectrix DS-6620B
Firmware version 8.0.0
Licensed features Enterprise Bundle
Number of ports 48
SFP 48 x 32Gbps
VMware Horizon virtual desktop VMs
VM configuration
Operating system Windows 10 Enterprise x86
VMware virtual machine hardware version 11
vCPUs 1 socket / 2 cores
RAM 1.5 GB
Network adapters 1 x VMXNET3
Virtual disk 24 GB
Appendix B: Detailed test results
VSImax v4 benchmark results
Figure 1: VSImax v4 response times for 6,000 linked clones
Figure 2: VSImax v4 response times for 6,000 full clones
Average latency during deployment, boot storm and Login VSI, and recompose
Average latency Linked clones Full clones
Deploy 1.16 ms 2.63 ms
Boot storm + Login VSI 0.70 ms 0.80 ms
Recompose 0.80 ms N/A
CPU utilization
[Chart: CPU utilization during linked clones deployment, percent CPU utilization from 0:00 to 2:44]
[Chart: CPU utilization during full clone deployment, percent CPU utilization from 0:00 to 4:33]
[Chart: CPU utilization during full clones boot storm and Login VSI test run, percent CPU utilization from 0:00 to 3:23]
[Chart: CPU utilization during linked clones boot storm and Login VSI test run, percent CPU utilization from 0:00 to 3:36]
[Chart: CPU utilization during linked clones recomposition, percent CPU utilization from 0:00 to 5:23]
Bandwidth
[Chart: Bandwidth during full clone deployment, write, read, and total bandwidth (Gbps) from 0:00 to 4:50]
[Chart: Bandwidth during linked clones boot storm and Login VSI test run, write, read, and total bandwidth (Gbps) from 0:00 to 3:21]
[Chart: Bandwidth during full clones boot storm and Login VSI test run, write, read, and total bandwidth (Gbps) from 0:00 to 3:37]
[Chart: Bandwidth during linked clones deployment, write, read, and total bandwidth (Gbps) from 0:00 to 2:49]
[Chart: Bandwidth during linked clones recomposition, write, read, and total bandwidth (Gbps) from 0:00 to 5:20]
Appendix C: How we tested
Dell EMC testbed
[Diagram: test environment, summarized below]
• Compute nodes: 9x Dell EMC PowerEdge FX2s enclosures, each with 4x PowerEdge FC630 server modules and 4x Emulex LPe31002-M6 HBAs
• Infrastructure: Dell EMC PowerEdge FX2s enclosure with 2x PowerEdge FC630 server modules
• Storage: Dell EMC XtremIO X-Brick
• Ethernet fabric: 2x Dell Networking S4048-ON switches (10Gbps and 40Gbps Ethernet)
• Storage fabric: 2x Dell EMC Connectrix DS-6620B switches (16Gbps Fibre Channel)
• Launchers: Dell EMC PowerEdge M1000e blade enclosure with 14x PowerEdge M620 blade servers
• Networking: VMware Distributed Virtual Switch (DVS) with vMotion, desktop and server, and management VLANs (10Gbps)
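Based on the host counts above and the per-VM allocations in Appendix A, a rough capacity sketch is possible (our arithmetic; the even spread of VMs across hosts is an assumption, not a report finding):

```python
# Back-of-the-envelope capacity check for the compute tier.
# Host and VM counts come from this report's configuration tables;
# spreading the desktops evenly across hosts is an assumption.

HOSTS = 9 * 4                 # 9 FX2s enclosures x 4 FC630 modules
DESKTOPS = 6_000
VM_RAM_GB = 1.5               # per-desktop RAM (Appendix A)
HOST_RAM_GB = 512             # per-host RAM (Appendix A)
VM_VCPUS = 2                  # 1 socket x 2 cores per desktop VM
HOST_CORES = 2 * 20           # 2x Xeon E5-2698 v4, 20 cores each

vms_per_host = DESKTOPS / HOSTS
ram_per_host = vms_per_host * VM_RAM_GB
vcpu_ratio = (DESKTOPS * VM_VCPUS) / (HOSTS * HOST_CORES)

print(f"{vms_per_host:.0f} VMs per host")        # ~167
print(f"{ram_per_host:.0f} GB RAM committed")    # 250 of 512 GB
print(f"{vcpu_ratio:.1f}:1 vCPU-to-core ratio")  # ~8.3:1
```

The committed RAM sits well under the 512 GB installed per host, which is consistent with the solution absorbing boot storms without memory pressure.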
Installing the Dell EMC PowerEdge M1000e blade enclosure to host launcher VMs
Before we installed and configured any software, we populated 14 slots in a Dell EMC PowerEdge M1000e blade enclosure with Dell EMC
PowerEdge M620 blade servers. We used the Lifecycle Controller to perform BIOS and firmware updates on each blade server and deployed
224 Launcher VMs on 14 M620 server blades.
Installing ESXi 6.5 on systems under test
Perform these steps on each M620 blade server and each FC630 server. In a similar fashion, we also set up two FC630s to host all
infrastructure VMs.
1. Insert the vSphere installation media, power on the server, and boot from the DVD drive.
2. Select the standard vSphere installer, and allow the files to copy into memory.
3. At the welcome screen, press F11.
4. At the keyboard language selection screen, press Enter.
5. Enter a password twice for the root user, and press Enter.
6. Choose to install to the local SD card.
7. Allow the installer to finish installing vSphere, and reboot the server.
Installing Microsoft® Active Directory® and DNS services on DC1
1. Deploy a Windows Server® 2012 R2 VM from template named DC1, and log in as administrator.
2. Launch Server Manager.
3. Click Manage → Add Roles and Features.
4. At the Before you begin screen, click Next.
5. At the Select installation type screen, leave Role-based or feature-based installation selected, and click Next.
6. At the Server Selection Screen, select the server from the pool, and click Next.
7. At the Select Server Roles screen, select Active Directory Domain Services. Click Add Features when prompted, and click Next.
8. At the Select Features screen, click Next.
9. At the Active Directory Domain Services screen, click Next.
10. At the Confirm installation selections screen, check Restart the destination server automatically if required, and click Install.
Configuring Active Directory and DNS services on DC1
1. After the installation completes, a screen should pop up with configuration options. If not, click the Tasks flag in the upper-right section
of Server Manager.
2. Click Promote this server to a Domain Controller.
3. At the Deployment Configuration screen, select Add a new forest. In the Root domain name field, type test.local and click Next.
4. At the Domain Controller Options screen, leave the default values, and enter a password twice.
5. Click Next four times to accept default settings for DNS, NetBIOS, and directory paths.
6. At the Review Options screen, click Next.
7. At the Prerequisites Check dialog, allow the check to complete. If there are no relevant errors, check Restart the destination server
automatically if required, and click Install.
8. When the server restarts, log on using test\Administrator and the specified password.
Configuring the Windows® time service on DC1
To ensure reliable time, we pointed our Active Directory server to a physical NTP server.
1. Open a command prompt.
2. Type the following:
W32tm /config /syncfromflags:manual /manualpeerlist:"<ip address of an NTP server>"
W32tm /config /reliable:yes
W32tm /config /update
W32tm /resync
Net stop w32time
Net start w32time
Setting up DHCP services on DC1
1. Open Server Manager.
2. Select Manage, and click Add Roles and Features.
3. Click Next twice.
4. Select DHCP Server, and click Next.
5. At the Introduction to DHCP Server screen, click Next.
6. At the Specify IPv4 DNS Settings screen, type test.local for the parent domain.
7. Type the preferred DNS server IPv4 address, and click Next.
8. At the Specify IPv4 WINS Server Settings screen, select WINS is not required for applications on the network, and click Next.
9. At the Add or Edit DHCP Scopes screen, click Add.
10. At the Add Scope screen, enter a name for the DHCP scope.
11. In the next box, set the following values, and click OK.
a. Start IP address=172.16.10.1
b. End IP address=172.16.100.254
c. Subnet mask=255.255.0.0
12. Check the Activate This Scope box.
13. At the Add or Edit DHCP Scopes screen, click Next.
14. Click the Enable DHCP v6 Stateless Mode radio button, and click Next.
15. Leave the default IPv6 DNS Settings, and click Next.
16. At the Authorize DHCP server dialog box, select Use current credentials.
17. At the Confirm Installation Selections screen, click Next. If the installation is set up correctly, a screen displays saying that DHCP server
install succeeded.
18. Click Close.
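As a sanity check on the scope configured above (our addition, not part of the original procedure), Python's ipaddress module can count the addresses it contains and confirm it covers the 6,000 desktops plus the 224 launcher VMs:

```python
# Count the addresses in the DHCP scope configured above
# (172.16.10.1 through 172.16.100.254, mask 255.255.0.0).
import ipaddress

start = ipaddress.IPv4Address("172.16.10.1")
end = ipaddress.IPv4Address("172.16.100.254")

scope_size = int(end) - int(start) + 1
print(f"{scope_size} addresses in scope")   # 23294

# Enough headroom for 6,000 desktops plus 224 launchers.
assert scope_size > 6_000 + 224
```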
Installing SQL Server® 2014 on SQL01
1. Deploy a Windows Server 2012 R2 VM named SQL01, and log in as administrator.
2. Prior to installing, add the .NET Framework 3.5 feature to the server.
3. Mount the installation DVD for SQL Server 2014.
4. Click Run SETUP.EXE. If Autoplay does not begin the installation, navigate to the SQL Server 2014 DVD, and double-click it.
5. In the left pane, click Installation.
6. Click New SQL Server stand-alone installation or add features to an existing installation.
7. Select the Enter the product key radio button, and enter the product key. Click Next.
8. Click the checkbox to accept the license terms, and click Next.
9. Click Use Microsoft Update to check for updates, and click Next.
10. Click Install to install the setup support files.
11. If there are no failures displayed, click Next.
12. At the Setup Role screen, choose SQL Server Feature Installation, and click Next.
13. At the Feature Selection screen, select Database Engine Services, Full-Text and Semantic Extractions for Search, Client Tools
Connectivity, Client Tools Backwards Compatibility, Management Tools – Basic, and Management Tools – Complete. Click Next.
14. At the Installation Rules screen, after the check completes, click Next.
15. At the Instance configuration screen, leave the default selection of default instance, and click Next.
16. At the Server Configuration screen, choose NT Service\SQLSERVERAGENT for SQL Server Agent, and choose
NT Service\MSSQLSERVER for SQL Server Database Engine. Change the Startup Type to Automatic. Click Next.
17. At the Database Engine Configuration screen, select the authentication method you prefer. For our testing purposes, we selected Mixed
Mode.
18. Enter and confirm a password for the system administrator account.
19. Click Add Current user. This may take several seconds.
20. Click the Data Directories tab to relocate the system, user, and temp db files.
21. Change the location of the root directory to the D: volume.
22. Click Next.
23. At the Error and usage reporting screen, click Next.
24. At the Installation Configuration Rules screen, check that there are no failures or relevant warnings, and click Next.
25. At the Ready to Install screen, click Install.
26. After installation completes, click Close.
27. Close the installation window.
28. Open SQL Server 2014 Configuration Manager, and expand Protocols for MSSQLSERVER.
29. Right-click Named Pipes, and choose Enabled.
30. Click OK, and restart the SQL service.
Setting up databases for VMware vCenter® and VMware View® Composer™
1. Log onto SQL01 as TEST\administrator.
2. From the server desktop, open SQL Server Configuration Manager.
3. Click SQL Server Network Configuration → Protocols for MSSQLSERVER.
4. Right-click TCP/IP, and select Enabled.
5. Click SQL Services, right-click SQL Server Browser, and select Properties.
6. In the SQL Server Browser Properties, select the Services tab, change the Start mode to Automatic, and click OK. Repeat this step for
the SQL Server Agent service.
7. Start the SQL Server browser service and the SQL Server Agent service.
8. From the SQL Server desktop, open SQL Server Management Studio.
9. Click Connect.
10. Select the Databases folder, right-click, and select New Database.
11. Provide the name vCenter for the new database.
12. Select the Databases folder, right-click, and select New Database.
13. Provide the name composer for the new database.
14. Click Options, change the recovery model from full to simple, and click OK.
Installing VMware vCenter 6.5
1. Mount the image, navigate to the vcsa folder, and launch VMware-ClientIntegrationPlugin-x.x.x.exe.
2. In the wizard that appears, click Next.
3. Accept the terms of the license agreement, and click Next.
4. Leave the default installation directory, and click Next.
5. Click Install.
6. Click Finish.
7. In the mounted image’s top directory, open vcsa-setup.html.
8. When prompted, click Allow.
9. Click Install.
10. Accept the terms of the license agreement, and click Next.
11. Enter the FQDN or IP address of the host onto which the vCenter Server Appliance will be deployed.
12. Provide a username and password for the system in question, and click Next.
13. To accept the certificate of the host you chose to connect to, click Yes.
14. Select the appropriate datacenter, and click Next.
15. Select the appropriate resource, and click Next.
16. Provide a name and password for the virtual machine, and click Next.
17. Select Install vCenter Server with an Embedded Platform Services Controller, and click Next.
18. Select Create a new SSO domain.
19. Provide a password, and confirm it.
20. Provide an SSO Domain name and SSO Site name, and click Next.
21. Set an appropriate Appliance Size, and click Next.
22. Select an appropriate datastore, and click Next.
23. Select Use an embedded database (PostgreSQL), and click Next.
24. At the Network Settings page, configure the network settings as appropriate for your environment, and click Next.
25. Review your settings, and click Finish.
26. When installation completes, click Close.
27. Log into the vCenter VM as TEST\administrator.
28. From the VMware vCenter 6.5 install media, click Autorun.
29. To start the install wizard, click Run.
30. Select vCenter Server for Windows, and click Install.
31. At the Install wizard welcome screen, click Next.
32. Accept the End User License Agreement, and click Next.
33. Select embedded deployment and the deployment type, and click Next.
34. Verify the system name, and click Next.
35. Enter and confirm the password you wish to use with the Administrator account for vCenter Single Sign On, and click Next.
36. Select Use Windows Local System Account and the service account, and click Next.
37. Select Use an external database, and enter database credentials. Click Next.
38. At the Configure Ports screen, click Next.
39. Accept the default installation path, and click Next.
40. Click Install.
41. Using the vSphere web client, log into the vCenter server as DOMAIN\administrator.
42. Right-click the root of vCenter, and click New Data center.
43. Name the new datacenter datacenter
44. Add the servers under test and infrastructure servers to the datacenter.
Adding DRS/HA clusters
We created three DRS/HA clusters, one each for infrastructure, launchers, and Dell vDT.
1. Log into the vCenter via the VMWare Web client.
2. Right-click the Datacenter, and select New Cluster…
3. Provide an appropriate name, and check the Turn ON boxes for both DRS and vSphere HA.
4. Click OK.
5. When the cluster creation completes, right-click the new cluster, and select Settings.
6. In the vSphere DRS tab, click Edit.
7. Under Additional Options, check the boxes for VM Distribution and Memory Metric for Load Balancing. Leave CPU
Over-Commitment unchecked.
8. Click OK.
9. Repeat steps 1 through 8 for each cluster to be created.
Creating a distributed switch and the distributed port groups for the cluster
Before these steps we created VMkernel adapters for management and vMotion on each host under test, and assigned private IP addresses
to each. Create a distributed switch and two port groups as follows.
1. Log into the vCenter via the VMware web client.
2. Navigate to the Networking tab in the left pane.
3. Right-click the datacenter, and select Distributed Switch → New Distributed Switch.
4. Provide a name, and click Next.
5. Choose Distributed switch: 6.5.0, and click Next.
6. Specify 2 as the number of uplinks, deselect Create a default port group, and click Next.
7. Click Finish.
8. Right-click the new distributed switch, and select Distributed Port Group → New Distributed Port Group.
9. Provide a name, and click Next.
10. Set the port binding to Static binding, and the port allocation to Elastic. For the vMotion distributed port group only, set the VLAN type
to VLAN, with VLAN ID 300.
11. Click Next.
12. Click Finish.
13. Repeat steps 8 through 12 so that you have two distributed port groups, priv-net and vMotion.
14. Right-click the new distributed switch, and select Add and Manage Hosts.
15. Select Add hosts, and click Next.
16. Click New hosts…
17. Select all hosts under test, and click OK.
18. Click Next.
19. Check the boxes for Manage physical adapters and Manage VMkernel adapters, and click Next.
20. For each host, select the first adapter, click Assign Uplink, and click OK.
21. Repeat step 20 for the second adapter on each host.
22. After all hosts’ adapters are assigned to uplinks, click Next.
23. For each host, select a VMkernel adapter, and click Assign port group. Select the appropriate distributed port group, and click OK.
24. Repeat step 23 for each VMkernel adapter on each host, and click Next.
25. Click Next.
26. Click Finish.
Setting up a database and ODBC DSN for VMware Horizon Composer
1. Deploy a Windows Server 2012 R2 VM named Composer.
2. Log onto the composer VM as TEST\administrator.
3. From the desktop of the Composer server, select Start, Run, and type odbcad32.exe. Press Enter.
4. Click the system DSN tab.
5. Click Add.
6. Click SQL Server Native Client 10.0, and click Finish.
7. In the Create a New Data Source to SQL Server text box, type the connection name composer
8. For Server, select SQL, and click Next.
9. Change authentication to With SQL Server authentication using a login ID and password entered by the user, type sa as the Login ID,
use the password you defined in SQL Server setup for the SA account, and click Next.
10. Select Change the default database to…, choose Composer from the pull-down menu, and click Next.
11. Click Finish.
12. Click Test Data Source…
13. To create the Composer ODBC connection, click OK.
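For scripted setups, the same connection details could also be expressed as a DSN-less ODBC connection string. The Python sketch below (hypothetical, pyodbc-style; it only assembles the string and never contacts a server) mirrors the values entered in the wizard above:

```python
# Assemble a DSN-less ODBC connection string equivalent to the
# "composer" system DSN configured above. The password value is a
# placeholder; building the string does not contact SQL Server.

def odbc_connection_string(server: str, database: str,
                           user: str, password: str) -> str:
    """Build a SQL Server Native Client 10.0 connection string."""
    return (
        "DRIVER={SQL Server Native Client 10.0};"
        f"SERVER={server};DATABASE={database};"
        f"UID={user};PWD={password}"
    )

conn_str = odbc_connection_string("SQL", "composer", "sa", "<sa password>")
print(conn_str)
```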
Setting up VMware Horizon Composer 7
1. Open the View 7 media folder, and run the file named VMware-viewcomposer-7.0.2-4350300.exe.
2. At the Welcome screen and the Patents screen, click Next.
3. Accept the VMware end user license agreement, and click Next.
4. Leave the Destination folder as default, and click Next.
5. In the Database information box, type composer as the source name and sa as the user name. Enter the password, and click Next.
6. Leave the default SOAP port, and click Next.
7. Click Install, and click Finish.
8. Restart the server.
Installing VMware Horizon Connection Server 7
1. Log into the server named Connection1.
2. Browse to VMware Horizon installation media, and click VMware-viewconnectionserver-x86_64-7.0.2-4356666.exe.
3. Click Run.
4. At the Welcome screen, click Next.
5. Agree to the End User License Agreement, and click Next.
6. Keep the default installation directory, and click Next.
7. Select View Standard Server, and click Next.
8. At the Data Recovery screen, enter a backup password, and click Next.
9. Allow View Server to configure the Windows firewall automatically, and click Next.
10. Authorize the local administrator to administer View, and click Next.
11. Choose whether to participate in the customer experience improvement program, and click Next.
12. Complete the installation wizard to finish installing View Connection Server.
13. Click Finish.
14. Reboot the server.
Configuring the VMware Horizon Connection Server
1. Open a Web browser to <view connection1 FQDN>/admin.
2. Log in as administrator.
3. Under Licensing, click Edit License…
4. Enter a valid license serial number, and click OK.
5. Open View Configuration→Servers.
6. In the vCenter Servers tab, click Add…
7. Enter vCenter server credentials, and edit the following settings:
a. Max concurrent vCenter provisioning operations: 20
b. Max concurrent power operations: 50
c. Max concurrent View Composer maintenance operations: 20
d. Max concurrent View Composer provisioning operations: 20
e. Max concurrent Instant Clone Engine provisioning operations: 20
8. Click Next.
9. At the View Composer screen, select View Composer co-installed with vCenter Server, and click Next.
10. At the View Composer domains screen, click Add…
11. Enter full domain name and user credentials.
12. Do not use Reclaim VM disk space or Enable View Storage Accelerator.
13. At the ready to complete screen, click Finish.
Installing VMware Horizon Replica Server 7
Perform the following steps on connection2, connection3, and connection4.
1. Log into the appropriate server.
2. Browse to VMware View installation media, and click VMware-viewconnectionserver-x86_64-7.0.2-4356666.exe.
3. Click Run.
4. At the Welcome screen, click Next.
5. Agree to the End User License Agreement, and click Next.
6. Keep the default installation directory, and click Next.
7. Select View Replica Server, and click Next.
8. In the Source Server page, provide connection1.test.local, and click Next.
9. Leave the default firewall configuration, and click Next.
10. Click Install.
11. When the installation completes, click Finish.
Setting up a Windows 10 Enterprise (x86) ESXi base image VM
1. Log into the vCenter via the VMware Web client.
2. Click the Virtual Machines tab.
3. Right-click, and choose New Virtual Machine.
4. Select Create a new virtual machine.
5. Choose Custom, and click Next.
6. Assign the name View-gold, and click Next.
7. Select any host in the vDT cluster, and click Next.
8. Select the appropriate storage.
9. Choose Virtual Machine Version 11, and click Next.
10. Choose Windows, choose Microsoft Windows 10 (32-bit), and click Next.
11. For CPUs, select one virtual processor socket and two cores per virtual socket, and click Next.
12. Choose 1.5 GB RAM, and click Next.
13. Click 1 for the number of NICs, select VMXNET 3, and click Next.
14. Leave the default virtual storage controller, and click Next.
15. Choose to create a new virtual disk, and click Next.
16. Make the OS virtual disk size 24 GB, choose Thin Provision, specify the OS datastore on the external storage, and click Next.
17. Keep the default virtual device node (0:0), and click Next.
18. Click Finish.
19. Click the Resources tab, click Memory, and check the Reserve all guest memory checkbox.
20. Click the Hardware tab, CD/DVD Drive, and Connect the VM virtual CD-ROM to the Microsoft Windows 10 x86 installation disk.
21. Click OK.
Installing Windows 10 Enterprise (x86) on the ESXi base image VM
1. When the installation prompts you, press any key to begin setup.
2. Enter your language preferences, and click Next.
3. Click Install.
4. Accept the license terms, and click Next.
5. Select Custom, and select the drive that will contain the OS.
6. Click Install.
7. Type user for the username, and click Next.
8. Enter no password, and click Next.
9. At the system protection screen, select Use recommended settings, and click Next.
10. Enter your time zone, and click Next.
11. Select the Work Network setting, and click Next.
12. Install VMware Tools, and select Complete Installation. For more information, see kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=340.
13. Reboot.
14. Connect the machine to the Internet, and install all available Windows updates. Restart as necessary.
15. Join the domain, and restart the VM.
16. Install Microsoft Office Professional Plus 2016.
Installing the VMware Horizon agent
1. Browse to the VMware Horizon View 7 media, and run the VMware-viewagent file.
2. Click Run.
3. At the Welcome screen, click Next.
4. Accept the VMware end user license agreement, and click Next.
5. Select defaults, and click Next.
6. Click Install.
Configuring Regedit for QuickPrep
1. Click Start→Run, and type regedit.
2. Browse to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vmware-viewcomposer-ga.
3. Right-click Skip License Activation, and click Modify…
4. Change the value from 0 to 1.
5. Shut down the machine, and take a snapshot of the final disk image.
Optimize the Windows 10 master image
1. Download the Windows optimization tool from VMware.
2. Extract the zip file, and run VMwareOSOptimizationTool_b1084.exe.
3. Click Template, and select Windows 10 (Horizon Air Hybrid).
4. Click Analyze.
5. Select All Suggestions, and click Optimize.
6. When settings have been applied, reboot the VM.
Deploying virtual desktops using VMware Horizon
We deployed all VMs with a script to ensure repeatability (see Appendix D).
1. Open the View Administrator.
2. Log in as administrator.
3. Under Inventory, click Pools, and click Add…
4. Select Automatic Pool, and click Next.
5. Select floating, and click Next.
6. Select View Composer linked clones, and click Next.
7. Type pool for the pool ID and display name, and click Next.
8. Leave the pool settings as defaults, and click Next.
9. Under Naming Pattern, enter an appropriate name pattern for the pool.
10. Under Pool Sizing, type 1000 for Max number of desktops and the number of spare (power on) desktops.
11. Select Provision all desktops up-front, and click Next.
12. Select the defaults under View Composer Disks, and click Next.
13. Under Storage Optimization, select Do not use VMware Virtual SAN.
14. Under vCenter Settings, select the following depending on where you are deploying virtual machines:
a. Parent VM
b. Snapshot
c. VM folder location
d. Host or cluster
e. Resource pool
f. Data stores (1 Replica + 8 clone Datastores)
15. Under Guest customization, select the following:
a. Domain: test.local
b. AD container: OU=Computers,OU=LoginVSI
16. Select Use Quick Prep, and click Next.
17. Click Finish.
18. Select Entitle users after this wizard finishes.
19. Select the LoginVSI group, and click OK.
Installing the Login VSI 4 benchmark
Setting up a VM to host the Login VSI share CIFS1
1. Connect to an infrastructure server via the VMware web client.
2. In the VMware vSphere client, under Basic Tasks, select Create a new virtual machine.
3. Choose Custom, and click Next.
4. Assign the name VSI_share to the virtual machine, and click Next.
5. Select infra1 as the host, and click Next.
6. Select the appropriate storage, and click Next.
7. Choose Virtual Machine Version 11, and click Next.
8. Choose Windows, choose Microsoft Windows Server 2008 R2 (64-bit), and click Next.
9. For CPUs, select one virtual processor socket and two cores per virtual socket, and click Next.
10. Choose 4 GB RAM, and click Next.
11. Click 1 for the number of NICs, select VMXNET3, connect to the PRIV-NET network, and click Next.
12. Leave the default virtual storage controller, and click Next.
13. Choose to create a new virtual disk, and click Next.
14. Make the OS virtual disk size 100 GB, choose thin-provisioned lazy zeroed, specify external storage, and click Next.
15. Keep the default virtual device node (0:0), and click Next.
16. Click Finish.
17. Right-click the VM, and choose Edit Settings.
18. Click the Resources tab, and click Memory.
19. Select Reserve all guest memory, and click OK.
20. Connect the VM virtual CD-ROM to the Microsoft Windows Server 2012 R2 installation disk.
21. Start the VM.
Installing the Microsoft Windows Server 2012 R2 operating system on the VM
1. Open a virtual machine console on the VSI_share VM.
2. Leave the language, time/currency format, and input method as default, and click Next.
3. Click Install Now.
4. Choose Windows Server 2012 R2 Datacenter Edition (Server with a GUI), and click Next.
5. Accept the license terms, and click Next.
6. Click Custom: Install Windows only (advanced).
7. Select Drive 0 Unallocated Space, and click Next. Windows installation begins, and the server restarts automatically after completing.
8. Enter the administrator password twice, and click OK.
9. Install VMware Tools. For more information, see kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=340.
10. Reboot the server.
11. Connect the machine to the Internet, and install all available Windows updates. Restart as necessary.
12. Enable remote desktop access.
13. Change the hostname to VSIShare, and reboot when the installation prompts you.
14. Set up networking for the data network:
a. Click Start→Control Panel, right-click Network Connections, and choose Open.
b. Right-click the VM traffic NIC, and choose Properties.
c. Uncheck TCP/IP (v6).
d. Select TCP/IP (v4), and choose Properties.
e. Set the IP address, subnet, gateway, and DNS server.
Installing the Login VSI benchmark on VSI_share
For Login VSI to work correctly, you must create CIFS shares, an Active Directory OU, and Active Directory users. For more information
on Login VSI, see https://www.loginvsi.com/documentation. We created one Windows server VM per 1,000 users, six total, to host roaming
profiles and folder redirection (CIFS2-7). We also created one Windows server per 1,000 users to host Login VSI content (VSI1-6). Finally, we
created one dedicated Windows file share for VSI management (VSI1).
1. Log into CIFS1.
2. Create a folder named D:\share.
3. Right-click the D:\share folder, and click Properties.
4. Click the Sharing tab, and click Share…
5. Add everyone, system, and administrators to the Read/Write group, and click Share.
6. Unpack the Login VSI install media.
7. Open Datacenter setup\setup.exe.
8. At the welcome screen, click Next.
9. Select the share named \\vsishare\share, and click Next.
10. Check the Start Login VSI Management Console checkbox to start the Management Console when the setup is completed, and click Finish.
11. At the management console, enter the path to your Login VSI license file, and click Save.
Configuring Login VSI: AD setup
1. On VSI1, open the VSI management console, and click “1. AD setup.”
2. Enter the following:
a. Base OU: DC=domain,DC=local
b. Username: LoginVSI
c. Password1: *********
d. Domain: (auto detect checked)
e. Number of users: 6000
f. Formatting length: 4
g. Launcher user: Launcher-v4
h. Launcher password: **********
i. Click Save to .ps1.
3. Copy the VSIADSetup.ps1 file to \\DC1\c$.
4. Log into DC1, and execute VSIADSetup.ps1 in Windows PowerShell™.
5. Open Active Directory Users and Computers, and create 6 OUs under test.local/loginVSI/users/Target by right-clicking the Target OU
and selecting New→Organizational Unit.
a. Name the OUs vDT1 through vDT6.
b. Move users 1-1000 to vDT1, 1001-2000 to vDT2, and so on until each OU has 1,000 users.
Setting up the Login VSI share and Active Directory users
1. Log into each of the 6 CIFS servers, open Windows Explorer, and create the following folders: e:\profiles and g:\folderredirect.
2. Add permissions of read/write to the domain.local/everyone group.
3. Right-click the e:\profiles and g:\folderredirect folders, and select Properties.
4. Click the Sharing tab, and select Share…
5. Add everyone, system, and administrators to the read/write group, and click Share.
6. Right-click the g:\folderredirect folder, select Properties→Sharing→Advanced Sharing→Caching, and select No files or programs
from the Share Folder are available offline.
7. Click OK, Apply, OK, and Close.
8. Log onto VSI1 as test\administrator.
9. From the Login VSI media, run the Login VSI AD Setup.
10. Keep the default settings, and click Start.
Setting up roaming profiles for users
We assigned 1,000 users roaming profiles and redirected folders to a CIFS share. Users in vDT1 have their profiles stored on CIFS2, vDT2 on
CIFS3, vDT3 on CIFS4, vDT4 on CIFS5, vDT5 on CIFS6, and vDT6 on CIFS7.
1. Open Active Directory Users and Computers.
2. Browse to test.local→Login_VSI→Users→Target→vDT1.
3. Select all Login VSI users, and right-click Properties.
4. Click Profile, and enter the profile path \\cifs2\profiles\%username%.
5. Repeat profile creation until all OUs have roaming profiles assigned.
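The OU-to-share mapping above follows a simple arithmetic rule: users 1-1000 land on CIFS2, 1001-2000 on CIFS3, and so on. A minimal Python sketch of that mapping (the function name and path layout are our own illustration, not part of Login VSI or Horizon):

```python
def profile_path(user_number):
    """Return the roaming-profile root share for a Login VSI user.

    Users 1-1000 map to CIFS2, 1001-2000 to CIFS3, and so on,
    up to users 5001-6000 on CIFS7.
    """
    if not 1 <= user_number <= 6000:
        raise ValueError("expected a user number between 1 and 6000")
    server = 2 + (user_number - 1) // 1000
    return rf"\\CIFS{server}\profiles"
```

For example, user 1500 (in vDT2) resolves to \\CIFS3\profiles, matching the vDT2-to-CIFS3 assignment described above.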
Configuring folder redirection
We redirected folders via GPO; each vDT OU has a GPO assigned that directs no more than 1,000 users to a given Windows file share.
1. Log into DC1 as administrator.
2. Open the Group Policy Editor.
3. Open Forest→Domains→test.local, right-click Group Policy Objects, and select New.
4. Type folder redirection vDT1, leave source starter GPO as None, and click OK.
5. Right-click the folder redirection GPO, and click Edit.
6. Browse User Configuration→Policies→Windows Settings→Folder Redirection, and right-click AppData (roaming).
7. In the AppData (roaming) Properties Target tab, select the following:
• Setting = Basic - Redirect everyone's folders to the same location
• Target folder location = Create a folder for each user under the root path
• Root Path = \\CIFS2\folderredirection
8. In the AppData (roaming) Properties→Settings tab, uncheck Grant the user exclusive rights to AppData (roaming), and click OK.
9. Repeat steps 6 through 8 for all subfolders in the folder redirection tree.
10. Close the Folder Redirection group policy.
11. In the Group Policy Editor, right-click the folder redirection policy, and select GPO Status→Computer Configuration Settings Disabled.
12. Repeat steps 3 through 11 to create 6 GPOs, one for each vDT OU.
13. In the Group Policy Editor, drag the folder redirection GPO for vDT1 to Forest→Domains→test.local→Login_VSI→Users→Target→vDT1.
14. Repeat step 13 until all vDTs have a folder redirection GPO assigned.
Distributing the Login VSI content to VSI 1-6
Login VSI recommends no more than 1,000 users per share. We copied the _VSI_Websites directory and the _VSI_Content folders to six
Windows shares (VSI1-6).
1. Log into each of the six VSI servers, open Windows Explorer, and create a d:\share on each.
2. Add permissions of read/write to the test.local/everyone group.
3. Right-click the d:\share, and select Properties.
4. Click the Sharing tab, and select Share…
5. Add everyone, system, and administrators to the read/write group, and click Share.
6. Right-click the d:\share folder, select Properties→Sharing→Advanced Sharing→Caching, and select No files or programs
from the Share Folder are available offline.
7. Click OK, Apply, OK, and Close.
8. Copy \\vsi1\c$\_VSI_Websites to D:\.
9. Copy \\vsi1\c$\_VSI_Content to D:\.
10. Repeat steps 1 through 9 to create all six file shares.
11. Log into VSI1, and open the VSI management console.
12. Click Infrastructure→Data servers.
a. Right-click, and select Add server.
i. UNC path = \\VSI1\share\_VSI_Content
13. Repeat step 12 to create six shares for VSI1-6.
14. Click Infrastructure→Web Servers.
a. Right-click, and select Add server.
i. UNC path = \\VSI1\share\_VSI_Websites.
15. Close the admin console, and log out.
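Steps 12 through 14 register one content share and one website share per VSI server. A small Python sketch that generates the UNC paths entered in the management console (a hypothetical helper, assuming the \\VSIn\share layout created above):

```python
def vsi_unc_paths(count=6, share="share"):
    # One _VSI_Content data share and one _VSI_Websites web share
    # per VSI server (VSI1 through VSI6)
    data = [rf"\\VSI{i}\{share}\_VSI_Content" for i in range(1, count + 1)]
    web = [rf"\\VSI{i}\{share}\_VSI_Websites" for i in range(1, count + 1)]
    return data, web
```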
Creating Windows 7 Enterprise x64 image VSI Launchers
Using the vSphere client, we created a Windows 7 Enterprise x64 VM with 2 vCPUs and 12 GB of memory.
Installing VMware Horizon Client on the launcher
1. Accept the VMware license terms.
2. Select default options.
3. Accept or change the destination folder.
4. Click Install.
Converting the launcher VM to a template
1. In vSphere Client, right-click the launcher VM, and select Template→Convert to Template.
Deploying the launchers VM from the launcher template
1. In vSphere Client, browse to Home→VMs and Templates.
2. Right-click the launcher template to deploy a virtual machine from the template.
3. Type launcher_1 as the new VM name, and click Next.
4. Click Datacenter, and click Next.
5. Click the launcher server, and click Next.
6. Select the appropriate storage, and click Next.
7. Select Customization using existing customization specifications, select the appropriate file, and click Next.
8. To deploy the new VM, click Finish.
9. Repeat steps 1 through 8, or use PowerCLI, to deploy more launchers; we used 224 in our testing.
Configuring VSI Launchers for testing
After all 224 launchers are configured and in an idle state, configure the Login VSI test harness to add them to available launcher capacity.
1. Log onto VSI_share as DOMAIN\administrator.
2. Open the Login VSI management console.
3. Click “2. Add Launchers”
4. At the Add launchers wizard, click Next.
5. Click batch entry and enter the following:
a. Name=launcher
b. Start number=1
c. Increment=1
d. Count=224
e. Maximum capacity=30
f. Formatting length=3
6. Click Next.
7. Click Finish.
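The batch-entry fields above expand into a sequence of launcher names; with a formatting length of 3, numbers are zero-padded to three digits. A Python sketch of that expansion (our own illustration of the naming scheme, not Login VSI code):

```python
def expand_batch(name="launcher", start=1, increment=1, count=224, fmt_len=3):
    # Mirrors the Login VSI batch-entry naming scheme:
    # launcher001, launcher002, ..., launcher224
    return [f"{name}{n:0{fmt_len}d}"
            for n in range(start, start + count * increment, increment)]
```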
About our test tool, Login VSI 4.1
Login Virtual Session Indexer (Login VSI) is a tool for assessing the performance, capacity, and scalability of a virtual desktop server. The
newest version, Login VSI 4.1, has a number of improvements over previous versions. These include the following:
• More realistic user workload patterns
• All workloads are now a 48-minute loop instead of 14 minutes
• Alternating initial segment for each session to ensure equal load distribution
• HTML5 video player instead of Flash (Flash player is still optional)
• More realistic data set and data/file access in Login VSI workload
• More and larger websites
• MP4 video library with all formats: 480p, 720p and 1080p
Login VSI 4.1 reports the following metrics:
• Minimum Response: The minimum application response time
• Average Response: The average application response time
• Maximum Response: The maximum application response time
• VSI Baseline: Average application response time of the first 15 sessions
• VSI Index Average: Average response time dropping the highest and lowest 2 percent
For more information about Login VSI 4.1, see www.loginvsi.com/product-overview.
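The VSI Index Average drops the highest and lowest 2 percent of response times before averaging. A short Python sketch of that trimmed mean (our interpretation of the published definition; Login VSI's exact implementation may differ):

```python
def vsi_index_average(response_times):
    # Trimmed mean: discard the lowest and highest 2% of samples,
    # then average what remains.
    s = sorted(response_times)
    k = int(len(s) * 0.02)
    trimmed = s[k:len(s) - k] if k else s
    return sum(trimmed) / len(trimmed)
```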
Running the Login VSI benchmark
Before we ran our test, we set up AD roaming profiles and folder redirection, and then we ran a profile create job in Login VSI to fully
initialize each user’s profile. Finally, we used 224 launchers to run an office worker workload. We set the Login VSI testing parameters as
indicated in the VSILauncher.ini (see Appendix D). For more information on how to run a Login VSI test, see https://www.loginvsi.com/products/login-vsi.
The table below details our physical server configuration. We installed VMware ESXi 6.5 onto two Dell EMC PowerEdge FC630 blade servers,
and created a Windows Server 2012 R2 VM configured to be an Active Directory domain controller with DNS, DHCP, and NTP roles. We then
installed ESXi 6.5 on 14 M620 blade servers and 38 FC630 blade servers. We installed VMware vCenter and created VMware DRS/HA
clusters: six clusters of six servers each for virtual desktops, plus separate Infrastructure and Launchers clusters. We assigned all the
hosts to the appropriate clusters. We installed and configured our guest virtual machines, VMware View infrastructure VMs, and a Microsoft
Windows 10 Enterprise gold image. For each segment of testing, we deployed either six pools of 1,000 Windows 10 Enterprise linked
clones or six pools of 1,000 Windows 10 Enterprise full clones. We created a master launcher image and cloned 224 launcher VMs.
Server name | OS | Roles | Cluster membership
LAUNCH1-LAUNCH14 | VMware ESXi 6.5 | Login VSI Launcher host | Launchers
DELL1-DELL36 | VMware ESXi 6.5 | Virtual desktop host | Dell_vDT
INFRA1-INFRA2 | VMware ESXi 6.5 | ADDS, DNS, DHCP, NTP, vCenter, LoginVSI share | Infrastructure
Following VM creation, we moved the VMs into the appropriate clusters as follows.
VM name | Quantity | OS | vCPUs | Memory (GB) | Cluster name | Role(s)
AD | 2 | Windows Server 2012 R2 | 2 | 8 | Infrastructure | Microsoft Active Directory server, DNS, DHCP
SQL | 1 | Windows Server 2012 R2 | 4 | 16 | Infrastructure | SQL database server for vCenter and View Composer
vCenter | 1 | VMware vCenter Server Virtual Appliance | 24 | 48 | Infrastructure | vCenter
Composer | 1 | Windows Server 2012 R2 | 4 | 16 | Infrastructure | VMware View Composer server
Connection | 4 | Windows Server 2012 R2 | 4 | 16 | Infrastructure | VMware View connection servers
CIFS 1 | 1 | Windows Server 2012 R2 | 4 | 16 | Infrastructure | Login VSI mgmt. console and log server
CIFS 2 | 1 | Windows Server 2012 R2 | 4 | 4 | Infrastructure | Active Directory roaming profiles and folder redirection shares (users 1-1000)
CIFS 3 | 1 | Windows Server 2012 R2 | 4 | 4 | Infrastructure | Active Directory roaming profiles and folder redirection shares (users 1001-2000)
CIFS 4 | 1 | Windows Server 2012 R2 | 4 | 4 | Infrastructure | Active Directory roaming profiles and folder redirection shares (users 2001-3000)
CIFS 5 | 1 | Windows Server 2012 R2 | 4 | 4 | Infrastructure | Active Directory roaming profiles and folder redirection shares (users 3001-4000)
CIFS 6 | 1 | Windows Server 2012 R2 | 4 | 4 | Infrastructure | Active Directory roaming profiles and folder redirection shares (users 4001-5000)
CIFS 7 | 1 | Windows Server 2012 R2 | 4 | 4 | Infrastructure | Active Directory roaming profiles and folder redirection shares (users 5001-6000)
VSI | 6 | Windows Server 2012 R2 | 2 | 8 | Infrastructure | Login VSI file shares and content
Launcher | 224 | Windows 7 Enterprise (x64) | 1 | 11 | Launchers | Login VSI launchers
DELL_VDT | 6,000 | Windows 10 Enterprise (x86) | 2 | 3 | Dell_vDT | VMware View virtual desktops
Appendix D: Scripts and configuration files
Dell Networking 4048-ON Ethernet switch A configuration file
We configured our two Ethernet switches to be fully redundant.
Current Configuration ...
! Version 9.10(0.1)
!
boot system stack-unit 1 primary system: A:
boot system stack-unit 1 secondary system: B:
!
hostname dell-switchA
!
protocol lldp
advertise management-tlv system-description system-name
!
redundancy auto-synchronize full
!
enable password 7 (deleted)
!
username admin password 7 (deleted)
!
default-vlan disable
!
protocol spanning-tree rstp
no disable
bridge-priority 16384
!
vlt domain 100
peer-link port-channel 100
back-up destination 10.217.33.101
primary-priority 1
unit-id 0
delay-restore 10
peer-routing
!
stack-unit 1 provision S4048-ON
!
interface TenGigabitEthernet 1/1
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 111 mode active
no shutdown
!
interface TenGigabitEthernet 1/2
no ip address
mtu 9216
portmode hybrid
switchport
speed 1000
no shutdown
!
interface TenGigabitEthernet 1/3
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 10 mode active
no shutdown
!
interface TenGigabitEthernet 1/4
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 10 mode active
no shutdown
!
interface TenGigabitEthernet 1/5
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 11 mode active
no shutdown
!
interface TenGigabitEthernet 1/6
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 11 mode active
no shutdown
!
interface TenGigabitEthernet 1/7
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 12 mode active
no shutdown
!
interface TenGigabitEthernet 1/8
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 12 mode active
no shutdown
!
interface TenGigabitEthernet 1/9
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 13 mode active
no shutdown
!
interface TenGigabitEthernet 1/10
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 13 mode active
no shutdown
!
interface TenGigabitEthernet 1/11
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 14 mode active
no shutdown
!
interface TenGigabitEthernet 1/12
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 14 mode active
no shutdown
!
interface TenGigabitEthernet 1/13
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 15 mode active
no shutdown
!
interface TenGigabitEthernet 1/14
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 15 mode active
no shutdown
!
interface TenGigabitEthernet 1/15
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 16 mode active
no shutdown
!
interface TenGigabitEthernet 1/16
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 16 mode active
no shutdown
!
interface TenGigabitEthernet 1/17
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 17 mode active
no shutdown
!
interface TenGigabitEthernet 1/18
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 17 mode active
no shutdown
!
interface TenGigabitEthernet 1/19
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 18 mode active
no shutdown
!
interface TenGigabitEthernet 1/20
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 18 mode active
no shutdown
!
interface TenGigabitEthernet 1/21
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 19 mode active
no shutdown
!
interface TenGigabitEthernet 1/22
no ip address
mtu 9216
!
port-channel-protocol LACP
port-channel 19 mode active
no shutdown
!
interface TenGigabitEthernet 1/23
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/24
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/25
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/26
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/27
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/28
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/29
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/30
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/31
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/32
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/33
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/34
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/35
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/36
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/37
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/38
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/39
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/40
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/41
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/42
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/43
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/44
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/45
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/46
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/47
no ip address
mtu 9216
shutdown
!
interface TenGigabitEthernet 1/48
no ip address
mtu 9216
shutdown
!
interface fortyGigE 1/49
no ip address
no shutdown
!
interface fortyGigE 1/50
no ip address
no shutdown
!
interface fortyGigE 1/51
no ip address
shutdown
!
interface fortyGigE 1/52
no ip address
shutdown
!
interface fortyGigE 1/53
no ip address
shutdown
!
interface fortyGigE 1/54
no ip address
shutdown
!
interface ManagementEthernet 1/1
ip address 10.217.33.100/16
no shutdown
!
interface ManagementEthernet 2/1
no shutdown
!
interface ManagementEthernet 3/1
no shutdown
!
interface ManagementEthernet 4/1
no shutdown
!
interface ManagementEthernet 5/1
no shutdown
!
interface ManagementEthernet 6/1
no shutdown
!
interface Port-channel 10
description fx2-infra
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface Port-channel 11
description fx2-c1
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface Port-channel 12
description fx2-c2
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface Port-channel 13
description fx2-c3
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface Port-channel 14
description fx2-c4
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface Port-channel 15
description fx2-c5
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface Port-channel 16
description fx2-c6
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface Port-channel 17
description fx2-c7
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface Port-channel 18
description fx2-c8
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface Port-channel 19
description fx2-c1
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface Port-channel 100
description "VLTi - interconnect link"
no ip address
mtu 9216
channel-member fortyGigE 1/49,1/50
no shutdown
!
interface Port-channel 111
description "uplink to core-network"
no ip address
mtu 9216
switchport
vlt-peer-lag port-channel 111
no shutdown
!
interface Vlan 1
no shutdown
!
interface Vlan 161
description desktop_server-net
no ip address
mtu 9216
tagged Port-channel 10-19,100,111
no shutdown
!
interface Vlan 214
description “mgmt-net”
no ip address
mtu 9216
tagged Port-channel 10-19,100,111
untagged TenGigabitEthernet 1/2
shutdown
!
interface Vlan 300
description vMotion-net
no ip address
mtu 9216
tagged Port-channel 10-19,100,111
shutdown
!
management route 0.0.0.0/0 10.217.0.1
!
ip ssh server enable
!
line console 0
line vty 0
line vty 1
line vty 2
line vty 3
line vty 4
line vty 5
line vty 6
line vty 7
line vty 8
line vty 9
!
reload-type
boot-type normal-reload
config-scr-download enable
!
end
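The per-interface stanzas above are highly regular: TenGigabitEthernet ports 1/3 through 1/22 join port-channels 10 through 19, two consecutive ports per channel. A Python sketch that reproduces that block of the configuration (an illustration of the pattern, not a Dell-provided tool):

```python
def lacp_stanzas(first_port=3, last_port=22, first_pc=10):
    # Two consecutive ports per LACP port-channel:
    # Te 1/3-1/4 -> po 10, Te 1/5-1/6 -> po 11, ..., Te 1/21-1/22 -> po 19
    stanzas = []
    for port in range(first_port, last_port + 1):
        pc = first_pc + (port - first_port) // 2
        stanzas.append(
            f"interface TenGigabitEthernet 1/{port}\n"
            " no ip address\n"
            " mtu 9216\n"
            "!\n"
            " port-channel-protocol LACP\n"
            f"  port-channel {pc} mode active\n"
            " no shutdown\n"
            "!"
        )
    return stanzas
```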
Deployment script for linked clones
We ran our linked clone deployment via PowerCLI for VMware Horizon.
Get-ViewVC -serverName vcsa.test.local |
    Get-ComposerDomain -domain test.local -username administrator |
    Add-AutomaticLinkedClonePool -pool_id dell-pool1 -displayName "dell-pool1" `
        -namePrefix "dell-pool1-{n:fixed=4}" -parentVMPath /Datacenter/vm/win10-gold `
        -parentSnapshotPath "/ready/snap" -vmFolderPath "/Datacenter/vm/clones" `
        -resourcePoolPath /Datacenter/host/Dell_vDT1/Resources `
        -datastoreSpecs "[Conservative,OS,data]/Datacenter/host/Dell_vDT1/pool1_clone8;[Conservative,OS,data]/Datacenter/host/Dell_vDT1/pool1_clone7;[Conservative,OS,data]/Datacenter/host/Dell_vDT1/pool1_clone6;[Conservative,OS,data]/Datacenter/host/Dell_vDT1/pool1_clone5;[Conservative,OS,data]/Datacenter/host/Dell_vDT1/pool1_clone4;[Conservative,OS,data]/Datacenter/host/Dell_vDT1/pool1_clone3;[Conservative,OS,data]/Datacenter/host/Dell_vDT1/pool1_clone2;[Conservative,OS,data]/Datacenter/host/Dell_vDT1/pool1_clone1;[Conservative,replica]/Datacenter/host/Dell_vDT1/pool1_replica" `
        -minimumCount 1000 -maximumCount 1000 -headroomCount 1000 -usetempdisk 1 -tempdisksize 4096 `
        -persistence NonPersistent -suspendProvisioningonerror 0 -UseSeSparseDiskFormat 1 `
        -OrganizationalUnit "OU=Target,OU=Computers,OU=LoginVSI"
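The `{n:fixed=4}` token in `-namePrefix` tells View Composer to substitute a zero-padded, four-digit clone index. A quick self-contained sketch of the desktop names this produces (pool name taken from the command above):

```shell
# Emulate Horizon's {n:fixed=4} naming token: a four-digit, zero-padded index.
for n in 1 42 1000; do
  printf 'dell-pool1-%04d\n' "$n"
done
# → dell-pool1-0001, dell-pool1-0042, dell-pool1-1000
```

Fixed-width names keep the 1,000 clones per pool sorting cleanly in vCenter and in the Login VSI user CSV.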
Get-ViewVC -serverName vcsa.test.local |
    Get-ComposerDomain -domain test.local -username administrator |
    Add-AutomaticLinkedClonePool -pool_id dell-pool2 -displayName "dell-pool2" `
        -namePrefix "dell-pool2-{n:fixed=4}" -parentVMPath /Datacenter/vm/win10-gold `
        -parentSnapshotPath "/ready/snap" -vmFolderPath "/Datacenter/vm/clones" `
        -resourcePoolPath /Datacenter/host/Dell_vDT2/Resources `
        -datastoreSpecs "[Conservative,OS,data]/Datacenter/host/Dell_vDT2/pool2_clone8;[Conservative,OS,data]/Datacenter/host/Dell_vDT2/pool2_clone7;[Conservative,OS,data]/Datacenter/host/Dell_vDT2/pool2_clone6;[Conservative,OS,data]/Datacenter/host/Dell_vDT2/pool2_clone5;[Conservative,OS,data]/
Shell script
We ran the Dell EMC host optimization script and selected Yes for all suggested optimizations.
#!/bin/sh
#
# check_esx_xtremio
#
# This script checks the configuration of a VMware host to confirm that
# it has been configured in line with best-practice recommendations for
# an EMC XtremIO array.
#
# Version 0.1 - SH - 0413201601 - Initial version
#
yesno()
{
while true; do
read -p " (Y/N) " resp
case $resp in
[Yy]* ) return 0;;
[Nn]* ) return 1;;
* ) echo "Invalid response";;
esac
done
}
echo Checking VMware configuration against best practices for EMC XtremIO arrays
echo
SQ=`esxcfg-advcfg -q -g /Disk/SchedQuantum`
if [ -z "$SQ" ]; then
echo Error running esxcfg-advcfg command
exit 1
fi
if [ $SQ -ne 64 ]; then
echo "SchedQuantum is set to $SQ rather than the recommended value of 64."
echo -n "Do you want to set SchedQuantum to 64? "
yesno
if [ $? -eq 0 ]; then
echo "Setting SchedQuantum to 64"
esxcfg-advcfg -s 64 /Disk/SchedQuantum
else
echo "Not modifying the value of SchedQuantum"
fi
echo
else
echo "SchedQuantum setting is correct"
fi
echo
MaxIO=`esxcfg-advcfg -q -g /Disk/DiskMaxIOSize`
if [ -z "$MaxIO" ]; then
echo Error running esxcfg-advcfg command
exit 1
fi
if [ $MaxIO -ne 4096 ]; then
echo "DiskMaxIOSize is set to $MaxIO rather than the recommended value of 4096."
echo -n "Do you want to set DiskMaxIOSize to 4096? "
yesno
if [ $? -eq 0 ]; then
echo "Setting DiskMaxIOSize to 4096"
esxcfg-advcfg -s 4096 /Disk/DiskMaxIOSize
else
echo "Not modifying the value of DiskMaxIOSize"
fi
echo
else
echo "DiskMaxIOSize setting is correct"
fi
echo
OIO=`esxcli storage core device list | awk '/^[^ ]/ {x=""} /^naa.514f0c5/ {x=$1} /No of outstanding IOs with competing worlds:/&&x&&!/No of outstanding IOs with competing worlds: 256/ {print x}' | sort -u`
OIOC=`echo $OIO | wc -w`
if [ $OIOC -gt 0 ]; then
echo "$OIOC disk(s) found with incorrect max number of outstanding IOs"
echo -n "Do you want to set Outstanding IOs on these disks to 256? "
yesno
if [ $? -eq 0 ]; then
echo "Setting Outstanding IOs to 256 :"
for naa in $OIO; do
echo " $naa"
esxcli storage core device set -d $naa -O 256
done
else
echo "Not modifying the number of Outstanding IOs"
fi
else
echo "Number of Outstanding IOs is correct for all devices"
fi
echo
MPO=`esxcli storage nmp device list | awk '/^[^ ]/ {x=""} /^naa.514f0c5/ {x=$1} /Path Selection Policy:/&&x&&!/Path Selection Policy: VMW_PSP_RR/ {print x}' | sort -u`
MPOC=`echo $MPO | wc -w`
if [ $MPOC -gt 0 ]; then
echo "$MPOC disk(s) found with incorrect Multipathing configuration"
echo -n "Do you want to set Multipathing to Round-Robin on these disks? "
yesno
if [ $? -eq 0 ]; then
echo "Setting Multipathing to Round-Robin :"
for naa in $MPO; do
echo " $naa"
esxcli storage nmp device set -d $naa --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set -d $naa --iops 1 --type iops
done
else
echo "Not modifying Multipathing settings"
fi
else
echo "Existing device Multipath configuration is correct for all disks"
fi
echo
MPDO=`esxcli storage nmp satp rule list | egrep 'VMW_SATP_DEFAULT_AA *XtremIO'`
MPRO=`echo $MPDO | grep VMW_PSP_RR`
if [ -z "$MPDO" ]; then
echo "Default Multipathing is not configured for XtremIO devices"
echo -n "Do you want to set the Multipathing to Round-Robin for new XtremIO devices? "
yesno
if [ $? -eq 0 ]; then
echo "Setting Default Multipathing to Round-Robin"
esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO
else
echo "Not modifying Default Multipathing settings"
fi
elif [ -z "$MPRO" ]; then
echo "Default Multipath configuration does NOT appear to be set to Round-Robin. This will need to be fixed manually"
else
echo "Default Multipath configuration is correct"
fi
echo
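The awk filters in the script above share one pattern: remember the most recent XtremIO device name (`naa.514f0c5*`) and print it when a later detail line shows the wrong value. Here is a self-contained demonstration of that state machine against fabricated `esxcli storage core device list`-style output (the device names below are made up):

```shell
# Print XtremIO devices whose outstanding-IO limit is not 256, using the
# same awk state machine as the script above. Device lines start in
# column 1; their detail lines are indented.
awk '/^[^ ]/ {x=""}
     /^naa.514f0c5/ {x=$1}
     /No of outstanding IOs with competing worlds:/ && x && !/No of outstanding IOs with competing worlds: 256/ {print x}' <<'EOF'
naa.514f0c5000000001
   No of outstanding IOs with competing worlds: 32
naa.514f0c5000000002
   No of outstanding IOs with competing worlds: 256
EOF
# → naa.514f0c5000000001  (only the device whose limit is not 256)
```

The script then pipes this through `sort -u` so each offending device is listed, and fixed, once.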
LoginVSI launcher config file
VSILauncher.ini
[Scenario]
CPUEnabled=disabled
NumberOfCPU=0
CoresEnabled=disabled
NumberOfCores=0
HyperThreadingEnabled=disabled
HostsEnabled=disabled
NumberOfHosts=0
[Launcher]
BaseLinePhase=0
Servername=connection
Username=LoginVSI{count}
Password=enc:BD177557C81239CED1:enc
ShowPassword=C04333158A4E
Domain=test
useCSV=1
MultipleResources=0
CCL="C:\Program Files (x86)\VMware\VMware Horizon View Client\vmware-view.exe" -serverURL %csv_viewserver% -username %csv_user% -password Password1 -domainName test -desktopName %csv_poolname% -standAlone -logInAsCurrentUser False --desktoplayout 1920x1080#
CSV=\\cifs1\share\user.csv
AgentRunLoop=1
PhasesCount=1
NumberOfSessions=6000
NumberOfWindows=5400
SequentialInterval=0
AutoLogoff=1
LogoffTimeOut=5400
CreateProfile=0
TSDiscon=0
PreTestScript=
PostTestScript=
BaselineSampleInterval=3
[Launcher_FD]
Servername=connection
Username=LoginVSI{count}
Password=enc:BD177557C81239CED1:enc
ShowPassword=C04333158A4E
Domain=test
useCSV=1
MultipleResources=0
CCL="C:\Program Files (x86)\VMware\VMware Horizon View Client\vmware-view.exe" -serverURL %csv_viewserver% -username %csv_user% -password Password1 -domainName test -desktopName %csv_poolname% -standAlone -logInAsCurrentUser False --desktoplayout 1920x1080#
CSV=\\cifs1\share\user.csv
AgentRunLoop=1
[VSI]
SkipAddLaunchersScreen=0
SkipCommandLineScreen=0
Increment=0
SkipStartTestScreen=1
SkipAllWelcomeScreens=0
[Phase1]
Name=Phase 1
Sessions=6000
PhaseTime=5400
Pause=False
[LauncherWorkload]
launcherUsername=l
launcherPassword=enc:9D177557C81239CE:enc
launcherWorkflow=0
limitConcurrent=5
launcherResolution=1024x768
launcherDisconnectAfterLaunch=0
Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.
DISCLAIMER OF WARRANTIES; LIMITATION OF LIABILITY:
Principled Technologies, Inc. has made reasonable efforts to ensure the accuracy and validity of its testing; however, Principled Technologies, Inc. specifically disclaims any warranty, expressed or implied, relating to the test results and analysis, their accuracy, completeness or quality, including any implied warranty of fitness for any particular purpose. All persons or entities relying on the results of any testing do so at their own risk, and agree that Principled Technologies, Inc., its employees and its subcontractors shall have no liability whatsoever from any claim of loss or damage on account of any alleged error or defect in any testing procedure or result.
In no event shall Principled Technologies, Inc. be liable for indirect, special, incidental, or consequential damages in connection with its testing, even if advised of the possibility of such damages. In no event shall Principled Technologies, Inc.'s liability, including for direct damages, exceed the amounts paid in connection with Principled Technologies, Inc.'s testing. Customer's sole and exclusive remedies are as set forth herein.
This project was commissioned by Dell Technologies.