In today’s fast-paced world, IT organizations are continuously looking for better ways to increase the productivity and agility of their business. Given the proven advantages and benefits of virtualization technologies, many IT organizations are revisiting their business plans to implement a virtualization strategy. This is a mandate driven by the executive management team to provide a smoother, more efficient service to the business while reducing costs. Therefore, IT organizations are going virtual. They are implementing desktop and server virtualization projects and are focusing on virtualizing mission-critical applications, especially core business and database applications such as Oracle, SAP, SQL Server, Exchange, and SharePoint.
However, this is not an easy task for IT, which is under pressure to provide a fast, non-stop service to the business while maintaining low operating costs. But like everything else in life, there are always tradeoffs to make, and IT is no exception. IT organizations have some difficult decisions to make when virtualizing Tier 1 applications. The key is to figure out the optimal approach to achieve the best results in the key areas they are measured on, including performance, uptime, and costs, without introducing any additional complexity to their infrastructure.
Now, in order to run a successful business, your end users must have fast and uninterrupted access to their applications. If your applications are running slowly and not performing to your established requirements, or if they are not accessible by your end users, this causes a major disruption in day-to-day business operations. It creates a domino effect: if your end users are not satisfied with the service and performance of the applications they need to get their job done, the business gets lower productivity from its workforce, and lower productivity results in a loss of revenue.
Additionally, storage complexity is growing at a rate that makes it very difficult for IT to manage. According to ESG Research, 50% of companies with more than 100 production servers expect to grow storage 30-50% annually, with the explosion of unstructured data and instrumented systems driving a massive wave of content. Server virtualization consolidation ratios of 80% and greater are demanding more from aging storage architectures. Business needs are pushing the virtualization of Tier 1 applications, leaving the majority of IT projects storage-bound.
Now, one of the major pain points we hear from IT organizations that have migrated their Tier 1 applications into a virtualized environment is a significant impact on application performance. But why does virtualization make your performance worse? The fact is that virtualization highly randomizes your I/O. Before virtualization you had an easy one-to-one relationship between a server and an application. Now you’ve virtualized and created a many-to-one relationship, where you have multiple, if not dozens of, virtual machines on a single server, all competing for the same I/O resources. The requirements of these virtual machines, and the IOPS they need to be fed, exceed the capabilities of traditional storage solutions … which is why you’re having performance pain. This problem is most commonly known as an I/O bottleneck. It is a major challenge because a significant impact on application performance causes a direct impact on business productivity. This is unacceptable to any business.
Another major pain point we are hearing from IT organizations delivering virtualized applications is the amount of downtime due to service interruptions. Service interruptions are a big challenge as well, because end users rely on these applications to run the business and cannot afford any downtime. If a server is taken down for maintenance or a system component fails, then all the virtualized applications go down with the server. If end users cannot access their applications, there’s a major disruption in day-to-day business operations. This is also unacceptable to any business.
So how is IT trying to address these pain points and challenges? You’ve probably wrestled with a couple of traditional approaches to “How do I remove this bottleneck?” Here’s a list of the common attempts. Sometimes a database redesign can help, but often it doesn’t, and it’s costly in terms of people or consultant expenses in any case. People consistently try to brute-force the problem with added processors or RAM, but that doesn’t help if your application is I/O-bound. Caching is good if you have a very small working set of data or specific locality of access, but at the end of the day all data has to be written somewhere, so caching is not an ideal solution. And finally, adding hard drives doesn’t solve the problem, because hard drives don’t give you enough IOPS and will likely waste capacity. You would have to aggregate hundreds and hundreds of hard drives to get to a performance level that fixes your problem, and that’s not cost-efficient.
Let’s take a quick look at what you probably have installed today and why you’re considering going to flash. If you take a look at the chart, there are a couple of other approaches. Today everybody’s talking “flash,” be it internal PCIe cards for your servers or externally attached DAS arrays based on flash. In the lower left you have your legacy storage arrays … SAN or NAS. There’s another category of product that has embraced flash called “hybrid.” These are typically majority disk-based systems with a limited amount of flash in them, primarily acting as caches, not as datastores. They still have a relatively low number of IOPS and low overall performance. As I mentioned, they are predominantly based on disk, and disk just can’t drive enough IOPS to deliver the performance you might need. Moreover, like traditional SAN or NAS, many of these systems are based on legacy controllers and legacy architectures designed for rotating media, not flash. They inhibit the full performance of flash. Yes, you’ll get a little bit of a bump, but it’s not a good overall solution for performance-sensitive applications. The internal PCIe and external DAS solutions are not designed for shared environments. By definition they’re direct-attached. They cannot easily share capacity between servers, and they cannot easily share their performance among the various servers and applications. Moreover, they’re not ideally suited for virtualization, because they can’t support VMware vMotion. And frankly, adding PCIe cards to servers is very disruptive and very costly on a per-GB basis. If you decide to rip out existing Fibre Channel drives and put in flash drives, that’s also very disruptive and expensive. It’s just not a good way to go. There is a better approach, as you will shortly see, called networked flash. This is really the optimal solution for non-disruptively resolving these I/O bottleneck problems, specifically in virtualized environments.
To minimize downtime in virtualized environments, one solution that is typically implemented is creating a cluster. In a cluster you have a group of servers running virtualized applications acting as a redundant system to immediately migrate workloads from one server to another. This cluster approach provides continued service when a server goes offline for any particular reason. However, something to keep in mind is that the virtualized applications require the use of shared storage in a cluster approach. This not only means that you have to make an additional investment in external storage, but you are also wasting existing and valuable server storage resources. By now you are probably thinking, well, I might not be fully utilizing my server resources, but I can live with that because now I have a solution that minimizes downtime and takes care of my problem. Right?...
Not exactly. There is still another factor you have to consider because your system is still vulnerable. Just implementing a cluster with a shared storage array leads to a bigger problem. This solution has limitations because now all your servers and shared storage reside in one location. So what happens if there’s an outage in that facility due to a power failure, an air conditioning malfunction, a water leak, or even a construction accident? Now that facility becomes a single point of failure causing major downtime and a huge impact on the business.
So the fundamental and consistent question, when we talk to customers, channel partners, and end users throughout their IT organization, from CIOs to virtualization managers, is “How do I cost-effectively add performance and high availability to serve my application requirements?” Performance means different things to different people, but to effectively deploy your applications it has to be predictable. You have to know that you have it at the times that you need it. It has to be sustained, so that when your users are running these applications you get very smooth operation and can sustain performance. And it has to work across physical, virtual, or cloud-based applications. Additionally, your applications need to be continuously available so that you can run and operate the business efficiently. All the pain points we have discussed are very common in any IT organization. Unfortunately, some of the quick solutions implemented to address these pain points tend to force IT into significant tradeoffs between performance, uptime, complexity, and costs. Instead of making tradeoffs in all these areas and settling for one of them, why not evaluate a different approach to delivering data that is optimized to address all these pain points simultaneously? Now let us share with you a better approach to increase business productivity and agility for your organization. This approach will help you improve the performance of your virtualized applications and maintain non-stop business operations while reducing your capital and operating expenses.
Part of this approach takes advantage of leveraging a networked flash architecture. So what is a networked flash architecture? It’s an approach where you simply add an all-flash ViSX performance storage appliance to your existing storage infrastructure by connecting it to any Ethernet switch port. You simply connect to the switch, give it an IP address, configure your RAID groups, Storage vMotion over your datastore and your application is ready to exploit the full performance of flash in a matter of minutes. Networked Flash means that the flash storage is available to all servers, all VMs, and all applications without replacing or disrupting your existing storage, servers, or applications.
Here’s a graphical view. Compared to traditional approaches that are disk-based in the bottom left, you can solve your performance issues without throwing hundreds of drives at the problem. Here’s a rack of traditional disk barely delivering 60,000 IOPS, at a very expensive cost point of $500,000, and it typically takes weeks to implement. One could implement PCIe flash cards, but these are very expensive, as every server needs one or perhaps two cards. Hybrid storage systems, such as shown in the bottom right, simply don’t have the performance of all-flash systems. Typically, they also require you to replace your existing storage and learn an entirely new set of storage management tools. In the upper right you’ll see our latest-generation, networked flash G4 ViSX appliance. It delivers 140,000 IOPS at a price similar to disk-based storage systems, yet it deploys in minutes.
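To make the economics above concrete, here is a quick back-of-the-envelope cost-per-IOPS comparison using the figures quoted on the slide. Note that the slide gives the ViSX price only as “similar to disk-based storage systems,” so reusing the $500,000 figure for it is an assumption for illustration, not a quote.

```python
# Cost-per-IOPS comparison from the figures on the slide.
# The $500,000 ViSX price is an assumed placeholder ("similar to
# disk-based storage systems"); only the IOPS figures are quoted.

def dollars_per_iops(price_usd: float, iops: float) -> float:
    """Cost of delivering one I/O operation per second."""
    return price_usd / iops

disk_rack = dollars_per_iops(500_000, 60_000)    # traditional disk rack
visx      = dollars_per_iops(500_000, 140_000)   # G4 ViSX, assumed same price

print(f"Disk rack: ${disk_rack:.2f}/IOPS")       # about $8.33/IOPS
print(f"ViSX:      ${visx:.2f}/IOPS")            # about $3.57/IOPS
```

Even granting the disk rack the same price point, the all-flash appliance comes out at well under half the cost per IOPS, before counting the weeks-versus-minutes deployment difference.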
The first part of the approach consists of aligning your storage tiers to your application requirements. This means that as a best practice you should leverage tiering across different types of storage to deliver the right balance between those applications that demand the fastest performance versus the ones that demand the largest capacity. In fact, one way to take application performance to the next level is by introducing a new and faster tier consisting of flash for those data-intensive applications that require quick access to information. Astute provides technologies that leverage flash to significantly increase datacenter efficiency and performance.
In addition to introducing a flash tier and leveraging tiering across your storage devices, it is recommended to provide a virtualized environment that can provide continuous availability to your business operations.
This can be accomplished by creating a physical separation that extends your cluster and expands your storage resources into a different location. This approach allows you to maintain an independent copy of your data that can be used to provide continuous access in case some type of service interruption occurs on the other end.
A better way to leverage tiering and take advantage of physical separation to provide fast performance and continuous availability for your virtualized applications is via networked flash and storage virtualization technologies. Through storage virtualization you are adding a storage hypervisor – an intelligent software layer residing between the applications and the disks that virtualizes the individual storage resources it controls and creates one or more flexible pools of storage capacity to improve their performance, availability, and utilization. The benefit of DataCore’s storage hypervisor is that it has the ability to present uniform virtual devices and services from dissimilar and incompatible hardware, even from different manufacturers, making these devices interchangeable. Continuous replacement and substitution of the underlying physical storage may take place, without altering or interrupting the virtual storage environment that is presented.
Now let us show you how this solution will help you accelerate the performance of your virtualized applications.
Here’s how it works. A key capability of the DataCore storage virtualization software is its ability to dynamically optimize storage capacity based on which disk blocks are most frequently accessed. Let’s say you have a multi-tier pool, using 7200 RPM hard disk drives for Tier 2, 10-15K RPM hard disk drives for Tier 1, and the fast flash-based Astute appliances for Tier 0. The DataCore software organizes the Astute appliances and the other available disks into a virtual storage pool. It classifies the flash-based appliance as the top tier, and assigns less speedy, higher-density drives to lower tiers based on performance characteristics that you set. The software dynamically directs workloads to the most appropriate class of storage device, favoring the Tier 0 flash for high-priority demands needing very high-speed access. It relegates lower-priority requests to Tier 1 and Tier 2 disk drives, striking a balance between the speed of the flash-based Astute appliances and the economies of larger-capacity HDDs. Any special, high-priority workloads can also be pinned to the Astute appliances. At the same time, the software migrates less frequently used blocks to the hard disk drives to avoid undesirable contention for the flash. This approach helps you avoid unnecessary spending on additional disk equipment or exotic storage devices and, more importantly, it maximizes application performance.
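DataCore’s tiering algorithm is proprietary, but the behavior just described (promote frequently accessed blocks to flash, pin special workloads, demote cold blocks to HDDs) can be sketched with a simple access-count heat model. All tier names and thresholds below are illustrative assumptions, not DataCore parameters.

```python
# Minimal sketch of frequency-based auto-tiering, assuming a simple
# per-block access-count heat model. Tier names and the hot/warm
# thresholds are hypothetical, chosen only for illustration.

from collections import Counter

class TieringPool:
    def __init__(self, hot=100, warm=10):
        self.heat = Counter()       # access count per block
        self.pinned = set()         # blocks forced onto the flash tier
        self.hot, self.warm = hot, warm

    def access(self, block):
        """Record one read or write against a block."""
        self.heat[block] += 1

    def pin(self, block):
        """Pin a high-priority workload's block to Tier 0 flash."""
        self.pinned.add(block)

    def tier_for(self, block):
        """Pick the tier a block should currently live on."""
        if block in self.pinned:
            return "tier0_flash"
        count = self.heat[block]
        if count >= self.hot:
            return "tier0_flash"    # hot block: promote to flash
        if count >= self.warm:
            return "tier1_10k"      # warm block: fast HDD
        return "tier2_7200rpm"      # cold block: bulk HDD

pool = TieringPool()
for _ in range(150):
    pool.access("db_index")         # heavily read block gets promoted
pool.access("cold_archive")         # touched once, stays on bulk disk
pool.pin("oltp_log")                # pinned regardless of heat
```

The key design point mirrors the prose: placement is driven by observed block heat, with an explicit override (pinning) for workloads whose priority is known in advance.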
If we take a closer look at the DataCore nodes you will notice the Astute flash-based appliances connected to them. These appliances play an instrumental role in reducing the disk latencies often responsible for mission-critical applications running poorly. Additionally, if you already have other types of disk arrays, you can combine all of them as part of your storage pool. The Astute appliances can operate as the fastest member in your balanced storage hierarchy, accompanied by high-performance SAS devices and bulk SATA storage. The Astute appliances are dynamically selected by the auto-tiering intelligence within the DataCore software for the most critical apps. When the flash disk capacity is consumed with high-priority requests, less critical requests are automatically directed to the SAS devices or SATA storage depending on their relative importance.
Now let us show you how our solution will help you prevent storage from taking down your applications, providing continuous availability for your business operations.
Another major capability of the DataCore software is that it allows you to configure redundant storage pools by synchronously mirroring between DataCore nodes at different locations. Basically, the virtual disk is really a logical representation of a dual-ported drive, except that two independent copies are being updated in real time at each location. Notice that as a best practice, it is recommended that the two storage copies reside in two separate physical locations up to 100 km apart. To better load-balance these configurations, traffic is evenly spread between the two pools by equally distributing the preferred paths from the host servers across the active/active SAN. In other words, each node is generally set up to serve as the primary resource for half of the capacity while the other covers primary responsibility for the other half.
So for example, if one of the storage pools needs to be taken out of service, or any of its devices suffers a failure, the application servers sense that they cannot reach the disks through the preferred path and automatically redirect the applications to the alternate path without disruption. That request is fielded by the redundant node using the mirrored copy. When the service is completed on the left side, any changes that transpired while it was absent are sent over by the right node. After they are both back in sync, the application servers that had redirected their requests are signaled to return to their preferred paths. They repeat the same procedure at the other site if necessary, never interrupting users despite the magnitude of the change. This technique maximizes uptime.
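The failover and resynchronization sequence just described can be sketched in a few lines: two copies updated synchronously, transparent fallback to the surviving copy, and a log of changed blocks that is replayed when the absent node returns. This is a minimal illustration of the idea, not DataCore’s implementation; every name here is hypothetical.

```python
# Illustrative sketch of a synchronously mirrored virtual disk with
# path failover and resync. Assumes two nodes ("left"/"right"); all
# names are hypothetical, not DataCore's actual implementation.

class MirroredVirtualDisk:
    def __init__(self):
        self.copies = {"left": {}, "right": {}}   # block -> data per node
        self.online = {"left": True, "right": True}
        self.dirty = set()          # blocks changed while a copy was down
        self.preferred = "left"     # preferred path for this host

    def write(self, block, data):
        wrote = False
        for node in self.copies:    # synchronous: update every live copy
            if self.online[node]:
                self.copies[node][block] = data
                wrote = True
            else:
                self.dirty.add(block)   # remember for later resync
        if not wrote:
            raise IOError("no mirror copy reachable")

    def read(self, block):
        # Try the preferred path first, fall back transparently.
        order = [self.preferred] + [n for n in self.copies
                                    if n != self.preferred]
        for node in order:
            if self.online[node]:
                return self.copies[node].get(block)
        raise IOError("no mirror copy reachable")

    def fail(self, node):
        self.online[node] = False

    def restore(self, node):
        # Replay the blocks that changed while the node was absent.
        src = next(n for n in self.copies if self.online[n])
        for block in self.dirty:
            self.copies[node][block] = self.copies[src][block]
        self.dirty.clear()
        self.online[node] = True    # back in sync; preferred paths resume

disk = MirroredVirtualDisk()
disk.write("blk1", b"v1")
disk.fail("left")                   # left pool taken out of service
disk.write("blk2", b"v2")           # fielded by the surviving right copy
disk.restore("left")                # changes sent back over; in sync again
```

The point of the sketch is the same as the prose: the host never sees an outage, only a path change, and the returning node is brought up to date before it resumes its primary role.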
Let’s talk about some of the unique advantages and benefits of our solution.
When you combine networked flash with storage virtualization as part of your virtualized environment, your business will be able to operate more efficiently and provide the service required to support the needs of your end users. Our solution addresses the major challenges that exist today related to application performance and service interruptions while saving you money. Key advantages include:
- Accelerating application response times by reducing I/O bottlenecks
- Delivering predictable performance at a lower cost per IOPS
- Preventing data loss and providing continuous availability through real-time I/O replication
- Taking advantage of your existing storage assets by maximizing utilization
- Dynamically matching workloads to the most appropriate disk and flash resources based on priority (faster performance versus more capacity)
- Relocating data from one storage system to another, non-disruptively
- Pooling incompatible devices for the utmost flexibility and efficiency
Storage Systems
Independent tests were run on 3 competitive all-flash storage systems with workloads varying by read/write mix (90/10, 70/30, 50/50, 30/70, 10/90). This chart shows a 70% read and 30% write environment, which is typical of many workloads.
- Each of the storage systems had 12 solid-state drives installed as data drives.
- The 12 SSDs were configured as a single RAID 0 storage group.
- Four 100GB volumes were configured on each storage system.
- Each storage system was connected with one 10GbE iSCSI host connection.
- Several hours of pre-conditioning runs were performed on each storage system before the performance tests were run.
- Vendor “A” – a start-up flash storage vendor
- Vendor “B” – a well-known, established storage vendor
Relative to either of the two competitors, ViSX has a 5X price-performance advantage.
Now that we have already shared with you the value proposition of the solution, let us show you how you can grow your storage infrastructure seamlessly as you need, when you need.
Let’s say your current environment consists of a set of virtualized applications in a cluster connected to a shared storage array. As discussed earlier, in this type of setup your virtualized applications are competing for access to the storage disks, frequently causing I/O bottlenecks and resulting in slower response times.
So in order to accelerate the performance of your virtualized applications, you introduce a new storage tier – Tier 0, consisting of the high-performance Astute flash-based appliances in conjunction with the storage virtualization capabilities of the DataCore software. In this setup you will configure the storage virtualization software to take high-priority requests from one of your business critical applications and route them to the flash-based appliances, which are used as the fastest dedicated storage resources in the pool. This approach allows you to compare the application performance of your original environment with the DataCore and Astute technologies and see for yourself the performance improvements of the solution.
Then, as the business need arises for more capacity without sacrificing performance, you have the option to scale up by easily adding solid-state drives (SSDs) to a currently installed Astute ViSX appliance on Tier 0 without disrupting the production environment - a true hot-pluggable add-on for up to 24 SSD modules.
Additionally, as your applications and user base grow, you can also scale up and out to millions of IOPS without any disruption by adding more Astute ViSX appliances and SSDs to your environment as needed. The capacity of an individual ViSX appliance is up to 45.6TB, or, when using ViSX Deduplication, you can effectively access nearly 250TB of data. If more capacity or performance is needed than one ViSX can supply, additional ViSX appliances can easily be added to the rack and concatenated to the existing ViSX. And there you have it: a cost-effective solution that not only delivers the performance and availability you need for your business, but also allows you to scale up and out as your business grows.
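As a quick sanity check on the capacity figures above, the implied deduplication ratio follows directly from the raw and effective numbers. The ratio itself is an inference from the quoted figures, not a published specification.

```python
# Effective-capacity arithmetic from the figures quoted above.
# The ~5.5:1 deduplication ratio is inferred from those numbers,
# not a vendor-published specification.

raw_tb = 45.6          # capacity of a single ViSX appliance
effective_tb = 250     # "nearly 250TB" with ViSX Deduplication

ratio = effective_tb / raw_tb
print(f"Implied dedup ratio: ~{ratio:.1f}:1")
```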
Finally, to wrap up the presentation and open it up for questions, here are the next steps you can take if you are interested in learning more about how to deliver first-class performance and availability for Tier 1 apps. First, we encourage you to give us a call to get in touch with our sales professionals, obtain more information, and schedule an onsite meeting. Second, rethink your virtualization strategy to make sure it’s comprehensive. Our Sales Directors are at your disposal to sit down with you, understand your needs, and build a plan together. Finally, request an assessment. Our Sales Engineers will work with you to provide a live demonstration and assess your business and technical requirements. We look forward to helping you transform your business and keep your organization competitive and well-positioned for future growth. Thank you for your time!
Now let’s open it up for questions, and remember you can also contact us via our websites at www.datacore.com and www.astutenetworks.com.
Use this slide as a visual to keep during the Q&A so that the audience is not staring at a blank slide.
ViSX is the ideal flash storage solution for IT organizations looking for the best combination of high IOPS/$ and lowest $/GB. Compared to the alternative approaches, ViSX is the simplest to deploy and best leverages your existing storage resources. DataCore makes this even more attractive with their multivendor storage virtualization and storage management capabilities which simplify the ongoing management of your storage infrastructure.
Presentation to customers: delivering first-class performance and availability for Tier 1 apps