DataCore has spent the last 17 years developing a hardware-agnostic set of storage services. Some of these services are common among modern storage systems: snapshots, thin provisioning and mirroring. But no other vendor can offer this functionality in a universally compatible format. Our storage services can be found in our SANsymphony-V and DataCore Virtual SAN products.
Updated: December 8, 2014 (for SANsymphony-V10 PSP1)

Key Points:
Introduce yourself, be yourself
Know enough about their environment to determine which points from the presentation to emphasize, and tailor the conversation to the role of the person.
For example, when speaking with an application or server administrator who cares little about storage infrastructure, you’ll likely concentrate on Virtual SAN. On the other hand, a storage administrator dealing with a diverse set of arrays will be far more interested in virtualizing external storage and less likely to be interested in Virtual SAN. Flash seems pretty universally interesting.
Think about the last 30 years of data storage. After things moved off the mainframes, data storage on hosts was very common.
In the ’80s, this is how you stored data. There wasn’t a ton of it, so you could store it on your hosts. Over the following 10 to 15 years, we saw the emergence of external storage: SAN in the early ’90s and NAS toward the latter part of the decade. This made sense for a particular period of time. One single storage system held all of your data; you plugged everything into it and it just worked. These are effectively specialized servers with specialized operating systems to store your data. More recently, advancements have led to converged systems, flash cards, flash arrays and cloud storage. Every single one of these storage devices is basically rewriting the same software. Each manufacturer creates its own OS with its own services stack. And now you end up with environments full of different devices that don’t work together and can’t communicate with each other. It’s a real complicated mess. These fragmented approaches to storage can’t keep up with the rate at which data is growing, which is why you are seeing the emergence of software-defined storage.
when there wasn’t much to store, you kept it inside the servers.
20 years ago, we had local attached disks in all servers. That resulted in islands of storage and poor utilization.
So we moved to shared storage with SAN and NAS arrays. That improved the TCO, but these boxes have become a large part of the IT cost.
Today, we’re finding new and innovative types of storage being built.
We’re seeing server flash become viable in terms of cost.
The cost of rotating media continues to plummet.
We find we have abundant CPU cycles in those servers, so we can implement higher-level functionality in software instead of firmware.
We’re seeing hyper-converged solutions that simplify the consumption of storage.
And, finally, we’re seeing object storage and cloud economics also significantly impact the overall cost of storage.
We see this from every major vendor out there: EMC, NetApp, VMware. Everyone is looking to the future of storage in software, not the newest hardware. So what should software be able to do for your environment? For example, it should enable different storage devices to communicate with one another. There’s no reason you can’t have a piece of software that lets you select any device you are looking for.
Second – it should separate advances in software from advances in hardware. There’s no reason to upgrade the hardware every time we need to enhance the software, or to buy new software every time new hardware comes out. That model doesn’t make sense. Third – we should be able to pool and centrally manage all of our storage. Siloed storage should be a thing of the past if we have the right software in place. And finally, for the things that are really difficult in storage – hardware refreshes, hardware maintenance and data migrations – the right software should help make those activities easy. These are the principles on which DataCore was founded.
The result is one unified software platform for any applications and any underlying storage, so you can accelerate your applications and ensure continuous availability, whether they run on virtual machines, on physical servers or virtual desktops. At the same time, the software pools and protects the storage devices, be they flash, SAN or Cloud while centralizing management and automating many of the functions.
This is what the product looks like: a complete software-defined storage stack after 16 years of development. We are on our 10th-generation product with a cross-device set of services. In this unified software platform we provide everything you need to manage your storage. Many of these are features you’ll find on a modern storage system, things like thin provisioning and snapshots. DataCore provides all of this functionality in a completely hardware-agnostic form. This enables us to do things that no other vendor can do, like auto-tiering across unlike systems, and mirroring and failover between unlike storage systems.
Now I’ll give you an overview of what this looks like, but first let me give you an appreciation for the benefits customers derive from DataCore.
[These are real proof points derived from first-hand survey of our customer base – corroborated by the 3rd party, TechValidate who specializes in auditing the results.]
DataCore customers report up to a 75% reduction in storage costs. They report up to a 10x performance increase from the existing storage hardware they have in their environment. They report up to a 4x improvement in capacity utilization, and a 100% reduction in storage-related downtime. DataCore customers also report a 90% decrease in the time they spend on routine storage tasks.
Let’s discuss three very typical ways that people are using DataCore in the field. First, virtualizing your existing storage hardware. Second, creating hyper-converged storage from direct-attached storage on servers. And third, easily integrating new hardware like flash and SSDs.
Conceptually, the first use case is very simple.
DataCore runs on x86 servers, in the data path between hosts and the underlying storage hardware in your environment. This allows you to have one common set of services for all of your storage hardware: whatever you have today, and whatever you choose to have in the future. We’re able to pool all of your storage and eliminate any wasted capacity. We get rid of wasteful silos allocated to just one app. We enable different types of systems to communicate with each other even if they are mutually incompatible. Everything is replicated between DataCore nodes, with unlike devices at either end.
To be clear, we are a software-only solution, not an appliance. We run in the data path. In a minute I’ll show you how we speed up your apps.
[Some in the audience may react – “I don’t want to put anything in the data path!” Draw out their concerns now so you can be clear that rather than slow them down, we enhance the performance and availability of their environment. Address this objection up front.]
Let’s consider some of the possibilities this approach brings.
For starters, the software automatically directs data to the class of storage device best suited to handle it. You can have up to 15 different tiers. Often, tier 1 is a flash array or flash card. Tier 2 might be your enterprise-caliber storage system. Tier 3 might be a tertiary system, and you can even designate a tier for your cloud storage. In the background, we constantly evaluate which blocks are being accessed most frequently, and transparently move only those to faster tiers without user intervention and while in production. Other data being used by the same “Tier 1” application may be judged to better reside in lower-cost, higher-capacity devices, since it is not retrieved as regularly. Yet in most cases today, people are buying top-tier, high-performance storage to hold their entire database when only 10 to 20% of it really benefits from the added speed.
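For a technical audience, the tiering idea above can be made concrete with a minimal, hypothetical sketch – this is not DataCore’s code, just an illustration of “count block accesses over a window, promote the hottest blocks, demote the cold ones.” The class and parameter names are illustrative assumptions.

```python
from collections import Counter

class TieringEngine:
    """Illustrative frequency-based auto-tiering (tier 0 = fastest, e.g. flash)."""

    def __init__(self, num_tiers=3, promote_top_n=2):
        self.num_tiers = num_tiers          # e.g. flash, enterprise array, cloud
        self.promote_top_n = promote_top_n  # blocks promoted per evaluation pass
        self.block_tier = {}                # block id -> current tier
        self.access_counts = Counter()      # accesses since last evaluation

    def record_access(self, block_id):
        # New blocks land on the slowest tier until they prove they are hot.
        self.access_counts[block_id] += 1
        self.block_tier.setdefault(block_id, self.num_tiers - 1)

    def rebalance(self):
        """Move only the hottest blocks up one tier; demote untouched blocks."""
        hottest = {b for b, _ in self.access_counts.most_common(self.promote_top_n)}
        for block in self.block_tier:
            if block in hottest:
                self.block_tier[block] = max(0, self.block_tier[block] - 1)
            elif self.access_counts[block] == 0:
                self.block_tier[block] = min(self.num_tiers - 1,
                                             self.block_tier[block] + 1)
        self.access_counts.clear()          # start a new evaluation window
```

In this sketch, a frequently read block migrates toward tier 0 one step per pass, while idle blocks drift back down – the same “only the hot 10 to 20% earns the fast tier” behavior described above.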
Around this theme of application performance is DataCore’s caching technology. Remember where we sit in the architecture. Over the last 16 years, we’ve perfected the art of caching your data so your applications can effectively run out of memory. Memory is significantly faster than flash and of course, light-years faster than spinning disks. The software constantly evaluates your data and intelligently anticipates what the applications will ask for next. The combination of DataCore software on x86 servers effectively turns them into mega caches, full of inexpensive RAM. We also write data to cache, keeping it there as long as possible. When we do write it to storage, we turn many random writes into a few sequential writes which gets the maximum performance from the storage devices.
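The write-side behavior described above – absorbing writes in RAM and destaging them as a few sequential streams – can be sketched in a few lines. This is a hypothetical illustration of write-back caching with coalescing, not DataCore’s implementation.

```python
class WriteBackCache:
    """Illustrative write-back cache that coalesces random writes."""

    def __init__(self):
        self.dirty = {}                 # logical block address -> latest data

    def write(self, lba, data):
        # Repeated writes to the same block are absorbed in RAM,
        # so only the latest version ever reaches the backing storage.
        self.dirty[lba] = data

    def flush(self):
        # Destage dirty blocks in ascending LBA order so the backing
        # device sees sequential I/O instead of many random writes.
        batch = sorted(self.dirty.items())
        self.dirty.clear()
        return batch
```

Four scattered writes (including an overwrite) flush as three blocks in LBA order – the “many random writes into a few sequential writes” effect the talk track describes.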
The results from the combination of caching and auto-tiering speak for themselves. We have customers who have achieved up to a 10x performance boost. Yet look at what the majority are getting: 72% of the DataCore customers we surveyed sped up their performance by 3x or more after virtualizing their storage with DataCore, without necessarily buying new storage hardware.
Now consider our capabilities around protecting data and providing business continuity. This will help you further understand how we are architected. We typically operate in a minimum of a 2-node configuration. We allow you to synchronously mirror, fail over and fail back between unlike storage systems. For example, between your EMC and NetApp arrays, or Dell and HP. It’s not just the ease of setting up a highly available environment, but how this changes the way you look at data storage. Remember how we spoke in the beginning about software making difficult hardware refreshes, hardware maintenance and migrations easy. All of a sudden, bringing in new hardware gets really easy. Replicating data gets really easy. Taking systems offline gets significantly easier. Instead of coming in on Friday night and doing a fire-drill data migration where you have to get it done by Sunday night or you’re shutting the business down, it’s as seamless as standing up a new system next to your existing devices, mirroring data to the new gear and transparently switching over to it without interrupting applications.
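The core contract of synchronous mirroring is simple enough to sketch: a write is acknowledged to the host only after both copies are persisted, and reads can be served from either side. The classes and node names below are illustrative assumptions, not DataCore APIs.

```python
class StorageNode:
    """Illustrative stand-in for any storage device, regardless of vendor."""

    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def persist(self, lba, data):
        self.blocks[lba] = data
        return True                     # this node's acknowledgment

class SyncMirror:
    """Mirror writes across two unlike nodes; ack only when both succeed."""

    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, lba, data):
        # The host-visible ack requires both copies to be durable.
        return self.primary.persist(lba, data) and self.secondary.persist(lba, data)

    def read(self, lba):
        # Either copy can serve reads, which is what makes transparent
        # failover and hardware swaps possible.
        return self.primary.blocks.get(lba, self.secondary.blocks.get(lba))
```

Because the two nodes only need to honor the same persist/read contract, the hardware behind each one can be from different vendors – which is the point of mirroring between unlike systems.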
This synchronous mirroring capability extends across data centers up to 100 kilometers apart, and asynchronous replication covers any distance. The hardware on either end does not need to match, and we don’t need specialized hardware. You can use any hardware at your data center or at your remote sites, which is a key differentiator between DataCore and anything else you see out there. We support any application, any hypervisor and any storage hardware.
More from our TechValidate survey. 50% of the customers we surveyed have had no planned or unplanned downtime for over 2 years. Several report running for 8+ years without downtime and still counting. We have a lot of customers that haven’t even had the product for that long so they don’t qualify. [Good time to insert your customer examples, several of which have had no downtime since inception]
Others that have made entire data center moves without incurring an outage.
The second use case we’d like to discuss is creating hyper-converged storage from storage directly-attached to servers.
Say you have a number of hosts with internal storage, some of them with flash cards. The apps may be running in virtual machines or directly on the physical server. In this scenario, you can run DataCore inside these servers alongside the apps to pool their direct-attached storage, thereby creating a full-featured hyper-converged storage cluster. You would not need a separate external SAN array to share storage among the servers.
Everything you would have asked from a modern storage system, you now get directly on your hosts in the DataCore Virtual SAN: features like synchronous mirroring, auto-tiering, CDP and adaptive, in-memory caching. This also means you can share flash resources between servers.
There are two primary needs met by hyper-converged storage. One is a low-cost, highly-available, remote site solution to be deployed at dozens, hundreds or thousands of sites. Examples include restaurants, branch offices, hotel chains, and remote offices requiring compact configurations with storage integral to the servers. No extra cost or complexity from external storage. Unlike other hyper-converged storage alternatives that require a minimum of 3 hosts, you can set up a fully-redundant DataCore Virtual SAN with just 2 servers. Data from each of these locations can be replicated to and from a central distribution center or a data center hub where additional safeguards, archiving and processing/analysis can take place.
On the other end of the spectrum, DataCore Virtual SAN is ideal for VDI (virtual desktop) rollouts as well as latency-sensitive, Tier 1 apps where it’s important to bring data storage close to the apps, and to scale out the number of servers to match expanding users and workloads. You can start with as few as 2 nodes and scale out to 64 nodes, aggregating a total of 64 PB and driving up to 100 million I/O operations per second (IOPS). Again, without relying on external storage.
Those of you entertaining the use of flash and other solid state disk to speed up applications will be particularly interested in this section.
You’re probably asking yourself many of these questions. How do I integrate this new technology with my existing environment?
What new processes do I need to manage this investment? What software is available from these flash devices to meet my broader data protection and business continuity objectives?
You’re also trying to determine just how much flash is appropriate to attain the desired performance levels and how much of it has to be dedicated to specific apps.
DataCore provides a comprehensive and robust set of storage services for any flash manufacturer out there, some of which are highlighted here. We cover flash cards, flash arrays and hybrid flash arrays. Our 10th-generation product extends the native capabilities of the flash devices and makes it easy to integrate them alongside other storage technologies already in your data center. DataCore also facilitates sharing the flash resources across your servers and applications.
You can see 3 ways in which we do this. On the left is the hyper-converged example we talked about a few minutes ago, where the flash cards are inside the hosts. In the middle, you see flash as part of the dedicated x86 servers acting as DataCore nodes, responsible for virtualizing external storage. And we can easily add flash arrays or hybrid flash arrays into an external storage pool consisting of disk arrays. Wherever you choose, you are able to share your flash resources across your hosts and the variety of operating systems in your data center. As importantly, you can trust our complete set of advanced and proven storage services, perfected over the past 16 years and 25,000+ installations, rather than rely on relatively immature code. We also offer you the added advantage of selecting the most appealing flash hardware today and potentially a different manufacturer next year while maintaining those same software services. In other words, the hardware decision you make today does not dictate or limit the choices you have in the future.
We talked about virtualizing external storage resources and also discussed how to create hyper-converged storage from direct-attached resources. Now let’s see how those concepts come together.
DataCore is the only vendor that can combine hyper-converged storage with external SANs, effectively virtualizing all of these storage resources, taking full advantage of the benefits each topology offers.
You can leverage our cross-device services on any data wherever it is stored in your environment, whether DAS, external storage, split across multiple data centers and remote sites. This represents one comprehensive storage solution – a platform – for all of your data. And you are able to manage this centrally from the DataCore console.
The economic benefits resulting from this software-defined storage platform are very compelling. Close to 80% of the organizations we surveyed reported reducing their storage-related spending by 25% or more. This is especially significant given that data storage today represents anywhere from one third to one half of every IT budget. Data is growing at 50 to 60% per year while IT budgets are flat.
Also note that 60% were able to defer 2 or more storage hardware refresh cycles by leveraging DataCore. All of you, I’m sure, want to extend the life of your investments.
Next consider productivity improvements and the positive effects on the lives of those responsible for storage. Prior to DataCore, the time spent on the many reactive tasks ends up consuming nearly two-thirds of the day, and cuts into weekends and holidays. These are tasks like data migrations, capacity expansion, and recovering from unplanned downtime. Having one single software platform reduces that to less than 25%. That translates into giving back significant time in your days and nights, and reducing the stress of trying to resolve these problems during very short windows, under extreme pressure.
From the perspective of investment protection, you can see how one comprehensive set of storage services extends the life of investments you’ve made and puts you in control of investments you make in the future. You no longer need to refresh hardware simply to keep up with new software features. You may consider using lower cost models from tier 1 vendors and look to other suppliers for less critical bulk storage. You’re also in a position to quickly take advantage of new technologies like flash or whatever comes up next without concern for how they will integrate into the existing environment. It’s a very reasonable and practical way for managing the long-term behavior of your storage infrastructure.
Curious when to get started with DataCore? Before you buy more storage, whether a hardware refresh or a brand new purchase.
The same goes if you are looking at flash, SSDs or new types of storage. It also makes sense when expanding your server or desktop virtualization environment. And certainly as you develop or adjust your business continuity and disaster recovery plan.
Here’s a little on DataCore: [Walk through the highlights]
Any questions? [Stay on this slide to answer them and at the same time keep a reminder of what you presented]