Twenty years ago, we had locally attached disks in every server. That resulted in islands of storage and poor utilization. So we moved to shared storage with SAN and NAS arrays. That improved TCO, but those boxes have become a large part of the IT cost. Today, new and innovative types of storage are being built. Server flash is becoming viable in terms of cost. The cost of rotating media continues to plummet. We have abundant CPU cycles in those servers, so we can implement higher-level functionality in software instead of firmware. We're seeing hyper-converged solutions that simplify the consumption of storage. And, finally, object storage and cloud economics are also significantly impacting the overall cost of storage.
Key Points:
Explain what auto-tiering does

Sample Talk Track: DataCore allows you to create tiers of storage from your existing hardware. Create top tiers with your highest-performance hardware and lower tiers for your more cost-effective gear. Then auto-tiering constantly optimizes and tunes to your specific environment and the needs of your applications. When demand for a specific block of data increases, that block is automatically migrated to a higher tier of storage to ensure applications get the maximum available performance.

Transition to next slide: This isn't only done at the application level, though…
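If the audience wants to see the idea rather than just hear it, the block-migration behavior described above can be sketched in a few lines. This is a simplified illustration of tiering in general, not DataCore's actual algorithm; the class name, thresholds, and heat-count policy are all assumptions made for the demo.

```python
# Illustrative sketch of block-level auto-tiering (NOT DataCore's actual
# implementation): blocks gain "heat" as they are accessed, hot blocks
# migrate up toward tier 0 (fastest), and idle blocks drift back down.

from collections import defaultdict

class AutoTieringPool:
    def __init__(self, num_tiers=3, promote_threshold=10):
        self.num_tiers = num_tiers
        self.promote_threshold = promote_threshold
        self.tier_of = {}                    # block id -> current tier
        self.heat = defaultdict(int)         # block id -> recent access count

    def access(self, block):
        # New blocks land in the lowest (cheapest) tier.
        tier = self.tier_of.setdefault(block, self.num_tiers - 1)
        self.heat[block] += 1
        # Demand rose past the threshold: promote one tier toward fast storage.
        if self.heat[block] >= self.promote_threshold and tier > 0:
            self.tier_of[block] = tier - 1
            self.heat[block] = 0
        return self.tier_of[block]

    def decay(self):
        # Periodic sweep: blocks that saw no traffic demote one tier.
        for block, tier in self.tier_of.items():
            if self.heat[block] == 0 and tier < self.num_tiers - 1:
                self.tier_of[block] = tier + 1
            self.heat[block] = 0
```

A real implementation works on large extents, decays heat gradually, and migrates data in the background, but the promote-on-demand / demote-when-idle loop is the core of the pitch above.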
Key Points:
Explain how they can get 3-5x performance on existing storage

Sample Talk Track: The next thing we'll discuss is caching. I know, caching certainly doesn't sound like the most exciting technology. This is, I assure you, the most elegant implementation of decades-old technology that you'll ever see. As we discussed, DataCore runs on x86 servers in between your application servers and your storage. What we effectively do is pump these DataCore nodes full of standard memory and create mega-caches. Our self-tuning, high-performance data caching algorithms let you run your workloads out of electronic memory instead of slow spinning disks. These mega-caches, built from regular old memory, anticipate reads and writes before they come in, preventing every request from hitting a spinning disk.

Transition to next slide: We have a variety of proof points on the value of this technology.
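For a technical audience, the "serve it from memory, not from the spindle" idea can be shown with a minimal read cache. This is a generic LRU sketch, not DataCore's self-tuning algorithm; the class name, capacity parameter, and hit/miss counters are assumptions for illustration only.

```python
# Illustrative sketch (NOT DataCore's actual caching algorithm): a
# RAM-backed read cache in front of a slow disk. Repeated reads of the
# same block are served from memory instead of hitting the disk again.

from collections import OrderedDict

class MegaCache:
    def __init__(self, backend_read, capacity_blocks):
        self.backend_read = backend_read     # function: block id -> data
        self.capacity = capacity_blocks
        self.cache = OrderedDict()           # block id -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)    # mark as most recently used
            return self.cache[block]
        self.misses += 1
        data = self.backend_read(block)      # slow path: spinning disk
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return data
```

The production version adds write caching, prefetch of anticipated blocks, and continuous self-tuning, but the hot-data-stays-in-RAM effect is what drives the 3-5x claim on the slide.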
Key Points:
Talk about the way DataCore replicates between unlike systems
Show how this allows you to prevent downtime
Show how this allows you to fail over and back between unlike systems
Talk about how this architecture can be used for seamless data migrations and hardware refreshes
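The failover story in the key points above can be sketched as synchronous mirroring across two dissimilar backends. This is assumed mechanics for illustration, not DataCore's implementation; the class and method names are hypothetical.

```python
# Illustrative sketch (assumed mechanics, NOT DataCore's implementation):
# synchronous mirroring across two unlike storage backends. Writes go to
# both copies; reads fail over to the surviving copy if one is down.

class MirroredVolume:
    def __init__(self, primary, secondary):
        # The backends can be completely different hardware; each just
        # needs dict-like get/set semantics here.
        self.primary = primary
        self.secondary = secondary

    def write(self, block, data):
        # Synchronous mirror: acknowledge only after both copies are written.
        self.primary[block] = data
        self.secondary[block] = data

    def read(self, block):
        try:
            return self.primary[block]
        except Exception:
            # Primary unavailable: transparent failover to the mirror.
            return self.secondary[block]
```

The same pattern covers migrations and hardware refreshes: point the mirror at the new array, let it synchronize, then retire the old one without application downtime.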
Key Points:
DataCore offers a long-term solution to managing your data no matter how much it grows

Sample Talk Track: Most environments fall into one of two situations:
They have disparate storage systems and are frustrated because they are incompatible
They feel locked into a single storage vendor and are frustrated with rising support costs, forklift upgrades, and equipment being EOL'd

DataCore solves both of these problems. If you're locked into one vendor, virtualization will allow you to easily break out. If you have a variety of devices, DataCore will make them work together. Additionally, DataCore gives you the ability to start considering lower-cost storage systems for archiving. We also provide a platform that puts you in a position to take full advantage of future advancements in storage hardware. Once virtualized, you can continue to integrate new hardware technologies as they are developed. I really don't know what's coming next, but I know that new things will happen, and a long-term strategy for data storage needs to be open to them. That's exactly what DataCore is: a long-term strategy to protect your storage investments.