This document discusses how virtualization introduces performance barriers and availability issues for applications. It presents networked flash storage and storage virtualization as solutions that provide predictable, high performance and continuous availability for virtualized applications in a simple and cost-effective manner. Specifically, the approach introduces flash as a new high-performance tier, provides continuous availability through data separation and mirroring across rooms, and offers a scalable platform to meet growing needs.
Virtual SAN vs Good Old SANs: Can't they just get along? – DataCore Software
Dark forces in the IT industry like to polarize popular opinion; most recently they argue for keeping all the storage in the servers using Virtual SANs, leaving nothing external. These sudden mood swings, while attracting a young cult following, lose sight of lessons learned over the past 20 years.
Truth is, a blend of internal storage close to the apps with good old fashioned external secondary storage out on the network makes a heck of a lot of sense.
In this presentation, Senior Analyst Jim Bagley of SSG-NOW shows the not-so-black-and-white considerations driving customers to tap into the internal storage resources of clustered servers. He also provides practical guidance on how to incorporate existing storage arrays, and even public cloud capacity, into your Virtual SAN rollout.
The skyrocketing costs to achieve continuous data availability, cope with exponential data growth, and provide timely data access rank among the most pressing challenges facing Healthcare IT organizations.
This presentation highlights how DataCore's Software-defined Storage solution can help Healthcare IT organizations increase uptime, optimize capacity and accelerate performance cost-effectively.
Increase Your Mission Critical Application Performance without Breaking the Bank – DataCore Software
In virtualized environments, mission-critical applications get bogged down, leading to user complaints. Root cause analysis has shown that inadequate storage performance is the culprit. But fixing these performance issues can cost 5 to 7 times the price of your current storage.
In this presentation, learn about a revolutionary solution that combines Skyera’s advanced All Flash Arrays (AFA) with DataCore’s innovative Software-defined Storage platform. This solution will easily accelerate your SQL Servers at a price that fits your budget.
Integrating Hyper-converged Systems with Existing SANs – DataCore Software
Hyper-converged systems offer a great deal of promise, yet come with a set of limitations. While they allow enterprises to re-integrate system components into a single enclosure and reduce the physical complexity, floor space and cost of supporting a workload in the data center, they often will not support existing storage in local SANs or capacity offered by cloud service providers. There are solutions available to address these challenges and allow hyper-converged systems to realize their promise. During this session you will learn:
- What are hyper-converged systems?
- What challenges do they pose?
- What should the ideal solution to those challenges look like?
- About a solution that helps integrate hyper-converged systems with existing SANs
Next to performance and scalability, cost efficiency is one of the top three reasons most companies cite as their motivation for acquiring storage technology. Businesses are struggling to control storage costs and to reduce OPEX for administrative staff, infrastructure and data management, and environmental and energy expenses. Every storage vendor, it seems, including most of the Software-defined Storage purveyors, is promising ROIs that require nothing short of a suspension of disbelief.
In this presentation, Jon Toigo of the Data Management Institute digs out the root causes of high storage costs and sketches out a prescription for addressing them. He is joined by Ibrahim “Ibby” Rahmani of DataCore Software, who addresses the specific cost efficiency advantages being realized by customers of Software-defined Storage.
What will $0.08 get you with storage? Typically, not much. But this $0.08 will change the way you think about storage and cause you to question everything storage vendors have told you. Find out more in this presentation.
We have a lot of exciting things happening at VMworld 2016, both during the event and on our social channels. Check out this presentation to see everything we have going on and how you can participate and connect with us.
Virtual SAN: It’s a SAN, it’s Virtual, but what is it really? – DataCore Software
What do you think of when you hear the words “Virtual SAN”? For some, it may mean addressing application latency and infrastructure costs through consolidation. For others, it may mean addressing potential single points of failure. Regardless of the use case, Virtual SANs are becoming one of the hottest software-defined storage solutions for IT organizations to maximize storage resources, lower overall TCO, and increase availability of critical applications and data.
This presentation introduces the concept of Virtual SAN and does a technical deep dive on the most common use cases and deployment models involved with a DataCore Virtual SAN solution.
NVMe and all-flash systems can solve any performance, floor space and energy problem. At least, that is the marketing message many vendors and analysts spread today – but it sounds too good to be true, right?
As always in real life, there is no clear black or white, but there are some circumstances you should be aware of – especially if you intend to leverage these technologies.
You may ask yourself: Do I need to rip and replace my existing storage? What is the best way to integrate both? What benefits do I receive?
Well, just join our brief webinar, which also includes a live demo and audience Q&A so you can get the most out of these technologies, make your storage great again and discover:
• How to integrate Flash over NVMe in real life
• How to benefit from Flash/NVMe for all your applications
In this deck from the DDN booth at SC18, John Bent from DDN presents: IME - Unlocking the Potential of NVMe.
"DDN’s Infinite Memory Engine (IME) is a scale-out, software-defined, flash storage platform that streamlines the data path for application I/O. IME interfaces directly to applications and secures I/O via a data path that eliminates file system bottlenecks. With IME, architects can realize true flash-cache economics with a storage architecture that separates capacity from performance."
Learn more: https://www.ddn.com/products/ime-flash-native-data-cache/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This presentation provides an overview of DataCore's Software-defined Storage Platform and insights into DataCore's latest world-record setting performance achievements on the SPC-1 benchmark. DataCore Parallel I/O, which is at the heart of DataCore's technology, is a unique approach to increasing storage performance by orders of magnitude without the need to acquire more and more hardware.
How to Integrate Hyperconverged Systems with Existing SANs – DataCore Software
Hyperconverged systems offer a great deal of promise, yet come with a set of limitations.
While they allow enterprises to re-integrate system components into a single enclosure and reduce the physical complexity, floor space and cost of supporting a workload in the data center, they often will not support existing storage in local SANs or capacity offered by cloud service providers.
However, there are solutions available to address these challenges and allow hyperconverged systems to realize their promise. Sign up to discover:
• What are hyperconverged systems?
• What challenges do they pose?
• What should the ideal solution to those challenges look like?
• A solution that helps integrate hyperconverged systems with existing SANs
Software Defined Storage - Open Framework and Intel® Architecture Technologies – Odinot Stanislas
This presentation provides a fairly detailed introduction to the notion of an "SDS Controller": in short, the software layer intended to control, over time, all storage technologies (SAN, NAS, distributed storage on disk, flash...) and to expose them to Cloud orchestrators and thus to applications. Lots of good content.
VMworld 2013: Software-Defined Storage: The VCDX Way – VMworld
VMworld 2013
Wade Holmes VCDX, VMware
Rawlinson Rivera VCDX, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Dealing with data storage pain points? Learn why a true Software-defined Storage solution is ideal for improving application performance, managing diversity and migrating between different vendors, models and generations of storage devices.
DataCore Software introduction from my "Meet DataCore" webinar. DataCore products include software-defined storage and hyperconverged infrastructure solutions. DataCore has more than 10K customers and 30K+ implementations worldwide.
How to Avoid Disasters via Software-Defined Storage Replication & Site Recovery – DataCore Software
Shifting weather patterns across the globe force us to re-evaluate data protection practices in locations we once thought immune from hurricanes, flooding and other natural disasters.
Offsite data replication combined with advanced site recovery methods should top your list.
In this webcast and live demo, you’ll learn about:
• Software-defined storage services that continuously replicate data, containers and virtual machine images over long distances
• Differences between secondary sites you own or rent vs. virtual destinations in public Clouds
• Techniques that help you test and fine tune recovery measures without disrupting production workloads
• Transferring responsibilities to the remote site
• Rapid restoration of normal operations at the primary facilities when conditions permit
Emergence of Software Defined Storage
SDS role in Software Defined Data Center
The value SDDC/SDS will bring to developers, system integrators and the IT community.
Primend Pilveseminar - Affordable price + easy management – moving to the cloud = ? – Primend
How do you get more cloud-like functionality into your own data center when moving to the cloud is not an option? How do you achieve 90% savings in storage and backup capacity? How do you restore a 1 TB backup in less than a minute? Combined with Cisco UCS Director automation and management, SimpliVity offers public-cloud-like flexibility and low administration costs.
SDDC – a term that still dwells in the realm of the futuristic – is perhaps the next major milestone in a cloud-centric world, one that can entirely change the way data is stored and managed.
Zero Downtime, Zero Touch Stretch Clusters from Software-Defined Storage – DataCore Software
Business continuity, especially across data centers in nearby locations, often depends on complicated scripts, manual intervention and numerous checklists. Those error-prone processes are exponentially more difficult when the data storage equipment differs between sites.
Such difficulties force many organizations to settle for partial disaster recovery measures, conceding data loss and hours of downtime during occasional facility outages.
In this webcast and live demo, you’ll learn about:
• Software-defined storage services capable of continuously mirroring data in real time between unlike storage devices.
• Non-disruptive, zero-touch failover between stretched cluster nodes.
• Rapid restoration of normal conditions when the facilities come back up.
App Performance Tip: Sharing Flash Across Virtualized Workloads – DataCore Software
Core business applications like Oracle, SAP, SQL Server, Exchange and SharePoint often perform poorly when virtualized. More often than not, the root cause is a data I/O bottleneck.
In this presentation, DataCore Software and Fusion-io highlight how to:
• Integrate flash memory to overcome I/O bottlenecks in real-world environments
• Combine flash technology with existing storage
• Speed up virtualized applications
• Prevent storage from slowing down or taking down your applications
Sharing Flash Across Virtualized Workloads – Emulex Corporation
Does your business need to speed up response times and provide continuous availability for your mission-critical applications? Core business applications like Oracle, SAP, SQL Server, Exchange and SharePoint often perform poorly when virtualized. More often than not, the root cause of poor performance is data I/O bottlenecks. If you are looking at solid-state memory technologies to deliver the blazing performance you need, this joint webinar will be well worth your time!
When does it make sense to upgrade to more efficient servers? Most data centers operate on a 3 to 5 year tech refresh cycle. Is this really the best way to decide when to refresh old equipment? By continuously monitoring the cost to run older equipment, you can determine when you have hit the break-even point with your existing servers. Join Viridity co-founder and CTO, Mike Rowan, as he reviews best practices for technology refresh.
Enterprise-Grade Disaster Recovery Without Breaking the Bank – Donna Perlstein
Until recently, enterprise-grade DR had been prohibitively expensive, leaving many companies with high risk levels and unreliable solutions. Now, many organizations are enjoying top-of-the-line disaster recovery at a fraction of the price, thanks to the rapid development of cloud technology. CloudEndure and Actual Tech Media are thrilled to present this presentation, with a cost comparison of 3 Disaster Recovery Strategies, and much more.
From Disaster to Recovery: Preparing Your IT for the Unexpected – DataCore Software
Did you know that 22% of data center outages are caused by human error? Or that 10% are caused by weather incidents?
The impact of an unexpected outage for just a few hours or even days could be catastrophic to your business.
How would you like to minimize or even eliminate these business interruptions, and more?
Join us to discover:
• Useful and simple measures to use that can help you keep the lights on
• How to quickly recover when the worst-case scenario occurs
• How to achieve zero downtime and high availability
Software-Defined Storage Accelerates Storage Cost Reduction and Service-Level... – DataCore Software
In this White Paper, IDC, a major global market intelligence firm, assesses DataCore in the Software-Defined Storage (SDS) space.
DataCore is one of the leading providers of hardware independent storage virtualization software. Its customers are actively leveraging the benefits of software-defined storage in IT environments ranging from large datacenters to more modest computer rooms, thereby getting better use from pre-existing storage equipment.
This White Paper further discusses the emerging storage architecture of software-defined storage and how DataCore enables its customers to take advantage of it today.
Download this IDC White Paper to learn about:
- The four major forces that have driven a major transformation in the way we use IT to do our jobs, and how datacenters need to adapt.
- Why companies are switching to SDS and the benefits, including significant reductions in cost, that they can expect upon adoption.
- An Overview of DataCore’s SDS solution and the key differentiators that make it well equipped to handle the next generation of storage challenges.
Despite years of industry advocacy, cloud adoption in larger firms remains slow. There are many logos for many vendors dotting the cloud technology landscape and many competing architectures. But there are also few standards that guarantee the interoperability of different approaches.
The latest buzz in enterprise cloud technology is around “hybrid cloud data centers” in which large enterprises “build their base” – that is, their core infrastructure, possibly as a “private cloud” – and “buy their burst” – that is, obtain additional public cloud-based resources and services to augment their on-premises capabilities during periods of peak workload handling, for application development, or for business continuity.
Ultimately, the adoption of cloud architecture will be gated by how successfully organizations are able to leverage emerging technologies in a secure and reliable manner and whether the resulting infrastructure actually delivers in the key areas of cost-containment, risk reduction and improved productivity.
Regardless of whether you use a direct-attached storage array, network-attached storage (NAS) appliances, or a storage area network (SAN) to host your data, if that data infrastructure is not designed for high availability, then the data it stores is not highly available either; by extension, application availability is at risk – regardless of server clustering.
The purpose of this paper is to outline best practices for improving overall business application availability by building a highly available data infrastructure.
Download this paper to:
- Learn how to develop a High Availability strategy for your applications
- Identify the differences between Hardware and Software-defined infrastructures in terms of Availability
- Learn how to build a Highly Available data infrastructure using Hyper-converged storage
At TUI Cruises, a high level of availability and security is essential for IT systems at sea, and also poses a special challenge. Shipyard time slots, which are expensive and booked up very quickly, are needed for installation and maintenance. A consistent internet connection cannot always be guaranteed during remote maintenance at sea, and with monthly costs of about $50,000 for a 4-Mbit line, larger data transfers are not feasible in any case.
After TUI Cruises adopted DataCore SANsymphony they benefited from:
- High level of availability, thanks to synchronous mirroring
- Transparent failover: if a section of a data center fails, the other side automatically takes over
- Scalable in terms of capacity, output, and performance
- Easy to use on-site, with worldwide remote management by the partner
With Thorntons having so many locations—operating across two time zones—basic store functionality is imperative and the reason why Thorntons is such a write-intensive enterprise. Everything that Thorntons does at the store level is considered “mission critical” and is contingent upon system uptime due to their 24/7/365 operation. Attaining non-stop business operations as well as better performance management and capacity management is what drove Thorntons to explore new alternatives to its Dell Compellent SANs that were deployed previously.
After Thorntons adopted DataCore SANsymphony they benefited from:
- Zero-downtime with SANsymphony software-defined storage deployed as two synchronous mirrors
- 50% faster backups (including VMware VMs and SQL databases), which enables an increase in full backups from one to three times a week
- Significant risk reduction attained due to the ability to replicate volumes instantaneously to both the primary and secondary sites
Top 3 Challenges Impacting Your Data and How to Solve Them – DataCore Software
Demands on your data have grown exponentially more difficult for IT departments to manage. Companies that fail to address this new reality risk not only data outages, but a significant loss of business. In this white paper we review the top 3 critical challenges impacting your data (maintaining uninterrupted service, scaling with increased capacity, and improving storage performance) and how to solve them.
Download this white paper to learn about:
- How to maintain data availability in the event of a catastrophic failure within the storage architecture due to hardware malfunctions, site failures, regional disasters, or user errors.
- How to optimize existing storage capacity and safely scale your storage infrastructure up and out to stay ahead of changing storage requirements.
- How to speed up response when reading and writing to disk while reducing latency to dramatically improve storage performance.
Business Continuity for Mission Critical Applications – DataCore Software
Unplanned interruption events, a.k.a. “disasters,” hit virtually all data centers at one time or another. While the preponderance of annual downtime results from interruptions that have a limited or localized scope of impact, IT planners must also prepare for the possibility of a catastrophic event with a broader geographical footprint.
Such disasters cannot be circumvented simply by using high availability configurations in servers or storage. What is needed, especially for mission-critical applications and databases, are strategies that can help organizations prevail in the wake of “big footprint” disasters, but that can also be implemented in a more limited way in response to interruption events with a more limited impact profile.
DataCore Software’s storage platform provides several capabilities for data protection and disaster recovery that are well-suited to today’s most mission-critical databases and applications.
Dynamic Hyper-Converged: Future Proof Your Data Center – DataCore Software
IT organizations are continuously striving to reduce the amount of time and effort to deploy new resources for the business. Data center and remote office infrastructures are often complex and rigid to deploy, causing operational delays. As a result, many IT organizations are looking at a hyper-converged infrastructure.
Read this whitepaper to discover that a hyper-converged approach is flexible and easy to deploy and offers:
• Lower CAPEX because of lower up-front prices for infrastructure
• Lower OPEX through reductions in operational expenses and personnel
• Faster time-to-value for new business needs
Community Health Network Delivers Unprecedented Availability for Critical Hea... – DataCore Software
The use of DataCore Software-Defined Storage provided CHN with a highly available infrastructure, improved application processing, and the total elimination of storage-related downtime. Considering that CHN uses the SANsymphony software to virtualize and manage over 450 TB of data, in an environment supporting 14,000+ users, the seamless availability of all that data is certainly impressive.
With DataCore SANsymphony now in operation at Mission Community Hospital, storage management is less labor-intensive, systems are easily managed and data is simple to migrate when necessary. The overall cost-effectiveness of the DataCore storage virtualization software platform and DataCore's ability to make the physical storage completely "agnostic" so that hardware is interchangeable are just two of the great benefits for the hospital's IT team.
The Need for Speed: Parallel I/O and the New Tick-Tock in Computing – DataCore Software
The virtualization wave is beginning to stall as companies confront application performance problems that can no longer be addressed effectively, even in the short term, by the expensive deployment of silicon storage, brute force caching, or complex log structuring schemes. Simply put, hypervisor-based computing has hit the performance wall established decades ago when the industry shifted from multi-processor parallel computing to unicore/serial bus server computing.
In this presentation, Jon Toigo and DataCore will help you learn how your business can benefit from our Adaptive Parallel I/O software by:
- Harnessing the untapped power of today's multi-core processing systems and efficient CPU memory to create a new class of storage servers and hyper-converged systems
- Enabling order of magnitude improvements in I/O throughput
- Reducing the cost per I/O significantly
- Increasing the number of virtual machines that an individual server can host without application performance slowdowns
Optimizing the Economics of Storage: It's All About the Benjamins – DataCore Software
Unfortunately, storage has mostly been treated as an afterthought by infrastructure designers, resulting in over-provisioning and underutilization of storage capacity and a lack of uniform management or inefficient allocation of storage services to the workloads that require them. This situation has led to increasing capacity demand and higher cost, with storage, depending on the analyst one consults, consuming between 33 and 70 cents of every dollar spent on IT hardware acquisition. At the same time, storage capacity demand is spiking – especially in highly virtualized environments.
Bottom line: in an era of frugal budgets, storage infrastructure stands out like a nail in search of a cost reducing hammer. This paper examines storage cost of ownership and seeks to identify ways to bend the cost-curve without shortchanging applications and their data of the performance, capacity, availability, and other services they require.
The capabilities DataCore delivers have recently been significantly enhanced and streamlined further for virtualized server environments in its latest SANsymphony-V release. For brevity, it’s hard to beat DataCore’s press release, which points out that “SANsymphony offers a flexible, open software platform from which to provision, share, reconfigure, migrate, replicate, expand and upgrade storage without slowdowns or downtime.” The product is agnostic with regard to the underlying storage hardware and can essentially breathe life and operational value into whatever is on a user’s floor. It is robust, flexible, and responsive, and it can deliver value in terms of, for instance, better economics, improved response times, high availability (HA), and easy management and administration.
This white paper will examine SANsymphony-V's place in the Software-Defined Storage marketplace and review its core features and capabilities.
Failover Cluster support in Windows Server 2008 R2 with Hyper-V provides a powerful mechanism to minimize the effects of planned and unplanned server downtime.
It coordinates live migrations and failover of workloads between servers through a Cluster Shared Volume (CSV). The health of the cluster depends on maintaining continuous access to the CSV and the shared disk on which it resides.
In this paper you will learn how DataCore Software solves a longstanding stumbling block to clustered systems spread across metropolitan sites by providing uninterrupted access to the CSV despite the many technical and environmental conditions that conspire to disrupt it.
DataCore™ SANsymphony™-V software migrates VHD LUNs non-disruptively behind the scenes and later reclaims the disk space from the decommissioned location. Transparent virtual LUN migration is one of several device-independent functions provided by SANsymphony™-V for the Microsoft Windows Server 2008 R2 with Hyper-V platform.
How do you get CIOs to jump on the storage virtualization bandwagon if they’re not on it already? Use these five compelling points to persuade them that storage virtualization is right for their organization:
1. It’s Inevitable and Strategic.
2. Drives Productivity and Innovation.
3. Talk Return on Investment.
4. Deferring CapEx, Reducing OpEx.
5. Times are Changing and so is the CIO’s job.
Benchmarking a Scalable and Highly Available Architecture for Virtual Desktops – DataCore Software
This paper reports on a configuration for Virtual Desktops (VDs) which reduces the total hardware cost to approximately $32.41 per desktop, including the storage infrastructure. This number is achieved using a configuration with dual-node, cross-mirrored, High Availability storage. In comparison to previously published reports, which put the storage infrastructure costs alone of VDI at fifty to several hundred dollars per virtual machine, the significance of the data becomes self-evident. In this report, storage hardware costs become inconsequential.
Removing Storage Related Barriers to Server and Desktop Virtualization – DataCore Software
An IDC Viewpoint Paper: Virtualization is among the technologies that have become increasingly attractive in the current economic climate. Organizations are implementing virtualization solutions to obtain the following benefits: Focus on efficiency and cost reduction, Simplify management and maintenance, and Improve availability and disaster recovery.
The science of automated storage tiering distills down to monitoring I/O behavior, determining frequency of use, then dynamically moving blocks of information to the most suitable class or tier of storage device. DataCore™ SANsymphony™-V software automatically manages your blocks to best allocate your storage.
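As a rough illustration of that monitor-measure-move loop, here is a minimal tiering policy in Python. It is a generic sketch of the technique, not SANsymphony's actual algorithm; the block numbers, window, and tier size are invented.

from collections import Counter

FAST_TIER_SLOTS = 2  # how many blocks fit on the fast (e.g., SSD) tier

def retier(access_log: list[int]) -> tuple[set[int], set[int]]:
    """Given a window of block accesses, return (fast_tier, slow_tier) placement."""
    freq = Counter(access_log)                            # monitor I/O behavior
    hot = {blk for blk, _ in freq.most_common(FAST_TIER_SLOTS)}  # frequency of use
    cold = set(access_log) - hot                          # everything else stays slow
    return hot, cold

fast, slow = retier([7, 7, 3, 7, 9, 3, 3, 1, 7])
print("promote to fast tier:", fast)   # {7, 3} -- the most frequently accessed blocks
print("keep on slow tier:", slow)      # {9, 1}

A real implementation would run this continuously over a sliding window and migrate blocks in the background, but the core decision rule is the same.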
Version 1.0 – June 17, 2013
Welcome everyone and thank you for taking the time to attend today’s presentation. During this session we’ll go over the most common challenges IT organizations are facing today when virtualizing Tier 1 applications. Additionally, we’ll share with you information about a cost-effective solution that can help your business deliver first-class performance and availability to serve your application requirements and increase business agility.
In today’s fast-paced world, IT organizations are continuously looking for better ways to increase the productivity and agility of their business. With the proven advantages and benefits of virtualization technologies, many IT organizations are revisiting their business plans to implement a virtualization strategy. This is a mandate driven by the executive management team to provide a smoother and more efficient service to the business while reducing costs. Therefore, IT organizations are going virtual. They are implementing desktop and server virtualization projects and are focusing on virtualizing mission-critical applications, especially core business and database applications such as Oracle, SAP, SQL Server, Exchange, and SharePoint.
However, this is not an easy task for IT, because they have the mission, and the pressure, to provide a fast, non-stop service to the business while maintaining low operating costs. But like everything else in life, there are always tradeoffs to make, and IT is no exception. IT organizations have some difficult decisions to make when virtualizing Tier 1 applications. The key is to figure out the optimal approach to achieve the best result in the key areas they are measured on, including performance, uptime, and costs, without introducing any additional complexity to their infrastructure.
Now, in order to run a successful business, your end users must have fast and uninterrupted access to their applications. If your applications are running slow and not performing to your established requirements, or if they are not accessible by your end users, this causes a major disruption in day-to-day business operations. It creates a domino effect: if your end users are not satisfied with the service and the performance of the applications they need to get their job done, the business will get lower productivity from its workforce, and lower productivity results in a loss of revenue.
Additionally, storage complexity is growing to levels that make it very difficult for IT to manage. According to ESG Research, 50% of companies with more than 100 production servers report that they expect to grow storage between 30-50% annually, with the explosion of unstructured data and instrumented systems driving a massive wave of content. Server virtualization consolidation ratios of 80% and greater are demanding more from archaic storage architectures. Business needs are pushing the virtualization of Tier 1 applications, leaving the majority of IT projects storage-bound.
Now one of the major pain points we hear from IT organizations that have migrated their Tier 1 applications into a virtualized environment is a significant impact on application performance. But why does virtualization make your performance worse? The fact is that virtualization highly randomizes your I/O. Before virtualization you had an easy one-to-one relationship between a server and an application. Now you’ve virtualized and created a many-to-one relationship, where you have multiple if not dozens of virtual machines on a single server, all competing for the same I/O resources. The requirements of these virtual machines, and the IOPS they need to be fed, exceed the capabilities of traditional storage solutions, which is why you’re having performance pain. This problem is most commonly known as an I/O bottleneck. It is a major challenge because a significant impact on application performance causes a direct impact on business productivity. This is unacceptable to any business.
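A toy Python simulation illustrates this blender effect: each VM issues perfectly sequential reads against its own virtual disk, yet the interleaved stream the shared array sees jumps across the address space. The block layout and VM counts are invented purely for illustration.

import random

NUM_VMS, IOS_PER_VM, VDISK_BLOCKS = 4, 5, 1000

# Each VM's virtual disk occupies a different region of the shared LUN,
# and each VM reads its own region strictly sequentially.
streams = [[vm * VDISK_BLOCKS + i for i in range(IOS_PER_VM)] for vm in range(NUM_VMS)]

blended = []
while any(streams):
    s = random.choice([s for s in streams if s])  # hypervisor interleaving
    blended.append(s.pop(0))

print(blended)  # e.g. [2000, 0, 3000, 1, 2001, ...] -- sequential streams arrive random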
Another major pain point we are hearing from IT organizations delivering virtualized applications is the amount of downtime due to service interruptions. Service interruptions are a big challenge as well because end users rely on these applications to run the business and they cannot afford any downtime. If a server is taken down for maintenance or a system component fails, then all the virtualized applications go down with the server. If end users cannot access their applications, then there’s a major disruption in day-to-day business operations. This is also unacceptable to any business.
So how is IT trying to address these pain points and challenges? You’ve probably wrestled with a couple of traditional approaches to “How do I remove this bottleneck?” Here’s a list of the common attempts. Sometimes a database redesign can help, but often not, and it’s costly in terms of people or consultant expenses in any case. People consistently try to brute-force the problem with added processors or RAM, but that doesn’t help if your application is I/O-bound. Caching is good if you have a very small amount of data or specific locality of data, but at the end of the day all data has to be written somewhere, so caching is not an ideal solution. And finally, adding hard drives doesn’t solve the problem, because hard drives don’t give you enough IOPS and will likely waste capacity. You would have to aggregate hundreds upon hundreds of hard drives to reach a performance level that fixes your problem; it’s not cost-efficient.
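To see why adding spindles doesn't scale, consider a rough sizing sketch in Python. The ~150 IOPS per 15K-RPM drive and 600 GB capacity figures are common rules of thumb, assumptions rather than numbers from this presentation.

HDD_IOPS = 150          # assumed sustained random IOPS per enterprise 15K HDD
HDD_CAPACITY_GB = 600   # assumed capacity of a typical 15K SAS drive

def drives_needed(target_iops: int) -> int:
    """Smallest drive count whose aggregate IOPS meets the target (ceiling division)."""
    return -(-target_iops // HDD_IOPS)

for target in (20_000, 60_000, 140_000):
    n = drives_needed(target)
    print(f"{target:>7,} IOPS -> {n:>4} drives, {n * HDD_CAPACITY_GB / 1000:.1f} TB raw")

Under these assumptions, 140,000 IOPS would take roughly 934 drives and strand about 560 TB of raw capacity you never asked for, which is the cost-efficiency problem in a nutshell.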
Let’s take a quick look at what you probably have installed today and why you’re considering going to flash. If you take a look at the chart, there are a couple of other approaches. Today everybody’s talking “flash,” be it internal PCIe cards for your servers or externally attached DAS arrays based on flash. In the lower left you have your legacy storage arrays, SAN or NAS. There’s another category of product that has embraced flash, called “hybrid.” These are typically majority disk-based systems with a limited amount of flash in them, primarily acting as a cache, not as a datastore. They still have a relatively low number of IOPS and low overall performance. As I mentioned, they are predominantly based on disk, and disk just can’t drive enough IOPS to deliver the performance you might need. Moreover, like traditional SAN or NAS, many of these systems are based on legacy controllers and legacy architectures that were designed for rotating media, not flash, so they inhibit the full performance of flash. Yes, you’ll get a bit of a bump, but it’s not a good overall solution for performance-sensitive applications. The internal PCIe and external DAS solutions are not designed for shared environments; by definition they’re direct-attached. They cannot easily share capacity between servers, and they cannot easily share their performance among the various servers and applications. Moreover, they’re not ideally suited for virtualization, because they can’t support VMware vMotion. And frankly, adding PCIe cards to servers is very disruptive and very costly on a per-GB basis. If you decide to rip out existing Fibre Channel drives and put in flash drives, that’s also very disruptive and expensive. It’s just not a good way to go. There is a better approach, as you will shortly see, called networked flash. This is really the optimal solution for non-disruptively resolving these I/O bottleneck problems, specifically in virtualized environments.
To minimize downtime in virtualized environments, one solution that is typically implemented is a cluster. In a cluster you have a group of servers running virtualized applications that acts as a redundant system, immediately migrating workloads from one server to another. This cluster approach provides continued service when a server goes offline for any reason.
However, keep in mind that virtualized applications require shared storage in a cluster approach. This not only means that you have to make an additional investment in external storage; you are also wasting existing and valuable server storage resources.
By now you are probably thinking, well I might not be fully utilizing my server resources, but I can live with that because now I have a solution that minimizes downtime and takes care of my problem. Right?...
Not exactly. There is still another factor to consider, because your system remains vulnerable. Implementing a cluster with a single shared storage array introduces a bigger problem: all your servers and shared storage now reside in one location. So what happens if there's an outage in that facility due to a power failure, an air-conditioning malfunction, a water leak, or even a construction accident? That facility becomes a single point of failure, causing major downtime and a huge impact on the business.
So the fundamental and consistent question we hear, whether we are talking to customers, channel partners, or end users throughout the IT organization, from CIOs to virtualization managers, is: "How do I cost-effectively add performance and high availability to serve my application requirements?" Performance means different things to different people, but to deploy your applications effectively it has to be predictable: you have to know it's there at the times you need it. It has to be sustained, so that when your users are running these applications they experience smooth operation. And it has to work across physical, virtual, and cloud-based applications. Additionally, your applications need to be continuously available so you can run and operate the business efficiently.
All the pain points we have discussed are common in any IT organization. Unfortunately, some of the quick fixes applied to them force IT into significant tradeoffs among performance, uptime, complexity, and cost. Instead of settling for one of these at the expense of the others, why not evaluate a different approach to delivering data, one optimized to address all of these pain points simultaneously?
Now let us share with you a better approach to increase business productivity and agility for your organization. This approach will help you improve the performance of your virtualized applications and maintain non-stop business operations while reducing your capital and operating expenses.
Part of this approach leverages a networked flash architecture. So what is a networked flash architecture? It's an approach where you add an all-flash ViSX performance storage appliance to your existing storage infrastructure by connecting it to any Ethernet switch port. You connect it to the switch, give it an IP address, configure your RAID groups, Storage vMotion over your datastore, and your application is ready to exploit the full performance of flash in a matter of minutes. Networked flash means the flash storage is available to all servers, all VMs, and all applications, without replacing or disrupting your existing storage, servers, or applications.
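To illustrate how little is involved in attaching such an appliance, here is a minimal sketch using the standard open-iscsi command-line tool on a Linux host. The appliance IP below is hypothetical; a vSphere host would do the equivalent through its software iSCSI adapter and then Storage vMotion the datastore onto the new LUN.

```python
# Minimal sketch: discover and log in to an iSCSI-attached flash appliance
# from a Linux host using the standard open-iscsi CLI (requires root).
import subprocess

APPLIANCE_IP = "192.168.10.50"  # hypothetical appliance IP on the Ethernet switch

# Discover the targets exported by the appliance.
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", APPLIANCE_IP],
    check=True,
)
# Log in to the discovered targets (use -T <iqn> to pick one explicitly).
subprocess.run(["iscsiadm", "-m", "node", "--login"], check=True)
```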
Here's a graphical view. Compared to the traditional disk-based approach in the bottom left, you can solve your performance issues without throwing hundreds of drives at the problem. There you see a rack of traditional disk barely delivering 60,000 IOPS at a very expensive price point of $500,000, typically taking weeks to implement. One could install PCIe flash cards, but these are very expensive, as every server needs one or perhaps two cards. Hybrid storage systems, such as the one shown in the bottom right, simply don't have the performance of all-flash systems; they also typically require you to replace your existing storage and learn an entirely new set of storage management tools. In the upper right you'll see our latest-generation networked flash G4 ViSX appliance. It delivers 140,000 IOPS at a price similar to disk-based storage systems, yet it deploys in minutes.
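Using the figures on this slide, the price-per-IOPS gap works out as follows; for illustration we take "a price similar to disk-based storage systems" to mean the same $500,000.

```python
# Price/performance from the slide's figures: a $500,000 disk rack at
# 60,000 IOPS versus a similarly priced ViSX G4 at 140,000 IOPS.
disk_rack = {"price": 500_000, "iops": 60_000}
visx_g4   = {"price": 500_000, "iops": 140_000}  # "similar price" assumed equal

for name, s in [("disk rack", disk_rack), ("ViSX G4", visx_g4)]:
    print(f"{name}: ${s['price'] / s['iops']:.2f} per IOPS")
# disk rack: $8.33 per IOPS
# ViSX G4:   $3.57 per IOPS
```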
The first part of the approach consists of aligning your storage tiers to your application requirements. This means that as a best practice you should leverage tiering across different types of storage to deliver the right balance between those applications that demand the fastest performance versus the ones that demand the largest capacity.
In fact, one way to take application performance to the next level is by introducing a new and faster tier consisting of flash for those data-intensive applications that require quick access to information. Astute provides technologies that leverage flash to significantly increase datacenter efficiency and performance.
In addition to introducing a flash tier and leveraging tiering across your storage devices, we recommend a virtualized environment that delivers continuous availability for your business operations.
This can be accomplished by creating a physical separation that extends your cluster and expands your storage resources into a different location. This approach allows you to maintain an independent copy of your data that can provide continuous access if a service interruption of some kind occurs at the other site.
A better way to leverage tiering and take advantage of physical separation, providing both fast performance and continuous availability for your virtualized applications, is via networked flash and storage virtualization technologies. Through storage virtualization you add a storage hypervisor: an intelligent software layer residing between the applications and the disks that virtualizes the individual storage resources it controls and creates one or more flexible pools of storage capacity, improving performance, availability, and utilization.
The benefit of DataCore’s storage hypervisor is that it has the ability to present uniform virtual devices and services from dissimilar and incompatible hardware, even from different manufacturers, making these devices interchangeable. Continuous replacement and substitution of the underlying physical storage may take place, without altering or interrupting the virtual storage environment that is presented.
Now let us show you how this solution will help you accelerate the performance of your virtualized applications.
Here's how it works. A key capability of the DataCore storage virtualization software is its ability to dynamically optimize data placement based on which disk blocks are most frequently accessed. Let's say you have a multi-tier pool, using 7200 RPM hard disk drives for Tier 2, 10-15K RPM hard disk drives for Tier 1, and the fast flash-based Astute appliances for Tier 0.
The DataCore software organizes the Astute appliances and the other available disks into a virtual storage pool. It classifies the flash-based appliance as the top tier, and assigns less speedy, higher density drives to lower tiers based on performance characteristics that you set.
The software dynamically directs workloads to the most appropriate class of storage device, favoring the Tier 0 flash for high-priority demands needing very high-speed access. It relegates lower-priority requests to the Tier 1 and Tier 2 disk drives, striking a balance between the speed of the flash-based Astute appliances and the economies of larger-capacity HDDs. Any special high-priority workloads can also be pinned to the Astute appliances. At the same time, the software migrates less frequently used blocks to the hard disk drives to avoid undesirable contention for the flash. This approach helps you avoid unnecessary spending on additional disk equipment or exotic storage devices and, more importantly, maximizes application performance.
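The presentation doesn't publish DataCore's actual placement algorithm, but a generic heat-based tiering scheme along these lines captures the behavior just described; all names in this sketch are hypothetical.

```python
# Generic sketch of heat-based auto-tiering, NOT DataCore's actual algorithm.
# Hottest blocks land on Tier 0 flash until it fills; cooler blocks spill to
# the HDD tiers. Pinned blocks stay on flash regardless of temperature.
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    capacity_blocks: int
    blocks: set = field(default_factory=set)

def rebalance(heat: dict[int, int], pinned: set[int], tiers: list[Tier]):
    """Place block IDs across tiers: pinned first, then hottest first."""
    for t in tiers:
        t.blocks.clear()
    ordered = list(pinned) + sorted(
        (b for b in heat if b not in pinned), key=heat.get, reverse=True
    )
    i = 0
    for t in tiers:                      # tiers listed fastest-first
        while i < len(ordered) and len(t.blocks) < t.capacity_blocks:
            t.blocks.add(ordered[i])
            i += 1

tiers = [Tier("Tier 0 flash", 2), Tier("Tier 1 10-15K HDD", 3),
         Tier("Tier 2 7200 HDD", 10)]
heat = {1: 90, 2: 5, 3: 70, 4: 40, 5: 1}
rebalance(heat, pinned={5}, tiers=tiers)
for t in tiers:
    print(t.name, sorted(t.blocks))
# Tier 0 flash holds blocks 1 and 5: block 5 is cold but pinned.
```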
If we take a closer look at the DataCore nodes you will notice the Astute flash-based appliances connected to them. These appliances play an instrumental role in reducing the disk latencies often responsible for mission-critical applications running poorly.
Additionally, if you already have other types of disk arrays, you can combine all of them as part of your storage pool. The Astute appliances can operate as the fastest member in your balanced storage hierarchy, accompanied by high-performance SAS devices and bulk SATA storage.
The Astute appliances are dynamically selected by the auto-tiering intelligence within the DataCore software for the most critical apps. When the flash capacity is consumed by high-priority requests, less critical requests are automatically directed to the SAS devices or SATA storage, depending on their relative importance.
Now let us show you how our solution will help you prevent storage from taking down your applications, providing continuous availability for your business operations.
Another major capability of the DataCore software is that it lets you configure redundant storage pools by synchronously mirroring between DataCore nodes at different locations. The virtual disk is essentially a logical representation of a dual-ported drive, except that two independent copies are updated in real time, one at each location. As a best practice, the two storage copies should reside in two separate physical locations up to 100 km apart.
To load balance these configurations, traffic is spread evenly between the two pools by distributing the preferred paths from the host servers equally across the active/active SAN. In other words, each node typically serves as the primary resource for half of the capacity, while the other node takes primary responsibility for the other half.
So, for example, if one of the storage pools needs to be taken out of service, or any of its devices fails, the application servers sense that they cannot reach the disks through the preferred path and automatically redirect I/O to the alternate path without disruption. That request is fielded by the redundant node using the mirrored copy. When service is completed on the left side, any changes that transpired while it was offline are sent over by the right node. Once both copies are back in sync, the application servers that had redirected their requests are signaled to return to their preferred paths. The same procedure can be repeated at the other site if necessary, never interrupting users regardless of the magnitude of the change. This technique maximizes uptime.
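Here is a minimal sketch of that synchronous-mirror behavior, with hypothetical class and method names rather than DataCore's API: a write is acknowledged only after every reachable copy is updated, and a node that was offline resyncs only the blocks it missed before resuming its preferred-path role.

```python
# Minimal sketch of synchronous mirroring with failover and resync.
class Node:
    def __init__(self, name):
        self.name, self.online, self.blocks, self.dirty = name, True, {}, set()

class MirroredVirtualDisk:
    def __init__(self, preferred, alternate):
        self.preferred, self.alternate = preferred, alternate

    def write(self, lba, data):
        targets = [n for n in (self.preferred, self.alternate) if n.online]
        if not targets:
            raise IOError("no mirror copy reachable")
        for n in targets:
            n.blocks[lba] = data        # synchronous: all live copies, then ack
        for n in (self.preferred, self.alternate):
            if not n.online:
                n.dirty.add(lba)        # remember what the absent copy missed

    def resync(self, node, partner):
        for lba in node.dirty:          # copy only the blocks changed while away
            node.blocks[lba] = partner.blocks[lba]
        node.dirty.clear()
        node.online = True

left, right = Node("left"), Node("right")
vdisk = MirroredVirtualDisk(left, right)
vdisk.write(0, b"A")
left.online = False                     # left site taken out of service
vdisk.write(1, b"B")                    # fielded by right; left marked dirty
vdisk.resync(left, right)               # left returns and catches up
assert left.blocks == right.blocks
```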
Let’s talk about some of the unique advantages and benefits of our solution.
When you combine networked flash with storage virtualization as part of your virtualized environment your business will be able to operate more efficiently and provide the service required to support the needs of your end users. Our solution addresses the major challenges that exist today related to application performance and service interruptions while saving you money. Key advantages include:
Accelerating application response times by reducing I/O bottlenecks
Delivering predictable performance at a lower cost per IOPS
Preventing data loss and providing continuous availability through real-time I/O replication
Taking advantage of your existing storage assets by maximizing utilization
Dynamically matching workloads to the most appropriate disk and flash resources based on priority (faster performance versus more capacity)
Relocating data from one storage system to another, non-disruptively
Pooling incompatible devices for the utmost flexibility and efficiency
Storage Systems
Independent tests were run on three all-flash storage systems, the ViSX and two competitors, with workloads varying by read/write mix (90/10, 70/30, 50/50, 30/70, 10/90). This chart shows a 70% read / 30% write mix, which is typical of many workloads.
Each storage system had 12 solid-state drives installed as data drives, configured as a single RAID0 storage group.
Four 100GB volumes were configured on each storage system.
Each storage system was connected with one 10GbE iSCSI host connection.
Several hours of preconditioning runs were performed on each storage system before the performance tests.
Vendor "A": a start-up flash storage vendor.
Vendor "B": a well-known, established storage vendor.
Relative to either of the two competitors, the ViSX showed a 5X price/performance advantage.
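The deck does not name the benchmark tool behind these results. As an illustration only, a comparable read/write-mix sweep could be scripted around the open-source fio utility; the device path, block size, queue depth, and run time below are assumptions, not the actual test parameters.

```python
# Hypothetical harness: sweep the read/write mixes from the slide with fio.
import subprocess

MIXES = [90, 70, 50, 30, 10]             # % reads, matching the slide's sweep

for read_pct in MIXES:
    subprocess.run([
        "fio",
        f"--name=mix{read_pct}-{100 - read_pct}",
        "--filename=/dev/sdX",           # hypothetical iSCSI-attached volume
        "--ioengine=libaio", "--direct=1",
        "--rw=randrw", f"--rwmixread={read_pct}",
        "--bs=4k", "--iodepth=32", "--numjobs=4",
        "--time_based", "--runtime=300",
    ], check=True)
```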
Now that we have already shared with you the value proposition of the solution, let us show you how you can grow your storage infrastructure seamlessly as you need, when you need.
Let’s say your current environment consists of a set of virtualized applications in a cluster connected to a shared storage array. As discussed earlier, in this type of setup your virtualized applications are competing for access to the storage disks frequently causing I/O bottlenecks and resulting in slower response times.
So in order to accelerate the performance of your virtualized applications, you introduce a new storage tier – Tier 0, consisting of the high-performance Astute flash-based appliances in conjunction with the storage virtualization capabilities of the DataCore software. In this setup you will configure the storage virtualization software to take high-priority requests from one of your business critical applications and route them to the flash-based appliances, which are used as the fastest dedicated storage resources in the pool. This approach allows you to compare the application performance of your original environment with the DataCore and Astute technologies and see for yourself the performance improvements of the solution.
Then, as the business need arises for more capacity without sacrificing performance, you have the option to scale up by adding solid-state drives (SSDs) to a currently installed Astute ViSX appliance on Tier 0 without disrupting the production environment: a true hot-pluggable add-on supporting up to 24 SSD modules.
Additionally, as your applications and user base grow, you can also scale up and out to millions of IOPS without any disruption by adding more Astute ViSX appliances and SSDs to your environment as needed.
The capacity of an individual ViSX appliance is up to 45.6TB, or, when using ViSX Deduplication, you can effectively access nearly 250TB of data. If more capacity or performance is needed than one ViSX can supply, additional ViSX appliances can easily be added to the rack and concatenated with the existing ViSX.
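The deduplication ratio implied by those two figures follows directly:

```python
# Effective-capacity arithmetic from the slide: 45.6TB raw per ViSX,
# nearly 250TB effective with ViSX Deduplication.
raw_tb = 45.6
effective_tb = 250.0                     # "nearly 250TB" per the slide
print(f"implied dedup ratio: {effective_tb / raw_tb:.1f}:1")  # ~5.5:1
```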
And there you have it, a cost-effective solution that not only allows you to deliver the performance and availability you need for your business, but a solution that also allows you to scale up and out as your business grows.
Finally to wrap up the presentation and open it up for questions, here are the next steps you can take if you are interested in learning more about how to deliver first-class performance and availability for Tier 1 apps.
First, we encourage you to give us a call to get in touch with our sales professionals, obtain more information, and schedule an onsite meeting.
Secondly, rethink your virtualization strategy to make sure it's comprehensive. Our Sales Directors are at your disposal to sit down with you, understand your needs, and build a plan together.
Finally, request an assessment. Our Sales Engineers will work with you to provide a live demonstration and assess your business and technical requirements.
We look forward to helping you transform your business and keep your organization competitive and well-positioned for future growth.
Thank you for your time!
Now let's open it up for questions, and remember you can also contact us via our websites at www.datacore.com and www.astutenetworks.com.
Use this slide as a visual to keep during the Q&A so that the audience is not staring at a blank slide.