Hyper-converged systems offer a great deal of promise, yet they come with limitations. They allow enterprises to re-integrate system components into a single enclosure, reducing the physical complexity, floor space and cost of supporting a workload in the data center. However, they often will not support existing storage, whether in local SANs or offered by cloud service providers. Solutions are available to address these challenges and allow hyper-converged systems to realize their promise. During this session you will learn:
- What are hyper-converged systems?
- What challenges do they pose?
- What should the ideal solution to those challenges look like?
- About a solution that helps integrate hyper-converged systems with existing SANs
At first, all processing was done by a single system. We're going to move quickly through what came next, without attempting a complete, exhaustive discussion of every step.
To reduce costs and improve agility, some workloads were moved to minicomputers in the divisional or branch office. While system hardware costs were reduced, the environment became more complex. Complexity increases administrative costs.
As industry-standard systems became more powerful, some workloads were moved from minicomputers to PC servers. In some cases, organizations replaced their mainframe systems entirely or retained them only as database servers and hosts for critical functions. Quite often this meant there were more systems to manage, and the level of complexity only increased.
To improve performance and agility, system functions were decomposed into separate functions and those functions were hosted in individual rackmount servers.
While this improved performance, reliability and agility, the level of complexity increased dramatically.
Each component and each function had to be managed separately
Separate servers for UI, security, app logic, database, storage, networking and many other components
Hyper-converged systems reduce complexity
Processing, memory, storage and networking are re-integrated into a single enclosure
High speed fabric connects the components to improve performance
The systems can be very compact
Management tools make it possible to see these components as part of a single system
They can also reduce costs of administration, floor space, power consumed and heat produced.
Address growing workloads through multi-function, scale-out configurations
An attractive way to share internal storage among clusters of servers in a compact, cost effective configuration.
Reduces data center footprint
Can reduce power consumption and heat production
Vendor supplies
Monitoring tools
Management tools
Integration with 3rd party software
Professional Services
Vendor often insists on supplying all components inside the enclosure
May not integrate well with enterprises' selection of monitoring, management and other tools
May not integrate well with cloud resources
It isn't always clear how these systems integrate with established data center resources
This limitation creates data silos
Enterprises can end up with over-provisioned storage and wasted capacity
Dark forces in the IT industry like to polarize popular opinion; most recently they argue for keeping all the storage in the servers using virtual SANs, leaving nothing external. These sudden mood swings, while attracting a young cult following, lose sight of lessons learned over the past 20 years.
Truth is, a blend of internal storage close to the apps with good old-fashioned external secondary storage out on the network makes a heck of a lot of sense.
In this webinar, Senior Analyst Jim Bagley of SSG-NOW discusses the not-so-black-and-white considerations driving customers to tap into the internal storage resources of clustered servers. Then Augie Gonzalez, storage virtualization expert from DataCore, provides practical guidance on how to incorporate existing storage arrays, and even public cloud capacity, into your virtual SAN rollout.
Key Differentiators
DataCore uses Parallel I/O and high-speed RAM caching for I/O acceleration, delivering strong I/O performance while avoiding the expense of flash in the server
You only need two nodes for a highly available cluster, even a stretch cluster, saving money on additional nodes (both hardware and software licenses) and reducing complexity
You can scale out storage capacity independently of compute, adding capacity as needed while ensuring the data sits on the storage media and tier that match its performance characteristics
The same set of services applies to all your storage devices including internal, external DAS, SAN and Cloud, ensuring consistency and reducing operational complexity
DataCore provides a single management platform for all your storage, making it easy to add DataCore to your environment while making it easy to manage
DataCore supports both multiple hypervisors and non-virtualized workloads, including bare-metal Windows Server failover clusters.
Lastly, you can choose your preferred hardware vendor.
No one has this extraordinary combination of capabilities.