Conventional wisdom in the IT industry asserts that most, if not all, workloads will shift from scale-up to scale-out architectures. Yet when we look at emerging business computing needs and vertically bespoke challenges like genomic sequencing, the benefits of holding and computing on large datasets in memory remain. Can scale-out be tuned to handle these emerging computing challenges, and perhaps more importantly, should it? Or could scale-up join scale-out to solve new challenges? Imagine for a moment these two platform archetypes working together to compose powerful new applications that combine the strengths of scale-up in-memory computing and scale-out processing. We are already seeing the benefits today, with ETL on Hadoop feeding scale-up in-memory business computing platforms. Likewise, in the Internet of Things, where low-latency streaming behaviors are essential to reinforcing a safety culture, there is room for scale-up in-memory processing at the edge writing to scale-out platforms.
In this session, Hitachi Data Systems Vice President & Chief Engineer Michael Hay will explore the necessary tension between instantaneous in-memory response on the one hand and recoverability and non-stop operations on the other, and dissect the industry's scale-out vs. scale-up debate. Hay will discuss the virtues of both scale-up and scale-out architectures, how they already work together today, and how they will work together in the future. Attendees will learn:
How the network/fabric will participate in future computing platforms, especially in-memory.
What new development and IT environments are needed to realize the full potential of combining scale-out/scale-up in-memory and edge data processing environments.
The benefits of a scale-out + scale-up + edge approach and how to implement it.