Future of the Data Center

Surveys the advancements in data center technologies that are currently taking place:
compute, networking, and storage virtualization; x86 vs. Arm64; and the forces driving the move from scale-up architectures to scale-out ones.

By Liran Zvibel, CTO of Weka.IO

Speaker notes
  • What worries CEOs of enterprise vendors (Intel, IBM, Cisco, etc.)?
    What is causing the changes, and in which directions.
    - Talk about what we see: rows and rows of racks.
  • CDRs for cellular networks; Waze uses historical plus current data to predict the best routes; banks perform fraud detection based on history, lowering transaction costs; ad-exchange clients keep information about all users and compute on it.
    This is all business data!
    Not only FB/YouTube media: much of it is new BUSINESS data that enables enterprises and startup companies to provide new kinds of services and applications.
  • Small, few cars, huge water containers.
    Data centers have become huge; power and cooling are the main concerns.
  • The economics did not match; the giants could not pay for the big systems.
    Enterprise vendors had a portfolio where small clients bought small systems, medium clients bought medium systems but paid much more, and big clients bought the big systems, paying through the nose. That stopped working for the Internet giants, who did not charge their users as much. Initially the giants leveraged low prices driven by the household market, but now that they use commodity hardware, everyone enjoys the savings.

    They optimize for solving the problem with the best resources, while the other big players tried to maximize their income. Hardware became a commodity, and they use software to get features: capabilities you used to buy at great expense from a large enterprise vendor are now achieved through software running on cheap hardware.
    Enterprise vendors built their portfolios to force customers with bigger problems to pay much more. The Internet giants had far bigger problems, at a scale the 'enterprise vendors' could not address, and they changed this reality, enabling themselves, and the rest of the world, to buy standard commodity hardware and leverage software to grow their business.
    Scale-up became too expensive for the giants to pay for, so they went with commodity scale-out.
  • Once complicated software was created to break scale-up systems into scale-out ones, it allowed adding more features. Performance is no longer sacred, and you can add features at linear cost.
    This was not possible with scale-up.
  • Virtualization helps you break the problem into smaller pieces. Once the solution is broken down, it can be broken down further.
  • This is a major shift, as millions of dollars are shifting away from Intel and the traditional vendors toward new players.
    For I/O-bound operations, Arm's new, clean architecture is much more efficient. Future data centers will leverage both x86- and Arm-based 64-bit CPUs with massive numbers of cores, sometimes in the same systems.
    Once we started breaking things into smaller pieces, they keep getting broken into even smaller pieces. This is one good example: using the more efficient processor for each task.
  • Rack disaggregation
    The rack is the unit now; "server" will become meaningless.
    100G is the enabler.
    Independent upgrades of compute, networking, storage, and power.
    Saves a lot of power.
  • MCX 16 at 1.6 Tb/s is roughly 10x the bandwidth of an x16 PCIe 3.0 connection.
    The cable is much lighter and allows sharing of resources.
    FC did not keep up; RDMA over Ethernet is gaining momentum and will also replace InfiniBand.
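The "roughly 10x" claim can be checked with back-of-envelope arithmetic. The PCIe 3.0 rate and encoding below are the standard figures; the 1.6 Tb/s number is taken from the note above:

```python
# Back-of-envelope bandwidth comparison (illustrative arithmetic only).
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.
pcie3_lane_gbps = 8 * 128 / 130          # ~7.88 Gb/s usable per lane
pcie3_x16_gbps = 16 * pcie3_lane_gbps    # ~126 Gb/s for an x16 slot

optical_link_gbps = 1600                 # 1.6 Tb/s cable, per the note above

ratio = optical_link_gbps / pcie3_x16_gbps
print(f"x16 PCIe 3.0 ~= {pcie3_x16_gbps:.0f} Gb/s; the optical link is {ratio:.1f}x")
```

The exact ratio works out to about 12.7x, which the note rounds down to "x10".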
  • The 16-slot Nexus 9516 (there is also a four-slot model) provides 576 40Gb ports.
    It can be replaced by 50 much cheaper switches: 32 + 18 = 50.
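One way the 32 + 18 = 50 count can work out is a leaf-spine fabric. The split into 32 leaves and 18 spines below is an assumption for illustration, since the note only gives the totals:

```python
# Hypothetical leaf-spine sizing for the same 576 server-facing ports
# (the 32-leaf / 18-spine split is an assumption; the note gives only 32 + 18 = 50).
server_ports = 576
leaves, spines = 32, 18

down_per_leaf = server_ports // leaves        # 18 server-facing ports per leaf
up_per_leaf = spines                          # one uplink to every spine: 18 ports
ports_per_leaf = down_per_leaf + up_per_leaf  # a 36-port leaf switch suffices
ports_per_spine = leaves                      # each spine needs 32 ports, one per leaf

print(leaves + spines, ports_per_leaf, ports_per_spine)  # 50 36 32
```

With equal downlinks and uplinks per leaf (18 and 18), the fabric is non-blocking, and each leaf can double as a ToR switch, as the slide suggests.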
  • Basics: old networking connected many autonomous pieces of networking gear; now it is one single logical network.
    In the new world, if two applications need more bandwidth than the rest, SDN can give them more interconnect-switch bandwidth.
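The match-action idea behind SDN data planes can be sketched in a few lines. This is illustrative only, not the API of any real controller or of OpenFlow; the rules and queue names are made up:

```python
# Toy match-action flow table: a controller installs rules, and the "switch"
# applies the first rule that matches (illustrative, not a real SDN API).
flow_table = [
    (lambda pkt: pkt["dst_port"] == 443, "queue:high_bandwidth"),  # priority app
    (lambda pkt: pkt["proto"] == "udp",  "queue:best_effort"),
]

def apply_rules(pkt, default="queue:default"):
    """Return the action of the first matching rule, else the default action."""
    for match, action in flow_table:
        if match(pkt):
            return action
    return default

print(apply_rules({"dst_port": 443, "proto": "tcp"}))  # queue:high_bandwidth
print(apply_rules({"dst_port": 80, "proto": "tcp"}))   # queue:default
```

Because the rules live in software rather than in each box's firmware, reprioritizing two bandwidth-hungry applications is a table update, not a hardware change.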
  • Scale-out means higher IOPS and bandwidth requirements.
Transcript

    1. THE FUTURE OF THE DATA CENTER. Liran Zvibel, liran@weka.io
    2. DATA IS EXPLODING: x50 growth rate!
    3. DATA GROWTH
       • Amount of data collected and stored is ever increasing
       • New services are created based on using that data, and old enterprises are forced to provide new applications for the same cost
       • IoT, logs, sensors, media files and other accumulated data are regularly processed and reprocessed
       • Some data is "cold" but still accessed (FB images)
       • Regulatory archiving data
    4. GIANTS INNOVATE, REST FOLLOW
       • Internet giants like Google, Facebook, and Amazon handle massive amounts of data and must solve scalability problems ahead of anyone else, using cheaper, commodity components.
       • Enterprise companies are forced to keep pace, and try to do so with the help of the traditional vendors.
       • IBM, Oracle, Microsoft, SAP, Dell and the like have lost their ability to shape the future of the data center.
    5. SCALE-UP VS SCALE-OUT
       • Scale-up solutions are bounded and will max out; they become too expensive.
       • Everything must scale, hence must be built from many small components that together solve the big problem.
       • Software infrastructure is required to create scale-out systems. Once this is done, adding features comes at linear deployment cost, which is not possible with scale-up.
       • Large RDBMS ("Oracle") vs. Big Data solutions (Hadoop, Cassandra, etc.)
       (diagram: scale-up vs. scale-out)
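The scale-out idea of spreading one big problem over many small nodes can be sketched with toy hash partitioning. This is illustrative only; the node names and key formats are made up, and real systems such as Cassandra use consistent hashing rather than the simple modulo below:

```python
# Toy hash partitioning: each key deterministically lands on one small node.
import hashlib

def node_for(key: str, nodes: list) -> str:
    """Map a key to one of the nodes via a stable hash (illustrative sketch)."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

nodes = [f"node{i}" for i in range(4)]       # four commodity boxes
for key in ["user:1", "user:2", "order:9"]:
    print(key, "->", node_for(key, nodes))   # every key maps to exactly one node
```

Modulo placement reshuffles most keys when the node count changes, which is why production scale-out stores prefer consistent hashing or virtual nodes; the point here is only that the big dataset never needs to fit on any single machine.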
    6. COMPUTE VIRTUALIZATION
       • Being able to run several OS instances on the same computer.
       • Allowed much more condensed infrastructure (compute aggregation), resource agility, and failover ability.
       • Programmable resource management makes it a cloud.
       • Enterprises will morph their data centers into a private cloud, then move to a hybrid cloud model.
    7. X86 AND ARM64 COMMON SLOT
       • Arm64 is very efficient for data movement (I/O) and natively supports virtualization.
       • Applied Micro, AMD and others will come up with Arm-based servers.
       • Facebook's Open Compute Project has designs that use x86 ("Intel") and Arm64 CPUs concurrently.
       • New designs leverage hundreds of cores in a server: Xeon Phi or bigger Arm parts.
    8. RACK DISAGGREGATION
    9. NETWORK
       • 100Gb Ethernet/PCIe shipped last year by Intel (and others).
       • 100x speedup in about 10 years, unprecedented progress relative to other components!
       • Physical and logical locality (or placement) is dead.
       • Ethernet won once again.
    10. NETWORKING SCALE OUT
       • Cisco just announced the 576-port 40Gb Nexus 9516 switch, so not everything is scale-out yet.
       • North-south vs. east-west traffic.
       • It can be replaced with 50 much cheaper switches that also act as ToRs.
       • Better ROI, pay as you go, better performance.
       • Clouds won't use these devices.
    11. SDN: SOFTWARE-DEFINED NETWORKING
       • Hardware devices only provide the data flow; management and control are done in the application.
       • Most large network equipment providers are moving toward supporting these open standards.
       • End users are free to shape traffic and provide higher-level services that were once impractical.
    12. STORAGE
       • Scale-out, virtualized.
       • 3D NAND flash, prices dropping.
       • New media to archive and handle cold storage.
       • Cloud object storage is changing the economics.
    13. liran@weka.io, @liranzvibel. Come join the party! We're hiring!
