Mellanox's Technological Advantage

Mellanox Analyst Day 2013, presented by Michael Kagan, Chief Technology Officer

Slide notes:
  • Today, data creation and consumption are growing at unprecedented rates. IDC predicts there will be 4.1 zettabytes of consumer storage by 2016, and worldwide consumer digital storage is growing at 66% annually. Over 50% of all enterprise storage is external, and the amount of external storage is growing at 50% annually. With all this, it is easy to see why data delivery is critical to both enterprise and consumer applications. In the storage interconnect realm, Fibre Channel has been synonymous with SANs for many years. However, an increasing number of storage vendors and enterprise SANs are moving away from FC and installing alternative interconnects in their data center storage infrastructure. Ethernet is most common in general enterprise infrastructures, and InfiniBand now transports over 25% of all HPC storage traffic.
  • Convergence, flexibility, rapid response
  • Low latency is key, and the means to get there is RDMA. Scalability comes down to latency; the main focus should be latency.
  • EDR (100Gb/s InfiniBand) – 2014-2015
Transcript:

    • 1. Mellanox’s Technological Advantage Michael Kagan – Chief Technology Officer October 25, 2013
    • 2. Cyber Society
    • 3. From Data to Information [Chart: projected data growth from 2011 to 2020 across satellites and sensors, shopping records, search records, GPS signals, medical records, posts to social media, and videos & pictures, all feeding Big Data] Key points: 20 zettabytes of consumer storage predicted by 2020; data delivery critical to enterprise and consumer.
    • 4. Data Center as a System: efficiency and utilization; ease of use; scalability; on-demand services; holistic management; return on investment.
    • 5. Data Center is a Computer: cloud/Web 2.0 data center cluster; high-performance computing cluster.
    • 6. Convergence and Flexibility
    • 7. Remote Direct Memory Access – Latency and Scalability. What is RDMA? [Diagram: an RDMA engine moves data directly between application buffers over InfiniBand or Ethernet (RoCE), bypassing the OS and buffer copies on both hosts; a minimal code sketch follows the transcript]
    • 8. RDMA – Key Technology to Free CPU for Application Processing [Diagram: with a legacy network, ~53% CPU efficiency in user space and ~47% overhead/idle in system space; with an RDMA-capable network, ~88% CPU efficiency and only ~12% overhead/idle]
    • 9. Mellanox Acceleration for Data Analytics [Chart: Hadoop TeraSort benchmark (20G file size), job run time in seconds for 1G TCP, 10G TCP, 10G RDMA, and 40G RDMA; RDMA is mandatory for highest ROI!] Fastest and most efficient network access; kernel bypass – eliminating OS overhead and context switching.
    • 10. RDMA to Match SSD [Table: per-I/O latency (network + software + media) and resulting IOPS; a quick arithmetic check follows the transcript]
      - The old days (~6 msec): 100 usec network + 200 usec software + 6000 usec disk → 180 IOPS
      - With SSDs (~0.5 msec): 100 usec network + 200 usec software + 25 usec SSD → 3,000 IOPS
      - With fast interconnect (~0.2 msec): 10 usec network + 200 usec software + 25 usec SSD → 4,300 IOPS
      - With RDMA (~0.05 msec): 1 usec network + 20 usec software + 25 usec SSD → 20,000 IOPS
    • 11. The New Storage Architecture: Storage Tiering and Data Migration [Diagram: server CPUs linked to SSD, drive, and tape tiers over a high-throughput, low-latency RDMA fabric] RDMA is the most efficient way to move data: data migrates close to the processing node, and vice versa. High throughput and low latency ensure flash and storage-tiering ROI.
    • 12. Software-Defined Data Center [Diagram: virtual data centers of VMs running over software-defined data center services on a software-defined infrastructure, with compute, network/security, and storage abstracted and pooled via software-defined computing, software-defined networking, and software-defined storage]
    • 13. Accelerating Virtualized Datacenters [Diagram: VMs attached to virtual networks] The only 40GbE VMware in-box solution; overlay network offloads (NVGRE, VXLAN); RDMA over InfiniBand and Ethernet; scale-out ("flat") network; Software-Defined Networks (SDN), OpenFlow, and unified fabric management. The interconnect provider for 10Gb/s and beyond.
    • 14. Software-Defined Network – segregating data movement from data processing. Today's network (Ethernet/IP) is a "sophisticated switch": sophisticated routing, L3 (IP layer) for scale-out, and a CPU per switch to handle protocol – a scale-out challenge. A software-defined network uses a "data mover" switch: simple L2 steering, architected for L2-based scale-out with single-chip switches and centralized management – 1.5X port density, >2X cost savings, >2X power savings. An InfiniBand L2 vs. Ethernet L3 scale-out network means $MMs in cost savings at 10K-node scale.
    • 15. Mellanox Roadmap – Highest Performance, Reliability, Scalability, Efficiency [Chart: bandwidth rising and latency falling across product generations in 2002, 2005, 2008, 2011, and 2014/2015, with the same software interface throughout, set against Intel CPU generations: Merom 65nm, Nehalem 45nm, Sandy Bridge 32nm, next-gen 22nm, and future]
    • 16. Mellanox Track Record – 15/15 A-step silicon in production
      - InfiniBridge: IB TCA + switch; tape out Q4/00, samples Q1/01, production Q2/01
      - InfiniScale: 8-port SDR IB switch; tape out Q3/01, samples Q4/01, production Q1/02
      - InfiniHost: PCI-X SDR IB HCA; tape out Q2/03, samples Q3/03, production Q4/03
      - InfiniScale III: 24-port DDR IB switch; tape out Q3/03, samples Q4/03, production Q1/04
      - InfiniHost III EX: PCIe 1.0 DDR IB HCA; tape out Q4/03, samples Q1/04, production Q2/04
      - InfiniHost III LX: PCIe DDR IB HCA; tape out Q4/04, samples Q1/05, production Q2/05
      - ConnectX: VPI (QDR IB, 10GbE) PCIe 2.0 HCA/NIC; tape out Q4/06, samples Q1/07, production Q2/07
      - InfiniScale IV: 36-port QDR IB switch; tape out Q1/08, samples Q2/08, production Q3/08
      - BridgeX: IB/FC/Eth bridge; tape out Q4/08, samples Q1/09, production Q3/09
      - ConnectX-2: VPI (QDR IB, 40GbE) PCIe 2.0 HCA/NIC; tape out Q1/09, samples Q2/09, production Q3/09
      - SwitchX: 36-port VPI (FDR IB, 10/40GbE) switch; tape out Q4/10, samples Q1/11, production Q2/11
      - ConnectX-3: VPI (FDR IB, 40GbE) PCIe 3.0 HCA/NIC; tape out Q1/11, samples Q2/11, production Q3/11
      - Connect-IB: PCIe Gen3 x16 FDR IB HCA; tape out Q2/12, samples Q3/12, production Q4/12
      - SwitchX-2: 36-port VPI (FDR IB, 10/40GbE) switch; tape out Q3/12, samples Q4/12, production Q1/13
      - ConnectX-3 Pro: VPI (FDR IB, 40GbE) PCIe 3.0 HCA/NIC; tape out Q1/13, samples Q2/13, production Q3/13
    • 17. Thank You
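
To ground the RDMA slides (7-10), here is a minimal, hypothetical sketch of a one-sided RDMA write using the open libibverbs API (the verbs interface used with Mellanox adapters). It is illustrative, not from the deck: queue-pair connection setup (the INIT→RTR→RTS transitions) and the out-of-band exchange of the remote buffer address and rkey are elided, and the placeholder values for them are assumptions.

```c
/*
 * Minimal, hypothetical sketch of a one-sided RDMA write via libibverbs.
 * Not from the deck. QP connection setup (INIT->RTR->RTS) and the
 * out-of-band exchange of remote address/rkey are elided; the
 * placeholders below must be filled in before the post would succeed.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

enum { BUF_SIZE = 4096 };

int main(void)
{
    /* Open the first RDMA-capable device. */
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);

    /* Protection domain plus a registered (pinned) buffer: registration
     * lets the NIC DMA directly to/from it with no OS involvement. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    char *buf = malloc(BUF_SIZE);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, BUF_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue and a reliable-connected queue pair. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);

    /* Placeholders: in a real program these come from the peer. */
    uint64_t remote_addr = 0;
    uint32_t remote_rkey = 0;

    /* Post a one-sided RDMA write: the remote CPU never sees it. */
    struct ibv_sge sge = {
        .addr = (uintptr_t)buf, .length = BUF_SIZE, .lkey = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .opcode = IBV_WR_RDMA_WRITE,
        .sg_list = &sge,
        .num_sge = 1,
        .send_flags = IBV_SEND_SIGNALED,
    }, *bad = NULL;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey = remote_rkey;

    if (ibv_post_send(qp, &wr, &bad) == 0) {
        /* Reap the completion by polling the CQ from user space:
         * this is the kernel bypass the deck credits for low latency. */
        struct ibv_wc wc;
        while (ibv_poll_cq(cq, 1, &wc) == 0)
            ; /* busy-wait until the write completes */
        printf("write completed, status=%d\n", wc.status);
    }

    ibv_destroy_qp(qp); ibv_destroy_cq(cq); ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd); ibv_close_device(ctx);
    ibv_free_device_list(devs); free(buf);
    return 0;
}
```

The deck's latency and CPU-efficiency claims hinge on the last two steps: the write is posted and its completion polled entirely from user space, so neither operating system nor the remote CPU sits in the data path.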
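
The IOPS figures on slide 10 follow from treating each I/O as fully serialized: IOPS ≈ 1 second ÷ (network + software + media latency). A quick check of that arithmetic, not from the deck:

```c
/* Sanity check of slide 10's IOPS figures: for serialized I/O,
 * IOPS ~= 1 second / total per-operation latency. */
#include <stdio.h>

int main(void)
{
    const char *era[] = { "old days", "SSD", "fast interconnect", "RDMA" };
    /* network, software, media latency in usec, per the slide */
    double lat[4][3] = {
        { 100, 200, 6000 },
        { 100, 200,   25 },
        {  10, 200,   25 },
        {   1,  20,   25 },
    };
    for (int i = 0; i < 4; i++) {
        double total = lat[i][0] + lat[i][1] + lat[i][2];
        printf("%-17s: %6.0f usec -> ~%.0f IOPS\n",
               era[i], total, 1e6 / total);
    }
    return 0;
}
```

This prints roughly 159, 3,077, 4,255, and 21,739 IOPS, in line with the slide's rounded 180, 3,000, 4,300, and 20,000, and it shows that once the media and network are fast, the software stack is the bottleneck RDMA removes.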
