In this slidecast, Bill Lee from the IBTA and Rupert Dance from the OpenFabrics Alliance provide an update on InfiniBand and RDMA from their joint booth at SC13.
Learn more:
http://www.infinibandta.org/
and
https://www.openfabrics.org/index.php
Watch the video presentation: http://wp.me/p3RLHQ-b2B
2. InfiniBand Trade Association (IBTA)
Global member organization dedicated to developing, maintaining and furthering the InfiniBand specification
Architecture specification
– RDMA (Remote Direct Memory Access) software architecture
– InfiniBand, up to 56Gb/s (4x) and 168Gb/s (12x) per port
– RDMA over Converged Ethernet (RoCE)
Responsible for compliance and interoperability testing of commercial products
Markets and promotes InfiniBand and RoCE from an industry perspective
– Online, marketing and public relations engagements
– IBTA-sponsored technical events and resources
3. OpenFabrics Alliance (OFA)
Home of OpenFabrics Software (OFS), delivering RDMA to performance-demanding applications
– Delivers support for high performance applications
– Support for Linux distributions and Microsoft Windows Server operating systems
Promoting the benefits of RDMA application acceleration to data center, cloud and HPC users
– Server and storage connectivity
– High performance, low latency, virtualized, highly efficient applications
4. OFA Promoting Developer and User Participation
Developers
Annual developers’ workshop
Birds of a Feather events at the International Supercomputing Conference (ISC) and the Supercomputing Conference (SC)
Interoperability events at the UNH-IOL (University of New Hampshire InterOperability Laboratory)
Users
Community-driven events for OFS users
– Sharing experience and ideas
– Collaborating on common issues and feeding requests into the development community
User community communication tools
– Email list for distributing comments, questions, and answers
– http://lists.openfabrics.org/cgi-bin/mailman/listinfo/users
5. Microsoft and Emulex Join the IBTA
Expanding the Steering Committee leadership
– Microsoft announcement during SC13
– Emulex announcement on September 30
Adding their support to RDMA technologies with a strong enterprise perspective
Practical perspective on deploying RDMA for storage, cloud, and other applications
Joining the existing Steering Committee members
6. The Need for RoCEv2
Extending functionality
L3 routing
– RoCEv1 delivers RDMA within a single Ethernet L2 domain
– L3 is pervasive in modern datacenters
– Datacenter networks now require RDMA across L3 domains (see the framing sketch below)
Further enhancements for scalability
[Slide diagram: multiple Ethernet L2 domains that RoCEv2 can interconnect across L3]
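As a concrete illustration of why the v2 framing is routable, the sketch below lays out the two encapsulations as C structs. It assumes the publicly documented RoCEv2 format (InfiniBand transport headers carried inside UDP/IP on destination port 4791); the layouts are simplified and are not taken from the slides.

```c
#include <stdint.h>

/* Simplified, illustrative view of the two encapsulations.
 * Real headers carry more fields; sizes shown are the nominal ones. */

#define ROCEV2_UDP_DPORT 4791   /* IANA-assigned UDP port for RoCEv2 */

/* RoCEv1: IB transport rides directly on Ethernet (L2 only, not routable). */
struct rocev1_frame {
    uint8_t eth_hdr[14];    /* Ethernet header, EtherType 0x8915 */
    uint8_t ib_grh[40];     /* InfiniBand Global Route Header */
    uint8_t ib_bth[12];     /* InfiniBand Base Transport Header */
    /* payload ... followed by ICRC and Ethernet FCS */
};

/* RoCEv2: IB transport rides on UDP/IP, so ordinary L3 routers can forward it. */
struct rocev2_packet {
    uint8_t eth_hdr[14];    /* Ethernet header */
    uint8_t ip_hdr[20];     /* IPv4 (or 40-byte IPv6) header -- routable */
    uint8_t udp_hdr[8];     /* UDP header, destination port 4791 */
    uint8_t ib_bth[12];     /* InfiniBand Base Transport Header */
    /* payload ... followed by ICRC and Ethernet FCS */
};
```

Because a RoCEv2 packet is ordinary UDP/IP from a router's point of view, RDMA flows can cross L3 boundaries like any other IP traffic.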
7. OpenFramework Work Group
Formed to develop, test, and distribute:
– An extensible, open source framework that provides access to high-performance fabric interfaces and services
– Extensible, open source interfaces aligned with ULP and application needs for high-performance fabric services
Apply application-centric I/O design principles (see the interface sketch after this list)
Objectives
– Maximize performance for more classes of applications
– Maximize the return on investment being made in computer systems by their owners and operators
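As an illustration of what such application-centric fabric interfaces look like in practice, the sketch below uses the OpenFabrics Interfaces (libfabric) API that grew out of this working group's effort: the application describes what it needs and lets the framework choose a matching provider. This is a minimal sketch under that assumption, not material from the slides.

```c
#include <stdio.h>
#include <stdlib.h>
#include <rdma/fabric.h>

int main(void)
{
    /* Describe what the application needs, not which hardware to use. */
    struct fi_info *hints = fi_allocinfo();
    if (!hints)
        return EXIT_FAILURE;
    hints->caps = FI_MSG;               /* reliable message passing */
    hints->ep_attr->type = FI_EP_RDM;   /* reliable, connectionless endpoint */

    /* Let the framework pick a provider (verbs, sockets, ...) that matches. */
    struct fi_info *info = NULL;
    int ret = fi_getinfo(FI_VERSION(1, 0), NULL, NULL, 0, hints, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo: %s\n", fi_strerror(-ret));
        fi_freeinfo(hints);
        return EXIT_FAILURE;
    }

    printf("selected provider: %s\n", info->fabric_attr->prov_name);

    fi_freeinfo(info);
    fi_freeinfo(hints);
    return EXIT_SUCCESS;
}
```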
8. HPC & Data Centers Demand RDMA
Essential for Scientific, Enterprise and Cloud Computing
I/O is central to achieving highest performance
Efficient computing reduces power, cooling and space requirements
OS bypass enables fastest access to remote data (see the verbs sketch after this list)
Scalable storage to meet growing demand
Delivers direct access to data over the WAN
Benefits of RDMA
Low latency and low CPU overhead
High network utilization
Efficient data transfer
Support for message passing, sockets and storage protocols
Supported by all major operating systems
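To make the OS-bypass point concrete, here is a minimal sketch using the libibverbs API shipped with OFS: it registers a buffer with the adapter so that later RDMA transfers can move that memory directly, without the kernel on the data path. Device selection and buffer size are illustrative, not from the slides.

```c
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return EXIT_FAILURE;
    }

    /* Open the first adapter and create a protection domain. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer: the adapter pins and maps it so later RDMA
     * reads/writes bypass the OS entirely on the data path. */
    size_t len = 4096;                  /* illustrative size */
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return EXIT_SUCCESS;
}
```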
9. OFS and Ethernet
High performance and scalable RDMA over Ethernet with iWARP and RoCE
– iWARP adapters deliver up to 40Gb/s with 1.9µs latency
– RoCE adapters deliver up to 40Gb/s with 1.0µs latency
Accelerates applications in an Ethernet infrastructure (see the rsockets sketch after this list)
– Supports Hadoop, Memcached, databases, low-latency messaging (LLM), virtualization
– Storage solutions including OpenStorage and Microsoft SMB3
– Direct connections to long distance WAN and Internet links
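One way OFS lets socket-style applications like those named above ride on RDMA is the rsockets interface in librdmacm, which mirrors the BSD socket calls with r-prefixed equivalents. The sketch below is a minimal, assumed client; the host name and port are placeholders, not values from the slides.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>
#include <rdma/rsocket.h>   /* rsockets: socket-like API over RDMA (librdmacm) */

int main(void)
{
    /* Resolve a placeholder server address. */
    struct addrinfo hints = { .ai_socktype = SOCK_STREAM };
    struct addrinfo *res;
    if (getaddrinfo("server.example.com", "7471", &hints, &res)) {
        fprintf(stderr, "getaddrinfo failed\n");
        return 1;
    }

    /* Same shape as BSD sockets, but the transport is RDMA underneath. */
    int fd = rsocket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || rconnect(fd, res->ai_addr, res->ai_addrlen)) {
        perror("rsocket/rconnect");
        freeaddrinfo(res);
        return 1;
    }

    const char msg[] = "hello over RDMA";
    rsend(fd, msg, sizeof(msg), 0);

    char reply[128];
    ssize_t n = rrecv(fd, reply, sizeof(reply), 0);
    printf("received %zd bytes\n", n);

    rclose(fd);
    freeaddrinfo(res);
    return 0;
}
```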
10. OFS and InfiniBand
Highest performance, scalability and efficiency for HPC, enterprise, cloud and Web 2.0 networks
– Bandwidth up to 56Gb/s
– Latencies less than 1µs
– Scales to tens of thousands of nodes
Interconnect of choice for world’s fastest supercomputers
Enables the highest system efficiencies for TOP500 supercomputer clusters
11. OFS and InfiniBand in the TOP500
48 percent of the world’s most powerful Petaflop-capable systems
The highest system utilization in the TOP500
80% of the accelerator-based systems
*According to the November 2013 TOP500 list