InfiniBand is a high-performance, multi-purpose network architecture based
on a switch design often called a "switched fabric." InfiniBand is designed for use
in I/O networks such as storage area networks (SAN) or in cluster networks.
InfiniBand supports network bandwidth between 2.5 Gbps and 30 Gbps.
InfiniBand is a type of communications link for data flow between processors and
I/O devices that offers throughput of 2.5 gigabits per second per link and support
for up to 64,000 addressable devices. Because it is also scalable and supports
quality of service (QoS) and failover, InfiniBand is often used as a server connect
in high-performance computing (HPC) environments.
The internal data flow system in most PCs and server systems is inflexible and
relatively slow. As the amount of data coming into and flowing between
components in the computer increases, the existing bus system becomes
a bottleneck. Instead of sending data in parallel (typically 32 bits at a time, but in
some computers 64 bits) across the backplane bus, InfiniBand specifies a serial
(bit-at-a-time) bus. Fewer pins and other electrical connections are required,
saving manufacturing cost and improving reliability. The serial bus can carry
multiple channels of data at the same time in a multiplexed signal. InfiniBand also
supports multiple memory areas, each of which can be addressed by both processors
and storage devices.
The InfiniBand Trade Association views the bus itself as a switch because control
information determines the route a given message follows in getting to its
destination address. InfiniBand uses Internet Protocol Version 6 (IPv6), which
enables almost limitless device expansion.
With InfiniBand, data is transmitted in packets that together form a
communication called a message. A message can be a remote direct memory
access (RDMA) read or write operation, a channel send or receive message, a
reversible transaction-based operation or a multicast transmission. Like the
channel model many mainframe users are familiar with, all transmission begins or
ends with a channel adapter. Each processor (your PC or a data center server, for
example) has what is called a host channel adapter (HCA) and each peripheral
device has a target channel adapter (TCA). These adapters can potentially
exchange information that ensures security or work with a given quality of
service (QoS) level.
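As an illustration of the host channel adapter concept, the following minimal sketch enumerates the HCAs visible to a host using the Linux libibverbs library (assuming it is installed); device names vary by installation.

```c
/* Minimal sketch: list the InfiniBand host channel adapters (HCAs)
 * visible on this host, using the libibverbs API.
 * Build: gcc list_hcas.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **list = ibv_get_device_list(&num_devices);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num_devices; i++)
        printf("HCA %d: %s\n", i, ibv_get_device_name(list[i]));
    ibv_free_device_list(list);
    return 0;
}
```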
InfiniBand (IB) is an industry-standard, channel-based architecture that features
high-speed, low-latency interconnects for distributed computing infrastructures.
Multiplexing network data onto a common link, InfiniBand combines networks
into a unified fabric that collectively routes data between host nodes and network
peripherals. InfiniBand’s common interconnect reduces the required number of
adapters and cables (including support spares), which significantly reduces total
cost of ownership (TCO).
The InfiniBand specification was developed by merging two competing designs:
Future I/O, developed by Compaq, IBM, and Hewlett-Packard, and Next
Generation I/O, developed by Intel, Microsoft, and Sun Microsystems.
Specifications for the InfiniBand architecture span multiple layers of the OSI
model. Like Ethernet and ATM, InfiniBand defines its own physical and
data-link layer hardware, though with more advanced technology. InfiniBand also
features connection-oriented and connectionless transport protocols analogous
to TCP and UDP. InfiniBand uses IPv6 for addressing at the network layer.
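The transport analogy can be made concrete with libibverbs: a queue pair's type selects either a reliable, connection-oriented service (IBV_QPT_RC, roughly comparable to TCP) or an unreliable datagram service (IBV_QPT_UD, roughly comparable to UDP). A minimal sketch, assuming ctx is an already-opened device context and omitting error checks for brevity:

```c
/* Sketch: choosing InfiniBand's TCP-like (RC) or UDP-like (UD)
 * transport when creating a queue pair with libibverbs.
 * Assumes 'ctx' comes from ibv_open_device(); error checks omitted. */
#include <infiniband/verbs.h>

struct ibv_qp *make_qp(struct ibv_context *ctx, enum ibv_qp_type type)
{
    struct ibv_pd *pd = ibv_alloc_pd(ctx);              /* protection domain */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .qp_type = type, /* IBV_QPT_RC: reliable, connected (TCP-like);
                            IBV_QPT_UD: unreliable datagram (UDP-like) */
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
    };
    return ibv_create_qp(pd, &attr);
}
```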
InfiniBand may someday replace PCI as the system bus for PCs. Today's
applications of InfiniBand, though, are limited to cluster supercomputers and
other specialized network systems. InfiniBand hasn't yet become a mainstream
technology because standard network software must be modified and/or re-built
to work with InfiniBand. InfiniBand bypasses traditional network protocol stacks
like TCP/IP because of the performance limitations of these protocols, but in the
process it breaks backward compatibility of applications. WinSock and other
network programming libraries must be made InfiniBand-aware, without
sacrificing the performance gains, before InfiniBand can be widely deployed.
In 1973, at Xerox Corporation’s Palo Alto Research Center (more commonly
known as PARC), researcher Bob Metcalfe designed and tested the first Ethernet
network. While working on a way to link Xerox’s "Alto" computer to a printer,
Metcalfe developed the physical method of cabling that connected devices on the
Ethernet as well as the standards that governed communication on the cable.
Ethernet has since become the most popular and most widely deployed network
technology in the world. Many of the issues involved with Ethernet are common
to many network technologies, and understanding how Ethernet addressed these
issues can provide a foundation that will improve your understanding of
networking in general.
The Ethernet standard has grown to encompass new technologies as computer
networking has matured, but the mechanics of operation for every Ethernet
network today stem from Metcalfe’s original design. The original Ethernet
described communication over a single cable shared by all devices on the
network. Once a device was attached to this cable, it could communicate
with any other attached device. This allowed the network to expand to
accommodate new devices without requiring any modification to those devices
already on the network.
Ethernet is the most widely installed local area network (LAN) technology,
specified in the IEEE 802.3 standard. An Ethernet LAN typically uses coaxial
cable or special grades of twisted pair wires. Ethernet is also used in wireless
LANs. The most commonly installed Ethernet systems are called 10BASE-T and
provide transmission speeds up to 10 Mbps. Devices are connected to the cable
and compete for access using a Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) protocol.
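The collision-recovery part of CSMA/CD is truncated binary exponential backoff: after the nth collision on a frame, a station waits a random number of slot times drawn from 0 to 2^min(n,10) - 1, and gives up after 16 attempts. A minimal simulation sketch, using the classic 10 Mbps slot of 512 bit times:

```c
/* Sketch: truncated binary exponential backoff as used by CSMA/CD.
 * After the nth collision, wait k slot times, k uniform in
 * [0, 2^min(n,10) - 1]; transmission is abandoned after 16 attempts. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SLOT_BIT_TIMES 512  /* one slot = 512 bit times on 10 Mbps Ethernet */

int backoff_slots(int collisions)
{
    int exp = collisions < 10 ? collisions : 10;  /* truncate at 10 */
    int range = 1 << exp;                         /* 2^exp choices  */
    return rand() % range;                        /* 0 .. 2^exp - 1 */
}

int main(void)
{
    srand((unsigned)time(NULL));
    for (int n = 1; n <= 16; n++)
        printf("collision %2d: wait %4d slots\n", n, backoff_slots(n));
    return 0;
}
```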
Fast Ethernet or 100BASE-T provides transmission speeds up to 100 megabits per
second and is typically used for LAN backbone systems, supporting workstations
with 10BASE-T cards. Gigabit Ethernet provides an even higher level of backbone
support at 1000 megabits per second (1 gigabit or 1 billion bits per second). 10-
Gigabit Ethernet provides up to 10 billion bits per second.
Ethernet was named by Robert Metcalfe, one of its developers, for the passive
substance called "luminiferous (light-transmitting) ether" that was once thought
to pervade the universe, carrying light throughout. Ethernet was so named to
describe the way that cabling, also a passive medium, could similarly carry data
everywhere throughout the network.
When first widely deployed, Ethernet supported a maximum theoretical data rate
of 10 megabits per second (Mbps). Later, so-called "Fast Ethernet" standards
increased this maximum data rate to 100 Mbps. Gigabit Ethernet technology
further extends peak performance up to 1000 Mbps, and 10 Gigabit Ethernet
technology also exists.
Higher-level network protocols like Internet Protocol (IP) use Ethernet as their
transmission medium. Data travels over Ethernet inside protocol units called
frames.
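A frame's layout can be sketched as a C struct; the field names here are illustrative, but the sizes follow the Ethernet II format (6-byte destination and source MAC addresses, a 2-byte EtherType, a 46-to-1500-byte payload, and a trailing 4-byte frame check sequence):

```c
/* Sketch of an Ethernet II frame layout. Field names are illustrative;
 * sizes follow the standard: 14-byte header + payload + 4-byte FCS. */
#include <stdint.h>

#define ETH_ALEN 6          /* MAC address length in bytes */

struct eth_header {
    uint8_t  dst[ETH_ALEN]; /* destination MAC address */
    uint8_t  src[ETH_ALEN]; /* source MAC address */
    uint16_t ethertype;     /* payload protocol, e.g. 0x0800 for IPv4
                               (big-endian on the wire) */
} __attribute__((packed));

/* The payload (46-1500 bytes) follows the header, and a 4-byte
 * frame check sequence (CRC-32) trails the payload. */
```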
The run length of individual Ethernet cables is limited to roughly 100 meters, but
Ethernet networks can be easily extended to link entire schools or office buildings
using network bridge devices.
Since a signal on the Ethernet medium reaches every attached node, the
destination address is critical to identify the intended recipient of the frame.
For example, on a shared segment connecting computers A, B, and D and
printer C, when computer B transmits to printer C, computers A and D will
still receive and examine the frame. However, when a
station first receives a frame, it checks the destination address to see if the frame
is intended for itself. If it is not, the station discards the frame without even
examining its contents.
One interesting thing about Ethernet addressing is the implementation of
a broadcast address. A frame with a destination address equal to the broadcast
address (simply called a broadcast, for short) is intended for every node on the
network, and every node will both receive and process this type of frame.
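A sketch of that acceptance decision, assuming the station's own MAC address is known and using the all-ones broadcast address ff:ff:ff:ff:ff:ff:

```c
/* Sketch: a station's frame-acceptance check. A frame is processed
 * only if it is addressed to this station or to the broadcast address. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define ETH_ALEN 6

static const uint8_t BROADCAST[ETH_ALEN] =
    { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };

bool accept_frame(const uint8_t dst[ETH_ALEN], const uint8_t own[ETH_ALEN])
{
    if (memcmp(dst, own, ETH_ALEN) == 0)       /* addressed to us */
        return true;
    if (memcmp(dst, BROADCAST, ETH_ALEN) == 0) /* broadcast: everyone */
        return true;
    return false;                              /* otherwise discard */
}
```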
Comparison between Ethernet and InfiniBand

Ethernet:
• Best-effort delivery; any device may drop packets
• Relies on TCP/IP to correct any errors
• Subject to microbursts
• Store-and-forward switching (cut-through usually limited to a local cluster)
• Carries legacy from its origins as a shared-medium LAN
• Ethernet switches are not as scalable as InfiniBand switches

InfiniBand:
• Guaranteed delivery; credit-based flow control
• Hardware-based retransmission
• Dropped packets prevented by link-level flow control
• Cut-through design with late-packet invalidation
• Must use QoS when sharing the fabric with mixed traffic
• Green-field design that applied lessons learned from previous generations