HyperTransport technology is a very fast, low-latency, point-to-point link used for interconnecting integrated circuits on a board. HyperTransport, previously codenamed Lightning Data Transport (LDT), provides the bandwidth and flexibility critical for today's networking and computing platforms while retaining the fundamental programming model of PCI. HyperTransport was invented by AMD and perfected with the help of several partners throughout the industry.
2. Group members (T. Y. B.Tech. Comp. Sci.):
1. Rohan R. Khude
2. Rina Kamble
3. Manjiri C. Patil
4. Ruksar Mulani
5. Akshata Doijad
Guided by
Tirmare H. A.
3. HyperTransport was designed to support both CPU-to-CPU
communications and CPU-to-I/O transfers; as a result, it features
very low latency. It provides up to 22.4 gigabytes per second of
aggregate CPU-to-I/O or CPU-to-CPU bandwidth in a highly efficient
chip-to-chip technology that replaces existing complex multi-level
buses.
Abstract
6. • While microprocessor performance continues to double every
eighteen months, the performance of the I/O bus architecture has lagged,
doubling approximately every three years, as illustrated in Figure 1.
• This I/O bottleneck constrains system performance, diminishing the
actual gains realized as the processor and memory subsystems evolve.
Over the past 20 years, a number of legacy buses, such as ISA, VL-Bus,
AGP, LPC, PCI-32/33, and PCI-X, have emerged that must be bridged
together to support a varying array of devices. Servers and workstations
require multiple high-speed buses, including PCI-64/66, AGP Pro, and
SNA buses like InfiniBand. This hodge-podge of buses increases system
complexity, adds many transistors devoted to bus arbitration and bridge
logic, and delivers less than optimal performance.
1. I/O Bandwidth Problem
9. Original Design Goals
Improve system performance
Simplify system design
Increase I/O flexibility
Maintain compatibility with legacy systems
Ensure extensibility to new system network architectures
Provide a highly scalable multiprocessing system
10. Conceptually, the architecture of the
HyperTransport I/O link can be mapped into five different
layers, a structure similar to the Open Systems
Interconnection (OSI) reference model.
In HyperTransport technology:
Physical layer
Data link layer
Protocol layer
Transaction layer
Session layer
Flexible I/O Architecture
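The five layers and their roles, as described on the surrounding slides, can be summarized in a small sketch. The OSI pairings in the comments are an illustrative analogy, not a normative mapping from the specification:

```python
# The five HyperTransport layers and what each one covers, as listed
# on the surrounding slides; the OSI pairing in the comments is an
# illustrative analogy, not a normative mapping.
HT_LAYERS = {
    "physical":    "electrical signaling (LVDS), clocking",       # ~OSI physical
    "data link":   "init/config, periodic CRC, flow control",     # ~OSI data link
    "protocol":    "commands, virtual channels, ordering rules",  # ~OSI network
    "transaction": "read requests and responses",                 # ~OSI transport
    "session":     "link width/frequency optimization, power",    # ~OSI session
}

for name, role in HT_LAYERS.items():
    print(f"{name:11} layer: {role}")
```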
11. Minimize latency when transferring from HyperTransport™ to
HyperTransport
Maximize bandwidth when transferring to and from PCI-X
Provide 80% or more usable PCI-X bandwidth
Hot-plug support
Utilize an ASIC flow for development and implementation
Architectural Goals
12. Do not make acceptance of a posted request dependent upon the ability to
issue another request.
Do not make acceptance of a nonposted request dependent upon the ability
to issue another nonposted request.
Do not make acceptance of a request dependent upon receipt of a response.
Do not make issuance of a response dependent upon the ability to issue a
nonposted request.
Do not make issuance of a response dependent upon receipt of a response.
Ordering Rules (may Row pass Column?)
           | Posted | Nonposted | Response
Posted     | No     | Yes       | Yes
Nonposted  | No     | Yes/No    | Yes/No
Response   | No     | Yes       | Yes/No
Deadlock avoidance strategy
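The ordering table reads as "may a packet in the Row virtual channel pass a packet in the Column virtual channel?". A minimal sketch that encodes it; the note about the PassPW attribute is background knowledge added here, not taken from the slide:

```python
# "Row pass Column?" ordering table from the deadlock-avoidance rules.
# "maybe" corresponds to the Yes/No entries: passing is allowed but not
# required (in real hardware it depends on attributes such as PassPW --
# that detail is an assumption beyond the table itself).
PASS = {
    ("posted",    "posted"):    "no",
    ("posted",    "nonposted"): "yes",
    ("posted",    "response"):  "yes",
    ("nonposted", "posted"):    "no",
    ("nonposted", "nonposted"): "maybe",
    ("nonposted", "response"):  "maybe",
    ("response",  "posted"):    "no",
    ("response",  "nonposted"): "yes",
    ("response",  "response"):  "maybe",
}

def may_pass(row: str, column: str) -> str:
    """May a packet in virtual channel `row` pass one in `column`?"""
    return PASS[(row, column)]

# Nothing is ever allowed to pass a posted request: posted writes drain
# unconditionally, which is what breaks circular waits.
assert all(may_pass(vc, "posted") == "no"
           for vc in ("posted", "nonposted", "response"))
```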
21. Electrical Configuration
• The signaling technology used in HyperTransport technology is a
type of low-voltage differential signaling (LVDS).
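As a rough illustration of why differential signaling is robust: the receiver senses the difference between the two wires of a pair, so noise that couples equally onto both wires cancels out. The 600 mV swing below is an assumed illustrative figure, not the exact HyperTransport electrical specification:

```python
# LVDS sketch: the receiver takes the *difference* between the two
# wires of a pair, so common-mode noise (identical on both wires)
# cancels. The 0.6 V swing is an assumed figure for illustration.
def lvds_receive(bit: int, common_mode_noise: float) -> int:
    swing = 0.6                          # assumed differential swing, volts
    v_pos = swing if bit else 0.0
    v_neg = 0.0 if bit else swing
    # Identical noise rides on both wires ...
    v_pos += common_mode_noise
    v_neg += common_mode_noise
    # ... and disappears when the receiver subtracts them.
    return 1 if (v_pos - v_neg) > 0 else 0

# Even with common-mode noise larger than the swing itself, the bits
# come through unchanged.
bits = [1, 0, 1, 1, 0]
assert [lvds_receive(b, 1.5) for b in bits] == bits
```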
22. • HyperTransport links implement double data rate (DDR)
transfer, where transfers take place on both the rising
and falling edges of the clock signal.
• An implementation of HyperTransport links with 16 CAD
bits in each direction at a 1.6-GHz data rate provides
bandwidth of 3.2 gigabytes per second in each direction,
for an aggregate peak bandwidth of 6.4 GB/s, or 48
times the peak bandwidth of a 33-MHz PCI bus.
Maximum Bandwidth
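The arithmetic behind these figures can be checked directly; a short sketch using the numbers quoted above (the unit handling is mine):

```python
# Reproduce the bandwidth figures quoted above for a 16-bit
# HyperTransport link with a 1.6 GT/s (DDR) data rate.
def link_bandwidth_gbytes(cad_bits: int, transfers_per_sec: float) -> float:
    """Peak one-way bandwidth in gigabytes per second."""
    return cad_bits * transfers_per_sec / 8 / 1e9

one_way = link_bandwidth_gbytes(16, 1.6e9)   # 3.2 GB/s each direction
aggregate = 2 * one_way                      # 6.4 GB/s both directions

# 32-bit PCI at 33 MHz moves 4 bytes per clock: ~0.132 GB/s peak.
pci_33 = 32 * 33e6 / 8 / 1e9
print(one_way, aggregate, round(aggregate / pci_33))  # 3.2 6.4 48
```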
24. • The data link layer includes the
initialization and configuration sequence,
periodic cyclic redundancy check (CRC),
disconnect/reconnect sequence,
information packets for flow control and
error management, and doubleword
framing for other packets.
Data Link Layer
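The periodic CRC mentioned above can be sketched with a generic 32-bit CRC. HyperTransport computes its CRC over fixed windows of link traffic; the polynomial choice and the framing in this sketch are simplified assumptions, not the exact wire procedure:

```python
# Generic bitwise 32-bit CRC as a stand-in for the data link layer's
# periodic CRC. The polynomial (IEEE 802.3's) and the all-ones seed
# are assumptions for illustration, not the exact HT procedure.
CRC32_POLY = 0x04C11DB7

def crc32_bitwise(data: bytes, crc: int = 0xFFFFFFFF) -> int:
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            if crc & 0x80000000:
                crc = ((crc << 1) ^ CRC32_POLY) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
    return crc

# The receiver recomputes the CRC over the same window and compares it
# with the transmitted value; a mismatch flags a link error.
window = bytes(64)  # one 512-bit window of idle (all-zero) traffic
assert crc32_bitwise(window) == crc32_bitwise(window)
```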
25. • The protocol layer includes the
commands, the virtual channels in which
they run, and the ordering rules that
govern their flow.
• The transaction layer uses the elements
provided by the protocol layer to perform
actions, such as read requests and
responses.
Protocol and Transaction Layers
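A read transaction built from these pieces can be modeled minimally: a read request travels in the nonposted channel and completes with a response carrying data back in the response channel. The field names below (SrcTag aside, which matches a request to its response) are illustrative, not the actual HT packet encodings:

```python
# Minimal model of a nonposted read: request out, response with data
# back. Field names and sizes are illustrative, not the real packet
# formats; doublewords are modeled as plain integers keyed by address.
from dataclasses import dataclass

@dataclass
class ReadRequest:
    src_tag: int      # matches the request to its eventual response
    address: int
    count: int        # number of doublewords requested

@dataclass
class ReadResponse:
    src_tag: int
    data: list

def serve_read(req: ReadRequest, memory: dict) -> ReadResponse:
    """Target-side handling: fetch `count` doublewords and respond."""
    data = [memory.get(req.address + 4 * i, 0) for i in range(req.count)]
    return ReadResponse(src_tag=req.src_tag, data=data)

memory = {0x1000: 0xDEADBEEF, 0x1004: 0xCAFEBABE}
resp = serve_read(ReadRequest(src_tag=3, address=0x1000, count=2), memory)
assert resp.src_tag == 3 and resp.data == [0xDEADBEEF, 0xCAFEBABE]
```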
27. • The session layer includes link width
optimization and link frequency
optimization along with interrupt and
power state capabilities.
Session Layer
31. Software Features
• Supports Windows Server® 2003, Windows Server® 2008,
Red Hat Enterprise Linux, SUSE Linux, and Solaris.
• Supports corporate manageability requirements
• ACPI support
• OS and API support
• Power management support
37. Diagram: HT-Link
[Figure: two HyperTransport devices (Device A and Device B) joined by
a pair of unidirectional link halves. Each direction carries 2, 4, 8,
16, or 32 data (CAD) pairs, a clock pair, and a control pair; the
devices also share the RESET_L and PWROK sideband signals and the VHT
and GND power rails.]
38. HT Devices
• Host Bridge: an HT interface that provides
connectivity between the system's host processor
and the HyperTransport chain.
• End-Chain Link: the Altera HyperTransport
MegaCore function implements an end-chain link.
• Tunnel: a dual-link device that is not a
HyperTransport-to-HyperTransport bridge.
53. The spectrum of I/O technology
Traditional inside the box versus outside the box
Point-to-point inside the box versus outside the box
Traditional shared buses versus point-to-point
interconnection
Serial links versus parallel point-to-point links
Comparisons with other technologies