Performance of a Speculative Transmission Scheme for Scheduling-Latency Reduction (Synopsis)
This work was motivated by the need to achieve low latency in
an input-queued centrally-scheduled cell switch for high-performance
computing applications; specifically, the aim is to reduce the latency
incurred between the issuance of a request and the arrival of the corresponding
grant. The minimum latency in switches with centralized scheduling
comprises two components, namely, the control-path latency and the
data-path latency, which in a practical high-capacity, distributed switch
implementation can be far greater than the cell duration. We introduce
a speculative transmission scheme to significantly reduce the average
control-path latency by allowing cells to proceed without waiting for a
grant, under certain conditions. It operates in conjunction with any
centralized matching algorithm to achieve a high maximum utilization.
Using an analytical model, performance measures such as the mean delay and
the rate of successful speculative transmissions are derived. The results
demonstrate that the latency between request and grant can be almost
entirely eliminated for loads up to 50%. Our simulations confirm these
analytical results.
A KEY component of massively parallel computing systems is
the interconnection network (ICTN). To achieve a good system balance
between computation and communication, the ICTN must provide low
latency, high bandwidth, low error rates, and scalability to high node
counts (thousands), with low latency being the most important
requirement. Although optics holds a strong promise towards fulfilling
these requirements, a number of technical and economic challenges
remain. Corning Inc. and IBM are jointly developing a demonstrator
system to solve the technical issues and map a path towards
commercialization. For background information on this project, the Optical
Shared Memory Supercomputer Interconnect System (OSMOSIS), and for a detailed
description of its architecture, the reader is referred to the project's
published work.
An alternative is the Birkhoff-von Neumann switch, which eliminates the
scheduler entirely. However, it incurs a worst-case latency penalty of N time
slots: a cell may have to wait up to N time slots for the next opportunity
to reach its destination.
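As a minimal illustration of this worst-case wait (our own sketch, not from the paper), consider a fixed cyclic schedule in which, at slot t, input i is connected to output (i + t) mod N. The class and method names below are assumptions for illustration:

```java
// Sketch: wait time under a fixed cyclic Birkhoff-von Neumann-style
// schedule with N ports, where slot t connects input i to output (i + t) mod N.
public class BvnWait {
    // Number of slots a head-of-line cell at input i destined to output j
    // must wait from the current slot t until its connection recurs.
    // The result ranges from 0 to N-1, i.e., up to roughly N slots.
    static int slotsToWait(int n, int i, int j, int t) {
        int target = Math.floorMod(j - i, n);  // slot index (mod n) serving (i, j)
        return Math.floorMod(target - t, n);
    }
}
```

A cell that just misses its slot (t one past the target) waits N-1 further slots, which is the latency penalty referred to above.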
The control- and data-path latencies comprise the serialization and
deserialization delays incurred between a request and the corresponding
response. In the existing system, if N packets are sent from a source to a
destination, each must wait exactly N time slots.
We propose a novel method that combines speculative and scheduled
transmission in a crossbar switch. A speculative transmission does not have
to wait for a grant and hence achieves low latency, while scheduled
transmissions achieve a high maximum throughput.
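The combined policy can be sketched as follows; this is our own illustrative reduction of the idea, and the names (CellTx, decide, receiveGrant) are assumptions, not part of the actual design:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the combined speculative/scheduled policy: a head-of-line cell
// is sent in its reserved slot if a grant for its queue has arrived;
// otherwise it is transmitted speculatively without waiting, at the risk
// of a collision at the output.
public class CellTx {
    enum Mode { SPECULATIVE, SCHEDULED }

    private final Set<Integer> granted = new HashSet<>();

    // Called when the centralized scheduler issues a grant for a queue.
    void receiveGrant(int queue) { granted.add(queue); }

    // Decide how the head-of-line cell of the given queue is transmitted.
    Mode decide(int queue) {
        if (granted.remove(queue)) {
            return Mode.SCHEDULED;   // grant available: collision-free slot
        }
        return Mode.SPECULATIVE;     // no grant yet: proceed immediately
    }
}
```

The point of the split is that speculation removes the request-grant round trip at low load, while the scheduled path preserves throughput when speculative cells start colliding.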
Hardware Requirements:
    Processor        : Pentium IV 2.6 GHz
    RAM              : 512 MB DDR RAM
    Monitor          : 15" Color
    CD Drive         : LG 52X
    Keyboard         : Standard 102 Keys
    Mouse            : 3 Buttons

Software Requirements:
    Language         : Java, Swing
    Operating System : Windows XP
    Database         : SQL Server 2000
Modules:
    1. Network formation
    2. Packet creation
    3. Forward input to the centralized scheduler
    4. Apply the centralized matching algorithm
    5. Receive the packets and calculate performance
In this module, we construct the network. Each node is connected to its
neighboring nodes and is deployed independently within the network area.
In this module, the user browses for and selects the source file, and the
selected data is converted into fixed-size packets.
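The packet-creation step amounts to splitting the file's bytes into fixed-size chunks. A minimal sketch, with the class name and cell size chosen by us for illustration (the last packet may be shorter; padding is not modeled):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of packet creation: split a byte array into fixed-size packets.
public class Packetizer {
    static List<byte[]> split(byte[] data, int cellSize) {
        List<byte[]> packets = new ArrayList<>();
        for (int off = 0; off < data.length; off += cellSize) {
            int end = Math.min(off + cellSize, data.length);
            packets.add(Arrays.copyOfRange(data, off, end));
        }
        return packets;
    }
}
```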
The speculative transmission scheme significantly reduces the average
control-path latency by allowing cells to proceed without waiting for a
grant, so the source does not need to query the destination first; in this
way we aim to cut the time delay by up to 50%. In this module, the
fixed-size packets are forwarded from the source to the centralized
scheduler, which achieves a high maximum utilization.
Here we apply the centralized matching algorithm based on the port number:
the scheduler reads the port number from the source, compares it with its
own table, and then forwards the packet to the destination.
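The port-based lookup described above can be sketched as a simple table held by the scheduler; the class, table contents, and method names are our own assumptions for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the matching step: the scheduler maps registered destination
// port numbers to destinations and forwards only packets whose port number
// matches an entry in the table.
public class CentralScheduler {
    private final Map<Integer, String> portTable = new HashMap<>();

    // Register a destination under its port number.
    void register(int port, String destination) {
        portTable.put(port, destination);
    }

    // Return the destination for a valid port number, or null when there
    // is no match (the packet is rejected).
    String match(int port) {
        return portTable.get(port);
    }
}
```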
In this module, the valid packets are received from the scheduler and the
overall time delay from source to destination is calculated. Our analysis
and simulation results both confirm that this scheme achieves a significant
latency reduction at traffic loads of up to 50%.
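The performance calculation can be sketched as averaging per-packet delays from send and receive timestamps; the class and field names below are illustrative assumptions rather than the actual measurement code:

```java
// Sketch of the performance-calculation step: given matching arrays of
// send and receive timestamps (one entry per valid packet, same time
// unit), compute the mean source-to-destination delay.
public class DelayStats {
    static double meanDelay(long[] sentAt, long[] receivedAt) {
        if (sentAt.length != receivedAt.length || sentAt.length == 0) {
            throw new IllegalArgumentException(
                "timestamp arrays must be non-empty and equal in length");
        }
        long total = 0;
        for (int i = 0; i < sentAt.length; i++) {
            total += receivedAt[i] - sentAt[i];  // per-packet delay
        }
        return (double) total / sentAt.length;
    }
}
```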