This document discusses dataflow computers and their architecture. Dataflow computers are based on data-driven computation rather than program-flow control, allowing many instructions to execute asynchronously and implicitly in parallel. A dataflow graph represents a dataflow program as a directed graph whose nodes are operators and whose arcs pass data tokens between them; a node "fires" when its input data tokens are available. Dataflow architectures come in static and dynamic models, with the dynamic model using tagged tokens to allow greater parallelism. Compilers for dataflow machines record dependencies with tags instead of variable names. Programs are loaded into content-addressable memory, and instructions fire when their operands become available.
3. INTRODUCTION :
Data flow computers are based on the concept of data-driven computation, which is
drastically different from the operation of conventional von Neumann machines. The
fundamental difference is that instruction execution in a conventional computer is under
program-flow control, whereas in a data flow computer it is driven by the availability
of data (operands).
The data-driven concept implies asynchrony, meaning that many instructions can be
executed simultaneously and asynchronously. A higher degree of implicit parallelism is
therefore expected in dataflow computers. Because there is no use of shared memory cells,
dataflow programs are free from side effects.
The Dataflow Principles section reviews the basic principles of the dataflow model. The
Dataflow Graphs section gives the representations used in dataflow systems. The Dataflow
Architectures section provides a general description of the dataflow architecture. The
discussion includes a comparison of architectural characteristics and the evolutionary
improvements in dataflow computing.
4. OBJECTIVE
Dataflow architecture is a computer architecture that directly contrasts with
the traditional von Neumann architecture, or control flow architecture.
It has been successfully implemented in specialized hardware such as
digital signal processing, network routing, graphics processing, telemetry,
and more recently in data warehousing.
The main objective and scope of this project is to discuss the principles
and the uses of dataflow computers.
5. Features Of Dataflow Computers :
o Intermediate or final results are passed directly as data tokens
between instructions.
o There is no concept of shared data storage as embodied in the
traditional notion of a variable.
o Program sequencing is constrained only by data dependencies among
instructions.
6. DATA FLOW GRAPH :
Dataflow graphs can be viewed as the machine language for dataflow computers. A
data flow graph is a directed graph whose nodes correspond to operators and whose
arcs forward data tokens between them.
A producing node is connected to a consuming
node by an arc, and the “point” where an arc
enters a node is called an input port.
The execution of an instruction is called the
firing of a node.
Data is sent along the arcs of the dataflow graph
in the form of tokens, which are created by
computational nodes and placed on output arcs.
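The node, arc, and port structure described above can be sketched in code. This is a minimal illustration (all class and variable names are assumed, not taken from any real machine): each node holds an operator and one slot per input port, each arc connects a producer's output to a consumer's input port, and the firing rule checks that every input port holds a token.

```python
import operator

class Node:
    def __init__(self, name, op, arity):
        self.name = name
        self.op = op
        self.ports = [None] * arity   # input ports; None means "no token yet"
        self.arcs = []                # outgoing arcs: (consumer, port index)

    def connect(self, consumer, port):
        # the "point" where this arc enters the consumer is its input port
        self.arcs.append((consumer, port))

    def ready(self):
        # firing rule: a node may fire only when every input port holds a token
        return all(tok is not None for tok in self.ports)

# a producing node connected to a consuming node by one arc
add = Node("add", operator.add, 2)
neg = Node("neg", operator.neg, 1)
add.connect(neg, 0)                  # add's output arc enters neg's port 0

add.ports = [2, 3]                   # tokens arrive on add's two input ports
print(add.ready(), neg.ready())      # add can fire; neg still waits
```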
7. Example With an Expression :
The figure below illustrates an example of a dataflow graph for evaluating
the expression X² − 2*X + 3.
The subtraction operation will not be
carried out until its input values are available. As
soon as the values of X² and 2*X are computed, the
subtraction is carried out, which
in turn provides the input to the following addition
operation.
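The firing order described above can be simulated directly. The sketch below (node names and the scheduling loop are illustrative assumptions, not any machine's actual mechanism) evaluates X² − 2*X + 3: each node fires as soon as tokens are present on both of its input arcs, and its result token is placed on the arcs of its consumers.

```python
import operator

def evaluate(x):
    # tokens currently held on the arcs feeding each node; None = no token yet
    arcs = {"sq": [x, x], "dbl": [2, x], "sub": [None, None], "add": [None, 3]}
    nodes = {
        "sq":  (operator.mul, [("sub", 0)]),   # X*X   -> subtract, left port
        "dbl": (operator.mul, [("sub", 1)]),   # 2*X   -> subtract, right port
        "sub": (operator.sub, [("add", 0)]),   # X*X - 2*X -> add, left port
        "add": (operator.add, []),             # final result node
    }
    fired, result = set(), None
    while len(fired) < len(nodes):
        for name, (op, outs) in nodes.items():
            if name in fired or None in arcs[name]:
                continue                       # not all operand tokens present
            token = op(*arcs[name])            # fire: consume inputs, emit token
            fired.add(name)
            for dest, port in outs:
                arcs[dest][port] = token       # place token on the output arc
            result = token
    return result

print(evaluate(5))  # 5*5 - 2*5 + 3 = 18
```

Note that "sq" and "dbl" have no mutual dependency, so in real dataflow hardware they could fire concurrently; the sequential loop here only emulates that behavior.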
8. DATAFLOW ARCHITECTURE :
Dataflow architecture is a computer
architecture that directly contrasts with
the traditional von Neumann architecture,
or control flow architecture.
Although no commercially successful
general-purpose computer hardware has
used a dataflow architecture, it has been
successfully implemented in specialized
hardware such as in digital signal
processing, network routing, graphics
processing, telemetry, and more recently in
data warehousing.
9. MODELS OF DATAFLOW ARCHITECTURE :
Depending on the way of handling data tokens, data
flow computers are divided into :
I. static model
II. dynamic model.
STATIC DATA FLOW MACHINES :
o In a static data flow machine, data tokens are
assumed to move along the arcs of the data flow
program graph to the operator nodes.
o This architecture is considered static because
tokens are not labeled, and control tokens must be
used to acknowledge the proper timing in
transferring data tokens from node to node.
10. DYNAMIC DATAFLOW MACHINE :
o A dynamic data flow machine uses tagged tokens, so that more than one
token can exist on an arc. The tagging is achieved by attaching a label to each
token which uniquely identifies the context of that particular token.
o The dynamic model allows greater exploitation of parallelism; however, this
advantage comes at the expense of overhead in generating tags, larger data
tokens, and the complexity of matching tokens.
11. COMPILER :
Normally, in a control flow architecture, compilers analyze program code for
data dependencies between instructions in order to better organize the
instruction sequences in the binary output files.
Binaries compiled for a dataflow machine contain this dependency information.
A dataflow compiler records these dependencies by creating unique tags for each
dependency instead of using variable names.
12. PROGRAM :
Programs are loaded into the content-addressable memory (CAM) of a dynamic
dataflow computer. When all of the tagged operands of an instruction become
available (that is, output from previous instructions and/or user input), the
instruction is marked as ready for execution by an execution unit.
Once an instruction is completed by an execution unit, its output data is sent
(with its tag) to the CAM. Any instructions that are dependent upon this
particular datum (identified by its tag value) are then marked as ready for
execution.
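The execution cycle described above can be sketched as follows. This is a simplified model under assumed names (the tag strings, `matching_store`, and `ready_queue` are all illustrative): a matching store plays the role of the CAM, pairing data tokens by tag, and an instruction becomes ready once every operand carrying its tag has arrived.

```python
# instruction "memory": tag -> (operation, operand count, destination tag)
program = {
    "t_mul": (lambda a, b: a * b, 2, "t_add"),
    "t_add": (lambda a, b: a + b, 2, None),   # None marks the final result
}

matching_store = {}   # tag -> operands collected so far (the CAM's role)
ready_queue = []      # instruction tokens ready for an execution unit

def send_token(tag, value):
    # CAM step: associate the datum with its tag and check readiness
    operands = matching_store.setdefault(tag, [])
    operands.append(value)
    op, arity, dest = program[tag]
    if len(operands) == arity:                # all tagged operands present
        ready_queue.append((tag, matching_store.pop(tag)))

def run():
    result = None
    while ready_queue:
        tag, operands = ready_queue.pop(0)    # dispatch an instruction token
        op, _, dest = program[tag]
        out = op(*operands)                   # execution unit computes
        if dest is None:
            result = out
        else:
            send_token(dest, out)             # data token returns to the CAM
    return result

# evaluate (3*4) + 5 under this scheme
send_token("t_mul", 3)
send_token("t_mul", 4)
send_token("t_add", 5)
print(run())  # 17
```

The key point the sketch shows is that no program counter sequences the work: an instruction is triggered purely by the arrival of its tagged operands.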
13. INSTRUCTION :
An instruction, along with its required data operands, is transmitted to an execution unit as
a packet, also called an instruction token. Similarly, output data is transmitted back to the
CAM as a data token. The packetization of instructions and results allows for parallel
execution of ready instructions on a large scale.
Dataflow networks deliver the instruction tokens to the execution units and return the data
tokens to the CAM. In contrast to the conventional von Neumann architecture, data tokens
are not permanently stored in memory; rather, they are transient messages that only exist
while in transit to the instruction storage.
14. CONCLUSION :
The advances from the development of dataflow machines indicate the potential for high-
performance computation based on dataflow principles. This is necessary owing to the
increased demands of processing complex scientific and technical data. As such
applications require long processing times, data flow computers may help reduce those
times and thus improve the efficiency and effectiveness of the implemented
algorithms. However, there are still many issues to be addressed for the efficient use of
dataflow computers.