Code Scheduling Constraints
P. Abinaya
M.Sc. (CS)
• Code scheduling is a form of program optimization that applies to the machine code produced by the code generator. Code scheduling is subject to three kinds of constraints:
• Control-dependence constraints. All the operations executed in the original program must be executed in the optimized one.
• Data-dependence constraints. The operations in the optimized program must produce the same results as the corresponding ones in the original program.
• Resource constraints. The schedule must not oversubscribe the resources of the machine.
• True dependence: read after write. If a write is followed by a read of the same location, the read depends on the value written; such a dependence is known as a true dependence.
• Antidependence: write after read. If a read is followed by a write to the same location, we say that there is an antidependence from the read to the write.
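• These checks can be made mechanical once each instruction's read and write sets are known. Below is a minimal Python sketch; the instruction format and names are illustrative, not from any particular compiler:

def classify_dependences(first, second):
    """Classify the data dependences from `first` to `second`, where each
    instruction is a pair (reads, writes) of sets of locations.
    A sketch; assumes the exact read/write sets are known."""
    reads1, writes1 = first
    reads2, writes2 = second
    kinds = []
    if writes1 & reads2:
        kinds.append("true (read after write)")
    if reads1 & writes2:
        kinds.append("anti (write after read)")
    return kinds

# Example: "x = a + b" followed by "a = c" writes a location (a) that
# the first instruction reads, so there is an antidependence.
print(classify_dependences(({"a", "b"}, {"x"}), ({"c"}, {"a"})))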
• To check whether two memory accesses share a data dependence, we only need to tell whether they can refer to the same location; we do not need to know which location is being accessed.
• For example, we can tell that the two accesses *p and *(p+4) cannot refer to the same location, even though we may not know what p points to. Data dependence is generally undecidable at compile time. The compiler must assume that operations may refer to the same location unless it can prove otherwise.
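• A conservative disambiguator therefore answers only "provably different" or "may be the same". A minimal sketch, assuming addresses of the form base pointer plus constant byte offset and word-sized accesses:

def may_alias(access_a, access_b, size=4):
    """Return False only when two accesses provably cannot overlap.
    Each access is (base, offset); bases are opaque pointer names."""
    base_a, off_a = access_a
    base_b, off_b = access_b
    if base_a == base_b:
        # Same unknown base: offsets at least `size` apart cannot
        # overlap, e.g. *p versus *(p+4), whatever p points to.
        return abs(off_a - off_b) < size
    # Different bases: nothing can be proved, so assume aliasing.
    return True

print(may_alias(("p", 0), ("p", 4)))  # False: provably distinct
print(may_alias(("p", 0), ("q", 0)))  # True: must be assumed dependent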
• A machine-independent intermediate representation of the source program uses an unbounded number of pseudoregisters to represent variables that can be allocated to registers. These variables include scalar variables in the source program that cannot be referred to by any other names, as well as temporary variables generated by the compiler to hold the partial results of expressions. Unlike memory locations, registers are uniquely named; thus precise data-dependence constraints can be generated for register accesses easily.
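• Because pseudoregisters are uniquely named, dependence testing for them is exact name comparison. A sketch with a hypothetical (destination, sources) instruction format, also covering the write-after-write (output) case for completeness:

code = [
    ("t1", ("a", "b")),   # t1 = a + b
    ("t2", ("t1", "c")),  # t2 = t1 * c
    ("a",  ("d",)),       # a  = d
]

def register_edges(block):
    """Exact dependence edges (i, j, kind) between instructions of one
    block, found purely by comparing register names."""
    edges = []
    for i, (dest_i, srcs_i) in enumerate(block):
        for j in range(i + 1, len(block)):
            dest_j, srcs_j = block[j]
            if dest_i in srcs_j:
                edges.append((i, j, "true"))    # read after write
            if dest_j in srcs_i:
                edges.append((i, j, "anti"))    # write after read
            if dest_i == dest_j:
                edges.append((i, j, "output"))  # write after write
    return edges

print(register_edges(code))  # [(0, 1, 'true'), (0, 2, 'anti')]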
• If registers are allocated before scheduling, the resulting code tends to have many storage dependences that limit code scheduling. On the other hand, if code is scheduled before register allocation, the schedule created may require so many registers that register spilling may negate the advantages of instruction-level parallelism. The sketch below illustrates the tradeoff.
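• Compare a hypothetical block after allocation to a single register r1 with the same block written on fresh pseudoregisters; register_edges from the sketch above reports the extra storage dependences:

allocated = [
    ("r1", ("x",)),   # r1 = load x
    ("u",  ("r1",)),  # u  = r1 + 1
    ("r1", ("y",)),   # r1 = load y: anti and output dependences on r1
    ("v",  ("r1",)),  # v  = r1 + 1
]
unallocated = [
    ("t1", ("x",)),   # t1 = load x
    ("u",  ("t1",)),
    ("t2", ("y",)),   # independent of t1: the two loads may overlap
    ("v",  ("t2",)),
]
# register_edges(allocated) serializes the second load behind the use
# of the first through r1; register_edges(unallocated) leaves the two
# loads independent, at the cost of one more live register.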
• Scheduling operations within a basic block is relatively easy, because all the instructions are guaranteed to execute once control flow reaches the beginning of the block. Instructions in a basic block can be reordered arbitrarily, as long as all the data dependences are satisfied.
• Unfortunately, basic blocks, especially in nonnumeric programs, are typically very small; on average, there are only about five instructions in a basic block. In addition, operations in the same block are often highly related and thus have little parallelism.
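• Within a basic block, any topological order of the dependence graph is a legal schedule. A minimal list-scheduling sketch, reusing register_edges from above; a real scheduler would also track latencies, a priority heuristic, and resource reservations:

def list_schedule(block):
    """Greedily issue instructions whose predecessors in the
    dependence graph have all been issued."""
    preds = {i: set() for i in range(len(block))}
    for i, j, _kind in register_edges(block):
        preds[j].add(i)
    issued, order = set(), []
    while len(order) < len(block):
        ready = [i for i in range(len(block))
                 if i not in issued and preds[i] <= issued]
        pick = ready[0]  # real schedulers pick by critical-path priority
        order.append(pick)
        issued.add(pick)
    return order

print(list_schedule(code))  # e.g. [0, 1, 2]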
• Memory loads are one type of instruction that can benefit greatly from speculative execution. Memory loads are quite common, of course.
• They have relatively long execution latencies, the addresses used in the loads are commonly available in advance, and the result can be stored in a new temporary variable without destroying the value of any other variable.
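• As an illustration (the instruction text here is hypothetical pseudo-assembly), a load can be hoisted above a branch as long as its result goes into a fresh temporary, so no live value is destroyed when the branch goes the other way:

before = [
    "if not cond goto SKIP",
    "t1 = load x",   # long-latency load, on the taken path only
    "y = t1",
]
after = [
    "t1 = load x",   # hoisted: latency overlaps the branch; t1 is fresh
    "if not cond goto SKIP",
    "y = t1",
]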
• Many machines can be represented using the following simple model. A machine M = ⟨R, T⟩ consists of:
• A set of operation types T, such as loads, stores, arithmetic operations, and so on.
• A vector R = [r1, r2, ...] representing hardware resources, where ri is the number of units available of the ith kind of resource. Examples of typical resource types include memory access units, ALUs, and floating-point functional units.
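• The model translates directly into a small data structure; the names below (Machine, fits, the resource kinds) are illustrative only:

from dataclasses import dataclass

@dataclass
class Machine:
    """M = <R, T>: a resource vector R and a set of operation types T,
    both keyed by name in this sketch."""
    resources: dict  # resource kind -> number of units (the vector R)
    op_types: dict   # operation type -> resources it uses per cycle

M = Machine(
    resources={"mem": 1, "alu": 2, "fp": 1},
    op_types={
        "load":  {"mem": 1},
        "store": {"mem": 1},
        "add":   {"alu": 1},
        "fmul":  {"fp": 1},
    },
)

def fits(machine, ops):
    """Resource constraint: the operations issued in one clock must not
    oversubscribe any resource."""
    used = {}
    for op in ops:
        for kind, n in machine.op_types[op].items():
            used[kind] = used.get(kind, 0) + n
    return all(used.get(kind, 0) <= cap
               for kind, cap in machine.resources.items())

print(fits(M, ["load", "add", "add"]))  # True
print(fits(M, ["load", "store"]))       # False: only one memory unit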