Pipeline Mechanism

Transcript

  • 1. Assignment on Pipeline Mechanism
       Advanced Computer Architecture
       Submitted To: Dr. Md. Fokhray Hossain, Associate Professor, Department of CSE, CIS & CS, Faculty of Science & Information Technology, DIU
       Submitted By: Muhammad Ashik Iqbal, M.Sc. in CSE, ID: 092-25-127, DIU
       Submission Date: 07 December 2009
  • 2. Introduction

To achieve better performance, most modern processors (super-pipelined, superscalar RISC, and VLIW processors) have many functional units on which several instructions can be executed simultaneously. An instruction starts execution if its issue conditions are satisfied; if not, it is stalled until they are. Such an interlock (pipeline) delay interrupts the fetching of successor instructions (or demands nop instructions, e.g. for some MIPS processors).

There are two major kinds of interlock delay in modern processors. The first is a data dependence delay, which determines instruction latency time. Execution of an instruction does not start until all of its source data have been evaluated by prior instructions (there are more complex cases in which execution starts even though the data are not yet available, but will be ready a given time after execution starts). Taking data dependence delays into account is simple: the data dependence (true, output, and anti-dependence) delay between two instructions is given by a constant, and in most cases this approach is adequate.

The second kind of interlock delay is a reservation delay. A reservation delay means that two instructions under execution will need shared processor resources, i.e., buses, internal registers, and/or functional units, which are reserved for some time. Taking this kind of delay into account is complex, especially for modern RISC processors.

The task of exploiting more processor parallelism is solved by an instruction scheduler. To solve this problem well, the instruction scheduler needs an adequate description of the processor parallelism (a pipeline description). GCC currently provides two alternative ways to describe processor parallelism, both described below. The first method is outlined in the next section; it was once the only method provided by GCC, and is therefore used in a number of existing ports. The second, and preferred, method specifies functional unit reservations for groups of instructions with the aid of regular expressions; this is called the automaton-based description.

The GCC instruction scheduler uses a pipeline hazard recognizer to determine whether an instruction can be issued by the processor on a given simulated processor cycle. The pipeline hazard recognizer is generated automatically from the processor pipeline description. The recognizer generated from the automaton-based description is more sophisticated: it is based on a deterministic finite automaton (DFA) and is therefore faster than one generated from the old description, and its speed does not depend on processor complexity. An instruction can be issued if there is a transition from one automaton state to another.

You can use either model to describe processor pipeline characteristics, or even mix them, using the old description for some processor sub-models and the DFA-based one for others. In general, the automaton-based description is preferred. Its model is richer and makes it possible to describe the pipeline characteristics of processors more accurately, which results in improved code quality (although sometimes only marginally). It will also serve as infrastructure for implementing sophisticated and practical instruction scheduling that tries many instruction sequences and chooses the best one.
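To make the reservation idea concrete, here is a toy Python sketch of a reservation-based hazard recognizer. The unit names and reservation patterns (alu, agen, mem_bus, mul) are invented for illustration, and GCC itself precompiles such descriptions into a DFA rather than interpreting reservation tables at schedule time as done here; this is only a sketch of the underlying idea.

    # Toy reservation tables: cycle offset (relative to issue) -> units used.
    RESERVATIONS = {
        "int_add": {0: {"alu"}},                   # ALU on the issue cycle only
        "load":    {0: {"agen"}, 1: {"mem_bus"}},  # address generation, then the bus
        "mul":     {0: {"mul"}, 1: {"mul"}},       # multiplier busy for two cycles
    }

    class HazardRecognizer:
        """Tracks which units are reserved now and on future cycles.

        state[i] is the set of units busy i cycles from now; a DFA-based
        recognizer would enumerate these tables as automaton states ahead
        of time instead of manipulating them during scheduling.
        """
        def __init__(self):
            self.state = []

        def can_issue(self, insn):
            # Issue is legal iff no requested unit collides with a reservation.
            return all(offset >= len(self.state) or not (self.state[offset] & units)
                       for offset, units in RESERVATIONS[insn].items())

        def issue(self, insn):
            assert self.can_issue(insn)
            for offset, units in RESERVATIONS[insn].items():
                while len(self.state) <= offset:
                    self.state.append(set())
                self.state[offset] |= units

        def advance_cycle(self):
            # Move the simulated processor one cycle forward.
            if self.state:
                self.state.pop(0)

    hr = HazardRecognizer()
    hr.issue("mul")
    print(hr.can_issue("int_add"))  # True: the ALU is free this cycle
    print(hr.can_issue("mul"))      # False: the multiplier is reserved
    hr.advance_cycle()
    print(hr.can_issue("mul"))      # False: still busy one cycle later
    hr.advance_cycle()
    print(hr.can_issue("mul"))      # True: the reservation has expired

An issued instruction is a transition to a new reservation state; an instruction that cannot issue corresponds to a missing transition, i.e., a stall.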
  • 3. How Pipelining Works

Pipelining, a standard feature in RISC processors, is much like an assembly line. Because the processor works on different steps of the instruction at the same time, more instructions can be executed in a shorter period of time.

A useful way of demonstrating this is the laundry analogy. Let's say there are four loads of dirty laundry that need to be washed, dried, and folded. We could put the first load in the washer for 30 minutes, dry it for 40 minutes, and then take 20 minutes to fold the clothes; then pick up the second load and wash, dry, and fold it; and repeat for the third and fourth loads. Supposing we started at 6 PM and worked as efficiently as possible, we would still be doing laundry until midnight.

A smarter approach is to put the second load of dirty laundry into the washer as soon as the first load is clean and whirling happily in the dryer. Then, while the first load is being folded, the second load dries, and a third load is added to the pipeline of laundry. Using this method, the laundry would be finished by 9:30.
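As a quick check of the laundry arithmetic, the following Python sketch computes both schedules; the 30/40/20-minute stage times and the 6 PM start come from the analogy above.

    WASH, DRY, FOLD = 30, 40, 20   # minutes per stage
    LOADS = 4

    # Sequential: each load is washed, dried, and folded before the next starts.
    sequential = LOADS * (WASH + DRY + FOLD)     # 4 * 90 = 360 min

    # Pipelined: the 40-minute dryer is the slowest stage, so once it is busy
    # it sets the pace.  The last load leaves the dryer at WASH + LOADS * DRY
    # minutes and then needs FOLD minutes more.
    pipelined = WASH + LOADS * DRY + FOLD        # 30 + 160 + 20 = 210 min

    print(f"sequential: {sequential} minutes ({sequential / 60:.1f} hours)")
    print(f"pipelined:  {pipelined} minutes ({pipelined / 60:.1f} hours)")

Starting at 6 PM, 360 minutes ends at midnight and 210 minutes ends at 9:30 PM, matching the analogy. Note that throughput improves while the time for any single load (90 minutes) is unchanged.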
RISC Pipelines

  • 4. An instruction pipeline is a technique used in the design of computers and other digital electronic devices to increase their instruction throughput (the number of instructions that can be executed in a unit of time). The fundamental idea is to split the processing of a computer instruction into a series of independent steps, with storage at the end of each step. This allows the computer's control circuitry to issue instructions at the processing rate of the slowest step, which is much faster than the time needed to perform all steps at once. The term pipeline refers to the fact that each step is carrying data at once (like water), and each step is connected to the next (like the links of a pipe).

The origin of pipelining is thought to be either the ILLIAC II project or the IBM Stretch project. The IBM Stretch project proposed the terms "Fetch, Decode, and Execute", which became common usage.

Most modern CPUs are driven by a clock. The CPU consists internally of logic and memory (flip-flops). When the clock signal arrives, the flip-flops take their new values, and the logic then requires a period of time to decode the new values. Then the next clock pulse arrives, the flip-flops again take their new values, and so on. By breaking the logic into smaller pieces and inserting flip-flops between the pieces, the delay before the logic gives valid outputs is reduced, and so the clock period can be reduced. For example, the classic RISC pipeline is broken into five stages with a set of flip-flops between each stage:

1. Instruction fetch
2. Instruction decode and register fetch
3. Execute
4. Memory access
5. Register write-back

Hazards: When a programmer (or compiler) writes assembly code, they assume that each instruction is executed before execution of the subsequent instruction begins. Pipelining invalidates this assumption. When this causes a program to behave incorrectly, the situation is known as a hazard. Various techniques for resolving hazards, such as forwarding and stalling, exist.

A non-pipelined architecture is inefficient because some CPU components (modules) are idle while another module is active during the instruction cycle. Pipelining does not completely cancel out idle time in a CPU, but making those modules work in parallel improves program execution significantly. Processors with pipelining are organized internally into stages which can work semi-independently on separate jobs. Each stage is organized and linked in a 'chain', so each stage's output is fed to the next stage until the job is done. This organization allows overall processing time to be significantly reduced.

A deeper pipeline means that there are more stages in the pipeline, and therefore fewer logic gates in each stage. This generally means that the processor's frequency can be increased as the cycle time is lowered, because there are fewer components in each stage of the pipeline, so the propagation delay of each stage is decreased. [1]
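The following sketch puts illustrative numbers on that claim. The 10 ns total logic delay and the 0.2 ns per-stage flip-flop overhead are assumptions invented for the example, not figures from any particular processor.

    TOTAL_LOGIC_DELAY_NS = 10.0  # assumed combinational delay of the whole path
    FLOP_OVERHEAD_NS = 0.2       # assumed cost of each inserted pipeline register

    for stages in (1, 2, 5, 10):
        # Ideal split: each stage gets an equal share of the logic, plus the
        # fixed flip-flop overhead that pipelining inserts between stages.
        cycle_ns = TOTAL_LOGIC_DELAY_NS / stages + FLOP_OVERHEAD_NS
        print(f"{stages:2d} stages: cycle {cycle_ns:5.2f} ns, "
              f"clock {1000 / cycle_ns:6.1f} MHz")

The clock rate rises with pipeline depth, but the register overhead (and, in practice, hazards) makes the gain less than linear.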
  • 5. Advantages and Disadvantages

Pipelining does not help in all cases, and there are several possible disadvantages. An instruction pipeline is said to be fully pipelined if it can accept a new instruction every clock cycle. A pipeline that is not fully pipelined has wait cycles that delay its progress.

Advantages of pipelining:
1. The cycle time of the processor is reduced, thus increasing the instruction issue rate in most cases.
2. Some combinational circuits, such as adders or multipliers, can be made faster by adding more circuitry. If pipelining is used instead, it can save circuitry compared with a more complex combinational circuit.

Disadvantages of pipelining:
1. A non-pipelined processor executes only a single instruction at a time. This prevents branch delays (in effect, every branch is delayed) and problems with serial instructions being executed concurrently. Consequently the design is simpler and cheaper to manufacture.
2. The instruction latency of a non-pipelined processor is slightly lower than that of a pipelined equivalent, because extra flip-flops must be added to the data path of a pipelined processor.
3. A non-pipelined processor has a stable instruction bandwidth. The performance of a pipelined processor is much harder to predict and may vary more widely between different programs.

Examples: Generic Pipeline

(Figure: a generic 4-stage pipeline; the colored boxes represent instructions independent of each other. The top gray box is the list of instructions waiting to be executed, the bottom gray box is the list of instructions that have been completed, and the middle white box is the pipeline.)

The pipeline has four stages:
1. Fetch
2. Decode
3. Execute
4. Write-back (for lw and sw instructions, memory is accessed after the execute stage)

Execution proceeds as follows:

Time   Execution
0      Four instructions are awaiting execution
1      • the green instruction is fetched from memory
2      • the green instruction is decoded
       • the purple instruction is fetched from memory
3      • the green instruction is executed (the actual operation is performed)
       • the purple instruction is decoded
       • the blue instruction is fetched
4      • the green instruction's results are written back to the register file or memory
       • the purple instruction is executed
       • the blue instruction is decoded
       • the red instruction is fetched
5      • the green instruction is completed
       • the purple instruction is written back
       • the blue instruction is executed
       • the red instruction is decoded
6      • the purple instruction is completed
       • the blue instruction is written back
       • the red instruction is executed
7      • the blue instruction is completed
       • the red instruction is written back
8      • the red instruction is completed
9      All instructions are executed

Bubble

(Figure: a bubble in cycle 3 delays execution.)

When a "hiccup" in execution occurs, a "bubble" is created in the pipeline in which nothing useful happens. In cycle 2, the fetching of the purple instruction is delayed, and the decode stage in cycle 3 now contains a bubble. Everything "behind" the purple instruction is delayed as well, but everything "ahead" of it continues executing. Compared with the execution above, the bubble yields a total execution time of 8 clock ticks instead of 7. Bubbles are like stalls, in which nothing useful happens in the fetch, decode, execute, or write-back stage; a bubble can be filled with a nop (no-operation) instruction.
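A minimal Python simulator for this four-stage example, sketched under the simplifying assumption that a bubble is just a one-cycle delay in fetching the purple instruction, reproduces both timelines:

    STAGES = ["fetch", "decode", "execute", "write-back"]

    def run(names, stall_fetch_of=None):
        """Print stage occupancy per tick; return the tick of the last write-back.

        stall_fetch_of: index of the instruction whose fetch is delayed one
        cycle, creating a bubble behind it (everything ahead is unaffected).
        """
        start, tick = [], 1      # start[i]: tick on which instruction i is fetched
        for i in range(len(names)):
            if i == stall_fetch_of:
                tick += 1        # the "hiccup": this fetch waits an extra tick
            start.append(tick)
            tick += 1
        finish = max(start) + len(STAGES) - 1
        for t in range(1, finish + 1):
            busy = [f"{names[i]} {STAGES[t - start[i]]}"
                    for i in range(len(names)) if 0 <= t - start[i] < len(STAGES)]
            print(f"tick {t}: " + "; ".join(busy))
        return finish

    insns = ["green", "purple", "blue", "red"]
    print("no bubble:", run(insns), "ticks")                    # 7 ticks
    print("bubble:   ", run(insns, stall_fetch_of=1), "ticks")  # 8 ticks

With the bubble, nothing occupies the decode stage at tick 3, and every instruction behind the purple one slips by one tick, so the red instruction's write-back moves from tick 7 to tick 8, as described above.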