
CISC & RISC Architecture



Published in: Engineering
  1. 1. CISC & RISC Architecture Suvendu Kumar Dash M.Tech in ECE VTP1492
  2. 2.  History Of CISC & RISC  Need Of CISC  CISC  CISC Characteristics  CISC Architecture  The Search for RISC  RISC Characteristics  Bus Architecture  Pipeline Architecture  Compiler Structure  Commercial Application  Reference Overview
  3. 3. History Of CISC & RISC  1950s: IBM instituted a research program.  1964: Release of the System/360.  Mid-1970s: improved measurement tools demonstrated problems with CISC.  1975: the 801 project was initiated at IBM’s Watson Research Center.  1979: a 32-bit RISC microprocessor (the 801) was developed, led by Joel Birnbaum.  1984: MIPS (Microprocessor without Interlocked Pipeline Stages) developed at Stanford, with similar projects at Berkeley.  1988: RISC processors had taken over the high end of the workstation market.
  4. 4. Need Of CISC  In the past, it was believed that hardware design was easier than compiler design  Most programs were written in assembly language  Hardware concerns of the past:  Limited and slower memory  Few registers
  5. 5. The Solution  With only a few registers available, each instruction had to do more work, thereby minimizing the number of instructions called in a program.  Allow for variations of each instruction  Usually variations in memory access.
  6. 6. CISC  CISC stands for Complex Instruction Set Computer.  Each instruction executes multiple low-level operations.  Ex. A single instruction can load from memory, perform an arithmetic operation, and store the result back in memory.  Smaller program size.
  7. 7. CISC Characteristics  A large number of instructions.  Some instructions for special tasks are used only infrequently.  A large variety of addressing modes (5 to 20).  Variable-length instruction formats. Disadvantages: It soon became apparent that a complex instruction set has a number of disadvantages:  These include a complex instruction-decoding scheme, an increased size of the control unit, and increased logic delays.
  8. 8. CISC Architecture  The essential goal of a CISC architecture is to attempt to provide a single machine instruction for each high level language instruction  Ex:  IBM/370 computers  Intel Pentium processors
  9. 9. The Search for RISC  Compilers became more prevalent.  The majority of CISC instructions were rarely used.  Some complex instructions were slower than a group of simple instructions performing an equivalent task.  There were too many instructions for designers to optimize each one.  Smaller instructions allowed constants to be stored in the unused bits of the instruction  This meant fewer accesses to registers or memory.
  10. 10. RISC  RISC stands for Reduced Instruction Set Computer.  It is a microprocessor designed to perform a smaller number of instruction types so that it can operate at a higher speed.
  11. 11. RISC Characteristics  Relatively few instructions  128 or less  Relatively few addressing modes.  Memory access is limited to LOAD and STORE instructions.  All operations done within the registers of the CPU.  This architectural feature simplifies the instruction set and encourages the optimization of register manipulation.  An essential RISC philosophy is to keep the most frequently accessed operands in registers and minimize register-memory operations.
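The load/store discipline described above can be sketched as a toy register machine. This is a minimal illustration, not any real ISA: the instruction names (LOAD, STORE, ADD) and register names are hypothetical, and memory is modeled as a dictionary. The point is that memory is touched only by LOAD and STORE, while arithmetic works purely on registers.

```python
# Toy RISC-style machine: memory is accessed only by LOAD and STORE;
# all arithmetic operates on registers.
memory = {"X": 5, "Y": 7, "Z": 0}
regs = {}

def LOAD(addr, r):          # memory -> register (the only read from memory)
    regs[r] = memory[addr]

def STORE(r, addr):         # register -> memory (the only write to memory)
    memory[addr] = regs[r]

def ADD(rs1, rs2, rd):      # register-to-register operation
    regs[rd] = regs[rs1] + regs[rs2]

# Z = X + Y, expressed as a RISC-style sequence:
LOAD("X", "rA")
LOAD("Y", "rB")
ADD("rA", "rB", "rC")
STORE("rC", "Z")
print(memory["Z"])  # -> 12
```

On a CISC machine the same update might be a single memory-to-memory add; here it takes four simple instructions, each easy to decode and pipeline.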
  12. 12. RISC Characteristics Cont..  Fixed Length, easily decoded instruction format  Typically 4 bytes in length  Single cycle instruction execution  Done by overlapping the fetch, decode and execute phases of two or three instructions known as Pipelining!!  Large number of registers in the processor unit.  Use of overlapped Register Windows.
  13. 13. BUS Architecture  A bus interconnects the processor units to the memory and I/O subsystems.
  14. 14. BUS Architecture Cont.. Memory Bus:  The memory bus (also called the system bus, since it interconnects the subsystems)  Interconnects the processor with the memory systems and also connects to the I/O bus.  Carries three sets of signals: the address bus, data bus and control bus.
  15. 15. BUS Architecture Cont.. System Bus:  A system bus's characteristics are chosen according to the needs of the processor: speed, and word length for instructions and data.  The processor's internal bus(es) differ in character from the system's external bus(es).
  16. 16. BUS Architecture Cont..  Buses interconnect the processor's functional units to the memory and I/O subsystems.
  17. 17. BUS Architecture Cont.. Address Bus  The processor issues the address of the instruction byte or word to the memory system through the address bus.  The processor's execution unit, when required, issues the address of the data (byte or word) to the memory system through the address bus.
  18. 18. BUS Architecture Cont.. Data Bus  When the processor issues the address of an instruction, it gets back the instruction through the data bus.  When it issues the address of data for a load, it reads the data through the data bus.  When it issues the address of data for a store, it writes the data to memory through the data bus.
  19. 19. BUS Architecture Cont.. Control Bus  Issues signals that control the timing of various actions during interconnection.  These signals synchronize the subsystems.
  20. 20. Pipeline Architecture  A technique used in advanced microprocessors whereby the microprocessor begins executing a second instruction before the first has been completed.  A pipeline is a series of stages, where some work is done at each stage. The work is not finished until it has passed through all stages.  With pipelining, the computer architecture allows the next instructions to be fetched while the processor is performing arithmetic operations, holding them in a buffer close to the processor until each instruction operation can be performed.
  21. 21. Pipeline Architecture  The pipeline is divided into segments, and each segment can execute its operation concurrently with the other segments.  Once a segment completes an operation, it passes the result to the next segment in the pipeline and fetches the next operation from the preceding segment. [Figure: four sample instructions, executed linearly]
  22. 22. Pipeline Architecture  CISC instructions do not fit pipelined architectures very well.  For pipelining to work effectively, each instruction needs to have similarities to other instructions, at least in terms of relative instruction complexity.
  23. 23. Pipeline Architecture  Instruction Pipelining  Similar to the use of an assembly line in a manufacturing plant.  New inputs are accepted at one end before previously accepted inputs appear as outputs at the other end.  Pipelining requires the instruction to be divided into stages, so that at every clock cycle a new instruction can be inserted for processing.
  24. 24. Pipeline Architecture
  25. 25. Pipeline Architecture Cont.. Various instruction phases:  Fetch Instruction (FI): fetch the next instruction.  Decode Instruction (DI): determine the opcode and operands.  Calculate Operands (CO): calculate the effective address of source operands.  Fetch Operands (FO): fetch each operand from memory.  Execute Instruction (EI): perform the indicated operation and store the result.  Write Operand (WO): store the result into memory.
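Assuming one clock per stage and no hazards, the overlap of these six phases can be tabulated with a small sketch (the stage names follow the slide; the cycle numbering is the standard textbook idealization, not a model of any specific processor):

```python
STAGES = ["FI", "DI", "CO", "FO", "EI", "WO"]

def timing(n_instructions):
    """Return {instruction: {stage: cycle}} for an ideal 6-stage pipeline.

    Instruction i enters FI at cycle i+1, so with full overlap it
    finishes WO at cycle i+6 rather than at cycle 6*(i+1).
    """
    return {i: {s: i + j + 1 for j, s in enumerate(STAGES)}
            for i in range(n_instructions)}

t = timing(3)
print(t[0]["WO"])  # instruction 0 completes at cycle 6
print(t[2]["WO"])  # instruction 2 completes at cycle 8, not 18
```

Without pipelining, three instructions would need 18 cycles; with overlap they complete in 8.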
  26. 26. Pipeline Architecture Cont.. RISC Pipeline  Differs from the conventional pipeline.  Based on the type of instruction.  The instruction type decides the number of phases in the pipeline.  The number of stages in the pipeline is not fixed.
  27. 27. Pipeline Architecture Cont.. RISC Pipeline.  Most instructions are register to register  Two phases of execution  I: Instruction fetch  E: Execute  ALU operation with register input and output  For load and store  Three phase execution  I: Instruction fetch  E: Execute  Calculate memory address  D: Memory  Register to memory or memory to register operation
  28. 28. Pipeline Architecture Cont.. Effects of Pipelining(1)
  29. 29. Pipeline Architecture Cont.. Effects of Pipelining(2)
  30. 30. Pipeline Architecture Cont.. Increase the Speedup Factor:  The I and E stages of two different instructions are performed simultaneously.  This yields up to twice the execution rate of the serial scheme.  Two problems prevent achieving this maximum speedup:  Single-port memory is used, so only one memory access is possible per stage.  A branch instruction interrupts the sequential flow.
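The ideal speedup implied above can be made explicit with the standard textbook formula: with k stages and n instructions, a pipeline needs k + (n - 1) cycles versus n*k serially (assuming no memory conflicts or branches, which the slide notes are exactly what prevents reaching this bound):

```python
def speedup(k, n):
    """Ideal pipeline speedup for k stages and n instructions.

    Serial execution: n*k cycles.
    Pipelined: k cycles for the first instruction, then one cycle
    for each of the remaining n-1 instructions.
    """
    return (n * k) / (k + n - 1)

print(round(speedup(2, 1000), 2))  # 2-stage (I/E): approaches 2 for large n
print(round(speedup(4, 1000), 2))  # 4-stage: approaches 4 for large n
```

As n grows, the speedup approaches k, which is why the two-stage I/E scheme "yields up to twice" the serial execution rate.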
  31. 31. Pipeline Architecture Cont.. Four-stage pipeline:  Since the E stage usually involves an ALU operation, it may take longer than the other stages. So it can be divided into two stages:  E1: Register file read.  E2: ALU operation and register write.
  32. 32. Pipeline Architecture Cont.. Effects of Pipelining(3)
  33. 33. Pipeline Architecture Cont.. Optimization of RISC Pipelining:  Delayed branch:  The branch does not take effect until after execution of the following instruction.  This following instruction is the delay slot.  Increased performance can be achieved by reordering the instructions!!! This is applicable to unconditional branches.
  34. 34. Pipeline Architecture Cont.. Normal and Delayed Branch:

      Address   Normal Branch   Delayed Branch   Optimized Delayed Branch
      100       LOAD X, rA      LOAD X, rA       LOAD X, rA
      101       ADD 1, rA       ADD 1, rA        JUMP 105
      102       JUMP 105        JUMP 106         ADD 1, rA
      103       ADD rA, rB      NOOP             ADD rA, rB
      104       SUB rC, rB      ADD rA, rB       SUB rC, rB
      105       STORE rA, Z     SUB rC, rB       STORE rA, Z
      106                       STORE rA, Z
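The reordering in the optimized column above can be sketched as a tiny delay-slot filler: swap an unconditional jump with the instruction before it, or pad with NOOP when no candidate exists. This is a simplified illustration only (real schedulers must also verify that the moved instruction does not affect the branch, which is trivial for unconditional jumps; the instruction syntax follows the slide):

```python
def fill_delay_slot(program):
    """Naively fill the delay slot after each JUMP: move the jump
    before its preceding instruction when one exists, else pad
    with a NOOP so the slot stays harmless."""
    out = []
    for instr in program:
        if instr.startswith("JUMP") and out:
            out.insert(len(out) - 1, instr)   # jump issues one slot early
        elif instr.startswith("JUMP"):
            out += [instr, "NOOP"]            # nothing to move: pad
        else:
            out.append(instr)
    return out

prog = ["LOAD X, rA", "ADD 1, rA", "JUMP 105", "ADD rA, rB"]
print(fill_delay_slot(prog))
# -> ['LOAD X, rA', 'JUMP 105', 'ADD 1, rA', 'ADD rA, rB']
```

This reproduces the table's pattern: ADD 1, rA executes in the delay slot while the jump is resolving, so no NOOP is wasted.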
  35. 35. Compiler Structure  A compiler is a Computer Program (or set of programs) that transforms Source Code written in a Programming Language (the source language) into another computer language (the target language, often having a binary form known as Object Code).  The most common reason for wanting to transform source code is to create an Executable program.
  36. 36. Compiler Structure
  37. 37. Compiler Structure Cont..  In a compiler,  linear analysis  is called Lexical Analysis or Scanning and is performed by the Lexical Analyzer or Lexer,  hierarchical analysis  is called Syntax Analysis or Parsing and is performed by the Syntax Analyzer or Parser.  During the analysis, the compiler manages a Symbol Table by  recording the identifiers of the source program  collecting information (called Attributes) about them: storage allocation, type, scope, and (for functions) signature.
  38. 38. Compiler Structure Cont..  When an identifier x is found by the lexical analyzer, it  generates the token id  enters the lexeme x in the symbol table (if it is not already there)  associates with the generated token a pointer to the symbol-table entry for x. This pointer is called the Lexical Value of the token.  During analysis or synthesis, the compiler may Detect Errors and report them.  However, after detecting an error, the compilation should proceed, allowing further errors to be detected.  The syntax and semantic phases usually handle a large fraction of the errors detectable by the compiler.
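The lexer behaviour described above can be sketched minimally: each id token carries, as its lexical value, an index into the symbol table (a hypothetical token scheme for illustration, not any particular compiler's; only identifiers and integers are scanned, and operators are skipped for brevity):

```python
import re

symtab = []        # list of attribute records, one per distinct identifier
index_of = {}      # lexeme -> position in symtab

def lex(source):
    """Scan identifiers and integers. Each ('id', n) token carries the
    symbol-table index n of its lexeme -- its lexical value."""
    tokens = []
    for lexeme in re.findall(r"[A-Za-z_]\w*|\d+", source):
        if lexeme[0].isdigit():
            tokens.append(("num", int(lexeme)))
        else:
            if lexeme not in index_of:          # enter x only once
                index_of[lexeme] = len(symtab)
                symtab.append({"lexeme": lexeme})
            tokens.append(("id", index_of[lexeme]))
    return tokens

print(lex("x = x + 1"))  # -> [('id', 0), ('id', 0), ('num', 1)]
```

Note that both occurrences of x map to the same symbol-table entry: later phases attach attributes (type, scope, storage) to that single record.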
  39. 39. Commercial Applications RISC:  The first commercially available RISC processor was the MIPS  R4000  Supports thirty-two 64-bit registers  128 KB of high-speed cache  SPARC  Based on the Berkeley RISC model  PowerPC  Motorola  Nintendo Game Boy Advance (ARM7)  Nintendo DS (ARM7, ARM9)
  40. 40. Commercial Applications Cont.. CISC:  CISC instruction set architectures include  System/360 through z/Architecture,  PDP-11,  VAX,  Motorola 68k, and Intel 80x86.
  41. 41. Reference  Computer Organization and Architecture, 8th Edition, William Stallings
  42. 42. Thank You