CS212 Computer Architecture and Data Communication
Study Guide
Version 1.0
© 2006 by Global Business Unit – Higher Education
Informatics Holdings Ltd
A Member of Informatics Group
Informatics Campus
10-12 Science Centre Road
Singapore 609080

CS212 Computer Architecture and Data Communication
Study Guide
Version 1.0
Revised in June 2006

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted by any form or means, electronic, mechanical, photocopying, recording, or otherwise, without prior written permission of the publisher.

Every precaution has been taken by the publisher and author(s) in the preparation of this book. The publisher offers no warranties or representations, nor does it accept any liabilities with respect to the use of any information or examples contained herein.

All brand names and company names mentioned in this book are protected by their respective trademarks and are hereby acknowledged.

The developer is wholly responsible for the contents, errors and omissions.
Chapter 1. Computer Hardware

• The hallmark of a von Neumann machine is a large random-access memory. Each cell in the memory has a unique numerical address, which can be used to access or replace the contents of that cell in a single step. In addition to its ability to address memory locations directly, a von Neumann machine also has a central processing unit (the CPU) that possesses a special working memory (register memory) for holding the data being operated on, and a set of built-in operations that is rich in comparison with the Turing machine. The exact design of the central processor varies considerably, but it typically includes operations such as adding two binary integers, or branching to another part of the program if the binary integer in some register is equal to zero. The CPU can interpret information retrieved from memory either as instructions to perform particular operations or as data to apply the current operation to. Thus one portion of memory can contain a sequence of instructions, called a program, and another portion of memory can contain the data to be operated on by the program. The CPU repeatedly goes through a fetch-execute cycle. A von Neumann machine runs efficiently because of its random-access memory and because its architecture can be implemented in electronic circuitry that makes it very fast.

1.1 Components

All computers can be summarized with just two basic components:
    • Primary storage or memory
    • A central processing unit or CPU
• The CPU is the "brains" of the computer. Its function is to execute programs that are stored in memory. The CPU fetches an instruction stored in memory, then executes the fetched instruction within the CPU before proceeding to fetch the next instruction from memory. This process continues until the CPU is told to stop. These programs may involve data being processed in some manner or the results of an operation being stored. Thus we can summarize:
    • Fetch instruction: read an instruction from memory
    • Interpret instruction: decode the instruction to determine the operation to be performed
    • Process data: execute the instruction
    • Write data: write results to memory or an I/O device

• The ability to process and store data entails further essential components, whose architecture is the basic make-up of any computer. The CPU typically consists of:
    • A control unit
    • An Arithmetic Logic Unit (ALU)
    • Registers
1.1.1 CPU Structure

    • Figure: Organization of a CPU structure, showing the instruction decoder and the control-unit registers (IR, MAR, MBR, PC, PSW), the ALU and its registers (AC, DR, MQ), main memory (MM) and the I/O modules, all interconnected by the system bus (SB).

Legend:
    MAR: memory address register
    IR: instruction register
    MBR: memory buffer register
    PC: program counter
    PSW: program status word
    AC: accumulator register
    DR: general-purpose register
    MQ: multiplier-quotient register
    MM: main memory
    SB: system bus

• Control Unit (CU)
    Circuitry located on the central processing unit which coordinates and controls all hardware. This is accomplished by using the contents of the instruction register to decide which circuits are to be activated. The control unit is also responsible for fetching instructions from main memory and decoding each instruction.

• Arithmetic-Logic Unit (ALU)
    Performs arithmetic operations such as addition and subtraction, as well as logical operations such as AND, OR and NOT. Most operations require two operands. One of these operands usually comes from memory via the memory buffer register, while the other is the previously loaded value stored in the accumulator. The results of an arithmetic-logic unit operation are usually transferred to the accumulator (AC).
• Registers
    Registers are the smallest units of memory and are located internally within the CPU. They are most often used either to temporarily store results or to hold control information. Registers within the CPU serve two basic functions:
    • User-visible registers: these enable machine/assembly-language programs to minimize main-memory references by optimizing the use of registers.
    • Control and status registers: used by the control unit to control the operation of the CPU, and by privileged operating-system programs to control the execution of programs.

    Examples include those usually found in the fetch-execute cycle:

    • Accumulator (AC, DR)
        A register located on the central processing unit. Its contents can be used by the arithmetic-logic unit for arithmetic and logic operations, and by the memory buffer register. Usually, all results generated by the arithmetic-logic unit end up in the accumulator.

    • Instruction Register (IR)
        A register located on the central processing unit which holds the contents of the last instruction fetched. This instruction is now ready to be executed and is accessed by the control unit.
    • Memory Address Register (MAR)
        A register located on the central processing unit which is in turn connected to the address lines of the system. This register specifies the address in memory where information can be found, and can also be used to point to a memory location where information is to be stored.

    • Memory Buffer Register (MBR)
        A register located on the central processing unit which is in turn connected to the data lines of the system. The main purpose of this register is to act as an interface between the central processing unit and memory. When the control unit receives the appropriate signal, the memory location stored in the memory address register is used to copy data from or to the memory buffer register.

    • Program Counter (PC)
        Contains the memory address of the next instruction to be executed. The contents of the program counter are copied to the memory address register before an instruction is fetched from memory. At the completion of the fetched instruction, the control unit updates the program counter to point to the next instruction to be fetched.
    • Program Status Word (PSW)
        Generally referred to as a Status Register (SR), this register encapsulates key information used by the CPU to record exceptional conditions: CPU-detected errors (such as an instruction attempting to divide by zero), hardware faults detected by error-checking circuits, and urgent service requests or interrupts generated by the I/O modules.

1.2 Memory

• Memory is made up of a series of zeros (0) and ones (1) called bits (binary digits). These individual bits are grouped together in lots of eight and are referred to as a byte. Every byte in memory can be accessed by a unique address that identifies its location. The memory in modern computers contains millions of bytes and is often referred to as random-access memory (RAM).

    • Memory organization
        • This memory is called an N-word, m-bit memory
        • N is generally a power of 2, i.e. N = 2^n
        • Size of an address is n bits
        • Size of a word is m bits
        • Example: a 4096-word 16-bit memory
    • Figure: Structure of a main-memory layout, with N = 2^n words of m bits each (addresses 0 to N-1), the MAR driving a uni-directional address bus, and the MBR connected to a bi-directional data bus.

• Memory Read
    • Address is placed into the MAR
    • Read control is asserted
    • Contents of the desired location are placed into the MBR

• Memory Write
    • The word to be written is placed into the MBR
    • Address of the memory location to be written is specified in the MAR
    • Write control line is asserted
    • Content of the MBR is transferred into the memory location specified by the MAR
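The memory organization and the read/write sequences above can be modelled in a few lines of code. This is a minimal sketch only (illustrative Python; the ToyMemory class and its field names are my own, not part of the guide) of an N-word memory accessed purely through the MAR and MBR, using the 4096-word 16-bit example:

```python
import math

class ToyMemory:
    """An N-word, m-bit memory accessed only through MAR and MBR (Section 1.2)."""

    def __init__(self, n_words: int, word_bits: int):
        self.cells = [0] * n_words
        self.address_bits = int(math.log2(n_words))   # n, since N = 2^n
        self.word_bits = word_bits                     # m
        self.mar = 0   # memory address register
        self.mbr = 0   # memory buffer register

    def read(self):
        # Address is already in the MAR; asserting 'read' places the word in the MBR.
        self.mbr = self.cells[self.mar]

    def write(self):
        # Word to be written is already in the MBR; asserting 'write' stores it at [MAR].
        self.cells[self.mar] = self.mbr

mem = ToyMemory(4096, 16)            # 12 address bits, 16-bit words
mem.mar, mem.mbr = 0x700, 0xABCD
mem.write()                          # M[0x700] <- 0xABCD
mem.read()                           # MBR <- M[0x700]
print(mem.address_bits, hex(mem.mbr))   # 12 0xabcd
```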
1.3 Input / Output Devices

• Input Devices
    • Accept outside information
    • Convert it into digital signals suitable for computation by the CPU

• Output Devices
    • Communicate data stored in memory, or processed data, to the outside world
    • May take various forms, such as a computer monitor or a hardcopy

1.4 Bus Interconnection

• A bus is a communication pathway connecting two or more devices
• It is a shared transmission medium. Multiple devices connect to the bus, and a signal transmitted by any one device is available for reception by all other devices attached to the bus.
• Only one device at a time can successfully transmit
• A bus usually consists of multiple communication pathways, or lines, each transmitting either 1 or 0; e.g. an 8-bit unit of data can be transmitted over eight bus lines
• A system bus connects all the major components, i.e. CPU, memory and I/O
• Bus structure
    • A system bus typically consists of 50-100 separate lines, each of which belongs to one of the data, address or control groups. The number of lines is usually referred to as the width of the bus.
    • Each line is assigned to a particular function
    • The bus lines can be classified into 3 functional groupings:
        1. Data lines
        2. Address lines
        3. Control lines

    • Figure: Bus interconnection structure, in which the control unit, ALU, registers, main memory, secondary memory, input devices and output devices all attach to the data bus, address bus and control bus.
• Data Lines
    • Provide a path for moving data between system modules

• Address Lines
    • Used to designate the source or destination of the data on the data bus
    • The width of the address bus determines the maximum possible memory capacity of the system

• Control Lines
    • Used to control the access to and the use of the data and address lines
    • Control signals transmit both command and timing information between system modules
    • Timing signals indicate the validity of data and address information
    • Command signals specify the operations to be performed
1.5 Instruction-Execution Cycle

All computers have an instruction-execution cycle. A basic instruction-execution cycle can be broken down into the following steps:
    1. Fetch cycle
    2. Execute cycle

    • Figure: The fetch-execute cycle (Start, fetch instruction, execute instruction, Stop).

1.5.1 Fetch Cycle

• To start off the fetch cycle, the address stored in the program counter (PC) is transferred to the memory address register (MAR). The CPU then transfers the instruction located at the address stored in the MAR to the memory buffer register (MBR) via the data lines connecting the CPU to memory. The control unit (CU) coordinates this transfer from memory to CPU. To finish the cycle, the newly fetched instruction is transferred to the instruction register (IR) and, unless told otherwise, the CU increments the PC to point to the next address location in memory.
• Steps:
    1. [PC] -> [MAR]
    2. [MAR] -> Address bus
    3. Read control is asserted
    4. [MEM] -> Data bus -> [MBR]
    5. [MBR] -> [IR]
    6. [PC] + 1 -> [PC]

    After the CPU has finished fetching an instruction, the CU checks the contents of the IR and determines which type of execution is to be carried out next. This process is known as the decoding phase. The instruction is now ready for the execution cycle.

1.5.2 Execute Cycle

• Once an instruction has been loaded into the instruction register (IR), and the control unit (CU) has examined and decoded the fetched instruction and determined the required course of action to take, the execution cycle can commence. Unlike the fetch cycle (and the interrupt cycle, both of which have a set instruction sequence, which we will see later), the execute cycle can comprise some complex operations (commonly called opcodes).
• Steps:
    1. [IR] -> decoding circuitry
    2. If the required data are not available in the instruction, determine their location
    3. Fetch the data, if any
    4. Execute the instruction
    5. Store results, if any

• The actions within the execution cycle can be categorized into the following four groups:
    1. CPU - Memory: data may be transferred from memory to the CPU or from the CPU to memory.
    2. CPU - I/O: data may be transferred from an I/O module to the CPU or from the CPU to an I/O module.
    3. Data Processing: the CPU may perform some arithmetic or logic operation on data via the arithmetic-logic unit (ALU).
    4. Control: an instruction may specify that the sequence of operations be altered. For example, the program counter (PC) may be updated with a new memory address to reflect that the next instruction should be fetched from this new location.
• For simplicity, the following examples deal with two operations that can occur: [LOAD ACC, memory] and [ADD ACC, memory], both of which can be classified as memory-reference instructions. Instructions which can be executed without leaving the CPU are referred to as non-memory-reference instructions.

• LOAD ACC, memory
    • This operation loads the accumulator (ACC) with data that is stored in the memory location specified in the instruction. The operation starts off by transferring the address portion of the instruction from the IR to the memory address register (MAR). The CPU then transfers the data located at the address stored in the MAR to the memory buffer register (MBR) via the data lines connecting the CPU to memory. The CU coordinates this transfer from memory to CPU. To finish the cycle, the newly fetched data is transferred to the ACC.

• Steps:
    1. [IR] {address portion} -> [MAR]
    2. [MAR] -> [MEM] -> [MBR]
    3. [MBR] -> [ACC]
• ADD ACC, memory
    • This operation adds the data stored in the ACC to the data stored in the memory location specified in the instruction, using the ALU. The operation starts off by transferring the address portion of the instruction from the IR to the MAR. The CPU then transfers the data located at the address stored in the MAR to the MBR via the data lines connecting the CPU to memory. This transfer from memory to CPU is coordinated by the CU. Next, the ALU adds the data stored in the ACC and the MBR. To finish the cycle, the result of the addition is stored in the ACC for future use.

• Steps:
    1. [IR] {address portion} -> [MAR]
    2. [MAR] -> [MEM] -> [MBR]
    3. [MBR] + [ACC] -> [ALU]
    4. [ALU] -> [ACC]

    After the execution cycle completes, if an interrupt is not detected, the next instruction is fetched and the process starts all over again.
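The register-transfer steps in Sections 1.5.1 and 1.5.2 can be tied together in a small simulation. This is an illustrative sketch only (Python); the opcode values, the SimpleCPU class and the memory layout are assumptions chosen for the example, not a format defined in the guide:

```python
LOAD, ADD, HALT = 1, 2, 0   # assumed opcode values for this sketch

class SimpleCPU:
    def __init__(self, memory):
        self.mem = memory            # main memory: (opcode, address) tuples or data words
        self.pc = self.mar = self.mbr = self.acc = 0
        self.ir = (HALT, 0)

    def fetch(self):
        self.mar = self.pc                  # 1. [PC] -> [MAR]
        self.mbr = self.mem[self.mar]       # 2-4. read the addressed word into the MBR
        self.ir = self.mbr                  # 5. [MBR] -> [IR]
        self.pc += 1                        # 6. [PC] + 1 -> [PC]

    def execute(self):
        opcode, address = self.ir           # [IR] -> decoding circuitry
        if opcode in (LOAD, ADD):
            self.mar = address              # [IR]{address portion} -> [MAR]
            self.mbr = self.mem[self.mar]   # [MAR] -> [MEM] -> [MBR]
            self.acc = self.mbr if opcode == LOAD else self.acc + self.mbr
        return opcode != HALT

    def run(self):
        while True:
            self.fetch()
            if not self.execute():
                break

# Program: LOAD ACC,[4]; ADD ACC,[5]; HALT, with data 7 and 8 at addresses 4 and 5.
mem = [(LOAD, 4), (ADD, 5), (HALT, 0), 0, 7, 8]
cpu = SimpleCPU(mem)
cpu.run()
print(cpu.acc)   # 15
```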
• Example:
    Draw a diagram (including the MAR, MBR, IR and PC) showing how the instruction MOV AX [7000] is fetched, if the starting address of the instruction is 1FFF and the content of location 7000 is ABCD.

    • Figure: Instruction fetch cycle. The PC (1FFF) is copied to the MAR, the instruction MOV AX [7000] is read from memory over the data bus into the MBR and then into the IR, and the PC is incremented.

    • Figure: Instruction execution cycle. The IR is decoded, the address 7000 is placed in the MAR, the data ABCD is read from memory into the MBR, and the value is transferred into AX.
• Fetch cycle:
    1. [PC] -> [MAR]                          (MAR = 1FFF)
    2. [MAR] -> Address bus
    3. Read control line is asserted
    4. [MEM]1FFF -> Data bus -> [MBR]         (MBR = MOV AX [7000])
    5. [MBR] -> [IR]                          (IR = MOV AX [7000])
    6. [PC] + 1 -> [PC]

• Execute cycle:
    1. [IR] -> decoding circuitry
    2. [MAR] <- 7000                          (MAR = 7000)
    3. [MAR] -> Address bus
    4. Read control line is asserted
    5. [MEM]7000 -> Data bus -> [MBR]         (MBR = ABCD)
    6. [MBR] -> AX                            (AX = ABCD)

1.5.3 Interrupt Cycle

• An interrupt can be described as a mechanism by which an I/O module (or other source) can break the normal sequential control of the central processing unit (CPU), and thus alter the traditional sequence of the fetch and execute cycles. The main advantage of using interrupts is that the processor can be engaged in executing other instructions while the I/O modules connected to the computer are engaged in other operations.
    • Figure: Instruction cycle with interrupts. After each execute cycle, if interrupts are enabled the processor checks for an interrupt before fetching the next instruction; if interrupts are disabled it proceeds directly to the next fetch, and the cycle repeats until a halt.

• Common interrupts that the CPU can receive:
    • Program: generated by some condition that occurs as a result of an instruction execution, such as arithmetic overflow, division by zero, an attempt to execute an illegal machine instruction, or a reference outside a user's allowed memory space.
    • Timer: generated by a timer within the processor. This allows the operating system to perform certain functions on a regular basis.
    • I/O: generated by an I/O controller, to signal normal completion of an operation or to signal a variety of error conditions.
    • Hardware failure: generated by a failure such as a power failure or a memory parity error.
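The extended cycle in the figure above can be expressed as a small loop. The sketch below (illustrative Python; the pending-interrupt queue and handler name are my own assumptions) adds the interrupt check to the end of each fetch-execute pass, reusing the hypothetical SimpleCPU sketch from Section 1.5:

```python
from collections import deque

pending_interrupts = deque()          # filled by timer / I/O device models elsewhere
interrupts_enabled = True

def check_for_interrupt():
    """Interrupt cycle: service one pending interrupt, if interrupts are enabled."""
    if interrupts_enabled and pending_interrupts:
        source = pending_interrupts.popleft()
        # A real CPU would save the PC and PSW here and jump to the operating
        # system's handler routine; this sketch only reports which handler runs.
        print(f"servicing interrupt from: {source}")

def run_with_interrupts(cpu):
    """Fetch -> execute -> check for interrupt, repeated until a halt."""
    while True:
        cpu.fetch()                   # fetch cycle
        running = cpu.execute()       # execute cycle
        check_for_interrupt()         # interrupt cycle (see figure above)
        if not running:
            break

# Usage with the SimpleCPU sketch from Section 1.5:
#   pending_interrupts.append("timer")
#   run_with_interrupts(SimpleCPU(mem))
```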
• Up until now we have dealt with the instruction-execution cycle at the hardware level. When interrupts are introduced, the CPU and the operating system driving the system are responsible for suspending the program currently being run, as well as restoring that program to the point it had reached before the interrupt was detected. To handle this, an interrupt handler routine is executed. This interrupt handler is usually built into the operating system.

1.6 Modern Processors

• When central processing units (CPUs) were first developed, they processed the first instruction completely before starting the second. For example, the processor fetched the first instruction, decoded it, and then executed it, before fetching the second instruction and starting the process over again. In a processor such as the one just described, the CPU itself is the weak link: the external bus operates for at least one cycle (clock pulse) out of the three, but has to wait the remaining cycles for the CPU.

    • Figure: Sequential execution of program instructions. Instruction 1 is fetched in cycle 1 and executed in cycle 2; instruction 2 is fetched in cycle 3 and executed in cycle 4.
• Modern processors, on the other hand, have developed what are called pipelines. Pipelining is the most common implementation technique used in a CPU today to increase the performance of the system. The idea behind the pipeline is that while the first instruction is being executed, the second instruction can be fetched; in simple terms, instructions overlap.

• The first pipelines to be introduced were simple three-stage pipelines. While this utilizes all the resources of the system, conflicts over resources can occur, resulting in instructions being held until the previous instruction has completed its current stage. Apart from these minor hiccups, it is possible for the CPU to complete an instruction every cycle, as opposed to the earlier processors that required three cycles per instruction.

• To overcome the delays associated with the three-stage pipeline, modern processors have broken the execute cycle down into a number of phases, and some have even broken down the fetch cycle in the fight to overcome delays in their processors. No matter how many phases the cycle is broken down into, the end result is that only one instruction can be completed every cycle.

    • Figure: Execution cycle in a three-stage pipelined processor. Fetch and execute overlap, so instruction 2 is fetched while instruction 1 executes, and one instruction completes per cycle once the pipeline is full.
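As a rough illustration of why this overlap helps, the following sketch (illustrative Python, not from the guide) compares the cycle counts of a purely sequential processor with an ideal k-stage pipeline, ignoring the resource conflicts mentioned above:

```python
def sequential_cycles(n_instructions: int, stages: int) -> int:
    # Each instruction must pass through every stage before the next one starts.
    return n_instructions * stages

def pipelined_cycles(n_instructions: int, stages: int) -> int:
    # The first instruction fills the pipeline (stages cycles); after that one
    # instruction completes per cycle, assuming no hazards or stalls.
    return stages + (n_instructions - 1)

for n in (3, 10, 100):
    print(n, sequential_cycles(n, 3), pipelined_cycles(n, 3))
# 3 instructions:     9 cycles vs 5
# 10 instructions:   30 cycles vs 12
# 100 instructions: 300 cycles vs 102
```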
• Enter the world of the superscalar pipeline, where more than one instruction can be issued per clock cycle. Intel describes its processors by different levels; for example, a level 2 (L2) processor (Pentium) can issue two instructions per clock cycle and a level 3 (L3) processor (Pentium Pro) can issue three. The topic of pipelined architecture will be covered in more detail in later chapters.
• Exercises:

a) With a diagram, introduce into the fetch-execute cycle a provision for an interrupt.

b) The instruction (e.g. LOAD LABEL) loads the data at the specified address, which is a hexadecimal value, into the accumulator. With a diagram, show how many times the instruction must be fetched, given that both the opcode field and the main memory are 1 byte wide. The starting address is FEDC (hexadecimal) and the instruction is LOAD 0110.

c) The operation ADD ACC, Memory adds the data stored in the ACC to the data stored in the memory location specified in the instruction, using the ALU. Show the steps of the execution cycle for this operation.

d) With the aid of a diagram, show how the instruction ADD [5000], 10, [1000] is loaded into main memory if the starting address for the instruction is 0200 and the data stored in locations 5000 and 1000 are 8 and 8 respectively. Then show how the instruction, which adds the data at address 5000 to the data at address 1000 and the number 10, and then stores the result in address 5000, will be executed.

e) Give the definitions and purposes of the following:
    • PSW
    • ALU
    • MM
    • MAR
    • MBR

f) Give a brief description of the concept of pipelined architecture. How does a traditional sequence like the fetch-execute cycle differ in this type of architecture?
Chapter 2. Instruction Formats

• A typical program involves performing a number of functionally different steps, such as adding two numbers, testing for a particular condition, reading a character from the keyboard, or sending a character to be displayed on a monitor. A computer must have instructions capable of performing these four types of operations:
    • Data transfer between the main memory and the CPU registers
    • Arithmetic and logic operations on data
    • Program sequencing and control
    • I/O transfers

• The format of an instruction is depicted in a rectangular box symbolising the bits of the instruction code. The bits of the binary instruction are divided into groups called fields. The most common fields found in instruction formats are:
    • An operation code field that specifies the operation to be performed.
    • An address field that designates either a memory address or a code for choosing a processor register.
    • A mode field that specifies the way the address field is to be interpreted.

• Other special fields are sometimes employed under certain circumstances, for example a field that gives the number of shifts in a shift-type instruction (a concept discussed in more detail in LD201), or an operand field in an immediate-type instruction. The operation code field of an instruction is a group of bits that defines various processor operations, such as add, subtract, complement and shift. The bits that define the mode field of an instruction code specify a variety of alternatives for choosing the operands from the given address field. The various addressing modes will be discussed in the next chapter.
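To make the idea of fields concrete, the sketch below (illustrative Python; the particular field widths and layout are assumptions chosen for the example, not a format defined in the guide) packs an opcode, mode and address field into a 16-bit instruction word and pulls them apart again:

```python
# Assumed layout for illustration: 4-bit opcode | 2-bit mode | 10-bit address.
OPCODE_BITS, MODE_BITS, ADDRESS_BITS = 4, 2, 10

def encode(opcode: int, mode: int, address: int) -> int:
    """Pack the three fields into one 16-bit instruction word."""
    return (opcode << (MODE_BITS + ADDRESS_BITS)) | (mode << ADDRESS_BITS) | address

def decode(word: int):
    """Split a 16-bit instruction word back into (opcode, mode, address)."""
    address = word & ((1 << ADDRESS_BITS) - 1)
    mode = (word >> ADDRESS_BITS) & ((1 << MODE_BITS) - 1)
    opcode = word >> (MODE_BITS + ADDRESS_BITS)
    return opcode, mode, address

word = encode(opcode=0b0011, mode=0b01, address=0b0000110010)
print(f"{word:016b}", decode(word))   # the three fields are recovered unchanged
```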
2.1 Address Field

• Operations specified by computer instructions are executed on data stored in memory or in processor registers. Operands residing in memory are specified by their memory addresses. Operands residing in processor registers are specified by a register address. A register address is a binary code of n bits that specifies one of 2^n registers in the processor. Thus a computer with 16 processor registers, R0 through R15, will have in its instruction code a register address field of four bits. The binary code 0101, for example, will designate register R5.

• Computers may have instructions of several different lengths containing varying numbers of addresses. The number of address fields in the instruction format of a computer depends upon the internal organisation of its registers. Most instructions fall into one of three types of organisation:
    • Single accumulator organisation
    • Multiple register organisation
    • Stack organisation

2.1.1 Accumulator Organisation

• An accumulator-type organisation is the simplest of computer organisations. All operations are performed with the implied accumulator register. The instruction format in this type of computer uses one memory address field. For example, the instruction that specifies an arithmetic addition has only one address field, symbolised by X:

    ADD X

• ADD is the symbol for the operation code of the instruction and X gives the address of the operand in memory. This instruction results in the operation AC <- AC + M[X], where AC is the accumulator register and M[X] symbolises the memory word located at address X.
2.1.2 Multiple Register Organisation

• A processor unit with multiple registers usually allows for greater programming flexibility. The instruction format in this type of computer needs three register address fields. Thus, the instruction for arithmetic addition may be written in symbolic form as

    ADD R1, R2, R3

• to denote the operation R3 <- R1 + R2. However, the number of register address fields in the instruction can be reduced from three to two if the destination register is the same as one of the source registers:

    ADD R1, R2

• Thus, the instruction will denote the operation R2 <- R2 + R1. Registers R1 and R2 are the source registers and R2 is also the destination register. Computers with multiple processor registers employ the MOVE instruction to symbolise the transfer of data from one location to another. The instruction

    MOVE R1, R2

• denotes the transfer R2 <- R1. Transfer-type instructions need two address fields to specify the source operand and the destination of the transfer.
2.1.3 Stack Organisation

• The stack organisation will be presented in further detail in a later chapter, and again later in this section. Computers with a stack organisation have instructions that require one address field for transferring data to and from the stack. Operation-type instructions such as ADD do not need an address field because the operation is performed directly on the operands in the stack.

• To illustrate the influence of the number of address fields on computer programs, we will evaluate the arithmetic statement

    X = (A + B) * (C + D)

    using one-, two-, and three-address instructions.
2.2 Three Address Instructions

    Format: OPCODE | OPER 1 | OPER 2 | OPER 3
        OPER 1: data source 1
        OPER 2: data source 2
        OPER 3: destination

• Computers with three-address instruction formats can use each address field to specify either a processor register or a memory address for an operand. The program in symbolic form that evaluates X = (A + B) * (C + D) is shown below, together with an equivalent register-transfer statement for each instruction.

    ADD A, B, R1        R1 <- M[A] + M[B]
    ADD C, D, R2        R2 <- M[C] + M[D]
    MUL R1, R2, X       M[X] <- R1 * R2

• It is assumed that the computer has two processor registers, R1 and R2. The symbol M[A] denotes the operand stored in memory at the address symbolised by A.

    • Advantage: programming flexibility
    • Disadvantage: long instruction word
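The register-transfer statements beside each instruction can be executed mechanically. This sketch (illustrative Python, not from the guide; the sample operand values are my own) interprets the three-address program above, keeping M[ ] for memory operands and a small register file for R1 and R2:

```python
M = {"A": 2, "B": 3, "C": 4, "D": 5, "X": 0}   # memory operands
R = {"R1": 0, "R2": 0}                          # processor registers

def value(name):
    return R[name] if name in R else M[name]

def store(name, result):
    (R if name in R else M)[name] = result

def run_three_address(program):
    # Only ADD and MUL are needed for the program in the text.
    for op, src1, src2, dest in program:
        result = value(src1) + value(src2) if op == "ADD" else value(src1) * value(src2)
        store(dest, result)

# The program from the text: X = (A + B) * (C + D)
run_three_address([("ADD", "A", "B", "R1"),
                   ("ADD", "C", "D", "R2"),
                   ("MUL", "R1", "R2", "X")])
print(M["X"])   # (2 + 3) * (4 + 5) = 45
```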
2.3 Two Address Instructions

    Format: OPCODE | OPER 1 | OPER 2
        OPER 1: data source 1
        OPER 2: data source 2
        • Implied destination: either OPER 1 or OPER 2

• Two-address instructions are the most common in commercial computers. Here again each address field can specify either a processor register or a memory address. The program to evaluate X = (A + B) * (C + D) is as follows:

    MOVE A, R1          R1 <- M[A]
    ADD B, R1           R1 <- R1 + M[B]
    MOVE C, R2          R2 <- M[C]
    ADD D, R2           R2 <- R2 + M[D]
    MUL R2, R1          R1 <- R1 * R2
    MOVE R1, X          M[X] <- R1

• The MOVE instruction moves or transfers the operands to and from memory and processor registers. The second operand listed in the symbolic instruction is assumed (implied) to be the destination where the result of the operation is transferred.

    • Advantage: smaller instruction word
    • Disadvantage: the implied destination may not be the desired address, so a data transfer to the actual location may be needed
2.4 One Address Instructions

    Format: OPCODE | OPER 1
        OPER 1: data source 1
        • The other data source is an accumulator or the stack
        • Implied destination of the operation

• A computer with one-address instructions uses an implied AC register. The program to evaluate the arithmetic statement is as follows:

    LOAD A              AC <- M[A]
    ADD B               AC <- AC + M[B]
    STORE T             M[T] <- AC
    LOAD C              AC <- M[C]
    ADD D               AC <- AC + M[D]
    MUL T               AC <- AC * M[T]
    STORE X             M[X] <- AC

• All operations are done between the AC register and a memory operand. The symbolic address T designates a temporary memory location required for storing the intermediate result.

    • Advantage: smaller instruction word.
    • Disadvantage: data needs to be loaded into the accumulator first, and the implied destination may not be the desired address, so a data transfer to the actual location is required.
2.5 Zero Address (Stack) Instructions

    Format: OPCODE

• Most computers have a facility for a memory stack, but only a few commercial computers have the appropriate instructions for evaluating arithmetic expressions this way. Such computers have a stack-organised CPU with the top locations of the stack held in registers; the rest of the stack is in memory. In this way, the top two elements of the stack, on which operations must be performed, are available in processor registers for manipulation by the arithmetic circuits. The PUSH and POP instructions require one address field to specify the source or destination operand. Operation-type instructions for the stack, such as ADD and MUL, imply the two operands on top of the stack and do not require an address field in the instruction. The following program shows how the expression X = (A + B) * (C + D) will be evaluated:

    PUSH A              TOS <- A
    PUSH B              TOS <- B
    ADD                 TOS <- (A + B)
    PUSH C              TOS <- C
    PUSH D              TOS <- D
    ADD                 TOS <- (C + D)
    MUL                 TOS <- (C + D) * (A + B)
    POP X               M[X] <- TOS

    • Advantage: smaller instruction word.
    • Disadvantage: data needs to be loaded onto the stack first, and the implied destination may not be the desired address, so a data transfer to the actual location is required.
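The zero-address program above maps directly onto a tiny stack machine. The sketch below (illustrative Python, not part of the guide; the sample operand values are my own) runs that exact instruction sequence and leaves the result of X = (A + B) * (C + D) in memory:

```python
def run_stack_program(program, memory):
    """Execute zero-address (stack) instructions of the kind shown above."""
    stack = []
    for op, *operand in program:
        if op == "PUSH":
            stack.append(memory[operand[0]])        # TOS <- M[addr]
        elif op == "POP":
            memory[operand[0]] = stack.pop()        # M[addr] <- TOS
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)                     # operate on the top two elements
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return memory

mem = {"A": 2, "B": 3, "C": 4, "D": 5, "X": 0}
prog = [("PUSH", "A"), ("PUSH", "B"), ("ADD",),
        ("PUSH", "C"), ("PUSH", "D"), ("ADD",),
        ("MUL",), ("POP", "X")]
print(run_stack_program(prog, mem)["X"])   # (2 + 3) * (4 + 5) = 45
```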
2.6 Using Different Instruction Formats

• Consider a computer system with a 40-bit-wide instruction word and 12 instructions in the instruction set (4 bits for the opcode, leaving 36 bits for the operands). How many bits would be allocated to the operands in each of the instruction formats?

2.6.1 Three Address Format

    • 12 bits per operand field
    • The addressable range would be 2^12 = 4096 locations

    Format: OPCODE (4 bits) | OPER 1 (12 bits) | OPER 2 (12 bits) | OPER 3 (12 bits)

    • Figure: Main memory of 4096 locations, addresses 0 to 2^12 - 1.
2.6.2 Two Address Format

    • 18 bits per operand field
    • The addressable range would be 2^18 = 256K locations

    Format: OPCODE (4 bits) | OPER 1 (18 bits) | OPER 2 (18 bits)

    • Figure: Main memory of 256K locations, addresses 0 to 2^18 - 1.

2.6.3 One Address Format

    • 36 bits per operand field
    • The addressable range would be 2^36 = 64G locations

    Format: OPCODE (4 bits) | OPER 1 (36 bits)
    • Figure: Main memory of 64G locations, addresses 0 to 2^36 - 1.
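The arithmetic behind Section 2.6 can be checked in a few lines. This sketch (illustrative Python, not from the guide) divides the operand bits of the 40-bit instruction word among the address fields and reports the resulting addressable range:

```python
def operand_field(word_bits: int, opcode_bits: int, n_address_fields: int):
    """Bits per operand field and addressable locations for one format."""
    field_bits = (word_bits - opcode_bits) // n_address_fields
    return field_bits, 2 ** field_bits

for n in (3, 2, 1):
    bits, locations = operand_field(40, 4, n)
    print(f"{n}-address format: {bits} bits per field, {locations:,} locations")
# 3-address: 12 bits, 4,096 locations
# 2-address: 18 bits, 262,144 locations (256K)
# 1-address: 36 bits, 68,719,476,736 locations (64G)
```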
• Exercises:

(a) Write programs to compute X = (A + B) * (C * D / E) on each of the following machines:
    i)   0-address format
    ii)  1-address format
    iii) 2-address format
    iv)  3-address format

    The instructions available for use are as follows:

    0-address    1-address    2-address    3-address
    PUSH         LOA M        ADD A, B     ADD A, B, B
    POP          STO M        DIV D, E     DIV D, E, E
    MUL          ADD M        MUL C, D     MUL C, E, E
    DIV          SUB M        MUL A, C     MUL B, E, X
    ADD          MUL M        MOV X, A
    SUB          DIV M

(b) Assume a system has a 24-bit-wide instruction word. Calculate the minimum number of bits needed for the opcode in order to evaluate the above expression on each of the machines in part (a). Hence calculate the maximum number of bits available for the operands on each of the machines.
(c) A computer system has a 128-bit instruction word and uses a 3-address format. There are 155 different general-purpose registers available. Assume that the use of any of these registers is required in a particular instruction, i.e. the appropriate register must be specified in the instruction word as a special field. If there are 200 different opcodes/instructions available for the system, what is the instruction format like?
Chapter 3. Addressing Methods

• Address fields in a typical instruction format are quite limited in addressable range, so it is desirable to give them the ability to reference a larger range of locations in main memory or, in some systems, virtual memory. To achieve this objective, a variety of addressing techniques has been employed. They all involve some trade-off between addressing range and/or addressing flexibility on the one hand, and the number of memory references and/or the complexity of address calculation on the other.

• In many computer systems, the computer will hold several different programs. To efficiently load these programs into, and remove them from, different locations in memory, addressing techniques are provided that make a program re-locatable, meaning that the same program can be run in many different sections of memory.

3.1 Addressing Techniques

• All computers provide more than one type of addressing mode. The question arises as to how the control unit can determine which addressing mode is being used in a particular instruction. Several approaches are taken. Often, different opcodes will use different addressing modes. Also, one or more bits in the instruction format can be used as a mode field; the value of the mode field determines which addressing mode is to be used. Another thing to note is the effective address, which is either a main memory address or a register address.
3.1.1 Immediate Addressing

• The operand is given explicitly in the instruction. This mode is used for specifying address and data constants in programs:

    MOV 200_immediate, R0

• The instruction places the value 200 in register R0. The immediate mode is used to specify the value of a source operand. It makes no sense as a destination because it does not specify a location in which an operand can be stored. Using a subscript to denote the immediate mode is not appropriate in an assembly language. Sometimes the value is written as it is, e.g. 200, but this can be confusing. A common convention is to use the pound sign # in front of the value of an operand to indicate that this value is to be used as an immediate operand. Using this convention, the above instruction is written as:

    MOV #200, R0

    • Example:
        • Load 5000: move the value 5000 into the accumulator

    • Figure: Immediate addressing technique. The value 5000 given in the instruction is placed directly into the accumulator.
3.1.2 Direct Addressing

• To specify a memory address in an instruction word, the most obvious technique is simply to give the address in binary form. Although direct addressing provides the most straightforward (and fastest) way to give a memory address, several other techniques are also used. It requires only one memory reference and no special calculation; however, because of this it has only a limited address space. A typical instruction using direct addressing may look as follows:

    MOV AX, [7000]

• In this case the instruction specifies that the contents of [7000], which is a memory location, must be retrieved and placed into the accumulator AX. A common convention for denoting an instruction using direct addressing is to place brackets around the address, e.g. [address].

    • Example:
        • Load [5000]: move the content at address 5000 into the accumulator

    • Figure: Direct addressing technique. Address 5000 in the instruction selects the memory word ABCD, which is copied into the accumulator.
3.1.3 Indirect Addressing

• The effective address of the operand is the contents of a main memory location, namely the location whose address appears in the instruction. We denote indirection by prefixing the name of the memory location (or register address, as we will see later in this chapter) with a @ symbol. The memory location that contains the address of the operand is called a pointer. Indirection is an important and powerful concept in programming. Consider the analogy of a treasure hunt: in the instructions for the hunt you may be told to go to a house at a given address. Instead of finding the treasure there, you find a note that gives you another address where you will find the treasure. By changing the note, the location of the treasure can be changed, but the instructions for the hunt remain the same. Changing the note is equivalent to changing the contents of a pointer in a computer program. An example instruction may look like:

    MUL #10, @1000, [2000]

• In this instruction we are asked to multiply the value 10 by the contents of whatever address is found at location 1000. For example, when we first look at location 1000, its contents are the address 3000, from which we retrieve the actual value. The final part of the above instruction asks us to store the result of the operation at address 2000 in main memory.

    • Example:
        • Load @5000: access main memory at address 5000 to get the address of the actual data. Access main memory at this address to retrieve the actual data into the accumulator.
    • Figure: Indirect addressing technique. The instruction Load @5000 reads location 5000, which contains the address ABCD; location ABCD holds the data FACE, which is copied into the accumulator.

3.1.4 Register Addressing

• Similar to direct addressing, except that the operand field refers to a register containing the actual data, instead of a main memory address. If register addressing is heavily used in an instruction set, the CPU registers will be heavily used. Because of the severely limited number of registers (compared to main memory locations), their use in this fashion makes sense only if they are employed efficiently. The main advantage of register addressing is the access time; the obvious disadvantage is the limited number and size of the registers. A typical instruction could be similar to that for direct addressing, for example:

    MOV AX, R1

• In this case the instruction specifies that the contents of R1, which is a register, must be retrieved and placed into the accumulator AX. The common convention for denoting register addressing is to write an R followed by the register number, e.g. R3.
    • Example:
        • Load R5: move the content of register R5 into the accumulator

    • Figure: Register addressing technique. Register R5 holds the data ABCD, which is copied into the accumulator.

3.1.5 Register Indirect Addressing

• Similar to indirect addressing, except that the operand field points to a register containing the effective address of the data. The effective address may be a memory location or a register location. The same advantages and disadvantages that apply to indirect addressing also apply here. A typical instruction might be, again similar to indirect addressing:

    ADD #20, @R8, R3

• In this instruction we are asked to add the value 20 to the contents of whatever address is found in register R8. For example, when we first look at R8, its contents are the address 2000 (in this case the effective address is a memory location), from which we retrieve the value. The final part of the above instruction asks us to store the result of the operation in register R3.
    • Example:
        • Load @R3: access register R3 to get the address of the actual data (assume this address is itself a register). Access that register to retrieve the actual data into the accumulator.

    • Figure: Register indirect addressing technique. Register R3 holds the register address R5; register R5 holds the data FACE, which is copied into the accumulator.

3.1.6 Displacement Addressing

• The operand field contains an offset or displacement. This mode uses a special-purpose register whose contents are added to the offset to produce the effective address of the data; basically, it combines the capabilities of direct addressing and register indirect addressing. Under displacement addressing there are three basic techniques, although it should be noted that all follow the same generic method of specifying an effective address:
    1. Relative addressing (PC)
    2. Based addressing (BX)
    3. Indexed addressing (SI)
    • Example:
        • Load [displacement register (e.g. PC, BX, or SI) + 5]: move the data at location (e.g.) PC + 5 into the accumulator.

    • Figure: Displacement addressing technique. The instruction Load [PC + 5] is fetched from address 5000; the incremented PC (5001) is added to the offset 5 to give the relative (effective) address 5006, whose content ABCD is copied into the accumulator.
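The six modes in Section 3.1 differ only in how the operand value is obtained from the address field. The sketch below (illustrative Python; the dictionaries standing in for memory and the register file, and the treatment of the figure addresses as hexadecimal, are my own simplifications) computes the operand for each mode using the same example values as the figures above:

```python
memory = {0x5000: 0xABCD, 0xABCD: 0xFACE, 0x5006: 0xABCD}
registers = {"R3": "R5", "R5": 0xFACE, "PC": 0x5001}

def operand(mode, field):
    """Return the operand value for the given addressing mode."""
    if mode == "immediate":          # the value is in the instruction itself
        return field
    if mode == "direct":             # the field is a memory address
        return memory[field]
    if mode == "indirect":           # memory holds the address of the data
        return memory[memory[field]]
    if mode == "register":           # the field names a register holding the data
        return registers[field]
    if mode == "register_indirect":  # the register holds the address (here another register)
        return registers[registers[field]]
    if mode == "displacement":       # base register + offset gives the effective address
        base, offset = field
        return memory[registers[base] + offset]
    raise ValueError(mode)

print(hex(operand("immediate", 0x5000)))         # 0x5000
print(hex(operand("direct", 0x5000)))            # 0xabcd
print(hex(operand("indirect", 0x5000)))          # 0xface
print(hex(operand("register_indirect", "R3")))   # 0xface
print(hex(operand("displacement", ("PC", 5))))   # 0xabcd  (PC = 5001, 5001 + 5 = 5006)
```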
3.2 Assembly Language

• Machine instructions are represented by patterns of 0s and 1s. Such patterns are awkward to deal with when writing programs, so we use symbols to represent the patterns.

• For example, in the case of the Move and Add instructions, we use the symbolic names MOVE and ADD to represent the corresponding operation-code patterns. Similarly, we use the notation R3 to refer to register number three. A complete set of such symbolic names and the rules for their use constitute a programming language generally referred to as an assembly language. The symbolic names are called mnemonics; the set of rules for using the mnemonics in the specification of complete instructions and programs is called the syntax of the language.

• Programs written in an assembly language can be automatically translated into a sequence of machine instructions by a special program called an assembler. The assembler, like any other program, is stored as a sequence of machine instructions in the main memory. A user program is usually entered into the computer as a set of lines of alphanumeric characters. When the assembler program is executed, it reads the user program, analyses it, and then generates the desired machine-language program. The latter contains the patterns of 0s and 1s specifying the instructions that will be executed by the computer. The user program in its original alphanumeric text format is called the source program, and the assembled machine-language program is called the object program.
CS212                                           CHAPTER 3 – ADDRESSING METHODS

3.2.1   Instruction Notation
•   The assembly language syntax may require us to write the MOVE instruction as
        MOVE R0, SUM
•   The mnemonic, MOVE, represents the operation performed by the instruction. The assembler translates this mnemonic into a binary code that the computer understands. The binary code is usually referred to as the opcode, because it specifies the operation denoted by the mnemonic. The opcode mnemonic is followed by at least one blank space, then the information that specifies the operands. In the example above, the source operand is in register R0. This information is followed by the specification of the destination operand, separated from the source operand by a comma. The destination operand is in the memory location whose address is represented by the name SUM. Since there are several possible addressing modes for specifying operand locations, the assembly language syntax must indicate which mode is being used. For example, a numerical value or a name used by itself, such as SUM in the preceding instruction, may denote the direct mode. The pound sign usually denotes an immediate operand. Thus the instruction
        ADD #5, R3
    adds the number 5 to the contents of register R3 and puts the result back into register R3. The pound sign is not the only way to denote immediate addressing. In some cases, the intended addressing mode is indicated by the opcode used: the assembly language may have different opcode mnemonics for different addressing modes. For example, the previous Add instruction may have to be written as
        ADDI 5, R3
                                                                                                     3- 10
CS212                                          CHAPTER 3 – ADDRESSING METHODS
•   The mnemonic ADDI states that the source operand is given in the immediate addressing mode. Putting parentheses around the name or symbol denoting the pointer to the operand usually specifies indirect addressing. For example, if the number 5 is to be placed in a memory location whose address is held in register R2, the desired action can be specified as
        MOVE #5, (R2)       or perhaps       MOVEI 5, (R2)

3.2.2   Number Notation
•   When dealing with numerical values, it is often convenient to use the familiar decimal notation. Of course, these values are stored in the computer as binary numbers. In some situations it is more convenient to specify the binary patterns directly. Most assemblers allow numerical values to be specified in different ways, using conventions that are defined by the assembly language syntax. Consider, for example, the number 93, which is represented by the 8-bit binary number 01011101. If this value is to be used as an immediate operand, it can be given as a decimal number, as in the instruction
        ADD #93, R1
    or as a binary number identified by a percent sign, as in
        ADD #%01011101, R1
                                                                                                     3- 11
CS212                                        CHAPTER 3 – ADDRESSING METHODS
•   Binary numbers can be written more compactly as hexadecimal numbers, in which four bits are represented by a single hexadecimal digit. The hexadecimal notation is a direct extension of the BCD (binary coded decimal) code: the first ten patterns 0000, 0001, …, 1001 are represented by the digits 0, 1, …, 9, as in BCD, and the remaining six 4-bit patterns, 1010, 1011, …, 1111, are represented by the letters A, B, …, F.
•   Thus, in hexadecimal representation, the decimal value 93 becomes 5D. In assembler syntax, a hexadecimal value is often identified by a dollar sign, so we can write
        ADD #$5D, R1
                                                                                                     3- 12
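A quick arithmetic check, written as a small C program of my own, that decimal 93, binary 01011101 and hexadecimal 5D all name the same value.

    #include <stdio.h>

    int main(void) {
        /* weight each bit of 01011101 */
        int from_binary = 0*128 + 1*64 + 0*32 + 1*16 + 1*8 + 1*4 + 0*2 + 1*1;
        int from_hex    = 0x5D;                 /* 5*16 + 13 */

        printf("%d %d %d\n", 93, from_binary, from_hex);   /* prints 93 93 93 */
        return 0;
    }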
CS212                                        CHAPTER 3 – ADDRESSING METHODS
•   Exercises
•   Student Notes:
                                                                                                     3- 13
CS212                                        CHAPTER 3 – ADDRESSING METHODS
                                                                                                     3- 14
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES

Chapter 4. Stacks & Subroutines
•   A very useful feature included in many computers is a memory stack, also known as a last-in-first-out (LIFO) list. A stack is a storage device that stores information in such a manner that the item stored last is the first item retrieved. The operation of a stack is sometimes compared to a stack of books: the last book placed on top of the stack is the first to be taken off.
•   The stack is useful for a variety of applications, and its organisation possesses special features that facilitate many data processing tasks. A stack is used in some electronic calculators and computers to facilitate the evaluation of arithmetic expressions. However, its use in computers today is mostly for handling subroutines and interrupts.
•   A memory stack is essentially a portion of a memory unit accessed by an address that is always incremented or decremented after the memory access. The register that holds the address for the stack is called the stack pointer (SP) because its value always points at the top item of the stack. The two operations on a stack are insertion and deletion of items.
•   The operation of insertion onto a stack is called PUSH; it can be thought of as pushing something onto the top of the stack.
•   The operation of deletion is called POP; it can be thought of as removing one item so that the stack pops out.
•   However, nothing is physically pushed or popped in a memory stack. These operations are simulated by decrementing or incrementing the stack pointer register.
                                                                                                     4-1
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   Example: The diagram below shows a portion of a memory organised as a stack.

        ADDRESS     MEMORY
          099                       <- Stack Limit
          100
          101         C             <- SP = 101 (top of stack)
          102         B
          103         A             <- Stack Base

    A memory stack
•   The stack pointer SP holds the binary address of the item that is currently on top of the stack. Three items are presently stored in the stack: A, B and C, in consecutive addresses 103, 102 and 101 respectively. Item C is on top of the stack, so SP contains the address 101. To remove the top item, the stack is popped by reading the item at address 101 and incrementing SP. Item B is now on top of the stack, since SP contains the address 102. To insert a new item, the stack is pushed by first decrementing SP and then writing the new item on top of the stack. Note that item C has been read out but not physically removed. This does not matter as far as the stack operation is concerned, because when the stack is pushed a new item is written on top of the stack regardless of what was there before.
•   We can assume that the items in the stack communicate with a data register DR. A new item is inserted with the push operation as follows:
        SP <- SP – 1
        M[SP] <- DR
                                                                                                     4-2
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   The stack pointer is decremented so that it points at the address of the next word. A memory write micro-operation then inserts the word from DR onto the top of the stack. Note that SP holds the address of the top of the stack and that M[SP] denotes the memory word specified by the address presently in SP. An item is deleted with a pop operation as follows:
        DR <- M[SP]
        SP <- SP + 1
•   The top item is read from the stack into DR. The stack pointer is then incremented to point at the next item in the stack. The two micro-operations needed for either the push or the pop are an access to memory through SP and an update of SP. Which of the two micro-operations is done first, and whether SP is updated by decrementing or incrementing, depends on the organisation of the stack. The stack may be constructed to grow by increasing the memory address; in such a case, SP is incremented for push operations and decremented for pop operations. A stack may also be constructed so that SP points at the next empty location above the top of the stack; in this case, the sequence of micro-operations must be interchanged. The stack pointer is loaded with an initial value, which must be the bottom address of the stack assigned in memory. From then on, SP is automatically decremented or incremented with every push or pop operation. The advantage of a memory stack is that the CPU can refer to it without having to specify an address, since the address is always available and automatically updated in the stack pointer.
                                                                                                     4-3
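A minimal C sketch of the push and pop micro-operations above, for a stack that grows towards lower addresses. The array, variable names and the initial stack-pointer value are my own illustration; mem[] plays the role of main memory, sp of SP and dr of the data register DR.

    #include <stdio.h>

    int mem[256];
    int sp = 104;        /* one above the stack base 103, so the first push writes to 103 */

    void push(int dr) {  /* SP <- SP - 1 ; M[SP] <- DR */
        sp = sp - 1;
        mem[sp] = dr;
    }

    int pop(void) {      /* DR <- M[SP] ; SP <- SP + 1 */
        int dr = mem[sp];
        sp = sp + 1;
        return dr;
    }

    int main(void) {
        push('A'); push('B'); push('C');   /* stack now holds A, B, C          */
        int first  = pop();
        int second = pop();
        printf("%c %c\n", first, second);  /* prints C B (last in, first out)  */
        printf("SP = %d\n", sp);           /* SP is back at the address of A   */
        return 0;
    }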
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES

4.1     Reverse Polish Notation (RPN)
•   A stack organisation is very effective for evaluating arithmetic expressions. The common mathematical method of writing arithmetic expressions imposes difficulty when they are evaluated by a computer. Conventional arithmetic expressions are written in infix notation, with each operator written between the operands. Consider the expression:
        A * B + C * D
•   To evaluate this arithmetic expression it is necessary to compute the product A * B, store this product, compute the product C * D, and then sum the two products. From this simple example we see that to evaluate arithmetic expressions in infix notation it is necessary to scan back and forth along the expression to determine the sequence of operations that must be performed.
•   The Polish mathematician Jan Lukasiewicz proposed that arithmetic expressions be written in prefix notation. This representation, referred to as Polish notation, places the operator before the operands. Postfix notation, referred to as reverse Polish notation, places the operator after the operands. The following examples demonstrate the three representations:
        A + B       Infix notation
        + A B       Prefix or Polish notation
        A B +       Postfix or reverse Polish notation
•   Reverse Polish notation, also known as RPN, is a form suitable for stack manipulation. The expression above can thus be written in RPN as:
        A B * C D * +
                                                                                                     4-4
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES

4.1.1   RPN Operation
•   Scan the expression from left to right.
•   When an operator is reached, perform the operation with the two operands to the left of the operator.
•   Remove the two operands and the operator and replace them with the number obtained from the operation.
•   Continue to scan the expression and repeat the procedure for every operator until there are no more operators.
•   For the above expression we find the operator * after A and B. We perform the operation A * B and replace A, B and * by the product to obtain
        ( A * B ) C D * +
    where ( A * B ) is a single quantity obtained from the product. The next operator is a * and its previous two operands are C and D, so we perform C * D and obtain an expression with two operands and one operator:
        ( A * B ) ( C * D ) +
•   The next operator is + and the two operands on its left are the two products, so we add the two quantities to obtain the result. The conversion from infix notation to reverse Polish notation must take into consideration the operational hierarchy adopted for infix notation. This hierarchy dictates that we first perform all arithmetic inside inner parentheses, then inside outer parentheses, then do multiplication and division, and finally addition and subtraction.
                                                                                                     4-5
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   Consider the expression:
        ( A + B ) * [ C * ( D + E ) + F ]
•   To evaluate the expression we first perform the arithmetic inside the parentheses and then evaluate the expression inside the square brackets. The multiplication C * ( D + E ) must be done prior to the addition of F. The last operation is the multiplication of the two terms between the parentheses and the brackets. The expression can be converted to RPN by taking the operation hierarchy into consideration. The converted expression is
        A B + D E + C * F + *
•   Proceeding from left to right, we first add A and B, then add D and E. At this point we are left with
        ( A + B ) ( D + E ) C * F + *
    where ( A + B ) and ( D + E ) are each a single number obtained from the sum. The two operands for the next * are C and ( D + E ). These two numbers are multiplied and the product added to F. The final * causes the multiplication of the last result with the number ( A + B ). Note that all expressions in RPN are written without parentheses. The subtraction and division operations are not commutative, so the order of the operands is important: we define the RPN expression A B – to mean ( A – B ) and the expression A B / to represent the division A / B.
                                                                                                     4-6
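The study guide performs the infix-to-RPN conversion by hand. One standard way to mechanise it is Dijkstra's shunting-yard algorithm, sketched below in C as my own illustration (the guide does not name or require this algorithm). Operands are single letters and only + - * / and brackets are handled. Note that the program emits AB+CDE+*F+*, an equivalent RPN form: it writes C before D E +, whereas the hand conversion above writes D E + C *; both denote the same product.

    #include <stdio.h>
    #include <ctype.h>

    static int prec(char op) {                 /* higher number = binds tighter */
        return (op == '*' || op == '/') ? 2 :
               (op == '+' || op == '-') ? 1 : 0;
    }

    static void to_rpn(const char *in, char *out) {
        char stack[64];
        int top = -1, n = 0;

        for (; *in; in++) {
            char c = *in;
            if (isalpha((unsigned char)c)) {
                out[n++] = c;                          /* operands go straight out      */
            } else if (c == '(' || c == '[') {
                stack[++top] = c;
            } else if (c == ')' || c == ']') {
                while (top >= 0 && stack[top] != '(' && stack[top] != '[')
                    out[n++] = stack[top--];           /* empty back to the bracket     */
                top--;                                 /* discard the bracket           */
            } else if (prec(c) > 0) {
                while (top >= 0 && prec(stack[top]) >= prec(c))
                    out[n++] = stack[top--];           /* pop higher/equal precedence   */
                stack[++top] = c;
            }                                          /* anything else is ignored      */
        }
        while (top >= 0)
            out[n++] = stack[top--];
        out[n] = '\0';
    }

    int main(void) {
        char rpn[128];
        to_rpn("(A+B)*[C*(D+E)+F]", rpn);
        printf("%s\n", rpn);      /* prints AB+CDE+*F+* */
        return 0;
    }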
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES

4.2     Stack Operations
•   Reverse Polish notation combined with a stack provides an efficient way to evaluate arithmetic expressions. This procedure is employed in some electronic calculators and also in some computers. The stack is particularly useful for handling long, complex problems involving chain calculations. It is based on the fact that any arithmetic expression can be expressed in parentheses-free Polish notation.
•   The procedure consists of first converting the arithmetic expression into its equivalent RPN. The operands are pushed onto the stack in the order in which they appear. The initiation of an operation depends on whether we have a calculator or a computer: in a calculator, the operators are entered through the keyboard; in a computer, they must be initiated by program instructions. When an operation is specified, the two topmost operands on the stack are popped and used for the operation, and the result of the operation is pushed onto the stack, replacing the lower operand. By continuously pushing the operands onto the stack and performing the operations as defined above, the expression is evaluated in the proper order and the final result remains on top of the stack.
•   Example: The following expression will clarify the procedure:
        ( 3 * 4 ) + ( 5 * 6 )
    In reverse Polish notation it is expressed as:
        3 4 * 5 6 * +
•   Now consider the above operations in the stack, as shown in the steps below:
                                                                                                     4-7
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   Step 1: input 3 is pushed – SP = 103, stack holds 3 (at address 103).
•   Step 2: input 4 is pushed – SP = 102, stack holds 4 (102) and 3 (103).
•   Step 3: operator * – 4 and 3 are popped and multiplied; the product 12 is pushed – SP = 103, stack holds 12 (103).
                                                                                                     4-8
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   Step 4: input 5 is pushed – SP = 102, stack holds 5 (102) and 12 (103).
•   Step 5: input 6 is pushed – SP = 101, stack holds 6 (101), 5 (102) and 12 (103).
•   Step 6: operator * – 6 and 5 are popped and multiplied; the product 30 is pushed – SP = 102, stack holds 30 (102) and 12 (103).
•   Step 7: operator + – 30 and 12 are popped and added; the sum 42 is pushed – SP = 103, stack holds the final result 42 (103).
                                                                                                     4-9
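A minimal C sketch, of my own, that mirrors the seven steps above: it evaluates the RPN string "34*56*+" with a memory stack growing from address 103 towards lower addresses. The array and variable names are illustrative; only + and * are handled, and operands are single digits.

    #include <stdio.h>

    int mem[256];
    int sp = 104;                       /* one above the stack base 103 */

    void push(int v) { mem[--sp] = v; }
    int  pop(void)   { return mem[sp++]; }

    int main(void) {
        const char *rpn = "34*56*+";

        for (const char *p = rpn; *p; p++) {
            if (*p >= '0' && *p <= '9') {
                push(*p - '0');                   /* operand: push it           */
            } else {
                int b = pop(), a = pop();         /* operator: pop two operands */
                push(*p == '*' ? a * b : a + b);  /* push the result back       */
            }
            printf("token %c: SP = %d, top = %d\n", *p, sp, mem[sp]);
        }
        printf("result = %d\n", pop());           /* prints result = 42 */
        return 0;
    }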
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES

4.2.1   Computer Stack (from Chapter 2)
•   The PUSH and POP instructions require one address field to specify the source or destination operand. Operation-type instructions for the stack, such as ADD and MUL, imply the two operands on top of the stack and do not require an address field in the instruction. The following program shows how the expression X = ( A + B ) * ( C + D ) is evaluated:
        PUSH A      TOS <- A
        PUSH B      TOS <- B
        ADD         TOS <- ( A + B )
        PUSH C      TOS <- C
        PUSH D      TOS <- D
        ADD         TOS <- ( C + D )
        MUL         TOS <- ( C + D ) * ( A + B )
        POP X       M[X] <- TOS
                                                                                                     4-10
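A small sketch of a zero-address stack machine executing the program above for X = (A + B) * (C + D). The opcode names, the tiny instruction format and the sample data values are assumptions made for the example; A, B, C, D and X stand for memory addresses.

    #include <stdio.h>

    enum op { PUSH, POP, ADD, MUL };
    struct instr { enum op op; int addr; };   /* addr unused for ADD/MUL  */

    int mem[16] = {0};                        /* data memory              */
    int stack[8], tos = -1;                   /* operand stack, TOS index */

    void run(const struct instr *prog, int n) {
        for (int i = 0; i < n; i++) {
            switch (prog[i].op) {
            case PUSH: stack[++tos] = mem[prog[i].addr]; break;
            case POP:  mem[prog[i].addr] = stack[tos--]; break;
            case ADD:  stack[tos - 1] += stack[tos]; tos--; break;
            case MUL:  stack[tos - 1] *= stack[tos]; tos--; break;
            }
        }
    }

    int main(void) {
        enum { A = 0, B = 1, C = 2, D = 3, X = 4 };   /* symbolic addresses */
        mem[A] = 2; mem[B] = 3; mem[C] = 4; mem[D] = 5;

        struct instr prog[] = {
            {PUSH, A}, {PUSH, B}, {ADD, 0},
            {PUSH, C}, {PUSH, D}, {ADD, 0},
            {MUL, 0},  {POP, X}
        };
        run(prog, 8);
        printf("X = %d\n", mem[X]);   /* (2 + 3) * (4 + 5) = 45 */
        return 0;
    }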
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES

4.3     Subroutines
•   A subroutine is a self-contained sequence of instructions that performs a given computational task. During the execution of a program, a subroutine may be called to perform its function many times at various points in the program. Each time a subroutine is called, a branch is made to the beginning of the subroutine to start executing its set of instructions. After the subroutine has been executed, a branch is made again to return to the main program. A subroutine is also known as a procedure.
•   The instruction that transfers control to a subroutine is known by different names, the most common being call subroutine, call procedure, jump to subroutine, or branch to subroutine. The call subroutine instruction has a one-address field. The instruction is executed by performing two operations: the address of the next instruction, which is available in the PC (called the return address), is stored in a temporary location, and control is then transferred to the beginning of the subroutine. The last instruction that must be inserted in every subroutine is a return to the calling program. When this instruction is executed, the return address stored in the temporary location is transferred into the PC. This results in a transfer of program control back to the program that called the subroutine.
•   Different computers use different temporary locations for storing the return address. Some computers store it in a fixed location in memory, some store it in a processor register, and some store it in a stack. The advantage of using a stack for the return address is that when a succession of subroutines is called, the successive return addresses can be pushed onto the stack. The return instruction causes the stack to be popped, and the content of the top of the stack is transferred to the PC. In this way, the return is always to the program that last called the subroutine.
                                                                                                     4-11
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   A subroutine call instruction is implemented with the following micro-operations:
        SP <- SP – 1                Decrement the stack pointer
        M[SP] <- PC                 Store the return address on the stack
        PC <- effective address     Transfer control to the subroutine
    The return instruction is implemented by popping the stack and transferring the return address to the PC:
        PC <- M[SP]                 Transfer the return address to the PC
        SP <- SP + 1                Increment the stack pointer
    By using a subroutine stack, all return addresses are automatically stored by the hardware in the memory stack. The programmer does not have to be concerned with, or remember, where to return after the subroutine is executed.

4.4     Nested Subroutines
•   A common programming practice, called subroutine nesting, is to have one subroutine call another subroutine. If the return address is kept in a processor register, the return address of the second call is also stored in that register, overwriting its previous contents. Hence it is essential to save the contents of the register in some other location before calling another subroutine; otherwise the return address of the first call will be lost.
•   Subroutine nesting can be carried out to any depth. Eventually, the last subroutine called completes its computations and returns to the subroutine that called it. The return address needed for this first return is the last one generated in the nested call sequence.
                                                                                                     4-12
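A minimal C sketch of the call and return micro-operations above, using plain variables for PC, SP and main memory. All addresses below are made-up values of my own, not the ones used in the worked example that follows.

    #include <stdio.h>

    unsigned mem[0x10000];
    unsigned pc, sp;

    void call(unsigned target) {  /* SP <- SP-1 ; M[SP] <- PC ; PC <- target */
        sp = sp - 1;
        mem[sp] = pc;
        pc = target;
    }

    void ret(void) {              /* PC <- M[SP] ; SP <- SP+1 */
        pc = mem[sp];
        sp = sp + 1;
    }

    int main(void) {
        sp = 0xFF00;              /* assumed bottom of the processor stack     */
        pc = 0x2001;              /* return address after MAIN's call          */

        call(0x3000);             /* MAIN calls SUB 1                          */
        pc = 0x3005;              /* SUB 1 runs until its own call instruction */
        call(0x4000);             /* SUB 1 calls SUB 2 (nested)                */
        printf("inside SUB 2: PC=%04X SP=%04X\n", pc, sp);

        ret();                    /* back to 3005, inside SUB 1                */
        ret();                    /* back to 2001, inside MAIN                 */
        printf("after returns: PC=%04X SP=%04X\n", pc, sp);
        return 0;
    }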
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   A particular register is designated as the stack pointer to be used in the operation; the stack pointer points to a stack called the processor stack. In such a computer, a call subroutine instruction pushes the contents of the PC onto the processor stack and loads the subroutine address into the PC. The return instruction pops the return address from the processor stack into the PC.
•   Example: Stack in Main Memory
    [Diagram: SUB 1 begins at address 3700 and contains a Call SUB 2 instruction at 3800 (next address 3801), followed by RET; MAIN contains a Call SUB 1 instruction at 8500 (next address 8501); SUB 2 begins at BF00 and ends with RET; the processor STACK occupies the region between C000 and D000]
                                                                                                     4-13
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   Before execution of SUB 1
    •   PC = ________        SP = C000
    [Diagram: UNUSED region, SP and PC boxes, addresses C010 and D000, PREVIOUS DATA at the bottom of the stack area]
•   Call SUB 1 (assume no saving of registers)
    •   Increment PC = ________        Decrement SP = ________
    •   Push PC onto the stack
    •   Load SUB 1 starting address into PC = ________
    [Diagram: UNUSED region, SP and PC boxes – values to be filled in]
                                                                                                     4-14
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   Before execution of SUB 2
    •   PC = ________        SP = ________
    [Diagram: UNUSED region, SP and PC boxes – values to be filled in]
•   Call SUB 2 (assume no saving of registers)
    •   Increment PC = ________        Decrement SP = ________
    •   Push PC onto the stack
    •   Load SUB 2 starting address into PC = ________
    [Diagram: UNUSED region, SP and PC boxes – values to be filled in]
                                                                                                     4-15
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   Return from SUB 2
    •   Get the return address from the stack into PC = ________
    •   Increment SP = ________
    [Diagram: UNUSED region, SP and PC boxes – values to be filled in]
•   Return from SUB 1
    •   Get the return address from the stack into PC = ________
    •   Increment SP = ________
    [Diagram: UNUSED region, SP and PC boxes, PREVIOUS DATA – values to be filled in]
                                                                                                     4-16
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES

4.5     Parameter Transfer
•   When calling a subroutine, a program must provide the subroutine with the parameters, that is, the operands or their addresses, to be used in the computation. Later, the subroutine returns other parameters, in this case the results of the computation. This exchange of information between a calling program and a subroutine is referred to as parameter passing. Parameter passing may occur in several ways: the parameters may be placed in registers or in fixed memory locations, where they can be accessed by the subroutine. Alternatively, the parameters may be placed on a stack, possibly the processor stack used for saving the return address.
•   Passing parameters through CPU registers is straightforward and efficient. However, if many parameters are involved, there may not be enough general-purpose registers available for this purpose. Moreover, the calling program may need to retain information in some registers for use after returning from the subroutine, making these registers unavailable for passing parameters. Using a stack, on the other hand, is highly flexible; a stack can handle a large number of parameters.
                                                                                                     4-17
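A small sketch, of my own and not the guide's convention, of passing parameters on the same stack that holds the return address: the caller pushes the arguments, the call pushes the return address, and the subroutine reads its parameters relative to SP and leaves the result on top of the stack.

    #include <stdio.h>

    int mem[256];
    int sp = 200;                         /* assumed stack base */
    int pc;

    void push(int v) { mem[--sp] = v; }
    int  pop(void)   { return mem[sp++]; }

    /* subroutine: sum of the two parameters found just above the return address */
    void sum_sub(void) {
        int ret_addr = pop();             /* return address pushed by the call    */
        int b = pop(), a = pop();         /* the two parameters                   */
        push(a + b);                      /* leave the result on top of the stack */
        pc = ret_addr;                    /* return                               */
    }

    int main(void) {
        push(7);                          /* caller pushes the parameters ...     */
        push(35);
        push(0x1234);                     /* ... then the call pushes the return
                                             address (a made-up value here)       */
        sum_sub();

        printf("result = %d, back at PC = %04X\n", pop(), pc);  /* 42, 1234 */
        return 0;
    }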
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES
•   Exercises:
(a) Describe, using a specific linkage register, the low-level steps by which parameters are passed into a subroutine by a calling program. Assume [R2] is the linkage register.
(b) The principle of nested subroutines means that a subroutine can call another subroutine to perform some task. Given the diagram below, note the location of the Stack Pointer (SP) and the Program Counter (PC) before and during the calling of each subroutine, and state the starting address of the subroutine where appropriate.
    [Diagram: SUB 1 starts at 2000, with Call Sub 2 at 2300 (next address 2301); MAIN contains Call Sub 1 at 3000 (next address 3001); SUB 3 starts at 4000 and ends with Return; SUB 2 starts at 5000, with Call Sub 3 at 5500 (next address 5501); the STACK area runs from C010 to D000, with Unused Data above it at C000]
                                                                                                     4-18
CS212                                            CHAPTER 4 – STACKS & SUBROUTINES                                  4-19
CS212                                             CHAPTER 5 – INPUT OUTPUT ORGANIZATION

Chapter 5. Input Output Organization
•   The input and output subsection of a computer provides an efficient mode of communication between the central processing unit and the outside environment. Programs and data must be entered into the computer memory for processing, and results obtained from computations must be recorded or displayed to the user. Among the input and output devices commonly found in computer systems are keyboards, display terminals, printers and disks. Other input and output devices encountered are magnetic tape drives, digital plotters, optical readers, analog-to-digital converters, and various data acquisition equipment. Computers can also be used to control various processes such as machine tooling, assembly line procedures, and industrial control.
•   The input and output facility of a computer is a function of its intended application. The difference between a small and a large system depends partly on the amount of hardware the computer has available for communicating with other devices and on the number of devices connected to the system, so the modes of transfer, or architectures, will differ. However, an I/O module does have some generic functions.

5.1     I/O Module Functions
•   Control and Timing
    To co-ordinate the flow of traffic between the computer's internal resources and those of external devices, since the CPU will be communicating with multiple devices. The internal resources, such as main memory and the system bus, must be shared among a number of activities, including data I/O.
                                                                                                     5-1
CS212                                             CHAPTER 5 – INPUT OUTPUT ORGANIZATION
•   CPU Communication
    The I/O module must be able to decode commands sent to it by the CPU, such as READ SECTOR and WRITE SECTOR. These commands are sent via the control bus. The data bus is utilised for the transfer of data between the CPU and the I/O module. The address bus is used for address recognition, where each I/O device has an address unique to that particular device being controlled. Lastly there is status reporting, usually noted in the PSW, which reports the current status of an I/O module and any error conditions that may have occurred.
•   Device Communication
    Communicating with the external devices in terms of commands, status information of the various devices or of the I/O module, and the exchange of data between the I/O module and the device.
•   Data Buffering
    Data buffering is required because of the different data transfer rates of the CPU, the memory and the particular I/O devices. An I/O module must be able to operate at both device and memory speeds.
•   Error Detection
    Any error that is detected is recorded and reported to the CPU; examples are a paper jam, a bad disk track, a transmission error, and so on.
                                                                                                     5-2
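A minimal sketch, entirely illustrative (the status-bit layout, register names and the simulated device are assumptions, not part of the study guide), of how a CPU-side routine might use an I/O module's status register for data buffering: it polls until the module reports "data ready", then moves one byte at a time into a memory buffer at CPU and memory speed.

    #include <stdio.h>

    #define STATUS_READY 0x01          /* assumed "data ready" status bit */

    /* stand-ins for memory-mapped registers of the I/O module */
    static unsigned char status_reg;
    static unsigned char data_reg;

    static int simulate_device(void) { /* pretends a slow device produced a byte */
        static const char msg[] = "I/O";
        static int i = 0;
        if (msg[i] == '\0') return 0;
        data_reg = (unsigned char)msg[i++];
        status_reg |= STATUS_READY;
        return 1;
    }

    int main(void) {
        unsigned char buffer[16];
        int n = 0;

        while (simulate_device()) {
            while (!(status_reg & STATUS_READY))
                ;                                        /* poll the status register */
            buffer[n++] = data_reg;                      /* buffer the byte in memory */
            status_reg &= (unsigned char)~STATUS_READY;  /* acknowledge               */
        }
        buffer[n] = '\0';
        printf("buffered: %s\n", buffer);                /* prints "buffered: I/O" */
        return 0;
    }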
