1
22CS201
Computer Organization and Architecture
Module – I
Dr.A.Kathirvel, Dean, Computing Cluster
Sri Krishna College of Technology, Coimbatore
2
22CS201 Computer Organization and Architecture
Module I - Functional Blocks of a Computer and Data Representation
Functional Blocks of a Computer: Functional blocks and its operations.
Instruction set architecture of a CPU - registers, instruction execution cycle, Data
path, RTL interpretation of instructions, instruction set. Performance metrics.
Addressing modes. Data Representation: Signed number representation, fixed
and floating point representations, character representation. Computer arithmetic
- integer addition and subtraction, ripple carry adder, carry look-ahead adder, etc.
multiplication - shift-and add, Booth multiplier, carry save multiplier, etc.
Division restoring and non-restoring techniques, floating point arithmetic.
Dr.A.Kathirvel, Professor & DEAN,
DCSE, SKCT
Kathirvel.a@skct.edu.in
3
Module I
Text Books:
1. David A. Patterson and John L. Hennessy, “Computer
Organization and Design: The Hardware/Software Interface”, 6th
Edition, Morgan Kaufmann/Elsevier, 2020.
2. Carl Hamacher, Zvonko Vranesic, Safwat Zaky, Naraig
Manjikian, “Computer Organization and Embedded Systems”,
McGraw-Hill, 6th Edition, 2017.
Reference Books:
1. John P. Hayes, “Computer Architecture and Organization”,
McGraw-Hill, 3rd Edition, 2017
2. William Stallings, “Computer Organization and Architecture
Designing for Performance”, 11th Edition, Pearson Education
2018.
3. Vincent P. Heuring and Harry F. Jordan, “Computer System
Design and Architecture”, 2nd Edition, Pearson Education 2004.
4
Session Topic
1.1 Functional Blocks of a Computer
1.2 Operation and Operands of Computer Hardware
1.3 Instruction Set Architecture
1.4 Register Transfer Language (RTL) interpretation of
instructions
1.5 Addressing Modes -1
1.6 Addressing Modes -2
1.7 Instruction Execution Cycle
1.8 Instruction Set
1.9 Performance Metrics
Module 1- Functional blocks of Computer & Instruction Set Architecture
5
Computer Architecture : It is a set of rules and methods that describe the functionality,
organization and implementation of computer systems.
Functional units : A computer consists of the following five independent units, each with
its own functionality:
 Input unit
 Output unit
 Arithmetic and Logic unit
 Memory unit
 Control unit
1.1 - Functional Units of a Digital Computer
6
Input unit
• Input units are used by the computer to
read the data.
• e.g keyboards, mouse, joysticks,
trackballs, microphones, etc.
Output Unit
• The primary function of the output unit is
to send the processed results to the user.
• e.g monitor, printer, etc.
1.1 - Functional Units of a Digital Computer
7
Arithmetic & logical unit
• It performs arithmetic operations like
addition, subtraction, multiplication,
division and also the logical operations
like AND, OR, NOT operations.
Memory unit
• It is a storage area in which programs
and data are stored.
• The memory unit can be categorized as
primary memory and secondary memory.
1.1 - Functional Units of a Digital Computer
8
Control unit
• It coordinates the operation of the processor.
• It directs the other functional units to respond to a program's instructions.
• It is the nerve center of a computer system.
1.1 - Functional Units of a Digital Computer
9
DISCUSSION
1. Technology behind the working of input and output devices.
2. Memory types.
3. Processor – latest in market with their producer name.
4. Architecture types of computer.
1.1 - Functional Units of a Digital Computer
10
Quiz Link
1.1 - Functional Units of a Digital Computer
https://docs.google.com/forms/d/1kuMF1VOL6aoSF931L80ZPISZl
YdA-f_KBwJupMiHSvM/edit?usp=sharing
11
12
13
Introduction
– Stored program concept
– Operational concept
– MIPS instruction set
– Example
1.2.1 - Operation of Computer Hardware
– Arithmetic (integer/floating-point)
– Logical
– Shift
– Compare
– Load/store
– Branch/jump
– System control and coprocessor
1.2 - Operation and Operands of Computer Hardware
1.2.2 - Operands of Computer Hardware
– Registers operands
– Memory operands
– Constant or immediate operands
14
• There is one main memory store
• Both data and instructions reside in same
memory store
• Data and instructions are fetched (copied)
from memory over the same set of buses
Design goals:
• Maximize performance while minimizing cost and
design time
1.2.1 – Operation of Computer Hardware
a) Stored Program Concept (Von Neumann Architecture)
15
b) Operational Concepts
 List of instructions and data are stored in the
memory
 Instructions are fetched from memory for execution.
Basic steps to execute program
 Fetch : Individual instructions are transferred from
the memory to the processor
 Decode : Determines the operation to be performed
& operands required.
 Execute : Operation is processed in ALU.
 Store :Result / data is stored in memory.
1.2.1 – Operation of Computer Hardware
16
Registers
• Instruction register (IR) - Holds the
currently executing Instruction.
• Program counter (PC) - address of next
instruction to be executed.
• General-purpose register (R0 – Rn-1)
• Memory address register (MAR) –
Contains the address of memory
location to be accessed.
• Memory data register (MDR) - Contains
the data to be written into or read out of
the address location
1.2.1 – Operation of Computer Hardware
17
Worked example: the instruction ADD R1, LOCA is stored at address 8
(instructions at 0, 4, 8, 12, …), LOCA is location 16, and M[LOCA] = 25.
1. Fetch instruction:
MAR ← PC (= 8), Read control signal,
MDR ← Mem[MAR] (= ADD R1, LOCA),
IR ← MDR, PC ← PC + 4 (= 12)
2. Decode:
Operation = addition, Operands = R1, LOCA
3. Fetch data:
MAR ← operand address (= 16), Read control signal,
MDR ← M[LOCA] (= 25)
4. Execute:
R1 ← R1 + MDR
18
Example Instruction
ADD R1, LOCA // R1 = R1 + [LOCA]
( Adding the content of memory
location LOCA to register R1)
Typical Operating Steps
• MAR  PC
• MDR  Mem [MAR]
• IR  MDR
• PC  PC + 4
• MAR  IR [Operand]
• MDR  Mem [MAR]
• R1  R1 +MDR
1.2.1 – Operation of Computer Hardware
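The operating steps above can be traced with a toy simulation (a sketch only: the memory layout, the tuple encoding of the instruction, and the starting values are illustrative assumptions, not real MIPS encodings):

```python
# Toy simulation of the fetch-decode-execute steps for ADD R1, LOCA.
memory = {8: ("ADD", "R1", 16),  # instruction at address 8: ADD R1, LOCA (LOCA = 16)
          16: 25}                # data: M[LOCA] = 25

regs = {"R1": 10, "PC": 8, "MAR": 0, "MDR": None, "IR": None}

# Fetch
regs["MAR"] = regs["PC"]            # MAR <- PC
regs["MDR"] = memory[regs["MAR"]]   # MDR <- Mem[MAR]
regs["IR"] = regs["MDR"]            # IR  <- MDR
regs["PC"] += 4                     # PC  <- PC + 4

# Decode
op, dest, addr = regs["IR"]

# Operand fetch
regs["MAR"] = addr                  # MAR <- IR[operand address]
regs["MDR"] = memory[regs["MAR"]]   # MDR <- Mem[MAR]

# Execute
if op == "ADD":
    regs[dest] = regs[dest] + regs["MDR"]   # R1 <- R1 + MDR

print(regs["R1"], regs["PC"])  # 35 12
```

Running the steps in this order is exactly the MAR/MDR/IR/PC sequence listed above.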
19
c) Instructions
– Language of the machine
– More primitive than higher level languages
( e.g., no sophisticated control flow such as while or for loops.)
– Very restrictive
• e.g., MIPS arithmetic instructions
MIPS instruction set
– MIPS stands for Microprocessor without Interlocked Pipeline Stages (not the
performance metric “million instructions per second”).
– It is a RISC (Reduced Instruction Set Computer) ISA
1.2.1 - Operation of Computer Hardware
20
MIPS Arithmetic
– All MIPS arithmetic instructions have 3 operands
– Operand order is fixed (e.g., destination first)
– Example:
C code: A = B + C
MIPS code: add a, b, c # The sum of b and c is placed in a
add $s0, $s1, $s2 # The sum of register content s1 and s2
is placed in s0
1.2.1 - Operation of Computer Hardware
Types of MIPS instructions
21
1.2.1 - Operation of Computer Hardware
22
Example 1
• sum of four variables b, c, d, and e into variable a.
• a=b+c+d+e
– add a, b, c # The sum of b and c is placed in a (a=b+c)
– add a, a, d # The sum of b, c, and d is now in a ( a=a+d)
– add a, a, e # The sum of b, c, d, and e is now in a (a=a+e)
1.2.1 - Operation of Computer Hardware
It takes three instructions to sum four variables.
i.e. the compiler must break the statement a=b+c+d+e into 3 assembly instructions,
since only one operation is performed per instruction.
23
• X = a+b-c
• add x, a, b
• sub x, x, c
• X = (a+b)-(c+d)
• add t, a, b
• add r, c, d
• sub x, t, r
• Operand forms: add $s0,$s1,$s2 (registers), Add $s0,$s1,LOCA (memory), Add $s0,$s1,4 (immediate)
24
Example 2
• Take five variables a,b, c, d, and e .
• d=b+c-e
– split into
– a=b+c &
– d=a-e
– add a, b, c # The sum of b and c is placed in a
– sub d, a, e # Subtract e from a and place the result in d
1.2.1 - Operation of Computer Hardware
It takes two instructions.
The translation into MIPS assembly language instructions is performed by the compiler
25
Example 3
• Take five variables f, g, h, i, and j:.
• f = (g + h) – (i + j)
– add t0,g,h # temporary variable t0 contains g + h
– add t1,i,j # temporary variable t1 contains i + j
– sub f,t0,t1 # gets t0 – t1, which is (g + h) – (i + j)
1.2.1 - Operation of Computer Hardware
• The first MIPS instruction calculates the sum of g and h and places the result in
temporary variable t0
• The second instruction places the sum of i and j in temporary variable t1
• Finally, the subtract instruction subtracts the second sum from the first and
places the difference in the variable f
26
a) Registers operands
– The operands of arithmetic instructions are restricted.
– They must come from special locations built directly into hardware, called registers.
– The size of a register in the MIPS architecture is 32 bits (1 word)
• Register Representation
– Two-character names following a dollar sign
– E.g $s0, $s1, . . .
– add $s3,$s1,$s2
( add register content of s1, s2 and place it in register s3)
1.2.2 - Operands of Computer Hardware
27
Example 1
• f = (g + h) – (i + j);
– add $t0,$s1,$s2 # register $t0 contains g + h
– add $t1,$s3,$s4 # register $t1 contains i + j
– sub $s0,$t0,$t1 # gets $t0 – $t1 in s0, which is (g + h)–(i + j)
1.2.2 - Operands of Computer Hardware
variables: f g h i j
registers: $s0 $s1 $s2 $s3 $s4
$s0, $s1, … → general-purpose (saved) registers
$t0, $t1, … → temporary registers
28
• f = (g + h) – (i + j);
• Add $t0,$s1,$s2
• Add $t1,$s3,$s4
• Sub $s0, $t0,$t1
29
• X = a+b-c
• $s0 = x ; $s1, $s2, $s3 = a, b, c
• Using a temporary register:
• add $t0,$s1,$s2
• sub $s0,$t0,$s3
• (or, overwriting $s1:)
• add $s1,$s1,$s2
• sub $s0,$s1,$s3
30
b) Memory operands
– MIPS transfers data between memory and processor registers.
– Data transfer instructions are used for this type of operation
– Two types
• load word (lw) - copies data from memory to a register
• store word (sw) - copies data from register to a memory
– Example 1
lw $t0,8($s3) , lw $s1,50($s4)
– Example 2
sw $s1,100($s2), sw $t2, 32($s5)
1.2.2 - Operands of Computer Hardware
31
– Example lw $t0,8($s3)
» $t0 – Temporary register = Memory[$s3+8]
» $s3 - base address
» 8 offset
1.2.2 - Operands of Computer Hardware
32
[Diagram: byte-addressable memory; each word holds 4 bytes numbered 0–3.
1 byte = 8 bits; 1 word = 32 bits = 4 bytes]
33
Memory Organization - Alignment : Byte Order
• Bytes in a word can be numbered in two ways:
– big-endian
– little-endian
• In a 32-bit computer,
1 word = 4 bytes (4 × 8 bits = 32 bits)
Big-endian :
 byte 0 at the leftmost (most significant) to
 byte 3 at the rightmost (least significant),
1.2.2 - Operands of Computer Hardware
34
Memory Organization - Alignment : Byte Order
Little-endian
• byte 3 at the leftmost (most significant)
• byte 0 at the rightmost (least significant)
1.2.2 - Operands of Computer Hardware
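The two byte orders can be demonstrated with Python's struct module; the packed bytes of the same 32-bit word come out in opposite orders:

```python
import struct

value = 0x01020304  # a 32-bit word

big = struct.pack(">I", value)     # big-endian: most significant byte first
little = struct.pack("<I", value)  # little-endian: least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201
```

The big-endian packing stores byte 0 (the most significant byte) at the lowest address, and the little-endian packing stores it at the highest, exactly as in the two numbering schemes above.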
35
Example 1
• Assume that A is an array of 100 words
• Perform g = h + A[8];
• There is a single operation in this assignment statement, but one of
the operands is in memory.
• So first perform a load operation to transfer the content
of the memory location to a register, then perform the addition
operation
• The compiler has associated the
starting address (base address) of the array with register $s3.
1.2.2 - Operands of Computer Hardware
variables g h Base address Temporary Register
registers $s1 $s2 $s3 $t0
36
Example 1
• First transfer A[8] from memory into a temporary register
• lw $t0,8($s3) # Temporary reg $t0 gets A[8].
• Add the temporary register content to the content of register s2 (h), then place the result in register s1
(g)
• add $s1,$s2,$t0 # g = h + A[8]
1.2.2 - Operands of Computer Hardware
g = h + A[8];
lw $t0,8($s3)
add $s1,$s2,$t0
37
Example 2
• Assume that A is an array of 100 words
• Perform A[12] = h + A[8];
• A single operation, but two of the operands are in memory.
1. perform load operation
2. Perform addition
3. Perform store operation
• The compiler has associated the
starting address (base address) of the array with register $s3.
1.2.2 - Operands of Computer Hardware
variables h Base address Temporary Register
registers $s2 $s3 $t0
38
Example 2
• First transfer A[8] to temporary register , lw $t0,8($s3)
• Add t0 with s2 and place it in t0, add $t0,$s2,$t0
• Store sum into A[12]
1.2.2 - Operands of Computer Hardware
A[12] = h + A[8]
lw $t0,8($s3)
add $t0,$s2,$t0
sw $t0,12($s3)
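The lw/add/sw sequence can be mimicked in a few lines of Python (a sketch that, like the slides, treats the offsets 8 and 12 as word indices; real MIPS byte offsets would be 32 and 48, and the array contents and the value of h here are illustrative assumptions):

```python
# Simulate A[12] = h + A[8] with explicit load/add/store steps.
A = list(range(100))      # array of 100 words (illustrative values: A[i] = i)
s3 = 0                    # base address of A (index into our list)
s2 = 7                    # register $s2 holds h

t0 = A[s3 + 8]            # lw $t0, 8($s3)  -> $t0 = A[8]
t0 = s2 + t0              # add $t0, $s2, $t0
A[s3 + 12] = t0           # sw $t0, 12($s3) -> A[12] = h + A[8]

print(A[12])  # 15
```

Note that the memory operand is never manipulated in place: it is loaded into a register, operated on there, and stored back, which is the load/store discipline of the MIPS ISA.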
39
c) Constant or Immediate operands
• Small constants are used quite frequently (50% of operands)
e.g., A = A + 5;
B = B - 18;
pc = pc + 4;
• e.g
– incrementing the index of an array to point to next item
– Increment the program counter to point to the next instruction
• One approach: the constants are placed in memory when the program is loaded.
• For example,
– to add the constant 4 to register $s3,
• lw $t0, AddrConstant4($s1) # $t0 = constant 4
• add $s3,$s3,$t0 # $s3 = $s3 + $t0 ($t0 == 4)
1.2.2 - Operands of Computer Hardware
40
1.2.2 - Operands of Computer Hardware
c) Constant or Immediate operands
• An alternative method avoids the load instruction.
• Arithmetic instruction add immediate or addi
• This quick add instruction has
– one constant operand
– two register operands
• e.g To add 4 to register $s3,
– addi $s3,$s3,4 # $s3 = $s3 + 4
Advantages
• Operations are much faster when constants are included directly inside arithmetic instructions
41
1.2 - Operation and Operands of Computer Hardware
TRY YOURSELF
1. Client 1 stores its data in location B of the memory, and Client 2
sends its data directly to register R0 in the processor. Add the values
sent by the two clients and store the result in register R1.
42
1.2 - Operation and Operands of Computer Hardware
https://docs.google.com/forms/d/1r_I81vayGncf-
MwVs1O7yaAPIj6mKLOC_VodtLJISrc/edit?usp=sharing
43
44
45
• Instruction set architecture is the interface between the hardware and
the software.
• The Instruction Set Architecture (ISA) defines the way in which a microprocessor
is programmed at the machine level.
• i.e. an ISA is defined as the design of a computer from the programmer's
perspective.
1.3 Instruction Set Architecture
46
Different features considered when designing the instruction set architecture
are:
1. Types of instructions (Operations in the Instruction set)
2. Types and sizes of operands
3. Addressing Modes
4. Addressing Memory
5. Encoding and Instruction Formats
6. Compiler related issues
1.3 Instruction Set Architecture
47
1.3.1 Types of instructions:
A computer must have the following types of instructions:
a) Data transfer instructions
b) Data manipulation instructions
c) Program sequencing and control instructions
d) Input and output instructions
1.3 Instruction Set Architecture
48
a) Data transfer instructions
– perform data transfer between the various storage places in the computer
system, viz. registers, memory and I/O
– The two basic operations are Load (or Read or Fetch) and Store (or Write)
b) Data manipulation instructions
– perform operations on data and indicate the computational capabilities for the
processor.
– e.g. arithmetic operations, logical operations or shift operations
– add, sub, mul, addi, and
1.3 Instruction Set Architecture
49
c) Program sequencing and control instructions
– These change the flow of the program.
– E.g. 1 Looping : adding a list of n numbers.
– E.g. 2 Branch instructions : load a new value into the program counter.
• conditional branch & unconditional branch
1.3 Instruction Set Architecture
Sequencing:
Move DATA1, R0
Add DATA2, R0
Add DATA3, R0
...
Add DATAn, R0
Move R0, SUM
Looping:
Move N, R1
Clear R0
LOOP: Determine address of “Next” number and add “Next” number to R0 (Add R0, R0, X)
Decrement R1
Branch > 0, LOOP
Move R0, SUM
50
d) Input and output instructions
– These transfer information between the registers, memory and the input /
output devices.
– Either special instructions that exclusively perform I/O transfers are used, or
ordinary memory-access instructions themselves do the I/O transfers.
1.3 Instruction Set Architecture
51
1.3 Instruction Set Architecture
52
1.3.2 Types and sizes of operands
Various data types supported by the processor and their lengths are
• Common operand size
– Character (8 bits),
– Half word (16 bits),
– Word (32 bits),
– Single Precision Floating Point (1 Word),
– Double Precision Floating Point (2 Words),
• Operand data types
– two’s complement binary numbers,
– Characters usually in ASCII
– Floating point numbers following the IEEE Standard
– Packed and unpacked decimal numbers.
1.3 Instruction Set Architecture
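A small sketch of how two's-complement values map to bit patterns for the operand sizes listed above (the helper names are illustrative, not a standard API):

```python
def to_twos_complement(value, bits):
    """Return the unsigned bit pattern of `value` in two's complement."""
    mask = (1 << bits) - 1
    return value & mask

def from_twos_complement(pattern, bits):
    """Interpret an unsigned bit pattern as a signed two's-complement value."""
    if pattern & (1 << (bits - 1)):           # sign bit set -> negative
        return pattern - (1 << bits)
    return pattern

print(hex(to_twos_complement(-5, 8)))    # 0xfb
print(from_twos_complement(0xFB, 8))     # -5
print(to_twos_complement(-1, 16))        # 65535
```

The same value occupies different patterns at different operand sizes, which is why the ISA must state the size (byte, half word, word) of every operand.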
53
Operands of Computer Hardware
– Registers operands
– Memory operands
– Constant or immediate operands
(Refer PPT 1.2- operation and operands of computer hardware )
54
1.3.3 Addressing Modes
– The way the operands are chosen during program execution
1.3 Instruction Set Architecture
Detailed explanation is in PPT
1.5-Addressing Modes
55
1.3.5 Instruction Format
 Defines the layout of an instruction.
 Includes an opcode and zero or more operands.
 Opcode : It defines an operation to be performed like Add, Subtract, Multiply,
Shift, Complement ,etc.
 Operands / Address : It is a field which contain the operand or location of
operand, i.e., register or memory location.
 e.g ADD A,B
OPCODE OPERANDS or ADDRESS
1.3 Instruction Set Architecture
56
Types of instruction format
1. Three address instruction Examples
Add A,B, C
// ( A= B+C)
2. Two address instruction
Add A,B // ( A= A+B)
3. One address instruction
Add A // ( AC= AC+A)
(AC=Accumulator )
4. Zero address instruction
CMA // (Complement
the accumulator content)
OPCODE Address 1 Address 2 Address 3
OPCODE Address 1 Address 2
OPCODE Address 1
OPCODE
1.3 Instruction Set Architecture
57
Three Address Instruction
 General Format :
Operation Destination, Source1, Source2
Example: Evaluate X = (A+B) × (C+D)
1. ADD R1,A,B R1 ← M[A] + M[B]
2. ADD R2,C,D R2 ← M[C] + M[D]
3. MUL X,R1,R2 M[X] ← R1 × R2
 Advantage :
Reduced number of instructions
 Disadvantage :
Needs more space for lengthy instructions.
1.3 Instruction Set Architecture
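A minimal interpreter for the three-address format above (the instruction tuples, the memory dictionary, and the operand values are assumptions made for the sketch):

```python
# Evaluate X = (A+B) * (C+D) with three-address instructions.
M = {"A": 2, "B": 3, "C": 4, "D": 5}   # memory, illustrative values
R = {}                                  # registers

def run(program):
    for op, dst, s1, s2 in program:
        a = R[s1] if s1 in R else M[s1]          # source may be register or memory
        b = R[s2] if s2 in R else M[s2]
        result = a + b if op == "ADD" else a * b
        # In this toy, names starting with "R" are registers, others are memory.
        (R if dst.startswith("R") else M)[dst] = result

run([("ADD", "R1", "A", "B"),    # R1 <- M[A] + M[B]
     ("ADD", "R2", "C", "D"),    # R2 <- M[C] + M[D]
     ("MUL", "X",  "R1", "R2")]) # M[X] <- R1 * R2

print(M["X"])  # 45
```

Each instruction carries a destination and two sources, so the whole expression needs only three instructions, matching the advantage noted above.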
58
• X = (A+B) × (C+D)
• A+B
– Add R1, A, B // R1 = A+B
• C+D
– Add R2, C, D // R2 = C+D
• X = (A+B) × (C+D) // X = R1 × R2
– MUL X, R1, R2
60
Two Address Instruction
General format
 Operation Destination, Source
Disadvantage :
Needs more than two instructions to do a single high-level statement.
Example: Evaluate X = (A+B) × (C+D)
1. MOV R1,A R1 ← M[A]
2. ADD R1,B R1 ← R1 + M[B]
3. MOV R2,C R2 ← M[C]
4. ADD R2,D R2 ← R2 + M[D]
5. MUL R1,R2 R1 ← R1 × R2
6. MOV X, R1 M[X] ← R1
1.3 Instruction Set Architecture
OPCODE Address 1 Address 2
61
• X = (A+B) × (C+D) with two-address instructions
• A+B (ADD A,B means A = A+B)
– MOV R1,A // R1 ← A
– ADD R1,B // R1 = R1+B
• C+D
– MOV R2,C
– ADD R2,D // R2 = R2+D
• X = (A+B) × (C+D) // X = R1×R2
– MUL R1,R2 // R1 = R1×R2
– MOV X,R1
• ( OR )
– MOV X,R1
– MUL X,R2
63
• X = (A+B) × (C+D), e.g. a=5, b=6
• A+B
– MOV R0,A // R0 = a = 5
– ADD R0,B // R0 = 5+6 = 11
• C+D
– MOV R1,C
– ADD R1,D // R1 = C+D
• X = (A+B) × (C+D) // X = R0×R1
– MOV X,R0
– MUL X,R1
64
One Address Instruction
General format
 Operation Source
 Single Accumulator Organization
 Processor register usually called Accumulator
Example: Evaluate X = (A+B) × (C+D)
1. LOAD A AC ← M[A]
2. ADD B AC ← AC + M[B]
3. STORE T M[T] ← AC
4. LOAD C AC ← M[C]
5. ADD D AC ← AC + M[D]
6. MUL T AC ← AC × M[T]
7. STORE X M[X] ← AC
1.3 Instruction Set Architecture
OPCODE Address 1
66
• X = (A+B) × (C+D) using one-address format (accumulator AC); e.g. a=5, b=6, c=2, d=4
• A+B
– LOAD A // AC = A = 5
– ADD B // AC = AC+B = 11
– STORE T // T = AC = 11
• C+D
– LOAD C // AC = C = 2
– ADD D // AC = 2+4 = 6, AC = (C+D)
• X = (A+B) × (C+D) // X = T × AC
– MUL T
– STORE X
67
Zero Address Instruction
• Stack Organization
• Operands and results are always on the stack
• It is possible to use instructions in which the locations of all operands are
defined implicitly.
• Such instructions are found in machines that store operands in a structure called a
pushdown stack.
1.3 Instruction Set Architecture
68
• Stack - LIFO
• Push -insert
• Pop –delete
• TP
• X=(A+B) *(C+D)
• (AB+ ) * (CD+)
69
Zero Address Instruction
1.3 Instruction Set Architecture
Example: Evaluate X = (A+B) × (C+D)
1. PUSH A TOS ← A
2. PUSH B TOS ← B
3. ADD TOS ← (A + B)
4. PUSH C TOS ← C
5. PUSH D TOS ← D
6. ADD TOS ← (C + D)
7. MUL TOS ← (C + D) ∗ (A + B)
8. POP X M [X] ← TOS
70
• X = (A+B) × (C+D), with a=4, b=5, c=6, d=7
• Zero-address instructions : stack
• Postfix: (AB+)(CD+)×
• A+B → AB+
– Push A ; Push B
– ADD pops 5 and 4, performs 4+5 = 9, pushes the result
• C+D → CD+
– Push C ; Push D
– ADD pops 7 and 6, performs 6+7 = 13, pushes the result
• X = (A+B) × (C+D)
– MUL pops 13 (C+D) and 9 (A+B), performs the multiplication,
and pushes the result: 117 is left on top of the stack
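The zero-address evaluation can be played out with an explicit stack; this sketch mirrors the slide's PUSH/ADD/MUL/POP trace with the same values a=4, b=5, c=6, d=7:

```python
# Zero-address (stack) evaluation of X = (A+B) * (C+D).
M = {"A": 4, "B": 5, "C": 6, "D": 7}
stack = []

def push(name): stack.append(M[name])                    # PUSH X: TOS <- M[X]
def add():      stack.append(stack.pop() + stack.pop())  # pop two, push sum
def mul():      stack.append(stack.pop() * stack.pop())  # pop two, push product
def pop(name):  M[name] = stack.pop()                    # POP X: M[X] <- TOS

push("A"); push("B"); add()   # TOS = A+B = 9
push("C"); push("D"); add()   # TOS = C+D = 13
mul()                         # TOS = 9 * 13 = 117
pop("X")

print(M["X"])  # 117
```

ADD and MUL name no operands at all: both inputs come implicitly from the top of the stack, which is what makes the instructions zero-address.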
71
TRY YOURSELF
1. Apply three address, two address and one address instruction format to
evaluate the following expressions
– X=(A+B)*(C+D)
– X = A-B+C*(D*E-F)
1.3 Instruction Set Architecture
72
1.3.4 Addressing of Memory
[Diagram: the same memory viewed two ways. Word addressing: successive words
numbered 0, 1, 2, 3, … Byte addressing: successive 4-byte words at byte
addresses 0, 4, 8, 12, 16, …]
1.3 Instruction Set Architecture
75
https://docs.google.com/forms/d/1r_I81vayGncf-
MwVs1O7yaAPIj6mKLOC_VodtLJISrc/edit?usp=sharing
1.3 Instruction Set Architecture
76
1
2
Register Transfer Language (RTL) or Register Transfer Notation
• The symbolic notation used to describe micro-operation transfers among registers is
called Register Transfer Language (RTL)
• E.g. R1 ← R2 + R3 // Add R1,R2,R3 , R1=R2+R3
• The operations performed on the data stored in registers are called Micro-operations.
• The Register Transfer Language is the symbolic representation of notations used to
specify the sequence of micro-operations.
1.4 - Register Transfer Language (RTL)
3
Register Transfer Notations
• In a computer system, data transfer takes place between processor registers and
memory and between processor registers and input-output systems.
• These data transfers can be represented by the standard notations given below:
– Processor registers - Notations R0, R1, R2…,MAR,MDR
– Addresses of memory locations - LOC, PLACE, MEM, A,B etc.
– Input-output registers - DATA IN, DATA OUT and so on.
1.4 - Register Transfer Language (RTL)
4
Register Transfer Notations
• The content of register or memory location is denoted by placing square brackets
around the name of the register or memory location.
• E.g
– Content of register - [R1], [R2],…
– Content of memory location - M[LOC] , M[A], M[B],…
1.4 - Register Transfer Language (RTL)
5
• REGISTERS
• It is a collection of flip-flops.
• Each flip-flop can store 1 bit of information.
• Computer registers are represented by capital letters.
• Ex : MAR, MDR, PC, IR, R1, R2…
• MICRO OPERATION
• Operation executed on data stored in a register.
1.4 - Register Transfer Language (RTL)
6
1. Register Transfer
• Transferring data from one register to another.
• It is represented in symbolic form by means of the replacement operator (←)
• Typically, in most situations, the transfer occurs only under a predetermined
control condition. This can be shown by the following if-then statement:
– If (P=1)
then (R2 ← R1); // Here P is a control signal generated in the control
section.
Example :
R2 ← R1 // transfer the data from register R1 into register R2.
Example :
P: R2 ← R1 // transfer the data from register R1 into register R2 if p==1.
1.4 - Register Transfer Language (RTL)
7
• Here,
• 'n' indicates the number of bits in the register.
• The 'n' outputs of register R1 are connected to the 'n' inputs of register R2.
• A load input, activated by the control variable 'P', enables the transfer into
register R2.
Control function P is a Boolean
variable that is equal to 1 or 0.
The control function is stated as
follows :
P : R2 ← R1
1.4 - Register Transfer Language (RTL)
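The conditional transfer P : R2 ← R1 can be modelled in a couple of lines (a toy software model of the clocked transfer, not hardware):

```python
# Toy model of the conditional register transfer  P: R2 <- R1.
R1, R2 = 42, 0

def clock_edge(P, src, dst):
    """On a clock edge, copy src into dst only when control signal P is 1."""
    return src if P == 1 else dst

R2 = clock_edge(P=0, src=R1, dst=R2)   # P = 0: no transfer, R2 keeps its value
R2 = clock_edge(P=1, src=R1, dst=R2)   # P = 1: R2 <- R1

print(R2)  # 42
```

The control function P plays the role of the load input: the destination register only captures the source when P is asserted.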
8
2. Memory Transfer
• Two types - Read & Write
• Read (load) :The transfer of
information from a memory unit to the
user end is called a Read operation.
• Write(store) : The transfer of new
information to be stored in the
memory is called a Write operation.
1.4 - Register Transfer Language (RTL)
9
• A memory word is designated by the letter M.
• We must specify the address of memory word while writing the memory transfer
operations.
• The address register is designated by MAR and the data register by MDR.
• Thus, a read operation can be stated as:
• Read: MDR ← M [MAR]
• The Read statement causes a transfer of information into the data register (MDR)
from the memory word (M) selected by the address register (MAR).
• And the corresponding write operation can be stated as:
• Write: M [MAR] ← R1
1.4 - Register Transfer Language (RTL)
10
3. Arithmetic Micro-operations
R3 ← R1 + R2 // ADD R3,R1,R2 The contents of R1 plus R2 are transferred to R3.
R3 ← R1 - R2 // SUB R3,R1,R2 The contents of R1 minus R2 are transferred to R3.
R1 ← R1 + 1 // ADDI R1,1 Increment the contents of R1 by one
R1 ← R1 - 1 Decrement the contents of R1 by one
R2 ← R2' Complement the contents of R2 (1's complement)
R2 ← R1 + M[LOC] The contents of R1 plus memory content of LOC are
transferred to R2.
1.4 - Register Transfer Language (RTL)
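The micro-operations in the table can be traced directly; the register values are illustrative, and some results are stored in renamed variables so the earlier ones survive for inspection:

```python
# Tracing the arithmetic micro-operations from the table above.
R1, R2 = 6, 4
M = {"LOC": 10}

R3 = R1 + R2        # R3 <- R1 + R2
R4 = R1 - R2        # R4 <- R1 - R2   (table uses R3 again; renamed to keep both)
R1 = R1 + 1         # R1 <- R1 + 1    (increment)
R2c = (~R2) & 0xFF  # R2 <- R2'       (1's complement, shown here on 8 bits)
R5 = R1 + M["LOC"]  # R5 <- R1 + M[LOC] (table writes to R2; renamed here)

print(R3, R4, R1, hex(R2c), R5)  # 10 2 7 0xfb 17
```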
11
Example: Evaluate X=(A+B) * (C+D)
Three-Address
1. ADD R1,A,B ; R1 ← M[A] + M[B]
2. ADD R2,C,D ; R2 ← M[C] + M[D]
3. MUL X,R1,R2 ; M[X] ← R1 * R2
Two-Address
1. MOV R1,A ; R1 ← M[A]
2. ADD R1,B ; R1 ← R1 + M[B]
3. MOV R2,C ; R2 ← M[C]
4. ADD R2,D ; R2 ← R2 + M[D]
5. MUL R1,R2 ; R1 ← R1 * R2
6. MOV X,R1 ; M[X] ← R1
1.4 - Register Transfer Language (RTL)
12
• X = (A+B) × (C+D)
• 3-address instructions
• A+B
– ADD R1,A,B ; R1 ← M[A]+M[B]
• C+D
– ADD R2,C,D ; R2 ← M[C]+M[D]
• X = (A+B) × (C+D)
– MUL X,R1,R2 ; M[X] ← R1×R2
• 2-address instructions
• A+B
– MOV R1,A ; R1 ← M[A]
– ADD R1,B ; R1 ← R1+M[B] // R1 = A+B
• C+D
– MOV R2,C ; R2 ← M[C]
– ADD R2,D ; R2 ← R2+M[D]
• X = (A+B) × (C+D) // X = R1×R2
– MUL R1,R2 ; R1 ← R1×R2
– MOV X,R1 ; M[X] ← R1
17
• ADDL R0, (R5) (second operand fetched via register indirect)
– MAR ← PC
– MDR ← Mem[MAR]
– IR ← MDR
– PC ← PC + 4
– X ← R0
– MAR ← IR [Operand] // MAR ← [R5]
– MDR ← Mem[MAR] (read, wait for memory)
– Y ← MDR
– Z ← X + Y
– R0 ← Z
18
Quiz Link
1.4 - Register Transfer Language (RTL)
19
1.4 - Register Transfer Language (RTL)
1
2
Addressing Mode :
✔ It refers to the way in which the operand of an instruction
is specified.
✔ It specifies the location of an operand.
• It is mainly classified as
Immediate addressing
Implicit addressing
Direct & Indirect addressing
Register addressing
Displacement addressing- Relative, base addr, index
Stack addressing
1.5 – Addressing Modes
3
4
5
1.5 – Addressing Modes
6
1.5 – Addressing Modes
Types of Addressing Modes
1. Direct / Absolute Addressing mode
2. Indirect Addressing mode
3. Register Addressing mode
4. Register Indirect Addressing mode
5. Immediate Addressing mode
6. Implicit Addressing mode
7. Indexed Addressing mode
8. Relative (PC Relative) Addressing mode
9. Stack Addressing mode
10. Auto Increment Addressing mode
11. Auto Decrement Addressing mode
12. Base Addressing mode
7
IMPLEMENTATION OF CONSTANT
1.5.1 Immediate Addressing Mode
• The constant operand is specified in the
address field of the instruction.
• The data is present in the instruction itself,
i.e. the value is given directly in the
instruction as the operand.
• # symbol is added to indicate it is a value
• E.g
– Store R2, #100
– Add #7
– Add R1,#20
1.5 – Addressing Modes
8
1.5.2 Implicit Addressing mode
• Some instructions don’t require any
operand (zero-address instructions).
• They operate directly upon the content of
the accumulator.
• E.g
• CMA (Complement) : content of the
accumulator is complemented.
• RAR (Rotate Right) : content of the
accumulator is rotated one position right.
• RAL (Rotate Left) : content of the
accumulator is rotated one position left.
1.5 – Addressing Modes
9
1.5.3 Direct Addressing Mode
• The address field of the instruction
contains the effective address (EA)
of the operand.
• Also called absolute addressing
mode.
• ADD X // AC ← AC + M[X]
• ADD R1, 4000
– EA = 4000 (Memory Address)
1.5 – Addressing Modes
Effective address(EA)
Information from which the memory address
of the operand can be determined.
10
• Direct
– ADD R1, 4000
– ADD R1, X
• Indirect
– ADD R1,(4000)
– ADD R1,(X)
35
3000
4000
4004
X
y
3000
11
• ADD R1,4000 address
– R1 R1+M[4000]
• ADD R1,#4000 value
– R1 R1+4000
• ADD R1,(4000) M(address)
– R1 R1+M[[4000]]
• ADD (x)
12
1.5.4 Indirect Addressing Mode
• Address field of instruction gives the address
where the effective address is stored in
memory.
• Need multiple memory lookups to find the
operand.
• For indirection use parentheses ( )
• E.g
• ADD (X) // AC ← AC + M[[X]]
• ADD R1, (4000)
– EA = Content of Location 4000
1.5 – Addressing Modes
13
1.5.5 Register Direct Addressing Mode
• Operand (data) is stored in the
Processor register.
• Register are given as operands of
instruction.
• Effective Address = Register
• E.g
• Add R4, R3
1.5 – Addressing Modes
14
1.5.6 Register Indirect Addressing
Mode
• Instruction specifies the register as
indirection.
• EA=(R), Effective Address is the content
of the register.
• Data value present in a content of
register (not in a register)
• E.g
• Load R3, (R2)
– Load R3, A A is memory location
– Load R3,200
– A is Effictive Address
1.5 – Addressing Modes
15
1.5.7 Relative Addressing Mode
• Effective address of the operand is
obtained by adding the content of
program counter with the address part
of the instruction.
• Effective Address = Content of Program
Counter + Address part of the
instruction
• EA = A + [PC]
• E.g
• Add A,(PC)
– EA=[A] +[PC]
1.6 – Addressing Modes
16
• EA=A+[PC]
• Pc=2000
• A constant
• EA=#30+2000=2030
• A address
• M[A]=3000
• EA=3000+2000=5000
Displacement addressing
mode
• EA=---- + -----
1. Relative
– EA= [PC]+ [A]
2. Base
– EA= [Base Reg]+ [A]
3. Index
– EA= [offset]+ [A]
17
1.5.8 Base Register Addressing Mode
• Effective address of the operand is
obtained by adding the content of
base register with the address part of
the instruction.
• EA=[Base Register] + [A]
• E.g
• Add R2(A)
– EA=[R2]+[A]
1.6 – Addressing Modes
18
1.5.9 Index Addressing Mode
• Data value present as index
• EA = X + (R)
• X=offset constant value
• Load Ri, X(R2)
– Load R2, A
– Load R3, (R2) // Load R3,A
– Load R4, 4(R2) // Load R4, 4+A
– Load R5, 8(R2) // Load R4, 8+A
– Load R6, 12(R2) // Load R4, 12+A
1.6 – Addressing Modes
Advantages & Disadvantages
19
1.6 – Addressing Modes
1.5.10 Stack Addressing Mode
• Instruction doesn’t contains any operand.
• If it is arithmetic operation, then It operate upon the stack
• Operand is at the top of the stack.
• Example: ADD
– POP top two items from the stack,
– add them, and
– PUSH the result to the top of the stack.
20
1.5.11 Auto increment Addressing
• EA =(R)
• (Ri
)+
• After accessing the operand, the content of the
register is automatically incremented to point the
next operand.
• E.g Add (R1)+
• First, the operand value is fetched.
• Then, the register R1 value is incremented by
step size ‘d’.
• Assume operand size = 2 bytes.
• After fetching 6B, R1 will be 3300 + 2 = 3302.
1.6 – Addressing Modes
21
1.5.12 Auto decrement Addressing
• EA =(R)-1
• - (Ri
)
• First, the content of the register is
decremented to point the operand.
• E.g Add -(R1)
• First, the register R1 value is decremented by
step size ‘d’.
• Assume operand size = 2 bytes.
• R1 will be 3302 – 2 = 3300.
• Then, the operand value is fetched.
1.6 – Addressing Modes
22
Comparison of addressing modes
GATE Question solutions
1.5 & 1.6 – Addressing Modes
23
1. Examine the following sequence and identify the addressing modes used,
operation done in every instruction and find the effective address by considering
R1=3000, R2=5000, R5=1000.
LOAD 10(R1),R5
SUB (R1)+, R5
ADD –(R2), R5
MOVI 2000,R5
2. Consider the following instruction
ADD A(R0),(B).
First operand (destination) “A(R0)” uses indexed addressing mode with R0 as index
register. The second operand (Source) “(B)” uses indirect addressing mode.
Determine the number of memory cycles required to execute this instruction.
1.5 & 1.6 – Addressing Modes
24
Problem Workouts
1. Write procedures for reading from and writing to a FIFO queue, using a two-address format, in
conjunction with:
– indirect addressing
– relative addressing
2. Write a sequence of instructions that will compute the value of y = x2 + 2x + 3 for a given x using
– three-address instructions
– two-address instructions
– one-address instructions
1.5 & 1.6 – Addressing Modes
25
GATE Question
Match each of the high level language statements given on the left hand side with the most
natural addressing mode from those listed on the right hand side.
(A) (1, c), (2, b), (3, a)
(B) (1, a), (2, c), (3, b)
(C) (1, b), (2, c), (3, a)
(D) (1, a), (2, b), (3, c)
1. A[1] = B[J]; a. Indirect addressing
2. while [*A++]; b. Indexed addressing
3. int temp = *x; c. Autoincrement
1.5 & 1.6 – Addressing Modes
26
https://docs.google.com/forms/d/1lNmQ4C9Gt60yOGXbsR8cG3cbBhfUQFSaQhRw
Gnj-Ff4/edit?usp=sharing
Individual Assessment
27
1
Session Topic
1.1 Functional Blocks of a Computer
1.2 Operation and Operands of Computer Hardware
1.3 Instruction Set Architecture
1.4 Register Transfer Language (RTL) interpretation of
instructions
1.5 Addressing Modes -1
1.6 Addressing Modes -2
1.7 Instruction Execution Cycle
1.8 Instruction Set
1.9 Performance Metrics
Module 1- Functional blocks of Computer & Instruction Set Architecture
2
Addressing Mode :
✔ It refers to the way in which the operand of an instruction
is specified.
✔ It specifies the location of an operand.
• It is mainly classified as
Immediate addressing
Implicit addressing
Direct & Indirect addressing
Register addressing
Displacement addressing- Relative, base addr, index
Stack addressing
1.5 – Addressing Modes
3
Instruction Format
✔ Defines the layout of an instruction.
✔ Includes an opcode and zero or more operands.
✔ Opcode : Defines the operation to be performed, e.g., Add, Subtract, Multiply,
Shift, Complement, etc.
✔ Operands / Address : A field which contains the operand or the location of the
operand, i.e., a register or memory location.
✔ e.g ADD A,B
1.5 – Addressing Modes
OPCODE OPERANDS or ADDRESS
4
Types of instruction format
1. Three-address instruction: OPCODE | Address 1 | Address 2 | Address 3
   e.g. Add A, B, C // (A = B + C)
2. Two-address instruction: OPCODE | Address 1 | Address 2
   e.g. Add A, B // (A = A + B)
3. One-address instruction: OPCODE | Address 1
   e.g. Add A // (AC = AC + A) (AC = Accumulator)
4. Zero-address instruction: OPCODE only
   e.g. CMA
1.5 – Addressing Modes
5
1.5 – Addressing Modes
6
1.5 – Addressing Modes
Types of Addressing Modes
1. Direct / Absolute Addressing mode
2. Indirect Addressing mode
3. Register Addressing mode
4. Register Indirect Addressing mode
5. Immediate Addressing mode
6. Implicit Addressing mode
7. Relative (PC Relative) Addressing mode
8. Base Addressing mode
9. Indexed Addressing mode
10. Stack Addressing mode
11. Auto Increment Addressing mode
12. Auto Decrement Addressing mode
7
IMPLEMENTATION OF CONSTANT
1.5.1 Immediate Addressing Mode
• The operand (a constant) is specified in the
address field of the instruction.
• The data is present in the instruction itself,
i.e., the value is given directly in the instruction as
the operand.
• A # symbol indicates that the field is a value.
• E.g
– Store R2, #100
– Add #7 // AC=AC+7
– Add R1,#20 //R1=[R1]+20
1.5 – Addressing Modes
8
1.5.2 Implicit Addressing mode
• Some instructions don't require any
operand (zero-address instructions).
• They operate directly upon the content of
the accumulator.
• E.g
• CMA (Complement) - the content of the
accumulator is complemented.
• RAR (Rotate Right) - the content of the
accumulator is rotated one position right.
• RAL (Rotate Left) - the content of the
accumulator is rotated one position left.
1.5 – Addressing Modes
Figure: an instruction consisting only of an Opcode field (e.g., CMA) operating on the accumulator.
9
1.5.3 Direct Addressing Mode
• The address field of the instruction
contains the effective address (EA)
of the operand.
• Also called absolute addressing
mode.
• ADD X // AC ← AC + M[X]
• ADD R1, 4000
– EA = 4000 (Memory Address)
1.5 – Addressing Modes
Effective address(EA)
Information from which the memory address
of the operand can be determined.
10
• Direct
– ADD R1, 3000
– ADD R1, X
• Indirect
– ADD R1,(4000)
– ADD R1,(Y)
Memory (figure): location X = 3000 holds 35; location Y = 4000 holds 3000; 4004 is the next location.
11
• ADD R1, 4000 (address)
– R1 ← R1 + M[4000]
• ADD R1, #4000 (value)
– R1 ← R1 + 4000
• ADD R1, (4000) (M[address])
– R1 ← R1 + M[M[4000]]
• ADD (X)
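The three forms above can be sketched as a short simulation; the memory contents and register value here are hypothetical, chosen only to make the three resolutions visible.

```python
# Toy operand resolution for ADD R1, <operand> (illustrative values only).
memory = {4000: 35, 35: 7}   # assume M[4000] = 35 and M[35] = 7
R1 = 10

def add_direct(addr):        # ADD R1, 4000   -> R1 + M[4000]
    return R1 + memory[addr]

def add_immediate(value):    # ADD R1, #4000  -> R1 + 4000
    return R1 + value

def add_indirect(addr):      # ADD R1, (4000) -> R1 + M[M[4000]]
    return R1 + memory[memory[addr]]

print(add_direct(4000))      # 45
print(add_immediate(4000))   # 4010
print(add_indirect(4000))    # 17
```

Note how the same address field (4000) yields three different operands depending only on the mode.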
12
1.5.4 Indirect Addressing Mode
• Address field of instruction gives the address
where the effective address is stored in
memory.
• Need multiple memory lookups to find the
operand.
• For indirection use parentheses ( )
• E.g
• ADD (X) // AC ← AC + M[[X]]
• ADD R1, (4000)
– EA = Content of Location 4000
1.5 – Addressing Modes
13
1.5.5 Register Direct Addressing Mode
• Operand (data) is stored in the
Processor register.
• Register are given as operands of
instruction.
• Effective Address = Register
• E.g
• Add R4, R3
1.5 – Addressing Modes
14
1.5.6 Register Indirect Addressing
Mode
• The instruction specifies a register that
holds the address of the operand (indirection).
• EA = (R): the effective address is the content
of the register.
• The data value is at the address held in the
register (not in the register itself).
• E.g
• Load R3, (R2)
– If R2 holds the address A, this behaves like Load R3, A
– e.g., if [R2] = 200, it behaves like Load R3, 200
– A is the effective address
1.5 – Addressing Modes
15
Addressing mode | Addressed location contains
Direct | the operand
Indirect | the address of the operand
Register direct | the register contains the operand
Register indirect | the register contains the address of the operand
16
Displacement addressing mode
• EA= ---- + -----
1. Relative addressing mode
– EA= [PC]+ [A]
2. Base addressing mode
– EA= [Base Reg]+ [A]
3. Index addressing mode
– EA= [offset]+ [A]
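All three displacement modes compute EA the same way, differing only in which quantity supplies the base; a minimal sketch (the values are hypothetical):

```python
# Displacement addressing: EA = base + address part; only the base differs per mode.
def ea_relative(pc, a):       # Relative: EA = [PC] + A
    return pc + a

def ea_base(base_reg, a):     # Base:     EA = [Base Reg] + A
    return base_reg + a

def ea_index(reg, offset):    # Index:    EA = (R) + X
    return reg + offset

print(ea_relative(2000, 30))  # 2030
print(ea_base(3000, 50))      # 3050
print(ea_index(5000, 8))      # 5008
```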
17
1.5.7 Relative Addressing Mode
(PC Relative)
• Effective address of the operand is obtained
by adding the content of program counter with
the address part of the instruction.
• Effective Address = Content of Program
Counter + Address part of the instruction
• EA = A + [PC]
• E.g
• Add A,(PC)
– EA=[A] +[PC]
1.6 – Addressing Modes
18
19
Worked example (figure): PC = 5000; the branch instruction (BR) is at address 5000 and the next instruction is at 5004; the branch target (BT) is at 5050; memory location 6000 holds the value 50.
• ADD #50(PC): EA = 5000 + 50 = 5050
• ADD 6000(PC): EA = [A] + [PC] = [6000] + 5000 = 50 + 5000 = 5050
• BR ----, 5050: EA = 5000 + 50 = 5050
• BT ----, 5050: EA = 5000 + 50 = 5050
20
• EA = A + [PC], with PC = 2000
• If A is a constant: EA = #30 + 2000 = 2030
• If A is an address with [A] = 3000: EA = 3000 + 2000 = 5000
Displacement addressing
mode
• EA=---- + -----
1. Relative
– EA= [PC]+ [A]
2. Base
– EA= [Base Reg]+ [A]
3. Index
– EA= [offset]+ [A]
21
1.5.8 Base Register Addressing Mode
• Effective address of the operand is
obtained by adding the content of
base register with the address part of
the instruction.
• EA=[Base Register] + [A]
• E.g
• Add R2(A) // take [R2] = 3000, A = 50
– EA = [R2] + [A] = 3000 + 50 = 3050
1.6 – Addressing Modes
22
1.5.9 Index Addressing Mode
• A constant offset is added to the content of an index register to form the operand address.
• EA = X + (R)
• X = offset (constant value)
• Load Ri, X(R2)
– Load R2, A // assume R2 now holds A
– Load R3, (R2) // EA = A
– Load R4, 4(R2) // EA = A + 4
– Load R5, 8(R2) // EA = A + 8
– Load R6, 12(R2) // EA = A + 12
1.6 – Addressing Modes
Advantages & Disadvantages
23
1.6 – Addressing Modes
1.5.10 Stack Addressing Mode
• The instruction doesn't contain any operand.
• An arithmetic operation operates upon the stack:
• the operand is at the top of the stack.
• Example: ADD
– POP the top two items from the stack,
– add them, and
– PUSH the result to the top of the stack.
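The zero-address ADD above maps directly onto a list used as a stack (a sketch; the initial stack contents are arbitrary):

```python
# Zero-address ADD: POP two items, add them, PUSH the result back.
stack = [3, 5]            # 5 is on top of the stack

def stack_add(s):
    b = s.pop()           # POP the top item
    a = s.pop()           # POP the next item
    s.append(a + b)       # PUSH the sum
    return s

print(stack_add(stack))   # [8]
```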
24
1.5.11 Auto increment Addressing
• EA = (Ri), written (Ri)+
• After accessing the operand, the content of the
register is automatically incremented to point to the
next operand.
• E.g Add (R1)+
• First, the operand value is fetched.
• Then, the register R1 value is incremented by
step size ‘d’.
• Assume operand size = 2 bytes.
• After fetching 6B, R1 will be 3300 + 2 = 3302.
1.6 – Addressing Modes
25
1.5.12 Auto decrement Addressing
• EA = (Ri) - d, written -(Ri)
• First, the content of the register is
decremented to point to the operand.
• E.g Add -(R1)
• First, the register R1 value is decremented by
step size ‘d’.
• Assume operand size = 2 bytes.
• R1 will be 3302 – 2 = 3300.
• Then, the operand value is fetched.
1.6 – Addressing Modes
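The two modes differ only in whether the register is bumped before or after the access; a sketch using the slide's values (operand 6B at address 3300, step size d = 2):

```python
# Auto-increment (Ri)+: fetch, then Ri += d.  Auto-decrement -(Ri): Ri -= d, then fetch.
d = 2                         # step size = operand size (2 bytes)
memory = {3300: 0x6B}         # M[3300] holds the operand 6B

def auto_increment(r):
    operand = memory[r]       # access the operand first
    return operand, r + d     # then increment the register

def auto_decrement(r):
    r -= d                    # decrement the register first
    return memory[r], r       # then access the operand

op, r1 = auto_increment(3300)
print(hex(op), r1)            # 0x6b 3302
op, r1 = auto_decrement(r1)
print(hex(op), r1)            # 0x6b 3300
```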
26
Comparison of addressing modes
GATE Question solutions
1.5 & 1.6 – Addressing Modes
27
1. Examine the following sequence and identify the addressing modes used,
operation done in every instruction and find the effective address by considering
R1=3000, R2=5000, R5=1000.
LOAD 10(R1),R5
SUB (R1)+, R5
ADD -(R2), R5
MOVI 2000,R5
2. Consider the following instruction
ADD A(R0),(B).
First operand (destination) “A(R0)” uses indexed addressing mode with R0 as index
register. The second operand (Source) “(B)” uses indirect addressing mode.
Determine the number of memory cycles required to execute this instruction.
1.5 & 1.6 – Addressing Modes
28
Problem Workouts
1. Write procedures for reading from and writing to a FIFO queue, using a two-address format, in
conjunction with:
– indirect addressing
– relative addressing
2. Write a sequence of instructions that will compute the value of y = x^2 + 2x + 3 for a given x using
– three-address instructions
– two-address instructions
– one-address instructions
1.5 & 1.6 – Addressing Modes
29
GATE Question
Match each of the high level language statements given on the left hand side with the most
natural addressing mode from those listed on the right hand side.
1. A[1] = B[J]; a. Indirect addressing
2. while [*A++]; b. Indexed addressing
3. int temp = *x; c. Autoincrement
(A) (1, c), (2, b), (3, a)
(B) (1, a), (2, c), (3, b)
(C) (1, b), (2, c), (3, a)
(D) (1, a), (2, b), (3, c)
1.5 & 1.6 – Addressing Modes
30
https://docs.google.com/forms/d/1lNmQ4C9Gt60yOGXbsR8cG3cbBhfUQFSaQhRw
Gnj-Ff4/edit?usp=sharing
Individual Assessment
31
1
Session Topic
1.1 Functional Blocks of a Computer
1.2 Operation and Operands of Computer Hardware
1.3 Instruction Set Architecture
1.4 Register Transfer Language (RTL) interpretation of
instructions
1.5 Addressing Modes -1
1.6 Addressing Modes -2
1.7 Instruction Execution Cycle
1.8 Instruction Set
1.9 Performance Metrics
Module 1- Functional blocks of Computer & Instruction Set Architecture
2
Instruction execution cycle
– Fetch
– Decode
– Execute
– Store
1. Fetch Phase
 IR ← [[PC]]
 PC ← [PC] + 4
2. Decode phase
 Decode IR
 Operand fetch
3. Execution phase
 ALU operation
1.7 – Instruction Execution Cycle
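The phases above can be sketched as a loop; the instruction encoding, addresses, and register names here are invented for illustration only.

```python
# Minimal fetch-decode-execute loop (hypothetical one-word instructions).
memory = {0: ("ADD", "R1", "R2"), 4: ("HALT",)}
regs = {"R1": 10, "R2": 32}
pc = 0
while True:
    ir = memory[pc]              # Fetch:  IR <- [[PC]]
    pc += 4                      #         PC <- [PC] + 4
    opcode = ir[0]               # Decode
    if opcode == "ADD":          # Execute + store: Rdst <- Rdst + Rsrc
        regs[ir[1]] += regs[ir[2]]
    elif opcode == "HALT":
        break
print(regs["R1"])                # 42
```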
3
1.8.1 Internal Organization of the Processor
PC – Holds the address of the next instruction
MAR – Holds the address of the operand or data
MDR – Holds data
R0 – Rn-1 – General purpose registers
Y, Z, TEMP – Temporary registers
MUX – Selects either Y or the constant 4 as input A of the ALU
ALU – Arithmetic and Logic Unit
Decoder – Decodes the instruction and generates control signals
1.7 – Instruction Execution Cycle
4
1.8.2 Register Transfers
 The input and output of a register are connected
to the bus through switches controlled by the signals
Rin and Rout
To transfer the contents of R1 to R4
 R4 ← R1
 Enable R1out = 1 to place the contents of R1 on the
processor bus
 Enable R4in = 1 to load the data from the processor
bus into register R4
1.7 – Instruction Execution Cycle
5
1.8.3 Performing an ALU operation
– E.g Add R3,R1,R2
– Control signal steps
1. R1out, Yin
2. R2out, SelectY, Add, Zin
3. Zout, R3in
Temporary registers – Y, Z, TEMP
MUX – selects one input for input A of the ALU
1.7 – Instruction Execution Cycle
6
1.8.4 Fetching a Word from Memory
• e.g Move (R1),R2
1. MAR ← [R1]
2. Start a Read operation on the memory bus
3. Wait for the MFC response from the memory
4. Load MDR from the memory bus
5. R2 ← [MDR]
Control sequence steps
1. R1out, MARin, Read
2. MDRinE, WMFC
3. MDRout,R2in
1.7 – Instruction Execution Cycle
7
1.8.5 Storing a Word in Memory
• e.g Move R2, (R1)
Steps
1. The desired address is loaded into MAR
2. Data to be written is loaded into MDR
3. Write signal is initiated
– R1out , MARin
– R2out, MDRin, Write
– MDRoutE, WMFC
1.7 – Instruction Execution Cycle
8
Steps for Add R1, R2
• Fetch the instruction
• Fetch the first operand
• Perform the addition
• Load the result into R1
Control sequence
Step Action
1. PCout, MARin, Read, Select4, Add, Zin
2. Zout, PCin, Yin, WMFC
3. MDRout, IRin
4. R1out, Yin, SelectY
5. R2out, Add, Zin
6. Zout, R1in, End
1.7 – Instruction Execution Cycle
9
• Add R1, R2 // R1 = R1 + R2
• Fetch – fetch the instruction
• PC → MAR, Read, M[MAR] → MDR, MDR → IR, PC = PC + 4
– T1. PCout, MARin, Read, Select4, Add, Zin
– T2. Zout, PCin, Yin, WMFC
– T3. MDRout, IRin
• ALU
– T4. R1out, Yin, SelectY
– T5. R2out, Add, Zin
• Store
– T6. Zout, R1in, End
10
• Sub R1, R2, R3 // R1 = R2 - R3
• Fetch
• PC → MAR, Read, MDR → IR, PC = PC + 4
– T1. PCout, MARin, Read, Select4, Add, Zin
– T2. Zout, PCin, Yin, WMFC
– T3. MDRout, IRin
• ALU
• R1 = [R2] - [R3]
– T4. R2out, Yin, SelectY
– T5. R3out, Sub, Zin
• Store: Z → R1
– Zout, R1in, End
11
Branch Instructions
• Unconditional branch instruction: JUMP X
• Replaces the PC contents with branch target address
Control sequence
Step Action
1. PCout, MARin, Read, Select4, Add, Zin
2. Zout, PCin, Yin, WMFC
3. MDRout, IRin
4. Offset –field –of-IRout, Add, Zin
5. Zout, PCin, End
1.7 – Instruction Execution Cycle
e.g. PC = 3000, JUMP X with offset 500: new PC = PC + offset = 3000 + 500 = 3500
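The target calculation in the note above is a one-liner:

```python
# Unconditional branch: replace the PC with PC + offset from the instruction.
pc, offset = 3000, 500
pc = pc + offset
print(pc)   # 3500
```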
12
Multiple-Bus Organization
• The number of control sequence steps is reduced
• e.g Sub R1, R2, R3 // R1=R2-R3
Control sequence
Step Action
1. PCout, R=B, MARin, Read, IncPC
2. WMFC
3. MDRoutB, R=B, IRin
4. R2outA, R3outB, SelectA, SUB, R1in, End
1.7 – Instruction Execution Cycle
13
https://docs.google.com/forms/d/1cvkbXtnzeaoTNUrDZXtghQCxEQ1a9_p2lt-
6zUHNcQo/edit?usp=sharing
Individual Assessment
14
15
Session Topic
1.1 Functional Blocks of a Computer
1.2 Operation and Operands of Computer Hardware
1.3 Instruction Set Architecture
1.4 Register Transfer Language (RTL) interpretation of
instructions
1.5 Addressing Modes -1
1.6 Addressing Modes -2
1.7 Instruction Execution Cycle
1.8 Instruction Set
1.9 Performance Metrics
Module 1- Functional blocks of Computer & Instruction Set Architecture
16
Instruction Set
• The instruction set, also called the ISA (instruction set architecture), is the part of a
computer that pertains to programming - essentially its machine language.
• The instruction set provides the commands that tell the processor what it needs to
do.
• E.g ADD, LOAD, COMPARE, ON, OUT, JUMP
Two types
RISC - Reduced instruction set computing
CISC - Complex instruction set computing
1.8 – Instruction Set
17
Reduced Instruction Set Computing (RISC)
The main idea - to make the hardware simpler by using an instruction set composed of a
few basic steps for loading, evaluating and storing operations: a load command loads the
data and a store command stores the data.
• Example – adding two 8-bit numbers: the add operation is divided into parts, i.e., load,
operate, store.
Complex Instruction Set Computing (CISC)
The main idea is that a single instruction does all the loading, evaluating and storing
operations, just as a multiplication command loads the data, evaluates it and stores the
result; hence it's complex.
• Example – adding two 8-bit numbers: there is a single command or instruction, like
ADD, which performs the whole task.
1.8 – Instruction Set
18
Both approaches try to increase the CPU performance
• RISC: Reduce the cycles per instruction at the cost of the number of instructions per
program.
• CISC: The CISC approach attempts to minimize the number of instructions per program
but at the cost of increase in number of cycles per instruction.
1.8 – Instruction Set
19
Characteristics of RISC
• Simpler instructions, hence simple instruction decoding.
• Instructions fit within the size of one word.
• Instructions take a single clock cycle to execute.
• More general-purpose registers.
• Simple addressing modes.
• Fewer data types.
• Pipelining can be achieved.
1.8 – Instruction Set
20
Characteristics of CISC
• Complex instructions, hence complex instruction decoding.
• Instructions are larger than one word in size.
• Instructions may take more than a single clock cycle to execute.
• Fewer general-purpose registers, as operations are performed in memory
itself.
• Complex addressing modes.
• More data types.
1.8 – Instruction Set
21
RISC | CISC
Focus on software | Focus on hardware
Uses only a hardwired control unit | Uses both hardwired and microprogrammed control units
Transistors are used for more registers | Transistors are used for storing complex instructions
Fixed-size instructions | Variable-size instructions
Can perform only register-to-register arithmetic operations | Can perform REG to REG, REG to MEM, or MEM to MEM
Requires more registers | Requires fewer registers
Code size is large | Code size is small
An instruction executes in a single clock cycle | An instruction takes more than one clock cycle
An instruction fits in one word | Instructions are larger than the size of one word
1.8 – Instruction Set
22
Kahoot Quiz
1.8 – Instruction Set
https://create.kahoot.it/share/551ec8ac-a5a0-4842-9c30-9b3bd878f802
23
24
Session Topic
1.1 Functional Blocks of a Computer
1.2 Operation and Operands of Computer Hardware
1.3 Instruction Set Architecture
1.4 Register Transfer Language (RTL) interpretation of
instructions
1.5 Addressing Modes -1
1.6 Addressing Modes -2
1.7 Instruction Execution Cycle
1.8 Instruction Set
1.9 Performance Metrics
Module 1- Functional blocks of Computer & Instruction Set Architecture
25
Performance
 The most important measure of a computer is its
speed (how quickly it can execute programs).
 Three factors affecting CPU performance
• Instruction set
• Hardware design
• Compiler (software design)
 The Processor time to execute a program depends
on the hardware involved in the execution.
 The execution of each instruction is divided into
several steps. Each step completes in one clock
cycle.
1.9 – Performance Metrics
26
• To calculate the execution time, the following
parameters are considered:
 Clock rate R (R = 1/T, where T is the clock cycle time)
 Cycles per instruction S
 Instruction count for a task N
 Execution time for the CPU Tc
 CPU Execution Time = number of
instructions (N) × CPI (S) × clock cycle
time (T = 1/R), i.e. Tc = (N × S) / R
27
1.9 – Performance Metrics
RISC - Reduced Instruction Set Computers
CISC - Complex Instruction Set Computers
To improve performance
Gate Exercise
 Hardware design
 Clock rate (R) can be increased
 Pipeline concept (instruction overlapping) can be used
 Instruction set
 Using either RISC or CISC
 Compiler
 Optimized compiler
Increasing the clock rate R, or decreasing the cycle count S and instruction count N, reduces Tc = (N × S) / R.
28
Hardware design
 Clock rate (R) can be increased
» VLSI design for fabrication
» Smaller transistor size:
• switching speed between 0 and 1 is higher
• more transistors can be placed on a chip
 Pipeline concept (instruction overlapping) can be used
» Performance can be increased by performing a number of operations in
parallel.
• Instruction level parallelism
• Multi-core processor – on a single chip – dual core, quad core, octa core
• Multiprocessor – many processor, each containing multiple cores.
1.9 – Performance Metrics
29
Comparing performance of several machines.
– performanceX = 1 / execution timeX
– For two computers X and Y, if the performance of X is greater than the
performance of Y, we have
– PerformanceX > PerformanceY
– 1 / Execution timeX > 1 / Execution timeY
– Execution timeX < Execution timeY
– i.e., Execution timeY > Execution timeX
1.9 – Performance Metrics
30
Comparing performance of several machines.
• If X is n times faster than Y, then relate the performance of two different
computers quantitatively.
performanceX / performanceY = execution_timeY / execution_timeX = n
• speed up of Machine A over Machine B = TCB / TCA
1.9 – Performance Metrics
31
PROBLEMS
1. Nancy has a computer with a dual-core processor that runs a program in 20 seconds.
She also has a laptop with an octa-core processor that runs the same program in 10
seconds. Determine which one is faster and by how much.
• Performance ratio = Execution time on the computer / Execution time on the laptop
• = 20 / 10 = 2
• The laptop runs 2 times faster than the computer
1.9 – Performance Metrics
32
PROBLEMS
Suppose we have two implementations of the same instruction set architecture.
Computer A has a clock cycle time of 250 ps and a CPI of 2.0 for some program,
and computer B has a clock cycle time of 500 ps and a CPI of 1.2 for the same
program. Which computer is faster for this program and by how much?
• Let the number of instructions in the program be I.
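The problem can be worked with per-instruction times; the answer follows from CPI × cycle time and is independent of I:

```python
# CPU time per instruction = CPI * clock cycle time; total time = I * that.
cpi_a, cycle_a = 2.0, 250e-12    # computer A: CPI 2.0, 250 ps cycle
cpi_b, cycle_b = 1.2, 500e-12    # computer B: CPI 1.2, 500 ps cycle
time_a = cpi_a * cycle_a         # 500 ps per instruction
time_b = cpi_b * cycle_b         # 600 ps per instruction
print(time_b / time_a)           # 1.2 -> computer A is 1.2 times faster
```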
33
PROBLEMS
34
1
Session Topic
2.1 Signed number representation
2.2 Fixed and floating point representations
2.3 Character representation
2.4 Integer addition and subtraction
2.5 Ripple carry adder
2.6 Carry look-ahead adder
2.7 Shift-and add multiplication
2.8 Booth multiplier
2.9 Carry save multiplier
2.10 Division - restoring techniques
2.11 Division - non-restoring techniques
2.12 Floating point arithmetic
Module 2- Data representation & Computer arithmetic
2
2.1 – Signed number representation
Integer Representation
• Computers use a fixed number of bits to represent an integer.
• The commonly-used bit-lengths for integers are 8-bit, 16-bit, 32-bit or 64-bit.
• Unsigned Integers: can represent zero and positive integers.
• Signed Integers: can represent zero, positive and negative integers.
• Three representation for signed integers:
– Sign-Magnitude representation
– 1's Complement representation
– 2's Complement representation
3
• Unsigned and Signed Binary Numbers
Figure: (a) Unsigned number - bits b(n-1) ... b1 b0 all represent the magnitude, with b(n-1) the MSB. (b) Signed number - the MSB b(n-1) is the sign (0 denotes +, 1 denotes -), and bits b(n-2) ... b1 b0 represent the magnitude.
4
Unsigned Integers
• Representation of binary value
E.g 0 0 1 0 0 0 0 1
• Integer value in decimal is V(B)
V(B) = 0 X 27 + 0 X 26 + 1 X 25 + 0 X 24 + 0 X 23 + 0 X 22 + 0 X 21 + 1 X 20
= 0+0+32+0+0+0+0+1 = 33 D
2.1 – Signed number representation
5
Unsigned Integers
• Range: 0 to (2^n) - 1
2.1 – Signed number representation
n Minimum Maximum
8 0 (2^8)-1 (=255)
16 0 (2^16)-1 (=65,535)
32 0 (2^32)-1 (=4,294,967,295) (9+ digits)
64 0 (2^64)-1
(=18,446,744,073,709,551,615) (19+ digits)
6
Signed Integers
• Signed integers - represent zero, positive integers & negative integers.
• Three representation schemes are available for signed integers:
– Sign-Magnitude representation
– 1's Complement representation
– 2's Complement representation
• In all the above three schemes, the most-significant bit (MSB) is called the sign
bit.
• The sign bit is used to represent the sign of the integer
– 0 for positive integers
– 1 for negative integers.
2.1 – Signed number representation
7
1. Sign-Magnitude Representation
• The most-significant bit (MSB) is the sign bit,
– 0 representing positive integer and
– 1 representing negative integer.
• The remaining n-1 bits represent the magnitude (absolute value) of the
integer.
• Example 1: Suppose that n=8 and the binary representation is 0 100 0001B.
Sign bit is 0 ⇒ positive
Absolute value is 100 0001B = 65D
Hence, the integer is +65D
2.1 – Signed number representation
0 1 0 0 0 0 0 1
8
• Example 2: Suppose that n=8 and the binary representation is 1 000 0001B.
Sign bit is 1 ⇒ negative
Absolute value is 000 0001B = 1D
Hence, the integer is -1D
2.1 – Signed number representation
1 0 0 0 0 0 0 1
9
Drawbacks of sign-magnitude representation :
• There are two representations for the number zero, which could lead to
inefficiency and confusion.
– 0000 0000B zero 0
– 1000 0000B  zero 0
• Positive and negative integers need to be processed separately.
2.1 – Signed number representation
10
Try Yourself
• Example 3: Suppose that n=8 and the binary representation is 0 000 0000B.
• Example 4: Suppose that n=8 and the binary representation is 1 000 0000B.
2.1 – Signed number representation
11
Try Yourself
• Example 3: Suppose that n=8 and the binary representation is 0 000 0000B.
Sign bit is 0 ⇒ positive
Absolute value is 000 0000B = 0D
Hence, the integer is +0D
• Example 4: Suppose that n=8 and the binary representation is 1 000 0000B.
Sign bit is 1 ⇒ negative
Absolute value is 000 0000B = 0D
Hence, the integer is -0D
2.1 – Signed number representation
12
2. 1's Complement Representation
• MSB - sign bit,
– 0 representing positive integers
– 1 representing negative integers.
• The remaining n-1 bits represent the magnitude of the integer, as follows:
– for positive integers,
• absolute value = magnitude of the (n-1) bits.
– for negative integers,
• absolute value = magnitude of the complement (inverse) of the (n-1) bits
• hence called 1's complement.
2.1 – Signed number representation
13
• Example 1: Suppose that n=8 and the binary representation 0 100 0001B.
Sign bit is 0 ⇒ positive
Absolute value is 100 0001B = 65D
Hence, the integer is +65D
• Example 2: Suppose that n=8 and the binary representation 1 000 0001B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 000 0001B
– i.e., the complement of 000 0001B is 111 1110B = 126D
Hence, the integer is -126D
2.1 – Signed number representation
0 1 0 0 0 0 0 1
1 0 0 0 0 0 0 1
14
Drawbacks:
• There are two representations (0000 0000B and 1111 1111B) for zero.
• The positive integers and negative integers need to be processed separately.
2.1 – Signed number representation
15
Try yourself
• Example 3: Suppose that n=8 and the binary representation 0 000 0000B.
• Example 4: Suppose that n=8 and the binary representation 1 111 1111B.
2.1 – Signed number representation
16
Try yourself
• Example 3: Suppose that n=8 and the binary representation 0 000 0000B.
Sign bit is 0 ⇒ positive
Absolute value is 000 0000B = 0D
Hence, the integer is +0D
• Example 4: Suppose that n=8 and the binary representation 1 111 1111B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 111 1111B, i.e., 000 0000B = 0D
Hence, the integer is -0D
2.1 – Signed number representation
17
3. 2's Complement Representation
• MSB - sign bit,
– 0 representing positive integers
– 1 representing negative integers.
• The remaining n-1 bits represent the magnitude of the integer, as follows:
– for positive integers,
• absolute value = the magnitude of the (n-1) bits
– for negative integers,
• absolute value = the magnitude of the complement of the (n-1) bits, plus one
• hence called 2's complement.
2.1 – Signed number representation
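The three schemes can be compared by decoding the same 8-bit pattern under each; a sketch (the helper names are mine, not from the slides):

```python
# Decode an 8-bit pattern under the three signed representations.
def sign_magnitude(b):            # MSB = sign, low 7 bits = magnitude
    return (b & 0x7F) * (-1 if b & 0x80 else 1)

def ones_complement(b):           # negative: magnitude = inverted low 7 bits
    return b if b < 0x80 else -((~b) & 0x7F)

def twos_complement(b):           # negative: value = pattern - 256
    return b if b < 0x80 else b - 256

b = 0b1000_0001                   # the pattern 1 000 0001B from the examples
print(sign_magnitude(b), ones_complement(b), twos_complement(b))   # -1 -126 -127
```

These match the worked examples: the same pattern reads as -1, -126, and -127 respectively.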
18
• Example 1: Suppose that n=8 and the binary representation 0 100 0001B.
Sign bit is 0 ⇒ positive
Absolute value is 100 0001B = 65D
Hence, the integer is +65D
• Example 2: Suppose that n=8 and the binary representation 1 000 0001B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 000 0001B plus 1,
i.e., complement of 000 0001B = 111 1110B
1B (+)
= 111 1111B = 127D
Hence, the integer is -127D
2.1 – Signed number representation
0 1 0 0 0 0 0 1
1 0 0 0 0 0 0 1
19
TRY YOURSELF
• Example 3: Suppose that n=8 and the binary representation 0 000 0000B.
• Example 4: Suppose that n=8 and the binary representation 1 111 1111B.
2.1 – Signed number representation
20
TRY YOURSELF
• Example 3: Suppose that n=8 and the binary representation 0 000 0000B.
Sign bit is 0 ⇒ positive
Absolute value is 000 0000B = 0D
Hence, the integer is +0D
• Example 4: Suppose that n=8 and the binary representation 1 111 1111B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 111 1111B plus 1,
i.e., 000 0000B + 1B = 000 0001B = 1D
Hence, the integer is -1D
2.1 – Signed number representation
21
2.1 – Signed number representation
22
Computers use 2's Complement Representation for Signed Integers
• There is only one representation for the number zero in 2's complement, instead
of two representations in sign-magnitude and 1's complement.
• Positive and negative integers can be treated together in addition and subtraction.
Subtraction can be carried out using the "addition logic".
• Example 1: Addition of Two Positive Integers:
Suppose that n=8, 65D + 5D = 70D
65D → 0100 0001B
5D → 0000 0101B (+ )
0100 0110B → 70D
2.1 – Signed number representation
23
• Example 2: Subtraction is treated as Addition of a Positive and a Negative
Integers:
Suppose that n=8, 65D - 5D = 65D + (-5D) = 60D
65D → 0100 0001B
-5D → 1111 1011B (+ )
0011 1100B → 60D (discard carry )
• Example 3: Addition of Two Negative Integers:
Suppose that n=8, -65D - 5D = (-65D) + (-5D) = -70D
-65D → 1011 1111B
-5D → 1111 1011B (+ )
1011 1010B → -70D (discard carry)
2.1 – Signed number representation
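The three worked additions above can be replayed with an 8-bit wrap helper (a sketch; the masking trick is a standard idiom, not from the slides):

```python
# 8-bit two's-complement addition: keep 8 bits, then reinterpret the MSB as sign.
def int8(x):
    return ((x & 0xFF) ^ 0x80) - 128   # wrap any integer into [-128, 127]

print(int8(65 + 5))      # 70   (example 1)
print(int8(65 - 5))      # 60   (example 2; the mask discards the carry)
print(int8(-65 - 5))     # -70  (example 3)
```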
24
• Because of the fixed precision (i.e., fixed number of bits), an n-bit 2's
complement signed integer has a certain range.
• For example, for n=8, the range of 2's complement signed integers
is -128 to +127.
2.1 – Signed number representation
25
Range of n-bit 2's Complement Signed Integers
• -2^(n-1) to +2^(n-1) - 1
n minimum maximum
8 -(2^7) (=-128) +(2^7)-1 (=+127)
16 -(2^15) (=-32,768) +(2^15)-1 (=+32,767)
32 -(2^31) (=-2,147,483,648) +(2^31)-1 (=+2,147,483,647)(9+ digits)
64 -(2^63)
(=-9,223,372,036,854,775,808)
+(2^63)-1
(=+9,223,372,036,854,775,807)(18+ digits)
2.1 – Signed number representation
26
• During addition (and subtraction), it is important to check whether the
result exceeds this range, in other words, whether overflow or underflow
has occurred.
2.1 – Signed number representation
27
Example 4: Overflow:
• Suppose that n=8, 127D + 2D = 129D (overflow - beyond the range)
127D → 0111 1111B
2D → 0000 0010B (+ )
1000 0001B → -127D (wrong)
Example 5: Underflow:
• Suppose that n=8, -125D - 5D = -130D (underflow - below the range)
-125D → 1000 0011B
-5D → 1111 1011B (+ )
0111 1110B → +126D (wrong)
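Overflow can be detected by comparing the true sum against the n-bit range, as this Python sketch shows (helper name is an assumption):

```python
def add_with_overflow_check(x, y, n=8):
    """Return the n-bit 2's complement result and an overflow/underflow flag."""
    lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
    true_sum = x + y
    wrapped = (true_sum + (1 << n)) % (1 << n)            # n-bit wraparound
    signed = wrapped - (1 << n) if wrapped >> (n - 1) else wrapped
    return signed, not (lo <= true_sum <= hi)             # (result, flag)

print(add_with_overflow_check(127, 2))    # (-127, True)  -- overflow
print(add_with_overflow_check(-125, -5))  # (126, True)   -- underflow
print(add_with_overflow_check(65, 5))     # (70, False)
```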
2.1 – Signed number representation
28
• n=4 bit binary , signed number representation
2.1 – Signed number representation
29
Match & Match
a) + 5 i) 1000 1) Unsigned
Representation
b) - 5 ii) 1000 0111 2) 2’s Complement
Representation
c) -7 iii) 0000 0101 3) Signed Magnitude
Representation
d) - 7 iv) 1011 4) 1’s Complement
Representation
29
ACTIVITY TIME
30
Match & Match
a) + 5 i) 1000 1) Unsigned
Representation
b) - 5 ii) 1000 0111 2) 2’s Complement
c) -7 iii) 0000 0101 3) Signed Magnitude
Representation
d) - 7 iv) 1011 4) 1’s Complement
Representation
30
ACTIVITY TIME
31
1
Session Topic
2.1 Signed number representation
2.2 Fixed and floating point representations
2.3 Character representation
2.4 Integer addition and subtraction
2.5 Ripple carry adder
2.6 Carry look-ahead adder
2.7 Shift-and add multiplication
2.8 Booth multiplier
2.9 Carry save multiplier
2.10 Division - restoring techniques
2.11 Division - non-restoring techniques
2.12 Floating point arithmetic
Module 2- Data representation & Computer arithmetic
2
2.2 – Fixed and Floating point representations
Real Numbers
There are two major approaches to store real numbers (i.e., numbers with fractional
component) in modern computing.
(i) Fixed Point Notation
– there are a fixed number of digits after the decimal point,
(ii) Floating Point Notation.
– allows for a varying number of digits after the decimal point.
✔ Two representations
Single precision (32-bit)
Double precision (64-bit)
3
2.2 – Fixed and Floating point representations
Fixed point Representation
• It has fixed number of bits for integer part and for fractional part.
• For example, if the given fixed-point representation is IIII.FFFF (4 integer
digits and 4 fractional digits), then
• the minimum non-zero value is 0000.0001 and
• the maximum value is 9999.9999.
• There are three parts of a fixed-point number representation: the sign field, integer
field, and fractional field.
I I I I . F F F F
4
2.2 – Fixed and Floating point representations
Floating point Representation
IEEE 754 standard for Floating point Representation
Three parts
– Sign bit ( MSB- bit 31 )
– Exponent E’( bit 23 to bit 30)
– Mantissa or fractional ( bit 0 to bit 22)
5
2.2 – Fixed and Floating point representations
• Value = ±1.M × 2^E = ±1.M × 2^(E'-127)
• E' = E + 127. E' lies in the range 0 < E' < 255 for normal values.
• E' = 0 and E' = 255 are used to represent special values.
• Therefore, for normal values, 1 ≤ E' ≤ 254.
• This means that the actual exponent E is in the range -126 ≤ E ≤ 127.
• So the scale factor has a range of 2^-126 to 2^+127.
• Since binary normalization is used, the most significant bit of the mantissa is
always equal to 1
6
Example:
• Sign bit : 0 → positive
• Exponent E' : 00101000B = 40
• Mantissa M : 001010…0
• Value = +0.M × 2^(E'-127)
• Value = +0.001010…0 × 2^(40-127) (unnormalized form)
• Value = +1.010…0 × 2^-90 (normalized form)
0 00101000 001010…0
7
2.2 – Fixed and Floating point representations
Floating point Representation – Double Precision (64-bit)
IEEE 754 standard for Floating point Representation
Three parts
– Sign bit ( MSB- bit 63 )
– Exponent E’( bit 52 to bit 62)
– Mantissa or fractional ( bit 0 to bit 51)
8
• Exponent and mantissa ranges are increased.
– The 52-bit mantissa M
– The 1 bit provides sign indication S
– The 11-bit exponent E'
• Exponent E' uses excess-1023 format.
– E' ranges over 1 ≤ E' ≤ 2046,
– 0 and 2047 are used to indicate special values,
– Thus, the actual exponent E is in the range (E' = E + 1023)
  -1022 ≤ E ≤ 1023,
– So the scale factor range is 2^-1022 to 2^1023.
• Value = ±1.M × 2^(E'-1023)
9
Normalization
• Sign bit : 0 → positive
• Exponent E' : 10001000B = 136
• Mantissa M : 0010110…
• Value = +1.M × 2^(E'-127)
• Value in unnormalized form = +0.0010110…0 × 2^(136-127)
• Value in normalized form = +1.0110…0 × 2^(136-127-3)
• Value = +1.0110… × 2^6
0 10001000 0010110…
10
Special values
• The end values 0 and 255 of the excess-127 exponent, and the end values 0 and 2047
of the excess-1023 exponent E', are used to represent special values.
E’= 0 and M = 0 Exact Zero.
E’ = 255 and M=0 Infinity.
E’ = 0 and M != 0 Denormal values.
E’ = 255 and M != 0 NaN [Not a Number]
Eg: 0/0 or sqrt(-1).
11
2.3 CHARACTER REPRESENTATION
2.3 – Character Representations
12
• In computer memory, characters are "encoded" (or "represented") using the ASCII
(American Standard Code for Information Interchange) code
• ASCII is originally a 7-bit code. It has been extended to 8 bits to better utilize the
8-bit computer memory organization.
• The 8th bit was originally used for parity checking in early computers.
• In ASCII
– Code numbers 32D (20H) to 126D (7EH) are printable (displayable)
characters
– Code numbers 0D (00H) to 31D (1FH), and 127D (7FH) are special control
characters, which are non-printable (non-displayable)
2.3 – Character Representations
13
• Code number 32D (20H) is the blank or space character.
– '0' to '9': 48D (30H) to 57D (39H)
– 'A' to 'Z': 65D (41H) to 90D (5AH)
– 'a' to 'z': 97D (61H) to 122D (7AH) .
• Code numbers 0D (00H) to 31D (1FH), and 127D (7FH) are special control
characters, which are non-printable (non-displayable)
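The code ranges above can be checked with Python's built-in ord()/chr() functions (a quick illustration, not part of the slides):

```python
# ord() gives a character's ASCII code; chr() goes the other way.
for ch in "0AZaz":
    print(f"{ch!r}: {ord(ch)}D, {ord(ch):02X}H")

print(chr(32) == " ")  # code 32D (20H) is the blank or space character
```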
2.3 – Character Representations
14
Dec 0 1 2 3 4 5 6 7 8 9
3 SP ! " # $ % & '
4 ( ) * + , - . / 0 1
5 2 3 4 5 6 7 8 9 : ;
6 < = > ? @ A B C D E
7 F G H I J K L M N O
8 P Q R S T U V W X Y
9 Z [  ] ^ _ ` a b c
10 d e f g h i j k l m
11 n o p q r s t u v w
12 x y z { | } ~
2.3 – Character Representations
Decimal Representation
15
2.3 – Character Representations
Hex 0 1 2 3 4 5 6 7 8 9 A B C D E F
2 SP ! " # $ % & ' ( ) * + , - . /
3 0 1 2 3 4 5 6 7 8 9 : ; < = > ?
4 @ A B C D E F G H I J K L M N O
5 P Q R S T U V W X Y Z [  ] ^ _
6 ` a b c d e f g h i j k l m n o
7 p q r s t u v w x y z { | } ~
Hexa Decimal Representation
16
2.3 – Character Representations
Non printable characters
17
18
TRY YOURSELF
Convert the following characters into ASCII form (both decimal and hexadecimal
number) representation
1. F
2. h
3. D
4. u
5. 7
6. 1
7. {
8. }
9. 
10. [
11. ]
2.3 – Character Representations
19
Convert the following decimal number into fixed point
notation (use 12 bit register including 4 bit fractional part )
Example 1 (positive number)
i) 27.5
Given
• 12 bit register including 4 bit fractional part
• i.e 8 bit integer part (including MSB sign bit)
• 4 bit fractional part
• Integer part 2711011
 00011011 (in 8 bit representation)
• Fractional part 0.5  1000 (in 4 bit representation)
• Answer 000110111000
Convert the following decimal number into fixed point
notation (use 12 bit register including 4 bit fractional part )
Example 2 (negative number)
i) -55.75
Given
• 12 bit register including 4 bit fractional part
• i.e 8 bit integer part (including MSB sign bit)
• 4 bit fractional part
• Integer part 55 00110111 (in 8 bit representation)
• Fractional part 0.75  1100 (in 4 bit representation)
• 55.75 00110111.1100
• -55.75 11001000.0100 (in 2’C representation)
55 110111
55.75 00110111.1100
1’ C11001000.0011
2’ C11001000.0100 -55.75
Convert the following decimal number into floating
point notation (use 32 bit notation)
Example 1 (positive number) 17.625
In 32 bit notation
• Sign bit 0
• Integer part 1710001
• Fractional part 0.625  101000..
• 17.62510001.101000..
• 1.0001101000.. X 24 (in Normalized form)
• Exponent E’+127=4 +127=13110000011
• 32 bit notation 
• 
• Answer  010000011 0001101000 …..
Mantissa
Exponent
signbit exponent mantissa
0 10000011 0001101000 ……..
Convert the following decimal number into floating
point notation (use 32 bit notation)
Example 2 (negative number) -17.625
In 32 bit notation
• Sign bit 1
• Integer part 1710001
• Fractional part 0.625  101000..
• 17.62510001.101000..
• 1.0001101000.. X 24 (in Normalized form)
• Exponent E’+127=4 +127=13110000011
• 32 bit notation 
• 
• Answer  110000011 0001101000 …..
Mantissa
Exponent
signbit exponent mantissa
1 10000011 0001101000 ……..
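Python's struct module can confirm these fields for single precision (illustrative helper, not from the slides):

```python
import struct

def float_bits(x: float) -> str:
    """Show the IEEE 754 single-precision fields of x."""
    (raw,) = struct.unpack(">I", struct.pack(">f", x))  # reinterpret as 32-bit word
    sign = raw >> 31
    exponent = (raw >> 23) & 0xFF
    mantissa = raw & 0x7FFFFF
    return f"sign={sign} exponent={exponent:08b} mantissa={mantissa:023b}"

print(float_bits(17.625))   # sign=0 exponent=10000011 mantissa=00011010000...
print(float_bits(-17.625))  # sign=1, same exponent and mantissa fields
```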
Session Topic
2.1 Signed number representation
2.2 Fixed and floating point representations
2.3 Character representation
2.4 Integer addition and subtraction
2.5 Ripple carry adder
2.6 Carry look-ahead adder
2.7 Shift-and add multiplication
2.8 Booth multiplier
2.9 Carry save multiplier
2.10 Division - restoring techniques
2.11 Division - non-restoring techniques
2.12 Floating point arithmetic
Module 2- Data representation & Computer arithmetic
2.4 – Integer Addition and Subtraction
7 0 1 1 1
6 0 1 1 0 (+)
------------------------
13 1 1 0 1
----------------------
Half Adder
Adder circuit adds single bit binary number
Half adder doesn’t consider the carry from
previous sum
Calculation of sum and carry of Half Adder
Sum = XOR gate = A ⊕ B
Carry out = AND gate = A·B
2.4 – Integer Addition and Subtraction
A B OR AND XOR
0 0 0 0 0
0 1 1 0 1
1 0 1 0 1
1 1 1 1 0
Half Adder
Design circuit for Half Adder
Sum = A ⊕ B
Carry out = A·B
For 4 bit number addition
4 Half adder combined
Drawback
Does not consider the carry input
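The half-adder equations can be checked directly in Python (a one-line sketch, not from the slides):

```python
def half_adder(a: int, b: int):
    """Sum is the XOR of the inputs, carry is the AND; no carry-in is considered."""
    return a ^ b, a & b  # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```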
2.4 – Integer Addition and Subtraction
Full Adder
Full adder circuit
Add single bit number with carry
2.4 – Integer Addition and Subtraction
2.4 – Integer Addition and Subtraction
Full Adder
sum
Carry out
2.4 – Integer Addition and Subtraction
Addition logic for a single stage
9
2.4 – Integer Addition and Subtraction
4 bit adder circuit
Add two 4 bit numbers
2.4 – Integer Addition and Subtraction
n-bit adder
• Cascade n full adder (FA) blocks to form a n-bit adder.
• Carries propagate or ripple through this cascade, n-bit ripple carry adder.
11
2.4 – Integer Addition and Subtraction
Binary Addition / Subtraction
Subtraction operation
• Subtraction operation X − Y = X + (−Y): take the 2's complement of Y and add it to X.
• The 2nd input to each FA is given through an XOR gate.
• The 2nd input of each XOR gate is connected to the Add/Sub input control line (M).
2.4 – Integer Addition and Subtraction
Binary Addition / Subtraction logic circuit
Subtraction operation
• Add/Sub input control line = 1 (Y input is 1's-complemented)
• C0 = 1 (1's complement of Y plus 1 = 2's complement of Y)
Addition operation
• Add/Sub input control line = 0
• C0 = 0
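The Add/Sub control behaviour can be modelled gate-by-gate in Python (a sketch; the function name and 4-bit width are assumptions):

```python
def add_sub_4bit(x: int, y: int, m: int) -> int:
    """4-bit adder/subtractor: M = 0 adds, M = 1 subtracts.

    Each y bit is XORed with M (giving the 1's complement when M = 1),
    and M is fed in as c0; together this forms the 2's complement of y.
    """
    c = m                        # carry-in c0 = M
    result = 0
    for i in range(4):
        xi = (x >> i) & 1
        yi = ((y >> i) & 1) ^ m  # XOR gate on the second FA input
        s = xi ^ yi ^ c          # full-adder sum
        c = (xi & yi) | (xi & c) | (yi & c)  # full-adder carry
        result |= s << i
    return result                # 4-bit result, final carry discarded

print(add_sub_4bit(0b0111, 0b0110, 0))  # 13 (7 + 6)
print(add_sub_4bit(0b0111, 0b0110, 1))  # 1  (7 - 6)
```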
2.4 – Integer Addition and Subtraction
K n-bit adder
K n-bit numbers can be added by cascading k n-bit adders.
Each n-bit adder forms a block, so this is cascading of blocks.
Carries ripple or propagate through blocks, Blocked Ripple Carry Adder
14
2.4 – Integer Addition and Subtraction
TRY YOURSELF
Perform Binary Addition , subtraction on following numbers (using 2’s complement)
a) 6 + 7 b) 9 +12 c) 10 + 15
1) 7 – 6 2) 12- 9 3) 15 - 10
2.4 – Integer Addition and Subtraction
Session Topic
2.1 Signed number representation
2.2 Fixed and floating point representations
2.3 Character representation
2.4 Integer addition and subtraction
2.5 Ripple carry adder
2.6 Carry look-ahead adder
2.7 Shift-and add multiplication
2.8 Booth multiplier
2.9 Carry save multiplier
2.10 Division - restoring techniques
2.11 Division - non-restoring techniques
2.12 Floating point arithmetic
Module 2- Data representation & Computer arithmetic
 4 bit adder circuit
 Add two 4 bit numbers
2.5 – Ripple Carry Adder
 N bit Ripple Carry Adder
 Adds n bit number with carry
2.5 – Ripple Carry Adder
Computing the delay time
[Figure: full-adder stage i with inputs xi, yi, ci producing sum si and carry ci+1]
Consider 0th stage:
• c1 is available after 2 gate delays.
• s0 is available after 1 gate delay.
4
2.5 – Ripple Carry Adder
Delay of the circuit
Consider 3rd stage:
• c3 is available after 2+2+2 =6 gate delays.
• s2 is available after 2+2+1 = 5 gate delay.
Consider nth stage:
• cn is available after 2n gate delays.
• sn-1 is available after 2n-1 gate delay.
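The stage-by-stage ripple of carries can be sketched in Python (function name and LSB-first bit-list convention are my own):

```python
def ripple_carry_add(x_bits, y_bits, c0=0):
    """n-bit ripple-carry adder: each full adder waits for the previous carry."""
    n = len(x_bits)
    s, c = [0] * n, c0
    for i in range(n):  # stage i adds bit i (LSB first)
        s[i] = x_bits[i] ^ y_bits[i] ^ c
        c = (x_bits[i] & y_bits[i]) | (x_bits[i] & c) | (y_bits[i] & c)
    # carry cn is ready after 2n gate delays, s(n-1) after 2n-1
    return s, c

# 0111 (7) + 0110 (6), LSB first:
print(ripple_carry_add([1, 1, 1, 0], [0, 1, 1, 0]))  # ([1, 0, 1, 1], 0) → 1101 = 13
```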
2.5 – Ripple Carry Adder
Virtual Lab
Virtual Lab Link:
Design a 4-bit RCA and upload your diagram in Google Classroom
2.5 – Ripple Carry Adder
TRY YOURSELF
Perform Binary Addition , subtraction on following numbers
a) 6 + 7 b) 9 +12 c) 10 + 15
1) 7 – 6 2) 12- 9 3) 15 - 10
2.5 – Ripple Carry Adder
Session Topic
2.1 Signed number representation
2.2 Fixed and floating point representations
2.3 Character representation
2.4 Integer addition and subtraction
2.5 Ripple carry adder
2.6 Carry look-ahead adder
2.7 Shift-and add multiplication
2.8 Booth multiplier
2.9 Carry save multiplier
2.10 Division - restoring techniques
2.11 Division - non-restoring techniques
2.12 Floating point arithmetic
Module 2- Data representation & Computer arithmetic
2.6 – Carry Look-ahead Adder (Fast Adder)
2
Why Fast Adder ?:
Drawbacks of Ripple Carry Adder
1) Too much of gate delay in developing RCA output
2) Final carry output Cn is available after 2n gate delays
3) All sum bits are available after 2n gate delays, including the delay through the
XOR gates.
4) Overflow indication is available after 2n + 2 gate delays.
5) In n bit RCA , longest path is from input at LSB position to output at MSB
position.
DESIGN OF FAST ADDERS
2.6 – Carry Look-ahead Adder (Fast Adder)
3
Carry Look Ahead Logic (CAL)
• In the ripple carry adder, the FAs cannot operate simultaneously, because the
carry i/p to FA depends on the carry o/p of the previous FA.
• Carry Look Ahead Logic, generates carries itself and give them to the FAs. So all
FAs operate simultaneously and thus reducing the delay significantly.
2.6 – Carry Look-ahead Adder (Fast Adder)
4
CARRY LOOK AHEAD ADDITION:
Sum Si = (Xi ⊕ Yi) ⊕ Ci
Carry out Ci+1 = XiYi + XiCi + YiCi
Functions used: Generate and Propagate
Ci+1 = XiYi + XiCi + YiCi
     = XiYi + (Xi + Yi)Ci
     = Gi + PiCi
where,
Generate function Gi = XiYi
Propagate function Pi = Xi + Yi
2.6 – Carry Look-ahead Adder (Fast Adder)
5
Generate function Gi
• When Xi = Yi = 1: Gi = 1 and Pi = 0
  (Gi = XiYi = 1; arithmetically Xi + Yi = 10B, and omitting the carry gives Pi = 0,
  i.e. the XOR of Xi and Yi)
• The generate function Gi produces the carry out independent of Pi when Xi = Yi = 1.
Propagate function Pi
• When only Xi = 1 or only Yi = 1: Pi = 1 and Gi = 0
• The propagate function Pi passes the carry out independent of Gi when either
  Xi = 1 or Yi = 1.
Gi = XiYi
Pi = Xi + Yi
A B OR AND XOR
0 0 0 0 0
0 1 1 0 1
1 0 1 0 1
1 1 1 1 0
2.6 – Carry Look-ahead Adder (Fast Adder)
6
DESIGN OF FAST ADDERS
• Gi = XiYi
  ✔ So Gi is implemented with an AND gate.
• Pi = Xi + Yi
  ✔ So Pi is implemented with an OR gate.
• Si = Xi ⊕ Yi ⊕ Ci
  ✔ So Si is implemented with two XOR gates.
• Can we reduce the number of gates?
2.6 – Carry Look-ahead Adder (Fast Adder)
• Pi can be implemented with an OR gate (Pi = Xi + Yi).
• But in this circuit, it is implemented with the same XOR gate that was
  used to generate the sum output.
• Comparing the truth tables of the XOR and OR gates, they agree except for
  the last row (Xi = Yi = 1).
• But when Xi = Yi = 1, the carry out is produced by Gi and does not depend
  on Pi.
• So for the design of Pi, an XOR gate can be used (which is already there
  for the sum calculation).
XOR Gate
X Y Z
0 0 0
0 1 1
1 0 1
1 1 0
OR Gate
X Y Z
0 0 0
0 1 1
1 0 1
1 1 1
2.6 – Carry Look-ahead Adder (Fast Adder)
8
•Combinational circuit of these gates is called B cell (Bit
Storage Cell).
2.6 – Carry Look-ahead Adder (Fast Adder)
4 bit Carry LookAhead Adder (CLA)
2.6 – Carry Look-ahead Adder (Fast Adder)
10
Design of 4 bit Carry Look Ahead Adder
The carries can be implemented as (Ci+1 = Gi + PiCi):
C1 = G0 + P0C0
C2 = G1 + P1C1
C3 = G2 + P2C2
C4 = G3 + P3C3
Substitute C3 in C4:
C4 = G3 + P3(G2 + P2C2)
   = G3 + P3G2 + P3P2(G1 + P1C1)             (substitute C2)
   = G3 + P3G2 + P3P2G1 + P3P2P1(G0 + P0C0)  (substitute C1)
C4 = G3 + P3G2 + P3P2G1 + P3P2P1G0 + P3P2P1P0C0
2.6 – Carry Look-ahead Adder (Fast Adder)
11
Similarly,
c1 = G0 + P0c0
c2 = G1 + P1G0 + P1P0c0
c3 = G2 + P2G1 + P2P1G0 + P2P1P0c0
c4 = G3 + P3G2 + P3P2G1 + P3P2P1G0 + P3P2P1P0c0
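The recurrence ci+1 = Gi + Pi·ci behind these expansions can be evaluated in Python (a hypothetical helper; in hardware all four carries are produced in parallel by two-level AND-OR logic rather than sequentially):

```python
def lookahead_carries(x: int, y: int, c0: int = 0, n: int = 4):
    """Compute all carries from the generate/propagate functions."""
    G = [((x >> i) & 1) & ((y >> i) & 1) for i in range(n)]  # Gi = xi·yi
    P = [((x >> i) & 1) | ((y >> i) & 1) for i in range(n)]  # Pi = xi + yi
    c = [c0]
    for i in range(n):
        c.append(G[i] | (P[i] & c[i]))  # ci+1 = Gi + Pi·ci
    return c                            # [c0, c1, ..., cn]

print(lookahead_carries(0b0111, 0b0110))  # carries for 7 + 6
```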
2.6 – Carry Look-ahead Adder (Fast Adder)
Delay calculation
• All carries can be obtained three gate delays after the input operands X, Y, and
  C0 are applied:
  – Only one gate delay is needed to develop all Pi and Gi signals.
  – Two more gate delays are needed to produce each ci+1 (AND-OR circuit).
• After one more XOR gate delay, all sum bits are available (4 gate delays).
• In total, the 4-bit addition process requires only four gate delays, independent of n.
2.6 – Carry Look-ahead Adder (Fast Adder)
13
Thus in a 4 bit CLA adder,
C4 = 3 delays
S3 = 3 + 1 (XOR) = 4 delays (for all cases)

Carry / Sum   CLA   Ripple Carry
C4             3         8
S3             4         7
2.6 – Carry Look-ahead Adder (Fast Adder)
16 bit Carry LookAhead Adder (CLA)
• The 4-bit adder design cannot be extended easily to longer operands due to the
fan-in problem.
• Longer adders - cascading a number of 4-bit adders
• 16 bit adder – Four , 4 bit CLA cascaded.
• 32 bit adder – Eight , 4 bit CLA cascaded.
Delay in cascading
• 16 bit adder – four 4-bit CLAs cascaded.
  – C4 = 3, S3 = 4
  – C16 = (3 × 2) + 3 = 9, S15 = 9 + 1 = 10 (each later block adds 2 gate delays)
• 32 bit adder – eight 4-bit CLAs cascaded.
  – C32 = (7 × 2) + 3 = 17, S31 = 17 + 1 = 18
• Compared to the RCA, the 16- or 32-bit cascaded CLA has less delay.
• This can be decreased further by using carry-lookahead logic to generate
  C4, C8, ….
Carry / Sum   CLA-cascading   Ripple Carry
C16                9               32
S15               10               31
C32               17               64
S31               18               63
2.6 – Carry Look-ahead Adder (Fast Adder)
16 bit Carry LookAhead Adder (CLA)
Combination of 4 CLA
2.6 – Carry Look-ahead Adder (Fast Adder)
•
2.6 – Carry Look-ahead Adder (Fast Adder)
•
2.6 – Carry Look-ahead Adder (Fast Adder)
18
Design of 16 bit Carry Look Ahead Adder
Carry / Sum   CLA using G & P functions   CLA-cascading   Ripple Carry
C16                     5                      9               32
S15                     8                     10               31
2.6 – Carry Look-ahead Adder (Fast Adder)
Virtual Lab Link:
2.6 – Carry Look-ahead Adder (Fast Adder)
TRY YOURSELF
1. Derive the generate and propagate function for 4 bit adder
2. Derive the generate and propagate function for 16 bit adder
2.6 – Carry Look-ahead Adder (Fast Adder)
19IT302 – COMPUTER ORGANIZATION AND ARCHITECTURE 1
Session Topic
2.1 Signed number representation
2.2 Fixed and floating point representations
2.3 Character representation
2.4 Integer addition and subtraction
2.5 Ripple carry adder
2.6 Carry look-ahead adder
2.7 Shift-and Add multiplication
2.8 Booth multiplier
2.9 Carry save multiplier
2.10 Division - restoring techniques
2.11 Division - non-restoring techniques
2.12 Floating point arithmetic
Module 2- Data representation & Computer arithmetic
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
2
Normal multiplication technique:
the product of two n-bit numbers is a 2n-bit number.
Unsigned multiplication can be viewed as addition of shifted versions of the multiplicand.
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
Two types of multiplier
 Array multiplier
 Sequential circuit multiplier ( Shift and Add multiplier)
Array multiplier
 Implemented using array of full adder
 Two dimensional logic (n X n)
 Each row produce the partial product (PP)
 Add PP at each stage
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
[Figure: combinatorial array multiplier — multiplicand m3 m2 m1 m0 enters the top
row, multiplier bits q3 q2 q1 q0 select the rows, and the rows of adders form the
partial products PP0–PP3.]
Product is: P7P6P5P4P3P2P1P0
The multiplicand is shifted by displacing it through an array of adders.
SESSION:15 Ms.S.PADMAVATHI
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
[Figure: typical multiplication cell — an FA takes the jth multiplicand bit ANDed
with the ith multiplier bit, a bit of the incoming partial product PPi, and a
carry-in; it produces a bit of the outgoing partial product PP(i+1) and a carry-out.]
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
• Combinatorial array multipliers are:
– Extremely inefficient.
– Have a high gate count for multiplying numbers of practical size
such as 32-bit or 64-bit numbers.
– Perform only one function, namely, unsigned integer product.
• Improve gate efficiency
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
Sequential circuit multiplier
 Less hardware required
 Single n bit adder used n times
 Repeated in many clock cycles to complete the multiplication
 Add & Shift operation in each cycle
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
Sequential multiplication (Shift and Add multiplication)
• Recall the rule for generating partial products:
– If the ith bit of the multiplier is 1, add the appropriately shifted multiplicand to
the current partial product.
– Multiplicand has been shifted left when added to the partial product.
• Note:
Adding a left-shifted multiplicand to an unshifted partial product is equivalent to
adding an unshifted multiplicand to a right-shifted partial product.
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
Sequential Circuit Multiplier
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
Construction
• n bit adder.
• Register M & Flip-Flop C.
• Mux – select 0 or multiplicand M
• Add / Noadd control line, control sequence
• Registers A and Q are shift registers.
• A & Q together, they hold partial product PPi
• The partial product grows in length by one bit per cycle
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
Working
• At the start, the multiplier is loaded into register Q, the multiplicand into register
M, and C and A are cleared to 0.
• Multiplier bit q0 appears at the LSB position of Q and generates the Add/Noadd signal.
• If qi =0, then Noadd signal generated from control sequence and MUX select 0.
• If qi =1, then add signal generated from control sequence and MUX select M.
• At the end of each cycle, C, A, and Q are shifted right one bit position to allow for
growth of the partial product (as the multiplier is shifted out of register Q).
• After n cycles, the high-order half of the product is held in register A and the
low-order half is in register Q.
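The register-level working described above can be sketched in Python (function name and bit-packing are my own; the slides describe hardware registers C, A, and Q):

```python
def shift_add_multiply(m: int, q: int, n: int = 4) -> int:
    """Unsigned shift-and-add multiplication with registers C, A, Q."""
    a, c = 0, 0
    for _ in range(n):
        if q & 1:                       # multiplier bit q0 decides Add / Noadd
            total = a + m
            a, c = total & ((1 << n) - 1), total >> n
        # Shift C, A, Q right one bit: Q's new MSB is A's old LSB,
        # A's new MSB is the carry flip-flop C.
        q = ((a & 1) << (n - 1)) | (q >> 1)
        a = (c << (n - 1)) | (a >> 1)
        c = 0
    return (a << n) | q                 # high half in A, low half in Q

print(shift_add_multiply(0b1101, 0b1011))  # 13 × 11 = 143
print(shift_add_multiply(0b1111, 0b1010))  # 15 × 10 = 150
```

These two calls reproduce the worked tables below: after four cycles A,Q hold 1000 1111 (143) and 1001 0110 (150) respectively.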
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
Sequential Circuit Multiplier
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
C A Q
0 0 0 0 0 1 0 1 1 Initial position
0 1 1 0 1 1 0 1 1 Add (A+M) 1st
cycle
0 0 1 1 0 1 1 0 1 Shift R
1 0 0 1 1 1 1 0 1 Add(A+M) 2nd
cycle
0 1 0 0 1 1 1 1 0 Shift R
0 1 0 0 1 1 1 1 0 NoAdd (A+0) 3rd
cycle
0 0 1 0 0 1 1 1 1 Shift R
1 0 0 0 1 1 1 1 1 Add (A+M) 4th
cycle
0 1 0 0 0 1 1 1 1 Shift R
M 1 1 0 1
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
C A Q
0 0 0 0 0 1 0 1 0 Initial position
0 0 0 0 0 1 0 1 0 No Add (A+0) 1st
cycle
0 0 0 0 0 0 1 0 1 Shift R
0 1 1 1 1 0 1 0 1 Add(A+M) 2nd
cycle
0 0 1 1 1 1 0 1 0 Shift R
0 0 1 1 1 1 0 1 0 NoAdd (A+0) 3rd
cycle
0 0 0 1 1 1 1 0 1 Shift R
1 0 0 1 0 1 1 0 1 Add (A+M) 4th
cycle
0 1 0 0 1 0 1 1 0 Shift R
M 1 1 1 1
2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
Virtual Lab Link:
22CS201 COA
Unit-1_Digital Computers, number systemCOA[1].pptxUnit-1_Digital Computers, number systemCOA[1].pptx
Unit-1_Digital Computers, number systemCOA[1].pptx
 
Chapter1 basic structure of computers
Chapter1  basic structure of computersChapter1  basic structure of computers
Chapter1 basic structure of computers
 
chapter 1 -Basic Structure of Computers.ppt
chapter 1 -Basic Structure of Computers.pptchapter 1 -Basic Structure of Computers.ppt
chapter 1 -Basic Structure of Computers.ppt
 
chapter1-basicstructureofcomputers.ppt
chapter1-basicstructureofcomputers.pptchapter1-basicstructureofcomputers.ppt
chapter1-basicstructureofcomputers.ppt
 
UNIT I.ppt
UNIT I.pptUNIT I.ppt
UNIT I.ppt
 
COA-Unit-1-Basics.ppt
COA-Unit-1-Basics.pptCOA-Unit-1-Basics.ppt
COA-Unit-1-Basics.ppt
 
basic structure of computers
basic structure of computersbasic structure of computers
basic structure of computers
 
1INTRODUCTION.pptx.pdf
1INTRODUCTION.pptx.pdf1INTRODUCTION.pptx.pdf
1INTRODUCTION.pptx.pdf
 
COA.pptx
COA.pptxCOA.pptx
COA.pptx
 
Cpu unit
Cpu unitCpu unit
Cpu unit
 
Basic structure of computers
Basic structure of computersBasic structure of computers
Basic structure of computers
 
Basic structure of computers
Basic structure of computersBasic structure of computers
Basic structure of computers
 
Chapter 1 basic structure of computers
Chapter 1  basic structure of computersChapter 1  basic structure of computers
Chapter 1 basic structure of computers
 

More from Kathirvel Ayyaswamy

22cs201 COMPUTER ORGANIZATION AND ARCHITECTURE
22cs201 COMPUTER ORGANIZATION AND ARCHITECTURE22cs201 COMPUTER ORGANIZATION AND ARCHITECTURE
22cs201 COMPUTER ORGANIZATION AND ARCHITECTUREKathirvel Ayyaswamy
 
20CS2021-Distributed Computing module 2
20CS2021-Distributed Computing module 220CS2021-Distributed Computing module 2
20CS2021-Distributed Computing module 2Kathirvel Ayyaswamy
 
Recent Trends in IoT and Sustainability
Recent Trends in IoT and SustainabilityRecent Trends in IoT and Sustainability
Recent Trends in IoT and SustainabilityKathirvel Ayyaswamy
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network SecurityKathirvel Ayyaswamy
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network SecurityKathirvel Ayyaswamy
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network SecurityKathirvel Ayyaswamy
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security 18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security Kathirvel Ayyaswamy
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network SecurityKathirvel Ayyaswamy
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network SecurityKathirvel Ayyaswamy
 
20CS024 Ethics in Information Technology
20CS024 Ethics in Information Technology20CS024 Ethics in Information Technology
20CS024 Ethics in Information TechnologyKathirvel Ayyaswamy
 
20CS024 Ethics in Information Technology
20CS024 Ethics in Information Technology20CS024 Ethics in Information Technology
20CS024 Ethics in Information TechnologyKathirvel Ayyaswamy
 

More from Kathirvel Ayyaswamy (20)

22CS201 COA
22CS201 COA22CS201 COA
22CS201 COA
 
22cs201 COMPUTER ORGANIZATION AND ARCHITECTURE
22cs201 COMPUTER ORGANIZATION AND ARCHITECTURE22cs201 COMPUTER ORGANIZATION AND ARCHITECTURE
22cs201 COMPUTER ORGANIZATION AND ARCHITECTURE
 
18CS3040_Distributed Systems
18CS3040_Distributed Systems18CS3040_Distributed Systems
18CS3040_Distributed Systems
 
20CS2021-Distributed Computing module 2
20CS2021-Distributed Computing module 220CS2021-Distributed Computing module 2
20CS2021-Distributed Computing module 2
 
18CS3040 Distributed System
18CS3040 Distributed System	18CS3040 Distributed System
18CS3040 Distributed System
 
20CS2021 Distributed Computing
20CS2021 Distributed Computing 20CS2021 Distributed Computing
20CS2021 Distributed Computing
 
20CS2021 DISTRIBUTED COMPUTING
20CS2021 DISTRIBUTED COMPUTING20CS2021 DISTRIBUTED COMPUTING
20CS2021 DISTRIBUTED COMPUTING
 
18CS3040 DISTRIBUTED SYSTEMS
18CS3040 DISTRIBUTED SYSTEMS18CS3040 DISTRIBUTED SYSTEMS
18CS3040 DISTRIBUTED SYSTEMS
 
Recent Trends in IoT and Sustainability
Recent Trends in IoT and SustainabilityRecent Trends in IoT and Sustainability
Recent Trends in IoT and Sustainability
 
20CS2008 Computer Networks
20CS2008 Computer Networks 20CS2008 Computer Networks
20CS2008 Computer Networks
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security 18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security
 
18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security18CS2005 Cryptography and Network Security
18CS2005 Cryptography and Network Security
 
20CS2008 Computer Networks
20CS2008 Computer Networks20CS2008 Computer Networks
20CS2008 Computer Networks
 
20CS2008 Computer Networks
20CS2008 Computer Networks 20CS2008 Computer Networks
20CS2008 Computer Networks
 
20CS024 Ethics in Information Technology
20CS024 Ethics in Information Technology20CS024 Ethics in Information Technology
20CS024 Ethics in Information Technology
 
20CS024 Ethics in Information Technology
20CS024 Ethics in Information Technology20CS024 Ethics in Information Technology
20CS024 Ethics in Information Technology
 

Recently uploaded

Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...VICTOR MAESTRE RAMIREZ
 
microprocessor 8085 and its interfacing
microprocessor 8085  and its interfacingmicroprocessor 8085  and its interfacing
microprocessor 8085 and its interfacingjaychoudhary37
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx959SahilShah
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...Soham Mondal
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...srsj9000
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxPoojaBan
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 

Recently uploaded (20)

Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
 
microprocessor 8085 and its interfacing
microprocessor 8085  and its interfacingmicroprocessor 8085  and its interfacing
microprocessor 8085 and its interfacing
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx
 
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Serviceyoung call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptx
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
 

22CS201 COA

  • 1. 1 22CS201 Computer Organization and Architecture Module – I Dr.A.Kathirvel, Dean, Computing Cluster Sri Krishna College of Technology, Coimbatore
  • 2. 2 22CS201 Computer Organization and Architecture Module I - Functional Blocks of a Computer and Data Representation Functional Blocks of a Computer: Functional blocks and its operations. Instruction set architecture of a CPU - registers, instruction execution cycle, Data path, RTL interpretation of instructions, instruction set. Performance metrics. Addressing modes. Data Representation: Signed number representation, fixed and floating point representations, character representation. Computer arithmetic - integer addition and subtraction, ripple carry adder, carry look-ahead adder, etc. multiplication - shift-and add, Booth multiplier, carry save multiplier, etc. Division restoring and non-restoring techniques, floating point arithmetic. Dr.A.Kathirvel, Professor & DEAN, DCSE, SKCT Kathirvel.a@skct.edu.in
  • 3. 3 Module I Text Books: 1. David A. Patterson and John L. Hennessy, “Computer Organization and Design: The Hardware/Software Interface”, 6th Edition, Morgan Kaufmann/Elsevier, 2020. 2. Carl Hamacher, Zvonko Vranesic, Safwat Zaky, Naraig Manjikian, “Computer Organization and Embedded Systems”, McGraw-Hill, 6th Edition 2017. Reference Books: 1. John P. Hayes, “Computer Architecture and Organization”, McGraw-Hill, 3rd Edition, 2017 2. William Stallings, “Computer Organization and Architecture: Designing for Performance”, 11th Edition, Pearson Education 2018. 3. Vincent P. Heuring and Harry F. Jordan, “Computer System Design and Architecture”, 2nd Edition, Pearson Education 2004.
  • 4. 4 Session Topic 1.1 Functional Blocks of a Computer 1.2 Operation and Operands of Computer Hardware 1.3 Instruction Set Architecture 1.4 Register Transfer Language (RTL) interpretation of instructions 1.5 Addressing Modes -1 1.6 Addressing Modes -2 1.7 Instruction Execution Cycle 1.8 Instruction Set 1.9 Performance Metrics Module 1- Functional blocks of Computer & Instruction Set Architecture
  • 5. 5 Computer Architecture : It is the set of rules and methods that describe the functionality, organization and implementation of computer systems. Functional units : A computer consists of the following five independent units, each with its own functionality. • Input unit • Output unit • Arithmetic and Logic unit • Memory unit • Control unit 1.1 - Functional Units of a Digital Computer
  • 6. 6 Input unit • Input units are used by the computer to read data. • e.g., keyboards, mouse, joysticks, trackballs, microphones, etc. Output Unit • The primary function of the output unit is to send the processed results to the user. • e.g., monitor, printer, etc. 1.1 - Functional Units of a Digital Computer
  • 7. 7 Arithmetic & logical unit • It performs arithmetic operations like addition, subtraction, multiplication and division, and also logical operations like AND, OR and NOT. Memory unit • It is a storage area in which programs and data are stored. • The memory unit can be categorized as primary memory and secondary memory. 1.1 - Functional Units of a Digital Computer
  • 8. 8 Control unit • It coordinates the operation of the processor. • It directs the other functional units to respond to a program's instructions. • It is the nerve center of a computer system. 1.1 - Functional Units of a Digital Computer
  • 9. 9 DISCUSSION 1. Technology behind the working of input and output devices. 2. Memory types. 3. Processor – latest in market with their producer name. 4. Architecture types of computer. 1.1 - Functional Units of a Digital Computer
  • 10. 10 Quiz Link 1.1 - Functional Units of a Digital Computer https://docs.google.com/forms/d/1kuMF1VOL6aoSF931L80ZPISZl YdA-f_KBwJupMiHSvM/edit?usp=sharing
  • 11. 11
  • 12. 12 Session Topic 1.1 Functional Blocks of a Computer 1.2 Operation and Operands of Computer Hardware 1.3 Instruction Set Architecture 1.4 Register Transfer Language (RTL) interpretation of instructions 1.5 Addressing Modes -1 1.6 Addressing Modes -2 1.7 Instruction Execution Cycle 1.8 Instruction Set 1.9 Performance Metrics Module 1- Functional blocks of Computer & Instruction Set Architecture
  • 13. 13 Introduction – Stored program concept – Operational concept – MIPS instruction set – Example 1.2.1 - Operation of Computer Hardware – Arithmetic (integer/floating-point) – Logical – Shift – Compare – Load/store – Branch/jump – System control and coprocessor 1.2 - Operation and Operands of Computer Hardware 1.2.2 - Operands of Computer Hardware – Register operands – Memory operands – Constant or immediate operands
  • 14. 14 • There is one main memory store • Both data and instructions reside in same memory store • Data and instructions are fetched (copied) from memory over the same set of buses Design goals: • Maximize performance and minimize cost and reduce design time 1.2.1 – Operation of Computer Hardware a) Stored Program Concept (Von Neumann Architecture)
  • 15. 15 b) Operational Concepts • Lists of instructions and data are stored in the memory • Instructions are fetched from memory for execution. Basic steps to execute a program • Fetch : Individual instructions are transferred from the memory to the processor • Decode : Determines the operation to be performed & operands required. • Execute : Operation is processed in the ALU. • Store : Result / data is stored in memory. 1.2.1 – Operation of Computer Hardware
  • 16. 16 Registers • Instruction register (IR) - Holds the currently executing Instruction. • Program counter (PC) - address of next instruction to be executed. • General-purpose register (R0 – Rn-1) • Memory address register (MAR) – Contains the address of memory location to be accessed. • Memory data register (MDR) - Contains the data to be written into or read out of the address location 1.2.1 – Operation of Computer Hardware
  • 17. 17 Worked example (memory layout: address 0 = xxxx, address 4 = yyy, address 8 = ADD R1, LOCA followed by SUB R2, R3; LOCA at address 16 holds 25; PC starts at 8). 1. Fetch instruction: MAR <- PC = 8; read control signal; MDR <- mem[MAR] = ADD R1, LOCA; IR <- MDR; PC <- PC + 4. 2. Decode: operation = addition, operands = R1, LOCA (R1 <- R1 + [LOCA]). 3. Fetch data: MAR <- address of operand = LOCA = 16; read control signal; MDR <- [LOCA] = 25. 4. Execute: R1 <- R1 + MDR
  • 18. 18 Example Instruction ADD R1, LOCA // R1 = R1 + [LOCA] (adding the content of memory location LOCA and register R1) Typical Operating Steps • MAR <- PC • MDR <- Mem[MAR] • IR <- MDR • PC <- PC + 4 • MAR <- IR[Operand] • MDR <- Mem[MAR] • R1 <- R1 + MDR 1.2.1 – Operation of Computer Hardware
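The register-transfer steps above can be sketched as a toy simulation. This is an illustrative model only, not the actual MIPS datapath: the memory contents, addresses, and initial register values are assumed for the example.

```python
# Toy simulation of the instruction cycle for ADD R1, LOCA.
# Instruction at address 8 adds the word at LOCA (address 16) into R1.
memory = {8: ("ADD", "R1", 16),  # decoded form of ADD R1, LOCA
          16: 25}                # LOCA holds 25 (assumed)
regs = {"PC": 8, "R1": 10}       # assumed initial register values

# Fetch: MAR <- PC; MDR <- mem[MAR]; IR <- MDR; PC <- PC + 4
mar = regs["PC"]
ir = memory[mar]
regs["PC"] += 4

# Decode: extract the operation and its operands
op, dest, loca = ir

# Fetch data and Execute: MDR <- mem[LOCA]; R1 <- R1 + MDR
if op == "ADD":
    regs[dest] = regs[dest] + memory[loca]

print(regs["R1"], regs["PC"])  # -> 35 12
```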
  • 19. 19 c) Instructions – Language of the machine – More primitive than higher level languages (e.g., no sophisticated control flow such as while or for loops) – Very restrictive • e.g., MIPS arithmetic instructions MIPS instruction set – MIPS – Microprocessor without Interlocked Pipeline Stages (not to be confused with “millions of instructions per second”, which is a performance measure). – It is a RISC (Reduced Instruction Set Computer) ISA
  • 20. 20 MIPS Arithmetic – All MIPS arithmetic instructions have 3 operands – Operand order is fixed (e.g., destination first) – Example: C code: A = B + C MIPS code: add a, b, c # The sum of b and c is placed in a add $s0, $s1, $s2 # The sum of register content s1 and s2 is placed in s0 1.2.1 - Operation of Computer Hardware Types of MIPS instructions
  • 21. 21 1.2.1 - Operation of Computer Hardware
  • 22. 22 Example 1 • sum of four variables b, c, d, and e into variable a. • a=b+c+d+e – add a, b, c # The sum of b and c is placed in a (a=b+c) – add a, a, d # The sum of b, c, and d is now in a (a=a+d) – add a, a, e # The sum of b, c, d, and e is now in a (a=a+e) 1.2.1 - Operation of Computer Hardware It takes three instructions to sum four variables, i.e., the compiler must break the statement a=b+c+d+e into three assembly instructions, since only one operation is performed per instruction.
  • 23. 23 • X = a + b - c • add x,a,b • sub x,x,c • X = (a + b) - (c + d) • add t,a,b • add r,c,d • sub x,t,r • Operand forms: add $s0,$s1,$s2 (register), add $s0,$s1,LOCA (memory), add $s0,$s1,4 (immediate)
  • 24. 24 Example 2 • Take five variables a, b, c, d, and e. • d=b+c-e – split into – a=b+c & – d=a-e – add a, b, c # The sum of b and c is placed in a – sub d, a, e # Subtract e from a and place the result in d 1.2.1 - Operation of Computer Hardware It takes two instructions. The split into MIPS assembly language instructions is performed by the compiler
  • 25. 25 Example 3 • Take five variables f, g, h, i, and j: • f = (g + h) – (i + j) – add t0,g,h # temporary variable t0 contains g + h – add t1,i,j # temporary variable t1 contains i + j – sub f,t0,t1 # gets t0 – t1, which is (g + h) – (i + j) 1.2.1 - Operation of Computer Hardware • The first MIPS instruction calculates the sum of g and h and places the result in temporary variable t0 • The second instruction places the sum of i and j in temporary variable t1 • Finally, the subtract instruction subtracts the second sum from the first and places the difference in the variable f
  • 26. 26 a) Register operands – The operands of arithmetic instructions have some restrictions. – They must come from special locations built directly in hardware, called registers. – The size of a register in the MIPS architecture is 32 bits (1 word) • Register Representation – Two-character names following a dollar sign – E.g., $s0, $s1, . . . – add $s3,$s1,$s2 (add the register contents of s1 and s2 and place the result in register s3) 1.2.2 - Operands of Computer Hardware
  • 27. 27 Example 1 • f = (g + h) – (i + j); – add $t0,$s1,$s2 # register $t0 contains g + h – add $t1,$s3,$s4 # register $t1 contains i + j – sub $s0,$t0,$t1 # gets $t0 – $t1 in $s0, which is (g + h) – (i + j) 1.2.2 - Operands of Computer Hardware Register mapping: f -> $s0, g -> $s1, h -> $s2, i -> $s3, j -> $s4. $s0, $s1, … are general purpose registers; $t0, $t1, … are temporary registers
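The three-operand form above can be mimicked with a dictionary standing in for the register file. This is a sketch only: the register names mirror the slide, and the initial values of g, h, i, j are assumed for illustration.

```python
# f = (g + h) - (i + j) compiled to three-operand instructions
# (add/add/sub), with a dict as a toy MIPS register file.
regs = {"$s1": 4, "$s2": 5, "$s3": 6, "$s4": 7}  # g, h, i, j (assumed)

regs["$t0"] = regs["$s1"] + regs["$s2"]  # add $t0,$s1,$s2  -> g + h
regs["$t1"] = regs["$s3"] + regs["$s4"]  # add $t1,$s3,$s4  -> i + j
regs["$s0"] = regs["$t0"] - regs["$t1"]  # sub $s0,$t0,$t1  -> f

print(regs["$s0"])  # -> -4, i.e. (4 + 5) - (6 + 7)
```

Note how each line performs exactly one operation with a fixed operand order (destination first), which is why the compiler must break a compound expression into several instructions.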
  • 28. 28 • f = (g + h) – (i + j); • add $t0,$s1,$s2 • add $t1,$s3,$s4 • sub $s0,$t0,$t1
  • 29. 29 • X = a + b - c • Register mapping: $s0 = x; $s1, $s2, $s3 = a, b, c • Overwriting a: add $s1,$s1,$s2 then sub $s0,$s1,$s3 • Using a temporary: add $t0,$s1,$s2 then sub $s0,$t0,$s3
  • 30. 30 b) Memory operands – MIPS transfers data between memory and processor registers. – Data transfer instructions are used for this type of operation – Two types • load word (lw) - copies data from memory to a register • store word (sw) - copies data from a register to memory – Example 1 lw $t0,8($s3) , lw $s1,50($s4) – Example 2 sw $s1,100($s2), sw $t2, 32($s5) 1.2.2 - Operands of Computer Hardware
  • 31. 31 – Example lw $t0,8($s3) » $t0 – Temporary register = Memory[$s3+8] » $s3 - base address » 8 offset 1.2.2 - Operands of Computer Hardware
  • 32. 32 Memory layout figure: words are numbered 0, 1, 2, 3, … and each word holds 4 bytes (1 byte = 8 bits; 1 word = 32 bits = 4 bytes)
  • 33. 33 Memory Organization - Alignment : Byte Order • Bytes in a word can be numbered in two ways: – big-endian – little-endian • In a 32-bit computer, 1 word = 4 bytes (4 x 8 bits = 32 bits) Big-endian : • byte 0 at the leftmost (most significant) to • byte 3 at the rightmost (least significant) 1.2.2 - Operands of Computer Hardware
  • 34. 34 Memory Organization - Alignment : Byte Order Little-endian • byte 3 at the leftmost (most significant) • byte 0 at the rightmost (least significant) 1.2.2 - Operands of Computer Hardware
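The two byte orders can be observed directly with Python's `struct` module, which packs the same 32-bit word in either order:

```python
import struct

# The same 32-bit word 0x01020304 stored in memory in both byte orders.
word = 0x01020304
big = struct.pack(">I", word)     # big-endian: most significant byte first
little = struct.pack("<I", word)  # little-endian: least significant byte first

print(big.hex())     # -> 01020304
print(little.hex())  # -> 04030201
```

The hex dumps show byte 0x01 (the most significant byte) leading in the big-endian encoding and trailing in the little-endian one.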
  • 35. 35 Example 1 • Assume that A is an array of 100 words • Perform g = h + A[8]; • single operation in this assignment statement, but one of the operands is in memory. • So first perform load operation to transfer the content from memory location to register, then perform addition operation • compiler has associated with • starting address (base address) of the array is in $s3. 1.2.2 - Operands of Computer Hardware variables g h Base address Temporary Register registers $s1 $s2 $s3 $t0
  • 36. 36 Example 1 • first transfer A[8] to a register and be placed in a temporary register • lw $t0,8($s3) # Temporary reg $t0 gets A[8]. • Add temporary register content with register s2 content (h), then store it in the register s1 (g) • add $s1,$s2,$t0 # g = h + A[8] 1.2.2 - Operands of Computer Hardware g = h + A[8]; lw $t0,8($s3) add $s1,$s2,$t0
  • 37. 37 Example 2 • Assume that A is an array of 100 words • Perform A[12] = h + A[8]; • a single operation, but two of the operands are in memory. 1. Perform load operation 2. Perform addition 3. Perform store operation • the compiler has associated the • starting address (base address) of the array with $s3. 1.2.2 - Operands of Computer Hardware Register mapping: h -> $s2, base address -> $s3, temporary register -> $t0
  • 38. 38 Example 2 • First transfer A[8] to temporary register , lw $t0,8($s3) • Add t0 with s2 and place it in t0, add $t0,$s2,$t0 • Store sum into A[12] 1.2.2 - Operands of Computer Hardware A[12] = h + A[8] lw $t0,8($s3) add $t0,$s2,$t0 sw $t0,12($s3)
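The lw / add / sw sequence above can be sketched with a Python list standing in for memory. This is a toy model with assumed contents for A[8] and h; note that on a real byte-addressed MIPS, word index 8 corresponds to byte offset 32 (8 words x 4 bytes), whereas the slides use the word index directly as the offset for simplicity.

```python
# Toy model of A[12] = h + A[8] using load, add, store.
A = [0] * 100          # array of 100 words; base address held in $s3
A[8] = 30              # assumed contents of A[8]
h = 12                 # assumed value held in $s2

t0 = A[8]              # lw  $t0,8($s3)   -> load A[8] into $t0
t0 = h + t0            # add $t0,$s2,$t0  -> h + A[8]
A[12] = t0             # sw  $t0,12($s3)  -> store the sum into A[12]

print(A[12])  # -> 42
```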
  • 39. 39 c) Constant or Immediate operands • Small constants are used quite frequently (50% of operands) e.g., A = A + 5; B = B - 18; pc = pc + 4; • e.g., – incrementing the index of an array to point to the next item – incrementing the program counter to point to the next instruction • The constants are placed in memory when the program is loaded. • For example, – to add the constant 4 to register $s3, • lw $t0, AddrConstant4($s1) # $t0 = constant 4 • add $s3,$s3,$t0 # $s3 = $s3 + $t0 ($t0 == 4) 1.2.2 - Operands of Computer Hardware
  • 40. 40 1.2.2 - Operands of Computer Hardware c) Constant or Immediate operands • An alternative method that avoids the load instruction is offered. • The arithmetic instruction add immediate, or addi • This quick add instruction has – one constant operand – register operands • e.g., to add 4 to register $s3, – addi $s3,$s3,4 # $s3 = $s3 + 4 Advantages • operations are much faster because the constant is included inside the arithmetic instruction
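The two ways of adding a constant can be contrasted in a short sketch. The memory label `AddrConstant4` mirrors the slide's example; the initial value of $s3 is assumed.

```python
# Adding the constant 4 to $s3 two ways: via a constant kept in
# memory (lw + add) vs. an immediate embedded in the instruction (addi).
memory = {"AddrConstant4": 4}   # constant placed in memory at load time
s3 = 10                         # assumed initial value of $s3

# lw $t0,AddrConstant4($s1); add $s3,$s3,$t0  -- one memory access needed
t0 = memory["AddrConstant4"]
s3_via_load = s3 + t0

# addi $s3,$s3,4  -- the constant travels inside the instruction itself
s3_via_addi = s3 + 4

print(s3_via_load, s3_via_addi)  # -> 14 14
```

Both paths compute the same result; addi is faster because it skips the memory access for the constant.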
  • 41. 41 1.2 - Operation and Operands of Computer Hardware TRY YOURSELF 1. Client 1 stored his data in location B of the memory and Client 2 directly sent his data to register R0 in the processor. Add the data sent by the two clients and store the result in register R1.
  • 42. 42 1.2 - Operation and Operands of Computer Hardware https://docs.google.com/forms/d/1r_I81vayGncf- MwVs1O7yaAPIj6mKLOC_VodtLJISrc/edit?usp=sharing
  • 43. 43
  • 44. 44 Session Topic 1.1 Functional Blocks of a Computer 1.2 Operation and Operands of Computer Hardware 1.3 Instruction Set Architecture 1.4 Register Transfer Language (RTL) interpretation of instructions 1.5 Addressing Modes -1 1.6 Addressing Modes -2 1.7 Instruction Execution Cycle 1.8 Instruction Set 1.9 Performance Metrics Module 1- Functional blocks of Computer & Instruction Set Architecture
  • 45. 45 • The instruction set architecture is the interface between the hardware and the software. • The Instruction Set Architecture (ISA) defines the way in which a microprocessor is programmed at the machine level. • i.e., an ISA is the design of a computer from the programmer's perspective. 1.3 Instruction Set Architecture
  • 46. 46 Different features considered when designing the instruction set architecture are: 1. Types of instructions (Operations in the Instruction set) 2. Types and sizes of operands 3. Addressing Modes 4. Addressing Memory 5. Encoding and Instruction Formats 6. Compiler related issues 1.3 Instruction Set Architecture
  • 47. 47 1.3.1 Types of instructions: A computer must have the following types of instructions: a) Data transfer instructions b) Data manipulation instructions c) Program sequencing and control instructions d) Input and output instructions 1.3 Instruction Set Architecture
  • 48. 48 a) Data transfer instructions – perform data transfer between the various storage places in the computer system, viz. registers, memory and I/O – two basic operations are, Load (or Read or Fetch) and Store (or Write) b) Data manipulation instructions – perform operations on data and indicate the computational capabilities for the processor. – E.g arithmetic operations, logical operations or shift operations – Add,sub,mul, addi, and 1.3 Instruction Set Architecture
  • 49. 49 c) Program sequencing and control instructions – These change the flow of the program. – E.g. 1, looping: adding a list of n numbers. – E.g. 2, branch instructions: a branch loads a new value into the program counter. • conditional branch & unconditional branch 1.3 Instruction Set Architecture Sequencing: Move DATA1, R0; Add DATA2, R0; Add DATA3, R0; ... Add DATAn, R0; Move R0, SUM. Looping: Move N, R1; Clear R0; LOOP: determine address of “Next” number and add “Next” number to R0 (Add R0, R0, X); Decrement R1; Branch > 0, LOOP; Move R0, SUM
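The looping pattern above (Move N, R1 / Clear R0 / decrement and branch) corresponds to an ordinary counted loop; a minimal Python sketch, where DATA stands in for the n numbers in memory:

```python
# Stand-in for the memory locations DATA1..DATAn (sample values)
DATA = [3, 1, 4, 1, 5]

# Move N, R1 ; Clear R0
r1 = len(DATA)      # loop counter N
r0 = 0              # running sum
i = 0               # "address of Next number"

# LOOP: add "Next" number to R0; Decrement R1; Branch > 0, LOOP
while r1 > 0:
    r0 = r0 + DATA[i]
    i += 1
    r1 -= 1

SUM = r0            # Move R0, SUM
```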
  • 50. 50 d) Input and output instructions – These transfer information between the registers, memory and the input/output devices. – Either special instructions that exclusively perform I/O transfers are used, or ordinary memory-reference instructions are used to do the I/O transfers (memory-mapped I/O). 1.3 Instruction Set Architecture
  • 51. 51 1.3 Instruction Set Architecture
  • 52. 52 1.3.2 Types and sizes of operands Various data types supported by the processor and their lengths are • Common operand size – Character (8 bits), – Half word (16 bits), – Word (32 bits), – Single Precision Floating Point (1 Word), – Double Precision Floating Point (2 Words), • Operand data types – two’s complement binary numbers, – Characters usually in ASCII – Floating point numbers following the IEEE Standard – Packed and unpacked decimal numbers. 1.3 Instruction Set Architecture
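Python's struct module can confirm these operand sizes; a quick sketch using little-endian packing (the values are arbitrary examples):

```python
import struct

# Operand sizes from the slide: character = 8 bits, half word = 16 bits,
# word = 32 bits; an IEEE 754 single fits in 1 word, a double in 2 words.
half_word = struct.pack("<h", -300)       # 16-bit two's complement
word      = struct.pack("<i", -70000)     # 32-bit two's complement
single    = struct.pack("<f", 1.5)        # IEEE 754 single precision
double    = struct.pack("<d", 1.5)        # IEEE 754 double precision

# Reinterpreting an all-ones word as signed gives -1 (two's complement)
minus_one = struct.unpack("<i", struct.pack("<I", 0xFFFFFFFF))[0]
```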
  • 53. 53 Operands of Computer Hardware – Registers operands – Memory operands – Constant or immediate operands (Refer PPT 1.2- operation and operands of computer hardware )
  • 54. 54 1.3.3 Addressing Modes – The way the operands are chosen during program execution 1.3 Instruction Set Architecture Detailed explanation is in PPT 1.5-Addressing Modes
  • 55. 55 1.3.5 Instruction Format  Defines the layout of an instruction.  Includes an opcode and zero or more operands.  Opcode : It defines an operation to be performed like Add, Subtract, Multiply, Shift, Complement ,etc.  Operands / Address : It is a field which contain the operand or location of operand, i.e., register or memory location.  e.g ADD A,B OPCODE OPERANDS or ADDRESS 1.3 Instruction Set Architecture
  • 56. 56 Types of instruction format 1. Three address instruction Examples Add A,B,C // (A = B+C) 2. Two address instruction Add A,B // (A = A+B) 3. One address instruction Add A // (AC = AC+A) (AC = Accumulator) 4. Zero address instruction CMA // (Complement the accumulator content) OPCODE Address 1 Address 2 Address 3 OPCODE Address 1 Address 2 OPCODE Address 1 OPCODE 1.3 Instruction Set Architecture
  • 57. 57 Three Address Instruction 57 • General Format : Operation Destination, Source1, Source2 Example: Evaluate X=(A+B) * (C+D) 1. ADD R1,A,B R1 ← M[A] + M[B] 2. ADD R2,C,D R2 ← M[C] + M[D] 3. MUL X,R1,R2 M[X] ← R1 * R2 • Advantage : Reduced number of instructions • Disadvantage : Instructions are lengthy, so each needs more space. 1.3 Instruction Set Architecture
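A toy Python run of the three-address sequence, with assumed memory values A=2, B=3, C=4, D=5:

```python
# Toy run of the three-address sequence for X = (A+B) * (C+D).
mem = {"A": 2, "B": 3, "C": 4, "D": 5, "X": 0}
regs = {}

regs["R1"] = mem["A"] + mem["B"]     # ADD R1,A,B   R1 <- M[A] + M[B]
regs["R2"] = mem["C"] + mem["D"]     # ADD R2,C,D   R2 <- M[C] + M[D]
mem["X"] = regs["R1"] * regs["R2"]   # MUL X,R1,R2  M[X] <- R1 * R2
```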
  • 58. 58 • X=(A+B) * (C+D) • A+B – Add R1,A,B // R1=A+B • C+D – Add R2,C,D • X=(A+B) * (C+D) // X=R1 * R2 – MUL X,R1,R2
  • 59. 59 • Evaluate X=(A+B) * (C+D) • A+B – ADD R1,A,B • C+D – ADD R2,C,D • X=(A+B) * (C+D) // X=R1 * R2 – MUL X,R1,R2
  • 60. 60 Two Address Instruction 60 General format • Operation Destination, Source Disadvantage : More instructions are needed than in the three-address format for the same high-level statement. Example: Evaluate X=(A+B) * (C+D) 1. MOV R1,A R1 ← M[A] 2. ADD R1,B R1 ← R1 + M[B] 3. MOV R2,C R2 ← M[C] 4. ADD R2,D R2 ← R2 + M[D] 5. MUL R1,R2 R1 ← R1 * R2 6. MOV X, R1 M[X] ← R1 1.3 Instruction Set Architecture OPCODE Address 1 Address 2
  • 61. 61 • X=(A+B) * (C+D) • two addr inst • A+B (ADD A,B == A=A+B) – MOV R1,A R1 ← A – ADD R1,B // R1=R1+B • C+D – MOV R2,C – ADD R2,D // R2=R2+D • X=(A+B) * (C+D) // X=R1*R2 – MUL R1,R2 // R1=R1*R2 – MOV X,R1 • (OR) – MOV X,R1 – MUL X,R2
  • 62. 62 • X=(A+B) * (C+D) • Two addr inst • A+B – MOV R1,A // R1 ← M[A] – ADD R1,B // R1 ← R1+M[B] // R1 ← A+B • C+D – MOV R2,C – ADD R2,D // R2 ← R2+M[D] // R2 ← C+D • X=(A+B) * (C+D) // X=R1*R2 – MUL R1,R2 // R1 ← R1*R2 – MOV X,R1 // X ← R1
  • 63. 63 • X=(A+B) * (C+D) • A+B – MOV R0,A // R0=A, R0=5 – ADD R0,B // R0=5+6=11 • C+D • MOV R1,C • ADD R1,D // R1=C+D • X=(A+B) * (C+D) // X=R0*R1 • MOV X,R0 • MUL X,R1
  • 64. 64 One Address Instruction General format • Operation Source • Single Accumulator Organization • Processor register usually called Accumulator 64 Example: Evaluate X=(A+B) * (C+D) 1. LOAD A AC ← M[A] 2. ADD B AC ← AC + M[B] 3. STORE T M[T] ← AC 4. LOAD C AC ← M[C] 5. ADD D AC ← AC + M[D] 6. MUL T AC ← AC * M[T] 7. STORE X M[X] ← AC 1.3 Instruction Set Architecture OPCODE Address 1
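The same expression on a toy accumulator machine, mirroring the seven-instruction sequence above (the memory values are assumptions):

```python
# Toy accumulator machine running the one-address sequence for
# X = (A+B) * (C+D); sample memory values are assumed.
mem = {"A": 2, "B": 3, "C": 4, "D": 5}

AC = mem["A"]        # LOAD A    AC <- M[A]
AC = AC + mem["B"]   # ADD B     AC <- AC + M[B]
mem["T"] = AC        # STORE T   M[T] <- AC
AC = mem["C"]        # LOAD C    AC <- M[C]
AC = AC + mem["D"]   # ADD D     AC <- AC + M[D]
AC = AC * mem["T"]   # MUL T     AC <- AC * M[T]
mem["X"] = AC        # STORE X   M[X] <- AC
```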
  • 65. 65 • X=(A+B) * (C+D) • One addr inst • A+B – LOAD A // AC ← A – ADD B // AC ← AC + B – STORE T // T ← AC • C+D – LOAD C – ADD D // AC ← AC+D // C+D • X=(A+B) * (C+D) // X = T * AC – MUL T – STORE X
  • 66. 66 • X=(A+B) * (C+D) using one address format - accumulator AC • A+B – LOAD A // AC=A, with a=5, b=6, c=2, d=4 – ADD B // AC=AC+B, AC=11 – STORE T // T=AC, T=11 • C+D – LOAD C // AC=C, AC=2 – ADD D // AC=2+4=6, AC=(C+D) • X=(A+B) * (C+D) // X = T*AC – MUL T – STORE X
  • 67. 67 67 Zero Address Instruction • Stack Organization • Operands and result are always on the stack • It is possible to use instructions in which the locations of all operands are defined implicitly. • Such instructions are found in machines that store operands in a structure called a pushdown stack. 1.3 Instruction Set Architecture
  • 68. 68 • Stack - LIFO • Push - insert • Pop - delete • TP • X=(A+B) * (C+D) • postfix: (AB+)(CD+)*
  • 69. 69 69 Zero Address Instruction GATE Question 1: GATE Question 2 : 1.3 Instruction Set Architecture Example: Evaluate X=(A+B)  (C+D) 1. PUSH A TOS ← A 2. PUSH B TOS ← B 3. ADD TOS ← (A + B) 4. PUSH C TOS ← C 5. PUSH D TOS ← D 6. ADD TOS ← (C + D) 7. MUL TOS ← (C + D) ∗ (A + B) 8. POP X M [X] ← TOS
  • 70. 70 • X=(A+B) * (C+D), with a=4, b=5, c=6, d=7 • Zero addr inst – stack • (AB+)*(CD+) → (AB+)(CD+)* • A+B → (AB+) – Push A – Push B – On +: pop B, pop A, compute A+B – Push result: 4+5 = 9 • C+D → (CD+) – Push C – Push D – On +: pop D, pop C, perform addition 7+6 – Push result: 13 • X=(A+B) * (C+D) • On *: pop 13 (C+D) and 9 (A+B), perform multiplication • Push result 117 → stack
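The stack evaluation can be replayed in Python with a plain list as the stack, using the slide's values A=4, B=5, C=6, D=7:

```python
# Minimal stack machine for the zero-address sequence; with A=4, B=5,
# C=6, D=7 the top of stack ends up 9 * 13 = 117.
mem = {"A": 4, "B": 5, "C": 6, "D": 7}
stack = []

stack.append(mem["A"])                   # PUSH A
stack.append(mem["B"])                   # PUSH B
stack.append(stack.pop() + stack.pop())  # ADD -> 9
stack.append(mem["C"])                   # PUSH C
stack.append(mem["D"])                   # PUSH D
stack.append(stack.pop() + stack.pop())  # ADD -> 13
stack.append(stack.pop() * stack.pop())  # MUL -> 117
mem["X"] = stack.pop()                   # POP X
```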
  • 71. 71 TRY YOURSELF 1. Apply three address, two address and one address instruction format to evaluate the following expressions – X=(A+B)*(C+D) – X = A-B+C*(D*E-F) 71 1.3 Instruction Set Architecture
  • 72. 72 1.3 Instruction Set Architecture 1.3.4 Addressing of Memory [Figure: the same memory array labelled two ways – word addressing numbers successive words 0, 1, 2, 3, …, while byte addressing numbers successive 4-byte words 0, 4, 8, 12, 16, …]
  • 73. 73 Memory Organization - Alignment : Byte Order • Bytes in a word can be numbered in two ways: – big-endian – little-endian • In a 32-bit computer, 1 word = 4 bytes (4 × 8 bits = 32 bits) Big-endian: – byte 0 at the leftmost (most significant) end – byte 3 at the rightmost (least significant) end 1.3 Instruction Set Architecture
  • 74. 74 Memory Organization - Alignment : Byte Order Little-endian • byte 3 at the leftmost (most significant) end • byte 0 at the rightmost (least significant) end 1.3 Instruction Set Architecture
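Both byte orders can be demonstrated with Python's struct module by packing the same 32-bit word 0x0A0B0C0D both ways:

```python
import struct

# The same 32-bit word laid out in the two byte orders.
word = 0x0A0B0C0D
big    = struct.pack(">I", word)   # big-endian: most significant byte first
little = struct.pack("<I", word)   # little-endian: least significant byte first
```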
  • 77. 1 Session Topic 1.1 Functional Blocks of a Computer 1.2 Operation and Operands of Computer Hardware 1.3 Instruction Set Architecture 1.4 Register Transfer Language (RTL) interpretation of instructions 1.5 Addressing Modes -1 1.6 Addressing Modes -2 1.7 Instruction Execution Cycle 1.8 Instruction Set 1.9 Performance Metrics Module 1- Functional blocks of Computer & Instruction Set Architecture
  • 78. 2 Register Transfer Language (RTL) or Register Transfer Notation • The symbolic notation used to describe the micro-operation transfers among registers is called Register Transfer Language (RTL) • E.g. R1 ← R2 + R3 // Add R1,R2,R3 ; R1=R2+R3 • The operations performed on the data stored in registers are called Micro-operations. • The Register Transfer Language is the symbolic representation of notations used to specify the sequence of micro-operations. 1.4 - Register Transfer Language (RTL)
  • 79. 3 Register Transfer Notations • In a computer system, data transfer takes place between processor registers and memory and between processor registers and input-output systems. • These data transfer can be represented by standard notations given below: – Processor registers - Notations R0, R1, R2…,MAR,MDR – Addresses of memory locations - LOC, PLACE, MEM, A,B etc. – Input-output registers - DATA IN, DATA OUT and so on. 1.4 - Register Transfer Language (RTL)
  • 80. 4 Register Transfer Notations • The content of register or memory location is denoted by placing square brackets around the name of the register or memory location. • E.g – Content of register - [R1], [R2],… – Content of memory location - M[LOC] , M[A], M[B],… 1.4 - Register Transfer Language (RTL)
  • 81. 5 • REGISTERS • It is a collection of Flip flops. • Each flip flop can store 1 bit of information. • Computer registers are represented by Capital letters. • Ex : MAR, MDR, PC, IR, R1, R2… • MICRO OPERATION • Operation executed on data stored in a register. 1.4 - Register Transfer Language (RTL)
  • 82. 6 1. Register Transfer • Transferring data from one register to another. • It is represented in symbolic form by means of the replacement operator (←) • Typically, the transfer occurs only under a predetermined control condition. This can be shown by the following if-then statement: – If (P=1) then (R2 ← R1); // Here P is a control signal generated in the control section. Example : R2 ← R1 // transfer the data from register R1 into register R2. Example : P: R2 ← R1 // transfer the data from register R1 into register R2 if P==1. 1.4 - Register Transfer Language (RTL)
  • 83. 7 • Here, • 'n' indicates the number of bits of the register. • The 'n' outputs of register R1 are connected to the 'n' inputs of register R2. • A load input is activated by the control variable 'P', which causes the transfer into register R2. The control function P is a Boolean variable that is equal to 1 or 0. The control function is stated as follows : P : R2 ← R1 1.4 - Register Transfer Language (RTL)
  • 84. 8 2. Memory Transfer • Two types - Read & Write • Read (load) :The transfer of information from a memory unit to the user end is called a Read operation. • Write(store) : The transfer of new information to be stored in the memory is called a Write operation. 1.4 - Register Transfer Language (RTL)
  • 85. 9 • A memory word is designated by the letter M. • We must specify the address of memory word while writing the memory transfer operations. • The address register is designated by MAR and the data register by MDR. • Thus, a read operation can be stated as: • Read: MDR ← M [MAR] • The Read statement causes a transfer of information into the data register (MDR) from the memory word (M) selected by the address register (MAR). • And the corresponding write operation can be stated as: • Write: M [MAR] ← R1 1.4 - Register Transfer Language (RTL)
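A minimal sketch of the Read and Write transfers through MAR and MDR (the memory contents and addresses are made-up sample data):

```python
# Sketch of the Read/Write memory transfers via MAR and MDR.
M = {0x2000: 42}     # memory (sample contents)
MAR, MDR = 0, 0      # memory address register and memory data register

# Read: MDR <- M[MAR]
MAR = 0x2000
MDR = M[MAR]

# Write: M[MAR] <- R1
R1 = 99
MAR = 0x2004
M[MAR] = R1
```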
  • 86. 10 3. Arithmetic Micro-operations R3 ← R1 + R2 // ADD R3,R1,R2 The contents of R1 plus R2 are transferred to R3. R3 ← R1 - R2 // SUB R3,R1,R2 The contents of R1 minus R2 are transferred to R3. R1 ← R1 + 1 // ADDI R1,1 Increment the contents of R1 by one R1 ← R1 - 1 Decrement the contents of R1 by one R2 ← R2' Complement the contents of R2 (1's complement) R2 ← R1 + M[LOC] The contents of R1 plus memory content of LOC are transferred to R2. 1.4 - Register Transfer Language (RTL)
  • 87. 11 Example: Evaluate X=(A+B) * (C+D) Three-Address 1. ADD R1,A,B ; R1 ← M[A] + M[B] 2. ADD R2,C,D ; R2 ← M[C] + M[D] 3. MUL X,R1,R2 ; M[X] ← R1 * R2 Two-Address 4. MOV R1,A ; R1 ← M[A] 5. ADD R1,B ; R1 ← M[B] +R1 6. MOV R2,C ; R2 ← M[C] 7. ADD R2,D ; R2 ← R2 + M[D] 8. MUL R1,R2 ; R1 ← R1 * R2 9. MOV X,R1 ; M[X] ← R1 1.4 - Register Transfer Language (RTL)
  • 88. 12 • X=(A+B) * (C+D) • 3 addr inst • A+B – ADD R1,A,B – R1 ← M[A]+M[B] • C+D – ADD R2,C,D – R2 ← M[C]+M[D] • X=(A+B) * (C+D) – MUL X,R1,R2 – M[X] ← R1*R2 • X=(A+B) * (C+D) • 2 addr inst • A+B – MOV R1,A – R1 ← M[A] – ADD R1,B // R1=R1+B // R1=A+B – R1 ← R1+M[B] • C+D – MOV R2,C – R2 ← M[C] – ADD R2,D – R2 ← R2+M[D] • X=(A+B) * (C+D) // X=R1*R2 – MUL R1,R2 – R1 ← R1*R2 – MOV X,R1 – M[X] ← R1
  • 89. 13 • X=(A+B) * (C+D) • Three addr inst • A+B – ADD R1,A,B – R1 ← M[A]+M[B] • C+D – ADD R2,C,D – R2 ← M[C]+M[D] • X=(A+B) * (C+D) // X=R1*R2 – MUL X,R1,R2 – M[X] ← R1*R2 • X=(A+B) * (C+D) • Two addr inst • A+B – MOV R1,A – ADD R1,B – R1 ← M[A] – R1 ← R1+M[B] • C+D – MOV R2,C – ADD R2,D – R2 ← M[C] – R2 ← R2+M[D] • X=(A+B) * (C+D) // X=R1*R2 – MUL R1,R2 – MOV X,R1 – R1 ← R1*R2 – M[X] ← R1
  • 90. 14 • Evaluate X=(A+B) * (C+D) • Three address inst • ADD R1,A,B – R1 ← M[A]+M[B] // RTL notation • ADD R2,C,D – R2 ← M[C]+M[D] • MUL X,R1,R2 – M[X] ← R1*R2 • Evaluate X=(A+B) * (C+D) • Two address inst • MOV R1,A – R1 ← M[A] • ADD R1,B – R1 ← R1+M[B] • MOV R2,C – R2 ← M[C] • ADD R2,D – R2 ← R2+M[D] • MOV X,R1 – M[X] ← R1 • MUL X,R2 – M[X] ← M[X]*R2
  • 91. 15 • X=(A+B)*(C+D) • Three addr inst – ADD R1,A,B • R1 ← M[A] + M[B] – ADD R2,C,D • R2 ← M[C]+M[D] – MUL X,R1,R2 • M[X] ← R1*R2
  • 92. 16 • X=(A+B)*(C+D) • Two addr inst • MOV R1,A – R1 ← M[A] • ADD R1,B – R1 ← R1 + M[B] • MOV R2,C – R2 ← M[C] • ADD R2,D – R2 ← R2+M[D] • MUL R1,R2 // R1=R1*R2 – R1 ← R1*R2 • MOV X,R1 – M[X] ← R1
  • 93. 17 • ADDL R0, (R5) – R0 → X; R5 → MAR; read, wait; MDR → Y; Add; Z → R0 • MAR ← PC • MDR ← Mem[MAR] • IR ← MDR • PC ← PC + 4 • X ← R0 • MAR ← IR[Operand] // MAR ← [R5] • MDR ← Mem[MAR] • Y ← MDR • Z ← X+Y • R0 ← Z
  • 94. 18 Quiz Link 1.4 - Register Transfer Language (RTL)
  • 95. 19 1.4 - Register Transfer Language (RTL)
  • 96. 1 Session Topic 1.1 Functional Blocks of a Computer 1.2 Operation and Operands of Computer Hardware 1.3 Instruction Set Architecture 1.4 Register Transfer Language (RTL) interpretation of instructions 1.5 Addressing Modes -1 1.6 Addressing Modes -2 1.7 Instruction Execution Cycle 1.8 Instruction Set 1.9 Performance Metrics Module 1- Functional blocks of Computer & Instruction Set Architecture
  • 97. 2 Addressing Mode : ✔ It refers to the way in which the operand of an instruction is specified. ✔ It specifies the location of an operand. • It is mainly classified as Immediate addressing Implicit addressing Direct & Indirect addressing Register addressing Displacement addressing - Relative, base addr, index Stack addressing 1.5 – Addressing Modes
  • 98. 3 Instruction Format ✔ Defines the layout of an instruction. ✔ Includes an opcode and zero or more operands. ✔ Opcode : It defines an operation to be performed like Add, Subtract, Multiply, Shift, Complement ,etc. ✔ Operands / Address : It is a field which contain the operand or location of operand, i.e., register or memory location. ✔ e.g ADD A,B 1.5 – Addressing Modes OPCODE OPERANDS or ADDRESS
  • 99. 4 Types of instruction format 1. Three address instruction Examples Add A,B, C // ( A= B+C) 2. Two address instruction Add A,B // ( A= A+B) 3. One address instruction Add A // ( AC= AC+A) (AC=Accumulator ) 4. Zero address instruction CMA // 1.5 – Addressing Modes OPCODE Address 1 Address 2 Address 3 OPCODE Address 1 Address 2 OPCODE Address 1 OPCODE
  • 101. 6 1.5 – Addressing Modes Types of Addressing Modes 1. Direct / Absolute Addressing mode 2. Indirect Addressing mode 3. Register Addressing mode 4. Register Indirect Addressing mode 5. Immediate Addressing mode 6. Implicit Addressing mode 7. Indexed Addressing mode 8. Relative (PC Relative) Addressing mode 9. Stack Addressing mode 10. Auto Increment Addressing mode 11. Auto Decrement Addressing mode 12. Base Addressing mode
  • 102. 7 IMPLEMENTATION OF CONSTANT 1.5.1 Immediate Addressing Mode • The constant operand is specified in the address field of the instruction. • The data is present in the instruction itself, i.e., the value is given directly as the operand • The # symbol indicates that it is a value • E.g – Store R2, #100 – Add #7 – Add R1,#20 1.5 – Addressing Modes
  • 103. 8 1.5.2 Implicit Addressing mode • Some instructions don't require any operand (zero address instructions). • They directly operate upon the content of the accumulator. • E.g • CMA (Complement) : the content of the accumulator is complemented. • RAR - Rotate Right : the content of the accumulator is rotated one position right. • RAL - Rotate Left : the content of the accumulator is rotated one position left. 1.5 – Addressing Modes [Figure: instruction format with the opcode field only (e.g. CMA); the accumulator operand is implicit]
  • 104. 9 1.5.3 Direct Addressing Mode • The address field of the instruction contains the effective address (EA) of the operand. • Also called absolute addressing mode. • ADD X // AC ← AC + M[X] • ADD R1, 4000 – EA = 4000 (Memory Address) 1.5 – Addressing Modes Effective address(EA) Information from which the memory address of the operand can be determined.
  • 105. 10 • Direct – ADD R1, 4000 – ADD R1, X • Indirect – ADD R1,(4000) – ADD R1,(X) [Memory diagram: X = location 3000 holds 35; Y = location 4000 holds 3000 – direct addressing fetches M[3000] = 35 directly, while indirect ADD R1,(4000) first reads M[4000] = 3000, then fetches M[3000] = 35]
  • 106. 11 • ADD R1,4000 (address) – R1 ← R1+M[4000] • ADD R1,#4000 (value) – R1 ← R1+4000 • ADD R1,(4000) (memory indirect) – R1 ← R1+M[M[4000]] • ADD (X)
  • 107. 12 1.5.4 Indirect Addressing Mode • Address field of instruction gives the address where the effective address is stored in memory. • Need multiple memory lookups to find the operand. • For indirection use parentheses ( ) • E.g • ADD (X) // AC ← AC + M[[X]] • ADD R1, (4000) – EA = Content of Location 4000 1.5 – Addressing Modes
  • 108. 13 1.5.5 Register Direct Addressing Mode • Operand (data) is stored in the Processor register. • Register are given as operands of instruction. • Effective Address = Register • E.g • Add R4, R3 1.5 – Addressing Modes
  • 109. 14 1.5.6 Register Indirect Addressing Mode • The instruction specifies a register used for indirection. • EA=(R): the Effective Address is the content of the register. • The data value is present at the address held in the register (not in the register itself) • E.g • Load R3, (R2) – If R2 holds A (a memory address, e.g. 200), the operand is fetched from M[A] – A is the Effective Address 1.5 – Addressing Modes
  • 110. 15 1.5.7 Relative Addressing Mode • Effective address of the operand is obtained by adding the content of program counter with the address part of the instruction. • Effective Address = Content of Program Counter + Address part of the instruction • EA = A + [PC] • E.g • Add A,(PC) – EA=[A] +[PC] 1.6 – Addressing Modes
  • 111. 16 • EA=A+[PC] • PC=2000 • If A is a constant: EA = #30+2000 = 2030 • If A is an address with M[A]=3000: EA = 3000+2000 = 5000 Displacement addressing mode • EA = ---- + ----- 1. Relative – EA= [PC]+ [A] 2. Base – EA= [Base Reg]+ [A] 3. Index – EA= [offset]+ [A]
  • 112. 17 1.5.8 Base Register Addressing Mode • Effective address of the operand is obtained by adding the content of base register with the address part of the instruction. • EA=[Base Register] + [A] • E.g • Add R2(A) – EA=[R2]+[A] 1.6 – Addressing Modes
  • 113. 18 1.5.9 Index Addressing Mode • The data value's offset is present as an index • EA = X + (R) • X = offset constant value • Load Ri, X(R2) – Load R2, A – Load R3, (R2) // Load R3, A – Load R4, 4(R2) // Load R4, 4+A – Load R5, 8(R2) // Load R5, 8+A – Load R6, 12(R2) // Load R6, 12+A 1.6 – Addressing Modes Advantages & Disadvantages
  • 114. 19 1.6 – Addressing Modes 1.5.10 Stack Addressing Mode • The instruction doesn't contain any operand. • If it is an arithmetic operation, then it operates upon the stack • The operand is at the top of the stack. • Example: ADD – POP the top two items from the stack, – add them, and – PUSH the result to the top of the stack.
  • 115. 20 1.5.11 Auto increment Addressing • EA =(R) • (Ri )+ • After accessing the operand, the content of the register is automatically incremented to point the next operand. • E.g Add (R1)+ • First, the operand value is fetched. • Then, the register R1 value is incremented by step size ‘d’. • Assume operand size = 2 bytes. • After fetching 6B, R1 will be 3300 + 2 = 3302. 1.6 – Addressing Modes
  • 116. 21 1.5.12 Auto decrement Addressing • EA = (R) − d (the register is decremented by the step size d before use) • - (Ri ) • First, the content of the register is decremented to point to the operand. • E.g Add -(R1) • First, the register R1 value is decremented by step size ‘d’. • Assume operand size = 2 bytes. • R1 will be 3302 – 2 = 3300. • Then, the operand value is fetched. 1.6 – Addressing Modes
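The effective-address rules for several of these modes can be collected into one small Python helper; the register and memory contents below are illustrative assumptions chosen to match the slide examples ([R2]=3000 and A=50 for base mode; R1=3300 with step size 2 for auto-increment):

```python
# Effective-address (EA) computation for several addressing modes;
# the register/memory contents are illustrative assumptions.
regs = {"PC": 2000, "R1": 3300, "R2": 3000}
mem = {4000: 3000}   # for indirect: M[4000] holds the operand's address

def ea(mode, a=0, reg=None):
    if mode == "direct":        return a               # EA = A
    if mode == "indirect":      return mem[a]          # EA = M[A]
    if mode == "reg_indirect":  return regs[reg]       # EA = (R)
    if mode == "relative":      return a + regs["PC"]  # EA = A + [PC]
    if mode == "base":          return regs[reg] + a   # EA = [base] + A
    if mode == "index":         return a + regs[reg]   # EA = X + (R)
    raise ValueError(mode)

def autoincrement(reg, d=2):
    """EA = (R); afterwards R <- R + d (step size d)."""
    addr = regs[reg]
    regs[reg] += d
    return addr
```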
  • 117. 22 Comparison of addressing modes GATE Question solutions 1.5 & 1.6 – Addressing Modes
  • 118. 23 1. Examine the following sequence and identify the addressing modes used, operation done in every instruction and find the effective address by considering R1=3000, R2=5000, R5=1000. LOAD 10(R1),R5 SUB (R1)+, R5 ADD –(R2), R5 MOVI 2000,R5 2. Consider the following instruction ADD A(R0),(B). First operand (destination) “A(R0)” uses indexed addressing mode with R0 as index register. The second operand (Source) “(B)” uses indirect addressing mode. Determine the number of memory cycles required to execute this instruction. 1.5 & 1.6 – Addressing Modes
  • 119. 24 Problem Workouts 1. Write procedures for reading from and writing to a FIFO queue, using a two-address format, in conjunction with: – indirect addressing – relative addressing 2. Write a sequence of instructions that will compute the value of y = x² + 2x + 3 for a given x using – three-address instructions – two-address instructions – one-address instructions 1.5 & 1.6 – Addressing Modes
  • 120. 25 GATE Question Match each of the high level language statements given on the left hand side with the most natural addressing mode from those listed on the right hand side. (A) (1, c), (2, b), (3, a) (B) (1, a), (2, c), (3, b) (C) (1, b), (2, c), (3, a) (D) (1, a), (2, b), (3, c) 1. A[1] = B[J]; a. Indirect addressing 2. while [*A++]; b. Indexed addressing 3. int temp = *x; c. Autoincrement 1.5 & 1.6 – Addressing Modes
  • 123. 1 Session Topic 1.1 Functional Blocks of a Computer 1.2 Operation and Operands of Computer Hardware 1.3 Instruction Set Architecture 1.4 Register Transfer Language (RTL) interpretation of instructions 1.5 Addressing Modes -1 1.6 Addressing Modes -2 1.7 Instruction Execution Cycle 1.8 Instruction Set 1.9 Performance Metrics Module 1- Functional blocks of Computer & Instruction Set Architecture
  • 124. 2 Addressing Mode : ✔ It refers to the way in which the operand of an instruction is specified. ✔ It specifies the location of an operand. • It is mainly classified as Immediate addressing Implicit addressing Direct & Indirect addressing Register addressing Displacement addressing- Relative, base addr, index Stack addressing 1.5 – Addressing Modes
  • 125. 3 Instruction Format ✔ Defines the layout of an instruction. ✔ Includes an opcode and zero or more operands. ✔ Opcode : It defines an operation to be performed like Add, Subtract, Multiply, Shift, Complement ,etc. ✔ Operands / Address : It is a field which contain the operand or location of operand, i.e., register or memory location. ✔ e.g ADD A,B 1.5 – Addressing Modes OPCODE OPERANDS or ADDRESS
  • 126. 4 Types of instruction format 1. Three address instruction Examples Add A,B, C // ( A= B+C) 2. Two address instruction Add A,B // ( A= A+B) 3. One address instruction Add A // ( AC= AC+A) (AC=Accumulator ) 4. Zero address instruction CMA // 1.5 – Addressing Modes OPCODE Address 1 Address 2 Address 3 OPCODE Address 1 Address 2 OPCODE Address 1 OPCODE
  • 128. 6 1.5 – Addressing Modes Types of Addressing Modes 1. Direct / Absolute Addressing mode 2. Indirect Addressing mode 3. Register Addressing mode 4. Register Indirect Addressing mode 5. Immediate Addressing mode 6. Implicit Addressing mode 7. Relative (PC Relative) Addressing mode 8. Base Addressing mode 9. Indexed Addressing mode 10. Stack Addressing mode 11. Auto Increment Addressing mode 12. Auto Decrement Addressing mode
  • 129. 7 IMPLEMENTATION OF CONSTANT 1.5.1 Immediate Addressing Mode • The constant operand is specified in the address field of the instruction. • The data is present in the instruction itself, i.e., the value is given directly as the operand • The # symbol indicates that it is a value • E.g – Store R2, #100 – Add #7 // AC=AC+7 – Add R1,#20 // R1=[R1]+20 1.5 – Addressing Modes
  • 130. 8 1.5.2 Implicit Addressing mode • Some instructions don't require any operand (zero address instructions). • They directly operate upon the content of the accumulator. • E.g • CMA (Complement) : the content of the accumulator is complemented. • RAR - Rotate Right : the content of the accumulator is rotated one position right. • RAL - Rotate Left : the content of the accumulator is rotated one position left. 1.5 – Addressing Modes [Figure: instruction format with the opcode field only (e.g. CMA); the accumulator operand is implicit]
  • 131. 9 1.5.3 Direct Addressing Mode • The address field of the instruction contains the effective address (EA) of the operand. • Also called absolute addressing mode. • ADD X // AC ← AC + M[X] • ADD R1, 4000 – EA = 4000 (Memory Address) 1.5 – Addressing Modes Effective address(EA) Information from which the memory address of the operand can be determined.
  • 132. 10 • Direct – ADD R1, 3000 – ADD R1, X • Indirect – ADD R1,(4000) – ADD R1,(Y) [Memory diagram: X = location 3000 holds 35; Y = location 4000 holds 3000 – direct addressing fetches M[3000] = 35 directly, while indirect ADD R1,(4000) first reads M[4000] = 3000, then fetches M[3000] = 35]
  • 133. 11 • ADD R1,4000 (address) – R1 ← R1+M[4000] • ADD R1,#4000 (value) – R1 ← R1+4000 • ADD R1,(4000) (memory indirect) – R1 ← R1+M[M[4000]] • ADD (X)
  • 134. 12 1.5.4 Indirect Addressing Mode • Address field of instruction gives the address where the effective address is stored in memory. • Need multiple memory lookups to find the operand. • For indirection use parentheses ( ) • E.g • ADD (X) // AC ← AC + M[[X]] • ADD R1, (4000) – EA = Content of Location 4000 1.5 – Addressing Modes
  • 135. 13 1.5.5 Register Direct Addressing Mode • Operand (data) is stored in the Processor register. • Register are given as operands of instruction. • Effective Address = Register • E.g • Add R4, R3 1.5 – Addressing Modes
  • 136. 14 1.5.6 Register Indirect Addressing Mode • The instruction specifies a register used for indirection. • EA=(R): the Effective Address is the content of the register. • The data value is present at the address held in the register (not in the register itself) • E.g • Load R3, (R2) – If R2 holds A (a memory address, e.g. 200), the operand is fetched from M[A] – A is the Effective Address 1.5 – Addressing Modes
  • 137. 15 Addressing mode – what it contains: • Direct – the operand • Indirect – the address of the operand • Register direct – the register contains the operand • Register indirect – the register contains the address of the operand
  • 138. 16 Displacement addressing mode • EA= ---- + ----- 1. Relative addressing mode – EA= [PC]+ [A] 2. Base addressing mode – EA= [Base Reg]+ [A] 3. Index addressing mode – EA= [offset]+ [A]
  • 139. 17 1.5.7 Relative Addressing Mode (PC Relative) • Effective address of the operand is obtained by adding the content of program counter with the address part of the instruction. • Effective Address = Content of Program Counter + Address part of the instruction • EA = A + [PC] • E.g • Add A,(PC) – EA=[A] +[PC] 1.6 – Addressing Modes
  • 141. 19 PC-relative worked example, PC = 5000: • ADD #50(PC) → EA = 5000 + 50 = 5050 • ADD A(PC), i.e. ADD 6000(PC) with M[6000] = 50 → EA = [A] + [PC] = [6000] + 5000 = 50 + 5000 = 5050 • A branch BR ----, 5050 likewise reaches the branch target (BT) at 5050: EA = 5000 + 50 = 5050 [Diagram: instruction addresses 5000, 5004, …, branch target BT at 5050; memory location 6000 holds 50]
  • 142. 20 • EA=A+[PC] • PC=2000 • If A is a constant: EA = #30+2000 = 2030 • If A is an address with M[A]=3000: EA = 3000+2000 = 5000 Displacement addressing mode • EA = ---- + ----- 1. Relative – EA= [PC]+ [A] 2. Base – EA= [Base Reg]+ [A] 3. Index – EA= [offset]+ [A]
  • 143. 21 1.5.8 Base Register Addressing Mode • Effective address of the operand is obtained by adding the content of base register with the address part of the instruction. • EA=[Base Register] + [A] • E.g • Add R2(A) // Take [R2]=3000, A=50 – EA=[R2]+[A] = 3000+50==3050 1.6 – Addressing Modes
  • 144. 22 1.5.9 Index Addressing Mode • The data value's offset is present as an index • EA = X + (R) • X = offset constant value • Load Ri, X(R2) – Load R2, A – Load R3, (R2) // Load R3, A – Load R4, 4(R2) // Load R4, 4+A – Load R5, 8(R2) // Load R5, 8+A – Load R6, 12(R2) // Load R6, 12+A 1.6 – Addressing Modes Advantages & Disadvantages
  • 145. 23 1.6 – Addressing Modes 1.5.10 Stack Addressing Mode • The instruction doesn't contain any operand. • If it is an arithmetic operation, then it operates upon the stack • The operand is at the top of the stack. • Example: ADD – POP the top two items from the stack, – add them, and – PUSH the result to the top of the stack.
  • 146. 24 1.5.11 Auto increment Addressing • EA =(R) • (Ri )+ • After accessing the operand, the content of the register is automatically incremented to point the next operand. • E.g Add (R1)+ • First, the operand value is fetched. • Then, the register R1 value is incremented by step size ‘d’. • Assume operand size = 2 bytes. • After fetching 6B, R1 will be 3300 + 2 = 3302. 1.6 – Addressing Modes
  • 147. 25 1.5.12 Auto decrement Addressing • EA = (R) − d (the register is decremented by the step size d before use) • - (Ri ) • First, the content of the register is decremented to point to the operand. • E.g Add -(R1) • First, the register R1 value is decremented by step size ‘d’. • Assume operand size = 2 bytes. • R1 will be 3302 – 2 = 3300. • Then, the operand value is fetched. 1.6 – Addressing Modes
  • 148. 26 Comparison of addressing modes GATE Question solutions 1.5 & 1.6 – Addressing Modes
  • 149. 27 1. Examine the following sequence and identify the addressing modes used, operation done in every instruction and find the effective address by considering R1=3000, R2=5000, R5=1000. LOAD 10(R1),R5 SUB (R1)+, R5 ADD –(R2), R5 MOVI 2000,R5 2. Consider the following instruction ADD A(R0),(B). First operand (destination) “A(R0)” uses indexed addressing mode with R0 as index register. The second operand (Source) “(B)” uses indirect addressing mode. Determine the number of memory cycles required to execute this instruction. 1.5 & 1.6 – Addressing Modes
  • 150. 28 Problem Workouts 1. Write procedures for reading from and writing to a FIFO queue, using a two-address format, in conjunction with: – indirect addressing – relative addressing 2. Write a sequence of instructions that will compute the value of y = x² + 2x + 3 for a given x using – three-address instructions – two-address instructions – one-address instructions 1.5 & 1.6 – Addressing Modes
  • 151. 29 GATE Question Match each of the high level language statements given on the left hand side with the most natural addressing mode from those listed on the right hand side. (A) (1, c), (2, b), (3, a) (B) (1, a), (2, c), (3, b) (C) (1, b), (2, c), (3, a) (D) (1, a), (2, b), (3, c) 1. A[1] = B[J]; a. Indirect addressing 2. while [*A++]; b. Indexed addressing 3. int temp = *x; c. Autoincrement 1.5 & 1.6 – Addressing Modes
  • 154. 1 Session Topic 1.1 Functional Blocks of a Computer 1.2 Operation and Operands of Computer Hardware 1.3 Instruction Set Architecture 1.4 Register Transfer Language (RTL) interpretation of instructions 1.5 Addressing Modes -1 1.6 Addressing Modes -2 1.7 Instruction Execution Cycle 1.8 Instruction Set 1.9 Performance Metrics Module 1- Functional blocks of Computer & Instruction Set Architecture
  • 155. 2 Instruction execution cycle – Fetch – Decode – Execute – Store 1.Fetch Phase  IR ← [[PC]]  PC ← [PC] + 4 2. Decode phase  Decoder IR  Operant fetch 3.Execution phase  ALU operation 1.7 – Instruction Execution Cycle
  • 156. 3 1.8.1 Internal Organization of the Processor PC – Holds the address of the next instruction MAR – Holds the address of the operand or data MDR – Holds data R0 – Rn-1 – General-purpose registers Y, Z, TEMP – Temporary registers MUX – Selects either Y or the constant 4 as input A of the ALU ALU – Arithmetic and Logic Unit Decoder – Decodes the instruction and generates the control signals 1.7 – Instruction Execution Cycle
  • 157. 4 1.8.2 Register Transfers • The input and output of a register are connected to the bus through switches controlled by the signals Rin and Rout. To transfer the contents of R1 to R4 (R4 ← R1): • Enable R1out = 1 to place the contents of R1 on the processor bus • Enable R4in = 1 to load the data from the processor bus into register R4 1.7 – Instruction Execution Cycle
  • 158. 5 1.8.3 Performing an ALU operation – E.g Add R3,R1,R2 – Control signal steps 1. R1out, Yin 2. R2out, SelectY, Add, Zin 3. Zout, R3in Temporary register – Y,Z,TEMP MUX – select anyone input for A in ALU 1.7 – Instruction Execution Cycle
  • 159. 6 1.8.4 Fetching a Word from Memory • e.g Move (R1),R2 1. MAR ← [R1] 2. Start a Read operation on the memory bus 3. Wait for the MFC response from the memory 4. Load MDR from the memory bus 5. R2 ← [MDR] Control sequence steps 1. R1out, MARin, Read 2. MDRinE, WMFC 3. MDRout,R2in 1.7 – Instruction Execution Cycle
  • 160. 7 1.8.5 Storing a Word in Memory • e.g Move R2, (R1) Steps 1. The desired address is loaded into MAR 2. Data to be written is loaded into MDR 3. Write signal is initiated – R1out , MARin – R2out, MDRin, Write – MDRoutE, WMFC 1.7 – Instruction Execution Cycle
  • 161. 8 Steps for Add R1, R2 • Fetch the instruction • Fetch the first operand • Perform the addition • Load the result into R1 Control sequence Step Action 1. PCout, MARin, Read, Select4, Add, Zin 2. Zout, PCin, Yin, WMFC 3. MDRout, IRin 4. R1out, Yin, SelectY 5. R2out, Add, Zin 6. Zout, R1in, End 1.7 – Instruction Execution Cycle
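The control sequence above can be mirrored in a toy register-transfer simulation. This is only an illustrative sketch: the dictionary-based machine state, the memory layout, and the instruction string are invented for the example, not the actual hardware signals.

```python
# Toy register-transfer sketch of Add R1, R2 (R1 <- R1 + R2).
# Register names (PC, MAR, MDR, IR, Y, Z) follow the slides; the
# dict-based "machine" is a teaching illustration only.
def execute_add(state, mem):
    # Fetch: MAR <- PC, read memory, PC <- PC + 4
    state["MAR"] = state["PC"]
    state["MDR"] = mem[state["MAR"]]   # memory returns the instruction word
    state["PC"] += 4
    state["IR"] = state["MDR"]         # a decode phase would parse IR here
    # Execute: Y <- R1, Z <- Y + R2, R1 <- Z
    state["Y"] = state["R1"]
    state["Z"] = state["Y"] + state["R2"]
    state["R1"] = state["Z"]
    return state

state = {"PC": 0, "R1": 10, "R2": 32}
mem = {0: "ADD R1,R2"}
execute_add(state, mem)
print(state["R1"], state["PC"])   # -> 42 4
```

Note how the Y and Z temporaries appear because the single-bus datapath can only move one value per step, exactly as in the six-step control sequence above.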
  • 162. 9 • Add R1,R2 // R1=R1+R2 • Fetch – fetch the instruction • PC→MAR, Read, M[MAR]→MDR, MDR→IR, PC=PC+4 – T1. PCout, MARin, Read, Select4, Add, Zin – T2. Zout, PCin, Yin, WMFC – T3. MDRout, IRin • ALU – R1out, Yin, SelectY – R2out, Add, Zin • Store – Zout, R1in, End
  • 163. 10 • Sub R1,R2,R3 // R1=R2−R3 • Fetch • PC→MAR, Read, MDR→IR, PC→PC+4 – T1. PCout, MARin, Read, Select4, Add, Zin – T2. Zout, PCin, Yin, WMFC – T3. MDRout, IRin • ALU • R1=[R2]−[R3] – T4. R2out, Yin, SelectY – T5. R3out, Sub, Zin • Store Z→R1 – Zout, R1in, End
  • 164. 11 Branch Instructions • Unconditional branch instruction: JUMP X • Replaces the PC contents with the branch target address Control sequence Step Action 1. PCout, MARin, Read, Select4, Add, Zin 2. Zout, PCin, Yin, WMFC 3. MDRout, IRin 4. Offset-field-of-IRout, Add, Zin 5. Zout, PCin, End 1.7 – Instruction Execution Cycle Example: PC = 3000, JUMP X with target 3500, offset = 500; PC + offset = 3000 + 500 = 3500
  • 165. 12 Multiple-Bus Organization • Number of control sequence steps are reduced • e.g Sub R1, R2, R3 // R1=R2-R3 Control sequence Step Action 1. PCout, R=B, MARin, Read, IncPC 2. WMFC 3. MDRoutB, R=B, IRin 4. R2outA, R3outB, SelectA, SUB, R1in, End 1.7 – Instruction Execution Cycle
  • 168. 15 Session Topic 1.1 Functional Blocks of a Computer 1.2 Operation and Operands of Computer Hardware 1.3 Instruction Set Architecture 1.4 Register Transfer Language (RTL) interpretation of instructions 1.5 Addressing Modes -1 1.6 Addressing Modes -2 1.7 Instruction Execution Cycle 1.8 Instruction Set 1.9 Performance Metrics Module 1- Functional blocks of Computer & Instruction Set Architecture
  • 169. 16 Instruction Set • The instruction set, also called ISA (instruction set architecture), is part of a computer that pertains to programming, which is basically machine language. • The instruction set provides commands to the processor, to tell it what it needs to do. • E.g ADD, LOAD, COMPARE, ON, OUT, JUMP Two types RISC - Reduced instruction set computing CISC - Complex instruction set computing 1.8 – Instruction Set
  • 170. 17 Reduced Instruction Set Computing (RISC) The main idea – make the hardware simpler by using an instruction set composed of a few basic steps for loading, evaluating and storing operations: a load command loads data, a store command stores data. • Example – add two 8-bit numbers: the add operation is divided into parts, i.e. load, operate, store. Complex Instruction Set Computing (CISC) The main idea is that a single instruction does all the loading, evaluating and storing operations, just as a single multiplication command loads the data, evaluates and stores it; hence it is complex. • Example – add two 8-bit numbers: there is a single command or instruction, like ADD, which performs the whole task. 1.8 – Instruction Set
  • 171. 18 Both approaches try to increase the CPU performance • RISC: Reduce the cycles per instruction at the cost of the number of instructions per program. • CISC: The CISC approach attempts to minimize the number of instructions per program but at the cost of increase in number of cycles per instruction. 1.8 – Instruction Set
  • 172. 19 Characteristics of RISC • Simpler instructions, hence simple instruction decoding. • Instructions fit within one word. • Instructions take a single clock cycle to execute. • More general-purpose registers. • Simple addressing modes. • Fewer data types. • Pipelining can be achieved. 1.8 – Instruction Set
  • 173. 20 Characteristics of CISC • Complex instructions, hence complex instruction decoding. • Instructions are larger than one word. • Instructions may take more than a single clock cycle to execute. • Fewer general-purpose registers, as operations can be performed in memory itself. • Complex addressing modes. • More data types. 1.8 – Instruction Set
  • 174. 21 RISC vs CISC
RISC: Focus on software | CISC: Focus on hardware
RISC: Uses only a hardwired control unit | CISC: Uses both hardwired and microprogrammed control units
RISC: Transistors are used for more registers | CISC: Transistors are used for storing complex instructions
RISC: Fixed-size instructions | CISC: Variable-size instructions
RISC: Arithmetic operations only register to register | CISC: REG to REG, REG to MEM, or MEM to MEM
RISC: Requires more registers | CISC: Requires fewer registers
RISC: Code size is large | CISC: Code size is small
RISC: An instruction executes in a single clock cycle | CISC: An instruction takes more than one clock cycle
RISC: An instruction fits in one word | CISC: Instructions are larger than one word
1.8 – Instruction Set
  • 175. 22 Kahoot Quiz 1.8 – Instruction Set https://create.kahoot.it/share/551ec8ac-a5a0-4842-9c30-9b3bd878f802
  • 177. 24 Session Topic 1.1 Functional Blocks of a Computer 1.2 Operation and Operands of Computer Hardware 1.3 Instruction Set Architecture 1.4 Register Transfer Language (RTL) interpretation of instructions 1.5 Addressing Modes -1 1.6 Addressing Modes -2 1.7 Instruction Execution Cycle 1.8 Instruction Set 1.9 Performance Metrics Module 1- Functional blocks of Computer & Instruction Set Architecture
  • 178. 25 Performance  The most important measure of a computer is speed. ( How quickly it can execute programs).  Three factors affecting CPU performance • Instruction set • Hardware design • Compiler (software design)  The Processor time to execute a program depends on the hardware involved in the execution.  The execution of each instruction is divided into several steps. Each step completes in one clock cycle. 1.9 – Performance Metrics
  • 179. 26 • To calculate the execution time, the following parameters are considered • Clock rate (R), where R = 1/T • Cycles per instruction (S) • Instruction count for a task (N) • Execution time for the CPU (Tc) • CPU Execution Time = number of Instructions (N) × CPI (S) × clock cycle Time (T = 1/R), i.e. Tc = (N × S) / R 1.9 – Performance Metrics
  • 180. 27 1.9 – Performance Metrics RISC - Reduced Instruction Set Computers CISC - Complex Instruction Set Computers To improve performance (Gate Exercise) • Hardware design • Clock rate (R) can be increased • Pipeline concept (instruction overlapping) can be used • Instruction set • Using either RISC or CISC • Compiler • Optimized compiler Tc = (N × S) / R
  • 181. 28 Hardware design  Clock rate (R) can be increased » VLSI design for fabrication, » Transistor size small, • Switching speed between 0 and 1 is high • More transistor placed on chip  Pipeline concept (instruction overlapping) can be used » Performance can be increased by performing a number of operations in parallel. • Instruction level parallelism • Multi core processor – on single chip – dual core, quad core, octo core • Multiprocessor – many processor, each containing multiple cores. 1.9 – Performance Metrics
  • 182. 29 Comparing the performance of several machines. – performanceX = 1 / execution timeX – For two computers X and Y, if the performance of X is greater than the performance of Y, we have – PerformanceX > PerformanceY – 1 / Execution timeX > 1 / Execution timeY – Execution timeX < Execution timeY 1.9 – Performance Metrics
  • 183. 30 Comparing performance of several machines. • If X is n times faster than Y, then relate the performance of two different computers quantitatively. performanceX execution_timeY --------------------- = --------------------- = n performanceY execution_timeX • speed up of Machine A over Machine B = TCB / TCA 1.9 – Performance Metrics
  • 184. 31 PROBLEMS 1. Nancy has a computer with a dual-core processor and runs a program in 20 seconds. She also has a laptop with an octa-core processor and runs the same program in 10 seconds. Determine which one is faster and by how much. • Performance ratio = Execution time on the computer / Execution time on the laptop = 20 / 10 = 2 • The laptop runs 2 times faster than the computer 1.9 – Performance Metrics
  • 185. 32 PROBLEMS Suppose we have two implementations of the same instruction set architecture. Computer A has a clock cycle time of 250 ps and a CPI of 2.0 for some program, and computer B has a clock cycle time of 500 ps and a CPI of 1.2 for the same program. Which computer is faster for this program and by how much? • Let the number of instructions for the program be I.
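A quick worked check of this problem using CPU time = I × CPI × cycle time. The chosen instruction count I is arbitrary, since it cancels in the ratio:

```python
# Worked solution sketch for the problem above.
I = 1_000_000              # any instruction count works; it cancels in the ratio
time_a = I * 2.0 * 250     # computer A: CPI 2.0, cycle time 250 ps
time_b = I * 1.2 * 500     # computer B: CPI 1.2, cycle time 500 ps
print(time_a < time_b)     # -> True (A is faster)
print(time_b / time_a)     # -> 1.2 (A is 1.2 times faster than B for this program)
```

So the lower CPI of B does not compensate for its slower clock: 2.0 × 250 = 500 ps per instruction on A versus 1.2 × 500 = 600 ps per instruction on B.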
  • 188. 1 Session Topic 2.1 Signed number representation 2.2 Fixed and floating point representations 2.3 Character representation 2.4 Integer addition and subtraction 2.5 Ripple carry adder 2.6 Carry look-ahead adder 2.7 Shift-and add multiplication 2.8 Booth multiplier 2.9 Carry save multiplier 2.10 Division - restoring techniques 2.11 Division - non-restoring techniques 2.12 Floating point arithmetic Module 2- Data representation & Computer arithmetic
  • 189. 2 2.1 – Signed number representation Integer Representation • Computers use a fixed number of bits to represent an integer. • The commonly-used bit-lengths for integers are 8-bit, 16-bit, 32-bit or 64-bit. • Unsigned Integers: can represent zero and positive integers. • Signed Integers: can represent zero, positive and negative integers. • Three representation for signed integers: – Sign-Magnitude representation – 1's Complement representation – 2's Complement representation
  • 190. 3 • Unsigned and Signed Binary Numbers (a) Unsigned number: bits bn-1 … b1 b0 all contribute to the magnitude (MSB included). (b) Signed number: the MSB bn-1 is the sign bit (0 denotes +, 1 denotes −); bits bn-2 … b0 give the magnitude.
  • 191. 4 Unsigned Integers • Representation of a binary value, e.g. 0 0 1 0 0 0 0 1 • The integer value in decimal is V(B): V(B) = 0×2^7 + 0×2^6 + 1×2^5 + 0×2^4 + 0×2^3 + 0×2^2 + 0×2^1 + 1×2^0 = 0+0+32+0+0+0+0+1 = 33D 2.1 – Signed number representation
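The same weighted-sum evaluation can be checked in a few lines of Python; the bit string and loop are purely illustrative:

```python
# Evaluate 0010 0001 as a weighted sum of powers of two, and compare
# against Python's built-in base-2 parser.
bits = "00100001"
value = sum(int(b) << (7 - i) for i, b in enumerate(bits))
print(value)                  # -> 33
print(int(bits, 2) == value)  # -> True
```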
  • 192. 5 Unsigned Integers • Range: 0 to 2^n − 1 2.1 – Signed number representation
n | Minimum | Maximum
8 | 0 | (2^8)-1 (= 255)
16 | 0 | (2^16)-1 (= 65,535)
32 | 0 | (2^32)-1 (= 4,294,967,295) (9+ digits)
64 | 0 | (2^64)-1 (= 18,446,744,073,709,551,615) (19+ digits)
  • 193. 6 Signed Integers • Signed integers - represent zero, positive integers & negative integers. • Three representation schemes are available for signed integers: – Sign-Magnitude representation – 1's Complement representation – 2's Complement representation • In all the above three schemes, the most-significant bit (MSB) is called the sign bit. • The sign bit is used to represent the sign of the integer – 0 for positive integers – 1 for negative integers. 2.1 – Signed number representation
  • 194. 7 1. Sign-Magnitude Representation • The most-significant bit (MSB) is the sign bit, – 0 representing positive integer and – 1 representing negative integer. • The remaining n-1 bits represents the magnitude (absolute value) of the integer. • Example 1: Suppose that n=8 and the binary representation is 0 100 0001B. Sign bit is 0 ⇒ positive Absolute value is 100 0001B = 65D Hence, the integer is +65D 2.1 – Signed number representation 0 1 0 0 0 0 0 1
  • 195. 8 • Example 2: Suppose that n=8 and the binary representation is 1 000 0001B. Sign bit is 1 ⇒ negative Absolute value is 000 0001B = 1D Hence, the integer is -1D 2.1 – Signed number representation 1 0 0 0 0 0 0 1
  • 196. 9 Drawbacks of sign-magnitude representation: • There are two representations for the number zero, which could lead to inefficiency and confusion. – 0000 0000B → zero (+0) – 1000 0000B → zero (−0) • Positive and negative integers need to be processed separately. 2.1 – Signed number representation
  • 197. 10 Try Yourself • Example 3: Suppose that n=8 and the binary representation is 0 000 0000B. • Example 4: Suppose that n=8 and the binary representation is 1 000 0000B. 2.1 – Signed number representation
  • 198. 11 Try Yourself • Example 3: Suppose that n=8 and the binary representation is 0 000 0000B. Sign bit is 0 ⇒ positive Absolute value is 000 0000B = 0D Hence, the integer is +0D • Example 4: Suppose that n=8 and the binary representation is 1 000 0000B. Sign bit is 1 ⇒ negative Absolute value is 000 0000B = 0D Hence, the integer is -0D 2.1 – Signed number representation
  • 199. 12 2. 1's Complement Representation • MSB - sign bit, – 0 representing positive integers – 1 representing negative integers. • The remaining n-1 bits represents the magnitude of the integer, as follows: – for positive integers, • absolute value = magnitude of the (n-1) bit . – for negative integers, • absolute value = magnitude of the complement (inverse) of the (n-1)-bit • hence called 1's complement. 2.1 – Signed number representation
  • 200. 13 • Example 1: Suppose that n=8 and the binary representation 0 100 0001B. Sign bit is 0 ⇒ positive Absolute value is 100 0001B = 65D Hence, the integer is +65D • Example 2: Suppose that n=8 and the binary representation 1 000 0001B. Sign bit is 1 ⇒ negative Absolute value is the complement of 000 0001B, i.e., 111 1110B = 126D Hence, the integer is -126D 2.1 – Signed number representation 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1
  • 201. 14 Drawbacks: • There are two representations (0000 0000B and 1111 1111B) for zero. • The positive integers and negative integers need to be processed separately. 2.1 – Signed number representation
  • 202. 15 Try yourself • Example 3: Suppose that n=8 and the binary representation 0 000 0000B. • Example 4: Suppose that n=8 and the binary representation 1 111 1111B. 2.1 – Signed number representation
  • 203. 16 Try yourself • Example 3: Suppose that n=8 and the binary representation 0 000 0000B. Sign bit is 0 ⇒ positive Absolute value is 000 0000B = 0D Hence, the integer is +0D • Example 4: Suppose that n=8 and the binary representation 1 111 1111B. Sign bit is 1 ⇒ negative Absolute value is the complement of 111 1111B, i.e., 000 0000B = 0D Hence, the integer is -0D 2.1 – Signed number representation
  • 204. 17 3. 2's Complement Representation • MSB - sign bit, – 0 representing positive integers – 1 representing negative integers. • The remaining n-1 bits represents the magnitude of the integer, as follows: – for positive integers, • absolute value = the magnitude of the (n-1)-bit – for negative integers, • absolute value = the magnitude of the complement of the (n-1)-bit plus one • hence called 2's complement. 2.1 – Signed number representation
  • 205. 18 • Example 1: Suppose that n=8 and the binary representation 0 100 0001B. Sign bit is 0 ⇒ positive Absolute value is 100 0001B = 65D Hence, the integer is +65D • Example 2: Suppose that n=8 and the binary representation 1 000 0001B. Sign bit is 1 ⇒ negative Absolute value is the complement of 000 0001B plus 1, i.e., complement of 000 0001B = 111 1110B 1B (+) = 111 1111B = 127D Hence, the integer is -127D 2.1 – Signed number representation 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1
  • 206. 19 TRY YOURSELF • Example 3: Suppose that n=8 and the binary representation 0 000 0000B. • Example 4: Suppose that n=8 and the binary representation 1 111 1111B. 2.1 – Signed number representation
  • 207. 20 TRY YOURSELF • Example 3: Suppose that n=8 and the binary representation 0 000 0000B. Sign bit is 0 ⇒ positive Absolute value is 000 0000B = 0D Hence, the integer is +0D • Example 4: Suppose that n=8 and the binary representation 1 111 1111B. Sign bit is 1 ⇒ negative Absolute value is the complement of 111 1111B plus 1, i.e., 000 0000B + 1B = 000 0001B = 1D Hence, the integer is -1D 2.1 – Signed number representation
  • 208. 21 2.1 – Signed number representation
  • 209. 22 Computers use 2's Complement Representation for Signed Integers • There is only one representation for the number zero in 2's complement, instead of two representations in sign-magnitude and 1's complement. • Positive and negative integers can be treated together in addition and subtraction. Subtraction can be carried out using the "addition logic". • Example 1: Addition of Two Positive Integers: Suppose that n=8, 65D + 5D = 70D 65D → 0100 0001B 5D → 0000 0101B (+ ) 0100 0110B → 70D 2.1 – Signed number representation
  • 210. 23 • Example 2: Subtraction is treated as Addition of a Positive and a Negative Integers: Suppose that n=8, 65D - 5D = 65D + (-5D) = 60D 65D → 0100 0001B -5D → 1111 1011B (+ ) 0011 1100B → 60D (discard carry ) • Example 3: Addition of Two Negative Integers: Suppose that n=8, -65D - 5D = (-65D) + (-5D) = -70D -65D → 1011 1111B -5D → 1111 1011B (+ ) 1011 1010B → -70D (discard carry) 2.1 – Signed number representation
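The add/subtract-by-addition examples above can be reproduced with a small 8-bit two's-complement sketch; the helper names `to_u8`/`from_u8` are invented for this illustration:

```python
# 8-bit two's-complement helpers (names invented for this sketch).
def to_u8(x):
    return x & 0xFF                      # encode a signed value into 8 bits (mod 256)

def from_u8(b):
    return b - 256 if b & 0x80 else b    # decode 8 bits as a signed value

# Example 2: 65 - 5 = 60, subtraction via "addition logic";
# masking to 8 bits discards the carry out of bit 7.
s = to_u8(to_u8(65) + to_u8(-5))
print(from_u8(s))   # -> 60

# Example 3: (-65) + (-5) = -70
t = to_u8(to_u8(-65) + to_u8(-5))
print(from_u8(t))   # -> -70
```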
  • 211. 24 • Because of the fixed precision (i.e., fixed number of bits), an n-bit 2's complement signed integer has a certain range. • For example, for n=8, the range of 2's complement signed integers is -128 to +127. 2.1 – Signed number representation
  • 212. 25 Range of n-bit 2's Complement Signed Integers • - 2^n-1 to + (2^n-1)-1 n minimum maximum 8 -(2^7) (=-128) +(2^7)-1 (=+127) 16 -(2^15) (=-32,768) +(2^15)-1 (=+32,767) 32 -(2^31) (=-2,147,483,648) +(2^31)-1 (=+2,147,483,647)(9+ digits) 64 -(2^63) (=-9,223,372,036,854,775,808) +(2^63)-1 (=+9,223,372,036,854,775,807)(18+ digits) 2.1 – Signed number representation
  • 213. 26 • During addition (and subtraction), it is important to check whether the result exceeds this range, in other words, whether overflow or underflow has occurred. 2.1 – Signed number representation
  • 214. 27 Example 4: Overflow: • Suppose that n=8, 127D + 2D = 129D (overflow - beyond the range) 127D → 0111 1111B 2D → 0000 0010B (+ ) 1000 0001B → -127D (wrong) Example 5: Underflow: • Suppose that n=8, -125D - 5D = -130D (underflow - below the range) -125D → 1000 0011B -5D → 1111 1011B (+ ) 0111 1110B → +126D (wrong) 2.1 – Signed number representation
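A common hardware-style test for these cases: overflow occurs when both operands have the same sign but the masked sum has the opposite sign. A minimal 8-bit sketch (the function name `add8` is illustrative):

```python
# 8-bit add with overflow detection: overflow iff both operands share a
# sign and the result's sign differs (checked via the MSB, bit 7).
def add8(a, b):
    s = (a + b) & 0xFF
    overflow = ((a ^ s) & (b ^ s) & 0x80) != 0
    return s, overflow

print(add8(0x7F, 0x02))  # 127 + 2   -> (0x81, True): wraps to -127
print(add8(0x83, 0xFB))  # -125 - 5  -> (0x7E, True): wraps to +126
print(add8(0x05, 0x03))  # 5 + 3     -> (0x08, False): no overflow
```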
  • 215. 28 • n=4 bit binary , signed number representation 2.1 – Signed number representation
  • 216. 29 Match & Match a) + 5 i) 1000 1) Unsigned Representation b) - 5 ii) 1000 0111 2) 2’s Complement Representation c) -7 iii) 0000 0101 3) Signed Magnitude Representation d) - 7 iv) 1011 4) 1’s Complement Representation 29 ACTIVITY TIME
  • 217. 30 Match & Match a) + 5 i) 1000 1) Unsigned Representation b) - 5 ii) 1000 0111 2) 2’s Complement Representation c) -7 iii) 0000 0101 3) Signed Magnitude Representation d) - 7 iv) 1011 4) 1’s Complement Representation 30 ACTIVITY TIME
  • 219. 1 Session Topic 2.1 Signed number representation 2.2 Fixed and floating point representations 2.3 Character representation 2.4 Integer addition and subtraction 2.5 Ripple carry adder 2.6 Carry look-ahead adder 2.7 Shift-and add multiplication 2.8 Booth multiplier 2.9 Carry save multiplier 2.10 Division - restoring techniques 2.11 Division - non-restoring techniques 2.12 Floating point arithmetic Module 2- Data representation & Computer arithmetic
  • 220. 2 2.2 – Fixed and Floating point representations Real Numbers There are two major approaches to store real numbers (i.e., numbers with fractional component) in modern computing. (i) Fixed Point Notation – there are a fixed number of digits after the decimal point, (ii) Floating Point Notation. – allows for a varying number of digits after the decimal point. ✔ Two representations Single precision (32-bit) Double precision (64-bit)
  • 221. 3 2.2 – Fixed and Floating point representations Fixed point Representation • It has fixed number of bits for integer part and for fractional part. • For example, if given fixed-point representation is • minimum value is 0000.0001 and • maximum value is 9999.9999. • There are three parts of a fixed-point number representation: the sign field, integer field, and fractional field. I I I I . F F F F
  • 222. 4 2.2 – Fixed and Floating point representations Floating point Representation IEEE 754 standard for Floating point Representation Three parts – Sign bit ( MSB- bit 31 ) – Exponent E’( bit 23 to bit 30) – Mantissa or fractional ( bit 0 to bit 22)
  • 223. 5 2.2 – Fixed and Floating point representations • Value = ±1.M × 2^E = ±1.M × 2^(E’−127) • E’ = E + 127. E’ is in the range 0 < E’ < 255. • 0 and 255 are used to represent special values. • Therefore, for normal values, 1 ≤ E’ ≤ 254. • This means that the actual exponent E is in the range -126 ≤ E ≤ 127. • So the scale factor has a range of 2^-126 to 2^+127. • Since binary normalization is used, the most significant bit of the mantissa is always equal to 1
  • 224. 6 Example: • Sign bit : 0 → positive • Exponent E’ : 00101000 → 40 • Mantissa M : 001010….. 0 • Value = ±M × 2^(E’−127) • Value = +0.001010…0 × 2^(40−127) = +0.001010…0 × 2^-87 (unnormalized form) • Value = +1.010…0 × 2^-90 (normalized form) 0 00101000 001010….. 0
  • 225. 7 2.2 – Fixed and Floating point representations Floating point Representation IEEE 754 standard for Floating point Representation Three parts – Sign bit ( MSB- bit 63 ) – Exponent E’( bit 52 to bit 62) – Mantissa or fractional ( bit 0 to bit 51)
  • 226. 8 • Exponent and mantissa ranges are increased. – The 52-bit mantissa M – The 1 bit provides the sign indication S – The 11 bits provide the exponent E’ • Exponent E’ uses the excess-1023 format. – For normal values, 1 ≤ E’ ≤ 2046, – 0 and 2047 are used to indicate special values, – Thus, the actual exponent E is in the range (E’ = E + 1023) -1022 ≤ E ≤ 1023, – So the scale factor range is 2^-1022 to 2^1023. • Value = ±1.M × 2^(E’−1023)
  • 227. 9 Normalization • Sign bit : 0 → positive • Exponent E’ : 10001000 → 136 • Mantissa M : 0010110….. • Value = ±1.M × 2^(E’−127) • Value in unnormalized form = +0.0010110…0 × 2^(136−127) • Value in normalized form = +1.0110…0 × 2^(136−127−3) • Value = +1.0110… × 2^6 0 10001000 001011 0…..
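The sign/exponent/mantissa split can be inspected directly with Python's `struct` module. This sketch (the helper name `fields32` is illustrative) unpacks a 32-bit float into the three fields the slides describe:

```python
import struct

# Split a 32-bit IEEE 754 float into (sign, biased exponent E', fraction).
def fields32(x):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31            # bit 31
    e_prime = (bits >> 23) & 0xFF  # bits 23..30
    frac = bits & 0x7FFFFF       # bits 0..22 (mantissa without the hidden 1)
    return sign, e_prime, frac

# 1.375 = 1.011b x 2^0, so E' = 0 + 127 = 127 and the fraction is 011 000...
print(fields32(1.375))   # -> (0, 127, 3145728), i.e. frac = 0x300000
print(fields32(-2.0))    # -> (1, 128, 0): -1.0b x 2^1, E' = 1 + 127 = 128
```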
  • 228. 10 Special values • The end values 0 and 255 of the excess-127 exponent and the end values 0 and 2047 of the excess-1023 exponent E' are used to represent special values. E’ = 0 and M = 0 → exact zero. E’ = 255 and M = 0 → infinity. E’ = 0 and M != 0 → denormal values. E’ = 255 and M != 0 → NaN [Not a Number], e.g. 0/0 or sqrt(-1).
  • 229. 11 2.3 CHARACTER REPRESENTATION 2.3 – Character Representations
  • 230. 12 • In computer memory, characters are "encoded" (or "represented") using the ASCII (American Standard Code for Information Interchange) code • ASCII is originally a 7-bit code. It has been extended to 8 bits to better utilize the 8-bit computer memory organization. • The 8th bit was originally used for parity checking in early computers. • In ASCII – Code numbers 32D (20H) to 126D (7EH) are printable (displayable) characters – Code numbers 0D (00H) to 31D (1FH), and 127D (7FH) are special control characters, which are non-printable (non-displayable) 2.3 – Character Representations
  • 231. 13 • Code number 32D (20H) is the blank or space character. – '0' to '9': 48D (30H) to 57D (39H) – 'A' to 'Z': 65D (41H) to 90D (5AH) – 'a' to 'z': 97D (61H) to 122D (7AH) . • Code numbers 0D (00H) to 31D (1FH), and 127D (7FH) are special control characters, which are non-printable (non-displayable) 2.3 – Character Representations
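The code ranges quoted above can be confirmed with Python's `ord()`/`chr()` built-ins:

```python
# ASCII code ranges for digits, uppercase and lowercase letters.
print(ord('0'), ord('9'))   # -> 48 57
print(ord('A'), ord('Z'))   # -> 65 90
print(ord('a'), ord('z'))   # -> 97 122
print(chr(32) == ' ')       # -> True (32D / 20H is the space character)
```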
  • 232. 14 ASCII table (row = tens digit, column = units digit; e.g. row 4, column 0 is code 40)
Dec |  0  1  2  3  4  5  6  7  8  9
  3 |        SP  !  "  #  $  %  &  '
  4 |  (  )  *  +  ,  -  .  /  0  1
  5 |  2  3  4  5  6  7  8  9  :  ;
  6 |  <  =  >  ?  @  A  B  C  D  E
  7 |  F  G  H  I  J  K  L  M  N  O
  8 |  P  Q  R  S  T  U  V  W  X  Y
  9 |  Z  [  \  ]  ^  _  `  a  b  c
 10 |  d  e  f  g  h  i  j  k  l  m
 11 |  n  o  p  q  r  s  t  u  v  w
 12 |  x  y  z  {  |  }  ~
2.3 – Character Representations Decimal Representation
  • 233. 15 2.3 – Character Representations
Hex |  0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
  2 | SP  !  "  #  $  %  &  '  (  )  *  +  ,  -  .  /
  3 |  0  1  2  3  4  5  6  7  8  9  :  ;  <  =  >  ?
  4 |  @  A  B  C  D  E  F  G  H  I  J  K  L  M  N  O
  5 |  P  Q  R  S  T  U  V  W  X  Y  Z  [  \  ]  ^  _
  6 |  `  a  b  c  d  e  f  g  h  i  j  k  l  m  n  o
  7 |  p  q  r  s  t  u  v  w  x  y  z  {  |  }  ~
Hexadecimal Representation
  • 234. 16 2.3 – Character Representations Non printable characters
  • 236. 18 TRY YOURSELF Convert the following character into ASCII form (both decimal numbers , hexa- decimal numbers) representation 1. F 2. h 3. D 4. u 5. 7 6. 1 7. { 8. } 9. 10. [ 11. ] 2.3 – Character Representations
  • 238. Convert the following decimal number into fixed-point notation (use a 12-bit register including a 4-bit fractional part) Example 1 (positive number) i) 27.5 Given • 12-bit register including a 4-bit fractional part • i.e. 8-bit integer part (including the MSB sign bit) • 4-bit fractional part • Integer part: 27 → 11011 → 00011011 (in 8-bit representation) • Fractional part: 0.5 → 1000 (in 4-bit representation) • Answer: 000110111000 (i.e. 00011011.1000)
  • 239. Convert the following decimal number into fixed-point notation (use a 12-bit register including a 4-bit fractional part) Example 2 (negative number) i) -55.75 Given • 12-bit register including a 4-bit fractional part • i.e. 8-bit integer part (including the MSB sign bit) • 4-bit fractional part • Integer part: 55 → 110111 → 00110111 (in 8-bit representation) • Fractional part: 0.75 → 1100 (in 4-bit representation) • 55.75 → 00110111.1100 • 1’s complement → 11001000.0011 • 2’s complement → 11001000.0100 • Answer: -55.75 → 11001000.0100
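Both worked examples can be reproduced with a small scaling sketch: multiplying by 2^4 shifts the 4 fraction bits into the integer part, and masking to 12 bits yields the two's-complement pattern. The function name is illustrative:

```python
# Fixed-point encoder sketch: 8 integer bits + 4 fraction bits (12 bits
# total), negatives in two's complement, matching the worked examples.
def to_fixed_q8_4(x):
    raw = round(x * 16) & 0xFFF       # scale by 2^4, wrap to 12 bits
    return format(raw, "012b")

print(to_fixed_q8_4(27.5))    # -> 000110111000  (00011011.1000)
print(to_fixed_q8_4(-55.75))  # -> 110010000100  (11001000.0100)
```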
  • 240. Convert the following decimal number into floating-point notation (use 32-bit notation) Example 1 (positive number) 17.625 In 32-bit notation • Sign bit: 0 • Integer part: 17 → 10001 • Fractional part: 0.625 → 101000.. • 17.625 → 10001.101000.. • = 1.0001101000.. × 2^4 (in normalized form) • Exponent E’ = E + 127 = 4 + 127 = 131 → 10000011 • 32-bit layout: sign bit | exponent | mantissa • Answer: 0 | 10000011 | 0001101000……..
  • 241. Convert the following decimal number into floating-point notation (use 32-bit notation) Example 2 (negative number) -17.625 In 32-bit notation • Sign bit: 1 • Integer part: 17 → 10001 • Fractional part: 0.625 → 101000.. • 17.625 → 10001.101000.. • = 1.0001101000.. × 2^4 (in normalized form) • Exponent E’ = E + 127 = 4 + 127 = 131 → 10000011 • 32-bit layout: sign bit | exponent | mantissa • Answer: 1 | 10000011 | 0001101000……..
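The two worked layouts can be verified against Python's own IEEE 754 encoder: the pattern 0|10000011|0001101000… is the hex word 418d0000, and flipping the sign bit gives c18d0000:

```python
import struct

# Verify the worked examples: 17.625 should encode as sign 0,
# E' = 131 (1000 0011), fraction 0001101 followed by zeros.
print(struct.pack(">f", 17.625).hex())   # -> 418d0000
print(struct.pack(">f", -17.625).hex())  # -> c18d0000
```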
  • 242. Session Topic 2.1 Signed number representation 2.2 Fixed and floating point representations 2.3 Character representation 2.4 Integer addition and subtraction 2.5 Ripple carry adder 2.6 Carry look-ahead adder 2.7 Shift-and add multiplication 2.8 Booth multiplier 2.9 Carry save multiplier 2.10 Division - restoring techniques 2.11 Division - non-restoring techniques 2.12 Floating point arithmetic Module 2- Data representation & Computer arithmetic
  • 243. 2.4 – Integer Addition and Subtraction
 7     0 1 1 1
 6     0 1 1 0  (+)
------------------
13     1 1 0 1
  • 244. Half Adder An adder circuit adds single-bit binary numbers. A half adder does not consider the carry from the previous sum. Calculation of the sum and carry of a half adder: Sum = XOR gate = A ⊕ B Carry out = AND gate = A·B 2.4 – Integer Addition and Subtraction
  • 245.
A B | OR AND XOR
0 0 |  0   0   0
0 1 |  1   0   1
1 0 |  1   0   1
1 1 |  1   1   0
  • 246. Half Adder Design circuit for the Half Adder: Sum = A ⊕ B Carry out = A·B For 4-bit number addition, 4 half adders are combined. Drawback: does not consider the carry input 2.4 – Integer Addition and Subtraction
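The half-adder equations map directly onto bitwise operators; a minimal sketch (the function name is illustrative):

```python
# Half adder: Sum = A XOR B, Carry = A AND B.
def half_adder(a, b):
    return a ^ b, a & b    # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
# The 1,1 row gives sum 0 with carry 1; since there is no carry input,
# this is exactly the drawback noted above.
```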
  • 247. Full Adder Full adder circuit Add single bit number with carry 2.4 – Integer Addition and Subtraction
  • 248. 2.4 – Integer Addition and Subtraction
  • 249. Full Adder sum Carry out 2.4 – Integer Addition and Subtraction
  • 250. Addition logic for a single stage 9 2.4 – Integer Addition and Subtraction
  • 251. 4 bit adder circuit Add two 4 bit numbers 2.4 – Integer Addition and Subtraction
  • 252. n-bit adder • Cascade n full adder (FA) blocks to form a n-bit adder. • Carries propagate or ripple through this cascade, n-bit ripple carry adder. 11 2.4 – Integer Addition and Subtraction
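The cascade can be sketched in software: each full adder computes s = a ⊕ b ⊕ c and c' = ab + ac + bc, and the carry ripples through the list. Bits are LSB first; the function names are illustrative:

```python
# Bit-level ripple-carry adder sketch: full adders cascaded LSB to MSB.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(x_bits, y_bits, c0=0):
    # x_bits / y_bits are lists of bits, LSB first; returns (sum bits, carry out)
    c, out = c0, []
    for a, b in zip(x_bits, y_bits):
        s, c = full_adder(a, b, c)   # each stage waits for the previous carry
        out.append(s)
    return out, c

# 7 (0111) + 6 (0110) = 13 (1101), written LSB first:
print(ripple_add([1, 1, 1, 0], [0, 1, 1, 0]))  # -> ([1, 0, 1, 1], 0)
```

The sequential loop mirrors the hardware limitation: stage i cannot finish until stage i-1 has produced its carry, which is exactly why the delay grows linearly with n.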
  • 253. Binary Addition / Subtraction Subtraction operation • The subtraction X − Y = X + (−Y): take the 2’s complement of Y and add it to X. • The 2nd input to each FA is given through an XOR gate. • The 2nd input of every XOR gate is connected to the Add/Sub control line (M). 2.4 – Integer Addition and Subtraction
  • 254. Binary Addition / Subtraction logic circuit • Subtraction operation • Add/Sub input control line = 1 (Y input is 1’s-complemented) • C0 =1. (Y input is 1’s-complemented +1= 2’s-complementation of Y) Addition operation • Add/Sub input control line = 0 • C0 =0. 2.4 – Integer Addition and Subtraction
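The Add/Sub control line can be modelled in a few lines: M = 1 inverts every bit of Y (the XOR gates) and feeds C0 = 1, which together add the 2's complement of Y. This is a sketch; the names are illustrative:

```python
# Adder/subtractor model: M = 0 adds, M = 1 computes X - Y.
def add_sub(x, y, m, n=8):
    mask = (1 << n) - 1
    if m:
        y ^= mask                 # XOR gates: 1's complement of Y when M = 1
    return (x + y + m) & mask     # C0 = M supplies the +1 of the 2's complement

print(add_sub(9, 6, 0))   # -> 15 (addition, M = 0 and C0 = 0)
print(add_sub(9, 6, 1))   # -> 3  (subtraction, M = 1 and C0 = 1)
```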
  • 255. K n-bit adder K n-bit numbers can be added by cascading k n-bit adders. Each n-bit adder forms a block, so this is cascading of blocks. Carries ripple or propagate through blocks, Blocked Ripple Carry Adder 14 2.4 – Integer Addition and Subtraction
  • 256. TRY YOURSELF Perform Binary Addition , subtraction on following numbers (using 2’s complement) a) 6 + 7 b) 9 +12 c) 10 + 15 1) 7 – 6 2) 12- 9 3) 15 - 10 2.4 – Integer Addition and Subtraction
  • 258. Session Topic 2.1 Signed number representation 2.2 Fixed and floating point representations 2.3 Character representation 2.4 Integer addition and subtraction 2.5 Ripple carry adder 2.6 Carry look-ahead adder 2.7 Shift-and add multiplication 2.8 Booth multiplier 2.9 Carry save multiplier 2.10 Division - restoring techniques 2.11 Division - non-restoring techniques 2.12 Floating point arithmetic Module 2- Data representation & Computer arithmetic
  • 259.  4 bit adder circuit  Add two 4 bit numbers 2.5 – Ripple Carry Adder
  • 260.  N bit Ripple Carry Adder  Adds n bit number with carry 2.5 – Ripple Carry Adder
  • 261. Computing the delay time Consider the 0th stage (a full adder with inputs x0, y0, c0, producing s0 and c1): • c1 is available after 2 gate delays. • s0 is available after 1 gate delay. 4 2.5 – Ripple Carry Adder
  • 262. Delay of the circuit Consider the 3rd stage: • c3 is available after 2+2+2 = 6 gate delays. • s2 is available after 2+2+1 = 5 gate delays. Consider the nth stage: • cn is available after 2n gate delays. • sn-1 is available after 2n−1 gate delays. 2.5 – Ripple Carry Adder
  • 263. Virtual Lab Link: Design a 4-bit RCA and upload your diagram in Google Classroom 2.5 – Ripple Carry Adder
  • 264. TRY YOURSELF Perform Binary Addition , subtraction on following numbers a) 6 + 7 b) 9 +12 c) 10 + 15 1) 7 – 6 2) 12- 9 3) 15 - 10 2.5 – Ripple Carry Adder
  • 266. Session Topic 2.1 Signed number representation 2.2 Fixed and floating point representations 2.3 Character representation 2.4 Integer addition and subtraction 2.5 Ripple carry adder 2.6 Carry look-ahead adder 2.7 Shift-and add multiplication 2.8 Booth multiplier 2.9 Carry save multiplier 2.10 Division - restoring techniques 2.11 Division - non-restoring techniques 2.12 Floating point arithmetic Module 2- Data representation & Computer arithmetic
  • 267. 2.6 – Carry Look-ahead Adder (Fast Adder) Why a Fast Adder? Drawbacks of the Ripple Carry Adder: 1) Too much gate delay in developing the RCA outputs 2) The final carry output Cn is available only after 2n gate delays 3) All sum bits are available after 2n gate delays, including the delay through the XOR gates 4) The overflow indication is available after 2n + 2 gate delays 5) In an n-bit RCA, the longest path runs from the input at the LSB position to the output at the MSB position DESIGN OF FAST ADDERS
  • 268. 2.6 – Carry Look-ahead Adder (Fast Adder) Carry-Lookahead Logic • In the ripple carry adder, the FAs cannot operate simultaneously, because the carry input of each FA depends on the carry output of the previous FA. • Carry-lookahead logic generates the carries itself and supplies them to the FAs, so all FAs operate simultaneously, reducing the delay significantly.
  • 269. 2.6 – Carry Look-ahead Adder (Fast Adder) CARRY LOOKAHEAD ADDITION: Sum Si = (Xi ⊕ Yi) ⊕ Ci Carry out Ci+1 = Xi Yi + Xi Ci + Yi Ci Functions used: generate and propagate Ci+1 = Xi Yi + (Xi + Yi) Ci = Gi + Pi Ci where the generate function Gi = Xi Yi and the propagate function Pi = Xi + Yi
  • 270. 2.6 – Carry Look-ahead Adder (Fast Adder) Generate function Gi
• When Xi = Yi = 1: Gi = Xi Yi = 1, while Xi + Yi = 10 in binary; omitting the carry gives Pi = 0 (the XOR implementation of Pi)
• The generate function Gi produces the carry out independently of Pi when Xi = Yi = 1
Propagate function Pi
• When exactly one of Xi, Yi is 1: Pi = 1 and Gi = 0
• The propagate function Pi passes the carry out independently of Gi when either Xi = 1 or Yi = 1
Gi = Xi Yi   Pi = Xi + Yi
A B | OR AND XOR
0 0 |  0   0   0
0 1 |  1   0   1
1 0 |  1   0   1
1 1 |  1   1   0
  • 271. 2.6 – Carry Look-ahead Adder (Fast Adder) DESIGN OF FAST ADDERS • Gi = Xi Yi, so Gi is implemented with an AND gate. • Pi = Xi + Yi, so Pi is implemented with an OR gate. • Si = Xi ⊕ Yi ⊕ Ci, so Si is implemented with two XOR gates. • Can the number of gates be reduced?
  • 272. 2.6 – Carry Look-ahead Adder (Fast Adder)
• Pi can be implemented with an OR gate (Pi = Xi + Yi)
• In this circuit, however, it is implemented with the same XOR gate that already generates the sum output
• Comparing the truth tables of XOR and OR, they agree in every row except the last (Xi = Yi = 1)
• But when Xi = Yi = 1, Gi = 1, so the carry out is determined by Gi alone and does not depend on Pi
• So for the design of Pi, reuse the XOR gate that is already there for the sum calculation
XOR gate: X Y Z = 00→0, 01→1, 10→1, 11→0
OR gate:  X Y Z = 00→0, 01→1, 10→1, 11→1
  • 273. 2.6 – Carry Look-ahead Adder (Fast Adder) • The combinational circuit of these gates for one bit position is called a B cell (bit cell).
  • 274. 2.6 – Carry Look-ahead Adder (Fast Adder) 4-bit Carry Lookahead Adder (CLA)
  • 275. 2.6 – Carry Look-ahead Adder (Fast Adder) Design of 4-bit Carry Lookahead Adder
With Ci+1 = Gi + Pi Ci, the carries can be implemented as
C1 = G0 + P0 C0
C2 = G1 + P1 C1
C3 = G2 + P2 C2
C4 = G3 + P3 C3
Expanding C4 by substituting C3, then C2, then C1:
C4 = G3 + P3 (G2 + P2 C2)
   = G3 + P3 G2 + P3 P2 (G1 + P1 C1)
   = G3 + P3 G2 + P3 P2 G1 + P3 P2 P1 (G0 + P0 C0)
   = G3 + P3 G2 + P3 P2 G1 + P3 P2 P1 G0 + P3 P2 P1 P0 C0
  • 276. 2.6 – Carry Look-ahead Adder (Fast Adder) 11 Similarly, c1 = G0 + P0c0 c2 = G1 + P1G0 + P1P0c0 c3 = G2 + P2G1 + P2P1G0 + P2P1P0c0 c4 = G3 + P3G2 + P3P2G1 + P3P2P1G0 + P3P2P1P0c0
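The flattened carry equations above translate directly into code. This is a hedged Python sketch (illustrative names) using bitwise AND for product terms and OR for sums:

```python
# Sketch of 4-bit carry lookahead: every carry is computed directly from the
# flattened sum-of-products equations above, using only Gi = xi AND yi and
# Pi = xi OR yi, so no carry has to ripple through earlier stages.
def cla_carries(xs, ys, c0=0):
    """xs, ys: 4 operand bits each, LSB first. Returns [c1, c2, c3, c4]."""
    G = [x & y for x, y in zip(xs, ys)]   # generate functions
    P = [x | y for x, y in zip(xs, ys)]   # propagate functions
    c1 = G[0] | (P[0] & c0)
    c2 = G[1] | (P[1] & G[0]) | (P[1] & P[0] & c0)
    c3 = (G[2] | (P[2] & G[1]) | (P[2] & P[1] & G[0])
          | (P[2] & P[1] & P[0] & c0))
    c4 = (G[3] | (P[3] & G[2]) | (P[3] & P[2] & G[1])
          | (P[3] & P[2] & P[1] & G[0]) | (P[3] & P[2] & P[1] & P[0] & c0))
    return [c1, c2, c3, c4]

# 0111 + 0110 (7 + 6, LSB first): same carries a ripple adder would produce
print(cla_carries([1, 1, 1, 0], [0, 1, 1, 0]))
```

Each carry is a two-level AND-OR expression of the operand bits, which is what makes all of them available in parallel.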
  • 277. 2.6 – Carry Look-ahead Adder (Fast Adder) Delay calculation • All carries can be obtained three gate delays after the input operands X, Y, and C0 are applied: – Only one gate delay is needed to develop all Pi and Gi signals – Two more gate delays are needed to produce each Ci+1 (AND-OR circuit) • After one more XOR gate delay, all sum bits are available (4 gate delays). • In total, the 4-bit addition process requires only four gate delays, and with full lookahead this count is independent of n.
  • 278. 2.6 – Carry Look-ahead Adder (Fast Adder) Thus in a 4-bit CLA adder: C4 = 3 gate delays, S3 = 3 + 1 (XOR) = 4 gate delays, for all input cases
Carry / Sum | CLA | Ripple Carry
C4          |  3  |      8
S3          |  4  |      7
  • 279. 2.6 – Carry Look-ahead Adder (Fast Adder) 16-bit Carry Lookahead Adder (CLA)
• The 4-bit adder design cannot be extended easily to longer operands because of the fan-in problem.
• Longer adders are built by cascading a number of 4-bit adders:
– 16-bit adder: four 4-bit CLAs cascaded
– 32-bit adder: eight 4-bit CLAs cascaded
Delay in cascading
• 16-bit adder (four 4-bit CLAs): C4 = 3, S3 = 4; C16 = (3 × 2) + 3 = 9, S15 = 9 + 1 = 10
• 32-bit adder (eight 4-bit CLAs): C32 = (7 × 2) + 3 = 17, S31 = 17 + 1 = 18
• Compared to the RCA, a 16- or 32-bit cascaded CLA has much less delay.
• The delay can be decreased further by using carry-lookahead logic to generate C4, C8, ...
Carry / Sum | Cascaded CLA | Ripple Carry
C16         |       9      |      32
S15         |      10      |      31
C32         |      17      |      64
S31         |      18      |      63
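A minimal sketch of the cascaded-delay arithmetic above (assuming the counts in the table: 3 delays for the first block's carry out, 2 more per later block, and one extra XOR delay for the top sum bit):

```python
# Hedged sketch: delay counts for k cascaded 4-bit CLA blocks, matching the
# table above. The function name is illustrative, not from the source.
def cascade_delays(k):
    c_out = 3 + 2 * (k - 1)   # carry out of the last (k-th) block
    s_top = c_out + 1         # top sum bit needs one more XOR delay
    return c_out, s_top

print(cascade_delays(4))  # 16-bit adder: c16 after 9 delays, s15 after 10
print(cascade_delays(8))  # 32-bit adder: c32 after 17 delays, s31 after 18
```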
  • 280. 2.6 – Carry Look-ahead Adder (Fast Adder) 16-bit Carry Lookahead Adder (CLA): combination of four 4-bit CLAs
  • 283. 2.6 – Carry Look-ahead Adder (Fast Adder) Design of 16-bit Carry Lookahead Adder
Carry / Sum | CLA using G & P functions | Cascaded CLA | Ripple Carry
C16         |             5             |       9      |      32
S15         |             8             |      10      |      31
  • 284. 2.6 – Carry Look-ahead Adder (Fast Adder) Virtual Lab Link:
  • 285. 2.6 – Carry Look-ahead Adder (Fast Adder) TRY YOURSELF 1. Derive the generate and propagate functions for a 4-bit adder 2. Derive the generate and propagate functions for a 16-bit adder
  • 286. 2.6 – Carry Look-ahead Adder (Fast Adder)
  • 288. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier The normal multiplication technique: the product of two n-bit numbers is a 2n-bit number. Unsigned multiplication can be viewed as the addition of shifted versions of the multiplicand.
  • 289. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Two types of multiplier:  Array multiplier  Sequential circuit multiplier (shift-and-add multiplier) Array multiplier:  Implemented using an array of full adders  Two-dimensional logic (n × n)  Each row produces a partial product (PP)  The PPs are added at each stage
  • 290. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Combinatorial array multiplier: the multiplicand (m3 m2 m1 m0) is ANDed with the multiplier bits (q3 q2 q1 q0) to form partial products PP0 to PP3, which are summed row by row; the multiplicand is shifted by displacing it through the array of adders, giving the product P7 P6 P5 P4 P3 P2 P1 P0.
  • 291. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Typical multiplication cell: a full adder (FA) whose inputs are the jth multiplicand bit ANDed with the ith multiplier bit, a bit of the incoming partial product PPi, and a carry in; its outputs are a bit of the outgoing partial product PP(i+1) and a carry out.
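The cell description above can be sketched as a small simulation: each row ANDs the multiplicand with one multiplier bit and a row of full adders folds that shifted partial product into the running total (illustrative Python with LSB-first bit lists; names are chosen here, not from the source):

```python
# Illustrative sketch of the combinatorial array multiplier: row i adds the
# multiplicand ANDed with multiplier bit q_i, shifted left by i positions,
# into the running partial product using a row of full adders.
def full_adder(x, y, cin):
    s = x ^ y ^ cin
    return s, (x & y) | (x & cin) | (y & cin)

def array_multiply(m, q):
    """m, q: n bits each, LSB first. Returns 2n product bits, LSB first."""
    n = len(m)
    pp = [0] * (2 * n)              # PP0 = 0
    for i, qi in enumerate(q):      # one adder row per multiplier bit
        carry = 0
        for j, mj in enumerate(m):  # cell (i, j): FA(pp bit, m_j AND q_i, c)
            pp[i + j], carry = full_adder(pp[i + j], mj & qi, carry)
        pp[i + n] = carry           # row's carry out becomes the next pp bit
    return pp

# 1101 x 1011 (13 x 11 = 143 = 10001111), bits listed LSB first
print(array_multiply([1, 0, 1, 1], [1, 1, 0, 1]))
```

This mirrors the n × n cell grid: the double loop visits one multiplication cell per (row, column) pair.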
  • 292. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier • Combinatorial array multipliers are: – Extremely inefficient – High in gate count for multiplying numbers of practical size, such as 32-bit or 64-bit numbers – Limited to one function, namely the unsigned integer product • Goal: improve gate efficiency
  • 293. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Sequential circuit multiplier:  Less hardware required  A single n-bit adder is used n times  The operation repeats over many clock cycles to complete the multiplication  An add and a shift operation occur in each cycle
  • 294. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Sequential multiplication (shift-and-add multiplication) • Recall the rule for generating partial products: – If the ith bit of the multiplier is 1, add the appropriately shifted multiplicand to the current partial product. – The multiplicand is shifted left when added to the partial product. • Note: adding a left-shifted multiplicand to an unshifted partial product is equivalent to adding an unshifted multiplicand to a right-shifted partial product.
  • 295. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Sequential Circuit Multiplier
  • 296. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Construction • An n-bit adder • Register M and flip-flop C • A MUX selects 0 or the multiplicand M, driven by the Add/Noadd control line from the control sequencer • Registers A and Q are shift registers; together they hold the partial product PPi • The partial product grows in length by one bit per cycle
  • 297. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier
  • 298. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Working • At the start, the multiplier is loaded into register Q, the multiplicand into register M, and C and A are cleared to 0. • Multiplier bit qi, appearing at the LSB position of Q, generates the Add/Noadd signal. • If qi = 0, the control sequencer generates the Noadd signal and the MUX selects 0. • If qi = 1, the control sequencer generates the Add signal and the MUX selects M. • At the end of each cycle, C, A, and Q are shifted right one bit position to allow for growth of the partial product (as the multiplier is shifted out of register Q). • After n cycles, the high-order half of the product is held in register A and the low-order half in register Q.
  • 299. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Sequential Circuit Multiplier
  • 300. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Example: M = 1101, Q = 1011 (13 × 11)
Step          C  A     Q
Initial       0  0000  1011
Add (A+M)     0  1101  1011  (1st cycle)
Shift right   0  0110  1101
Add (A+M)     1  0011  1101  (2nd cycle)
Shift right   0  1001  1110
No add (A+0)  0  1001  1110  (3rd cycle)
Shift right   0  0100  1111
Add (A+M)     1  0001  1111  (4th cycle)
Shift right   0  1000  1111
Product (A, Q) = 1000 1111
  • 301. 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Example: M = 1111, Q = 1010 (15 × 10)
Step          C  A     Q
Initial       0  0000  1010
No add (A+0)  0  0000  1010  (1st cycle)
Shift right   0  0000  0101
Add (A+M)     0  1111  0101  (2nd cycle)
Shift right   0  0111  1010
No add (A+0)  0  0111  1010  (3rd cycle)
Shift right   0  0011  1101
Add (A+M)     1  0010  1101  (4th cycle)
Shift right   0  1001  0110
Product (A, Q) = 1001 0110
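The two worked examples can be replayed with a short Python sketch of the C, A, Q register behaviour described above (illustrative code modelling the register transfers, not the control-sequencer hardware):

```python
# Sketch of the sequential shift-and-add multiplier: C is the carry flip-flop,
# A and Q together form the shift register holding the growing product.
def sequential_multiply(m, q, n=4):
    """Multiply two n-bit unsigned values using the C/A/Q register scheme."""
    C, A, Q = 0, 0, q
    for _ in range(n):
        if Q & 1:                          # q_i = 1: Add (A <- A + M)
            A += m
            C, A = A >> n, A & ((1 << n) - 1)
        # Shift C, A, Q right one bit position (Q uses A's old LSB)
        Q = ((A & 1) << (n - 1)) | (Q >> 1)
        A = (C << (n - 1)) | (A >> 1)
        C = 0
    return (A << n) | Q                    # high half in A, low half in Q

print(f"{sequential_multiply(0b1101, 0b1011):08b}")  # 13 x 11 -> 10001111
print(f"{sequential_multiply(0b1111, 0b1010):08b}")  # 15 x 10 -> 10010110
```

Tracing this function cycle by cycle reproduces the C, A, Q rows of the two tables above.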
  • 302. 19IT302 – COMPUTER ORGANIZATION AND ARCHITECTURE 16 2.7 – Multiplication of Unsigned Numbers - Shift and Add Multiplier Virtual Lab Link: