Information representation, Floating point representation (IEEE 754), computer arithmetic and its implementation; Fixed-point Arithmetic: Addition, Subtraction, Multiplication and Division; Memory Technology, static and dynamic memory, Random Access and Serial Access Memories, Cache memory and Memory Hierarchy, Address Mapping, Cache update schemes, Virtual memory and memory management unit.
Here are the answers to the questions:
1. Pipeline cycle time = Maximum delay of any stage + Latch delay
= 90 ns + 10 ns = 100 ns
2. Non-pipeline execution time for one task = Total delay of all stages
= 60 + 50 + 90 + 80 = 280 ns
3. Speed up ratio = Non-pipeline time/Pipeline time
= 280/100 = 2.8
4. Pipeline time for 1000 tasks = Pipeline cycle time x Number of tasks
= 100 ns x 1000 = 100,000 ns = 100 μs
5. Sequential time for 1000 tasks = Non-pipeline time per task x Number of tasks
= 280 ns x 1000 = 280,000 ns = 280 μs
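The arithmetic in the answers above can be checked with a short script. The stage delays and latch delay are the values given in the answers; the 1000-task pipeline time uses the same cycle-time-times-tasks approximation as answer 4 (an exact model would add the fill time of the first task).

```python
# Recomputing the worked pipeline answers; values taken from the text above.
stage_delays = [60, 50, 90, 80]  # ns, the four stage delays
latch_delay = 10                 # ns, inter-stage latch overhead

cycle_time = max(stage_delays) + latch_delay   # answer 1: 100 ns
non_pipelined_task = sum(stage_delays)         # answer 2: 280 ns
speedup = non_pipelined_task / cycle_time      # answer 3: 2.8

tasks = 1000
# Answer 4 approximates total pipeline time as cycle_time * tasks;
# the exact figure would be (stages + tasks - 1) * cycle_time.
pipelined_total = cycle_time * tasks           # 100,000 ns = 100 us
sequential_total = non_pipelined_task * tasks  # answer 5: 280,000 ns = 280 us
```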
Basic architecture and organization of computers, Von Neumann Model, Registers and storage, Register Transfer Language, Bus and Memory Transfer, Common Bus System, Machine instructions, functional units and execution of a program; instruction cycles, Instruction set architectures, instruction formats
I/O subsystems: Input/output devices such as Disk, CD-ROM, Printer, etc.; Interfacing with I/O devices, keyboard and display interfaces; Basic concepts: Bus Control, Read/Write operations, Programmed I/O, concept of handshaking, Polled and Interrupt driven I/O, DMA data transfer
Pipelining is a technique where the instruction execution process is divided into multiple stages that can operate in parallel. This allows subsequent instructions to begin processing before previous ones have finished. For example, with laundry, pipelining allows washing, drying, and folding different loads simultaneously to complete all the laundry faster. In processors, pipelining overlaps the stages of instruction fetch, decode, execute, and writeback to improve throughput. While pipelining improves performance, it can introduce hazards like structural, data, and control hazards that must be addressed.
This presentation discusses different types of microoperations that can be performed on data stored in registers. It describes arithmetic microoperations like addition, subtraction, and increment/decrement. Logic microoperations perform bit-wise operations on registers like selective set, clear, complement, and masking. Shift microoperations serially transfer data in a register left or right through logical, circular, and arithmetic shifts. Arithmetic shifts preserve a number's sign: an arithmetic left shift multiplies the number by 2 and an arithmetic right shift divides it by 2.
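The shift micro-operations above can be sketched on an 8-bit register value; the width, the mask, and the function names are illustrative assumptions.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1  # 0xFF: keep results within the 8-bit register

def logical_shift_left(x):
    # Shift in a 0 on the right; the MSB is discarded.
    return (x << 1) & MASK

def logical_shift_right(x):
    # Shift in a 0 on the left; the LSB is discarded.
    return x >> 1

def circular_shift_left(x):
    # The bit shifted out of the MSB re-enters at the LSB.
    return ((x << 1) | (x >> (WIDTH - 1))) & MASK

def arithmetic_shift_right(x):
    # Keep the sign bit (bit 7) while shifting right: divides by 2
    # for two's-complement values.
    sign = x & 0x80
    return sign | (x >> 1)
```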
1) Arithmetic pipelines divide arithmetic operations like multiplication and floating point addition into multiple stages to perform the operations concurrently and increase computational speed.
2) Vector and array processors use multiple processing elements that can perform the same operation on multiple data items simultaneously, further increasing speed.
3) Pipelining helps throughput by allowing new tasks to begin before previous ones finish, but does not reduce the latency of individual tasks. The pipeline rate is limited by its slowest stage.
Booth's multiplication algorithm multiplies two signed binary numbers in two's complement notation. It was invented by Andrew Donald Booth in 1950. The algorithm inspects two bits of the multiplier at a time, and either adds, subtracts, or leaves the partial product unchanged depending on whether the bit pair is 01, 10, or both bits the same. It shifts the partial product and multiplier arithmetically to the right after each step to inspect the next bits.
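The procedure just described can be sketched in Python. The register names (A for the accumulator, Q for the multiplier, q_minus1 for the extra bit) and the configurable width are illustrative conventions, not the original document's notation.

```python
def booth_multiply(m, q, bits=8):
    """Multiply two signed integers using Booth's algorithm.

    Each step inspects the pair (Q0, Q-1): on 10 it subtracts the
    multiplicand from A, on 01 it adds it, otherwise it does nothing;
    then the combined A,Q,Q-1 register is arithmetically shifted right.
    """
    mask = (1 << bits) - 1
    A, Q, q_minus1 = 0, q & mask, 0
    M = m & mask
    for _ in range(bits):
        pair = (Q & 1, q_minus1)
        if pair == (1, 0):          # bits 10: subtract multiplicand
            A = (A - M) & mask
        elif pair == (0, 1):        # bits 01: add multiplicand
            A = (A + M) & mask
        # Arithmetic right shift of A,Q,q-1 as one long register.
        q_minus1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (bits - 1))) & mask
        A = (A >> 1) | (A & (1 << (bits - 1)))  # sign bit is preserved
    result = (A << bits) | Q
    if result & (1 << (2 * bits - 1)):          # reinterpret as signed
        result -= 1 << (2 * bits)
    return result
```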
Data encoding converts data into a signal form for transmission. There are different encoding methods for digital-to-digital, digital-to-analog, analog-to-analog, and analog-to-digital conversion. Common digital encoding techniques include unipolar, bipolar, and polar encoding, which represent binary data using single or multiple voltage levels. Non-return-to-zero and return-to-zero are examples that represent bits through the signal remaining at a level or returning to zero between bits. Manchester encoding and differential Manchester encoding are biphase techniques that provide clocking through signal transitions.
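Manchester encoding as described can be sketched in a few lines: every bit becomes two half-bit signal levels with a guaranteed mid-bit transition that carries the clock. The level convention chosen here (0 as high-then-low, 1 as low-then-high) is one common assumption; some references use the opposite mapping.

```python
def manchester(bits):
    """Encode a bit sequence as Manchester half-bit signal levels.

    Each input bit maps to a pair of levels with a transition in the
    middle of the bit period, so the receiver can recover the clock.
    Convention assumed: 0 -> (high, low), 1 -> (low, high).
    """
    table = {0: (1, 0), 1: (0, 1)}
    out = []
    for b in bits:
        out.extend(table[b])
    return out
```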
This document discusses parallel processing and pipelining. It describes different levels and types of parallel processing including job level, task level, inter-instruction level, and intra-instruction level parallelism. It also covers Flynn's classification of parallel computers as SISD, SIMD, MISD, and MIMD based on the number of instruction and data streams. Pipelining is defined as decomposing a process into sub-operations that execute concurrently. The key benefits of pipelining are that multiple computations can progress simultaneously through different pipeline stages.
Booth's algorithm is a method for multiplying two signed binary integers in two's complement representation more efficiently than the straightforward shift-and-add approach, using fewer additions and subtractions. The algorithm loads the multiplicand and multiplier into registers, initializes a third register to 0, and performs arithmetic right shifts together with additions or subtractions of the multiplicand, chosen according to pairs of multiplier bits. This process builds up the product one bit at a time across the accumulator and multiplier registers.
Micro operations
Fetch, Indirect, Interrupt, Execute, Instruction Cycle
Control Unit
Hardwired Control Unit
Microprogrammed Control Unit
Wilkes's Microprogrammed Control Unit
This document discusses computer arithmetic and floating point representation. It begins with an introduction to computer arithmetic and covers topics like addition, subtraction, multiplication, division and their algorithms. It then discusses floating point representation which uses scientific notation to represent real numbers. Key aspects covered include single and double precision formats, normalized and denormalized numbers, overflow and underflow, and biased exponent representation. Examples are provided to illustrate floating point addition and multiplication. The document also discusses floating point instructions in MIPS and the need for accurate arithmetic in floating point operations.
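The single-precision fields described above (sign, biased exponent, fraction) can be inspected directly in Python via the `struct` module; the bias of 127 and the 1/8/23 field split are the IEEE 754 binary32 layout.

```python
import struct

def float_bits(x):
    """Decompose an IEEE 754 single-precision value into its fields.

    Returns (sign, biased_exponent, fraction). The exponent bias for
    binary32 is 127, so 1.0 has a stored exponent of 127.
    """
    (raw,) = struct.unpack(">I", struct.pack(">f", x))  # reinterpret bits
    sign = raw >> 31            # 1 bit
    exponent = (raw >> 23) & 0xFF   # 8 bits, biased by 127
    fraction = raw & 0x7FFFFF   # 23 bits, implicit leading 1 if normalized
    return sign, exponent, fraction
```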
The document provides an introduction to the 8085 microprocessor. It discusses the basic components of a microcomputer including the CPU, memory (RAM and ROM), and I/O unit. It then describes the internal structure of the 8085 CPU including its registers, flag bits, program counter, and stack pointer. The document outlines the 8085 bus structure including its address bus, data bus, and control signals. It provides timing diagrams for opcode fetch, memory read, and memory write operations. Finally, it discusses addressing modes, instruction size, and includes a table of the 8085 instruction set.
Instruction codes and computer registers (Sanjeev Patel)
The document discusses instruction codes and computer registers. Instruction codes are made up of an opcode and address that tell the computer what operation to perform. Computer registers store important data and instructions, including the program counter, address register, instruction register, temporary register, data register, accumulator, input register, and output register. These registers perform functions like holding memory operands, instructions, temporary data, addresses, and input/output characters.
This document discusses different addressing modes used in computer architecture. It defines 10 addressing modes: immediate, register, register indirect, direct, indirect, implied, relative, indexed, base register, and autoincrement/autodecrement. Each addressing mode is described in terms of how the operand is specified and accessed from memory or registers. Examples are provided to illustrate each addressing mode.
The document provides an overview of pipelining in computer processors. It discusses how pipelining works by dividing processor operations like fetch, decode, execute, memory, and write-back into discrete stages that can overlap, improving throughput. Key points made include:
- Pipelining allows multiple instructions to be in different stages of completion at the same time, improving instruction throughput.
- The document uses an example of a sequential laundry process versus a pipelined laundry process to illustrate how pipelining improves efficiency.
- It describes the five main stages of a RISC instruction set pipeline - fetch, decode, execute, memory, and write-back - and outlines the work done and the data passed between each stage.
Digital logic design is the basis of electronic systems like computers and cell phones. It uses binary numbers (zeros and ones) to represent information and process input/output operations. Digital logic employs logic gates that perform functions like AND, OR, and NOT to translate binary input signals into specific outputs. Career opportunities include developing device infrastructures using components like information storage, signal transmission, and information processing. Engineers work to improve performance, decrease energy usage, and debug issues.
The Diffie-Hellman algorithm was developed by Whitfield Diffie and Martin Hellman in 1976.
This algorithm was devised not to encrypt data but to generate the same private cryptographic key at both ends, so that there is no need to transfer the key from one communication end to the other.
The Diffie-Hellman algorithm allows two parties to derive a shared secret key over a communication channel that is not protected from interception but is protected from modification.
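The exchange can be sketched as follows. The public parameters p and g and the private exponents are deliberately tiny demonstration values; a real deployment uses large, standardized prime groups.

```python
# Toy Diffie-Hellman key agreement (demo-sized numbers only).
p, g = 23, 5                 # public: prime modulus and generator
a, b = 6, 15                 # private exponents chosen by each party

A = pow(g, a, p)             # party 1 sends A over the open channel
B = pow(g, b, p)             # party 2 sends B over the open channel

shared_1 = pow(B, a, p)      # party 1 computes the shared secret
shared_2 = pow(A, b, p)      # party 2 arrives at the same value
# Both equal g^(a*b) mod p, which an eavesdropper seeing only A and B
# cannot feasibly compute for large p.
```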
This covers a compiler design topic: LR and SLR parsing algorithms and LR grammars, canonical collections of items, shift-reduce conflicts in LR parsing, and the classification of bottom-up parsing.
The microoperations that are meant for the implementation of various logical circuits are explained here. Check through all the slides and comment or ask if you need help understanding more.
This document discusses the organization of registers and stacks in a CPU. It describes how a CPU contains a register set that is used to store intermediate values and pointers to improve efficiency over memory access. A general register organization is shown using multiplexers and decoders to select registers for arithmetic operations. The document also introduces the concept of a stack, which uses a last-in, first-out data structure and stack pointer register to efficiently insert and delete items for functions.
Register transfer language is used to describe micro-operation transfers between registers. It represents the sequence of micro-operations performed on binary information stored in registers and the control that initiates the sequences. A register is a group of flip-flops that store binary information. Information can be transferred between registers using replacement operators and control functions. Common bus systems using multiplexers or three-state buffers allow efficient information transfer between multiple registers by selecting one register at a time to connect to the shared bus lines. Memory transfers are represented by specifying the memory word selected by the address in a register and the data register involved in the transfer.
This document provides an outline for a course on 8086 Assembly Language Programming. It begins with an introduction to machine language and assembly language. It then covers topics like the organization of the 8086 processor, assembly language syntax, data representation, variables, instruction types, memory segmentation, program structure, addressing modes, and input/output. The document is intended to guide students through the key concepts needed to program in 8086 assembly language.
This document discusses hashing techniques for indexing and retrieving elements in a data structure. It begins by defining hashing and its components like hash functions, collisions, and collision handling. It then describes two common collision handling techniques - separate chaining and open addressing. Separate chaining uses linked lists to handle collisions while open addressing resolves collisions by probing to find alternate empty slots using techniques like linear probing and quadratic probing. The document provides examples and explanations of how these hashing techniques work.
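A minimal separate-chaining table matching the description above: colliding keys share a bucket and are chained in a list. The bucket count, tuple storage, and use of Python's built-in `hash` are illustrative choices.

```python
class ChainedHashTable:
    """Hash table resolving collisions by separate chaining."""

    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def _index(self, key):
        # Map the key's hash to a bucket index.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # same key: update in place
                return
        bucket.append((key, value))        # collision: chain onto the list

    def get(self, key, default=None):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return default
```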
Register transfer language & its micro operations (Lakshya Sharma)
The document discusses register transfer language and micro-operations in digital systems. It describes (1) how register transfer language can be used to describe the sequence of micro-operations involved in any computer function, (2) the four main types of micro-operations - register transfer, arithmetic, logic, and shift micro-operations, giving examples of each, and (3) how register transfers and bus transfers are represented in register transfer language.
This document discusses the complexity of algorithms and the tradeoff between algorithm cost and time. It defines algorithm complexity as a function of input size that measures the time and space used by an algorithm. Different complexity classes are described such as polynomial, sub-linear, and exponential time. Examples are given to find the complexity of bubble sort and linear search algorithms. The concept of space-time tradeoffs is introduced, where using more space can reduce computation time. Genetic algorithms are proposed to efficiently solve large-scale construction time-cost tradeoff problems.
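The bubble sort example lends itself to a quick demonstration: counting comparisons makes the quadratic growth concrete, since n elements require n(n-1)/2 comparisons in this form. The counter and return shape are illustrative choices.

```python
def bubble_sort(items):
    """Bubble sort instrumented with a comparison counter.

    Returns (sorted_list, comparisons); comparisons grows as n(n-1)/2,
    i.e. O(n^2) in the input size n.
    """
    a = list(items)
    comparisons = 0
    for i in range(len(a) - 1):
        # After pass i, the last i+1 positions hold their final values.
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons
```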
This document describes a ring counter circuit using D flip-flops. A ring counter consists of flip-flops connected in a loop such that a single '1' circulates around the flip-flops as long as a clock signal is applied. The document explains that in a 4-bit ring counter, the output shifts from 1000 to 0100 to 0010 to 0001 and then back to 1000 with each clock pulse as the '1' circulates through the flip-flops. It also notes that a ring counter can enter an invalid state due to noise but should be designed to self-correct back to the valid counting states.
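The 4-bit sequence described (1000 to 0100 to 0010 to 0001 and back to 1000) can be simulated directly; modeling the flip-flop outputs as bits of an integer is an illustrative choice.

```python
def ring_counter(stages=4, pulses=5, start=0b1000):
    """Simulate a ring counter: one set bit circulates right per clock
    pulse, and the bit shifted out of the LSB re-enters at the MSB."""
    value, history = start, [start]
    for _ in range(pulses):
        value = (value >> 1) | ((value & 1) << (stages - 1))
        history.append(value)
    return history
```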
Module 3 of computer organization and architecture (kumarloresh143)
This document discusses algorithms for fixed point number multiplication and division in computer arithmetic. It provides details on:
1) Multiplication is done through successive shift and add operations of the multiplicand based on the multiplier bits. Hardware uses registers to store operands and results, and performs shifting and addition to calculate the product.
2) Division algorithms perform successive compare, shift, and subtract operations. Hardware uses registers to store the divisor and doubled-length dividend, and performs shifting, subtraction, and incrementing of the quotient register.
3) Examples are given of multiplying and dividing two fixed point binary numbers using the algorithms and hardware implementation. Overflow conditions are also discussed for the division algorithm.
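The compare/shift/subtract loop in point 2 can be sketched as a restoring-division routine. The register names (A for the partial remainder, Q for the quotient, M for the divisor) and the unsigned 8-bit width are illustrative assumptions, not the document's own notation.

```python
def restoring_divide(dividend, divisor, bits=8):
    """Restoring division for unsigned fixed-point operands.

    A,Q together model the doubled-length dividend register: each step
    shifts A,Q left, trial-subtracts the divisor, and either keeps the
    result (quotient bit 1) or restores it (quotient bit 0).
    """
    assert divisor != 0, "division by zero"
    A, Q, M = 0, dividend, divisor
    for _ in range(bits):
        # Shift A,Q left one bit, moving the next dividend bit into A.
        A = (A << 1) | ((Q >> (bits - 1)) & 1)
        Q = (Q << 1) & ((1 << bits) - 1)
        A -= M                  # trial subtraction (the "compare")
        if A < 0:
            A += M              # restore: this quotient bit is 0
        else:
            Q |= 1              # keep: this quotient bit is 1
    return Q, A                 # (quotient, remainder)
```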
Computer arithmetics (computer organisation & arithmetics) ppt (SuryaKumarSahani)
This presentation explains various computer arithmetic operations, including binary addition, subtraction, multiplication, and division, as well as floating point addition, subtraction, multiplication, and division.
Computer arithmetic deals with efficient implementation of numeric operations like addition, subtraction, multiplication, and division in hardware. It includes representing numbers, designing circuits for operations, and balancing accuracy, speed, power usage, and other factors. Applications include general purpose processors, graphics cards, signal processing systems, and more. Common algorithms discussed are signed magnitude representation and operations, Booth multiplication, array multiplication, floating point representation and operations.
The document discusses addition, subtraction, multiplication, and division algorithms for signed binary numbers. It describes the process for each operation step-by-step including comparing sign bits, performing the operation, and determining the final result. Hardware implementations for addition/subtraction and multiplication are also covered, showing how the algorithms can be physically realized using components like registers, adders, and shift registers.
The document discusses the arithmetic logic unit (ALU) and its operations. It describes:
1) The ALU performs basic arithmetic operations like addition, subtraction, incrementing and decrementing on numeric data stored in registers.
2) Binary adders and adder-subtractors are used to implement addition and subtraction operations. They are built by connecting full adders in series.
3) An arithmetic circuit can perform the basic arithmetic operations as well as logic operations by setting control signals appropriately.
4) A status register is used to store flags indicating the results of arithmetic and logic operations like carry, overflow, zero etc. This allows instructions to conditionally execute based on previous results.
SIGNED 2’S COMPLEMENT SYSTEM: The standard system used to represent signed binary integers. A negative number is obtained by taking the 2's complement of its positive form. This allows for an easy way to perform addition and subtraction on signed binaries.
CARRY LOOKAHEAD ADDER: A fast adder circuit that calculates carry bits in advance to reduce wait time, improving speed over a ripple carry adder. It determines carry signals through a carry generate and propagate network to independently calculate carry and sum bits.
SHIFT-AND-ADD MULTIPLICATION: Implements binary multiplication by shifting and adding the multiplicand to itself the number of times indicated by
Computer Organisation and Architecture :Module M-1.pdfSushantRaj25
The document discusses digital logic design and describes half adders, full adders, parity generators, and parity checkers. It provides details on their implementation including truth tables and logic diagrams. Specifically, it explains that a half adder can add two single bits but has limitations, while a full adder can add three bits including a carry bit. It also describes how even and odd parity generators work and provides truth tables for 3-bit and 4-bit even parity generators. Finally, it discusses how even parity checkers can detect errors by checking if the total number of 1s is even.
Digital electronics & microprocessor Batu- s y computer engineering- arvind p...ARVIND PANDE
Unit-1 Digital signals, digital circuits, AND, OR, NOT, NAND, NOR and Exclusive-OR operations, Boolean algebra, examples of IC gates,
Number Systems: binary, signed binary, octal hexadecimal number, binary arithmetic, one’s and two’s complements arithmetic, codes, error detecting and correcting codes.
Integer addition and subtraction can be performed using ripple carry adders and carry lookahead adders. Ripple carry adders consist of cascaded full adders where the carry output of each stage is input to the next. This results in a delay until the final output is reached. Carry lookahead adders reduce this propagation delay by computing carry bits in parallel rather than series. Shift-and-add multiplication works by shifting and adding the multiplicand to itself based on the bits of the multiplier, similar to the hand multiplication algorithm.
This document discusses basic combinational logic functions including adders, comparators, decoders, and encoders. It begins by explaining half adders and full adders, including their truth tables and logic expressions. It then covers parallel binary adders used to add multi-bit numbers. Comparators are introduced for comparing the magnitude of two binary quantities. Decoders and encoders are also discussed, with decoders detecting a specified input code and encoders performing the reverse by converting an input to a coded output. Examples and exercises are provided to illustrate the concepts.
CS304PC:Computer Organization and Architecture session 22 floating point arit...Asst.prof M.Gokilavani
This document summarizes a session on floating point arithmetic operations. It discusses how floating point numbers are represented using the IEEE standard format, with a sign bit, exponent bits, and mantissa bits. It describes the basic operations of addition, subtraction, multiplication, and division of floating point numbers. These operations involve aligning the mantissa, adding or subtracting exponents, and normalizing the result. The document also discusses normalized representation, underflow, overflow, and biased exponents in floating point number representation and arithmetic.
Bitwise operations allow manipulating individual bits within binary representations of data. Common bitwise operations include AND, OR, NOT, and shifting bits left or right. Bitmasks are used to isolate specific bits for setting, clearing, or reading their values. Bitwise logic is applied extensively in areas like graphics, embedded systems, data structures, and networking where individual bits need to be manipulated.
Digital computers represent all data, including numbers, using the binary number system. There are two main methods for representing real numbers: fixed-point notation reserves a fixed number of bits for the integer and fractional parts, while floating-point notation reserves bits for the significand and exponent to allow more flexibility in the number of fractional digits. Fixed-point has better performance but a more limited range, while floating-point can represent a wider range of values but requires more complex operations. Common number representations like two's complement are used to support arithmetic operations.
Computer Organization and Architecture OverviewDhaval Bagal
This document discusses computer architecture and organization. It defines architecture as the attributes visible to the programmer, like instruction set and data representation, while organization refers to the operational units and interconnections that implement the architecture. Examples of architectural design issues include whether there is a multiply instruction, while organizational issues could be whether multiplication is done in hardware or software. The architecture may not change often but the organization does as technology advances to improve performance and speed.
Digital Arithmetic: Operations and Circuits discusses binary addition, subtraction, multiplication, and division. It also covers different systems for representing signed numbers, including sign-magnitude, 1's complement, and 2's complement. Key topics include performing arithmetic using the 2's complement system, detecting overflow, and representing decimal values in binary coded decimal. The document provides examples and review questions to illustrate binary arithmetic concepts.
This document discusses floating point arithmetic operations including:
- The components of a floating point number including the mantissa and exponent.
- Normalization of floating point numbers to have a leading nonzero digit in the mantissa.
- Common floating point operations like addition, subtraction, multiplication, and division and how they are performed.
- The IEEE 754 standard for representing floating point numbers.
- How floating point arithmetic is implemented in hardware including registers and adders used to process mantissas and exponents.
UNIT-3 Complete PPT.pptx
1. MEDICAPS UNIVERSITY
FACULTY OF ENGINEERING
Department of Information Technology
UNIT - 3
Course Code: IT3CO20 | Course Name: Computer System Architecture | Hours Per Week (L-T-P): 3-1-2 | Total Credits: 5
2. Syllabus
Information Representation, Floating Point Representation,
Computer Arithmetic and Their Implementation;
Fixed-Point Arithmetic: Addition, Subtraction, Multiplication
and Division;
Memory: Static and Dynamic Memory, Random Access and
Serial Access Memories, Cache Memory and Memory Hierarchy,
Address Mapping, Cache Updation Schemes,
Virtual Memory and Memory Management Unit.
3. Information Representation
A computer does not understand human language. Any data fed to a
computer, viz., letters, symbols, pictures, audio, video, etc., must
first be converted to machine language.
Number System
We are introduced to the concept of numbers from a very early age. To a
computer, everything is a number, i.e., alphabets, pictures, sounds, etc.,
are numbers. Number systems are categorized into four types:
Binary number system uses only two digits, 0 and 1.
Octal number system uses 8 digits (0-7).
Decimal number system uses 10 digits (0-9).
Hexadecimal number system uses 16 digits (0-9 and A-F).
4. Information Representation
Bits and Bytes
Bits − A bit is the smallest unit of data that a computer
can recognize or use. A computer usually uses bits in groups.
Bytes − A group of eight bits is called a byte. Half a byte (four bits)
is called a nibble.
6. Information Representation
The following table shows the conversion between storage units:
8 Bits = 1 Byte
1024 Bytes = 1 Kilobyte
1024 Kilobytes = 1 Megabyte
1024 Megabytes = 1 Gigabyte
1024 Gigabytes = 1 Terabyte
1024 Terabytes = 1 Petabyte
1024 Petabytes = 1 Exabyte
1024 Exabytes = 1 Zettabyte
1024 Zettabytes = 1 Yottabyte
1024 Yottabytes = 1 Brontobyte
1024 Brontobytes = 1 Geopbyte
7. Information Representation
Text Code
Text code is a format commonly used to represent alphabets,
punctuation marks, and other symbols. The four most popular text code
systems are:
EBCDIC
ASCII
Extended ASCII
Unicode
8. Sign-Magnitude
The sign-magnitude binary format is the simplest conceptual
format.
To represent a number in sign-magnitude, we use the
leftmost bit for the sign, where 0 means positive and 1 means
negative, and the remaining bits for the magnitude (absolute value).
An 8-bit sign-magnitude number is laid out as follows:
Bit 7: Sign | Bits 6-0: Magnitude
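As a quick sketch of this layout, the helpers below (hypothetical names, not from the slides) pack and unpack an 8-bit sign-magnitude word with bit 7 as the sign and bits 6-0 as the magnitude:

```python
def to_sign_magnitude(value, bits=8):
    """Encode a signed integer in sign-magnitude form.

    Bit (bits-1) is the sign (0 = positive, 1 = negative);
    the remaining bits hold the absolute value.
    """
    magnitude = abs(value)
    assert magnitude < (1 << (bits - 1)), "magnitude does not fit"
    sign = 1 if value < 0 else 0
    return (sign << (bits - 1)) | magnitude

def from_sign_magnitude(word, bits=8):
    """Decode a sign-magnitude word back to a Python int."""
    sign = (word >> (bits - 1)) & 1
    magnitude = word & ((1 << (bits - 1)) - 1)
    return -magnitude if sign else magnitude

print(format(to_sign_magnitude(-5), "08b"))  # 10000101
print(from_sign_magnitude(0b10000101))       # -5
```

Note that this format has two zeros (00000000 and 10000000), which is why the algorithms below take care to produce +0 rather than -0.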
9. Fixed Point Representation
Addition and Subtraction with Signed-Magnitude Data
• We designate the magnitudes of the two numbers by A and B.
• There are eight different conditions when signed numbers are
added and subtracted.
• The algorithms for addition and subtraction are derived from
the table of these conditions.
• When two equal numbers are subtracted, the result should be
+0, not -0.
10. Addition and Subtraction Algorithm
• When the signs of A and B are identical, add the two magnitudes
and attach the sign of A to the result.
• When the signs of A and B are different, compare the
magnitudes and subtract the smaller from the larger:
• If A > B, the sign of the result is the same as the sign of A.
• If A < B, the sign of the result is the complement of the sign of A.
• If A = B, subtract B from A and make the sign of the result positive.
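The rules above can be sketched directly in Python. This is an illustrative helper (the names sm_add/sm_sub and the convention sign 0 = positive, 1 = negative are assumptions), not the hardware algorithm itself:

```python
def sm_add(a_sign, a_mag, b_sign, b_mag):
    """Add two signed-magnitude numbers following the stated rules.

    Signs are 0 (positive) or 1 (negative); magnitudes are non-negative.
    Returns (sign, magnitude). Register widths and overflow are ignored.
    """
    if a_sign == b_sign:
        # Identical signs: add magnitudes, attach the sign of A.
        return a_sign, a_mag + b_mag
    if a_mag > b_mag:
        return a_sign, a_mag - b_mag        # result takes the sign of A
    if a_mag < b_mag:
        return 1 - a_sign, b_mag - a_mag    # result takes the complement of A's sign
    return 0, 0                             # A == B: result is +0, never -0

def sm_sub(a_sign, a_mag, b_sign, b_mag):
    """Subtraction is addition with the sign of B complemented."""
    return sm_add(a_sign, a_mag, 1 - b_sign, b_mag)

print(sm_sub(0, 8, 0, 7))  # (0, 1)   i.e. 8 - 7 = +1
print(sm_sub(0, 6, 0, 7))  # (1, 1)   i.e. 6 - 7 = -1
```

These two calls correspond to cases A and B of assignment question Q.4 below.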
12. Hardware Implementation
• Let A and B be two registers that hold the magnitudes of the numbers.
• As and Bs are two flip-flops that hold the corresponding signs.
• AVF is the add-overflow flip-flop; it holds the overflow bit when A and B are added.
• The output carry is transferred to flip-flop E.
• The addition of A and B is done through a parallel adder.
• The sum (S) output of the adder is applied to the input of the A register.
13. Hardware Implementation
• The complementer provides either the output of B or its complement.
• The complementer consists of exclusive-OR gates.
• When mode control M = 0, the circuit performs addition: the input carry is 0 and SUM = A + B.
• When mode control M = 1, the circuit performs subtraction: the input carry is 1 and SUM = A + B' + 1.
• This equals A plus the 2's complement of B, i.e., SUM = A - B.
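The adder-subtractor behavior can be simulated in a few lines. Here each bit of B passes through an XOR gate driven by the mode control M, and M also supplies the input carry (a sketch; the function name and 8-bit width are assumptions):

```python
def add_sub(a, b, m, bits=8):
    """Simulate the adder-subtractor with mode control M.

    M = 0: SUM = A + B.
    M = 1: SUM = A + B' + 1 = A - B (2's complement subtraction).
    Returns (carry_out, sum) truncated to the register width.
    """
    mask = (1 << bits) - 1
    b_xored = (b ^ (mask if m else 0)) & mask  # XOR each bit of B with M
    total = a + b_xored + m                    # M is also the input carry
    return (total >> bits) & 1, total & mask

print(add_sub(9, 5, 0))  # (0, 14)  addition: 9 + 5
print(add_sub(9, 5, 1))  # (1, 4)   subtraction: 9 - 5, carry out = 1 means A >= B
```

The carry-out of 1 on subtraction signals that no borrow occurred, which is exactly the E = 1 test used in the hardware algorithm below.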
15. Hardware Algorithm
• As and Bs are two flip-flops that hold the corresponding signs.
• As and Bs are compared by an exclusive-OR gate.
• If the output of the gate is 0, the signs are identical.
• If the output of the gate is 1, the signs are different.
• 0 indicates a positive sign.
• 1 indicates a negative sign.
16. Hardware Algorithm
• The EA register combines E (carry) and A (magnitude register).
• The value of E is transferred to AVF.
• No overflow can occur when the numbers are subtracted, and hence
AVF is cleared to zero.
• E = 1 indicates A >= B: the number in A is the correct result.
• E = 0 indicates A < B: A holds the 2's complement of the result, so A
is replaced by its 2's complement (A = A' + 1) and the sign As is complemented.
• The final result is found in register A with sign As.
23. Shift Micro-Operations
• Shift micro-operations are micro-operations used for serial
transfer of information. They are also used in conjunction with
arithmetic, logic, and other data-processing micro-operations.
• There are three types of shift micro-operations:
• 1. Logical
• 2. Arithmetic
• 3. Circular
24. Shift Micro-Operations
• 1. Logical:
• A logical shift transfers 0 through the serial input. We use the
symbols shl for logical shift-left and shr for logical shift-right.
• Logical Shift Left -
• Each bit moves one position to the left. The empty least
significant bit (LSB) is filled with zero (the serial input), and the
most significant bit (MSB) is discarded.
26. Shift Micro-Operations
• Logical Shift Right -
• Each bit moves one position to the right; the least significant
bit (LSB) is discarded and the empty MSB is filled with zero.
27. Shift Micro-Operations
• 2. Arithmetic:
• An arithmetic shift moves a signed binary number to the left or
to the right. An arithmetic shift-left multiplies a signed binary
number by 2, and an arithmetic shift-right divides it by 2.
• Arithmetic Shift Left -
• Each bit moves one position to the left. The empty least
significant bit (LSB) is filled with zero and the most significant
bit (MSB) is discarded, the same as a logical shift left.
29. Shift Micro-Operations
• Arithmetic Shift Right -
• Each bit moves one position to the right; the least significant
bit is discarded and the empty MSB is filled with the value of the
previous MSB, so the sign bit is preserved.
30. Shift Micro-Operations
• 3. Circular:
• A circular shift circulates the bits of the register around both
ends without any loss of information.
• Left Circular Shift -
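The three shift types can be demonstrated on an 8-bit register (a sketch; the short function names mirror the shl/shr/ashr mnemonics used above, and cil for circular-left is an assumption):

```python
MASK8 = 0xFF  # 8-bit register

def shl(x):   # logical shift left: 0 enters the LSB, MSB is discarded
    return (x << 1) & MASK8

def shr(x):   # logical shift right: 0 enters the MSB, LSB is discarded
    return x >> 1

def ashr(x):  # arithmetic shift right: the MSB (sign bit) is replicated
    return (x >> 1) | (x & 0x80)

def cil(x):   # circular shift left: the MSB wraps around into the LSB
    return ((x << 1) | (x >> 7)) & MASK8

print(format(shl(0b10110011), "08b"))   # 01100110
print(format(ashr(0b10110011), "08b"))  # 11011001
print(format(cil(0b10110011), "08b"))   # 01100111
```

Note how ashr keeps the leading 1 (the sign of a negative number), while shr replaces it with 0, and cil loses no bits at all.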
32. Multiplication Algorithm
•The process consists of looking at successive bits of the multiplier,
least significant bit first.
•If the multiplier bit is a 1, the multiplicand is copied down;
otherwise, zeros are copied down.
•The numbers copied down in successive lines are shifted one
position to the left from the previous number.
•Finally, the numbers are added and their sum forms the product.
•The sign of the product is determined from the signs of the
multiplicand and multiplier.
33. Multiplication Algorithm
•If they are alike, the sign of the product is positive.
•If they are unlike, the sign of the product is negative.
34. Hardware Implementation
• The multiplicand is stored in register B and the multiplier in Q.
• As, Bs, and Qs are flip-flops that hold the corresponding signs.
• Qn is the rightmost bit of the multiplier.
• The output carry is transferred to flip-flop E (initial value of E = 0).
• The A register is initialized to 0. As denotes the sign of the partial product.
• The sum of A and B forms a partial product, which is transferred
to the EA register.
35. Hardware Implementation
• The multiplier is stored in the Q register and its sign in Qs.
• The sequence counter SC is initially set to a number equal to the
number of bits in the multiplier.
• The counter is decremented by 1 after forming each partial product.
• When the content of the counter reaches zero, the product is
formed and the process stops.
• The EA register holds the partial product, with the carry from each addition shifted into E.
• AQ contains the final product, with A holding the MSBs and Q holding the LSBs.
37. Hardware Algorithm for Multiplication
• The multiplicand is in B and the multiplier in Q.
• Their corresponding signs are in Bs and Qs, respectively.
• The signs are compared, and both As and Qs are set to
correspond to the sign of the product, since a double-length
product will be stored in registers A and Q.
• One bit of the word is occupied by the sign, and the
magnitude consists of n - 1 bits.
• After the initialization, the low-order bit of the multiplier in Qn is
tested.
38. Hardware Algorithm for Multiplication
• If it is a 1, the multiplicand in B is added to the present partial
product in A. If it is a 0, nothing is done.
• Register EAQ is then shifted once to the right to form the new
partial product.
• The sequence counter is decremented by 1 and its new value is
checked.
• If it is not equal to zero, the process is repeated and a new
partial product is formed.
• The process stops when SC = 0.
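The multiply loop above can be simulated register by register. This sketch works on unsigned magnitudes only (signs omitted, as in assignment Q.2); the function name and return convention are assumptions:

```python
def multiply(multiplicand, multiplier, n):
    """Simulate the shift-and-add hardware multiply for n-bit magnitudes.

    B holds the multiplicand, Q the multiplier; A starts at 0 and
    E catches the carry of each addition. EAQ is shifted right each
    cycle and SC counts down from n. Returns the double-length
    product split as (A, Q): A holds the MSBs, Q the LSBs.
    """
    B, Q, A, E = multiplicand, multiplier, 0, 0
    mask = (1 << n) - 1
    for _ in range(n):                 # SC = n, decremented each pass
        if Q & 1:                      # Qn = 1: add the multiplicand
            total = A + B
            E, A = (total >> n) & 1, total & mask
        # shift EAQ one position to the right
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = (A >> 1) | (E << (n - 1))
        E = 0
    return A, Q

A, Q = multiply(0b11111, 0b10101, 5)   # the operands of assignment Q.2
print((A << 5) | Q)                    # 651, i.e. 31 x 21
```

Printing A, Q, and SC inside the loop reproduces the register trace asked for in Q.2.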
44. Booth’s Multiplication Algorithm
• In Booth’s algorithm, the sign bit is not separated from the rest
of the register.
• The QR register contains the multiplier.
• Qn represents the LSB of the multiplier in the QR register.
• Qn+1 is an extra flip-flop appended to QR to facilitate a double-
bit inspection of the multiplier.
• To show this difference, we rename registers A, B, and Q as AC,
BR, and QR, respectively.
45. Booth’s Multiplication Algorithm
• AC and Qn+1 are initially cleared to 0.
• The sequence counter SC is set to a number n equal to the
number of bits in the multiplier.
• The two bits of the multiplier in Qn and Qn+1 are inspected.
• If the two bits are equal to 10, the first 1 in a string of 1's has
been encountered.
• This requires a subtraction of the multiplicand from the partial
product in AC.
47. Booth’s Multiplication Algorithm
• The Booth algorithm gives a procedure for multiplying binary
integers in signed-2's complement representation.
• As in all multiplication schemes, the Booth algorithm requires
examination of the multiplier bits and shifting of the partial
product.
• Prior to the shifting, the multiplicand may be added to the
partial product, subtracted from the partial product, or left
unchanged, according to the following rules:
48. Booth’s Multiplication Algorithm
• 1. The multiplicand is subtracted from the partial product upon
encountering the first least significant 1 in a string of 1's in the
multiplier. (bit pair 10)
• 2. The multiplicand is added to the partial product upon
encountering the first 0 (provided that there was a previous 1)
in a string of 0's in the multiplier. (bit pair 01)
• 3. The partial product does not change when the current multiplier
bit is identical to the previous multiplier bit. (bit pairs 00 and 11)
49. Booth’s Multiplication Algorithm
• If the two bits are equal to 01, the first 0 in a string of 0's has
been encountered.
• This requires the addition of the multiplicand to the partial
product in AC.
• When the two bits are equal, the partial product does not
change.
• An overflow cannot occur, because the additions and
subtractions of the multiplicand alternate.
50. Booth’s Multiplication Algorithm
• The two numbers that are added always have opposite signs.
• The next step is to shift the partial product and the multiplier
to the right.
• This is an arithmetic shift right (ashr) operation, which shifts
AC and QR to the right and leaves the sign bit in AC
unchanged.
• The sequence counter is decremented and the
computational loop is repeated n times.
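Putting the three rules and the ashr step together gives the following simulation of Booth's algorithm (a sketch for n-bit signed-2's-complement operands; the function name and the signed-product decoding at the end are assumptions added for readability):

```python
def booth_multiply(multiplicand, multiplier, n):
    """Booth's algorithm on n-bit signed-2's-complement operands.

    AC and the extra bit Qn+1 start at 0; each cycle inspects the
    pair (Qn, Qn+1), adds or subtracts BR per the rules, then
    arithmetic-shifts AC,QR,Qn+1 one position to the right.
    """
    mask = (1 << n) - 1
    BR = multiplicand & mask
    QR = multiplier & mask
    AC, Qn1 = 0, 0
    for _ in range(n):                      # SC = n
        pair = (QR & 1, Qn1)
        if pair == (1, 0):                  # first 1 in a string of 1's
            AC = (AC - BR) & mask
        elif pair == (0, 1):                # first 0 after a string of 1's
            AC = (AC + BR) & mask
        # ashr of AC,QR,Qn+1: the sign bit of AC is replicated
        Qn1 = QR & 1
        QR = (QR >> 1) | ((AC & 1) << (n - 1))
        AC = (AC >> 1) | (AC & (1 << (n - 1)))
    product = (AC << n) | QR                # 2n-bit result in AC,QR
    if product & (1 << (2 * n - 1)):        # interpret as signed
        product -= 1 << (2 * n)
    return product

print(booth_multiply(15, 13, 5))    # 195   -> assignment Q.1(a)
print(booth_multiply(15, -13, 5))   # -195  -> assignment Q.1(b)
```

Printing AC, QR, and Qn+1 each cycle reproduces the step-by-step trace asked for in assignment Q.1.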
66. Division Algorithm
•Division of two fixed-point binary numbers in signed-magnitude
representation is done with paper and pencil by a process of
successive compare, shift, and subtract operations.
•The quotient digits are either 0 or 1 and there is no need to
estimate how many times the dividend or partial remainder fits
into the divisor.
•The division process is illustrated by a numerical example.
72. Hardware Algorithm
• The dividend is in A and Q, and the divisor in B.
• The sign of the result is transferred into Qs to become part of the
quotient.
• A constant is set into the sequence counter SC to specify the
number of bits in the quotient.
• As in multiplication, we assume that the operands are transferred to
registers from a memory unit that has words of n bits.
• Since an operand must be stored with its sign, one bit of the word is
occupied by the sign and the magnitude consists of n - 1 bits.
73. Hardware Algorithm
• If the bit shifted into E is 1, B must be subtracted from EA and a 1 inserted
into Qn for the quotient bit.
• If the shift-left operation inserts a 0 into E, the divisor is subtracted by
adding its 2's complement value, and the carry is transferred into E.
• If E = 1, it signifies that A >= B; therefore, Qn is set to 1.
• If E = 0, it signifies that A < B, and the original number is restored by
adding B back to A.
• The quotient sign is in Qs, and the sign of the remainder in As is the
same as the original sign of the dividend.
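The restoring-division loop can be simulated on unsigned magnitudes (signs omitted, as in assignment Q.3). This is an illustrative sketch; the function name and the (quotient, remainder) return convention are assumptions:

```python
def divide(dividend, divisor, n):
    """Restoring division: dividend in A,Q (2n bits), divisor in B.

    Each cycle shifts EAQ left, compares by subtracting B, sets the
    quotient bit Qn, and restores A when the subtraction went
    negative. Requires the high half of the dividend to be smaller
    than the divisor so the quotient fits in n bits.
    """
    mask = (1 << n) - 1
    A = (dividend >> n) & mask          # high half of the dividend
    Q = dividend & mask                 # low half; collects quotient bits
    B = divisor
    for _ in range(n):                  # SC = n
        # shift EAQ left; E receives the old MSB of A
        E = (A >> (n - 1)) & 1
        A = ((A << 1) | (Q >> (n - 1))) & mask
        Q = (Q << 1) & mask
        if E == 1:                      # E:A is certainly >= B: subtract
            A = (A - B) & mask
            Q |= 1
        else:
            A = (A - B) & (2 * mask + 1)   # keep the borrow in bit n
            if (A >> n) & 1 == 0:          # no borrow: A >= B, Qn = 1
                A &= mask
                Q |= 1
            else:                          # borrow: restore A, Qn = 0
                A = (A + B) & mask
    return Q, A                         # quotient in Q, remainder in A

print(divide(0b10100011, 0b1011, 4))    # assignment Q.3(a): 163 / 11
print(divide(0b00001111, 0b0011, 4))    # assignment Q.3(b): 15 / 3
```

Printing E, A, Q, and SC inside the loop gives the register trace asked for in Q.3.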
88. Assignment for 1st Week [Unit-3 (I)]
Q.1 Show the step-by-step multiplication process using the Booth algorithm when the following
binary numbers are multiplied. Assume 5-bit registers that hold signed numbers. The
multiplicand in both cases is +15.
a. (+15) x (+13) b. (+15) x (-13)
Q.2 Show the contents of registers E, A, Q, and SC during the process of multiplication of
two binary numbers, 11111 (multiplicand) and 10101 (multiplier). The signs are not
included.
Q.3 Show the contents of registers E, A, Q, and SC during the process of division of
(a) 10100011 by 1011; (b) 00001111 by 0011. (Use a dividend of eight bits.)
Q.4 Show the step-by-step Addition/Subtraction process using Addition/Subtraction
Algorithm
A. 8-7 B. 6-7 C. 5+4 D. -5-4
Q.5 Represent 58.25 in IEEE 754 single precision floating point representation.
89. Memory
•The memory unit is an essential component of a digital computer, since
it is needed for storing programs and data.
•The memory unit that communicates directly with the CPU is called main
memory.
•Devices that provide backup storage are called auxiliary memory.
•Only the programs and data currently needed by the processor reside in the
main memory.
•All other information is stored in auxiliary memory and transferred to
main memory when needed.
90. Memory Hierarchy
•The total memory capacity of a computer can be visualized as
a hierarchy of components.
•The memory hierarchy system consists of all storage devices, from
cache memory to main memory to auxiliary memory.
•As one goes down the hierarchy (from cache toward auxiliary memory):
Cost per bit decreases.
Capacity increases.
Access time increases.
Frequency of access by the processor decreases.
93. Main Memory
•It is the memory used to store programs and data during
computer operation.
•The principal technology is based on semiconductor integrated
circuits.
•It consists of RAM and ROM chips.
•RAM chips are available in two forms: static and dynamic.
94. RAM & ROM
•RAM originally referred to any random-access memory, but the term is
now used to designate a read/write memory, to distinguish it from
read-only memory (ROM), although ROM is also random access.
•RAM is used for storing the bulk of the programs and data that are
subject to change.
•ROM is used for storing programs that are permanently resident in
the computer and for tables of constants that do not change in value
once the production of the computer is completed.
•The ROM portion of main memory is needed for storing an
initial program called a bootstrap loader.
95. Bootstrap Loader
•The bootstrap loader is a program whose function is to start the
computer software operating when power is turned on.
•Since RAM is volatile, its contents are destroyed when power is
turned off.
•The contents of ROM remain unchanged after power is turned off
and on again.
•The startup of a computer consists of turning the power on and
starting the execution of an initial program.
•The bootstrap program loads a portion of the operating system
from disk to main memory and control is then transferred to the
operating system, which prepares the computer for general use.
96. RAM and ROM Chips
•A RAM chip is better suited for communication with the CPU if it has one
or more control inputs that select the chip only when needed.
•Another common feature is a bidirectional data bus that allows the
transfer of data either from memory to CPU during a read operation, or
from CPU to memory during a write operation.
•The capacity of the memory is 128 words of eight bits (one byte) per
word. This requires a 7-bit address and an 8-bit bidirectional data bus.
•The read and write inputs specify the memory operation.
• The two chip select (CS) control inputs enable the chip only
when it is selected by the microprocessor.
98. ROM
•It is used for storing programs that are permanent and the tables of
constants that do not change.
•ROM store program called bootstrap loader whose function is to start the
computer software when the power is turned on.
•When the power is turned on, the hardware of the computer sets the
program counter to the first address of the bootstrap loader.
•For a chip of the same size, it is possible to have more bits of ROM than of
RAM, because the internal binary cells in ROM occupy less space than in
RAM.
•For this reason the diagram specifies a 512-byte ROM and 128-byte RAM chips.
99. ROM
•Since a ROM can only be read, the data bus can only be in output mode.
•The nine address lines in the ROM chip specify any one of the 512
bytes stored in it.
•The two chip select inputs must be CS1 = 1 and CS2 = 0 for the unit
to operate. Otherwise, the data bus is in a high-impedance state.
•There is no need for a read or write control because the unit can
only read. Thus when the chip is enabled by the two select inputs,
the byte selected by the address lines appears on the data bus.
101. Memory Address Map
•The designer of a computer system must calculate the amount of
memory required for the particular application and assign it to
either RAM or ROM.
•The addressing of the memory is then established by means of a
table called a memory address map, which specifies the memory
addresses assigned to each chip.
•To demonstrate with a particular example, assume that a
computer system needs 512 bytes of RAM and 512 bytes of ROM.
103. Memory Address Map
•Although there are 16 lines in the address bus, the table shows
only 10 lines because the other 6 are not used in this example
and are assumed to be zero.
•The small x's under the address bus lines designate those lines
that must be connected to the address inputs in each chip.
•The RAM chips have 128 bytes and need seven address lines.
•The ROM chip has 512 bytes and needs nine address lines.
•The x's are always assigned to the low-order bus lines: lines 1
through 7 for the RAM and lines 1 through 9 for the ROM.
104. Memory Address Map
•It is now necessary to distinguish between four RAM chips by
assigning to each a different address.
•For this particular example we choose bus lines 8 and 9 to
represent four distinct binary combinations.
•Note that any other pair of unused bus lines can be chosen for
this purpose.
•The distinction between a RAM and a ROM address is made with
another bus line: when line 10 is 0, the CPU selects a RAM chip, and
when this line is equal to 1, it selects the ROM.
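The decoding described above can be sketched in Python. Bus lines are numbered from 1 (low order) as in the text, so "line 10" is bit 9 of the address; lines 1-7 form the in-chip address for RAM, lines 8-9 pick the RAM chip, and lines 1-9 form the in-chip address for ROM:

```python
# Sketch of the address decoding described above. Bus lines are numbered
# from 1 (low order), so "line 10" is bit 9 of the address.
def decode(addr):
    if (addr >> 9) & 1:                 # line 10 = 1 -> select the ROM
        return ("ROM", 0, addr & 0x1FF) # lines 1-9 address its 512 bytes
    chip = (addr >> 7) & 0b11           # lines 8-9 pick one of four RAM chips
    return ("RAM", chip, addr & 0x7F)   # lines 1-7 address its 128 bytes

print(decode(0b0000000000))  # ('RAM', 0, 0)
print(decode(0b0110000101))  # ('RAM', 3, 5)  -> chip 3, byte 5
print(decode(0b1000000101))  # ('ROM', 0, 5)
```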
111. SRAM vs DRAM
1. SRAM uses transistors to store information; DRAM uses capacitors.
2. SRAM uses no capacitors, so no refreshing is required; DRAM must have the contents of its capacitors refreshed periodically to store information for a longer time.
3. SRAM is faster as compared to DRAM; DRAM provides slow access speeds.
4. SRAMs are expensive; DRAMs are cheaper.
5. SRAMs are low-density devices; DRAMs are high-density devices.
6. SRAMs are used in cache memories; DRAMs are used in main memories.
112. Auxiliary Memory
•The most common auxiliary memory devices used in computer
systems are magnetic disks and magnetic tapes.
•Other components used, but not as frequently, are magnetic
drums, magnetic bubble memory, and optical disks.
•The important characteristics of any device are its access
mode, access time, transfer rate, capacity, and cost.
•The average time required to reach a storage location in
memory and obtain its contents is called the access time.
113. Auxiliary Memory
•The transfer time is the time required to transfer data to or from the device.
•The seek time is usually much longer than the transfer time.
•Auxiliary storage is organized in records or blocks.
•A record is a specified number of characters or words.
•Reading or writing is always done on entire records.
•The transfer rate is the number of characters or words that the
device can transfer per second, after it has been positioned at
the beginning of the record.
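A small worked example of these quantities; all figures below (10 ms access time, 1,000,000 characters per second, a 4096-character record) are assumed for illustration only:

```python
# Worked example of record transfer timing; all figures are assumed for
# illustration only.
access_time_s = 0.010          # average time to reach the record (10 ms)
transfer_rate_cps = 1_000_000  # characters transferred per second
record_chars = 4096            # characters in one record

transfer_time_s = record_chars / transfer_rate_cps   # time to stream the record
total_s = access_time_s + transfer_time_s            # position, then transfer
print(f"total time to read the record: {total_s * 1000:.3f} ms")
```

Note that the positioning (access) time dominates: streaming the record takes about 4 ms, while reaching it takes 10 ms.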
114. Magnetic Disk
•A magnetic disk is a circular plate constructed of metal or plastic
coated with magnetized material.
•Often both sides of the disk are used and several disks may be
stacked on one spindle with read/write heads available on each
surface.
•All disks rotate together at high speed and are not stopped or
started for access purposes.
•Bits are stored in the magnetized surface in spots along
concentric circles called tracks.
115. Magnetic Disk
•The tracks are commonly divided into sections called sectors.
•The minimum quantity of information which can be transferred
is a sector.
•Some units use a single read/write head for each disk
surface.
•In other disk systems, separate read/write heads are
provided for each track in each surface.
117. Magnetic Tape
•Magnetic tape is a strip of plastic coated with a magnetic
recording medium.
• It is a Sequential Storage Medium used for data collection,
backups and archiving.
•The major drawback of tape is its sequential format: locating a
specific record requires reading every record in front of it.
•Magnetic tape provides a compact, economical means of
preserving & reproducing varied forms of information.
118. Magnetic Tape
•Recordings on tape can be played back immediately and are easily erased,
permitting the tape to be reused many times without a loss in quality
of recording.
•Bits are recorded as magnetic spots on the tape along several tracks.
•Usually 7 or 9 bits are recorded simultaneously, forming a character
together with a parity bit.
•Read/write heads are mounted one per track so that data can be
recorded and read as a sequence of characters.
•Magnetic tape units can be stopped, started to move forward or in
reverse, or rewound.
119. Magnetic Tape
•Information is recorded in blocks referred to as records.
•Each record on tape has an identification bit pattern at the beginning
and end.
•By reading the bit pattern at the beginning, the tape control identifies
the record number. By reading the bit pattern at the end of the record,
the control recognizes the beginning of a gap.
• A tape unit is addressed by specifying the record number and the
number of characters in the record.
•Records may be of fixed or variable length.
123. Cache Memory
•Cache memory is a special, very high-speed memory.
•It is used to speed up memory access and keep pace with the high-speed CPU.
•Cache memory is costlier than main memory or disk memory,
but more economical than CPU registers.
•Cache memory is an extremely fast memory type that acts as a
buffer between RAM and the CPU.
•It holds frequently requested data and instructions so that they
are immediately available to the CPU when needed.
124. Cache Memory
•Cache memory is used to reduce the average time to access
data from the Main memory.
•The cache is a smaller and faster memory which stores copies of
the data from frequently used main memory locations.
•There are several independent caches in a CPU, which
store instructions and data separately.
•The cache memory access time is less than the access time of
main memory by a factor of 5 to 10.
126. Cache Memory
•When the CPU needs to access memory, the cache is examined.
•If the word is found in the cache, it is read from the fast
memory.
•If the word addressed by the CPU is not found in the cache, the
main memory is accessed to read the word.
•A block of words containing the one just accessed is then
transferred from main memory to cache memory.
•The block size may vary from one word (the one just accessed)
to about 16 words.
127. Cache Memory
•Some data are transferred to cache so that future references
to memory find the required words in the fast cache memory.
•The performance of cache memory is frequently measured in
terms of a quantity called the hit ratio.
•When the CPU refers to memory and finds the word in cache,
it is said to produce a hit.
•If the word is not found in cache, it is in main memory and it
counts as a miss.
•The ratio of the number of hits divided by the total CPU
references to memory (hits plus misses) is the hit ratio.
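The hit ratio as just defined, and the effective access time it implies, can be sketched as follows. The timing values (10 ns cache, 100 ns main memory) are assumed, and the average-access-time formula is a standard model rather than something stated in the slides:

```python
# Hit ratio as defined above, plus the effective access time it implies.
# The timing values (10 ns cache, 100 ns main memory) are assumed, and the
# average-access-time formula is a standard model, not taken from the slides.
def hit_ratio(hits, misses):
    return hits / (hits + misses)

def avg_access_time(h, t_cache, t_main):
    # t_avg = h * t_cache + (1 - h) * t_main
    return h * t_cache + (1 - h) * t_main

h = hit_ratio(hits=900, misses=100)              # 0.9
t = avg_access_time(h, t_cache=10, t_main=100)   # ~19 ns
print(f"hit ratio {h}, average access time {t:.1f} ns")
```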
128. Cache Memory
•The hit ratio is best measured experimentally by running
representative programs in the computer and measuring the
number of hits and misses during a given time.
•The basic characteristic of cache memory is its fast access time.
•The transformation of data from main memory to cache
memory is referred to as a mapping process.
•Three types of mapping procedures are of practical interest
when considering the organization of cache memory:
•1. Direct mapping 2. Associative mapping 3. Set-associative
mapping
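A minimal sketch of the first of these schemes, direct mapping, where each memory block can go into exactly one cache line and a whole block is fetched on a miss. Block and cache sizes are assumed for illustration:

```python
# A minimal direct-mapped cache sketch; block and cache sizes are assumed
# for illustration, and addresses are word addresses.
BLOCK_WORDS = 4   # words per block
NUM_LINES = 8     # cache lines

cache = {}        # line index -> tag currently stored there
hits = misses = 0

def access(addr):
    global hits, misses
    block = addr // BLOCK_WORDS        # which memory block holds this word
    index = block % NUM_LINES          # direct mapping: block -> one line
    tag = block // NUM_LINES           # remaining bits identify the block
    if cache.get(index) == tag:
        hits += 1                      # word found in the fast cache
    else:
        misses += 1                    # fetch the whole block from main memory
        cache[index] = tag

for a in [0, 1, 2, 3, 0, 64, 0]:
    access(a)
print(hits, misses)                    # 4 3
```

The access pattern shows both kinds of locality: addresses 1-3 hit because the miss on address 0 brought in their whole block, while address 64 maps to the same line and evicts it.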
145. Locality of Reference
•1. Spatial Locality (Space): if a word is accessed now, the words
adjacent to it are likely to be accessed next.
Keeping more words in a block exploits spatial locality (block size).
•2. Temporal Locality (Time): if a word is referenced now, the same
word is likely to be referenced again in the near future.
LRU replacement exploits temporal locality.
146. Associative Memory
•A memory unit accessed by content is called an associative
memory or Content Addressable Memory (CAM).
•This type of memory is accessed simultaneously and in parallel
on the basis of data content rather than by a specific address or
location.
•When a word is written in an associative memory, no address is
given.
•The memory is capable of finding an empty unused location to
store the word.
147. Associative Memory
•When a word is to be read from an associative memory, the
content of the word, or part of the word, is specified.
•The memory locates all words which match the specified content
and marks them for reading.
•The time required to find an item stored in memory can be
reduced considerably if stored data can be identified for access
by the content of the data itself rather than by an address.
•Because of its organization, the associative memory is uniquely
suited to do parallel searches by data association.
148. Associative Memory
•Searches can be done on an entire word or on a specific field
within a word.
•An associative memory is more expensive than a random
access memory because each cell must have storage
capability as well as logic circuits for matching its content
with an external argument.
•For this reason, associative memories are used in
applications where the search time is very critical and must
be very short.
149. How does associative memory work?
•Data is stored at the very first empty location found in memory.
•In associative memory, when data is stored at a particular location,
no address is stored along with it.
•When the stored data needs to be searched, only the key (i.e., the
data or part of the data) is provided.
•All words in memory are compared in parallel against the
specified key to find the matching entries.
•If the data content is found, it is marked for reading by
the memory.
150. Associative Memory
•It consists of a memory array and logic for m words with n bits
per word.
•The argument register A and key register K each have n bits, one
for each bit of a word.
•The match register M has m bits, one for each memory word.
•Each word in memory is compared in parallel with the content of
the argument register.
•The key register provides a mask for choosing a particular field or
key in the argument word.
152. Associative Memory
•The entire argument is compared with each memory word if the
key register contains all 1's.
•Otherwise, only those bits in the argument that have 1's in their
corresponding positions of the key register are compared.
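This masked comparison can be sketched in Python, with a loop standing in for the parallel hardware; the argument A, key K, and word values are illustrative:

```python
# The argument register A is compared with every word under the mask in the
# key register K; the loop below stands in for the parallel hardware, and
# the word values are illustrative.
def cam_match(words, A, K):
    """Match register M: M[i] = 1 iff word i agrees with A wherever K has a 1."""
    return [1 if (w ^ A) & K == 0 else 0 for w in words]

words = [0b1011, 0b0110, 0b1010, 0b1111]
A = 0b1010
print(cam_match(words, A, K=0b1111))  # all bits compared -> [0, 0, 1, 0]
print(cam_match(words, A, K=0b1100))  # only the two high bits -> [1, 0, 1, 0]
```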
154. Associative Memory
•Advantages of associative memory:
•The searching process is fast.
•It is well suited for parallel searches.
•Disadvantages of associative memory:
•Associative memory is more expensive than RAM.
155. Difference between RAM and CAM
1. Definition: RAM stands for Random Access Memory; CAM stands for Content Addressable Memory.
2. Operation: with RAM, the user supplies an address and RAM returns the word present at that location; with CAM, the user supplies a word and CAM returns the locations where that word is present.
3. Cost: RAM is cheaper than associative memory; CAM is costlier.
4. Application: RAM is used to run programs and to store their data during execution; CAM is primarily used in database management systems.
5. Suitability: RAM is suitable for PRAM (Parallel RAM) algorithms; CAM is suitable for parallel access.
156. VIRTUAL MEMORY
•In a memory hierarchy system, programs and data are first
stored in auxiliary memory.
•Portions of a program or data are brought into main memory as
they are needed by the CPU.
•The term virtual memory refers to something that appears to
be present but actually is not.
•Virtual memory is a concept that gives the user the illusion of
having more memory available to a program than the computer's
real memory.
157. VIRTUAL MEMORY
•Virtual memory is used to give programmers the illusion that
they have a very large memory at their disposal, even though the
computer actually has a relatively small main memory.
•An address used by a programmer will be called a virtual
address, and the set of such addresses the address space.
•An address in main memory is called a location or physical
address. The set of such locations is called the memory space.
•Virtual memory is simulated memory that is written to a file on a
hard drive, called a page file or swap file.
158. VIRTUAL MEMORY
•It is used by the operating system to simulate physical RAM by using
hard disk space.
•Windows 1.0 and 2.0 had no virtual memory, so it was not possible
to run a number of applications at once without running out of
RAM.
•Virtual memory was introduced in Windows 3.0.
•To implement it, a portion of the hard drive is reserved by the
system. This portion can be either a file or a separate partition.
159. VIRTUAL MEMORY
•In Windows it is called pagefile.sys, and in Linux a separate
partition is used for virtual memory.
•When the system needs more memory, it maps some of its
memory addresses out to the hard disk drive.
•The extra memory does not actually exist in RAM; it is
storage space on the disk drive.
•The more RAM (main memory) a computer has, the faster
its programs run.
•If a lack of RAM is slowing the computer, virtual memory can be
increased to compensate.
160. VIRTUAL MEMORY
•Virtual Address (Logical Address): each address in virtual
memory is called a virtual address.
•Address Space: the set of all virtual addresses is called the
address space.
•Memory Address (Physical Address): each address in main
memory is called a physical address.
•Memory Space: the set of all memory addresses is called the
memory space.
161. VIRTUAL MEMORY
•Swapping: swapping is a mechanism in which a process is
temporarily moved from main memory to secondary storage
(swap out) and another process is moved from secondary
storage into main memory (swap in).
•After some time, the first process is brought back into main memory.
•Address Mapping: address mapping specifies how to convert a
virtual address into a memory address.
•Virtual memory implementations:
•1) Paging 2) Segmented Paging
162. Address Mapping using PAGING
•Page: virtual memory is divided into groups of equal size; each
group is called a page.
•Each page consists of a number of words, called the page size.
•Physical memory is broken down into groups of the same
size, called blocks.
•The term 'page frame' is sometimes used to denote a block.
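The page-to-block mapping can be sketched as follows; the page size and page-table contents are assumed for illustration:

```python
# Page-to-block mapping: split the virtual address into a page number and a
# word within the page, then look the page up in the page table. The page
# size and page-table contents below are assumed for illustration.
PAGE_SIZE = 1024                  # words per page
page_table = {0: 5, 1: 2, 2: 7}   # page number -> block (page frame) number

def map_address(virtual_addr):
    page = virtual_addr // PAGE_SIZE
    word = virtual_addr % PAGE_SIZE
    block = page_table[page]      # a missing entry would mean a page fault
    return block * PAGE_SIZE + word

print(map_address(2060))  # page 2, word 12 -> block 7 -> 7180
```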
166. Memory Management Unit
•A memory management system is a collection of hardware
and software procedures for managing the various programs
residing in memory.
•The memory management software is part of an overall
operating system available in many computers.
•Here we are concerned with the hardware unit associated
with the memory management system.
167. Memory Management Unit
•The basic components of a memory management unit are:
• 1. A facility for dynamic storage relocation that maps logical
memory references into physical memory addresses.
• 2. A provision for sharing common programs stored in memory
by different users.
•3. Protection of information against unauthorized access
between users, and prevention of users from changing operating
system functions.
168. Memory Management Unit
•It is more convenient to divide programs and data into
logical parts called segments.
•A segment is a set of logically related instructions or data
elements associated with a given name.
•Segments may be generated by the programmer or by the
operating system.
•Examples of segments are a subroutine, an array of data, a
table of symbols, or a user's program.
169. Segment-page mapping
•In this scheme the virtual memory is first divided into segments,
which are further divided into pages; pages are further
divided into words.
•Each segment is a group of pages.
•Each page is a group of words.
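Splitting a virtual address into segment, page, and word fields can be sketched as below; the 4 + 8 + 10 bit field widths are assumed for illustration:

```python
# A virtual address under segmented paging splits into segment, page, and
# word fields; the 4 + 8 + 10 bit widths below are assumed for illustration.
SEG_BITS, PAGE_BITS, WORD_BITS = 4, 8, 10

def split(addr):
    word = addr & ((1 << WORD_BITS) - 1)
    page = (addr >> WORD_BITS) & ((1 << PAGE_BITS) - 1)
    seg = addr >> (WORD_BITS + PAGE_BITS)
    return seg, page, word

print(split(0b0011_00000101_0000000111))  # (3, 5, 7): segment 3, page 5, word 7
```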