The document discusses the memory hierarchy in computers. It explains that memory is organized in a hierarchy with different levels providing varying degrees of speed and capacity. The levels from fastest to slowest are: registers, cache, main memory, and auxiliary memory such as magnetic disks and tapes. Cache memory sits between the CPU and main memory to bridge the speed gap. It exploits locality of reference to improve memory access speed. The document provides details on the working of each memory level and how they interact with each other.
Memory is organized in a hierarchy with different levels providing trade-offs between speed and cost.
- Cache memory sits between the CPU and main memory for fastest access.
- Main memory (RAM) is where active programs and data reside and is faster than auxiliary memory but more expensive.
- Auxiliary memory (disks, tapes) provides backup storage and is slower than main memory but larger and cheaper.
Virtual memory manages this hierarchy through address translation techniques like paging that map virtual addresses to physical locations, allowing programs to access more memory than physically available. When data is needed from auxiliary memory a page fault occurs and page replacement algorithms determine what data to remove from main memory.
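A minimal sketch of this translation might look as follows. The page size, frame count, and the use of FIFO as the replacement policy are illustrative choices, not a specific system's design:

```python
from collections import OrderedDict

PAGE_SIZE = 4096
NUM_FRAMES = 2  # deliberately tiny so replacement becomes visible

page_table = OrderedDict()  # virtual page number -> physical frame number
next_frame = 0

def access(vaddr):
    """Translate a virtual address; service a page fault with FIFO replacement."""
    global next_frame
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:                 # page fault
        if len(page_table) < NUM_FRAMES:
            frame = next_frame                # a free frame is still available
            next_frame += 1
        else:                                 # evict the oldest resident page (FIFO)
            _, frame = page_table.popitem(last=False)
        page_table[vpn] = frame
    return page_table[vpn] * PAGE_SIZE + offset
```

With only two frames, a third distinct page forces the replacement algorithm to decide what to remove, exactly as described above.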
Introduction to Memory Segmentation
Segmentation is the process by which the main memory of the computer is logically divided into segments, each with its own base address. A segment is simply a contiguous area of memory; the segments may be of various sizes, and the process of dividing memory this way is called segmentation.
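Logical-to-physical translation under segmentation can be sketched as a base-plus-offset lookup. The segment table contents below are made-up values for illustration:

```python
# Each segment has a base address and a limit (its size).
# The values here are invented for demonstration purposes.
segment_table = {
    0: {"base": 0x1000, "limit": 0x0400},  # e.g. a code segment
    1: {"base": 0x5000, "limit": 0x0800},  # e.g. a data segment
}

def translate(segment, offset):
    """Map a (segment, offset) logical address to a physical address."""
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("segmentation fault: offset outside segment")
    return entry["base"] + offset

print(hex(translate(1, 0x10)))  # offset 0x10 into segment 1 -> 0x5010
```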
This document discusses instruction set architectures (ISAs). It covers four main types of ISAs: accumulator, stack, memory-memory, and register-based. It also discusses different addressing modes like immediate, direct, indirect, register-indirect, and relative addressing. The key details provided are:
1) Accumulator ISAs use a dedicated register (accumulator) to hold operands and results, while stack ISAs use an implicit last-in, first-out stack. Memory-memory ISAs can have 2-3 operands specified directly in memory.
2) Register-based ISAs can be either register-memory (like 80x86) or load-store (like MIPS), which fully separate memory access from computation: only load and store instructions reference memory, while arithmetic instructions operate on registers.
The document discusses various aspects of I/O organization in a computer system. It describes the input-output interface that provides a method for transferring information between internal storage and external I/O devices. It discusses asynchronous data transfer techniques like strobe control and handshaking. It also covers asynchronous serial transmission, different modes of data transfer like programmed I/O, interrupt-initiated I/O, and direct memory access (DMA).
This document discusses memory organization and virtual memory. It describes paging and segmentation as methods for virtual memory address translation. Paging divides memory and processes into equal sized pages, while segmentation divides processes into variable sized segments. Both methods use data structures like page tables to map logical addresses to physical addresses. Caching is also discussed as a way to improve memory performance by storing frequently accessed data in a small, fast memory near the CPU.
Associative memory, also known as content-addressable memory (CAM), allows data to be searched based on its content rather than its location. It consists of a memory array, argument register (containing the search word), key register (specifying which bits to compare), and match register (indicating matching locations). All comparisons are done in parallel. Associative memory provides faster searching than conventional memory but is more expensive due to the additional comparison circuitry in each cell. It is well-suited for applications requiring very fast searching such as databases and virtual memory address translation.
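The argument/key/match-register search described above can be sketched in software. Real CAM hardware performs every comparison in parallel; the loop here is a sequential stand-in:

```python
def cam_search(memory, argument, key):
    """Content-addressable search: compare only the bit positions selected
    by the key register. Returns the match register as a list of booleans.
    (Hardware does all comparisons in parallel; this loop is sequential.)"""
    return [(word & key) == (argument & key) for word in memory]

words = [0b1010, 0b1111, 0b0010, 0b1011]
# Match any word whose upper two bits are '10': key masks bits 3..2.
matches = cam_search(words, argument=0b1000, key=0b1100)
print(matches)  # [True, False, False, True]
```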
Computer performance is characterized by the amount of useful work accomplished by a system over the resources and time used. It can be measured through metrics like response time, throughput, and utilization. Several factors influence performance, including hardware, software, memory, and I/O. Benchmarks are used to evaluate performance by measuring how systems perform standard tasks. Maintaining high performance requires optimizing these various components through techniques like CPU enhancement, memory improvement, and I/O optimization.
This document provides an overview of input/output interfaces in 3 paragraphs. It discusses how I/O devices communicate differently than internal storage due to differences in operation, data transfer rates, word formats, and peripheral operating modes. It describes how interface modules connect I/O devices like keyboards, displays, printers and storage to the I/O bus and processor. Finally, it provides an example of an I/O interface unit that uses control and status registers to facilitate communication between a CPU and I/O device over control, data and status lines.
Memory organization
Memory Organization in Computer Architecture. A memory unit is a collection of storage units or devices. The memory unit stores binary information in the form of bits. Volatile memory loses its data when power is switched off.
Memory organisation ppt final presentation
Memory is an essential component of computers that is used to store programs and data. Computers typically have three levels of memory: main memory, secondary memory, and cache memory. Main memory is fast memory that stores programs and data being executed. Secondary memory is permanent storage for programs and data used less frequently. Cache memory sits between the CPU and main memory for faster access. Memory is also classified by location, access method, volatility, and type. The different types include registers, main memory, secondary memory, cache memory, RAM, ROM, PROM, EPROM, and EEPROM.
Instruction Cycle in Computer Organization
The instruction cycle consists of three main stages:
1. The fetch stage where the instruction is fetched from the memory address stored in the program counter and placed in the instruction register. The program counter is then incremented.
2. The decode stage where the instruction is interpreted by the decoder.
3. The execute stage where the control unit passes signals to perform the required operations, and the result is stored in memory or sent to an output device. The program counter may then be updated to fetch the next instruction, beginning the cycle again.
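The three stages above can be sketched as a loop over a made-up two-field instruction format. The opcodes and memory layout here are illustrative, not from any real ISA:

```python
# Toy fetch-decode-execute loop. Memory holds (opcode, operand) pairs;
# location 7 holds a data word tagged "DATA".
memory = {0: ("LOAD", 7), 1: ("ADD", 5), 2: ("HALT", 0), 7: ("DATA", 10)}
pc, acc = 0, 0                     # program counter and accumulator

while True:
    opcode, operand = memory[pc]   # fetch: read the instruction at PC...
    pc += 1                        # ...then increment the program counter
    if opcode == "LOAD":           # decode + execute
        acc = memory[operand][1]   # load a data word into the accumulator
    elif opcode == "ADD":
        acc += operand             # add an immediate operand
    elif opcode == "HALT":
        break

print(acc)  # 15
```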
Memory organization in computer architecture
- Volatile Memory
- Non-Volatile Memory
- Memory Hierarchy
- Memory Access Methods: Random Access, Sequential Access, Direct Access
- Main Memory: DRAM, SRAM, NVRAM; RAM (Random Access Memory), ROM (Read Only Memory)
- Auxiliary Memory
- Cache Memory
- Hit Ratio
- Associative Memory
These slides discuss register organization and stack organization in detail. Stack organization is illustrated with animation to help the user understand it more easily.
The document discusses various methods for input/output (IO) in computer systems, including IO interfaces, programmed IO, interrupt-initiated IO, direct memory access (DMA), and input-output processors (IOPs). It describes how each method facilitates the transfer of data between the CPU, memory, and external IO devices.
This document discusses asynchronous data transfer between independent units. It describes two methods for asynchronous transfer - strobe control and handshaking. Strobe control uses a single control line to time each transfer, while handshaking introduces a second control signal to provide confirmation between units. Specifically, it details the handshaking process, which involves control signals like "data valid" and "data accepted" or "ready for data" to coordinate placing data on the bus and accepting data between a source and destination unit.
Types of instructions can be categorized into data transfer, arithmetic, and logical/program control instructions. Data transfer instructions like MOV copy data between registers and memory. Arithmetic instructions include INC/DEC to increment/decrement values, ADD/SUB for addition/subtraction, and MUL/DIV for multiplication/division. Logical instructions perform bitwise operations while program control instructions manage program flow.
1) The document discusses different types of micro-operations including arithmetic, logic, shift, and register transfer micro-operations.
2) It provides examples of common arithmetic operations like addition, subtraction, increment, and decrement. It also describes logic operations like AND, OR, XOR, and complement.
3) Shift micro-operations include logical shifts, circular shifts, and arithmetic shifts which affect the serial input differently.
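The difference between the three shift types lies in what feeds the serial input, as noted above. A sketch for an 8-bit word (the width is an illustrative choice):

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def logical_shift_right(x):
    return (x >> 1) & MASK                   # serial input is 0

def arithmetic_shift_right(x):
    sign = x & 0x80                          # serial input replicates the sign bit
    return ((x >> 1) | sign) & MASK

def circular_shift_right(x):
    out_bit = x & 1                          # serial input is the bit shifted out
    return ((x >> 1) | (out_bit << (WIDTH - 1))) & MASK

x = 0b10010010
print(bin(logical_shift_right(x)))     # 0b1001001
print(bin(arithmetic_shift_right(x)))  # 0b11001001 (sign preserved)
print(bin(circular_shift_right(x)))    # 0b1001001  (bit 0 was 0, rotated in)
```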
Synchronous data transfer involves sharing a common clock between a CPU and I/O interface so that data transfer is coordinated. Asynchronous transfer has independent clocks, so handshaking methods like strobe control and handshaking are used. Strobe control uses a single strobe pulse to indicate valid data. Handshaking adds a second control signal for acknowledgment between units. This ensures the source knows data was received and the destination knows data is available.
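One way to picture the two-signal handshake is a sequential software sketch. Real hardware drives these as parallel control lines; the booleans and function names here are invented for illustration:

```python
# "data valid" is driven by the source; "data accepted" by the destination.
bus = None
data_valid = False
data_accepted = False

def source_send(value):
    global bus, data_valid
    bus = value                 # place the data on the bus first
    data_valid = True           # then assert "data valid"

def destination_receive():
    global data_accepted
    assert data_valid, "destination must wait until data is valid"
    value = bus
    data_accepted = True        # acknowledge: assert "data accepted"
    return value

def source_complete():
    global data_valid, data_accepted
    assert data_accepted        # source now knows the data was received
    data_valid = False          # drop "data valid"...
    data_accepted = False       # ...and the acknowledgment is withdrawn
```

The acknowledgment is what distinguishes handshaking from simple strobe control: the source only completes the transfer once the destination has confirmed receipt.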
The 80486 microprocessor features an integrated math coprocessor that is 3 times faster than the 80386/387 combination. It has an 8KB internal code and data cache and uses a 168-pin PGA package. New signals support burst mode memory access and bus sharing. The 80486 includes parity checking/generation, and additional page table entry bits control internal caching.
This document discusses various addressing modes of the 8086 microprocessor. It defines addressing modes as how operands are specified in an instruction. There are 8 main addressing modes - immediate, direct, register, register indirect, indexed, register relative, based indexed, and relative based indexed. Each mode is explained with examples of how operand values are accessed from memory or registers to perform operations. The document also discusses intrasegment and intersegment addressing modes which specify if the source and destination locations are within the same memory segment or different segments.
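Effective-address computation for one of these modes can be sketched numerically. The register values below are made up; the 8086 forms a 20-bit physical address as segment × 16 + offset:

```python
# Based-indexed addressing with displacement, in the spirit of the 8086.
regs = {"BX": 0x0100, "SI": 0x0020, "DS": 0x2000}

def based_indexed(base, index, disp):
    # 16-bit effective address: base register + index register + displacement
    offset = (regs[base] + regs[index] + disp) & 0xFFFF
    # physical address: segment register shifted left 4 bits, plus offset
    return (regs["DS"] << 4) + offset

print(hex(based_indexed("BX", "SI", 0x04)))  # 0x20000 + 0x124 = 0x20124
```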
A multiprocessor system is an interconnection of two or more CPUs with memory and input-output equipment.
Its components are CPUs, IOPs connected to input-output devices, and a memory unit that may be partitioned into a number of separate modules.
Multiprocessors are classified as multiple instruction stream, multiple data stream (MIMD) systems.
The document describes how input/output (I/O) devices communicate with the processor and memory. I/O devices are connected to the processor and memory via a shared bus. Each device has a unique address and uses address, data, and control lines on the bus. Interrupts allow I/O devices to signal the processor when they need attention, reducing wasted processor time. Multiple interrupt lines allow different devices to interrupt independently and ensure the correct interrupt service routine is executed.
The document discusses address sequencing in a microprogram control unit. It begins by defining key terms like control address register, which stores the initial address of the first microinstruction. It then explains that the next address generator is responsible for selecting the next address from control memory based on the current microinstruction. Microinstructions are stored in control memory in groups that make up routines corresponding to each machine instruction. The document also discusses control memory, hardwired control vs microprogrammed control, and examples of next address generation and status bits.
The document discusses memory organization and hierarchy. It describes how main memory directly communicates with the CPU while auxiliary memory provides backup storage. It also outlines different memory mapping techniques like direct mapping and set-associative mapping used for cache memory. Virtual memory allows programs to be larger than physical memory by swapping blocks between main and auxiliary storage.
The document discusses different levels of computer memory hierarchy including main memory, cache memory, auxiliary memory, and virtual memory. Main memory uses RAM and ROM chips that are connected to the CPU through address and data buses. The address lines select the specific memory chip and byte location within that chip. Main memory is the highest level of memory that can be accessed directly by the CPU for storage of data and instructions currently in use.
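The way address lines select a specific chip and a byte within it can be sketched with integer division. The chip size is a made-up figure for illustration:

```python
# High-order address bits act as chip select; low-order bits pick the byte.
CHIP_SIZE = 1024  # bytes per RAM chip (illustrative)

def decode(address):
    chip_select, offset = divmod(address, CHIP_SIZE)
    return chip_select, offset

print(decode(2500))  # (2, 452): byte 452 of chip 2
```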
The document discusses the memory system in computers, including main memory, cache memory, and different types of memory chips. Its key points are:
The document discusses the different levels of memory hierarchy including main memory, cache memory, and auxiliary memory. It describes the basic concepts of memory including addressing schemes, memory access time, and memory cycle time. Examples of different types of memory chips are discussed such as SRAM, DRAM, ROM, and cache memory organization and mapping techniques.
The document discusses computer memory organization and the memory hierarchy. It describes different types of memory like RAM, ROM, cache memory and secondary storage. It explains the memory hierarchy as fast but expensive memory like registers and cache being used for frequently accessed data, while slower but cheaper memory like hard disks are used for long term and bulk storage. The principle of locality is discussed where programs tend to access data and instructions that are near each other in memory. Cache memory aims to improve performance by storing recently accessed data from main memory.
Modern processors are faster than main memory, so the processor may waste time waiting for memory accesses.
The purpose of cache memory is to make main memory appear to the processor to be much faster than it actually is.
Secondary storage devices hold information regardless of whether the computer has power. Examples include floppy disks and hard drives. Secondary storage is slower than primary storage but is used for storing programs and data due to its larger size and lower cost. Magnetic tapes and disks are common sequential and direct access storage devices that differ in their access methods and performance characteristics. Tapes are portable but slow to access random data while disks allow faster random access but have lower capacity than tapes.
Computer memory can be classified as primary or secondary memory. Primary memory, also called main memory, is located directly on the motherboard and includes RAM and ROM. RAM is used for temporary storage and needs power to retain data, while ROM permanently stores basic startup instructions. Secondary memory, used for long-term storage, includes magnetic tapes, disks, and optical disks like CDs and DVDs, which allow large amounts of data to be stored externally to the computer's main components. Common units for measuring computer memory are bits, bytes, kilobytes, megabytes, gigabytes, and terabytes.
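The measurement units mentioned above can be illustrated with binary (power-of-two) multiples, one common convention for memory sizes:

```python
# Binary multiples: KB = 2^10 bytes, MB = 2^20, GB = 2^30, TB = 2^40.
UNITS = {"KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}

def to_bytes(value, unit):
    return value * UNITS[unit]

print(to_bytes(4, "GB"))      # 4294967296 bytes
print(to_bytes(4, "GB") * 8)  # the same quantity in bits (8 bits per byte)
```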
The document discusses memory organization and hierarchy in a computer system. It explains that memory hierarchy is used to minimize access time by organizing memory such that frequently used parts are closer to the CPU. It describes the different levels of memory including main memory, cache memory, and auxiliary memory. It provides details on RAM, ROM, and how the computer starts up using the bootstrap loader stored in ROM. It also discusses associative memory and different mapping techniques used to transfer data between main and cache memory such as direct mapping and set-associative mapping.
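Direct mapping, mentioned above, splits each address into a tag, a line index, and a block offset. The block size and line count below are made up for illustration:

```python
BLOCK_SIZE = 16   # bytes per block (illustrative)
NUM_LINES = 64    # lines in the cache (illustrative)

def split_address(address):
    block, offset = divmod(address, BLOCK_SIZE)
    index = block % NUM_LINES       # which cache line the block maps to
    tag = block // NUM_LINES        # distinguishes blocks sharing that line
    return tag, index, offset

print(split_address(0x1234))  # (4, 35, 4) for this configuration
```

Every block whose index field is the same competes for one cache line; the stored tag tells the hardware which of those blocks currently occupies it.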
Chapter 8: Computer Memory System Overview
The document discusses various aspects of computer memory systems including:
- Memory can be internal (e.g. main memory, cache) or external (e.g. disks, tapes). Internal memory is faster but has lower capacity, while external memory is slower but can store more data.
- Memory is characterized by its access method (e.g. random, sequential), capacity, units of transfer (e.g. words, blocks), and performance parameters like access time and transfer rate.
- Common semiconductor memory types include RAM (random access, volatile), ROM (read-only, non-volatile), and flash memory. RAM can be static or dynamic.
This document provides information about computer memory systems hierarchy. It discusses how memory is organized in a hierarchy with the fastest and smallest memory (cache) closest to the CPU and the largest but slowest auxiliary memory. The main memory sits between these occupying a central position communicating with both the CPU and auxiliary storage. It focuses on the characteristics of different main memory technologies like SRAM and DRAM and how they are organized on computer chips with address lines and data buses. Cache memory aims to bridge the speed difference between CPU and main memory.
Memory is a device used to store data or programs either temporarily or permanently for use in a computer. There are different types of memory based on their characteristics such as location, capacity, unit of transfer, access method, performance, physical type and organization. Common memory types include RAM, ROM, and external memory such as magnetic disks. The memory hierarchy consists of registers, cache, main memory and external storage. Cache memory uses the principle of locality to improve memory access time by storing recently accessed data from main memory.
The document provides an overview of computer structure and components. It discusses the main parts of a computer system including the processor, memory, and buses that connect the components. It describes the fetch-execute cycle that the processor uses to access and execute instructions stored in memory. Different types of memory like registers, cache, main memory, and backing storage are explained based on their speed and purpose. Factors that impact system performance such as clock speed, memory size, and data transfer rates are also covered.
The document provides an overview of computer structure and components. It discusses the main parts of a computer system including the processor, memory, and buses that connect the components. It describes the fetch-execute cycle that the processor uses to access and execute instructions stored in memory. Different types of memory like registers, cache, main memory, and backing storage are explained based on their speed and purpose. Factors that impact system performance such as clock speed, memory size, and data transfer rates are also covered.
The document provides an overview of computer structure and components. It discusses the main parts including the processor, memory, and buses that connect the parts. It describes how data and instructions flow through the computer and how the processor communicates with other components using addresses. It also covers various types of memory and their speeds, as well as factors that influence computer performance such as clock speed, memory size, and data transfer rates.
Computer Architecture | Computer Fundamental and OrganizationSmit Luvani
Agenda :
Structure of Instruction
Description of Processor
Interconnection Unit
Processor to memory communication
RISC and CISC
All about how the computer interacts with memory and processor. how they connected and work.which device how works.
The document discusses computer memory systems and cache memory principles. It provides an overview of:
- The memory hierarchy, which uses different memory technologies arranged in order of decreasing cost per bit, increasing capacity, and increasing access time. This hierarchy satisfies the conflicting demands of large capacity, fast speed, and low cost.
- Cache memory, which sits between the processor and main memory in the hierarchy. Cache memory exploits locality of reference to improve average memory access time.
- Characteristics of different levels of memory, including location, capacity, unit of transfer, access methods, physical types, volatility, and erasability. Faster but smaller and more expensive memories are higher in the hierarchy to satisfy performance needs.
This document provides an overview of computer structure and performance. It discusses the main components of a computer system including the processor, memory, and buses. It describes the fetch-execute cycle and how different types of memory like registers, cache, and main memory work. It also examines factors that influence computer performance such as clock speed, memory size, and data transfer rates. Current trends are increasing processor speeds, larger memory capacities, and higher capacity storage devices.
1. Memory hierarchy takes advantage of spatial and temporal locality by keeping frequently used data closer to the CPU.
2. Caches store the most recently used data from main memory and are faster but smaller than main memory.
3. If a memory request is in cache it is a "hit" and faster to access, if not in cache it is a "miss" and requires fetching from slower main memory.
This document provides information about computer organization and architecture. It discusses the motherboard as the central component that connects all other components like the CPU, RAM, expansion slots and ports. It describes how the chipset and its components like the northbridge and southbridge facilitate data exchange. It covers CPU components like the ALU and registers, and characteristics like clock speed and instruction sets. It also discusses the memory hierarchy including caches, RAM and disk storage. In summary, the document is an overview of key components and concepts in computer organization and architecture.
1. The document discusses memory management and the memory hierarchy in computer systems. It describes the different levels of memory including CPU registers, main memory, cache memory, and auxiliary memory.
2. Cache memory is used to reduce the average time required to access memory by taking advantage of spatial and temporal locality. There are three common cache mapping techniques - direct mapping, associative mapping, and set-associative mapping.
3. Virtual memory allows programs to behave as if they have a large, single memory space even if physical memory is smaller. It uses a memory management unit to translate virtual addresses to physical addresses through a page table.
This document discusses computer memory systems including main memory, cache, and virtual memory. It defines main memory as the central storage location that holds programs and data currently being used by the CPU. The document outlines memory hierarchy from fastest to slowest as registers, cache, main memory, and secondary storage. It describes RAM and ROM types as well as cache memory. Locality of reference and memory technologies such as magnetic disks are also summarized.
Memory Hierarchy PPT of Computer Organization2022002857mbit
The document discusses memory hierarchy and cache design. It begins by listing sources used to create slides on this topic. It then provides definitions of key terms like cache hit, miss, hit time, and miss penalty. The document explains the principles of memory hierarchy, including exploiting locality of reference and implementing multiple memory levels with decreasing size but increasing speed. It discusses technologies like SRAM and DRAM that are commonly used for caches and main memory. The document also addresses four important questions in cache design: block placement, block identification, block replacement, and write strategy.
The document discusses memory hierarchy and technologies. It describes the different levels of memory from fastest to slowest as processor registers, cache memory (levels 1 and 2), main memory, and secondary storage. The main memory technologies discussed are SRAM, DRAM, ROM, flash memory, and magnetic disks. Cache memory aims to speed up access time by exploiting locality of reference and uses mapping functions like direct mapping to determine cache locations.
The document discusses several key concepts in computability theory:
1. The diagonalization language Ld is undecidable as it contains strings that would cause any Turing machine encoding to not halt on itself as input.
2. The universal Turing machine U can simulate any other Turing machine and is used to show the universal language Lu is undecidable.
3. Rice's theorem states that any non-trivial property of recursively enumerable languages is undecidable, such as whether a language is empty or not empty.
The document discusses computer organization and architecture. It defines a computer as a general-purpose programmable machine that can execute a list of instructions. The Von Neumann architecture is described as having a CPU, memory, control unit, and input/output unit. Register transfer language (RTL) represents the transfer of data between registers using symbols. Key components like the ALU, registers, and buses are explained in terms of their role in processing and transferring data and instructions.
Computer arithmetic in computer architectureishapadhy
The document discusses Flynn's Taxonomy, which classifies computer architectures based on the number of instruction and data streams. It proposes four categories: SISD, SIMD, MISD, and MIMD. SISD refers to a single instruction single data stream architecture, like the classical von Neumann model. SIMD uses a single instruction on multiple data streams, for applications like image processing. MIMD uses multiple instruction and data streams and is most common, allowing distributed computing across independent computers. The document also discusses parallel processing, pipeline processing in computers, and hazards that can occur in instruction pipelines.
This document discusses remote invocation and summarizes key aspects of remote procedure call (RPC). It describes RPC as extending normal function calling such that the called and calling procedures are not in the same address space. RPC involves invoking remote elements through methods like request-reply protocol and remote method invocation. The document outlines the steps of an RPC call, including how client and server stubs are used to package requests and unpack responses to allow remote procedures to be called like local procedures.
Domain Name Service (DNS) converts hostnames into IP addresses. It allows users to use easy-to-remember hostnames like "facebook.com" instead of difficult-to-remember IP addresses. DNS works hierarchically, with local DNS servers querying root servers, top-level domain servers like .com, and authoritative name servers to resolve hostnames into IP addresses in an iterative process. This document outlines the key functions and implementation of DNS.
The document discusses different architectural models for distributed systems including tiered, two-tier, three-tier, decentralized, structured (Chord), and hybrid architectures. It covers concepts like interaction models, failure models, and security models that are important for designing distributed systems. The interaction model accounts for latency, bandwidth, and clock synchronization issues. The failure model defines process and communication channel omission, arbitrary, and timing failures. The security model aims to protect objects, processes, and communication channels against unauthorized access.
The document discusses different models for distributed systems including physical, architectural and fundamental models. It describes the physical model which captures the hardware composition and different generations of distributed systems. The architectural model specifies the components and relationships in a system. Key architectural elements discussed include communicating entities like processes and objects, communication paradigms like remote invocation and indirect communication, roles and responsibilities of entities, and their physical placement. Common architectures like client-server, layered and tiered are also summarized.
Operating system support in distributed systemishapadhy
The document discusses operating system support and components. It states that an operating system must provide encapsulation, concurrent processing, and protection. It lists the main OS components as the process manager, thread manager, communication manager, memory manager, and supervisor. It also discusses process/thread concepts such as address spaces, creation of new processes, and threads in distributed systems for multi-threaded clients and servers.
A distributed system is a collection of independent computers that appears to users as a single coherent system. The document defines a distributed system and discusses its goals, including making resources accessible, achieving distribution transparency, openness, scalability, fault tolerance, concurrency, and security. Examples of distributed systems include distributed computing systems like cluster and cloud computing, distributed information systems, and distributed pervasive systems.
Beyond Degrees - Empowering the Workforce in the Context of Skills-First.pptxEduSkills OECD
Iván Bornacelly, Policy Analyst at the OECD Centre for Skills, OECD, presents at the webinar 'Tackling job market gaps with a skills-first approach' on 12 June 2024
How Barcodes Can Be Leveraged Within Odoo 17Celine George
In this presentation, we will explore how barcodes can be leveraged within Odoo 17 to streamline our manufacturing processes. We will cover the configuration steps, how to utilize barcodes in different manufacturing scenarios, and the overall benefits of implementing this technology.
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) CurriculumMJDuyan
(𝐓𝐋𝐄 𝟏𝟎𝟎) (𝐋𝐞𝐬𝐬𝐨𝐧 𝟏)-𝐏𝐫𝐞𝐥𝐢𝐦𝐬
𝐃𝐢𝐬𝐜𝐮𝐬𝐬 𝐭𝐡𝐞 𝐄𝐏𝐏 𝐂𝐮𝐫𝐫𝐢𝐜𝐮𝐥𝐮𝐦 𝐢𝐧 𝐭𝐡𝐞 𝐏𝐡𝐢𝐥𝐢𝐩𝐩𝐢𝐧𝐞𝐬:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
𝐄𝐱𝐩𝐥𝐚𝐢𝐧 𝐭𝐡𝐞 𝐍𝐚𝐭𝐮𝐫𝐞 𝐚𝐧𝐝 𝐒𝐜𝐨𝐩𝐞 𝐨𝐟 𝐚𝐧 𝐄𝐧𝐭𝐫𝐞𝐩𝐫𝐞𝐧𝐞𝐮𝐫:
-Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
THE SACRIFICE HOW PRO-PALESTINE PROTESTS STUDENTS ARE SACRIFICING TO CHANGE T...indexPub
The recent surge in pro-Palestine student activism has prompted significant responses from universities, ranging from negotiations and divestment commitments to increased transparency about investments in companies supporting the war on Gaza. This activism has led to the cessation of student encampments but also highlighted the substantial sacrifices made by students, including academic disruptions and personal risks. The primary drivers of these protests are poor university administration, lack of transparency, and inadequate communication between officials and students. This study examines the profound emotional, psychological, and professional impacts on students engaged in pro-Palestine protests, focusing on Generation Z's (Gen-Z) activism dynamics. This paper explores the significant sacrifices made by these students and even the professors supporting the pro-Palestine movement, with a focus on recent global movements. Through an in-depth analysis of printed and electronic media, the study examines the impacts of these sacrifices on the academic and personal lives of those involved. The paper highlights examples from various universities, demonstrating student activism's long-term and short-term effects, including disciplinary actions, social backlash, and career implications. The researchers also explore the broader implications of student sacrifices. The findings reveal that these sacrifices are driven by a profound commitment to justice and human rights, and are influenced by the increasing availability of information, peer interactions, and personal convictions. The study also discusses the broader implications of this activism, comparing it to historical precedents and assessing its potential to influence policy and public opinion. The emotional and psychological toll on student activists is significant, but their sense of purpose and community support mitigates some of these challenges. 
However, the researchers call for acknowledging the broader Impact of these sacrifices on the future global movement of FreePalestine.
This presentation was provided by Rebecca Benner, Ph.D., of the American Society of Anesthesiologists, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
Elevate Your Nonprofit's Online Presence_ A Guide to Effective SEO Strategies...TechSoup
Whether you're new to SEO or looking to refine your existing strategies, this webinar will provide you with actionable insights and practical tips to elevate your nonprofit's online presence.
1. Memory Hierarchy
• Memory is an essential component of a computer system; the system works more efficiently when extra storage is added.
• The total memory capacity can be viewed as a hierarchy of components.
• Main memory occupies the central position: it communicates directly with the CPU, and with auxiliary memory through an I/O processor.
• Cache increases the speed of processing by making required data and instructions available to the CPU at a rapid rate, since the CPU is faster than main memory.
• The I/O processor manages data transfer between auxiliary memory and main memory; the cache manages transfers between main memory and the CPU.
• The CPU has direct access to main memory and cache, but not to auxiliary memory.
• Cache memory is typically 5 to 10 times faster than main memory.
Isha padhy, Asst. Prof., CSE Dept.
3. Main Memory
Main memory consists of two kinds of memory:
1. RAM (random access memory) – volatile; its contents are destroyed when the power goes off. RAM can be static or dynamic. Static RAM stores binary information in flip-flops, so the information stays as long as power is applied. Dynamic RAM stores information as electric charge on capacitors; the stored charge tends to discharge with time.
2. ROM (read only memory) – non-volatile; stores programs that are permanently resident in the computer, such as the bootstrap loader.
The bootstrap loader is a program that starts the computer software (the operating system) when the power is turned on.
• PROM (Programmable Read Only Memory):
- allows the user to load the required programs once;
- faster and less expensive because it can be programmed directly by the user.
• EPROM (Erasable Programmable Read Only Memory): the contents of the memory can be erased and new data stored; in this case the whole contents must be erased at once. The chip is removed from the circuit for reprogramming, and erasing is done by exposing it to ultraviolet light.
• EEPROM (Electrically Erasable Programmable Read Only Memory): the contents of a particular location can be changed without affecting the contents of other locations.
4. • The size of the main memory is determined by the addressing scheme.
• Example: a 16-bit computer generates 16-bit addresses and can address up to 2^16 = 64K memory locations.
• With 32-bit addresses, the total capacity is 2^32 = 4G memory locations.
• Data transfer between main memory and the CPU takes place through two CPU registers:
- MAR: Memory Address Register
- MDR: Memory Data Register
• If the MAR is k bits long, the total number of addressable memory locations is 2^k.
• If the MDR is n bits long, then n bits of data are transferred in one memory cycle.
• Data transfer takes place over the address bus and the data bus.
• Control lines such as Read, Write, and Memory Function Complete (MFC) coordinate the transfer: the CPU needs to know when the desired memory operation (Read or Write) has been completed, and the line signalling this back to the CPU is called MFC.
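The relations above can be sketched in a few lines of Python (the function name and the 16-bit/8-bit example values are illustrative):

```python
# Sketch (illustrative values): addressable locations and bits moved per
# cycle follow directly from the MAR and MDR widths described above.
def memory_limits(mar_bits, mdr_bits):
    locations = 2 ** mar_bits   # a k-bit MAR addresses 2^k locations
    bits_per_cycle = mdr_bits   # an n-bit MDR moves n bits per memory cycle
    return locations, bits_per_cycle

locs, bits = memory_limits(16, 8)
print(locs)   # 65536 locations (64K)
print(bits)   # 8 bits per cycle
```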
5. - The word length of a computer is defined as the number of bits actually stored and retrieved in one main memory access. For example, in a byte-addressable computer generating 32-bit addresses, the high-order 30 bits determine which word is accessed and the low-order 2 bits specify which byte location within the word is involved.
- The addressable unit of information is called a memory word.
- When an address is assigned to each byte of information, the machine is called a byte-addressable computer.
- One memory word contains one or more bytes, each of which can be addressed individually.
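As a sketch of the example above (the helper name is made up; it assumes a power-of-two word size):

```python
# Sketch: splitting a 32-bit byte address into a word address and a byte
# offset for a byte-addressable machine with 4-byte words, as above.
def split_address(addr, bytes_per_word=4):
    offset_bits = bytes_per_word.bit_length() - 1  # 2 bits for 4-byte words
    word = addr >> offset_bits           # high-order 30 bits: which word
    byte = addr & (bytes_per_word - 1)   # low-order 2 bits: which byte
    return word, byte

print(split_address(0x1007))  # (1025, 3): word 0x401, byte 3 of that word
```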
6. • The processor initiates a memory operation by loading the appropriate address into the MAR.
• A Read operation sets the Read control line to 1; the content of the addressed location is placed in the MDR, and MFC is set to 1.
• A Write operation sets the Write control line to 1; the content of the MDR is placed in the specified memory location, and completion is indicated by setting MFC to 1.
• The speed of the memory unit is measured by:
1. Memory Access Time: the time elapsed between the initiation of an operation and its completion (the time between the Read signal and the MFC signal).
2. Memory Cycle Time: the minimum time delay required between the initiation of two successive memory operations (the time between two successive Read operations); it is slightly longer than the memory access time.
8. RAM chip
• The chip has one or more control inputs that select it only when required.
• A bi-directional data bus allows transfer of data either from memory (read) or to memory (write). This bus is built with three-state buffers: high (1), low (0), and high-impedance (open circuit).
• RAM capacity = 128 words of 8 bits each, so for 128 (2^7) words there are 7 address bits and an 8-bit data bus.
• Multiple select lines select the chip when several chips are present in the micro-computer.
• The chip is in operation when CS1 = 1 and CS2 = 0; the bar on top of the second select variable indicates that this input is enabled when it is 0.
• The data bus is in the high-impedance state when the select lines are not enabled or the read/write inputs are not enabled.
• When the WR input is enabled, data from the data bus is stored in the location specified by the address bus.
• When the RD input is enabled, the selected byte is placed onto the data bus.
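A minimal sketch of the chip-select behaviour described above, as a hypothetical Python model (function and variable names are invented; `mem` stands in for the 128x8 storage array):

```python
# Hypothetical model of the 128x8 RAM chip's control inputs.
# CS1 is active-high, CS2 is active-low (the barred select input).
def ram_chip(cs1, cs2_bar, rd, wr, addr, data_bus, mem):
    if not (cs1 == 1 and cs2_bar == 0):
        return "high-impedance"        # chip not selected
    if wr:
        mem[addr] = data_bus           # WR: store the data bus into addr
        return "written"
    if rd:
        return mem[addr]               # RD: place the selected byte on the bus
    return "high-impedance"            # selected, but no operation enabled

mem = [0] * 128
ram_chip(1, 0, 0, 1, 5, 0xAB, mem)         # write 0xAB to address 5
print(ram_chip(1, 0, 1, 0, 5, None, mem))  # read it back -> 171 (0xAB)
```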
9. ROM chip
• A ROM can only be read, so its data bus is always in output mode.
10. Memory Address Map
• The assignment of addresses to memory chips can be established by a table that specifies the memory address range assigned to each chip. This table, called the memory address map, describes how the memory is structured in a computer system, so that no chip's data can overwrite or corrupt another's.
• For the computer to function properly, its OS (operating system) must always be able to access the right parts of memory at the right times. When the computer first boots up, the memory map tells the OS how much memory is available.
• In the map below:
- Component: specifies RAM or ROM.
- Hexadecimal address: the range of addresses assigned to each chip.
- Address bus: 16 bits, of which 10 are used here; the other 6 are assigned 0.
- x represents lines that are connected to the address inputs of each chip.
• Each RAM chip has 128 addresses, so 7 address lines; the ROM chip has 512 addresses, so 9 lines.
• To distinguish between the 4 RAM chips, lines 8 and 9 are used; line 10 distinguishes RAM from ROM.
Memory address map for the micro-computer:

Component | Hexa address | Address bus (lines 10..1)
RAM 1     | 0000 - 007F  | 0 0 0 x x x x x x x
RAM 2     | 0080 - 00FF  | 0 0 1 x x x x x x x
RAM 3     | 0100 - 017F  | 0 1 0 x x x x x x x
RAM 4     | 0180 - 01FF  | 0 1 1 x x x x x x x
ROM       | 0200 - 03FF  | 1 x x x x x x x x x
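The decoding rules in this map can be sketched as follows (a hypothetical helper; bit positions follow the table, with line 10 corresponding to bit 9 of the address):

```python
# Sketch: decoding a 16-bit address according to the memory address map
# (four 128-byte RAM chips at 0000-01FF, one 512-byte ROM at 0200-03FF).
def decode(addr):
    if addr & 0x0200:                  # line 10 (bit 9) set -> ROM
        return ("ROM", addr & 0x1FF)   # 9 low bits address within the ROM
    chip = (addr >> 7) & 0x3           # lines 8-9 select one of 4 RAM chips
    return (f"RAM {chip + 1}", addr & 0x7F)  # 7 low bits within the chip

print(decode(0x0085))  # ('RAM 2', 5)
print(decode(0x0200))  # ('ROM', 0)
```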
12. Auxiliary memory
• Auxiliary memory is the lowest-cost, highest-capacity, and slowest-access storage in a computer system. It is where programs and data are kept for long-term storage or when not in immediate use. Such memories tend to occur in two types: sequential access (data must be accessed in a linear sequence) and direct access (data may be accessed in any sequence). The most common sequential storage device is the magnetic tape, whereas direct-access devices include rotating drums, disks, CD-ROMs, and DVD-ROMs.
• The important characteristics of any device are access mode, access time, transfer rate, capacity, and cost.
• Access time: the average time required to reach a storage location in memory and obtain its contents.
Access time = seek time (time required to position the read/write head at a location) + transfer time (time required to move data to or from the device).
• Storage is organized in records or blocks, and reading/writing is always done on entire records. Transfer rate is the number of blocks the device can transfer per second after the head is placed in position.
• Examples: magnetic tapes, magnetic disks.
13. The average time to access a target sector is approximated by:
- Taccess = Tavg seek + Tavg rotation + Tavg transfer
Seek time (Tavg seek):
- time to position the heads over the cylinder containing the target sector;
- typical Tavg seek = 9 ms.
Rotational latency (Tavg rotation):
- time waiting for the first bit of the target sector to pass under the read/write head;
- Tavg rotation = 1/2 x 1/RPM x 60 sec/1 min.
Transfer time (Tavg transfer):
- time to read the bits in the target sector;
- Tavg transfer = 1/RPM x 1/(avg # sectors/track) x 60 sec/1 min.
• Transfers between the memory and the processor involve either single words of data or large blocks of words.
• The speed and efficiency of these transfers impact the performance of the system.
• Performance is given by two parameters: latency and bandwidth.
• Memory latency: the amount of time it takes to transfer a word of data to or from the memory.
• Bandwidth: the number of bits or bytes transferred in one second.
• Memory cycle time: the minimum time delay between two independent memory operations (e.g., two successive memory read operations).
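Plugging numbers into the formula above (the 9 ms average seek comes from the slide; the 7200 RPM and 400 sectors/track figures are assumed, illustrative values):

```python
# Sketch: average disk access time, Taccess = Tseek + Trotation + Ttransfer.
def disk_access_ms(rpm, avg_seek_ms, sectors_per_track):
    t_rotation = 0.5 * (1 / rpm) * 60 * 1000  # half a revolution, in ms
    t_transfer = (1 / rpm) * (1 / sectors_per_track) * 60 * 1000  # one sector
    return avg_seek_ms + t_rotation + t_transfer

print(disk_access_ms(7200, 9.0, 400))  # about 13.19 ms: seek time dominates
```

Note how rotational latency and transfer time are small next to the seek time, which is why block transfers amortize the positioning cost.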
14. Associative memory
• Generally data is stored in tabular format in memory, so to retrieve an item from the table two approaches can be used:
- choosing a sequence of addresses, reading the contents of each address, and comparing the item with the contents until a match is found; or
- searching for the item using the data itself, or part of it.
• A memory unit accessed by content is called an associative memory.
• Part of the required word is written into the argument register; the associative memory holds the actual words, and a search process selects all the matching words and marks them for reading.
• The comparison is done simultaneously for all words.
• It is costlier than RAM because search logic circuits must be implemented.
15. Hardware organization
- Array: m words of n bits each.
- The match register has m bits.
- The argument (A) and key (K) registers are n bits each.
- Each word in memory is compared bitwise with the word in the argument register.
- The words that match set the corresponding bit in the match register to 1.
- Reading is then done sequentially for all the words that matched.
- The key register is used as a masking register: all the bits of the argument register are compared with the words in memory if all the bits in the key register are 1; otherwise only those bit positions whose corresponding key-register bit is 1 take part in the comparison.
- Example:
  A:      101 110001
  K:      111 000000
  Word 1: 001 110001 (no match)
  Word 2: 101 110101 (match)
17. - Cells have two subscripts ij: the ith word, jth bit position in the word.
- Aj is compared with the jth bit of every word wherever Kj = 1.
- Mi = 1 if all the (unmasked) bits match, otherwise 0.
- Match logic: word i equals the argument in A if Aj = Fij for j = 1, 2, ..., n. Two bits are equal if both are 1 or both are 0:
xj = Aj Fij + Aj' Fij'
- Without masking, Mi = x1 x2 ... xn (all xj must be 1 for Mi to be 1).
- With masking: xj + Kj' = xj if Kj = 1, and = 1 if Kj = 0. When Kj = 1 the comparison result xj is used; when Kj = 0 the bit position is ignored, since the term is forced to 1.
- So Mi = (x1 + K1')(x2 + K2') ... (xn + Kn').
When Kj = 0 the term is 1; when Kj = 1 the term takes the value of xj.
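The match logic above can be sketched with bitwise operators (XNOR computes xj for every bit at once, and OR with the complemented key applies the mask; all names are illustrative):

```python
# Sketch: masked associative match. A word matches when every bit position
# with K=1 agrees with the argument A, i.e. Mi = product of (xj + Kj').
def match(word, argument, key, n_bits):
    mask = (1 << n_bits) - 1
    x = ~(word ^ argument)        # xj = Aj*Fij + Aj'*Fij' (bitwise XNOR)
    mi = (x | ~key) & mask        # (xj + Kj') for each bit, clipped to n bits
    return mi == mask             # Mi = 1 only if every term is 1

A = 0b101110001
K = 0b111000000                   # compare only the three high bits
print(match(0b001110001, A, K, 9))  # False (no match, as in the example)
print(match(0b101110101, A, K, 9))  # True  (match)
```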
19. Cache memory
In large programs, many instructions are executed repeatedly: loops, nested loops, and procedures that call other procedures over and over. Instructions in a few localized areas of the program are executed repeatedly, while the remainder of the program is accessed relatively rarely. This phenomenon is referred to as locality of reference.
Caching is the technique of storing a copy of data temporarily in rapidly accessible storage local to the CPU and separate from bulk storage.
Cache is a faster device, typically 5 to 10 times faster than main memory.
It reduces the data transfer between main memory and the CPU.
20. Operation of cache memory
• Assumptions:
1. The CPU does not know that a cache sits between it and main memory.
2. The CPU issues read/write operations as if directly on main memory.
3. The CPU generates an address, and the block of data containing the specified location is transferred into the cache.
- In computer science, locality of reference, also called the principle of locality, is the term applied to situations where the same value or related storage locations are frequently accessed. There are three basic types:
Temporal locality: a resource that is referenced at one point in time is referenced again soon afterwards.
Spatial locality: the likelihood of referencing a storage location is greater if a storage location near it has recently been referenced.
Sequential locality: storage is accessed sequentially, in descending or ascending order.
- The performance of cache memory is measured by the hit ratio. If the required word is found in the cache, it is a hit; otherwise it is a miss. The hit ratio is the number of hits divided by the total number of cache references. The miss penalty is the time taken to move the required data from main memory to cache memory.
- The mapping of memory blocks to cache blocks is done by a mapping function.
- The cache is limited in size. If the cache is full and the requested memory word is not in it, a replacement algorithm decides which block to remove to provide space for the newly referenced word.
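A minimal sketch of how the hit ratio determines the average access time (the 20 ns cache and 100 ns main-memory figures are assumed, not from the slides):

```python
# Sketch: effective access time as a weighted average over hits and misses.
def effective_access_ns(hit_ratio, cache_ns=20, memory_ns=100):
    # hits are served from the cache; misses pay the main-memory access time
    return hit_ratio * cache_ns + (1 - hit_ratio) * memory_ns

print(effective_access_ns(0.9))  # about 28 ns with a 90% hit ratio
```

Even a modest miss rate dominates the average, which is why a high hit ratio matters so much.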
21. When the CPU makes a Write operation
There are two ways to do it:
1. The cache location and the main memory location are updated simultaneously. This is called the store-through or write-through method.
2. Update the cache location only.
- During the replacement process, the cache block is written back to main memory. This is the write-back method.
- This information is maintained with the help of a flag bit.
- When a write operation is done on a cache block, this bit is set to 1.
- At replacement time the bit is checked: if it is set to 1, the cache block is written back to main memory; otherwise it is not.
- If the addressed word is not in the cache, it is written directly into main memory.
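The two write policies above can be sketched as follows. The single-entry "cache line" structure and helper names are illustrative assumptions; only the policy logic follows the slide.

```python
# Sketch of the two write policies described above (write-through and
# write-back with a flag/dirty bit). Names are illustrative assumptions.

class WriteBackLine:
    def __init__(self):
        self.data = None
        self.dirty = False      # the "flag bit" from the slide

def write_through(cache, memory, addr, value):
    cache[addr] = value         # update cache location...
    memory[addr] = value        # ...and main memory simultaneously

def write_back(line, value):
    line.data = value
    line.dirty = True           # flag bit set to one on write

def replace(line, memory, addr):
    # At replacement time, check the flag bit; write back only if set.
    if line.dirty:
        memory[addr] = line.data
        line.dirty = False

cache, memory = {}, {}
write_through(cache, memory, 0, 42)
line = WriteBackLine()
write_back(line, 7)
replace(line, memory, 1)
print(memory)  # {0: 42, 1: 7}
```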
22. How a memory block is mapped to a cache block
Mapping function
- transfers blocks of data to cache memory.
There are three mapping functions:
1. Direct mapping
- A particular block of main memory can be brought only to a particular block of cache memory, so it is not flexible.
2. Associative mapping
- Any block of main memory can potentially reside in any cache block position.
3. Block-set-associative mapping
- Blocks of cache are grouped into sets, and the mapping allows a block of main memory to reside in any block of a specific set.
23. Example
Consider a cache memory:
• Cache size = 4K words (4096 words)
• Number of address lines required for 4K words = 12 bits
• Block size = 32 words
• Total number of blocks in cache = 128
• To select one block out of 128 blocks, 7 address bits are needed.
• To select one word within a block, 5 address bits are needed.
Consider a main memory:
• Main memory capacity = 64K words
• Number of address lines required for 64K words = 16 bits
• Block size = 32 words
• Total number of blocks in main memory = 2048
• To select a block in main memory, 11 bits are used.
• To select a word within a block of main memory, 5 bits are required.
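The figures in the example above can be recomputed directly (all sizes are taken from the slide):

```python
import math

# Recomputing the cache/main-memory example figures above.
cache_words = 4 * 1024           # 4K words
block_size = 32                  # words per block
cache_blocks = cache_words // block_size
print(cache_blocks)                    # 128 blocks in the cache
print(int(math.log2(cache_words)))     # 12 address bits for the cache
print(int(math.log2(cache_blocks)))    # 7 bits to select a cache block
print(int(math.log2(block_size)))      # 5 bits to select a word in a block

mm_words = 64 * 1024             # 64K words
mm_blocks = mm_words // block_size
print(mm_blocks)                       # 2048 blocks in main memory
print(int(math.log2(mm_words)))        # 16 address bits for main memory
print(int(math.log2(mm_blocks)))       # 11 bits to select a MM block
```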
24. Associative cache
• Main memory is divided into a number of blocks. Only the pages that are currently required are present in MM; the others are brought on demand from secondary memory. Since MM is divided into blocks, whenever anything is transferred from MM to cache memory, a complete block is transferred. The block size can be 16 bytes or 32 bytes.
• The address generated by the CPU is divided into 2 parts.
25. • Cache memory also has blocks, and each cache block is the same size as an MM block. For example, a block may contain 8 bytes of data (8 words of 1 byte each) in both CM and MM.
• Every block is identified by the block number in the address generated by the CPU.
• Example: assume a machine with 16 address bits and 8-byte blocks, so the 16 bits split into 3 bits for the byte within a block and 13 bits for the block number.
• When a block is transferred from MM to CM, we need to check whether that particular block number is present in CM or not, so the CM has another field containing this block number, called the TAG field. The number of bits in the TAG field equals the number of bits in the block-number field, and the number of TAG entries equals the number of cache entries. A valid bit indicates whether the block number in the corresponding tag entry is valid; initially the tags contain garbage values, so all valid bits are 0.
• So start by checking the valid bit; if it is 1, compare the corresponding tag bits with the block number. Searching all the TAG entries sequentially would take a lot of time, so the speed would be low.
• Hardware implementation: an Argument Register is connected to the address generated by the CPU, from which it takes the block-number part. The Argument Register is connected to the TAG memory, where the block number is compared with all the TAG entries in parallel. A match bit is 1 when there is a match.
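The associative lookup described above can be sketched in software. Real hardware compares the Argument Register against all TAG entries in parallel; the loop below does the same comparison sequentially. The tag values and the address are illustrative assumptions; the 13-bit block number / 3-bit byte split follows the slide's example.

```python
# Software sketch of the associative (parallel tag) search described above.

def associative_lookup(tags, valid, block_no):
    """Return the cache index holding block_no, or None on a miss."""
    for i, (tag, v) in enumerate(zip(tags, valid)):
        if v and tag == block_no:   # valid bit set AND tag matches
            return i
    return None

tags = [0b0000000000101, 0b0000000000111]   # assumed TAG memory contents
valid = [1, 0]                              # second entry still holds garbage
addr = 0b0000000000101_110                  # 16 bits: 13-bit block no + 3-bit byte
block_no = addr >> 3                        # Argument Register gets this part
print(associative_lookup(tags, valid, block_no))  # 0  (hit in cache line 0)
```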
27. Direct mapping
• The CPU places a main memory address, and from this address the corresponding cache address is generated.
• Cache address = (MM address) mod (number of cache locations)
e.g. 23 mod 10 = 3 (1st fig)
e.g. 21 mod 8 = 5 (2nd fig)
- Considered in reverse, cache location 3 could hold the data of memory location 3, 13, 23 or 33, so we must know which memory location's data the cache is holding.
- Initially, when the system is switched on, the cache contains invalid data.
- Each cache entry must therefore hold a tag (extra information, a portion of the DRAM address, e.g. which one out of the 4 candidate locations), the data, and a valid bit (indicating whether the data is valid).
- Example: for cache location 2, valid = 1, data = contents of memory location 32, so tag = 3.
VALID | TAG | DATA
28. Direct mapped
• In associative mapping, any block of MM can be kept in any block of CM; in direct mapping, a block of MM can be kept only in one particular place, not anywhere.
• A disadvantage of associative mapping is that the argument register costs more than RAM because of the added comparison logic.
• The MM address is divided into 3 fields:
TAG | BLOCK | BYTE
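The three-field split above can be sketched with bit arithmetic. The field widths (4/7/5) follow the 4K-word cache / 64K-word main memory example of slide 23; the address value itself is an illustrative assumption.

```python
# Splitting a 16-bit main-memory address into the direct-mapping fields
# TAG | BLOCK | BYTE. Widths follow the earlier example (4 + 7 + 5 = 16).

TAG_BITS, BLOCK_BITS, BYTE_BITS = 4, 7, 5

def split_address(addr):
    byte = addr & ((1 << BYTE_BITS) - 1)
    block = (addr >> BYTE_BITS) & ((1 << BLOCK_BITS) - 1)
    tag = addr >> (BYTE_BITS + BLOCK_BITS)
    return tag, block, byte

tag, block, byte = split_address(0b1010_0000011_00001)  # assumed address
print(tag, block, byte)  # 10 3 1
```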
29. [Figure: direct-mapped cache organization showing the data cache (groups 0-255), the valid-bit memory, the tag memory, and main memory.]
The 16-bit address is divided as:
TAG (5 bits) | GROUP (8 bits) | BYTE (3 bits)
Block number (as used in associative mapping) = TAG + GROUP.
Each TAG value covers 256 (2^8) groups, and each group has 2^3 = 8 words.
30. Set-associative mapping
MS (memory size) = 64 B
CS (cache size) = 32 B
Block size (BS) = 4 B
Set size = 2 blocks (2 blocks in a set), also called 2-way set-associative
Cache blocks (lines) = CS/BS = 8 blocks
Number of sets = cache blocks / set size = 8/2 = 4 sets
In MM, 64B/4B = 16 blocks. In MM there are no sets, only blocks.
[Figure: 8 cache blocks (0-7) organized into 4 sets of 2, mapped against the 16 main-memory blocks (0-15).]
The address is divided as: TAG | SET NO. | BYTE
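The placement rule for this example can be sketched as follows; all sizes come from the slide, and the mapping (set = block number mod number of sets) is the standard set-associative rule.

```python
# Recomputing the 2-way set-associative example above.
MS, CS, BS, WAYS = 64, 32, 4, 2     # memory, cache, block sizes (bytes); ways

cache_blocks = CS // BS             # 8 cache lines
sets = cache_blocks // WAYS         # 4 sets
mm_blocks = MS // BS                # 16 blocks in main memory

def place(block_number):
    """A MM block maps to set (block_number mod sets); the remaining
    high-order part of the block number becomes its tag."""
    return block_number % sets, block_number // sets   # (set, tag)

print(cache_blocks, sets, mm_blocks)  # 8 4 16
print(place(13))                      # (1, 3): block 13 goes to set 1, tag 3
```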
31. Virtual Memory
• A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory and it is a section of
a hard disk that's set up to emulate the computer's RAM.
• Virtual memory gives programmers the illusion that they have a very large memory and provides a mechanism for dynamically translating program-generated addresses into correct main memory locations. The translation or mapping is handled automatically by the hardware by means of a mapping table.
• An address used by the programmer is a virtual address (virtual memory addresses)
and the set of such addresses is the Address Space. An address in main memory is
called a location or physical address. The set of such locations is called the
memory space. Thus, the address space is the set of addresses generated by the
programs as they reference instructions and data; the memory space consists of
actual main memory locations directly addressable for processing. Generally, the
address space is larger than the memory space.
• Consider a main memory of 32K words (K = 1024) = 2^15 words and an auxiliary memory of 1024K words = 2^20 words. Thus, we need 15 bits to address physical memory and 20 bits for virtual memory (virtual memory can be as large as the available auxiliary storage). Here the auxiliary memory has the capacity to store information equivalent to 32 main memories.
Address space N=1024K
Memory space M=32K
32. • In a multiprogrammed computer system, programs and data are transferred to and from auxiliary and main memory based on the demands imposed by the CPU.
• An instruction refers to a 20-bit virtual address, but physical memory addresses are specified with 15 bits, so a table is needed to map a 20-bit virtual address to a 15-bit physical address. Mapping is a dynamic operation: every address is translated immediately as the word is referenced by the CPU.
33. Address Mapping Using Pages
• The memory-table implementation of address mapping is simplified if the information in the address space and the memory space is divided into groups of fixed size.
• Blocks (or page frames): the physical memory is broken down into groups of equal size called blocks, which may range from 64 to 4096 words each.
• Pages: groups of the address space of the same size.
• Example: consider a computer with address space = 8K (2^3 × 2^10) and memory space = 4K (2^2 × 2^10).
• If we split both spaces into groups of 1K words, we obtain 8 pages and 4 blocks.
• A virtual address has 13 bits. Since each page consists of 2^10 = 1024 words, the high-order 3 bits specify one of the 8 pages and the low-order 10 bits give the line address within the page. A memory-space address has 12 bits (the 2 MSBs for the block number, 10 bits for the word).
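The page-number/line split for the 13-bit virtual address in the example above can be sketched as (the specific address used is an assumption):

```python
# Splitting the 13-bit virtual address from the example above:
# 3 high-order bits select one of 8 pages, 10 low-order bits the line.

PAGE_BITS, LINE_BITS = 3, 10

def split_virtual(addr):
    page = addr >> LINE_BITS
    line = addr & ((1 << LINE_BITS) - 1)
    return page, line

page, line = split_virtual(0b101_0000000111)   # illustrative 13-bit address
print(page, line)  # 5 7
```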
34. Address Mapping Using Pages
• The mapping from address space to memory space becomes easy if a virtual address is represented by two numbers: a page number and a line within the page. In a computer with 2^p words per page, p bits are used to specify the line address and the remaining high-order bits of the virtual address specify the page number.
• The memory page table consists of 8 words, one for each page.
• The address in the page table denotes the page number, and the content of the word gives the block number where that page is stored in main memory.
• A presence bit of 0 indicates the page is not available in main memory; 1 indicates the page has been transferred to main memory.
• The table shows that pages 1, 2, 5 and 6 are currently available in main memory, in blocks 3, 0, 1 and 2 respectively.
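The page-table lookup described above can be sketched using the slide's own contents (pages 1, 2, 5, 6 in blocks 3, 0, 1, 2); the function name and the dict representation are illustrative assumptions.

```python
# Sketch of the memory page table above: page -> (presence bit, block number).
page_table = {0: (0, None), 1: (1, 3), 2: (1, 0), 3: (0, None),
              4: (0, None), 5: (1, 1), 6: (1, 2), 7: (0, None)}

def translate(page, line, line_bits=10):
    present, block = page_table[page]
    if not present:
        raise RuntimeError("page fault")   # page still in auxiliary memory
    return (block << line_bits) | line     # physical address

print(translate(5, 7))   # page 5 is in block 1; (1 << 10) | 7 = 1031
```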
36. Associative memory page table
• A random-access page table is inefficient with respect to storage utilization. For example, consider an address space of 1024K words and a memory space of 32K words. If each page or block contains 1K words, the number of pages is 1024 and the number of blocks is 32. The memory page table must then hold 1024 words, of which only 32 locations have the presence bit equal to 1; at any given time, at least 992 locations are empty and not in use.
• We can instead make the number of words in the page table equal to the number of blocks in MM. This method can be implemented by means of an associative memory in which each word contains a page number together with its corresponding block number.
• The page field in each associative-memory word is compared with the page-number bits in an argument register (which holds the page number from the virtual address); if a match occurs, the word is read from memory and its corresponding block number is extracted.
37. Page Replacement
• A virtual memory system is a combination of hardware and software techniques. The memory management software decides:
• which page in main memory should be removed to make room for a new page,
• when a new page is to be transferred from auxiliary memory to main memory, and
• where the page is to be placed in main memory.
• When a program starts execution, one or more pages are transferred into main memory and the page table is set to indicate their positions. The program executes from main memory until it attempts to reference a page that is still in auxiliary memory. This condition is called a page fault. When a page fault occurs, execution of the current program is suspended until the required page is brought into memory. Since loading a page from auxiliary memory to main memory is basically an I/O operation, the OS assigns this task to the I/O processor. In the meantime, control is transferred to the next program in memory that is waiting for the CPU. Later, when the memory block has been assigned, the original program can resume. A page fault thus signifies that the page referenced by the program is not in main memory, and a new page is then transferred from auxiliary memory. If main memory is full, a page must be removed from a memory block to make room for the new page. The policy for choosing which pages to remove is determined by the replacement algorithm in use.
• The 2 most common replacement algorithms are FIFO and LRU (Least Recently Used).
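The two replacement algorithms named above can be sketched as page-fault counters. These are minimal sketches, assuming a main memory of `frames` blocks and an illustrative reference trace.

```python
from collections import OrderedDict, deque

# Minimal sketches of FIFO and LRU page replacement (trace is assumed).

def fifo_faults(trace, frames):
    mem, queue, faults = set(), deque(), 0
    for page in trace:
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.popleft())   # evict the oldest-loaded page
            mem.add(page)
            queue.append(page)
    return faults

def lru_faults(trace, frames):
    mem, faults = OrderedDict(), 0
    for page in trace:
        if page in mem:
            mem.move_to_end(page)              # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)        # evict least recently used
            mem[page] = True
    return faults

trace = [1, 2, 3, 1, 4, 2]
print(fifo_faults(trace, 3), lru_faults(trace, 3))  # 4 5
```

With this trace the policies differ: after the hit on page 1, LRU considers page 2 the oldest and evicts it, causing an extra fault that FIFO avoids.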
38. Memory management Hardware
• A memory management system is a collection of hardware and software procedures for managing programs in memory.
• Features of an MMS are:
1. A facility that maps logical memory references to physical memory addresses.
2. A provision for sharing common programs stored in memory among different users.
3. Protection of information against unauthorised access between users.
• Here, instead of a fixed page size, programs are divided into parts called segments.
• A segment is a set of logically related instructions or data elements associated with a single name. Segments are generated by the programmer or by the OS. Examples of segments are arrays of elements, functions, etc.
• Sharing of programs: for example, many users who want to compile their programs can share a single copy of the compiler instead of each keeping a separate copy in their own memory.
• The address generated by a segmented program is called a logical address.
• The difference between a logical and a virtual address is that the logical address space is associated with variable-length segments rather than fixed-length pages.
39. Segment Page Mapping
• The length of each segment is allowed to grow and shrink according to the needs of the program being executed. One way of specifying the length of a segment is by associating with it a number of equal-sized pages.
• Logical address = segment + page + word, where
segment: the segment number,
page: the page within the segment,
word: the specific word within the page.
A segment can have one page or more, and the size of the segment is decided accordingly.
Here, mapping of a logical address to a physical address is done using two tables: the segment table and the page table. The entry in the segment table is a pointer to the page-table base, which is added to the page number (given in the logical address). The sum points to an entry in the page table, and the content of that entry is the number of the physical block. The concatenation of the block field with the word field produces the final physical address.
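The two-table translation described above can be sketched as follows. The table contents and field width are illustrative assumptions; the structure (segment table entry = page-table base, base + page selects the block, block concatenated with word) follows the slide.

```python
# Sketch of segment-page mapping: segment table -> page-table base,
# (base + page) -> block, block concatenated with word.

WORD_BITS = 8                        # assumed word-within-page field width

segment_table = {2: 100}             # segment 2 -> page-table base 100 (assumed)
page_table = {100 + 0: 7,            # (base + page) -> physical block number
              100 + 1: 9}

def translate(segment, page, word):
    base = segment_table[segment]            # 1st access: segment table
    block = page_table[base + page]          # 2nd access: page table
    return (block << WORD_BITS) | word       # concatenate block and word

print(translate(2, 1, 5))   # block 9, word 5 -> (9 << 8) | 5 = 2309
```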
40. • A memory reference from the CPU would require 3 accesses to memory: one for the segment table, one for the page table, and a third for the word in main memory. This would increase the memory access delay, so a fast associative memory is used to hold the most recently used table entries. This memory is called a translation lookaside buffer (TLB).
41. 1. How many 128 × 8 RAM chips are required for a memory capacity of 2048 bytes?
• How many address lines are required for the above configuration?
2. An address space is specified by 24 bits and a memory space by 16 bits.
- How many words are there in the address space and in the memory space?