The document summarizes different types of semiconductor memory technologies, including dynamic RAM (DRAM), static RAM (SRAM), read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, synchronous DRAM (SDRAM), Rambus DRAM (RDRAM), and double data rate SDRAM (DDR SDRAM). It describes the basic operation and characteristics of each technology. Key aspects like refresh requirements, cell structures, write capabilities, and performance are compared between DRAM and SRAM.
Processor Organization and Architecture, by Dhaval Bagal
This document discusses processor organization and architecture. It covers the stored program concept where both instructions and data are stored in memory. It describes the Von Neumann architecture, which includes a main memory, ALU, control unit, and I/O. It discusses the registers used in processor control and execution like the program counter, accumulator, and instruction register. Finally, it examines addressing modes like immediate, direct, indirect, register, displacement, and stack addressing.
This document summarizes different types of semiconductor memory, including dynamic RAM (DRAM), static RAM (SRAM), read-only memory (ROM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), RAMBUS RAM, cache DRAM, and error correction techniques. It describes the basic structure and operation of DRAM and SRAM memory cells, as well as advanced organizations like SDRAM that can improve memory performance.
This document provides an overview of different types of computer memory, including RAM, ROM, and hybrid memory. It describes the characteristics of SRAM and DRAM, the most common types of RAM. DRAM is cheaper and slower than SRAM and must be periodically refreshed. The document outlines the evolution of DDR RAM standards and their internal structures. ROM types include mask ROM, PROM, and EPROM; ROM is read-only memory programmed either during manufacture or in special programming modes. Hybrid memory such as flash memory has qualities of both RAM and ROM.
This document discusses the classification and hierarchy of memory systems in computers. It begins by explaining that computers utilize a memory hierarchy to efficiently store and access programs and data, since not all information is needed by the CPU at once. The hierarchy includes register sets, cache memory, main memory (RAM and ROM), hard disks, magnetic disks, and magnetic tapes, which differ in access time, transfer rate, and capacity. Faster but smaller and more expensive memory like registers are at the top, while larger but slower memory like hard disks are at the bottom. Understanding this memory hierarchy is important for knowing how computers access and manage programs and data.
Operating System Paging and Segmentation, by hamza haseeb
This document discusses paging and segmentation in operating systems. Paging divides memory into fixed-size pages for faster data access and allows physical addresses to be non-contiguous. It avoids external fragmentation but suffers from internal fragmentation and consumes memory for page tables. Segmentation divides memory into segments of varying lengths and permissions for memory protection. It avoids internal fragmentation, uses less memory for segment tables, and lends itself to data sharing and protection, but requires a more costly memory management algorithm.
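The page-number/offset split that paging relies on can be illustrated with a minimal Python sketch; the 4 KiB page size here is an assumption for illustration, not a value taken from the summaries above:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages, a common but not universal choice

def split_address(logical_addr):
    """Split a logical address into (page number, offset within page)."""
    return logical_addr // PAGE_SIZE, logical_addr % PAGE_SIZE

# Logical address 8300 falls 108 bytes into page 2 (8300 = 2*4096 + 108).
page, offset = split_address(8300)
```

With a power-of-two page size the same split can also be done with a shift and a mask, which is what paging hardware effectively does.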
A semiconductor memory system can experience hard failures from permanent physical defects or soft errors from random, non-destructive events that change memory cell contents without permanent damage. Error correcting codes add redundant bits when data is stored to detect and possibly correct errors by regenerating codes when data is read out and comparing them. ECC memory can detect common internal data corruption to prevent data loss in applications where reliability is critical.
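As a concrete (hypothetical) instance of the redundant-bit scheme described above, a Hamming(7,4) code adds three parity bits to four data bits; recomputing the parity on read yields a syndrome that locates any single-bit error, which can then be corrected:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute parity on read; a non-zero syndrome is the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]   # recovered data bits
```

Real ECC memory typically uses a wider SECDED variant (e.g. 8 check bits per 64 data bits), but the encode/compare/correct cycle is the same idea.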
The document provides an introduction to computer architecture. It discusses binary numbers and the bit and byte units used to measure digital information. It describes the major components of a computer system, including the central processing unit (CPU), memory, hard drives, and input/output components. The CPU functions are explained, including fetching and executing instructions through a cycle and using registers, caches, and memory in a hierarchy. Direct access memory like RAM is faster than sequential access storage like hard disks.
This document discusses multiprocessor architecture types and limitations. It describes tightly coupled and loosely coupled multiprocessing systems. Tightly coupled systems have shared memory that all CPUs can access, while loosely coupled systems have each CPU connected through message passing without shared memory. Examples given are symmetric multiprocessing (SMP) and Beowulf clusters. Interconnection structures like common buses, multiport memory, and crossbar switches are also outlined. The advantages of multiprocessing include improved performance from parallel processing, increased reliability, and higher throughput.
Virtual memory allows a program to access more memory than is physically installed on the system by storing seldom-used data on disk. The CPU uses virtual addresses while physical addresses identify the actual memory location. The memory management unit (MMU) translates virtual to physical addresses using page tables stored in memory. If a page is not in memory, a page fault occurs and the OS loads the required page from disk before resuming execution. Virtual memory thus provides the illusion of a larger memory to programs through demand paging and address translation mechanisms.
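The translation and page-fault behavior summarized above can be sketched as follows; the dictionary-based page table and the 4 KiB page size are simplifying assumptions, not details from the source:

```python
PAGE_SIZE = 4096  # assumed page size for illustration

class PageFault(Exception):
    """Raised when a virtual page has no resident physical frame."""

def translate(page_table, virtual_addr):
    """MMU-style lookup: virtual address -> physical address."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table.get(vpn)
    if frame is None:
        # In a real OS, the page-fault handler would load the page from
        # disk, update the page table, and retry the faulting access.
        raise PageFault(vpn)
    return frame * PAGE_SIZE + offset
```

A usage example: with `page_table = {0: 5, 1: 2}`, virtual address 4100 (page 1, offset 4) translates to physical address 2*4096 + 4 = 8196, while any access to page 3 raises `PageFault`.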
The document discusses various topics related to memory management in operating systems including swapping, contiguous memory allocation, paging, segmentation, virtual memory concepts like demand paging, page replacement, and thrashing. It provides details on page tables, segmentation hardware, logical to physical address translation, and performance aspects of demand paging. The key aspects covered are memory management techniques to overcome fragmentation and enable efficient use of limited main memory.
The document discusses input-output (I/O) modules in computers. It explains that I/O modules play a crucial role in allowing communication between a computer's central processing unit (CPU) and external devices. I/O modules connect devices to the computer's system bus and control the exchange of data between devices and main memory or the CPU. They help address issues like differing data formats and speeds between devices and the CPU. The document outlines various I/O techniques like programmed I/O, interrupt-driven I/O, and direct memory access (DMA) that use I/O modules to facilitate input and output.
Cache coherence refers to maintaining consistency between data stored in caches and the main memory in a system with multiple processors that share memory. Without cache coherence protocols, modified data in one processor's cache may not be propagated to other caches or memory. There are different levels of cache coherence - from ensuring all processors see writes instantly to allowing different ordering of reads and writes. Cache coherence aims to ensure reads see the most recent writes and that write ordering is preserved across processors. Directory-based and snooping protocols are commonly used to maintain coherence between caches in multiprocessor systems.
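A toy write-invalidate scheme of the kind snooping protocols implement might look like the sketch below; the two-cache setup and write-through policy are simplifying assumptions chosen to keep the example short:

```python
class Cache:
    """A trivially simple cache: a dict mapping addresses to values."""
    def __init__(self):
        self.lines = {}

caches = [Cache(), Cache()]
memory = {0x10: 1}

def read(cpu, addr):
    c = caches[cpu]
    if addr not in c.lines:        # miss: fetch the line from memory
        c.lines[addr] = memory[addr]
    return c.lines[addr]

def write(cpu, addr, value):
    for i, c in enumerate(caches): # snoop: invalidate all other copies
        if i != cpu:
            c.lines.pop(addr, None)
    caches[cpu].lines[addr] = value
    memory[addr] = value           # write-through, for simplicity
```

After CPU 0 writes, CPU 1's stale copy is gone, so its next read misses and fetches the new value: exactly the guarantee coherence protocols exist to provide. Real MSI/MESI protocols track per-line states and defer the memory update (write-back), but the invalidate-on-write idea is the same.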
This document provides an overview of memory management concepts in computer systems. It discusses classification of memory types, memory addressing, memory management units, allocation techniques, swapping, fragmentation, page replacement algorithms, segmentation, hardware implementation, memory mapping, byte ordering, and common memory problems. The document contains 23 pages of content on these memory management topics.
Memory management is the method by which an operating system handles and allocates primary memory. It tracks the status of memory locations as allocated or free, and determines how memory is distributed among competing processes. Memory can be allocated contiguously or non-contiguously. Contiguous allocation assigns consecutive blocks of memory to a process, while non-contiguous allocation allows a process's memory blocks to be scattered across different areas using techniques like paging or segmentation. Paging divides processes and memory into fixed-size pages and frames to allow non-contiguous allocation while reducing fragmentation.
A computer system needs main memory to execute programs, but main memory is too small to hold all programs and data, and it loses its contents when power is removed. Secondary storage therefore backs up main memory, providing additional space beyond what main memory can hold and ensuring data is not lost if power fails.
The document discusses different memory management techniques used in operating systems. It begins with an overview of processes entering memory from an input queue. It then covers binding of instructions and data to memory at compile time, load time, or execution time. Key concepts discussed include logical vs physical addresses, the memory management unit (MMU), dynamic loading and linking, overlays, swapping, contiguous allocation, paging using page tables and frames, and fragmentation. Hierarchical paging, hashed page tables, and inverted page tables are also summarized.
Memory refers to computer components that hold digital data and programs. There are several types of memory that differ in speed and volatility. Primary storage like RAM is directly connected to the CPU and is volatile, requiring constant power. Secondary storage like hard disks have greater capacity but are slower and non-volatile. Tertiary storage provides even larger capacity for archiving data. Memory is also characterized as volatile, like RAM, or non-volatile, like ROM.
Drivers act as translators between hardware and operating systems, allowing communication. There are three types of devices: dedicated devices which can only be used by one process at a time like printers; shared devices which can be used by multiple processes simultaneously through device management; and virtual devices which are dedicated devices that appear shared through techniques like print spooling. Understanding these device categories is necessary for proper device management in operating systems.
The document discusses computer memory organization and the memory hierarchy. It describes different types of memory like RAM, ROM, cache memory and secondary storage. It explains the memory hierarchy as fast but expensive memory like registers and cache being used for frequently accessed data, while slower but cheaper memory like hard disks are used for long term and bulk storage. The principle of locality is discussed where programs tend to access data and instructions that are near each other in memory. Cache memory aims to improve performance by storing recently accessed data from main memory.
The document discusses instruction execution in a computer processor. It describes how a processor executes instructions by fetching them from memory using the program counter. The instruction is placed in the instruction register and decoded by the control unit. The control unit then selects components like the ALU to carry out operations. Common components involved in instruction execution are the program counter, memory address register, instruction register, memory buffer register, control unit, arithmetic logic unit, and accumulator. The execution cycle involves fetching the instruction from the memory address held in the program counter, decoding it, and then executing it.
This document discusses cache memory and virtual memory. It defines cache memory as fast memory that acts as a buffer between the CPU and RAM, holding frequently used data and instructions. It also describes how cache memory works to reduce average memory access time. Virtual memory is defined as a technique where secondary memory is used like primary memory. It discusses how virtual memory uses pages and page tables to map virtual memory addresses to physical memory locations, transferring pages between RAM and disk as needed. The document provides information on types of cache memory, applications of cache memory, and types and applications of virtual memory.
The document discusses the basics of semiconductor memories. It explains that memory controllers establish information flow between memory and the CPU. Memory buses connect memory to the controller. Newer systems have frontside and backside buses connecting different components. During boot-up, the BIOS and operating system are loaded from ROM and hard drive into RAM for fast access by the CPU. Applications and files are also loaded into and removed from RAM as needed. The document compares different types of volatile and non-volatile memory in terms of speed, size, and cost.
The document discusses various types of external memory including magnetic disk, RAID, removable optical disks, and magnetic tape. It provides details on magnetic disk technology including materials, read/write mechanisms, data organization, and disk formatting. It also describes different RAID levels from 0 to 6 and their characteristics. Finally, it covers optical storage technologies like CD-ROM, CD-R, DVD, and compares their capabilities.
The document discusses virtual memory and demand paging. It describes how virtual memory maps logical addresses to physical addresses using page tables and frames. On a page fault, the OS finds a free frame, loads the missing page, and updates the tables. It also covers page replacement algorithms like FIFO, LRU and their implementations. The document concludes with examples of Windows NT and Solaris virtual memory systems.
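The LRU replacement policy mentioned above can be simulated in a few lines of Python; the reference string in the test is a hypothetical example, not one drawn from the source:

```python
from collections import OrderedDict

def simulate_lru(refs, num_frames):
    """Count page faults under LRU replacement for a page-reference string."""
    frames = OrderedDict()  # keys = resident pages, insertion order = recency
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # miss: page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults
```

An `OrderedDict` keeps the recency ordering for free; a FIFO simulator is the same loop with the `move_to_end` call removed, which is one way to compare the two policies on the same reference string.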
1. The document discusses different types of semiconductor memory including RAM, ROM, PROM, EPROM, EEPROM, and flash memory.
2. It describes the operation and structure of both dynamic RAM (DRAM) and static RAM (SRAM). DRAM uses capacitors to store bits and requires refreshing, while SRAM uses flip-flops and does not require refreshing.
3. The document also covers various RAM technologies including synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Rambus DRAM (RDRAM), error correction codes, and cache DRAM.
This document discusses internal memory organization and technologies. It begins with an overview of memory cell operation for DRAM and SRAM. It then covers various memory types including DRAM, SRAM, ROM, PROM, EPROM, and EEPROM. Advanced DRAM technologies like synchronous DRAM, Rambus DRAM, and double data rate SDRAM are also summarized. The document concludes with sections on error correction techniques and cache DRAM.
Chapter 2 - Computer Evolution and Performance, by César de Souza
The document discusses the evolution of computer hardware from the 1940s onwards. It describes early computers like ENIAC which used vacuum tubes and was programmed manually via switches. The stored program concept developed by von Neumann separated the program and data into memory. Transistors replaced vacuum tubes, making computers smaller, cheaper and more reliable. Integrated circuits led to generations of computers with increasing numbers of components on a single chip due to Moore's Law. Memory speed could not keep up with rising CPU speeds, leading to cache memory and other performance improvements.
Chapter 3 - Top Level View of Computer / Function and Interconnection, by César de Souza
The document discusses computer system buses and their role in connecting different components of a computer system. It describes the functions of different types of buses including data, address, and control buses. It also explains bus arbitration techniques and timing protocols for synchronous and asynchronous buses. Specific bus architectures like PCI are discussed with details on their components, commands, and arbitration process.
Memory Interleaving: Low-Order and High-Order Interleaving, by Jawwad Rafiq
Memory interleaving splits memory into independent banks that can process read/write requests in parallel to increase throughput. It interleaves the address space so consecutive addresses are assigned to different banks. Low order interleaving uses the low order bits of an address to identify the memory module and high order bits for the word address within each module, allowing block access in a pipelined fashion. This improves the effective memory bandwidth.
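Low-order interleaving amounts to a simple bit split of the address; the four-bank configuration below is an assumption for illustration:

```python
NUM_BANKS = 4            # assumed; must be a power of two for the bit tricks
BANK_BITS = 2            # log2(NUM_BANKS)

def bank_and_word(addr):
    """Low-order interleaving: low bits select the bank, high bits the word."""
    return addr & (NUM_BANKS - 1), addr >> BANK_BITS

# Consecutive addresses 0,1,2,3,4,... hit banks 0,1,2,3,0,... in turn,
# so a sequential block access keeps all four banks busy in a pipeline.
```

High-order interleaving would instead take the bank number from the top bits of the address, placing consecutive addresses in the same bank, which favors independent accesses by separate devices over sequential block transfers.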
A computer uses a hierarchy of internal and external memory systems. Internal memory includes RAM, ROM, and cache, which provide fast access but are more expensive per byte. RAM allows independent access to each memory location and is used for main memory. ROM permanently stores data and is used for boot programs. Cached memory uses SRAM for faster access than RAM. External memory includes hard disks and USB drives, which provide large, inexpensive storage but are much slower to access.
1. The document discusses the steps for completing business messages, including revising, producing, proofreading, and distributing.
2. It covers revising the content, organization, style, and tone and promoting readability with sentences, paragraphs, and headings.
3. Producing the message may involve using multimedia elements, page layout, graphics, and document design principles.
4. The final steps are proofreading for errors and choosing a distribution method based on cost, convenience, time, and privacy concerns.
The document discusses various types of external memory used in computer systems, including magnetic disks, optical disks, magnetic tape, and RAID (redundant array of independent disks). Magnetic disks come in varieties such as hard disks, floppy disks, and removable hard disks. Optical disks discussed include CD-ROM, CD-R/W, DVD, and their uses, capacities, and read/write capabilities. RAID systems provide data redundancy across multiple disks for reliability and performance.
digital logic circuits, digital component memory unitRai University
The document discusses different types of computer memory. It defines memory as a device that stores binary data for processing. There are two main types: random access memory (RAM), which allows random read/write access, and sequential access memory, which requires sequential access. RAM is further divided into static RAM, dynamic RAM, read-only memory (ROM), programmable ROM, erasable programmable ROM, and electrically erasable programmable ROM. ROM is used to permanently store programs, while RAM is used for temporary data storage. Flash memory is a type of EEPROM that uses standard voltages and allows block writing for easier updating.
This document discusses memory subsystems and hierarchy. It begins by describing the memory hierarchy which includes registers, main memory (RAM), and external memory. It then discusses different types of memory in terms of read/write capability, volatility, and erasure mechanisms. The document outlines cache organization and mapping techniques including direct mapping, set associative, and fully associative mapping. It provides examples of address mapping for each technique. The document also discusses RAM and ROM types as well as memory subsystem organization.
This document summarizes key points from Chapter 3 of William Stallings' book "Computer Organization and Architecture". It discusses the top-level view of computer function and interconnection. The main components of a computer are the control unit, arithmetic logic unit, main memory, and input/output. Programs are sequences of steps that are executed via control signals. Buses are used to connect these components and transfer data, addresses, and control signals. Interrupts allow other devices to interrupt normal program execution.
The document introduces the concept of interleaving, which is the placement of a substrate between or around food products. It defines interleaving and underleaving, and provides examples of common interleaving applications like sub setups, lay flat bacon, and frozen burger patties. Interleaving provides benefits like portion control and sanitary handling for food processors and end users.
This document discusses memory characteristics including location, capacity, access methods, performance, and types. It covers the memory hierarchy from registers to external memory. Key points include that dynamic RAM needs refreshing to prevent data loss while static RAM maintains data without refreshing, and that memory performance is determined by access time, cycle time, and transfer rate. The document provides diagrams of DRAM and SRAM cell structures.
RAM (random access memory) is the primary storage area in a computer for programs and data that the processor can access. It is temporary memory that stores bytes of data and instructions until the computer is powered down. RAM needs to match the processor's abilities and most operating systems require at least 1GB of RAM to run smoothly. ROM (read only memory) provides permanent storage for instructions and firmware needed to boot the computer. It stores data using fixed circuits that don't erase when powered down, including the BIOS (basic input/output system) which contains hardware information and boot instructions.
This document discusses error detection and correction techniques for digital data transmission. It introduces different types of errors that can occur, such as single-bit and burst errors. It describes how redundancy is used to detect and correct errors using block coding techniques. Specific examples are provided to illustrate how block codes are constructed and used to detect and correct errors. Key concepts discussed include linear block codes, Hamming distance, minimum Hamming distance, and how these relate to the error detection and correction capabilities of different coding schemes.
Magnetic disks remain the most important component of external memory. Data is recorded on disks through magnetic read and write heads. Disks are organized into tracks and sectors for efficient data access. RAID systems provide data redundancy or higher performance through striping and mirroring across multiple disks. Optical disks like CDs and DVDs store data through microscopic pits and lands read by lasers, and use constant linear velocity to increase storage capacity toward the disk edge.
The document discusses the differences between computer architecture and organization. Architecture refers to attributes visible to programmers like instruction sets, while organization refers to how features are implemented internally. It also discusses the functional view of a computer in terms of data movement, control, storage, and processing. Finally, it outlines the chapters of the book which will cover topics like CPU structure, instruction sets, and computer evolution.
This document provides an overview of parallel computing and parallel processing. It discusses:
1. The three types of concurrent events in parallel processing: parallel, simultaneous, and pipelined events.
2. The five fundamental factors for projecting computer performance: clock rate, cycles per instruction (CPI), execution time, million instructions per second (MIPS) rate, and throughput rate.
3. The four programmatic levels of parallel processing from highest to lowest: job/program level, task/procedure level, interinstruction level, and intrainstruction level.
This document provides information about memory hierarchy and cache design. It discusses the different types of memory technologies like SRAM and DRAM that are used at different levels of the memory hierarchy. It describes the basic operations of DRAM and SRAM. It also covers cache organization concepts like direct-mapped caches, cache hits, misses, and handling reads and writes. The goal of the memory hierarchy is to provide fast access to frequently used data while also providing large storage capacity.
This document provides an introduction to software engineering topics including:
1. What software engineering is, its importance, and the software development lifecycle activities it encompasses.
2. The many different types of software systems that exist and how software engineering approaches vary depending on the application.
3. Key fundamentals of software engineering that apply universally, including managing development processes, dependability, and reusing existing software components.
The document summarizes different types of computer memory. It describes RAM as volatile memory that can be randomly accessed. There are two main types of RAM: DRAM uses capacitors and must be refreshed, while SRAM uses flip-flops and does not need refreshing. The document also discusses cache memory, ROM, EPROM, EEPROM, flash memory, memory organization, errors and interleaving.
This document provides an overview of the design of a dual port SRAM using Verilog HDL. It begins with an introduction describing the objectives and accomplishments of the project. It then reviews relevant literature on SRAM design. The document describes the FPGA design flow and introduces Verilog. It provides the design and operation of the SRAM, and discusses simulation results and conclusions. The proposed 8-bit dual port SRAM utilizes negative bitline techniques during write operations to improve write ability and reduce power consumption and area compared to conventional designs.
Computer memory, also known as RAM, is temporary storage that allows the computer to perform tasks by holding instructions and data in an easily accessible location. There are two main types of computer memory: volatile and non-volatile. Volatile memory, like RAM, loses its contents when power is removed while non-volatile types like ROM retain data without power. Over time, RAM technologies have evolved from SIMMs to DIMMs and SDRAM to DDR, DDR2, and DDR3, with each generation offering faster speeds and higher capacities. Proper identification and installation of the correct RAM type is important for system functionality and performance.
Computer memory, also known as RAM, is temporary storage that allows the computer to perform tasks by holding instructions and data in an easily accessible location. There are two main types of computer memory: volatile and non-volatile. Volatile memory, like RAM, loses its contents when power is removed while non-volatile types like ROM retain data without power. Over time, RAM technologies have evolved from SIMMs to DIMMs and SDRAM to DDR, DDR2, and DDR3, with each generation offering faster speeds and higher capacities. Proper identification and installation of the correct RAM type is important for system functionality and performance.
The document discusses different types of RAM and ROM. It describes SRAM, DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, DDR4 SDRAM, PROM, EPROM, and EEPROM. It provides details on their characteristics such as speed, cost, refresh requirements, packaging, and compatibility. The document also gives tips for selecting memory such as checking the maximum supported size and speed, and ensuring it matches the system board configuration.
Primary memory, also known as main memory or RAM, is the area where the CPU stores and accesses data for processing. It has limited capacity and is volatile, meaning data is lost when power is turned off. Primary memory includes RAM and ROM. RAM allows reading and writing and is used as the computer's working memory, while ROM is read-only and stores firmware. Examples of RAM include SRAM, DRAM, SDRAM, and DDR RAM, which differ in refresh rates, speeds, and capacities. ROM includes PROM, EPROM, and EEPROM, which have different methods of programming and erasing data. Primary memory provides fast access for processing but has limited storage compared to secondary
This document discusses different types of computer memory and storage devices. It defines primary and secondary storage. Primary storage includes RAM and ROM, which temporarily and permanently store data respectively. RAM is volatile and includes DRAM, SRAM, and RDRAM. ROM is non-volatile and includes PROM, EPROM, EEPROM, and flash memory. The document provides details on each type of memory, including their characteristics and uses.
This document discusses different types of computer memory, including RAM, ROM, and virtual memory. It describes the key characteristics and uses of dynamic RAM, static RAM, ROM, EEPROM, flash memory, cache memory, and virtual memory. The main types of RAM discussed are DRAM, SRAM, SDRAM, and RDRAM. DRAM needs refreshing but is cheaper than SRAM. ROM types include mask ROM, PROM, EPROM, and EEPROM. Virtual memory allows programs to access memory as if it were one unified virtual space.
Main memory is made up of RAM and ROM chips. RAM is read-write memory that can be accessed randomly; data is lost when power is off. There are static and dynamic RAM types. Static RAM retains data indefinitely if powered, dynamic RAM must be periodically refreshed. ROM is read-only and permanently stores data. There are mask, PROM, EPROM and EEPROM ROM types that can be programmed at different stages. Cache memory uses fast static RAM. Main memory often uses dynamic RAM for its ability to store large amounts of data at lower cost despite slower access.
https://peoplelaptop.com/difference-between-ram-and-rom/
RAM and ROM:Difference between RAM and ROM in tabular form is given here. Visit now to check the detailed RAM vs ROM difference with their comparisons.
Here are the answers:
1. SRAM is static RAM that does not require periodic refreshing. It is faster but more expensive than DRAM, which is dynamic RAM that must be refreshed periodically.
2. The functions of ROM are to permanently store basic input/output instructions for the computer and to hold firmware like the BIOS. ROM maintains its data without power and can usually only be read from, not written to.
3. The different types and levels of CPU caches are:
- Level 1 (L1) cache, which is the smallest and fastest cache built directly into the CPU chip.
- Level 2 (L2) cache, which is external to the CPU chip but still on the processor
RAM allows stored data to be accessed randomly in any order. It is a type of volatile memory that does not permanently store data and loses its contents when powered off. There are two main types of RAM: static RAM and dynamic RAM. Dynamic RAM needs to be refreshed to maintain its contents while static RAM does not. RAM technologies have evolved from FPM DRAM to EDO DRAM, SDRAM, DDR SDRAM, and RDRAM to increase bandwidth and transfer rates. The memory hierarchy includes CPU registers, cache memory levels L1-L3, main memory, virtual memory, and storage. Future RAM technologies aim to be smaller, faster, and cheaper through innovations like RRAM and Z-RAM.
RAM is a type of volatile memory that is used for temporary storage. It allows data to be accessed randomly in any order. There are different types of RAM such as static RAM, dynamic RAM, SDRAM, and DDR SDRAM. RAM is part of a memory hierarchy that includes processor registers, cache memory levels L1-L3, main memory, and virtual memory. Future RAM technologies aim to provide memory that is smaller, faster, and cheaper than current memory chips.
This document provides an overview of different types of computer memory including cache memory, RAM, and ROM. It discusses the levels of cache memory (levels 1, 2, and 3) and how they work together. It describes the two main types of RAM - SRAM and DRAM - and explains the differences between them. It also outlines the three main types of ROM - PROM, EPROM, and EEPROM - and how their data can be programmed and erased.
The document discusses memory architectures including Harvard and Von Neumann architectures. It provides details on each:
- The Harvard architecture has separate memory for instructions and data allowing parallel access, but is more expensive.
- The Von Neumann architecture shares memory for instructions and data through a common bus, making it simpler but limiting parallelism.
- It also discusses memory types like RAM, SRAM, and DRAM and cache memory levels L1 and L2. RAM is volatile memory used for main memory, while SRAM is faster but more expensive and used for caches. DRAM is cheaper than SRAM but needs refresh cycles. Caches improve performance by storing recently used data and instructions from main memory.
The document discusses memory hierarchy and technologies. It describes the different levels of memory from fastest to slowest as processor registers, cache memory (levels 1 and 2), main memory, and secondary storage. The main memory technologies discussed are SRAM, DRAM, ROM, flash memory, and magnetic disks. Cache memory aims to speed up access time by exploiting locality of reference and uses mapping functions like direct mapping to determine cache locations.
This document summarizes different types of RAM:
- RAM is random access memory that temporarily stores information and is volatile, losing data when power is cut. It is faster than hard disk storage.
- The main types are SRAM, DRAM, FPM RAM, EDO RAM, SDR RAM, RDRAM, DDR RAM, and DDR5 RAM. Each was introduced at a different time and has different speeds, costs, power usage, and capacities.
- RAM can be installed as modules in computers, with DIMMs for desktops and smaller SODIMMs for laptops. SODIMMs have become the standard for laptop memory.
This presentation discusses Dynamic RAM (DRAM) and its types. It begins by explaining what RAM is and how it provides faster access for the CPU than the hard disk. It then covers that DRAM is the main memory in computers and must be refreshed periodically to prevent data loss. The main types of DRAM discussed are SDRAM, DDR, RDRAM, and DRAM memory modules. Specific details are provided about the features and operation of each DRAM type. Major memory manufacturers are also listed.
1) Primary computer memory, also known as main memory, is directly accessible to the CPU and stores instructions and data. It is divided into RAM and ROM.
2) RAM can be static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains data as long as power is on, while DRAM requires regular refreshing to retain data.
3) ROM types include PROM, EPROM, and EEPROM. PROM cannot be erased, EPROM can be erased with UV light, and EEPROM can be electrically erased and rewritten.
5. Dynamic RAM (DRAM)
RAM technology is divided into two technologies:
• Dynamic RAM (DRAM)
• Static RAM (SRAM)
DRAM:
• Made with cells that store data as charge on capacitors
• Presence or absence of charge in a capacitor is interpreted as a binary 1 or 0
• Requires periodic charge refreshing to maintain data storage
• The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied
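The leak-and-refresh behavior can be sketched with a toy model. The `LEAK` and `THRESHOLD` numbers below are purely illustrative assumptions, not real device physics: the stored charge decays each time step, and a periodic refresh rewrites the bit before it falls below the sense threshold.

```python
# Toy DRAM cell model; LEAK and THRESHOLD are illustrative values only.
THRESHOLD = 0.5   # sense amplifier decides 1 vs 0 at this charge level
LEAK = 0.8        # fraction of charge remaining after each time step

def tick(charge):
    return charge * LEAK              # charge leaks away over time

def refresh(charge):
    return 1.0 if charge >= THRESHOLD else 0.0   # read out, then rewrite

charge = 1.0                          # a stored binary 1
for _ in range(4):
    charge = tick(charge)
assert charge < THRESHOLD             # without refresh, the 1 is lost

charge = 1.0
for _ in range(4):
    charge = refresh(tick(charge))    # refreshing each step keeps the bit
assert charge == 1.0
```

The second loop is the point of the slide: the cell only remembers its value because something keeps rewriting it.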
7. Static RAM (SRAM)
• A digital device that uses the same logic elements used in the processor
• Binary values are stored using traditional flip-flop logic gate configurations
• Will hold its data as long as power is supplied to it
9. SRAM versus DRAM
Both volatile:
• Power must be continuously supplied to the memory to preserve the bit values
Dynamic cell (DRAM):
• Simpler to build, smaller
• More dense (smaller cells = more cells per unit area)
• Less expensive
• Requires the supporting refresh circuitry
• Tends to be favored for large memory requirements
• Used for main memory
Static cell (SRAM):
• Faster
• Used for cache memory (both on and off chip)
10. Read-Only Memory (ROM)
• Contains a permanent pattern of data that cannot be changed or added to
• No power source is required to maintain the bit values in memory
• Data or program is permanently in main memory and never needs to be loaded from a secondary storage device
• Data is actually wired into the chip as part of the fabrication process
Disadvantages:
• No room for error: if one bit is wrong, the whole batch of ROMs must be thrown out
• The data insertion step includes a relatively large fixed cost
11. Programmable ROM (PROM)
• A less expensive alternative to ROM
• Nonvolatile and may be written into only once
• Writing process is performed electrically and may be performed by the supplier or customer at a time later than the original chip fabrication
• Special equipment is required for the writing process
• Provides flexibility and convenience
• The ROM remains attractive for high-volume production runs
12. Read-Mostly Memory
EPROM (erasable programmable read-only memory):
• Erasure process can be performed repeatedly
• More expensive than PROM, but has the advantage of the multiple update capability
EEPROM (electrically erasable programmable read-only memory):
• Can be written into at any time without erasing prior contents
• Combines the advantage of non-volatility with the flexibility of being updatable in place
• More expensive than EPROM
Flash memory:
• Intermediate between EPROM and EEPROM in both cost and functionality
• Uses an electrical erasing technology, but does not provide byte-level erasure
• Microchip is organized so that a section of memory cells is erased in a single action or “flash”
17. Interleaved Memory
• Composed of a collection of DRAM chips grouped together to form a memory bank
• Each bank is independently able to service a memory read or write request
• K banks can service K requests simultaneously, increasing memory read or write rates by a factor of K
• If consecutive words of memory are stored in different banks, the transfer of a block of memory is speeded up
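Storing consecutive words in different banks is what low-order interleaving provides: the low-order address bits select the bank, the remaining bits select the word within the bank. A minimal sketch, with the bank count K = 4 assumed for illustration:

```python
K = 4  # number of banks, assumed for illustration

def map_address(addr, k=K):
    """Low-order interleaving: low bits pick the bank, high bits the word."""
    return addr % k, addr // k        # (bank, offset within that bank)

# Consecutive addresses rotate through the banks, so a block of K words
# can be serviced by all K banks at once.
banks = [map_address(a)[0] for a in range(8)]
assert banks == [0, 1, 2, 3, 0, 1, 2, 3]
```

Because a sequential block read touches every bank exactly once per K words, the banks can work in parallel and the block transfer speeds up by roughly a factor of K.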
18. Error Correction
Hard failure:
• Permanent physical defect
• Memory cell or cells affected cannot reliably store data, but become stuck at 0 or 1 or switch erratically between 0 and 1
• Can be caused by harsh environmental abuse, manufacturing defects, and wear
Soft error:
• Random, non-destructive event that alters the contents of one or more memory cells
• No permanent damage to memory
• Can be caused by power supply problems and alpha particles
25. Advanced DRAM Organization
• One of the most critical system bottlenecks when using high-performance processors is the interface to main internal memory
• The traditional DRAM chip is constrained both by its internal architecture and by its interface to the processor’s memory bus
• A number of enhancements to the basic DRAM architecture have been explored: SDRAM, DDR-DRAM, and RDRAM
Table 5.3: Performance Comparison of Some DRAM Alternatives
26. Synchronous DRAM (SDRAM)
• One of the most widely used forms of DRAM
• Exchanges data with the processor synchronized to an external clock signal, running at the full speed of the processor/memory bus without imposing wait states
• With synchronous access, the DRAM moves data in and out under control of the system clock:
  • The processor or other master issues the instruction and address information, which is latched by the DRAM
  • The DRAM then responds after a set number of clock cycles
  • Meanwhile the master can safely do other tasks while the SDRAM is processing
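That latch-then-wait protocol reduces to a one-line timing sketch. The latency value below is an assumption for illustration (real parts program this "CAS latency" into a mode register):

```python
CAS_LATENCY = 3   # assumed number of bus clocks; real parts vary

def sdram_data_ready(issue_clock, latency=CAS_LATENCY):
    # The master latches command + address at issue_clock; data appears a
    # fixed number of clocks later, and the master is free in between.
    return issue_clock + latency

assert sdram_data_ready(100) == 103
```

The fixed, clock-counted response is what lets the master schedule other work instead of sitting in wait states.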
30. RDRAM
• Developed by Rambus
• Adopted by Intel for its Pentium and Itanium processors
• Has become the main competitor to SDRAM
• Chips are vertical packages with all pins on one side
• Exchanges data with the processor over 28 wires no more than 12 centimeters long
• Bus can address up to 320 RDRAM chips and is rated at 1.6 GBps
• Bus delivers address and control information using an asynchronous block-oriented protocol:
  • Gets a memory request over the high-speed bus
  • Request contains the desired address, the type of operation, and the number of bytes in the operation
32. Double Data Rate SDRAM (DDR SDRAM)
• SDRAM can only send data once per bus clock cycle
• Double-data-rate SDRAM can send data twice per clock cycle, once on the rising edge of the clock pulse and once on the falling edge
• Developed by the JEDEC Solid State Technology Association (the Electronic Industries Alliance’s semiconductor-engineering-standardization body)
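The two-transfers-per-clock point translates directly into peak-bandwidth arithmetic. The 100 MHz / 64-bit figures below are illustrative sample numbers; they happen to correspond to a DDR-200 module, which is labeled PC-1600 after its peak rate in MB/s:

```python
def ddr_peak_mb_per_s(bus_clock_mhz, bus_width_bits):
    # Two transfers per clock: one on the rising edge, one on the falling edge.
    transfers_per_s = bus_clock_mhz * 1_000_000 * 2
    return transfers_per_s * (bus_width_bits // 8) // 1_000_000

# 100 MHz bus, 64-bit (8-byte) width -> 1600 MB/s peak
assert ddr_peak_mb_per_s(100, 64) == 1600
```

A plain SDRAM on the same bus would move half as much, since it transfers only on one clock edge.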
34. Cache DRAM (CDRAM)
• Developed by Mitsubishi
• Integrates a small SRAM cache onto a generic DRAM chip
• The SRAM on the CDRAM can be used in two ways:
  • As a true cache consisting of a number of 64-bit lines; cache mode is effective for ordinary random access to memory
  • As a buffer to support the serial access of a block of data
35. Summary: Internal Memory (Chapter 5)
Semiconductor main memory:
• Organization
• DRAM and SRAM
• Types of ROM
• Chip logic
• Chip packaging
• Module organization
Interleaved memory
Error correction:
• Hard failure
• Soft error
• Hamming code
Advanced DRAM organization:
• Synchronous DRAM
• Rambus DRAM
• DDR SDRAM
• Cache DRAM
Editor's Notes
We begin this chapter with a survey of semiconductor main memory subsystems, including ROM, DRAM, and SRAM memories. Then we look at error control techniques used to enhance memory reliability. Following this, we look at more advanced DRAM architectures.
In earlier computers, the most common form of random-access storage for computer main memory employed an array of doughnut-shaped ferromagnetic loops referred to as cores. Hence, main memory was often referred to as core, a term that persists to this day. The advent of, and advantages of, microelectronics has long since vanquished the magnetic core memory. Today, the use of semiconductor chips for main memory is almost universal. Key aspects of this technology are explored in this section.
The basic element of a semiconductor memory is the memory cell. Although a variety of electronic technologies are used, all semiconductor memory cells share certain properties:
• They exhibit two stable (or semistable) states, which can be used to represent binary 1 and 0.
• They are capable of being written into (at least once), to set the state.
• They are capable of being read to sense the state.
Figure 5.1 depicts the operation of a memory cell. Most commonly, the cell has three functional terminals capable of carrying an electrical signal. The select terminal, as the name suggests, selects a memory cell for a read or write operation. The control terminal indicates read or write. For writing, the other terminal provides an electrical signal that sets the state of the cell to 1 or 0. For reading, that terminal is used for output of the cell’s state. The details of the internal organization, functioning, and timing of the memory cell depend on the specific integrated circuit technology used and are beyond the scope of this book, except for a brief summary. For our purposes, we will take it as given that individual cells can be selected for reading and writing operations.
All of the memory types that we will explore in this chapter are random access. That is, individual words of memory are directly accessed through wired-in addressing logic.
Table 5.1 lists the major types of semiconductor memory. The most common is referred to as random-access memory (RAM). This is, in fact, a misuse of the term, because all of the types listed in the table are random access. One distinguishing characteristic of memory that is designated as RAM is that it is possible both to read data from the memory and to write new data into the memory easily and rapidly. Both the reading and writing are accomplished through the use of electrical signals.
The other distinguishing characteristic of RAM is that it is volatile. A RAM must be provided with a constant power supply. If the power is interrupted, then the data are lost. Thus, RAM can be used only as temporary storage. The two traditional forms of RAM used in computers are DRAM and SRAM.
Figure 5.2a is a typical DRAM structure for an individual cell that stores 1 bit. The address line is activated when the bit value from this cell is to be read or written. The transistor acts as a switch that is closed (allowing current to flow) if a voltage is applied to the address line and open (no current flows) if no voltage is present on the address line.
For the write operation, a voltage signal is applied to the bit line; a high voltage represents 1, and a low voltage represents 0. A signal is then applied to the address line, allowing a charge to be transferred to the capacitor.
For the read operation, when the address line is selected, the transistor turns on and the charge stored on the capacitor is fed out onto a bit line and to a sense amplifier. The sense amplifier compares the capacitor voltage to a reference value and determines if the cell contains a logic 1 or a logic 0. The readout from the cell discharges the capacitor, which must be restored to complete the operation.
Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analog device. The capacitor can store any charge value within a range; a threshold value determines whether the charge is interpreted as 1 or 0.
Figure 5.2b is a typical SRAM structure for an individual cell. Four transistors (T1, T2, T3, T4) are cross connected in an arrangement that produces a stable logic state. In logic state 1, point C1 is high and point C2 is low; in this state, T1 and T4 are off and T2 and T3 are on. In logic state 0, point C1 is low and point C2 is high; in this state, T1 and T4 are on and T2 and T3 are off. Both states are stable as long as the direct current (dc) voltage is applied. Unlike the DRAM, no refresh is needed to retain data.
As in the DRAM, the SRAM address line is used to open or close a switch. The address line controls two transistors (T5 and T6). When a signal is applied to this line, the two transistors are switched on, allowing a read or write operation. For a write operation, the desired bit value is applied to line B, while its complement is applied to line B̄. This forces the four transistors (T1, T2, T3, T4) into the proper state. For a read operation, the bit value is read from line B.
Both static and dynamic RAMs are volatile; that is, power must be continuously supplied to the memory to preserve the bit values. A dynamic memory cell is simpler and smaller than a static memory cell. Thus, a DRAM is more dense (smaller cells = more cells per unit area) and less expensive than a corresponding SRAM. On the other hand, a DRAM requires the supporting refresh circuitry. For larger memories, the fixed cost of the refresh circuitry is more than compensated for by the smaller variable cost of DRAM cells. Thus, DRAMs tend to be favored for large memory requirements. A final point is that SRAMs are somewhat faster than DRAMs. Because of these relative characteristics, SRAM is used for cache memory (both on and off chip), and DRAM is used for main memory.
When only a small number of ROMs with a particular memory content is needed, a less expensive alternative is the programmable ROM (PROM). Like the ROM, the PROM is nonvolatile and may be written into only once. For the PROM, the writing process is performed electrically and may be performed by a supplier or customer at a time later than the original chip fabrication. Special equipment is required for the writing or “programming” process. PROMs provide flexibility and convenience. The ROM remains attractive for high-volume production runs.
Another variation on read-only memory is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations but for which nonvolatile storage is required. There are three common forms of read-mostly memory: EPROM, EEPROM, and flash memory.
The erasable programmable read-only memory (EPROM) is read and written electrically, as with PROM. However, before a write operation, all the storage cells must be erased to the same initial state by exposure of the packaged chip to ultraviolet radiation. Erasure is performed by shining an intense ultraviolet light through a window that is designed into the memory chip. This erasure process can be performed repeatedly; each erasure can take as much as 20 minutes to perform. Thus, the EPROM can be altered multiple times and, like the ROM and PROM, holds its data virtually indefinitely. For comparable amounts of storage, the EPROM is more expensive than PROM, but it has the advantage of the multiple update capability.
A more attractive form of read-mostly memory is electrically erasable programmable read-only memory (EEPROM). This is a read-mostly memory that can be written into at any time without erasing prior contents; only the byte or bytes addressed are updated. The write operation takes considerably longer than the read operation, on the order of several hundred microseconds per byte. The EEPROM combines the advantage of non-volatility with the flexibility of being updatable in place, using ordinary bus control, address, and data lines. EEPROM is more expensive than EPROM and also is less dense, supporting fewer bits per chip.
Another form of semiconductor memory is flash memory (so named because of the speed with which it can be reprogrammed). First introduced in the mid-1980s, flash memory is intermediate between EPROM and EEPROM in both cost and functionality.
Like EEPROM, flash memory uses an electrical erasing technology. An entire flash memory can be erased in one or a few seconds, which is much faster than EPROM. In addition, it is possible to erase just blocks of memory rather than an entire chip. Flash memory gets its name because the microchip is organized so that a section of memory cells is erased in a single action or “flash.” However, flash memory does not provide byte-level erasure. Like EPROM, flash memory uses only one transistor per bit, and so achieves the high density (compared with EEPROM) of EPROM.
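The byte-update-versus-block-erase contrast can be sketched in a few lines. Everything here is an illustrative assumption (the `BLOCK` size, the list-based memory model, and both helper functions), not a real device interface:

```python
BLOCK = 4  # assumed flash erase-block size (bytes), for illustration only

def eeprom_write(mem, addr, val):
    mem[addr] = val                            # byte-level update in place

def flash_write(mem, addr, val):
    start = (addr // BLOCK) * BLOCK
    saved = mem[start:start + BLOCK]           # read the whole block first
    mem[start:start + BLOCK] = [0xFF] * BLOCK  # erase it in one "flash"
    saved[addr - start] = val                  # change one byte in the copy
    mem[start:start + BLOCK] = saved           # program the block back

e = [0] * 8
eeprom_write(e, 5, 7)                          # touches only one cell

f = [0] * 8
flash_write(f, 5, 7)                           # cycles the entire block
assert e == f == [0, 0, 0, 0, 0, 7, 0, 0]
```

Both end up with the same contents, but the flash path erased and rewrote a whole section of cells to change a single byte, which is exactly the "no byte-level erasure" limitation described above.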
Main memory is composed of a collection of DRAM memory chips. A number of chips can be grouped together to form a memory bank. It is possible to organize the memory banks in a way known as interleaved memory. Each bank is independently able to service a memory read or write request, so that a system with K banks can service K requests simultaneously, increasing memory read or write rates by a factor of K. If consecutive words of memory are stored in different banks, then the transfer of a block of memory is speeded up. Appendix E explores the topic of interleaved memory.
The simplest of the error-correcting codes is the Hamming code devised by Richard Hamming at Bell Laboratories. Figure 5.8 uses Venn diagrams to illustrate the use of this code on 4-bit words (M = 4). With three intersecting circles, there are seven compartments. We assign the 4 data bits to the inner compartments (Figure 5.8a). The remaining compartments are filled with what are called parity bits. Each parity bit is chosen so that the total number of 1s in its circle is even (Figure 5.8b). Thus, because circle A includes three data 1s, the parity bit in that circle is set to 1. Now, if an error changes one of the data bits (Figure 5.8c), it is easily found. By checking the parity bits, discrepancies are found in circle A and circle C but not in circle B. Only one of the seven compartments is in A and C but not B. The error can therefore be corrected by changing that bit.
The first three columns of Table 5.2 list the number of check bits required for various data word lengths. For convenience, we would like to generate a 4-bit syndrome for an 8-bit data word with the following characteristics:

• If the syndrome contains all 0s, no error has been detected.
• If the syndrome contains one and only one bit set to 1, then an error has occurred in one of the 4 check bits. No correction is needed.
• If the syndrome contains more than one bit set to 1, then the numerical value of the syndrome indicates the position of the data bit in error. This data bit is inverted for correction.
To achieve these characteristics, the data and check bits are arranged into a 12-bit word as depicted in Figure 5.9. The bit positions are numbered from 1 to 12. Those bit positions whose position numbers are powers of 2 are designated as check bits.
Figure 5.10 illustrates the calculation. The data and check bits are positioned properly in the 12-bit word. Four of the data bits have a value 1 (shaded in the table), and their bit position values are XORed to produce the Hamming code 0111, which forms the four check digits. The entire block that is stored is 001101001111. Suppose now that data bit 3, in bit position 6, sustains an error and is changed from 0 to 1. The resulting block is 001101101111, with a Hamming code of 0111. An XOR of the Hamming code and all of the bit position values for nonzero data bits results in 0110. The nonzero result detects an error and indicates that the error is in bit position 6.
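The position-XOR calculation can be reproduced directly. This is a minimal sketch with helper names of our own choosing, but the bit values follow the example in the text:

```python
# Bit positions 1, 2, 4, 8 hold check bits; data bits M1..M8 fill the rest.
DATA_POSITIONS = [3, 5, 6, 7, 9, 10, 11, 12]

def check_code(data_bits):
    """XOR the position numbers of every data bit equal to 1."""
    code = 0
    for bit, pos in zip(data_bits, DATA_POSITIONS):
        if bit:
            code ^= pos
    return code

# Data word 00111001 (M8 down to M1), listed here as M1..M8:
word = [1, 0, 0, 1, 1, 1, 0, 0]
stored = check_code(word)            # 0b0111, as in the example

# Flip data bit M3 (bit position 6) to simulate a single-bit error:
word[2] ^= 1
syndrome = stored ^ check_code(word)
print(bin(stored), syndrome)         # 0b111 6 -> error in bit position 6
```

The syndrome comes out as 0110 binary, i.e. 6, pointing at the erroneous bit position exactly as described above.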
The code just described is known as a single-error-correcting (SEC) code. More commonly, semiconductor memory is equipped with a single-error-correcting, double-error-detecting (SEC-DED) code. As Table 5.2 shows, such codes require one additional bit compared with SEC codes.

Figure 5.11 illustrates how such a code works, again with a 4-bit data word. The sequence shows that if two errors occur (Figure 5.11c), the checking procedure goes astray (d) and worsens the problem by creating a third error (e). To overcome the problem, an eighth bit is added that is set so that the total number of 1s in the diagram is even. The extra parity bit catches the error (f).

An error-correcting code enhances the reliability of the memory at the cost of added complexity. With a 1-bit-per-chip organization, an SEC-DED code is generally considered adequate. For example, the IBM 30xx implementations used an 8-bit SEC-DED code for each 64 bits of data in main memory. Thus, the size of main memory is actually about 12% larger than is apparent to the user. The VAX computers used a 7-bit SEC-DED code for each 32 bits of memory, for a 22% overhead. A number of contemporary DRAMs use 9 check bits for each 128 bits of data, for a 7% overhead [SHAR97].
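The overhead figures quoted for these machines follow directly from the ratio of check bits to data bits, as a quick calculation confirms:

```python
def overhead(check_bits, data_bits):
    """Fractional storage overhead of an error-correcting code."""
    return check_bits / data_bits

print(f"{overhead(8, 64):.1%}")    # 12.5% -- IBM 30xx, roughly the 12% cited
print(f"{overhead(7, 32):.1%}")    # 21.9% -- VAX, roughly the 22% cited
print(f"{overhead(9, 128):.1%}")   # 7.0%  -- contemporary DRAM
```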
Figure 5.12 shows the internal logic of IBM's 64-Mb SDRAM [IBM01], which is typical of SDRAM organization.
Table 5.4 defines the various pin assignments.

The SDRAM employs a burst mode to eliminate the address setup time and row and column line pre-charge time after the first access. In burst mode, a series of data bits can be clocked out rapidly after the first bit has been accessed. This mode is useful when all the bits to be accessed are in sequence and in the same row of the array as the initial access. In addition, the SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism.

The mode register and associated control logic is another key feature differentiating SDRAMs from conventional DRAMs. It provides a mechanism to customize the SDRAM to suit specific system needs. The mode register specifies the burst length, which is the number of separate units of data synchronously fed onto the bus. The register also allows the programmer to adjust the latency between receipt of a read request and the beginning of data transfer.

The SDRAM performs best when it is transferring large blocks of data serially, such as for applications like word processing, spreadsheets, and multimedia.
Figure 5.13 shows an example of SDRAM operation.
Figure 5.14 illustrates the RDRAM layout. The configuration consists of a controller and a number of RDRAM modules connected via a common bus. The controller is at one end of the configuration, and the far end of the bus is a parallel termination of the bus lines. The bus includes 18 data lines (16 actual data, two parity) cycling at twice the clock rate; that is, 1 bit is sent at the leading and following edge of each clock signal. This results in a signal rate on each data line of 800 Mbps. There is a separate set of 8 lines (RC) used for address and control signals. There is also a clock signal that starts at the far end from the controller, propagates to the controller end, and then loops back. An RDRAM module sends data to the controller synchronously with the clock to master, and the controller sends data to an RDRAM synchronously with the clock signal in the opposite direction. The remaining bus lines include a reference voltage, ground, and power source.
SDRAM is limited by the fact that it can only send data to the processor once per bus clock cycle. A new version of SDRAM, referred to as double-data-rate SDRAM (DDR SDRAM), can send data twice per clock cycle, once on the rising edge of the clock pulse and once on the falling edge.

DDR SDRAM was developed by the JEDEC Solid State Technology Association, the Electronic Industries Alliance's semiconductor-engineering-standardization body. Numerous companies make DDR chips, which are widely used in desktop computers and servers.
Figure 5.15 shows the basic timing for a DDR read. The data transfer is synchronized to both the rising and falling edge of the clock. It is also synchronized to a bidirectional data strobe (DQS) signal that is provided by the memory controller during a read and by the DRAM during a write. In typical implementations the DQS is ignored during the read. An explanation of the use of DQS on writes is beyond our scope; see [JACO08] for details.

There have been two generations of improvement to the DDR technology. DDR2 increases the data transfer rate by increasing the operational frequency of the RAM chip and by increasing the prefetch buffer from 2 bits to 4 bits per chip. The prefetch buffer is a memory cache located on the RAM chip. The buffer enables the RAM chip to preposition bits to be placed on the data bus as rapidly as possible. DDR3, introduced in 2007, increases the prefetch buffer size to 8 bits.

Theoretically, a DDR module can transfer data at a clock rate in the range of 200 to 600 MHz; a DDR2 module transfers at a clock rate of 400 to 1066 MHz; and a DDR3 module transfers at a clock rate of 800 to 1600 MHz. In practice, somewhat smaller rates are achieved.

Appendix K provides more detail on DDR technology.
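Because DDR transfers one unit of data on each clock edge, its data rate per pin is twice the clock rate. The sketch below makes that relation concrete; the function names and the 64-bit module width are illustrative assumptions:

```python
def data_rate_mtps(clock_mhz):
    """Mega-transfers per second per pin: two transfers per clock cycle."""
    return 2 * clock_mhz

def peak_bandwidth_mb_s(clock_mhz, bus_width_bits=64):
    """Peak bandwidth for a module of the given bus width (64 bits assumed)."""
    return data_rate_mtps(clock_mhz) * bus_width_bits // 8

print(data_rate_mtps(200), peak_bandwidth_mb_s(200))  # 400 3200
```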
Cache DRAM (CDRAM), developed by Mitsubishi [HIDA90, ZHAN01], integrates a small SRAM cache (16 Kb) onto a generic DRAM chip.

The SRAM on the CDRAM can be used in two ways. First, it can be used as a true cache, consisting of a number of 64-bit lines. The cache mode of the CDRAM is effective for ordinary random access to memory.

The SRAM on the CDRAM can also be used as a buffer to support the serial access of a block of data. For example, to refresh a bit-mapped screen, the CDRAM can prefetch the data from the DRAM into the SRAM buffer. Subsequent accesses to the chip result in accesses solely to the SRAM.