The document summarizes different types of semiconductor memory technologies, including dynamic RAM (DRAM), static RAM (SRAM), read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, synchronous DRAM (SDRAM), Rambus DRAM (RDRAM), and double data rate SDRAM (DDR SDRAM). It describes the basic operation and characteristics of each technology. Key aspects like refresh requirements, cell structures, write capabilities, and performance are compared between DRAM and SRAM.
This document provides information about external memory systems including magnetic disks, solid state drives, and optical disks. It discusses the components and operation of magnetic disks including read/write mechanisms, data organization and formatting, physical characteristics, and performance parameters. It also summarizes different RAID levels from 0 to 6 and their characteristics. For solid state drives, it describes flash memory, compares SSDs to HDDs, and discusses SSD organization and practical issues related to performance and lifespan.
This document provides an overview of memory management concepts in computer systems. It discusses classification of memory types, memory addressing, memory management units, allocation techniques, swapping, fragmentation, page replacement algorithms, segmentation, hardware implementation, memory mapping, byte ordering, and common memory problems. The document contains 23 pages of content on these memory management topics.
Disk partitioning or disk slicing is the creation of one or more regions on secondary storage, so that each region can be managed separately.
Not sure how to partition your disk? Go through the presentation to learn the basics.
The document summarizes the boot process of an operating system. It begins with the BIOS executing code from the ROM chip and loading the master boot record (MBR) from the hard disk. The MBR then loads the volume boot record (VBR) from the active partition, which loads a second-stage boot loader such as NTLDR or GRUB. This boot loader then loads and executes the kernel, which initializes the filesystem and launches core processes.
This presentation covers memory management in operating systems (OS). It describes the basic need for memory management and its main techniques: swapping, fragmentation, paging, and segmentation.
A glance at memory management in operating systems.
This note is useful for those who are keen to know how the OS works, with brief explanations of several terms such as:
- paging
- segmentation
- fragmentation
- virtual memory
- page table

For A Level (A2) Computing students, this light note may be helpful for your revision.
The motherboard is the main circuit board in a computer that connects all the parts together. It contains the CPU, memory, expansion slots, and buses that electrically connect all the components. Key parts of the motherboard include the CPU socket, memory sockets, BIOS chip, heat sink, cooling fan, and CMOS battery. The motherboard comes in different form factors like ATX, microATX, and BTX that determine the size and layout of the board.
Chapter 2 - Computer Evolution and Performance, by César de Souza
The document discusses the evolution of computer hardware from the 1940s onwards. It describes early computers like ENIAC which used vacuum tubes and was programmed manually via switches. The stored program concept developed by von Neumann separated the program and data into memory. Transistors replaced vacuum tubes, making computers smaller, cheaper and more reliable. Integrated circuits led to generations of computers with increasing numbers of components on a single chip due to Moore's Law. Memory speed could not keep up with rising CPU speeds, leading to cache memory and other performance improvements.
PowerPoint presentation on backup and recovery. A good presentation covering all the main topics.
The document discusses key concepts related to process management in Linux, including process lifecycle, states, memory segments, scheduling, and priorities. It explains that a process goes through creation, execution, termination, and removal phases repeatedly. Process states include running, stopped, interruptible, uninterruptible, and zombie. Process memory is made up of text, data, BSS, heap, and stack segments. Linux uses an O(1) CPU scheduling algorithm that scales well with process and processor counts.
Checkpointing is used to improve database recovery time. It periodically writes uncommitted transaction logs and dirty database pages to stable storage. This allows transactions committed before the last checkpoint to be ignored during recovery. Recovery involves undoing uncommitted transactions and redoing committed ones since the last checkpoint using the write-ahead logging protocol to ensure crash consistency. Shadow paging is an alternative technique that maintains a shadow page table to recover the pre-transaction state if needed. Automated backups and mirroring can further improve availability.
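The redo/undo idea behind checkpoint-based recovery can be sketched in a few lines. This is a toy model under stated assumptions (a log of tuples, a dict as the stable database flushed at the checkpoint); real recovery managers such as ARIES are far more involved.

```python
def recover(stable_db, log_tail):
    """Recover state from the stable database plus the log records
    written after the last checkpoint: redo writes of committed
    transactions, skip (effectively undo) uncommitted ones."""
    committed = {rec[1] for rec in log_tail if rec[0] == "commit"}
    db = dict(stable_db)
    for rec in log_tail:
        if rec[0] == "write" and rec[1] in committed:
            _, txn, key, value = rec
            db[key] = value          # redo pass
    # Writes by uncommitted transactions are never applied, which
    # plays the role of the undo pass in this toy model.
    return db

stable = {"x": 1}                    # state flushed at the checkpoint
tail = [
    ("write", "T2", "y", 2),
    ("commit", "T2"),
    ("write", "T3", "z", 3),         # T3 never committed: discarded
]
print(recover(stable, tail))         # {'x': 1, 'y': 2}
```

Note how transactions committed before the checkpoint ("x") need no log processing at all, which is exactly why checkpointing shortens recovery time.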
The document summarizes key characteristics of cache memory including location, capacity, unit of transfer, access methods, performance, physical types, organization, and hierarchy. It discusses cache memory in terms of where it is located (internal or external to the CPU), its typical sizes (word, block), access techniques (sequential, random, associative), performance metrics (access time, transfer rate), common physical implementations (SRAM, disk), and organizational aspects like mapping functions, replacement algorithms, and write policies. A cache sits between the CPU and main memory, using fast but small memory to speed up access to frequently used data from larger but slower main memory.
The document discusses various types of external memory including magnetic disk, RAID, removable optical disks, and magnetic tape. It provides details on magnetic disk technology including materials, read/write mechanisms, data organization, and disk formatting. It also describes different RAID levels from 0 to 6 and their characteristics. Finally, it covers optical storage technologies like CD-ROM, CD-R, DVD, and compares their capabilities.
The document discusses process management in operating systems. It defines a process as a program during execution, which requires resources like memory and CPU registers. The document outlines the life cycle of a process, including the different states a process can be in like ready, running, waiting, blocked. It describes process creation and termination. The process control block (PCB) contains information needed to control and monitor each process. Context switching allows the CPU to switch between processes. Scheduling determines which process enters the running state. The document lists some common process control system calls and discusses advantages and disadvantages of process management.
Cache is a small amount of fast memory located close to the CPU that stores frequently accessed instructions and data. It speeds up processing by allowing the CPU to access needed information more quickly than from main memory. Caches exploit the principle of locality of reference, where programs tend to access the same data/instructions repeatedly over short periods. There are multiple cache levels, with L1 cache being fastest but smallest and L3 cache being largest but slower. Caching improves performance dramatically by fulfilling over 90% of memory requests from the small cache rather than requiring slower access to main memory.
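The claim that a high hit rate dramatically improves performance can be made concrete with the standard average-memory-access-time formula. The timings below are illustrative assumptions, not figures from the document.

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time: hit time plus the
    miss-rate-weighted penalty of going to the next level."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Assumed numbers: 1 ns cache hit, 100 ns main-memory access,
# 95% hit rate (i.e. 5% miss rate).
with_cache = amat(1, 0.05, 100)            # 6.0 ns on average
without_cache = 100.0                      # every access goes to memory
print(with_cache, without_cache / with_cache)
```

Even though only the small cache is fast, the average access time drops from 100 ns to 6 ns, a speedup of more than 16x.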
The document discusses various concepts related to process management in operating systems including process scheduling, CPU scheduling, and process synchronization. It defines a process as a program in execution and describes the different states a process can be in during its lifecycle. It also discusses process control blocks which maintain information about each process, and various scheduling algorithms like first come first serve, shortest job first, priority and round robin scheduling.
This document discusses the history and types of RAM. It begins by defining RAM as random access memory that is volatile and temporary. It then describes the two main types: static RAM and dynamic RAM (DRAM). The document proceeds to discuss the evolution of different DRAM technologies over time, including FPM DRAM, EDO DRAM, SDR RAM, DDR RAM, DDR2, DDR3, and DDR4. It notes that each new generation provides improved efficiency and speed over the previous. The document concludes by discussing some limitations of RAM and promising future technologies like RRAM and Z-RAM.
Linux Memory Management with CMA (Contiguous Memory Allocator), by Pankaj Suryawanshi
Fundamentals of Linux Memory Management and CMA (Contiguous Memory Allocator) In Linux.
Virtual memory, physical memory, swap space, DMA, IOMMU, paging, segmentation, TLB, hugepages, and ION (Google's memory manager).
Top-Level View of Computer Function and Interconnection, by Sajid Marwat
The document summarizes key concepts from Chapter 3 of William Stallings' Computer Organization and Architecture textbook. It describes the top-level view of computer function and interconnection, including:
- The components of a computer system and how they interconnect via buses
- Common bus types like data, address, and control buses
- Bus arbitration and timing, whether synchronous or asynchronous
- Specific bus architectures like ISA and PCI buses
It provides instruction on computer organization concepts like instruction cycles, interrupts, and program flow control at a high level.
Memory management is the method by which an operating system handles and allocates primary memory. It tracks the status of memory locations as allocated or free, and determines how memory is distributed among competing processes. Memory can be allocated contiguously or non-contiguously. Contiguous allocation assigns consecutive blocks of memory to a process, while non-contiguous allocation allows a process's memory blocks to be scattered across different areas using techniques like paging or segmentation. Paging divides processes and memory into fixed-size pages and frames to allow non-contiguous allocation while reducing fragmentation.
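The page/frame translation that paging relies on can be sketched directly. The 4 KiB page size and the toy page table below are assumptions for illustration.

```python
PAGE_SIZE = 4096  # 4 KiB pages: a common, but here assumed, size

# Toy page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 9, 2: 3}

def translate(virtual_addr):
    """Split a virtual address into (page number, offset), then
    replace the page number with the frame number from the table."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    if page not in page_table:
        raise LookupError("page fault")  # the OS would load the page here
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1 maps to frame 9: 0x9234
```

Because the offset is carried over unchanged, frames can sit anywhere in physical memory, which is what makes the allocation non-contiguous.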
RAID (Redundant Array of Independent Disks) distributes data across multiple disks to improve performance and provide redundancy. The common characteristics of RAID levels are that multiple physical disks act as a single logical disk, data is distributed across disks, and redundant parity information is used to recover data if a disk fails. RAID level 0 stripes data without parity for increased speed but no fault tolerance, while RAID level 1 uses mirroring to provide redundancy by writing all data to two disks.
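The three mechanisms named here (striping, mirroring, and parity) can be sketched in a few lines each. This is a toy model over lists and integers, not a real RAID implementation.

```python
def stripe(blocks, n_disks):
    """RAID 0: blocks round-robin across disks; fast, no redundancy."""
    disks = [[] for _ in range(n_disks)]
    for i, block in enumerate(blocks):
        disks[i % n_disks].append(block)
    return disks

def mirror(blocks, copies=2):
    """RAID 1: every block is written to every disk."""
    return [list(blocks) for _ in range(copies)]

def parity(blocks):
    """XOR parity over integer blocks (RAID 4/5 style): XOR-ing the
    parity with the surviving blocks rebuilds one lost block."""
    p = 0
    for b in blocks:
        p ^= b
    return p

data = ["b0", "b1", "b2", "b3"]
print(stripe(data, 2))          # [['b0', 'b2'], ['b1', 'b3']]
print(mirror(data))             # two identical copies

p = parity([5, 9, 12])          # parity of three data blocks
print(parity([p, 9, 12]))       # 5: the lost block, reconstructed
```

The parity demo shows why a single disk failure is recoverable in parity-based levels: the missing block is just the XOR of everything that survived.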
This document discusses cache memory principles and provides details about cache operation, structure, organization, and design considerations. The key points covered are:
- Cache is a small, fast memory located between the CPU and main memory that stores frequently used data.
- During a cache read operation, the CPU first checks the cache for the requested data. If present, it is retrieved from the fast cache. If not, the data is read from main memory into cache.
- Cache design considerations include size, mapping function, replacement algorithm, write policy, line size, and number of cache levels.
- Modern CPUs use hierarchical cache designs with multiple levels (L1, L2, etc.) to improve performance.
Memory organization
Memory Organization in Computer Architecture. A memory unit is a collection of storage units or devices. The memory unit stores binary information in the form of bits. ... Volatile memory loses its data when power is switched off.
This document summarizes a presentation on virtual memory given by 5 students for their Computer Architecture and Organization course. It includes definitions of virtual memory, how it works using demand paging and segmentation, why it is used to support multitasking and large programs, the mapping and address translation processes, page tables, page size and faults, and advantages and disadvantages of virtual memory such as protection, sharing, and memory and performance overhead.
In this presentation, we describe computer storage:
- The basic unit of data storage
- Memory hierarchy
- CPU registers
- Cache memory
- Types of storage
  - Primary memory: RAM, ROM
  - Secondary memory: magnetic memory, optical storage
Memory management is the act of managing computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free it for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time
1. The document discusses different types of semiconductor memory including RAM, ROM, PROM, EPROM, EEPROM, and flash memory.
2. It describes the operation and structure of both dynamic RAM (DRAM) and static RAM (SRAM). DRAM uses capacitors to store bits and requires refreshing, while SRAM uses flip-flops and does not require refreshing.
3. The document also covers various RAM technologies including synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Rambus DRAM (RDRAM), error correction codes, and cache DRAM.
This document discusses internal memory organization and technologies. It begins with an overview of memory cell operation for DRAM and SRAM. It then covers various memory types including DRAM, SRAM, ROM, PROM, EPROM, and EEPROM. Advanced DRAM technologies like synchronous DRAM, Rambus DRAM, and double data rate SDRAM are also summarized. The document concludes with sections on error correction techniques and cache DRAM.
This document summarizes different types of semiconductor memory, including dynamic RAM (DRAM), static RAM (SRAM), read-only memory (ROM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), RAMBUS RAM, cache DRAM, and error correction techniques. It describes the basic structure and operation of DRAM and SRAM memory cells, as well as advanced organizations like SDRAM that can improve memory performance.
Chapter 3 - Top Level View of Computer Function and Interconnection, by César de Souza
The document discusses computer system buses and their role in connecting different components of a computer system. It describes the functions of different types of buses including data, address, and control buses. It also explains bus arbitration techniques and timing protocols for synchronous and asynchronous buses. Specific bus architectures like PCI are discussed with details on their components, commands, and arbitration process.
Memory Interleaving: Low-Order and High-Order Interleaving, by Jawwad Rafiq
Memory interleaving splits memory into independent banks that can process read/write requests in parallel to increase throughput. It interleaves the address space so consecutive addresses are assigned to different banks. Low order interleaving uses the low order bits of an address to identify the memory module and high order bits for the word address within each module, allowing block access in a pipelined fashion. This improves the effective memory bandwidth.
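The address split described here can be sketched directly. The bank count and bank size are assumptions for the example.

```python
N_BANKS = 4  # number of memory banks (assumed for the example)

def low_order(addr):
    """Low-order interleaving: the low bits pick the bank, so
    consecutive addresses land in different banks and can be
    serviced in parallel. Returns (bank, word-within-bank)."""
    return addr % N_BANKS, addr // N_BANKS

def high_order(addr, bank_size=1024):
    """High-order interleaving: the high bits pick the bank, so a
    contiguous block stays within one bank."""
    return addr // bank_size, addr % bank_size

# Consecutive addresses 0..3 hit four different banks:
print([low_order(a)[0] for a in range(4)])   # [0, 1, 2, 3]
```

This is why low-order interleaving suits pipelined block access: a sequential stream of addresses rotates through all the banks, while under high-order interleaving the same stream would queue up on a single bank.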
A computer uses a hierarchy of internal and external memory systems. Internal memory includes RAM, ROM, and cache, which provide fast access but are more expensive per byte. RAM allows independent access to each memory location and is used for main memory. ROM permanently stores data and is used for boot programs. Cache memory uses SRAM for faster access than main RAM. External memory includes hard disks and USB drives, which provide large, inexpensive storage but are much slower to access.
1. The document discusses the steps for completing business messages, including revising, producing, proofreading, and distributing.
2. It covers revising the content, organization, style, and tone and promoting readability with sentences, paragraphs, and headings.
3. Producing the message may involve using multimedia elements, page layout, graphics, and document design principles.
4. The final steps are proofreading for errors and choosing a distribution method based on cost, convenience, time, and privacy concerns.
The document discusses various types of external memory used in computer systems, including magnetic disks, optical disks, magnetic tape, and RAID (redundant array of independent disks). Magnetic disks come in varieties such as hard disks, floppy disks, and removable hard disks. Optical disks discussed include CD-ROM, CD-R/W, DVD, and their uses, capacities, and read/write capabilities. RAID systems provide data redundancy across multiple disks for reliability and performance.
Digital Logic Circuits: Digital Component - Memory Unit, by Rai University
The document discusses different types of computer memory. It defines memory as a device that stores binary data for processing. There are two main types: random access memory (RAM), which allows random read/write access, and sequential access memory, which requires sequential access. RAM is further divided into static RAM, dynamic RAM, read-only memory (ROM), programmable ROM, erasable programmable ROM, and electrically erasable programmable ROM. ROM is used to permanently store programs, while RAM is used for temporary data storage. Flash memory is a type of EEPROM that uses standard voltages and allows block writing for easier updating.
This document discusses memory subsystems and hierarchy. It begins by describing the memory hierarchy which includes registers, main memory (RAM), and external memory. It then discusses different types of memory in terms of read/write capability, volatility, and erasure mechanisms. The document outlines cache organization and mapping techniques including direct mapping, set associative, and fully associative mapping. It provides examples of address mapping for each technique. The document also discusses RAM and ROM types as well as memory subsystem organization.
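The direct-mapped address decomposition mentioned here can be sketched with a concrete geometry. The sizes (64 lines of 16 bytes each) are assumptions chosen for the example.

```python
LINE_BYTES = 16   # bytes per cache line -> 4 offset bits
NUM_LINES = 64    # lines in the cache  -> 6 index bits

def split(addr):
    """Return (tag, index, offset) for a direct-mapped cache:
    offset selects the byte within a line, index selects the line,
    and the tag identifies which memory block occupies that line."""
    offset = addr % LINE_BYTES
    index = (addr // LINE_BYTES) % NUM_LINES
    tag = addr // (LINE_BYTES * NUM_LINES)
    return tag, index, offset

# Two addresses exactly one cache-size apart share an index but
# differ in tag, so in a direct-mapped cache they evict each other.
print(split(0x1234))                         # (4, 35, 4)
print(split(0x1234 + LINE_BYTES * NUM_LINES))  # (5, 35, 4)
```

Set-associative and fully associative mapping relax exactly this constraint: with more ways per index, conflicting addresses can coexist in the cache.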
This document summarizes key points from Chapter 3 of William Stallings' book "Computer Organization and Architecture". It discusses the top-level view of computer function and interconnection. The main components of a computer are the control unit, arithmetic logic unit, main memory, and input/output. Programs are sequences of steps that are executed via control signals. Buses are used to connect these components and transfer data, addresses, and control signals. Interrupts allow other devices to interrupt normal program execution.
The document introduces the concept of interleaving, which is the placement of a substrate between or around food products. It defines interleaving and underleaving, and provides examples of common interleaving applications like sub setups, lay flat bacon, and frozen burger patties. Interleaving provides benefits like portion control and sanitary handling for food processors and end users.
This document discusses memory characteristics including location, capacity, access methods, performance, and types. It covers the memory hierarchy from registers to external memory. Key points include that dynamic RAM needs refreshing to prevent data loss while static RAM maintains data without refreshing, and that memory performance is determined by access time, cycle time, and transfer rate. The document provides diagrams of DRAM and SRAM cell structures.
RAM (random access memory) is the primary storage area in a computer for programs and data that the processor can access. It is temporary memory that stores bytes of data and instructions until the computer is powered down. RAM needs to match the processor's abilities and most operating systems require at least 1GB of RAM to run smoothly. ROM (read only memory) provides permanent storage for instructions and firmware needed to boot the computer. It stores data using fixed circuits that don't erase when powered down, including the BIOS (basic input/output system) which contains hardware information and boot instructions.
This document discusses error detection and correction techniques for digital data transmission. It introduces different types of errors that can occur, such as single-bit and burst errors. It describes how redundancy is used to detect and correct errors using block coding techniques. Specific examples are provided to illustrate how block codes are constructed and used to detect and correct errors. Key concepts discussed include linear block codes, Hamming distance, minimum Hamming distance, and how these relate to the error detection and correction capabilities of different coding schemes.
Magnetic disks remain the most important component of external memory. Data is recorded on disks through magnetic read and write heads. Disks are organized into tracks and sectors for efficient data access. RAID systems provide data redundancy or higher performance through striping and mirroring across multiple disks. Optical disks like CDs and DVDs store data through microscopic pits and lands read by lasers, and use constant linear velocity to increase storage capacity toward the disk edge.
The document discusses the differences between computer architecture and organization. Architecture refers to attributes visible to programmers like instruction sets, while organization refers to how features are implemented internally. It also discusses the functional view of a computer in terms of data movement, control, storage, and processing. Finally, it outlines the chapters of the book which will cover topics like CPU structure, instruction sets, and computer evolution.
This document provides an overview of parallel computing and parallel processing. It discusses:
1. The three types of concurrent events in parallel processing: parallel, simultaneous, and pipelined events.
2. The five fundamental factors for projecting computer performance: clock rate, cycles per instruction (CPI), execution time, million instructions per second (MIPS) rate, and throughput rate.
3. The four programmatic levels of parallel processing from highest to lowest: job/program level, task/procedure level, interinstruction level, and intrainstruction level.
This document provides information about memory hierarchy and cache design. It discusses the different types of memory technologies like SRAM and DRAM that are used at different levels of the memory hierarchy. It describes the basic operations of DRAM and SRAM. It also covers cache organization concepts like direct-mapped caches, cache hits, misses, and handling reads and writes. The goal of the memory hierarchy is to provide fast access to frequently used data while also providing large storage capacity.
This document provides an introduction to software engineering topics including:
1. What software engineering is, its importance, and the software development lifecycle activities it encompasses.
2. The many different types of software systems that exist and how software engineering approaches vary depending on the application.
3. Key fundamentals of software engineering that apply universally, including managing development processes, dependability, and reusing existing software components.
The document summarizes different types of computer memory. It describes RAM as volatile memory that can be randomly accessed. There are two main types of RAM: DRAM uses capacitors and must be refreshed, while SRAM uses flip-flops and does not need refreshing. The document also discusses cache memory, ROM, EPROM, EEPROM, flash memory, memory organization, errors and interleaving.
This document provides an overview of the design of a dual port SRAM using Verilog HDL. It begins with an introduction describing the objectives and accomplishments of the project. It then reviews relevant literature on SRAM design. The document describes the FPGA design flow and introduces Verilog. It provides the design and operation of the SRAM, and discusses simulation results and conclusions. The proposed 8-bit dual port SRAM utilizes negative bitline techniques during write operations to improve write ability and reduce power consumption and area compared to conventional designs.
Computer memory, also known as RAM, is temporary storage that allows the computer to perform tasks by holding instructions and data in an easily accessible location. There are two main types of computer memory: volatile and non-volatile. Volatile memory, like RAM, loses its contents when power is removed while non-volatile types like ROM retain data without power. Over time, RAM technologies have evolved from SIMMs to DIMMs and SDRAM to DDR, DDR2, and DDR3, with each generation offering faster speeds and higher capacities. Proper identification and installation of the correct RAM type is important for system functionality and performance.
The document discusses different types of RAM and ROM. It describes SRAM, DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, DDR4 SDRAM, PROM, EPROM, and EEPROM. It provides details on their characteristics such as speed, cost, refresh requirements, packaging, and compatibility. The document also gives tips for selecting memory such as checking the maximum supported size and speed, and ensuring it matches the system board configuration.
Primary memory, also known as main memory or RAM, is the area where the CPU stores and accesses data for processing. It has limited capacity and is volatile, meaning data is lost when power is turned off. Primary memory includes RAM and ROM. RAM allows reading and writing and is used as the computer's working memory, while ROM is read-only and stores firmware. Examples of RAM include SRAM, DRAM, SDRAM, and DDR RAM, which differ in refresh rates, speeds, and capacities. ROM includes PROM, EPROM, and EEPROM, which have different methods of programming and erasing data. Primary memory provides fast access for processing but has limited storage compared to secondary storage.
This document discusses different types of computer memory and storage devices. It defines primary and secondary storage. Primary storage includes RAM and ROM, which temporarily and permanently store data respectively. RAM is volatile and includes DRAM, SRAM, and RDRAM. ROM is non-volatile and includes PROM, EPROM, EEPROM, and flash memory. The document provides details on each type of memory, including their characteristics and uses.
This document discusses different types of computer memory, including RAM, ROM, and virtual memory. It describes the key characteristics and uses of dynamic RAM, static RAM, ROM, EEPROM, flash memory, cache memory, and virtual memory. The main types of RAM discussed are DRAM, SRAM, SDRAM, and RDRAM. DRAM needs refreshing but is cheaper than SRAM. ROM types include mask ROM, PROM, EPROM, and EEPROM. Virtual memory allows programs to access memory as if it were one unified virtual space.
Main memory is made up of RAM and ROM chips. RAM is read-write memory that can be accessed randomly; data is lost when power is off. There are static and dynamic RAM types. Static RAM retains data indefinitely if powered, dynamic RAM must be periodically refreshed. ROM is read-only and permanently stores data. There are mask, PROM, EPROM and EEPROM ROM types that can be programmed at different stages. Cache memory uses fast static RAM. Main memory often uses dynamic RAM for its ability to store large amounts of data at lower cost despite slower access.
Here are the answers:
1. SRAM is static RAM that does not require periodic refreshing. It is faster but more expensive than DRAM, which is dynamic RAM that must be refreshed periodically.
2. The functions of ROM are to permanently store basic input/output instructions for the computer and to hold firmware like the BIOS. ROM maintains its data without power and can usually only be read from, not written to.
3. The different types and levels of CPU caches are:
- Level 1 (L1) cache, which is the smallest and fastest cache built directly into the CPU chip.
- Level 2 (L2) cache, which is external to the CPU chip but still on the processor
RAM allows stored data to be accessed randomly in any order. It is a type of volatile memory that does not permanently store data and loses its contents when powered off. There are two main types of RAM: static RAM and dynamic RAM. Dynamic RAM needs to be refreshed to maintain its contents while static RAM does not. RAM technologies have evolved from FPM DRAM to EDO DRAM, SDRAM, DDR SDRAM, and RDRAM to increase bandwidth and transfer rates. The memory hierarchy includes CPU registers, cache memory levels L1-L3, main memory, virtual memory, and storage. Future RAM technologies aim to be smaller, faster, and cheaper through innovations like RRAM and Z-RAM.
The document discusses the basics of semiconductor memories. It explains that memory controllers establish information flow between memory and the CPU. Memory buses connect memory to the controller. Newer systems have frontside and backside buses connecting different components. During boot-up, the BIOS and operating system are loaded from ROM and hard drive into RAM for fast access by the CPU. Applications and files are also loaded into and removed from RAM as needed. The document compares different types of volatile and non-volatile memory in terms of speed, size, and cost.
RAM is a type of volatile memory that is used for temporary storage. It allows data to be accessed randomly in any order. There are different types of RAM such as static RAM, dynamic RAM, SDRAM, and DDR SDRAM. RAM is part of a memory hierarchy that includes processor registers, cache memory levels L1-L3, main memory, and virtual memory. Future RAM technologies aim to provide memory that is smaller, faster, and cheaper than current memory chips.
This document provides an overview of different types of computer memory including cache memory, RAM, and ROM. It discusses the levels of cache memory (levels 1, 2, and 3) and how they work together. It describes the two main types of RAM - SRAM and DRAM - and explains the differences between them. It also outlines the three main types of ROM - PROM, EPROM, and EEPROM - and how their data can be programmed and erased.
The document discusses memory architectures including Harvard and Von Neumann architectures. It provides details on each:
- The Harvard architecture has separate memory for instructions and data allowing parallel access, but is more expensive.
- The Von Neumann architecture shares memory for instructions and data through a common bus, making it simpler but limiting parallelism.
- It also discusses memory types like RAM, SRAM, and DRAM and cache memory levels L1 and L2. RAM is volatile memory used for main memory, while SRAM is faster but more expensive and used for caches. DRAM is cheaper than SRAM but needs refresh cycles. Caches improve performance by storing recently used data and instructions from main memory.
The document discusses memory hierarchy and technologies. It describes the different levels of memory from fastest to slowest as processor registers, cache memory (levels 1 and 2), main memory, and secondary storage. The main memory technologies discussed are SRAM, DRAM, ROM, flash memory, and magnetic disks. Cache memory aims to speed up access time by exploiting locality of reference and uses mapping functions like direct mapping to determine cache locations.
This document summarizes different types of RAM:
- RAM is random access memory that temporarily stores information and is volatile, losing data when power is cut. It is faster than hard disk storage.
- The main types are SRAM, DRAM, FPM RAM, EDO RAM, SDR RAM, RDRAM, DDR RAM, and DDR5 RAM. Each was introduced at a different time and has different speeds, costs, power usage, and capacities.
- RAM can be installed as modules in computers, with DIMMs for desktops and smaller SODIMMs for laptops. SODIMMs have become the standard for laptop memory.
This presentation discusses Dynamic RAM (DRAM) and its types. It begins by explaining what RAM is and how it provides faster access for the CPU than the hard disk. It then covers that DRAM is the main memory in computers and must be refreshed periodically to prevent data loss. The main types of DRAM discussed are SDRAM, DDR, RDRAM, and DRAM memory modules. Specific details are provided about the features and operation of each DRAM type. Major memory manufacturers are also listed.
5. Dynamic RAM (DRAM)
RAM technology is divided into two technologies:
Dynamic RAM (DRAM)
Static RAM (SRAM)
DRAM:
Made with cells that store data as charge on capacitors
Presence or absence of charge in a capacitor is interpreted as a binary 1 or 0
Requires periodic charge refreshing to maintain data storage
The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied
7. Static RAM (SRAM)
Digital device that uses the same logic elements used in the processor
Binary values are stored using traditional flip-flop logic gate configurations
Will hold its data as long as power is supplied to it
9. SRAM versus DRAM
Both are volatile: power must be continuously supplied to the memory to preserve the bit values
DRAM:
Dynamic cell is simpler to build and smaller
More dense (smaller cells = more cells per unit area)
Less expensive
Requires the supporting refresh circuitry
Tends to be favored for large memory requirements
Used for main memory
SRAM (Static):
Faster
Used for cache memory (both on and off chip)
10. Read Only Memory (ROM)
Contains a permanent pattern of data that cannot be changed or added to
No power source is required to maintain the bit values in memory
Data or program is permanently in main memory and never needs to be loaded from a secondary storage device
Data is actually wired into the chip as part of the fabrication process
Disadvantages of this:
No room for error; if one bit is wrong the whole batch of ROMs must be thrown out
Data insertion step includes a relatively large fixed cost
11. Programmable ROM (PROM)
Less expensive alternative when only a small number of ROMs with a particular content is needed
Nonvolatile and may be written into only once
Writing process is performed electrically and may be performed by supplier or customer at a time later than the original chip fabrication
Special equipment is required for the writing process
PROMs provide flexibility and convenience; the ROM remains attractive for high-volume production runs
12. Read-Mostly Memory
EPROM (erasable programmable read-only memory):
Erasure process can be performed repeatedly
More expensive than PROM, but it has the advantage of the multiple update capability
EEPROM (electrically erasable programmable read-only memory):
Can be written into at any time without erasing prior contents
Combines the advantage of non-volatility with the flexibility of being updatable in place
More expensive than EPROM
Flash memory:
Intermediate between EPROM and EEPROM in both cost and functionality
Uses an electrical erasing technology, but does not provide byte-level erasure
Microchip is organized so that a section of memory cells is erased in a single action or "flash"
17. Interleaved Memory
Composed of a collection of DRAM chips
Grouped together to form a memory bank
Each bank is independently able to service a memory read or write request
K banks can service K requests simultaneously, increasing memory read or write rates by a factor of K
If consecutive words of memory are stored in different banks, the transfer of a block of memory is speeded up
18. Error Correction
Hard Failure:
Permanent physical defect
Memory cell or cells affected cannot reliably store data but become stuck at 0 or 1 or switch erratically between 0 and 1
Can be caused by harsh environmental abuse, manufacturing defects, and wear
Soft Error:
Random, non-destructive event that alters the contents of one or more memory cells
No permanent damage to memory
Can be caused by power supply problems and alpha particles
25. Advanced DRAM Organization
One of the most critical system bottlenecks when using high-performance processors is the interface to main internal memory
The traditional DRAM chip is constrained both by its internal architecture and by its interface to the processor's memory bus
A number of enhancements to the basic DRAM architecture have been explored: SDRAM, DDR-DRAM, and RDRAM
Table 5.3: Performance Comparison of Some DRAM Alternatives
26. Synchronous DRAM (SDRAM)
One of the most widely used forms of DRAM
Exchanges data with the processor synchronized to an external clock signal and running at the full speed of the processor/memory bus without imposing wait states
With synchronous access the DRAM moves data in and out under control of the system clock:
• The processor or other master issues the instruction and address information, which is latched by the DRAM
• The DRAM then responds after a set number of clock cycles
• Meanwhile the master can safely do other tasks while the SDRAM is processing
30. RDRAM
Developed by Rambus; adopted by Intel for its Pentium and Itanium processors
Has become the main competitor to SDRAM
Chips are vertical packages with all pins on one side
Exchanges data with the processor over 28 wires no more than 12 centimeters long
Bus can address up to 320 RDRAM chips and is rated at 1.6 GBps
Bus delivers address and control information using an asynchronous block-oriented protocol:
• Gets a memory request over the high-speed bus
• Request contains the desired address, the type of operation, and the number of bytes in the operation
32. Double Data Rate SDRAM (DDR SDRAM)
SDRAM can only send data once per bus clock cycle
Double-data-rate SDRAM can send data twice per clock cycle, once on the rising edge of the clock pulse and once on the falling edge
Developed by the JEDEC Solid State Technology Association (the Electronic Industries Alliance's semiconductor-engineering-standardization body)
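The doubling described above can be checked with simple arithmetic. A minimal sketch: the 100 MHz clock and 64-bit bus width below are illustrative assumptions, not figures from the slides.

```python
# Peak transfer rate = transfers per cycle x clock x bus width in bytes.
# Clock (100 MHz) and bus width (64 bits) are illustrative assumptions.
def peak_rate_bytes_per_sec(clock_hz, bus_bits, transfers_per_cycle):
    return clock_hz * transfers_per_cycle * (bus_bits // 8)

sdr = peak_rate_bytes_per_sec(100_000_000, 64, 1)  # SDRAM: one edge per cycle
ddr = peak_rate_bytes_per_sec(100_000_000, 64, 2)  # DDR: both clock edges
assert ddr == 2 * sdr == 1_600_000_000             # 1.6 GB/s peak
```

Under these assumed numbers, DDR exactly doubles the peak rate without raising the bus clock, which is the whole point of transferring on both edges.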
34. Cache DRAM (CDRAM)
Developed by Mitsubishi
Integrates a small SRAM cache onto a generic DRAM chip
The SRAM on the CDRAM can be used in two ways:
As a true cache consisting of a number of 64-bit lines; the cache mode of the CDRAM is effective for ordinary random access to memory
As a buffer to support the serial access of a block of data
35. Summary: Chapter 5, Internal Memory
Semiconductor main memory: organization; DRAM and SRAM; types of ROM; chip logic; chip packaging; module organization; interleaved memory
Error correction: hard failure; soft error; Hamming code
Advanced DRAM organization: synchronous DRAM; Rambus DRAM; DDR SDRAM; cache DRAM
Editor's Notes
We begin this chapter with a survey of semiconductor main memory subsystems, including ROM, DRAM, and SRAM memories. Then we look at error control techniques used to enhance memory reliability. Following this, we look at more advanced DRAM architectures.
In earlier computers, the most common form of random-access storage for computer main memory employed an array of doughnut-shaped ferromagnetic loops referred to as cores. Hence, main memory was often referred to as core, a term that persists to this day. The advent of, and advantages of, microelectronics has long since vanquished the magnetic core memory. Today, the use of semiconductor chips for main memory is almost universal. Key aspects of this technology are explored in this section. The basic element of a semiconductor memory is the memory cell. Although a variety of electronic technologies are used, all semiconductor memory cells share certain properties: they exhibit two stable (or semistable) states, which can be used to represent binary 1 and 0; they are capable of being written into (at least once), to set the state; and they are capable of being read to sense the state. Figure 5.1 depicts the operation of a memory cell. Most commonly, the cell has three functional terminals capable of carrying an electrical signal. The select terminal, as the name suggests, selects a memory cell for a read or write operation. The control terminal indicates read or write. For writing, the other terminal provides an electrical signal that sets the state of the cell to 1 or 0. For reading, that terminal is used for output of the cell's state. The details of the internal organization, functioning, and timing of the memory cell depend on the specific integrated circuit technology used and are beyond the scope of this book, except for a brief summary. For our purposes, we will take it as given that individual cells can be selected for reading and writing operations.
All of the memory types that we will explore in this chapter are random access. That is, individual words of memory are directly accessed through wired-in addressing logic. Table 5.1 lists the major types of semiconductor memory. The most common is referred to as random-access memory (RAM). This is, in fact, a misuse of the term, because all of the types listed in the table are random access. One distinguishing characteristic of memory that is designated as RAM is that it is possible both to read data from the memory and to write new data into the memory easily and rapidly. Both the reading and writing are accomplished through the use of electrical signals. The other distinguishing characteristic of RAM is that it is volatile. A RAM must be provided with a constant power supply. If the power is interrupted, then the data are lost. Thus, RAM can be used only as temporary storage. The two traditional forms of RAM used in computers are DRAM and SRAM.
Figure 5.2a is a typical DRAM structure for an individual cell that stores 1 bit. The address line is activated when the bit value from this cell is to be read or written. The transistor acts as a switch that is closed (allowing current to flow) if a voltage is applied to the address line and open (no current flows) if no voltage is present on the address line. For the write operation, a voltage signal is applied to the bit line; a high voltage represents 1, and a low voltage represents 0. A signal is then applied to the address line, allowing a charge to be transferred to the capacitor. For the read operation, when the address line is selected, the transistor turns on and the charge stored on the capacitor is fed out onto a bit line and to a sense amplifier. The sense amplifier compares the capacitor voltage to a reference value and determines if the cell contains a logic 1 or a logic 0. The readout from the cell discharges the capacitor, which must be restored to complete the operation. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analog device. The capacitor can store any charge value within a range; a threshold value determines whether the charge is interpreted as 1 or 0.
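The behaviour described above — an analog charge, a read threshold, a destructive read that must be restored, and slow leakage that makes refresh necessary — can be captured in a toy model. The charge values and leak amounts below are illustrative assumptions, not device physics.

```python
# Toy model of a DRAM cell: the capacitor holds an analog charge, a
# threshold decides 1 vs 0 on read, and readout is destructive, so the
# sense amplifier writes the sensed value back. Numbers are assumptions.
THRESHOLD = 0.5

class DRAMCell:
    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def leak(self, amount=0.2):
        # stored charge leaks away even with power continuously applied
        self.charge = max(0.0, self.charge - amount)

    def read(self):
        bit = 1 if self.charge > THRESHOLD else 0
        self.charge = 0.0      # readout discharges the capacitor...
        self.write(bit)        # ...so the sensed value is restored
        return bit

cell = DRAMCell()
cell.write(1)
cell.leak(); cell.leak()       # 0.6 charge left: still above threshold
assert cell.read() == 1        # the read also restores the full charge
cell.leak(amount=0.6)          # too much leakage without a refresh
assert cell.read() == 0        # the stored 1 has been lost
```

The last two lines are exactly why DRAM needs periodic refresh: a read (or an explicit refresh cycle) tops the charge back up, but wait too long and the bit silently decays.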
Figure 5.2b is a typical SRAM structure for an individual cell. Four transistors (T1, T2, T3, T4) are cross connected in an arrangement that produces a stable logic state. In logic state 1, point C1 is high and point C2 is low; in this state, T1 and T4 are off and T2 and T3 are on. In logic state 0, point C1 is low and point C2 is high; in this state, T1 and T4 are on and T2 and T3 are off. Both states are stable as long as the direct current (dc) voltage is applied. Unlike the DRAM, no refresh is needed to retain data. As in the DRAM, the SRAM address line is used to open or close a switch. The address line controls two transistors (T5 and T6). When a signal is applied to this line, the two transistors are switched on, allowing a read or write operation. For a write operation, the desired bit value is applied to line B, while its complement is applied to line B̄. This forces the four transistors (T1, T2, T3, T4) into the proper state. For a read operation, the bit value is read from line B.
Both static and dynamic RAMs are volatile; that is, power must be continuously supplied to the memory to preserve the bit values. A dynamic memory cell is simpler and smaller than a static memory cell. Thus, a DRAM is more dense (smaller cells = more cells per unit area) and less expensive than a corresponding SRAM. On the other hand, a DRAM requires the supporting refresh circuitry. For larger memories, the fixed cost of the refresh circuitry is more than compensated for by the smaller variable cost of DRAM cells. Thus, DRAMs tend to be favored for large memory requirements. A final point is that SRAMs are somewhat faster than DRAMs. Because of these relative characteristics, SRAM is used for cache memory (both on and off chip), and DRAM is used for main memory.
When only a small number of ROMs with a particular memory content is needed, a less expensive alternative is the programmable ROM (PROM). Like the ROM, the PROM is nonvolatile and may be written into only once. For the PROM, the writing process is performed electrically and may be performed by a supplier or customer at a time later than the original chip fabrication. Special equipment is required for the writing or "programming" process. PROMs provide flexibility and convenience. The ROM remains attractive for high-volume production runs.
Another variation on read-only memory is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations but for which nonvolatile storage is required. There are three common forms of read-mostly memory: EPROM, EEPROM, and flash memory. The optically erasable programmable read-only memory (EPROM) is read and written electrically, as with PROM. However, before a write operation, all the storage cells must be erased to the same initial state by exposure of the packaged chip to ultraviolet radiation. Erasure is performed by shining an intense ultraviolet light through a window that is designed into the memory chip. This erasure process can be performed repeatedly; each erasure can take as much as 20 minutes to perform. Thus, the EPROM can be altered multiple times and, like the ROM and PROM, holds its data virtually indefinitely. For comparable amounts of storage, the EPROM is more expensive than PROM, but it has the advantage of the multiple update capability. A more attractive form of read-mostly memory is electrically erasable programmable read-only memory (EEPROM). This is a read-mostly memory that can be written into at any time without erasing prior contents; only the byte or bytes addressed are updated. The write operation takes considerably longer than the read operation, on the order of several hundred microseconds per byte. The EEPROM combines the advantage of non-volatility with the flexibility of being updatable in place, using ordinary bus control, address, and data lines. EEPROM is more expensive than EPROM and also is less dense, supporting fewer bits per chip. Another form of semiconductor memory is flash memory (so named because of the speed with which it can be reprogrammed). First introduced in the mid-1980s, flash memory is intermediate between EPROM and EEPROM in both cost and functionality.
Like EEPROM, flash memory uses an electrical erasing technology. An entire flash memory can be erased in one or a few seconds, which is much faster than EPROM. In addition, it is possible to erase just blocks of memory rather than an entire chip. Flash memory gets its name because the microchip is organized so that a section of memory cells is erased in a single action or "flash." However, flash memory does not provide byte-level erasure. Like EPROM, flash memory uses only one transistor per bit, and so achieves the high density (compared with EEPROM) of EPROM.
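The block-erase semantics just described can be sketched in a few lines. This is a hedged illustration only: real flash programs by clearing bits and erases whole blocks back to all-ones, but the block size, memory size, and the `FlashSketch` class below are assumptions of this example, not any vendor's interface.

```python
# Minimal sketch of block ("flash") erasure versus byte-level update.
# Programming can only clear bits; a whole block must be erased (set
# back to all 1s) before a byte can be rewritten. Sizes are assumptions.
BLOCK_SIZE = 4

class FlashSketch:
    def __init__(self, size=8):
        self.cells = [0xFF] * size           # erased state is all 1s

    def erase_block(self, block):
        # one "flash" action resets an entire block, not a single byte
        start = block * BLOCK_SIZE
        for i in range(start, start + BLOCK_SIZE):
            self.cells[i] = 0xFF

    def program(self, addr, value):
        # programming clears bits only; rewriting requires an erase first
        self.cells[addr] &= value

flash = FlashSketch()
flash.program(1, 0x3C)
assert flash.cells[1] == 0x3C
flash.erase_block(0)                         # bytes 0-3 reset together
assert flash.cells[1] == 0xFF
```

Contrast this with EEPROM, where the byte at `addr` could be erased and updated individually without touching its neighbours — the trade-off the text describes between cost, density, and erase granularity.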
Main memory is composed of a collection of DRAM memory chips. A number of chips can be grouped together to form a memory bank. It is possible to organize the memory banks in a way known as interleaved memory. Each bank is independently able to service a memory read or write request, so that a system with K banks can service K requests simultaneously, increasing memory read or write rates by a factor of K. If consecutive words of memory are stored in different banks, then the transfer of a block of memory is speeded up. Appendix E explores the topic of interleaved memory.
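The "consecutive words in different banks" arrangement can be sketched with low-order interleaving: the bank is selected by the address modulo K. Both K = 4 and the mod-K bank-select rule are illustrative assumptions for this sketch.

```python
# Low-order interleaving sketch: with K banks, address mod K selects the
# bank, so K consecutive word addresses land in K different banks and
# can be serviced simultaneously. K = 4 is an illustrative choice.
K = 4

def bank_of(addr):
    return addr % K          # low-order address bits select the bank

# eight consecutive words cycle through all four banks twice
assert [bank_of(a) for a in range(8)] == [0, 1, 2, 3, 0, 1, 2, 3]

# any K consecutive addresses touch K distinct banks, so a block
# transfer of K words keeps every bank busy at once
assert len({bank_of(a) for a in range(10, 10 + K)}) == K
```

This is where the factor-of-K speedup comes from: a block transfer of consecutive words never queues two requests on the same bank.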
The simplest of the error-correcting codes is the Hamming code devised by Richard Hamming at Bell Laboratories. Figure 5.8 uses Venn diagrams to illustrate the use of this code on 4-bit words (M = 4). With three intersecting circles, there are seven compartments. We assign the 4 data bits to the inner compartments (Figure 5.8a). The remaining compartments are filled with what are called parity bits. Each parity bit is chosen so that the total number of 1s in its circle is even (Figure 5.8b). Thus, because circle A includes three data 1s, the parity bit in that circle is set to 1. Now, if an error changes one of the data bits (Figure 5.8c), it is easily found. By checking the parity bits, discrepancies are found in circle A and circle C but not in circle B. Only one of the seven compartments is in A and C but not B. The error can therefore be corrected by changing that bit.
The first three columns of Table 5.2 list the number of check bits required for various data word lengths. For convenience, we would like to generate a 4-bit syndrome for an 8-bit data word with the following characteristics: if the syndrome contains all 0s, no error has been detected; if the syndrome contains one and only one bit set to 1, then an error has occurred in one of the 4 check bits and no correction is needed; if the syndrome contains more than one bit set to 1, then the numerical value of the syndrome indicates the position of the data bit in error, and this data bit is inverted for correction.
To achieve these characteristics, the data and check bits are arranged into a 12-bit word as depicted in Figure 5.9. The bit positions are numbered from 1 to 12. Those bit positions whose position numbers are powers of 2 are designated as check bits.
Figure 5.10 illustrates the calculation. The data and check bits are positioned properly in the 12-bit word. Four of the data bits have a value 1 (shaded in the table), and their bit position values are XORed to produce the Hamming code 0111, which forms the four check digits. The entire block that is stored is 001101001111. Suppose now that data bit 3, in bit position 6, sustains an error and is changed from 0 to 1. The resulting block is 001101101111, with a Hamming code of 0111. An XOR of the Hamming code and all of the bit position values for nonzero data bits results in 0110. The nonzero result detects an error and indicates that the error is in bit position 6.
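The calculation above reduces to one operation: XOR together the position numbers of every 1 bit in the stored 12-bit word; a zero result means no error, and a nonzero result names the bad position. A small sketch, reproducing the example's numbers (the position-1-first list ordering is an assumption of this illustration):

```python
# Syndrome calculation for the 12-bit Hamming word described above.
# Positions 1, 2, 4, and 8 hold check bits; the syndrome is the XOR of
# the position numbers of every 1 bit in the word.

def syndrome(word):
    """word[i] is the bit at position i + 1 of the 12-bit stored word."""
    s = 0
    for pos, bit in enumerate(word, start=1):
        if bit:
            s ^= pos
    return s

def correct(word):
    """A nonzero syndrome names the bad bit position; flip that bit."""
    s = syndrome(word)
    fixed = list(word)
    if s:
        fixed[s - 1] ^= 1
    return fixed

# The stored block 001101001111 from the text, listed here from
# position 1 up to position 12:
stored = [1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
assert syndrome(stored) == 0              # valid word: no error detected

damaged = list(stored)
damaged[5] ^= 1                           # the error in bit position 6
assert syndrome(damaged) == 0b0110        # syndrome 0110 = position 6
assert correct(damaged) == stored         # single error corrected
```

Because each check bit sits at a power-of-two position, every position number is the sum of the check positions covering it, which is why the XOR of 1-bit positions lands exactly on the erroneous position.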
The code just described is known as a single-error-correcting (SEC) code. More commonly, semiconductor memory is equipped with a single-error-correcting, double-error-detecting (SEC-DED) code. As Table 5.2 shows, such codes require one additional bit compared with SEC codes.

Figure 5.11 illustrates how such a code works, again with a 4-bit data word. The sequence shows that if two errors occur (Figure 5.11c), the checking procedure goes astray (d) and worsens the problem by creating a third error (e). To overcome the problem, an eighth bit is added that is set so that the total number of 1s in the diagram is even. The extra parity bit catches the error (f).

An error-correcting code enhances the reliability of the memory at the cost of added complexity. With a 1-bit-per-chip organization, an SEC-DED code is generally considered adequate. For example, the IBM 30xx implementations used an 8-bit SEC-DED code for each 64 bits of data in main memory. Thus, the size of main memory is actually about 12% larger than is apparent to the user. The VAX computers used a 7-bit SEC-DED for each 32 bits of memory, for a 22% overhead. A number of contemporary DRAMs use 9 check bits for each 128 bits of data, for a 7% overhead [SHAR97].
Figure 5.12 shows the internal logic of IBM’s 64-Mb SDRAM [IBM01], which is typical of SDRAM organization.
Table 5.4 defines the various pin assignments.

The SDRAM employs a burst mode to eliminate the address setup time and row and column line pre-charge time after the first access. In burst mode, a series of data bits can be clocked out rapidly after the first bit has been accessed. This mode is useful when all the bits to be accessed are in sequence and in the same row of the array as the initial access. In addition, the SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism.

The mode register and associated control logic is another key feature differentiating SDRAMs from conventional DRAMs. It provides a mechanism to customize the SDRAM to suit specific system needs. The mode register specifies the burst length, which is the number of separate units of data synchronously fed onto the bus. The register also allows the programmer to adjust the latency between receipt of a read request and the beginning of data transfer.

The SDRAM performs best when it is transferring large blocks of data serially, such as for applications like word processing, spreadsheets, and multimedia.
Figure 5.13 shows an example of SDRAM operation.
Figure 5.14 illustrates the RDRAM layout. The configuration consists of a controller and a number of RDRAM modules connected via a common bus. The controller is at one end of the configuration, and the far end of the bus is a parallel termination of the bus lines. The bus includes 18 data lines (16 actual data, two parity) cycling at twice the clock rate; that is, 1 bit is sent at the leading and following edge of each clock signal. This results in a signal rate on each data line of 800 Mbps. There is a separate set of 8 lines (RC) used for address and control signals. There is also a clock signal that starts at the far end from the controller, propagates to the controller end, and then loops back. An RDRAM module sends data to the controller synchronously with the clock to master, and the controller sends data to an RDRAM synchronously with the clock signal in the opposite direction. The remaining bus lines include a reference voltage, ground, and power source.
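The per-line figure follows from dual-edge signaling on an assumed 400-MHz bus clock (the text quotes only the 800-Mbps result, so the clock value here is an assumption); across the 16 actual data lines, this gives a peak of 1.6 GB/s:

```python
bus_clock_hz = 400 * 10**6        # assumed RDRAM bus clock frequency
bits_per_line = bus_clock_hz * 2  # one bit on each clock edge -> 800 Mbps
data_lines = 16                   # the two parity lines carry no payload
peak_bytes_per_sec = bits_per_line * data_lines // 8
print(peak_bytes_per_sec)         # 1.6 GB/s peak across the data lines
```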
SDRAM is limited by the fact that it can only send data to the processor once per bus clock cycle. A new version of SDRAM, referred to as double-data-rate SDRAM (DDR SDRAM), can send data twice per clock cycle, once on the rising edge of the clock pulse and once on the falling edge.

DDR SDRAM was developed by the JEDEC Solid State Technology Association, the Electronic Industries Alliance’s semiconductor-engineering-standardization body. Numerous companies make DDR chips, which are widely used in desktop computers and servers.
Figure 5.15 shows the basic timing for a DDR read. The data transfer is synchronized to both the rising and falling edge of the clock. It is also synchronized to a bidirectional data strobe (DQS) signal that is provided by the DRAM during a read and by the memory controller during a write. In typical implementations the DQS is ignored during the read. An explanation of the use of DQS on writes is beyond our scope; see [JACO08] for details.

There have been two generations of improvement to the DDR technology. DDR2 increases the data transfer rate by increasing the operational frequency of the RAM chip and by increasing the prefetch buffer from 2 bits to 4 bits per chip. The prefetch buffer is a memory cache located on the RAM chip. The buffer enables the RAM chip to preposition bits to be placed on the data bus as rapidly as possible. DDR3, introduced in 2007, increases the prefetch buffer size to 8 bits.

Theoretically, a DDR module can transfer data at a clock rate in the range of 200 to 600 MHz; a DDR2 module transfers at a clock rate of 400 to 1066 MHz; and a DDR3 module transfers at a clock rate of 800 to 1600 MHz. In practice, somewhat smaller rates are achieved. Appendix K provides more detail on DDR technology.
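The quoted effective transfer rates translate directly into peak module bandwidth, assuming the common 64-bit module data path (a sketch of the arithmetic, not a quoted figure):

```python
def peak_bytes_per_sec(mega_transfers, bus_bits=64):
    """Peak bandwidth = transfers per second x bytes moved per transfer.
    Assumes a 64-bit-wide module data path unless overridden."""
    return mega_transfers * 10**6 * (bus_bits // 8)

# Top of each quoted range above.
for name, mt in [("DDR-600", 600), ("DDR2-1066", 1066), ("DDR3-1600", 1600)]:
    print(name, peak_bytes_per_sec(mt) / 10**9, "GB/s")
```

For example, DDR3 at 1600 million transfers per second peaks at 12.8 GB/s; real workloads achieve somewhat less, as noted above.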
Cache DRAM (CDRAM), developed by Mitsubishi [HIDA90, ZHAN01], integrates a small SRAM cache (16 Kb) onto a generic DRAM chip.

The SRAM on the CDRAM can be used in two ways. First, it can be used as a true cache, consisting of a number of 64-bit lines. The cache mode of the CDRAM is effective for ordinary random access to memory.

The SRAM on the CDRAM can also be used as a buffer to support the serial access of a block of data. For example, to refresh a bit-mapped screen, the CDRAM can prefetch the data from the DRAM into the SRAM buffer. Subsequent accesses to the chip result in accesses solely to the SRAM.