This document discusses memory organization and different types of memory. It begins with an introduction to memory and memory cells. It then discusses how memory cells can be organized into various memory structures like registers and memory arrays. Next, it describes different types of memory like read-only memory, serial access memory, and their applications. Finally, it covers external storage devices like magnetic disks, optical disks, and their characteristics including storage capacity, access mechanisms, and read/write technologies.
The document discusses the memory system in computers, including main memory, cache memory, and different types of memory chips. It provides details on the following key points:
The document discusses the different levels of memory hierarchy including main memory, cache memory, and auxiliary memory. It describes the basic concepts of memory including addressing schemes, memory access time, and memory cycle time. Examples of different types of memory chips are discussed such as SRAM, DRAM, ROM, and cache memory organization and mapping techniques.
In this presentation you will learn about the various types of memory inside the computer. The presentation also uses an analogy for better understanding. Hope it will be fun learning.
The document discusses various types of computer memory technologies, including RAM types like DRAM, SRAM, DDR, DDR2, and DDR3. It explains the memory hierarchy from registers to cache to main memory to disks. Key points covered include how DRAM works using capacitors that must be periodically refreshed, advantages of SDRAM over regular DRAM like pipelining commands. Generations of DDR memory are compared in terms of clock speeds, data rates, and other features.
This document discusses subroutines and the CALL and RET instructions used to implement them in the 8085 microprocessor. It defines a subroutine as a group of instructions written separately from the main program to perform a function that occurs repeatedly. The CALL instruction transfers the program sequence to the subroutine and saves the return address on the stack. The RET instruction inserts the return address from the stack into the program counter to return to the main program. When CALL is executed, the stack pointer is decremented, and when RET is executed, the stack pointer is incremented.
This document discusses memory and I/O interfacing with the 8085 microprocessor. It defines interfaces as points of interaction between components that allow communication. Memory interfacing requires address decoding and multiplexing of address and data lines. I/O devices can be interfaced either through memory mapping or I/O mapping. Common memory types include RAM, ROM, SRAM and DRAM. RAM can be static or dynamic. ROM includes PROM, EPROM and EEPROM. A stack is a reserved part of memory used to temporarily store information during program execution.
Direct Memory Access (DMA) allows certain hardware subsystems to access main system memory independently of the CPU. DMA controllers temporarily borrow the address, data, and control buses from the microprocessor to transfer data directly between an I/O port and memory locations. This allows fast transfer of data to and from devices while the CPU performs other tasks, improving overall system performance. DMA transfers can occur via block transfers where the DMA controller controls the bus for an extended period, or via cycle stealing where it uses the bus for one transfer then returns control to the CPU.
The document discusses the organization and operation of dynamic random access memory (DRAM). DRAM uses capacitors to store bits of data in memory cells that must be periodically refreshed. It describes how DRAM cells are arranged in a grid structure with rows and columns, and how row and column addresses are used to access individual cells. The document also explains techniques like fast page mode that allow for faster access to blocks of data within the same row without needing to reselect the row address.
The document discusses the architecture and support components of the 8085 microprocessor. It describes the pin diagram and functions of the 8085, its operations including memory and I/O access, internal architecture consisting of ALU, registers, buses, and interfacing with memory and I/O devices using memory-mapped and peripheral-mapped techniques. Examples of programs to read from an input port and write to an output port are also provided.
Cache memory is a small, fast memory located close to the processor that stores frequently accessed data from main memory. When the processor requests data, the cache is checked first. If the data is present, there is a cache hit and the data is accessed quickly from the cache. If not present, there is a cache miss and the data must be fetched from main memory, which takes longer. Cache memory relies on principles of temporal and spatial locality, where frequently and nearby accessed data is likely to be needed again soon. Mapping functions like direct, associative, and set-associative mapping determine how data is stored in the cache. Replacement policies like FIFO, LRU, etc. determine which cached data gets replaced when new data must be brought in.
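The hit/miss and LRU-replacement behavior described above can be sketched in a few lines of Python. This is a minimal illustrative simulation (the function name and trace are ours, not from the summarized slides), but it shows how temporal locality turns repeated accesses into hits:

```python
from collections import OrderedDict

def simulate_lru_cache(capacity, accesses):
    """Count hits and misses for a cache using LRU replacement (sketch)."""
    cache = OrderedDict()  # keys are block addresses; order tracks recency
    hits = misses = 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used block
            cache[addr] = True
    return hits, misses

# Temporal locality: block 1 is accessed repeatedly and hits after its first miss.
print(simulate_lru_cache(2, [1, 2, 1, 3, 1, 2]))  # → (2, 4)
```

Swapping `popitem(last=False)` for a plain FIFO queue would model the FIFO policy mentioned above instead.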
The 8237 DMA controller allows data transfer between I/O devices and memory without CPU intervention. It uses HOLD and HLDA signals to request and acknowledge DMA actions from the CPU. The 8237 contains registers like CAR, CWCR, CR, and SR to program DMA channel operations, addresses, counts, and status. It can perform DMA transfers at up to 1.6 MB/s across 4 channels. Modern systems integrate DMA controllers within chipsets rather than using discrete 8237 components.
Cache memory is a type of fast RAM that a computer processor can access more quickly than regular RAM. It stores recently accessed data from main memory to allow for faster future access if the same data is needed again. Cache memory is organized into levels based on proximity and speed of access to the processor, with L1 cache being fastest as it is located directly on the CPU chip, and L2 cache and main memory being progressively slower as they are located further away. Modern processors integrate both L1 and L2 cache onto the CPU package to improve performance by reducing access time.
A memory unit contains storage devices that store binary information as bits. Memory can be classified as volatile, which loses data when power is off, or non-volatile, which retains data when unpowered. The total computer memory forms a hierarchy from slow auxiliary memory to faster main memory and cache memory. Main memory communicates directly with the CPU and auxiliary memory, and holds programs currently in use while transferring unused programs to auxiliary memory. Memory can be accessed randomly, sequentially, or directly depending on its type.
The 80386 microprocessor had two main versions - the 80386DX with a 32-bit address and data bus, and the 80386SX with a 24-bit address bus and 16-bit data bus. The 80386SX was developed later for applications that did not require the full 32-bit capabilities of the 80386DX. The 80386 supported protected mode which enabled virtual memory, paging, and memory protection in addition to the capabilities of the 80286. It had enhanced registers, addressing modes, and memory management compared to earlier Intel processors.
The document discusses processor organization and architecture. It covers the Von Neumann model, which stores both program instructions and data in the same memory. The Institute for Advanced Study (IAS) computer is described as the first stored-program computer, designed by John von Neumann to overcome limitations of previous computers like the ENIAC. The document also covers the Harvard architecture, instruction formats, register organization including general purpose, address, and status registers, and issues in instruction format design like instruction length and allocation of bits.
This presentation covers Real-Time Operating Systems (RTOS). Starting with fundamental concepts of operating systems, it dives into the embedded, real-time, and related aspects of an OS. Appropriate examples are given, with Linux as a case study. Ideal for a beginner building an understanding of RTOS.
The document summarizes different types of computer memory. It describes RAM as volatile memory that can be randomly accessed. There are two main types of RAM: DRAM uses capacitors and must be refreshed, while SRAM uses flip-flops and does not need refreshing. The document also discusses cache memory, ROM, EPROM, EEPROM, flash memory, memory organization, errors and interleaving.
The document discusses stacks and subroutines in 8085 microprocessors. It describes how the stack is an area of memory used for temporary storage of information in a LIFO manner using a stack pointer register. Information is stored on the stack using the PUSH instruction and retrieved using POP. Subroutines allow commonly used code to be executed from different locations in a program by using the CALL instruction to transfer program flow to the subroutine and the RET instruction to return to the main program. Parameters can be passed between the main program and subroutines using registers or memory locations.
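The CALL/RET mechanics summarized above (return address pushed on the stack, stack pointer decremented on CALL and incremented on RET) can be modeled with a small sketch. This is an illustrative Python model under assumed conventions (16-bit addresses, a 3-byte CALL instruction, SP growing downward), not actual 8085 code:

```python
# Illustrative model of the 8085 stack during CALL and RET.
memory = {}

class Cpu:
    def __init__(self):
        self.pc = 0x0100   # program counter
        self.sp = 0xFFFF   # stack pointer grows downward

    def call(self, target):
        ret_addr = self.pc + 3              # CALL is a 3-byte instruction
        # Push return address (high byte, then low byte); SP decrements.
        self.sp -= 1; memory[self.sp] = (ret_addr >> 8) & 0xFF
        self.sp -= 1; memory[self.sp] = ret_addr & 0xFF
        self.pc = target                    # jump to the subroutine

    def ret(self):
        # Pop low byte, then high byte; SP increments back.
        low = memory[self.sp]; self.sp += 1
        high = memory[self.sp]; self.sp += 1
        self.pc = (high << 8) | low         # resume after the CALL

cpu = Cpu()
cpu.call(0x2000)
print(hex(cpu.pc), hex(cpu.sp))  # in the subroutine; SP lowered by 2
cpu.ret()
print(hex(cpu.pc), hex(cpu.sp))  # back at the instruction after CALL; SP restored
```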
Unit 2: Processor & Memory Organisation, by Pavithra S
This document discusses processor and memory organization for embedded systems. It describes the structural units of a processor like the MAR, MDR, buses, BIU, IR, ID, CU, ALU, PC, and caches. It covers memory devices like ROM, RAM, SRAM, DRAM, and flash memory. It provides case studies on selecting a processor based on features like clock speed, performance needs, and power efficiency. The document aims to help with selecting appropriate processors and memory for different types of embedded systems.
This document discusses cache memory organization and characteristics. It begins by describing cache location, capacity, unit of transfer, access methods, and physical characteristics. It then covers the different mapping techniques used in caches, including direct mapping, set associative mapping, and fully associative mapping. The document also discusses cache performance factors like hit ratio, replacement algorithms, write policies, block size, and multilevel cache hierarchies. It provides examples of specific processor cache designs like those used in Intel Pentium processors.
This document discusses memory organization and virtual memory. It describes paging and segmentation as methods for virtual memory address translation. Paging divides memory and processes into equal sized pages, while segmentation divides processes into variable sized segments. Both methods use data structures like page tables to map logical addresses to physical addresses. Caching is also discussed as a way to improve memory performance by storing frequently accessed data in a small, fast memory near the CPU.
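The page-table lookup described above (logical address split into page number and offset, page mapped to a frame) can be sketched briefly. This is a simplified illustration with an assumed 4 KB page size and a dictionary standing in for the page table:

```python
PAGE_SIZE = 4096  # assumed page size for illustration

def translate(logical_addr, page_table):
    """Map a logical address to a physical one via a simple page table."""
    page = logical_addr // PAGE_SIZE       # which page the address falls in
    offset = logical_addr % PAGE_SIZE      # position within that page
    frame = page_table[page]               # raises KeyError on a "page fault"
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}                  # page number -> frame number
print(translate(4100, page_table))         # page 1, offset 4 -> frame 2 -> 8196
```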
Cache memory is located between the processor and main memory. It is smaller and faster than main memory. There are two common cache write policies - write-back and write-through. Mapping is a technique that maps CPU-generated memory addresses to cache lines. There are three types of mapping - direct, associative, and set associative. Direct mapping maps each main memory block to a single cache line using the formula: cache line number = main memory block number % number of cache lines. This can cause conflict misses.
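The direct-mapping formula above translates directly into code. A minimal sketch (function name ours) that also shows how two different blocks landing on the same line produce the conflict misses mentioned:

```python
def direct_mapped_line(block_number, num_lines):
    """Cache line index for a direct-mapped cache: block % lines."""
    return block_number % num_lines

# Blocks 5 and 13 both map to line 1 in a 4-line cache, so accessing
# them alternately evicts each other: a conflict miss pattern.
print(direct_mapped_line(5, 4), direct_mapped_line(13, 4))  # → 1 1
```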
The document discusses memory segmentation in the 8086 microprocessor. It explains that the 8086 has a 20-bit address bus that can address 1MB of physical memory. This memory can be divided into 16 non-overlapping segments of 64KB each, with segment base values ranging from 0000H to F000H. Segments are accessed using a segment register to provide the base address and an offset value. Logical addresses are specified as segment:offset pairs, which are combined and shifted to generate the 20-bit physical address. Segmentation allows code, data, and stacks to be separated and permits relocation of programs in memory.
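The segment:offset combination above is a simple shift-and-add, worth making concrete. A short sketch of the standard 8086 real-mode calculation (the 20-bit mask reflects the 20-bit address bus):

```python
def physical_address(segment, offset):
    """8086 real-mode address: segment shifted left 4 bits, plus offset (20-bit)."""
    return ((segment << 4) + offset) & 0xFFFFF

# The logical address 1234H:0005H combines to physical address 12345H.
print(hex(physical_address(0x1234, 0x0005)))  # → 0x12345
```

Note that many different segment:offset pairs map to the same physical address, which is what permits the program relocation mentioned above.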
The document discusses memory organization and hierarchy. It describes how main memory directly communicates with the CPU while auxiliary memory provides backup storage. It also outlines different memory mapping techniques like direct mapping and set-associative mapping used for cache memory. Virtual memory allows programs to be larger than physical memory by swapping blocks between main and auxiliary storage.
Semiconductor memories have become essential in electronics as processors have become more common and software more sophisticated, greatly increasing the need for memory. There are several types of semiconductor memory technologies that have emerged to meet different needs, including DRAM, SRAM, SDRAM, EEPROM, flash memory, and the newer MRAM. Each type has its advantages for different applications like main memory, caches, and non-volatile storage.
Interfacing Memory with the 8086 Microprocessor, by Vikas Gupta
This document discusses interfacing memory with the 8086 microprocessor. It begins by defining different types of memory like RAM, ROM, EPROM, and EEPROM. It then discusses memory fundamentals like capacity, organization, and standard memory ICs. The document explains two methods of address decoding - absolute and partial decoding. It provides examples of interfacing 32KB RAM, 32K words of memory, and a combination of ROM, EPROM, and RAM with the 8086 using address decoding techniques. Diagrams and tables are included to illustrate the memory mapping and generation of chip select logic.
Memory is a device used to store data or programs either temporarily or permanently for use in a computer. There are different types of memory based on their characteristics such as location, capacity, unit of transfer, access method, performance, physical type and organization. Common memory types include RAM, ROM, and external memory such as magnetic disks. The memory hierarchy consists of registers, cache, main memory and external storage. Cache memory uses the principle of locality to improve memory access time by storing recently accessed data from main memory.
The document discusses various aspects of computer memory systems including main memory, cache memory, and memory mapping techniques. It provides details on:
1) Main memory stores program and data during execution and consists of addressable memory cells. Memory access time is the time for a memory operation while cycle time is the minimum delay between operations.
2) Memory units include RAM, ROM, PROM, EPROM, EEPROM and flash memory which have different characteristics like volatility and ability to be written.
3) Cache memory uses fast SRAM to improve performance by taking advantage of locality of reference, where nearby memory accesses are common. Mapping techniques like direct, associative and set-associative mapping determine how main memory blocks are placed into cache lines.
This presentation introduces computer memory. Two types of storage are discussed: primary memory and secondary memory. We also look at the basis on which memory is classified, as well as the units used to measure storage on the computer, from the basic bits (1s and 0s) up to bytes, kilobytes, and beyond.
This document discusses different types of computer memory. It classifies memory as register, main memory, and secondary memory based on location. It also distinguishes between sequential access memory like tapes and random access memory like RAM. RAM is further divided into static and dynamic RAM. Memory is also classified as volatile and non-volatile based on whether data is retained when power is removed. ROM and RAM are discussed as examples of magnetic and semiconductor memory respectively. ROM is programmed during manufacturing and performs only read operations, while RAM allows both read and write.
The document discusses memory organization and hierarchy in a computer system. It explains that memory hierarchy is used to minimize access time by organizing memory such that frequently used parts are closer to the CPU. It describes the different levels of memory including main memory, cache memory, and auxiliary memory. It provides details on RAM, ROM, and how the computer starts up using the bootstrap loader stored in ROM. It also discusses associative memory and different mapping techniques used to transfer data between main and cache memory such as direct mapping and set-associative mapping.
This document discusses memory subsystems and hierarchy. It begins by describing the memory hierarchy which includes registers, main memory (RAM), and external memory. It then discusses different types of memory in terms of read/write capability, volatility, and erasure mechanisms. The document outlines cache organization and mapping techniques including direct mapping, set associative, and fully associative mapping. It provides examples of address mapping for each technique. The document also discusses RAM and ROM types as well as memory subsystem organization.
This document discusses caching and the SMRR mechanism introduced by Intel to prevent cache poisoning attacks on SMRAM. It explains that:
1) Memory caching types like write-back can allow data in CPU caches to be modified without writing to physical memory.
2) Early researchers exploited this to poison SMRAM caches and gain unauthorized access to protected memory.
3) Intel addressed this with the System Management Range Register (SMRR) that defines a restricted memory range for SMRAM and prevents caching of that memory when not in SMM.
This document discusses caching and the SMRR mechanism introduced by Intel to prevent cache poisoning attacks on SMRAM. It explains that:
1) Memory caching types like write-back can allow data in CPU caches to be modified without writing to physical memory immediately. This was exploited in early attacks on SMRAM.
2) The System Management Range Register (SMRR) restricts access to the SMRAM range defined in the SMRR registers. It takes priority over other caching controls and prevents caching of SMRAM.
3) To use SMRR, software must first verify that the CPU supports it by checking a bit in the IA32_MTRRCAP MSR register. It then config
This document discusses caching and the SMRR mechanism introduced by Intel to prevent cache poisoning attacks on SMRAM. It explains that:
1) Memory caching types like write-back can allow cached data to be modified without writing to memory immediately, enabling attacks.
2) Early research demonstrated how to poison the cache to execute code in SMRAM.
3) In response, Intel added the SMRR register to define a protected range for SMRAM and prevent unauthorized access outside SMM.
This document discusses caching and the SMRR mechanism introduced by Intel to prevent cache poisoning attacks on SMRAM. It explains that:
1) Memory caching types like write-back can allow data in CPU caches to be modified without writing to physical memory.
2) Early researchers exploited this to poison SMRAM caches and gain unauthorized access to protected memory.
3) Intel addressed this with the System Management Range Register (SMRR) that defines a restricted memory range for SMRAM and prevents caching of that memory when not in SMM.
This document discusses caching and the SMRR mechanism introduced by Intel to prevent cache poisoning attacks on SMRAM. It explains that:
1) Memory caching types like write-back can allow data in CPU caches to be modified without writing to physical memory.
2) Early researchers exploited this to poison SMRAM caches and gain unauthorized access to protected memory.
3) Intel addressed this with the System Management Range Register (SMRR) that defines a restricted memory range for SMRAM and prevents caching of that memory when not in SMM.
This document discusses caching and the SMRR mechanism introduced by Intel to prevent cache poisoning attacks on SMRAM. It explains that:
1) Memory caching types like write-back can allow data in CPU caches to be modified without writing to physical memory.
2) Early researchers exploited this to poison SMRAM caches and gain unauthorized access to protected memory.
3) Intel addressed this with the System Management Range Register (SMRR) that defines a restricted memory range for SMRAM and prevents caching of that memory when not in SMM.
This document discusses memory organization and interfacing in embedded systems. It covers memory architecture, types of memory including ROM, RAM, cache memory and DRAM. It describes memory mapping techniques like direct, fully associative and set-associative mapping. The document also discusses memory interfacing, I/O device interfacing using ports or I/O controllers, and memory mapped I/O operations.
Semiconductor memory can be categorized based on attributes like read/write ability, storage permanence, and volatility. Common types include RAM, ROM, EEPROM, and flash memory. RAM is read/write and volatile, requiring power to maintain data. ROM is read-only and non-volatile, with data stored permanently. EEPROM and flash memory are read/write, non-volatile memories that retain data when powered off but with slower write speeds than RAM. Memory devices use architectures like rows and columns with decoders to access individual memory cells.
Memory devices can be classified in several ways:
1. By location as registers, main memory, and secondary memory. Registers are inside the CPU while main memory is external but faster than secondary memory like hard disks.
2. By access as sequential (location must be accessed in order) vs random access memory (RAM) which allows random access.
3. As static (maintains data without refresh) vs dynamic RAM which must be periodically refreshed.
4. As volatile (loses data on power off) vs non-volatile like ROM and magnetic storage.
Read-only memory (ROM) is non-volatile and only allows reading. It is used to permanently store information. Various RO
Memory can be classified as primary or secondary. Primary memory (RAM) is directly accessible by the CPU and is used to store currently running programs and data. Secondary memory (hard disks, SSDs) is used for long-term storage and requires data to be transferred to primary memory for access. RAM types include DRAM and SRAM, while ROM is non-volatile. Cache memory improves CPU performance. Input devices like keyboards are used to input data into the computer's primary memory.
Cache Memory for Computer Architecture.pptrularofclash69
The document discusses cache memory characteristics including location, capacity, unit of transfer, access methods, performance, physical type, organization, and mapping functions. It provides details on direct mapping, associative mapping, set associative mapping, replacement algorithms, and write policies for cache memory. Key aspects covered include cache hierarchy, cache operation, typical cache organization, comparison of cache sizes over time, and how mapping functions, block size, and number of sets/ways impact cache design.
Chapter 8 computer memory system overviewAhlamAli20
The document discusses various aspects of computer memory systems including:
- Memory can be internal (e.g. main memory, cache) or external (e.g. disks, tapes). Internal memory is faster but has lower capacity, while external memory is slower but can store more data.
- Memory is characterized by its access method (e.g. random, sequential), capacity, units of transfer (e.g. words, blocks), and performance parameters like access time and transfer rate.
- Common semiconductor memory types include RAM (random access, volatile), ROM (read-only, non-volatile), and flash memory. RAM can be static or dynamic.
1. Memory testing is an important part of embedded system development to ensure proper functionality.
2. Basic memory tests include data bus testing, address bus testing, and device testing.
3. Data bus testing uses techniques like walking 1's to write all possible data values and verify each bit. Address bus testing uses power-of-two addresses to isolate each address bit. Device testing writes data to addresses and checks for overwrites to test for overlapping addresses.
Similar to Memory Organization | Computer Fundamental and Organization (20)
A Visual Guide to 1 Samuel | A Tale of Two HeartsSteve Thomason
These slides walk through the story of 1 Samuel. Samuel is the last judge of Israel. The people reject God and want a king. Saul is anointed as the first king, but he is not a good king. David, the shepherd boy is anointed and Saul is envious of him. David shows honor while Saul continues to self destruct.
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) CurriculumMJDuyan
(𝐓𝐋𝐄 𝟏𝟎𝟎) (𝐋𝐞𝐬𝐬𝐨𝐧 𝟏)-𝐏𝐫𝐞𝐥𝐢𝐦𝐬
𝐃𝐢𝐬𝐜𝐮𝐬𝐬 𝐭𝐡𝐞 𝐄𝐏𝐏 𝐂𝐮𝐫𝐫𝐢𝐜𝐮𝐥𝐮𝐦 𝐢𝐧 𝐭𝐡𝐞 𝐏𝐡𝐢𝐥𝐢𝐩𝐩𝐢𝐧𝐞𝐬:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
𝐄𝐱𝐩𝐥𝐚𝐢𝐧 𝐭𝐡𝐞 𝐍𝐚𝐭𝐮𝐫𝐞 𝐚𝐧𝐝 𝐒𝐜𝐨𝐩𝐞 𝐨𝐟 𝐚𝐧 𝐄𝐧𝐭𝐫𝐞𝐩𝐫𝐞𝐧𝐞𝐮𝐫:
-Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
This presentation was provided by Racquel Jemison, Ph.D., Christina MacLaughlin, Ph.D., and Paulomi Majumder. Ph.D., all of the American Chemical Society, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
2. Agenda
• In this unit we will see,
• Introduction
• Memory Cell
• Memory Organization
• Read Only Memory
• Serial Access Memory
• Physical Devices Used to Construct Memories
• Magnetic and Optical Disk
• Virtual Memory
22-10-2015 Prepared by: Foram Shah
3. Introduction
• Memory is required in a computer to store a program and the data processed by the program
• It is made up of a large number of cells, each capable of storing one bit
• Cells may be organized as a set of addressable words, each word storing a sequence of bits
• If the time to store or retrieve a word is independent of the word's address, the memory is called RAM (Random Access Memory)
• Another type is SAM (Serial Access Memory), which uses linear sequencing for storing and retrieving words
4. Memory Cell
• It may be defined as a device which can store a symbol selected from a set of symbols.
• It may be characterized by the following properties:
• 1) The number of stable states in which it can be placed
• The number of stable states determines the number of distinct symbols the cell can store.
• Each stable state may be assigned to represent one symbol.
• If a cell has 10 stable states, each state can represent one symbol: the cell stores a decimal digit.
• If a cell can be placed in only one of two stable states, it stores a binary digit.
• 2) Whether a cell can store a symbol indefinitely even when power is turned off.
• Volatile cell: the stored symbol disappears when no energy is supplied.
• Non-volatile cell: the stored symbol is retained even when no energy is supplied.
5. Cont…
• 3) Whether, after reading a symbol from a cell, the stored symbol remains in the memory cell or is disturbed.
• Non-destructive cell: once read, the symbol is not disturbed (not erased).
• Destructive cell: once read, the symbol is erased from the cell.
• 4) The time taken to read a symbol from a cell and the time taken to write a new symbol into it.
• Read time: time taken to read a symbol from the cell.
• Write time: time taken to write a symbol to the cell.
• 5) Whether a symbol, once written, can only be read and not changed.
• Read-only cell: a symbol is permanently written and can only be read, not modified.
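Property (1) ties the number of stable states to the symbols a cell can represent. A small sketch of that arithmetic (the helper name is made up for illustration):

```python
def cells_needed(symbols, states_per_cell):
    """Smallest number of cells, each with `states_per_cell` stable
    states, required to represent `symbols` distinct values."""
    cells, capacity = 0, 1
    while capacity < symbols:
        capacity *= states_per_cell
        cells += 1
    return cells

assert cells_needed(10, 10) == 1     # a 10-state cell stores one decimal digit
assert cells_needed(256, 2) == 8     # 8 binary cells store a byte
assert cells_needed(1000, 10) == 3   # 3 decimal cells store 0-999
```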
6. Memory Organization
• With current memory-cell technology, a symbol can be placed in one of two stable states (i.e. a binary cell)
• Storage cell lines:
• Input data line: the symbol to be written is sent to the cell through this line
• Write line: to write a particular symbol into the cell, a "write control signal" is sent on this line
• Read line: if the content of the cell is to be read, a "read control signal" is sent on this line
• Output data line: the content of the cell may be sensed on this line
7. Cont…
• With appropriate variation in the interconnection of binary memory cells it is possible to organize different types of memory.
• For example:
• Here we assume that individual cells are non-volatile and reading is non-destructive
• The simplest organization of a set of cells is given below,
A 3-bit Register
8. Cont…
• In this organization (above figure), three cells are interconnected in such a way that,
• The write and read control lines of all the cells are connected together.
• The bits to be written in each cell are fed to the appropriate input data lines.
• When a write signal is applied to the write-control line, these bits are written into the individual cells.
• The previous contents of the cells are automatically erased when the new information is written.
• To read the contents, a read signal is applied to the read line.
• The contents of the cells appear on the respective output lines.
• The contents of the individual cells are not erased by the read operation (as we assumed non-destructive cells).
• This interconnection of cells is called a register. (Here it stores 3 bits)
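The register behaviour just described can be sketched as a tiny model (illustrative class and method names, not from the slides; it assumes non-volatile, non-destructive binary cells as above):

```python
class Register3:
    """Model of a 3-bit register: three binary cells whose read and
    write control lines are tied together."""
    def __init__(self):
        self.cells = [0, 0, 0]      # three binary storage cells

    def write(self, bits):
        # A write signal stores the bits on the input data lines;
        # the previous contents are automatically erased.
        self.cells = list(bits)

    def read(self):
        # Non-destructive read: the contents appear on the output
        # lines and remain stored in the cells.
        return list(self.cells)

r = Register3()
r.write([1, 0, 1])
assert r.read() == [1, 0, 1]    # reading...
assert r.read() == [1, 0, 1]    # ...does not erase the contents
```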
10. Cont…
• The input data lines for the first bit of all four registers are connected together, and similarly for the second and third bits.
• The same applies to the output data lines.
• There are four write lines and four read lines, one for each 3-bit register.
• Reading or writing a particular register is done by applying a signal to its read or write line.
• Each register has a unique identification, so the appropriate register may be selected for writing or reading.
• At any one time we may either read from or write to a register.
• The identification code of each register, corresponding to a word in the memory, is called its address.
• An address is usually specified as a binary number and is placed in a register called the Memory Address Register (MAR).
• Data read from or written to memory is placed in a register called the Memory Data Register (MDR).
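A minimal sketch of this word-organized memory, with the MAR selecting the word and the MDR carrying the data (hypothetical names; four 3-bit registers as in the slide):

```python
class WordMemory:
    """Four 3-bit words addressed through a Memory Address Register
    (MAR); data passes through a Memory Data Register (MDR)."""
    def __init__(self, words=4, bits=3):
        self.store = [[0] * bits for _ in range(words)]
        self.mar = 0              # selects which word is accessed
        self.mdr = [0] * bits     # holds data moving in or out

    def write(self):
        # Write the MDR contents into the word selected by the MAR.
        self.store[self.mar] = list(self.mdr)

    def read(self):
        # Copy the addressed word into the MDR (non-destructive).
        self.mdr = list(self.store[self.mar])
        return self.mdr

m = WordMemory()
m.mar, m.mdr = 2, [1, 1, 0]
m.write()                        # store 110 at address 2
m.mar = 2
assert m.read() == [1, 1, 0]
```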
13. Read Only Memory
• Once a word is written in memory, it can later be read by specifying its address.
• Characteristics of ROM:
• Contents of a word cannot be altered
• Reading from ROM should be non-destructive
• Memory should be non-volatile
• Applications of ROM:
• Trigonometric functions
• Washing machine control (the sequencing of washing machine operations may be stored in ROM and interpreted by the processing unit)
14. Cont…
• Factory-programmed ROM:
• A ROM which has information written into it during manufacture in a factory
• Feasible only where the demand for such a programmed ROM is large.
• Programmable ROM (PROM):
• For more specialized uses, a user may wish to store a special function or program in ROM
• Here information is installed only once
• Programming is done using special writing circuits
• The time taken to write information is long while the read time is small
• Not flexible because it cannot be altered
15. Cont…
• Erasable Programmable ROM (EPROM):
• Information in the ROM is erased by shining ultra-violet light on it
• After the ROM is exposed to UV light, all bits are erased and become 0
• The ROM may then be reprogrammed; this PROM is known as an EPROM
• Electrically Erasable Programmable ROM (EEPROM):
• Electrical pulses are used instead of ultra-violet light to erase the PROM
• Erasing a PROM with electrical voltages is more convenient than using UV rays
• It is now the dominant (leading) technology
16. Flash Memory
• A variant of EEPROM
• It is random access memory
• It uses one transistor switch per memory cell
• Capacities range from 32 KB to 1 GB
• It is non-volatile: it does not require power to preserve the data
• Read times of flash memories are tens of nanoseconds while write times are several microseconds
• It is compact and comes in various shapes
• Examples: pen drive, micro card, etc.
• So, advantages of flash memory:
• Random access
• Non-volatile
• Slow to write, fast to read
• Data can be overwritten
• Compact
• Price per byte is rapidly falling
17. Serial Access Memory
• Serial access memory is non-addressable memory
• That is, the bits stored in the memory cannot be selectively retrieved by specifying their location in the memory
• The stored bits can be retrieved only in strict serial order
18. Cont…
• In the organization of memory cells above, the output of each cell is the input to the next cell
• A read signal places the content of each cell on the respective output line
• A write signal following this read signal stores these bits in the respective "next" cells
• One read-write pair of signals thus "shifts" the contents of the cells right by one cell position
• The bit stored in the right-most cell appears on the output line
• As the bits stored in the cells appear serially at the output, this memory is called serial access memory
• This structure (in the above figure) is also known as a "shift register"
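The shift behaviour can be simulated in a few lines (a sketch; the function name is made up, and a 0 is assumed to enter at the left end):

```python
def shift_right(cells):
    """One read-write pair of signals: every cell's content moves one
    position right, and the right-most bit leaves on the output line."""
    out = cells[-1]               # bit appearing on the output line
    return [0] + cells[:-1], out

cells = [1, 0, 1, 1]
outputs = []
for _ in range(4):                # four shifts empty the register
    cells, bit = shift_right(cells)
    outputs.append(bit)
assert outputs == [1, 1, 0, 1]    # bits emerge serially, right-most first
assert cells == [0, 0, 0, 0]
```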
20. Cont…
• The "read head" reads the content of the cell placed below it and places it on the output line
• Cells are moved physically from left to right; as each cell appears below the read head its content is placed on the output
• These bits appear on the output as shown in fig (b)
• Write mechanism:
23. Magnetic Disk
• A magnetic disk is a thin, circular plate/platter made of non-magnetic material called the substrate
• The substrate is coated with magnetizable material (iron oxide)
• Traditionally the substrate was aluminium
• Now glass substrates are used:
• Increased reliability
• Better stiffness
• Better shock/damage resistance
• Recording and retrieval are done via a conductive coil called a head
• Two heads: a read head and a write head
• During read/write, the head is stationary (fixed) while the platter rotates
24. Cont…
• Data Organization and Formatting:
• Data is organized on the platter in a concentric set of rings (sharing a common centre), called tracks.
• Data are transferred to and from the disk in sectors (i.e. tracks are divided into sectors).
• Thousands of tracks per surface.
• Adjacent tracks are separated by a gap called the intertrack gap.
• Hundreds of sectors per track.
• Adjacent sectors are separated by a gap called the intersector gap.
• Gaps are kept small to increase capacity
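The track/sector organization gives a simple raw-capacity formula. A sketch with illustrative figures (the numbers below are assumptions, not from the slides):

```python
def disk_capacity_bytes(surfaces, tracks_per_surface, sectors_per_track,
                        bytes_per_sector=512):
    """Raw capacity = surfaces x tracks x sectors x bytes per sector."""
    return surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector

# e.g. 4 recording surfaces, 2000 tracks each, 400 sectors per track:
cap = disk_capacity_bytes(4, 2000, 400)
assert cap == 1_638_400_000      # about 1.6 GB raw
```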
26. Characteristics
• Head motion: fixed head (one per track); moveable head (one per surface)
• Platters: single platter; multiple platters
• Disk portability: non-removable disk; removable disk
• Head mechanism: contact (floppy); fixed gap; flying (Winchester)
• Sides: single sided; double sided
27. Characteristics : Head Motion
• Fixed head
• One read/write head per track
• Heads mounted on a fixed rigid arm
• Movable head
• One read/write head per side
• Mounted on a movable arm
28. Characteristics : Disk Portability
• Removable disk
• Can be removed from the drive and replaced with another disk
• Provides effectively unlimited storage capacity
• Easy data transfer between systems
• Non-removable disk
• Permanently mounted in the drive
29. Characteristics : Sides
• Single sided
• Magnetizable coating applied on one side
• Less expensive
• Double sided
• Magnetizable coating applied on both sides
30. Characteristics : Platters
• Single platter
• Only a single platter is present
• Multiple platters
• One head per surface
• Heads are joined and aligned
• Aligned tracks on each platter form cylinders
• (Cylinder: the set of all the tracks in the same relative position on the platters is referred to as a cylinder.)
• Data is striped by cylinder
• Reduces head movement
• Increases speed (transfer rate)
33. Characteristics : Head mechanism
• Fixed gap
• The head is positioned at a fixed distance from the platter
• Contact (floppy disk)
• The head makes physical contact with the medium during a read or write operation
• A floppy disk is a small, flexible platter and the least expensive
• Flying (Winchester)
• Developed by IBM in Winchester (USA)
• Heads fly on a boundary layer of air as the disk spins
• Getting more robust
34. Speed
• Seek time
• Moving head to correct track
• (Rotational) latency
• Waiting for data to rotate under head
• Access time = Seek + Latency
• Transfer rate : Data transfer portion of operation
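These quantities combine as a simple sum; average rotational latency is half a revolution. A sketch with illustrative drive figures (the numbers are assumptions, not from the slides):

```python
def access_time_ms(seek_ms, rpm, transfer_mb_per_s, bytes_to_read):
    """Seek + average rotational latency + data transfer time, in ms."""
    latency_ms = 0.5 * (60_000 / rpm)      # half a revolution
    transfer_ms = bytes_to_read / (transfer_mb_per_s * 1_000_000) * 1000
    return seek_ms + latency_ms + transfer_ms

# e.g. 4 ms seek, 7200 rpm, 100 MB/s, one 4 KB sector run:
t = access_time_ms(4.0, 7200, 100, 4096)
assert 8.0 < t < 8.5             # latency (about 4.17 ms) dominates transfer
```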
36. Optical Disk
• Consists of a circular polycarbonate disk coated with a highly reflective coat, usually aluminium
• Laser beam technology is used for recording/reading data on the disk
• Also known as a laser disk / optical laser disk
• Data is stored as pits
• Read by a reflected laser
• Has proved to be a promising random access medium for high-capacity secondary storage
• Originally for audio
• 650 Mbytes, giving over 70 minutes of audio
37. Cont…
• Has one long spiral track, which starts near the centre and spirals outward to the outer edge (to increase the density)
• The track is divided into equal-size sectors
• Note the difference in track patterns on optical and magnetic disks.
38. CD Operation
-> A pit scatters the laser light, so the reflected intensity is low
-> A land reflects the laser light, so the reflected intensity is high
-> The difference between pit and land is detected by a photosensor and converted into a digital signal
39. Cont…
• Optical Disk Products
• CD
• A non erasable disk storing digitized audio information only
• CD-ROM
• CD-R
• CD-RW
• DVD
• DVD-R
• DVD-RW
• Blu-Ray DVD (blue violet laser)
40. • CD-Recordable (CD-R)
• Now affordable
• Compatible with CD-ROM drives but user can write on disk only
once
• CD-RW
• Erasable
• Getting cheaper
• Mostly CD-ROM drive compatible but user can erase and rewrite
multiple times on disk
• Phase change
• Material has two different reflectivities in different phase states
Cont…
41. Cont…
DVD:
• Digital Video Disk
• Used to indicate a player for movies
• Only plays video disks
• Digital Versatile Disk
• Used to indicate a computer drive
• Will read computer disks and play video disks
• Multi-layer
• Very high capacity (4.7G per layer)
• Full length movie on single disk
• Using MPEG compression
• Double sided, capacity of up to 17 GB
• Basic DVD is read only (DVD-ROM)
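The per-layer figure above multiplies out roughly as follows; the naive 4 x 4.7 GB slightly overestimates because dual layers each hold a little less, which is why double-sided dual-layer discs are quoted at about 17 GB:

```python
per_layer_gb = 4.7                                  # single-layer figure from the slide

single_sided_single_layer = per_layer_gb            # 4.7 GB
double_sided_single_layer = 2 * per_layer_gb        # 9.4 GB
naive_double_sided_dual_layer = 4 * per_layer_gb    # 18.8 GB upper bound
assert abs(naive_double_sided_dual_layer - 18.8) < 1e-9
```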
42. • DVD-Recordable (DVD-R)
• Can be written only once
• Only one sided
• DVD-Rewritable (DVD-RW)
• Erase and rewrite multiple times
• Only one sided
43. High Definition Optical Disks
• Designed for high definition videos
• Much higher capacity than DVD
• Shorter wavelength laser
• Blue-violet range
• Smaller pits
• HD-DVD
• 15GB single side single layer
• Blu-ray
• Data layer closer to laser
• Tighter focus, less distortion, smaller pits
• 25 GB on a single layer
• Available as read only (BD-ROM), recordable once (BD-R) and re-recordable (BD-RE)
Cont…