This document summarizes a session on memory organization that was presented by Asst. Prof. M. Gokilavani at VITS. The session covered topics related to memory hierarchy including main memory, auxiliary memory, associative memory, and cache memory. It described the characteristics of different memory types, such as static vs dynamic RAM, RAM vs ROM, and cache mapping techniques like direct mapping, set-associative mapping, and associative mapping. Examples were provided to illustrate memory addressing and cache organization.
The document discusses computer organization and microprocessors. It begins by explaining that computer organization deals with the hardware components of a computer system, including input/output devices, the central processing unit, storage devices, and primary memory. It then discusses the basics of microprocessors, noting that the CPU performs tasks using a microprocessor integrated circuit. The microprocessor is made up of an arithmetic logic unit, control unit, and registers. The document then covers characteristics of microprocessors like clock speed, instruction set, and word size. It also discusses different types of memory devices, ports, and interfaces that are part of computer hardware.
This document provides an introduction to computer hardware. It discusses what a computer is and its basic components like input, output, processing and storage devices. It explains the differences between various types of computer memory like RAM, ROM, cache and primary memory. It also discusses concepts like Moore's Law and how computer performance and memory capacity have increased exponentially over time.
1. The document discusses memory management and the memory hierarchy in computer systems. It describes the different levels of memory including CPU registers, main memory, cache memory, and auxiliary memory.
2. Cache memory is used to reduce the average time required to access memory by taking advantage of spatial and temporal locality. There are three common cache mapping techniques - direct mapping, associative mapping, and set-associative mapping.
3. Virtual memory allows programs to behave as if they have a large, single memory space even if physical memory is smaller. It uses a memory management unit to translate virtual addresses to physical addresses through a page table.
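The address translation described in point 3 can be sketched in a few lines. This is a minimal illustration with a single-level page table; the page size, table contents, and addresses are assumed for the example, not taken from the summarized document.

```python
# Minimal sketch of virtual-to-physical address translation through a
# single-level page table. Geometry and table contents are illustrative.

PAGE_SIZE = 4096  # 4 KiB pages: the low 12 bits of an address are the offset

# Hypothetical page table: virtual page number (VPN) -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    vpn = virtual_addr // PAGE_SIZE      # which virtual page
    offset = virtual_addr % PAGE_SIZE    # position within the page
    if vpn not in page_table:
        raise LookupError(f"page fault: VPN {vpn} not resident")
    frame = page_table[vpn]
    return frame * PAGE_SIZE + offset    # physical address

print(translate(4100))  # VPN 1 -> frame 2, offset 4 -> 8196
```

A real memory management unit does this in hardware and caches translations in a TLB, but the arithmetic is the same.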
The document discusses memory organization and hierarchy in a computer system. It explains that memory hierarchy is used to minimize access time by organizing memory such that frequently used parts are closer to the CPU. It describes the different levels of memory including main memory, cache memory, and auxiliary memory. It provides details on RAM, ROM, and how the computer starts up using the bootstrap loader stored in ROM. It also discusses associative memory and different mapping techniques used to transfer data between main and cache memory such as direct mapping and set-associative mapping.
This document provides information about the computer memory system hierarchy. It discusses how memory is organized in a hierarchy, with the fastest and smallest memory (cache) closest to the CPU and the largest but slowest auxiliary memory farthest away. Main memory sits between them, occupying a central position and communicating with both the CPU and auxiliary storage. The document focuses on the characteristics of main memory technologies like SRAM and DRAM and how they are organized on chips with address lines and data buses. Cache memory aims to bridge the speed difference between the CPU and main memory.
This document provides an overview of computer architecture and memory systems. It discusses topics like memory hierarchy, cache organization and operation, DRAM and SRAM technologies, and memory timing parameters. The key points are:
- Computer architecture refers to the operational design of a computer system, from low-level instruction sets to higher-level aspects like memory subsystems.
- Memory hierarchy utilizes smaller and faster memory levels (registers, cache, main memory, disk) to improve performance. Caches exploit locality of reference to reduce memory access latency.
- Caches are organized into blocks and use tags/indices/offsets to store and retrieve data efficiently. Replacement policies determine which blocks to evict.
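The tag/index/offset scheme mentioned above can be made concrete with a short sketch. The cache geometry here (64 lines of 16 bytes, direct-mapped) is an assumed example, not from the summarized slides.

```python
# Sketch of how a direct-mapped cache splits an address into
# tag / index / offset fields. Geometry is an assumed example.

BLOCK_SIZE = 16   # bytes per cache line -> 4 offset bits
NUM_LINES = 64    # lines in the cache   -> 6 index bits

def split_address(addr):
    offset = addr % BLOCK_SIZE                   # byte within the block
    index = (addr // BLOCK_SIZE) % NUM_LINES     # which cache line
    tag = addr // (BLOCK_SIZE * NUM_LINES)       # disambiguates blocks
    return tag, index, offset

# Addresses exactly BLOCK_SIZE * NUM_LINES (= 1024) bytes apart share an
# index, so in a direct-mapped cache they evict each other.
print(split_address(0x1234))  # -> (4, 35, 4)
```

On a lookup, the index selects the line, the stored tag is compared against the address tag to decide hit or miss, and the offset selects the byte within the block.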
1. The basic components of a parallel computer are processors, memory, and an interconnect network that connects the processors and memory.
2. Key processor terms include RISC, pipelining, and superscalar, which refer to instruction processing techniques. The network interconnect is characterized by latency, bandwidth, and topology.
3. Memory terms include cache, which sits between the CPU and main memory, as well as instruction cache, data cache, secondary cache, and translation lookaside buffer. Caches provide faster access to frequently used data and instructions.
This document summarizes key concepts related to computer memory organization and hierarchy. It discusses how memory is organized from the fastest cache memory up to slower main memory and auxiliary storage. It covers cache mapping techniques like direct mapping, set associative mapping and associative mapping. Virtual memory and paging/segmentation techniques are also summarized. Replacement algorithms for cache memory like FIFO and LRU are discussed. The document provides an overview of computer architecture course topics and assessment patterns.
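The LRU replacement policy mentioned above can be sketched with an ordered dictionary standing in for a fully associative cache. The capacity and block addresses are arbitrary illustrative choices.

```python
from collections import OrderedDict

# Toy LRU replacement for a fully associative cache of 3 blocks.
# Capacity and the access sequence are arbitrary illustrative choices.

CAPACITY = 3
cache = OrderedDict()  # block address -> data, least recently used first

def access(block):
    if block in cache:
        cache.move_to_end(block)   # hit: mark as most recently used
        return "hit"
    if len(cache) >= CAPACITY:
        cache.popitem(last=False)  # miss with full cache: evict the LRU block
    cache[block] = f"data@{block}"
    return "miss"

for b in [1, 2, 3, 1, 4]:          # accessing 4 evicts 2, the LRU block
    print(b, access(b))
print(list(cache))                 # -> [3, 1, 4]
```

FIFO differs only in the hit case: it would not call `move_to_end`, so blocks are evicted in arrival order regardless of reuse.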
This document discusses computer architecture and organization concepts such as cache memory and input/output in computer systems. It defines computer architecture and organization, describes the major components of the CPU and their functions. It also explains the concept of interconnection within a computer system including bus interconnection, describes different types of cache memory mapping (direct, associative, set-associative) and cache initialization. Finally, it defines input/output modules, lists common I/O devices and describes I/O buses and interface modules.
The document discusses different levels of computer memory and cache memory. It describes four levels of memory:
1) Register - Stores data accepted by the CPU.
2) Cache memory - Faster memory that temporarily stores frequently accessed data from main memory.
3) Main memory - The memory the computer currently works on but data is lost when powered off.
4) Secondary memory - External memory that stores data permanently but is slower than main memory.
It then discusses cache memory in more detail, describing it as very high-speed memory that stores copies of frequently used data from main memory to reduce average access time. It explains the concepts of cache hits, misses, and the hit ratio.
Cache memory is a small, fast memory located close to the CPU that stores frequently accessed data from main memory to speed up processing. It is organized into multiple levels, with the L1 cache inside the CPU, the L2 cache outside it, and main memory below them in the hierarchy. The cache improves performance by reducing access time: when requested data is in the cache it is a "hit" and very fast to access, while a "miss" requires a slower load from main memory. Factors like cache size, mapping technique, replacement policy, and write strategy determine how efficiently it services memory requests.
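The effect of the hit ratio on average access time can be quantified with the standard average-memory-access-time formula. The timings below (1 ns cache hit, 100 ns main-memory access) are assumed for illustration.

```python
# Back-of-the-envelope average memory access time (AMAT) from a hit ratio.
# The 1 ns / 100 ns timings are assumed example values.

def amat(hit_time_ns, miss_penalty_ns, hit_ratio):
    miss_ratio = 1.0 - hit_ratio
    return hit_time_ns + miss_ratio * miss_penalty_ns

# Even a small miss ratio dominates: 95% hits still costs ~6 ns on average,
# six times the raw cache hit time.
print(amat(1.0, 100.0, 0.95))  # -> 6.0
```

This is why summaries throughout this page stress locality of reference: the higher the hit ratio, the closer the average access time gets to the cache's own speed.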
1. Auxiliary memory refers to storage technologies like magnetic tapes and disks that are slower to access than main memory but can store much more data at a lower cost.
2. Common examples are magnetic tapes, which use magnetic strips to store data sequentially, and magnetic disks, which store data on spinning platters in concentric tracks and sectors that can be randomly accessed.
3. Cache memory is a small amount of very fast memory located close to the CPU that stores frequently used data from main memory to improve access speed. It works by checking for data matches before accessing slower main memory.
This document discusses computer memory systems including main memory, cache, and virtual memory. It defines main memory as the central storage location that holds programs and data currently being used by the CPU. The document outlines memory hierarchy from fastest to slowest as registers, cache, main memory, and secondary storage. It describes RAM and ROM types as well as cache memory. Locality of reference and memory technologies such as magnetic disks are also summarized.
The document discusses computer memory systems and cache memory principles. It provides an overview of:
- The memory hierarchy, which uses different memory technologies arranged in order of decreasing cost per bit, increasing capacity, and increasing access time. This hierarchy satisfies the conflicting demands of large capacity, fast speed, and low cost.
- Cache memory, which sits between the processor and main memory in the hierarchy. Cache memory exploits locality of reference to improve average memory access time.
- Characteristics of different levels of memory, including location, capacity, unit of transfer, access methods, physical types, volatility, and erasability. Faster but smaller and more expensive memories are higher in the hierarchy to satisfy performance needs.
The document discusses the memory hierarchy and cache memories. It begins by describing the main components of the memory system: main memory and secondary memory. The key issues are that microprocessors are much faster than memory, and larger memories are slower. To address this, a memory hierarchy is used that combines fast, small, expensive memory levels with slower, larger, cheaper levels. Caches are discussed as a small, fast memory located between the CPU and main memory. Caches improve performance by exploiting locality of reference in programs. Different cache organizations like direct mapping and set associative mapping are described to determine where blocks are placed in the cache on a miss.
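The block-placement difference between direct mapping and set-associative mapping described above can be sketched directly. The cache geometry (8 lines, 2 ways) is an assumed example.

```python
# Sketch contrasting where a memory block may be placed under direct
# mapping versus 2-way set-associative mapping. Geometry is assumed.

NUM_LINES = 8
WAYS = 2
NUM_SETS = NUM_LINES // WAYS  # 4 sets in the 2-way configuration

def direct_mapped_line(block_num):
    # Direct mapping: exactly one legal line per block
    return block_num % NUM_LINES

def set_associative_lines(block_num):
    # Set-associative: any of the WAYS lines within one set is legal
    s = block_num % NUM_SETS
    return [s * WAYS + w for w in range(WAYS)]

print(direct_mapped_line(13))     # -> 5
print(set_associative_lines(13))  # set 1 -> lines [2, 3]
```

The extra placement freedom is what lets a set-associative cache avoid some of the conflict misses a direct-mapped cache suffers, at the cost of comparing more tags per lookup.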
Computer System Architecture Lecture Note 8.2: Cache Memory, by Budditha Hettige
Cache memory is a small, high-speed memory located close to the processor that stores frequently accessed instructions and data. When the CPU requests data, it first checks the cache; if the data is present it is retrieved from the cache, otherwise it is retrieved from main memory. There are different types of cache mapping, including direct mapping, associative mapping, and set-associative mapping, that determine how data is stored in the cache. Cache performance is improved through techniques such as write-back caching, which updates only the cache on a write and defers writing to main memory until the block is replaced.
Computer architecture refers to the operational design of a computer system. This lecture focuses on the memory subsystem aspects of computer architecture, including memory hierarchy, memory performance, cache organization and operation, random access memory technologies, and bus interfaces. Key topics discussed are memory hierarchy principles like smaller and faster memory levels, cache hit rates, direct mapped, set associative, and write-back caching techniques, and differences between static and dynamic random access memory technologies.
This document discusses virtual memory and cache memory. It defines virtual memory as a technique that allows programs to behave as if they have contiguous memory even if the actual physical memory is fragmented. It also describes how virtual memory provides each process with its own address space and hides fragmentation. The document also defines cache memory as a small, fast memory located close to the CPU that stores frequently accessed instructions and data to improve performance. It describes levels 1 and 2 caches and how they work with memory and disk caches.
International Journal of Engineering Research and Development, by IJERD Editor
This document summarizes the design of a virtual extended memory symmetric multiprocessor (SMP) organization using LC-3 processors. It discusses the LC-3 processor architecture and instruction set. It then describes the design of a dual core LC-3 processor that shares memory over 32K bank sizes. The key components of the LC-3 processor pipeline including fetch, decode, execute, and writeback units are defined along with their inputs, outputs, and functions. Memory architectures for SMP systems including conventional, direct connect, and shared bus approaches are also summarized.
This document provides an overview of cache memory, including its placement, addressing, levels, mapping methods, and interaction with main memory. Key points:
- Cache memory is high-speed SRAM located on or near the CPU that stores frequently accessed data from main memory for quick access by the processor.
- Caches are organized into multiple levels (L1, L2, L3) based on access speed and size, with L1 being fastest and smallest and L3 largest and slowest.
- Cache mapping determines where main memory blocks are placed in cache and includes direct, associative, and set associative methods. Direct mapping places each block in a single cache line.
The document discusses the memory hierarchy in computers. It explains that memory is organized in a hierarchy with different levels providing varying degrees of speed and capacity. The levels from fastest to slowest are: registers, cache, main memory, and auxiliary memory such as magnetic disks and tapes. Cache memory sits between the CPU and main memory to bridge the speed gap. It exploits locality of reference to improve memory access speed. The document provides details on the working of each memory level and how they interact with each other.
The document is an A-Z glossary of computer memory terminology from the website MemoryFinder.org. It defines terms related to computer hardware components, memory technologies, and specifications. Some key terms defined include RAM, ROM, cache memory, buses, bytes, chips, latency, and memory modules. The glossary provides concise explanations of over 50 important concepts for understanding computer memory and components.
This document provides an overview of various components of computer memory hierarchy, including main memory, auxiliary memory, associative memory, cache memory, virtual memory, and memory management hardware. Main memory uses RAM and ROM chips as primary storage during runtime. Auxiliary memory includes magnetic disks and tapes for long-term secondary storage. Associative memory allows for fast parallel searches. Cache memory acts as a buffer between the CPU and main memory for frequently accessed data. Virtual memory allows programs to access secondary storage as if it were main memory. Memory management hardware in operating systems allocates and manages memory usage between processes.
Cache memory is a small, high-speed memory located between the CPU and main memory. It stores copies of frequently used instructions and data from main memory in order to speed up processing. There are multiple levels of cache with L1 cache being the smallest and fastest located directly on the CPU chip. Larger cache levels like L2 and L3 are further from the CPU but can still provide faster access than main memory. The main purpose of cache is to accelerate processing speed while keeping computer costs low.
Cache memory is a small, high-speed memory located between the CPU and main memory. It stores copies of frequently used instructions and data to accelerate access times. There are three levels of cache - L1, L2, and L3 - with L1 being the smallest and fastest, located directly on the CPU chip. Cache memory utilizes the principle of locality of reference, where programs tend to reuse the same data and instructions. This allows the cache to improve performance by retaining recently accessed content.
CS304PC: Computer Organization and Architecture, Session 14: data transfer and ..., by Asst. Prof. M. Gokilavani
This document summarizes the topics covered in Session 14 of the CS304PC course on Computer Organization and Architecture. It discusses data transfer instructions, which move data without changing its contents. This includes load, store, move, input, output, push and pop instructions. It also examines addressing modes like immediate, register, indexed, and indirect addressing. The document then reviews data manipulation instructions, including arithmetic, logical/bitwise, and shift operations. It concludes by previewing that the next session will cover program control instructions.
COA Computer Organisation and Architecture
CCS335: Neural Networks and Deep Learning Laboratory, Lab Complete Record, by Asst. Prof. M. Gokilavani
LIST OF EXPERIMENTS:
1. Implement simple vector addition in TensorFlow.
2. Implement a regression model in Keras.
3. Implement a perceptron in the TensorFlow/Keras environment.
4. Implement a feed-forward network in TensorFlow/Keras.
5. Implement an image classifier using a CNN in TensorFlow/Keras.
6. Improve a deep learning model by fine-tuning hyperparameters.
7. Implement the transfer learning concept in image classification.
8. Use a pre-trained model in Keras for transfer learning.
9. Perform sentiment analysis using an RNN.
10. Implement an LSTM-based autoencoder in TensorFlow/Keras.
11. Generate images using a GAN.
ADDITIONAL EXPERIMENTS
12. Train a deep learning model to classify a given image using a pre-trained model.
13. Build a recommendation system from sales data using deep learning.
14. Implement object detection using a CNN.
15. Implement a simple reinforcement learning algorithm for an NLP problem.
This document discusses computer architecture and organization concepts such as cache memory and input/output in computer systems. It defines computer architecture and organization, describes the major components of the CPU and their functions. It also explains the concept of interconnection within a computer system including bus interconnection, describes different types of cache memory mapping (direct, associative, set-associative) and cache initialization. Finally, it defines input/output modules, lists common I/O devices and describes I/O buses and interface modules.
The document discusses different levels of computer memory and cache memory. It describes four levels of memory:
1) Register - Stores data accepted by the CPU.
2) Cache memory - Faster memory that temporarily stores frequently accessed data from main memory.
3) Main memory - The memory the computer currently works on but data is lost when powered off.
4) Secondary memory - External memory that stores data permanently but is slower than main memory.
It then discusses cache memory in more detail, describing it as very high-speed memory that stores copies of frequently used data from main memory to reduce average access time. It explains the concepts of cache hits, misses, and hit ratio. Finally, it
Cache memory is a small, fast memory located close to the CPU that stores frequently accessed data from main memory to speed up processing. It is organized into multiple levels - L1 cache is inside the CPU, L2 cache is external, and main memory is L3. The cache improves performance by reducing access time - when data is in cache it is a "hit" and very fast to access, while a "miss" requires loading from main memory which is slower. Factors like cache size, mapping technique, replacement policy, and write strategy impact how efficiently it services memory requests.
1. Auxiliary memory refers to storage technologies like magnetic tapes and disks that are slower to access than main memory but can store much more data at a lower cost.
2. Common examples are magnetic tapes, which use magnetic strips to store data sequentially, and magnetic disks, which store data on spinning platters in concentric tracks and sectors that can be randomly accessed.
3. Cache memory is a small amount of very fast memory located close to the CPU that stores frequently used data from main memory to improve access speed. It works by checking for data matches before accessing slower main memory.
This document discusses computer memory systems including main memory, cache, and virtual memory. It defines main memory as the central storage location that holds programs and data currently being used by the CPU. The document outlines memory hierarchy from fastest to slowest as registers, cache, main memory, and secondary storage. It describes RAM and ROM types as well as cache memory. Locality of reference and memory technologies such as magnetic disks are also summarized.
The document discusses computer memory systems and cache memory principles. It provides an overview of:
- The memory hierarchy, which uses different memory technologies arranged in order of decreasing cost per bit, increasing capacity, and increasing access time. This hierarchy satisfies the conflicting demands of large capacity, fast speed, and low cost.
- Cache memory, which sits between the processor and main memory in the hierarchy. Cache memory exploits locality of reference to improve average memory access time.
- Characteristics of different levels of memory, including location, capacity, unit of transfer, access methods, physical types, volatility, and erasability. Faster but smaller and more expensive memories are higher in the hierarchy to satisfy performance needs.
The document discusses the memory hierarchy and cache memories. It begins by describing the main components of the memory system: main memory and secondary memory. The key issues are that microprocessors are much faster than memory, and larger memories are slower. To address this, a memory hierarchy is used that combines fast, small, expensive memory levels with slower, larger, cheaper levels. Caches are discussed as a small, fast memory located between the CPU and main memory. Caches improve performance by exploiting locality of reference in programs. Different cache organizations like direct mapping and set associative mapping are described to determine where blocks are placed in the cache on a miss.
Computer System Architecture Lecture Note 8.2 Cache MemoryBudditha Hettige
Cache memory is a small, high-speed memory located close to the processor that stores frequently accessed instructions and data. When the CPU requests data, it first checks the cache and if the data is present it is retrieved from cache, otherwise it is retrieved from main memory. There are different types of cache mapping including direct mapping, associative mapping, and set-associative mapping that determine how data is stored in cache. Cache performance is improved through techniques such as write-back caching that only write data to cache on write operations.
Computer architecture refers to the operational design of a computer system. This lecture focuses on the memory subsystem aspects of computer architecture, including memory hierarchy, memory performance, cache organization and operation, random access memory technologies, and bus interfaces. Key topics discussed are memory hierarchy principles like smaller and faster memory levels, cache hit rates, direct mapped, set associative, and write-back caching techniques, and differences between static and dynamic random access memory technologies.
This document discusses virtual memory and cache memory. It defines virtual memory as a technique that allows programs to behave as if they have contiguous memory even if the actual physical memory is fragmented. It also describes how virtual memory provides each process with its own address space and hides fragmentation. The document also defines cache memory as a small, fast memory located close to the CPU that stores frequently accessed instructions and data to improve performance. It describes levels 1 and 2 caches and how they work with memory and disk caches.
International Journal of Engineering Research and DevelopmentIJERD Editor
This document summarizes the design of a virtual extended memory symmetric multiprocessor (SMP) organization using LC-3 processors. It discusses the LC-3 processor architecture and instruction set. It then describes the design of a dual core LC-3 processor that shares memory over 32K bank sizes. The key components of the LC-3 processor pipeline including fetch, decode, execute, and writeback units are defined along with their inputs, outputs, and functions. Memory architectures for SMP systems including conventional, direct connect, and shared bus approaches are also summarized.
This document provides an overview of cache memory, including its placement, addressing, levels, mapping methods, and interaction with main memory. Key points:
- Cache memory is high-speed SRAM located on or near the CPU that stores frequently accessed data from main memory for quick access by the processor.
- Caches are organized into multiple levels (L1, L2, L3) based on access speed and size, with L1 being fastest and smallest and L3 largest and slowest.
- Cache mapping determines where main memory blocks are placed in cache and includes direct, associative, and set associative methods. Direct mapping places each block in a single cache line.
- Cache hits
CS304PC: Computer Organization and Architecture Session 29 Memory organization.pptx
1. CS304PC: Computer Organization
and Architecture (R18 II(I sem))
Department of Computer Science and Engineering (AI/ML)
Session 29
by
Asst. Prof. M. Gokilavani
VITS
3/21/2023 Department of CSE (AI/ML) 1
2. TEXTBOOK:
• 1. Computer System Architecture – M. Morris Mano,
Third Edition, Pearson/PHI.
REFERENCES:
• Computer Organization – Carl Hamacher, Zvonko
Vranesic, Safwat Zaky, 5th Edition, McGraw Hill.
• Computer Organization and Architecture – William
Stallings, Sixth Edition, Pearson/PHI.
• Structured Computer Organization – Andrew S.
Tanenbaum, 4th Edition, PHI/Pearson.
3. Unit IV
Input-Output Organization: Input-Output
Interface, Asynchronous data transfer, Modes of
transfer, Priority Interrupt, Direct Memory
Access.
Memory Organization: Memory Hierarchy,
Main memory, Auxiliary memory, Associative
memory, Cache memory.
4. Topics covered in session 29
Memory Organization
Memory Hierarchy
Main memory
Auxiliary memory
Associative memory
Cache memory
5. Memory Organization
• The memory unit is an essential component in any digital
computer, since it is needed for storing programs and data.
• The memory unit that communicates directly with the CPU is
called the main memory. Devices that provide backup storage
are called auxiliary memory.
• Only programs and data currently needed by the processor
reside in main memory. All other information is stored in
auxiliary memory and transferred to main memory when
needed.
• The memory hierarchy consists of all storage devices
employed in a computer system from the slow but high-
capacity auxiliary memory to a relatively faster main memory,
to an even smaller and faster cache memory accessible to the
high-speed processing logic.
6. Memory Organization
• The main memory occupies a central position by being able to
communicate directly with the CPU and with auxiliary
memory devices through an I/O processor.
• The access times of the CPU and main memory are different. To
coordinate the speed between them, a fast memory called
cache memory is needed.
• While the I/O processor manages data transfer between
auxiliary memory and main memory, the cache organization is
concerned with the transfer of information between main
memory and CPU.
• As the storage capacity of the memory increases, the cost per
bit for storing binary information decreases and the access
time of the memory becomes longer.
• The auxiliary memory has a large storage capacity, is
relatively inexpensive, but has low access speed compared to
main memory.
7. Memory Organization
• The cache memory is very small, relatively
expensive, and has very high access speed.
• CPU has direct access to both cache and main memory but not to
auxiliary memory. The transfer from auxiliary to main memory is
usually done by means of direct memory access of large blocks of
data.
• Many operating systems are designed to enable the CPU to
process a number of independent programs concurrently.
8. Memory Organization
• This concept, called multiprogramming, refers to
the existence of two or more programs in different
parts of the memory hierarchy at the same time.
• In this way, it is possible to keep all parts of the
computer busy by working with several programs
in sequence.
• The part of the computer system that supervises
the flow of information between auxiliary
memory and main memory is called the memory
management system.
10. Main Memory
• The main memory is the central storage unit in a computer
system.
• It is a relatively large and fast memory used to store programs
and data during the computer operation.
• Integrated circuit RAM chips are available in two possible
modes:
• static
• dynamic
• The static RAM consists essentially of internal flip-flops.
• The stored information remains valid as long as power is
applied to the unit.
11. Dynamic Main Memory
• The dynamic RAM stores the binary information in the
form of electric charges that are applied to capacitors.
• The capacitors are provided inside the chip by MOS
transistors.
• The stored charge on the capacitor tends to discharge
with time and the capacitors must be periodically
recharged by refreshing the dynamic memory.
• The dynamic RAM offers reduced power consumption
and larger storage capacity in a single memory chip.
12. Main Memory
• Most of the main memory in a general-purpose computer is made
up of RAM integrated circuit chips, but a portion of the memory
may be constructed with ROM chips.
• RAM is used for storing the bulk of the programs and data that
are subject to change.
• ROM is used for storing programs that are permanently resident
in the computer and for tables of constants that do not change in
value once production of the computer is completed.
• Among other things, the ROM portion of main memory is needed
for storing an initial program called the bootstrap loader.
• The bootstrap loader is a program whose function is to start the
computer software operating when power is turned on.
13. Main Memory
• RAM and ROM chips
– RAM and ROM chips are available in a variety of sizes.
If we need larger memory for the system, it is necessary
to combine a number of chips to form the required
memory size.
• RAM Chips
• A RAM chip is better suited to communicate with the
CPU if it has one or more control inputs that select
the chip only when needed. The block diagram of a
RAM chip is shown below:
16. Memory Address Map
• The addressing of memory can be established by
means of a table that specifies the memory address
assigned to each RAM or ROM chip.
• This table is called a memory address map and is a
pictorial representation of the address space assigned
to each chip.
• Example: Suppose a computer system needs 512 bytes
of RAM and 512 bytes of ROM.
17. Memory Address Map (continued)
• We use four 128-byte RAM chips to make up the 512 bytes of RAM.
• The hexadecimal address column assigns a range of addresses to each chip.
• There are 10 lines in the address bus: lines 1 through 7 address the RAM chips and lines 1 through 9 address the ROM.
• The distinction between a RAM and a ROM chip is made by line 10.
• When line 10 is 1, the CPU selects ROM; when it is 0, it selects RAM.
• The X's represent binary values ranging from all 0's to all 1's.
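The line-10 decoding just described can be sketched as a toy Python model. The chip boundaries follow the slide's example (four 128-byte RAM chips plus 512 bytes of ROM on a 10-line bus); the function name is illustrative, and address line 1 is taken as the least significant bit.

```python
# Toy model of the memory address map: line 10 selects RAM vs ROM,
# lines 8-9 pick one of the four RAM chips.
def decode(addr):
    assert 0 <= addr < 1 << 10              # 10 address lines
    if addr & 0x200:                        # line 10 = 1 -> ROM
        return ("ROM", addr & 0x1FF)        # lines 1-9 address 512 bytes
    chip = (addr >> 7) & 0b11               # lines 8-9 select the RAM chip
    return (f"RAM{chip + 1}", addr & 0x7F)  # lines 1-7 address 128 bytes

print(decode(0x000))  # ('RAM1', 0)
print(decode(0x17F))  # ('RAM3', 127)
print(decode(0x3FF))  # ('ROM', 511)
```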
18. Associative Memory
• Also called content-addressable memory
(CAM), associative storage, or associative
array.
• Content-addressed or associative memory
refers to a memory organization in which the
memory is accessed by its content (as opposed
to an explicit address).
• It is a special type of computer memory used
in certain very high speed searching
applications.
19. Associative Memory
• If the data word is found, the CAM returns a list of one or
more storage addresses where the word was found.
• CAM is designed to search its entire memory in a single
operation.
• It is much faster than RAM in virtually all search applications.
• An associative memory is more expensive than RAM, as each
cell must have storage capability as well as logic circuits for
matching its content with an external argument.
• Associative memories are used in applications where the
search time is very critical and must be short.
• Associative memories are expensive compared to RAMs
because of the added logic associated with each cell.
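The content-based lookup described above can be modeled in Python. Real CAM hardware compares every stored word in parallel in a single operation; the sequential loop below only models that behavior. The mask argument, which selects which bit positions take part in the comparison, and all names are illustrative.

```python
# Toy content-addressable memory: lookup is by content, not by address.
class CAM:
    def __init__(self, words):
        self.words = list(words)

    def match(self, argument, mask=~0):
        """Return every address whose stored word matches the
        argument in the bit positions selected by the mask."""
        return [addr for addr, word in enumerate(self.words)
                if (word & mask) == (argument & mask)]

cam = CAM([0b1010, 0b0110, 0b1010, 0b0001])
print(cam.match(0b1010))               # [0, 2]: all exact matches
print(cam.match(0b0010, mask=0b0010))  # [0, 1, 2]: words with bit 1 set
```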
22. Cache Memory
• Cache (pronounced cash) memory is extremely fast
memory that is built into a computer’s central processing
unit (CPU), or located next to it on a separate chip.
• The CPU uses cache memory to store instructions that
are repeatedly required to run programs, improving
overall system speed.
• The advantage of cache memory is that the CPU does not
have to use the motherboard’s system bus for data
transfer.
23. Cache mapping techniques
• A cache is a small amount of very fast associative
memory.
• It sits between normal main memory and the CPU.
• When the CPU needs to access the contents of a memory
location, the cache is examined for this data.
• If present, the word is delivered from the cache (fast).
• If not present, the required block is read from main memory
into the cache and then delivered from the cache to the CPU.
• The performance of cache memory is frequently
measured in terms of a quantity called the hit ratio.
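The hit ratio is the fraction of CPU references found in the cache. A minimal sketch, assuming a simple model in which a hit costs one cache access and a miss costs one main-memory access; the 20 ns and 200 ns timings below are assumptions for illustration only.

```python
def hit_ratio(hits, misses):
    # hit ratio = hits / total CPU references
    return hits / (hits + misses)

def avg_access_time(h, t_cache, t_main):
    # simple model: a hit costs t_cache, a miss costs t_main
    return h * t_cache + (1 - h) * t_main

h = hit_ratio(hits=900, misses=100)
print(h)                                           # 0.9
print(avg_access_time(h, t_cache=20, t_main=200))  # roughly 38 (ns)
```

A high hit ratio (above 0.9 in practice) is what lets the cache bridge most of the speed gap between the CPU and main memory.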
25. Cache mapping techniques
• The process of transferring the data from main memory to
cache is known as mapping process.
• There are three types of cache mapping techniques:
• Associative mapping
• Direct mapping
• Set-associative mapping
26. Example
• The main memory can store 32K words of 12 bits each.
• The cache is capable of storing 512 of these words at
any given time.
• For every word stored in cache, there is a duplicate
copy in main memory.
• The CPU communicates with both memories.
• It first sends a 15-bit address to cache.
• If there is a hit, the CPU accepts the 12-bit data from
cache.
• If there is a miss, the CPU reads the word from main
memory and the word is then transferred to cache.
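The address widths in this example follow directly from the memory sizes, as a quick check in Python shows (variable names are illustrative):

```python
import math

# 32K words of main memory, 512-word cache (from the example above)
main_words = 32 * 1024
cache_words = 512

addr_bits = int(math.log2(main_words))    # CPU address: 15 bits
index_bits = int(math.log2(cache_words))  # cache index: 9 bits
tag_bits = addr_bits - index_bits         # tag: 6 bits (direct mapping)
print(addr_bits, index_bits, tag_bits)    # 15 9 6
```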
27. Associative mapping
• The associative memory stores both the address and
the content (data) of the memory word.
28. Direct Mapping
• The CPU address of 15 bits is divided into two fields. The
nine least significant bits constitute the index field and the
remaining six bits form the tag field.
29. Direct Mapping
• The n-bit memory address is divided into two fields:
• k bits for the index field
• n-k bits for the tag field.
• The direct-mapping cache organization uses the n-bit address
to access the main memory and the k-bit index to access the
cache.
• The number of bits in the index field is equal to the number
of address bits required to access the cache memory.
31. Direct Mapping
• The tag field of the CPU address is compared
with the tag in the word read from the cache.
• If the two tags match, there is a hit and the desired
data word is in cache.
• If there is no match, there is a miss and the
required word is read from main memory.
• It is then stored in the cache together with the new
tag, replacing the previous value.
• The disadvantage of direct mapping is that two
words with the same index in their address but
with different tag values cannot reside in cache
memory at the same time.
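The tag-compare, hit, and replace-on-miss behavior just described can be sketched as a minimal direct-mapped cache model, using the 9-bit index and 6-bit tag from the running example. The model is word-addressed; the class, the stand-in memory, and all names are illustrative.

```python
INDEX_BITS = 9   # 512-word cache -> 9-bit index
TAG_BITS = 6     # 15 - 9 = 6-bit tag

class DirectMappedCache:
    def __init__(self):
        # one (tag, data) slot per index; None means empty
        self.lines = [None] * (1 << INDEX_BITS)

    def access(self, addr, memory):
        index = addr & ((1 << INDEX_BITS) - 1)  # low 9 bits
        tag = addr >> INDEX_BITS                # high 6 bits
        line = self.lines[index]
        if line is not None and line[0] == tag:
            return "hit", line[1]
        # miss: read from main memory and replace this index's word
        data = memory[addr]
        self.lines[index] = (tag, data)
        return "miss", data

memory = {a: a for a in range(1 << 15)}  # stand-in for main memory
cache = DirectMappedCache()
r1 = cache.access(0o02000, memory)[0]  # first touch
r2 = cache.access(0o02000, memory)[0]  # same word again
r3 = cache.access(0o12000, memory)[0]  # same index, different tag: evicts
print(r1, r2, r3)  # miss hit miss
```

The third access illustrates the disadvantage noted above: 0o02000 and 0o12000 share the same 9-bit index, so they cannot reside in the cache at the same time.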
32. Set-Associative Mapping
• It is an improvement over the direct mapping
organization in that each word of cache can
store two or more words of memory under the
same index address.
• The most common replacement algorithms used
are random replacement, first-in first-out
(FIFO), and least recently used (LRU).
33. Set-Associative Mapping
• The words stored at
addresses 01000 and 02000
of main memory are stored
in cache memory at index
address 000. Similarly, the
words at addresses 02777
and 00777 are stored in
cache at index address 777.
• When a miss occurs in a set
associative cache and the set
is full, it is necessary to
replace one of the tag data
items with a new value.
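A two-way set-associative cache with LRU replacement (one of the algorithms named above) can be sketched as follows. This is an illustrative model, word-addressed like the example; the class and names are assumptions.

```python
from collections import OrderedDict

class TwoWaySetCache:
    def __init__(self, index_bits=9, ways=2):
        self.index_bits = index_bits
        self.ways = ways
        # each set maps tag -> data, ordered least-recently-used first
        self.sets = [OrderedDict() for _ in range(1 << index_bits)]

    def access(self, addr, memory):
        index = addr & ((1 << self.index_bits) - 1)
        tag = addr >> self.index_bits
        s = self.sets[index]
        if tag in s:
            s.move_to_end(tag)      # mark as most recently used
            return "hit", s[tag]
        if len(s) >= self.ways:     # set full: evict least recently used
            s.popitem(last=False)
        s[tag] = memory[addr]
        return "miss", s[tag]

memory = {a: a for a in range(1 << 15)}
cache = TwoWaySetCache()
# Octal addresses 01000 and 02000 share index 000 but can now coexist:
r1 = cache.access(0o01000, memory)[0]
r2 = cache.access(0o02000, memory)[0]
r3 = cache.access(0o01000, memory)[0]
print(r1, r2, r3)  # miss miss hit
```

Unlike the direct-mapped case, the second access to 0o01000 still hits even though 0o02000 has the same index, because the set holds both tags.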
34. Virtual memory
• A virtual memory system attempts to optimize the use
of the main memory (the higher speed portion) with
the hard disk (the lower speed portion).
• Virtual memory gives programmers the illusion that
they have a very large memory and provides a
mechanism for dynamically translating program-generated
addresses into correct main memory
locations.
• The translation or mapping is handled automatically
by the hardware by means of a mapping table.
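The mapping table mentioned above can be modeled as a simple page table that translates a program-generated (virtual) address into a main-memory location. The 1K-word page size and the table contents below are assumptions for illustration only.

```python
PAGE_SIZE = 1024  # assumed page size in words

# virtual page number -> main-memory page (frame) number;
# None means the page currently resides only in auxiliary memory
page_table = {0: 3, 1: None, 2: 0}

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        # a real system would now bring the page in from auxiliary memory
        raise LookupError(f"page fault on virtual page {page}")
    return frame * PAGE_SIZE + offset

print(translate(5))                  # page 0 maps to frame 3 -> 3077
print(translate(2 * PAGE_SIZE + 7))  # page 2 maps to frame 0 -> 7
```

In hardware this translation is performed automatically on every reference, which is why the mapping table must itself be fast to access.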
35. Topics to be covered in the next session (session 30)
• RISC
Thank you!!!