This document provides an overview of basic computer system elements and operating system concepts. It discusses the processor, memory, I/O modules, and system bus. It then covers processor registers, instruction execution, interrupts, the memory hierarchy including cache memory, and I/O communication techniques like programmed I/O, interrupt-driven I/O, and direct memory access. The goal is to provide a high-level roadmap of core computer hardware and software concepts.
The document provides an overview of computer system components and their functions. It describes the CPU, RAM, motherboard, power supply, hard drive, and disk drives. It then discusses basic computer elements like the processor, memory, I/O modules, and system bus. Interrupts are handled through interrupt handlers that suspend normal program execution to service I/O requests. Memory is organized in a hierarchy from fastest and smallest registers to slower but larger secondary storage.
The document discusses the key components and characteristics of the central processing unit (CPU). It describes the CPU's main components as the arithmetic logic unit (ALU) and control unit. It also discusses other components like registers, cache, and different types of memory including RAM, ROM, and their characteristics. The document compares RISC and CISC architectures and covers concepts like multicore processors, overclocking, threading and CPU sockets.
This document provides information about a computer systems course, including the lecturer, textbook, and recommended reading. It then summarizes the key topics that will be covered in the course, including computer structure, central processing unit components like registers and instruction cycles, memory hierarchy with caches, input/output techniques like programmed I/O and interrupt-driven I/O, and other concepts.
The CPU is the central processing unit that acts as the brain of the computer system. It has three main parts: the control unit that manages fetching and executing instructions, the arithmetic logic unit that performs calculations and logical operations, and registers for temporary storage of data and results. The CPU processes instructions and data to control all devices in the computer.
The document discusses how CPUs and memory work together in a computer system. It explains that the CPU contains components like the control unit, arithmetic logic unit, and caches that process instructions. The CPU communicates with memory using address and data buses. Memory stores data and instructions in rows and columns that the CPU can access randomly. Modern multicore CPUs contain multiple processing units on a single chip to run tasks simultaneously for improved performance.
This document outlines the syllabus for a course on computer organization. It includes 5 modules that cover topics like basic computer structure, input/output organization, memory systems, arithmetic, and basic processing units. The course aims to explain computer organization and demonstrate how different subsystems like the processor, input/output, and memory function. Students will learn about hardwired and microprogrammed control as well as pipelining, embedded systems, and other computing architectures. Assessment includes assignments and a written exam in which students must answer one question from each module.
This document provides an overview of an operating systems course, including its aims, outline, recommended reading, and introductory content. The key points are:
1. The course aims to explain operating system structure and functions, illustrate concepts with examples, and prepare students for future courses. Students will learn about CPU scheduling, processes, memory management, I/O, protection, and case studies of Unix and Windows.
2. The course outline covers introductions to operating systems, processes and scheduling, memory management, I/O and device management, protection, filing systems, and case studies of Unix and Windows NT.
3. The recommended reading includes textbooks on concurrent systems, operating system concepts, and case studies.
This document provides lecture notes on operating systems. It begins with an overview of operating systems, their goals and functions. It describes the components of a computer system including hardware, operating system, application programs and users. It then covers common operating system concepts such as processes, memory management, storage management, I/O subsystem and protection/security. The document also discusses distributed systems and operating system services provided to users and for efficient system operation.
This document provides an overview of basic computer system components and their functions. It discusses the main components including the processor, main memory, I/O modules, and system bus. It also describes how instructions are executed in a fetch-execute cycle and how interrupts can alter the normal sequencing. The memory hierarchy is introduced, including caches and different levels of memory. Common I/O techniques like programmed I/O, interrupt-driven I/O, and direct memory access are summarized. Finally, symmetric multiprocessors and multicore computers are defined.
The document provides a top-level overview of the basic components and functions of a computer system. It describes how a central processing unit (CPU) works with memory and input/output devices via buses to execute instructions. Interrupts allow efficient processing by suspending the current program to handle higher priority tasks or events before resuming the original program.
This document discusses system calls and their types. System calls allow programs to request services from the operating system kernel. There are five main categories of system calls: process control, file management, device management, information maintenance, and communication. Process control system calls deal with processes like creation and termination. File management calls handle file operations like reading, writing and opening files. Device management calls allow access and control of devices. Information maintenance calls transfer data between programs and the OS. Communication calls enable interprocess communication.
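The five system-call categories can be illustrated with a short sketch using Python's os module, which wraps the kernel's system-call interface. This covers only two of the categories (information maintenance and file management); process control and communication calls are omitted because they are platform-dependent, and the file name used here is illustrative.

```python
import os
import tempfile

# Information maintenance: query data about the running process.
pid = os.getpid()

# File management: create, write, read back, and delete a file.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")
fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_TRUNC)
os.write(fd, b"hello")
os.lseek(fd, 0, os.SEEK_SET)   # rewind to the start of the file
data = os.read(fd, 5)
os.close(fd)
os.unlink(path)                # file management: remove the file

print(pid > 0, data)
```

Each of these calls traps into the kernel, which performs the requested service on the program's behalf and returns the result.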
1) Cache memory is a small amount of fast memory that the microprocessor can access more quickly than main RAM. It stores frequently used data from main memory to enable faster access.
2) Cache memory is organized into multiple levels based on proximity to the microprocessor core: level 1 (L1) cache is closest, while level 2 (L2) cache and main memory are farther away.
3) Direct memory access (DMA) allows certain hardware systems to access memory independently of the CPU. This avoids occupying the CPU during lengthy input/output operations.
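The cache behavior described above can be sketched with a toy direct-mapped cache. The sizes (8 lines of 16 bytes) are illustrative assumptions, not from the document; the point is that sequential accesses exhibit spatial locality, so only the first access to each 16-byte line misses.

```python
# A minimal direct-mapped cache model: one tag per cache line.
LINE_SIZE = 16
NUM_LINES = 8

cache = {}          # line index -> tag currently stored there
hits = misses = 0

def access(addr):
    """Look up addr in the cache; on a miss, load its line."""
    global hits, misses
    index = (addr // LINE_SIZE) % NUM_LINES
    tag = addr // (LINE_SIZE * NUM_LINES)
    if cache.get(index) == tag:
        hits += 1
    else:
        misses += 1
        cache[index] = tag

# 64 sequential byte accesses touch 4 lines: 4 misses, 60 hits.
for a in range(64):
    access(a)

print(hits, misses)
```

Real caches add associativity and replacement policies, but the index/tag split shown here is the core of the lookup.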
The CPU acts as the computer's brain and carries out instructions from programs. It has two main components: the control unit, which selects and coordinates instruction execution, and the arithmetic logic unit, which performs calculations. Registers temporarily store data during instruction processing, including special purpose registers like the program counter, accumulator, and input/output registers. The CPU communicates with main memory, usually RAM, and cache memory for faster access to active data. It fetches instructions from memory and decodes and executes them in a multi-step process controlled by the control unit.
The document discusses operating system services and functions. It describes that an operating system manages computer resources, provides services for programmers, and schedules program execution. It then lists and describes key operating system services like program creation, execution, I/O access, file access, error handling, and accounting. The document also discusses how the operating system acts as a resource manager and describes common types of operating systems, scheduling, memory management techniques like swapping and paging, and how logical addresses are mapped to physical addresses.
An operating system controls application programs and acts as an interface between applications and hardware. It provides services like program development, execution and resource management. An OS allows for convenient, efficient and evolvable use of computer systems. It masks hardware details from users and programs. An OS manages resources like processors, memory, storage and I/O devices.
The document discusses the five main units of computer hardware: input, storage, operation, control, and output. It describes each unit's function and role, which is analogous to parts of the human body. The storage unit is divided into main storage and auxiliary storage. The document also provides details on integrated circuits, semiconductor memory including RAM and ROM, and different types of RAM and ROM.
The document discusses various techniques for input/output (I/O) in computer systems, including programmed I/O, interrupt-driven I/O, and direct memory access (DMA). It describes how I/O modules interface with CPUs and peripherals to handle data transfer between devices that operate at different speeds. Common I/O bus standards like ISA, PCI, FireWire, and InfiniBand are also overviewed in terms of their architecture, protocols, and applications.
This document discusses I/O systems, including an overview of I/O hardware, the application I/O interface, the kernel I/O subsystem, and I/O performance. It describes how I/O requests are transformed into hardware operations through techniques like interrupts, DMA, polling, and blocking vs. asynchronous I/O. Specific I/O concepts covered include STREAMS, device characteristics, and data structures used in the kernel I/O subsystem.
Von-Neumann machine and IAS architecture (Shishir Aryal)
The presentation summarizes the von Neumann machine architecture, the stored program concept, and components of the IAS architecture. It explains that von Neumann introduced the stored program concept where both instructions and data are stored in memory. It then describes the basic components of a von Neumann machine including the CPU, memory, and I/O devices. Finally, it details the specific components of the IAS architecture such as the memory buffer register, accumulator, and instruction register.
The document discusses input and output in computer systems. It describes three main techniques for transferring data between the CPU and I/O devices: programmed I/O, interrupt-driven I/O, and direct memory access (DMA). Programmed I/O involves the CPU continuously polling I/O devices, interrupt-driven I/O uses interrupts to signal the CPU when data is ready, and DMA allows high-speed transfer of data directly between memory and I/O devices without CPU involvement.
To perform tasks, programs consisting of instruction lists are stored in memory. Individual instructions are fetched from memory into the processor for execution. Data is also stored in memory. The processor contains an ALU, control circuitry, and registers like the instruction register (IR), program counter (PC), memory address register (MAR), and memory data register (MDR). Instructions are fetched from memory based on the PC, decoded and executed, potentially accessing operands from memory via the MAR and MDR and performing operations in the ALU. Results may be written back to memory using the same process.
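The fetch-decode-execute cycle described above can be sketched as a toy simulator. The three-instruction ISA (LOAD, ADD, STORE) is invented for illustration; the register names (PC, IR, MAR, MDR, plus an accumulator ACC) follow the text.

```python
# Memory holds both instructions (addresses 0-3) and data (10-12),
# in the stored-program style.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12),
          3: ("HALT", None),
          10: 5, 11: 7, 12: 0}

PC, ACC = 0, 0
while True:
    # Fetch: MAR <- PC, MDR <- memory[MAR], IR <- MDR, PC <- PC + 1
    MAR = PC
    MDR = memory[MAR]
    IR = MDR
    PC += 1
    # Decode and execute
    op, operand = IR
    if op == "LOAD":
        ACC = memory[operand]
    elif op == "ADD":
        ACC += memory[operand]
    elif op == "STORE":
        memory[operand] = ACC
    elif op == "HALT":
        break

print(memory[12])  # -> 12 (5 + 7)
```

The same MAR/MDR path used to fetch instructions also carries operand reads and result writes, which is why the text describes both with the same registers.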
The document discusses the control unit (CU) and how it executes micro-operations to control the flow of instructions in a computer's central processing unit (CPU). It explains that the CU breaks down each instruction fetch and execute cycle into discrete micro-operations that transfer data between registers and memory. These include operations to fetch instructions from memory into registers using the program counter, memory address register, memory buffer register, and instruction register. The document also describes how the CU handles interrupts by saving the program context and loading the interrupt handling routine address.
The document discusses three different I/O techniques:
1. Programmed I/O - The CPU controls the entire I/O process and must periodically check device status, wasting CPU time.
2. Interrupt-driven I/O - The CPU issues a command and is freed up while the device operates. The device then interrupts the CPU when ready.
3. Direct memory access (DMA) - Allows devices to communicate directly with memory without involving the CPU, using a DMA controller. This overcomes CPU waiting and avoids repeated status checks.
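The contrast between the first two techniques can be sketched with a simulated device that becomes ready after a fixed number of clock ticks. This is a schematic model, not real device code: under programmed I/O every tick spent polling is wasted, while under interrupt-driven I/O the same ticks go to useful work and a handler runs once when the device signals readiness.

```python
class Device:
    """A simulated device that becomes ready at a given tick."""
    def __init__(self, ready_at):
        self.ready_at = ready_at

    def ready(self, tick):
        return tick >= self.ready_at

def programmed_io(dev):
    """CPU spins checking status; ticks before readiness are wasted."""
    tick = wasted = 0
    while not dev.ready(tick):
        wasted += 1
        tick += 1
    return wasted

def interrupt_driven_io(dev, on_ready):
    """CPU does useful work until the device 'interrupts'."""
    tick = useful = 0
    while not dev.ready(tick):
        useful += 1   # computation overlapped with the I/O
        tick += 1
    on_ready()        # interrupt handler runs once, on readiness
    return useful

handled = []
print(programmed_io(Device(5)),
      interrupt_driven_io(Device(5), lambda: handled.append("done")))
```

DMA goes one step further: the transfer itself bypasses the CPU, which is only interrupted once when the whole block has moved.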
The document discusses various aspects of computer system structures. It describes that a modern computer system consists of a CPU, memory, and device controllers connected through a system bus. I/O devices and the CPU can operate concurrently, with each device controller managing a specific device type. Interrupts are used to signal when I/O operations are complete. Memory is organized in a hierarchy from fastest and smallest registers to slower but larger magnetic disks. Various techniques like caching, paging and virtual memory help bridge differences in speed between CPU and I/O devices. The document also discusses hardware protection mechanisms like dual mode operation, memory protection using base and limit registers, and CPU protection using timers.
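The base-and-limit protection scheme mentioned above amounts to a single range check that the hardware performs on every memory reference. The register values below are illustrative; in a real system an out-of-range access traps to the operating system rather than returning False.

```python
# Base and limit registers delimit the running program's legal
# address range: base <= address < base + limit.
BASE, LIMIT = 3000, 500

def check_address(addr):
    """Return True if the access is legal for this program."""
    if BASE <= addr < BASE + LIMIT:
        return True    # access proceeds
    return False       # hardware would trap to the OS here

print(check_address(3200), check_address(4000))
```

Because only the operating system (in kernel mode) may load the base and limit registers, a user program cannot widen its own window into memory.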
The document provides an overview of key components and processes in a computer system. It discusses registers which hold data for the CPU, the arithmetic logic unit (ALU) which performs operations, and control and program counter registers which determine system actions. It also describes buses which transfer data and addresses between components, clocks which synchronize operations, and input/output interfaces. Memory, interrupts, and the fetch-decode-execute instruction cycle are summarized as well.
The document provides an overview of operating system concepts, including:
- The basic components of a computer system including the processor, main memory, I/O modules, and system bus.
- How the processor fetches and executes instructions from main memory in a cycle. Interrupts allow I/O devices to signal the processor to improve efficiency.
- The memory hierarchy addresses constraints of memory speed and cost through caches and secondary storage.
- I/O techniques like programmed I/O, interrupt-driven I/O, and direct memory access (DMA) control data transfer between memory and I/O devices.
- Symmetric multiprocessor (SMP) systems use multiple identical processors that share a common main memory and can execute programs in parallel.
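The SMP idea of identical processors sharing one workload can be illustrated with a thread pool, with workers standing in for processors. This is only a sketch of the work-sharing pattern: CPython threads do not achieve true CPU parallelism because of the global interpreter lock, so a real parallel workload would use processes or a runtime without that restriction.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # A stand-in for a unit of work dispatched to any free "processor".
    return n * n

# Four workers drain a shared queue of tasks, as SMP schedulers
# dispatch ready processes to any idle processor.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))

print(results)
```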
Operating system overview (JyoReddy9)
The document discusses the key concepts of operating systems including their goals, structure, functions and management of processes, memory, storage and security. Specifically, it describes how an operating system acts as an intermediary between the user and hardware to execute programs efficiently while making resource allocation decisions. It also outlines the hierarchy of computer storage and caching strategies used to optimize performance.
CMP 416: Architecture and system design (ElvisAngelot)
This document provides an overview of the key topics covered in a computer architecture course (CMP 416). It discusses the basic elements of a computer system including the processor, memory, I/O, and system bus. The processor controls the computer's operations and performs data processing. Main memory is volatile storage for active data and programs. I/O manages data transfer between the computer and external devices. The system bus provides communication between these core components. The document also examines memory hierarchy and cache memory which aim to speed up processor access to data and instructions.
The document discusses the basic components of a computer system including the processor, main memory, I/O modules, and system bus. It describes how instruction execution works involving fetching from memory and performing processor tasks. Interrupt processing is covered along with the memory hierarchy principles of cache memory. Finally, it outlines symmetric multiprocessors and multicore computer architectures.
This chapter introduces operating systems by describing their main components and functions. It discusses how operating systems act as an intermediary between the user and computer hardware to execute programs and manage system resources like the CPU, memory, storage and I/O devices. It also provides an overview of computer system organization, operating system structure, common operations and how operating systems handle processes, memory and storage management.
Operating systems, chapter 1 (sphs)
This chapter introduces operating systems and their core components and functions. It discusses how operating systems act as an intermediary between the user and computer hardware to execute programs and manage system resources like the CPU, memory, storage and I/O devices. It also describes the basic structure of a computer system, including hardware components, the operating system, application programs and users. Finally, it provides overviews of key operating system operations like process management, memory management and storage management.
This chapter introduces operating systems and their major components. It discusses how operating systems act as an intermediary between the user and computer hardware to execute programs and manage system resources like the CPU, memory, storage and I/O devices. It also covers the basic structure of a computer system including hardware components, the operating system, application programs, and users. Key operating system functions like process management, memory management and storage management are introduced.
C Programming Language is the most popular computer language and most used programming language till now. It is very simple and elegant language. This lecture series will give you basic concepts of structured programming language with C.
This document provides an overview of operating system concepts and components. It describes the basic structure of a computer system including hardware, operating system, application programs, and users. It then discusses operating system definitions and goals, including acting as an intermediary between the user and hardware and making efficient use of system resources. Finally, it covers operating system operations like process management, memory management, and protection/security.
An educational hardware system consists of computer hardware components like the central processing unit (CPU), primary storage, secondary storage, and input/output devices. The CPU contains the processor, memory, and buses that connect all the parts. Primary storage like RAM is used for temporary data and instructions during processing while secondary storage such as hard disks store data long-term. Common computer components include the motherboard, video and sound cards, ports, and power supply inside the system unit.
This chapter provides an introduction and overview of operating systems. It defines an operating system as a program that manages computer hardware and software resources and provides common services for computer programs. It describes the components of a computer system including hardware, operating system, application programs, and users. It then discusses the structure and functions of operating systems, including process management, memory management, storage management, protection and security, and distributed systems. It provides examples of different computing environments like traditional, client-server, peer-to-peer, and web-based computing.
The document discusses operating system concepts including:
1. An operating system acts as an intermediary between the user and computer hardware, executing programs and making the computer convenient to use.
2. A computer system consists of hardware, operating system, application programs, and users, with the operating system controlling resource allocation.
3. Key operating system functions include process management, memory management, storage management, and handling interrupts to enable concurrent execution.
This document provides an introduction to operating systems, including their basic components and functions. It describes how operating systems act as an intermediary between the user and computer hardware, managing resources and executing programs. It also outlines the typical structure of a computer system, with hardware, operating system, application programs, and users as the four main components. Finally, it gives overviews of computer organization, storage management, multiprocessing, and the structure of operating systems.
This document provides an overview of operating system concepts, including:
- The role of an operating system is to act as an intermediary between the user and computer hardware to execute programs and efficiently manage system resources.
- A computer system consists of hardware, operating system, application programs, and users. The operating system controls and coordinates the hardware resources among applications and users.
- Operating systems perform process management, memory management, storage management, and security functions to allocate resources and control concurrent execution of programs.
The document discusses the processor, memory, and cache memory components of a computer system. It describes the central processing unit (CPU) as having two main components - the control unit which interprets and executes instructions, and the arithmetic logic unit which performs arithmetic and logic operations. It also discusses different types of processors, memory organization and storage, and cache memory which acts as a buffer between the CPU and main memory.
UNIT I OPERATING SYSTEM OVERVIEW
Computer System Overview-Basic Elements, Instruction Execution, Interrupts, Memory Hierarchy, Cache Memory, Direct Memory Access, Multiprocessor and Multicore Organization. Operating system overview-objectives and functions, Evolution of Operating System.- Computer System Organization Operating System Structure and Operations- System Calls, System Programs, OS Generation and System Boot.
This document provides an overview of operating systems concepts including goals of the course, operating system definitions, services, resource management, hardware architecture, processes, and interrupts. Key points covered include defining an operating system, its design goals of convenience, efficiency and flexibility, traditional hardware architecture including CPU, memory, peripherals and bus, managing resources through scheduling and allocation policies, and handling interrupts by finishing current instructions before invoking interrupt handling routines.
The document summarizes the five main units of a computer system: input, output, storage, arithmetic, and control units. It describes the functions of hardware components like integrated circuits, memory (RAM and ROM), and the processor. The processor has a control unit that retrieves and decodes instructions from memory and an arithmetic logic unit that performs calculations. Instructions are fetched, decoded, executed, and retired in sequence using the von Neumann architecture.
The document discusses the central processing unit (CPU) and its components. The CPU contains an arithmetic logic unit and a control unit which work together to execute stored program instructions. It retrieves instructions and data from memory, decodes and executes the instructions by performing arithmetic and logical operations, and stores the results back in memory. Modern CPUs use techniques like reduced instruction sets, pipelining, and parallel processing to increase their speed and processing power.
The document provides a top-level overview of the function and interconnection of computer components. It describes how a program is executed through an instruction cycle of fetching and executing instructions. It explains the role of the control unit and how different computer components like the CPU, memory, and I/O devices are interconnected through buses to allow the transfer of data and instructions. Interrupts provide a way to improve processing efficiency and allow different tasks to be interleaved.
This document summarizes I/O management and disk scheduling techniques in operating systems. It covers I/O devices, how the I/O function is organized, operating system design issues regarding I/O, I/O buffering, and different disk scheduling policies like FIFO, SSTF, SCAN, C-SCAN, and others. The document provides an overview of these fundamental operating system I/O concepts in just over 3 sentences.
This document provides an overview of client/server computing and distributed systems. It discusses traditional centralized data processing and how distributed data processing departs from this model. Client/server architectures are introduced, including different types of client/server applications and architectures. Distributed message passing and remote procedure calls are covered as techniques for interprocess communication in distributed systems. The document also discusses clusters, including different cluster types, operating system design issues for clusters, examples of Windows Cluster Server and Sun Cluster, and Beowulf and Linux clusters using commodity hardware.
The document discusses principles of concurrency in operating systems, including mutual exclusion and synchronization. It covers various techniques for managing concurrent processes such as hardware support using interrupt disabling or compare-and-swap instructions. It also covers higher-level synchronization methods like semaphores, monitors, and message passing. It provides examples of how these techniques can solve concurrency issues like the bounded buffer problem and readers-writers problem.
The document summarizes threads, symmetric multiprocessing (SMP), and microkernels. It discusses threads in terms of resource ownership and execution. It describes SMP as allowing portions of the kernel to execute in parallel on multiple processors. Microkernels are described as having a small kernel core that provides modularity through extensions run in user space. Case studies of threads and SMP in Windows, Solaris, and Linux are provided.
The document provides an overview of operating system concepts, including:
1. The objectives and functions of operating systems such as providing convenience, efficiency, and ability to evolve for users and applications.
2. The evolution of operating systems from serial processing to time sharing systems to better utilize hardware resources and serve multiple users simultaneously.
3. Major achievements in operating system design including processes, memory management, information protection, scheduling, and system structure.
This document provides an overview of basic computer system elements and operating system concepts. It discusses the processor, memory, I/O modules, and system bus. It describes processor registers, instruction execution, interrupts, and the memory hierarchy including cache memory. It also covers I/O communication techniques such as programmed I/O, interrupt-driven I/O, and direct memory access.
2. Roadmap Basic Elements Processor Registers Instruction Execution Interrupts The Memory Hierarchy Cache Memory I/O Communication Techniques
3. Operating System Exploits the hardware resources of one or more processors Provides a set of services to system users Manages secondary memory and I/O devices
6. Main Memory Volatile Data is typically lost when power is removed Referred to as real memory or primary memory Consists of a set of locations defined by sequentially numbered addresses Containing either data or instructions
7. I/O Modules Moves data between the computer and the external environment such as: Storage (e.g. hard drive) Communications equipment Terminals Specified by an I/O Address Register (I/OAR)
11. Processor Registers Faster and smaller than main memory User-visible registers Enable programmer to minimize main memory references by optimizing register use Control and status registers Used by the processor to control the operation of the processor Used by privileged OS routines to control the execution of programs
12. User-Visible Registers May be referenced by machine language Available to all programs – application programs and system programs Types of registers typically available are: data, address, condition code registers.
13. Data and Address Registers Data Often general purpose But some restrictions may apply Address Index Register Segment pointer Stack pointer
14. Control and Status Registers Program counter (PC) Contains the address of an instruction to be fetched Instruction register (IR) Contains the instruction most recently fetched Program status word (PSW) Contains status information
15. Condition codes Usually part of the control register Also called flags Bits set by processor hardware as a result of operations Read only, intended for feedback regarding the results of instruction execution.
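The idea of flags set as a side effect of an operation can be sketched with a toy ALU add. The Z/N/C flag names and the 16-bit width are illustrative assumptions, not tied to any particular processor:

```python
# Toy sketch of condition codes set as a side effect of an ALU add.
# Flag names (Z/N/C) and the 16-bit width are illustrative assumptions.

def alu_add(a, b, bits=16):
    mask = (1 << bits) - 1
    raw = a + b
    result = raw & mask
    flags = {
        "Z": result == 0,                  # zero result
        "N": bool(result >> (bits - 1)),   # sign (negative) bit set
        "C": raw > mask,                   # carry out of the top bit
    }
    return result, flags

result, flags = alu_add(0xFFFF, 1)   # wraps to 0 with a carry out
print(result, flags)
```

A conditional branch instruction would then test one of these read-only bits rather than recompute the result.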
17. Instruction Execution A program consists of a set of instructions stored in memory Two steps Processor reads (fetches) instructions from memory Processor executes each instruction
19. Instruction Fetch and Execute The processor fetches the instruction from memory Program counter (PC) holds address of the instruction to be fetched next PC is incremented after each fetch
20. Instruction Register Fetched instruction loaded into instruction register Categories Processor-memory, processor-I/O, Data processing, Control
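The fetch and execute stages can be sketched as a loop over the hypothetical accumulator machine used later in the notes (16-bit words, a 4-bit opcode, a 12-bit address field). The three opcodes below are the ones the example needs; this is a teaching sketch, not a real instruction set:

```python
# Sketch of the instruction cycle for a hypothetical accumulator
# machine: 4-bit opcode in the top bits, 12-bit address in the rest.
# Opcodes: 0x1 = load AC from memory, 0x2 = store AC, 0x5 = add to AC.

def run(memory, pc, steps):
    ac = 0                        # accumulator
    for _ in range(steps):
        ir = memory[pc]           # fetch stage: instruction -> IR
        pc += 1                   # PC incremented after each fetch
        opcode, addr = ir >> 12, ir & 0xFFF
        if opcode == 0x1:         # execute stage: load AC from memory
            ac = memory[addr]
        elif opcode == 0x2:       # store AC to memory
            memory[addr] = ac
        elif opcode == 0x5:       # add memory word to AC
            ac = (ac + memory[addr]) & 0xFFFF
    return ac, pc

# The program fragment from the notes: add word 940h to word 941h.
memory = {0x300: 0x1940, 0x301: 0x5941, 0x302: 0x2941,
          0x940: 0x0003, 0x941: 0x0002}
ac, pc = run(memory, pc=0x300, steps=3)
print(hex(memory[0x941]))        # sum stored back at 941h
```

Three instruction cycles (three fetches, three executes) leave the sum at address 941h, matching the worked example in the notes.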
24. Interrupts Interrupt the normal sequencing of the processor Provided to improve processor utilization Most I/O devices are slower than the processor Processor must pause to wait for device
34. Multiple Interrupts Suppose an interrupt occurs while another interrupt is being processed. E.g. printing data being received via communications line. Two approaches: Disable interrupts during interrupt processing Use a priority scheme.
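The priority approach can be modelled in a few lines. The device names and priority values below are illustrative (a higher number preempts a lower one; anything else is held pending until the nested handlers unwind):

```python
# Toy model of the priority scheme for multiple interrupts: a strictly
# higher-priority request preempts the running handler; the rest are
# held pending. Device names and priority values are illustrative.

def interrupt_order(arrivals):
    """arrivals: (priority, name) pairs in arrival order.
    Returns the order in which the handlers complete."""
    nested, pending, done = [], [], []
    for prio, name in arrivals:
        if not nested or prio > nested[-1][0]:
            nested.append((prio, name))    # preempt the current handler
        else:
            pending.append((prio, name))   # hold until handlers return
    while nested or pending:
        current = nested[-1][0] if nested else 0
        pending.sort()
        if pending and pending[-1][0] > current:
            done.append(pending.pop()[1])  # dispatch highest pending request
        else:
            done.append(nested.pop()[1])   # innermost nested handler returns
    return done

# A printer handler is preempted by a comm-line interrupt; the disk
# request arrives during the comm handler and is held until it finishes.
print(interrupt_order([(2, "printer"), (5, "comm"), (4, "disk")]))
```

With the disable-interrupts approach, by contrast, the three handlers would simply complete in arrival order.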
38. Multiprogramming Processor has more than one program to execute The sequence in which the programs are executed depends on their relative priority and whether they are waiting for I/O After an interrupt handler completes, control may not return to the program that was executing at the time of the interrupt
40. Memory Hierarchy Major constraints in memory Amount Speed Expense Faster access time, greater cost per bit Greater capacity, smaller cost per bit Greater capacity, slower access speed
41. The Memory Hierarchy Going down the hierarchy Decreasing cost per bit Increasing capacity Increasing access time Decreasing frequency of access to the memory by the processor
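The trade-off can be quantified with the usual two-level average access time: with hit ratio H (the fraction of accesses satisfied by the faster level), the average is H*T1 + (1-H)*(T1+T2). The numbers below are illustrative, not taken from the slides:

```python
# Two-level average access time: a hit costs T1; a miss costs T1 (the
# failed fast-level check) plus T2. Times below are illustrative.

def avg_access_time(hit_ratio, t1, t2):
    return hit_ratio * t1 + (1 - hit_ratio) * (t1 + t2)

# e.g. t1 = 0.01 us, t2 = 0.1 us: a 95% hit ratio keeps the average
# access time close to the fast memory's speed.
print(avg_access_time(0.95, t1=0.01, t2=0.1))
```

This is why a small amount of fast memory pays off: a high hit ratio pulls the average toward T1 even though most capacity sits in the slow level.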
44. Cache Memory Invisible to the OS Interacts with other memory management hardware Processor must access memory at least once per instruction cycle Processor speed faster than memory access speed Exploit the principle of locality with a small fast memory
45. Principle of Locality More details later but in short … Data which is required soon is often close to the current data If data is referenced, then its neighbour might be needed soon.
47. Cache Principles Contains copy of a portion of main memory Processor first checks cache If not found, block of memory read into cache Because of locality of reference, likely future memory references are in that block
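A minimal direct-mapped cache illustrates the sequence on the slide: check the cache first, read a whole block in on a miss, then serve nearby references from that block. All sizes here are illustrative:

```python
# Direct-mapped cache sketch: BLOCK_WORDS words per line, NUM_LINES
# lines. Sizes are illustrative; a real cache also needs a write policy.

BLOCK_WORDS = 4
NUM_LINES = 8

class Cache:
    def __init__(self, main_memory):
        self.main_memory = main_memory
        self.lines = [None] * NUM_LINES       # each line holds (tag, words)
        self.hits = self.misses = 0

    def read(self, addr):
        block_no = addr // BLOCK_WORDS
        line = block_no % NUM_LINES           # simple mapping function
        tag = block_no // NUM_LINES
        entry = self.lines[line]
        if entry is None or entry[0] != tag:
            self.misses += 1                  # not found: read whole block in
            base = block_no * BLOCK_WORDS
            entry = (tag, self.main_memory[base:base + BLOCK_WORDS])
            self.lines[line] = entry
        else:
            self.hits += 1                    # processor found it in cache
        return entry[1][addr % BLOCK_WORDS]

cache = Cache(list(range(100)))
values = [cache.read(a) for a in (40, 41, 42, 43)]   # all in one block
print(values, cache.misses, cache.hits)              # locality: 1 miss, 3 hits
```

The first access misses and pulls the whole block in; the next three, falling in the same block because of locality of reference, hit.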
50. Cache Design Issues Main categories are: Cache size Block size Mapping function Replacement algorithm Write policy
51. Size issues Cache size Small caches have significant impact on performance Block size The unit of data exchanged between cache and main memory Larger block size means more hits But too large reduces chance of reuse.
52. Mapping function Determines which cache location the block will occupy Two constraints: When one block is read in, another may need to be replaced Complexity of the mapping function increases circuitry costs for searching.
53. Replacement Algorithm Chooses which block to replace when a new block is to be loaded into the cache. Ideally replacing a block that isn’t likely to be needed again Impossible to guarantee Effective strategy is to replace a block that has been used less than others Least Recently Used (LRU)
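The LRU strategy can be sketched with an ordered dictionary standing in for the cache's usage bookkeeping. The cache capacity and reference string below are illustrative:

```python
# Small sketch of Least Recently Used (LRU) replacement for a fully
# associative cache of `capacity` blocks, using an ordered dict to
# track recency. Capacity and reference string are illustrative.

from collections import OrderedDict

def simulate_lru(refs, capacity):
    """refs: sequence of block numbers referenced by the processor.
    Returns the number of misses (blocks loaded from main memory)."""
    cache = OrderedDict()
    misses = 0
    for block in refs:
        if block in cache:
            cache.move_to_end(block)        # mark most recently used
        else:
            misses += 1                     # load block from main memory
            if len(cache) == capacity:
                cache.popitem(last=False)   # evict least recently used
            cache[block] = True
    return misses

print(simulate_lru([1, 2, 3, 1, 4, 1, 2], capacity=3))
```

When block 4 arrives with the cache full, block 2 is evicted because it was used less recently than blocks 3 and 1.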
54. Write policy Dictates when the memory write operation takes place Can occur every time the block is updated Can occur only when the block is replaced Minimizes write operations but leaves main memory in an obsolete state
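A one-block toy cache makes the trade-off concrete: write-through updates main memory on every store, while write-back defers the write until replacement, so main memory can be temporarily obsolete. The class below is purely illustrative:

```python
# Toy single-block cache contrasting write-through and write-back.
# Purely illustrative; a real cache tracks a dirty bit per line.

class OneBlockCache:
    def __init__(self, main_memory, write_back):
        self.main_memory = main_memory
        self.write_back = write_back
        self.block = None                  # (addr, value, dirty)
        self.mem_writes = 0

    def store(self, addr, value):
        self.block = (addr, value, True)
        if not self.write_back:            # write-through: every update
            self.main_memory[addr] = value
            self.mem_writes += 1

    def evict(self):
        if self.block and self.write_back and self.block[2]:
            addr, value, _ = self.block    # write-back: only on replacement
            self.main_memory[addr] = value
            self.mem_writes += 1
        self.block = None

ram = {0: 0}
wb = OneBlockCache(ram, write_back=True)
for v in (1, 2, 3):
    wb.store(0, v)
stale = ram[0]            # main memory still holds the old value: obsolete
wb.evict()
print(stale, ram[0], wb.mem_writes)
```

Three stores cost write-back a single memory write (at eviction), where write-through would have cost three; the price is the stale window before eviction.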
56. I/O Techniques When the processor encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. Three techniques are possible for I/O operations: Programmed I/O Interrupt-driven I/O Direct memory access (DMA)
57. Programmed I/O The I/O module performs the requested action then sets the appropriate bits in the I/O status register but takes no further action to alert the processor. As there are no interrupts, the processor must determine when the operation is complete
58. Programmed I/O Instruction Set Control Used to activate and instruct the device Status Tests status conditions Transfer Read/write between processor register and device
59. Programmed I/O Example Data read in a word at a time Processor remains in a status-checking loop while reading
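The status-checking loop can be sketched against a made-up device model (the status values and the number of "busy" polls per word are illustrative assumptions):

```python
# Sketch of programmed I/O: the processor busy-waits in a status-
# checking loop, reading one word at a time until the device is done.
# The Device model (status strings, 2 busy polls per word) is made up.

class Device:
    def __init__(self, data):
        self.data = list(data)
        self.busy_polls = 2        # polls answered "busy" before each word

    def status(self):
        if self.busy_polls > 0:
            self.busy_polls -= 1
            return "busy"
        return "ready" if self.data else "done"

    def read_word(self):
        self.busy_polls = 2
        return self.data.pop(0)

def programmed_io(device):
    words, polls = [], 0
    while True:
        s = device.status()        # processor checks the I/O status register
        polls += 1
        if s == "busy":
            continue               # busy-wait: no other useful work is done
        if s == "done":
            return words, polls
        words.append(device.read_word())   # transfer one word at a time

words, polls = programmed_io(Device([10, 20, 30]))
print(words, polls)
```

The poll count growing much faster than the word count is exactly the waste that interrupt-driven I/O, on the next slide, removes.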
60. Interrupt-Driven I/O Processor issues an I/O command to a module and then goes on to do some other useful work. The I/O module will then interrupt the processor to request service when it is ready to exchange data with the processor.
62. Direct Memory Access Performed by a separate module on the system When it needs to read/write, the processor issues a command to the DMA module with: Whether a read or write is requested The address of the I/O device involved The starting location in memory to read/write The number of words to be read/written
63. Direct Memory Access I/O operation delegated to DMA module Processor only involved when beginning and ending transfer. Much more efficient.
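The division of labour can be sketched as follows: the processor only fills in a command block carrying the four items listed above, and the DMA module moves the words itself, signalling the processor when the whole transfer is finished. The field names are illustrative:

```python
# Sketch of a DMA transfer: the processor supplies direction, device,
# starting memory location, and word count; the DMA module does the
# moving and interrupts at completion. Field names are illustrative.

def dma_transfer(command, device_buffer, main_mem):
    start = command["memory_start"]
    count = command["word_count"]
    if command["direction"] == "read":        # device -> main memory
        for i in range(count):
            main_mem[start + i] = device_buffer[i]
    else:                                     # "write": memory -> device
        for i in range(count):
            device_buffer[i] = main_mem[start + i]
    return "interrupt"                        # processor re-enters only here

main_mem = [0] * 8
signal = dma_transfer({"direction": "read", "device": "disk0",
                       "memory_start": 2, "word_count": 3},
                      device_buffer=[7, 8, 9], main_mem=main_mem)
print(main_mem, signal)
```

Compare with the programmed I/O loop above: the processor issues one command and handles one interrupt, instead of polling once or more per word.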
Editor's Notes
These slides are intended to help a teacher develop a presentation. This PowerPoint covers the entire chapter and includes too many slides for a single delivery. Professors are encouraged to adapt this presentation in ways which are best suited for their students and environment.
An operating system mediates among application programs, utilities, and users, on the one hand, and the computer system hardware on the other. To appreciate the functionality of the operating system and the design issues involved, one must have some appreciation for computer organization and architecture. This chapter provides a brief survey of the basic elements of a computer system including the processor, memory, and Input/Output (I/O)
An operating system (OS) exploits the hardware resources of one or more processors to provide a set of services to system users. The OS also manages secondary memory and I/O (input/output) devices on behalf of its users. Accordingly, it is important to have some understanding of the underlying computer system hardware before we begin our examination of operating systems. Many areas introduced in this chapter are covered in more depth later.
At a top level, a computer consists of processor, memory, and I/O components, with one or more modules of each type. These components are interconnected in some fashion to achieve the main function of the computer, which is to execute programs.
Controls the operation of the computer and performs its data processing functions. When there is only one processor, it is often referred to as the central processing unit (CPU). One of the processor's functions is to exchange data with memory. For this purpose, it typically makes use of two internal (to the processor) registers: a memory address register (MAR), which specifies the address in memory for the next read or write; and a memory buffer register (MBR), which contains the data to be written into memory or which receives the data read from memory.
Stores data and programs. Typically volatile; i.e., when the computer is shut down, the contents of the memory are lost. In contrast, the contents of disk memory are retained even when the computer system is shut down. Main memory is also referred to as real memory or primary memory. A memory module consists of a set of locations, defined by sequentially numbered addresses. Each location contains a bit pattern that can be interpreted as either an instruction or data.
Move data between the computer and its external environment. The external environment consists of a variety of devices, including secondary memory devices (e.g., disks), communications equipment, and terminals. An I/O module transfers data from external devices to processor and memory, and vice versa. It contains internal buffers for temporarily holding data until they can be sent on. Similarly, an I/O address register (I/OAR) specifies a particular I/O device. An I/O buffer register (I/OBR) is used for the exchange of data between an I/O module and the processor.
Provides for communication among processors, main memory, and I/O modules.
A processor includes a set of registers that provide memory that is faster and smaller than main memory. Processor registers serve two functions: User-visible registers: enable the machine or assembly language programmer to minimize main memory references by optimizing register use. For high-level languages, an optimizing compiler will attempt to make intelligent choices of which variables to assign to registers and which to main memory locations. Some high-level languages, such as C, allow the programmer to suggest to the compiler which variables should be held in registers. Control and status registers: used by the processor to control the operation of the processor and by privileged OS routines to control the execution of programs. NOTE: There is not a clean separation of registers into these two categories. For example, on some processors, the program counter is user visible, but on many it is not. For purposes of the following discussion, however, it is convenient to use these categories.
A user-visible register may be referenced by means of the machine language that the processor executes and is generally available to all programs, including application programs as well as system programs. Types of registers that are typically available are data, address, and condition code registers
Data registers can be assigned to a variety of functions by the programmer. Usually they are general purpose in nature and can be used with any machine instruction that performs operations on data. Often, however, there are restrictions; e.g., there may be dedicated registers for floating-point operations and others for integer operations. Address registers contain main memory addresses of data and instructions, or they contain a portion of the address that is used in the calculation of the complete or effective address. These registers may themselves be general purpose, or may be devoted to a particular way, or mode, of addressing memory. Examples include: Index register: indexed addressing is a common mode of addressing that involves adding an index to a base value to get the effective address. Segment pointer: with segmented addressing, memory is divided into segments, which are variable-length blocks of words. A memory reference consists of a reference to a particular segment and an offset within the segment. In this mode of addressing, a register is used to hold the base address (starting location) of the segment. There may be multiple registers; for example, one for the OS (i.e., when OS code is executing on the processor) and one for the currently executing application. Stack pointer: if there is user-visible stack addressing, then there is a dedicated register that points to the top of the stack. This allows the use of instructions that contain no address field, such as push and pop.
A variety of processor registers are employed to control the operation of the processor. On most processors, most of these are not visible to the user. Some of them may be accessible by machine instructions executed in what is referred to as a control or kernel mode. Different processors will have different register organizations and use different terminology. In addition to the MAR, MBR, I/OAR, and I/OBR, the following are essential to instruction execution: Program counter (PC): contains the address of the next instruction to be fetched. Instruction register (IR): contains the instruction most recently fetched. The program status word (PSW) contains status information. The PSW typically contains condition codes plus other status information, such as an interrupt enable/disable bit and a kernel/user mode bit.
Bits typically set by the processor hardware as the result of operations. For example, an arithmetic operation may produce a positive, negative, zero, or overflow result. In addition to the result itself being stored in a register or memory, a condition code is also set following the execution of the arithmetic instruction. The condition code may subsequently be tested as part of a conditional branch operation. Condition code bits are collected into one or more registers. Usually, they form part of a control register. Generally, machine instructions allow these bits to be read by implicit reference, but they cannot be altered by explicit reference because they are intended for feedback regarding the results of instruction execution.
A program to be executed by a processor consists of a set of instructions stored in memory. In its simplest form, instruction processing consists of two steps: The processor reads (fetches) instructions from memory one at a time and executes each instruction. Program execution consists of repeating the process of instruction fetch and instruction execution.
The two steps are referred to as the fetch stage and the execute stage. Instruction execution may involve several operations and depends on the nature of the instruction. The processing required for a single instruction is called an instruction cycle. Program execution halts only if the processor is turned off, some sort of unrecoverable error occurs, or a program instruction that halts the processor is encountered.
At the beginning of each instruction cycle, the processor fetches an instruction from memory. Typically, the program counter (PC) holds the address of the next instruction to be fetched. Unless instructed otherwise, the processor always increments the PC after each instruction fetch so that it will fetch the next instruction in sequence (i.e., the instruction located at the next higher memory address).
The fetched instruction is loaded into the instruction register (IR). The instruction contains bits that specify the action the processor is to take. Processor-memory: data may be transferred from processor to memory or from memory to processor. Processor-I/O: data may be transferred to or from a peripheral device by transferring between the processor and an I/O module. Data processing: the processor may perform some arithmetic or logic operation on data. Control: an instruction may specify that the sequence of execution be altered.
Consider a simple example using a hypothetical processor The processor contains a single data register, called the accumulator (AC). Both instructions and data are 16 bits long, and memory is organized as a sequence of 16-bit words. The instruction format provides 4 bits for the opcode, allowing as many as 24 = 16 different opcodesrepresented by a single hexadecimal digit. The opcode defines the operation the processor is to perform.With the remaining 12 bits of the instruction format, up to 212 = 4096 (4 K) words of memory can be directly addressed. denoted by three hexadecimal digits.
This figure illustrates a partial program execution, showing the relevant portions of memory and processor registers. The program fragment shown adds the contents of the memory word at address 940 to the contents of the memory word at address 941 and stores the result in the latter location. Three instructions, which can be described as three fetch and three execute stages, are required:
1. The PC contains 300, the address of the first instruction. This instruction (the value 1940 in hexadecimal) is loaded into the IR and the PC is incremented. Note that this process involves the use of a memory address register (MAR) and a memory buffer register (MBR). For simplicity, these intermediate registers are not shown.
2. The first 4 bits (first hexadecimal digit) in the IR indicate that the AC is to be loaded from memory. The remaining 12 bits (three hexadecimal digits) specify the address, which is 940.
3. The next instruction (5941) is fetched from location 301 and the PC is incremented.
4. The old contents of the AC and the contents of location 941 are added and the result is stored in the AC.
5. The next instruction (2941) is fetched from location 302 and the PC is incremented.
6. The contents of the AC are stored in location 941.
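The three-step example above can be sketched as a small simulation. This is a minimal, illustrative model of the hypothetical machine (the opcode assignments 1 = load AC, 2 = store AC, 5 = add follow the example; the function name and halting rule are assumptions for the sketch):

```python
# Minimal sketch of the hypothetical 16-bit machine from the example.
# Opcodes (one hex digit): 1 = load AC from memory, 2 = store AC to
# memory, 5 = add memory word to AC. All values are hexadecimal.

def run(memory, pc):
    ac = 0
    while pc in memory:
        ir = memory[pc]            # fetch stage: load instruction into IR
        pc += 1                    # increment PC to the next instruction
        opcode, address = ir >> 12, ir & 0xFFF
        if opcode == 0x1:          # load AC from memory
            ac = memory[address]
        elif opcode == 0x5:        # add memory word to AC
            ac = (ac + memory[address]) & 0xFFFF
        elif opcode == 0x2:        # store AC to memory
            memory[address] = ac
        else:
            break                  # unrecognized opcode: halt
    return memory

memory = {0x300: 0x1940, 0x301: 0x5941, 0x302: 0x2941,
          0x940: 0x0003, 0x941: 0x0002}
run(memory, 0x300)
print(hex(memory[0x941]))  # location 941 now holds 3 + 2 = 5
```

Running it with the program fragment at 300-302 reproduces the six fetch/execute steps listed above.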
Virtually all computers provide a mechanism by which other modules (I/O, memory) may interrupt the normal sequencing of the processor. Interrupts are provided primarily as a way to improve processor utilization. For example, most I/O devices are much slower than the processor. Suppose that the processor is transferring data to a printer using the instruction cycle scheme described earlier. After each write operation, the processor must pause and remain idle until the printer catches up. The length of this pause may be on the order of many thousands or even millions of instruction cycles. Clearly, this is a very wasteful use of the processor.
Table 1.1 lists the most common classes of interrupts.
A) No Interrupts
Figure 1.5a illustrates the previous printer example. The user program performs a series of WRITE calls interleaved with processing. The solid vertical lines represent segments of code in a program. Code segments 1, 2, and 3 refer to sequences of instructions that do not involve I/O. The WRITE calls are to an I/O routine that is a system utility and that will perform the actual I/O operation. The I/O program consists of three sections:
• A sequence of instructions, labeled 4 in the figure, to prepare for the actual I/O operation. This may include copying the data to be output into a special buffer and preparing the parameters for a device command.
• The actual I/O command. Without the use of interrupts, once this command is issued, the program must wait for the I/O device to perform the requested function (or periodically check the status of, or poll, the I/O device). The program might wait by simply repeatedly performing a test operation to determine if the I/O operation is done.
• A sequence of instructions, labeled 5 in the figure, to complete the operation. This may include setting a flag indicating the success or failure of the operation.
The dashed line represents the path of execution followed by the processor; i.e., this line shows the sequence in which instructions are executed. Thus, after the first WRITE instruction is encountered, the user program is interrupted and execution continues with the I/O program. After the I/O program execution is complete, execution resumes in the user program immediately following the WRITE instruction. Because the I/O operation may take a relatively long time to complete, the I/O program is hung up waiting for the operation to complete; hence, the user program is stopped at the point of the WRITE call for some considerable period of time.
With interrupts, the processor can be engaged in executing other instructions while an I/O operation is in progress. As before, the user program reaches a point at which it makes a system call in the form of a WRITE call. The I/O program that is invoked in this case consists only of the preparation code and the actual I/O command. After these few instructions have been executed, control returns to the user program. Meanwhile, the external device is busy accepting data from computer memory and printing it. This I/O operation is conducted concurrently with the execution of instructions in the user program. When the external device becomes ready to be serviced, that is, when it is ready to accept more data from the processor, the I/O module for that external device sends an interrupt request signal to the processor. The processor responds by suspending operation of the current program; branching off to a routine to service that particular I/O device, known as an interrupt handler; and resuming the original execution after the device is serviced. The points at which such interrupts occur are indicated by X in Figure 1.5b. Note that an interrupt can occur at any point in the main program, not just at one specific instruction.
For the user program, an interrupt suspends the normal sequence of execution. When the interrupt processing is completed, execution resumes. Thus, the user program does not have to contain any special code to accommodate interrupts; the processor and the OS are responsible for suspending the user program and then resuming it at the same point.
To accommodate interrupts, an interrupt stage is added to the instruction cycle, as shown here (compare Figure 1.2 earlier). In the interrupt stage, the processor checks to see if any interrupts have occurred, indicated by the presence of an interrupt signal. If no interrupts are pending, the processor proceeds to the fetch stage and fetches the next instruction of the current program. If an interrupt is pending, the processor suspends execution of the current program and executes an interrupt-handler routine. The interrupt-handler routine is generally part of the OS. Typically, this routine determines the nature of the interrupt and performs whatever actions are needed. In the example we have been using, the handler determines which I/O module generated the interrupt and may branch to a program that will write more data out to that I/O module. When the interrupt-handler routine is completed, the processor can resume execution of the user program at the point of interruption.
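The fetch-execute-interrupt cycle described above can be sketched as a loop. This is a toy model, not real processor logic: the `execute`, `handler`, and `log` names are invented for illustration, and a pending interrupt is represented as an entry in a queue.

```python
from collections import deque

pending_interrupts = deque()       # interrupt request signals
log = []

def execute(instruction):
    log.append(f"execute {instruction}")

def handler(request):
    log.append(f"service {request}")

def instruction_cycle(program):
    pc = 0
    while pc < len(program):
        instruction = program[pc]  # fetch stage
        pc += 1
        execute(instruction)       # execute stage
        if pending_interrupts:     # interrupt stage: any signal pending?
            request = pending_interrupts.popleft()
            handler(request)       # suspend, service the device, resume

pending_interrupts.append("printer")
instruction_cycle(["i0", "i1", "i2"])
print(log)  # printer serviced after i0; i1 and i2 then run normally
```

Note that the interrupt is checked once per instruction cycle, which is why an interrupt can be honored at any point in the main program rather than only at specific instructions.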
To appreciate the gain in efficiency, consider Figure 1.8, which is a timing diagram based on the flow of control in Figures 1.5a and 1.5b. Figures 1.5b and 1.8 assume that the time required for the I/O operation is relatively short: less than the time to complete the execution of instructions between write operations in the user program. The more typical case, especially for a slow device such as a printer, is that the I/O operation will take much more time than executing a sequence of user instructions.
Figure 1.5c indicates a more typical state of affairs. In this case, the user program reaches the second WRITE call before the I/O operation spawned by the first call is complete. The result is that the user program is hung up at that point. When the preceding I/O operation is completed, this new WRITE call may be processed, and a new I/O operation may be started. Figure 1.9 shows the timing for this situation with and without the use of interrupts. We can see that there is still a gain in efficiency because part of the time during which the I/O operation is underway overlaps with the execution of user instructions.
An interrupt triggers a number of events, both in the processor hardware and in software. This figure shows a typical sequence. When an I/O device completes an I/O operation, the following sequence of events occurs:
1. The device issues an interrupt signal to the processor.
2. The processor finishes execution of the current instruction before responding to the interrupt.
3. The processor tests for a pending interrupt request, determines that there is one, and sends an acknowledgment signal to the device that issued the interrupt. The acknowledgment allows the device to remove its interrupt signal.
4. The processor next needs to prepare to transfer control to the interrupt routine, saving the information required to resume the current program at the point of interrupt: at a minimum, the program status word (PSW) and the program counter.
5. The processor then loads the program counter with the entry location of the interrupt-handling routine that will respond to this interrupt.
6. At this point, the program counter and PSW relating to the interrupted program have been saved on the control stack. The next slide shows more detail on this step.
7. The interrupt handler may now proceed to process the interrupt.
8. The saved register values are retrieved from the stack and restored to the registers.
9. The final act is to restore the PSW and program counter values from the stack.
It is important to save all of the state information about the interrupted program for later resumption, because the interrupt is not a routine called from the program. Rather, the interrupt can occur at any time and therefore at any point in the execution of a user program. Its occurrence is unpredictable.
Continuing from step 6 in the previous diagram: in this case, a user program is interrupted after the instruction at location N. The contents of all of the registers plus the address of the next instruction (N + 1), a total of M words, are pushed onto the control stack. The stack pointer is updated to point to the new top of stack, and the program counter is updated to point to the beginning of the interrupt service routine.
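The save-and-restore of processor state around an interrupt can be sketched as push and pop operations on a control stack. The register names, the dictionary representation, and the function names here are all illustrative assumptions, not hardware detail:

```python
# Sketch of saving processor state on the control stack at interrupt
# time, and restoring it on return. Register names are illustrative.

control_stack = []

def save_state(registers, pc):
    # push all register contents plus the return address (N + 1)
    control_stack.append({"registers": dict(registers), "return_pc": pc})

def restore_state():
    # pop the saved frame so the interrupted program can resume
    frame = control_stack.pop()
    return frame["registers"], frame["return_pc"]

save_state({"AC": 5, "IR": 0x2941}, pc=0x303)
# ... the interrupt service routine would run here ...
regs, pc = restore_state()
print(hex(pc))  # resumes at 0x303, the instruction after the interrupt
```

Because the stack is last-in, first-out, the same mechanism also handles an interrupt arriving while another interrupt is being processed, which is the nesting case discussed next.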
Suppose that one or more interrupts can occur while an interrupt is being processed. For example, a program may be receiving data from a communications line and printing results at the same time. The printer will generate an interrupt every time that it completes a print operation. The communication line controller will generate an interrupt every time a unit of data arrives. Two approaches can be taken to dealing with multiple interrupts. The first is to disable interrupts while an interrupt is being processed. A disabled interrupt simply means that the processor ignores any new interrupt request signal. If an interrupt occurs during this time, it generally remains pending and will be checked by the processor after the processor has re-enabled interrupts. A second approach is to define priorities for interrupts and to allow an interrupt of higher priority to cause a lower-priority interrupt handler to be interrupted.
Simple sequential approach to multiple interrupts
Using priorities to “nest” interrupt processing
As an example of this second approach, consider a system with three I/O devices: a printer (priority 2), a disk (priority 4), and a communications line (priority 5). This figure illustrates a possible sequence. A user program begins at t = 0. At t = 10, a printer interrupt occurs; user information is placed on the control stack and execution continues at the printer interrupt service routine (ISR). While this routine is still executing, at t = 15 a communications interrupt occurs. Because the communications line has higher priority than the printer, the interrupt request is honored. The printer ISR is interrupted, its state is pushed onto the stack, and execution continues at the communications ISR. While this routine is executing, a disk interrupt occurs (t = 20). Because this interrupt is of lower priority, it is simply held, and the communications ISR runs to completion. When the communications ISR is complete (t = 25), the previous processor state is restored, which is the execution of the printer ISR. However, before even a single instruction in that routine can be executed, the processor honors the higher-priority disk interrupt and transfers control to the disk ISR. Only when that routine is complete (t = 35) is the printer ISR resumed. When that routine completes (t = 40), control finally returns to the user program.
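The priority rule in this example can be sketched as a toy simulation. This is a deliberately simplified model (the `simulate` function, its all-at-once arrival of requests, and the instantaneous ISRs are invented for illustration; real ISRs take time to run, as the t-values above show): a new request preempts the current ISR only if its priority is strictly higher, and otherwise is held.

```python
# Toy sketch of priority-nested interrupt handling. Higher number =
# higher priority, matching the example (printer 2, disk 4, comms 5).

def simulate(requests):
    """requests arrive in order; each is (priority, name). Returns the
    order in which ISRs complete."""
    stack = []        # preempted ISRs, like the control stack
    held = []         # pending lower-priority requests
    completed = []
    for priority, name in requests:
        if stack and priority <= stack[-1][0]:
            held.append((priority, name))   # held until current ISR done
        else:
            stack.append((priority, name))  # preempt: push current state
    # drain: finish the active ISR, then honor held requests by priority
    while stack or held:
        if held and (not stack or max(held)[0] > stack[-1][0]):
            held.sort()
            completed.append(held.pop()[1]) # highest-priority held request
        else:
            completed.append(stack.pop()[1])
    return completed

# printer (2) runs, comms (5) preempts it, disk (4) arrives and is held
print(simulate([(2, "printer"), (5, "comms"), (4, "disk")]))
# comms finishes first, then the held disk interrupt, printer resumes last
```

The output order matches the timeline in the figure: the communications ISR completes first, the held disk interrupt is honored before the printer ISR resumes, and the printer ISR finishes last.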
Even with the use of interrupts, a processor may not be used very efficiently.
There is a tradeoff among the three key characteristics of memory: namely, capacity, access time, and cost. A variety of technologies are used to implement memory systems, and across this spectrum of technologies, the following relationships hold:
• Faster access time, greater cost per bit
• Greater capacity, smaller cost per bit
• Greater capacity, slower access speed
A typical hierarchy is illustrated in this figure. As one goes down the hierarchy, the following occur:
a. Decreasing cost per bit
b. Increasing capacity
c. Increasing access time
d. Decreasing frequency of access to the memory by the processor
Thus, smaller, more expensive, faster memories are supplemented by larger, cheaper, slower memories. The key to the success of this organization is the decreasing frequency of access at the lower levels.
External, nonvolatile memory is also referred to as secondary memory or auxiliary memory. These are used to store program and data files and are usually visible to the programmer only in terms of files and records, as opposed to individual bytes or words.
Although cache memory is invisible to the OS, it interacts with other memory management hardware. On all instruction cycles, the processor accesses memory at least once, to fetch the instruction, and often one or more additional times, to fetch operands and/or store results. The rate at which the processor can execute instructions is clearly limited by the memory cycle time, i.e., the time it takes to read one word from or write one word to memory. This limitation has been a significant problem because of the persistent mismatch between processor and main memory speeds: over the years, processor speed has consistently increased more rapidly than memory access speed. We are faced with a tradeoff among speed, cost, and size. Ideally, main memory should be built with the same technology as that of the processor registers, giving memory cycle times comparable to processor cycle times. This has always been too expensive a strategy. The solution is to exploit the principle of locality by providing a small, fast memory between the processor and main memory, namely the cache.
Cache memory is intended to provide memory access time approaching that of the fastest memories available and, at the same time, to support a large memory size at the price of the less expensive types of semiconductor memory. There is a relatively large and slow main memory together with a smaller, faster cache memory. The cache contains a copy of a portion of main memory.
When the processor attempts to read a byte or word of memory, a check is made to determine if the byte or word is in the cache. If so, the byte or word is delivered to the processor. If not, a block of main memory, consisting of some fixed number of bytes, is read into the cache and then the byte or word is delivered to the processor.Because of the phenomenon of locality of reference, when a block of data is fetched into the cache to satisfy a single memory reference, it is likely that many of the near-future memory references will be to other bytes in the block.
Figure 1.17 depicts the structure of a cache/main memory system. Main memory consists of up to 2^n addressable words, with each word having a unique n-bit address. For mapping purposes, this memory is considered to consist of a number of fixed-length blocks of K words each; i.e., there are M = 2^n/K blocks. Cache consists of C slots (also referred to as lines) of K words each, and the number of slots is considerably less than the number of main memory blocks (C << M). Some subset of the blocks of main memory resides in the slots of the cache. If a word in a block of memory that is not in the cache is read, that block is transferred to one of the slots of the cache. Because there are more blocks than slots, an individual slot cannot be uniquely and permanently dedicated to a particular block. Therefore, each slot includes a tag that identifies which particular block is currently being stored. The tag is usually some number of higher-order bits of the address and refers to all addresses that begin with that sequence of bits.
Figure 1.18 illustrates the read operation.The processor generates the address, RA, of a word to be read. If the word is contained in the cache, it is delivered to the processor. Otherwise, the block containing that word is loaded into the cache and the word is delivered to the processor.
We will see that similar design issues must be addressed in dealing with virtual memory and disk cache design. They fall into the following categories:• Cache size• Block size• Mapping function• Replacement algorithm• Write policy
We have already dealt with the issue of cache size. It turns out that reasonably small caches can have a significant impact on performance. Another size issue is that of block size: the unit of data exchanged between cache and main memory. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality: the high probability that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched data becomes less than the probability of reusing the data that have to be moved out of the cache to make room for the new block.
When a new block of data is read into the cache, the mapping function determines which cache location the block will occupy.Two constraints affect the design of the mapping function. First, when one block is read in, another may have to be replaced. We would like to do this in such a way as to minimize the probability that we will replace a block that will be needed in the near future. The more flexible the mapping function, the more scope we have to design a replacement algorithm to maximize the hit ratio. Second, the more flexible the mapping function, the more complex is the circuitry required to search the cache to determine if a given block is in the cache.
The replacement algorithm chooses, within the constraints of the mapping function, which block to replace when a new block is to be loaded into the cache and the cache already has all slots filled with other blocks.We would like to replace the block that is least likely to be needed again in the near future.Although it is impossible to identify such a block, a reasonably effective strategy is to replace the block that has been in the cache longest with no reference to it.This policy is referred to as the least-recently-used (LRU) algorithm. Hardware mechanisms are needed to identify the least-recently-used block.
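The LRU policy can be sketched with an ordered dictionary standing in for the hardware tracking mechanism. The class name is invented, and the sketch assumes a fully associative cache for simplicity (so the replacement choice is unconstrained by the mapping function):

```python
from collections import OrderedDict

# Sketch of LRU replacement: the slot that has gone longest without a
# reference is the one evicted when the cache is full.

class LRUCache:
    def __init__(self, slots):
        self.slots = slots
        self.blocks = OrderedDict()          # block number -> data

    def reference(self, block, data=None):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # referenced: most recent now
            return self.blocks[block]
        if len(self.blocks) >= self.slots:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = data
        return data

cache = LRUCache(slots=2)
cache.reference(0, "a")
cache.reference(1, "b")
cache.reference(0)          # block 0 referenced again
cache.reference(2, "c")     # cache full: block 1 is evicted, not block 0
print(list(cache.blocks))   # [0, 2]
```

Because block 0 was referenced more recently than block 1, it is block 1 that is replaced: the block longest in the cache with no reference to it.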
If the contents of a block in the cache are altered, then it is necessary to write it back to main memory before replacing it.The write policy dictates when the memory write operation takes place. At one extreme, the writing can occur every time that the block is updated. At the other extreme, the writing occurs only when the block is replaced. The latter policy minimizes memory write operations but leaves main memory in an obsolete state. This can interfere with multiple-processor operation and with direct memory access by I/O hardware modules.
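The second extreme, writing only on replacement, is commonly implemented with a dirty bit per slot. The sketch below shows the idea; the function names and the dictionary-based cache model are assumptions for illustration:

```python
# Sketch of the write-on-replacement policy: a dirty bit per slot
# defers the memory write until the block is replaced.

main_memory = {}
cache = {}              # block -> {"data": ..., "dirty": bool}

def write(block, data):
    cache[block] = {"data": data, "dirty": True}   # update cache only

def replace(block):
    victim = cache.pop(block)
    if victim["dirty"]:                            # write back only now
        main_memory[block] = victim["data"]

write(7, "new value")
print(main_memory.get(7))   # None: main memory is temporarily obsolete
replace(7)
print(main_memory[7])       # written back when the block is replaced
```

The interval during which `main_memory` holds the stale value is exactly the window that can interfere with multiple-processor operation and with DMA by I/O modules, since those agents read main memory directly.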
The I/O module performs the requested action and then sets the appropriate bits in the I/O status register but takes no further action to alert the processor. In particular, it does not interrupt the processor. Thus, after the I/O instruction is invoked, the processor must take some active role in determining when the I/O instruction is completed. So, the processor periodically checks the status of the I/O module until it finds that the operation is complete.
With this technique, the processor is responsible for extracting data from main memory for output and storing data in main memory for input. The instruction set includes I/O instructions in the following categories:
• Control: Used to activate an external device and tell it what to do. For example, a magnetic-tape unit may be instructed to rewind or to move forward one record.
• Status: Used to test various status conditions associated with an I/O module and its peripherals.
• Transfer: Used to read and/or write data between processor registers and external devices.
Data are read in one word (e.g., 16 bits) at a time. For each word that is read in, the processor must remain in a status-checking loop until it determines that the word is available in the I/O module's data register. This flowchart highlights the main disadvantage of this technique: it is a time-consuming process that keeps the processor busy needlessly.
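The status-checking loop can be sketched as follows. The `Device` class is an invented stand-in for an I/O module (its status register is modeled as a `ready()` call that becomes true every few checks); the poll counter makes the wasted work visible:

```python
# Sketch of the programmed-I/O status-checking loop. The device model
# is invented for illustration: one word becomes ready every `delay`
# status checks.

class Device:
    def __init__(self, words, delay):
        self.words = iter(words)
        self.delay, self.ticks = delay, 0

    def ready(self):                 # status register: data available?
        self.ticks += 1
        return self.ticks % self.delay == 0

    def data_register(self):
        return next(self.words)

def read_block(device, count):
    block, polls = [], 0
    for _ in range(count):
        while not device.ready():    # busy-wait: the processor does no
            polls += 1               # useful work during this loop
        block.append(device.data_register())
    return block, polls

block, polls = read_block(Device([1, 2, 3], delay=5), 3)
print(block, polls)   # [1, 2, 3] read correctly, but 12 wasted polls
```

Every word costs several fruitless status checks; with a real device that is thousands of times slower than the processor, the poll count dwarfs the useful work, which is the inefficiency interrupt-driven I/O removes.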
With interrupt-driven I/O, the processor issues an I/O command to a module and then goes on to do some other useful work. The I/O module will then interrupt the processor to request service when it is ready to exchange data with the processor. The processor then executes the data transfer, as before, and then resumes its former processing.
This figure shows the use of interrupt-driven I/O for reading in a block of data. Interrupt-driven I/O is more efficient than programmed I/O because it eliminates needless waiting. However, interrupt-driven I/O still consumes a lot of processor time, because every word of data that goes from memory to I/O module or from I/O module to memory must pass through the processor.
When large volumes of data are to be moved, a more efficient technique is required: direct memory access (DMA). The DMA function can be performed by a separate module on the system bus or it can be incorporated into an I/O module. When the processor wishes to read or write a block of data, it issues a command to the DMA module, by sending to the DMA module the following information:
• Whether a read or write is requested
• The address of the I/O device involved
• The starting location in memory to read data from or write data to
• The number of words to be read or written
The processor can continue with other work. It has delegated this I/O operation to the DMA module, and that module will take care of it. The DMA module transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor. Thus, the processor is involved only at the beginning and end of the transfer.
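A DMA transfer can be sketched by packaging the four command items above and running the transfer outside the processor's instruction stream. All names here are illustrative; the completion callback stands in for the interrupt signal at the end:

```python
# Sketch of a DMA block transfer: the processor only fills in the
# command, and is interrupted once when the whole block is done.

def dma_transfer(command, memory, device, on_complete):
    """command carries the four items the processor sends: operation,
    device address, starting memory location, and word count."""
    addr = command["start_address"]
    for _ in range(command["word_count"]):
        if command["operation"] == "read":
            memory[addr] = device.pop(0)   # device -> memory, no CPU
        else:
            device.append(memory[addr])    # memory -> device, no CPU
        addr += 1
    on_complete()                          # single interrupt at the end

memory = [0] * 8
device = [10, 11, 12]                      # words waiting at the device
dma_transfer({"operation": "read", "device": 1,
              "start_address": 4, "word_count": 3},
             memory, device, lambda: print("interrupt: transfer done"))
print(memory)   # [0, 0, 0, 0, 10, 11, 12, 0]
```

Contrast this with interrupt-driven I/O: instead of one interrupt per word, there is one interrupt per block, so the processor's involvement no longer grows with the amount of data moved.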