This document summarizes I/O management and disk scheduling techniques in operating systems. It covers I/O devices, how the I/O function is organized, operating system design issues regarding I/O, I/O buffering, and disk scheduling policies such as FIFO, SSTF, SCAN, and C-SCAN.
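As a rough illustration of how these scheduling policies differ, the sketch below computes total head movement for FIFO, SSTF, SCAN, and C-SCAN over a hypothetical request queue and head position; the queue, head cylinder, and disk size are assumptions, not values from the document.

```python
# Total head movement for four disk scheduling policies (illustrative values only).

def fifo(requests, head):
    total, pos = 0, head
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf(requests, head):
    pending, total, pos = list(requests), 0, head
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

def scan(requests, head, max_cyl=199):
    # Simplified SCAN: sweep toward the high end, then reverse for the rest.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, pos = 0, head
    for r in up + ([max_cyl] if down else []) + down:
        total += abs(r - pos)
        pos = r
    return total

def c_scan(requests, head, max_cyl=199):
    # Simplified C-SCAN: service upward only, then jump back to cylinder 0.
    up = sorted(r for r in requests if r >= head)
    low = sorted(r for r in requests if r < head)
    total, pos = 0, head
    for r in up:
        total += abs(r - pos)
        pos = r
    if low:
        total += abs(max_cyl - pos) + max_cyl   # sweep to the end, return to 0
        pos = 0
        for r in low:
            total += abs(r - pos)
            pos = r
    return total

queue, head = [98, 183, 37, 122, 14, 124, 65, 67], 53
for name, policy in [("FIFO", fifo), ("SSTF", sstf), ("SCAN", scan), ("C-SCAN", c_scan)]:
    print(f"{name:6s} total head movement: {policy(queue, head)}")
```

On this particular queue, SSTF and the scanning policies move the head far less than FIFO, which is the usual argument for preferring them.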
Deadlock is an important topic in operating systems. This presentation relates deadlock to real-life scenarios and explores solutions using two main algorithms: the Safety Algorithm and the Banker's Algorithm.
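As a concrete illustration of the safety check at the heart of the Banker's Algorithm, here is a minimal sketch; the allocation, maximum-claim, and available vectors are invented for the example and are not taken from the slides.

```python
# Safety check at the core of the Banker's Algorithm (illustrative matrices).

def is_safe(available, allocation, maximum):
    n = len(allocation)
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Assume process i runs to completion and releases what it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progress = True
    return all(finished), sequence

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
safe, order = is_safe(available, allocation, maximum)
print("safe state:", safe, "safe sequence:", order)
```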
UNIT II PROCESS MANAGEMENT
Processes - Process Concept, Process Scheduling, Operations on Processes, Interprocess Communication; Threads - Overview, Multicore Programming, Multithreading Models; Windows 7 - Thread and SMP Management. Process Synchronization - Critical Section Problem, Mutex Locks, Semaphores, Monitors; CPU Scheduling and Deadlocks.
The document discusses the motivation and design of file system implementations. It describes how file systems map the logical structure to physical storage, using various on-disk and in-memory data structures. These include boot blocks, superblocks, directories, inodes/file control blocks, buffer caches, open file tables, and more. Common operations like creating, opening, reading and closing files are also outlined.
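To make the relationship between these structures concrete, a toy model might look like the following; every name and field here is illustrative rather than taken from the document.

```python
# Toy model of file-system structures: inodes, an open-file table, and basic operations.
from dataclasses import dataclass

@dataclass
class Inode:                      # stands in for an on-disk file control block
    number: int
    size: int
    blocks: list                  # data block numbers on disk

@dataclass
class OpenFile:                   # entry in the system-wide open-file table
    inode: Inode
    offset: int = 0

class ToyFS:
    def __init__(self):
        self.inodes = {}          # name -> Inode (a very simplified directory + inode list)
        self.open_table = {}      # fd -> OpenFile
        self.next_fd = 3

    def create(self, name, blocks):
        self.inodes[name] = Inode(number=len(self.inodes), size=len(blocks) * 512, blocks=blocks)

    def open(self, name):
        fd = self.next_fd
        self.open_table[fd] = OpenFile(self.inodes[name])
        self.next_fd += 1
        return fd

    def read(self, fd, nbytes):
        entry = self.open_table[fd]
        entry.offset += nbytes    # a real kernel would consult the buffer cache here
        return f"read {nbytes} bytes from inode {entry.inode.number}"

    def close(self, fd):
        del self.open_table[fd]

fs = ToyFS()
fs.create("report.txt", blocks=[10, 11, 12])
fd = fs.open("report.txt")
print(fs.read(fd, 100))
fs.close(fd)
```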
The document provides information about shells in Linux operating systems. It defines what a kernel and shell are, explains why shells are used, describes different types of shells, and provides examples of shell scripting. The key points are:
- The kernel manages system resources and acts as an intermediary between hardware and software. A shell is a program that takes commands and runs them, providing an interface between the user and operating system.
- Shells are useful for automating tasks, combining commands to create new ones, and adding functionality to the operating system. Common shells include Bash, Bourne, C, Korn, and Tcsh.
- Shell scripts allow storing commands in files to automate tasks.
- Shell scripting allows users to automate repetitive tasks by writing scripts of shell commands that can be executed automatically. The shell acts as an interface between the user and the operating system kernel, accepting commands and passing them to the kernel for execution. Common shells used for scripting include Bash, C Shell, and Korn Shell. Shell scripts use shell commands, control structures, and functions to perform automated tasks like backups and system monitoring.
System calls, single user, multiuser OS ... - myrajendra
The document discusses operating system basics including system calls, operating system structure, and differences between single-user and multi-user operating systems. System calls allow programs to request services from the operating system kernel. A single-user operating system has a monolithic structure with user applications separated from the kernel, while a multi-user operating system uses a layered structure with modules organized in hierarchical layers.
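As a small example of a program requesting kernel services, the snippet below uses Python's os module, whose functions are thin wrappers over the underlying open/read/close system calls on a POSIX system; the file path is only an example and assumes a typical Linux machine.

```python
# A user program requesting kernel services through system-call wrappers.
import os

fd = os.open("/etc/hostname", os.O_RDONLY)   # open(2): ask the kernel for a file descriptor
data = os.read(fd, 256)                      # read(2): kernel copies file data to the process
os.close(fd)                                 # close(2): release the descriptor
print(data.decode().strip())
```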
In this presentation, I explain threads, the types of threads, their advantages and disadvantages, the difference between processes and threads, and multithreading and its types.
"Like the ppt if you liked the ppt"
LinkedIn - https://in.linkedin.com/in/prakharmaurya
This document discusses different memory management techniques used in operating systems. It begins by describing the basic components and functions of memory. It then explains various memory management algorithms like overlays, swapping, paging and segmentation. Overlays divide a program into instruction sets that are loaded and unloaded as needed. Swapping loads entire processes into memory for execution then writes them back to disk. Paging and segmentation are used to map logical addresses to physical addresses through page tables and segment tables respectively. The document compares advantages and limitations of these approaches.
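A minimal sketch of the paging idea, translating a logical address to a physical one through a page table, might look like this; the page size and frame numbers are assumptions chosen for illustration.

```python
# Logical-to-physical address translation with a single-level page table.
PAGE_SIZE = 4096                        # bytes per page/frame

page_table = {0: 5, 1: 9, 2: 1}         # page number -> frame number (illustrative)

def translate(logical_address):
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    if page not in page_table:
        raise MemoryError(f"page fault: page {page} is not resident")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1A2C)))           # page 1, offset 0xA2C -> frame 9 -> 0x9A2C
```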
Operating Systems - "Chapter 4: Multithreaded Programming"Ra'Fat Al-Msie'deen
This chapter discusses multithreaded programming and threads. It defines a thread as the basic unit of CPU utilization that allows multiple tasks to run concurrently within a process by sharing the process's resources. Different threading models like many-to-one, one-to-one, and many-to-many are described based on how user threads map to kernel threads. Common thread libraries for POSIX, Windows, and Java are also covered. The chapter examines issues in multithreaded programming and provides examples of how threads are implemented in Windows and Linux.
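A short example of threads sharing a process's resources is sketched below using Python's threading module, which on most platforms maps each thread to a kernel thread in roughly the one-to-one model; the worker function and counter are illustrative.

```python
# Several threads sharing one process's data, protected by a lock.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:                  # shared state must be synchronized
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final counter:", counter)    # 40000, since updates are serialized by the lock
```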
The document discusses key components and concepts related to operating system structures. It describes common system components like process management, memory management, file management, I/O management, and more. It then provides more details on specific topics like the role of processes, main memory management, file systems, I/O systems, secondary storage, networking, protection systems, and command interpreters in operating systems. Finally, it discusses operating system services, system calls, and how parameters are passed between programs and the operating system.
This document discusses deadlocks, including the four conditions required for a deadlock, methods to avoid deadlocks like using safe states and Banker's Algorithm, ways to detect deadlocks using wait-for graphs and detection algorithms, and approaches to recover from deadlocks such as terminating processes or preempting resources.
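Deadlock detection with a wait-for graph amounts to cycle detection; a compact sketch under that assumption follows, with a hypothetical example graph.

```python
# Deadlock detection: look for a cycle in a wait-for graph (process -> processes it waits on).

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:          # back edge: a cycle, hence deadlock
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[node] == WHITE and visit(node) for node in graph)

wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"], "P4": []}   # hypothetical graph
print("deadlock detected:", has_cycle(wait_for))                   # True: P1 -> P2 -> P3 -> P1
```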
This document discusses disk scheduling and disk management. It covers disk scheduling concepts such as seek time, rotational latency, transfer time, and disk access time; the purpose of disk scheduling is to service I/O requests arriving for the disk efficiently. It also covers the Windows Disk Management utility, which handles partitioning, formatting, and assigning drive letters, is available in Windows 10, 8, 7, Vista, XP, and 2000, and can show disk capacity and free space.
An operating system acts as an interface between the user and computer hardware, controlling program execution and performing basic tasks like file management, memory management, and input/output control. There are four main types of operating systems: monolithic, layered, microkernel, and networked/distributed. A monolithic OS has all components in the kernel, while layered and microkernel OSes separate components into different privilege levels or layers for modularity. Networked/distributed OSes enable accessing resources across multiple connected computers.
The document discusses the Unix operating system. It describes Unix systems as using plain text storage, a hierarchical file system, and treating devices as files. It also discusses the Unix philosophy of using small programs strung together instead of large monolithic ones. The document then summarizes Unix kernel subsystems like process management and memory management. It provides an overview of shell scripts, their advantages, and how to create and use variables within scripts.
RAID (Redundant Array of Independent Disks) distributes data across multiple disks to improve performance and provide redundancy. The common characteristics of RAID levels are that multiple physical disks act as a single logical disk, data is distributed across disks, and redundant parity information is used to recover data if a disk fails. RAID level 0 stripes data without parity for increased speed but no fault tolerance, while RAID level 1 uses mirroring to provide redundancy by writing all data to two disks.
Deadlocks - an unconditional waiting situation in an operating system. This concept must be understood well before going deeper into operating systems. This presentation explains how deadlocks occur and how we can detect, avoid, and prevent them.
This document provides an overview of mass storage structures and operating system services for mass storage. It discusses disk structure, disk scheduling algorithms, swap space management, RAID structures, and stable storage implementation. The document also describes the physical structure of secondary and tertiary storage devices and their performance characteristics.
RAID (Redundant Array of Independent Disks) combines multiple disk drive components into a single logical unit for the purposes of data redundancy, performance improvement, or both. There are several common RAID levels. RAID levels 0 and 1 are for performance and mirroring respectively, while RAID levels 3, 4, 5, and 6 provide redundancy through parity-based schemes; RAID 6, with its dual parity, can recover data even if two drives fail simultaneously. The document provides details on the characteristics, advantages, and disadvantages of various RAID levels.
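The parity idea behind the single-parity levels (RAID 3/4/5) can be shown with a tiny XOR sketch; the data blocks below are invented, and real arrays operate on full stripes in the controller or driver.

```python
# XOR parity as used by single-parity RAID: any one lost block can be rebuilt.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"disk0blk", b"disk1blk", b"disk2blk"]     # illustrative 8-byte blocks
parity = xor_blocks(data)                          # written to the parity disk

# Simulate losing disk 1 and rebuilding its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])                          # True
```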
The document discusses the key components and structure of operating systems. It covers:
1) The main components of an OS include process management, memory management, I/O management, secondary storage management, file management, protection systems, accounting systems, and more.
2) Traditionally, OSes were structured as monolithic kernels containing all components, but this poses reliability and maintenance issues. Alternative structures include layering and microkernels.
3) Layering implements an OS as a set of layers, where each layer provides an abstract "machine" to the layer above. This improves modularity but can hurt performance.
4) Microkernels minimize kernel code and implement OS services as user-level processes for improved reliability and easier extension.
This document provides instructions for configuring various settings on a Panasonic PBX phone system like the TDA100/200. It outlines steps to check card statuses, change port connections, set the date and time, assign operators, view feature codes, set call durations, record greetings, set ringing times, assign trunk groups, change passwords, create speed dials, and more. The codes listed at the end are for configuring settings on Panasonic KX-TA models like call blocking and language selection.
The document provides an overview of the UNIX operating system through a seminar presentation. It discusses the history of UNIX from the 1970s to the 2000s, defines what UNIX is, describes common UNIX commands and the file system structure, and covers topics like memory management, interrupts, reasons for using UNIX, and some applications of UNIX like storage consulting and middleware/database administration. The presentation is intended to educate about the key aspects and functionality of the UNIX operating system.
The document discusses Linux file systems. It describes that Linux uses a hierarchical tree structure with everything treated as a file. It explains the basic components of a file system including the boot block, super block, inode list, and block list. It then covers different types of file systems for Linux like ext2, ext3, ext4, FAT32, NTFS, and network file systems like NFS and SMB. It also discusses absolute vs relative paths and mounting and unmounting filesystems using the mount and umount commands.
This document discusses deadlock prevention by invalidating one of the four conditions necessary for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait. It describes strategies to prevent each condition, such as not requiring mutual exclusion for sharable resources, requesting all resources before execution or only when a process has none to prevent hold and wait, allowing preemption of held resources to prevent no preemption, and imposing a total ordering of resource requests to prevent circular wait. These techniques aim to ensure deadlock is excluded from the beginning.
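One of these strategies, imposing a total ordering of resource requests to rule out circular wait, is easy to sketch; the locks and the ordering below are illustrative.

```python
# Preventing circular wait by always acquiring locks in one fixed global order.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
LOCK_ORDER = {id(lock_a): 1, id(lock_b): 2}        # total ordering of resources

def acquire_in_order(*locks):
    ordered = sorted(locks, key=lambda l: LOCK_ORDER[id(l)])
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    for lock in reversed(locks):
        lock.release()

def task(name, *locks):
    held = acquire_in_order(*locks)                # request order is normalized
    print(f"{name} holds both locks")
    release_all(held)

t1 = threading.Thread(target=task, args=("T1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("T2", lock_b, lock_a))   # opposite request order
t1.start(); t2.start()
t1.join(); t2.join()
```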
The document discusses various file allocation methods and disk scheduling algorithms. There are three main file allocation methods - contiguous allocation, linked allocation, and indexed allocation. Contiguous allocation suffers from fragmentation but allows fast sequential access. Linked allocation does not have external fragmentation but is slower. Indexed allocation supports direct access but has higher overhead. For disk scheduling, algorithms like FCFS, SSTF, SCAN, CSCAN, and LOOK are described. SSTF provides lowest seek time while SCAN and CSCAN have higher throughput but longer wait times.
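As a small illustration of why indexed allocation supports direct access while linked allocation does not, the sketch below resolves a byte offset to a disk block under each scheme; block numbers and sizes are invented.

```python
# Resolving a byte offset to a disk block under linked vs. indexed allocation.
BLOCK_SIZE = 512

# Linked allocation: each block records the number of the next block in the file.
next_block = {7: 12, 12: 3, 3: 19, 19: None}       # illustrative chain starting at block 7

def linked_lookup(start_block, offset):
    block = start_block
    for _ in range(offset // BLOCK_SIZE):          # must walk the chain: O(n) disk reads
        block = next_block[block]
    return block

# Indexed allocation: an index block lists every data block, enabling direct access.
index_block = [7, 12, 3, 19]

def indexed_lookup(index, offset):
    return index[offset // BLOCK_SIZE]             # O(1) once the index block is in memory

print(linked_lookup(7, 1500), indexed_lookup(index_block, 1500))   # both resolve to block 3
```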
This document discusses real-time operating systems (RTOS). It defines an RTOS as an operating system that responds to inputs within a specified, bounded time. It compares RTOS to general-purpose operating systems and discusses the types, characteristics, functions, and applications of RTOS. Examples of RTOS like VxWorks are provided. The key functions of an RTOS include task management, scheduling, resource allocation, and interrupt handling. RTOS are widely used in applications that require deterministic responses like avionics, medical devices, industrial automation, and more.
Static libraries are linked at compile time and their functions are built into the executable file, increasing its size. Dynamic libraries are linked at execution time by the operating system, and a single copy loaded into memory can be shared by multiple programs, making executables smaller; the trade-off is that a program can break if the library is removed or updated incompatibly.
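The run-time nature of dynamic linking can be seen from Python's ctypes module, which asks the operating system's loader to map a shared library into the running process; the example assumes a typical Linux system where the C math library is available.

```python
# Loading a shared (dynamic) library at run time rather than at link time.
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m")   # locate the C math library, e.g. libm.so.6
libm = ctypes.CDLL(libm_path)               # the OS loader maps it into this process now

libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))                        # 1.0, computed inside the shared library
```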
Unix was created in 1969 by Ken Thompson at Bell Labs to allow multiple users to access a computer simultaneously. It features a multi-user design, hierarchical file system, and shell interface. The kernel handles memory management, process scheduling, and device interactions to enable these features. Common Unix commands like cat, ls, cp and rm allow users to work with files and directories from the shell. File permissions and ownership are managed through inodes to control access across users.
This document summarizes key concepts related to I/O structure in operating systems, including disk structure, disk scheduling, disk management, and swap-space management. It discusses how disks are logically addressed and mapped to physical sectors. It also describes different disk scheduling algorithms like SSTF, SCAN, C-SCAN and factors that influence algorithm selection. The document outlines the processes involved in low-level formatting, logical formatting, and handling bad blocks. It concludes with an overview of swap-space management in various operating systems.
Android 6.0.1 "Marshmallow" Android 7.0-7.1 "Nougat" is the seventh major version of the Android operating system. Learning about it is essential to stay ahead of other developers And Google’s still finding ways to enhance and improve the OS.
Operating Systems - "Chapter 4: Multithreaded Programming"Ra'Fat Al-Msie'deen
This chapter discusses multithreaded programming and threads. It defines a thread as the basic unit of CPU utilization that allows multiple tasks to run concurrently within a process by sharing the process's resources. Different threading models like many-to-one, one-to-one, and many-to-many are described based on how user threads map to kernel threads. Common thread libraries for POSIX, Windows, and Java are also covered. The chapter examines issues in multithreaded programming and provides examples of how threads are implemented in Windows and Linux.
The document discusses key components and concepts related to operating system structures. It describes common system components like process management, memory management, file management, I/O management, and more. It then provides more details on specific topics like the role of processes, main memory management, file systems, I/O systems, secondary storage, networking, protection systems, and command interpreters in operating systems. Finally, it discusses operating system services, system calls, and how parameters are passed between programs and the operating system.
This document discusses deadlocks, including the four conditions required for a deadlock, methods to avoid deadlocks like using safe states and Banker's Algorithm, ways to detect deadlocks using wait-for graphs and detection algorithms, and approaches to recover from deadlocks such as terminating processes or preempting resources.
This document discusses disk scheduling and disk management. It provides information on disk scheduling concepts like seek time, rotational latency, transfer time, and disk access time. The purpose of disk scheduling is to efficiently schedule I/O requests arriving for the disk. Disk management allows management of disk drives in Windows, including partitioning, formatting, assigning drive letters. It is available in Windows 10, 8, 7, Vista, XP and 2000 and allows checking capacity and free space on disks.
An operating system acts as an interface between the user and computer hardware, controlling program execution and performing basic tasks like file management, memory management, and input/output control. There are four main types of operating systems: monolithic, layered, microkernel, and networked/distributed. A monolithic OS has all components in the kernel, while layered and microkernel OSes separate components into different privilege levels or layers for modularity. Networked/distributed OSes enable accessing resources across multiple connected computers.
The document discusses the Unix operating system. It describes Unix systems as using plain text storage, a hierarchical file system, and treating devices as files. It also discusses the Unix philosophy of using small, strung together programs instead of large monolithic programs. The document then summarizes Unix kernel subsystems like process management and memory management. It provides an overview of shell scripts, their advantages, and how to create and use variables within scripts.
RAID (Redundant Array of Independent Disks) distributes data across multiple disks to improve performance and provide redundancy. The common characteristics of RAID levels are that multiple physical disks act as a single logical disk, data is distributed across disks, and redundant parity information is used to recover data if a disk fails. RAID level 0 stripes data without parity for increased speed but no fault tolerance, while RAID level 1 uses mirroring to provide redundancy by writing all data to two disks.
Deadlocks-An Unconditional Waiting Situation in Operating System. We must make sure of This concept well before understanding deep in to Operating System. This PPT will understands you to get how the deadlocks Occur and how can we Detect, avoid and Prevent the deadlocks in Operating Systems.
This document provides an overview of mass storage structures and operating system services for mass storage. It discusses disk structure, disk scheduling algorithms, swap space management, RAID structures, and stable storage implementation. The document also describes the physical structure of secondary and tertiary storage devices and their performance characteristics.
RAID (Redundant Array of Independent Disks) combines multiple disk drive components into a single logical unit for the purposes of data redundancy, performance improvement, or both. There are several common RAID levels. RAID levels 0 and 1 are for performance and mirroring respectively, while RAID levels 3, 4, 5, and 6 provide redundancy through parity-based schemes, with levels 5 and 6 capable of recovering data if two drives fail simultaneously. The document provides details on the characteristics, advantages, and disadvantages of various RAID levels.
The document discusses the key components and structure of operating systems. It covers:
1) The main components of an OS include process management, memory management, I/O management, secondary storage management, file management, protection systems, accounting systems, and more.
2) Traditionally, OSes were structured as monolithic kernels containing all components, but this poses reliability and maintenance issues. Alternative structures include layering and microkernels.
3) Layering implements an OS as a set of layers, where each layer provides an abstract "machine" to the layer above. This improves modularity but can hurt performance.
4) Microkernels minimize kernel code and implement OS services as user-level processes for
This document provides instructions for configuring various settings on a Panasonic PBX phone system like the TDA100/200. It outlines steps to check card statuses, change port connections, set the date and time, assign operators, view feature codes, set call durations, record greetings, set ringing times, assign trunk groups, change passwords, create speed dials, and more. The codes listed at the end are for configuring settings on Panasonic KX-TA models like call blocking and language selection.
The document provides an overview of the UNIX operating system through a seminar presentation. It discusses the history of UNIX from the 1970s to the 2000s, defines what UNIX is, describes common UNIX commands and the file system structure, and covers topics like memory management, interrupts, reasons for using UNIX, and some applications of UNIX like storage consulting and middleware/database administration. The presentation is intended to educate about the key aspects and functionality of the UNIX operating system.
The document discusses Linux file systems. It describes that Linux uses a hierarchical tree structure with everything treated as a file. It explains the basic components of a file system including the boot block, super block, inode list, and block list. It then covers different types of file systems for Linux like ext2, ext3, ext4, FAT32, NTFS, and network file systems like NFS and SMB. It also discusses absolute vs relative paths and mounting and unmounting filesystems using the mount and umount commands.
This document discusses deadlock prevention by invalidating one of the four conditions necessary for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait. It describes strategies to prevent each condition, such as not requiring mutual exclusion for sharable resources, requesting all resources before execution or only when a process has none to prevent hold and wait, allowing preemption of held resources to prevent no preemption, and imposing a total ordering of resource requests to prevent circular wait. These techniques aim to ensure deadlock is excluded from the beginning.
The document discusses various file allocation methods and disk scheduling algorithms. There are three main file allocation methods - contiguous allocation, linked allocation, and indexed allocation. Contiguous allocation suffers from fragmentation but allows fast sequential access. Linked allocation does not have external fragmentation but is slower. Indexed allocation supports direct access but has higher overhead. For disk scheduling, algorithms like FCFS, SSTF, SCAN, CSCAN, and LOOK are described. SSTF provides lowest seek time while SCAN and CSCAN have higher throughput but longer wait times.
This document discusses real-time operating systems (RTOS). It defines RTOS as operating systems that are able to respond to inputs immediately within a specified time delay. It compares RTOS to general operating systems and discusses the types, characteristics, functions, and applications of RTOS. Examples of RTOS like VxWorks are provided. The key functions of an RTOS include task management, scheduling, resource allocation, and interrupt handling. RTOS are widely used in applications that require deterministic responses like avionics, medical devices, industrial automation, and more.
Static libraries are linked during compilation and their functions are built into the executable file, increasing its size, while dynamic libraries are linked during execution by the operating system and loaded only once into memory from multiple programs, making programs smaller and faster to execute but potentially less compatible if the library is removed or updated.
Unix was created in 1969 by Ken Thompson at Bell Labs to allow multiple users to access a computer simultaneously. It features a multi-user design, hierarchical file system, and shell interface. The kernel handles memory management, process scheduling, and device interactions to enable these features. Common Unix commands like cat, ls, cp and rm allow users to work with files and directories from the shell. File permissions and ownership are managed through inodes to control access across users.
This document summarizes key concepts related to I/O structure in operating systems, including disk structure, disk scheduling, disk management, and swap-space management. It discusses how disks are logically addressed and mapped to physical sectors. It also describes different disk scheduling algorithms like SSTF, SCAN, C-SCAN and factors that influence algorithm selection. The document outlines the processes involved in low-level formatting, logical formatting, and handling bad blocks. It concludes with an overview of swap-space management in various operating systems.
Android 6.0.1 "Marshmallow" Android 7.0-7.1 "Nougat" is the seventh major version of the Android operating system. Learning about it is essential to stay ahead of other developers And Google’s still finding ways to enhance and improve the OS.
This document discusses I/O management and disk scheduling. It begins by categorizing I/O devices as human readable, machine readable, or for communication. It then covers the evolution of I/O functions from programmed I/O to direct memory access. I/O buffering techniques like single, double, and circular buffers are introduced to deal with device speed and size mismatches. Finally, common disk scheduling policies like FIFO, SSTF, and SCAN are outlined and compared using an example request queue.
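Of the buffering schemes mentioned, the circular buffer is the least obvious; a minimal producer/consumer sketch is shown below, with the capacity and data chosen only for illustration.

```python
# Circular (bounded) buffer smoothing a speed mismatch between device and process.
class CircularBuffer:
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = self.tail = self.count = 0

    def put(self, item):
        if self.count == len(self.slots):
            raise BufferError("buffer full: the producer must wait")
        self.slots[self.tail] = item
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("buffer empty: the consumer must wait")
        item = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return item

buf = CircularBuffer(4)
for block in ["blk0", "blk1", "blk2"]:   # the device fills the buffer ahead of the process
    buf.put(block)
print(buf.get(), buf.get())              # the process drains it at its own pace
```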
Cross-Platform Data Access for Android and iPhone - Peter Friese
Many Apps need to access data online. This talk discusses problems and solutions when accessing remote data with Android and iPhone. After a brief review of a number of frameworks, an approach is outlined to implement data access in a cross-platform manner using a DSL.
This document summarizes new features in Android Marshmallow and Nougat. Marshmallow introduced runtime app permissions, fingerprint sensors, Doze mode for battery saving, Now on Tap contextual search, and Android Pay. Nougat added multi-window mode, improved notifications, and quick app switching. The document also discusses Firebase tools for remote configuration, deep linking, notifications and analytics and recommends focusing on onboarding user experience.
This document gives complete instructions for installing the Android Nougat operating system using VirtualBox, from setting up the virtual machine, creating the virtual hard disk, formatting partitions, and installing the bootloader through to the Android Nougat installation itself. Each step is explained in detail with accompanying screenshots to make it easy for the reader to install Android virtually.
JPA1404 Context-based Access Control Systems for Mobile Devices - chennaijp
Google Android 7.0 Nougat: History, Features and more - Devakumar Kp
Google released the first beta of Android "N" in March 2016 ahead of the Google I/O conference. This was the first time users could suggest the name of the next Android version, and it was later named Android Nougat. Nougat introduced features like multi-window view, Vulkan graphics API support, Doze battery optimizations, and improved notifications. The final version of Android Nougat was released in August 2016.
Context based access control systems for mobile devices - shanofa sanu
Android is an open source, Linux-based operating system for mobile devices like smartphones and tablets. It was developed by Google and later the Open Handset Alliance. Some key versions and features include: Android 1.5 "Cupcake" introduced the on-screen keyboard and Bluetooth support; Android 2.0-2.1 "Eclair" added near field communication and VoIP support; Android "Marshmallow" focused on improving the user experience and introduced new permissions and features like fingerprint recognition; Android 7.0 "Nougat" allowed displaying multiple apps simultaneously and added notification replies.
The document provides an overview of the Android operating system. It describes Android's architecture as having four layers - the application layer, application framework, native libraries and runtime, and the Linux kernel. The application framework provides common services like activity management, resource management, and notifications. Android uses a multi-process model with user and group IDs for security between applications. Features of Android include background location, developer tools, optimization for mobile, component reuse/replacement, and support for media, touch, cameras and more. The document also discusses Android versions and compares Android to other operating systems.
Android was created by Andy Rubin, Rich Miner, Nick Sears and Chris White at Android Inc. in 2003 and was later acquired by Google. It is an open-source, Linux-based operating system used primarily in smartphones and tablets. Major versions include Cupcake, Donut, Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop, Marshmallow, and Nougat. The platform emphasizes customization and integration of Google services.
The document discusses the latest Android and smartphone news, focusing on the release of Android 7.0 Nougat. Some key features of Android 7.0 Nougat include bundled notifications, enhanced UI, split screen multitasking, direct reply from notifications, instant toggles, and expanded emoji library. The update initially released for Nexus devices and is rolling out further. Instant Apps were also introduced, allowing users to instantly use apps from the Play Store without downloading.
Android is a mobile operating system developed by Google, based on the Linux kernel. It is used by many smartphones and tablets. The latest version, Android Nougat 7.0, was released in August 2016. It introduced new features like multi-window mode for using multiple apps at once, improved Doze mode for longer battery life, and call blocking functionality. Android Nougat also focused on optimizing apps to run more efficiently in the background through Project Svelte.
Disk scheduling algorithms are used by the operating system to efficiently service requests to read from and write to disk drives. The key components that disk scheduling aims to optimize are seek time, which is the time to move the disk head to the desired cylinder, and rotational latency, which is the additional wait for the desired sector to rotate under the head. Common disk scheduling algorithms include first-come, first-served (FCFS), shortest seek time first (SSTF), SCAN, C-SCAN, and C-LOOK, with SSTF and LOOK often being reasonable default choices.
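These quantities combine into a rough access-time estimate; the back-of-the-envelope arithmetic below uses assumed drive parameters (7200 RPM, 9 ms average seek, 150 MB/s transfer) rather than figures from the document.

```python
# Rough disk access-time estimate: seek + rotational latency + transfer.
rpm = 7200
avg_seek_ms = 9.0                          # assumed average seek time
transfer_rate_mb_s = 150.0                 # assumed sustained transfer rate
request_kb = 4                             # size of one request

rotation_ms = 60_000 / rpm                 # time for one full rotation (about 8.33 ms)
avg_rotational_latency_ms = rotation_ms / 2
transfer_ms = request_kb / 1024 / transfer_rate_mb_s * 1000

total_ms = avg_seek_ms + avg_rotational_latency_ms + transfer_ms
print(f"latency {avg_rotational_latency_ms:.2f} ms, transfer {transfer_ms:.3f} ms, "
      f"total about {total_ms:.2f} ms per request")
```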
The document provides an overview of embedded Android, including its features, history, ecosystem, legal framework, hardware requirements, and development tools. It discusses key aspects of Android such as components, intents, manifest files, and more. It also summarizes the system startup process and overall architecture at a high level.
The document summarizes challenges in continuing to improve single-processor performance and introduces multicore architectures as a solution. It discusses how the conventional wisdom in computer architecture has changed, noting issues like the power wall, memory wall, and limitations to extracting more instruction level parallelism (ILP). To overcome these challenges, architectures are moving to multiple cores per chip to improve parallelism and efficiency. Caches are area- and power-intensive, so multiple cores running at lower voltage and frequency can increase throughput while reducing power.
Context based access control systems for mobile devices - LeMeniz Infotech
This document discusses various methods of input/output (I/O) in computer systems including programmed I/O, interrupt-driven I/O, direct memory access, buffering strategies like using multiple system buffers, techniques for scheduling disk requests like shortest seek time first, and concepts like disk caching and I/O modules.
This document provides an outline for an operating systems file organization course. It lists the course instructor, references, format, policies, and weekly topics. The course will cover basic file concepts, physical data transfer considerations, different types of files like sequential, ordered, direct access, indexed sequential files and indexes. It will have weekly quizzes, assignments, a midterm, and final exam. Course material will be provided online and late assignments will receive point deductions.
This document discusses I/O systems and their components. It describes I/O hardware including devices, ports, buses and controllers. It then discusses the application I/O interface, kernel I/O subsystem, and how I/O requests are transformed to hardware operations. It also covers various types of I/O devices, blocking vs non-blocking I/O, kernel data structures for I/O, and factors that influence I/O performance.
I/O management and disk scheduling.pptx - webip34973
This document provides an overview of I/O management and disk scheduling in operating systems. It begins with an introduction to different categories of I/O devices and how they vary. It then covers techniques for performing I/O like programmed I/O, interrupt-driven I/O, and direct memory access. The document discusses the evolution of the I/O function and a hierarchical model for organizing I/O. It also outlines concepts like I/O buffering, disk scheduling parameters, RAID levels, and disk caching.
This document discusses I/O systems, including an overview of I/O hardware, the application I/O interface, the kernel I/O subsystem, and I/O performance. It describes how I/O requests are transformed into hardware operations through techniques like interrupts, DMA, polling, and blocking vs. asynchronous I/O. Specific I/O concepts covered include STREAMS, device characteristics, and the life cycle of an I/O request through the kernel.
There are several techniques for performing input/output (I/O) in a computer system, including programmed I/O, interrupt-driven I/O, and direct memory access (DMA). Buffering is commonly used to improve I/O efficiency by transferring data between I/O devices and memory in advance of or after requests. The operating system handles I/O through buffering techniques like single, double, and circular buffers to avoid blocking processes waiting for I/O completion.
The document discusses various aspects of I/O systems and mass storage devices. It describes how operating systems manage I/O devices through device drivers and controllers. It covers different types of I/O devices like block and character devices. It also discusses I/O techniques like memory mapped I/O, interrupts, DMA, polling vs interrupts. The document provides an overview of mass storage structure including magnetic disks, storage arrays, and RAID levels. It covers topics like swap space management, Windows architecture and process states in Windows.
This document discusses principles of computer input/output (I/O) hardware and software. It covers topics like I/O devices, device controllers, buses, I/O techniques (programmed I/O, interrupt-driven I/O, and direct memory access), device drivers, layers of I/O software, file systems, and storage devices like disks. The document provides details on how operating systems manage and interface with various I/O components to facilitate data transfer and storage.
Ch 7 I/O management & disk scheduling - madhuributani
This document discusses input/output (I/O) management and disk scheduling. It begins by categorizing I/O devices as those for communicating with users, electronic equipment, and remote devices. It then describes how I/O devices differ in data rates, applications, control complexity, data transfer units, data representation, and error handling. The document outlines three I/O techniques - programmed I/O, interrupt-driven I/O, and direct memory access (DMA). It also discusses the evolution of I/O architectures and covers I/O buffering, disk organization, and disk terminology.
The document discusses disk drives and how they are addressed and mapped. Logical blocks on disk drives are mapped sequentially to physical sectors. Host systems access storage through I/O ports and buses like SCSI and Fibre Channel. Network-attached storage uses network protocols like NFS and CIFS. Storage area networks connect multiple hosts to multiple storage units using Fibre Channel. The operating system schedules disk I/O requests using algorithms like SSTF, SCAN, C-SCAN, and LOOK to minimize disk head movement.
This document provides an overview of the key topics covered in an introductory operating systems course, including computer system organization, operating system structure and operations, process management, memory management, storage management, protection and security, kernel data structures, and different computing environments. The objectives of the course are to describe basic computer system organization, provide a tour of major operating system components, and explore open-source operating systems and different types of computing environments like mobile, distributed, client-server, and peer-to-peer.
Operating system overview.ppt - operating systems - JyoReddy9
The document discusses the key concepts of operating systems including their goals, structure, functions and management of processes, memory, storage and security. Specifically, it describes how an operating system acts as an intermediary between the user and hardware to execute programs efficiently while making resource allocation decisions. It also outlines the hierarchy of computer storage and caching strategies used to optimize performance.
Operating Systems Genesis, Development and Functions m.pptx - DrIrfanulHaqAkhoon
The document discusses operating systems, including their functions, goals, historical development, and types used in different computers. It provides an overview of how operating systems act as an interface between the user and computer hardware, manage resources and allow for easier programming. Key points include that operating systems evolved from batch processing to support timesharing, personal computing and distributed systems. The types of operating systems depend on factors like response time and how data is input, with examples being batch, interactive, real-time, hybrid and embedded systems.
The document discusses operating systems and their key functions. It defines an operating system as a program that acts as an intermediary between the user and computer hardware. The main goals of an operating system are to execute user programs, make problem solving easier for users, and efficiently use computer hardware. It also controls low-level components like the CPU, memory, and I/O devices, and coordinates their use among application programs and users.
This document provides an overview of client/server computing and distributed systems. It discusses traditional centralized data processing and how distributed data processing departs from this model. Client/server architectures are introduced, including different types of client/server applications and architectures. Distributed message passing and remote procedure calls are covered as techniques for interprocess communication in distributed systems. The document also discusses clusters, including different cluster types, operating system design issues for clusters, examples of Windows Cluster Server and Sun Cluster, and Beowulf and Linux clusters using commodity hardware.
The document discusses principles of concurrency in operating systems, including mutual exclusion and synchronization. It covers various techniques for managing concurrent processes such as hardware support using interrupt disabling or compare-and-swap instructions. It also covers higher-level synchronization methods like semaphores, monitors, and message passing. It provides examples of how these techniques can solve concurrency issues like the bounded buffer problem and readers-writers problem.
The document summarizes threads, symmetric multiprocessing (SMP), and microkernels. It discusses threads in terms of resource ownership and execution. It describes SMP as allowing portions of the kernel to execute in parallel on multiple processors. Microkernels are described as having a small kernel core that provides modularity through extensions run in user space. Case studies of threads and SMP in Windows, Solaris, and Linux are provided.
The document provides an overview of operating system concepts, including:
1. The objectives and functions of operating systems such as providing convenience, efficiency, and ability to evolve for users and applications.
2. The evolution of operating systems from serial processing to time sharing systems to better utilize hardware resources and serve multiple users simultaneously.
3. Major achievements in operating system design including processes, memory management, information protection, scheduling, and system structure.
This document provides an overview of basic computer system elements and operating system concepts. It discusses the processor, memory, I/O modules, and system bus. It then covers processor registers, instruction execution, interrupts, the memory hierarchy including cache memory, and I/O communication techniques like programmed I/O, interrupt-driven I/O, and direct memory access. The goal is to provide a high-level roadmap of core computer hardware and software concepts.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
2. Roadmap I/O Devices Organization of the I/O Function Operating System Design Issues I/O Buffering Disk Scheduling Raid Disk Cache UNIX SVR4 I/O LINUX I/O Windows I/O
3. Categories of I/O Devices Difficult area of OS design Difficult to develop a consistent solution due to a wide variety of devices and applications Three Categories: Human readable Machine readable Communications
4. Human readable Devices used to communicate with the user Printers and terminals Video display Keyboard Mouse etc
5. Machine readable Used to communicate with electronic equipment Disk drives USB keys Sensors Controllers Actuators
7. Differences in I/O Devices Devices differ in a number of areas Data Rate Application Complexity of Control Unit of Transfer Data Representation Error Conditions
8. Data Rate May be massive difference between the data transfer rates of devices
9. Application Disk used to store files requires file management software Disk used to store virtual memory pages needs special hardware and software to support it Terminal used by system administrator may have a higher priority
10. Complexity of control A printer requires a relatively simple control interface. A disk is much more complex. This complexity is filtered to some extent by the complexity of the I/O module that controls the device.
11. Unit of transfer Data may be transferred as a stream of bytes or characters (e.g., terminal I/O) or in larger blocks (e.g., disk I/O).
12. Data representation Different data encoding schemes are used by different devices, including differences in character code and parity conventions.
13. Error Conditions The nature of errors differs widely from one device to another. Aspects include: the way in which they are reported, their consequences, the available range of responses
14. Roadmap I/O Devices Organization of the I/O Function Operating System Design Issues I/O Buffering Disk Scheduling Raid Disk Cache UNIX SVR4 I/O LINUX I/O Windows I/O
16. Evolution of the I/O Function Processor directly controls a peripheral device Controller or I/O module is added Processor uses programmed I/O without interrupts Processor does not need to handle details of external devices
17. Evolution of the I/O Function cont… Controller or I/O module with interrupts Efficiency improves as processor does not spend time waiting for an I/O operation to be performed Direct Memory Access Blocks of data are moved into memory without involving the processor Processor involved at beginning and end only
18. Evolution of the I/O Function cont… I/O module is a separate processor: the CPU directs the I/O processor to execute an I/O program in main memory I/O module has its own local memory Commonly used to control communications with interactive terminals
19. Direct Memory Access Processor delegates I/O operation to the DMA module DMA module transfers data directly to or from memory When complete, the DMA module sends an interrupt signal to the processor
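To make that delegation sequence concrete, the following is a minimal C sketch of a driver programming a memory-mapped DMA controller and handling its completion interrupt. The register names and layout are entirely hypothetical and mirror only the three steps above; real controllers differ.

#include <stdint.h>
#include <stdio.h>

/* Entirely hypothetical register layout for a memory-mapped DMA controller. */
struct dma_regs {
    volatile uintptr_t src;     /* source (device buffer) address           */
    volatile uintptr_t dst;     /* destination address in main memory       */
    volatile uint32_t  len;     /* number of bytes to transfer              */
    volatile uint32_t  ctrl;    /* bit 0 = start transfer                   */
    volatile uint32_t  status;  /* bit 0 = done (would raise an interrupt)  */
};

/* The processor only programs the transfer and then continues other work;
   it is not involved again until the completion interrupt arrives. */
static void dma_start_read(struct dma_regs *dma, uintptr_t dev_addr,
                           void *mem_buf, uint32_t nbytes) {
    dma->src  = dev_addr;
    dma->dst  = (uintptr_t)mem_buf;
    dma->len  = nbytes;
    dma->ctrl = 1u;
}

/* Called from the interrupt handler once the DMA module signals completion. */
static void dma_completion_handler(struct dma_regs *dma) {
    if (dma->status & 1u)
        printf("DMA transfer of %u bytes complete; unblock the waiting process\n",
               (unsigned)dma->len);
}

int main(void) {
    struct dma_regs fake = {0};       /* stand-in for real device registers */
    char buffer[512];
    dma_start_read(&fake, 0x1000, buffer, sizeof buffer);
    fake.status = 1u;                 /* pretend the device finished        */
    dma_completion_handler(&fake);
    return 0;
}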
20. to 22. Figures only: alternative DMA configurations (all modules sharing the system bus; DMA integrated with the I/O module; I/O modules connected to the DMA module by a separate I/O bus)
23. Goals: Efficiency Most I/O devices are extremely slow compared to main memory Use of multiprogramming allows for some processes to be waiting on I/O while another process executes I/O cannot keep up with processor speed Swapping used to bring in ready processes But this is an I/O operation itself
24. Generality For simplicity and freedom from error it is desirable to handle all I/O devices in a uniform manner Hide most of the details of device I/O in lower-level routines Difficult to completely generalize, but can use a hierarchical modular design of I/O functions
25. Hierarchical design A hierarchical philosophy leads to organizing an OS into layers Each layer relies on the next lower layer to perform more primitive functions It provides services to the next higher layer. Changes in one layer should not require changes in other layers
32. File System Directory management: concerned with user operations affecting files File system: logical structure and operations Physical organisation: converts logical names to physical addresses
33. Roadmap I/O Devices Organization of the I/O Function Operating System Design Issues I/O Buffering Disk Scheduling Raid Disk Cache UNIX SVR4 I/O LINUX I/O Windows I/O
34. I/O Buffering Processes must wait for I/O to complete before proceeding To avoid deadlock certain pages must remain in main memory during I/O It may be more efficient to perform input transfers in advance of requests being made and to perform output transfers some time after the request is made.
35. Block-oriented Buffering Information is stored in fixed-sized blocks Transfers are made a block at a time Can reference data by block number Used for disks and USB keys
36. Stream-Oriented Buffering Transfer information as a stream of bytes Used for terminals, printers, communication ports, mouse and other pointing devices, and most other devices that are not secondary storage
37. No Buffer Without a buffer, the OS accesses the device directly as and when it needs to
39. Block-Oriented Single Buffer Input transfers made to buffer Block moved to user space when needed The next block is moved into the buffer Read ahead or Anticipated Input Often a reasonable assumption as data is usually accessed sequentially
40. Stream-Oriented Single Buffer Line-at-a-time or Byte-at-a-time Terminals often deal with one line at a time with carriage return signaling the end of the line Byte-at-a-time suits devices where a single keystroke may be significant Also sensors and controllers
41. Double Buffer Use two system buffers instead of one A process can transfer data to or from one buffer while the operating system empties or fills the other buffer
42. Circular Buffer More than two buffers are used Each individual buffer is one unit in a circular buffer Used when I/O operation must keep up with process
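A minimal C sketch of a circular buffer of the kind described above, with a producer filling buffers and a consumer emptying them. The number of buffers and their size are arbitrary illustrative choices, and the synchronization a real driver would need (the producer being the interrupt or DMA path) is omitted.

#include <stdio.h>

#define NBUF 4          /* number of buffers in the ring (assumed) */
#define BUFSZ 64        /* bytes per buffer (assumed)              */

/* The producer and consumer chase each other around the ring of buffers. */
struct ring {
    char buf[NBUF][BUFSZ];
    int  in;            /* next buffer the producer fills          */
    int  out;           /* next buffer the consumer empties        */
    int  count;         /* number of currently full buffers        */
};

static int produce(struct ring *r, const char *data) {
    if (r->count == NBUF) return -1;   /* all buffers full: producer must wait  */
    snprintf(r->buf[r->in], BUFSZ, "%s", data);
    r->in = (r->in + 1) % NBUF;
    r->count++;
    return 0;
}

static int consume(struct ring *r, char *out) {
    if (r->count == 0) return -1;      /* all buffers empty: consumer must wait */
    snprintf(out, BUFSZ, "%s", r->buf[r->out]);
    r->out = (r->out + 1) % NBUF;
    r->count--;
    return 0;
}

int main(void) {
    struct ring r = { .in = 0, .out = 0, .count = 0 };
    char line[BUFSZ];

    produce(&r, "block 0");
    produce(&r, "block 1");
    while (consume(&r, line) == 0)
        printf("consumed: %s\n", line);
    return 0;
}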
43. Buffer Limitations Buffering smoothes out peaks in I/O demand. But with enough demand eventually all buffers become full and their advantage is lost However, when there is a variety of I/O and process activities to service, buffering can increase the efficiency of the OS and the performance of individual processes.
44. Roadmap I/O Devices Organization of the I/O Function Operating System Design Issues I/O Buffering Disk Scheduling Raid Disk Cache UNIX SVR4 I/O LINUX I/O Windows I/O
45. Disk Performance Parameters The actual details of disk I/O operation depend on many things A general timing diagram of disk I/O transfer is shown here.
46. Positioning the Read/Write Heads When the disk drive is operating, the disk is rotating at constant speed. Track selection involves moving the head in a movable-head system or electronically selecting one head on a fixed-head system.
47. Disk Performance Parameters Access Time is the sum of: Seek time: The time it takes to position the head at the desired track Rotational delay or rotational latency: The time it takes for the beginning of the sector to reach the head Transfer Time is the time taken to transfer the data.
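As a rough worked example with assumed, purely illustrative figures: for a drive with an average seek time of 4 ms spinning at 7,200 rpm, one revolution takes 60,000 / 7,200, about 8.33 ms, so the average rotational delay is about half of that, roughly 4.17 ms. The average access time is then about 4 + 4.17, roughly 8.2 ms, and the total time for the operation adds the transfer time (often well under a millisecond for a single sector) on top.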
48. Disk Scheduling Policies To compare the various schemes, consider a disk with 200 tracks whose head is initially located at track 100, and assume that the disk request queue has random requests in it. The requested tracks, in the order received by the disk scheduler, are 55, 58, 39, 18, 90, 160, 150, 38, 184.
49. First-in, first-out (FIFO) Processes requests sequentially Fair to all processes Approaches random scheduling in performance if there are many processes
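As a concrete check, here is a minimal C sketch that replays the example queue from the previous slide (head initially at track 100, requests 55, 58, 39, 18, 90, 160, 150, 38, 184) and totals the head movement under FIFO; for this queue it works out to 498 tracks.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Request queue in arrival order, head initially at track 100. */
    int requests[] = {55, 58, 39, 18, 90, 160, 150, 38, 184};
    int n = sizeof requests / sizeof requests[0];
    int head = 100, total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(requests[i] - head);   /* FIFO: service in arrival order */
        head = requests[i];
    }
    printf("FIFO total head movement: %d tracks\n", total);   /* 498 here */
    return 0;
}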
50. Priority Goal is not to optimize disk use but to meet other objectives Short batch jobs may have higher priority Provide good interactive response time Longer jobs may have to wait an excessively long time A poor policy for database systems
51. Last-in, first-out Good for transaction processing systems The device is given to the most recent user so there should be little arm movement Possibility of starvation since a job may never regain the head of the line
52. Shortest Service Time First Select the disk I/O request that requires the least movement of the disk arm from its current position Always choose the minimum seek time
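A minimal C sketch of the same example under SSTF: at each step the pending request closest to the current head position is chosen. For this queue the total head movement drops to 248 tracks.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int requests[] = {55, 58, 39, 18, 90, 160, 150, 38, 184};
    int n = sizeof requests / sizeof requests[0];
    int done[sizeof requests / sizeof requests[0]] = {0};
    int head = 100, total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {          /* pick the pending request   */
            if (done[i]) continue;             /* closest to the current head */
            int d = abs(requests[i] - head);
            if (best < 0 || d < best_dist) { best = i; best_dist = d; }
        }
        total += best_dist;
        head = requests[best];
        done[best] = 1;
    }
    printf("SSTF total head movement: %d tracks\n", total);   /* 248 here */
    return 0;
}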
53. SCAN Arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction then the direction is reversed
54. C-SCAN Restricts scanning to one direction only When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
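A minimal C sketch of both sweep policies on the same example queue. It assumes the LOOK-style refinement of turning around at the outermost pending request rather than the physical last track, and it counts the long C-SCAN return sweep as head movement; under those assumptions the totals are 250 tracks for SCAN and 322 for C-SCAN.

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

int main(void) {
    int req[] = {55, 58, 39, 18, 90, 160, 150, 38, 184};
    int n = sizeof req / sizeof req[0];
    int head = 100;
    qsort(req, n, sizeof req[0], cmp);

    /* index of the first request at or above the starting head position */
    int up = 0;
    while (up < n && req[up] < head) up++;

    /* SCAN: sweep upward to the highest pending request, then reverse. */
    int scan = 0, pos = head;
    for (int i = up; i < n; i++)   { scan += abs(req[i] - pos); pos = req[i]; }
    for (int i = up - 1; i >= 0; i--) { scan += abs(req[i] - pos); pos = req[i]; }

    /* C-SCAN: sweep upward, then return and service the low end in the
       same direction (the return seek is counted as head movement here). */
    int cscan = 0;
    pos = head;
    for (int i = up; i < n; i++) { cscan += abs(req[i] - pos); pos = req[i]; }
    for (int i = 0; i < up; i++) { cscan += abs(req[i] - pos); pos = req[i]; }

    printf("SCAN:   %d tracks\n", scan);    /* 250 for this queue */
    printf("C-SCAN: %d tracks\n", cscan);   /* 322 for this queue */
    return 0;
}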
55. N-step-SCAN Segments the disk request queue into subqueues of length N Subqueues are processed one at a time, using SCAN While a queue is being processed, new requests are added to another queue
58. Roadmap I/O Devices Organization of the I/O Function Operating System Design Issues I/O Buffering Disk Scheduling Raid Disk Cache UNIX SVR4 I/O LINUX I/O Windows I/O
59. Multiple Disks Disk I/O performance may be increased by spreading the operation over multiple read/write heads Or multiple disks Disk failures can be recovered if parity information is stored
60. RAID Redundant Array of Independent Disks Set of physical disk drives viewed by the operating system as a single logical drive Data are distributed across the physical drives of an array Redundant disk capacity is used to store parity information which provides recoverability from disk failure
64. RAID 2 (Using Hamming code) Synchronised disk rotation Data striping is used (with extremely small strips) Hamming code used to correct single-bit errors and detect double-bit errors
65. RAID 3 Bit-interleaved parity Similar to RAID 2 but uses a simple parity bit, with all parity stored on a single drive
66. RAID 4 Block-level parity A bit-by-bit parity strip is calculated across corresponding strips on each data disk The parity bits are stored in the corresponding strip on the parity disk.
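To make the parity idea concrete, here is a small C sketch that computes a bit-by-bit (XOR) parity strip across four data strips and then rebuilds one strip after a simulated disk failure. The number of data disks and the strip size are arbitrary illustrative choices.

#include <stdio.h>

#define NDATA 4      /* number of data disks (assumed for illustration) */
#define STRIP 8      /* bytes per strip (assumed for illustration)      */

int main(void) {
    unsigned char data[NDATA][STRIP] = {
        "strip-0", "strip-1", "strip-2", "strip-3"
    };
    unsigned char parity[STRIP] = {0};

    /* Bit-by-bit (XOR) parity across the corresponding strips on each data disk. */
    for (int d = 0; d < NDATA; d++)
        for (int b = 0; b < STRIP; b++)
            parity[b] ^= data[d][b];

    /* If one data disk fails, its strip is the XOR of the parity strip
       and the surviving data strips. */
    int failed = 2;
    unsigned char rebuilt[STRIP] = {0};
    for (int b = 0; b < STRIP; b++) {
        rebuilt[b] = parity[b];
        for (int d = 0; d < NDATA; d++)
            if (d != failed) rebuilt[b] ^= data[d][b];
    }
    printf("rebuilt strip %d: %s\n", failed, (char *)rebuilt);
    return 0;
}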
67. RAID 5 Block-level distributed parity Similar to RAID 4 but distributes the parity strips across all drives
68. RAID 6 Dual redundancy Two different parity calculations are carried out and stored in separate blocks on different disks. Can recover from two disks failing
69. Roadmap I/O Devices Organization of the I/O Function Operating System Design Issues I/O Buffering Disk Scheduling Raid Disk Cache UNIX SVR4 I/O LINUX I/O Windows I/O
70. Disk Cache Buffer in main memory for disk sectors Contains a copy of some of the sectors on the disk When an I/O request is made for a particular sector, a check is made to determine if the sector is in the disk cache. A number of ways exist to populate the cache
71. Least Recently Used The block that has been in the cache the longest with no reference to it is replaced A stack of pointers reference the cache Most recently referenced block is on the top of the stack When a block is referenced or brought into the cache, it is placed on the top of the stack
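A minimal C sketch of the stack idea described above, here simplified to an array of block numbers with the most recently referenced block at the top; the cache size and the reference string are made up for illustration.

#include <stdio.h>

#define CACHE_SLOTS 4   /* assumed cache size for illustration */

/* The cache is modelled as a stack of block numbers: index 0 is the most
   recently referenced block, the last occupied index is the LRU victim. */
static int stack_[CACHE_SLOTS];
static int used = 0;

static void reference(int block) {
    int i;
    for (i = 0; i < used; i++)
        if (stack_[i] == block) break;            /* hit: find current position  */

    if (i == used) {                              /* miss */
        printf("miss %d%s\n", block, used == CACHE_SLOTS ? " (evict LRU)" : "");
        if (used < CACHE_SLOTS) used++;
        i = used - 1;                             /* bottom slot gets overwritten */
    } else {
        printf("hit  %d\n", block);
    }
    for (; i > 0; i--) stack_[i] = stack_[i - 1]; /* shift down, push block on top */
    stack_[0] = block;
}

int main(void) {
    int trace[] = {1, 2, 3, 4, 1, 5, 2};          /* a made-up reference string   */
    for (int k = 0; k < (int)(sizeof trace / sizeof trace[0]); k++)
        reference(trace[k]);
    return 0;
}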
72. Least Frequently Used The block that has experienced the fewest references is replaced A counter is associated with each block Counter is incremented each time block accessed When replacement is required, the block with the smallest count is selected.
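A matching C sketch of least frequently used replacement, with a reference counter per cached block; again the cache size and the reference string are only illustrative.

#include <stdio.h>

#define CACHE_SLOTS 3   /* assumed cache size for illustration */

static int block_[CACHE_SLOTS];   /* cached block numbers      */
static int count_[CACHE_SLOTS];   /* reference count per block */
static int used = 0;

static void reference(int block) {
    for (int i = 0; i < used; i++)
        if (block_[i] == block) { count_[i]++; return; }   /* hit */

    if (used < CACHE_SLOTS) {                 /* miss, a free slot is available */
        block_[used] = block; count_[used] = 1; used++; return;
    }
    int victim = 0;                           /* miss, replace the block with   */
    for (int i = 1; i < used; i++)            /* the smallest reference count   */
        if (count_[i] < count_[victim]) victim = i;
    printf("evict %d (count %d) for %d\n", block_[victim], count_[victim], block);
    block_[victim] = block; count_[victim] = 1;
}

int main(void) {
    int trace[] = {7, 7, 8, 9, 8, 10};        /* a made-up reference string */
    for (int k = 0; k < (int)(sizeof trace / sizeof trace[0]); k++)
        reference(trace[k]);
    return 0;
}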
75. Roadmap I/O Devices Organization of the I/O Function Operating System Design Issues I/O Buffering Disk Scheduling Raid Disk Cache UNIX SVR4 I/O LINUX I/O Windows I/O
76. Devices are Files Each I/O device is associated with a special file Managed by the file system Provides a clean uniform interface to users and processes. To access a device, read and write requests are made for the special file associated with the device.
77. UNIX SVR4 I/O Each individual device is associated with a special file Two types of I/O Buffered Unbuffered
78. Buffer Cache Three lists are maintained Free List Device List Driver I/O Queue
79. Character Queue Used by character-oriented devices E.g. terminals and printers Either written by the I/O device and read by the process or vice versa Producer/consumer model used
80. Unbuffered I/O Unbuffered I/O is simply DMA between device and process Fastest method Process is locked in main memory and can’t be swapped out Device is tied to process and unavailable for other processes
82. Roadmap I/O Devices Organization of the I/O Function Operating System Design Issues I/O Buffering Disk Scheduling Raid Disk Cache UNIX SVR4 I/O LINUX I/O Windows I/O
83. Linux/Unix Similarities Linux and Unix (e.g. SVR4) are very similar in I/O terms The Linux kernel associates a special file with each I/O device driver. Block, character, and network devices are recognized.
84. The Elevator Scheduler Maintains a single queue for disk read and write requests Keeps list of requests sorted by block number Drive moves in a single direction to satisfy each request
85. Deadline scheduler Uses three queues Incoming requests Read requests go to the tail of a FIFO queue Write requests go to the tail of a FIFO queue Each request has an expiration time
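The dispatch rule can be sketched as follows. This is a highly simplified illustration rather than the actual kernel code: requests are normally taken from the sector-sorted elevator queue, but an expired request at the head of the read or write FIFO queue is served first. The structures and the single-request queues here are stand-ins; in Linux the default expirations are roughly 0.5 s for reads and 5 s for writes.

#include <stdio.h>

struct request { long sector; double expires; };

static struct request *pick_next(struct request *sorted_head,
                                 struct request *read_fifo_head,
                                 struct request *write_fifo_head,
                                 double now) {
    if (read_fifo_head && read_fifo_head->expires <= now)
        return read_fifo_head;      /* an expired read jumps the sorted queue     */
    if (write_fifo_head && write_fifo_head->expires <= now)
        return write_fifo_head;     /* same for an expired write                  */
    return sorted_head;             /* otherwise serve in elevator (sorted) order */
}

int main(void) {
    struct request sorted = { 1200, 10.0 };
    struct request read_f = {  400,  0.4 };   /* already past its deadline */
    struct request *next = pick_next(&sorted, &read_f, NULL, 0.5);
    printf("dispatch sector %ld\n", next->sector);
    return 0;
}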
86. Anticipatory I/O scheduler Elevator and deadline scheduling can be counterproductive if there are numerous synchronous read requests. Delay a short period of time after satisfying a read request to see if a new nearby request can be made
87. Linux Page Cache Linux 2.4 and later, a single unified page cache for all traffic between disk and main memory Benefits: Dirty pages can be collected and written out efficiently Pages in the page cache are likely to be referenced again due to temporal locality
88. Roadmap I/O Devices Organization of the I/O Function Operating System Design Issues I/O Buffering Disk Scheduling Raid Disk Cache UNIX SVR4 I/O LINUX I/O Windows I/O
89. Windows I/O Manager The I/O manager is responsible for all I/O for the operating system It provides a uniform interface that all types of drivers can call.
90. Windows I/O The I/O manager works closely with: Cache manager – handles all file caching File system drivers - routes I/O requests for file system volumes to the appropriate software driver for that volume. Network drivers - implemented as software drivers rather than part of the Windows Executive. Hardware device drivers
91. Asynchronous and Synchronous I/O Windows offers two modes of I/O operation: asynchronous and synchronous. Asynchronous mode is used whenever possible to optimize application performance.
92. Software RAID Windows implements RAID functionality as part of the operating system and can be used with any set of multiple disks. RAID 0, 1 and RAID 5 are supported. In the case of RAID 1 (disk mirroring), the two disks containing the primary and mirrored partitions may be on the same disk controller or different disk controllers.
93. Volume Shadow Copies Shadow copies are implemented by a software driver that makes copies of data on the volume before it is overwritten. Designed as an efficient way of making consistent snapshots of volumes so that they can be backed up. Also useful for archiving files on a per-volume basis
Editor's Notes
These slides are intended to help a teacher develop a presentation. This PowerPoint covers the entire chapter and includes too many slides for a single delivery. Professors are encouraged to adapt this presentation in ways which are best suited for their students and environment.
Beginning with a brief discussion of I/O devices and the organization of the I/O functions. Next examine operating system design issues, including design objectives, and the way in which the I/O function can be structured. Then I/O buffering is examined; the next sections of the chapter are devoted to magnetic disk I/O. We begin by developing a model of disk I/O performance and then examine several techniques that can be used to enhance performance.
Suitable for communicating with the computer user. Examples include printers and terminals, the latter consisting of video display, keyboard, and perhaps other devices such as a mouse.
Suitable for communicating with electronic equipment. Examples are disk drives, USB keys, sensors, controllers, and actuators.
Suitable for communicating with remote devices. Examples are digital line drivers and modems.
Each of these is covered in subsequent slides. This diversity makes a uniform and consistent approach to I/O, both from the point of view of the operating system and from the point of view of user processes, difficult to achieve.
There may be differences of several orders of magnitude between the data transfer rates.
The use to which a device is put has an influence on the software and policies in the operating system and supporting utilities. Examples: a disk used for files requires the support of file management software; a disk used as a backing store for pages in a virtual memory scheme depends on the use of virtual memory hardware and software. These applications have an impact on disk scheduling algorithms. As another example, a terminal may be used by an ordinary user or a system administrator, implying different privilege levels and perhaps different priorities in the operating system.
A printer requires a relatively simple control interface while a disk is much more complex. This complexity is filtered to some extent by the complexity of the I/O module that controls the device.
Data may be transferred as a stream of bytes or characters (e.g., terminal I/O) or in larger blocks (e.g., disk I/O).
Different data encoding schemes are used by different devices, including differences in character code and parity conventions.
The nature of errors, the way in which they are reported, their consequences, and the available range of responses differ widely from one device to another.
From section 1.7: Programmed I/O: Processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy waits for the operation to be completed before proceeding. Interrupt-driven I/O: Processor issues an I/O command on behalf of a process. If the I/O instruction from the process is nonblocking, then the processor continues to execute instructions from the process that issued the I/O command. If the I/O instruction is blocking, then the next instruction that the processor executes is from the OS, which will put the current process in a blocked state and schedule another process. Direct memory access (DMA): A DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred. Table 11.1 indicates the relationship among these three techniques.
1. The processor directly controls a peripheral device. This is seen in simple microprocessor-controlled devices. 2. A controller or I/O module is added. The processor uses programmed I/O without interrupts. With this step, the processor becomes somewhat divorced from the specific details of external device interfaces.
3. Now interrupts are employed. The processor need not spend time waiting for an I/O operation to be performed, thus increasing efficiency. 4. The I/O module is given direct control of memory via DMA. It can now move a block of data to or from memory without involving the processor, except at the beginning and end of the transfer.
5. The I/O module is enhanced to become a separate processor, with a specialized instruction set tailored for I/O. The CPU directs the I/O processor to execute an I/O program in main memory. The I/O processor fetches and executes these instructions without processor intervention, allowing the processor to specify a sequence of I/O activities and to be interrupted only when the entire sequence has been performed. 6. The I/O module has a local memory of its own and is, in fact, a computer in its own right. A large set of I/O devices can be controlled, with minimal processor involvement. Commonly used to control communications with interactive terminals. The I/O processor takes care of most of the tasks involved in controlling the terminals.
The DMA mechanism can be configured in a variety of ways. Some possibilities are shown here. In the first example, all modules share the same system bus. The DMA module, acting as a surrogate processor, uses programmed I/O to exchange data between memory and an I/O module through the DMA module. This is clearly inefficient: as with processor-controlled programmed I/O, each transfer of a word consumes two bus cycles (transfer request followed by transfer).
The number of required bus cycles can be cut substantially by integrating the DMA and I/O functions. This means that there is a path between the DMA module and one or more I/O modules that does not include the system bus. The DMA logic may actually be a part of an I/O module, or it may be a separate module that controls one or more I/O modules.
This concept can be taken one step further by connecting I/O modules to the DMA module using an I/O bus. This reduces the number of I/O interfaces in the DMA module to one and provides for an easily expandable configuration. In all of these cases the system bus that the DMA module shares with the processor and main memory is used by the DMA module only to exchange data with memory and to exchange control signals with the processor. The exchange of data between the DMA and I/O modules takes place off the system bus.
Efficiency is important because I/O operations often form a bottleneck in a computing system. One way to tackle this problem is multiprogramming, which, as we have seen, allows some processes to be waiting on I/O operations while another process is executing. However, even with the vast size of main memory in today’s machines, often I/O is not keeping up with the activities of the processor. Swapping is used to bring in additional ready processes to keep the processor busy, but this in itself is an I/O operation. Thus, a major effort in I/O design has been schemes for improving the efficiency of the I/O. The area that has received the most attention, because of its importance, is disk I/O.
For simplicity and freedom from error, it is desirable to handle all devices in a uniform manner. This applies both to the way in which processes view I/O devices and the way in which the operating system manages I/O devices and operations. Because of the diversity of device characteristics, it is difficult in practice to achieve true generality. What can be done is to use a hierarchical, modular approach to the design of the I/O function. This hides most of the details of device I/O in lower-level routines so that user processes and upper levels of the operating system see devices in terms of general functions, such as read, write, open, close, lock, unlock.
The hierarchical philosophy developed in Chapter 2 suggested that the functions of the operating system should be separated according to their complexity, their characteristic time scale, and their level of abstraction. This approach leads to an organization of the operating system into a series of layers. Each layer performs a related subset of the functions required of the operating system. It relies on the next lower layer to perform more primitive functions and to conceal the details of those functions. It provides services to the next higher layer. Ideally, the layers should be defined so that changes in one layer do not require changes in other layers.
Logical I/O: Deals with the device as a logical resource and is not concerned with the details of actually controlling the device. Concerned with managing general I/O functions on behalf of user processes, allowing them to deal with the device in terms of a device identifier and simple commands such as open, close, read, write. Device I/O: The requested operations and data (buffered characters, records, etc.) are converted into appropriate sequences of I/O instructions, channel commands, and controller orders. Buffering techniques may be used to improve utilization. Scheduling and control: The actual queuing and scheduling of I/O operations occurs at this layer, as well as the control of the operations. Interrupts are handled at this layer and I/O status is collected and reported. This is the layer of software that actually interacts with the I/O module and hence the device hardware.
The logical I/O module is replaced by a communications architecture, which may itself consist of a number of layers. An example is TCP/IP.
Directory management: At this layer, symbolic file names are converted to identifiers that either reference the file directly or indirectly through a file descriptor or index table. Concerned with user operations that affect the directory of files, such as add, delete, and reorganize. File system: This layer deals with the logical structure of files and with the operations that can be specified by users, such as open, close, read, write. Access rights are also managed at this layer. Physical organization: Files and records must be converted to physical secondary storage addresses, taking into account the physical track and sector structure of the secondary storage device. Allocation of secondary storage space and main storage buffers is generally treated at this layer as well.
To avoid deadlock, the user memory involved in the I/O operation must be locked in main memory immediately before the I/O request is issued, even though the I/O operation is queued and may not be executed for some time. If a block is being transferred from a user process area directly to an I/O module, then the process is blocked during the transfer and the process may not be swapped out. To avoid these overheads and inefficiencies, it is sometimes convenient to perform input transfers in advance of requests being made and to perform output transfers some time after the request is made.
A block-oriented device stores information in blocks that are usually of fixed size, and transfers are made one block at a time. Generally, it is possible to reference data by its block number. Disks and USB keys are examples of block-oriented devices.
A stream-oriented device transfers data in and out as a stream of bytes, with no block structure. Terminals, printers, communications ports, mouse and other pointing devices, and most other devices that are not secondary storage are stream oriented.
When a user process issues an I/O request, the operating system assigns a buffer in the system portion of main memory to the operation.
For block-oriented devices, input transfers are made to the system buffer. When the transfer is complete, the process moves the block into user space and immediately requests another block. This is called reading ahead, or anticipated input; it is done in the expectation that the block will eventually be needed. This is a reasonable assumption most of the time because data are usually accessed sequentially. Only at the end of a sequence of processing will a block be read in unnecessarily.
The single buffering scheme can be used in a line-at-a-time fashion or a byte-at-a-time fashion. Line-at-a-time operation is appropriate for scroll-mode terminals (sometimes called dumb terminals). Byte-at-a-time operation is used where each keystroke is significant, or for peripherals such as sensors and controllers.
A process transfers data to (or from) one buffer while the operating system empties (or fills) the other.
Double buffering may be inadequate if the process performs rapid bursts of I/O. The problem can often be alleviated by using more than two buffers. When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer, with each individual buffer being one unit in the circular buffer.
Buffering is a technique that smoothes out peaks in I/O demand. However, no amount of buffering will allow an I/O device to keep pace with a process indefinitely when the average demand of the process is greater than the I/O device can service. Even with multiple buffers, all of the buffers will eventually fill up and the process will have to wait after processing each chunk of data. However, in a multiprogramming environment, when there is a variety of I/O activity and a variety of process activity to service, buffering is one tool that can increase the efficiency of the operating system and the performance of individual processes.
Over the last 40 years, the increase in the speed of processors and main memory has far outstripped that for disk access, with processor and main memory speeds increasing by about two orders of magnitude compared to one order of magnitude for disk. The result is that disks are currently at least four orders of magnitude slower than main memory. Thus, the performance of the disk storage subsystem is of vital concern. In this section, we highlight some of the key issues and look at the most important approaches.
The actual details of disk I/O operation depend on the computer system, the operating system, and the nature of the I/O channel and disk controller hardware. A general timing diagram of disk I/O transfer is shown in Figure 11.6.
When the disk drive is operating, the disk is rotating at constant speed. To read or write, the head must be positioned at the desired track and at the beginning of the desired sector on that track. Track selection involves moving the head in a movable-head system or electronically selecting one head on a fixed-head system.
Access time is the sum of: Seek time is the time it takes to position the head at the track. Rotational delay is the time it takes for the beginning of the sector to reach the head. Once the head is in position, the read or write operation is then performed as the sector moves under the head; this is the data transfer portion of the operation; the time required for the transfer is the transfer time.
Movie icon jumps to animation at http://gaia.ecs.csus.edu/%7ezhangd/oscal/DiskApplet.html The simplest form of scheduling is first-in-first-out (FIFO) scheduling, which processes items from the queue in sequential order. This strategy has the advantage of being fair, because every request is honored and the requests are honored in the order received. This figure illustrates the disk arm movement with FIFO. This graph is generated directly from the data in Table 11.2a. As can be seen, the disk accesses are in the same order as the requests were originally received. With FIFO, if there are only a few processes that require access and if many of the requests are to clustered file sectors, then we can hope for good performance. But, this technique will often approximate random scheduling in performance, if there are many processes competing for the disk.
With a system based on priority (PRI), the control of the scheduling is outside the control of disk management software. This is not intended to optimize disk utilization but to meet other objectives within the operating system. Often short batch jobs and interactive jobs are given higher priority than longer jobs that require longer computation. This allows a lot of short jobs to be flushed through the system quickly and may provide good interactive response time. However, longer jobs may have to wait excessively long times. Furthermore, such a policy could lead to countermeasures on the part of users, who split their jobs into smaller pieces to beat the system. This type of policy tends to be poor for database systems.
In transaction processing systems, giving the device to the most recent user should result in little or no arm movement for moving through a sequential file.
Select the disk I/O request that requires the least movement of the disk arm from its current position. Thus, we always choose to incur the minimum seek time. Always choosing the minimum seek time does not guarantee that the average seek time over a number of arm movements will be minimum. However, this should provide better performance than FIFO. Because the arm can move in two directions, a random tie-breaking algorithm may be used to resolve cases of equal distances.
With SCAN, the arm is required to move in one direction only, satisfying all outstanding requests en route, until it reaches the last track in that direction or until there are no more requests in that direction. The service direction is then reversed and the scan proceeds in the opposite direction, again picking up all requests in order. This latter refinement is sometimes referred to as the LOOK policy. The SCAN policy favors jobs whose requests are for tracks nearest to both innermost and outermost tracks and favors the latest-arriving jobs.
The C-SCAN (circular SCAN) policy restricts scanning to one direction only. Thus, when the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again. This reduces the maximum delay experienced by new requests.
The N-step-SCAN policy segments the disk request queue into subqueues of length N. Subqueues are processed one at a time, using SCAN. While a queue is being processed, new requests must be added to some other queue. If fewer than N requests are available at the end of a scan, then all of them are processed with the next scan.
FSCAN is a policy that uses two subqueues. When a scan begins, all of the requests are in one of the queues, with the other empty. During the scan, all new requests are put into the other queue. Thus, service of new requests is deferred until all of the old requests have been processed.
With multiple disks, separate I/O requests can be handled in parallel, as long as the data required reside on separate disks. Also, a single I/O request can be executed in parallel if the block of data to be accessed is distributed across multiple disks.
Movie icon links to animation at: http://gaia.ecs.csus.edu/%7ezhangd/oscal/RAIDFiles/RAID.htm The RAID scheme consists of seven levels, zero through six. These levels do not imply a hierarchical relationship but designate different design architectures that share three common characteristics: 1. RAID is a set of physical disk drives viewed by the operating system as a single logical drive. 2. Data are distributed across the physical drives of an array in a scheme known as striping, described subsequently. 3. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.
RAID level 0 is not a true member of the RAID family, because it does not include redundancy. The advantage of this layout is that if a single I/O request consists of multiple logically contiguous strips, then up to n strips for that request can be handled in parallel, greatly reducing the I/O transfer time.
Each logical strip is mapped to two separate physical disks so that every disk in the array has a mirror disk that contains the same data. A read request can be serviced by either of the two disks that contains the requested data, whichever one involves the minimum seek time plus rotational latency. A write request requires that both corresponding strips be updated, but this can be done in parallel. Thus, the write performance is dictated by the slower of the two writes. Recovery from a failure is simple. When a drive fails, the data may still be accessed from the second drive.
In a parallel access array, all member disks participate in the execution of every I/O request. Typically, the spindles of the individual drives are synchronized so that each disk head is in the same position on each disk at any given time. As in the other RAID schemes, data striping is used. In the case of RAID 2 and 3, the strips are very small, often as small as a single byte or word. With RAID 2, an error-correcting code is calculated across corresponding bits on each data disk, and the bits of the code are stored in the corresponding bit positions on multiple parity disks. Typically, a Hamming code is used, which is able to correct single-bit errors and detect double-bit errors.
RAID 3 is organized in a similar fashion to RAID 2. The difference is that RAID 3 requires only a single redundant disk, no matter how large the disk array. RAID 3 employs parallel access, with data distributed in small strips. Instead of an error-correcting code, a simple parity bit is computed for the set of individual bits in the same position on all of the data disks.
A bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk.
RAID 5 is organized in a similar fashion to RAID 4. RAID 5 distributes the parity strips across all disks. A typical allocation is a round-robin scheme: for an n-disk array, the parity strip is on a different disk for the first n stripes, and the pattern then repeats. The distribution of parity strips across all drives avoids the potential I/O bottleneck of the single parity disk found in RAID 4.
Two different parity calculations are carried out and stored in separate blocks on different disks. Thus, a RAID 6 array whose user data require N disks consists of N+2 disks. P and Q are two different data check algorithms. One of the two is the exclusive-OR calculation used in RAID 4 and 5. The other is an independent data check algorithm. This makes it possible to regenerate data even if two disks containing user data fail.
A disk cache is a buffer in main memory for disk sectors. The cache contains a copy of some of the sectors on the disk. When an I/O request is made for a particular sector, a check is made to determine if the sector is in the disk cache. If so, the request is satisfied via the cache. If not, the requested sector is read into the disk cache from the disk. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache to satisfy a single I/O request, it is likely that there will be future references to that same block.
The most commonly used algorithm is least recently used (LRU): replace the block that has been in the cache longest with no reference to it. The cache consists of a stack of blocks, with the most recently referenced block on the top of the stack. When a block in the cache is referenced, it is moved from its existing position on the stack to the top of the stack. When a block is brought in from secondary memory, remove the block that is on the bottom of the stack and push the incoming block onto the top of the stack. It is not necessary actually to move these blocks around in main memory; a stack of pointers can be associated with the cache.
Replace that block in the set that has experienced the fewest references. LFU could be implemented by associating a counter with each block. When a block is brought in, it is assigned a count of 1; with each reference to the block, its count is incremented by 1. When replacement is required, the block with the smallest count is selected. Intuitively, it might seem that LFU is more appropriate than LRU because LFU makes use of more pertinent information about each block in the selection process.
The blocks are logically organized in a stack, as with the LRU algorithm. A certain portion of the top part of the stack is designated the new section. When there is a cache hit, the referenced block is moved to the top of the stack. If the block was already in the new section, its reference count is not incremented; otherwise it is incremented by 1. Given a sufficiently large new section, this results in the reference counts for blocks that are repeatedly re-referenced within a short interval remaining unchanged. On a miss, the block with the smallest reference count that is not in the new section is chosen for replacement; the least recently used such block is chosen in the event of a tie. A further refinement (Figure 11.9b): divide the stack into three sections: new, middle, and old. As before, reference counts are not incremented on blocks in the new section. However, only blocks in the old section are eligible for replacement. Assuming a sufficiently large middle section, this allows relatively frequently referenced blocks a chance to build up their reference counts before becoming eligible for replacement.
Figure 11.10 summarizes results from several studies using LRU, one for a UNIX system running on a VAX. Figure 11.11 shows results for simulation studies of the frequency-based replacement algorithm. A comparison of the two figures points out one of the risks of this sort of performance assessment. The figures appear to show that LRU outperforms the frequency-based replacement algorithm. However, when identical reference patterns using the same cache structure are compared, the frequency-based replacement algorithm is superior. Thus, the exact sequence of reference patterns, plus related design issues such as block size, will have a profound influence on the performance achieved.
In UNIX, each individual I/O device is associated with a special file. These are managed by the file system and are read and written in the same manner as user data files. This provides a clean, uniform interface to users and processes. To read from or write to a device, read and write requests are made for the special file associated with the device.
There are two types of I/O in UNIX: buffered and unbuffered. Buffered I/O passes through system buffers, whereas unbuffered I/O typically involves the DMA facility, with the transfer taking place directly between the I/O module and the process I/O area. For buffered I/O, two types of buffers are used: system buffer caches and character queues.
The buffer cache in UNIX is essentially a disk cache. I/O operations with disk are handled through the buffer cache. The data transfer between the buffer cache and the user process space always occurs using DMA. Because both the buffer cache and the process I/O area are in main memory, the DMA facility is used to perform a memory-to-memory copy. This does not use up any processor cycles, but it does consume bus cycles. To manage the buffer cache, three lists are maintained: Free list: list of all slots in the cache (a slot is referred to as a buffer in UNIX; each slot holds one disk sector) that are available for allocation. Device list: list of all buffers currently associated with each disk. Driver I/O queue: list of buffers that are actually undergoing or waiting for I/O on a particular device.
Block-oriented devices, such as disk and USB keys, can be effectively served by the buffer cache. A different form of buffering is appropriate for character-oriented devices, such as terminals and printers. A character queue is either written by the I/O device and read by the process or written by the process and read by the device. In both cases, the producer/consumer model introduced in Chapter 5 is used. Thus, character queues may only be read once; as each character is read, it is effectively destroyed. This is in contrast to the buffer cache, which may be read multiple times and hence follows the readers/writers model.
Unbuffered I/O, which is simply DMA between device and process space, is always the fastest method for a process to perform I/O. A process that is performing unbuffered I/O is locked in main memory and cannot be swapped out. This reduces the opportunities for swapping by tying up part of main memory, thus reducing the overall system performance. Also, the I/O device is tied up with the process for the duration of the transfer, making it unavailable for other processes.
This figure shows the types of I/O suited to each type of device. Disk drives are block oriented, and have the potential for reasonably high throughput. Thus, I/O for these devices tends to be unbuffered or via the buffer cache. Tape drives are functionally similar to disk drives and use similar I/O schemes. Because terminals involve relatively slow exchange of characters, terminal I/O typically makes use of the character queue. Similarly, communication lines require serial processing of bytes of data for input or output and are best handled by character queues. The type of I/O used for a printer will generally depend on its speed.
In general terms, the Linux I/O kernel facility is very similar to that of other UNIX implementations, such as SVR4. The Linux kernel associates a special file with each I/O device driver. Block, character, and network devices are recognized. In this section, we look at several features of the Linux I/O facility.
The elevator scheduler maintains a single queue for disk read and write requests and performs both sorting and merging functions on the queue. The elevator scheduler keeps the list of requests sorted by block number. As the disk requests are handled, the drive moves in a single direction, satisfying each request as it is encountered.
Each incoming request is placed in the sorted elevator queue. In addition, the same request is placed at the tail of a read FIFO queue for a read request or a write FIFO queue for a write request. The read and write queues maintain a list of requests in the sequence in which the requests were made. Associated with each request is an expiration time, with a default value of 0.5 seconds for a read request and 5 seconds for a write request. Ordinarily, the scheduler dispatches from the sorted queue. When a request is satisfied, it is removed from the head of the sorted queue and also from the appropriate FIFO queue. However, when the item at the head of one of the FIFO queues becomes older than its expiration time, then the scheduler next dispatches from that FIFO queue, taking the expired request, plus the next few requests from the queue. As each request is dispatched, it is also removed from the sorted queue.
In Linux, the anticipatory scheduler is superimposed on the deadline scheduler. When a read request is dispatched, the anticipatory scheduler causes the scheduling system to delay for up to 6 milliseconds, depending on the configuration. During this small delay, there is a good chance (principle of locality) that the application that issued the last read request will issue another read request to the same region of the disk. If so, that request will be serviced immediately. If no such read request occurs, the scheduler resumes using the deadline scheduling algorithm.
In Linux 2.2 and earlier releases, the kernel maintained a page cache for reads and writes from regular file system files and for virtual memory pages, and a separate buffer cache for block I/O. For Linux 2.4 and later, there is a single unified page cache that is involved in all traffic between disk and main memory. The page cache confers two benefits. When it is time to write back dirty pages to disk, a collection of them can be ordered properly and written out efficiently. Second, because of the principle of temporal locality, pages in the page cache are likely to be referenced again before they are flushed from the cache, thus saving a disk I/O operation. Dirty pages are written back to disk in two situations: When free memory falls below a specified threshold, the kernel reduces the size of the page cache to release memory to be added to the free memory pool. When dirty pages grow older than a specified threshold, a number of dirty pages are written back to disk.
The I/O manager is responsible for all I/O for the operating system and provides a uniform interface that all types of drivers can call.
The I/O manager works closely with four types of kernel components: Cache manager: The cache manager handles file caching for all file systems. A kernel thread, the lazy writer, periodically batches the updates together to write to disk, which allows the I/O to be more efficient. The cache manager works by mapping regions of files into kernel virtual memory and then relying on the virtual memory manager to do most of the work to copy pages to and from the files on disk. File system drivers: The I/O manager treats a file system driver as just another device driver and routes I/O requests for file system volumes to the appropriate software driver for that volume. The file system, in turn, sends I/O requests to the software drivers that manage the hardware device adapter. Network drivers: Windows includes integrated networking capabilities and support for remote file systems. The facilities are implemented as software drivers rather than part of the Windows Executive. Hardware device drivers: These software drivers access the hardware registers of the peripheral devices using entry points in the kernel’s Hardware Abstraction Layer. A set of these routines exists for every platform that Windows supports; because the routine names are the same for all platforms, the source code of Windows device drivers is portable across different processor types.
Shadow copies are an efficient way of making consistent snapshots of volumes so that they can be backed up. They are also useful for archiving files on a per-volume basis. If a user deletes a file, he or she can retrieve an earlier copy from any available shadow copy made by the system administrator. Shadow copies are implemented by a software driver that makes copies of data on the volume before it is overwritten.