Operating System
An operating system (OS) is software that manages the computer hardware and provides an environment in which application programs can run. It acts as an interface between the computer and its users, offering either a graphical user interface (GUI) or a command-line interface (CLI).
Types of Operating Systems
1. Real-time Operating System: A multitasking operating system that aims at executing real-time applications. Such systems have either an event-driven or a time-sharing design: an event-driven system switches between tasks based on their priorities, while a time-sharing system switches tasks based on clock interrupts. They have well-defined, fixed-time constraints.
2. Distributed Operating System: An operating system that manages a group of independent, networked computers and makes them appear to users as a single computer, providing access to the various resources the system maintains.
3. Embedded System: These operating systems are designed for embedded computer systems. They operate on small machines such as PDAs with limited autonomy and a limited number of resources, so they are very compact and extremely efficient by design. E.g. Windows CE and MINIX 3.
Multi-user Operating Systems: Allow multiple users to access a computer system concurrently; access is provided on a time-sharing basis.
Multi-tasking Operating Systems (Time-sharing): Allow the execution of multiple tasks at one time.
Pre-emptive multitasking- slices CPU time and dedicates one slot to each of the programs. E.g. Linux and Solaris.
Cooperative multitasking- relies on each process to yield time to the other processes in a defined manner. E.g. classic MS Windows.
Multi-programming: Keeping more than one process resident in main memory in order to increase CPU utilization.
Multi-processing System: Two or more processors in close communication sharing some computer resources. It is used to increase throughput and reliability, and is more economical than multiple single-processor systems.
Dual-Mode Operation-
Kernel mode: Execution of tasks related to the operating system.
User mode: Execution of user applications and other user-related tasks.
Kernel – basic component of OS
The kernel is the central component of the operating system. It is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources through inter-process communication mechanisms and system calls. The basic facilities provided by the kernel include management of resources such as:
The Central Processing Unit (CPU). This is the most central part of a computer system, responsible for running or executing programs. The kernel decides at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).
The computer's memory. Memory is used to store both program instructions and data; typically, both need to be present in memory for a program to execute. The kernel decides which memory each process can use and determines what to do when not enough is available.
Any input/output (I/O) devices present in the computer, such as keyboards, mice, disk drives, printers, and displays. The kernel routes I/O requests from applications to the appropriate device and provides convenient methods for using the device.
Monolithic kernels: In a monolithic kernel, all OS services run along with the main kernel thread, thus residing in the same memory area. This approach provides rich and powerful hardware access, and a monolithic kernel is generally easier to implement than a microkernel. The main disadvantages of monolithic kernels are the dependencies between system components (a bug in a device driver might crash the entire system) and the fact that large kernels can become very difficult to maintain.
Microkernels: The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls that implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel, such as networking, are implemented in user-space programs referred to as servers. Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches can slow down the system because they typically generate more overhead than plain function calls. A microkernel allows the remaining part of the operating system to be implemented as a normal application program written in a high-level language, and allows different operating systems to run on top of the same unchanged kernel. It is also possible to switch among operating systems dynamically and to have more than one active simultaneously.
Process Management Process States
New- The process is being created.
Running- Instructions are being executed.
Waiting- The process is waiting for some event to occur.
Ready- The process is waiting to be assigned to a processor.
Terminated- The process has finished execution.
Process Control Block (PCB)- contains all information associated with a specific process: process state, program counter, CPU registers, and information regarding CPU scheduling, memory management, I/O status, and accounting.
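The PCB can be pictured as a simple record. A minimal sketch in Python (the field names and values here are illustrative, not taken from any real kernel):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified Process Control Block (illustrative fields only)."""
    pid: int
    state: str = "new"          # new, ready, running, waiting, terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)

pcb = PCB(pid=42)
pcb.state = "ready"    # admitted to the ready queue
pcb.state = "running"  # dispatched to the CPU
```

The kernel keeps one such record per process and updates it on every state transition and context switch.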
Multithreaded process- A thread is a lightweight unit of execution within a process, defining one flow of control. A multithreaded process contains several different flows of control within the same address space. The benefits of multithreading include increased responsiveness to the user, resource sharing, economy, and the ability to take advantage of multiprocessor architectures.
Schedulers are mainly of three types:
Long-term schedulers: Select the processes from the job pool that will contend for the CPU.
Medium-term schedulers: Schedule the swapping in and out of main memory of partially executed processes.
Short-term schedulers: Select among the processes that are ready to execute and allocate the CPU to one of them.
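As a small illustration of threads sharing one address space, the sketch below (plain Python threading; the worker function is our own invention) starts several threads that all append to the same list:

```python
import threading

results = []  # shared by all threads: one address space per process

def worker(n):
    # Each thread is a separate flow of control, but 'results'
    # refers to the same object in every thread.
    results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every flow of control to finish
```

Creating a thread is cheaper than creating a process precisely because the address space, open files, and other resources are shared rather than duplicated.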
Process Scheduling Algorithms
First-Come, First-Served (FCFS) Scheduling: The process that requests the CPU first is allocated the CPU first.
Shortest-Job-First (SJF) Scheduling: When the CPU becomes available, the process with the smallest next CPU burst is allocated the CPU.
Priority Scheduling: Each process is associated with a priority; the process with the highest priority is allocated the CPU.
Round-Robin (RR) Scheduling: Similar to FCFS but on a time-sharing basis; each process executes for a fixed time slot in a circular queue.
Multilevel Queue Scheduling: Separate priority queues are defined for different classes of processes, such as system processes, interactive processes, interactive editing processes, batch processes, and student processes.
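To make the FCFS/SJF difference concrete, here is a short sketch (the burst times are made up for illustration) that computes the average waiting time under each order:

```python
def avg_waiting_time(burst_times):
    # Waiting time of each process = total burst time of those served before it.
    waiting, elapsed = 0, 0
    for burst in burst_times:
        waiting += elapsed
        elapsed += burst
    return waiting / len(burst_times)

bursts = [24, 3, 3]                     # arrival order (illustrative)
fcfs = avg_waiting_time(bursts)         # serve in arrival order
sjf = avg_waiting_time(sorted(bursts))  # serve shortest burst first
```

With these bursts, FCFS yields an average wait of 17 time units while SJF yields 3, which is why SJF is provably optimal for average waiting time.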
Deadlocks- A deadlock state occurs when two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. A deadlock can occur only if four necessary conditions hold simultaneously: mutual exclusion, hold and wait, no preemption, and circular wait.
Prevention: Ensure that at least one of the necessary conditions never holds.
Avoidance: Employ protocols and algorithms to avoid deadlocks.
Detection: First detect the occurrence of a deadlock and then employ some recovery scheme.
There are three principal methods for dealing with deadlocks:
1. Use some protocol to prevent or avoid deadlocks, ensuring that the system never enters a deadlock state.
2. Allow the system to enter a deadlock state, detect it, and then recover.
3. Ignore the problem altogether and pretend that deadlocks never occur in the system.
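A common prevention technique is to break the circular-wait condition by imposing a single global ordering on all locks. A sketch with Python threads (ordering locks by `id` is just one illustrative choice of global order):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
completed = []

def transfer(first, second):
    # Always acquire locks in one global order (here: by id), so a cycle
    # of threads each waiting on the next can never form.
    ordered = sorted((first, second), key=id)
    with ordered[0], ordered[1]:
        completed.append(True)  # critical section

# The two threads name the locks in opposite orders, which would risk
# deadlock with naive acquisition; the ordering rule makes it safe.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
```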
Memory Management
Paging & Segmentation- memory management schemes that permit the physical address space of a process to be noncontiguous.
Paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. Paging hardware uses a register (the page-table base register) and a cache (the translation look-aside buffer, TLB) to speed up address translation.
Swapping- a process can be temporarily swapped out of memory to a backing store and later brought back into memory for continued execution. Swapping is used with priority-based scheduling algorithms.
Segmentation- supports the user's view of memory: the logical address space is a collection of segments, in contrast to the purely physical view imposed by paging.
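Translating a logical address under paging splits it into a page number and an offset; the page number indexes the page table to find the frame. A toy sketch (the page size and page-table contents are invented for illustration):

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages
page_table = {0: 5, 1: 2, 2: 9}  # page number -> frame number (invented)

def translate(logical_addr):
    # Split the address: high bits select the page, low bits the offset.
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]  # in hardware: TLB hit, or a page-table walk
    return frame * PAGE_SIZE + offset

physical = translate(1 * PAGE_SIZE + 100)  # page 1, offset 100 -> frame 2
```

The offset is copied unchanged; only the page-to-frame mapping changes, which is what lets frames sit anywhere in physical memory.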
Virtual Memory- a technique that enables us to map a large logical address space onto a smaller physical memory; it is commonly implemented by demand paging.
Demand Paging- pages are loaded only when they are needed. Access to a page marked invalid causes a page fault. Rather than swapping the whole process into memory, only the required pages are brought in. When memory is full, a page replacement method selects a victim:
a) First-In, First-Out (FIFO)- the oldest page is replaced by the new one.
b) Optimal Page Replacement- replace the page that will not be used for the longest period of time.
c) Least Recently Used (LRU) Page Replacement- replace the page that has not been used for the longest period of time.
Thrashing- high paging activity, i.e. a process spends more time paging than executing. This is resolved by using the working-set model: the working set is the set of pages in the current locality, and allocating enough frames to hold each process's working set avoids thrashing.
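The FIFO and LRU policies above can be simulated directly. A sketch that counts page faults on a textbook-style reference string (the string and frame count are chosen for illustration):

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)            # evict the oldest-loaded page
            memory.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)     # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
```

On this string with 3 frames, FIFO incurs 10 faults and LRU 9: LRU wins because it exploits the locality visible in the recent references.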
Storage Management
File System & Allocation Methods
A file is a sequence of logical records which must be mapped onto physical storage devices; files are organised into directories for ease of use.
Various directory structures: single-level, two-level, tree-structured, acyclic-graph, and general-graph directories.
Various file allocation methods on disk:
a) Contiguous allocation- each file occupies a set of contiguous blocks on the disk. It can suffer from external fragmentation.
b) Linked allocation- each file is a linked list of disk blocks. It can be used only for sequential-access files and is inefficient for supporting direct access.
c) Indexed allocation- each file has its own index block, which is an array of disk-block addresses. It may require substantial overhead for its index block.
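Indexed allocation is easy to picture in miniature: the index block is simply an array of disk-block numbers, so direct access to any logical block costs one extra lookup. A toy sketch (the disk, block numbers, and contents are all invented):

```python
# Toy disk of 32 fixed-size blocks.
disk = [None] * 32

# The file's index block: logical block i lives at disk block index_block[i].
index_block = [7, 12, 3]

def read_block(logical):
    # Direct access: index-block lookup, then one disk read.
    return disk[index_block[logical]]

disk[7], disk[12], disk[3] = b"first", b"second", b"third"
data = read_block(1)  # logical block 1 -> disk block 12
```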
Secondary Storage- disk performance can be improved through disk-queue ordering strategies such as the SCAN, LOOK, C-SCAN, and C-LOOK scheduling algorithms.
Input-Output System- The basic hardware elements involved in I/O are buses, device controllers, and the devices themselves. The kernel module that controls a device is a device driver. The kernel's I/O subsystem provides numerous services, among them I/O scheduling, buffering, caching, spooling, device reservation, and error handling.
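The SCAN (elevator) policy can be sketched in a few lines: the head services all requests in its current direction of travel, then reverses. The cylinder numbers below are a textbook-style request queue chosen for illustration:

```python
def scan(requests, head, direction="up"):
    # SCAN: sweep in one direction servicing requests, then reverse
    # and service the remainder on the way back.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up

order = scan([98, 183, 37, 122, 14, 124, 65, 67], head=53)
```

This returns the service order only; a fuller model would also track total head movement, which is what the algorithms are compared on.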
Protection & Security
Protection mechanisms control access to a system by limiting the types of file access permitted to users. In addition, protection must ensure that only processes that have gained proper authorization from the operating system can operate on memory segments, the CPU, and other resources. Protection is provided by a mechanism that controls the access of programs, processes, or users to the resources defined by a computer system. This mechanism must provide a means for specifying the controls to be imposed, together with a means of enforcing them.
Security ensures the authentication of system users to protect the integrity of the information stored in the system (both data and code), as well as the physical resources of the computer system. The security system prevents unauthorized access, malicious destruction or alteration of data, and accidental introduction of inconsistency. Methods of preventing or detecting security incidents include intrusion detection systems, antivirus software, auditing and logging of system events, monitoring of system software changes, system call monitoring, and firewalls.
Linux
A modern, free operating system based on UNIX standards, designed to run efficiently and reliably on common PC hardware. It provides a programming and user interface compatible with standard UNIX systems and can run a large number of UNIX applications. It is a multiuser system, providing protection between processes and scheduling them with a time-sharing scheduler. Its monolithic kernel delivers high performance while still allowing most drivers to be dynamically loaded and unloaded at run time. The file system is a hierarchical directory tree that obeys UNIX semantics, and device-oriented, networked, and virtual file systems are supported. The memory-management system uses page sharing and copy-on-write to minimize the duplication of data shared by different processes, and page replacement is based on an approximation of the least-recently-used (LRU) policy.
Windows XP
Designed by Microsoft, Windows XP is an extensible and portable OS. It supports multiple operating environments and symmetric multiprocessing, including both 32-bit and 64-bit processors and NUMA computers. It provides virtual memory, integrated caching, and preemptive scheduling. Windows XP supports a security model stronger than those of previous Microsoft operating systems, as well as internationalization features. Windows runs on a variety of computers, so users can choose and upgrade hardware to match their budgets and performance requirements without needing to alter the applications they run.