Distributed Operating System_1

3,344

Published on

INTRODUCTIONTO OPERATING SYSTEM …

INTRODUCTION TO OPERATING SYSTEM
What is an Operating System?
Mainframe Systems
Desktop Systems
Multiprocessor Systems
Distributed Systems
Clustered System
Real-Time Systems
Handheld Systems
Computing Environments

Transcript

  • 1. DISTRIBUTED OPERATING SYSTEMS Sandeep Kumar Poonia Head Of Dept. CS/IT B.E., M.Tech., UGC-NET LM-IAENG, LM-IACSIT, LM-CSTA, LM-AIRCC, LM-SCIEI, AM-UACEE
  • 2. 1 MCS 5.1 DISTRIBUTED OPERATING SYSTEMS COURSE OUTLINE BROAD COVERAGE:  Introduction to distributed computing systems (DCS)  DCS design goals, Transparencies, Fundamental issues  Distributed Coordination  Process synchronization  Inter-process communication  Deadlocks in distributed systems  Load scheduling and balancing techniques  Case Study: Amoeba, Mach, Chorus, DCE PREREQUISITES  Operating Systems  Computer Networks  Database System
  • 3. REFERENCE BOOKS:  Distributed Operating Systems Concepts and Design, Pradeep K. Sinha, PHI  Distributed Operating Systems by Andrew S. Tanenbaum, PHI  Distributed Operating Systems and Algorithm Analysis by Randy Chow, Pearson Education.
  • 4. INTRODUCTION TO OPERATING SYSTEM  What is an Operating System?  Mainframe Systems  Desktop Systems  Multiprocessor Systems  Distributed Systems  Clustered System  Real-Time Systems  Handheld Systems  Computing Environments
  • 5. WHAT IS AN OPERATING SYSTEM?  A program that acts as an intermediary between a user of a computer and the computer hardware.  Operating system goals:  Execute user programs and make solving user problems easier.  Make the computer system convenient to use.  Use the computer hardware in an efficient manner.
  • 6. COMPUTER SYSTEM COMPONENTS 1. Hardware – provides basic computing resources (CPU, memory, I/O devices). 2. Operating system – controls and coordinates the use of the hardware among the various application programs for the various users. 3. Application programs – define the ways in which the system resources are used to solve the computing problems of the users (compilers, database systems, video games, business programs). 4. Users (people, machines, other computers).
  • 7. ABSTRACT VIEW OF SYSTEM COMPONENTS (Layered diagram: humans use user programs; user programs reach the O.S. through the program interface; the O.S. drives the hardware through the hardware interface and privileged instructions; disk/tape/memory sit at the bottom.)
  • 8. OPERATING SYSTEM DEFINITIONS  Resource allocator – manages and allocates resources.  Control program – controls the execution of user programs and operations of I/O devices.  Kernel – the one program running at all times (all else being application programs).
  • 9. MAINFRAME SYSTEMS  Reduce setup time by batching similar jobs  Automatic job sequencing – automatically transfers control from one job to another. First rudimentary operating system.  Resident monitor  initial control in monitor  control transfers to job  when the job completes, control transfers back to the monitor
  • 10. MEMORY LAYOUT FOR A SIMPLE BATCH SYSTEM
  • 11. MULTIPROGRAMMED BATCH SYSTEMS Several jobs are kept in main memory at the same time, and the CPU is multiplexed among them.
  • 12. OS FEATURES NEEDED FOR MULTIPROGRAMMING  I/O routine supplied by the system.  Memory management – the system must allocate the memory to several jobs.  CPU scheduling – the system must choose among several jobs ready to run.  Allocation of devices.
  • 13. TIME-SHARING SYSTEMS – INTERACTIVE COMPUTING  The CPU is multiplexed among several jobs that are kept in memory and on disk (the CPU is allocated to a job only if the job is in memory).  A job is swapped in and out of memory to the disk.  On-line communication between the user and the system is provided; when the operating system finishes the execution of one command, it seeks the next “control statement” from the user’s keyboard.  On-line system must be available for users to access data and code.
  • 14. DESKTOP SYSTEMS  Personal computers – computer system dedicated to a single user.  I/O devices – keyboards, mice, display screens, small printers.  User convenience and responsiveness.  Can adopt technology developed for larger operating systems; often individuals have sole use of the computer and do not need advanced CPU utilization or protection features.  May run several different types of operating systems (Windows, MacOS, UNIX, Linux)
  • 15. PARALLEL SYSTEMS  Multiprocessor systems with more than one CPU in close communication.  Tightly coupled system – processors share memory and a clock; communication usually takes place through the shared memory.  Advantages of parallel systems:  Increased throughput  Economical  Increased reliability  graceful degradation  fail-soft systems
  • 16. PARALLEL SYSTEMS (CONT.)  Symmetric multiprocessing (SMP)  Each processor runs an identical copy of the operating system.  Many processes can run at once without performance deterioration.  Most modern operating systems support SMP  Asymmetric multiprocessing  Each processor is assigned a specific task; a master processor schedules and allocates work to slave processors.  More common in extremely large systems
  • 17. SYMMETRIC MULTIPROCESSING ARCHITECTURE
  • 18. DISTRIBUTED SYSTEMS  Distribute the computation among several physical processors.  Loosely coupled system – each processor has its own local memory; processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.  Advantages of distributed systems:  Resource sharing  Computation speed up – load sharing  Reliability  Communications
  • 19. DISTRIBUTED SYSTEMS (CONT)  Requires networking infrastructure.  Local area networks (LAN) or Wide area networks (WAN)  May be either client-server or peer-to-peer systems.
  • 20. GENERAL STRUCTURE OF CLIENT-SERVER
  • 21. CLUSTERED SYSTEMS  Clustering allows two or more systems to share storage.  Provides high reliability.  Asymmetric clustering: one server runs the application while the other servers stand by.  Symmetric clustering: all N hosts are running the application.
  • 22. REAL-TIME SYSTEMS  Often used as a control device in a dedicated application such as controlling scientific experiments, medical imaging systems, industrial control systems, and some display systems.  Well-defined fixed-time constraints.  Real-Time systems may be either hard or soft real-time.
  • 23. REAL-TIME SYSTEMS (CONT.)  Hard real-time:  Secondary storage limited or absent, data stored in short term memory, or read-only memory (ROM)  Conflicts with time-sharing systems, not supported by general-purpose operating systems.  Soft real-time  Limited utility in industrial control of robotics  Useful in applications (multimedia, virtual reality) requiring advanced operating-system features.
  • 24. HANDHELD SYSTEMS  Personal Digital Assistants (PDAs)  Cellular telephones  Issues:  Limited memory  Slow processors  Small display screens.
  • 25. OPERATING SYSTEM OVERVIEW – STORAGE HIERARCHY Very fast storage is very expensive, so the operating system manages a hierarchy of storage devices in order to make the best use of resources; in fact, considerable effort goes into this support. (Diagram: storage pyramid ranging from fast and expensive at the top to slow and cheap at the bottom.)
  • 26. COMPUTER-SYSTEM STRUCTURES  Computer System Operation  I/O Structure  Storage Structure  Storage Hierarchy  Hardware Protection  General System Architecture
  • 27. COMPUTER-SYSTEM ARCHITECTURE
  • 28. COMPUTER-SYSTEM OPERATION  I/O devices and the CPU can execute concurrently.  Each device controller is in charge of a particular device type.  Each device controller has a local buffer.  CPU moves data from/to main memory to/from local buffers  I/O is from the device to local buffer of controller.  Device controller informs CPU that it has finished its operation by causing an interrupt.
  • 29. COMMON FUNCTIONS OF INTERRUPTS  Interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines.  Interrupt architecture must save the address of the interrupted instruction.  Incoming interrupts are disabled while another interrupt is being processed to prevent a lost interrupt.  A trap is a software-generated interrupt caused either by an error or a user request.  An operating system is interrupt driven.
  • 30. I/O STRUCTURE  After I/O starts, control returns to user program only upon I/O completion.  Wait instruction idles the CPU until the next interrupt  Wait loop (contention for memory access).  At most one I/O request is outstanding at a time, no simultaneous I/O processing.  After I/O starts, control returns to user program without waiting for I/O completion.  System call – request to the operating system to allow user to wait for I/O completion.  Device-status table contains entry for each I/O device indicating its type, address, and state.  Operating system indexes into I/O device table to determine device status and to modify table entry to include interrupt.
  • 31. DIRECT MEMORY ACCESS STRUCTURE  Used for high-speed I/O devices able to transmit information at close to memory speeds.  Device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention.  Only one interrupt is generated per block, rather than one interrupt per byte.
  • 32. STORAGE STRUCTURE  Main memory – the only large storage medium that the CPU can access directly.  Secondary storage – extension of main memory that provides large nonvolatile storage capacity.  Magnetic disks – rigid metal or glass platters covered with magnetic recording material  Disk surface is logically divided into tracks, which are subdivided into sectors.  The disk controller determines the logical interaction between the device and the computer.
  • 33. STORAGE HIERARCHY  Storage systems are organized in a hierarchy.  Speed  Cost  Volatility  Caching – copying information into a faster storage system; main memory can be viewed as the last cache for secondary storage.
  • 34. STORAGE-DEVICE HIERARCHY
  • 35. CACHING  Use of high-speed memory to hold recently accessed data.  Requires a cache management policy.  Caching introduces another level in the storage hierarchy. This requires data that is simultaneously stored in more than one level to be consistent.
  • 36. MIGRATION OF DATA A FROM DISK TO REGISTER
  • 37. OPERATING-SYSTEM STRUCTURES  System Components  Operating System Services  System Calls  System Programs  System Structure  Virtual Machines  System Design and Implementation  System Generation
  • 38. COMMON SYSTEM COMPONENTS  Process Management  Main Memory Management  File Management  I/O System Management  Secondary-Storage Management  Networking  Protection System  Command-Interpreter System
  • 39. PROCESS MANAGEMENT  A process is a program in execution. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task.  The operating system is responsible for the following activities in connection with process management.  Process creation and deletion.  process suspension and resumption.  Provision of mechanisms for:  process synchronization  process communication
  • 40. MAIN-MEMORY MANAGEMENT  Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and I/O devices.  Main memory is a volatile storage device. It loses its contents in the case of system failure.  The operating system is responsible for the following activities in connection with memory management:  Keep track of which parts of memory are currently being used and by whom.  Decide which processes to load when memory space becomes available.  Allocate and deallocate memory space as needed.
  • 41. FILE MANAGEMENT  A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data.  The operating system is responsible for the following activities in connection with file management:  File creation and deletion.  Directory creation and deletion.  Support of primitives for manipulating files and directories.  Mapping files onto secondary storage.  File backup on stable (nonvolatile) storage media.
  • 42. I/O SYSTEM MANAGEMENT  The I/O system consists of:  A buffer-caching system  A general device-driver interface  Drivers for specific hardware devices
  • 43. SECONDARY-STORAGE MANAGEMENT Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory.  Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.  The operating system is responsible for the following activities in connection with disk management:  Free space management  Storage allocation  Disk scheduling
  • 44. NETWORKING (DISTRIBUTED SYSTEMS)  A distributed system is a collection of processors that do not share memory or a clock. Each processor has its own local memory.  The processors in the system are connected through a communication network.  Communication takes place using a protocol.  A distributed system provides user access to various system resources.  Access to a shared resource allows:  Computation speed-up  Increased data availability  Enhanced reliability
  • 45. OPERATING SYSTEM SERVICES  Program execution – system capability to load a program into memory and to run it.  I/O operations – since user programs cannot execute I/O operations directly, the operating system must provide some means to perform I/O.  File-system manipulation – program capability to read, write, create, and delete files.  Communications – exchange of information between processes executing either on the same computer or on different systems tied together by a network. Implemented via shared memory or message passing.  Error detection – ensure correct computing by detecting errors in the CPU and memory hardware, in I/O devices, or in user programs.
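
To make the I/O and file-system services above concrete, here is a minimal sketch in C that requests them through standard POSIX system calls; the file name example.txt and the buffer size are arbitrary examples, not from the slides:

```c
#include <fcntl.h>      /* open */
#include <stdio.h>      /* perror */
#include <unistd.h>     /* read, write, close */

int main(void)
{
    /* File-system service: ask the OS to open a file for reading. */
    int fd = open("example.txt", O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* I/O service: read up to 512 bytes at a time; the kernel does the device I/O. */
    char buf[512];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        /* Copy what was read to standard output (file descriptor 1). */
        if (write(STDOUT_FILENO, buf, (size_t)n) != n) {
            perror("write");
            close(fd);
            return 1;
        }
    }
    if (n == -1)
        perror("read");  /* error-detection service reports failures via errno */

    close(fd);
    return 0;
}
```

The user program never touches the device directly; every open, read, and write is a request that the operating system carries out on its behalf.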
  • 46. ADDITIONAL OPERATING SYSTEM FUNCTIONS Additional functions exist not for helping the user, but rather for ensuring efficient system operations. •Resource allocation – allocating resources to multiple users or multiple jobs running at the same time. •Accounting – keep track of and record which users use how much and what kinds of computer resources for account billing or for accumulating usage statistics. •Protection – ensuring that all access to system resources is controlled.
  • 47. SYSTEM DESIGN GOALS  User goals – operating system should be convenient to use, easy to learn, reliable, safe, and fast.  System goals – operating system should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient.
  • 48. MECHANISMS AND POLICIES  Mechanisms determine how to do something; policies decide what will be done.  The separation of policy from mechanism is a very important principle; it allows maximum flexibility if policy decisions are to be changed later.
  • 49. PROCESSES  Process Concept  Process Scheduling  Operations on Processes  Cooperating Processes  Interprocess Communication  Communication in Client-Server Systems
  • 50. PROCESS CONCEPT  An operating system executes a variety of programs:  Batch system – jobs  Time-shared systems – user programs or tasks  Textbook uses the terms job and process almost interchangeably.  Process – a program in execution; process execution must progress in sequential fashion.  A process includes:  program counter  stack  data section
  • 51. PROCESS STATE  As a process executes, it changes state  new: The process is being created.  running: Instructions are being executed.  waiting: The process is waiting for some event to occur.  ready: The process is waiting to be assigned to a processor.  terminated: The process has finished execution.
  • 52. DIAGRAM OF PROCESS STATE
  • 53. PROCESS CONTROL BLOCK (PCB) Information associated with each process.  Process state  Program counter  CPU registers  CPU scheduling information  Memory-management information  Accounting information  I/O status information
  • 54. PROCESS CONTROL BLOCK (PCB)
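
The PCB pictured above is, in essence, a per-process record kept by the kernel. The following C struct is a minimal illustrative sketch of such a record; the field names, sizes, and types are assumptions made for exposition, not taken from any real kernel:

```c
#include <stdint.h>

/* Possible process states, matching the state diagram on slide 52. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A highly simplified process control block. */
struct pcb {
    int              pid;             /* process identifier                  */
    enum proc_state  state;           /* new / ready / running / waiting ... */
    uint64_t         program_counter; /* address of the next instruction     */
    uint64_t         registers[16];   /* saved CPU registers                 */
    int              priority;        /* CPU-scheduling information          */
    void            *page_table;      /* memory-management information       */
    unsigned long    cpu_time_used;   /* accounting information              */
    int              open_files[16];  /* I/O status: open file descriptors   */
    struct pcb      *next;            /* link for a scheduler queue          */
};
```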
  • 55. CPU SWITCH FROM PROCESS TO PROCESS
  • 56. CONTEXT SWITCH  When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process.  Context-switch time is overhead; the system does no useful work while switching.  Time dependent on hardware support.
  • 57. PROCESS CREATION  A parent process creates child processes, which in turn create other processes, forming a tree of processes.  Resource sharing  Parent and children share all resources.  Children share a subset of the parent’s resources.  Parent and child share no resources.  Execution  Parent and children execute concurrently.  Parent waits until children terminate.
  • 58. PROCESS TERMINATION  Process executes its last statement and asks the operating system to delete it (exit).  Output data from child to parent (via wait).  Process’ resources are deallocated by the operating system.  Parent may terminate execution of children processes (abort).  Child has exceeded allocated resources.  Task assigned to child is no longer required.  Parent is exiting.  Operating system does not allow child to continue if its parent terminates.  Cascading termination. (A short C sketch of process creation and termination follows below.)
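
A minimal sketch, assuming a UNIX-like system, of the creation and termination mechanisms just described, using fork(), exec(), wait(), and exit(); the program run by the child (/bin/ls) is only an example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>   /* wait */
#include <unistd.h>     /* fork, execlp */

int main(void)
{
    pid_t pid = fork();              /* parent creates a child process */

    if (pid < 0) {                   /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child: replace its image with a program */
        execlp("/bin/ls", "ls", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        exit(1);
    } else {                         /* parent: wait until the child terminates */
        int status;
        wait(&status);               /* collect the child's exit status */
        printf("child %d finished\n", (int)pid);
    }
    return 0;                        /* process executes its last statement and exits */
}
```

The parent blocks in wait() until the child exits (implicitly, when ls finishes), at which point the kernel deallocates the child's resources and delivers its exit status.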
  • 59. COOPERATING PROCESSES  Independent process cannot affect or be affected by the execution of another process.  Cooperating process can affect or be affected by the execution of another process  Advantages of process cooperation  Information sharing  Computation speed-up  Modularity  Convenience
  • 60. PRODUCER-CONSUMER PROBLEM  Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.  unbounded-buffer places no practical limit on the size of the buffer.  bounded-buffer assumes that there is a fixed buffer size. (A bounded-buffer sketch follows below.)
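
A minimal bounded-buffer sketch with one producer thread and one consumer thread. The buffer size, item count, and busy-waiting loops are illustrative choices; production code would normally coordinate with semaphores or condition variables, which later material covers. Compile with -pthread:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define BUFFER_SIZE 8           /* fixed size: this is the bounded-buffer case */
#define ITEMS       32

static int buffer[BUFFER_SIZE]; /* the shared circular buffer */
static int in  = 0;             /* next free slot (used only by the producer)  */
static int out = 0;             /* next full slot (used only by the consumer)  */
static _Atomic int count = 0;   /* number of items currently in the buffer     */

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < ITEMS; item++) {
        while (atomic_load(&count) == BUFFER_SIZE)
            ;                                    /* buffer full: busy-wait */
        buffer[in] = item;                       /* produce an item */
        in = (in + 1) % BUFFER_SIZE;
        atomic_fetch_add(&count, 1);
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        while (atomic_load(&count) == 0)
            ;                                    /* buffer empty: busy-wait */
        int item = buffer[out];                  /* consume an item */
        out = (out + 1) % BUFFER_SIZE;
        atomic_fetch_sub(&count, 1);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```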
  • 61. REMOTE PROCEDURE CALLS  Remote procedure call (RPC) abstracts procedure calls between processes on networked systems.  Stubs – client-side proxy for the actual procedure on the server.  The client-side stub locates the server and marshals the parameters.  The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server. (A marshalling sketch follows below.)
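
A toy sketch of the marshalling step: the client-side stub packs a procedure identifier and the arguments into a message in network byte order, and the server-side stub unpacks them and performs the call. The procedure id, the add() routine, and the omission of the network transport are all assumptions made for brevity:

```c
#include <arpa/inet.h>  /* htonl, ntohl: network byte order for marshalling */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* The "remote" procedure that really lives on the server. */
static int32_t add(int32_t a, int32_t b) { return a + b; }

/* Client-side stub: marshal the procedure id and arguments into a message.
 * In a real RPC system this buffer would be sent to the server over the
 * network; here the transport is omitted. */
static size_t marshal_add(uint8_t *msg, int32_t a, int32_t b)
{
    uint32_t words[3] = { htonl(1 /* procedure id for "add" */),
                          htonl((uint32_t)a), htonl((uint32_t)b) };
    memcpy(msg, words, sizeof words);
    return sizeof words;
}

/* Server-side stub: unpack the marshalled parameters and perform the call. */
static int32_t dispatch(const uint8_t *msg)
{
    uint32_t words[3];
    memcpy(words, msg, sizeof words);
    if (ntohl(words[0]) == 1)
        return add((int32_t)ntohl(words[1]), (int32_t)ntohl(words[2]));
    return -1;  /* unknown procedure */
}

int main(void)
{
    uint8_t msg[12];
    marshal_add(msg, 2, 40);                 /* client stub builds the request */
    printf("result = %d\n", dispatch(msg));  /* server stub executes it */
    return 0;
}
```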
  • 62. REMOTE METHOD INVOCATION  Remote Method Invocation (RMI) is a Java mechanism similar to RPCs.  RMI allows a Java program on one machine to invoke a method on a remote object.
  • 63. THREADS  Overview  Multithreading Models  Threading Issues  Windows 2000 Threads  Linux Threads  Java Threads
  • 64. SINGLE AND MULTITHREADED PROCESSES
  • 65. BENEFITS  Responsiveness  Resource Sharing  Economy  Utilization of MP Architectures
  • 66. USER THREADS  Thread management is done by a user-level threads library  Examples - POSIX Pthreads - Mach C-threads - Solaris threads (A Pthreads sketch follows below.)
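
A minimal example using the POSIX Pthreads API listed above; whether these threads are implemented purely at user level or mapped onto kernel threads depends on the library and on the multithreading models discussed in the following slides. Compile with -pthread:

```c
#include <pthread.h>
#include <stdio.h>

/* Function executed by each new thread. */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("hello from thread %d\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[4];
    int ids[4];

    /* Create four threads that share this process's address space. */
    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    }

    /* Wait for all threads to finish before the process exits. */
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);

    return 0;
}
```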
  • 67. KERNEL THREADS  Supported by the Kernel  Examples - Windows 95/98/NT/2000 - Solaris - Tru64 UNIX - BeOS - Linux
  • 68. MULTITHREADING MODELS  Many-to-One  One-to-One  Many-to-Many
  • 69. MANY-TO-ONE  Many user-level threads mapped to single kernel thread.  Used on systems that do not support kernel threads.
  • 70. MANY-TO-ONE MODEL
  • 71. ONE-TO-ONE  Each user-level thread maps to kernel thread.  Examples - Windows 95/98/NT/2000 - OS/2
  • 72. ONE-TO-ONE MODEL
  • 73. MANY-TO-MANY MODEL  Allows many user level threads to be mapped to many kernel threads.  Allows the operating system to create a sufficient number of kernel threads.  Solaris 2  Windows NT/2000 with the ThreadFiber package
  • 74. MANY-TO-MANY MODEL
  • 75. WINDOWS 2000 THREADS  Implements the one-to-one mapping.  Each thread contains - a thread id - register set - separate user and kernel stacks - private data storage area
  • 76. LINUX THREADS  Linux refers to them as tasks rather than threads.  Thread creation is done through the clone() system call.  clone() allows a child task to share the address space of the parent task (process). (A clone() sketch follows below.)
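
A minimal, Linux-specific sketch of clone(). The flag combination below (CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND) is one illustrative choice that makes the child task share the parent's address space, as the slide describes; the stack size and the shared variable are arbitrary:

```c
#define _GNU_SOURCE
#include <sched.h>      /* clone, CLONE_* flags */
#include <signal.h>     /* SIGCHLD */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>   /* waitpid */

static int shared = 0;          /* lives in the address space shared via CLONE_VM */

/* Entry point of the child task created by clone(). */
static int child_fn(void *arg)
{
    (void)arg;
    shared = 42;                /* visible to the parent because memory is shared */
    return 0;
}

int main(void)
{
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);
    if (!stack) {
        perror("malloc");
        return 1;
    }

    /* Share address space, filesystem info, file descriptors, and signal
     * handlers with the parent, i.e. create a thread-like task.  The stack
     * grows downward, so pass the top of the allocated region. */
    pid_t pid = clone(child_fn, stack + stack_size,
                      CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                      NULL);
    if (pid == -1) {
        perror("clone");
        return 1;
    }

    waitpid(pid, NULL, 0);             /* wait for the child task to terminate */
    printf("shared = %d\n", shared);   /* prints 42: one shared address space  */
    free(stack);
    return 0;
}
```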
  • 77. JAVA THREADS  Java threads may be created by:  Extending Thread class  Implementing the Runnable interface  Java threads are managed by the JVM.
