
Operating System Support


  1. 1. DISTRIBUTED SYSTEMS B. Tech IV/IV, II Semester COMPUTER SCIENCE & ENGINEERING 02/06/17 © Copyright Material 1
  2. 2. UNIT V 02/06/17 2 Distributed Systems Foundation Middleware Support Systems Algorithms & Shared Data Characterization of Distributed Systems System Models Inter Process Communication Distributed Objects Remote Invocation Operating System Distributed File System Coordination & Agreement Transactions & Replications © Copyright Material
  3. 3. What we learnt till NOW 1. Characterization of a Distributed System 1. Need for a Distributed System 2. Evolution of a Distributed System. 3. Examples of it & details on Resource Sharing 4. Challenges of it 2. How to build a Distributed System Model 1. Introduction to a System Model 2. Types of System models 3. Functionalities of each of these models 3. How to achieve Inter process communication 1. Learned sockets for establishing communication 2. Fundamentals of Protocols required for communication 3. Learnt about XDR for handling heterogeneity 4. Handled Multicast communication 4. Various programming models 1. Object model & Distributed Object Model 2. RMI, CORBA & XML 3. Role of Events & Notification 4. Case study of RMI 02/06/17 3© Copyright Material
  4. 4. What we learnt in Unit IV 1. Introduction of Programming Models 2. What is an Object Model & its role in communication 1. What are Remote Objects References? 2. What are Object References? 3. Actions, Exceptions 4. Garbage Collection 3. Understanding of Distributed Objects 4. Learning distributed object Model 5. Design issues of RMI 6. Implementation of RMI 7. Working of a Distributed Garbage Collection 8. Working of Remote Procedure Call 9. Role of Events & Notifications in Distributed Communication 10. Case Study: Java RMI. 02/06/17 4© Copyright Material
  5. 5. Table of Contents 1. Introduction to Operating System – Definition of network OS – Definition of distributed OS 2. Layers of an Operating System – Process Manager – Thread Manager – Communication Manager – Memory Manager – Supervisor 3. Protection 4. Processes & Threads – Processes, Threads 5. Communication & Invocation 6. Operating System Architecture – Monolithic and Micro-kernels 02/06/17 5© Copyright Material
  6. 6. Learning Objectives Upon completion of this unit, students will be able to: • Introduce operating systems & explain how a general OS differs from a network/distributed OS • Describe what a modern operating system does to support distributed applications and middleware • Understand the relevant abstractions and techniques, focussing on processes, threads and support for invocation mechanisms • Compare the main types of operating system architecture, ex: monolithic and micro-kernels 02/06/17 6© Copyright Material
  7. 7. Software and hardware service layers in Distributed Systems Types of OS • Middleware implements abstractions that support network-wide programming. Examples: – RPC and RMI (Sun RPC, CORBA, Java RMI) • Traditional OSs (e.g. early UNIX, Windows 3.0) – Simplify, protect and optimize the use of local resources • Network OSs (e.g. Mach, modern UNIX, Windows NT) – Do the same, but also support a wide range of communication standards and enable remote processes to access (some) local resources (e.g. files). 1. Introduction 02/06/17 7© Copyright Material
  8. 8. Distributed OS: Presents users (and applications) with an integrated computing platform that hides the individual computers. • Has control over all of the nodes (computers) in the network and allocates their resources to tasks without user involvement. • In a distributed OS, the user doesn't know where his programs are running. • The operating system has control over all the nodes in the system, and it transparently locates new processes at whatever node suits its scheduling policies • Examples: – Cluster computer systems 1. Introduction 02/06/17 8© Copyright Material
  9. 9. Relationship between OS & Middleware: • In fact, there are no distributed operating systems in general use, only network operating systems such as UNIX, Mac OS and Windows. • The combination of middleware and network operating systems provides an acceptable balance between the requirement for autonomy on the one hand and network transparent resource access on the other. • The network operating system enables users to run their favorite word processors and other standalone applications • Middleware enables them to take advantage of services that become available in their distributed system. 1. Introduction 02/06/17 9© Copyright Material
  10. 10. Functions that OS provides to the Middleware 1. Encapsulation Provides a set of operations that meet their clients’ needs 2. Protection Protect resource from illegitimate access 3. Concurrent processing Support clients access resource concurrently 4. Invocation mechanism: a means of accessing an encapsulated resource Communication: Pass operation parameters and results between resource managers Scheduling: Schedule the processing of the invoked operation 1. Introduction 02/06/17 10© Copyright Material
  11. 11. OPERATING SYSTEM COMPONENTS 2. Layers of Operating System 02/06/17 11© Copyright Material [Figure: the layered OS components — process manager, thread manager, communication manager, memory manager, supervisor]
  12. 12.  Process Manager: Takes care of the creation of processes. Including the scheduling of each process to physical resources (such as the CPU)  Thread Manager:  Thread creation  Synchronisation  Scheduling.  Communication Manager: Manages the communication between separate processes (or threads attached to separate processes).  Memory Management: Management of physical and virtual memory. Note this is not the same as automatic memory management (or garbage collection) provided by the runtime for some high-level languages such as Java.  Supervisor: The controller for interrupts, system call traps and other kinds of exceptions (though not, generally, language level exceptions). 2. Layers of Operating System 02/06/17 12© Copyright Material
  13. 13. 2. Layers of OS Process Manager: Even though in actuality there are many processes running at once, the OS gives each process the illusion that it is running alone. • Modern general-purpose operating systems permit a user to create and destroy processes. – In UNIX this is done by the fork system call, which creates a child process, and the exit system call, which terminates the current process. – After a fork both parent and child keep running (indeed they have the same program text) and each can fork off other processes. – A process tree results. The root of the tree is a special process created by the OS during startup. – A process can choose to wait for children to terminate. For example, if process C issued a wait() system call it would block until its child G finished. 02/06/17 13@Copyright Material
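Java has no fork(), but the create-a-child-and-wait-for-it pattern described above can be sketched with ProcessBuilder. This is a minimal illustration, not the deck's example; the child command ("java", "-version") is an arbitrary choice.

    import java.io.IOException;

    public class ProcessDemo {
        public static void main(String[] args) throws IOException, InterruptedException {
            // create a child process (roughly the role fork+exec play in UNIX)
            Process child = new ProcessBuilder("java", "-version")
                    .inheritIO()          // child shares the parent's stdin/stdout/stderr
                    .start();
            // like wait(): block until the child terminates
            int status = child.waitFor();
            System.out.println("Child exited with status " + status);
        }
    }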
  14. 14. 2. Layers of OS 02/06/17 14@Copyright Material Process Manager Characteristics 1. Process Life cycle 2. Process Scheduling 3. Process Synchronization
  15. 15. 2. Layers of OS 02/06/17 15@Copyright Material Process Creation in C

    #include <stdio.h>
    #include <unistd.h> /* defines miscellaneous symbolic constants and types,
                           and declares miscellaneous functions */
    int main()
    {
        int i;
        for (i = 0; i < 4; i++)
            fork();
        return 0;
    }

  The fork() function creates a new process. The new process (child process) is an exact copy of the calling process (parent process) except as detailed below.
  • The child process has a unique process ID.
  • The child process ID also does not match any active process group ID.
  • The child process has a different parent process ID (that is, the process ID of the parent process).
  • The child process has its own copy of the parent's file descriptors.
  16. 16. 2. Layers of OS 02/06/17 16@Copyright Material Thread Creation in Java

    class ThreadDemo implements Runnable {
        Thread CHILD;
        public ThreadDemo() {
            CHILD = new Thread(this, "Child Thread");
            CHILD.start();
        }
        public void run() {
            for (int i = 0; i < 4; i++)
                System.out.println("Child Runs: " + i);
        }
    }

    public class ThreadMain {
        public static void main(String args[]) throws InterruptedException {
            Thread MAIN = Thread.currentThread();
            System.out.println("Main Thread is created now");
            System.out.println("ChildThread creation process initiated");
            ThreadDemo demo = new ThreadDemo();
            System.out.println("MainThread continues, but sleepsssss :)");
            Thread.sleep(2000); // sleep() is static; invoking it via MAIN would be misleading
            System.out.println("MainThread completes execution");
        }
    }
  17. 17. Network vs Distributed Operating Systems Network Operating Systems:  There is an operating system image at each node  Each node therefore has control over which processes run at that physical location  A user may invoke a process on another node, for example via ssh, but the operating system at the user's node has no control over the processes running at the remote node Distributed Operating Systems:  Provides the view of a single system image maintaining all processes running at every node  A process, when invoked, or during its run, may be moved to a different node in the network  Generally the reason for this is that the current node is more computationally loaded than the target node  It could also be that the target node is physically closer to some physical resource required by the process  The idea is to optimise the configuration of processes to nodes in a way which is completely transparent to the user 2. Layers of Operating System 02/06/17 17© Copyright Material
  18. 18. • All resources require protection from illegitimate accesses • What can cause illegitimate access? 1. Maliciously contrived code EX: attack scripts, viruses, worms, Trojan horses, backdoors 2. Benign code that contains a bug or that has unanticipated behaviour Example of illegitimate access • If a file has read & write access controls, let us assume Ravi has both rights on his own file while Rafiq has read access alone. Illegitimate access here would be Rafiq somehow managing to perform a write operation on the file • Accessing the file-pointer variable directly would allow accesses to the file other than read or write; in this case even Ravi's access is illegitimate. To avoid this we use type-safe languages like Java or Modula-3 3. Protection 02/06/17 18© Copyright Material
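The type-safety point can be sketched in Java. In this illustrative class (names are hypothetical, not from the deck), the file pointer is a private field, so clients are confined to the exported operations; unlike in C or C++, no caller can fabricate a pointer to it.

    public class ProtectedFile {
        private long filePointer = 0;                 // inaccessible outside this class
        private final byte[] blocks = new byte[1024];

        public void write(byte b) { blocks[(int) filePointer++] = b; }  // legitimate
        public byte read()        { return blocks[(int) filePointer++]; } // legitimate
        public void seek(long pos) { filePointer = pos; }               // legitimate

        public static void main(String[] args) {
            ProtectedFile f = new ProtectedFile();
            f.write((byte) 7);
            f.seek(0);
            System.out.println(f.read()); // 7 -- the only sanctioned access paths
        }
    }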
  19. 19. Kernel & Protection: • The kernel can control the memory management unit and set the processor registers so that no other code may access the machine’s physical resources except in acceptable ways Execution Modes of the Kernel: 1. Supervisor mode (kernel process): all kernel processes execute with the processor in this mode 2. User mode (user process): the kernel arranges for all other processes to execute in this mode • The kernel also sets up address spaces to protect itself and other processes from the accesses of an aberrant process, and to provide processes with their required virtual memory layout • A system call trap is implemented by a machine-level TRAP instruction (called a system TRAP), which puts the processor into supervisor mode and switches to the kernel address space • When the TRAP instruction is executed, as with any type of exception, the hardware forces the processor to execute a kernel-supplied handler function, in order that no process may gain illicit control of the hardware. Penalties for Protection 1. Switching between different processes takes many processor cycles 2. A system call trap is a more expensive operation than a simple method call 3. Protection 02/06/17 19© Copyright Material
  20. 20. WHAT ARE PROCESSES & WHY THREADS? 1. The traditional operating system notion of a process that executes a single activity was found in the 1980s to be unequal to the requirements of distributed systems 2. The problem, as we shall show, is that the traditional process makes sharing between related activities awkward and expensive 3. The solution reached was to enhance the notion of a process so that it could be associated with multiple activities 4. Therefore, a process is split into one or more Threads that share an execution environment 5. A Thread is an abstraction of a single activity 6. Benefits of a Thread – Responsiveness – Resource sharing – Economy – Utilization of multiprocessor architectures 4. Processes & Threads 02/06/17 20© Copyright Material
  21. 21. WHAT ARE PROCESSES & WHY THREADS? • An Execution Environment is the unit of resource management: a collection of local kernel managed resources to which its threads have access • An execution environment primarily consists of: – An address space; – Thread synchronization and communication resources such as semaphores and communication interfaces (for example, sockets); – Higher-level resources such as open files and windows. • Execution environments are normally expensive to create and manage, but several threads can share them & it is the protection domain in which its threads execute. • Aim of having multiple threads of execution is to maximize the degree of concurrent execution between operations, thus enabling the overlap of computation with input and output, and enabling concurrent processing on multiprocessors • An execution environment provides protection from threads outside it, so that the data and other resources contained in it are by default inaccessible to threads residing in other execution environments, although kernels may allow the controlled sharing of resources between them. 4. Processes & Threads 02/06/17 21© Copyright Material
  22. 22. Address Space & Regions: • It is a unit of management of a process’s virtual memory • Up to 2^32 bytes and sometimes up to 2^64 bytes • Consists of one or more Regions, and they do not overlap • Gaps are left between regions to allow for growth • Each region is specified by the following properties: – Its extent (lowest virtual address and size); – Read/write/execute permissions for the process’s threads – Whether it can be grown upwards or downwards. • The number of regions is indefinite, and reasons for it – To support a separate stack for each thread because allocating a separate stack region to each thread makes it possible to detect attempts to exceed the stack limits and to control each stack’s growth – To enable files in general to be mapped into the address space, not just the text and data sections of binary files 4. Processes & Threads 02/06/17 22© Copyright Material [Figure: address space from 0 to 2^N, with stack, auxiliary regions, heap and text regions]
  23. 23. Address Space & Regions: • Regions can also be shared between processes • Uses of shared regions include – Maintaining a single copy of libraries – Kernel code: the kernel need not switch to a new set of address mappings on each system call or exception – Shared data and communication – Copy-on-write • A UNIX address space consists of: – A stack, which is extensible towards lower virtual addresses – A heap, part of which is initialized by values stored in the program’s binary file, and which is extensible towards higher virtual addresses – A fixed, unmodifiable text region containing program code 4. Processes & Threads 02/06/17 23© Copyright Material [Figure: address space from 0 to 2^N, with stack, auxiliary regions, heap and text regions]
  24. 24. Creation of a New Process: • Creation of a new process has traditionally been an indivisible operation provided by the operating system – Ex: UNIX fork system call creates a process with an execution environment copied from the caller • The creation of a new process can be separated into two independent aspects: 1. The choice of a target host, for example, the host may be chosen from among the nodes in a cluster of computers acting as a compute server 2. The creation of an execution environment 1. CHOICE OF PROCESS HOST a. Running new processes at their originator’s computer (or) b. Sharing processing load between a set of computers • Load sharing system – Centralized, Decentralized, Hierarchical – Sender-initiated, Receiver-initiated 4. Processes & Threads 02/06/17 24© Copyright Material
  25. 25. Load sharing policy: Indicates how the load could be shared • Transfer policy: situate a new process locally or remotely? • Location policy: which node should host the new process? • Static policy: operates without regard to the current state of the system • Adaptive policy: applies heuristics to make the allocation decision • Migration policy: when & where should a running process migrate? Creating a New Execution Environment Initializing the address space – 2 approaches: • The address space is of statically defined format, Ex: contains only a program text region, heap region & stack region • The address space is defined with respect to an existing execution environment, Ex: fork(), exec() (Linux, UNIX) Copy-on-write scheme • A technique used to reduce the amount of data copying. The page frames that make up the inherited region are shared between the two address spaces 4. Processes & Threads 02/06/17 25© Copyright Material
  26. 26. 4. Processes & Threads Copy-on-write 02/06/17 26@Copyright Material • Mach & Chorus apply the copy-on-write technique: when a region is copy-inherited from the parent, the region is logically copied, but no physical copying takes place by default • The page frames that make up the inherited region are shared between the two address spaces. • From the diagram shown in the next slide – let us assume that process A set region RA to be copy-inherited by its child, process B, and that the region RB was thus created in process B. – Initially, all page frames associated with the regions are shared between the two processes’ page tables – If a thread in either process attempts to modify the data, a hardware exception called a page fault is taken – The page fault handler allocates a new frame for process B and copies the original frame’s data into it byte for byte – The old frame number is replaced by the new frame number in one process’s page table – it does not matter which – and the old frame number is left in the other page table. – The two corresponding pages in processes A and B are then each made writable once more at the hardware level – After all of this has taken place, process B’s modifying instruction is allowed to proceed.
  27. 27. 4. Processes & Threads Copy-on-first-write 02/06/17 27@Copyright Material [Figure: region RB copied from RA — (a) before write, the page frames are shared between A’s and B’s page tables; (b) after write, B’s page table points to its own copy of the modified frame]
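The page-fault handling just described can be modelled in miniature. The Java sketch below (class names and structure are illustrative; a real kernel does this with hardware-assisted page tables, not objects) shares frames when a region is copy-inherited and copies a frame byte-for-byte only on the first write:

    import java.util.ArrayList;
    import java.util.List;

    public class CowDemo {
        static class Frame {
            byte[] data = new byte[4096]; // one page of physical memory
            int refCount = 1;             // how many page tables reference this frame
        }

        static class PageTable {
            List<Frame> pages = new ArrayList<>();

            // copy-inherit a region: share every frame instead of copying it
            PageTable copyOnWriteClone() {
                PageTable child = new PageTable();
                for (Frame f : pages) {
                    f.refCount++;
                    child.pages.add(f);
                }
                return child;
            }

            // emulate the page-fault handler: copy the frame only if still shared
            void write(int page, int offset, byte value) {
                Frame f = pages.get(page);
                if (f.refCount > 1) {                 // "page fault" on a shared frame
                    Frame copy = new Frame();
                    System.arraycopy(f.data, 0, copy.data, 0, f.data.length);
                    f.refCount--;
                    pages.set(page, copy);            // remap this page table only
                }
                pages.get(page).data[offset] = value; // the write now proceeds
            }
        }

        public static void main(String[] args) {
            PageTable a = new PageTable();
            a.pages.add(new Frame());
            PageTable b = a.copyOnWriteClone();                   // RB inherited from RA
            System.out.println(a.pages.get(0) == b.pages.get(0)); // true: shared
            b.write(0, 0, (byte) 42);                             // first write copies
            System.out.println(a.pages.get(0) == b.pages.get(0)); // false: copied
        }
    }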
  28. 28. 4. Processes & Threads 02/06/17 28© Copyright Material
  29. 29. 4. Processes & Threads 02/06/17 29© Copyright Material Threads Concept & Implementation [Figure: address space illustration — a process containing thread activations with activation stacks (parameters, local variables), a heap (dynamic storage, objects, global variables), a 'text' region (program code) and system-provided resources (sockets, windows, open files)]
  30. 30. 4. Processes & Threads 02/06/17 30© Copyright Material Threads Concept & Implementation [Figure: client and server with threads — in the client, thread 1 generates results and thread 2 makes requests to the server; the server holds a pool of N worker threads plus an input-output thread for receipt & queuing of requests — the 'worker pool' architecture]
  31. 31. 4. Processes & Threads 02/06/17 31© Copyright Material Threads Concept & Implementation From the previous figure, let us assume that each request takes, on average, – 2 milliseconds of processing plus – 8 milliseconds of I/O (input/output) delay when the server reads from a disk (there is no caching) – The server executes on a single-processor computer If a single thread has to perform all processing, then the turnaround time for handling any request is on average 2 + 8 = 10 milliseconds, so the maximum throughput is 1000/10 = 100 requests per second. Now consider a server pool containing two threads: • If all disk requests are serialized and take 8 milliseconds each, then the maximum throughput is 1000/8 = 125 requests per second.
  32. 32. 4. Processes & Threads 02/06/17 32© Copyright Material Threads Concept & Implementation Let us assume that each request takes, on average, – 2 milliseconds of processing plus – 8 milliseconds of I/O (input/output) delay when the server reads from a disk (if disk block caching is introduced) – The server executes on a single-processor computer If a 75% hit rate is achieved, the mean I/O time per request reduces to (0.75*0 + 0.25*8) = 2 milliseconds • Due to the cache search the processing delay increases, say to 2.5 ms, so the maximum theoretical throughput increases to 1000/2.5 = 400 requests per second With up to two threads, a disk cache and two processors, throughput increases to 1000/2 = 500 requests per second. The arithmetic is collected in the sketch below.
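A few lines of Java reproduce the arithmetic of slides 31 and 32 (the timing figures are the slides' assumptions, not measurements):

    public class ThroughputEstimate {
        public static void main(String[] args) {
            double cpuMs = 2.0, ioMs = 8.0;

            // one thread: processing and I/O are fully serialized
            System.out.println(1000 / (cpuMs + ioMs) + " req/s");  // 100.0

            // two threads, no cache: the disk becomes the bottleneck
            System.out.println(1000 / ioMs + " req/s");            // 125.0

            // two threads, 75% cache hit rate, CPU cost grows to 2.5 ms
            double meanIoMs = 0.75 * 0 + 0.25 * ioMs;              // 2.0 ms
            System.out.println(1000 / 2.5 + " req/s");             // 400.0

            // two threads, cache, two processors: CPU work overlaps fully
            System.out.println(1000 / meanIoMs + " req/s");        // 500.0
        }
    }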
  33. 33. 4. Processes & Threads 02/06/17 33© Copyright Material Architectures for multi-threaded servers A. THREAD-PER-REQUEST: • In this, the I/O thread spawns a new worker thread for each request, and that worker destroys itself when it has processed the request against its designated remote object. • This architecture has the advantage that the threads do not contend for a shared queue, and throughput is potentially maximized because the I/O thread can create as many workers as there are outstanding requests. • Its disadvantage is the overhead of the thread creation and destruction operations. • This would be useful for UDP-based service, e.g. NTP Advantage: throughput is potentially maximized Disadvantage: overhead of the thread creation and destruction
  34. 34. 4. Processes & Threads 02/06/17 34© Copyright Material Architectures for multi-threaded servers B. THREAD-PER-CONNECTION: • Unlike the worker-pool architecture, in which the server creates a fixed pool of “worker” threads at start-up, this associates a thread with each connection. • The server creates a new worker thread when a client makes a connection and destroys the thread when the client closes the connection. • In between, the client may make many requests over the connection, targeted at one or more remote objects • This is the most commonly used architecture – it matches the TCP connection model; a minimal sketch follows.
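A minimal thread-per-connection sketch in Java (the port number and echo behaviour are illustrative assumptions): one thread is spawned per accepted TCP connection and dies when the client closes it, matching the description above.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class ThreadPerConnectionServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket conn = server.accept();          // one client connection
                    new Thread(() -> handle(conn)).start(); // dedicated worker thread
                }
            }
        }

        private static void handle(Socket conn) {
            try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                String request;
                // the client may make many requests over this single connection
                while ((request = in.readLine()) != null) {
                    out.println("echo: " + request);
                }
            } catch (IOException e) {
                // client closed or connection failed; the worker simply ends
            }
        }
    }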
  35. 35. 4. Processes & Threads 02/06/17 35© Copyright Material Architectures for multi-threaded servers C. THREAD-PER-OBJECT: used where the service is encapsulated as an object. Ex: there could be multiple shared whiteboards, with one thread each. Each object has only one thread, avoiding the need for thread synchronization within objects. Associates a thread with each remote object Advantage: lower thread-management overheads compared with thread-per-request Disadvantage: a client may be delayed while a worker thread has several outstanding requests but another thread has no work to perform The same advantage and disadvantage apply to thread-per-connection as well
  36. 36. 4. Processes & Threads 02/06/17 36© Copyright Material Threads Synchronization • Programming a multi-threaded process requires great care • Difficult issues are the sharing of objects and the techniques used for thread coordination and cooperation • Java provides the synchronized keyword for programmers to designate the well known monitor construct for thread coordination • The monitor’s guarantee is that at most one thread can execute within it at any time • Java allows threads to be blocked and woken up via arbitrary objects that act as condition variables
  37. 37. Thread Synchronization 02/06/17 37@Copyright Material

    // This program uses a synchronized block.
    class Callme {
        void call(String msg) {
            System.out.print("[" + msg);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                System.out.println("Interrupted");
            }
            System.out.println("]");
        }
    }

    class Caller implements Runnable {
        String msg;
        Callme target;
        Thread t;

        public Caller(Callme targ, String s) {
            target = targ;
            msg = s;
            t = new Thread(this);
            t.start();
        }

        // synchronize calls to call()
        public void run() {
            synchronized (target) { // synchronized block
                target.call(msg);
            }
        }
    }

    class Synch1 {
        public static void main(String args[]) {
            Callme target = new Callme();
            Caller ob1 = new Caller(target, "Hello");
            Caller ob2 = new Caller(target, "Synchronized");
            Caller ob3 = new Caller(target, "World");
            // wait for threads to end
            try {
                ob1.t.join();
                ob2.t.join();
                ob3.t.join();
            } catch (InterruptedException e) {
                System.out.println("Interrupted");
            }
        }
    }
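Slide 36 also notes that Java threads can be blocked and woken via arbitrary objects acting as condition variables. A minimal wait()/notify() sketch (the Slot class is an illustrative example, not from the deck):

    class Slot {
        private Integer value;             // null means empty

        synchronized void put(int v) {
            value = v;
            notify();                      // wake a thread blocked in take()
        }

        synchronized int take() throws InterruptedException {
            while (value == null) {        // guard against spurious wakeups
                wait();                    // releases the monitor and blocks
            }
            int v = value;
            value = null;
            return v;
        }
    }

    public class ConditionDemo {
        public static void main(String[] args) throws InterruptedException {
            Slot slot = new Slot();
            Thread consumer = new Thread(() -> {
                try {
                    System.out.println("Consumed: " + slot.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.start();
            Thread.sleep(100);             // let the consumer block first
            slot.put(42);                  // producer deposits a value and wakes it
            consumer.join();
        }
    }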
  38. 38. 4. Processes & Threads 02/06/17 38© Copyright Material
  39. 39. 4. Processes & Threads 02/06/17 39© Copyright Material Thread Scheduling Preemptive scheduling • A thread may be suspended at any point to make way for another thread Non-preemptive scheduling • A thread runs until it makes a call to the threading system, when the system may de-schedule it and schedule another thread to run – Avoids race conditions – Cannot take advantage of a multiprocessor, since threads run exclusively – The programmer needs to insert yield() calls, as sketched below
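A tiny sketch of the cooperative style (an illustration only, not true non-preemptive scheduling — on a JVM, Thread.yield() is merely a hint to the scheduler):

    public class YieldDemo {
        public static void main(String[] args) {
            Runnable worker = () -> {
                for (int i = 0; i < 3; i++) {
                    System.out.println(Thread.currentThread().getName() + ": step " + i);
                    Thread.yield(); // voluntarily offer the processor to another thread
                }
            };
            new Thread(worker, "A").start();
            new Thread(worker, "B").start();
        }
    }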
  40. 40. 4. Processes & Threads 02/06/17 40© Copyright Material Threads Vs Multiple Processes • Creating a thread is much cheaper than a process, nearly 10-20 times. • Switching to a different thread in same process is much cheaper, about 5-50 times • Threads within same process can share data and other resources more conveniently and efficiently (without copying or messages) • Threads within a process are not protected from each other
  41. 41. 4. Processes & Threads 02/06/17 41© Copyright Material
  42. 42. 4. Processes & Threads 02/06/17 42© Copyright Material Threads Implementation Threads can be implemented: • in the OS kernel (Win NT, Solaris, Mach) • at user level (e.g. by a thread library: C threads, pthreads), or in the language (Ada, Java) – lightweight: no system calls – modifiable scheduler – low cost enables more threads to be employed – but: not preemptive, cannot exploit multiple processors, and a page fault blocks all threads • hybrid approaches can gain some advantages of both – user-level hints to the kernel scheduler – hierarchic threads (Solaris 2) – event-based (SPIN, FastThreads)
  43. 43. 4. Processes & Threads 02/06/17 43© Copyright Material Threads Implementation – Java thread methods: – Thread(ThreadGroup group, Runnable target, String name): creates a new thread in the SUSPENDED state, which will belong to group and be identified as name; the thread will execute the run() method of target. – setPriority(int newPriority), getPriority(): sets and returns the thread’s priority. – run(): a thread executes the run() method of its target object, if it has one, and otherwise its own run() method (Thread implements Runnable). – start(): changes the state of the thread from SUSPENDED to RUNNABLE. – sleep(int millisecs): causes the thread to enter the SUSPENDED state for the specified time. – yield(): enters the READY state and invokes the scheduler. – destroy(): destroys the thread.
  44. 44. 4. Processes & Threads 02/06/17 44© Copyright Material Threads Implementation • When no kernel support for multi-threaded processes is provided, a user-level threads implementation suffers from the following problems: – The threads within a process cannot take advantage of a multiprocessor. – A thread that takes a page fault blocks the entire process and all threads within it. – Threads within different processes cannot be scheduled according to a single scheme of relative prioritization. • User-level threads implementations, on the other hand, have significant advantages over kernel-level implementations: – Certain thread operations are significantly less costly. EX: Switching between threads – Since thread-scheduling module is implemented outside the kernel, it can be customized or changed to suit particular application requirements – Many more user-level threads can be supported than could reasonably be provided by default by a kernel
  45. 45. 4. Processes & Threads 02/06/17 45© Copyright Material Threads Implementation Each application process contains a user-level scheduler, which manages the threads inside the process The kernel is responsible for allocating virtual processors to processes. The number of virtual processors assigned to a process can also vary EX: In the given figure, if process A has requested an extra virtual processor and B terminates, then the kernel can assign one to A.
  46. 46. 4. Processes & Threads 02/06/17 46© Copyright Material Threads Implementation Scheduler Activations
  47. 47. 4. Processes & Threads 02/06/17 47© Copyright Material Threads Implementation • The Figure shows that a process notifies the kernel when either of two types of event occurs: when a virtual processor is ‘idle’ and no longer needed, or when an extra virtual processor is required • A scheduler activation (SA) is a call from the kernel to a process, which notifies the process’s scheduler of an event • Entering a body of code from a lower layer (the kernel) is sometimes called an Upcall The four types of event that the kernel notifies the user-level scheduler (which we shall refer to simply as ‘the scheduler’) of are as follows – Virtual processor allocated: The kernel has assigned a new virtual processor to the process, and this is the first timeslice upon it; the scheduler can load the SA with the context of a READY thread, which can thus recommence execution. – SA blocked: An SA has blocked in the kernel, and the kernel is using a fresh SA to notify the scheduler; the scheduler sets the state of the corresponding thread to BLOCKED and can allocate a READY thread to the notifying SA. – SA unblocked: An SA that was blocked in the kernel has become unblocked and is ready to execute at user level again; the scheduler can now return the corresponding thread to the READY list. – SA preempted: The kernel has taken away the specified SA from the process
  48. 48. 4. Processes & Threads 02/06/17 48© Copyright Material Threads Lifecycle
  49. 49. 4. Processes & Threads 02/06/17 49© Copyright Material Threads Lifecycle As shown in the diagram, the lifecycle of the thread can be better understood through Java Implementation 1. A Thread can be created in 2 ways, by inheriting Thread Class or by implementing Runnable interface – public class ThreadUsingClass extends Thread – public class ThreadUsingInterface extends Applet implements Runnable 2. To specify the implementation of a Thread, we must override the public void run() method present in Thread class or Runnable interface public void run() { ------ ------ }
  50. 50. 4. Processes & Threads 02/06/17 50© Copyright Material Threads Lifecycle As shown in the diagram, the lifecycle of the thread can be better understood through Java Implementation 3. Creation of Thread Thread DemoThread = new Thread(this); //this points to the Runnable instance of a Thread 4. Starting a Thread DemoThread.start(); //start() would invoke the start point of this Thread, which has its run() method specifying the implementation 5. Stopping a Thread DemoThread.stop(); 6. Asking the Thread to pause execution for some time Thread.sleep(2000); //Thread pauses for 2000 milliseconds //sleep() throws an InterruptedException & therefore must be called inside try-catch
  51. 51. 5. Communication & Invocation 02/06/17 51© Copyright Material Communication primitives • TCP(UDP) Socket in Unix and Windows • DoOperation, getRequest, sendReply in Amoeba • Group communication primitives in V system and Chorus Protocols and openness 1. Provide standard protocols that enable internetworking between middleware 2. Integrate novel low-level protocols without upgrading their application Protocols are normally arranged in a stack of layers • Static stack – a new layer is integrated permanently as a “driver” • Dynamic stack – the protocol stack can be composed on the fly EX: a web browser utilizes a wide-area wireless link on the road and a faster Ethernet connection in the office
  52. 52. Thread Lifecycle 02/06/17 52@Copyright Material

    class RunnableDemo implements Runnable {
        private Thread t;
        private String threadName;

        RunnableDemo(String name) {
            threadName = name;
            System.out.println("Creating " + threadName);
        }

        public void run() {
            System.out.println("Running " + threadName);
            try {
                for (int i = 4; i > 0; i--) {
                    System.out.println("Thread: " + threadName + ", " + i);
                    // Let the thread sleep for a while.
                    Thread.sleep(50);
                }
            } catch (InterruptedException e) {
                System.out.println("Thread " + threadName + " interrupted.");
            }
            System.out.println("Thread " + threadName + " exiting.");
        }

        public void start() {
            System.out.println("Starting " + threadName);
            if (t == null) {
                t = new Thread(this, threadName);
                t.start();
            }
        }
    }

    public class TestThread {
        public static void main(String args[]) {
            RunnableDemo R1 = new RunnableDemo("Thread-1");
            R1.start();
            RunnableDemo R2 = new RunnableDemo("Thread-2");
            R2.start();
        }
    }
  53. 53. 5. Communication & Invocation 02/06/17 53© Copyright Material 1. INVOCATION PERFORMANCE • Invocation performance is a critical factor in distributed system design • The more designers separate functionality between address spaces, the more remote invocations are required • Clients and servers may make many millions of invocation-related operations in their lifetimes, so small fractions of milliseconds count in invocation costs Factors that affect invocation performance are Invocation Costs like • Delay: the total RPC call time experienced by a client • Latency: the fixed overhead of an RPC, measured by null RPC • Throughput: the rate of data transfer between computers in a single RPC
  54. 54. 5. Communication & Invocation 02/06/17 54© Copyright Material 1. INVOCATION PERFORMANCE Invocation over a Network The performance of RPC and RMI mechanisms is critical for effective distributed systems. • Typical times for a 'null procedure call': • Local procedure call < 1 microsecond • Remote procedure call ~ 10 milliseconds • 'network time' (involving about 100 bytes transferred at 100 megabits/sec.) accounts for only 0.01 millisecond; the remaining delays must be in OS and middleware – latency, not communication time. Factors affecting RPC/RMI performance • marshalling/unmarshalling + operation despatch at the server • data copying: application → kernel space → communication buffers • thread scheduling and context switching, including kernel entry • protocol processing, for each protocol layer • network access delays: connection set-up, network latency
  55. 55. 5. Communication & Invocation 02/06/17 55© Copyright Material 1. INVOCATION PERFORMANCE Improving RPC performance 1. Memory sharing – rapid communication between processes in the same computer (a sketch follows below) 2. Choice of protocol – TCP/UDP Ex: persistent connections: several invocations during one connection – The OS's buffers collect several small messages and send them together 3. Invocation within a computer – Most cross-address-space invocations take place within a computer – LRPC (lightweight RPC)
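The 0.01 ms figure is a quick calculation (a rough transfer-time estimate that ignores protocol overhead):

    public class NetworkTime {
        public static void main(String[] args) {
            double bits = 100 * 8;           // about 100 bytes transferred
            double bitsPerSec = 100e6;       // 100 megabits/sec
            double ms = bits / bitsPerSec * 1000;
            System.out.println(ms + " ms");  // 0.008 ms, i.e. roughly 0.01 ms
        }
    }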
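Memory sharing between processes on the same machine can be sketched in Java with a memory-mapped file; the file path below is an arbitrary assumption, and a real LRPC-style system would add synchronization and a call protocol on top:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class SharedRegion {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile f = new RandomAccessFile("/tmp/shm-demo", "rw")) {
                MappedByteBuffer region =
                    f.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                region.putInt(0, 42);     // deposit an argument into the shared region
                int v = region.getInt(0); // a peer mapping the same file reads it directly
                System.out.println("Read back: " + v);
            }
        }
    }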
  56. 56. 5. Communication & Invocation 02/06/17 56© Copyright Material 1. INVOCATION PERFORMANCE LRPC (lightweight RPC)
  57. 57. LRPC 02/06/17 57@Copyright Material The LRPC design is based on optimizations concerning data copying and thread scheduling It was noted that it would be more efficient to use shared memory regions for client-server communication, with a different (private) region between the server and each of its local clients Such a region contains one or more A (for argument) stacks, see the figure Instead of RPC parameters being copied between the kernel and user address spaces involved, the client and server are able to pass arguments and return values directly via an A stack In LRPC, arguments are copied once: when they are marshalled onto the stack A. In an equivalent RPC, they are copied four times: from the client stub’s stack onto a message; from the message to a kernel buffer, from the kernel buffer to a server message, and from the message to the server stub’s stack There may be several A stacks in a shared region, because several threads in the same client may call the server at the same time
  58. 58. LRPC 02/06/17 58@Copyright Material Steps performed within LRPC 1. A client thread enters the server’s execution environment by first trapping to the kernel and presenting it with a capability 2. The kernel checks this and only allows a context switch to a valid server procedure; 3. If it is valid, the kernel switches the thread’s context to call the procedure in the server’s execution environment 4. When the procedure in the server returns, the thread returns to the kernel, which switches the thread back to the client execution environment 5. Clients and servers employ stub procedures to hide the details from application writers.
  59. 59. 5. Communication & Invocation 02/06/17 59© Copyright Material 2. ASYNCHRONOUS OPERATION Performance characteristics of the Internet – High latencies, low bandwidths and high server loads – Network disconnection and reconnection – These characteristics can outweigh any benefits that the OS provides Asynchronous operation – Concurrent invocations Ex: the browser fetches multiple images in a home page by concurrent GET requests (sketched below) – Asynchronous invocation: a non-blocking call Ex: CORBA oneway invocation: maybe semantics, or collect the result by a separate call
  60. 60. 5. Communication & Invocation 02/06/17 60© Copyright Material 2. ASYNCHRONOUS OPERATION Persistent asynchronous invocations • Designed for disconnected operation • Try indefinitely to perform the invocation, until it is known to have succeeded or failed, or until the application cancels the invocation • QRPC (Queued RPC) – Client queues outgoing invocation requests in a stable log – Server queues invocation results The issue for programmers • How can the user continue while the results of invocations are still not known?
  61. 61. 6. OS Architectures 02/06/17 61© Copyright Material Monolithic Kernels & Micro Kernels Monolithic kernel – The kernel is massive: it performs all basic operating system functions and takes megabytes of code and data – The kernel is undifferentiated: coded in a non-modular way Ex: Unix Advantage: efficiency Disadvantage: lack of structure Microkernel – The kernel provides only the most basic abstractions: address spaces, threads and local interprocess communication – All other system services are provided by servers that are dynamically loaded Ex: VM of IBM 370 Advantage: extensibility, modularity; a small kernel is more likely to be free of bugs Disadvantage: relative inefficiency
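The browser example can be sketched with Java's asynchronous HTTP client (requires Java 11+; the URLs are placeholders): each sendAsync() call returns immediately, the GETs proceed concurrently, and the results are collected later.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.Collectors;

    public class AsyncGets {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            List<String> urls = List.of(
                "http://example.com/img1.png",
                "http://example.com/img2.png");

            List<CompletableFuture<String>> pending = urls.stream()
                .map(u -> client.sendAsync(              // non-blocking call
                        HttpRequest.newBuilder(URI.create(u)).build(),
                        HttpResponse.BodyHandlers.ofString())
                    .thenApply(r -> u + " -> " + r.statusCode()))
                .collect(Collectors.toList());

            // the caller is free to continue with other work here ...

            pending.forEach(f -> System.out.println(f.join())); // collect results
        }
    }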
  62. 62. 6. OS Architectures 02/06/17 62© Copyright Material Monolithic Kernels & Micro Kernels • A monolithic kernel can contain some server processes that execute within its address space, including file servers and some networking • In the case of a microkernel design the kernel provides only the most basic abstractions, principally address spaces, threads and local interprocess communication
  63. 63. 6. OS Architectures 02/06/17 63© Copyright Material Monolithic Kernels & Micro Kernels • The place of the microkernel – in its most general form – in the overall distributed system design is shown in this Figure • The microkernel appears as a layer between the hardware layer and a layer consisting of major system components called subsystems
  64. 64. • Learnt the concepts of operating systems & the differences between a network OS and a distributed OS • Illustrated what a modern operating system does to support distributed applications and middleware • Learnt the various layers of operating systems • Understood the concepts of process address space, thread vs process, the role of a thread, thread implementation, and the life cycle of a thread • Learnt the factors affecting invocation performance, the techniques for improving it, and the role of asynchronous operations • The types of operating system architecture were emphasized & the role of specific architectures for distributed systems was identified SUMMARY 02/06/17 64© Copyright Material
  65. 65. Network OS: have a networking capability built into them and so can be used to access remote resources Distributed OS: An operating system that produces a single system image for all the resources in a distributed system Communication Manager: Manages the communication between separate processes (or threads attached to separate processes). Memory Management: Management of physical and virtual memory. Note this is not the same as automatic memory management (or garbage collection) provided by the runtime for some high-level languages such as Java. Supervisor: The controller for interrupts, system call traps and other kinds of exceptions (though not, generally, language level exceptions). Process Manager: Takes care of the creation of processes, including the scheduling of each process to physical resources (such as the CPU) GLOSSARY 02/06/17 65© Copyright Material
  66. 66. Malicious Code: It is the term used to describe any code in any part of a software system or script that is intended to cause undesired effects, security breaches or damage to a system. Malicious code describes a broad category of system security terms that includes attack scripts, viruses, worms, Trojan horses, backdoors, and malicious active content. Benign Code: Benign code that contains a bug or that has unanticipated behaviour may cause part of the rest of the system to behave incorrectly Type-Safe vs Non Type-Safe Languages: In type-safe languages, no module may access a target module unless it has a reference to it – it cannot make up a pointer to it, as would be possible in C or C++ (Non Type-Safe Languages) Kernel: The kernel is a program that is distinguished by the facts that it remains loaded from system initialization and its code is executed with complete access privileges for the physical resources on its host computer GLOSSARY 02/06/17 66© Copyright Material
  67. 67. System Trap: All processes can safely transfer from a user-level address space to the kernel’s address space through this interrupt Address Space: a collection of ranges of virtual memory locations, in each of which a specified combination of memory access rights applies, e.g.: read only or read-write Thread: A thread is the operating system abstraction of an activity (the term derives from the phrase ‘thread of execution’) Region: an area of continuous virtual memory that is accessible by the threads of the owning process Mapped File: A mapped file is one that is accessed as an array of bytes in memory Centralized Load Sharing: The load manager is a single shared system throughout the network. Load managers collect information about the nodes and use it to allocate new processes to nodes GLOSSARY 02/06/17 67© Copyright Material
  68. 68. Hierarchical Load Sharing: Several load managers organized in a tree structure throughout the network and managers make process allocation decisions as far down the tree as possible, but managers may transfer processes to one another, via a common ancestor, under certain load conditions Decentralized Load Sharing: In this, nodes exchange information with one another directly to make allocation decisions Sender-Initiated Load Sharing System: the node that requires a new process to be created is responsible for initiating the transfer decision. It typically initiates a transfer when its own load crosses a threshold. Receiver-initiated Load Sharing System: a node whose load is below a given threshold advertises its existence to other nodes so that relatively loaded nodes can transfer work to it. Worker Thread: The thread creates at the server to process the requests at the start up Thread Scheduling: The process of controlling multiple threads to work individually GLOSSARY 02/06/17 68© Copyright Material
  69. 69. Thread Synchronization: The process of ensuring that a resource will be used by only one thread at a time, when two or more threads need access to a shared resource. Preemptive scheduling: the process of suspending a thread at any point to make way for another thread. Non-preemptive scheduling: a thread runs until it makes a call to the threading system, when the system may de-schedule it and schedule another thread to run. Virtual processors: the kernel can allocate different physical processors to each process as time goes by, while keeping its guarantee of how many processors it has allocated. Scheduler Activation: a call from the kernel to a process, which notifies the process’s scheduler of an event. GLOSSARY 02/06/17 69© Copyright Material
  70. 70. Upcall: The process of entering a body of code from a lower layer, the kernel. Asynchronous Operation: it is made with a non-blocking call, which returns as soon as the invocation request message has been created and is ready for dispatch Monolithic Kernel: it performs all basic operating system functions and takes up in the order of megabytes of code and data – and it is coded in a non-modular way Micro Kernel: Kernel provides only the most basic abstractions: address spaces, threads and local interprocess communication and all other system services are provided by servers that are dynamically loaded GLOSSARY 02/06/17 70© Copyright Material
  71. 71. 1. Which of the following is not the function of a Middleware a. Encapsulation b. Protection c. Encryption d. Invocation 2. Which layer of the operating system is responsible for Synchronization a. Thread Manager b. Supervisor c. Memory Management d. Communication Manager 3. Which OS provides the view of a single system image maintaining all processes running at every node a. Distributed OS b. Network OS c. Default OS d. Hybrid OS 4. _______ is a portion of the address space of a process which is initialized by values stored in binary files a. Stack b. Heap c. Text Region d. Address Mapping QUIZ 02/06/17 71© Copyright Material
  72. 72. 5. An area of continuous virtual memory that is accessible by the threads of the owning process is called a. Mapped File b. Region c. Heap d. Stack 6. Nodes exchange information with one another directly to make allocation decisions in which form of Load Sharing systems a. Centralized b. Hierarchical c. Decentralized d. None 7. Which of the following Load sharing Policy is valid? a. Transfer policy: situate a new process locally or remotely? b. Location policy: Applies heuristics to make their allocation decision c. Static policy: which node should host the new process? d. Adaptive policy: without regard to the current state of the system QUIZ 02/06/17 72© Copyright Material
  73. 73. 8. Let us assume that each request takes, on average, 4 milliseconds of processing plus 12 milliseconds of I/O (input/output) delay when the server reads from a disk. If disk block caching is introduced, and if the assumed miss rate is 10% with an IO processing delay of 0.8 ms, what is the maximum throughput of the system with 800 requests inline? a. 74 requests per second b. 474 requests per second c. 326 requests per second d. 400 requests per second 9. Thread-per-request has the following valid properties a. Threads do not contend for a shared queue b. Useful for TCP based services c. Server creates a fixed pool of “worker” threads to process the requests d. Used where the service is encapsulated as an object. QUIZ 02/06/17 73© Copyright Material
  74. 74. 10. Threads can be implemented at what level a. Hybrid Level b. User Level c. Kernel Level d. All 11. ________ is a call from the kernel to a process, which notifies the process’s scheduler of an event a. Upcall b. TRAP c. Scheduler Activation d. Virtual Processor allocated 12. Which of the following Invocation costs wouldn’t affect invocation performance a. Delay b. Marshalling c. Latency d. Throughput 13. Which of the following is not a performance characteristics of the Internet a. High latencies, low bandwidths and high server loads b. Network disconnection and reconnection. c. Outweigh any benefits that the OS can provide d. Allocation of virtual processors to the processes QUIZ 02/06/17 74© Copyright Material
  75. 75. 14. UNIX OS is an example of a. Monolithic b. Micro c. Distributed d. Default QUIZ 02/06/17 75© Copyright Material
  76. 76. 1. C 2. A 3. A 4. B 5. B 6. C 7. A 8. D 9. A 10. D 11. C 12. B 13. D 14. A KEY 02/06/17 76© Copyright Material
