Mcs 041.1

●●●●●● Lamport's Algorithm:-
Assuming the presence of the pipelining property & eventual delivery of all messages, the solution requires timestamping of all messages. It also assumes that each process maintains a request queue, initially empty, that contains request messages ordered by timestamp. The algorithm is defined by the following five rules:

1. Initiator i: When process pi desires to acquire exclusive ownership of the resource, it sends the timestamped message request(ti, i), where ti = ci, to every other process & records the request in its own queue.
2. Other processes j, j ≠ i: When process pj receives the request(ti, i) message, it places the request on its own queue & sends a timestamped reply(tj, j) to process pi.
3. Process pi is allowed to access the resource when the following 2 conditions are satisfied: a) pi's request message is at the front of its queue; b) pi has received a message from every other process with a timestamp later than (ti, i).
4. Process pi releases the resource by removing the request from its own queue & sending a timestamped release message to every other process.
5. Upon receipt of pi's release message, process pj removes pi's request from its request queue.

Correctness of the algorithm follows from rule 3, which guarantees that the initiating process pi learns about all potentially conflicting requests that precede it. Given that messages can't be received out of order, the timestamp ordering provides a total ordering of events in the system & in the per-process request queues, so rule 3(a) permits only one process to access the resource at a time. The solution is deadlock free due to the timestamp ordering of the requests, which precludes formation of wait-for loops. Granting requests in the order in which they are made prevents process starvation & lockouts. The communication cost of the algorithm is 3(N-1) messages: (N-1) request messages, (N-1) reply messages, and (N-1) release messages. Given that requests & release notifications are effectively broadcasts, the algorithm clearly performs better in a broadcast-type network such as a bus.
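As an illustration of rule 3, here is a minimal C sketch (hypothetical names; message transport and queue maintenance are omitted) of the timestamp total order and the test a process applies before entering the critical section:

    #include <stdbool.h>

    /* A timestamped request (t_i, i): Lamport clock value plus process id. */
    typedef struct { int ts; int pid; } request_t;

    /* Total order on requests: earlier timestamp wins; ties broken by pid. */
    bool request_before(request_t a, request_t b) {
        return a.ts < b.ts || (a.ts == b.ts && a.pid < b.pid);
    }

    /* Rule 3(b): process i may enter only when every other process has been
     * heard from with a timestamp later than its own request (t_i, i);
     * rule 3(a), the front-of-queue check, is left to the caller. */
    bool may_enter(request_t mine, const request_t last_heard[], int n) {
        for (int j = 0; j < n; j++) {
            if (j == mine.pid) continue;
            if (!request_before(mine, last_heard[j]))
                return false;           /* still waiting on process j */
        }
        return true;
    }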
●●●●●●● Give a solution to the Dining Philosophers problem using monitors:-
Consider 5 philosophers who spend their lives thinking & eating. The philosophers share a common circular table surrounded by 5 chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, & when a philosopher thinks she does not interact with her colleagues. From time to time a philosopher gets hungry & tries to pick up the two chopsticks nearest to her. Obviously she can't pick up a chopstick that is already in the hand of a colleague. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing her chopsticks. When she has finished eating she puts down both chopsticks & starts thinking again.

The dining philosophers problem is a classic synchronization problem, neither because of its practical importance nor because computer scientists dislike philosophers, but because it is an example of a large class of concurrency-control problems. It is a simple representation of the need to allocate several resources among several processes in a deadlock & starvation free manner. One simple solution is to represent each chopstick by a semaphore. A philosopher tries to grab a chopstick by executing a wait operation on that semaphore; she releases her chopsticks by executing the signal operation on the appropriate semaphores. Thus the shared data are:

    semaphore chopstick[5];

where all the elements of chopstick are initialized to 1. The structure of philosopher i is shown in the figure:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        ...
        /* eat */
        ...
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        ...
        /* think */
        ...
    } while (1);

    Fig: structure of philosopher i

Although this solution guarantees that no two neighbors are eating simultaneously, it is nevertheless capable of creating a deadlock. Suppose that all 5 philosophers become hungry simultaneously, & each grabs her left chopstick. All the elements of chopstick will now be equal to 0, and when each philosopher tries to grab her right chopstick she will be delayed forever.

Another high-level synchronization construct is the monitor type. A monitor is characterized by a set of programmer-defined operators. The representation of a monitor type consists of declarations of variables whose values define the state of an instance of the type, as well as the bodies of procedures or functions that implement operations on the type. The syntax of a monitor is shown below:

    monitor monitor_name
    {
        shared variable declarations

        procedure body P1 (...) { ... }
        procedure body P2 (...) { ... }
        ...
        procedure body Pn (...) { ... }

        initialization code
    }

Now we are in a position to describe our solution to the dining philosophers problem using a monitor.
    DATA:
        condition can_eat[NUM_PHILS];
        enum states {THINKING, HUNGRY, EATING} state[NUM_PHILS];
        int index;

    INITIALIZATION:
        for (index = 0; index < NUM_PHILS; index++) {
            state[index] = THINKING;
        }

    MONITOR PROCEDURES:
        /* request the right to pick up chopsticks and eat */
        entry void pickup(int mynum) {
            /* announce that we're hungry */
            state[mynum] = HUNGRY;
            /* if neighbors aren't eating, proceed */
            if ((state[(mynum-1+NUM_PHILS) % NUM_PHILS] != EATING) &&
                (state[(mynum+1) % NUM_PHILS] != EATING)) {
                state[mynum] = EATING;
            }
            /* otherwise wait for them */
            else
                can_eat[mynum].wait;
            /* ready to eat now */
            state[mynum] = EATING;
        }

        /* announce that we're finished, give others a chance */
        entry void putdown(int mynum) {
            /* announce that we're done */
            state[mynum] = THINKING;
            /* give left (lower) neighbor a chance to eat */
            if ((state[(mynum-1+NUM_PHILS) % NUM_PHILS] == HUNGRY) &&
                (state[(mynum-2+NUM_PHILS) % NUM_PHILS] != EATING)) {
                can_eat[(mynum-1+NUM_PHILS) % NUM_PHILS].signal;
            }
            /* give right (higher) neighbor a chance to eat */
            if ((state[(mynum+1) % NUM_PHILS] == HUNGRY) &&
                (state[(mynum+2) % NUM_PHILS] != EATING)) {
                can_eat[(mynum+1) % NUM_PHILS].signal;
            }
        }

    PHILOSOPHER:
        /* find out our id, then repeat forever */
        me = get_my_id();
        while (TRUE) {
            /* think, wait, eat, do it all again ... */
            think();
            pickup(me);
            eat();
            putdown(me);
        }

●●●●● Explain the Real Time Operating System (RTOS). Give any 2 example applications suitable for an RTOS. Differentiate between time sharing & RTOS
Real-time operating systems are used in environments where a large number of events, mostly external to the computer system, must be accepted & processed in a short time or within certain deadlines. Two example applications are industrial control and telephone switching; real-time simulation is another. A real-time OS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency, but a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time. The primary issue for a real-time OS is to provide quick event response times & thus meet scheduling deadlines; user convenience & resource utilization are secondary concerns to RTOS designers. It is not uncommon for a real-time system to be expected to process bursts of thousands of interrupts per second without missing a single event. Such requirements usually can't be met by multiprogramming alone, & real-time operating systems usually rely on some specific policies & techniques for doing their job.

Explicit, programmer-defined & controlled processes are commonly encountered in real-time operating systems. Basically a separate process is charged with handling a single external event. The process is activated upon occurrence of the related event, which is often signaled by an interrupt. Multitasking operation is accomplished by scheduling processes for execution independently of each other. Each process is assigned a certain level of priority that corresponds to the relative importance of the event that it services. The processor is normally allocated to the highest-priority process among those that are ready to execute, and higher-priority processes usually preempt execution of lower-priority processes. This form of scheduling, known as priority-based preemptive scheduling, is used by a majority of real-time systems, as sketched below.
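A minimal C sketch of priority-based preemptive dispatching (hypothetical structures and names; a real RTOS would use priority queues and interrupt-driven dispatch):

    #include <stdbool.h>

    #define MAX_PROCS 32

    typedef struct { int pid; int priority; bool ready; } proc_t;

    proc_t table[MAX_PROCS];
    int    nprocs;
    int    running = -1;                 /* index of the running process, -1 if none */

    /* Pick the ready process with the highest priority, or -1 if none. */
    int pick_next(void) {
        int best = -1;
        for (int i = 0; i < nprocs; i++)
            if (table[i].ready &&
                (best < 0 || table[i].priority > table[best].priority))
                best = i;
        return best;
    }

    /* Called when an event (typically an interrupt) makes process i ready:
     * a higher-priority newcomer preempts the currently running process. */
    void on_event(int i) {
        table[i].ready = true;
        if (running < 0 || table[i].priority > table[running].priority)
            running = pick_next();       /* dispatch the highest-priority process */
    }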
The differences between time sharing & RTOS: time sharing is a popular representation of multiprogrammed, multi-user systems, whereas RTOSs are used in environments where a large number of events, mostly external to the computer system, must be accepted & processed in a short time or within certain deadlines. The primary objective of a time-sharing system is good terminal response time, whereas the primary objective of an RTOS is to provide quick event response times & thus meet scheduling deadlines.

●●●●●● Explain the Windows 2000 operating system architecture
The Windows 2000 Architecture Roadmap provides a global view of the operating system architecture, its main components, and mechanisms. It also provides "logical navigation" to other locations for more in-depth discussions. The goal is to help the user go from general to specific information, in a way that is logical and based on the system structure itself. He or she should be able to become familiar with the operating system's main concepts and components. Novice and experienced users should also benefit from this comprehensive operating system description, its numerous diagrams, and examples. Refer to the site organization for background information and for navigation suggestions.

Windows 2000 Overview
The Windows 2000 operating system constitutes the environment in which applications run. It provides the means to access processor(s) and all other hardware resources. Also, it allows the applications and its own components to communicate with each other. Windows 2000 has been built combining the following models:
    Layered Model. The operating system code is grouped in modules layered on top of each other. Each module provides a set of functions used by modules of higher levels. This model is applied mainly to the operating system executive.
    Client/Server Model. The operating system is divided into several processes, called servers, each implementing a set of specific services. Each of these servers runs in user mode, waiting for client requests.

User Mode
The software in user mode runs in a non-privileged state with limited access to system resources. Windows 2000 applications and protected subsystems run in user mode. The protected subsystems run in their own protected space and do not interfere with each other. They are divided into the following two groups:
    Environment subsystems. Services that provide Application Programming Interfaces (APIs) specific to an operating system.
    Integral subsystems. Services that provide interfaces to important operating system functions such as security and network services.

Kernel Mode versus User Mode
Windows 2000 divides the executing code into the following two areas or modes.

Kernel Mode
In the privileged kernel mode, the software can access all the system resources, such as computer hardware and sensitive system data. The kernel-mode software constitutes the core of the operating system and can be grouped as follows:
    System Components. Responsible for providing system services to environment subsystems and other executive components. They perform system tasks such as input/output (I/O), file management, virtual memory management, resource management, and interprocess communication.
    Kernel. The executive core component. It performs crucial functions such as scheduling, interrupt and exception dispatching, and multiprocessor synchronization.
    Hardware Abstraction Layer (HAL). Isolates the rest of the Windows NT executive from the specific hardware, making the operating system compatible with multiple processor platforms.
For more information, refer to:
    Windows 2000 basic techniques. Standard operating system techniques used by Windows 2000.
    User mode components. They execute in their own protected address space and have limited access to system resources.
    Kernel mode components. Performance-sensitive operating system components.

●●●●● Cache Memory
Cache memory is extremely fast memory that is built into a computer's central processing unit (CPU), or located next to it on a separate chip. The CPU uses cache memory to store instructions that are repeatedly required to run programs, improving overall system speed. The advantage of cache memory is that the CPU does not have to use the motherboard's system bus for data transfer. Whenever data must be passed through the system bus, the data transfer speed slows to the motherboard's capability. The CPU can process data much faster by avoiding the bottleneck created by the system bus. As it happens, once most programs are open and running, they use very few resources. When these resources are kept in cache, programs can operate more quickly and efficiently. All else being equal, cache is so effective in system performance that a computer running a fast CPU with little cache can have lower benchmarks than a system running a somewhat slower CPU with more cache. Cache built into the CPU itself is referred to as Level 1 (L1) cache. Cache that resides on a separate chip next to the CPU is called Level 2 (L2) cache. Some CPUs have both L1 and L2 cache built in and designate the separate cache chip as Level 3 (L3) cache.

●●●●●● Physical vs. Virtual Memory
Physical memory is in the form of hardware called RAM. Virtual memory, as the word "virtual" suggests, is not real: it uses space on the hard disk to create additional memory space. The problem with virtual memory is that if you have allocated an exact number of bytes and your hard disk then gets low on space, your system will have errors; just let the system allocate the virtual memory that it needs and you won't have any problem. Virtual memory is a memory management technique, used by multitasking computer operating systems, wherein non-contiguous memory is presented to software as contiguous memory. This contiguous memory is referred to as the virtual address space.
It is used commonly and provides great benefit for users at a very low cost. The computer hardware that is the primary memory system of a computer, also called RAM, is the physical memory.

●●●●●● Paging vs. Swapping
The difference between swapping and paging is that paging moves individual memory pages while swapping moves the address space of complete processes. Paging refers to paging out individual pages of memory; swapping is when an entire process is swapped out. Paging is normal. Swapping happens when the resource load is heavy and the entire process is written out to disk. If you're swapping a lot, you might want to look at things like adding memory. Paging is how all processes normally run: a page fault occurs, which is normal operation, a new page of a program is paged in, and freed pages are paged out. This is not swapping. Swapping is at the process level. It will only occur if the resource load is heavy and system performance is degrading. The lowest-priority process is written out to swap space. This could be a sleeping process; sleeping processes are the prime candidates for swap-out. When they become active again, a new candidate is swapped out, if need be, and the process is swapped in.

●●●●●● Scheduling or CPU Scheduling
Scheduling refers to a set of policies & mechanisms built into the operating system that govern the order in which the work to be done by a computer system is carried out: they select the next job to be admitted into the system & the next process to run. The primary objective of scheduling is to optimize system performance in accordance with the criteria deemed most important by the system designers. The main objectives of scheduling are to increase CPU utilization & achieve higher throughput, where throughput is the amount of work accomplished in a given time interval. CPU scheduling is the basis of operating systems which support multiprogramming concepts. So the objectives of scheduling are:
    (i) Scheduling should attempt to service the largest possible number of processes per unit time.
    (ii) Scheduling should minimize wasted resources & overhead.
    (iii) The scheduling mechanism should keep the resources of the system busy. Processes that will use under-utilized resources should be favored.
    (iv) It should be fair: all processes are treated the same, & no process can suffer indefinite postponement.
    (v) In environments in which processes are given priorities, the scheduler should favor the higher-priority processes.

●●●●●●● Differences between a network operating system & a distributed operating system
A distributed operating system manages a collection of loosely coupled systems interconnected by a communication network. From the point of view of a specific processor in a distributed system, the rest of the processors & their respective resources are remote, whereas its own resources are local.
A network operating system, in contrast, provides an environment in which users who are aware of the multiplicity of machines can access remote resources by either logging into the appropriate remote machine or transferring data from the remote machine to their own machine.

●●●●●● Thread
Threads represent a software approach to improving the performance of operating systems by reducing the overhead of process switching. A thread is a lightweight process with a reduced state; state reduction is achieved by having a group of related threads share resources, so that a group of related threads is equivalent to a classical process. Each thread belongs to exactly one process. Processes are static & only threads can be scheduled for execution. Threads can communicate efficiently by means of commonly accessible shared memory within the enclosing process (see the sketch below). Threads have been successfully used in network services.
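As a small illustration of threads sharing memory within one process (POSIX threads; error handling omitted for brevity):

    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;                       /* shared by all threads in the process */
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        pthread_mutex_lock(&m);            /* threads communicate through shared   */
        counter++;                         /* memory, so access must be serialized */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("counter = %d\n", counter); /* prints 4 */
        return 0;
    }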
●●●●●● Authentication
The primary goal of authentication is to allow access to legitimate system users & to deny access to unauthorized parties. The 2 primary measures of authentication effectiveness are:
    (i) The false acceptance ratio, i.e. the percentage of illegitimate users erroneously admitted.
    (ii) The false rejection ratio, i.e. the percentage of legitimate users who are denied access due to failure of the authentication mechanism.
Obviously the objective is to minimize both the false acceptance & false rejection ratios. One-way authentication is usually based on:
    1> Possession of a secret.
    2> Possession of an artifact.
    3> Unique physiological or behavioral characteristics of the user.
The 2 types of authentication are 1) mutual authentication & 2) extensible authentication. Mutual authentication, or two-way authentication, refers to two parties authenticating each other suitably. In technology terms, it refers to a client or user authenticating themselves to a server and that server authenticating itself to the user in such a way that both parties are assured of the other's identity. When describing online authentication processes, mutual authentication is often referred to as website-to-user authentication, or site-to-user authentication. Mutual SSL provides the same things as SSL, with the addition of authentication and non-repudiation of the client authentication, using digital signatures. However, due to issues with complexity, cost, logistics, and effectiveness, most web applications are designed so they do not require client-side certificates. This creates an opening for a man-in-the-middle attack, in particular for online banking. Extensible Authentication Protocol, or EAP, is an authentication framework frequently used in wireless networks and point-to-point connections. EAP provides for the transport and usage of keying material and parameters generated by EAP methods. There are many methods defined by RFCs, and a number of vendor-specific methods and new proposals exist. EAP is not a wire protocol; instead it only defines message formats. Each protocol that uses EAP defines a way to encapsulate EAP messages within that protocol's messages.

●●●●●● Swapping
Removing suspended or preempted processes from memory & subsequently bringing them back is called swapping. Swapping has traditionally been used to implement multiprogramming in systems with limited memory capacity or with little hardware support for memory management, and to improve processor utilization in partitioned memory environments by increasing the ratio of ready to resident processes. Swapping is usually employed in memory management systems with contiguous allocation, such as fixed & dynamically partitioned memory & segmentation. The swapper is an operating system process whose major responsibilities include:
    a) Selection of processes to swap out.
    b) Selection of processes to swap in.
    c) Allocation & management of swap space.
The swapper usually selects a victim among the suspended processes that occupy partitions large enough to satisfy the needs of the incoming process. Among the qualifying processes, the more likely candidates for swapping are the ones with low priority & those waiting for slow events, and thus having a higher probability of being suspended for a comparatively long time. Another important consideration is the time spent in memory by the potential victim & whether it ran while in memory; otherwise there is a danger of thrashing caused by repeatedly removing processes from memory almost immediately after loading them into memory. So the benefits of using swapping are:
    1) Allows a higher degree of multiprogramming.
    2) Better memory utilization.
    3) Less wastage of CPU time on compaction.
    4) Can easily be applied to priority-based scheduling algorithms to improve performance.
●●●●●● Thrashing
If the number of frames allocated to a low-priority process falls below the minimum number required by the computer architecture, we must suspend that process's execution. We should then page out its remaining pages, freeing all its allocated frames. This provision introduces a swap-in, swap-out level of intermediate CPU scheduling, applied to any process that does not have enough frames. Although it is technically possible to reduce the number of allocated frames to the minimum, there is some larger number of pages in active use. If the process does not have this number of frames, it will quickly page-fault. It must then replace some page; but since all its pages are in active use, it must replace a page that will be needed again right away. Consequently it quickly faults again & again & again. This high paging activity is called thrashing.

●●●●●●● Four necessary conditions for Deadlock
a) Mutual exclusion: the resources involved are non-sharable. At least one resource must be held in a non-sharable mode, i.e. only one process at a time claims exclusive control of the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
b) Hold & wait condition: a requesting process already holds resources while waiting for the requested resources. A process holding a resource allocated to it waits for an additional resource that is currently being held by another process.
c) No preemption condition: resources already allocated to a process can't be preempted. Resources can't be removed forcibly; they can only be released voluntarily by the process holding them.
d) Circular wait condition: the processes in the system form a circular list or chain where each process in the list is waiting for a resource held by the next process in the list.
We emphasize that all four conditions must hold for a deadlock to occur. The circular wait condition implies the hold & wait condition, so the four conditions are not completely independent.

●●●●●●● Lattice model
The lattice of security levels is widely used to describe the structure of military security levels. A lattice is a finite set together with a partial ordering on its elements such that for every pair of elements there is a least upper bound & a greatest lower bound. The simple linear ordering of sensitivity levels has already been defined. Compartment sets can be partially ordered by the subset relation: one compartment set is greater than or equal to another if the latter set is a subset of the former. Classifications, which include a sensitivity level & a compartment set, can then be partially ordered as follows: for any sensitivity levels a, b & compartment sets c, d, the relation (a, c) ≥ (b, d) holds if & only if a ≥ b & c ⊇ d. That each pair of classifications has a greatest lower bound & a least upper bound follows from these definitions & the facts that the classification ("Unclassified", no compartments) is a global lower bound & that we can postulate a classification ("Top Secret", all compartments) as a global upper bound. Because the lattice model matches the military classification structure so closely, it is widely used. A check of this dominance relation is sketched below.
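A small C sketch of the dominance relation, with compartment sets represented as bit masks (a hypothetical encoding chosen for illustration):

    #include <stdbool.h>

    typedef struct {
        int      level;        /* sensitivity level, e.g. 0=Unclassified ... 3=Top Secret */
        unsigned compartments; /* compartment set encoded as a bit mask */
    } class_t;

    /* (a,c) >= (b,d) iff a >= b and c is a superset of d. */
    bool dominates(class_t x, class_t y) {
        return x.level >= y.level &&
               (x.compartments & y.compartments) == y.compartments;
    }

For example, with levels ordered Unclassified < Confidential < Secret < Top Secret, (Secret, {A, B}) dominates (Confidential, {A}) but does not dominate (Confidential, {C}).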
●●●●●●● Three-dimensional Hypercube systems
Various cube-type multiprocessor topologies address the scalability & cost issues by providing interconnections whose complexity grows logarithmically with the increasing number of nodes. A hypercube of degree n has 2^n nodes; a three-dimensional hypercube has 2^3 = 8 nodes. Nodes are arranged in a 3-dimensional cube, that is, each node is connected to 3 other nodes. Each node is assigned a unique number or address between 0 and 2^n - 1 (here 0 to 7), i.e. 000, 001, 010, 011, 100, 101, 110, 111. Adjacent nodes differ in exactly 1 address bit (e.g. 001 and 011), & a node differing in all n bits is at the maximum internode distance of n (e.g. 000 and 111, at distance 3). Hypercubes provide a good basis for scalable systems since their complexity grows logarithmically with the number of nodes. They provide bidirectional communication between two processors. The topology is normally used in loosely coupled systems because the transfer of data between two processors goes through several intermediate processors. By increasing the I/O bandwidth, I/O devices can be attached to every node. The neighbors of a node can be enumerated by flipping each address bit in turn, as sketched below.
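Since adjacent nodes differ in exactly one address bit, a node's neighbors follow from XOR-ing its address with each single-bit mask (a minimal sketch for n = 3):

    #include <stdio.h>

    #define N 3  /* hypercube dimension: 2^3 = 8 nodes */

    int main(void) {
        int node = 5;                                     /* node 101 */
        for (int bit = 0; bit < N; bit++)
            printf("neighbor: %d\n", node ^ (1 << bit));  /* flip one address bit */
        return 0;  /* prints 4 (100), 7 (111), 1 (001) */
    }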
    The matrix itself, with the objects making the columns and the subjects making the rows
In the cells where a subject and object meet lie the rights the subject has on that object. Some example access rights are read, write, execute, list and delete. An access matrix has several standard operations associated with it:
    Entry of a right into a specified cell
    Removal of a right from a specified cell
    Creation of a subject
    Creation of an object
    Removal of a subject
    Removal of an object

●●●●●● Differences between a security policy & a security model
The security policy outlines several high-level points: how the data is accessed, the amount of security required, & what the steps are when these requirements are not met. The security model is more in-depth & supports the security policy. Security models are an important concept in the design of any security system, and different security policies apply to different systems.

●●●●●● Take-grant model
The take-grant protection model is a formal model used in the field of computer security to establish or disprove the safety of a given computer system that follows specific rules. It shows that for specific systems the question of safety is decidable in linear time, although it is undecidable in general. The model represents a system as a directed graph, where vertices are either subjects or objects. The edges between them are labeled, and the label indicates the rights that the source of the edge has over the destination. Two rights occur in every instance of the model: take and grant. They play a special role in the graph rewriting rules describing admissible changes of the graph. There are a total of four such rules:
    take rule: allows a subject to take rights of another object (add an edge originating at the subject)
    grant rule: allows a subject to grant its own rights to another object (add an edge terminating at the subject)
    create rule: allows a subject to create new objects (add a vertex and an edge from the subject to the new vertex)
    remove rule: allows a subject to remove rights it has over another object (remove an edge originating at the subject)
Preconditions for take(o, p, r): subject s has the right take for o, and o has the right r on p. Preconditions for grant(o, p, r): subject s has the right grant for o, and s has the right r on p. Using the rules of the take-grant protection model, one can reproduce the states into which a system can change with respect to the distribution of rights, and therefore show whether rights can leak with respect to a given safety model.

●●●●●●● Bakery algorithm
In computer science, it is common for multiple threads to simultaneously access the same resources. Data corruption can occur if two or more threads try to write into the same memory location, or if one thread reads a memory location before another has finished writing into it. Lamport's bakery algorithm is one of many mutual exclusion algorithms designed to prevent concurrent threads from entering critical sections of code concurrently, eliminating the risk of data corruption. Lamport envisioned a bakery with a numbering machine at its entrance, so each customer is given a unique number. Numbers increase by one as customers enter the store. A global counter displays the number of the customer that is currently being served. All other customers must wait in a queue until the baker finishes serving the current customer and the next number is displayed.
When the customer is done shopping and has disposed of his or her number, the clerk increments the number, allowing the next customer to be served. That customer must draw another number from the numbering machine in order to shop again. According to the analogy, the "customers" are threads, identified by the letter i and obtained from a global variable. Due to the limitations of computer architecture, some parts of Lamport's analogy need slight modification. It is possible that more than one thread will get the same number when they request it; this cannot be avoided. Therefore, it is assumed that the thread identifier i is also a priority identifier: a lower value of i means a higher priority, and threads with higher priority will enter the critical section first. The critical section is that part of code that requires exclusive access to resources and may only be executed by one thread at a time; in the bakery analogy, it is when the customer trades with the baker and others must wait. When a thread wants to enter the critical section, it has to check whether it is its turn to do so. It should check the numbers of every other thread to make sure that it has the smallest one; in case another thread has the same number, the thread with the smallest i will enter the critical section first. In pseudocode this comparison is written in the form:

    (a, b) < (c, d)   which is equivalent to   (a < c) or ((a == c) and (b < d))

Once the thread ends its critical job, it gets rid of its number and enters the non-critical section. The non-critical section is the part of code that doesn't need exclusive access; it represents some thread-specific computation that doesn't interfere with other threads' resources and execution. This part is analogous to actions that occur after shopping, such as putting change back into the wallet. A compact rendering of the whole algorithm is sketched below.
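A minimal C sketch of the bakery algorithm (plain arrays for tickets and flags; a real implementation needs atomic operations or memory barriers, which volatile alone does not provide):

    #include <stdbool.h>

    #define N 5                      /* number of threads */

    volatile bool entering[N];       /* thread j is currently picking a number */
    volatile int  number[N];         /* ticket; 0 means: not interested        */

    void lock(int i) {
        entering[i] = true;
        int max = 0;                 /* take a number one larger than any seen */
        for (int j = 0; j < N; j++)
            if (number[j] > max) max = number[j];
        number[i] = max + 1;
        entering[i] = false;
        for (int j = 0; j < N; j++) {
            while (entering[j])
                ;                    /* wait until j has finished choosing */
            /* wait while j holds a smaller ticket; ties broken by thread id */
            while (number[j] != 0 &&
                   (number[j] < number[i] ||
                    (number[j] == number[i] && j < i)))
                ;
        }
    }

    void unlock(int i) {
        number[i] = 0;               /* dispose of the number */
    }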
●●●●●● Mutual Exclusion
Mutual exclusion algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of computer code called critical sections. A critical section is a piece of code in which a process or thread accesses a common resource. The critical section by itself is not a mechanism or algorithm for mutual exclusion; a program, process, or thread can have a critical section in it without any mechanism or algorithm which implements mutual exclusion. Examples of such resources are fine-grained flags, counters or queues, used to communicate between code that runs concurrently, such as an application and its interrupt handlers. The synchronization of access to those resources is an acute problem because a thread can be stopped or started at any time. To illustrate: suppose a section of code is altering a piece of data over several program steps when another thread, perhaps triggered by some unpredictable event, starts executing. If this second thread reads from the same piece of data, the data, which is in the process of being overwritten, is in an inconsistent and unpredictable state. If the second thread tries overwriting that data, the ensuing state will probably be unrecoverable. Shared data being accessed by critical sections of code must, therefore, be protected, so that other processes which read from or write to the chunk of data are excluded from running. A mutex is a common name for a program object that negotiates mutual exclusion among threads, also called a lock. On a uniprocessor system, a common way to achieve mutual exclusion inside kernels is to disable interrupts for the smallest possible number of instructions that will prevent corruption of the shared data structure, the critical section; this prevents interrupt code from running in the critical section and also protects against interrupt-based process change. In a computer in which several processors share memory, an indivisible test-and-set of a flag can be used in a tight loop to wait until the other processor clears the flag. The test-and-set performs both operations without releasing the memory bus to another processor.

●●●●●● Test & Set instruction
In computer science, the test-and-set instruction is an instruction used to write to a memory location and return its old value as a single atomic (i.e. non-interruptible) operation. If multiple processes may access the same memory and a process is currently performing a test-and-set, no other process may begin another test-and-set until the first process is done. CPUs may use test-and-set instructions offered by other electronic components, such as dual-port RAM; CPUs may also offer a test-and-set instruction themselves. A lock can be built using an atomic test-and-set instruction as follows:

    function Lock(boolean *lock) {
        while (test_and_set(lock) == 1)
            ;   /* spin until the previous value is 0, i.e. the lock was free */
    }

The test-and-set operation can solve the wait-free consensus problem for no more than two concurrent processes. However, more than two decades before Herlihy's proof, IBM had replaced test-and-set by compare-and-swap, which is a more general solution to this problem. Ultimately, IBM would release a processor family with 12 processors, whereas Amdahl would release a processor family with the architectural maximum of 16 processors. The test-and-set instruction, when used with Boolean values, behaves like the following function. Crucially, the entire function is executed atomically: no process can interrupt the function mid-execution and hence see a state that only exists during the execution of the function.
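In C-like pseudocode, the standard rendering of that function is:

    function test_and_set(boolean *target) {
        boolean rv = *target;   /* remember the old value          */
        *target = TRUE;         /* set the location                */
        return rv;              /* report what was there before    */
    }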
This code only serves to help explain the behavior of test-and-set; atomicity requires explicit hardware support and hence can't be implemented as a simple function.

●●●●●●● Synchronization mechanisms
Inter-process synchronization & communication are necessary for designing concurrent software that is correct & reliable. Parallel program execution & read/write sharing of data place heavy demands on synchronization; in some systems communication is handled via messages, and once the necessary data are transmitted to individual processors for processing, there is usually little need for processes to synchronize while operating on the data. Multiprocessing has resulted in speed improvements for many applications, but it has also intensified the need for synchronization. Properly designed uniprocessor instructions such as test-and-set & compare-and-swap, implemented using the indivisible read-modify-write cycle, can be used as a foundation for inter-process synchronization in multiprocessor systems.

●●●●● Conditional Critical Region
The critical region construct can be effectively used to solve the critical section problem. It cannot, however, be used to solve some general synchronization problems. For this reason the conditional critical region was introduced. The shared variable is declared in the same way, the region construct is used again for controlling access, & the only new keyword is await. It is illustrated in the following sequence of code:

    var v: shared T;
    ...
    region v do
    begin
        ...
        await condition;
        ...
    end;

Implementation of this construct allows a process waiting on a condition within a critical region to be suspended in a special queue, pending satisfaction of the related condition. Unlike a semaphore implementation, a process waiting on a condition does not keep the critical section busy and so does not prevent others from using the resource, & when the condition is eventually satisfied, the suspended process is awakened. Since it is cumbersome to keep track of dynamic changes of the numerous possible individual conditions, the common implementation of the conditional critical region assumes that each completed process may have modified the system state in a way that has caused some of the waited-on conditions to become satisfied. Whenever a process leaves the critical section, all conditions that have suspended earlier processes are evaluated &, if warranted, one of the processes is awakened. When that process leaves, the next waiting process whose waiting condition is satisfied is activated, and so on until no more suspended processes are left or none of them has the necessary condition to proceed.
●●●●● Explain the 5 design goals of Distributed Shared Memory
In order to design a good distributed system there are many design goals; 5 of them are explained below.
    1) Concurrency: A server must handle client requests at the same time. Distributed systems are naturally concurrent, that is, there are multiple workstations running programs independently & at the same time. Concurrency is important because any distributed service that is not concurrent would become a bottleneck that would serialize the actions of its clients & thus reduce the natural concurrency of the system.
    2) Scalability: The capability of a system to adapt to increased load is its scalability. Systems have bounded resources & can become completely saturated under increased load. A scalable system reacts more gracefully to increased load than does a non-scalable one; its resources reach a saturated state later. Even a perfect design can't accommodate an ever-growing load, although adding new resources might solve the problem. A scalable system should have the potential to grow without problems. In short, a scalable design should withstand high service load, accommodate growth of the user community & enable simple integration of added resources.
    3) Openness: Two types of openness are important: non-proprietary interfaces & extensibility. Public protocols are important because they make it possible for software from many manufacturers to talk to each other. A system is extensible if it permits the customization needed to meet unanticipated requirements. Extensibility is important because it aids scalability & allows a system to survive over time as the demands on it & the ways it is used change.
    4) Fault tolerance: Many clients are affected by the failure of distributed services, unlike a non-distributed system in which a failure affects only single nodes. A distributed service depends on many components, like the network, switches etc., all of which must work. Furthermore, a client will often depend on multiple distributed services in order to function properly. A client that depends on N components, each with failure probability P, will fail with probability 1 - (1 - P)^N, which is roughly N*P when P is small (for example, N = 10 components with P = 0.01 give a failure probability of about 0.096 ≈ 0.1).
    5) Transparency: The final goal is transparency. We often use the term single system image to refer to this goal of making the distributed system look to programs like a tightly coupled system. This is really what distributed operating system software is all about. There are 8 types of transparency:
    a) Access transparency enables local & remote resources to be accessed using identical operations.
    b) Location transparency enables resources to be accessed without knowledge of their location.
    c) Concurrency transparency enables several processes to operate concurrently using shared resources without interference between them.
    d) Replication transparency enables multiple instances to be used to increase reliability & performance without knowledge of the replicas by users.
    e) Failure transparency enables the concealment of faults, allowing users & application programs to complete their tasks.
    f) Mobility transparency allows the movement of resources & clients within a system without affecting the operation of users or programs.
    g) Performance transparency allows the system to be reconfigured to improve performance.
    h) Scaling transparency allows the system & applications to expand in scale without change to the system structure or the application algorithms.

●●●●● Working Set
Peter Denning (1968) defines "the working set of information W(t, τ) of a process at time t to be the collection of information referenced by the process during the process time interval (t − τ, t)". Typically the units of information in question are considered to be memory pages. This is suggested to be an approximation of the set of pages that the process will access in the future (say during the next τ time units), and more specifically is suggested to be an indication of what pages ought to be kept in main memory to allow most progress to be made in the execution of that process. The effect of the choice of what pages are kept in main memory (as distinct from being paged out to auxiliary storage) is important: if too many pages of a process are kept in main memory, then fewer other processes can be ready at any one time; if too few pages of a process are kept in main memory, then the page fault frequency is greatly increased and the process makes very little progress. The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (often approximated by the most recently used pages) can be in RAM. The model is an all-or-nothing model: if the number of pages the process needs to use increases and there is no room in RAM, the process is swapped out of memory to free the memory for other processes to use. Often a heavily loaded computer has so many processes queued up that, if all the processes were allowed to run for one scheduling time slice, they would refer to more pages than there is RAM, causing the computer to "thrash".
By swapping some processes from memory, the result is that processes (even processes that were temporarily removed from memory) finish much sooner than they would if the computer attempted to run them all at once. The processes also finish much sooner than they would if the computer ran only one process at a time to completion, since this allows other processes to run and make progress during times when one process is waiting on the hard drive or some other global resource. In other words, the working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible; thus it optimizes CPU utilization and throughput. The main hurdle in implementing the working set model is keeping track of the working set. The working set window is a moving window: at each memory reference a new reference appears at one end and the oldest reference drops off the other end. A page is in the working set if it is referenced within the working set window. To avoid the overhead of keeping a list of the last k referenced pages, the working set is often implemented by keeping track of the time t of the last reference, and considering the working set to be all pages referenced within a certain period of time. The working set isn't a page replacement algorithm, but page-replacement algorithms can be designed to only remove pages that aren't in the working set of a particular process. One example is a modified version of the clock algorithm called WSClock.
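A minimal C sketch of the last-reference-time approximation described above (hypothetical structures; a real kernel would sample reference bits at clock interrupts):

    #include <stdbool.h>

    typedef struct { int page; long last_ref; } frame_t;

    /* A page is considered part of the working set if it was referenced
     * within the last tau time units (the working set window). */
    bool in_working_set(frame_t f, long now, long tau) {
        return now - f.last_ref <= tau;
    }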
●●●●●●● Bell & LaPadula Model
The Bell-LaPadula Model (abbreviated BLP) is a state machine model used for enforcing access control in government and military applications. The model is a formal state transition model of computer security policy that describes a set of access control rules which use security labels on objects and clearances for subjects. Security labels range from the most sensitive (e.g. "Top Secret") down to the least sensitive (e.g. "Unclassified" or "Public"). The Bell-LaPadula model is an example of a model where there is no clear distinction between protection and security. Features of the Bell-LaPadula model:
    The Bell-LaPadula model focuses on data confidentiality and controlled access to classified information, in contrast to the Biba integrity model, which describes rules for the protection of data integrity. In this formal model, the entities in an information system are divided into subjects and objects.
    The notion of a "secure state" is defined, and it is proven that each state transition preserves security by moving from secure state to secure state, thereby inductively proving that the system satisfies the security objectives of the model. The Bell-LaPadula model is built on the concept of a state machine with a set of allowable states in a computer system; the transition from one state to another is defined by transition functions.
    A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments, making up the security level). The clearance/classification scheme is expressed in terms of a lattice.
The model defines two mandatory access control (MAC) rules and one discretionary access control (DAC) rule with three security properties:
    1. The Simple Security Property: a subject at a given security level may not read an object at a higher security level (no read-up).
    2. The ★-property (read "star"-property): a subject at a given security level must not write to any object at a lower security level (no write-down). The ★-property is also known as the confinement property.
    3. The Discretionary Security Property: use of an access matrix to specify the discretionary access control.
The transfer of information from a high-sensitivity document to a lower-sensitivity document may happen in the Bell-LaPadula model via the concept of trusted subjects. Trusted subjects are not restricted by the ★-property; untrusted subjects are. Trusted subjects must be shown to be trustworthy with regard to the security policy. This security model is directed toward access control and is characterized by the phrase "no read up, no write down." Compare the Biba model, the Clark-Wilson model and the Chinese wall model. With Bell-LaPadula, users can create content only at or above their own security level (i.e. secret researchers can create secret or top-secret files but may not create public files: no write-down). Conversely, users can view content only at or below their own security level (i.e. secret researchers can view public or secret files, but may not view top-secret files: no read-up). The Bell-LaPadula model explicitly defined its scope. It did not treat the following extensively:
    Covert channels. Passing information via pre-arranged actions was described briefly.
    Networks of systems. Later modeling work did address this topic.
    Policies outside multilevel security. Work in the early 1990s showed that MLS is one version of Boolean policies, as are all other published policies.
Strong ★ property: The Strong ★ Property is an alternative to the ★-property, in which subjects may write only to objects with a matching security level. Thus, the write-up operation permitted by the usual ★-property is not present, only a write-to-same-level operation. The Strong ★ Property is usually discussed in the context of multilevel database management systems and is motivated by integrity concerns. It was anticipated in the Biba model, where it was shown that strong integrity in combination with the Bell-LaPadula model results in reading and writing at a single level.
Tranquility principle: The tranquility principle of the Bell-LaPadula model states that the classification of a subject or object does not change while it is being referenced. There are two forms of the tranquility principle: the "principle of strong tranquility" states that security levels do not change during the normal operation of the system; the "principle of weak tranquility" states that security levels may never change in such a way as to violate a defined security policy. Weak tranquility is desirable as it allows systems to observe the principle of least privilege: processes start with a low clearance level regardless of their owners' clearance, and progressively accumulate higher clearance levels as actions require it.

●●●●●● Briefly describe the multiprocessor operating system
A multiprocessor operating system manages all the available resources & scheduling functionality to form an abstraction that facilitates program execution & interaction with users. The processor is one of the important & basic types of resources that need to be managed, and for effective use of multiprocessors, processor scheduling is necessary.
Processor scheduling undertakes the following tasks:
    1> Allocation of processors among applications in such a manner as to be consistent with the system design objectives. This affects system throughput; throughput can be improved by co-scheduling several applications together, thus availing fewer processors to each.
    2> Ensuring efficient use of the processors allocated to an application. This primarily affects the speedup of the system.
The second basic type of resource that needs to be managed is memory. In multiprocessor systems, memory management is highly dependent on the architecture & interconnection scheme.
    a) In systems without physically shared memory, the operating system should provide a flexible memory model; access to shared memory may be simulated by means of a message-passing mechanism.
    b) In shared memory systems, the operating system should provide a flexible memory model that facilitates safe & efficient access to shared data structures & synchronization data.
A multiprocessor operating system should provide a hardware-independent, unified model of shared memory to facilitate porting of applications between different multiprocessor environments.

●●●●●● Fetch & Add instruction
The fetch & add instruction is a multiple-operation memory access instruction that atomically adds a constant to a memory location & returns the previous contents of the memory location. The instruction is defined as follows:

    function fetch_and_add(m: integer; c: integer)
    var temp: integer;
    begin
        temp := m;      { save the previous contents }
        m := m + c;     { add the constant           }
        return(temp);   { return the old value       }
    end;

The fetch & add instruction is powerful & allows the implementation of the P & V operations on a general semaphore s in the following manner:

    p(s): while (fetch_and_add(s, -1) <= 0) do
          begin
              fetch_and_add(s, 1);          { undo the decrement                  }
              while (s <= 0) do nothing;    { wait until the semaphore is positive }
          end;

    v(s): fetch_and_add(s, 1);
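The same spin-wait discipline can be written with C11 atomics, whose atomic_fetch_add returns the previous value just like the definition above; a sketch, assuming s is initialized to the number of available units:

    #include <stdatomic.h>

    atomic_int s;                    /* general semaphore, init to units available */

    void P(void) {
        while (atomic_fetch_add(&s, -1) <= 0) {   /* old value <= 0: not acquired */
            atomic_fetch_add(&s, 1);              /* undo the decrement           */
            while (atomic_load(&s) <= 0)
                ;                                 /* spin until units appear      */
        }
    }

    void V(void) {
        atomic_fetch_add(&s, 1);                  /* release one unit             */
    }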
●●●●●● Briefly describe the structure of the UNIX operating system
UNIX is a layered operating system. The innermost layer is the hardware, which provides the services for the OS. The following are the components of the UNIX OS:
    1) The kernel: the operating system, referred to in UNIX as the kernel, interacts directly with the hardware & provides services to the user programs. User programs don't need to know how to interact with the hardware; it's up to the kernel to provide the desired service. User programs interact with the kernel through a set of standard system calls (see the example below). Such services include accessing a file (open, read, write, link or execute a file); starting or updating accounting records; changing ownership of a file or directory; changing to a new directory; creating, suspending or killing a process; enabling access to hardware devices; & setting limits on system resources.
    2) The shell: the shell is often called a command line interpreter, since it presents a single prompt for the user. The user types a command; the shell invokes that command, & then presents the prompt again when the command has finished. This is done line by line, hence the term "command line". The shell program provides a method for adapting to each user's setup requirements & storing this information for reuse. The user interacts with /bin/sh, which interprets each command typed. Internal commands are handled within the shell, & external commands are invoked as programs like ls, grep, sort, ps etc.
    3) System utilities: the system utilities are intended to be controlling tools that do a single task exceptionally well. Users can solve problems by integrating these tools instead of writing a large monolithic application.
    4) Application programs: some application programs include the Emacs editor, GCC, G++, Xfig, LaTeX. UNIX works very differently from systems in which kernel tasks examine the requests of a process: the process itself enters kernel space. This means that rather than the process waiting outside the kernel, it enters the kernel itself. When a process invokes a system call, the hardware is switched to the kernel settings & the process executes from the kernel image.
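For example, a user program requests kernel services through system calls such as open, read and write (POSIX; /etc/hostname is just an illustrative readable file; error handling kept minimal):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        char buf[128];
        int fd = open("/etc/hostname", O_RDONLY);   /* ask the kernel to open a file */
        if (fd < 0)
            return 1;
        ssize_t n = read(fd, buf, sizeof buf);      /* ask the kernel to read it     */
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);   /* write the bytes to stdout     */
        close(fd);                                  /* release the descriptor        */
        return 0;
    }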
●●●●●●● Explain the Resource Allocation Graph for multiple instances with an example & also explain the recovery of deadlock
Deadlock can be described more precisely in terms of a directed graph called a system resource allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is divided into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all active processes in the system, & R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system. A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process Pi has requested an instance of resource type Rj & is currently waiting for that resource. A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance of resource type Rj has been allocated to process Pi. An edge Rj → Pi is called an assignment edge & Pi → Rj is called a request edge. Pictorially we represent each process Pi as a circle & each resource type Rj as a square. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the square. A request edge points only to the square Rj, whereas an assignment edge must also designate one of the dots in the square. When process Pi requests an instance of resource type Rj, a request edge is inserted in the graph. When this request can be fulfilled, the request edge is instantaneously transformed into an assignment edge. When the process no longer needs access to the resource, it releases the resource, & as a result the assignment edge is deleted. The graph in the diagram depicts the following situation:
(a) The sets P, R, E, where
    i) P = {P1, P2, P3}
    ii) R = {R1, R2, R3, R4}
    iii) E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
(b) Resource instances:
    i) One instance of resource type R1;
    ii) Two instances of resource type R2;
    iii) One instance of resource type R3;
    iv) Three instances of resource type R4;
(c) Process states:
    i) Process P1 is holding an instance of resource type R2 & is waiting for an instance of resource type R1.
    ii) Process P2 is holding an instance of R1 & an instance of R2, & is waiting for an instance of resource type R3.
    iii) Process P3 is holding an instance of R3.
(I) From the definition it follows that if the graph contains no cycle, then no process in the system is deadlocked; if there is a cycle, a deadlock may exist.
(II) If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred: if the cycle involves only a set of resource types, each of which has a single instance, then each process involved in the cycle is deadlocked. But if resource types have several instances, then a cycle doesn't necessarily imply that a deadlock has occurred; in this case a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock. We can use a protocol to ensure that deadlock never occurs: the system can use either a deadlock prevention or a deadlock avoidance method. Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions can't hold; these methods prevent deadlocks by constraining how requests for resources can be made.
(III) If a system employs neither deadlock prevention nor a deadlock avoidance algorithm, then a deadlock situation may occur. In this environment, the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred & an algorithm to recover from the deadlock.

●●●●●● Explain the concept of Virtual Memory. List any two methods of implementation & explain any one with the help of a diagram
Virtual memory is a technique that allows the execution of processes that may not be completely in memory. One major advantage of this scheme is that programs can be larger than physical memory. Further, virtual memory abstracts main memory into an extremely large uniform array of storage, separating logical memory as viewed by the user from physical memory. Virtual memory also allows processes to easily share files & address spaces, & it provides an efficient mechanism for process creation. Virtual memory is not easy to implement, however, & may substantially decrease performance if it is used carelessly. The two methods of implementation are as follows:
    (1) Principle of operation: virtual memory can be implemented as an extension of paged or segmented memory management, or as a combination of both. Accordingly, address translation is performed by means of a page map table, a segment descriptor table, or both. The important characteristic is that in virtual memory systems some portions of the address space of the running process can be absent from main memory. To emphasize the distinction, the term real memory is often used to denote physical memory. The operating system dynamically allocates real memory to portions of the virtual address space. The address translation mechanism must be able to associate virtual names with physical locations.
The type of missing item depends on the basic underlying memory-management scheme and may be a segment or a page. The page map table contains an entry for each virtual page of the related process, as the diagram above describes.
(2) Management of Virtual Memory: assume that paging is used as the underlying memory-management scheme. The implementation of virtual memory then requires maintenance of one page map table (PMT) per active process. A new component of the memory manager's data structures is the file map table (FMT), which records the secondary-storage addresses of all pages; the memory manager uses it to bring missing pages into main memory. The PMT base may be kept in the control block of the related process, and a pair of page-map-table base and page-map-table length registers may be provided in h/w to expedite the address-translation process and to reduce the size of the PMT for smaller processes. A sketch of the fault-service path that ties the PMT and FMT together follows.
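A page-fault service routine built on these two tables might look like the sketch below. The helpers allocate_frame and read_block are invented stand-ins for frame replacement and disk I/O (implemented trivially here so the sketch runs):

/* Sketch of servicing a page fault with a PMT and an FMT; the helper
   functions are simplified stand-ins for frame allocation and disk I/O. */
#include <stdio.h>
#include <stdint.h>

#define NUM_PAGES 256u

typedef struct { int present; uint32_t frame; } pmt_entry;
typedef struct { uint32_t disk_addr; } fmt_entry;   /* secondary-storage address */

static pmt_entry pmt[NUM_PAGES];
static fmt_entry fmt[NUM_PAGES];

static uint32_t next_free = 0;
static uint32_t allocate_frame(void) { return next_free++; }  /* no eviction here */
static void read_block(uint32_t disk_addr, uint32_t frame) {
    printf("disk block %u -> frame %u\n", disk_addr, frame);  /* pretend I/O */
}

/* Called when address translation finds the page absent. */
static void service_page_fault(uint32_t page) {
    uint32_t frame = allocate_frame();        /* may evict a page in a real system */
    read_block(fmt[page].disk_addr, frame);   /* fetch the page from disk */
    pmt[page].frame   = frame;                /* update the mapping */
    pmt[page].present = 1;                    /* mark the page resident */
}

int main(void) {
    fmt[7].disk_addr = 4242;                  /* illustrative disk address */
    service_page_fault(7);
    return 0;
}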
●●●●●●Explain the concept of segmentation with the help of a diagram. Make a relative comparison between paging & segmentation. Explain the concept of page fault with the help of an example
Segmentation is a memory-management scheme that supports the user's view of memory. A logical address space is a collection of segments. Each segment has a name and a length, and the user therefore specifies each address by two quantities: (a) a segment name and (b) an offset within the segment. The diagram is described below. Although most of our specific examples are based on paging, it is also possible to implement virtual memory in the form of demand segmentation. Such implementations usually inherit the benefits of sharing and protection provided by segmentation; moreover, their placement procedures can exploit explicit awareness of the types of information contained in particular segments. For example, a working set of segments should include at least one each of the code, data and stack segments. Segment references also alert the operating system to changes of locality. However, the variability of segment sizes complicates the management of both main and secondary memory. Placement strategies, i.e. methods of finding a suitable area of free memory to load an incoming segment, are quite complex in segmented systems. Paging is very convenient for the management of main and secondary memory, but it is inferior with regard to protection and sharing, and the transparency of paging necessitates the use of probabilistic replacement algorithms. Both segmented and paged implementations of virtual memory have their respective advantages and disadvantages, and neither is superior to the other over all characteristics. Some computer systems combine the two approaches in order to enjoy the benefits of both. The diagram is shown below.
The working set model is based on the assumption of locality. The working set model is successful, and knowledge of the working set can be useful for pre-paging, but it seems a clumsy way to control thrashing. A strategy that uses the page-fault frequency takes a more direct approach. The specific problem is how to prevent thrashing. Thrashing has a high page-fault rate; thus we want to control the page-fault rate. When it is too high, we know that the process needs more frames; similarly, if the page-fault rate is too low, the process may have too many frames. We can therefore establish upper and lower bounds on the desired page-fault rate. If the page-fault rate exceeds the upper limit, we allocate that process another frame; if the page-fault rate falls below the lower limit, we remove a frame from that process. Thus we can directly measure and control the page-fault rate to prevent thrashing. If the page-fault rate increases and no more free frames are available, we must select some process and suspend it; the freed frames are then distributed to processes with high page-fault rates. A minimal sketch of this feedback loop is shown below.
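The following fragment sketches the page-fault-frequency policy just described. The two thresholds and the fault-rate bookkeeping are illustrative assumptions:

/* Sketch of page-fault-frequency (PFF) control; thresholds are illustrative. */
#include <stdio.h>

#define UPPER 0.10   /* faults per reference above this: give the process a frame */
#define LOWER 0.01   /* below this: take a frame away */

typedef struct { double fault_rate; int frames; } process;

static void pff_adjust(process *p, int *free_frames) {
    if (p->fault_rate > UPPER) {
        if (*free_frames > 0) { p->frames++; (*free_frames)--; }
        /* else: suspend some process and redistribute its freed frames */
    } else if (p->fault_rate < LOWER && p->frames > 1) {
        p->frames--; (*free_frames)++;   /* reclaim an unneeded frame */
    }
}

int main(void) {
    process p = { 0.25, 4 };             /* faulting heavily with 4 frames */
    int free_frames = 2;
    pff_adjust(&p, &free_frames);
    printf("frames=%d free=%d\n", p.frames, free_frames);  /* 5 and 1 */
    return 0;
}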
●●●●●●What is meant by context switch? Explain the o/h incurred due to context switching on processes & threads
Changing context from an executing program to the interrupt routine that will assume control of the processor requires a combination of h/w and s/w. Since the interrupted program knows neither when an interrupt will assume control of the processor nor which part of the machine context will be modified by the interrupt routine, the interrupt service routine itself is charged with saving and restoring the context of the preempted activity. In a context switch, the state of the first process must be saved somehow, so that when the scheduler gets back to the execution of the first process, it can restore this state and continue normally. The state of the process includes all the registers that the process may be using, especially the program counter, plus any other data that may be necessary. All of this state is kept in a data structure called the PCB. In order to switch processes, the PCB of the first process must be updated and saved. Threads are normally cheaper than processes, and they can be scheduled for execution in a user-dependent way with less o/h. They are cheaper because they do not each have a full set of resources: whereas the PCB for a heavyweight process is large and costly to context switch, the PCBs for threads are much smaller, since each thread has only a stack and some registers to manage. A thread has no open-file lists or resource lists, and no accounting structures to update; all of these resources are shared by all threads within the process.
●●●●●●What are the limitations of Banker's Algorithm used for deadlock avoidance?
There are some problems with the Banker's algorithm, as follows:
a> It is time consuming to execute on every resource operation.
b> If the claim information is not accurate, system resources may be underutilized.
c> Another difficulty can occur when the system is heavily loaded: so many resources are granted away that very few safe sequences remain, and as a consequence jobs will be executed sequentially. For this reason the Banker's algorithm is referred to as the "Most Liberal" granting process.
d> A process's claim must be less than the total number of units of the resource in the system; if not, the process is not accepted by the manager.
e> Since the state without the new process is safe, so is the state with the new process: just use the order we had originally and put the new process at the end.
f> A resource becoming unavailable can result in an unsafe state.
●●●●Advantage & disadvantage of Multiuser operating system
The advantage of a multiuser operating system is that the h/w is normally very expensive, and such a system lets a number of users share this expensive resource, so the cost is divided amongst the users. Since the resources are shared, they are more likely to be in use than sitting idle being unproductive.
The disadvantage of multiuser computer systems is that as more users access them, the performance becomes slower and slower. Another limitation is the cost of h/w, as a multiuser operating system requires a lot of disk space and memory. In addition, the actual s/w for multiuser operating systems tends to cost more than single-user operating systems.
●●●●●●What is Remote Procedure Call or RPC? How does RPC work? Give its limitations also
Distributed systems usually use remote procedure call (RPC) as a fundamental building block for implementing remote operations; RPC is a powerful technique for constructing distributed client-server based applications. It is based on extending the notion of a conventional, or local, procedure call, so that the called procedure need not exist in the same address space as the calling procedure. The two processes may be on the same system, or they may be on different systems with a n/w connecting them. By using RPC, programmers of distributed applications avoid the details of the interface with the n/w.
An RPC is analogous to a function call. Like a function call, when an RPC is made the calling arguments are passed to the remote procedure and the caller waits for a response to be returned from the remote procedure. The following figure shows the flow of activity that takes place during an RPC call b/w two networked systems. The client makes a procedure call that sends a request to the server and waits; the client thread is blocked from processing until either a reply is received or it times out. When the request arrives, the server calls a dispatch routine that performs the requested service and sends the reply to the client. After the RPC call is completed, the client program continues. RPC specifically supports network applications. RPC implementations are nominally incompatible with other RPC implementations, although some are compatible. Using a single implementation of RPC in a system will most likely result in a dependence on the RPC vendor for maintenance support and future enhancements, which could have a highly negative impact on a system's flexibility, maintainability and portability. Because there is no single standard for implementing RPC, different implementations may offer different features, and these features affect the design and cost of an RPC-based application. A sketch of the client-stub idea is given below.
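The fragment below sketches only the client-stub idea behind RPC; it is not any particular RPC library. The message format, the remote procedure square and the transport helpers are invented for illustration (and simulated locally so the sketch runs as-is):

/* Sketch of an RPC client stub: the call looks local, but the stub
   marshals the arguments, sends them to the server and blocks for the
   reply. The message format and transport are invented for illustration. */
#include <stdio.h>
#include <string.h>

typedef struct { char proc_name[16]; int arg; } request;
typedef struct { int result; } reply;

/* Local stand-ins for the n/w transport, so the sketch runs without a
   real network. */
static request pending;
static void send_to_server(const request *rq) { pending = *rq; }
static reply recv_from_server(void) {
    reply rp = { pending.arg * pending.arg };   /* the "server" squares it */
    return rp;
}

/* Client stub for a hypothetical remote procedure square(x). */
static int remote_square(int x) {
    request rq;
    strcpy(rq.proc_name, "square");   /* marshal the procedure name */
    rq.arg = x;                       /* marshal the argument */
    send_to_server(&rq);              /* transmit the request ... */
    reply rp = recv_from_server();    /* ... and block for the reply */
    return rp.result;                 /* unmarshal the result */
}

int main(void) { printf("%d\n", remote_square(7)); return 0; }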
●●●●●●Linked Allocation & Indexed Allocation
Linked allocation: The problems in contiguous allocation can be traced directly to the requirement that space be allocated contiguously and that the files needing this space are of different sizes. These requirements can be avoided by using linked allocation. In linked allocation, each file is a linked list of disk blocks. The directory contains a pointer to the first (and optionally the last) block of the file. For example, a file of 5 blocks which starts at block 4 might continue at block 7, then block 16, block 10, and finally block 27. Each block contains a pointer to the next block, and the last block contains a NIL pointer; the value -1 may be used for NIL to differentiate it from block 0. With linked allocation, each directory entry has a pointer to the first disk block of the file. This pointer is initialized to nil (the end-of-list pointer value) to signify an empty file. A write to a file removes the first free block and writes to that block; this new block is then linked to the end of the file. To read a file, the pointers are simply followed from block to block. There is no external fragmentation with linked allocation: any free block can be used to satisfy a request. Notice also that there is no need to declare the size of a file when that file is created; a file can continue to grow as long as there are free blocks. Linked allocation does have disadvantages, however. The major problem is that it is inefficient for direct access; it is effective only for sequential-access files. To find the ith block of a file, one must start at the beginning of that file and follow the pointers until the ith block is reached, and each access to a pointer requires a disk read. Another severe problem is reliability: a bug in the OS or a disk hardware failure might result in pointers being lost or damaged, the effect of which could be picking up a wrong pointer and linking it into the free list or into another file.
Indexed allocation: Linked allocation does not support efficient random access of a file, since the pointers are hidden in the blocks and must be retrieved sequentially. Indexed allocation solves this problem by bringing all the pointers together into an index block. Indexed allocation uses an index to directly track the file's block locations: a user declares the maximum file size, and the file system allocates a file header with an array of pointers big enough to point to all the file's blocks. Although indexed allocation provides fast disk-location lookups for random accesses, file blocks may be scattered all over the disk. The sketch below contrasts the two lookup costs.
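To make the trade-off concrete, this sketch models both schemes in memory, using the example file from the text (blocks 4, 7, 16, 10, 27); each hop in the linked case stands for one disk read:

/* Sketch contrasting block lookup under linked and indexed allocation,
   using an in-memory model of the disk. */
#include <stdio.h>

#define NBLOCKS 32
#define NIL     (-1)

static int next[NBLOCKS];     /* linked allocation: one pointer stored per block */
static int index_block[5];    /* indexed allocation: all pointers together */

/* Linked: finding the ith block needs i pointer follow-ups (disk reads). */
static int linked_lookup(int first, int i) {
    int b = first;
    while (i-- > 0 && b != NIL)
        b = next[b];          /* each hop would be one disk read */
    return b;
}

/* Indexed: read the index block once, then access directly. */
static int indexed_lookup(int i) { return index_block[i]; }

int main(void) {
    /* The file from the text: blocks 4 -> 7 -> 16 -> 10 -> 27. */
    next[4] = 7; next[7] = 16; next[16] = 10; next[10] = 27; next[27] = NIL;
    int ptrs[5] = {4, 7, 16, 10, 27};
    for (int i = 0; i < 5; i++) index_block[i] = ptrs[i];
    printf("4th block: linked=%d, indexed=%d\n",
           linked_lookup(4, 3), indexed_lookup(3));   /* both print 10 */
    return 0;
}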
A file system needs to provide additional mechanisms to ensure that disk blocks are grouped together for good performance (e.g., a disk defragmenter). Also, as a file increases in size, the file system needs to reallocate the index array and copy the old entries; ideally, the index can grow incrementally.
(Diagram: a file header whose pointer array points to data blocks 0, 1, 2, ...)
Multilevel Indexed Allocation
Linux uses multilevel indexed allocation, so certain index entries point to index blocks as opposed to data blocks. The file header, the i_node data structure, holds 15 index pointers. The first 12 pointers point directly to data blocks. The 13th pointer points to a single indirect block, which contains 1,024 additional pointers to data blocks. The 14th pointer points to a double indirect block, which contains 1,024 pointers to single indirect blocks, and the 15th pointer points to a triple indirect block, which contains 1,024 pointers to double indirect blocks. The sketch below shows how a logical block number is resolved against this layout.
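Assuming the 12-direct/1,024-entries-per-index-block layout just described, the following sketch decides which pointer chain a logical block number falls under, and counts the disk reads needed to reach its data:

/* Sketch of resolving a logical block number in the skewed multilevel
   index described above (12 direct pointers, 1,024 entries per block). */
#include <stdio.h>

#define NDIRECT   12
#define PER_BLOCK 1024L

static const char *resolve(long blk, int *disk_reads) {
    if (blk < NDIRECT) { *disk_reads = 1; return "direct"; }
    blk -= NDIRECT;
    if (blk < PER_BLOCK) { *disk_reads = 2; return "single indirect"; }
    blk -= PER_BLOCK;
    if (blk < PER_BLOCK * PER_BLOCK) { *disk_reads = 3; return "double indirect"; }
    blk -= PER_BLOCK * PER_BLOCK;
    if (blk < PER_BLOCK * PER_BLOCK * PER_BLOCK) { *disk_reads = 4; return "triple indirect"; }
    *disk_reads = 0;
    return "beyond the maximum file size";   /* the index caps the file size */
}

int main(void) {
    long samples[] = {5, 500, 500000L, 500000000L};
    for (int i = 0; i < 4; i++) {
        int reads;
        const char *level = resolve(samples[i], &reads);
        printf("block %ld: %s (%d disk reads)\n", samples[i], level, reads);
    }
    return 0;
}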
This skewed multilevel index tree is optimized for both small and large files. Small files can be accessed through the first 12 pointers, while large files can grow with incremental allocations of index blocks. However, accessing a data block under the triple indirect block involves multiple disk accesses: one disk access for the triple indirect block, another for the double indirect block, and yet another for the single indirect block before accessing the actual data block. Also, the number of pointers provided by this data structure caps the largest file size. Finally, the boundaries between the last four pointers are somewhat arbitrary: given a block number, it is not immediately obvious which of the 15 pointers to follow.
●●●●●●●●External Fragmentation & Internal Fragmentation
External fragmentation: External fragmentation is the phenomenon in which free storage becomes divided into many small pieces over time. It is a weakness of certain storage-allocation algorithms, occurring when an application allocates and de-allocates ("frees") regions of storage of varying sizes, and the allocation algorithm responds by leaving the allocated and de-allocated regions interspersed. The result is that although free storage is available, it is effectively unusable, because it is divided into pieces that are too small to satisfy the demands of the application and are not contiguous. The term "external" refers to the fact that the unusable storage is outside the allocated regions: wastage of space outside any allocated partition of main memory is said to be external fragmentation. Both first-fit and best-fit strategies suffer from it, and depending on the total amount of memory and the request sizes, external fragmentation may be a minor or a major problem. (With modern operating systems that use a paging scheme, the more common type of RAM fragmentation is internal fragmentation, which occurs when memory is allocated in frames and the frame size is larger than the amount of memory requested.)
Internal fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition, but it is not being used. Internal fragmentation occurs when storage is allocated without the intention of using all of it; this space is wasted, but the waste is often accepted in return for increased efficiency or simplicity. The term "internal" refers to the fact that the unusable storage is inside the allocated region: wastage of space within a partition of main memory is said to be internal fragmentation. For example, in many file systems, each file always starts at the beginning of a cluster, because this simplifies organization and makes it easier to grow files.
Any space left over between the last byte of the file and the first byte of the next cluster is a form of internal fragmentation called file slack or slack space. Slack space is a very important source of evidence in computer forensic investigation. Similarly, a program which allocates a single byte of data is often allocated many additional bytes for metadata and alignment; this extra space is also internal fragmentation. A short worked example is given below.
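As a worked example, assume 4,096-byte clusters: a 10,000-byte file then occupies three clusters and wastes the remainder of the last one (the sizes are illustrative):

/* Worked example of internal fragmentation with fixed-size clusters. */
#include <stdio.h>

int main(void) {
    long cluster = 4096, file_size = 10000;                    /* assumed sizes */
    long clusters_used = (file_size + cluster - 1) / cluster;  /* round up: 3 */
    long slack = clusters_used * cluster - file_size;          /* 12288 - 10000 */
    printf("clusters used = %ld, slack space = %ld bytes\n",
           clusters_used, slack);                              /* 3 and 2288 */
    return 0;
}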
●●●●●●●What is semaphore? Give the solution to the producer-consumer problem using semaphores & explain the solution
A semaphore is a hardware or software tag variable whose value indicates the status of a common resource. Its purpose is to lock the resource being used: a process which needs the resource checks the semaphore to determine the status of the resource and then decides how to proceed. In multitasking operating systems, activities are synchronized by using semaphore techniques. In computer science, the producer-consumer problem (also known as the bounded-buffer problem) is a classical example of a multi-process synchronization problem. The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer. An inadequate solution could result in a deadlock where both processes are waiting to be awakened. The problem can also be generalized to have multiple producers and consumers. Semaphores solve the problem of lost wakeup calls. In the solution below we use two semaphores, fillCount and emptyCount. fillCount is the number of items already in the buffer and available to be read, and emptyCount is the number of available spaces in the buffer where items could be written. fillCount is incremented and emptyCount decremented when a new item has been put into the buffer. If the producer tries to decrement emptyCount while its value is zero, the producer is put to sleep; the next time an item is consumed, emptyCount is incremented and the producer wakes up. The consumer works analogously.

semaphore fillCount = 0;               // items produced
semaphore emptyCount = BUFFER_SIZE;    // remaining space

procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCount);
        putItemIntoBuffer(item);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {
        down(fillCount);
        item = removeItemFromBuffer();
        up(emptyCount);
        consumeItem(item);
    }
}

The solution above works fine when there is only one producer and one consumer. Unfortunately, with multiple producers or consumers this solution contains a serious race condition that could result in two or more processes reading or writing into the same slot at the same time. To understand how this is possible, imagine how the procedure putItemIntoBuffer() can be implemented. It could contain two actions, one determining the next available slot and the other writing into it. If the procedure can be executed concurrently by multiple producers, then the following scenario is possible:
1. Two producers decrement emptyCount
2. One of the producers determines the next empty slot in the buffer
3. The second producer determines the next empty slot and gets the same result as the first producer
4. Both producers write into the same slot
To overcome this problem, we need a way to make sure that only one producer is executing putItemIntoBuffer() at a time; in other words, we need a way to execute a critical section with mutual exclusion. To accomplish this we use a binary semaphore called mutex. Since the value of a binary semaphore can only be either one or zero, only one process can be executing between down(mutex) and up(mutex). The solution for multiple producers and consumers is shown below.

semaphore mutex = 1;
semaphore fillCount = 0;
semaphore emptyCount = BUFFER_SIZE;

procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCount);
        down(mutex);
        putItemIntoBuffer(item);
        up(mutex);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {
        down(fillCount);
        down(mutex);
        item = removeItemFromBuffer();
        up(mutex);
        up(emptyCount);
        consumeItem(item);
    }
}

The order in which the different semaphores are incremented or decremented is essential: changing the order might result in a deadlock. For example, taking mutex before emptyCount would let a producer sleep on a full buffer while holding the mutex, blocking every consumer.
●●●●●●●Pipes & Filters in the UNIX operating system
A pipe is a unidirectional channel that may be written at one end and read at the other. A pipe is used for communication between two processes: the producer process writes data into one end of the pipe and the consumer process retrieves them from the other end. The system provides limited buffering for each open pipe. Control of data flow is performed by the system, which halts a producer attempting to write into a full pipe and halts a consumer attempting to read from an empty pipe.
In UNIX and Unix-like operating systems, a filter is a program that gets most of its data from its standard input (the main input stream) and writes its main results to its standard output (the main output stream). UNIX filters are often used as elements of pipelines: the pipe operator ("|") on a command line signifies that the main output of the command to the left is passed as main input to the command on the right. A minimal sketch of the system calls underneath this mechanism follows.
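The sketch below uses the POSIX pipe, fork, read and write calls: the parent acts as the producer and its child as the consumer; error handling is kept minimal for brevity:

/* Minimal sketch of a pipe between a producer (parent) and a consumer
   (child) process, using the POSIX pipe/fork/read/write calls. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) return 1;

    if (fork() == 0) {               /* child: the consumer */
        close(fd[1]);                /* not writing */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);  /* blocks if pipe empty */
        if (n > 0) { buf[n] = '\0'; printf("consumer read: %s\n", buf); }
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                    /* parent: the producer; not reading */
    const char *msg = "hello through the pipe";
    write(fd[1], msg, strlen(msg));  /* would block if the pipe filled up */
    close(fd[1]);
    wait(NULL);
    return 0;
}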
●●●●●●●Deadlock avoidance algorithm or Banker's algorithm
This is an algorithm that deals with operating-system resources such as memory or processor time as though they were money, and with the processes competing for them as though they were bank customers; the operating system takes on the role of the banker. The banker has a set of units to allocate to its customers. Each customer states in advance its total requirement for each resource. The banker accepts a request for more units only if the customer's stated maximum doesn't exceed the capital the banker has. If the loan is granted, the customer agrees to return the units within a finite time. The current loan of a customer can't exceed his maximum need, and during a transaction a customer only borrows or returns one unit at a time.
This prevents circular waiting. It allows piecemeal allocation, but before any partial allocation the remaining free resource is checked to make sure enough is free. The problem is execution time: if we have m resource types and n processes, the worst-case execution time is approximately mn(n+1)/2 steps. For m and n both equal to 10, each resource request takes about half a second, which is bad.
The current position is said to be safe if the banker can allow all his present customers to complete their transactions within a finite time; otherwise it is said to be unsafe (but that doesn't necessarily mean an inevitable deadlock, as there is a certain time dependency).
A customer is characterized by his current loan and his claim, where the claim is the customer's need minus the current loan to that customer. Similarly for the banker, the total "cash" that he has is the starting capital minus the sum of all the loans. The banker prevents deadlock by satisfying one request at a time, but only when absolutely necessary.
Consider the more general problem with several "currencies", as shown by this pseudo code:

TYPE
  B = 1..number of customers;
  D = 1..number of currencies;
  C = ARRAY [D] OF integer;
  S = RECORD
        transactions : ARRAY [B] OF RECORD
                         claim, loan : C;
                         completed   : boolean
                       END;
        capital, cash : C
      END;

PROCEDURE return_loan(VAR loan, cash : C);
VAR currency : D;
BEGIN
  FOR every currency DO
    cash[currency] := cash[currency] + loan[currency]
END;

PROCEDURE complete_transactions(VAR state : S);
VAR customer : B;
    progress : boolean;
BEGIN
  WITH state DO
    REPEAT
      progress := false;
      FOR every customer DO
        WITH transactions[customer] DO
          IF NOT completed THEN
            IF claim <= cash THEN   { componentwise: the claim can be met }
              BEGIN
                return_loan(loan, cash);
                completed := true;
                progress := true
              END
    UNTIL NOT progress
END;

FUNCTION all_transactions_completed (state : S) : boolean;
BEGIN
  WITH state DO
    all_transactions_completed := (capital = cash)
END;

FUNCTION safe (current_state : S) : boolean;
VAR state : S;
BEGIN
  state := current_state;
  complete_transactions(state);
  safe := all_transactions_completed(state)
END;

If all transactions can be completed, the current position is safe and it is all right to honor the request for a new loan. In practice, a process may crash for one of several reasons, liberating its held resources and making no further claim. If all the OS resources were controlled by such an algorithm, we would need just the variable current_state and the operation safe; safe can be micro-coded as a single machine instruction, or included in the resident part of the OS as code.
●●●●●●Acyclic-Graph Directory
The acyclic graph is a natural generalization of the tree-structured directory scheme. It allows common subdirectories to be shared: a shared directory or file will exist in the file system in two (or more) places at once. A tree structure prohibits the sharing of files or directories, whereas an acyclic graph (a graph with no cycles) allows directories to share subdirectories and files, so the same file or subdirectory may appear in two different directories. It is important to note that a shared file (or directory) is not the same as two copies of the file. With two copies, each programmer can view the copy rather than the original, but if one programmer changes the file, the changes will not appear in the other's copy. With a shared file, only one actual file exists, so any changes made by one person are immediately visible to the other. A common way of implementing sharing, exemplified by many UNIX systems, is to create a new directory entry called a link. When a reference to a file is made, we search the directory; if the directory entry is marked as a link, then the name of the real file is included in the link information, and we resolve the link by using that path name to locate the real file. Links are easily identified by their format in the directory entry and are effectively named indirect pointers. In a system where sharing is implemented by symbolic links, deletion is somewhat easier to handle:
• The deletion of a link need not affect the original file; only the link is removed.
• If the file entry itself is deleted, the space for the file is de-allocated, leaving the links dangling.
●●●●●●●PCB/TCB
The PCB is the area in memory where the OS can find all the information it needs to know about a process; that is, the PCB holds the information of a process and lets the OS keep track of all processes. A process control block (PCB) is a data structure (a table) that holds information about a process. Every process or program that runs needs a PCB: when a user requests to run a particular program, the operating system constructs a process control block for it.
The PCB contains important information about the specific process, including:
• The current state of the process, i.e. whether it is ready, running, waiting, or whatever.
• Unique identification of the process, in order to track "which is which".
• A pointer to the parent process.
• Similarly, a pointer to the child process (if it exists).
• The priority of the process (a part of the CPU-scheduling information).
• Pointers to locate the memory of the process.
• A register save area.
• The processor it is running on.
The PCB is a store that allows the operating system to locate key information about a process; thus, the PCB is the data structure that defines a process to the operating system. A sketch of such a structure is given below.
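A minimal C sketch of a PCB carrying just the fields listed above; the field names and sizes are illustrative, and real systems keep many more fields:

/* Sketch of a process control block with the fields listed above. */
#include <stdint.h>

typedef enum { READY, RUNNING, WAITING } proc_state;

typedef struct pcb {
    int          pid;            /* unique identification              */
    proc_state   state;          /* ready, running, waiting, ...       */
    struct pcb  *parent;         /* pointer to the parent process      */
    struct pcb  *child;          /* pointer to a child process, if any */
    int          priority;       /* CPU-scheduling information         */
    uintptr_t    memory_base;    /* locates the process's memory       */
    uint64_t     registers[32];  /* register save area                 */
    int          cpu;            /* the processor it is running on     */
} pcb;

int main(void) { pcb p = {0}; p.state = READY; return p.pid; }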
●●●●●●Fault tolerance techniques for distributed systems
A reliable client-server model serves as a good example of the failure models in distributed systems. The main hardware reliability models are the series and parallel models. Another important issue in distributed systems is agreement in faulty distributed systems; two classic cases that make the subject clear are the Two Army Problem and the Byzantine Generals Problem. A further important subject is the replication of data in fault-tolerant distributed systems: active and passive replication each have their own advantages and disadvantages, and the Gossip Architecture is a mixture of these two models. Beyond replication comes recovery in distributed systems. Fault tolerance is the ability of a system to perform its function correctly even in the presence of internal faults; the purpose of fault tolerance is to increase the dependability of a system. A complementary but separate approach to increasing dependability is fault prevention, which consists of techniques, such as inspection, whose intent is to eliminate the circumstances by which faults arise.
●●●●Distributed Shared Architecture
In the client-server model, a remote file system allows a computer to mount one or more file systems from one or more remote machines, whereas the distributed shared memory (DSM) model is an abstraction of shared memory implemented in a distributed system. In the client-server model the machine containing the files is the server and the machine wanting access to the files is the client; in DSM, by contrast, the distributed system consists of loosely coupled systems with no physically shared memory, and the sharing may be implemented by a part of the kernel that uses message passing.
●●●●Features of the UNIX operating system & the Windows operating system
UNIX: 1) Multiuser, i.e. more than one user can use the machine at a time, supported via terminals. 2) Multitasking, i.e. more than one program can be run at a time. 3) Hierarchical file system, to support file organization and maintenance in an easy manner. 4) Portability: only a small part of the kernel is written in assembler, which means the OS can easily be converted to run on different h/w.
Windows: 1) Support for FAT16, FAT32 and NTFS. 2) Increased uptime of the system and significantly fewer OS reboot scenarios. 3) Protected memory for individual applications and processes, to avoid a single application bringing the system down. 4) Encrypted file systems protect sensitive data. 5) Secure virtual private networking supports tunneling into a private LAN over the public Internet. 6) Personalized menus adapt to the way we work. 7) Support for Universal Serial Bus (USB) and IEEE 1394 for greater-bandwidth devices.
In distributed environments, mutual exclusion is provided via a series of messages passed between nodes that are interested in a certain resource. Several algorithms to solve mutual exclusion for distributed systems have been developed; they can be distinguished by their approaches as token-based and non-token-based. The former may be based on broadcast protocols, or they may use logical structures with point-to-point communication. One such algorithm uses a token-passing approach with point-to-point communication along a logical structure: each node keeps a local queue and records the time of requests locally. These queues form a virtual global queue, ordered by priority and FIFO within each priority level, with regard to the property of relative fairness. A minimal sketch of the token-passing idea is given below.
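As a simplified illustration of the token-based approach, the sketch below passes a single token around a plain logical ring; it deliberately omits the per-node request queues and priority ordering described above, and all names are invented:

/* Sketch of token-based mutual exclusion on a logical ring of nodes.
   Holding the single token confers the right to enter the critical
   section, so no two nodes can be inside it at once. */
#include <stdio.h>

#define N 4                       /* number of nodes in the ring */

typedef struct { int id; int wants_cs; } node;

int main(void) {
    node nodes[N] = {{0, 0}, {1, 1}, {2, 0}, {3, 1}};
    int token = 0;                /* node currently holding the token */

    for (int step = 0; step < 2 * N; step++) {
        node *n = &nodes[token];
        if (n->wants_cs) {        /* holding the token => safe to enter */
            printf("node %d enters the critical section\n", n->id);
            n->wants_cs = 0;      /* leave the critical section */
        }
        token = (token + 1) % N;  /* pass the token to the next node */
    }
    return 0;
}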