This document discusses the goals, principles, and implementation of protection in computer systems. The key points are:
1) Protection aims to prevent malicious use, ensure resources are only used according to policies, and minimize damage from errant programs.
2) The principle of least privilege dictates that users and programs are given only the minimal privileges needed to perform their tasks.
3) Protection domains define the resources a process can access, with access rights specifying the allowed operations on each object. Domains can be static or dynamic, and processes may switch between domains during execution.
Solution Manual of Operating System Concepts by Abraham Silberschatz, Peter B...
Here are three major complications that concurrent processing adds to an operating system:
1. Resource allocation and scheduling become more complex. The OS must divide CPU time, memory, file descriptors, and other resources among multiple concurrent processes, ensure each process receives adequate resources, and decide which process runs when and on which CPU core.
2. Synchronization and communication between processes are more difficult. The OS must provide mechanisms for processes to synchronize access to shared resources and to communicate with one another, which introduces challenges such as race conditions and deadlocks.
3. Reliability and fault tolerance are harder to achieve. If one process crashes or hangs, it should not affect other processes; the OS must be able to contain the failure and keep the remaining processes running.
The document provides an overview of the Unix operating system, including its history, design principles, and key components. It describes how Unix was developed at Bell Labs in the late 1960s and later extended by BSD at UC Berkeley. The core elements discussed include the process model, file system, I/O, and the standard user interface through shells and commands.
The document discusses operating system layers and components. It describes how kernels and servers work together to perform invocation-related tasks like communication, scheduling, and resource management. Kernels set protection for physical resources through memory management and address spaces. Processes have separate user and kernel address spaces to safely access resources. Threads and processes are core components managed by the operating system.
This document discusses embedded systems and their applications. It begins with an introduction to embedded systems, their characteristics, and deciding factors. It then covers various aspects of embedded system development including the development cycle, operating systems, real-time operating systems, firmware development, system initialization, exceptions, design problems, and applications. Key concepts discussed include threads, scheduling, semaphores, mutexes, message queues, and event registers. Common embedded system applications mentioned include automotive, telecommunications, healthcare, entertainment, and industrial automation.
The document discusses operating systems and processes. It defines an operating system as an interface between the user and computer hardware that manages system resources efficiently. Processes are programs in execution that are represented in memory by a process control block containing information like state, registers, scheduling details. Processes go through various states like running, ready, waiting and terminated. The document also describes process creation, termination, and context switching between processes.
This document contains a question bank on various topics in operating systems including:
1. Process synchronization questions focusing on critical section problems, semaphores, and solutions for two processes.
2. Memory management questions on paging and segmentation.
3. Deadlock questions on safe/unsafe states, bankers algorithm, prevention/avoidance strategies, and recovery from deadlocks.
4. CPU scheduling questions on algorithms like FCFS, RR, SJF and characteristics like short term, long term and medium term schedulers.
5. Process management questions on states, control blocks, creation/termination, and interprocess communication.
The questions are intended to help students study and review the course notes.
VTU 4th Sem CSE Computer Organization Solved Papers of June-2013, June-2014 & ...
This document contains a solved question paper for Computer Organization from June 2013. It includes questions and detailed answers on topics such as basic computer operations, number representation systems, addressing modes, input/output operations, interrupts, bus arbitration, and the Universal Serial Bus protocol. The solved questions cover concepts, provide examples, and include diagrams to illustrate computer hardware and architecture.
This document discusses main memory management techniques used in operating systems. It covers topics like address binding, logical versus physical address spaces, dynamic loading, swapping, contiguous memory allocation, segmentation, paging, and page tables. The key points are:
1) Main memory management aims to efficiently allocate memory for processes while allowing processes to be swapped in and out of memory. It maps logical addresses to physical addresses through techniques like segmentation and paging.
2) Early techniques allocated processes to contiguous blocks of memory, but this led to fragmentation issues. Segmentation and paging allow non-contiguous allocation to avoid fragmentation.
3) Segmentation divides memory into variable-sized segments, while paging divides it into fixed-size pages that are mapped to equal-sized physical frames.
This document discusses different approaches to implementing access control and protection in operating systems, including:
1. Using an access matrix to represent access rights, with domains as rows and objects as columns. This separates the protection mechanism from authorization policy.
2. Capability-based systems provide data and software capabilities to control access to objects and interpret access rights.
3. Language-based protection specifies access policies within a programming language and generates calls to the underlying protection system. For example, Java uses protection domains and stack inspection.
The document describes the high-level design of a software rejuvenation system. It discusses how the system is broken down into modular components with dependencies between them. Data flow diagrams are used to design the system at different levels, showing how data flows between processing steps. Level 0 shows the main rejuvenation process, level 1 shows setting threshold values, and level 2 shows more detailed steps like checking aging factors and memory usage. The sequence diagram depicts the variable time and workload policy, with the system updating the rejuvenation time based on workload. Detailed modules are described for OS and VM cold/warm reboot processes.
Overview - Functions of an Operating System – Design Approaches – Types of Advanced
Operating System - Synchronization Mechanisms – Concept of a Process, Concurrent
Processes – The Critical Section Problem, Other Synchronization Problems – Language
Mechanisms for Synchronization – Axiomatic Verification of Parallel Programs - Process
Deadlocks - Preliminaries – Models of Deadlocks, Resources, System State – Necessary and
Sufficient conditions for a Deadlock – Systems with Single-Unit Requests, Consumable
Resources, Reusable Resources.
This document discusses shared memory in Linux, including creating shared memory segments using shmget, attaching and detaching shared memory using shmat and shmdt, controlling shared memory segments using shmctl, and using mmap to map files to shared memory. It provides details on the system calls used for shared memory and examples of creating and using shared memory between processes.
A binary semaphore is a synchronization primitive that takes only the values 0 and 1; it is used to implement mutual exclusion and to synchronize concurrent processes. Paging out writes pages of memory to disk to free physical memory for other processes; excessive paging can cause thrashing, where the system spends more time servicing page faults than doing useful work. The four necessary conditions for deadlock are mutual exclusion, hold and wait, no preemption, and circular wait.
This chapter introduces operating systems and their major components. It discusses how operating systems act as an intermediary between the user and computer hardware to execute programs and manage system resources like the CPU, memory, storage and I/O devices. It also covers the basic structure of a computer system including hardware components, the operating system, application programs, and users. Key operating system functions like process management, memory management and storage management are introduced.
This document provides course material for the subject of Operating Systems for 4th semester B.E. Computer Science Engineering students at A.V.C. College of Engineering. It includes information on the name and designation of the faculty teaching the course, the academic year, curriculum regulations, 5 units that make up the course content, textbook and reference details. The course aims to cover key topics in operating systems including processes, process scheduling, storage management, file systems and I/O systems.
This document discusses prefetching and spooling as methods to overlap I/O operations with CPU operations to improve system performance. Prefetching involves initiating the next read operation after the current one completes, allowing the CPU and I/O device to work in parallel. Spooling overlaps the I/O of one job with the computation and output of other jobs by using a buffer. Spooling is generally more effective than prefetching at overlapping operations.
This document contains two sample question papers for an Operating Systems exam for a 4th semester BTech course in IT/CSE. Each paper has three sections - Section A contains 10 short answer questions worth 2 marks each, Section B contains 4 long answer questions worth 5 marks each, and Section C contains 2 long answer questions worth 10 marks each. The questions cover topics like virtual memory, processes, threads, CPU scheduling algorithms, deadlocks, memory management techniques like paging, segmentation, swapping etc.
This document summarizes a software engineering solved paper from June 2013. It discusses various topics in software engineering including attributes of good software, definitions of software engineering and different types of software products, emergent system properties with examples, critical systems and their types, the Rational Unified Process (RUP) methodology with block diagrams, security terminology, and metrics for specifying non-functional requirements. The requirement engineering process is also explained with the main goal being to create and maintain a system requirement document through various sub-processes like feasibility study, elicitation and analysis, validation, and management of requirements.
The document discusses operating systems and processes. It defines an operating system as software that controls hardware and manages resources. A process is a program in execution that has a unique ID and state. Processes go through various states like running, ready, blocked/waiting, and terminated. Threads are lightweight processes that can be scheduled independently and share resources within a process. User-level threads are managed in libraries while kernel-level threads are managed by the operating system kernel.
The document discusses operating systems and computer system architecture. It defines an operating system as a program that manages a computer's hardware resources and provides common services for application software. It describes the components of a computer system as the CPU, memory, I/O devices, and how the operating system controls and coordinates their use. It also discusses different types of operating systems designed for single-user systems, multi-user systems, servers, handheld devices, and embedded systems.
The document discusses operating system interview questions and answers. It covers topics such as the definition of an operating system, basic functions of an OS, types of operating systems, kernel functions, process states, virtual memory, deadlocks, threads, synchronization, scheduling algorithms, and memory management. Key points include that an OS acts as an intermediary between the user and hardware, the kernel provides basic services, processes can be in states such as running, waiting, or ready, and virtual memory uses disk-backed paging to present an address space larger than physical memory.
This document provides teaching material on distributed systems replication from the book "Distributed Systems: Concepts and Design". It includes slides on replication concepts such as performance enhancement through replication, fault tolerance, and availability. The slides cover replication transparency, consistency requirements, system models, group communication, fault-tolerant and highly available services, and consistency criteria like linearizability.
This document summarizes Chapter 14 from the textbook "Operating System Concepts" which discusses system protection. It covers the goals and principles of protection including least privilege and domain structure. The chapter explains how an access matrix can be used to specify the resources a process may access based on its protection domain. It also examines capability-based and language-based protection systems.
Protection in operating systems aims to control access to system resources and prevent misuse. Key goals are to ensure resources are only used according to policies and that errors cause minimal damage. Protection is achieved through principles like least privilege and access controls. For example, the MULTICS system used rings to separate privileges, with outer rings having fewer privileges than inner rings. Access is represented by an access matrix where rows are domains and columns are resources, with entries indicating access permissions. Role-based access control also follows least privilege by assigning privileges based on roles. Revoking access raises questions about when and how it takes effect, whether it applies selectively or generally, and if it is temporary or permanent. Protection is crucial as systems have become more complex.
This document discusses operating system security concepts including protection domains, access matrices, and revocation of access rights. It describes how protection domains combined with an access matrix specify the resources a process may access. It also examines different methods for implementing access matrices and revoking access rights. The goals are to discuss protection principles in modern computer systems and how mechanisms can mitigate system attacks.
Goals of Protection
Principles of Protection
Domain of Protection
Access Matrix
Implementation of Access Matrix
Access Control
Revocation of Access Rights
Capability-Based Systems
Language-Based Protection
This document discusses computer system protection. It outlines goals of protection like preventing unauthorized access. Principles like least privilege aim to minimize damage from compromised access. Protection domains define which objects and operations processes can access. Access matrices represent these access rights. Examples of early capability-based and language-based protection systems are described.
Senthilkanth, MCA
The following slides cover the full Operating Systems syllabus for BSc CS, BCA, MSc CS, and MCA students:
1. Introduction
2. OS Structures
3. Processes
4. Threads
5. CPU Scheduling
6. Process Synchronization
7. Deadlocks
8. Memory Management
9. Virtual Memory
10. File System Interface
11. File System Implementation
12. Mass Storage Systems
13. I/O Systems
14. Protection
15. Security
16. Distributed System Structures
17. Distributed File Systems
18. Distributed Coordination
19. Real-Time Systems
20. Multimedia Systems
21. Linux
22. Windows
This chapter discusses database security and authorization. It covers types of database security including access control, discretionary access control through granting and revoking privileges, and mandatory access control. It also discusses statistical database security, flow control, and encryption. The chapter outlines the role of the database administrator in managing security and authorization, including creating user accounts, granting privileges, and performing audits. It describes different types of privileges that can be granted at the account and relation levels.
Database Security Introduction,Methods for database security
Discretionary access control method
Mandatory access control
Role base access control for multilevel security.
Use of views in security enforcement
International Journal of Engineering Inventions (IJEI) provides a multidisciplinary passage for researchers, managers, professionals, practitioners and students around the globe to publish high quality, peer-reviewed articles on all theoretical and empirical aspects of Engineering and Science.
The peer-reviewed International Journal of Engineering Inventions (IJEI) is started with a mission to encourage contribution to research in Science and Technology. Encourage and motivate researchers in challenging areas of Sciences and Technology.
1) The document discusses various concepts related to operating systems including system calls, processes, memory management, file systems, interprocess communication, and more.
2) It compares and contrasts different operating system designs such as microkernels versus monolithic kernels, and discusses features of mobile operating systems like Android and iOS.
3) The document is a chapter from a textbook or course, providing definitions and explanations of core operating system concepts. It examines both low-level system components and higher-level aspects.
This document discusses swap space in Linux systems. It explains that swap space uses disk space as virtual memory to hold process images when physical RAM is full. Swap space can be located in a separate disk partition or within the normal file system. Linux uses a swap map data structure to track which pages are stored in swap space. The goal of swap space is to make RAM usage more efficient and prevent the system from slowing down or crashing when physical memory is exhausted.
The document introduces distributed systems, defining them as collections of independent computers that appear as a single system to users, discusses the goals of transparency, openness, and scalability in distributed systems, and describes three main types - distributed computing systems for tasks like clustering and grids, distributed information systems for integrating applications, and distributed pervasive systems for mobile and embedded devices.
The document discusses distributed system models and issues in designing distributed systems. It describes three distributed system models: architectural models which describe how system components are distributed and placed, interaction models which handle timing aspects, and fault models which define how failures are handled. For architectural models, it explains the client-server and peer-to-peer models. For interaction models, it discusses synchronous and asynchronous systems. It then lists 10 issues to consider in distributed system design, such as heterogeneity, openness, security, scalability, failure handling, concurrency, transparency, quality of service, reliability, and performance.
The document discusses operating systems and their core functions. It covers:
1) Operating systems control computers by managing processes, memory, networking, protection, and scheduling.
2) Operating systems aim to allocate resources to processes, manage main memory, and protect systems from unauthorized access.
3) Key components of operating systems include processes, threads, scheduling algorithms, system calls, interprocess communication using semaphores, and memory management.
This document provides an overview of grid computing. It defines grid computing as combining computer resources from multiple administrative domains to reach a common goal. It discusses key grid computing concepts like types of resources (computation, storage, communication, software/licenses), jobs and applications, and grid software components for management and distributed grid management. The document also distinguishes between inter-grid and intra-grid systems, with grids able to be built at various scales from just a few machines to globally spanning hierarchies.
This document provides an overview of the key components that make up an Android application - activities, services, content providers, intents, broadcast receivers, widgets and notifications. It describes each component in detail and how they work together. It also covers the application lifecycle and priority levels, and how Android features are declared in the app manifest file, including intent filters, permissions, icons/labels and libraries.
The document discusses different approaches to implementing access control and protection in operating systems, including using access matrices to represent access rights for domains and objects, and capability-based systems where capabilities function like keys to allow access. It also covers revoking access rights, language-based protection where policies are specified in code, and how protection is implemented in Java using protection domains and stack inspection.
The document discusses different approaches to implementing access control and protection in operating systems, including using access matrices to represent access rights for domains and objects, and capability-based systems where capabilities function like keys to allow access. It also covers revoking access rights, language-based protection where policies are specified in code, and how protection is implemented in Java using protection domains and stack inspection.
This document discusses distributed file systems (DFS) and distributed coordination. It provides details on the key components, features, and applications of DFS, including location transparency and redundancy. It also explains various distributed coordination techniques such as event ordering using happened-before relations, mutual exclusion using centralized, distributed, and token-passing approaches, and ensuring atomicity through two-phase commit protocols. Concurrency control methods like locking and timestamp ordering are discussed. The document also covers deadlock handling in distributed systems.
This document proposes a novel role-based cross-domain access control scheme for cloud storage. It aims to address problems with time constraints and location constraints when accessing data across different cloud domains. The proposed scheme combines domain RBAC, role-based access control, and attribute-based access control. Each user is assigned attributes and roles. Data is encrypted with attribute-based encryption before being uploaded. Domains are separated and manage their own users, roles, and permissions to allow cross-domain access while minimizing time delays in accessing data located in different domains.
Similar to Aos v unit protection and access control (20)
Protection
14.1 Goals of Protection
Obviously to prevent malicious misuse of the system by users or programs. See chapter
15 for a more thorough coverage of this goal.
To ensure that each shared resource is used only in accordance with system policies,
which may be set either by system designers or by system administrators.
To ensure that errant programs cause the minimal amount of damage possible.
Note that protection systems only provide the mechanisms for enforcing policies and
ensuring reliable systems. It is up to administrators and users to implement those
mechanisms effectively.
14.2 Principles of Protection
The principle of least privilege dictates that programs, users, and systems be given just
enough privileges to perform their tasks.
This ensures that failures and compromised components can do the least possible
amount of harm.
For example, if a program needs special privileges to perform a task, it is better to make
it a SGID program with group ownership of "network" or "backup" or some other pseudo
group, rather than SUID with root ownership. This limits the amount of damage that can
occur if something goes wrong.
Typically each user is given their own account, and has only enough privilege to modify
their own files.
The root account should not be used for normal day-to-day activities - the system
administrator should also have an ordinary account, and reserve use of the root account
for only those tasks which need root privileges.
14.3 Domain of Protection
A computer can be viewed as a collection of processes and objects ( both HW & SW ).
The need to know principle states that a process should only have access to those objects
it needs to accomplish its task, and furthermore only in the modes for which it needs
access and only during the time frame when it needs access.
The modes available for a particular object may depend upon its type.
14.3.1 Domain Structure
A protection domain specifies the resources that a process may access.
Each domain defines a set of objects and the types of operations that may be
invoked on each object.
An access right is the ability to execute an operation on an object.
A domain is defined as a set of < object, { access right set } > pairs, as shown
below. Note that some domains may be disjoint while others overlap.
Figure 14.1 - System with three protection domains.
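The definition of a domain as a set of < object, { access right set } > pairs can be sketched as a small data structure. The domain, object, and right names below are invented for illustration, not taken from any real system:

```python
# A protection domain maps each object it covers to a set of access rights.
# Domains may overlap (both here hold a right on file2), as in Figure 14.1.
domains = {
    "D1": {"file1": {"read"}, "file2": {"read", "write"}},
    "D2": {"file2": {"read"}, "printer1": {"print"}},
}

def allowed(domain, obj, right):
    """True if the given domain holds the given right on the object."""
    return right in domains.get(domain, {}).get(obj, set())
```

A process associated with D1 could then write file2, while one in D2 could only read it.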
The association between a process and a domain may be static or dynamic.
o If the association is static, then the need-to-know principle requires a way
of changing the contents of the domain dynamically.
o If the association is dynamic, then there needs to be a mechanism for
domain switching.
Domains may be realized in different fashions - as users, or as processes, or as
procedures. E.g. if each user corresponds to a domain, then that domain defines
the access of that user, and changing domains involves changing user ID.
14.3.2 An Example: UNIX
UNIX associates domains with users.
Certain programs operate with the SUID bit set, which effectively changes the
user ID, and therefore the access domain, while the program is running. ( and
similarly for the SGID bit. ) Unfortunately this has some potential for abuse.
An alternative used on some systems is to place privileged programs in special
directories, so that they attain the identity of the directory owner when they run.
This prevents crackers from placing SUID programs in random directories around
the system.
Yet another alternative is to not allow the changing of ID at all. Instead, special
privileged daemons are launched at boot time, and user processes send messages
to these daemons when they need special tasks performed.
14.3.3 An Example: MULTICS
The MULTICS system uses a complex system of rings, each corresponding to a
different protection domain, as shown below:
Figure 14.2 - MULTICS ring structure.
Rings are numbered from 0 to 7, with outer rings having a subset of the privileges
of the inner rings.
Each file is a memory segment, and each segment description includes an entry
that indicates the ring number associated with that segment, as well as read, write,
and execute privileges.
Each process runs in a ring, according to the current-ring-number, a counter
associated with each process.
A process operating in one ring can only access segments associated with higher
( farther out ) rings, and then only according to the access bits. Processes cannot
access segments associated with lower rings.
Domain switching is achieved by a process in one ring calling upon a process
operating in a lower ring, which is controlled by several factors stored with each
segment descriptor:
o An access bracket, defined by integers b1 <= b2.
o A limit b3 > b2
o A list of gates, identifying the entry points at which the segments may be
called.
If a process operating in ring i calls a segment whose bracket is such that b1 <= i
<= b2, then the call succeeds and the process remains in ring i.
Otherwise a trap to the OS occurs, and is handled as follows:
o If i < b1, then the call is allowed, because we are transferring to a
procedure with fewer privileges. However if any of the parameters being
passed are of segments below b1, then they must be copied to an area
accessible by the called procedure.
o If i > b2, then the call is allowed only if i <= b3 and the call is directed to
one of the entries on the list of gates.
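The bracket, limit, and gate rules above can be condensed into a single decision function. This is a sketch of the rules exactly as stated, not actual MULTICS code; the parameter names follow the text (b1 <= b2 is the access bracket, b3 the limit):

```python
def ring_call(i, b1, b2, b3, entry, gates):
    """Outcome of a process in ring i calling a segment with access
    bracket [b1, b2], limit b3, and a set of gate entry points."""
    if b1 <= i <= b2:
        return "allowed"    # within the bracket: stay in ring i
    if i < b1:
        return "allowed"    # transferring toward fewer privileges
                            # (parameters below b1 must be copied)
    if i <= b3 and entry in gates:
        return "allowed"    # gate crossing into a more privileged ring
    return "trap"           # outside the limit, or not a gate entry
```

For example, a ring-5 caller may enter a segment with bracket [2, 4] and limit 6 only through one of its gates; a ring-7 caller is refused outright.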
Overall this approach is more complex and less efficient than other protection
schemes.
14.4 Access Matrix
The model of protection that we have been discussing can be viewed as an access matrix,
in which columns represent different system resources and rows represent different
protection domains. Entries within the matrix indicate what access that domain has to that
resource.
Figure 14.3 - Access matrix.
Domain switching can be easily supported under this model, simply by providing
"switch" access to other domains:
Figure 14.4 - Access matrix of Figure 14.3 with domains as objects.
The ability to copy rights is denoted by an asterisk, indicating that processes in that
domain have the right to copy that access within the same column, i.e. for the same
object. There are two important variations:
o If the asterisk is removed from the original access right, then the right is
transferred, rather than being copied. This may be termed a transfer right as
opposed to a copy right.
o If only the right and not the asterisk is copied, then the access right is added to the
new domain, but it may not be propagated further. That is, the new domain does
not also receive the right to copy the access. This may be termed a limited copy
right, as shown in Figure 14.5 below:
Figure 14.5 - Access matrix with copy rights.
The owner right adds the privilege of adding new rights or removing existing ones:
Figure 14.6 - Access matrix with owner rights.
Copy and owner rights only allow the modification of rights within a column. The
addition of control rights, which only apply to domain objects, allow a process operating
in one domain to affect the rights available in other domains. For example in the table
below, a process operating in domain D2 has the right to control any of the rights in
domain D4.
Figure 14.7 - Modified access matrix of Figure 14.4.
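The copy (asterisk) and owner rights described above can be sketched over a dictionary-based access matrix. All domain, object, and right names here are invented; copy_right implements the limited-copy variant, so the asterisk itself is not propagated:

```python
# Access matrix as a dict keyed by (domain, object); a trailing '*' on a
# right marks the copy right for that entry.
matrix = {
    ("D1", "F1"): {"read", "write*"},   # D1 holds 'write' with copy right
    ("D2", "F1"): set(),
    ("D1", "F2"): {"owner"},            # D1 owns F2
}

def copy_right(src, dst, obj, right):
    """Copy `right` on `obj` from src to dst, if src holds right*.
    Copying stays within one column (the same object)."""
    if right + "*" in matrix.get((src, obj), set()):
        matrix.setdefault((dst, obj), set()).add(right)   # limited copy: no '*'
        return True
    return False

def owner_modify(actor, target, obj, right, add=True):
    """An owner of obj may add or remove rights anywhere in obj's column."""
    if "owner" not in matrix.get((actor, obj), set()):
        return False
    cell = matrix.setdefault((target, obj), set())
    cell.add(right) if add else cell.discard(right)
    return True
```

After D1 copies 'write' on F1 to D2, D2 can use the right but cannot copy it onward, matching the limited-copy behavior of Figure 14.5.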
14.5 Implementation of Access Matrix
14.5.1 Global Table
The simplest approach is one big global table with < domain, object, rights >
entries.
Unfortunately this table is very large ( even if sparse ) and so cannot be kept in
memory ( without invoking virtual memory techniques. )
There is also no good way to specify groupings - If everyone has access to some
resource, then it still needs a separate entry for every domain.
14.5.2 Access Lists for Objects
Each column of the table can be kept as a list of the access rights for that
particular object, discarding blank entries.
For efficiency a separate list of default access rights can also be kept, and checked
first.
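A per-object access list with a default list checked first might look like the following sketch (object and domain names are illustrative):

```python
# Rights every domain holds on an object, checked before the per-object list.
default_rights = {"F_public": {"read"}}

# Each object's column, kept as a list of (domain -> rights), blanks dropped.
acl = {"F1": {"D1": {"read", "write"}, "D2": {"read"}}}

def check(domain, obj, right):
    """Default rights first, then the object's own access list."""
    if right in default_rights.get(obj, set()):
        return True
    return right in acl.get(obj, {}).get(domain, set())
```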
14.5.3 Capability Lists for Domains
In a similar fashion, each row of the table can be kept as a list of the capabilities
of that domain.
Capability lists are associated with each domain, but not directly accessible by the
domain or any user process.
Capability lists are themselves protected resources, distinguished from other data
in one of two ways:
o A tag, possibly hardware implemented, distinguishing this special type of
data. ( other types may be floats, pointers, booleans, etc. )
o The address space for a program may be split into multiple segments, at
least one of which is inaccessible by the program itself, and used by the
operating system for maintaining the process's access right capability list.
14.5.4 A Lock-Key Mechanism
Each resource has a list of unique bit patterns, termed locks.
Each domain has its own list of unique bit patterns, termed keys.
Access is granted if one of the domain's keys fits one of the resource's locks.
Again, a process is not allowed to modify its own keys.
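The lock-key check reduces to a set intersection. In this sketch the unique bit patterns are modeled as small integers and the names are invented:

```python
# Each object carries locks; each domain carries keys. Access is granted
# when any key matches any lock. A process cannot modify its own keys.
locks = {"printer1": {0b1010, 0b0110}}
keys = {"D1": {0b1010}, "D2": {0b0001}}

def open_lock(domain, obj):
    """True if one of the domain's keys fits one of the object's locks."""
    return bool(keys.get(domain, set()) & locks.get(obj, set()))
```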
14.5.5 Comparison
Each of the methods here has certain advantages or disadvantages, depending on
the particular situation and task at hand.
Many systems employ some combination of the listed methods.
14.6 Access Control
Role-Based Access Control, RBAC, assigns privileges to users, programs, or roles as
appropriate, where "privileges" refer to the right to call certain system calls, or to use
certain parameters with those calls.
RBAC supports the principle of least privilege, and reduces the susceptibility to abuse as
opposed to SUID or SGID programs.
Figure 14.8 - Role-based access control in Solaris 10.
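A minimal sketch of the RBAC idea, with privileges modeled as names of permitted system calls. The role, user, and privilege names are invented, not taken from Solaris:

```python
# Privileges attach to roles; users acquire privileges only via their roles,
# supporting least privilege without SUID/SGID-style identity changes.
role_privs = {"operator": {"mount", "backup"}, "auditor": {"read_log"}}
user_roles = {"alice": {"operator"}}

def may_call(user, syscall):
    """True if any of the user's roles carries the privilege."""
    return any(syscall in role_privs.get(r, set())
               for r in user_roles.get(user, set()))
```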
14.7 Revocation of Access Rights
The need to revoke access rights dynamically raises several questions:
o Immediate versus delayed - If delayed, can we determine when the revocation
will take place?
o Selective versus general - Does revocation of an access right to an object affect all
users who have that right, or only some users?
o Partial versus total - Can a subset of rights for an object be revoked, or are all
rights revoked at once?
o Temporary versus permanent - If rights are revoked, is there a mechanism for
processes to re-acquire some or all of the revoked rights?
With an access list scheme revocation is easy, immediate, and can be selective, general,
partial, total, temporary, or permanent, as desired.
With capability lists the problem is more complicated, because access rights are
distributed throughout the system. A few schemes that have been developed include:
o Reacquisition - Capabilities are periodically revoked from each domain, which
must then re-acquire them.
o Back-pointers - A list of pointers is maintained from each object to each
capability which is held for that object.
o Indirection - Capabilities point to an entry in a global table rather than to the
object. Access rights can be revoked by changing or invalidating the table entry,
which may affect multiple processes, which must then re-acquire access rights to
continue.
o Keys - A unique bit pattern is associated with each capability when created, which
can be neither inspected nor modified by the process.
A master key is associated with each object.
When a capability is created, its key is set to the object's master key.
As long as the capability's key matches the object's key, then the
capabilities remain valid.
The object master key can be changed with the set-key command, thereby
invalidating all current capabilities.
More flexibility can be added to this scheme by implementing a list of
keys for each object, possibly in a global table.
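The key-based revocation scheme above can be sketched as follows. This is an illustration of the idea, not a real kernel interface; the random hex strings stand in for the unique bit patterns:

```python
import secrets

# Master key per object. A capability is valid only while its embedded key
# matches the object's current master key; set_key() revokes all of them.
master_keys = {}

def create_capability(obj):
    """Return an (object, key) capability stamped with the master key."""
    key = master_keys.setdefault(obj, secrets.token_hex(8))
    return (obj, key)

def valid(cap):
    obj, key = cap
    return master_keys.get(obj) == key

def set_key(obj):
    """Change the object's master key, invalidating every outstanding
    capability at once (total, general revocation)."""
    master_keys[obj] = secrets.token_hex(8)
```

Processes holding revoked capabilities must then re-acquire them; a per-object list of keys would allow selective rather than total revocation.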
14.8 Capability-Based Systems ( Optional )
14.8.1 An Example: Hydra
Hydra is a capability-based system that includes both system-defined rights and
user-defined rights. The interpretation of user-defined rights is up to the specific
user programs, but the OS provides support for protecting access to those rights,
whatever they may be.
Operations on objects are defined procedurally, and those procedures are
themselves protected objects, accessed indirectly through capabilities.
The names of user-defined procedures must be identified to the protection system
if it is to deal with user-defined rights.
When an object is created, the names of operations defined on that object become
auxiliary rights, described in a capability for an instance of the type. For a
process to act on an object, the capabilities it holds for that object must contain
the name of the operation being invoked. This allows access to be controlled on
an instance-by-instance and process-by-process basis.
Hydra also allows rights amplification, in which a process is deemed to be
trustworthy, and thereby allowed to act on any object corresponding to its
parameters.
Programmers can make direct use of the Hydra protection system, using suitable
libraries which are documented in appropriate reference manuals.
14.8.2 An Example: Cambridge CAP System
The CAP system has two kinds of capabilities:
o Data capability, used to provide read, write, and execute access to objects.
These capabilities are interpreted by microcode in the CAP machine.
o Software capability, is protected but not interpreted by the CAP
microcode.
Software capabilities are interpreted by protected ( privileged )
procedures, possibly written by application programmers.
When a process executes a protected procedure, it temporarily
gains the ability to read or write the contents of a software
capability.
This leaves the interpretation of the software capabilities up to the
individual subsystems, and limits the potential damage that could
be caused by a faulty privileged procedure.
Note, however, that protected procedures only get access to
software capabilities for the subsystem of which they are a part.
Checks are made when passing software capabilities to protected
procedures that they are of the correct type.
Unfortunately the CAP system does not provide libraries, making
it harder for an individual programmer to use than the Hydra
system.
14.9 Language-Based Protection ( Optional )
o As systems have developed, protection systems have become more powerful, and
also more specific and specialized.
o To refine protection even further requires putting protection capabilities into the
hands of individual programmers, so that protection policies can be implemented
on the application level, i.e. to protect resources in ways that are known to the
specific applications but not to the more general operating system.
14.9.1 Compiler-Based Enforcement
In a compiler-based approach to protection enforcement, programmers directly
specify the protection needed for different resources at the time the resources are
declared.
This approach has several advantages:
1. Protection needs are simply declared, as opposed to a complex series of
procedure calls.
2. Protection requirements can be stated independently of the support
provided by a particular OS.
3. The means of enforcement need not be provided directly by the developer.
4. Declarative notation is natural, because access privileges are closely
related to the concept of data types.
Regardless of the means of implementation, compiler-based protection relies upon
the underlying protection mechanisms provided by the underlying OS, such as the
Cambridge CAP or Hydra systems.
Even if the underlying OS does not provide advanced protection mechanisms, the
compiler can still offer some protection, such as treating memory accesses
differently in code versus data segments. ( E.g. code segments can't be modified,
data segments can't be executed. )
There are several areas in which compiler-based protection can be compared to
kernel-enforced protection:
o Security. Security provided by the kernel offers better protection than that
provided by a compiler. The security of the compiler-based enforcement is
dependent upon the integrity of the compiler itself, as well as requiring
that files not be modified after they are compiled. The kernel is in a better
position to protect itself from modification, as well as protecting access to
specific files. Where hardware support of individual memory accesses is
available, the protection is stronger still.
o Flexibility. A kernel-based protection system is less flexible in providing
the specific protection needed by an individual programmer, though it may
provide support which the programmer can make use of. Compilers are
more easily changed and updated when necessary to change the protection
services offered or their implementation.
o Efficiency. The most efficient protection mechanism is one supported by
hardware and microcode. Insofar as software based protection is
concerned, compiler-based systems have the advantage that many checks
can be made off-line, at compile time, rather than during execution.
The concept of incorporating protection mechanisms into programming languages
is in its infancy, and still remains to be fully developed. However the general goal
is to provide mechanisms for three functions:
1. Distributing capabilities safely and efficiently among customer processes.
In particular a user process should only be able to access resources for
which it was issued capabilities.
2. Specifying the type of operations a process may execute on a resource,
such as reading or writing.
3. Specifying the order in which operations are performed on the resource,
such as opening before reading.
14.9.2 Protection in Java
Java was designed from the very beginning to operate in a distributed
environment, where code would be executed from a variety of trusted and
untrusted sources. As a result the Java Virtual Machine ( JVM ) incorporates many
protection mechanisms.
When a Java program runs, it loads classes dynamically, in response to requests
to instantiate objects of particular types. These classes may come from a variety
of different sources, some trusted and some not, which requires that the protection
mechanism be implemented at the resolution of individual classes, something not
supported by the basic operating system.
As each class is loaded, it is placed into a separate protection domain. The
capabilities of each domain depend upon whether the source URL is trusted or
not, the presence or absence of any digital signatures on the class ( Chapter 15 ),
and a configurable policy file indicating which servers a particular user trusts, etc.
When a request is made to access a restricted resource in Java, ( e.g. open a local
file ), some process on the current call stack must specifically assert a privilege to
perform the operation. In essence this method assumes responsibility for the
restricted access. Naturally the method must be part of a class which resides in a
protection domain that includes the capability for the requested operation. This
approach is termed stack inspection, and works like this:
o When a caller may not be trusted, a method executes an access request
within a doPrivileged( ) block, which is noted on the calling stack.
o When access to a protected resource is requested, checkPermissions( )
inspects the call stack to see if a method has asserted the privilege to
access the protected resource.
If a suitable doPrivileged block is encountered on the stack before
a domain in which the privilege is disallowed, then the request is
granted.
If a domain in which the request is disallowed is encountered first,
then the access is denied and an AccessControlException is thrown.
If neither is encountered, then the response is implementation
dependent.
In the example below the untrusted applet's call to get( ) succeeds, because the
trusted URL loader asserts the privilege of opening the specific URL lucent.com.
However when the applet tries to make a direct call to open( ) it fails, because it
does not have privilege to access any sockets.
Figure 14.9 - Stack inspection.
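The stack-inspection algorithm can be sketched in Python rather than Java, with the classes of Figure 14.9 modeled as plain data. The domain and permission names below are invented, and this is not the real JVM API:

```python
# Each stack frame is a (protection_domain, asserted_privilege_or_None)
# pair, listed caller-first; doPrivileged() corresponds to asserting a
# privilege in a frame.

def check_permission(stack, perm, domain_perms):
    """Walk the stack from the most recent frame. A domain lacking `perm`
    denies first; a frame that asserted `perm` grants first."""
    for domain, asserted in reversed(stack):
        if perm not in domain_perms.get(domain, set()):
            raise PermissionError(domain + " lacks " + perm)
        if asserted == perm:
            return True     # doPrivileged found before any untrusted domain
    return False            # neither encountered: implementation dependent

# Mirroring Figure 14.9: a trusted URL loader may open lucent.com,
# while the untrusted applet's domain holds no privileges at all.
perms = {"applet": set(), "url_loader": {"open lucent.com"}}
```

A call chain applet -> url_loader, where the loader asserts the privilege, is granted; the applet calling open directly raises the exception, matching the get( ) versus open( ) outcomes described above.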
14.10 Summary