A distributed system consists of multiple connected CPUs that appear as a single system to users. Distributed systems provide advantages like communication, resource sharing, reliability and scalability. However, they require distribution-aware software and uninterrupted network connectivity. Distributed operating systems manage resources across connected computers transparently. They provide various forms of transparency and handle issues like failure, concurrency and replication. Remote procedure calls allow calling remote services like local procedures to achieve transparency.
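The RPC transparency mentioned above rests on stubs that marshal a call into a message and unmarshal the reply. A minimal sketch of that mechanism, with the network hop replaced by a direct function call and all names (`client_stub`, `server_stub`, the `add` procedure) purely illustrative:

```python
import json

# Server-side registry of procedures that can be invoked remotely.
PROCEDURES = {"add": lambda a, b: a + b}

def server_stub(request: str) -> str:
    """Unmarshal the request, invoke the local procedure, marshal the result."""
    msg = json.loads(request)
    result = PROCEDURES[msg["proc"]](*msg["args"])
    return json.dumps({"result": result})

def client_stub(proc_name, *args):
    """Marshal the call into a message, 'send' it, unmarshal the reply.
    In a real RPC system this send would cross the network."""
    request = json.dumps({"proc": proc_name, "args": args})
    reply = server_stub(request)  # stands in for the network round trip
    return json.loads(reply)["result"]

# The caller sees an ordinary procedure call:
total = client_stub("add", 2, 3)
```

The point of the stubs is that the caller never sees the marshalling: the remote call looks exactly like a local one.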
DDBMS, characteristics, Centralized vs. Distributed Database, Homogeneous DDBMS, Heterogeneous DDBMS, Advantages, Disadvantages, What is parallel database, Data fragmentation, Replication, Distribution Transaction
The transport layer provides efficient, reliable, and cost-effective process-to-process delivery by making use of network layer services. The transport layer works through transport entities to achieve its goal of reliable delivery between application processes. It provides an interface for applications to access its services.
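Process-to-process delivery means the transport layer addresses a specific process (via a port), not just a host. A small self-contained sketch using a loopback TCP echo exchange, where the `(host, port)` pair is what names the receiving process (the echo server itself is an illustrative stand-in):

```python
import socket
import threading

def echo_once(server_sock):
    """Accept one connection and echo whatever arrives back to the sender."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

def run_echo_roundtrip(payload: bytes) -> bytes:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]  # this port identifies the server process
    t = threading.Thread(target=echo_once, args=(server,))
    t.start()
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(payload)
        reply = client.recv(1024)
    t.join()
    server.close()
    return reply
```

The transport entities here are the two sockets; the network layer underneath only ever sees host addresses.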
The document provides an introduction to distributed systems, defining them as a collection of independent computers that communicate over a network to act as a single coherent system. It discusses the motivation for and characteristics of distributed systems, including concurrency, lack of a global clock, and independence of failures. Architectural categories of distributed systems include tightly coupled and loosely coupled, with examples given of different types of distributed systems such as database management systems, ATM networks, and the internet.
This document provides an overview of distributed operating systems. It discusses the motivation for distributed systems including resource sharing, reliability, and computation speedup. It describes different types of distributed operating systems like network operating systems where users are aware of multiple machines, and distributed operating systems where users are not aware. It also covers network structures, topologies, communication structures, protocols, and provides an example of networking. The objectives are to provide a high-level overview of distributed systems and discuss the general structure of distributed operating systems.
This document discusses concurrency control algorithms for distributed database systems. It describes distributed two-phase locking (2PL), wound-wait, basic timestamp ordering, and distributed optimistic concurrency control algorithms. For distributed 2PL, transactions lock data items in a growing phase and release locks in a shrinking phase. Wound-wait prevents deadlocks by aborting younger transactions that wait on older ones. Basic timestamp ordering orders transactions based on their timestamps to ensure serializability. The distributed optimistic approach allows transactions to read and write freely until commit, when certification checks for conflicts. Maintaining consistency across distributed copies is important for concurrency control algorithms.
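The wound-wait rule described above can be captured in a few lines. A minimal sketch, assuming timestamps where smaller means older (higher priority); the function name and return labels are illustrative:

```python
def wound_wait(requester_ts: int, holder_ts: int) -> str:
    """Decide the requester's fate when the lock is held by another transaction.
    Smaller timestamp = older transaction = higher priority."""
    if requester_ts < holder_ts:
        return "wound"  # older requester wounds (aborts) the younger holder
    return "wait"       # younger requester waits for the older holder
```

Because a transaction only ever waits for an older one, no cycle of waits (and hence no deadlock) can form.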
Distributed shared memory (DSM) is a memory architecture where physically separate memories can be addressed as a single logical address space. In a DSM system, data moves between nodes' main and secondary memories when a process accesses shared data. Each node has a memory mapping manager that maps the shared virtual memory to local physical memory. DSM provides advantages like shielding programmers from message passing, lower cost than multiprocessors, and large virtual address spaces, but disadvantages include potential performance penalties from remote data access and lack of programmer control over messaging.
Service Level Agreement in Cloud Computing: An Overview (Dr Neelesh Jain)
The presentation gives an overview of Service Level Agreements in cloud computing, along with an introduction to cloud computing and its benefits.
THIS DESCRIBES THE VARIOUS ELEMENTS OF A TRANSPORT PROTOCOL IN THE TRANSPORT LAYER OF COMPUTER NETWORKS.
THERE ARE SIX ELEMENTS OF A TRANSPORT PROTOCOL, NAMELY:
1. ADDRESSING
2. CONNECTION ESTABLISHMENT
3. CONNECTION RELEASE
4. FLOW CONTROL AND BUFFERING
5. MULTIPLEXING
6. CRASH RECOVERY
This document describes the sliding window protocol. It discusses key concepts like both the sender and receiver maintaining buffers to hold packets, acknowledgements being sent for every received packet, and the sender being able to send a window of packets before receiving an acknowledgement. It then explains the sender side process of numbering packets and maintaining a sending window. The receiver side maintains a window size of 1 and acknowledges by sending the next expected sequence number. A one bit sliding window protocol acts like stop and wait. Merits include multiple packets being sent without waiting for acknowledgements while demerits include potential bandwidth waste in some situations.
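The sender-side behaviour above can be simulated in a few lines. A toy sketch assuming a lossless link and cumulative ACKs (the event log format is illustrative, not part of any real protocol):

```python
def sliding_window_send(frames, window_size):
    """Simulate a sender over a lossless link: transmit up to `window_size`
    unacknowledged frames, then slide the window as ACKs arrive."""
    base = 0      # oldest unacknowledged frame
    next_seq = 0  # next frame to transmit
    log = []
    while base < len(frames):
        # Fill the window with as many frames as it allows.
        while next_seq < len(frames) and next_seq < base + window_size:
            log.append(("send", next_seq))
            next_seq += 1
        # Receiver ACKs by announcing the next expected sequence number.
        log.append(("ack", base + 1))
        base += 1
    return log
```

With `window_size = 1` the trace degenerates to strict send/ack alternation, which is exactly the stop-and-wait behaviour the summary mentions.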
This document discusses protocol layering in communication networks. It introduces the need for protocol layering when communication becomes complex. Protocol layering involves dividing communication tasks across different layers, with each layer having its own protocol. The document then discusses two principles of protocol layering: 1) each layer must support bidirectional communication and 2) the objects under each layer must be identical at both sites. It provides an overview of the OSI 7-layer model and describes the basic functions of each layer.
This document discusses distributed databases and distributed database management systems (DDBMS). It defines a distributed database as a logically interrelated collection of shared data physically distributed over a computer network. A DDBMS is software that manages the distributed database and makes the distribution transparent to users. The document outlines key concepts of distributed databases including data fragmentation, allocation, and replication across multiple database sites connected by a network. It also discusses reference architectures, components, design considerations, and types of transparency provided by DDBMS.
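Horizontal fragmentation, one of the concepts listed above, splits a relation row-by-row according to per-site predicates. A minimal sketch (the relation, site names, and predicates are invented for illustration):

```python
def horizontal_fragment(rows, predicates):
    """Split a relation into fragments, one per site predicate.
    For correctness the predicates should cover every row exactly once."""
    return {site: [r for r in rows if pred(r)]
            for site, pred in predicates.items()}

accounts = [
    {"id": 1, "branch": "north"},
    {"id": 2, "branch": "south"},
]
fragments = horizontal_fragment(accounts, {
    "site_north": lambda r: r["branch"] == "north",
    "site_south": lambda r: r["branch"] == "south",
})
```

Reconstructing the original relation is then just the union of the fragments, which is the completeness property a fragmentation design must guarantee.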
INTRODUCTION TO OPERATING SYSTEM
What is an Operating System?
Mainframe Systems
Desktop Systems
Multiprocessor Systems
Distributed Systems
Clustered System
Real-Time Systems
Handheld Systems
Computing Environments
A distributed system is a collection of independent computers that appears as a single coherent system to users. It provides advantages like cost-effectiveness, reliability, scalability, and flexibility but introduces challenges in achieving transparency, dependability, performance, and flexibility due to its distributed nature. A true distributed system that solves all these challenges perfectly is difficult to achieve due to limitations like network complexity and security issues.
This document provides an overview of distributed operating systems, including:
- A distributed operating system runs applications on multiple connected computers that look like a single centralized system to users. It distributes jobs across processors for efficient processing.
- Early research began in the 1950s with systems like DYSEAC and Lincoln TX-2 that exhibited distributed control features. Major development occurred from the 1970s-1990s, though few systems achieved commercial success.
- Key considerations in designing distributed operating systems include transparency, inter-process communication, process management, resource management, reliability, and performance. Examples of distributed operating systems include Windows Server and Linux-based systems.
The document discusses naming in distributed systems. It covers desirable features of naming systems like location transparency and location independence. It differentiates between human-oriented and system-oriented names. It also discusses name spaces, name servers, name resolution including recursive and iterative approaches, and name caching.
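The recursive and iterative resolution styles mentioned above differ in who walks the chain of name servers. A toy sketch over an invented three-level hierarchy (all server names and the address are illustrative):

```python
# Toy hierarchy: each server only knows its direct children.
SERVERS = {
    "root":       {"edu": "edu_server"},
    "edu_server": {"mit": "mit_server"},
    "mit_server": {"cs": "10.0.0.7"},  # leaf label maps to an address
}

def resolve_iterative(labels):
    """Iterative: the client itself follows one referral per query."""
    current = "root"
    for label in labels:
        current = SERVERS[current][label]
    return current

def resolve_recursive(labels, server="root"):
    """Recursive: each server resolves the rest of the name on the
    client's behalf and returns only the final answer."""
    if not labels:
        return server
    return resolve_recursive(labels[1:], SERVERS[server][labels[0]])
```

Both return the same address; the trade-off is that iterative resolution keeps servers stateless and cheap, while recursive resolution lets intermediate servers cache results for later clients.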
A storage area network (SAN) provides centralized storage that multiple servers access over a network. SANs are useful for large networks that need more storage than a single server can offer, making terabytes of data accessible to many machines. The key components of a SAN include Fibre Channel switches that connect servers and storage devices, host bus adapters that interface storage with operating systems, and storage devices such as Fibre Channel disks. SANs provide benefits like high storage capacity, reduced costs, increased performance, and improved backup and recovery compared with adding more individual servers. However, SANs are also expensive to implement and maintain, and they require technical expertise.
Here you will learn:
How to Connect two or more devices to share data and information.
What is OSI Model?
Introduction to OSI Model
What is Physical Layer?
Devices used in the Physical Layer
What is Signal?
Types of Signals?
Analog Signals
Digital Signals
What is Transmission Medium?
What Is Switch in Networking?
The 7 Layers of Networking
This document summarizes distributed computing. It discusses the history and origins of distributed computing in the 1960s with concurrent processes communicating through message passing. It describes how distributed computing works by splitting a program into parts that run simultaneously on multiple networked computers. Examples of distributed systems include telecommunication networks, network applications, real-time process control systems, and parallel scientific computing. The advantages of distributed computing include economics, speed, reliability, and scalability while the disadvantages include complexity and network problems.
Distributed shared memory (DSM) provides processes with a shared address space across distributed memory systems. DSM exists only virtually, through primitives like read and write operations: it gives the illusion of physically shared memory while allowing loosely coupled distributed systems to share memory over a communication network. Each node has its own CPUs and memory; blocks of shared memory can be cached locally and migrated on demand between nodes to maintain consistency.
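The on-demand migration described above can be sketched as a toy single-owner protocol: an access from a node that does not hold the block triggers a migration, much like a page fault. All class and node names here are invented for illustration:

```python
class ToyDSM:
    """Toy DSM with one owner per block: a remote access migrates the block."""

    def __init__(self, blocks_at):
        self.owner = dict(blocks_at)           # block id -> owning node
        self.data = {b: None for b in self.owner}
        self.migrations = 0

    def access(self, node, block):
        if self.owner[block] != node:          # "page fault": block is remote
            self.owner[block] = node           # migrate it to the faulting node
            self.migrations += 1
        return self.data[block]

    def write(self, node, block, value):
        self.access(node, block)               # ensure the block is local first
        self.data[block] = value
```

The `migrations` counter makes the performance concern concrete: every remote access costs a network transfer, which is exactly the penalty the summaries above warn about.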
The document discusses network models including the OSI model and TCP/IP model. It describes the seven layers of the OSI model and the functions of each layer. It also discusses the four layers of the TCP/IP model and compares the two models, noting they are similar in concept but differ in number of layers and how protocols fit within each model.
Query Processing: Query Processing Problem, Layers of Query Processing. Query Processing in Centralized Systems – Parsing & Translation, Optimization, Code Generation, Example. Query Processing in Distributed Systems – Mapping global query to local, Optimization,
Synchronization in Distributed Computing (S Vijaylakshmi)
Synchronization in distributed systems is achieved via clocks. Physical clocks are used to adjust the time of nodes, and each node in the system can share its local time with the other nodes. The time is set based on UTC (Coordinated Universal Time).
This document discusses structured naming in distributed systems. It describes name spaces as labeled, directed graphs with leaf nodes representing named entities and directory nodes linking to other nodes. Name resolution starts at the root node and follows the directory tables at each node until reaching the target node. Name spaces can be hierarchical trees or directed acyclic graphs. The Domain Name System (DNS) implements a global, hierarchical name space as a rooted tree with domain names representing subtrees.
This document discusses two common models for distributed computing communication: message passing and remote procedure calls (RPC). It describes the basic primitives and design issues for each model. For message passing, it covers synchronous vs asynchronous and blocking vs non-blocking primitives. For RPC, it explains the client-server model and how stubs are used to convert parameters and return results between machines. It also discusses binding, parameter passing techniques, and ensuring error handling and execution semantics.
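The blocking vs non-blocking distinction for message passing can be shown with a simple mailbox. A sketch using Python's thread-safe `queue.Queue` (the function names are illustrative primitives, not a real message-passing API):

```python
import queue

mailbox = queue.Queue()

def send(msg):
    """Asynchronous send: deposit the message and return immediately."""
    mailbox.put(msg)

def recv_blocking():
    """Blocking receive: suspend until a message is available."""
    return mailbox.get()

def recv_nonblocking():
    """Non-blocking receive: return None immediately if the mailbox is empty."""
    try:
        return mailbox.get_nowait()
    except queue.Empty:
        return None
```

A blocking receive is simpler to reason about but ties up the caller; a non-blocking receive lets the process poll and do other work, at the cost of busy-checking logic.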
2. Distributed Systems Hardware & Software Concepts (Prajakta Rane)
This document discusses distributed system software and middleware. It describes three types of operating systems used in distributed systems - distributed operating systems, network operating systems, and middleware operating systems. Middleware operating systems provide a common set of services for local applications and independent services for remote applications. Common middleware models include remote procedure call, remote method invocation, CORBA, and message-oriented middleware. Middleware offers services like naming, persistence, messaging, querying, concurrency control, and security.
Top-down design and modular development are techniques used by computer programmers to break down large, complex problems into smaller, more manageable parts. Top-down design involves starting with a clear statement of the overall problem and then systematically breaking it into sub-problems, while modular development involves developing individual software modules separately and then combining them into a solution. These techniques make projects more manageable, faster to complete, higher quality, easier to debug, and promote code reusability.
The document discusses the distributed operating system Amoeba. It provides an overview of Amoeba's goals of distribution, parallelism, transparency and performance. The key concepts of Amoeba are that it uses a microkernel architecture and remote procedure calls for communication between client and server processes. Amoeba's architecture consists of four main parts: workstations, processor pools, servers, and WAN gateways.
The Amoeba operating system was developed as a research project in Amsterdam to provide users with a single, powerful time-sharing system distributed across multiple machines. Its key goals were distribution, parallelism, transparency, and performance. It used a microkernel architecture and object-based model with capabilities for security. While innovative in its design, Amoeba's complexity made it difficult to adopt compared to other clustered operating systems. However, it provided valuable insights that informed later distributed systems.
The document discusses software design, which involves deciding how to implement system requirements using available technology. It covers topics like software architecture, dividing a system into subsystems and interfaces. The key benefits of design are that it makes a project easier to implement, test and maintain. Good design leads to good quality software while bad design can make a project impossible. The phases of design process include architectural design, class design, user interface design, and algorithm design. Design principles discussed aim to divide problems into smaller parts, increase cohesion, reduce coupling, use abstraction, design for flexibility and testability.
Distributed Operating System Amoeba Case Study (RamuAryan)
The Amoeba server, one of the most useful research topics in distributed operating systems: a description of objects, capabilities, the pool server, and process management in Amoeba.
This document outlines three rebus activities for different year levels. The first activity for Year 4 involves recalling a story from the previous lesson using rebuses. The second activity for Year 5 has students learn about places in Malaysia by looking at pictures and answering questions. The third activity for Year 6 has students make a mind map about a topic like transportation after discussing what they already know and reading a text provided by the teacher.
This document compares three distributed operating systems: Amoeba, Mach, and Chorus. Amoeba was designed for distributed systems and uses a pool processor execution model, automatic load balancing, and automatic file replication. Mach was designed for single CPU/multiprocessors and provides extensive multiprocessor support. Chorus is a microkernel-based real-time operating system that is optimized for the local case and provides asynchronous communication. The document outlines key differences between the three operating systems in areas such as architecture, communication methods, memory management, and UNIX compatibility.
Bottom-up and top-down models describe two approaches to reading. Bottom-up processing focuses on individual letters and words and proceeds from parts to the whole, like the phonics approach which teaches letter-sound relationships. Top-down processing emphasizes using context and prior knowledge to understand texts as a whole before analyzing individual parts, like the whole language approach. Both approaches have benefits for different types of learners.
Complex problems can be solved using the top-down design model, also known as step-wise refinement, where we break the problem into parts, then break those parts into sub-parts, and so on; eventually each part is easy to code and accomplish…
Theories in reading instruction
TOP-DOWN READING MODEL
Emphasizes what the reader brings to the text
Says reading is driven by meaning
Proceeds from whole to part
Views from some researchers
1. Frank Smith – Reading is not decoding written language to spoken language
2. Reading is a matter of bringing meaning to print
FEATURES OF TOP-DOWN APPROACH
Readers can comprehend a selection even though they do not recognize each word.
Readers should use meaning and grammatical cues to identify unrecognized words.
Reading for meaning is the primary objective of reading, rather than mastery of letters, letters/sound relationships and words.
Reading requires the use of meaning activities rather than the mastery of a series of word-recognition skills.
The primary focus of instruction should be the reading of sentences, paragraphs, and whole selections
The most important aspect about reading is the amount and kind of information gained through reading.
BOTTOM UP
Emphasizes a single direction
Emphasizes the written or printed texts
Part to whole model
Reading is driven by a process that results in meaning
PROPONENTS OF THE BOTTOM UP
Flesch 1955
Gough 1985
FEATURES OF BOTTOM-UP
Believes the reader needs to:
Identify letter features
Link these features to recognize letters
Combine letter to recognize spelling patterns
Link spelling patterns to recognize words
Proceed to sentence, paragraph, and text-level processing
INTERACTIVE READING MODEL
It recognizes the interaction of bottom-up and top-down processes simultaneously throughout the reading process.
Reading as an active process that depends on reader characteristics, the text, and the reading situation (Rumelhart, 1985)
Attempts to combine the valid insights of bottom-up and top-down models.
PROPONENTS OF THE INTERACTIVE READING MODEL
Rumelhart, D. 1985
Barr, Sadow, and Blachowicz 1990
Ruddell and Speaker 1985
This document defines and discusses key principles and characteristics of distributed systems. It states that a distributed system is a collection of independent computers that appear as a single coherent system to users. Important goals of distributed systems are connecting users to resources, transparency, openness, and scalability. Distributed systems are made up of hardware components like multiple autonomous machines that communicate over a network, as well as software like middleware that hides the underlying platform heterogeneity from applications.
The document provides an introduction to distributed systems, including definitions, goals, types, and challenges. It defines a distributed system as a collection of independent computers that appear as a single system to users. Distributed systems aim to share resources and data across multiple computers for availability, reliability, scalability, and performance. There are three main types: distributed computing systems, distributed information systems, and distributed pervasive systems. Developing distributed systems faces challenges around concurrency, security, partial failures, and heterogeneity.
The document discusses the history and concepts of distributed systems. It defines a distributed system as a collection of independent computers that appears as a single system to users. Distributed systems provide benefits like resource sharing, availability, scalability, and performance. However, they also introduce challenges around concurrency, security, partial failures, and heterogeneity. The document outlines common goals for distributed systems like transparency, openness, and scalability. It describes different approaches to scaling distributed systems through techniques like hiding latencies, distribution, and replication. Finally, it discusses key hardware concepts like multiprocessors and multicomputers as well as software approaches like distributed operating systems, network operating systems, and middleware.
This document outlines the key concepts in distributed systems and paradigms. It begins with definitions of distributed systems and discusses various forms of transparency in distributed systems like access, location, and replication transparency. It then covers scaling techniques like hiding communication latencies and distribution. The document outlines concepts in distributed operating systems, network operating systems, middleware, and how they provide different degrees of transparency and scalability. It provides examples of client-server models and multitier architectures in distributed systems.
The document introduces distributed systems, defining them as collections of independent computers that appear as a single system to users, discusses the goals of transparency, openness, and scalability in distributed systems, and describes three main types - distributed computing systems for tasks like clustering and grids, distributed information systems for integrating applications, and distributed pervasive systems for mobile and embedded devices.
This document provides an introduction and definition of distributed systems. It discusses that a distributed system consists of multiple autonomous computers that appear as a single system to users. It describes characteristics like transparency, openness, and scalability. Hardware concepts like shared memory multiprocessors and message passing multicomputers are covered. Software concepts like distributed operating systems and network operating systems are introduced. Transparency, organization, goals and examples of distributed systems are summarized.
A distributed system is a collection of independent computers that appears to its users as a single coherent system. It provides transparency around resources that may be located remotely, shared concurrently, or migrated. Distributed systems achieve scalability through techniques like dividing resources across multiple servers. They employ various architectures including client-server, multitier, and middleware to distribute functionality.
A distributed system is a collection of independent computers that appears to its users as a single coherent system. It provides various forms of transparency such as hiding location, failure recovery and concurrency. Distributed systems face scalability challenges and employ techniques like dividing resources across multiple servers. They use concepts such as middleware, distributed operating systems and network operating systems to manage resources and provide transparency.
A distributed system is a collection of independent computers that appears to users as a single coherent system. Key properties of distributed systems include transparency, where differences between computers are hidden from users, coherency in providing consistent interaction regardless of location or time, and scalability to expand the system size and resources. Distributed systems aim to be reliable, remaining continuously available despite potential failures of individual components.
The document discusses internetworking models and the OSI reference model. It provides details on each of the 7 layers of the OSI model:
1. The Application layer handles communication between applications and users.
2. The Presentation layer translates and formats data for transmission.
3. The Session layer establishes and manages communication sessions between devices.
4. The Transport layer segments data, establishes logical connections, and ensures reliable delivery between hosts.
5. The Network layer handles logical addressing and routes packets between hosts.
6. The Data Link layer frames data and provides node-to-node delivery and error detection.
7. The Physical layer transmits raw bits over the physical medium.
1. A distributed system is a collection of independent computers that appears as a single coherent system to users. It is organized as middleware that extends over multiple machines.
2. Transparency in distributed systems hides where resources are located, that they may move, be replicated, or shared concurrently. It also hides failures and whether resources are in memory or disk.
3. Scaling techniques include dividing work like form checking between servers and clients, partitioning namespaces, and distributing data and services across machines.
Distributed computing involves a collection of independent computers that appear as a single coherent system to users. It allows for pooling of resources and increased reliability through replication. Key aspects of distributed systems include hiding the distribution from users, providing a consistent interface, scalability, and fault tolerance. Common examples are web search, online games, and financial trading systems. Distributed computing is used for tasks like high-performance computing through cluster and grid computing.
The document discusses the basic concepts of networking including:
- A computer network allows computers to exchange information through connections like copper wire, fiber optics, or wireless technologies.
- A distributed system makes multiple connected computers appear as a single system to users by automatically allocating jobs and sharing files without user intervention.
- The OSI model defines seven layers that each perform network functions, from physical transmission of bits to high-level applications.
The document provides an overview of computer networking concepts. It begins by defining what a computer network is and listing different types of networks, including PAN, LAN, WAN, MAN, SAN, and VPN. It then explains the OSI 7-layer model and TCP/IP model, describing each layer. Finally, it compares the OSI and TCP/IP models, noting differences in how layers are combined in each model. The document serves as an introduction to fundamental networking topics for someone learning about computer networks.
The document discusses computer networks and networking concepts. It defines a computer network as an interconnection of two or more computers that allows users to share information and resources. The document describes two common network models - the peer-to-peer and client/server models. It also explains the seven layer OSI reference model and compares it to the four layer TCP/IP model. Finally, it categorizes different types of networks including LANs, MANs, WANs, PANs, wireless networks, and home networks.
Networking involves connecting electronic devices like computers to share resources and communicate. It allows devices to share internet access, hardware like printers, files and folders, and play multiplayer games. Networks use various topologies like star, bus or mesh to connect devices via physical cables or wireless links, and network protocols allow the connected devices to communicate according to shared rules.
1. The OSI model is a standard reference model that defines the functions of a networking system by separating it into 7 layers.
2. Each layer has a specific role and provides services to the layer above it. Data moves down the layers at the sending device and up at the receiving device.
3. The model was developed in 1984 by ISO to provide a common way of designing and implementing communication between any two systems using a network. It aims to make networks more flexible, efficient and easy to maintain.
A distributed operating system allows applications to run on multiple interconnected computers. It makes the distributed computers appear as a single centralized system to users. There are two main types - networking operating systems, which allow file and printer sharing on a local network, and distributed operating systems, where users are unaware of the underlying machines. Effective communication between the distributed systems requires addressing issues like naming, routing, connections, and dealing with contention for shared resources. While distributed systems provide benefits like improved performance and reliability, they also face challenges such as security, bandwidth limitations, and reduced performance due to network delays.
The document discusses the OSI model, which is a standard networking framework that defines 7 layers for network communication. Each layer provides services to the layer above it and receives services from the layer below it. The layers are physical, data link, network, transport, session, presentation, and application. The model separates network functions into logical layers to simplify network design, debugging, and management. It allows interoperability between different types of networks and systems.
The document provides an overview of the OSI model, which is a standard communication architecture for connecting devices. It describes the seven layers of the OSI model from physical to application layer and their functions. Each layer provides services to the layer above it and communicates with the same layer on other systems. Data is encapsulated with protocol information as it moves down the layers before being transmitted over the network.
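The encapsulation described here can be pictured as each layer wrapping the payload with its own header on the way down, and the receiver stripping them in reverse on the way up. A toy sketch, assuming nothing about real protocol formats (the header strings are invented for illustration):

```python
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link"]

def encapsulate(payload):
    """Wrap the payload with one header per layer; the last layer
    applied (data-link) ends up outermost, as on the wire."""
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame):
    """At the receiver, strip headers from the outside in,
    i.e. in reverse layer order."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame
```

Round-tripping a payload through both functions returns the original data, which is the essential property of layered encapsulation.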
This document outlines the syllabus for an MTCSCS302 course on Soft Computing taught by Dr. Sandeep Kumar Poonia. The course covers topics including neural networks, fuzzy logic, probabilistic reasoning, and genetic algorithms. It is divided into five units: (1) neural networks, (2) fuzzy logic, (3) fuzzy arithmetic and logic, (4) neuro-fuzzy systems and applications of fuzzy logic, and (5) genetic algorithms and their applications. The goal of the course is to provide students with knowledge of soft computing fundamentals and approaches for solving complex real-world problems.
Artificial Bee Colony (ABC) is a swarm optimization technique generally used to solve nonlinear and complex problems. ABC is one of the simplest and most recent population-based probabilistic strategies for global optimization. Like other population-based algorithms, ABC has some drawbacks: it is computationally expensive due to the slow nature of its search procedure. The solution search equation of ABC is strongly influenced by a random quantity, which aids exploration at the cost of exploitation of the search space, and because of the large step size in the solution search equation the chances of skipping the true solution are high. For that reason, this paper introduces a new search strategy to balance the diversity and convergence capability of ABC. Both the employed bee phase and the onlooker bee phase are improved with the help of a local search strategy inspired by memetic algorithms. The paper also proposes new strategies for fitness calculation and probability calculation. The proposed algorithm is named Improved Memetic Search in ABC (IMeABC). It is tested over 13 unbiased benchmark functions of different complexities, and two real-world problems are also considered, to demonstrate the proposed algorithm's superiority over the original ABC algorithm and its recent variants.
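The solution-search step that these abstracts refer to can be sketched as follows. This is a minimal illustration of the standard ABC update equation v_ij = x_ij + phi_ij (x_ij - x_kj), not the IMeABC variant itself; function and variable names are our own.

```python
import random

def abc_candidate(swarm, i):
    """Standard ABC solution-search step: perturb one randomly chosen
    dimension of food source i toward/away from a random neighbour k."""
    dim = len(swarm[i])
    j = random.randrange(dim)                       # dimension to modify
    k = random.choice([s for s in range(len(swarm)) if s != i])
    phi = random.uniform(-1.0, 1.0)                 # the random quantity driving exploration
    candidate = list(swarm[i])
    candidate[j] = swarm[i][j] + phi * (swarm[i][j] - swarm[k][j])
    return candidate

def greedy_select(x, v, f):
    """Greedy selection: keep the candidate only if it improves fitness
    (here, lower objective value f is better)."""
    return v if f(v) < f(x) else x
```

The random factor phi is what the abstracts call the "random quantity": large values of phi make big steps that explore widely but can jump over good solutions, which is exactly the exploration/exploitation trade-off the memetic variants try to rebalance.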
Spider Monkey Optimization (SMO) is the newest addition to the class of swarm intelligence algorithms. SMO is a population-based stochastic meta-heuristic motivated by the intelligent foraging behaviour of fission-fusion social creatures, and it is a very good option for complex optimization problems. This paper proposes a modified strategy to enhance the performance of the original SMO: it introduces a new position update strategy and modifies both the local leader and global leader phases. The proposed strategy is named Modified Position Update in Spider Monkey Optimization (MPU-SMO). The algorithm is tested over benchmark problems, and the results show that it performs better on the considered unbiased problems.
The Artificial Bee Colony (ABC) algorithm is a Nature Inspired Algorithm (NIA) based on the intelligent food-foraging behaviour of the honey bee swarm. ABC has outperformed other NIAs and local search heuristics when tested on benchmark functions as well as real-world problems, but it occasionally shows premature convergence and stagnation due to a lack of balance between exploration and exploitation. This paper establishes a local search mechanism that enhances the exploration capability of ABC and avoids stagnation; with the help of the newly introduced local search strategy, it tries to balance the intensification and diversification of the search space. The proposed algorithm, named Enhanced Local Search in ABC (EnABC), is tested over eleven benchmark functions, and the results demonstrate its dominance over other competitive algorithms.
The document discusses a proposed Randomized Memetic Artificial Bee Colony (RMABC) algorithm for optimization problems. RMABC incorporates local search techniques into the Artificial Bee Colony algorithm to improve exploitation of promising solutions. It randomizes the step size in the local search to balance diversification and intensification. Experimental results on benchmark problems show RMABC outperforms other ABC algorithm variants in finding optimal solutions. The document provides background on optimization problems, nature-inspired algorithms, Artificial Bee Colony algorithm, and Memetic algorithms.
Differential Evolution (DE) is a renowned optimization strategy that can solve nonlinear and complex problems. DE is a well-known and uncomplicated population-based probabilistic approach for global optimization, and it has outperformed a number of Evolutionary Algorithms and other search heuristics, such as Particle Swarm Optimization, when tested on both benchmark and real-world problems. Nevertheless, DE, like other probabilistic optimization algorithms, sometimes exhibits premature convergence and stagnates at suboptimal positions. To avoid this stagnation behaviour while maintaining good convergence speed, an innovative search strategy is introduced in which the position update equation is modified according to a memetic search strategy: a better solution participates more often in the position update procedure. The position update equation is inspired by the memetic search in the artificial bee colony algorithm. The proposed strategy is named Memetic Search in Differential Evolution (MSDE). To prove the efficiency and efficacy of MSDE, it is tested over 8 benchmark optimization problems and three real-world optimization problems, and a comparative analysis is carried out between MSDE and the original DE. Results show that the proposed algorithm outperforms basic DE and its recent variants in most of the experiments.
Artificial Bee Colony (ABC) is a distinguished optimization strategy that can solve nonlinear and multifaceted problems. It is a comparatively straightforward and modern population-based probabilistic approach for global optimization. Like other population-based algorithms, ABC is computationally expensive due to the slow nature of its search procedure, and its solution search equation is strongly influenced by a random quantity that helps exploration at the cost of exploitation of the search space. Because of the large step size in the solution search equation, the chance of skipping the true solution is high. Therefore, this paper improves the onlooker bee phase with the help of a local search strategy inspired by memetic algorithms, to balance the diversity and convergence capability of ABC. The proposed algorithm is named Improved Onlooker Bee Phase in ABC (IoABC). It is tested over 12 well-known unbiased test problems of diverse complexities and two engineering optimization problems; the results show that it outperforms basic ABC and its recent variants in most of the experiments.
The Artificial Bee Colony (ABC) algorithm is a well-known and one of the latest swarm-intelligence-based techniques. It is a population-based meta-heuristic used for numerical optimization, based on the intelligent behaviour of honey bees, and it has some major advantages over other heuristic methods. To exploit these good features, a number of researchers have combined the ABC algorithm with other methods to generate new hybrid methods. This paper provides a comparative analysis of the hybrid differential Artificial Bee Colony algorithm against hybrid ABC-SPSO, a genetic algorithm, and an independent rough set approach, based on parameters such as technique, dimension, and methodology.
The Artificial Bee Colony (ABC) algorithm has proved its importance in solving a number of problems, including engineering optimization problems. ABC is one of the most popular and youngest members of the family of population-based, nature-inspired, meta-heuristic swarm intelligence methods, and it has proved its superiority over some other Nature Inspired Algorithms (NIA) when applied to both benchmark functions and real-world problems. The performance of ABC's search process depends on a random value that tries to balance the exploration and exploitation phases; to increase performance, the exploration of the search space and the exploitation of the optimal solution must be balanced. This paper outlines a new hybrid of the ABC algorithm with a Genetic Algorithm (GA): the proposed method integrates the crossover operation from GA with the original ABC algorithm and is named Crossover-based ABC (CbABC). CbABC strengthens the search, as crossover enhances exploration of the search space. CbABC is tested over four standard benchmark functions and a popular continuous optimization problem.
Multiplication of two 3D sparse matrices using 1D arrays and linked lists - Dr Sandeep Kumar Poonia
A basic algorithm of 3D sparse matrix multiplication (BASMM) is presented using one-dimensional (1D) arrays, which is then used for multiplying two 3D sparse matrices using linked lists. In this algorithm, the non-zero elements of the first and second sparse matrices (3D) are entered but stored in 1D arrays and linked lists, so that zeros can be removed or ignored rather than stored in memory. The position of each non-zero value, its row and column, is also stored. In this way the space complexity is decreased. There are two ways to store a sparse matrix in memory: row-major order and column-major order; this algorithm uses row-major order. By multiplying the two matrices with the BASMM algorithm, the time complexity is also decreased. The implementation uses simple C programming and data structure concepts that are easy for everyone to understand.
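The storage idea behind this approach can be illustrated in a few lines: keep only the non-zero entries as (row, column, value) triples in a flat list, and multiply two such matrices without ever materializing the zeros. This is a rough 2D sketch of the general idea, not the BASMM algorithm itself; all names are our own.

```python
def to_triples(matrix):
    """Store only the non-zero entries of a dense matrix as
    (row, col, value) triples, saving the space zeros would occupy."""
    return [(i, j, v)
            for i, row in enumerate(matrix)
            for j, v in enumerate(row) if v != 0]

def sparse_multiply(a_triples, b_triples):
    """Multiply two sparse matrices given as triples.
    Returns a dict mapping (row, col) -> value for non-zero results."""
    # Index B's triples by row so each A entry finds its partners quickly.
    b_by_row = {}
    for i, j, v in b_triples:
        b_by_row.setdefault(i, []).append((j, v))
    result = {}
    for i, k, va in a_triples:
        # Only column k of A meets row k of B; zeros never participate.
        for j, vb in b_by_row.get(k, []):
            result[(i, j)] = result.get((i, j), 0) + va * vb
    return result
```

Because the inner loop touches only non-zero pairs, both space and time scale with the number of non-zeros rather than with the full matrix dimensions, which is the complexity gain the abstract describes.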
This document summarizes a tool called Sunzip that uses the Huffman algorithm for data compression. It discusses how Huffman encoding works by assigning shorter bit codes to more common symbols to reduce file size. The tool analyzes files to determine symbol frequencies and builds a Huffman tree to assign variable-length codes. It allows compressing different data types like text, images, audio and video. Adaptive Huffman coding is also described, which dynamically updates the code tree as more data is processed. Benefits of Huffman compression include being fast, simple to implement and achieving close to optimal compression. Sample screenshots of the Sunzip tool are also provided showing file details before and after compression.
The Artificial Bee Colony (ABC) algorithm is a Nature Inspired Algorithm (NIA) based on the intelligent food-foraging behaviour of the honey bee swarm. This paper introduces a local search strategy that enhances the exploration competence of ABC and avoids the problem of stagnation. The proposed strategy introduces two new local search phases into the original ABC: one just after the onlooker bee phase and one after the scout bee phase. The newly introduced phases are inspired by a modified Golden Section Search (GSS) strategy. The proposed strategy is named New Local Search Strategy in ABC (NLSSABC). The proposed NLSSABC algorithm is applied to thirteen standard benchmark functions in order to prove its efficiency.
This document presents a new approach called mixed S-D slicing that combines static and dynamic program slicing using object-oriented concepts in C++. Static slicing analyzes the entire program code but produces larger slices, while dynamic slicing produces smaller slices based on a specific execution but is more difficult to compute. The mixed S-D slicing aims to generate dynamic slices faster by leveraging object-oriented features like classes. An example C++ program is provided to demonstrate the S-D slicing approach using concepts like classes, inheritance, and polymorphism. The approach is intended to reduce complexity and aid in debugging object-oriented programs by combining static and dynamic slicing techniques.
Performance evaluation of different routing protocols in WSN using different ... - Dr Sandeep Kumar Poonia
This document evaluates the performance of different routing protocols in wireless sensor networks using various network parameters. It simulates the Dynamic Source Routing (DSR) and Adhoc On-Demand Distance Vector (AODV) routing protocols in a 1000m x 1000m terrain area with 100 sensor nodes. The packet delivery fraction, average throughput, and normalized routing load are analyzed at different node speeds ranging from 20-100m/s. The results show that AODV performs better than DSR in terms of packet delivery fraction and normalized routing load, while DSR has better average throughput performance. In conclusion, AODV is more optimal for small terrain areas when considering packet delivery and routing overhead, while DSR provides higher data rates.
The Artificial Bee Colony (ABC) algorithm is a population-based heuristic search technique used for optimization problems, and a very effective technique for continuous optimization. Crossover operators have a better exploration property, so crossover operators are added to ABC. This paper presents ABC with different types of real-coded crossover operators and its application to the Travelling Salesman Problem (TSP). Each crossover operator is applied to two parents randomly selected from the current swarm; of the two offspring generated by the crossover, the best replaces the worst parent, while the other parent remains the same. ABC with a real-coded crossover operator is then applied to the travelling salesman problem. The experimental results show that the proposed algorithm performs better than ABC without crossover in terms of efficiency and accuracy.
This document describes a simulator for database aggregation using metadata. The simulator sits between an end-user application and a database management system (DBMS) to intercept SQL queries and transform them to take advantage of available aggregates using metadata describing the data warehouse schema. The simulator provides performance gains by optimizing queries to use appropriate aggregate tables. It was found to improve performance over previous aggregate navigators by making fewer calls to system tables through the use of metadata mappings. Experimental results showed the simulator solved queries faster than alternative approaches by transforming queries to leverage aggregate tables.
Performance evaluation of diff routing protocols in wsn using difft network p... - Dr Sandeep Kumar Poonia
In the recent past, wireless sensor networks have been introduced for use in many applications. To design such networks, the factors that must be considered include the coverage area, mobility, power consumption, and communication capabilities. The challenging goal of our project is to create a simulator to support wireless sensor network simulation. The network simulator NS-2, which supports both wired and wireless networks, is used with the wireless sensor network.
The Traveling Salesman Problem (TSP) involves finding the minimum cost tour that visits each customer exactly once and returns to the starting depot. Key heuristics to solve the TSP include nearest neighbor, insertion methods, and 2-opt exchanges. The Vehicle Routing Problem (VRP) extends the TSP by routing multiple vehicles of limited capacity from a central depot to serve customer demands. Common heuristics for the VRP include savings algorithms and sweep methods.
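The nearest-neighbour heuristic named above can be sketched in a few lines: from each city, always visit the closest unvisited city, then return to the depot. A minimal illustration under our own names and a made-up distance matrix; it finds a reasonable tour quickly but not necessarily the optimal one.

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited
    city, then close the tour by returning to the start."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]                 # return to the depot

def tour_cost(dist, tour):
    """Sum of edge costs along the tour."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))
```

Improvement heuristics such as 2-opt would then take a tour like this as a starting point and repeatedly swap pairs of edges while the cost decreases.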
This document provides an overview of linear programming, including:
- It describes the linear programming model which involves maximizing a linear objective function subject to linear constraints.
- It provides examples of linear programming problems like product mix, blending, transportation, and network flow problems.
- It explains how to develop a linear programming model by defining decision variables, the objective function, and constraints.
- It discusses solutions methods like the graphical and simplex methods. The simplex method involves iteratively moving to adjacent extreme points to maximize the objective function.
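The key fact behind both solution methods mentioned above is that an optimum of a linear program lies at an extreme point (vertex) of the feasible region; the graphical method checks vertices directly, and the simplex method moves between adjacent ones. A toy sketch with made-up problem data:

```python
def best_vertex(vertices, objective):
    """Evaluate a linear objective at each extreme point and keep the
    best one; the simplex method does this implicitly by moving
    between adjacent vertices instead of enumerating them all."""
    return max(vertices, key=objective)

# Example LP (data invented for illustration):
#   maximize 3x + 2y  subject to  x + y <= 4,  x <= 3,  x, y >= 0.
# The feasible region is a polygon with these extreme points:
vertices = [(0, 0), (3, 0), (3, 1), (0, 4)]
objective = lambda p: 3 * p[0] + 2 * p[1]
```

Here the optimum is the vertex (3, 1) with objective value 11; for problems with many variables, enumerating vertices is infeasible, which is why the simplex method's adjacent-vertex pivoting matters.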
This document discusses approximation algorithms and introduces several combinatorial optimization problems. It begins by explaining that approximation algorithms are needed to find near-optimal solutions for problems that cannot be solved in polynomial time, such as set cover and bin packing. It then provides examples of problems that are in P, NP, and NP-complete. Several techniques for designing approximation algorithms are outlined, including greedy algorithms, linear programming, and semidefinite programming. Specific NP-complete problems like vertex cover, set cover, and independent set are introduced, and approximation algorithms with performance guarantees are provided for set cover and vertex cover.
Bangladesh Economic Review 2024 [Bangladesh Economic Review 2024 Bangla.pdf]: a complete Bangla e-book (PDF) with computer, tablet, and smartphone versions, including a table of contents with bookmark and hyperlink menus.
A very important book for all of us: it is a key subject for the BCS, bank, and university admission examinations and for any competitive examination, and it also contains the most recent data and statistics about Bangladesh.
As a citizen, you should know this information.
It is useful for the BCS and bank written examinations, and will also be of great use to secondary and higher-secondary students.
Main Java [All of the Base Concepts].docx - adhitya5119
This is part 1 of my Java learning journey. It contains custom methods, classes, constructors, packages, multithreading, try-catch blocks, finally blocks, and more.
Beyond Degrees - Empowering the Workforce in the Context of Skills-First.pptx - EduSkills OECD
Iván Bornacelly, Policy Analyst at the OECD Centre for Skills, OECD, presents at the webinar 'Tackling job market gaps with a skills-first approach' on 12 June 2024
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) Curriculum - MJDuyan
(TLE 100) (Lesson 1) - Prelims
Discuss the EPP Curriculum in the Philippines:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
Explain the Nature and Scope of an Entrepreneur:
-Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing include infection, hyperpigmentation of the scar, contractures, and keloid formation.
How to Make a Field Mandatory in Odoo 17 - Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
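The two approaches described can be sketched as follows, using Odoo's standard field and view syntax; the model and field names below are invented for illustration, not taken from any real module.

```python
# Python model definition: required=True makes the field mandatory
# in every view where it appears. Hypothetical model for illustration.
from odoo import fields, models

class LibraryBook(models.Model):
    _name = "library.book"

    title = fields.Char(string="Title", required=True)

# XML form view: required="1" makes the field mandatory only in the
# context of this particular view:
#   <field name="title" required="1"/>
```

The Python route is the stricter choice: it is enforced at the ORM level for all views, while the XML attribute is a per-view constraint.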
2. DEFINITION OF A DISTRIBUTED SYSTEM
A distributed system:
Multiple connected CPUs working together
A collection of independent computers that appears to its users as a single coherent system
Examples: parallel machines, networked machines
3. ADVANTAGES AND DISADVANTAGES
Advantages
Communication and resource sharing possible
Economics – price-performance ratio
Reliability, scalability
Potential for incremental growth
Disadvantages
Distribution-aware PLs, OSs and applications
Network connectivity essential
Security and privacy
4. TRANSPARENCY IN A DISTRIBUTED SYSTEM
Different forms of transparency in a distributed system:
Access – Hide differences in data representation and how a resource is accessed
Location – Hide where a resource is located
Migration – Hide that a resource may move to another location
Relocation – Hide that a resource may be moved to another location while in use
Replication – Users cannot tell how many copies of a resource exist
Concurrency – Hide that a resource may be shared by several competitive users
Failure – Hide the failure and recovery of a resource
Persistence – Hide whether a (software) resource is in memory or on disk
6. SCALABILITY PROBLEMS
Examples of scalability limitations:
Centralized services – A single server for all users
Centralized data – A single on-line telephone book
Centralized algorithms – Doing routing based on complete information
7. HARDWARE CONCEPTS: MULTIPROCESSORS (1)
Multiprocessor dimensions
Memory: could be shared or be private to each CPU
Interconnect: could be shared (bus-based) or switched
A bus-based multiprocessor.
9. DISTRIBUTED SYSTEMS MODELS
Minicomputer model (e.g., early networks)
Each user has local machine
Local processing but can fetch remote data (files, databases)
Workstation model (e.g., Sprite)
Processing can also migrate
Client-server Model (e.g., V system, world wide web)
User has local workstation
Powerful workstations serve as servers (file, print, DB servers)
Processor pool model (e.g., Amoeba, Plan 9)
Terminals are Xterms or diskless terminals
Pool of backend processors handle processing
10. UNIPROCESSOR OPERATING SYSTEMS
An OS acts as a resource manager or an arbitrator
Manages CPU, I/O devices, memory
OS provides a virtual interface that is easier to use
than hardware
Structure of uniprocessor operating systems
Monolithic (e.g., MS-DOS, early UNIX)
One large kernel that handles everything
Layered design
Functionality is decomposed into N layers
Each layer uses services of layer N-1 and implements
new service(s) for layer N+1
12. DISTRIBUTED OPERATING SYSTEM
Manages resources in a distributed system
Seamlessly and transparently to the user
Looks to the user like a centralized OS
But operates on multiple independent CPUs
Provides transparency
Location, migration, concurrency, replication,…
Presents users with a virtual uniprocessor
13. DOS: CHARACTERISTICS (1)
Distributed Operating Systems
Allows the resources of a multiprocessor or multicomputer network to be integrated as a single system image
Hides and manages hardware and software resources
Provides transparency support
Provides heterogeneity support
Controls the network in the most effective way
Consists of low-level commands + local operating systems + distributed features
Inter-process communication (IPC)
14. DOS: CHARACTERISTICS (2)
remote file and device access
global addressing and naming
trading and naming services
synchronization and deadlock avoidance
resource allocation and protection
global resource sharing
communication security
no examples in general use but many research systems:
Amoeba, Chorus etc.
15. TYPES OF DISTRIBUTED OS
System / Description / Main goal:
DOS – Tightly-coupled operating system for multiprocessors and homogeneous multicomputers; goal: hide and manage hardware resources
NOS – Loosely-coupled operating system for heterogeneous multicomputers (LAN and WAN); goal: offer local services to remote clients
Middleware – Additional layer atop a NOS implementing general-purpose services; goal: provide distribution transparency
16. MULTIPROCESSOR OPERATING SYSTEMS
Like a uniprocessor operating system
Manages multiple CPUs transparently to the
user
Each processor has its own hardware cache
Maintain consistency of cached data
21. MIDDLEWARE EXAMPLES
Examples: Sun RPC, CORBA, DCOM, Java RMI (distributed object technology)
Built on top of the transport layer in the ISO/OSI 7-layer reference model: application (protocol), presentation (semantics), session (dialogue), transport (e.g. TCP or UDP), network (IP, ATM etc.), data link (frames, checksums), physical (bits and bytes)
Most are implemented over the internet protocols
Masks heterogeneity of underlying networks, hardware, operating systems and programming languages – so provides a uniform programming model with standard services
3 types of middleware:
transaction oriented (for distributed database applications)
message oriented (for reliable asynchronous communication)
remote procedure calls (RPC) – the precursor of object-oriented middleware such as RMI and CORBA
22. COMPARISON BETWEEN SYSTEMS
Item: Distributed OS (multiprocessor) / Distributed OS (multicomputer) / Network OS / Middleware-based OS
Degree of transparency: Very high / High / Low / High
Same OS on all nodes: Yes / Yes / No / No
Number of copies of OS: 1 / N / N / N
Basis for communication: Shared memory / Messages / Files / Model specific
Resource management: Global, central / Global, distributed / Per node / Per node
Scalability: No / Moderately / Yes / Varies
Openness: Closed / Closed / Open / Open
23. COMMUNICATION IN DISTRIBUTED SYSTEMS
Issues in communication
Message-oriented Communication
Remote Procedure Calls
Transparency but poor for passing references
Remote Method Invocation
RMIs are essentially RPCs but specific to remote objects
System-wide references passed as parameters
Stream-oriented Communication
24. TYPES OF COMMUNICATION
Message passing is the general basis of communication in a distributed system: transferring a set of data from a sender to a receiver.
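This transfer of a set of data from a sender to a receiver can be sketched with a local TCP connection; the address, port choice, and message are illustrative only.

```python
import socket
import threading

# A minimal sketch of message passing: a receiver thread accepts one
# message from a sender over a local TCP connection.

def run_receiver(listener, result):
    conn, _ = listener.accept()
    with conn:
        result.append(conn.recv(1024))  # receive the raw message bytes

listener = socket.socket()
listener.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
listener.listen(1)

result = []
t = threading.Thread(target=run_receiver, args=(listener, result))
t.start()

sender = socket.create_connection(listener.getsockname())
sender.sendall(b"hello")                # transfer the data to the receiver
sender.close()
t.join()
listener.close()
print(result[0])
```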
25. COMMUNICATION BETWEEN PROCESSES
Unstructured communication
Use shared memory or shared data structures
Structured communication
Use explicit messages (IPCs)
Distributed systems: both need low-level communication support
28. Physical layer
The physical layer is responsible for the movement of individual bits from one hop (node) to the next.
29. PHYSICAL LAYER
Provides the physical interface for transmission of information.
Defines rules by which bits are passed from one system to another on a physical communication medium.
Covers all mechanical, electrical, functional and procedural aspects of physical communication.
Characteristics such as voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, physical connectors, and other similar attributes are defined by physical layer specifications.
OSI Model
30. Data link layer
The data link layer is responsible for moving frames from one hop (node) to the next and performs error detection and correction.
31. DATA LINK LAYER
The data link layer attempts to provide reliable communication over the physical layer interface.
Breaks the outgoing data into frames and reassembles the received frames.
Creates and detects frame boundaries.
Handles errors by implementing an acknowledgement and retransmission scheme.
Implements flow control.
Supports point-to-point as well as broadcast communication.
Supports simplex, half-duplex or full-duplex communication.
33. Network layer
The network layer is responsible for the delivery of individual packets from the source host to the destination host.
Protocols: X.25 (connection-oriented), IP (connectionless)
34. NETWORK LAYER
Implements routing of frames (packets) through the network.
Defines the optimal path a packet should take from the source to the destination.
Defines logical addressing so that any endpoint can be identified.
Handles congestion in the network.
Facilitates interconnection between heterogeneous networks (internetworking).
The network layer also defines how to fragment a packet into smaller packets to accommodate different media.
36. Transport layer
The transport layer is responsible for the delivery of a message from one process to another.
Protocols: TCP (connection-oriented), UDP (connectionless)
37. TRANSPORT LAYER
The purpose of this layer is to provide a reliable mechanism for the exchange of data between two processes in different computers.
Ensures that the data units are delivered error-free.
Ensures that data units are delivered in sequence.
Ensures that there is no loss or duplication of data units.
Provides connectionless or connection-oriented service.
Provides for connection management.
Multiplexes multiple connections over a single channel.
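The connectionless transport service can be sketched with a UDP datagram exchanged over the loopback interface; SOCK_DGRAM selects UDP, while SOCK_STREAM would select the connection-oriented TCP service. The payload is illustrative.

```python
import socket

# Sketch of the connectionless service: a datagram is sent with no
# connection setup, unlike TCP, which would first perform a handshake.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))       # port 0: OS assigns a free port

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"ping", recv_sock.getsockname())  # no connection needed

data, addr = recv_sock.recvfrom(1024)
print(data)
send_sock.close()
recv_sock.close()
```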
40. SESSION LAYER
The session layer provides mechanisms for controlling the dialogue between the two end systems. It defines how to start, control and end conversations (called sessions) between applications.
This layer requests a logical connection to be established on an end-user's request.
Any necessary log-on or password validation is also handled by this layer.
The session layer is also responsible for terminating the connection.
This layer provides services like dialogue discipline, which can be full-duplex or half-duplex.
The session layer can also provide a check-pointing mechanism such that if a failure of some sort occurs between checkpoints, all data can be retransmitted from the last checkpoint.
42. PRESENTATION LAYER
The presentation layer defines the format in which data is to be exchanged between the two communicating entities.
It also handles data compression and data encryption (cryptography).
44. APPLICATION LAYER
The application layer interacts with application programs and is the highest layer of the OSI model.
It contains management functions to support distributed applications.
Examples of application layer services are file transfer, electronic mail, remote login, etc.
46. MIDDLEWARE PROTOCOLS
Middleware: layer that resides between an OS and an application
May implement general-purpose protocols that warrant their own layers
Example: distributed commit
47. CLIENT-SERVER COMMUNICATION MODEL
Structure: a group of servers offering service to clients
Based on a request/response paradigm
Techniques: sockets, remote procedure calls (RPC), Remote Method Invocation (RMI)
(Figure: a client machine and file, process, and terminal server machines, each running its own kernel.)
48. ISSUES IN CLIENT-SERVER COMMUNICATION
Addressing
Blocking versus non-blocking
Buffered versus unbuffered
Reliable versus unreliable
Server architecture: concurrent versus sequential
Scalability
49. ADDRESSING ISSUES
Question: how is the server located?
Hard-wired address
Machine address and process address are known a priori
Broadcast-based
Server chooses address from a sparse address space
Client broadcasts request
Can cache response for future use
Locate address via name server
(Figure: the three addressing schemes – hard-wired, broadcast-based, and via a name server NS.)
51. BLOCKING VERSUS NON-BLOCKING
Blocking communication (synchronous)
Send blocks until the message is actually sent
Receive blocks until a message is actually received
Non-blocking communication (asynchronous)
Send returns immediately
Receive does not block either
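The distinction can be sketched with a non-blocking receive, which returns immediately instead of waiting for a message; the socket pair and payload are illustrative.

```python
import socket

# Sketch of blocking vs non-blocking communication. With setblocking(False),
# recv returns immediately (raising BlockingIOError when no data is queued)
# instead of blocking until a message arrives.
a, b = socket.socketpair()
a.setblocking(False)
try:
    a.recv(16)               # nothing sent yet: fails immediately
    returned_early = False
except BlockingIOError:
    returned_early = True    # the non-blocking call did not wait

b.sendall(b"x")              # once data is queued, recv succeeds at once
received = a.recv(16)
print(returned_early, received)
a.close()
b.close()
```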
54. BUFFERING ISSUES
Unbuffered communication
Server must call receive before client can call send
Buffered communication
Client sends to a mailbox
Server receives from a mailbox
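Buffered communication can be sketched with a mailbox queue, so the client's send need not overlap with the server's receive; the message text is illustrative.

```python
import queue
import threading

# Sketch of buffered communication: the client sends to a mailbox (a queue),
# and the server receives from it later, so send and receive are decoupled.
mailbox = queue.Queue()

mailbox.put("request-1")           # client sends; does not wait for the server

def server(out):
    out.append(mailbox.get())      # server receives whenever it is ready

out = []
t = threading.Thread(target=server, args=(out,))
t.start()
t.join()
print(out[0])
```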
55. RELIABILITY
Unreliable channel
Need acknowledgements (ACKs)
Applications handle ACKs
ACKs for both request and reply
Reliable channel
Reply acts as ACK for request
Explicit ACK for response
Reliable communication on unreliable channels
Transport protocol handles lost messages
(Figure: on an unreliable channel both the request and the reply are explicitly ACKed; on a reliable channel the reply itself acknowledges the request.)
57. SERVER ARCHITECTURE
Sequential
Serve one request at a time
Can service multiple requests by employing events and asynchronous communication
Concurrent
Server spawns a process or thread to service each request
Can also use a pre-spawned pool of threads/processes (e.g., Apache)
Thus servers could be pure-sequential, event-based, thread-based, or process-based
Discussion: which architecture is most efficient?
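A minimal thread-based concurrent server can be sketched with Python's socketserver, which spawns a thread to service each request; the echo handler stands in for real request processing.

```python
import socket
import socketserver
import threading

# Sketch of a concurrent server: ThreadingTCPServer spawns a thread per
# connection. A pre-spawned pool would instead reuse existing threads.
class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Echo the request back to the client (illustrative "work").
        self.request.sendall(self.request.recv(1024))

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as c:
    c.sendall(b"job")
    reply = c.recv(1024)
print(reply)
server.shutdown()
server.server_close()
```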
58. SCALABILITY
Question: how can you scale the server capacity?
Buy bigger machine!
Replicate
Distribute data and/or algorithms
Ship code instead of data
Cache
59. TO PUSH OR PULL ?
Client-pull architecture
Clients pull data from servers (by sending requests)
Example: HTTP
Pro: stateless servers, failures are easy to handle
Con: limited scalability
Server-push architecture
Servers push data to clients
Example: video streaming, stock tickers
Pro: more scalable; Con: stateful servers, less resilient to failure
When/how often to push or pull?
60. GROUP COMMUNICATION
One-to-many communication: useful for distributed applications
Issues:
Group characteristics: static/dynamic, open/closed
Group addressing: multicast, broadcast, application-level multicast (unicast)
Atomicity
Message ordering
Scalability
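The application-level (unicast) variant of group addressing can be sketched as one logical send fanned out to a per-member mailbox; the member names and message are illustrative.

```python
import queue

# Sketch of application-level multicast: one-to-many delivery implemented by
# unicasting the same message to every group member's mailbox.
group = {"m1": queue.Queue(), "m2": queue.Queue(), "m3": queue.Queue()}

def multicast(msg):
    for mbox in group.values():    # one logical send, N unicast deliveries
        mbox.put(msg)

multicast("update-42")
received = [group[m].get() for m in sorted(group)]
print(received)
```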
61. PUTTING IT ALL TOGETHER: EMAIL
User uses a mail client to compose a message
Mail client connects to the mail server
Mail server looks up the address of the destination mail server
Mail server sets up a connection and passes the mail to the destination mail server
Destination stores the mail in an input buffer (user mailbox)
Recipient checks mail at a later time
62. EMAIL: DESIGN CONSIDERATIONS
Structured or unstructured?
Addressing?
Blocking/non-blocking?
Buffered or unbuffered?
Reliable or unreliable?
Server architecture
Scalability
Push or pull?
Group communication
63. REMOTE PROCEDURE CALLS
Goal: make distributed computing look like centralized computing
Allow remote services to be called as procedures
Transparency with regard to location, implementation, language
Issues:
How to pass parameters
Bindings
Semantics in the face of errors
Two classes: integrated into the programming language, and separate
64. CONVENTIONAL PROCEDURE CALL
a) Parameter passing in a local procedure call: the stack before the call to read
b) The stack while the called procedure is active
65. PARAMETER PASSING
Local procedure parameter passing:
Call-by-value
Call-by-reference: arrays, complex data structures
Remote procedure calls simulate this through:
Stubs – proxies
Flattening – marshalling
Related issue: global variables are not allowed in RPCs
66. CLIENT AND SERVER STUBS
Principle of RPC between a client and server program.
67. STUBS
Client makes a procedure call (just like a local procedure call) to the client stub
Server is written as a standard procedure
Stubs take care of packaging arguments and sending messages
Packaging parameters is called marshalling
A stub compiler generates stubs automatically from specs in an Interface Definition Language (IDL)
Simplifies programmer task
68. STEPS OF A REMOTE PROCEDURE CALL
1. Client procedure calls client stub in normal way
2. Client stub builds message, calls local OS
3. Client's OS sends message to remote OS
4. Remote OS gives message to server stub
5. Server stub unpacks parameters, calls server
6. Server does work, returns result to the stub
7. Server stub packs it in message, calls local OS
8. Server's OS sends message to client's OS
9. Client's OS gives message to client stub
10. Stub unpacks result, returns to client
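The ten steps above can be collapsed into a minimal in-process sketch: the client stub marshals the call into a message, the "network" is a plain bytes hand-off, and the server stub unpacks, calls the real procedure, and marshals the result. The procedure name and JSON wire format are illustrative, not any real RPC system's encoding.

```python
import json

def add(a, b):                       # the server-side procedure (step 6)
    return a + b

def server_stub(message):
    call = json.loads(message)                       # step 5: unpack parameters
    result = add(*call["args"])                      # step 6: do the work
    return json.dumps({"result": result}).encode()   # step 7: pack the reply

def client_stub(a, b):
    msg = json.dumps({"proc": "add", "args": [a, b]}).encode()  # step 2
    reply = server_stub(msg)          # steps 3-9: message transport, simulated
    return json.loads(reply)["result"]               # step 10: unpack, return

print(client_stub(2, 3))              # looks exactly like a local call
```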
70. MARSHALLING
Problem: different machines have different data formats
Intel: little endian, SPARC: big endian
Solution: use a standard representation
Example: external data representation (XDR)
Problem: how do we pass pointers?
If it points to a well-defined data structure, pass a copy and the server stub passes a pointer to the local copy
What about data structures containing pointers?
Prohibit them
Chase pointers over the network
Marshalling: transform parameters/results into a byte stream
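A minimal sketch of the standard-representation idea, using Python's struct in place of a real XDR library: XDR encodes 32-bit integers in big-endian order, and struct's ">i" format produces the same big-endian, 4-byte layout regardless of host endianness. The helper names are illustrative.

```python
import struct

def marshal_ints(values):
    # Pack each 32-bit integer in big-endian (network) order.
    return b"".join(struct.pack(">i", v) for v in values)

def unmarshal_ints(data):
    # Recover the integers 4 bytes at a time, independent of host endianness.
    return [struct.unpack(">i", data[i:i + 4])[0]
            for i in range(0, len(data), 4)]

wire = marshal_ints([1, 258])
print(wire.hex())        # same byte stream on little- and big-endian hosts
print(unmarshal_ints(wire))
```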
71. BINDING
Problem: how does a client locate a server?
Use bindings
Server:
Export server interface during initialization
Send name, version number, unique identifier, handle (address) to binder
Client:
First RPC: send message to binder to import server interface
Binder: check to see if server has exported interface
Return handle and unique identifier to client
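The export/import protocol can be sketched with the binder as a plain dictionary; the service name, version, and handle format below are made up for illustration.

```python
# Sketch of binding: the binder maps (name, version) to a handle.
binder = {}

def export(name, version, handle):
    # Server registers its interface with the binder at start-up.
    binder[(name, version)] = handle

def import_interface(name, version):
    # Client's first RPC asks the binder for the server's handle.
    return binder.get((name, version))

export("fileserver", 2, ("10.0.0.5", 6000))   # hypothetical address
handle = import_interface("fileserver", 2)
print(handle)
```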
72. BINDING: COMMENTS
Exporting and importing incurs overheads
Binder can be a bottleneck
Use multiple binders
Binder can do load balancing
73. FAILURE SEMANTICS
Client unable to locate server: return error
Lost request messages: simple timeout mechanisms
Lost replies: timeout mechanisms
Make operation idempotent
Use sequence numbers, mark retransmissions
Server failures: did the failure occur before or after the operation?
At-least-once semantics (SUNRPC)
At-most-once
No guarantee
Exactly once: desirable but difficult to achieve
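The sequence-number idea behind at-most-once handling of lost replies can be sketched as follows, assuming the server stores the reply for each executed sequence number; names are illustrative.

```python
# Sketch of duplicate filtering: the server remembers which sequence numbers
# it has executed and replays the stored reply for retransmissions, so a
# retried non-idempotent request is not executed twice.
executed = {}
counter = {"n": 0}

def handle(seq, amount):
    if seq in executed:              # retransmission: replay, don't re-execute
        return executed[seq]
    counter["n"] += amount           # a non-idempotent operation
    executed[seq] = counter["n"]
    return executed[seq]

first = handle(1, 10)
retry = handle(1, 10)                # client timed out and retransmitted
print(first, retry, counter["n"])    # the operation ran only once
```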
74. FAILURE SEMANTICS
Client failure: what happens to the server computation?
Referred to as an orphan
Extermination: log at the client stub and explicitly kill orphans
Overhead of maintaining disk logs
Reincarnation: divide time into epochs between failures and delete computations from old epochs
Gentle reincarnation: upon a new epoch broadcast, try to locate the owner first (delete only if no owner)
Expiration: give each RPC a fixed quantum T; explicitly request extensions
Periodic checks with client during long computations
75. IMPLEMENTATION ISSUES
Choice of protocol [affects communication costs]
Use existing protocol (UDP) or design from scratch
Packet size restrictions
Reliability in case of multiple-packet messages
Flow control
Copying costs are the dominant overheads
Need at least 2 copies per message
From client to NIC and from server NIC to server
As many as 7 copies
Stack in stub – message buffer in stub – kernel – NIC – medium – NIC – kernel – stub – server
Scatter-gather operations can reduce overheads
76. CASE STUDY: SUNRPC
One of the most widely used RPC systems
Developed for use with NFS
Built on top of UDP or TCP
TCP: stream is divided into records
UDP: max packet size < 8912 bytes
UDP: timeout plus limited number of retransmissions
TCP: return error if connection is terminated by server
Multiple arguments marshalled into a single structure
At-least-once semantics if a reply is received, at-least-zero semantics if no reply; with UDP, tries at-most-once
Uses Sun's eXternal Data Representation (XDR)
Big-endian order for 32-bit integers; handles arbitrarily large data structures
77. BINDER: PORT MAPPER
Server start-up: create port
Server stub calls svc_register to register program # and version # with the local port mapper
Port mapper stores program #, version #, and port
Client start-up: call clnt_create to locate the server port
Upon return, the client can call procedures at the server