Distributed computing allows computers connected over a network to coordinate activities and share resources. It appears as a single, integrated system to users. Key characteristics include resource sharing, openness, concurrency, scalability, fault tolerance, and transparency. Common architectures include client-server, n-tier, and peer-to-peer. Paradigms for distributed applications include message passing between processes, the client-server model with asymmetric roles, and the peer-to-peer model with equal roles.
Distributed Computing Report
INTRODUCTION:-
In the term distributed computing, the word distributed means spread out across
space. Thus, distributed computing is an activity performed on a spatially distributed
system.
A distributed system consists of a collection of autonomous computers, connected through a network and distributed operating system software, which enables the computers to coordinate their activities and to share the resources of the system - hardware, software and data - so that users perceive the system as a single, integrated computing facility.
(Figure 1: Distributed Computing)
These networked computers may be in the same room, on the same campus, in the same country, or on different continents. A distributed system may have a common goal, such as solving a large computational problem. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.
Rise of Distributed Computing:-
Computer hardware prices are falling while computing power is increasing.
Network connectivity is increasing, and nearly everyone is connected with high-bandwidth links.
It is easy to connect hardware together.
A combination of cheap processors is often more cost-effective than one expensive fast system.
Flexibility to add capacity according to needs.
Potential increase in reliability.
Sharing of resources.
Characteristics of Distributed Computing:-
Six key characteristics are primarily responsible for the usefulness of distributed systems: resource sharing, openness, concurrency, scalability, fault tolerance and transparency. It should be emphasized that they are not automatic consequences of distribution; a system must be carefully designed in order to ensure that they are achieved.
Resource Sharing:-
Resource sharing is the ability to use any hardware, software or data anywhere in the system. Resources in a distributed system, unlike in a centralized one, are physically encapsulated within one of the computers and can only be accessed from the others by communication. It is the resource manager that offers a communication interface enabling the resource to be accessed, manipulated and updated reliably and consistently. There are mainly two models of resource managers: the client/server model and the object-based model. The Object Management Group uses the latter in CORBA, in which any resource is treated as an object that encapsulates the resource by means of operations that users can invoke.
Openness:-
Openness is concerned with extensions and improvements of distributed systems.
New components have to be integrated with existing components so that the added
functionality becomes accessible from the distributed system as a whole. Hence, the static
and dynamic properties of services provided by components have to be published in
detailed interfaces.
Concurrency:-
Concurrency arises naturally in distributed systems from the separate activities
of users, the independence of resources and the location of server processes in separate
computers. Components in distributed systems are executed in concurrent processes.
These processes may access the same resource concurrently, so server processes must coordinate their actions to ensure system and data integrity.
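This coordination requirement can be sketched in miniature. The snippet below is an illustrative single-process Python sketch (not from the original report): concurrent workers update a shared resource, and a lock serializes the updates so the final state stays consistent.

```python
import threading

counter = 0                 # the shared resource
lock = threading.Lock()     # coordinates concurrent access

def worker():
    global counter
    for _ in range(10_000):
        with lock:          # serialize updates to keep data integrity
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000
```

Without the lock, the interleaved increments could lose updates; a server process guarding a shared resource must coordinate accesses in the same way.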
Scalability:-
Scalability concerns the ease of increasing the scale of the system (e.g. the number of processors) so as to accommodate more users and/or to improve the responsiveness of the system. Ideally, components should not need to be changed when the scale of a system increases.
Fault tolerance:-
Fault tolerance concerns the reliability of the system, so that in case of failure of hardware, software or network the system continues to operate properly, without significantly degrading its performance. It may be achieved by recovery (software) and redundancy (both software and hardware).
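As a hedged illustration of recovery plus redundancy (every name here, such as fault_tolerant_query and the replica list, is hypothetical): if one replica fails, the request falls back to another, so the service keeps operating.

```python
REPLICAS = ["server-a", "server-b", "server-c"]   # redundant resources

def query(server, request, failed):
    if server in failed:                  # simulate a hardware/network fault
        raise ConnectionError(f"{server} is down")
    return f"{server} answered {request}"

def fault_tolerant_query(request, failed=()):
    errors = []
    for server in REPLICAS:               # redundancy: try each replica in turn
        try:
            return query(server, request, failed)
        except ConnectionError as exc:
            errors.append(exc)            # recovery: fall back to the next one
    raise RuntimeError(f"all replicas failed: {errors}")

print(fault_tolerant_query("ping", failed={"server-a"}))  # server-b answered ping
```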
Transparency:-
Transparency hides the complexity of distributed systems from users and application programmers. They can perceive the system as a whole rather than as a collection of cooperating components, which reduces the difficulties in design and in operation.
This characteristic is orthogonal to the others. There are many aspects of transparency,
including access transparency, location transparency, concurrency transparency,
replication transparency, failure transparency, migration transparency, performance
transparency and scaling transparency.
Distributed Computing Architecture:-
Various hardware and software architectures are used for distributed computing. At
a lower level, it is necessary to interconnect multiple CPUs with some sort of network,
regardless of whether that network is printed onto a circuit board or made up of loosely-
coupled devices and cables. At a higher level, it is necessary to
interconnect processes running on those CPUs with some sort of communication system.
Distributed programming typically falls into one of several basic architectures or
categories: Client-server, 3-tier architecture, N-tier architecture, Distributed objects, loose
coupling, or tight coupling.
Client-server:-
Smart client code contacts the server for data, then formats and displays it to the
user. Input at the client is committed back to the server when it represents a
permanent change.
3-tier architecture:-
Three tier systems move the client intelligence to a middle tier so that stateless
clients can be used. This simplifies application deployment. Most web
applications are 3-Tier.
N-tier architecture:-
N-Tier refers typically to web applications which further forward their requests
to other enterprise services. This type of application is the one most responsible
for the success of application servers.
Tightly coupled (clustered):-
Tightly coupled architecture refers typically to a cluster of machines that
closely work together, running a shared process in parallel. The task is subdivided into parts that are computed individually by each machine and then put back together to produce the final result.
Peer-to-peer:-
Peer-to-peer is an architecture where there is no special machine or machines
that provide a service or manage the network resources. Instead all responsibilities
are uniformly divided among all machines, known as peers. Peers can serve both
as clients and servers.
Space-based:-
Space-based architecture refers to an infrastructure that creates the illusion (virtualization) of a single address space. Data are transparently replicated according to application needs, and decoupling in time, space and reference is achieved.
Another basic aspect of distributed computing architecture is the method of
communicating and coordinating work among concurrent processes. Through various
message passing protocols, processes may communicate directly with one another,
typically in a master/slave relationship. Alternatively, a "database-centric"
architecture can enable distributed computing to be done without any form of direct inter-
process communication, by utilizing a shared database.
Distributed Computing Paradigms:-
The Message Passing Paradigm:-
Message passing is the most fundamental paradigm for distributed applications.
A process sends a message representing a request. The message is delivered to a receiver,
which processes the request, and sends a message in response. In turn, the reply may
trigger a further request, which leads to a subsequent reply, and so forth.
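A minimal sketch of this request/reply exchange, using Python queues between two threads as stand-ins for the message channels (the names requests, replies and receiver are illustrative, not from the report):

```python
import queue
import threading

requests = queue.Queue()   # carries request messages to the receiver
replies = queue.Queue()    # carries reply messages back to the sender

def receiver():
    while True:
        msg = requests.get()            # wait for a request message
        if msg is None:                 # sentinel: no more requests
            break
        replies.put(f"processed {msg}") # process it and send a reply message

t = threading.Thread(target=receiver)
t.start()

requests.put("job-1")      # the sender issues a request...
reply = replies.get()      # ...and waits for the corresponding reply
requests.put(None)
t.join()

print(reply)  # processed job-1
```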
The Client-Server Paradigm:-
Perhaps the best known paradigm for network applications, the client-server
model assigns asymmetric roles to two collaborating processes. One process, the server,
plays the role of a service provider which waits passively for the arrival of requests. The
other, the client, issues specific requests to the server and awaits its response. Simple in
concept, the client-server model provides an efficient abstraction for the delivery of
network services. Operations required include those for a server process to listen and to
accept requests, and for a client process to issue requests and accept responses. By
assigning asymmetric roles to the two sides, event synchronization is simplified: the
server process waits for requests, and the client in turn waits for responses. Many Internet
services are client-server applications. These services are often known by the protocol
that the application implements. Well known Internet services include HTTP, FTP, DNS,
etc.
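The listen/accept and request/response operations described above can be sketched with Python's standard socket module. This is an illustrative toy service, not one of the Internet protocols named above:

```python
import socket
import threading

def server(listener):
    """Server role: waits passively for a request, then sends a response."""
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"HELLO " + request)   # process the request and respond

listener = socket.socket()
listener.bind(("127.0.0.1", 0))             # ephemeral port on loopback
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

# Client role: issues a specific request and awaits the response.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"world")
response = client.recv(1024).decode()
client.close()

print(response)  # HELLO world
```

Note the asymmetric roles: the server only listens and accepts, while the client only connects and requests, which is exactly the event synchronization the paradigm simplifies.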
The Peer-to-Peer Distributed Computing Paradigm:-
In the peer-to-peer paradigm, the participating processes play equal roles, with
equivalent capabilities and responsibilities (hence the term “peer”). Each participant may
issue a request to another participant and receive a response. The peer-to-peer paradigm
is more appropriate for applications such as instant messaging, peer-to-peer file transfers,
video conferencing, and collaborative work. It is also possible for an application to be
based on both the client-server model and the peer-to-peer model. A well-known example
of a peer-to-peer file transfer service is Napster.com or similar sites which allow files
(primarily audio files) to be transmitted among computers on the Internet. It makes use of a directory server in addition to peer-to-peer computing.
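A toy sketch of the equal-roles idea: two peers share a connection, and each one both issues a request (client role) and answers the other's request (server role). All names here are illustrative:

```python
import socket
import threading

def peer(sock, name, question, answers):
    """Each peer acts as client (sends a REQ) and as server (answers a REQ)."""
    sock.sendall(f"REQ {question}\n".encode())        # client role
    served = replied = False
    buf = b""
    while not (served and replied):
        buf += sock.recv(1024)
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            kind, _, body = line.decode().partition(" ")
            if kind == "REQ":                         # server role: answer it
                sock.sendall(f"REP {name} echoes {body}\n".encode())
                served = True
            elif kind == "REP":                       # response to our request
                answers[name] = body
                replied = True

a, b = socket.socketpair()
answers = {}
t1 = threading.Thread(target=peer, args=(a, "alice", "hello", answers))
t2 = threading.Thread(target=peer, args=(b, "bob", "world", answers))
t1.start(); t2.start(); t1.join(); t2.join()

print(answers)  # {'alice': 'bob echoes hello', 'bob': 'alice echoes world'}
```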
Applications:-
There are many examples of commercial applications of distributed systems, such as database management systems, distributed computing using mobile agents, local intranets, the Internet (World Wide Web), Java RMI, etc.
Distributed Computing Using Mobile Agents:-
Mobile agents can wander around a network, using free resources for their own computations.
Local Intranet:-
A portion of the Internet that is separately administered and supports internal sharing of resources (file/storage systems and printers) is called a local intranet.
Internet:-
The Internet is a global system of interconnected computer networks that use the
standardized Internet Protocol Suite (TCP/IP).
Java RMI:-
Communicating Entities:-
Implement some application for the user
Use the support of distributed services
Layers of support: client/server
Embedded in the Java language:-
An object variant of remote procedure call (RPC)
Adds naming compared with RPC
Restricted to Java environments
RMI Features:-
Distributed object model:-
Objects: normal and remote
Idea:-
A remote object exists on another host
A remote object can be used like a normal object
Behavior is described by an interface
The environment takes care of remote invocation
Differences between normal and remote objects:-
Remote references can be distributed freely
Clients only know/use the interface, not the actual implementation
Remote objects are passed by reference, normal objects by copying
Failure handling is more complicated since the invocation itself can also fail
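Java RMI itself is Java-specific, but the remote-object idea - a client invokes methods through a proxy while the environment handles marshalling and transport - can be sketched in a language-neutral way with Python's stdlib xmlrpc. This is purely an analogy under stated assumptions, not RMI, and the Counter object is hypothetical:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

class Counter:
    """The 'remote object': it lives in the server process only."""
    def __init__(self):
        self.value = 0
    def increment(self, n):
        self.value += n
        return self.value

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_instance(Counter())          # publish the remote object
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy hides marshalling and network transport, so the
# remote invocation looks like a normal local method call.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.increment(5)
server.shutdown()

print(result)  # 5
```

As in RMI, the client only sees the object's interface (here, the increment method); the actual implementation and its state stay on the server host.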
RMI Architecture:-
Advantages:-
Economics:-
Computers harnessed together give a better price/performance ratio than
mainframes.
Speed:-
A distributed system may have more total computing power than a mainframe.
Inherent distribution of applications:-
Some applications are inherently distributed. E.g., an ATM-banking application.
Reliability:-
If one machine crashes, the system as a whole can still survive if you have
multiple server machines and multiple storage devices (redundancy).
Extensibility and Incremental Growth:-
It is possible to gradually scale up the system (in terms of processing power and functionality) by adding more resources (both hardware and software). This can be done without disruption to the rest of the system.
Distributed custodianship:-
The National Spatial Data Infrastructure (NSDI) calls for a system of
partnerships to produce a future national framework for data as a patchwork quilt
of information collected at different scales and produced and maintained by
different governments and agencies. NSDI will require novel arrangements for
framework management, area integration, and data distribution. This research will
examine the basic feasibility and likely effects of such distributed custodianship
in the context of distributed computing architectures, and will determine the
institutional structures that must evolve to support such custodianship.
Data integration:-
This research will contribute to the integration of geographic information and
GISs into the mainstream of future libraries, which are likely to have full digital
capacity. The digital libraries of the future will offer services for manipulating
and processing data as well as for simple searches and retrieval.
Missed opportunities:-
By anticipating the impact that a rapidly advancing technology will have on
GISs, this research will allow the GIS community to take better advantage of the
opportunities that the technology offers.
Disadvantages:-
Lack of experience in designing and implementing distributed systems, e.g. which platform (hardware and OS) to use, which language to use, etc. But this is changing now.
If the network underlying a distributed system saturates or goes down, then the
distributed system will be effectively disabled thus negating most of the
advantages of the distributed system.
Security is a major hazard since easy access to data means easy access to secret
data as well.
Conclusions:-
In this age of optimization everybody is trying to get optimized output from their limited resources. Distributed computing is one of the most efficient ways to achieve such optimization. In distributed computing the actual task is modularized and distributed among various computer systems. This not only increases the efficiency of the task but also reduces the total time required to complete it. An advanced form of this concept, distributed computing through mobile agents, is setting a new landmark in this technology. A mobile agent is a process that can transport its state from one environment to another, with its data intact, and is capable of performing appropriately in the new environment.