What is a distributed system?
A distributed system is a collection of independent computers that coordinate their activity, share resources, and appear to their users as a single coherent system.
Why do we need distributed systems?
• The nature of the application requires a distributed architecture
• Availability/Reliability (no single point of failure)
• Performance (a bunch of commodity servers gives more performance than one supercomputer)
• Cost efficiency (a bunch of commodity servers costs less than one supercomputer)
Examples of distributed systems
Telecom networks (telephone/computer networks)
WWW, peer-to-peer networks
Multiplayer online games
Network file systems
Aircraft control systems
Scientific computing (cluster/grid computing)
Distributed systems characteristics
• Lack of a global clock
• Multiple autonomous components
• Components are not shared by all users
• Resources may not be accessible
• Software runs in concurrent processes on different processors
• Multiple points of control (distributed management)
• Multiple points of failure (fault tolerance)
• The structure of the system (network topology, network latency, number of computers) is not known in advance
• Each computer has only a limited, incomplete view of the system.
Advantages over centralized systems
•The system can easily be expanded by adding more machines as needed.
•Several machines can provide the same services, so if one is unavailable, work does not stop.
•A collection of microprocessors offers a better price/performance ratio than mainframes: a cost-effective way to increase computing power.
•If one machine crashes, the system as a whole can still survive.
•A distributed system may have more total computing power than a mainframe.
•Computing power can be added in small increments.
Advantages over independent PCs
• Allow many users to access common data
• Allow shared access to common resources
• Enhance human-to-human communication
• Spread the workload over the available machines
In the shared-memory model, processors may have access to a shared memory to exchange information. In the message-passing model, each processor has its own private memory, and information is exchanged by passing messages between the processors.
Parallel algorithms in shared-memory model
• All computers have access to a shared memory. The algorithm designer chooses the
program executed by each computer.
Parallel algorithms in message-passing model
• The algorithm designer chooses the structure of the network, as well as the program
executed by each computer.
Distributed algorithms in message-passing model
• The algorithm designer only chooses the computer program. All computers run the same program. The system must work correctly regardless of the structure of the network.
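To make the contrast between the two memory models concrete, here is a minimal sketch (not from the slides, all names illustrative) of the same counting task done with shared memory (threads mutating one variable under a lock) and with message passing (processes that communicate only via queues):

```python
# Minimal sketch (illustrative only): the same counting task in the
# shared-memory model and in the message-passing model.
import threading
import multiprocessing as mp

# Shared-memory model: all workers mutate the same variable under a lock.
counter = 0
lock = threading.Lock()

def shared_memory_worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:                 # coordination through shared state
            counter += 1

# Message-passing model: each worker keeps private state and only
# communicates by sending messages (here, via a queue).
def message_passing_worker(increments: int, out) -> None:
    local = 0                      # private memory, nothing shared
    for _ in range(increments):
        local += 1
    out.put(local)                 # state exchanged as a message

if __name__ == "__main__":
    threads = [threading.Thread(target=shared_memory_worker, args=(1000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared-memory total:", counter)        # 4000

    q = mp.Queue()
    procs = [mp.Process(target=message_passing_worker, args=(1000, q))
             for _ in range(4)]
    for p in procs:
        p.start()
    total = sum(q.get() for _ in procs)           # collect the messages
    for p in procs:
        p.join()
    print("message-passing total:", total)        # 4000
```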
It turns out that distributed systems have some fundamental problems.
Byzantine fault-tolerance problem
The objective of Byzantine fault tolerance is to be able to defend against Byzantine failures, in which components of a system fail in arbitrary ways. Known algorithms can ensure correct operation only if fewer than 1/3 of the processes are faulty.
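A tiny sketch of that bound (function names are my own): tolerating f Byzantine nodes requires n >= 3f + 1 nodes in total, which is exactly the "fewer than one third faulty" condition.

```python
# The classic Byzantine fault-tolerance bound: n >= 3f + 1.
def min_nodes_for_byzantine_faults(f: int) -> int:
    """Smallest cluster size that can tolerate f Byzantine failures."""
    return 3 * f + 1

def max_byzantine_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1 still holds."""
    return (n - 1) // 3

for n in (4, 7, 10):
    print(f"{n} nodes tolerate {max_byzantine_faults(n)} Byzantine fault(s)")
# 4 nodes tolerate 1, 7 tolerate 2, 10 tolerate 3
```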
Two Generals problem
It has been proven that there is no solution to this problem other than probabilistic ones…
Instances of the consensus problem:
• Agreeing on the identity of a leader
• State-machine replication
• Atomic broadcasts
There are a number of protocols that solve the consensus problem in distributed systems, such as the widely used `Paxos consensus protocol` (http://en.wikipedia.org/wiki/Paxos_algorithm); a simplified sketch follows.
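The sketch below is a heavily simplified, single-process rendering of single-decree Paxos, just to show its two phases. It ignores networking, failures, and competing proposers, and every name in it is illustrative rather than any library's API.

```python
# Single-decree Paxos, stripped to its two phases (illustrative only:
# in-memory "acceptors", no networking, failures, or competing proposers).
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Acceptor:
    promised: int = -1                  # highest ballot promised so far
    accepted_ballot: int = -1           # ballot of the accepted value
    accepted_value: Optional[Any] = None

    def prepare(self, ballot: int):
        """Phase 1b: promise to ignore ballots lower than `ballot`."""
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted_ballot, self.accepted_value
        return False, self.accepted_ballot, self.accepted_value

    def accept(self, ballot: int, value: Any) -> bool:
        """Phase 2b: accept unless a higher ballot was promised."""
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted_ballot = ballot
            self.accepted_value = value
            return True
        return False

def propose(acceptors, ballot: int, value: Any) -> Optional[Any]:
    """One proposer round; returns the value chosen by a majority."""
    quorum = len(acceptors) // 2 + 1
    # Phase 1a/1b: gather promises from a majority of acceptors.
    promises = []
    for a in acceptors:
        ok, prev_ballot, prev_value = a.prepare(ballot)
        if ok:
            promises.append((prev_ballot, prev_value))
    if len(promises) < quorum:
        return None
    # Safety rule: if any acceptor already accepted a value, adopt the
    # one with the highest ballot instead of our own proposal.
    highest = max(promises, key=lambda p: p[0])
    if highest[0] >= 0:
        value = highest[1]
    # Phase 2a/2b: ask acceptors to accept the (possibly adopted) value.
    accepted = sum(1 for a in acceptors if a.accept(ballot, value))
    return value if accepted >= quorum else None

acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, ballot=1, value="A"))   # "A" is chosen
print(propose(acceptors, ballot=2, value="B"))   # still "A": consensus holds
```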
Types of distributed systems
Grid computing is the collection of computer
resources from multiple locations to reach a common
goal. What distinguishes grid computing from
conventional high performance computing systems
such as cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed.
Computer clustering relies on a centralized
management approach which makes the nodes
available as orchestrated shared servers. It is distinct
from other approaches such as peer-to-peer or grid
computing which also use many nodes, but with a far
more distributed nature.
Distributed systems design
• The art of simplicity
• Aggressive use of caching
HA nodes configuration
• Traffic intended for the failed node is either passed onto an
existing node or load balanced across the remaining nodes.
• Provides a fully redundant instance of each node, which is
only brought online when its associated primary node fails:
• Software components are installed and available on both
primary and secondary nodes.
• The software component is installed and available on the secondary node. The secondary node is up and running.
• The secondary node acts as backup of another identical
primary system. It will be installed and configured only
when the primary node breaks down for the first time.
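As an illustration of the active/passive case, here is a toy heartbeat monitor (all names hypothetical); real HA stacks such as Pacemaker or keepalived add fencing, quorum, and virtual-IP failover that this sketch omits.

```python
# Toy active/passive failover monitor (illustrative only): the standby
# is promoted once the primary misses its heartbeats.
import time

HEARTBEAT_TIMEOUT = 3.0        # seconds of silence before failover

class Node:
    def __init__(self, name: str):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        self.last_heartbeat = time.monotonic()

    def is_alive(self) -> bool:
        return time.monotonic() - self.last_heartbeat < HEARTBEAT_TIMEOUT

def active_node(primary: Node, standby: Node) -> Node:
    """Return the node that should be serving traffic right now."""
    # On failover the standby already has the software installed and
    # available, as in the active/passive configuration above.
    return primary if primary.is_alive() else standby

primary, standby = Node("node-a"), Node("node-b")
primary.heartbeat()
print(active_node(primary, standby).name)           # node-a

primary.last_heartbeat -= HEARTBEAT_TIMEOUT + 1     # simulate silence
print(active_node(primary, standby).name)           # node-b
```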
Redundancy as is
Redundant Web/App Servers
Redundant storage network
Redundant electrical power
Redundancy in HA cluster
• Easy start/stop procedures
• Using NAS/SAN shared storage
• App should be able to store its state in shared storage
• App should be able to restart from the stored shared state on another node
• App shouldn't corrupt data if it crashes or restarts (a checkpointing sketch follows this list)
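A minimal sketch of those requirements, assuming a NAS/SAN mount at a hypothetical path: state is checkpointed with fsync plus an atomic rename, so a crash mid-write leaves either the old or the new checkpoint (never a corrupt one), and another node can restart from it.

```python
# Checkpointing app state to shared storage (illustrative; the mount
# point below is an assumption, not a path from the slides).
import json
import os
import tempfile

SHARED_STATE = "/mnt/shared/app-state.json"   # hypothetical NAS/SAN mount

def save_state(state: dict, path: str = SHARED_STATE) -> None:
    # Write to a temp file, fsync, then rename. os.replace is atomic on
    # POSIX, so a crash cannot leave a half-written (corrupt) checkpoint.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)

def load_state(path: str = SHARED_STATE) -> dict:
    # A node taking over after a failure restarts from the last checkpoint.
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)
```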
Replication in computing involves sharing information so as to
ensure consistency between redundant resources.
• Primary-backup (master-slave) schema – only the primary node processes requests and replicates state to the backups.
• Multi-primary (multi-master) schema – all nodes process requests simultaneously and distribute state between each other.
Backup differs from replication in that it saves a copy of data
unchanged for a long period of time. Replicas, on the other
hand, undergo frequent updates and quickly lose any historical state.
• Transactional replication. Synchronous replication to a number of nodes.
• State machine replication. Using a state machine based on Paxos.
• Virtual synchrony (Performance over fault-tolerance). Sending
asynchronous events to other nodes.
• Synchronous replication (Consistency over Performance) - guarantees
"zero data loss" by the means of atomic write operation.
• Asynchronous replication (Performance over Consistency) (Eventual consistency) - write is considered complete as soon as local storage acknowledges it. Remote storage is updated too, but probably with a lag. Both write paths are sketched below.
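A sketch of the two write paths with in-memory stand-ins for storage nodes (nothing here is a real replication API):

```python
# Synchronous vs. asynchronous replication, with in-memory "replicas"
# standing in for remote storage (illustrative only).
import queue
import threading

class Replica:
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

def write_sync(key, value, local: Replica, replicas: list) -> str:
    local.apply(key, value)
    for r in replicas:            # block until every replica has the write:
        r.apply(key, value)       # "zero data loss", but higher latency
    return "ok"

replication_log = queue.Queue()   # async writes queue up for the replicas

def write_async(key, value, local: Replica) -> str:
    local.apply(key, value)       # ack as soon as local storage has it
    replication_log.put((key, value))
    return "ok"                   # replicas will catch up, with a lag

def replication_worker(replicas: list) -> None:
    while True:
        key, value = replication_log.get()
        for r in replicas:
            r.apply(key, value)   # the eventual-consistency window is here
        replication_log.task_done()

local, backups = Replica(), [Replica(), Replica()]
threading.Thread(target=replication_worker, args=(backups,),
                 daemon=True).start()
write_async("x", 1, local)        # returns immediately
replication_log.join()            # demo only: wait for replicas to catch up
```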
Sharding is the process of storing data records across multiple
machines to meet demands of data growth.
• High query rates can exhaust the CPU capacity of a single server.
• Larger data sets exceed the storage capacity of a single machine.
• Finally, working set sizes larger than the system's RAM stress the I/O capacity of disk drives.
• Sharding reduces the number of operations each shard handles.
• Sharding reduces the amount of data that each server needs to store.
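For illustration, a minimal hash-sharding router (shard names are hypothetical): the shard key decides which machine holds a record, splitting both data and query load.

```python
# Minimal hash-based shard routing (illustrative only).
import hashlib

SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]   # hypothetical nodes

def shard_for(key: str) -> str:
    # Use a stable hash (not Python's per-process hash()) so every
    # client routes the same key to the same shard.
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SHARDS[digest % len(SHARDS)]

print(shard_for("user:42"))   # always the same shard for this key
```

Note that plain modulo placement reshuffles most keys when shards are added or removed; consistent hashing is the usual fix for that.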
Data Partitioning Principles
Partitioned Data with Backup Per Partition
When connectivity between the nodes in a cluster is lost and the cluster is divided into several parts (a "split brain"):
• Optimistic approach (Availability over Consistency)
o Leave as is and rely on later resynch (Hazelcast)
• Pessimistic approach (Consistency over Availability)
o Leave only one partition live until connectivity is restored (MongoDB)
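A one-function sketch of the pessimistic rule: a partition may keep serving only if it still sees a strict majority of the original cluster, so at most one side can stay live.

```python
# Pessimistic split-brain handling (illustrative): only the side that
# still sees a strict majority of the cluster keeps accepting writes.
CLUSTER_SIZE = 5

def may_stay_live(visible_nodes: int,
                  cluster_size: int = CLUSTER_SIZE) -> bool:
    return visible_nodes > cluster_size // 2   # strict majority required

print(may_stay_live(3))   # True  -> this partition keeps serving
print(may_stay_live(2))   # False -> this partition refuses writes
```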
• Strong consistency: after the update completes, any subsequent access will return the updated value.
• Weak consistency: the system does not guarantee that subsequent accesses will return the updated value.
• Eventual consistency: the storage system guarantees that if no new updates are made to the object, eventually all accesses will return the last updated value.
Strong => W + R > N
Weak/Eventual => W + R <= N
Optimized read => R=1, W=N
Optimized write => W=1, R=N
N – number of nodes
W – number of replicas that must acknowledge an update
R – number of replicas contacted for read
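The rule is mechanical enough to check in code; a small sketch over the configurations above:

```python
# Quorum consistency check: W + R > N forces every read set to overlap
# every write set, so a read always sees at least one up-to-date replica.
def is_strongly_consistent(n: int, w: int, r: int) -> bool:
    return w + r > n

configs = {
    "strong (N=3, W=2, R=2)":          (3, 2, 2),
    "weak/eventual (N=3, W=1, R=1)":   (3, 1, 1),
    "optimized read (N=3, W=3, R=1)":  (3, 3, 1),
    "optimized write (N=3, W=1, R=3)": (3, 1, 3),
}
for name, (n, w, r) in configs.items():
    print(name, "->",
          "strong" if is_strongly_consistent(n, w, r) else "weak")
```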
• No single point of failure
• Fault isolation
• Roll-back/Roll-forward procedures
• Diversity – several alternative implementations of the same functionality
Design using fault isolated “swimlanes”
Never trust single point of failure
Avoid putting systems in series
Ensure you have “switch on/switch off” for your new functionality
Put data closer to clients by scaling along the Z-axis.
Locate processing units near data to be processed.
BASE:
• Basic availability
• Soft state
• Eventual consistency
An alternative model to the well-known ACID, used in distributed systems to relax strong consistency constraints in order to achieve higher availability together with partition tolerance, as per the CAP theorem.
Eric Brewer’s quote
“Because partitions are rare, CAP should allow perfect C and A most of
the time, but when partitions are present or perceived, a strategy that
detects partitions and explicitly accounts for them is in order. This
strategy should have three steps: detect partitions, enter an explicit
partition mode that can limit some operations, and initiate a recovery
process to restore consistency and compensate for mistakes made
during a partition.”
Scaling out (X/Y/Z axes)
[X-Axis]: Horizontal duplication (design to clone things)
[Y-Axis]: Split by Function, Service or Resource (design to split different things)
[Z-Axis]: Lookups split (design to split similar things)
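A toy router showing the three axes together (every name here is hypothetical):

```python
# Scale-cube routing sketch (illustrative): X clones identical servers,
# Y splits by function, Z splits similar things (e.g. users) by key.
import random

CLONES = ["app-1", "app-2", "app-3"]           # X-axis: identical clones
SERVICES = {"search": "svc-search",            # Y-axis: split by function
            "checkout": "svc-checkout"}
USER_PODS = 4                                  # Z-axis: user-keyed pods

def route(function: str, user_id: int) -> str:
    service = SERVICES[function]               # Y: different things apart
    pod = user_id % USER_PODS                  # Z: similar things by key
    clone = random.choice(CLONES)              # X: any clone will do
    return f"{service}/pod-{pod}/{clone}"

print(route("checkout", user_id=42))           # e.g. svc-checkout/pod-2/app-1
```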
The art of simplicity
KISS (Keep it simple). Don’t overengineer a solution.
Simplify solution 3 times over (scope, design, implementation)
Reduce DNS lookups. Reduce objects where possible (Google main page)
Use homogenous networks where possible
Avoid too many traffic redirects
Don't check your work (avoid defensive programming)
Relax temporal constraints where possible
Aggressive use of caching
Use expires headers
Cache AJAX calls
Leverage Page Caches (Proxy Web Servers)
Utilize Application caches
Use Object Caches (ORM level)
Put caches in their own tier
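The common thread of all these tiers is a time-to-live: a cached value answers reads until it expires. A minimal object-cache sketch (names are my own, not any framework's API):

```python
# Minimal TTL object cache (illustrative): the same idea drives expires
# headers, page caches, and ORM-level object caches.
import time
from typing import Any, Callable, Optional

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict = {}               # key -> (value, expires_at)

    def get(self, key) -> Optional[Any]:
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:   # expired: treat as a miss
            del self._store[key]
            return None
        return value

    def set(self, key, value) -> None:
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=60)

def get_user(user_id: int, load_from_db: Callable[[int], dict]) -> dict:
    user = cache.get(user_id)
    if user is None:                         # miss: hit the slow tier once,
        user = load_from_db(user_id)         # then serve cheaply for 60 s
        cache.set(user_id, user)
    return user
```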
Use messaging whenever possible
• Communicate asynchronously as
much as possible
• Ensure your message bus can scale
• Avoid overcrowding your message bus
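A toy producer/consumer sketch of asynchronous messaging (a real bus such as RabbitMQ or Kafka adds the durability and scaling this lacks):

```python
# Asynchronous messaging sketch (illustrative): the producer enqueues and
# returns immediately; a consumer drains the queue at its own pace.
import queue
import threading

bus = queue.Queue(maxsize=1000)   # bounded: an unbounded bus would hide
                                  # overcrowding until memory runs out

def handle_order(order: dict) -> None:
    print("processing order", order["id"])

def consumer() -> None:
    while True:
        order = bus.get()
        handle_order(order)
        bus.task_done()

threading.Thread(target=consumer, daemon=True).start()

bus.put({"id": 1})                # producer does not wait for processing
bus.put({"id": 2})
bus.join()                        # demo only: wait until the queue drains
```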
Scaling your database layer
Denormalize data where possible because relationships are costly.
Use the right type of lock.
Avoid using multiphase commits and distributed transactions.
Avoid using “select for update” statements.
Don’t select everything.