Distributed Systems and Applications
Dr DIALLO Mohamed
UFRMI 2016
diallo.med@gmail.com
1
Course objectives
• Understand the challenges of distributed systems
• Become familiar with implementing distributed systems
• Discover distributed algorithms
• Study examples of distributed systems
• Explore research on distributed systems
Education is the kindling of a flame,
not the filling of a vessel.
(Socrates)
2
Course unit overview
• Eight 4-hour sessions
• Lectures (CM) 10h – Tutorials (TD) 10h – Labs (TP) 8h
• Student presentations – 4h
• Assessment
• Project: presentation of a research
paper or of a distributed
system (DEMO), in pairs.
• Written exam
• Introduction
• Communication
• Sockets and RMI
• Distributed algorithms
• Synchronization
• Election
• Mutual exclusion
• Fault tolerance and P2P
• Web services
3
Definition
A distributed system is a collection of independent computers that
appears to its users as a single coherent system. (A. Tanenbaum)
A distributed system:
• Independent sites with a common goal
• A communication system
A distributed system is one that stops you from getting any work done
when a machine you've never heard of crashes. (L. Lamport)
Credit: C. Rabat – Introduction aux systèmes répartis
4
Characteristics of distributed systems
• Each node executes a program concurrently
• Knowledge is local
• Nodes have fast access only to their local state, and any information about global
state is potentially out of date
• Nodes can fail and recover from failure independently
• Messages can be delayed or lost
• independently of node failure;
• it is not easy to distinguish network failure from node failure
• Clocks are not synchronized across nodes
• local timestamps do not correspond to the global real time order, which cannot be
easily observed
Distributed Systems for fun and profit - book.mixu.net/distsys/ebook.html
5
Fallacies of distributed computing
• The network is reliable.
• Redundancy / Reliable messaging
• Latency is zero.
• Strive to make as few calls as possible / move
as much data as possible in each call
• Bandwidth is infinite.
• Strive to limit the size of the information we
send over the wire
• The network is secure.
• Assess risks
• Be aware of security and implications
• Topology doesn't change.
• Do not depend on specific routes/addresses
• Location transparency (ESB, multicast) / Directory
services
• There is one administrator.
• Different agendas / rules that can constrain your
app
• Help them manage your app.
• Transport cost is zero.
• Overhead (Marshalling…)
• Costs for running the network
• The network is homogeneous.
• Do not rely on proprietary protocols; prefer
XML…
Arnon Rotem-Gal-Oz - Fallacies of Distributed Computing Explained 6
Sample distributed system :
The Google cluster architecture (2003)
• Scale
• Raw documents (tens of terabytes of
data)
• Inverted index (terabytes)
• Approach
• Partitioning and replication (load
balancing)
Combining more than 15,000 commodity-class PCs with
fault-tolerant software creates a solution that is more
cost-effective than a comparable system built out of a
smaller number of high-end servers
7
Real Facts
Lots of Data out there
• NYSE generates 1TB/day
• Google processes 700PB/month
• Facebook hosts 10 billion photos
taking 1PB of storage
Google search workloads
• Google now processes over
40,000 search queries every
second on average.
• A single Google query uses 1,000
computers in 0.2 seconds to
retrieve an answer
Snia.org http://www.internetlivestats.com/google-search-statistics/
8
Goals of distributed systems
• Access to resources
• Transparency
• Scalability
• Fault tolerance
• Reliability
• Openness (Interoperability)
• Security
Credit: C. Rabat – Introduction aux systèmes répartis
9
Transparency
Transparency  Description
Access        Hide differences in data representation and how a resource is accessed
Location      Hide where a resource is located
Migration     Hide that a resource may be moved to another location
Relocation    Hide that a resource may be moved to another location while in use
Replication   Hide that a resource is replicated
Concurrency   Hide that a resource may be shared by several competing users
Failure       Hide the failure and recovery of a resource
Credit A. Tanenbaum
10
Scalability
• Size scalability
• Adding more nodes should make the system linearly faster;
• Growing the dataset should not increase latency
• Geographic scalability
• Administrative scalability
• Adding more nodes should not increase the administrative costs of the
system
A scalable system is one that continues to meet the needs of its users
as scale increases
Distributed Systems for fun and profit - book.mixu.net/distsys/ebook.html
11
Scalability: Performance
• Short response time/low latency for a given piece of work
• High throughput (rate of processing work)
• Low utilization of computing resource(s)
Distributed Systems for fun and profit - book.mixu.net/distsys/ebook.html
12
Scalability: Availability (and Fault tolerance)
Distributed systems can take a bunch of unreliable components, and
build a reliable system on top of them (Design for fault tolerance)
Because the probability of a failure occurring increases with the
number of components, the system should be able to compensate so
as to not become less reliable as the number of components increases.
Fault tolerance
Ability of a system to behave in a well-defined manner once faults
occur
Distributed Systems for fun and profit - book.mixu.net/distsys/ebook.html
13
Scale out vs scale up?
Distributed Systems for fun and profit - book.mixu.net/distsys/ebook.html
High-end (128 cores) vs low-end (4 cores)
14
Service Level Agreement
• If I write data, how quickly can I access it elsewhere?
• After the data is written, what guarantees do I have of
durability?
• If I ask the system to run a computation, how quickly will it
return results?
• When components fail, or are taken out of operation, what
impact will this have on the system?
Distributed Systems for fun and profit - book.mixu.net/distsys/ebook.html
15
Consequences of distribution
• An increase in the number of independent nodes increases the
probability of failure in a system
• Reducing availability and increasing administrative costs
• An increase in the number of independent nodes may increase the
need for communication between nodes
• Reducing performance as scale increases
• An increase in geographic distance increases the minimum latency for
communication between distant nodes
• Reducing performance for certain operations
Distributed Systems for fun and profit - book.mixu.net/distsys/ebook.html
16
Distributed systems theory
• Efficient solutions to specific
problems.
• Guidance about what is possible.
• Minimum cost of a correct
implementation.
• What is impossible.
• Timestamping distributed
events (Lamport)
• Leader election
• Consistent snapshotting
• Consensus is impossible to solve
in fewer than 2 rounds of
messages in general
• CAP theorem
• FLP impossibility
• Two Generals problem
Distributed Systems for fun and profit - book.mixu.net/distsys/ebook.html
17
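Lamport's event timestamping, listed above, can be sketched in a few lines: each process keeps a logical counter, incremented on local events and merged on message receipt. This is a minimal sketch of the idea (the `Process` class is an illustration, not a full implementation of the 1978 algorithm):

```python
# Minimal sketch of Lamport logical clocks: each process keeps a counter,
# increments it on every local event or send, and on receipt takes
# max(local, received) + 1, so causally related events stay ordered.
class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock  # timestamp carried by the message

    def receive(self, msg_timestamp):
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p1, p2 = Process("P1"), Process("P2")
p1.local_event()   # P1 clock becomes 1
t = p1.send()      # P1 clock becomes 2; the message carries timestamp 2
p2.receive(t)      # P2 clock becomes max(0, 2) + 1 = 3
```

Note that the converse does not hold: two events with comparable timestamps need not be causally related, which is what motivates vector clocks.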
FLP impossibility result
Consensus is the problem of having
a set of processes agree on a value
proposed by one of those
processes.
• Validity: the value agreed upon must have
been proposed by some process – safety
• Agreement: all deciding processes agree on
the same value – safety
• Termination: at least one non-faulty process
eventually decides – liveness
18
FLP impossibility result
In an asynchronous setting,
where even a single process
might crash, there is no
distributed algorithm that
solves the consensus problem.
Fischer, M. J., Lynch, N. A., &
Paterson, M. S. (1985).
Impossibility of distributed
consensus with one faulty
process. Journal of the ACM
(JACM), 32(2), 374–382.
19
CAP Theorem (Brewer Theorem)
Partition tolerance
The system continues to operate despite
arbitrary partitioning due to network failures
Consistency
Every read receives the most recent write or
an error
Availability
Every request receives a response, without
guarantee that it contains the most recent
version of the information
http://book.mixu.net/distsys/abstractions.html
20
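The trade-off can be made concrete with a toy two-replica simulation (the `Replica`/`read`/`write` names are hypothetical illustrations; real systems are far subtler): during a partition, a stale replica must either refuse to answer (keeping consistency, giving up availability) or answer with old data (the reverse).

```python
# Toy illustration of the CAP trade-off: two replicas of a single key.
# During a partition, writes cannot propagate; a read on the stale
# replica must either fail (choose C) or return old data (choose A).
class Replica:
    def __init__(self):
        self.value = None
        self.up_to_date = True

def write(primary, secondary, value, partitioned):
    primary.value = value
    if partitioned:
        secondary.up_to_date = False   # replication blocked by the partition
    else:
        secondary.value = value

def read(replica, mode):
    if mode == "CP" and not replica.up_to_date:
        raise RuntimeError("unavailable during partition")  # give up A
    return replica.value                                    # may be stale: give up C

r1, r2 = Replica(), Replica()
write(r1, r2, "v1", partitioned=False)
write(r1, r2, "v2", partitioned=True)
stale = read(r2, "AP")   # returns the stale "v1"
# read(r2, "CP") would raise: consistency kept, availability lost
```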
Beware!
C in ACID
• If the system has certain
invariants that must always hold:
if they held before the
transaction, they will hold
afterward too.
(Example: law of conservation of money)
• In distributed systems: when
transactions run concurrently,
the result is the same as if they
ran serially.
C in CAP
• Relates to data updates
spreading across all replicas in a
cluster.
• How operations on a single item
are ordered and made visible to
all nodes of the database.
21
Technologies for distributed systems
• Middleware (Corba, ESB)
• RPC, RMI, Web services
• Amazon Dynamo / Apache Cassandra
• Apache Hadoop
22
Amazon Dynamo: Highly available NoSQL
• A highly available key-value
storage system that some of
Amazon’s core services use to
provide an “always-on”
experience.
• To achieve this level of availability,
Dynamo sacrifices consistency
under certain failure scenarios.
Giuseppe DeCandia, et al, “Dynamo: Amazon's
Highly Available Key-Value Store”, in the
Proceedings of the 21st ACM Symposium on
Operating Systems Principles, Stevenson, WA,
October 2007.
23
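The Dynamo paper partitions keys across nodes with consistent hashing. A minimal sketch of that idea follows (virtual nodes and replication are omitted; the `Ring` helper and node names are illustrative, not Dynamo's actual API):

```python
import hashlib
from bisect import bisect_right

# Sketch of a consistent hashing ring, the partitioning scheme Dynamo
# builds on: nodes are placed on a hash ring, and a key belongs to the
# first node clockwise from the key's hash position.
def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def node_for(self, key):
        positions = [p for p, _ in self.ring]
        i = bisect_right(positions, h(key)) % len(self.ring)  # wrap around
        return self.ring[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:42")   # deterministic owner for this key
```

The point of the scheme is that adding or removing one node only moves the keys adjacent to it on the ring, instead of rehashing everything.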
Hadoop: a distributed framework for Big Data
• Apache top-level project: an open-source
implementation of frameworks for reliable,
scalable, distributed computing and data storage.
• A flexible and highly available architecture
for large-scale computation and data processing
on a network of commodity hardware.
• Hadoop splits files into large blocks and
distributes them across the nodes of the cluster.
• To process the data, Hadoop ships the code to
each node, and each node processes the data it
holds.
24
Apache Hadoop
• Hadoop usage scenarios
• Search through data looking
for particular patterns.
• Sort large amounts of data
(terabytes)
25
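The search/sort scenarios above follow the MapReduce pattern that Hadoop implements. The following is a local, single-process simulation of word count, a sketch of the pattern only, not the Hadoop API:

```python
from collections import defaultdict

# Local simulation of the MapReduce word-count pattern: map emits
# (word, 1) pairs per block, the shuffle groups pairs by key, and
# reduce sums the counts per word.
def map_phase(block):
    return [(word, 1) for word in block.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

blocks = ["to be or not", "to be"]          # a file split into blocks
pairs = [p for b in blocks for p in map_phase(b)]
counts = reduce_phase(shuffle(pairs))
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

In Hadoop the blocks live on different nodes and the map tasks run where the data is; only the shuffle moves data over the network.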
Middleware
26
Enterprise Service Bus
• Message-oriented middleware
• Asynchronous message exchange
• Web services (SOA)
• Transformations
• Intelligent routing
• Decoupling of sender and
receiver
• Business activity monitoring (BAM)
• Business process modeling (BPM)
• Mule ESB
• Talend ESB
Wikipedia.fr
27
Service Oriented Architecture
28
Functional models
Two-/three-/n-tier
29
Two-tier architecture
30
Three-tier architecture
31
N-tier architecture
32
Exchange models
Client/server
Message passing
Mobile code
Shared memory
33
Client/server model (1/2)
34
Client/server model (2/2)
35
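The client/server model sketched above maps directly onto the socket API covered later in the course. A minimal TCP request/reply sketch (an echo-style service; the function names are illustrative):

```python
import socket
import threading

# Minimal client/server sketch over TCP sockets: the server accepts one
# connection and replies to the client's request with the data uppercased.
def serve_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())     # the "service" provided

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: the OS picks a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")               # request
reply = client.recv(1024)              # reply from the server
client.close()
server.close()
```

The same request/reply shape underlies RMI and web services; those layers add marshalling and naming on top of the raw socket exchange.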
Message-based communication
• No reply expected
• Unsolicited messages
• Example: message-oriented middleware.
• Point-to-point
• Publish-subscribe
(Apache ActiveMQ, IBM WebSphere MQ, OpenJMS)
36
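The publish-subscribe style offered by message-oriented middleware can be illustrated with a toy in-process broker. This is a sketch only; real brokers such as ActiveMQ add queues, persistence, and network transport, and the `Broker` API here is hypothetical:

```python
from collections import defaultdict

# Toy in-process publish-subscribe broker: publishers and subscribers
# never know about each other; they are decoupled by the topic name,
# and no reply is expected for a published message.
class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)          # deliver; no reply expected

broker = Broker()
received = []
broker.subscribe("news", received.append)
broker.subscribe("news", lambda m: None)   # several independent consumers
broker.publish("news", "hello")
# received == ["hello"]
```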
Mobile code
37
Shared virtual memory
• The different applications share a common memory area.
• Parallel applications: threads
• Distributed applications: middleware
38
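The shared-memory model for parallel applications can be sketched with threads updating one counter; the lock stands in for the synchronization that, in the distributed case, the middleware must provide:

```python
import threading

# Sketch of the shared-memory model with threads: all threads see the
# same counter variable; a lock is needed because an increment is a
# read-modify-write and is not atomic.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 4000; without the lock the result could be lower
```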
Configurations
Centralized
Fully decentralized
Hybrid
39
Centralized
40
Note: a system can be
centralized and yet
distributed.
Fully decentralized
1. No machine has complete information
about the system state.
2. Machines make decisions based only
on local information.
3. Failure of one machine does not ruin
the algorithm/system.
4. There is no implicit assumption that a
global clock exists (no strong
coordination).
(Credit A. Tanenbaum)
41
• Symmetry
• (Administrative) autonomy
• Federation
Hierarchical
e.g. DNS
An example of a decentralized
system, but with:
• Root servers
• TLD servers
• Authoritative servers
42
Hybrid
43
e.g. Kazaa
A decentralized system,
but with peers vs super-peers
Cloud and Virtualization
Cloud computing
Virtualization
44
Cloud environment
45
Community: the members
of the community generally
share similar security,
privacy, performance and
compliance requirements.
Credit Bamba Gueye - UCAD
Usage models
SaaS: the application platform that provides complete applications
on demand. These range from CRM to human resources management,
accounting, collaborative tools, messaging and other business
applications.
46
PaaS: the platform for running, deploying and developing
applications on the Cloud Computing platform.
IaaS: outsources servers, networking and storage to remote
data centers. Companies start or stop virtual servers hosted
on the Cloud Computing platform.
Credit Bamba Gueye - UCAD
Example application (AWS)
47
Credit C. Rabat - CNAM
Common virtualization uses today
48
Common virtualization uses…
• Run legacy software on non-legacy hardware
• Run multiple operating systems on the same hardware
• Create a manageable upgrade path
• Reduce costs by consolidating services onto the fewest number of
physical machines
49
http://www.vmware.com/img/serverconsolidation.jpg
Non-virtualized data centers
50
• Too many servers for too little work
• High costs and infrastructure needs:
maintenance, networking, floor space,
cooling, power, disaster recovery
Virtualization features
51
VM Isolation
• Secure multiplexing: processor hardware isolates VMs
• Strong guarantees: software bugs, crashes and viruses
within one VM cannot affect other VMs
• Performance isolation: partition system resources
(controls for reservation, limit, shares)
VM Encapsulation
• The entire VM is a file
• Snapshots and clones
• Easy content distribution: pre-configured apps, demos,
virtual appliances
VM Compatibility
• Hardware-independent: create once, run anywhere
• Migrate VMs between hosts
• Legacy VMs: run an old OS on a new platform
PlanetLab
Different organizations contribute machines, which they subsequently
share for various experiments.
52
Problem: we need to ensure that different distributed applications
do not get in each other's way => VIRTUALIZATION
PlanetLab
53
Vserver: an independent and protected environment with its own libraries,
server versions and so on.
Distributed apps are assigned a collection of vservers spread across
multiple machines (a slice).
PlanetLab map
54
https://www.planet-lab.org/
References and links
• Cyril Rabat – Introduction aux systèmes répartis (CNAM)
• Distributed systems reading list
• https://dancres.github.io/Pages/
55


Editor's Notes

  • #10 Fault tolerance vs performance/costs
  • #18 http://the-paper-trail.org/blog/distributed-systems-theory-for-the-distributed-systems-engineer/ Snapshotting to determine stable properties: + computation has terminated + the system is deadlocked + all tokens in a token ring have disappeared
  • #19 Tradeoff between safety (consistency) and liveness (availability)!
  • #20 http://the-paper-trail.org/blog/a-brief-tour-of-flp-impossibility/ This impossibility result is important because it highlights that assuming the asynchronous system model leads to a tradeoff: algorithms that solve the consensus problem must either give up safety or liveness when the guarantees regarding bounds on message delivery do not hold. CAP ~ Impossibility of guaranteeing both safety and liveness in an unreliable distributed system Consistency ~ safety – every response served to a client is correct Availability ~ liveness – every request eventually receives a response Consensus is more difficult to meet than the requirements of CAP CAP also implies that it is impossible to achieve consensus in a system subject to partitions
  • #21 Presented as a conjecture at PODC 2000. Formalized and proved in 2002 by Nancy Lynch and Seth Gilbert http://www.slideshare.net/YoavFrancis/cap-theorem-theory-implications-and-practices ACID uses 2PC. You can’t implement consistent storage and respond to all requests if you might drop messages between processes. Strong consistency models allow you as a programmer to replace a single server with a cluster of distributed nodes and not run into any problems. Good reference on CAP: https://dzone.com/articles/better-explaining-cap-theorem 2PC (Two-Phase Commit): MySQL Cluster provides synchronous replication using 2PC. 1/ vote 2/ decision
  • #24 Blog of Amazon’s CTO http://www.allthingsdistributed.com/2007/10/amazons_dynamo.html See also the cloud service DynamoDB, or Apache Cassandra.
  • #25 Inspired by BigTable / MapReduce.
  • #26 Image from ibm.com http://blogs.sas.com/content/datamanagement/2011/11/08/1038/
  • #41 (Provisionned)
  • #44 Super-peer
  • #54 Vserver  LXC