Big Data 101 for beginners (DevoxxPL)
1. #DevoxxPL
BIG DATA 101, FOUNDATIONAL
KNOWLEDGE FOR A NEW PROJECT
IN 2017
DuyHai DOAN
@doanduyhai
2. #DevoxxPL
Who Am I?
Duy Hai DOAN
Technical Advocate @ Datastax
• talks, meetups, confs
• open-source devs (Achilles, Zeppelin,…)
• OSS Cassandra point of contact
☞ duy_hai.doan@datastax.com
☞ @doanduyhai
Apache Zeppelin™ committer
3. #DevoxxPL
Agenda
1) Distributed systems theories & properties
2) Data sharding, replication
3) CAP theorem
4) Distributed systems architecture: master/slave vs masterless
5. #DevoxxPL
Time
There is no absolute time in theory (even with atomic clocks!)
Time-drift is unavoidable
• unless you provide an atomic clock to each server
• unless you’re Google
NTP is your friend ☞ configure it properly !
6. #DevoxxPL
Ordering of operations
How to order operations?
What does before/after mean?
• when clock is not 100% reliable
• when operations occur on multiple machines …
• … that live in multiple continents (1000s km distance)
7. #DevoxxPL
Ordering of operations
Local/relative ordering is possible
Global ordering?
• either execute all operations on a single machine (☞ master)
• or ensure time is perfectly synchronized on all machines executing the operations (really feasible?)
8. #DevoxxPL
Known algorithms
Lamport clock
• algorithm for the message sender:
time = time + 1;
time_stamp = time;
send(message, time_stamp);
• algorithm for the message receiver:
(message, time_stamp) = receive();
time = max(time_stamp, time) + 1;
• partial ordering between a pair of (sender, receiver) is possible
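The pseudocode above can be turned into a runnable sketch; Python and the class name are illustrative choices, not from the slides:

```python
class LamportClock:
    """Minimal sketch of the Lamport clock algorithm on this slide."""

    def __init__(self):
        self.time = 0

    def send(self):
        # sender: bump the local clock and stamp the outgoing message
        self.time += 1
        return self.time

    def receive(self, time_stamp):
        # receiver: jump past both clocks so causality is preserved
        self.time = max(time_stamp, self.time) + 1

a, b = LamportClock(), LamportClock()
ts = a.send()     # a.time == 1, message stamped with 1
b.receive(ts)     # b.time == max(1, 0) + 1 == 2
print(a.time, b.time)   # 1 2
```

Note that this only yields a partial order: two events on unrelated nodes can carry the same timestamp, which is exactly why global ordering remains hard.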
10. #DevoxxPL
Latency
Def: time interval between request & response.
Latency is composed of
• network delay: router/switch delay + physical medium delay
• OS delay (negligible)
• time to process the query by the target (disk access, computation …)
11. #DevoxxPL
Latency
Speed of light physics
• ≈ 300,000 km/s in a vacuum
• ≈ 197,000 km/s in fiber optic cable (due to the refractive index)
London – New York great-circle distance ≈ 5,500 km ☞ ≈ 28 ms for a one-way trip
Conclusion: a ping between London and New York cannot take less than 56 ms
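The arithmetic behind those numbers fits in a few lines (an illustrative back-of-the-envelope check, not from the slides):

```python
# Back-of-the-envelope check of the slide's latency floor.
speed_in_fiber_km_s = 197_000   # ≈ speed of light in fiber optic cable
distance_km = 5_500             # London – New York great-circle distance

one_way_ms = distance_km / speed_in_fiber_km_s * 1000
round_trip_ms = 2 * one_way_ms
print(f"one way ≈ {one_way_ms:.0f} ms, ping ≥ {round_trip_ms:.0f} ms")
# one way ≈ 28 ms, ping ≥ 56 ms
```

Real pings are higher still, since routing is not a great circle and routers, the OS, and the target's processing all add delay.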
14. #DevoxxPL
Failure modes
• Byzantine failure: same input, different outputs → application bug!
• Performance failure: the response is correct but arrives too late
• Omission failure: special case of performance failure, no response at all (timeout)
• Crash failure: self-explanatory, the server stops responding
Byzantine failure → value issue
Other failures → timing issue
15. #DevoxxPL
Failure
Root causes
• Hardware: disk, CPU, …
• Software: packet loss, process crash, OS crash …
• Workload-specific: flushing huge file to SAN (🙀)
• JVM-related: long GC pause
Defining failure is hard
16. #DevoxxPL
Usual meaning of failure:
"A server fails when it does not respond to one or multiple requests in a timely manner"
17. #DevoxxPL
Failure detection
Timely manner ☞ timeout!
Failure detector:
• heartbeat: binary state (up/down), too simple
• multi-heartbeat with threshold
• multi-heartbeat with exponential backoff: a better model
• phi accrual detector: advanced model using statistics
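The simplest detector on this list, the binary heartbeat, can be sketched as follows; the class name and the 3-second timeout are illustrative assumptions:

```python
import time

class HeartbeatDetector:
    """Binary up/down heartbeat detector: the simplest model above."""

    def __init__(self, timeout_s=3.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock       # injectable for testing
        self.last_beat = {}

    def heartbeat(self, node):
        # record the arrival time of a heartbeat from `node`
        self.last_beat[node] = self.clock()

    def is_up(self, node):
        # a node that missed its heartbeat window is declared down
        last = self.last_beat.get(node)
        return last is not None and self.clock() - last < self.timeout_s

# simulate with a controllable clock
now = [0.0]
detector = HeartbeatDetector(timeout_s=3.0, clock=lambda: now[0])
detector.heartbeat("node-1")
now[0] = 2.0
print(detector.is_up("node-1"))   # True: within the timeout window
now[0] = 5.0
print(detector.is_up("node-1"))   # False: heartbeat missed, declared down
```

The phi accrual detector replaces this binary verdict with a continuous suspicion level computed from the statistical distribution of heartbeat inter-arrival times, which is why it copes better with variable network latency.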
18. #DevoxxPL
Distributed consensus protocols
Since time is unreliable, global ordering is hard to achieve & failure is
hard to detect ...
... how can different machines agree on a single value?
Important properties:
• validity: the agreed value must have been proposed by some process
• termination: every non-faulty process eventually decides
• agreement: all processes agree on the same value
19. #DevoxxPL
Distributed consensus protocols
2-phase commit
• termination KO: the protocol can block if the coordinator fails
3-phase commit
• agreement KO: in case of a network partition, an inconsistent state is possible
Paxos, Raft & Zab (ZooKeeper)
• OK: satisfy all 3 requirements
• QUORUM-based: require a strict majority of copies/replicas to be alive
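The quorum rule above is tiny but worth making concrete; this Python sketch (function names are illustrative) shows why a minority partition gets blocked:

```python
# A quorum-based protocol can only decide while a strict majority of
# the N replicas is alive; any two quorums overlap in at least one node,
# which is what prevents two partitions from deciding differently.
def quorum(n_replicas: int) -> int:
    return n_replicas // 2 + 1       # strict majority

def can_decide(n_replicas: int, alive: int) -> bool:
    return alive >= quorum(n_replicas)

print(quorum(5))          # 3
print(can_decide(5, 3))   # True: the majority side keeps making progress
print(can_decide(5, 2))   # False: the minority side is blocked
```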
21. #DevoxxPL
Data Sharding
Why sharding?
• scalability: map logical shards to physical hardware (machines/racks, …)
• divide & conquer: each shard represents the DB at a smaller scale
How to shard?
• user-defined algorithm: the user chooses both the sharding algorithm & the target columns to which it applies
• fixed algorithm: the DB imposes the sharding algorithm; the user only decides which columns to apply it to. Ex: user_id
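The two styles can be sketched side by side. This is an illustrative Python sketch: the shard count of 6 matches the next slide's example, and MD5 stands in for Murmur3, which is not in the standard library:

```python
import hashlib
from collections import Counter

SHARDS = 6  # matches the six shards in the next slide's example

def first_letter_shard(email: str) -> int:
    # user-defined algorithm: bucket by the 1st letter of the email
    return (ord(email[0].lower()) - ord("a")) * SHARDS // 26

def hash_shard(key: str) -> int:
    # fixed algorithm: hash the key (MD5 here as a stand-in for Murmur3)
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % SHARDS

print(first_letter_shard("alice@example.com"))  # 0 (the "a" bucket)
print(first_letter_shard("zoe@example.com"))    # 5 (the "z" bucket)

# hash sharding spreads arbitrary keys near-uniformly
counts = Counter(hash_shard(f"user_{i}") for i in range(6000))
```

Running the last line and inspecting `counts` shows roughly 1,000 keys per shard, while first-letter sharding mirrors the skewed letter frequencies of real names, which is exactly the skew the next two slides chart.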
22. #DevoxxPL
Data Sharding
Example of user-defined sharding
• user data with sharding key == email, sharding algo == take the 1st letter 😱
[Chart: 1st-letter data distribution, data ownership in % per shard]
a-c: 19%, e-h: 32%, m-p: 27%, q-t: 15%, u-x: 5%, y-z: 2%
23. #DevoxxPL
Data Sharding
Example of a fixed sharding algo: Murmur3
• user data with sharding key == user_id or whatever key 😎
[Chart: Murmur3 data distribution, data ownership in % per shard]
0-19: 19%, 20-39: 23%, 40-59: 18%, 60-79: 19%, 80-99: 21%
30. #DevoxxPL
Data Sharding Trade-off
Logical sharding (which preserves the ordering of the sharding key)
• can lead to hotspots & imbalanced data distribution
• but allows range queries
• WHERE sharding_key >= xxx AND sharding_key <= yyy
Hash-based sharding
• near-uniform distribution (given enough distinct shard key values)
• range queries are not possible, only point queries
• ✘ WHERE sharding_key >= xxx AND sharding_key <= yyy
• ✓ WHERE sharding_key = zzz
31. #DevoxxPL
Data Sharding and Rebalancing
For some categories of NoSQL solutions
• range queries are mandatory → hotspots are unavoidable!
• mainly K/V databases, and some wide-column databases too
[Diagram: ordered shards a-d, e-h, i-l, m-p, q-t]
32. #DevoxxPL
Data Sharding and Rebalancing
Rebalancing is necessary
• sometimes an automated process
• sometimes a manual admin process 😭
• resource-intensive operation (CPU, disk I/O + network) → impacts live production traffic 😱
33. #DevoxxPL
Data Replication
How? By having multiple copies/replicas
Types of replicas
• symmetric: no roles, each replica is similar to the others
• asymmetric: "master/slave" style; all operations (read/write) should go through a single server
40. #DevoxxPL
Data Replication
Asymmetric replicas, common write failure scenarios (master → replica):
• message lost (network) → master never receives the ack → KO
• write dropped (replica overload) → master never receives the ack → KO
• replica crashed right away → master never receives the ack → KO
43. #DevoxxPL
CAP theorem
Conjecture by Brewer, formalized later by Gilbert & Lynch in a paper (2002):
The CAP theorem states that any networked shared-data system can have at most two of three desirable properties:
• consistency (C): equivalent to having a single up-to-date copy of the data
• high availability (A) of that data (for updates)
• tolerance to network partitions (P)
45. #DevoxxPL
CAP theorem revised (2012)
You cannot choose not to be partition-tolerant
The choice is not that binary:
• in the absence of partitions, you can tend toward CA
• when a partition occurs, choose your side (C or A)
☞ tunable consistency
46. #DevoxxPL
What is Consistency?
Meaning is different from the C of ACID
[Diagram: consistency models grouped by coordination cost]
• without coordination: Read Uncommitted, Read Committed, Cursor Stability, Repeatable Read, Eventual Consistency, Read Your Writes, Pipelined RAM, Causal
• requires coordination: Snapshot Isolation, Linearizability, Serializability
47. #DevoxxPL
Consistency with some AP system
Cassandra tunable consistency: Consistency Level ONE (shown on the same consistency-model diagram)
48. #DevoxxPL
Consistency with some AP system
Cassandra tunable consistency: Consistency Level QUORUM (shown on the same consistency-model diagram)
49. #DevoxxPL
Consistency with some AP system
Cassandra tunable consistency: LightWeight Transaction (shown on the same consistency-model diagram)
☞ single-partition writes are linearizable
50. #DevoxxPL
What is availability?
Ability to:
• read in case of failure?
• write in case of failure?
Brewer's definition: high availability of the data (for updates)
54. #DevoxxPL
So how can it be highly available?
[Diagram: two Cassandra rings, US DataCenter and EU DataCenter, with the inter-DC link cut]
Read/Write at Consistency Level ONE
☞ Datacenter-aware load balancing strategy at the driver level
56. #DevoxxPL
Pure master/slave architecture
A single server handles all writes; reads can be done on the master or any slave
Advantages
• operations can be serialized
• easy to reason about
• pre-aggregation is possible
Drawbacks
• cannot scale on writes (reads can be scaled)
• single point of failure (SPOF)
59. #DevoxxPL
Multi-master/slave architecture
Data is distributed between shards, with one master per shard
Advantages
• operations can still be serialized in a single shard
• easy to reason about in a single shard
• no more big SPOF
Drawbacks
• consistent only within a single shard (unless using a global lock)
• multiple small points of failure (a SPOF inside each shard)
• global pre-aggregation is no longer possible
60. #DevoxxPL
Wrong objection rhetoric:
"Failure of a master/primary shard is not a problem because it takes less than xxx millisecs to elect a slave into a master"
61. #DevoxxPL
Recovery
The real domain of unavailability:
• time to detect that a master/primary shard is down (seconds)
- simple heartbeat
- multi-heartbeat
- multi-heartbeat with exponential backoff
• master election duration (millisecs)
69. #DevoxxPL
Masterless architecture
No master, every node has an equal role
☞ how to manage consistency if there is no master?
☞ which replica has the right value for my data?
Some data-structures to the rescue:
• vector clock
• CRDT (Convergent Replicated Data Type)
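The first of these structures can be sketched in a few lines; Python and the class/helper names are illustrative, not Cassandra's implementation:

```python
from collections import Counter

class VectorClock:
    """Minimal vector-clock sketch: one logical counter per node."""

    def __init__(self, node):
        self.node = node
        self.clock = Counter()

    def tick(self):
        # local event: bump our own counter
        self.clock[self.node] += 1

    def merge(self, other_clock):
        # on message receipt: element-wise max, then tick for the event
        for node, t in other_clock.items():
            self.clock[node] = max(self.clock[node], t)
        self.tick()

def happened_before(a: Counter, b: Counter) -> bool:
    # a -> b iff a <= b element-wise and a != b;
    # if neither direction holds, the updates are concurrent
    return a != b and all(a[node] <= b[node] for node in a)

a, b = VectorClock("A"), VectorClock("B")
a.tick()                 # A's clock: {A: 1}
b.merge(dict(a.clock))   # B's clock: {A: 1, B: 1}
print(happened_before(a.clock, b.clock))            # True
print(happened_before(Counter(A=1), Counter(B=1)))  # False: concurrent
```

When two clocks are concurrent, the replicas hold genuinely conflicting values and some resolution strategy (a CRDT merge, last-write-wins, or pushing the conflict to the client) must be applied.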
73. #DevoxxPL
Timestamp, again …
But didn’t we say that timestamps are not really reliable?
Why not implement other CRDTs?
Why choose the LWW-register?
• because last-write-wins is still the most "intuitive"
• because conflict resolution with other CRDTs is the user's responsibility
• because one should not be required to have a PhD in CS to use Cassandra
74. #DevoxxPL
Example of write conflict with Cassandra
[Diagram: a client issues UPDATE users SET age=32 WHERE id=1; the coordinator's local time is 10:00:01.050]
Each replica stores: age=32 @ 10:00:01.050
75. #DevoxxPL
Example of write conflict with Cassandra
[Diagram: another client issues UPDATE users SET age=33 WHERE id=1 via a coordinator whose local clock lags: local time 10:00:01.020]
Each replica now holds both: age=32 @ 10:00:01.050 and age=33 @ 10:00:01.020
☞ last-write-wins keeps age=32, although age=33 was issued later
77. #DevoxxPL
Example of write conflict
How can we cope with this?
• it is functionally rare for the same column to be updated by different clients at almost the same time (a few millisecs apart)
• the timestamp can also be forced client-side (but then the clients need to be synchronized …)
• a LightWeight Transaction can always be used to guarantee linearizability, but it is very expensive
UPDATE user SET age = 33 WHERE id = 1 IF age = 32
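The resolution rule behind the conflict is a one-liner; this Python sketch (illustrative, not Cassandra's code) replays the example's outcome:

```python
# LWW-register merge: the value with the highest write timestamp wins,
# whatever order the writes actually arrive in. Timestamps are kept as
# strings here because, in this fixed format, they compare lexicographically.
def lww_merge(a, b):
    # each value is a (payload, timestamp) pair
    return a if a[1] >= b[1] else b

w1 = (32, "10:00:01.050")   # first client's write, later local clock
w2 = (33, "10:00:01.020")   # second client's write, lagging local clock
print(lww_merge(w1, w2))    # (32, '10:00:01.050'): age=33 is silently lost
print(lww_merge(w2, w1))    # same winner regardless of arrival order
```

The merge is commutative and idempotent, which is what lets every replica converge to the same value without coordination; the price is that the write with the lagging clock disappears.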
78. #DevoxxPL
Masterless architecture
Advantages
• no SPOF
• no failover procedure
• can achieve 0 downtime with correct tuning/multi-DC setup
Drawbacks
• consistency model hard to reason about (not intuitive)
• very expensive to have linearizability (when implemented)
• pre-aggregation impossible