Making Cassandra more capable, faster, and more reliable
Hiroyuki Yamada – CTO/CEO at Scalar, Inc.
Yuji Ito – Architect at Scalar, Inc.
APACHECON @HOME
Sep. 29th – Oct. 1st, 2020
© 2020 Scalar, inc.
Speakers
• Hiroyuki Yamada
– CTO at Scalar, Inc.
– Passionate about Database Systems and Distributed Systems
– Ph.D. in Computer Science, the University of Tokyo
– Formerly IIS, the University of Tokyo, Yahoo! Japan, IBM Japan
2
• Yuji Ito
– Architect at Scalar, Inc.
– Improve the performance and the reliability of Scalar DLT
– Love failure analysis
– Formerly an SSD firmware engineer at Fixstars, Hitachi
© 2020 Scalar, inc.
Cassandra @ Scalar
• Scalar tries to take Cassandra to the next level
– More capable: ACID transactions with Scalar DB
– Faster: Group CommitLog Sync
– More reliable: Jepsen tests for LWT
• This talk will present why we do them and what we have done
3
© 2020 Scalar, inc.
ACID transactions on Cassandra with Scalar DB
4
© 2020 Scalar, inc.
What is Scalar DB
• A universal transaction manager
– A Java library that makes non-ACID databases ACID-compliant
– The architecture is inspired by Deuteronomy [CIDR’09,11]
• Cassandra is the first supported database
5
https://github.com/scalar-labs/scalardb
© 2020 Scalar, inc.
Why ACID Transactions with Cassandra? Why with Scalar DB?
• ACID is a must-have feature in some mission-critical applications
– C* has been getting widely used for such applications
– C* is one of the major open-source distributed databases
• Lots of risks and burden for modifying C*
– Scalar DB enables ACID transactions without modifying C* at all
since it is dependent only on the exposed APIs
since it is dependent only on the exposed APIs
– No risk of breaking the existing code
6
© 2020 Scalar, inc.
Pros and Cons of Scalar DB on Cassandra
• Non-invasive
– No modifications in C*
• High availability and scalability
– C* properties are fully sustained by the client-coordinated approach
• Flexible deployment
– Transaction layer and storage layer can be independently scaled
7
• Slower than NewSQLs
– More abstraction layers and a storage-oblivious transaction manager
• Hard to optimize
– The transaction manager has little information about the storage
• No CQL support
– A transaction has to be written procedurally in a programming language
© 2020 Scalar, inc.
Programming Interface and System Architecture
• CRUD interface
– put, get, scan, delete
• Begin and commit semantics
– An arbitrary number of operations can be handled
• Client-coordinated
– Transaction code runs in the library
– No middleware needs to be managed
8
DistributedTransactionManager manager = …;
DistributedTransaction transaction = manager.start();
Get get = createGet();
Optional<Result> result = transaction.get(get);
Put put = createPut(result);
transaction.put(put);
transaction.commit();
[Architecture: client programs / web applications issue command execution or HTTP requests to the Scalar DB library, which talks to Cassandra]
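For illustration, the snippet above can be wrapped with begin/commit/abort handling roughly as follows. This is a hedged sketch only: createGet and createPut are placeholder helpers as on the slide, and the catch-all exception handling and the abort() call are assumptions rather than the exact Scalar DB API.

// Sketch only: a transaction with abort handling (API details are assumed).
DistributedTransactionManager manager = …;              // obtained from the Scalar DB configuration
DistributedTransaction transaction = manager.start();   // begin
try {
  Get get = createGet();                                 // placeholder helper, as on the slide
  Optional<Result> result = transaction.get(get);        // read through the transaction
  Put put = createPut(result);                           // derive an update from what was read
  transaction.put(put);                                  // buffered write
  transaction.commit();                                  // prepare + commit (see the later slides)
} catch (Exception e) {
  transaction.abort();                                   // assumed rollback call on failure
}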
© 2020 Scalar, inc.
Data Model
• Multi-dimensional map [OSDI’06]
– (partition-key, clustering-key, value-name) -> value-content
– Assumed to be hash partitioned
9
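To make the model concrete, here is a small, self-contained Java sketch of such a multi-dimensional map. It is a conceptual illustration only, not Scalar DB's actual classes; key and value types are simplified to String and Object.

import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

// Conceptual sketch of the data model only:
// (partition-key, clustering-key, value-name) -> value-content,
// where partitions are hash-distributed and clustering keys are sorted within a partition.
public class MultiDimensionalMapSketch {
  // partition-key -> (clustering-key -> (value-name -> value-content))
  private final Map<String, SortedMap<String, Map<String, Object>>> store = new HashMap<>();

  public void put(String partitionKey, String clusteringKey, String valueName, Object valueContent) {
    store.computeIfAbsent(partitionKey, pk -> new TreeMap<>())
         .computeIfAbsent(clusteringKey, ck -> new HashMap<>())
         .put(valueName, valueContent);
  }

  public Object get(String partitionKey, String clusteringKey, String valueName) {
    SortedMap<String, Map<String, Object>> partition = store.get(partitionKey);
    if (partition == null) return null;
    Map<String, Object> record = partition.get(clusteringKey);
    return record == null ? null : record.get(valueName);
  }
}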
© 2020 Scalar, inc.
Transaction Management - Overview
• Based on Cherry Garcia [ICDE’15]
– Two phase commit on linearizable operations (for Atomicity)
– Protocol correction is our extended work
– Distributed WAL records (for Atomicity and Durability)
– Single version optimistic concurrency control (for Isolation)
– Serializability support is our extended work
• Requirements on the underlying databases/storages
– Linearizable read and linearizable conditional/CAS write
– An ability to store metadata for each record
10
© 2020 Scalar, inc.
Transaction Commit Protocol (for Atomicity)
• Two phase commit protocol on linearizable operations
– Similar to Paxos Commit [TODS’06]
– Data records are assumed to be distributed
• The protocol
– Prepare phase: prepare records
– Commit phase 1: commit status record
– This is where a transaction is regarded as committed or aborted
– Commit phase 2: commit records
• Lazy recovery
– Uncommitted records are rolled forward or rolled back lazily, based on the status of the transaction, when the records are read
11
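The flow above can be summarized with the following Java-flavored pseudocode. Every name in it (Tx, WriteRecord, conditionalWrite, statusRecord, and so on) is hypothetical; the sketch only shows the order of the conditional writes, not Scalar DB's implementation.

// Pseudocode sketch of the commit protocol (all names are hypothetical).
void commit(Tx tx) throws AbortException {
  // Prepare phase: write each record as PREPARED, only if its version and
  // TxID still match what this transaction read (conflict detection).
  for (WriteRecord r : tx.writeSet()) {
    if (!conditionalWrite(preparedImage(r, tx.id()), r.readVersion(), r.readTxId())) {
      rollbackPrepared(tx);
      throw new AbortException("conflict during prepare");
    }
  }
  // Commit phase 1: write the status record in the coordinator table.
  // This single conditional write is the commit point of the transaction.
  if (!conditionalWriteIfNotExists(statusRecord(tx.id(), "COMMITTED"))) {
    rollbackPrepared(tx);
    throw new AbortException("already aborted");
  }
  // Commit phase 2: flip each prepared record to COMMITTED. If the client
  // crashes here, later readers roll the records forward lazily instead.
  for (WriteRecord r : tx.writeSet()) {
    conditionalWrite(committedImage(r, tx.id()), preparedVersion(r), tx.id());
  }
}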
© 2020 Scalar, inc.
Distributed WAL (for Atomicity and Durability)
• WAL (Write-Ahead Logging) is distributed into records
12
Record in user tables (user/application record):
– After image: Application data (managed by users) + Transaction metadata (managed by Scalar DB): Status, Version, TxID
– Before image: Application data (before) + Transaction metadata (before): Status (before), Version (before), TxID (before)
Status record in coordinator table:
– TxID, Status, Other metadata
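The same layout can be written down as two plain Java classes. The field names are illustrative only; they are not claimed to be the exact column names Scalar DB adds.

import java.util.Map;

// Illustrative field layout only; not Scalar DB's actual schema.
class UserTableRecord {
  // After image
  Map<String, Object> applicationData;        // application data, managed by users
  String txId;                                // transaction metadata, managed by Scalar DB
  String status;                              // e.g. PREPARED (P) or COMMITTED (C)
  int version;
  // Before image, kept so an uncommitted record can be rolled back lazily
  Map<String, Object> beforeApplicationData;
  String beforeTxId;
  String beforeStatus;
  int beforeVersion;
}

class CoordinatorStatusRecord {
  String txId;                                // key of the coordinator table
  String status;                              // COMMITTED or ABORTED: the transaction's fate
  // ... other metadata
}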
© 2020 Scalar, inc.
Concurrency Control (for Isolation)
• Single version OCC
– Simple implementation of Snapshot Isolation
– Conflicts are detected by linearizable conditional write (LWT)
– No clock dependency, no use of HLC (Hybrid Logical Clock)
• Supported isolation levels
– Read-committed Snapshot Isolation (RCSI)
  – Read-skew, write-skew, read-only, and phantom anomalies could happen
– Serializable
  – No anomalies (Strict Serializability)
  – RCSI-based, but non-serializable schedules are aborted
13
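Concretely, the conflict detection is a Cassandra lightweight transaction. The standalone sketch below assumes the DataStax Java driver 3.x API and the keyspace, table, and column names of the running example on the next slides; it only illustrates how a prepare-phase write succeeds when the stored version and TxID are still the ones the transaction read.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Standalone illustration (not Scalar DB code): a prepare-phase write expressed
// directly as a Cassandra LWT, with the values of the running example.
public class OccPrepareSketch {
  public static void main(String[] args) {
    try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
         Session session = cluster.connect("example_ks")) {           // assumed keyspace
      ResultSet rs = session.execute(
          "UPDATE account SET balance = 80, status = 'P', version = 6, tx_id = 'Tx1' "
        + "WHERE user_id = 1 IF version = 5 AND tx_id = 'XXX'");
      Row row = rs.one();
      boolean applied = row.getBool("[applied]");                     // LWT result column
      if (!applied) {
        // The version/TxID no longer match what was read: another transaction
        // prepared or committed the record first, so this one must abort.
        System.out.println("conflict detected; abort the transaction");
      }
    }
  }
}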
© 2020 Scalar, inc.
Transaction With Example – Prepare Phase
14
Client1
Client1’s memory space
Cassandra
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
© 2020 Scalar, inc.
Transaction With Example – Prepare Phase
14
Client1
Client1’s memory space
Cassandra
Read
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
© 2020 Scalar, inc.
Transaction With Example – Prepare Phase
14
Client1
Client1’s memory space
Cassandra
Read
(Client1's memory space)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY

(Cassandra)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
© 2020 Scalar, inc.
Transaction With Example – Prepare Phase
14
Client1
Client1’s memory space
Cassandra
Read
(Client1's memory space)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
1      | 80      | P      | 6       | Tx1
2      | 120     | P      | 5       | Tx1

Tx1: Transfer 20 from 1 to 2

(Cassandra)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
© 2020 Scalar, inc.
Transaction With Example – Prepare Phase
14
Client1
Client1’s memory space
Cassandra
Read
Conditional write (LWT): update only if the versions and the TxIDs are the same as the ones it read

(Client1's memory space)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
1      | 80      | P      | 6       | Tx1
2      | 120     | P      | 5       | Tx1

Tx1: Transfer 20 from 1 to 2

(Cassandra)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
© 2020 Scalar, inc.
Transaction With Example – Prepare Phase
14
Client1
Client1’s memory space
Cassandra
Read
Conditional write (LWT): update only if the versions and the TxIDs are the same as the ones it read

(Client1's memory space)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
1      | 80      | P      | 6       | Tx1
2      | 120     | P      | 5       | Tx1

Tx1: Transfer 20 from 1 to 2

(Cassandra)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
→ after the conditional write succeeds, records 1 and 2 are overwritten with the prepared rows (Status P, Versions 6 and 5, TxID Tx1)
© 2020 Scalar, inc.
Transaction With Example – Prepare Phase
14
Client1
Client1’s memory space
Cassandra
Read
Conditional write (LWT): update only if the versions and the TxIDs are the same as the ones it read

(Client1's memory space)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
1      | 80      | P      | 6       | Tx1
2      | 120     | P      | 5       | Tx1

Tx1: Transfer 20 from 1 to 2

Client2
(Client2's memory space)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
1      | 90      | P      | 6       | Tx2
2      | 110     | P      | 5       | Tx2

Tx2: Transfer 10 from 1 to 2

(Cassandra)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
→ records 1 and 2 have already been overwritten by Tx1's prepare write (Status P, Versions 6 and 5, TxID Tx1)
© 2020 Scalar, inc.
Transaction With Example – Prepare Phase
14
Client1
Client1’s memory space
Cassandra
Read
Conditional write (LWT): update only if the versions and the TxIDs are the same as the ones it read
Fail due to the condition mismatch: Tx2's conditional write is rejected because the records were already prepared by Tx1

(Client1's memory space)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
1      | 80      | P      | 6       | Tx1
2      | 120     | P      | 5       | Tx1

Tx1: Transfer 20 from 1 to 2

Client2
(Client2's memory space)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
1      | 90      | P      | 6       | Tx2
2      | 110     | P      | 5       | Tx2

Tx2: Transfer 10 from 1 to 2

(Cassandra)
UserID | Balance | Status | Version | TxID
1      | 100     | C      | 5       | XXX
2      | 100     | C      | 4       | YYY
→ records 1 and 2 have already been overwritten by Tx1's prepare write (Status P, Versions 6 and 5, TxID Tx1)
© 2020 Scalar, inc.
Transaction With Example – Commit Phase 1
15
(Cassandra – user table)
UserID | Balance | Status | Version | TxID
1      | 80      | P      | 6       | Tx1
2      | 120     | P      | 5       | Tx1

(Cassandra – coordinator table)
Status | TxID
C      | XXX
C      | YYY
A      | ZZZ

Client1 with Tx1
© 2020 Scalar, inc.
Transaction With Example – Commit Phase 1
15
(Cassandra – user table)
UserID | Balance | Status | Version | TxID
1      | 80      | P      | 6       | Tx1
2      | 120     | P      | 5       | Tx1

(Cassandra – coordinator table)
Status | TxID
C      | XXX
C      | YYY
A      | ZZZ
C      | Tx1   ← written by the commit-phase-1 conditional write

Conditional write (LWT): update if the TxID does not exist

Client1 with Tx1
© 2020 Scalar, inc.
Transaction With Example – Commit Phase 2
16
Cassandra
(Cassandra – user table)
UserID | Balance | Status | Version | TxID
1      | 80      | C      | 6       | Tx1
2      | 120     | C      | 5       | Tx1

(Cassandra – coordinator table)
Status | TxID
C      | XXX
C      | YYY
A      | ZZZ
C      | Tx1

Conditional write (LWT): update the status if the record is prepared by the TxID

Client1 with Tx1
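The two commit phases of this example likewise map to conditional writes. A hedged sketch follows, executed the same way as the prepare-phase LWT shown earlier; the coordinator table and column names are hypothetical, and the values come from the running example.

// Illustration only (not Scalar DB code): the commit phases as Cassandra LWTs.
public class CommitPhaseSketch {
  // Commit phase 1: writing the status record is the commit point of Tx1.
  static final String COMMIT_STATUS =
      "INSERT INTO coordinator (tx_id, status) VALUES ('Tx1', 'C') IF NOT EXISTS";

  // Commit phase 2: flip each record prepared by Tx1 to committed.
  static final String COMMIT_RECORD_1 =
      "UPDATE account SET status = 'C' WHERE user_id = 1 IF tx_id = 'Tx1' AND status = 'P'";
  static final String COMMIT_RECORD_2 =
      "UPDATE account SET status = 'C' WHERE user_id = 2 IF tx_id = 'Tx1' AND status = 'P'";
}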
© 2020 Scalar, inc.
Recovery
17
• Recovery is lazily done when a record is read

TX1 runs through the Prepare Phase, Commit Phase 1, and Commit Phase 2; what recovery is needed depends on where a crash happens:
– Crash before the Prepare Phase: nothing is needed (the local memory space is automatically cleared)
– Crash after the Prepare Phase: the records are rolled back lazily by another transaction using the before image
– Crash after Commit Phase 1: the records are rolled forward lazily by another transaction by updating their status to C
– Crash after Commit Phase 2: no need for recovery
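A Java-flavored pseudocode sketch of that lazy recovery rule (all names are hypothetical; a real implementation would also have to abort a still-pending transaction, for example after a grace period, before rolling its records back):

// Sketch only: resolve a record found in the PREPARED state while reading it.
Record resolve(Record r) {
  if (!"P".equals(r.status())) {
    return r;                                      // committed record: nothing to do
  }
  CoordinatorStatus s = coordinator.get(r.txId()); // the commit point lives here
  if (s != null && s.isCommitted()) {
    rollForward(r);                                // lazily set the record's status to C
  } else {
    // No COMMITTED status record: the transaction is aborted (or is treated as
    // aborted once it has expired), so restore the before image kept in the record.
    rollback(r);
  }
  return read(r.key());                            // re-read the now-consistent record
}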
© 2020 Scalar, inc.
Serializable Strategy
• Basic strategy
– Avoid the anti-dependency (rw-dependency) dangerous structure [TODS’05]
– No use of SSI [SIGMOD’08] or its variant [EuroSys’12], which would require many linearizable operations to manage inConflicts/outConflicts or a correct clock
• Two implementations
– Extra-write
  – Converts reads into writes
  – Extra care is taken if a record doesn’t exist (delete the record)
– Extra-read
  – Checks the read set after the prepare phase to see that it has not been updated by other transactions
18
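A sketch of the Extra-read validation step (names are hypothetical): after the prepare phase, every entry of the read set is re-read with a linearizable read, and the transaction aborts if anything has changed.

// Pseudocode sketch of Extra-read validation (hypothetical names).
boolean validateReadSet(Tx tx) {
  for (ReadEntry e : tx.readSet()) {
    Record current = storage.get(e.key());         // linearizable read
    if (current == null
        || current.version() != e.version()
        || !current.txId().equals(e.txId())) {
      return false;                                // updated by another transaction -> abort
    }
  }
  return true;                                     // safe to commit
}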
© 2020 Scalar, inc.
Benchmark Results with Scalar DB on Cassandra
19
[Charts: Workload1 (Payment) and Workload2 (Evidence) – throughput vs. number of nodes]
Each node: i3.4xlarge (16 vCPUs, 122 GB RAM, 1900 GB NVMe SSD * 2), RF: 3
• Achieved 90% scalability in a 100-node cluster
(compared to the ideal TPS extrapolated from the performance of a 3-node cluster)
© 2020 Scalar, inc.
Verification Results for Scalar DB on Cassandra
• Scalar DB on Cassandra has been heavily tested with Jepsen
and our destructive tools
– Jepsen tests are created and conducted by Scalar
– See https://github.com/scalar-labs/scalar-jepsen for more detail
• Transaction commit protocol is verified with TLA+
– See https://github.com/scalar-labs/scalardb/tree/master/tla%2B/consensus-commit
20
Jepsen: Passed
TLA+: Passed
© 2020 Scalar, inc.
Speakers
• Hiroyuki Yamada
– CTO at Scalar, Inc.
– Passionate about Database Systems and Distributed Systems
– Ph.D. in Computer Science, the University of Tokyo
– Formerly IIS, the University of Tokyo, Yahoo! Japan, IBM Japan
21
• Yuji Ito
– Architect at Scalar, Inc.
– Improve the performance and the reliability of Scalar DLT
– Love failure analysis
– Formerly an SSD firmware engineer at Fixstars, Hitachi
© 2020 Scalar, inc.
Group CommitLog Sync
22
© 2020 Scalar, inc.
Why do we need a new mode?
• Scalar DB transactions rely on Cassandra’s
– Durability
– Performance
• Synchronous commitlog sync is required for durability
– Periodic mode might lose commitlogs
• Commitlog sync performance is the key factor
– Batch mode tends to issue lots of IOs
23
© 2020 Scalar, inc.
Group CommitLog Sync
• New commitlog sync mode in 4.0
– https://issues.apache.org/jira/browse/CASSANDRA-13530
• The mode syncs multiple commitlogs at once periodically
24
© 2020 Scalar, inc.
Commitlog
• Log of all mutations to a Cassandra node
– Every write appends to the commitlog, and the mutation is also written to the memtable
• Write data are recovered from the commitlog on startup
– Data that are only in the memtable are lost on a crash
25
[Figure: a write appends to the commitlog on the commitlog disk and updates the memtable]
© 2020 Scalar, inc.
Commitlog
• Log of all mutations to a Cassandra node
– Every write appends to the commitlog, and the mutation is also written to the memtable
• Write data are recovered from the commitlog on startup
– Data that are only in the memtable are lost on a crash
26
[Figure: on startup, data are recovered from the commitlog into the memtable]
© 2020 Scalar, inc.
Commitlog
• Log of all mutations to a Cassandra node
– Every write appends to the commitlog, and the mutation is also written to the memtable
• Write data are recovered from the commitlog on startup
– Data that are only in the memtable are lost on a crash
27
[Figure: a write appends to the commitlog on the commitlog disk and updates the memtable]
© 2020 Scalar, inc.
Existing mode: Periodic (default mode)
• Sync commitlogs periodically
• Do NOT wait for the completion of the sync (asynchronous sync)
28
[Figure: request threads are acked immediately; the commitlog sync thread syncs to the commitlog disk every commitlog_sync_period_in_ms]
© 2020 Scalar, inc.
Existing mode: Periodic (default mode)
• Sync commitlogs periodically
• Do NOT wait for the completion of the sync (asynchronous sync)
⇒ Commitlogs (write data) might be lost on a crash
29
These commitlogs are lost!!
[Figure: commitlogs written after the last periodic sync are lost on a crash; the sync thread syncs every commitlog_sync_period_in_ms while request threads are acked immediately]
© 2020 Scalar, inc.
Existing mode: Batch
• Sync commitlogs immediately
– Wait for the completion of the sync (synchronous sync)
– Commitlogs issued at about the same time can be synced together
⇒ Throughput is degraded due to many small IOs
30
[Figure: each request thread waits until its commitlog is synced; the commitlog sync thread issues a sync per batch and then acks the waiting requests]
“commitlog_sync_batch_window_in_ms” is the maximum length of a window; in practice the mode always syncs immediately
© 2020 Scalar, inc.
Issues in the existing modes
• Periodic
– Commitlogs might be lost when Cassandra crashes
• Batch
– Performance could be degraded due to many small IOs
– Batch doesn’t work as users would expect from the name
31
© 2020 Scalar, inc.
Grouping commitlogs
• Sync multiple commitlogs at once periodically (synchronous sync)
– Reduce IOs by grouping syncs
32
[Figure: requests arriving within commitlog_sync_group_window_in_ms are synced by the commitlog sync thread with a single sync and then acked together]
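The mechanism can be illustrated with a small, self-contained Java sketch. This is not Cassandra's implementation; it only shows the idea that writers wait on a future and a single sync per window completes the whole group.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustration of the group-sync idea only; NOT Cassandra's actual code.
public class GroupSyncSketch {
  private final List<CompletableFuture<Void>> waiters = new ArrayList<>();
  private final long groupWindowMillis;  // corresponds to commitlog_sync_group_window_in_ms

  public GroupSyncSketch(long groupWindowMillis) {
    this.groupWindowMillis = groupWindowMillis;
  }

  // Called by a request thread after appending a mutation to the commitlog buffer.
  public synchronized CompletableFuture<Void> requestSync() {
    CompletableFuture<Void> f = new CompletableFuture<>();
    waiters.add(f);
    return f;                            // the caller waits: the write is durable once completed
  }

  // Run by the single commitlog sync thread.
  public void syncLoop() throws InterruptedException {
    while (true) {
      Thread.sleep(groupWindowMillis);   // wait for the group window to fill
      List<CompletableFuture<Void>> group;
      synchronized (this) {
        group = new ArrayList<>(waiters);
        waiters.clear();
      }
      fsyncCommitLog();                  // one disk sync covers the whole group
      group.forEach(f -> f.complete(null));  // ack every waiting writer at once
    }
  }

  private void fsyncCommitLog() { /* force the commitlog segment to disk */ }
}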
© 2020 Scalar, inc.
Evaluation
• Workload
– Small (<< 1KB) update operations with IF EXISTS (LWT) and without IF EXISTS (non-LWT)
• Environment
33
Instance type      | AWS EC2 m4.large
Disk type          | AWS EBS io1, 200 IOPS
# of nodes         | 3
Replication factor | 3
Window time        | Batch: 2 ms (default), 10 ms; Group: 10 ms, 15 ms
© 2020 Scalar, inc.
Evaluation result
• Results with 2 ms and 10 ms batch window are almost the same
• Group mode is a bit better than Batch mode
– The difference becomes smaller with a faster disk
34
[Charts: Throughput – UPDATE (operations/sec vs. number of threads) and Average latency of UPDATE (ms vs. throughput in ops), for Batch 2 ms, Batch 10 ms, Group 10 ms, and Group 15 ms]
© 2020 Scalar, inc.
Evaluation result
• Between 8 and 32 threads, the throughput of Group mode is better than that of Batch mode by up to 75%
– With LWT, many commitlogs are issued, which affects the performance
35
[Charts: Average latency of UPDATE (ms vs. throughput in ops) and Throughput – UPDATE at low concurrency (operations/sec vs. number of threads), for Batch 2 ms, Batch 10 ms, Group 10 ms, and Group 15 ms; the low-concurrency chart shows the 75% gap]
© 2020 Scalar, inc.
Evaluation result
• Without LWT, the latency of Batch mode is better than that of
Group mode in small requests
36
[Chart: Average latency of UPDATE without LWT (ms vs. throughput in ops), for Batch 2 ms and Group 15 ms]
© 2020 Scalar, inc.
When to use Group mode?
• When durability is required
• When commitlog disk IOPS is lower than request arrival rate
– Group mode can remedy latency increase due to IO saturation
37
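For reference, enabling the mode in Cassandra 4.0 is a cassandra.yaml change along the following lines. The option names below match the parameters mentioned on these slides, but check the cassandra.yaml shipped with your version before relying on them.

# cassandra.yaml (Cassandra 4.0) – group commitlog sync
commitlog_sync: group
commitlog_sync_group_window_in_ms: 15

# Existing modes, for comparison:
# commitlog_sync: periodic
# commitlog_sync_period_in_ms: 10000
# commitlog_sync: batch
# commitlog_sync_batch_window_in_ms: 2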
© 2020 Scalar, inc.
Jepsen Tests for LWT
38
© 2020 Scalar, inc.
Why do we run Jepsen tests for LWT?
• Scalar DB transactions rely on the “correctness” of LWT
– Jepsen can check the correctness (linearizability)
• The existing Jepsen test for Cassandra has not been maintained
• https://github.com/riptano/jepsen
• Last commit: Feb 3, 2016
39
© 2020 Scalar, inc.
Jepsen tests for Cassandra
• Our tests cover LWT, Batch, Set, Map, and Counter with various faults
40
[Figure: a 5-node cluster subjected to each fault type]
– Join/Leave/Rejoin
– Network faults (Bridge, Isolation, Halves)
– Node crash
– Clock drift
© 2020 Scalar, inc.
Our contributions to Jepsen testing for Cassandra
• Replaced Cassaforte with Alia (Clojure wrapper for Cassandra)
– Cassaforte has not been maintained
– There seems to be a bug in getting results
• Rewrote tests with the latest Jepsen
– The previous LWT test failed due to OOM
– The new Jepsen can check the logs by dividing a test into parts
41
© 2020 Scalar, inc.
Our contributions to Jepsen testing for Cassandra
• Report the result of short tests when a new version is released
– 1 minute per test
– Without fault injection
• Run tests with fault injection for 4.0 beta every week
– Sometimes, a node cannot join the cluster before testing
– This issue didn’t happen with 4.0 alpha
42
jepsen@node0:~$ sudo /root/cassandra/bin/nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.0.1.7 978.53 KiB 256 ? b7713da3-2ac6-4f10-bea0-6374f23b907a rack1
UN 10.0.1.9 1003.29 KiB 256 ? c5c961fa-b585-41a0-ad19-1c51590ccfb0 rack1
UN 10.0.1.8 975.07 KiB 256 ? 981dd1aa-fd12-472e-9fb6-41d24470716e rack1
UJ 10.0.1.4 182.66 KiB 256 ? 9cc222d5-ba45-4e61-ac2d-b42a31cb74b1 rack1
© 2020 Scalar, inc.
[Discussion] Jepsen tests migration
• The Jepsen tests are now maintained in https://github.com/scalar-labs/scalar-jepsen
• It would probably be more beneficial to many developers if they were migrated into the official Cassandra repo
– Thoughts?
43
© 2020 Scalar, inc.
Summary
• Scalar has enhanced Cassandra from various perspectives
– More capable: ACID transactions with Scalar DB
– Faster: Group CommitLog Sync
– More reliable: Jepsen tests for LWT
• They were mostly done without updating the core of C*
– Making C* more loosely coupled makes such contributions much easier to do
44