#DevoxxFR
JVMs & GCs
GC tuning for low latencies with Cassandra
Quentin Ambard
@qambard
1
(NOT a Cassandra benchmark!)
It’s all about JVM
2
Agenda
• Hotspot GCs
• G1: Heap size & target pause time
• G1: 31GB or 32GB?
• G1: Advanced settings
• Low latencies JVMs
• Wrap up
3
#DevoxxFR
Hotspot GCs
4
Parallel collector
CMS
G1
Parallel collector
5
Eden Survivor (S0/S1)
Parallel collector
6
Heap is full
Triggers a Stop The World (STW) GC to mark & compact
Parallel gc profile
7
Young GCs. When the old generation is filled → full GC
Concurrent Mark Sweep (CMS) collector
8
Young collection uses ParNewGC. Behaves like Parallel GC
Difference: needs to communicate with CMS for the old generation
Concurrent Mark Sweep (CMS) collector
9
Old region is getting too big
Limit defined by -XX:CMSInitiatingOccupancyFraction
Starts concurrent marking & cleanup
Only small STW phases
Delete only: no compaction, which leads to fragmentation
Triggers a serial full STW GC if contiguous memory can't be allocated
Memory is requested by "block" (-XX:OldPLABSize=16):
each thread requests a block to copy objects from young to old
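For reference, a typical CMS configuration for Cassandra looks like the sketch below; these are the flags commonly shipped in Cassandra's default jvm.options rather than values taken from this talk, so treat them as an illustration only:
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly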
CMS profile
10
Hard to tune: you need to fix the young size, which won't adapt to a new workload
Unintuitive options. Easier when the heap remains around 8GB
Fragmentation will eventually trigger a very long full GC
Garbage first (G1) collector
11
Heap (31GB)
Empty Regions (-XX:G1HeapRegionSize)
Garbage first (G1) collector
12
Young region
Garbage first (G1) collector
13
Young space is full
Dynamically sized to reach the pause objective -XX:MaxGCPauseMillis
Triggers a STW "parallel gc" on young:
Scan objects in each region
Copy survivors to survivor space
in another young region
Free the regions
Garbage first (G1) collector
14
Young space is full
Dynamically sized to reach the pause objective -XX:MaxGCPauseMillis
Triggers a STW "parallel gc" on young:
Scan objects in each region
Copy survivors to survivor space in another young region. Survivor size dynamically adjusted
Copy old-enough survivors to an old region
Free the regions
Garbage first (G1) collector
15
Old space is getting too big
Limit defined by -XX:InitiatingHeapOccupancyPercent=40
Start a concurrent region scan
Not blocking (2 short STW pauses with SATB: start + end)
100% empty regions are reclaimed
Fast, "free" operation
Trigger STW mixed GCs:
Dramatically reduce the young size
Include a few old regions in the next young GC
Repeat mixed GCs until -XX:G1HeapWastePercent is reached
[Diagram: region occupancies 10%, 30%, 78%, 80%, 100%, 100%]
Because the young size is reduced to respect the target pause time, G1 triggers several consecutive GCs
G1 profile
16
"Mixed" GC. Easy to tune, predictable young GC
#DevoxxFR
Heap size & target pause time
17
Test protocol
18
128GB RAM, 2x8 cores (32 ht cores)
2 CPU E5-2620 v4 @ 2.10GHz. Disk SSD RAID0. JDK 1.8.161
DSE 5.1.7, memtable size fixed to 4G. Data saved in ramdisk (40GB)
Gatling query on 3 tables
jvm: zing. Default C* schema configuration. Durable write=false (avoid commit log to reduce disk activity)
Row sizes: 100 bytes, 1.5k, 3k
50% read, 50% write. Throughput 40-120kq/sec
33% of the reads return a range of 10 values
Datastax recommended OS settings applied
Which Heap size
19
Why could a bigger heap be better?
• The heap gets filled more slowly (fewer GCs)
• Increases the ratio of dead objects during collections
Moving (compacting) the remaining objects is the heaviest operation
• Increases the chances of reclaiming an entirely empty region
Which Heap size
20
Why could a bigger heap be worse?
• A full GC has a bigger impact
(now parallel with Java 10)
• Increases the chances of triggering longer pauses
• Less memory remains for the disk cache
Heap size
21
Small heaps (<16GB) have bad latencies
After a given size, no obvious difference
[Chart: Client latencies (ms) by percentile and heap size (8GB to 60GB) - target pause = 300ms, 60kq/sec]
Heap size & GC Pause time
22
[Chart: Total GC STW pause time (sec) by heap size (GB) at 800mb/s allocation - target pause 300ms]
Heap size & GC Pause time
23
[Chart: Total pause time (sec) and max pause time (ms) by heap size (GB) at 800mb/s allocation - target pause 300ms]
Heap size & GC Pause time
24
[Chart: Total pause time (sec) and max pause time (ms) by heap size (GB) at 1250mb/s allocation - target pause 300ms]
Heap size & GC Pause time
25
[Chart: Total and max GC STW pause times by heap size (GB), comparing 800mb/s and 1250mb/s allocation - target pause 300ms]
Target pause time -XX:MaxGCPauseMillis
26
[Chart: Total STW GC pause (sec) by -XX:MaxGCPauseMillis target (50-600ms) - heap 28GB]
Target pause time -XX:MaxGCPauseMillis
27
[Chart: Total STW GC duration (sec) and max pause time (ms) by pause time target (50-600ms) at 900mb/s - heap 28GB]
Target pause time -XX:MaxGCPauseMillis
28
[Chart: Client latency (ms) by percentile for target pause times 50-600ms - heap 36GB]
Heap size Conclusion
29
G1 struggles with a heap that is too small; it also increases the full GC risk
GC pause time doesn't decrease proportionally when the heap size increases
The sweet spot seems to be around 30x the allocation rate
Keep -XX:MaxGCPauseMillis >= 200ms
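As an illustration only (not a recipe from the talk), these conclusions could be turned into a G1 starting point; the 24G value below assumes an allocation rate of roughly 800 MB/s (30 x 0.8 GB) and should be adjusted to your own measurements, while staying at or under 31G:
-XX:+UseG1GC
-Xms24G -Xmx24G
-XX:MaxGCPauseMillis=300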
#DevoxxFR
More advanced settings for G1
30
31GB or 32GB?
31
Up to 31GB: oops are compressed to 32 bits with 8-byte alignment (3-bit shift)
 8 = 0000 1000 → compressed 0000 0001
32 = 0010 0000 → compressed 0000 0100
40 = 0010 1000 → compressed 0000 0101
2^32 => 4G. The 3-bit shift trick gives 2^3 = 8 times more addresses: 2^32 * 2^3 = 32G
At 32GB: oops are stored on 64 bits
Heaps from 32GB to 64GB can use 16-byte alignment (-XX:ObjectAlignmentInBytes=16)
G1 targets 2048 regions and changes the default region size at 32GB
31GB => -XX:G1HeapRegionSize=8m = 3875 regions
32GB => -XX:G1HeapRegionSize=16m = 2000 regions
The number of regions can have an impact on Remembered Set update/scan
nodetool sjk mx -mc -b "com.sun.management:type=HotSpotDiagnostic" -op getVMOption -a UseCompressedOops
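If you prefer to check outside of a running node, -XX:+PrintFlagsFinal is a standard Hotspot flag that prints the values the JVM would actually use for a given heap size; the grep pattern below is just an illustration:
java -Xmx31g -XX:+UseG1GC -XX:+PrintFlagsFinal -version | grep -E 'UseCompressedOops|G1HeapRegionSize|ObjectAlignmentInBytes'
java -Xmx32g -XX:+UseG1GC -XX:+PrintFlagsFinal -version | grep -E 'UseCompressedOops|G1HeapRegionSize|ObjectAlignmentInBytes'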
31GB or 32GB?
32
No major difference
The concurrent marking cycle is slower with the smaller
regions (8MB => +20%)
No major difference in CPU usage
31GB + RegionSize=16m:
Total GC pause time -10%
Mean latency -8%
All other GC metrics are very similar
Not sure? Stick with 31GB + RegionSize=16m
[Chart: Client latency (ms) by percentile - 31GB vs 32GB heap, region size 8/16MB, with and without 16-byte alignment]
Zero-based compressed oops?
33
Zero based compressed oops
Virtual memory starts at zero:
native oop = (compressed oop << 3)
Not zero based:
if (compressed oop is null)
native oop = null
else
native oop = base + (compressed oop << 3)
The switch happens around 26-30GB
Can be checked with -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode
No noticeable difference for this workload
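A quick way to check the mode with a throwaway JVM (assuming the same JDK 8 build as in these tests) is to combine the diagnostic flags above with -version; the exact wording of the output varies by JVM version:
java -Xmx26g -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode -version
java -Xmx31g -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode -version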
-XX:ParallelGCThreads
34
Defines how many threads participate in the STW GC phases.
2x8 physical cores
32 with hyperthreading
threads STW total time
8 90 sec
16 41 sec
32 32 sec
40 35 sec
[Chart: Client latency (ms) by percentile for 8/16/32/40 ParallelGCThreads - 31GB, 300ms]
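The winning value maps directly to the flag below; note that ConcGCThreads (the concurrent marking threads) defaults to roughly a quarter of ParallelGCThreads, which is consistent with the ConcGCThreads=8 visible in the final settings later in this deck - a default formula worth double-checking on your JVM:
-XX:ParallelGCThreads=32
-XX:ConcGCThreads=8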
Minimum young size
35
During mixed GCs, the young size is drastically reduced (1.5GB with a 31GB heap)
The young generation gets filled within a second, which can lead to multiple consecutive GCs
We can force a minimum size:
-XX:G1NewSizePercent=10 seems to be a better default (the default is 5)
Interval between GCs increased by x3 during mixed GCs (min 3 sec)
No noticeable changes in throughput and latencies
(Increases mixed GC time, reduces young GC time)
[Charts: GC pause time, 31GB vs GC pause time, 31GB with -XX:NewSize=4GB - the first shows a GC every second (or several per second)]
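On JDK 8, G1NewSizePercent is an experimental flag, so the setting tested above would be written roughly as follows in jvm.options (a sketch; verify the flag status on your JVM version):
-XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=10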
Survivor threshold
36
Defines how many times data is copied within the young generation before being promoted to old
Dynamically resized by G1:
Desired survivor size 27262976 bytes, new threshold 3 (max 15)
- age 1: 21624512 bytes, 21624512 total
- age 2: 4510912 bytes, 26135424 total
- age 3: 5541504 bytes, 31676928 total
Default is 15, but it tends to remain <= 4 under heavy load
(objects quickly fill the survivor space defined by -XX:SurvivorRatio)
Most objects should be either long-lived or instantaneous. Is it worth disabling survivors?
-XX:MaxTenuringThreshold=0 (default 15)
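The tenuring distribution shown above comes straight from the GC log; on JDK 8 it is produced by logging flags like the ones below (both also appear in the final settings at the end of this deck):
-XX:+PrintGCDetails -XX:+PrintTenuringDistribution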
Survivor threshold
37
[Chart: Client latencies by max survivor age (0 to 4) - 31GB, 300ms]
Removing survivors greatly reduces GC
(count -40%, time -50%)
In this case it doesn't increase the old GC count:
most survivor objects seem to be promoted anyway
Warning: "premature promotion" could potentially
fill the old generation quickly!
Survivor threshold - JVM pauses
38
Max tenuring = 15 (default)
GC Avg: 235 ms
GC Max: 381 ms
STW GC Total: 53 sec
Max tenuring = 0
GC Avg: 157 ms
GC Max: 290 ms
STW GC Total: 28 sec
Generated by gceasy.io
Survivor threshold
39
Generated by gceasy.io
Check your GC log first!
In this case (almost no activity), the survivor size doesn't shrink much after age 4
Try with -XX:MaxTenuringThreshold=4
Delaying the marking cycle
40
By default, G1 starts a marking cycle when the heap is 45% used
-XX:InitiatingHeapOccupancyPercent (IHOP)
By delaying the marking cycle:
• Reduces the count of old GCs
• Increases the chance to reclaim empty regions?
• Increases the risk of triggering a full GC
Java 9 now dynamically resizes IHOP!
JDK-8136677. Disable the adaptive behavior with -XX:-G1UseAdaptiveIHOP
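As a sketch, the fixed-IHOP experiments below correspond to a setting like this; on Java 9+ the fixed value is only a starting point unless the adaptive behavior is disabled (an assumption to verify on your version):
JDK 8:    -XX:InitiatingHeapOccupancyPercent=60
Java 9+:  -XX:-G1UseAdaptiveIHOP -XX:InitiatingHeapOccupancyPercent=60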
Delaying the marking cycle
41
[Chart: Client latencies (ms) by percentile for IHOP 45/60/70/80 - 31GB, 300ms]
42
Generated by gceasy.io
IHOP=80, 31GB heap
Max heap after GC = 25GB
4 “old” compaction
+10% “young” GC
GC Total: 24sec
IHOP=60, 31GB heap
Max heap after GC = 20GB
6 “old” compaction
GC Total: 21sec
Delaying the marking cycle
43
Conclusion:
• Reduces the number of old GCs but increases young GCs
• No major improvement
• Increases the risk of full GC
• Avoid increasing it over 60% (or rely on the Java 9 dynamic sizing - not tested)
Remember set updating time
44
G1 keeps cross-region references in a structure called the Remembered Set.
It is updated in batches to improve performance and avoid concurrency issues
-XX:G1RSetUpdatingPauseTimePercent controls how much time may be spent updating it
during the evacuation phase,
as a percentage of MaxGCPauseMillis. Defaults to 10
G1 adjusts the refinement thread zones (G1ConcRefinementGreen/Yellow/RedZone) after each GC
GC logs: [Update RS (ms): Min: 1255,1, Avg: 1256,0, Max: 1257,8, Diff: 2,8, Sum: 28889,1]
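To see where Remembered Set time goes beyond the Update RS lines above, a hedged option on JDK 8 is the diagnostic RSet summary, combined with the updating-time budget discussed here (flag names worth double-checking on your exact build):
-XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeRSetStats -XX:G1SummarizeRSetStatsPeriod=1
-XX:G1RSetUpdatingPauseTimePercent=5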
Remember set updating time
45
[Chart: Client latencies (ms) by percentile for RSetUpdatingPauseTimePercent 0/5/10/15 - 31GB, RegionSize=16MB, 300ms]
Other settings
46
-XX:+ParallelRefProcEnabled
No noticeable difference for this workload
-XX:+UseStringDeduplication
No noticeable difference for this workload
-XX:G1MixedGCLiveThresholdPercent=45/55/65
No noticeable difference for this workload
Final settings for G1
47
-Xmx31G -Xms31G -XX:MaxGCPauseMillis=300 -XX:G1HeapRegionSize=16m -XX:MaxTenuringThreshold=0 -XX:+UnlockExperimentalVMOptions -XX:NewSize=2500m -XX:ParallelGCThreads=32 -XX:InitiatingHeapOccupancyPercent=55 -XX:G1RSetUpdatingPauseTimePercent=5

Non-default VM flags: -XX:+AlwaysPreTouch -XX:CICompilerCount=15 -XX:CompileCommandFile=null -XX:ConcGCThreads=8 -XX:G1HeapRegionSize=16777216 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:GCLogFileSize=10485760 -XX:+HeapDumpOnOutOfMemoryError -XX:InitialHeapSize=33285996544 -XX:InitialTenuringThreshold=0 -XX:+ManagementServer -XX:MarkStackSize=4194304 -XX:MaxGCPauseMillis=300 -XX:MaxHeapSize=33285996544 -XX:MaxNewSize=19964887040 -XX:MaxTenuringThreshold=0 -XX:MinHeapDeltaBytes=16777216 -XX:NewSize=2621440000 -XX:NumberOfGCLogFiles=10 -XX:OnOutOfMemoryError=null -XX:ParallelGCThreads=32 -XX:+ParallelRefProcEnabled -XX:+PerfDisableSharedMem -XX:PrintFLSStatistics=1 -XX:+PrintGC -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintPromotionFailure -XX:+PrintTenuringDistribution -XX:+ResizeTLAB -XX:StringTableSize=1000003 -XX:ThreadPriorityPolicy=42 -XX:ThreadStackSize=256 -XX:-UseBiasedLocking -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastUnorderedTimeStamps -XX:+UseG1GC -XX:+UseGCLogFileRotation -XX:+UseNUMA -XX:+UseNUMAInterleaving -XX:+UseTLAB -XX:+UseThreadPriorities
#DevoxxFR
Low latencies JVMs
48
Zing (Azul)
https://www.azul.com/files/wp_pgc_zing_v52.pdf
Shenandoah (Red Hat)
https://www.youtube.com/watch?v=qBQtbkmURiQ https://www.youtube.com/watch?v=VCeHkcwfF9Q
ZGC (Oracle)
Experimental. https://www.youtube.com/watch?v=tShc0dyFtgw
Write barrier
49
[Diagram: object graph A→B→C→D; the reference from C to D is removed while D is not yet marked]
Hotspot uses a write barrier to capture "pointer deletion"
This prevents D from being cleaned during the next GC
What about compaction?
Hotspot stops all application threads, solving the concurrency issues
C.myFieldD = null
-----------
markObjectAsPotentialAlive(D)
Read barrier
50
[Diagram: object graph A→B→C→D]
Read D
-----------
If (D hasn't been marked in this cycle) {
  markObject(D)
}
Return D
What about compaction?
Read D
-----------
If (D is in a memory page being compacted) {
  followNewAddressOrMoveObjectToNewAddressNow(D)
  updateReferenceToNewAddress(D)
}
Return D
Read barrier
51
[Diagram: object graph A→B→C→D after compaction]
Predictable, constant pause times
No matter how big the heap is
Comes with a performance cost:
Higher CPU usage
Ex: a Gatling report takes 10 to 20% more time to complete
with a low-latency JVM vs Hotspot+G1
(computation on 1 CPU running for 10 minutes)
Low latencies JVM
52
[Charts: Zing ramp-up vs G1 ramp-up latencies over time]
JVM test, not a Cassandra benchmark!
Low latencies JVM
53
[Chart: Client latency (ms) by percentile - 31GB G1 vs 31GB Zing]
Low latencies JVM (shenandoah)
54
[Charts: Shenandoah ramp-up vs G1 ramp-up latencies over time]
JVM test, not a Cassandra benchmark!
Low latencies conclusion
55
Capable of dealing with big heaps
Can handle a bigger throughput than G1 before getting the first error:
G1 pauses create bursts with potential timeouts
Zing offers good performance, including with a lower heap.
Shenandoah was stable in our tests and offers a very good alternative to G1 to avoid pause times
Try carefully, it is still young
#DevoxxFR
Conclusion
56
Conclusion
57
(Super) easy to get things wrong. Change one parameter at a time & test for > 1 day
It is easy to measure the wrong thing, or to run tests that are too short and introduce external (non-JVM) noise…
A lot of work to save a few percentiles:
these tests ran for more than 300 hours
Don't over-tune. Keep things simple to avoid side effects
Keep the target pause > 200ms
Keep the heap size between 16GB and 31GB
Start with heap size = 30 * (allocation rate).
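As a worked example of that rule of thumb (numbers purely illustrative): if the GC logs show roughly 800 MB allocated per second, 30 x 0.8 GB ≈ 24 GB, comfortably under the 31 GB compressed-oops limit; at 1.1 GB/s the formula would give ~33 GB, so cap it at 31 GB instead.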
Conclusion
58
GC pauses still an issue? Upgrade to DSE 6 or add extra servers
Chasing the 99.9x percentile? Go with a low-latency JVM
Don't trust what you've read! Check your GC logs!
#DevoxxFR
Thank you
& Thanks to
Lucas Bruand & Laurent Dubois (DeepPay)
Pierre Laporte, Samina & James (Datastax)!
59