#DevoxxFR
JVMs & GCs
GC tuning for low latencies with Cassandra
Quentin Ambard
@qambard
(NOT a Cassandra benchmark!)
It's all about the JVM
Agenda
• Hotspot GCs
• G1: Heap size & target pause time
• G1: 31GB or 32GB?
• G1: Advanced settings
• Low-latency JVMs
• Wrap up
Hotspot GCs
Parallel collector
CMS
G1
Parallel collector
Young generation: Eden + Survivor spaces (S0/S1)
Heap is full
Triggers a Stop The World (STW) GC to mark & compact
Parallel GC profile
Young GCs until the old generation is filled, then a full GC
Concurrent Mark Sweep (CMS) collector
The young collection uses ParNewGC and behaves like the Parallel GC
Difference: it needs to coordinate with CMS for the old generation
The old generation is getting too big
Limit defined by -XX:CMSInitiatingOccupancyFraction
Starts concurrent marking & cleanup
Only small STW phases (initial mark + remark)
Sweep only, no compaction, which leads to fragmentation
Triggers a serial full STW GC if contiguous memory can't be allocated
Memory is requested by “block”: -XX:OldPLABSize=16
Each thread requests a block to copy young objects to old
CMS profile
Hard to tune: the young size must be fixed, so it won't adapt to a new workload
Obscure options. Easier when the heap stays around 8GB
Fragmentation will eventually trigger a very long full GC
Garbage first (G1) collector
Heap (31GB) divided into empty regions (-XX:G1HeapRegionSize). Some regions are allocated as young regions.
The young space is full
Young is dynamically sized to reach the pause objective -XX:MaxGCPauseMillis
Triggers a STW “parallel gc” on the young regions:
Scans objects in each region
Copies survivors into another young (survivor) region; survivor size is dynamically adjusted
Copies old-enough survivors into an old region
Frees the collected young regions
The old space is getting too big
Limit defined by -XX:InitiatingHeapOccupancyPercent=40
Starts a concurrent region scan
Mostly non-blocking (2 short STW pauses with SATB: start + end)
100% empty regions are reclaimed immediately: a fast, “free” operation
Then triggers STW mixed GCs:
Young size is dramatically reduced
A few old regions are included in each of the next young GCs
Mixed GCs repeat until reclaimable space falls below -XX:G1HeapWastePercent
Because the young size is reduced to respect the target pause time,
G1 triggers several GCs in quick succession
G1 profile
Easy to tune, predictable young GCs; “mixed” GCs handle the old generation
Heap size & target pause time
Test protocol
128GB RAM, 2x8 cores (32 ht cores)
2 CPU E5-2620 v4 @ 2.10GHz. Disk SSD RAID0. JDK 1.8.161
DSE 5.1.7, memtable size fixed to 4GB. Data saved in a ramdisk (40GB)
Gatling queries on 3 tables
JVM: Zing. Default C* schema configuration. Durable writes=false (avoids the commit log to reduce disk activity)
Row sizes: 100 bytes, 1.5k, 3k
50% read, 50% write. Throughput 40-120kq/sec
33% of the reads return a range of 10 values
Datastax recommended OS settings applied
Which heap size?
Why could a bigger heap be better?
• The heap fills more slowly (fewer GCs)
• Increases the ratio of dead objects during collections
Moving (compacting) the surviving objects is the heaviest operation
• Increases the chance of reclaiming entirely empty regions
Why could a bigger heap be worse?
• A full GC has a bigger impact
(now parallel since Java 10)
• Increases the chance of longer pauses
• Less memory remains for the disk cache
Heap size
Small heaps (<16GB) have bad latencies
After a given size, no obvious difference
[Chart: client latencies (ms) by percentile for heap sizes 8GB to 60GB - target pause 300ms, 60kq/sec]
Heap size & GC pause time
[Chart: total STW pause time (sec) and max pause time (ms) by heap size 8-58GB, at 800mb/s and 1250mb/s allocation - target pause 300ms]
Target pause time -XX:MaxGCPauseMillis
[Chart: total STW GC duration (sec) and max pause time (ms) by G1 pause-time target 50-600ms - heap 28GB, 900mb/sec]
Target pause time -XX:MaxGCPauseMillis
[Chart: client pause time (ms) by percentile for GC pause targets 50-600ms - 36GB heap]
Heap size conclusion
G1 struggles with a too-small heap, which also increases the full GC risk
GC pause time doesn't decrease proportionally as the heap size increases
The sweet spot seems to be around 30x the allocation rate
Keep -XX:MaxGCPauseMillis >= 200ms
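As a sanity check, the rule of thumb above can be turned into a tiny helper (hypothetical method name; the 30x multiplier is the empirical value from these tests, not a universal constant):

```java
public class HeapSizing {
    // Rule of thumb from these tests: heap ~ 30x the allocation rate per second.
    // allocationRateMBs is the application's allocation rate in MB/s.
    static double suggestedHeapGB(double allocationRateMBs) {
        return allocationRateMBs * 30.0 / 1024.0;
    }

    public static void main(String[] args) {
        // e.g. 800 MB/s of allocation suggests a heap of roughly 23-24 GB,
        // inside the 16-31 GB window recommended above
        System.out.printf("%.1f GB%n", suggestedHeapGB(800));
    }
}
```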
More advanced settings for G1
31GB or 32GB?
Up to 31GB: oops are compressed on 32 bits with 8-byte alignment (3-bit shift)
address 8  = 0000 1000 → compressed 0000 0001
address 32 = 0010 0000 → compressed 0000 0100
address 40 = 0010 1000 → compressed 0000 0101
2^32 => 4G addresses. The 3-bit shift gives 2^3 = 8 times more addressable bytes: 2^32 * 2^3 = 32GB
At 32GB and beyond: oops are stored on 64 bits
Heaps from 32GB to 64GB can use 16-byte alignment instead
G1 targets 2048 regions and changes the default region size at 32GB:
31GB => -XX:G1HeapRegionSize=8m → ~3875 regions
32GB => -XX:G1HeapRegionSize=16m → ~2000 regions
The region count can have an impact on Remembered Set update/scan times
nodetool sjk mx -mc -b "com.sun.management:type=HotSpotDiagnostic" -op getVMOption -a UseCompressedOops
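The 3-bit-shift arithmetic above can be sketched in plain Java (an illustrative model only; the real encoding lives inside HotSpot, and `decode` is a hypothetical name):

```java
// Illustrative model of compressed-oops decoding (not real JVM internals).
public class CompressedOops {
    static final int SHIFT = 3; // 8-byte object alignment => 3 spare low bits

    // Decode a 32-bit compressed oop into a 64-bit native address.
    static long decode(long base, int compressedOop) {
        if (compressedOop == 0) return 0;              // null stays null
        return base + ((compressedOop & 0xFFFFFFFFL) << SHIFT);
    }

    public static void main(String[] args) {
        // 2^32 slots at 8-byte granularity = 32 GB of addressable heap
        long addressable = (1L << 32) << SHIFT;
        System.out.println(addressable / (1L << 30) + " GB");
        // zero-based decoding: compressed 5 maps to native address 40
        System.out.println(decode(0L, 5));
    }
}
```

With a non-zero heap base the same decode simply adds the base, which is the extra work the zero-based mode avoids.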
No major difference overall
The concurrent marking cycle is slower with smaller regions (8MB => +20%)
No major difference in CPU usage
31GB + RegionSize=16m:
Total GC pause time -10%
Mean latency -8%
All other GC metrics are very similar
Not sure? Stick with 31GB + RegionSize=16m
[Chart: client latency (ms) by percentile - 32GB vs 31GB heaps, region size 8m/16m, byte alignment 8/16]
Zero-based compressed oops?
Zero-based compressed oops: virtual memory starts at zero, so
native oop = compressed oop << 3
Not zero-based:
if (compressed oop is null)
    native oop = null
else
    native oop = base + (compressed oop << 3)
The switch happens around a 26-30GB heap
Can be checked with -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode
No noticeable difference for this workload
-XX:ParallelGCThreads
Defines how many threads participate in GC.
2x8 physical cores, 32 with hyperthreading

threads | total STW time
8       | 90 sec
16      | 41 sec
32      | 32 sec
40      | 35 sec
[Chart: client latency (ms) by percentile with ParallelGCThreads=8/16/32/40 - 31GB, 300ms]
Minimum young size
During mixed GCs, the young size is drastically reduced (1.5GB with a 31GB heap)
The young gen then fills within a second, which can lead to multiple consecutive GCs
We can force a minimum size:
-XX:G1NewSizePercent=10 seems to be a better default (than 5)
The interval between GCs increased by 3x during mixed GCs (min 3 sec)
No noticeable changes in throughput and latencies
(Increases mixed GC time, reduces young GC time)
[Charts: GC pause times, 31GB vs 31GB with -XX:NewSize=4GB; without a minimum, a GC runs every second or more often]
Survivor threshold
Defines how many times data is copied within young before promotion to old
Dynamically resized by G1:
Desired survivor size 27262976 bytes, new threshold 3 (max 15)
- age 1: 21624512 bytes, 21624512 total
- age 2: 4510912 bytes, 26135424 total
- age 3: 5541504 bytes, 31676928 total
Default 15, but tends to remain <= 4 under heavy load:
survivors quickly fill the space defined by -XX:SurvivorRatio
Most objects should be either long-lived or instantaneous. Is it worth disabling survivors?
-XX:MaxTenuringThreshold=0 (default 15)
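The trade-off can be sketched as copy counts (an illustrative model with a hypothetical helper, not JVM internals): every young GC an object survives below the threshold costs one more survivor-to-survivor copy.

```java
public class Tenuring {
    // Number of times a surviving object is copied before it finally sits in
    // the old generation: one copy per young GC survived while its age is
    // below the threshold, plus the final promotion copy.
    static int copiesBeforePromotion(int tenuringThreshold, int gcsSurvived) {
        return Math.min(gcsSurvived, tenuringThreshold) + 1;
    }

    public static void main(String[] args) {
        // -XX:MaxTenuringThreshold=0: promoted on first survival, copied once
        System.out.println(copiesBeforePromotion(0, 5));
        // threshold 4: a long-lived object is copied up to 5 times
        System.out.println(copiesBeforePromotion(4, 10));
    }
}
```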
[Chart: client latencies by max survivor age 0-4 - 31GB, 300ms]
Removing survivors greatly reduces GC
(count -40%, time -50%)
In this case it doesn't increase the old GC count:
most survivor objects seem to be promoted anyway
! Premature promotion could fill the old generation quickly !
Survivor threshold - JVM pauses
Max tenuring = 15 (default)
GC Avg: 235 ms
GC Max: 381 ms
STW GC Total: 53 sec
Max tenuring = 0
GC Avg: 157 ms
GC Max: 290 ms
STW GC Total: 28 sec
Generated by gceasy.io
Check your GC log first!
In this case (almost no activity), the survivor size doesn't decrease much after 4 ages
Try with -XX:MaxTenuringThreshold=4
Delaying the marking cycle
By default, G1 starts a marking cycle when the heap is used at 45%
-XX:InitiatingHeapOccupancyPercent (IHOP)
By delaying the marking cycle:
• Reduces the number of old GCs
• Increases the chance of reclaiming empty regions?
• Increases the risk of triggering a full GC
Java 9 now dynamically resizes IHOP!
JDK-8136677. Disable the adaptive behavior with -XX:-G1UseAdaptiveIHOP
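IHOP is a percentage of the whole heap, so the occupancy at which marking starts is simple arithmetic (hypothetical helper name):

```java
public class IhopTrigger {
    // Heap occupancy (in GB) at which G1 starts a concurrent marking cycle.
    static double markingTriggerGB(double heapGB, int ihopPercent) {
        return heapGB * ihopPercent / 100.0;
    }

    public static void main(String[] args) {
        // 31 GB heap: the default IHOP=45 starts marking at ~13.9 GB used,
        // while IHOP=60 delays it to ~18.6 GB
        System.out.println(markingTriggerGB(31, 45));
        System.out.println(markingTriggerGB(31, 60));
    }
}
```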
[Chart: client latency (ms) by percentile with IHOP=45/60/70/80 - 31GB, 300ms]
Generated by gceasy.io
IHOP=80, 31GB heap:
Max heap after GC = 25GB
4 “old” compactions
+10% “young” GCs
GC total: 24sec
IHOP=60, 31GB heap:
Max heap after GC = 20GB
6 “old” compactions
GC total: 21sec
Delaying the marking cycle
Conclusion:
• Reduces the number of old GCs but increases young GCs
• No major improvement
• Increases the risk of a full GC
• Avoid going over 60% (or rely on the Java 9 dynamic sizing - not tested)
Remembered set updating time
G1 keeps cross-region references in a structure called the Remembered Set.
It is updated in batches to improve performance and avoid concurrency issues.
-XX:G1RSetUpdatingPauseTimePercent controls how much time may be spent on it
during the evacuation phase:
a percentage of MaxGCPauseMillis, defaulting to 10.
G1 adjusts the refinement thread zones (G1ConcRefinementGreen/Yellow/RedZone) after each GC
GC logs: [Update RS (ms): Min: 1255.1, Avg: 1256.0, Max: 1257.8, Diff: 2.8, Sum: 28889.1]
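Since the flag is a percentage of the pause target, the time budget G1 allows for remembered-set updates in one pause can be computed directly (hypothetical helper name):

```java
public class RSetBudget {
    // Milliseconds G1 may spend on "Update RS" inside one evacuation pause:
    // a percentage of -XX:MaxGCPauseMillis.
    static long rsetUpdateBudgetMs(long maxGCPauseMillis, int updatingPauseTimePercent) {
        return maxGCPauseMillis * updatingPauseTimePercent / 100;
    }

    public static void main(String[] args) {
        // Settings used in this talk: 300 ms target, default percent 10 => 30 ms;
        // lowering the flag to 5 halves the budget and pushes more work onto
        // the concurrent refinement threads
        System.out.println(rsetUpdateBudgetMs(300, 10));
        System.out.println(rsetUpdateBudgetMs(300, 5));
    }
}
```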
[Chart: client latencies (ms) by percentile with RSetUpdatingPauseTimePercent=0/5/10/15 - 31GB, RegionSize=16m, 300ms]
Other settings
-XX:+ParallelRefProcEnabled
No noticeable difference for this workload
-XX:+UseStringDeduplication
No noticeable difference for this workload
-XX:G1MixedGCLiveThresholdPercent=45/55/65
No noticeable difference for this workload
Final settings for G1
-Xmx31G -Xms31G -XX:MaxGCPauseMillis=300 -XX:G1HeapRegionSize=16m
-XX:MaxTenuringThreshold=0 -XX:+UnlockExperimentalVMOptions -XX:NewSize=2500m
-XX:ParallelGCThreads=32 -XX:InitiatingHeapOccupancyPercent=55
-XX:G1RSetUpdatingPauseTimePercent=5
Non-default VM flags: -XX:+AlwaysPreTouch -XX:CICompilerCount=15 -XX:CompileCommandFile=null -XX:ConcGCThreads=8 -XX:G1HeapRegionSize=16777216
-XX:G1RSetUpdatingPauseTimePercent=5 -XX:GCLogFileSize=10485760 -XX:+HeapDumpOnOutOfMemoryError -XX:InitialHeapSize=33285996544
-XX:InitialTenuringThreshold=0 -XX:+ManagementServer -XX:MarkStackSize=4194304 -XX:MaxGCPauseMillis=300 -XX:MaxHeapSize=33285996544
-XX:MaxNewSize=19964887040 -XX:MaxTenuringThreshold=0 -XX:MinHeapDeltaBytes=16777216 -XX:NewSize=2621440000 -XX:NumberOfGCLogFiles=10
-XX:OnOutOfMemoryError=null -XX:ParallelGCThreads=32 -XX:+ParallelRefProcEnabled -XX:+PerfDisableSharedMem -XX:PrintFLSStatistics=1 -XX:+PrintGC
-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintPromotionFailure
-XX:+PrintTenuringDistribution -XX:+ResizeTLAB -XX:StringTableSize=1000003 -XX:ThreadPriorityPolicy=42 -XX:ThreadStackSize=256 -XX:-UseBiasedLocking
-XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastUnorderedTimeStamps -XX:+UseG1GC -XX:+UseGCLogFileRotation
-XX:+UseNUMA -XX:+UseNUMAInterleaving -XX:+UseTLAB -XX:+UseThreadPriorities
Low-latency JVMs
Zing (Azul)
https://www.azul.com/files/wp_pgc_zing_v52.pdf
Shenandoah (Red Hat)
https://www.youtube.com/watch?v=qBQtbkmURiQ https://www.youtube.com/watch?v=VCeHkcwfF9Q
ZGC (Oracle)
Experimental. https://www.youtube.com/watch?v=tShc0dyFtgw
Write barrier
[Diagram: object graph A → B → C → D; D not yet marked]
HotSpot uses a write barrier to capture “pointer deletion”
This prevents D from being collected during the current GC cycle:
C.myFieldD = null
-----------
markObjectAsPotentialAlive(D)
What about compaction?
HotSpot stops all application threads, which avoids the concurrency issues
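The pseudocode above can be expanded into a minimal SATB-style sketch (all names are hypothetical; real barriers are emitted by the JIT around every reference store):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal model of an SATB (snapshot-at-the-beginning) write barrier.
// Before a reference field is overwritten, the OLD value is recorded so the
// concurrent marker still treats it as live for this marking cycle.
public class WriteBarrier {
    static final Deque<Object> satbQueue = new ArrayDeque<>();
    static boolean markingActive = true;

    static class Node { Object field; }

    // Conceptually, every reference store goes through this barrier.
    static void storeField(Node holder, Object newValue) {
        if (markingActive && holder.field != null) {
            satbQueue.push(holder.field); // capture the "deleted" pointer
        }
        holder.field = newValue;
    }
}
```

When `C.myFieldD = null` runs, the old referent D lands on the SATB queue, so D cannot be reclaimed in the current cycle.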
Read barrier
[Diagram: object graph A → B → C → D]
Read D
-----------
if (D hasn't been marked in this cycle) {
    markObject(D)
}
return D
What about compaction?
Read D
-----------
if (D is in a memory page being compacted) {
    followNewAddressOrMoveObjectToNewAddressNow(D)
    updateReferenceToNewAddress(D)
}
return D
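The compaction case can be modeled as a self-healing load barrier (illustrative sketch with hypothetical names, in the spirit of Zing/ZGC; real barriers operate on tagged pointers, not wrapper objects):

```java
// Minimal model of a read (load) barrier: every reference load may fix up
// the pointer if its target object is being relocated by the collector.
public class ReadBarrier {
    static class ObjRef {
        Object obj;               // current (possibly stale) reference
        Object forwarded;         // non-null if the object was moved
        boolean inCompactedPage;  // true while the page is being compacted
    }

    // Conceptual barrier executed on each reference load.
    static Object load(ObjRef ref) {
        if (ref.inCompactedPage && ref.forwarded != null) {
            ref.obj = ref.forwarded;   // heal the reference to the new address
            ref.inCompactedPage = false;
        }
        return ref.obj;
    }
}
```

After the first load heals the reference, later loads pay no extra cost, which is why the overhead shows up as a modest, constant CPU tax rather than pauses.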
Predictable, constant pause times, no matter how big the heap is
Comes with a performance cost: higher CPU usage
Ex: a Gatling report takes 10 to 20% more time to complete
on a low-latency JVM vs HotSpot+G1
(computation on 1 CPU running for 10 minutes)
Low-latency JVMs
[Charts: Zing ramp-up vs G1 ramp-up]
JVM test, not a Cassandra benchmark!
Low-latency JVMs
[Chart: client latency (ms) by percentile - 31GB G1 vs 31GB Zing]
Low-latency JVMs (Shenandoah)
[Charts: Shenandoah ramp-up vs G1 ramp-up]
JVM test, not a Cassandra benchmark!
Low-latency conclusion
Capable of dealing with big heaps
Can handle a bigger throughput than G1 before returning the first error:
G1 pauses create bursts with potential timeouts
Zing offers good performance, including with smaller heaps
Shenandoah was stable in our tests and offers a very good alternative to G1 to avoid pause time issues
Try carefully, they are still young
Conclusion
(Super) easy to get things wrong. Change 1 parameter at a time & test for > 1 day
It's easy to measure the wrong thing, or to test too briefly and introduce external (non-JVM) noise…
A lot of work to save a few percentiles: these tests ran for more than 300 hours
Don't over-tune. Keep things simple to avoid side effects
Keep the target pause > 200ms
Keep the heap size between 16GB and 31GB
Start with heap size = 30 x (allocation rate)
GC pauses still an issue? Upgrade to DSE 6 or add extra servers
Chasing the 99.9x percentile? Go with a low-latency JVM
Don't trust what you've read: check your GC logs!
Thank you
& thanks to
Lucas Bruand & Laurent Dubois (DeepPay)
Pierre Laporte, Samina & James (Datastax)!
Studiovity film pre-production and screenwriting softwareinfo611746
 
Agnieszka Andrzejewska - BIM School Course in Kraków
Agnieszka Andrzejewska - BIM School Course in KrakówAgnieszka Andrzejewska - BIM School Course in Kraków
Agnieszka Andrzejewska - BIM School Course in Krakówbim.edu.pl
 
Crafting the Perfect Measurement Sheet with PLM Integration
Crafting the Perfect Measurement Sheet with PLM IntegrationCrafting the Perfect Measurement Sheet with PLM Integration
Crafting the Perfect Measurement Sheet with PLM IntegrationWave PLM
 
Designing for Privacy in Amazon Web Services
Designing for Privacy in Amazon Web ServicesDesigning for Privacy in Amazon Web Services
Designing for Privacy in Amazon Web ServicesKrzysztofKkol1
 
AI/ML Infra Meetup | Perspective on Deep Learning Framework
AI/ML Infra Meetup | Perspective on Deep Learning FrameworkAI/ML Infra Meetup | Perspective on Deep Learning Framework
AI/ML Infra Meetup | Perspective on Deep Learning FrameworkAlluxio, Inc.
 
Into the Box 2024 - Keynote Day 2 Slides.pdf
Into the Box 2024 - Keynote Day 2 Slides.pdfInto the Box 2024 - Keynote Day 2 Slides.pdf
Into the Box 2024 - Keynote Day 2 Slides.pdfOrtus Solutions, Corp
 
A Python-based approach to data loading in TM1 - Using Airflow as an ETL for TM1
A Python-based approach to data loading in TM1 - Using Airflow as an ETL for TM1A Python-based approach to data loading in TM1 - Using Airflow as an ETL for TM1
A Python-based approach to data loading in TM1 - Using Airflow as an ETL for TM1KnowledgeSeed
 
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital TransformationWSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital TransformationWSO2
 
AI/ML Infra Meetup | Improve Speed and GPU Utilization for Model Training & S...
AI/ML Infra Meetup | Improve Speed and GPU Utilization for Model Training & S...AI/ML Infra Meetup | Improve Speed and GPU Utilization for Model Training & S...
AI/ML Infra Meetup | Improve Speed and GPU Utilization for Model Training & S...Alluxio, Inc.
 
top nidhi software solution freedownload
top nidhi software solution freedownloadtop nidhi software solution freedownload
top nidhi software solution freedownloadvrstrong314
 
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTier1 app
 

Recently uploaded (20)

Field Employee Tracking System| MiTrack App| Best Employee Tracking Solution|...
Field Employee Tracking System| MiTrack App| Best Employee Tracking Solution|...Field Employee Tracking System| MiTrack App| Best Employee Tracking Solution|...
Field Employee Tracking System| MiTrack App| Best Employee Tracking Solution|...
 
iGaming Platform & Lottery Solutions by Skilrock
iGaming Platform & Lottery Solutions by SkilrockiGaming Platform & Lottery Solutions by Skilrock
iGaming Platform & Lottery Solutions by Skilrock
 
Mastering Windows 7 A Comprehensive Guide for Power Users .pdf
Mastering Windows 7 A Comprehensive Guide for Power Users .pdfMastering Windows 7 A Comprehensive Guide for Power Users .pdf
Mastering Windows 7 A Comprehensive Guide for Power Users .pdf
 
De mooiste recreatieve routes ontdekken met RouteYou en FME
De mooiste recreatieve routes ontdekken met RouteYou en FMEDe mooiste recreatieve routes ontdekken met RouteYou en FME
De mooiste recreatieve routes ontdekken met RouteYou en FME
 
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?
 
Accelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with PlatformlessAccelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with Platformless
 
Corporate Management | Session 3 of 3 | Tendenci AMS
Corporate Management | Session 3 of 3 | Tendenci AMSCorporate Management | Session 3 of 3 | Tendenci AMS
Corporate Management | Session 3 of 3 | Tendenci AMS
 
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
 
GraphAware - Transforming policing with graph-based intelligence analysis
GraphAware - Transforming policing with graph-based intelligence analysisGraphAware - Transforming policing with graph-based intelligence analysis
GraphAware - Transforming policing with graph-based intelligence analysis
 
Studiovity film pre-production and screenwriting software
Studiovity film pre-production and screenwriting softwareStudiovity film pre-production and screenwriting software
Studiovity film pre-production and screenwriting software
 
Agnieszka Andrzejewska - BIM School Course in Kraków
Agnieszka Andrzejewska - BIM School Course in KrakówAgnieszka Andrzejewska - BIM School Course in Kraków
Agnieszka Andrzejewska - BIM School Course in Kraków
 
Crafting the Perfect Measurement Sheet with PLM Integration
Crafting the Perfect Measurement Sheet with PLM IntegrationCrafting the Perfect Measurement Sheet with PLM Integration
Crafting the Perfect Measurement Sheet with PLM Integration
 
Designing for Privacy in Amazon Web Services
Designing for Privacy in Amazon Web ServicesDesigning for Privacy in Amazon Web Services
Designing for Privacy in Amazon Web Services
 
AI/ML Infra Meetup | Perspective on Deep Learning Framework
AI/ML Infra Meetup | Perspective on Deep Learning FrameworkAI/ML Infra Meetup | Perspective on Deep Learning Framework
AI/ML Infra Meetup | Perspective on Deep Learning Framework
 
Into the Box 2024 - Keynote Day 2 Slides.pdf
Into the Box 2024 - Keynote Day 2 Slides.pdfInto the Box 2024 - Keynote Day 2 Slides.pdf
Into the Box 2024 - Keynote Day 2 Slides.pdf
 
A Python-based approach to data loading in TM1 - Using Airflow as an ETL for TM1
A Python-based approach to data loading in TM1 - Using Airflow as an ETL for TM1A Python-based approach to data loading in TM1 - Using Airflow as an ETL for TM1
A Python-based approach to data loading in TM1 - Using Airflow as an ETL for TM1
 
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital TransformationWSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
 
AI/ML Infra Meetup | Improve Speed and GPU Utilization for Model Training & S...
AI/ML Infra Meetup | Improve Speed and GPU Utilization for Model Training & S...AI/ML Infra Meetup | Improve Speed and GPU Utilization for Model Training & S...
AI/ML Infra Meetup | Improve Speed and GPU Utilization for Model Training & S...
 
top nidhi software solution freedownload
top nidhi software solution freedownloadtop nidhi software solution freedownload
top nidhi software solution freedownload
 
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR
 

JVM tuning for low latency applications & Cassandra

  • 1. #DevoxxFR JVMs & GCs GC tuning for low latencies with Cassandra Quentin Ambard @qambard 1 (NOT a Cassandra benchmark!)
  • 3. Agenda • Hotspot GCs • G1: Heap size & target pause time • G1: 31GB or 32GB? • G1: Advanced settings • Low latencies JVMs • Wrap up 3
  • 6. Parallel collector 6 Heap is full Triggers a Stop The World (STW) GC to mark & compact
  • 7. Parallel gc profile 7 Young GCs until old is filled, then a full GC
  • 8. Concurrent Mark Sweep (CMS) collector 8 Young collection uses ParNewGC. Behaves like Parallel GC Difference: needs to communicate with CMS for the old generation
  • 9. Concurrent Mark Sweep (CMS) collector 9 Old region is getting too big Limit defined by -XX:CMSInitiatingHeapOccupancyPercent Starts concurrent marking & cleanup Only small STW phases Sweep only, no compaction, which leads to fragmentation Triggers a serial full STW GC if contiguous memory can’t be allocated Memory is requested by “block” (-XX:OldPLABSize=16): each thread requests a block to copy young to old
  • 10. CMS profile 10 Hard to tune: young size must be fixed and won’t adapt to a new workload Obscure options. Easier when the heap remains around 8GB Fragmentation will eventually trigger very long full GCs
  • 11. Garbage first (G1) collector 11 Heap (31GB) Empty Regions (-XX:G1HeapRegionSize)
  • 12. Garbage first (G1) collector 12 Young region
  • 13. Garbage first (G1) collector 13 Young space is full Dynamically sized to reach the pause objective -XX:MaxGCPauseMillis Triggers a STW “parallel gc” on young Scans objects in each region Copies survivors to a survivor space in another young region Frees regions
  • 14. Garbage first (G1) collector 14 Young space is full Dynamically sized to reach the pause objective -XX:MaxGCPauseMillis Triggers a STW “parallel gc” on young Scans objects in each region Copies survivors to a survivor space in another young region. Survivor size dynamically adjusted Promotes survivors that exceed the tenuring threshold to an old region Frees regions
  • 15. Garbage first (G1) collector 15 Old space is getting too big Limit defined by -XX:InitiatingHeapOccupancyPercent (default 45) Starts a concurrent region scan Not blocking (2 short STW pauses with SATB: start + end) 100% empty regions are reclaimed immediately Fast, “free” operation Triggers STW mixed gc: Drastically reduces young size Includes a few old regions in each of the next young gcs Repeats mixed gc until reclaimable space falls under -XX:G1HeapWastePercent Since young size is reduced to respect the target pause time, G1 triggers several gcs in a row
  • 16. G1 profile 16 Easy to tune, predictable young GCs, plus periodic “mixed” gcs
  • 17. #DevoxxFR Heap size & target pause time 17
  • 18. Test protocol 18 128GB RAM, 2x8 cores (32 ht cores), 2x CPU E5-2620 v4 @ 2.10GHz. Disk SSD RAID0. JDK 1.8.161 DSE 5.1.7, memtable size fixed to 4GB. Data saved in ramdisk (40GB) Gatling queries on 3 tables jvm: zing. Default C* schema configuration. Durable write=false (avoids commit log to reduce disk activity) Row sizes: 100 bytes, 1.5k, 3k 50% read, 50% write. Throughput 40-120kq/sec 33% of the reads return a range of 10 values Datastax recommended OS settings applied
  • 19. Which heap size 19 Why could a bigger heap be better? • The heap fills more slowly (fewer gcs) • Increases the ratio of dead objects during collections Moving (compacting) the remaining objects is the heaviest operation • Increases the chances of reclaiming an entirely empty region
  • 20. Which heap size 20 Why could a bigger heap be worse? • A full GC has a bigger impact (now parallel since Java 10) • Increases the chances of triggering longer pauses • Less memory remains for the disk cache
  • 21. Heap size 21 Small heaps (<16GB) give bad latencies After a given size, no obvious difference [Chart: client latencies by percentile for heap sizes 8GB to 60GB, target pause = 300ms, 60kq/sec]
  • 22. Heap size & GC Pause time 22 [Chart: total STW pause time (sec) by heap size, target pause 300ms, allocation 800mb/s]
  • 23. Heap size & GC Pause time 23 [Chart: total and max STW pause time by heap size, target pause 300ms, 800mb/s]
  • 24. Heap size & GC Pause time 24 [Chart: total and max STW pause time by heap size, target pause 300ms, 1250mb/s]
  • 25. Heap size & GC Pause time 25 [Chart: total and max STW pause time by heap size at both 800mb/s and 1250mb/s]
  • 26. Target pause time -XX:MaxGCPauseMillis 26 [Chart: total STW GC pause by target pause (50ms to 600ms), heap 28GB]
  • 27. Target pause time -XX:MaxGCPauseMillis 27 [Chart: total and max STW pause by target pause, heap 28GB, 900mb/sec]
  • 28. Target pause time -XX:MaxGCPauseMillis 28 [Chart: client pause time percentiles by GC target pause (50ms to 600ms), 36GB heap]
  • 29. Heap size conclusion 29 G1 struggles with too small a heap, which also increases full GC risk GC pause time doesn’t shrink proportionally as heap size increases The sweet spot seems to be around 30x the allocation rate Keep -XX:MaxGCPauseMillis >= 200ms
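The sizing rule above (heap around 30x the allocation rate, kept in the 16GB to 31GB band recommended by the deck) can be sketched as a small helper; the clamping choice and method name are our illustration of the slide's advice, not an official formula:

```java
// Sketch of the slide's sizing rule: heap ~= 30x the allocation rate,
// clamped to the 16-31GB range recommended elsewhere in the deck.
public class HeapSizing {
    static final long GB = 1024L * 1024 * 1024;

    // allocationRateBytesPerSec: measured allocation rate from GC logs
    static long suggestedHeapBytes(long allocationRateBytesPerSec) {
        long raw = 30 * allocationRateBytesPerSec;
        return Math.max(16 * GB, Math.min(31 * GB, raw));
    }

    public static void main(String[] args) {
        // 800 MB/s allocation rate, as in the deck's test runs
        System.out.println(suggestedHeapBytes(800L * 1024 * 1024) / GB + "GB");
    }
}
```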
  • 31. 31GB or 32GB? 31 Up to 31GB: oops compressed to 32 bits with 8-byte alignment (3-bit shift) 8 = 0000 1000 → stored as 0000 0001 32 = 0010 0000 → stored as 0000 0100 40 = 0010 1000 → stored as 0000 0101 2^32 => 4G addresses. The 3-bit shift trick gives 2^3 = 8 times more: 2^32 * 2^3 = 32GB 32GB: oops on 64 bits Heaps from 32GB to 64GB can use 16-byte alignment G1 targets 2048 regions and changes the default region size at 32GB 31GB => -XX:G1HeapRegionSize=8m = 3875 regions 32GB => -XX:G1HeapRegionSize=16m = 2000 regions The region count can have an impact on Remembered Set update/scan nodetool sjk mx -mc -b "com.sun.management:type=HotSpotDiagnostic" -op getVMOption -a UseCompressedOops
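The 3-bit shift arithmetic on this slide can be verified with a few lines of Java; this is plain bit math mirroring the slide's 8/32/40 examples, not HotSpot internals:

```java
// 32-bit compressed oops + 8-byte alignment (3-bit shift) address
// 2^32 * 2^3 = 32GB, which is why the magic limit sits at 32GB.
public class CompressedOops {
    static final int SHIFT = 3; // log2(8-byte object alignment)

    static long encode(long address) { return address >>> SHIFT; }
    static long decode(long oop)     { return oop << SHIFT; }

    // 2^32 possible compressed values, each pointing at an 8-byte slot
    static long maxAddressableBytes() { return (1L << 32) << SHIFT; }

    public static void main(String[] args) {
        // Mirrors the slide: addresses 8, 32, 40 store as 1, 4, 5
        System.out.println(encode(8) + " " + encode(32) + " " + encode(40));
    }
}
```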
  • 32. 31GB or 32GB? 32 No major difference Concurrent marking cycle is slower with smaller regions (8MB => +20%) No major difference in cpu usage 31GB + RegionSize=16m: total GC pause time -10%, mean latency -8% All other GC metrics are very similar Not sure? Stick with 31GB + RegionSize=16m [Chart: client latencies by percentile for 31GB/32GB x region size x byte-alignment combinations]
  • 33. Zero-based compressed oops? 33 Zero-based compressed oops: virtual memory starts at zero: native oop = (compressed oop << 3) Not zero-based: if (compressed oop is null) native oop = null else native oop = base + (compressed oop << 3) Happens around 26/30GB Can be checked with -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode No noticeable difference for this workload
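The two decode paths above can be written out explicitly; this is an illustrative sketch (HotSpot does this in generated machine code, and the class name is ours):

```java
// Zero-based decode is a bare shift; a non-zero heap base adds a null
// check and an addition, which is why zero-based oops decode cheaper.
public class OopDecode {
    static final int SHIFT = 3;

    // Heap mapped at virtual address zero
    static long decodeZeroBased(long compressed) {
        return compressed << SHIFT;
    }

    // Heap mapped at 'base': null (0) must stay null
    static long decodeWithBase(long base, long compressed) {
        return compressed == 0 ? 0 : base + (compressed << SHIFT);
    }
}
```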
  • 34. -XX:ParallelGCThreads 34 Defines how many threads participate in GC. 2x8 physical cores, 32 threads with hyperthreading Threads / STW total time: 8 => 90 sec, 16 => 41 sec, 32 => 32 sec, 40 => 35 sec [Chart: client latencies by GC thread count, 31GB / 300ms]
  • 35. Minimum young size 35 During mixed gc, young size is drastically reduced (1.5GB with a 31GB heap) Young fills in a second, which can lead to multiple consecutive GCs (or several per second) We can force a minimum size: -XX:G1NewSizePercent=10 seems a better default than the current 5 Interval between GCs increased by x3 during mixed GC (min 3 sec) No noticeable change in throughput or latencies (increases mixed time, reduces young gc time) [GC pause charts: 31GB vs 31GB with -XX:NewSize=4GB]
  • 36. Survivor threshold 36 Defines how many times data is copied within young before promotion to old Dynamically resized by G1: Desired survivor size 27262976 bytes, new threshold 3 (max 15) - age 1: 21624512 bytes, 21624512 total - age 2: 4510912 bytes, 26135424 total - age 3: 5541504 bytes, 31676928 total Default is 15, but it tends to remain <= 4 under heavy load: objects quickly fill the survivor space defined by -XX:SurvivorRatio Most objects should be either long-lived or instantaneous. Is it worth disabling survivor spaces? -XX:MaxTenuringThreshold=0 (default 15)
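Before changing -XX:MaxTenuringThreshold, it helps to read -XX:+PrintTenuringDistribution output like the lines quoted above; a small parser sketch (the regex and class name are ours, not a JVM API):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts per-age survivor byte counts from -XX:+PrintTenuringDistribution
// output such as "- age   1:   21624512 bytes,   21624512 total".
public class TenuringParser {
    private static final Pattern AGE_LINE =
        Pattern.compile("-\\s*age\\s+(\\d+):\\s+(\\d+)\\s+bytes");

    static Map<Integer, Long> parse(String gcLog) {
        Map<Integer, Long> bytesByAge = new LinkedHashMap<>();
        Matcher m = AGE_LINE.matcher(gcLog);
        while (m.find()) {
            bytesByAge.put(Integer.parseInt(m.group(1)), Long.parseLong(m.group(2)));
        }
        return bytesByAge;
    }
}
```

If the byte counts stop shrinking after a few ages, a lower threshold (e.g. 4, as suggested later in the deck) may be enough.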
  • 37. Survivor threshold 37 Removing survivor spaces greatly reduces GC (count -40%, time -50%) In this case it doesn’t increase old gc count: most survivor objects seem to be promoted anyway! “Premature promotion” could potentially fill the old generation quickly! [Chart: client latencies by max survivor age (0 to 4), 31GB, 300ms]
  • 38. Survivor threshold - JVM pauses 38 Max tenuring = 15 (default): GC avg 235 ms, GC max 381 ms, STW GC total 53 sec Max tenuring = 0: GC avg 157 ms, GC max 290 ms, STW GC total 28 sec Generated by gceasy.io
  • 39. Survivor threshold 39 Check your GC logs first! In this case (almost no activity), the survivor size doesn’t shrink much after 4 periods Try -XX:MaxTenuringThreshold=4 Generated by gceasy.io
  • 40. Delaying the marking cycle 40 By default, G1 starts a marking cycle when the heap is 45% used: -XX:InitiatingHeapOccupancyPercent (IHOP) Delaying the marking cycle: • Reduces the count of old gcs • Increases the chance of reclaiming empty regions • Increases the risk of triggering a full GC Java 9 now dynamically resizes IHOP! JDK-8136677. Disable the adaptive behavior with -XX:-G1UseAdaptiveIHOP
  • 41. Delaying the marking cycle 41 [Chart: client latencies by percentile for IHOP 45/60/70/80, 31GB, 300ms]
  • 42. 42 IHOP=80, 31GB heap: max heap after GC = 25GB, 4 “old” compactions, +10% “young” GC, GC total 24 sec IHOP=60, 31GB heap: max heap after GC = 20GB, 6 “old” compactions, GC total 21 sec Generated by gceasy.io
  • 43. Delaying the marking cycle 43 Conclusion: • Reduces the number of old gcs but increases young gcs • No major improvement • Increases the risk of full GC • Avoid going over 60% (or rely on Java 9 dynamic sizing, not tested)
  • 44. Remembered set updating time 44 G1 keeps cross-region references in a structure called the Remembered Set. Updated in batches to improve performance and avoid concurrency issues -XX:G1RSetUpdatingPauseTimePercent controls how much time may be spent updating it during the evacuation phase, as a percent of the max pause time. Defaults to 10 Refinement thread zones (G1ConcRefinementGreen/Yellow/RedZone) are adjusted after each gc GC logs: [Update RS (ms): Min: 1255,1, Avg: 1256,0, Max: 1257,8, Diff: 2,8, Sum: 28889,1]
  • 45. Remembered set updating time 45 [Chart: client latencies by percentile for RSetUpdatingPauseTimePercent 0/5/10/15, 31GB, RegionSize=16mb, 300ms]
  • 46. Other settings 46 -XX:+ParallelRefProcEnabled No noticeable difference for this workload -XX:+UseStringDeduplication No noticeable difference for this workload -XX:G1MixedGCLiveThresholdPercent=45/55/65 No noticeable difference for this workload
  • 47. Final settings for G1 47 -Xmx31G -Xms31G -XX:MaxGCPauseMillis=300 -XX:G1HeapRegionSize=16m -XX:MaxTenuringThreshold=0 -XX:+UnlockExperimentalVMOptions -XX:NewSize=2500m -XX:ParallelGCThreads=32 -XX:InitiatingHeapOccupancyPercent=55 -XX:G1RSetUpdatingPauseTimePercent=5 Non-default VM flags: -XX:+AlwaysPreTouch -XX:CICompilerCount=15 -XX:CompileCommandFile=null -XX:ConcGCThreads=8 -XX:G1HeapRegionSize=16777216 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:GCLogFileSize=10485760 -XX:+HeapDumpOnOutOfMemoryError -XX:InitialHeapSize=33285996544 -XX:InitialTenuringThreshold=0 -XX:+ManagementServer -XX:MarkStackSize=4194304 -XX:MaxGCPauseMillis=300 -XX:MaxHeapSize=33285996544 -XX:MaxNewSize=19964887040 -XX:MaxTenuringThreshold=0 -XX:MinHeapDeltaBytes=16777216 -XX:NewSize=2621440000 -XX:NumberOfGCLogFiles=10 -XX:OnOutOfMemoryError=null -XX:ParallelGCThreads=32 -XX:+ParallelRefProcEnabled -XX:+PerfDisableSharedMem -XX:PrintFLSStatistics=1 -XX:+PrintGC -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintPromotionFailure -XX:+PrintTenuringDistribution -XX:+ResizeTLAB -XX:StringTableSize=1000003 -XX:ThreadPriorityPolicy=42 -XX:ThreadStackSize=256 -XX:-UseBiasedLocking -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastUnorderedTimeStamps -XX:+UseG1GC -XX:+UseGCLogFileRotation -XX:+UseNUMA -XX:+UseNUMAInterleaving -XX:+UseTLAB -XX:+UseThreadPriorities
  • 48. #DevoxxFR Low latencies JVMs 48 Zing (Azul) https://www.azul.com/files/wp_pgc_zing_v52.pdf Shenandoah (Red Hat) https://www.youtube.com/watch?v=qBQtbkmURiQ https://www.youtube.com/watch?v=VCeHkcwfF9Q ZGC (Oracle), experimental. https://www.youtube.com/watch?v=tShc0dyFtgw
  • 49. Write barrier 49 Hotspot uses a write barrier to capture “pointer deletion” Prevents D from being cleaned during the next GC C.myFieldD = null ----------- markObjectAsPotentialAlive(D) What about compaction? Hotspot stops all application threads, solving concurrency issues
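The pseudocode above can be fleshed out as a toy SATB-style write barrier; the class and method names are illustrative, this is not HotSpot's actual barrier:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy SATB-style write barrier: before a reference field is overwritten,
// the old value is logged so the concurrent marker still sees it as live.
public class WriteBarrierDemo {
    static final Deque<Object> satbQueue = new ArrayDeque<>();

    static class Node { Node next; }

    // Every reference store goes through this instead of a plain write
    static void writeField(Node holder, Node newValue) {
        Node old = holder.next;
        if (old != null) {
            satbQueue.push(old); // keeps "D" reachable for the current cycle
        }
        holder.next = newValue;
    }
}
```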
  • 50. Read barrier 50 Read D ----------- If (D hasn’t been marked in this cycle) { markObject(D) } Return D What about compaction? Read D ----------- If (D is in a memory page being compacted) { followNewAddressOrMoveObjectToNewAddressNow(D) updateReferenceToNewAddress(D) } Return D
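The compaction branch above can be sketched as a toy forwarding-pointer read barrier (illustrative only; real collectors emit this check in compiled code, and the names are ours):

```java
// Toy forwarding-pointer read barrier: every load checks whether the
// object has been moved by the compactor and follows the new address.
public class ReadBarrierDemo {
    static class Obj {
        Obj forwarded; // set once the compactor relocates the object
        int value;
        Obj(int value) { this.value = value; }
    }

    // Every reference load goes through this instead of a plain read
    static Obj read(Obj ref) {
        return (ref != null && ref.forwarded != null) ? ref.forwarded : ref;
    }
}
```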
  • 51. Read barrier 51 Predictable, constant pause time no matter how big the heap is Comes with a performance cost: higher cpu usage Ex: a Gatling report takes 10 to 20% more time to complete on a low-latency jvm vs hotspot+G1 (computation on 1 cpu running for 10 minutes)
  • 52. Low latencies JVM 52 Zing rampup G1 rampup JVM test, not a Cassandra benchmark!
  • 54. Low latencies JVM (shenandoah) 54 Shenandoah rampup G1 rampup JVM test, not a Cassandra benchmark!
  • 55. Low latencies conclusion 55 Capable of dealing with big heaps Can handle a bigger throughput than G1 before the first error G1 pauses create bursts with potential timeouts Zing offers good performance, including with smaller heaps Shenandoah was stable in our tests and offers a very good alternative to G1 to avoid pause time Try carefully, still young
  • 57. Conclusion 57 (Super) easy to get things wrong: change 1 param at a time & test for > 1 day Measuring the wrong thing, testing too short, introducing external (non-JVM) noise… A lot of work to save a few percentiles. Tests ran for more than 300 hours Don’t over-tune. Keep things simple to avoid side effects Keep target pause > 200ms Keep heap size between 16GB and 31GB Start with heap size = 30x (allocation rate)
  • 58. Conclusion 58 GC pause still an issue? Upgrade to DSE6 or add extra servers Running after the 99.9x pt? Go with low-latency jvms Don’t trust what you’ve read! Check your GC logs!
  • 59. #DevoxxFR Thank you & thanks to Lucas Bruand & Laurent Dubois (DeepPay), Pierre Laporte, Samina & James (Datastax)! 59
