@kavya719
Let’s talk locks!
InfoQ.com: News & Community Site
• 750,000 unique visitors/month
• Published in 4 languages (English, Chinese, Japanese and Brazilian
Portuguese)
• Post content from our QCon conferences
• News 15-20 / week
• Articles 3-4 / week
• Presentations (videos) 12-15 / week
• Interviews 2-3 / week
• Books 1 / month
Watch the video with slide
synchronization on InfoQ.com!
https://www.infoq.com/presentations/
go-locks/
Presented at QCon New York
www.qconnewyork.com
Purpose of QCon
- to empower software development by facilitating the spread of
knowledge and innovation
Strategy
- practitioner-driven conference designed for YOU: influencers of
change and innovation in your teams
- speakers and topics driving the evolution and innovation
- connecting and catalyzing the influencers and innovators
Highlights
- attended by more than 12,000 delegates since 2007
- held in 9 cities worldwide
kavya
locks.
“locks are slow”
lock contention causes ~10x latency
[graph: latency (ms) over time, spiking ~10x under lock contention]
…but they’re used everywhere.
from schedulers to databases and web servers.
?
let’s analyze its performance!
performance models for contention
let’s build a lock!
a tour through lock internals
let’s use it, smartly!
a few closing strategies
our case-study
Lock implementations are hardware, ISA, OS and language specific:



We assume an x86_64 SMP machine running a modern Linux.

We’ll look at the lock implementation in Go 1.12.
[diagram: simplified SMP system: CPU 0 and CPU 1, each with its own cache, connected by an interconnect to memory]
use as you would threads 

> go handle_request(r)

but user-space threads:

managed entirely by the Go runtime, not the operating system.
The unit of concurrent execution: goroutines.
a brief go primer
Data shared between goroutines must be synchronized.
One way is to use the blocking, non-recursive lock construct:
> var mu sync.Mutex

mu.Lock()

…
mu.Unlock()
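A minimal, runnable sketch of that construct in use (the counter variable and goroutine count here are invented for illustration):

// a minimal sketch of guarding shared state with sync.Mutex;
// the counter and the goroutine count are illustrative only.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu sync.Mutex
	counter := 0

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()   // only one goroutine holds the lock at a time
			counter++   // critical section: read-modify-write of shared state
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always 4: the increments never interleave
}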
let’s build a lock!
a tour through lock internals.
want: “mutual exclusion”
only one thread has access to shared data at any given time
T1
running on CPU 1
T2
running on CPU 2
func reader() {

// Read a task

t := tasks.get()



// Do something with it.

...
}
func writer() {

// Write to tasks

tasks.put(t)
}
// track whether tasks is
// available (0) or not (1)
// shared ring buffer
var tasks Tasks
want: “mutual exclusion”
…idea! use a flag?
T1
running on CPU 1
T2
running on CPU 2
// track whether tasks can be
// accessed (0) or not (1)
var flag int
var tasks Tasks
func reader() {

for {
/* If flag is 0,
can access tasks. */

if flag == 0 {

/* Set flag */
flag++
...

/* Unset flag */
flag--
return
}
/* Else, keep looping. */ 

}
}
func writer() {

for {
/* If flag is 0,
can access tasks. */

if flag == 0 {

/* Set flag */
flag++
...

/* Unset flag */
flag--
return
}
/* Else, keep looping. */ 

}
}
T1
running on CPU 1
flag++ is a single read-modify-write against memory:
1. Read (0)
2. Modify
3. Write (1)
timeline of memory operations:
T1 on CPU 1 runs flag++ (a read R followed by a write W);
T2 on CPU 2 runs if flag == 0 (a read R) in between.
T2 may observe T1’s RMW half-complete
atomicity
A memory operation is non-atomic if it can be
observed half-complete by another thread.
An operation may be non-atomic because it:

• uses multiple CPU instructions:

operations on a large data structure; 

compiler decisions.

• uses a single non-atomic CPU instruction:

RMW instructions; unaligned loads and stores.
> o := Order {
id: 10,
name: “yogi bear”,
order: “pie”,
count: 3,
}
> flag++
An atomic operation is an “indivisible”
memory access.
In x86_64, loads and stores that are
naturally aligned, up to 64 bits.*
guarantees the data item fits within a cache line;

cache coherency guarantees a consistent view for a
single cache line.
* these are not the only guaranteed atomic operations.
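A small sketch of what non-atomic means in practice: two goroutines incrementing a shared counter with ++ can lose updates, while sync/atomic cannot (the variable names and iteration counts are invented; go run -race would flag the plain version as a data race):

// plain ++ is a non-atomic read-modify-write: concurrent increments can
// interleave and lose updates; atomic.AddInt64 is an indivisible RMW.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	const n = 100000
	var plain, atomicCtr int64
	var wg sync.WaitGroup

	for g := 0; g < 2; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < n; i++ {
				plain++                        // racy: R, modify, W can interleave
				atomic.AddInt64(&atomicCtr, 1) // a single LOCK-prefixed add on x86_64
			}
		}()
	}
	wg.Wait()
	fmt.Println(plain, atomicCtr) // plain is often < 200000; atomicCtr is exactly 200000
}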
nope; not atomic.
…idea! use a flag?
func reader() {

for {
/* If flag is 0,
can access tasks. */

if flag == 0 {

/* Set flag */
flag = 1
t := tasks.get()

...

/* Unset flag */
flag = 0
return
}
/* Else, keep looping. */ 

}
}
T1
running on CPU 1
the compiler may reorder operations.
// Sets flag to 1 & reads data.
func reader() {
flag = 1
t := tasks.get()
...
flag = 0
the processor may reorder operations.
StoreLoad reordering
load t before store flag = 1
// Sets flag to 1 & reads data.
func reader() {
flag = 1
t := tasks.get()
...
flag = 0
memory access reordering
The compiler, processor can reorder memory operations to optimize execution.
• The only cardinal rule is sequential consistency for single threaded programs.

• Other guarantees about compiler reordering are captured by a 

language’s memory model:

C++, Go guarantee data-race free programs will be sequentially consistent.
• For processor reordering, by the hardware memory model:

x86_64 provides Total Store Ordering (TSO).
a relaxed consistency model:
most reorderings are invalid, but StoreLoad reordering is fair game;
it allows the processor to hide the latency of writes.
nope; not atomic and no memory order guarantees.
…idea! use a flag?
need a construct that provides atomicity and prevents memory reordering.
…the hardware provides!
For guaranteed atomicity and to prevent memory reordering.
special hardware instructions
x86 example:
XCHG (exchange)
these instructions are called memory barriers.
they prevent reordering by the compiler too.
x86 example: MFENCE, LFENCE, SFENCE.
special hardware instructions
The x86 LOCK instruction prefix provides both.
Used to prefix memory access instructions:
LOCK ADD, LOCK CMPXCHG
For guaranteed atomicity and to prevent memory reordering.
These back the atomic operations in languages like Go:
atomic.Add
atomic.CompareAndSwap
Atomic compare-and-swap (CAS) conditionally updates a variable:
it checks whether the variable holds an expected value and, if so, changes it to the desired value.
the CAS succeeded;
we set flag to 1.
flag was 1 so our CAS failed;
try again.
var flag int
var tasks Tasks
func reader() {
for {
// Try to atomically CAS flag from 0 -> 1
if atomic.CompareAndSwap(&flag, 0, 1) {
...
// Atomically set flag back to 0.
atomic.Store(&flag, 0)
return
}

// CAS failed, try again :)
}
}
baby’s first lock: spinlocks
This is a simplified spinlock.
Spinlocks are used extensively in the Linux kernel.
The atomic CAS is the quintessence of any lock implementation.
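The slides use a simplified atomic.CompareAndSwap; with Go’s actual sync/atomic API, a hedged sketch of the same spinlock looks roughly like this (the spinLock type is invented for illustration):

// a minimal spinlock sketch using the real sync/atomic API;
// illustrative only, not the slides' exact code.
package spin

import (
	"runtime"
	"sync/atomic"
)

type spinLock struct {
	flag int32 // 0: unlocked, 1: locked
}

func (l *spinLock) Lock() {
	// try to atomically CAS flag from 0 -> 1; keep retrying until it succeeds.
	for !atomic.CompareAndSwapInt32(&l.flag, 0, 1) {
		runtime.Gosched() // yield so other goroutines can run while we wait
	}
}

func (l *spinLock) Unlock() {
	// atomically set flag back to 0, releasing the lock.
	atomic.StoreInt32(&l.flag, 0)
}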
cost of an atomic operation
Run on a 12-core x86_64 SMP machine.

Atomic store to a C _Atomic int, 10M times in
a tight loop.
Measure average time taken per operation

(from within the program).
With 1 thread: ~13ns (vs. regular operation: ~2ns)
With 12 cpu-pinned threads: ~110ns
threads are effectively serialized
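A rough Go analogue of that measurement (the slide’s numbers came from a C program storing to a _Atomic int; this testing.B sketch is illustrative and will give different absolute numbers):

// sketch: measure the cost of an atomic store, uncontended vs. contended;
// run with: go test -bench . -cpu 1,12
package atomiccost

import (
	"sync/atomic"
	"testing"
)

var shared int64

func BenchmarkAtomicStore(b *testing.B) {
	// RunParallel spreads the iterations across GOMAXPROCS goroutines,
	// so -cpu 12 approximates the 12 cpu-pinned threads case.
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			atomic.StoreInt64(&shared, 1)
		}
	})
}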
sweet.
We have a scheme for mutual exclusion that provides atomicity and
memory ordering guarantees.
…but
spinning for long durations is wasteful; it takes away CPU time from
other threads.
enter the operating system!
Linux’s futex
Interface and mechanism for userspace code to ask the kernel to suspend/ resume threads.
[diagram: userspace makes the futex syscall; the kernel manages a queue of waiting threads]
flag can be 0: unlocked
            1: locked
            2: there’s a waiter
var flag int
var tasks Tasks
set flag to 2 (there’s a waiter)
futex syscall to tell the kernel
to suspend us until flag changes.
when we’re resumed, we’ll CAS again.
func reader() {
for {
if atomic.CompareAndSwap(&flag, 0, 1) {
...
}

// CAS failed, set flag to sleeping.
v := atomic.Xchg(&flag, 2)
// and go to sleep.
futex(&flag, FUTEX_WAIT, ...)

}
}
T1’s CAS fails

(because T2 has set the flag)
T1
in the kernel:
keyA (derived from the userspace address &flag)
hash(keyA) selects a hash bucket; each bucket holds a futex_q entry per waiting thread
(e.g. keyA → T1, keyother → Tother)
1. arrange for the thread to be resumed in the future:
add an entry for this thread in the kernel queue for the address we care about.
2. deschedule the calling thread to suspend it.
T2 is done

(accessing the shared data)
T2
func writer() {
for {
if atomic.CompareAndSwap(&flag, 0, 1) {
... 

// Set flag to unlocked.
v := atomic.Xchg(&flag, 0)
if v == 2 {
// If there was a waiter, issue a wake up.
futex(&flag, FUTEX_WAKE, ...)
}
return
}

v := atomic.Xchg(&flag, 2)
futex(&flag, FUTEX_WAIT, …)
}
}
if flag was 2, there’s at least one waiter
futex syscall to tell the kernel to wake
a waiter up.
hashes the key
walks the hash bucket’s futex queue
finds the first thread waiting on the address
schedules it to run again!
pretty convenient!
pthread mutexes use futexes.
That was a hella simplified futex.
…but we still have a nice, lightweight primitive to build synchronization constructs.
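For the curious, here is a hedged sketch of that 0 / 1 / 2 flag protocol as a futex-backed lock in Go on linux/amd64 (FUTEX_WAIT = 0 and FUTEX_WAKE = 1 are the Linux op values; the fmutex type and helper are invented for illustration, and real code should just use sync.Mutex):

// hedged sketch of a futex-backed mutex: flag is 0 (unlocked),
// 1 (locked) or 2 (locked, and there may be waiters).
package fmutex

import (
	"sync/atomic"
	"syscall"
	"unsafe"
)

const (
	futexWait = 0 // FUTEX_WAIT
	futexWake = 1 // FUTEX_WAKE
)

type fmutex struct{ flag int32 }

// futex wraps the raw syscall; return values are ignored in this sketch.
func futex(addr *int32, op, val int32) {
	syscall.Syscall6(syscall.SYS_FUTEX,
		uintptr(unsafe.Pointer(addr)), uintptr(op), uintptr(val), 0, 0, 0)
}

func (m *fmutex) Lock() {
	if atomic.CompareAndSwapInt32(&m.flag, 0, 1) {
		return // fast path: uncontended, no syscall
	}
	for {
		// mark that there may be a waiter, then ask the kernel to suspend us
		// until flag changes; when resumed, try again.
		if atomic.SwapInt32(&m.flag, 2) == 0 {
			return // the lock was released in the meantime; we now hold it
		}
		futex(&m.flag, futexWait, 2) // sleep only if flag is still 2
	}
}

func (m *fmutex) Unlock() {
	// if the old value was 2, there may be a sleeper: wake one waiter.
	if atomic.SwapInt32(&m.flag, 0) == 2 {
		futex(&m.flag, futexWake, 1)
	}
}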
cost of a futex
Run on a 12-core x86_64 SMP machine.

Lock & unlock a pthread mutex 10M times in loop

(lock, increment an integer, unlock).

Measure average time taken per lock/unlock pair

(from within the program).
uncontended case (1 thread): ~13ns
contended case (12 cpu-pinned threads): ~0.9us
cost of the user-space atomic CAS = ~13ns
cost of the atomic CAS +
syscall + thread context switch = ~0.9us
spinning vs. sleeping
Spinning makes sense for short durations; it keeps the thread on the CPU.
The trade-off is it uses CPU cycles not making progress.
So at some point, it makes sense to pay the cost of the context switch to go to sleep.
There are smart “hybrid” futexes:
CAS-spin a small, fixed number of times —> if that didn’t lock, make the futex syscall.
Examples: the Go runtime’s futex implementation; a variant of the pthread_mutex.
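A hedged sketch of that hybrid shape, extending the illustrative fmutex from the earlier futex sketch (the spin count of 30 is arbitrary, not the Go runtime’s or glibc’s actual tuning; add "runtime" to that sketch’s imports):

// hybrid acquire: CAS-spin a small, fixed number of times, and only
// fall back to the futex-sleeping slow path if that didn't lock.
func (m *fmutex) hybridLock() {
	for i := 0; i < 30; i++ {
		if atomic.CompareAndSwapInt32(&m.flag, 0, 1) {
			return // got the lock while spinning: no syscall, no context switch
		}
		runtime.Gosched() // give the current holder a chance to finish
	}
	m.Lock() // still contended: pay for the futex syscall and go to sleep
}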
…can we do better for user-space threads?
goroutines are user-space threads.
The Go runtime multiplexes them onto threads.
lighter-weight and cheaper than threads:
goroutine switches = ~tens of ns;
thread switches = ~a µs.
[diagram: goroutines g1, g2, g6 multiplexed onto threads by the Go scheduler; threads multiplexed onto CPU cores by the OS scheduler]
we can block the goroutine without blocking the underlying thread!
to avoid the thread context switch cost.
This is what the Go runtime’s semaphore does!

The semaphore is conceptually very similar to futexes in Linux*, but it is used to 

sleep/wake goroutines:
a goroutine that blocks on a mutex is descheduled, but not the underlying thread.
the goroutine wait queues are managed by the runtime, in user-space.
* There are, of course, differences in implementation though.
the goroutine wait queues
(in user-space, managed by the Go runtime)
[diagram: hash(&flag) selects a bucket; the top-level waitlist for a hash bucket is implemented as a treap; there’s a second-level wait queue for each unique address, e.g. &flag → G1, G3 and &other → G4]
the goroutine wait queues are managed
by the Go runtime, in user-space.
var flag int
var tasks Tasks
func reader() {
for {
// Attempt to CAS flag.
if atomic.CompareAndSwap(&flag, ...) {
...
}

// CAS failed; add G1 as a waiter for flag.
root.queue()
// and suspend G1.
gopark()
}
}
G1’s CAS fails

(because G2 has set the flag)
G1
the Go runtime deschedules the goroutine;
keeps the thread running!
G2’s done

(accessing the shared data)
G2
func writer() {
for {
if atomic.CompareAndSwap(&flag, 0, 1) {
... 

// Set flag to unlocked.
atomic.Xadd(&flag, ...)



// If there’s a waiter, reschedule it.
waiter := root.dequeue(&flag)
goready(waiter)
return
}

root.queue()
gopark()
}
}
find the first waiter goroutine and reschedule it
this is clever.
Avoids the hefty thread context switch cost in the contended case,
up to a point.
but…
func reader() {
for {
if atomic.CompareAndSwap(&flag, ...) {
...
}

// CAS failed; add G1 as a waiter for flag.
semaroot.queue()
// and suspend G1.
gopark()
}
}
once G1 is resumed, 

it will try to CAS again.
Resumed goroutines have to compete with any other goroutines trying to CAS.
They will likely lose:
there’s a delay between when the flag was set to 0 and this goroutine was rescheduled.
So, the semaphore implementation may end up:
• unnecessarily resuming a waiter goroutine
results in a goroutine context switch again.
• causing goroutine starvation
can result in long wait times, high tail latencies.
the sync.Mutex implementation adds a layer that fixes these.
go’s sync.Mutex
Is a hybrid lock that uses a semaphore to sleep / wake goroutines.
Additionally, it tracks extra state to:
prevent unnecessarily waking up a goroutine
“There’s a goroutine actively trying to CAS”: an unlock in this case does not wake a waiter.
prevent severe goroutine starvation
“a waiter has been waiting”:
If a waiter is resumed but loses the CAS again, it’s queued at the head of the wait queue.
If a waiter fails to lock for 1ms, switch the mutex to “starvation mode”.
other goroutines cannot CAS, they must queue
The unlock hands the mutex off to the first waiter.

i.e. the waiter does not have to compete.
how does it perform?
Run on a 12-core x86_64 SMP machine.

Lock & unlock a Go sync.Mutex 10M times in loop

(lock, increment an integer, unlock).

Measure average time taken per lock/unlock pair

(from within the program).
uncontended case (1 goroutine): ~13ns
contended case (12 goroutines): ~0.8us
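A hedged sketch of that measurement with Go’s benchmark harness (the slide’s numbers were measured from within the program; this testing.B version is only an approximation):

// sketch: time a sync.Mutex lock/unlock pair around an increment;
// run with: go test -bench . -cpu 1,12
package mutexcost

import (
	"sync"
	"testing"
)

var (
	mu sync.Mutex
	n  int
)

func BenchmarkLockUnlock(b *testing.B) {
	// -cpu 1 approximates the uncontended case, -cpu 12 the contended one.
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			mu.Lock()
			n++
			mu.Unlock()
		}
	})
}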
how does it perform?
Contended case performance of C vs. Go:
Go initially performs better than C,
but they ~converge as concurrency gets high enough.
sync.Mutex uses the Go runtime semaphore.
the Go runtime semaphore’s hash table for waiting goroutines:
each hash bucket needs a lock. …and it’s a futex!
the Linux kernel’s futex hash table for waiting threads:
each hash bucket needs a lock. …and it’s a spinlock!
so: sync.Mutex uses a semaphore, the semaphore’s buckets use futexes, and the futex buckets use spin-locks.
It’s locks all the way down!
let’s analyze its performance!
performance models for contention.
uncontended case

Cost of the atomic CAS.
contended case
In the worst-case, cost of failed atomic operations + spinning + goroutine context switch + 

thread context switch.
…but really, it depends on the degree of contention.
how many threads do we need to support a target throughput? 

while keeping response time the same.
how does response time change with the number of threads?
assuming a constant workload.
“How does application performance change with concurrency?”
Amdahl’s Law
Speed-up depends on the fraction of the workload that can be parallelized (p).
speed-up with N threads = 1 / ((1 − p) + p/N)
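In code form, a tiny illustrative helper for that formula (p is the parallelizable fraction, n the number of threads):

// amdahlSpeedup returns the ideal speed-up on n threads of a workload
// whose parallelizable fraction is p: 1 / ((1 - p) + p/n).
func amdahlSpeedup(p float64, n int) float64 {
	return 1 / ((1 - p) + p/float64(n))
}

// e.g. amdahlSpeedup(0.75, 12) ≈ 3.2: with a 25% serial fraction,
// 12 threads get nowhere near a 12x speed-up.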
a simple experiment
Measure time taken to complete a fixed workload.

serial fraction holds a lock (sync.Mutex).
scale parallel fraction (p) from 0.25 to 0.75
measure time taken for number of goroutines (N) = 1 —> 12.
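A hedged sketch of that experiment (the work function, unit counts and loop bounds are invented; the overall shape, a fixed total workload whose serial fraction runs under a shared sync.Mutex, matches the description above):

// sketch: fixed total workload; a fraction p of each unit of work runs
// in parallel, the remaining (1-p) runs while holding a shared mutex.
package main

import (
	"fmt"
	"sync"
	"time"
)

// spin burns CPU as a stand-in for real work.
func spin(units int) {
	for i := 0; i < units*1000; i++ {
		_ = i * i
	}
}

func run(p float64, workers, workUnits int) time.Duration {
	var mu sync.Mutex
	perWorker := workUnits / workers // total work stays (roughly) constant
	start := time.Now()
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < perWorker; i++ {
				spin(int(p * 100)) // parallel fraction: no lock held
				mu.Lock()
				spin(int((1 - p) * 100)) // serial fraction: under the shared lock
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	for _, p := range []float64{0.25, 0.75} {
		for n := 1; n <= 12; n++ {
			fmt.Printf("p=%.2f N=%2d took %v\n", p, n, run(p, n, 1200))
		}
	}
}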
[graph: measured speed-up vs. number of goroutines for p = 0.25 and p = 0.75, consistent with Amdahl’s Law: speed-up depends on the fraction of the workload that can be parallelized (p)]
Universal Scalability Law (USL)
Scalability depends on contention and crosstalk.
• contention penalty (the αN term)
due to serialization for shared resources.
examples: lock contention, database contention.
• crosstalk penalty (the βN² term)
due to coordination for coherence.
examples: servers coordinating to synchronize mutable state.
Universal Scalability Law (USL)
throughput of N threads = N / (αN + βN² + C)
with β = 0, this reduces to N / (αN + C): contention only;
with α = β = 0, it reduces to N / C: linear scaling.
[graph: throughput vs. concurrency for linear scaling, contention only, and contention + crosstalk]
p = 0.75p = 0.25
USL curves
plotted using the R usl package
p = parallel fraction of workload
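A tiny illustrative helper for the USL curve, in the slide’s α, β, C parameterization (the coefficient values would come from fitting measured data, e.g. with the R usl package mentioned above):

// uslThroughput returns the USL-predicted throughput at concurrency n:
// N / (αN + βN² + C). alpha models contention, beta crosstalk, c the
// per-operation baseline cost.
func uslThroughput(n int, alpha, beta, c float64) float64 {
	x := float64(n)
	return x / (alpha*x + beta*x*x + c)
}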
let’s use it, smartly!
a few closing strategies.
but first, profile!
Go mutex
• Go mutex contention profiler

https://golang.org/doc/diagnostics.html
Linux
• perf-lock:

perf examples by Brendan Gregg

Brendan Gregg article on off-cpu analysis
• eBPF:

example bcc tool to measure user lock contention
• Dtrace, systemtap
• mutrace, Valgrind-drd

pprof mutex contention profile
strategy I: don’t use a lock
• remove the need for synchronization from hot-paths:

typically involves rearchitecting.
• reduce the number of lock operations:

doing more thread local work, buffering, batching, copy-on-write.
• use atomic operations.
• use lock-free data structures

see: http://www.1024cores.net/
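For the “use atomic operations” point in the list above, a hedged sketch of replacing a mutex-guarded hot-path counter with sync/atomic (the hits counter is invented for illustration):

// instead of mu.Lock(); hits++; mu.Unlock() on every request:
package metrics

import "sync/atomic"

var hits int64 // illustrative hot-path counter

func handleRequest() {
	atomic.AddInt64(&hits, 1) // one LOCK-prefixed add; no mutex, no sleeping
}

func snapshot() int64 {
	return atomic.LoadInt64(&hits) // readers pair with atomic loads
}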
strategy II: granular locks
• shard data:

but ensure no false sharing, by padding to cache line size.

examples: 

go runtime semaphore’s hash table buckets;

Linux scheduler’s per-CPU runqueues;

Go scheduler’s per-CPU runqueues;
• use read-write locks
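A hedged sketch of the sharding idea from the list above, padding each shard toward a cache line to avoid false sharing (the shard count and 64-byte line size are illustrative assumptions):

// shard a counter so goroutines mostly touch different locks and cache lines.
package sharded

import "sync"

const numShards = 16

type shard struct {
	mu sync.Mutex
	n  int64
	_  [48]byte // pad toward a 64-byte cache line to avoid false sharing
}

type Counter struct {
	shards [numShards]shard
}

// Inc locks a single shard, so goroutines mapped to different shards don't contend.
func (c *Counter) Inc(id int) {
	s := &c.shards[id%numShards]
	s.mu.Lock()
	s.n++
	s.mu.Unlock()
}

// Sum visits every shard to produce a total.
func (c *Counter) Sum() int64 {
	var total int64
	for i := range c.shards {
		c.shards[i].mu.Lock()
		total += c.shards[i].n
		c.shards[i].mu.Unlock()
	}
	return total
}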
scheduler benchmark
(CreateGoroutineParallel)
modified scheduler: global lock; runqueue
go scheduler: per-CPU core, lock-free runqueues
strategy III: do less serial work
[graph: latency over time, before and after the smaller-critical-section change; lock contention had caused ~10x latency]
• move computation out of critical section:

typically involves rearchitecting.
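A hedged before/after sketch of shrinking the critical section (the cache, key and render function are invented for illustration):

// move the expensive work outside the lock; hold it only to publish the result.
package cacheexample

import (
	"strings"
	"sync"
)

// render stands in for an expensive computation that needs no shared state.
func render(key string) string { return strings.ToUpper(key) }

// before: the expensive work runs inside the critical section, serializing callers.
func updateSlow(mu *sync.Mutex, cache map[string]string, key string) {
	mu.Lock()
	cache[key] = render(key)
	mu.Unlock()
}

// after: compute first, then take the lock just long enough to store the result.
func updateFast(mu *sync.Mutex, cache map[string]string, key string) {
	v := render(key) // no lock held
	mu.Lock()
	cache[key] = v // tiny critical section
	mu.Unlock()
}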
bonus strategy:
• contention-aware schedulers
example: contention-aware scheduling in MySQL 8.0 InnoDB
Special thanks to Eben Freeman, Justin Delegard, Austin Duffield for reading drafts of this.
@kavya719
speakerdeck.com/kavya719/lets-talk-locks
References

Jeff Preshing’s excellent blog series

Memory Barriers: A Hardware View for Software Hackers

LWN.net on futexes

The Go source code
The Universal Scalability Law Manifesto, Neil Gunther
Watch the video with slide
synchronization on InfoQ.com!
https://www.infoq.com/presentations/
go-locks/

More Related Content

What's hot

Boosting I/O Performance with KVM io_uring
Boosting I/O Performance with KVM io_uringBoosting I/O Performance with KVM io_uring
Boosting I/O Performance with KVM io_uring
ShapeBlue
 
오픈 소스를 활용한 캐쥬얼 게임 서버 프레임워크 개발
오픈 소스를 활용한 캐쥬얼 게임 서버 프레임워크 개발오픈 소스를 활용한 캐쥬얼 게임 서버 프레임워크 개발
오픈 소스를 활용한 캐쥬얼 게임 서버 프레임워크 개발
주항 박
 

What's hot (20)

Shell scripting
Shell scriptingShell scripting
Shell scripting
 
Introduction to Shell script
Introduction to Shell scriptIntroduction to Shell script
Introduction to Shell script
 
Using Wildcards with rsyslog's File Monitor imfile
Using Wildcards with rsyslog's File Monitor imfileUsing Wildcards with rsyslog's File Monitor imfile
Using Wildcards with rsyslog's File Monitor imfile
 
Docker, Linux Containers (LXC), and security
Docker, Linux Containers (LXC), and securityDocker, Linux Containers (LXC), and security
Docker, Linux Containers (LXC), and security
 
An Introduction to Locks in Go
An Introduction to Locks in GoAn Introduction to Locks in Go
An Introduction to Locks in Go
 
FD.io VPP事始め
FD.io VPP事始めFD.io VPP事始め
FD.io VPP事始め
 
Linux monitoring and Troubleshooting for DBA's
Linux monitoring and Troubleshooting for DBA'sLinux monitoring and Troubleshooting for DBA's
Linux monitoring and Troubleshooting for DBA's
 
Containerd internals: building a core container runtime
Containerd internals: building a core container runtimeContainerd internals: building a core container runtime
Containerd internals: building a core container runtime
 
2 phase locking protocol DBMS
2 phase locking protocol DBMS2 phase locking protocol DBMS
2 phase locking protocol DBMS
 
Accelerating Envoy and Istio with Cilium and the Linux Kernel
Accelerating Envoy and Istio with Cilium and the Linux KernelAccelerating Envoy and Istio with Cilium and the Linux Kernel
Accelerating Envoy and Istio with Cilium and the Linux Kernel
 
Serialization in Go
Serialization in GoSerialization in Go
Serialization in Go
 
Accelerating with Ansible
Accelerating with AnsibleAccelerating with Ansible
Accelerating with Ansible
 
Boosting I/O Performance with KVM io_uring
Boosting I/O Performance with KVM io_uringBoosting I/O Performance with KVM io_uring
Boosting I/O Performance with KVM io_uring
 
Linux IO
Linux IOLinux IO
Linux IO
 
LinuxCon 2015 Linux Kernel Networking Walkthrough
LinuxCon 2015 Linux Kernel Networking WalkthroughLinuxCon 2015 Linux Kernel Networking Walkthrough
LinuxCon 2015 Linux Kernel Networking Walkthrough
 
Docker Networking Deep Dive
Docker Networking Deep DiveDocker Networking Deep Dive
Docker Networking Deep Dive
 
Linux 4.x Tracing: Performance Analysis with bcc/BPF
Linux 4.x Tracing: Performance Analysis with bcc/BPFLinux 4.x Tracing: Performance Analysis with bcc/BPF
Linux 4.x Tracing: Performance Analysis with bcc/BPF
 
Learn O11y from Grafana ecosystem.
Learn O11y from Grafana ecosystem.Learn O11y from Grafana ecosystem.
Learn O11y from Grafana ecosystem.
 
오픈 소스를 활용한 캐쥬얼 게임 서버 프레임워크 개발
오픈 소스를 활용한 캐쥬얼 게임 서버 프레임워크 개발오픈 소스를 활용한 캐쥬얼 게임 서버 프레임워크 개발
오픈 소스를 활용한 캐쥬얼 게임 서버 프레임워크 개발
 
K8s beginner 2_advanced_ep02_201904221130_post
K8s beginner 2_advanced_ep02_201904221130_postK8s beginner 2_advanced_ep02_201904221130_post
K8s beginner 2_advanced_ep02_201904221130_post
 

Similar to Let's Talk Locks!

Medical Image Processing Strategies for multi-core CPUs
Medical Image Processing Strategies for multi-core CPUsMedical Image Processing Strategies for multi-core CPUs
Medical Image Processing Strategies for multi-core CPUs
Daniel Blezek
 
Lightweight Virtualization with Linux Containers and Docker I YaC 2013
Lightweight Virtualization with Linux Containers and Docker I YaC 2013Lightweight Virtualization with Linux Containers and Docker I YaC 2013
Lightweight Virtualization with Linux Containers and Docker I YaC 2013
Docker, Inc.
 
Linux multiplexing
Linux multiplexingLinux multiplexing
Linux multiplexing
Mark Veltzer
 
Programming with Threads in Java
Programming with Threads in JavaProgramming with Threads in Java
Programming with Threads in Java
koji lin
 

Similar to Let's Talk Locks! (20)

Medical Image Processing Strategies for multi-core CPUs
Medical Image Processing Strategies for multi-core CPUsMedical Image Processing Strategies for multi-core CPUs
Medical Image Processing Strategies for multi-core CPUs
 
.NET Multithreading/Multitasking
.NET Multithreading/Multitasking.NET Multithreading/Multitasking
.NET Multithreading/Multitasking
 
concurrency
concurrencyconcurrency
concurrency
 
Cgroups, namespaces, and beyond: what are containers made from? (DockerCon Eu...
Cgroups, namespaces, and beyond: what are containers made from? (DockerCon Eu...Cgroups, namespaces, and beyond: what are containers made from? (DockerCon Eu...
Cgroups, namespaces, and beyond: what are containers made from? (DockerCon Eu...
 
Cgroups, namespaces and beyond: what are containers made from?
Cgroups, namespaces and beyond: what are containers made from?Cgroups, namespaces and beyond: what are containers made from?
Cgroups, namespaces and beyond: what are containers made from?
 
Algoritmi e Calcolo Parallelo 2012/2013 - OpenMP
Algoritmi e Calcolo Parallelo 2012/2013 - OpenMPAlgoritmi e Calcolo Parallelo 2012/2013 - OpenMP
Algoritmi e Calcolo Parallelo 2012/2013 - OpenMP
 
Introduction to containers
Introduction to containersIntroduction to containers
Introduction to containers
 
Lightweight Virtualization with Linux Containers and Docker | YaC 2013
Lightweight Virtualization with Linux Containers and Docker | YaC 2013Lightweight Virtualization with Linux Containers and Docker | YaC 2013
Lightweight Virtualization with Linux Containers and Docker | YaC 2013
 
Lightweight Virtualization with Linux Containers and Docker I YaC 2013
Lightweight Virtualization with Linux Containers and Docker I YaC 2013Lightweight Virtualization with Linux Containers and Docker I YaC 2013
Lightweight Virtualization with Linux Containers and Docker I YaC 2013
 
MyShell - English
MyShell - EnglishMyShell - English
MyShell - English
 
[CCC-28c3] Post Memory Corruption Memory Analysis
[CCC-28c3] Post Memory Corruption Memory Analysis[CCC-28c3] Post Memory Corruption Memory Analysis
[CCC-28c3] Post Memory Corruption Memory Analysis
 
Asynchronous Python A Gentle Introduction
Asynchronous Python A Gentle IntroductionAsynchronous Python A Gentle Introduction
Asynchronous Python A Gentle Introduction
 
Here comes the Loom - Ya!vaConf.pdf
Here comes the Loom - Ya!vaConf.pdfHere comes the Loom - Ya!vaConf.pdf
Here comes the Loom - Ya!vaConf.pdf
 
Docker and-containers-for-development-and-deployment-scale12x
Docker and-containers-for-development-and-deployment-scale12xDocker and-containers-for-development-and-deployment-scale12x
Docker and-containers-for-development-and-deployment-scale12x
 
Linux multiplexing
Linux multiplexingLinux multiplexing
Linux multiplexing
 
Lightweight Virtualization: LXC containers & AUFS
Lightweight Virtualization: LXC containers & AUFSLightweight Virtualization: LXC containers & AUFS
Lightweight Virtualization: LXC containers & AUFS
 
Node.js Course 1 of 2 - Introduction and first steps
Node.js Course 1 of 2 - Introduction and first stepsNode.js Course 1 of 2 - Introduction and first steps
Node.js Course 1 of 2 - Introduction and first steps
 
"Lightweight Virtualization with Linux Containers and Docker". Jerome Petazzo...
"Lightweight Virtualization with Linux Containers and Docker". Jerome Petazzo..."Lightweight Virtualization with Linux Containers and Docker". Jerome Petazzo...
"Lightweight Virtualization with Linux Containers and Docker". Jerome Petazzo...
 
The art of concurrent programming
The art of concurrent programmingThe art of concurrent programming
The art of concurrent programming
 
Programming with Threads in Java
Programming with Threads in JavaProgramming with Threads in Java
Programming with Threads in Java
 

More from C4Media

More from C4Media (20)

Streaming a Million Likes/Second: Real-Time Interactions on Live Video
Streaming a Million Likes/Second: Real-Time Interactions on Live VideoStreaming a Million Likes/Second: Real-Time Interactions on Live Video
Streaming a Million Likes/Second: Real-Time Interactions on Live Video
 
Next Generation Client APIs in Envoy Mobile
Next Generation Client APIs in Envoy MobileNext Generation Client APIs in Envoy Mobile
Next Generation Client APIs in Envoy Mobile
 
Software Teams and Teamwork Trends Report Q1 2020
Software Teams and Teamwork Trends Report Q1 2020Software Teams and Teamwork Trends Report Q1 2020
Software Teams and Teamwork Trends Report Q1 2020
 
Understand the Trade-offs Using Compilers for Java Applications
Understand the Trade-offs Using Compilers for Java ApplicationsUnderstand the Trade-offs Using Compilers for Java Applications
Understand the Trade-offs Using Compilers for Java Applications
 
Kafka Needs No Keeper
Kafka Needs No KeeperKafka Needs No Keeper
Kafka Needs No Keeper
 
High Performing Teams Act Like Owners
High Performing Teams Act Like OwnersHigh Performing Teams Act Like Owners
High Performing Teams Act Like Owners
 
Does Java Need Inline Types? What Project Valhalla Can Bring to Java
Does Java Need Inline Types? What Project Valhalla Can Bring to JavaDoes Java Need Inline Types? What Project Valhalla Can Bring to Java
Does Java Need Inline Types? What Project Valhalla Can Bring to Java
 
Service Meshes- The Ultimate Guide
Service Meshes- The Ultimate GuideService Meshes- The Ultimate Guide
Service Meshes- The Ultimate Guide
 
Shifting Left with Cloud Native CI/CD
Shifting Left with Cloud Native CI/CDShifting Left with Cloud Native CI/CD
Shifting Left with Cloud Native CI/CD
 
CI/CD for Machine Learning
CI/CD for Machine LearningCI/CD for Machine Learning
CI/CD for Machine Learning
 
Fault Tolerance at Speed
Fault Tolerance at SpeedFault Tolerance at Speed
Fault Tolerance at Speed
 
Architectures That Scale Deep - Regaining Control in Deep Systems
Architectures That Scale Deep - Regaining Control in Deep SystemsArchitectures That Scale Deep - Regaining Control in Deep Systems
Architectures That Scale Deep - Regaining Control in Deep Systems
 
ML in the Browser: Interactive Experiences with Tensorflow.js
ML in the Browser: Interactive Experiences with Tensorflow.jsML in the Browser: Interactive Experiences with Tensorflow.js
ML in the Browser: Interactive Experiences with Tensorflow.js
 
Build Your Own WebAssembly Compiler
Build Your Own WebAssembly CompilerBuild Your Own WebAssembly Compiler
Build Your Own WebAssembly Compiler
 
User & Device Identity for Microservices @ Netflix Scale
User & Device Identity for Microservices @ Netflix ScaleUser & Device Identity for Microservices @ Netflix Scale
User & Device Identity for Microservices @ Netflix Scale
 
Scaling Patterns for Netflix's Edge
Scaling Patterns for Netflix's EdgeScaling Patterns for Netflix's Edge
Scaling Patterns for Netflix's Edge
 
Make Your Electron App Feel at Home Everywhere
Make Your Electron App Feel at Home EverywhereMake Your Electron App Feel at Home Everywhere
Make Your Electron App Feel at Home Everywhere
 
The Talk You've Been Await-ing For
The Talk You've Been Await-ing ForThe Talk You've Been Await-ing For
The Talk You've Been Await-ing For
 
Future of Data Engineering
Future of Data EngineeringFuture of Data Engineering
Future of Data Engineering
 
Automated Testing for Terraform, Docker, Packer, Kubernetes, and More
Automated Testing for Terraform, Docker, Packer, Kubernetes, and MoreAutomated Testing for Terraform, Docker, Packer, Kubernetes, and More
Automated Testing for Terraform, Docker, Packer, Kubernetes, and More
 

Recently uploaded

Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider  Progress from Awareness to Implementation.pptxTales from a Passkey Provider  Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
FIDO Alliance
 
Hyatt driving innovation and exceptional customer experiences with FIDO passw...
Hyatt driving innovation and exceptional customer experiences with FIDO passw...Hyatt driving innovation and exceptional customer experiences with FIDO passw...
Hyatt driving innovation and exceptional customer experiences with FIDO passw...
FIDO Alliance
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 

Recently uploaded (20)

Continuing Bonds Through AI: A Hermeneutic Reflection on Thanabots
Continuing Bonds Through AI: A Hermeneutic Reflection on ThanabotsContinuing Bonds Through AI: A Hermeneutic Reflection on Thanabots
Continuing Bonds Through AI: A Hermeneutic Reflection on Thanabots
 
Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)
Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)
Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)
 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontology
 
Intro to Passkeys and the State of Passwordless.pptx
Intro to Passkeys and the State of Passwordless.pptxIntro to Passkeys and the State of Passwordless.pptx
Intro to Passkeys and the State of Passwordless.pptx
 
Decarbonising Commercial Real Estate: The Role of Operational Performance
Decarbonising Commercial Real Estate: The Role of Operational PerformanceDecarbonising Commercial Real Estate: The Role of Operational Performance
Decarbonising Commercial Real Estate: The Role of Operational Performance
 
Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider  Progress from Awareness to Implementation.pptxTales from a Passkey Provider  Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
 
WSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering Developers
 
Quantum Leap in Next-Generation Computing
Quantum Leap in Next-Generation ComputingQuantum Leap in Next-Generation Computing
Quantum Leap in Next-Generation Computing
 
TEST BANK For Principles of Anatomy and Physiology, 16th Edition by Gerard J....
TEST BANK For Principles of Anatomy and Physiology, 16th Edition by Gerard J....TEST BANK For Principles of Anatomy and Physiology, 16th Edition by Gerard J....
TEST BANK For Principles of Anatomy and Physiology, 16th Edition by Gerard J....
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
 
Design Guidelines for Passkeys 2024.pptx
Design Guidelines for Passkeys 2024.pptxDesign Guidelines for Passkeys 2024.pptx
Design Guidelines for Passkeys 2024.pptx
 
The Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and InsightThe Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and Insight
 
Stronger Together: Developing an Organizational Strategy for Accessible Desig...
Stronger Together: Developing an Organizational Strategy for Accessible Desig...Stronger Together: Developing an Organizational Strategy for Accessible Desig...
Stronger Together: Developing an Organizational Strategy for Accessible Desig...
 
WebRTC and SIP not just audio and video @ OpenSIPS 2024
WebRTC and SIP not just audio and video @ OpenSIPS 2024WebRTC and SIP not just audio and video @ OpenSIPS 2024
WebRTC and SIP not just audio and video @ OpenSIPS 2024
 
Event-Driven Architecture Masterclass: Challenges in Stream Processing
Event-Driven Architecture Masterclass: Challenges in Stream ProcessingEvent-Driven Architecture Masterclass: Challenges in Stream Processing
Event-Driven Architecture Masterclass: Challenges in Stream Processing
 
Hyatt driving innovation and exceptional customer experiences with FIDO passw...
Hyatt driving innovation and exceptional customer experiences with FIDO passw...Hyatt driving innovation and exceptional customer experiences with FIDO passw...
Hyatt driving innovation and exceptional customer experiences with FIDO passw...
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
ADP Passwordless Journey Case Study.pptx
ADP Passwordless Journey Case Study.pptxADP Passwordless Journey Case Study.pptx
ADP Passwordless Journey Case Study.pptx
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
ChatGPT and Beyond - Elevating DevOps Productivity
ChatGPT and Beyond - Elevating DevOps ProductivityChatGPT and Beyond - Elevating DevOps Productivity
ChatGPT and Beyond - Elevating DevOps Productivity
 

Let's Talk Locks!

  • 2. InfoQ.com: News & Community Site • 750,000 unique visitors/month • Published in 4 languages (English, Chinese, Japanese and Brazilian Portuguese) • Post content from our QCon conferences • News 15-20 / week • Articles 3-4 / week • Presentations (videos) 12-15 / week • Interviews 2-3 / week • Books 1 / month Watch the video with slide synchronization on InfoQ.com! https://www.infoq.com/presentations/ go-locks/
  • 3. Presented at QCon New York www.qconnewyork.com Purpose of QCon - to empower software development by facilitating the spread of knowledge and innovation Strategy - practitioner-driven conference designed for YOU: influencers of change and innovation in your teams - speakers and topics driving the evolution and innovation - connecting and catalyzing the influencers and innovators Highlights - attended by more than 12,000 delegates since 2007 - held in 9 cities worldwide
  • 7. “locks are slow” lock contention causes ~10x latency latency(ms) time
  • 8. “locks are slow” …but they’re used everywhere. from schedulers to databases and web servers. lock contention causes ~10x latency latency(ms) time
  • 9. “locks are slow” …but they’re used everywhere. from schedulers to databases and web servers. lock contention causes ~10x latency latency(ms) time ?
  • 10. let’s analyze its performance! performance models for contention let’s build a lock! a tour through lock internals let’s use it, smartly! a few closing strategies
  • 11. our case-study Lock implementations are hardware, ISA, OS and language specific:
 
 We assume an x86_64 SMP machine running a modern Linux.
 We’ll look at the lock implementation in Go 1.12. CPU 0 CPU 1 cache cache interconnect memory simplified SMP system diagram
  • 12. use as you would threads 
 > go handle_request(r)
 but user-space threads:
 managed entirely by the Go runtime, not the operating system. The unit of concurrent execution: goroutines. a brief go primer
  • 13. use as you would threads 
 > go handle_request(r)
 but user-space threads:
 managed entirely by the Go runtime, not the operating system. The unit of concurrent execution: goroutines. a brief go primer Data shared between goroutines must be synchronized. One way is to use the blocking, non-recursive lock construct: > var mu sync.Mutex
 mu.Lock()
 … mu.Unlock()
  • 14. let’s build a lock! a tour through lock internals.
  • 15. want: “mutual exclusion” only one thread has access to shared data at any given time
  • 16. T1 running on CPU 1 T2 running on CPU 2 func reader() {
 // Read a task
 t := tasks.get()
 
 // Do something with it.
 ... } func writer() {
 // Write to tasks
 tasks.put(t) } // track whether tasks is // available (0) or not (1) // shared ring buffer var tasks Tasks want: “mutual exclusion” only one thread has access to shared data at any given time
  • 17. func reader() {
 // Read a task
 t := tasks.get()
 
 // Do something with it.
 ... } func writer() {
 // Write to tasks
 tasks.put(t) } // track whether tasks is // available (0) or not (1) // shared ring buffer var tasks Tasks want: “mutual exclusion” …idea! use a flag? T1 running on CPU 1 T2 running on CPU 2
  • 18. // track whether tasks can be // accessed (0) or not (1) var flag int var tasks Tasks
  • 19. // track whether tasks can be // accessed (0) or not (1) var flag int var tasks Tasks func reader() {
 for { /* If flag is 0, can access tasks. */
 if flag == 0 {
 /* Set flag */ flag++ ...
 /* Unset flag */ flag-- return } /* Else, keep looping. */ 
 } } T1 running on CPU 1
  • 20. // track whether tasks can be // accessed (0) or not (1) var flag int var tasks Tasks func reader() {
 for { /* If flag is 0, can access tasks. */
 if flag == 0 {
 /* Set flag */ flag++ ...
 /* Unset flag */ flag-- return } /* Else, keep looping. */ 
 } } func writer() {
 for { /* If flag is 0, can access tasks. */
 if flag == 0 {
 /* Set flag */ flag++ ...
 /* Unset flag */ flag-- return } /* Else, keep looping. */ 
 } } T1 running on CPU 1 T2 running on CPU 2
  • 21. // track whether tasks can be // accessed (0) or not (1) var flag int var tasks Tasks func reader() {
 for { /* If flag is 0, can access tasks. */
 if flag == 0 {
 /* Set flag */ flag++ ...
 /* Unset flag */ flag-- return } /* Else, keep looping. */ 
 } } func writer() {
 for { /* If flag is 0, can access tasks. */
 if flag == 0 {
 /* Set flag */ flag++ ...
 /* Unset flag */ flag-- return } /* Else, keep looping. */ 
 } } T1 running on CPU 1 T2 running on CPU 2
  • 23. flag++ CPU memory 1. Read (0) 2. Modify 3. Write (1) T1 running on CPU 1
  • 25. R R W flag++ if flag == 0 timeline of memory operations T1 running on CPU 1 T2 running on CPU 2 T2 may observe T1’s RMW half-complete
  • 26. atomicity A memory operation is non-atomic if it can be observed half-complete by another thread. An operation may be non-atomic because it:
 • uses multiple CPU instructions:
 operations on a large data structure; 
 compiler decisions.
 • use a single non-atomic CPU instruction:
 RMW instructions; unaligned loads and stores. > o := Order { id: 10, name: “yogi bear”, order: “pie”, count: 3, }
  • 27. atomicity A memory operation is non-atomic if it can be observed half-complete by another thread. An operation may be non-atomic because it:
 • uses multiple CPU instructions:
 operations on a large data structure; 
 compiler decisions.
 • uses a single non-atomic CPU instruction:
 RMW instructions; unaligned loads and stores. > flag++
  • 28. atomicity A memory operation is non-atomic if it can be observed half-complete by another thread. An operation may be non-atomic because it:
 • uses multiple CPU instructions:
 operations on a large data structure; 
 compiler decisions.
 • uses a single non-atomic CPU instruction:
 RMW instructions; unaligned loads and stores. > flag++ An atomic operation is an “indivisible” memory access. In x86_64, loads, stores that are 
 naturally aligned up to 64b.* guarantees the data item fits within a cache line;
 cache coherency guarantees a consistent view for a single cache line. * these are not the only guaranteed atomic operations.
  • 30. func reader() {
 for { /* If flag is 0, can access tasks. */
 if flag == 0 {
 /* Set flag */ flag = 1 t := tasks.get()
 ...
 /* Unset flag */ flag = 0 return } /* Else, keep looping. */ 
 } } T1 running on CPU 1
  • 31. the compiler may reorder operations. // Sets flag to 1 & reads data. func reader() { flag = 1 t := tasks.get() ... flag = 0
  • 32. the processor may reorder operations. StoreLoad reordering load t before store flag = 1 // Sets flag to 1 & reads data. func reader() { flag = 1 t := tasks.get() ... flag = 0
  • 33. memory access reordering The compiler, processor can reorder memory operations to optimize execution.
  • 34. memory access reordering The compiler, processor can reorder memory operations to optimize execution. • The only cardinal rule is sequential consistency for single threaded programs.
 • Other guarantees about compiler reordering are captured by a 
 language’s memory model:
 C++, Go guarantee data-race free programs will be sequentially consistent. • For processor reordering, by the hardware memory model:
 x86_64 provides Total Store Ordering (TSO).
  • 35. memory access reordering The compiler, processor can reorder memory operations to optimize execution. • The only cardinal rule is sequential consistency for single threaded programs.
 • Other guarantees about compiler reordering are captured by a 
 language’s memory model:
 C++, Go guarantee data-race free programs will be sequentially consistent. • For processor reordering, by the hardware memory model:
 x86_64 provides Total Store Ordering (TSO).
  • 36. memory access reordering The compiler, processor can reorder memory operations to optimize execution. • The only cardinal rule is sequential consistency for single threaded programs.
 • Other guarantees about compiler reordering are captured by a 
 language’s memory model:
 C++, Go guarantee data-race free programs will be sequentially consistent. • For processor reordering, by the hardware memory model:
 x86_64 provides Total Store Ordering (TSO). a relaxed consistency model. most reorderings are invalid but StoreLoad is game;
 allows processor to hide the latency of writes.
  • 37. nope; not atomic and no memory order guarantees. …idea! use a flag?
  • 38. nope; not atomic and no memory order guarantees. …idea! use a flag? need a construct that provides atomicity and prevents memory reordering.
  • 39. nope; not atomic and no memory order guarantees. …idea! use a flag? need a construct that provides atomicity and prevents memory reordering. …the hardware provides!
  • 40. For guaranteed atomicity and to prevent memory reordering. special hardware instructions x86 example: XCHG (exchange) these instructions are called memory barriers. they prevent reordering by the compiler too. x86 example: MFENCE, LFENCE, SFENCE.
  • 41. special hardware instructions The x86 LOCK instruction prefix provides both. Used to prefix memory access instructions: LOCK ADD For guaranteed atomicity and to prevent memory reordering. } atomic operations in languages like Go: atomic.Add atomic.CompareAndSwap
  • 42. special hardware instructions The x86 LOCK instruction prefix provides both. Used to prefix memory access instructions: LOCK ADD For guaranteed atomicity and to prevent memory reordering. } atomic operations in languages like Go: atomic.Add atomic.CompareAndSwapLOCK CMPXCHG Atomic compare-and-swap (CAS) conditionally updates a variable:
 checks if it has the expected value and if so, changes it to the desired value.
  • 43. the CAS succeeded; we set flag to 1. flag was 1 so our CAS failed; try again. var flag int var tasks Tasks func reader() { for { // Try to atomically CAS flag from 0 -> 1 if atomic.CompareAndSwap(&flag, 0, 1) { ... // Atomically set flag back to 0. atomic.Store(&flag, 0) return }
 // CAS failed, try again :) } } baby’s first lock
  • 44. var flag int var tasks Tasks func reader() { for { // Try to atomically CAS flag from 0 -> 1 if atomic.CompareAndSwap(&flag, 0, 1) { ... // Atomically set flag back to 0. atomic.Store(&flag, 0) return }
 // CAS failed, try again :) } } baby’s first lock: spinlocks This is a simplified spinlock. Spinlocks are used extensively in the Linux kernel.}
  • 45. The atomic CAS is the quintessence of any lock implementation.
  • 46. cost of an atomic operation Run on a 12-core x86_64 SMP machine.
 Atomic store to a C _Atomic int, 10M times in a tight loop. Measure average time taken per operation
 (from within the program). With 1 thread: ~13ns (vs. regular operation: ~2ns) With 12 cpu-pinned threads: ~110ns threads are effectively serialized var flag int var tasks Tasks func reader() { for { // Try to atomically CAS flag from 0 -> 1 if atomic.CompareAndSwap(&flag, 0, 1) { ... // Atomically set flag back to 0. atomic.Store(&flag, 0) return }
 // CAS failed, try again :) } } spinlocks
  • 47. sweet. We have a scheme for mutual exclusion that provides atomicity and memory ordering guarantees.
  • 48. sweet. …but spinning for long durations is wasteful; it takes away CPU time from other threads. We have a scheme for mutual exclusion that provides atomicity and memory ordering guarantees.
  • 49. sweet. …but spinning for long durations is wasteful; it takes away CPU time from other threads. We have a scheme for mutual exclusion that provides atomicity and memory ordering guarantees. enter the operating system!
  • 50. Linux’s futex Interface and mechanism for userspace code to ask the kernel to suspend/ resume threads. futex syscall kernel-managed queue
  • 51. flag can be 0: unlocked
 1: locked 2: there’s a waiter var flag int var tasks Tasks
  • 52. set flag to 2 (there’s a waiter) flag can be 0: unlocked
 1: locked 2: there’s a waiter futex syscall to tell the kernel to suspend us until flag changes. when we’re resumed, we’ll CAS again. var flag int var tasks Tasks func reader() { for { if atomic.CompareAndSwap(&flag, 0, 1) { ... }
 // CAS failed, set flag to sleeping. v := atomic.Xchg(&flag, 2) // and go to sleep. futex(&flag, FUTEX_WAIT, ...)
 } } T1’s CAS fails
 (because T2 has set the flag) T1
  • 53. in the kernel: 1. arrange for the thread to be resumed in the future:
 add an entry for this thread (a futex_q) in the kernel queue for the address we care about (the key, keyA, is derived from the userspace address &flag)
  • 54. in the kernel: 1. arrange for the thread to be resumed in the future:
 add an entry for this thread (a futex_q) in the kernel queue for the address we care about (the queue is found by hashing the key: hash(keyA))
  • 55. in the kernel: 1. arrange for the thread to be resumed in the future:
 add an entry for this thread in the kernel queue for the address we care about 2. deschedule the calling thread to suspend it.
  • 56. T2 is done
 (accessing the shared data) T2 func writer() { for { if atomic.CompareAndSwap(&flag, 0, 1) { ... 
 // Set flag to unlocked. v := atomic.Xchg(&flag, 0) if v == 2 { // If there was a waiter, issue a wake up. futex(&flag, FUTEX_WAKE, ...) } return }
 v := atomic.Xchg(&flag, 2) futex(&flag, FUTEX_WAIT, …) } }
  • 57. T2 is done
 (accessing the shared data) T2 func writer() { for { if atomic.CompareAndSwap(&flag, 0, 1) { ... 
 // Set flag to unlocked. v := atomic.Xchg(&flag, 0) if v == 2 { // If there was a waiter, issue a wake up. futex(&flag, FUTEX_WAKE, ...) } return }
 v := atomic.Xchg(&flag, 2) futex(&flag, FUTEX_WAIT, …) } } if flag was 2, there’s at least one waiter futex syscall to tell the kernel to wake a waiter up.
  • 58. func writer() { for { if atomic.CompareAndSwap(&flag, 0, 1) { ... 
 // Set flag to unlocked. v := atomic.Xchg(&flag, 0) if v == 2 { // If there was a waiter, issue a wake up. futex(&flag, FUTEX_WAKE, ...) } return }
 v := atomic.Xchg(&flag, 2) futex(&flag, FUTEX_WAIT, …) } } if flag was 2, there’s at least one waiter futex syscall to tell the kernel to wake a waiter up. hashes the key walks the hash bucket’s futex queue finds the first thread waiting on the address schedules it to run again! } T2 is done
 (accessing the shared data) T2
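 Putting the two halves together, the 0/1/2 futex lock sketched on these slides can be written as ordinary (Linux-only) Go by calling the raw futex syscall through golang.org/x/sys/unix. This is only an illustrative sketch, not the runtime's code, and real Go programs would simply use sync.Mutex; the slides' atomic.Xchg corresponds to atomic.SwapInt32 here:

 // futexlock.go — a sketch of the 0/1/2 futex mutex above, Linux-only.
 package futexlock

 import (
 	"sync/atomic"
 	"unsafe"

 	"golang.org/x/sys/unix"
 )

 // flag: 0 = unlocked, 1 = locked, 2 = locked and there may be waiters.
 type Mutex struct {
 	flag int32
 }

 // futex wraps the raw syscall: FUTEX_WAIT sleeps while *addr == val,
 // FUTEX_WAKE wakes up to val waiters on addr.
 func futex(addr *int32, op, val int32) {
 	unix.Syscall6(unix.SYS_FUTEX,
 		uintptr(unsafe.Pointer(addr)), uintptr(op), uintptr(val), 0, 0, 0)
 }

 func (m *Mutex) Lock() {
 	// Fast path: uncontended CAS 0 -> 1.
 	if atomic.CompareAndSwapInt32(&m.flag, 0, 1) {
 		return
 	}
 	for {
 		// Mark the lock contended; if it was actually free, we now own it.
 		if atomic.SwapInt32(&m.flag, 2) == 0 {
 			return
 		}
 		// Ask the kernel to suspend us until flag changes from 2.
 		futex(&m.flag, unix.FUTEX_WAIT, 2)
 	}
 }

 func (m *Mutex) Unlock() {
 	// If the old value was 2, there may be a waiter: issue a wake-up.
 	if atomic.SwapInt32(&m.flag, 0) == 2 {
 		futex(&m.flag, unix.FUTEX_WAKE, 1)
 	}
 }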
  • 59. pretty convenient! pthread mutexes use futexes. That was a hella simplified futex. …but we still have a nice, lightweight primitive to build synchronization constructs.
  • 60. cost of a futex Run on a 12-core x86_64 SMP machine.
 Lock & unlock a pthread mutex 10M times in a loop
 (lock, increment an integer, unlock).
 Measure average time taken per lock/unlock pair
 (from within the program). uncontended case (1 thread): ~13ns contended case (12 cpu-pinned threads): ~0.9us
  • 61. cost of a futex Run on a 12-core x86_64 SMP machine.
 Lock & unlock a pthread mutex 10M times in a loop
 (lock, increment an integer, unlock).
 Measure average time taken per lock/unlock pair
 (from within the program). uncontended case (1 thread): ~13ns contended case (12 cpu-pinned threads): ~0.9us cost of the user-space atomic CAS = ~13ns } cost of the atomic CAS + syscall + thread context switch = ~0.9us }
  • 62. spinning vs. sleeping Spinning makes sense for short durations; it keeps the thread on the CPU. The trade-off is it uses CPU cycles not making progress. So at some point, it makes sense to pay the cost of the context switch to go to sleep. There are smart “hybrid” futexes:
 CAS-spin a small, fixed number of times —> if that didn’t lock, make the futex syscall. Example: the Go runtime’s futex implementation.
  • 63. spinning vs. sleeping Spinning makes sense for short durations; it keeps the thread on the CPU. The trade-off is it uses CPU cycles not making progress. So at some point, it makes sense to pay the cost of the context switch to go to sleep. There are smart “hybrid” futexes:
 CAS-spin a small, fixed number of times —> if that didn’t lock, make the futex syscall. Examples: the Go runtime’s futex implementation; a variant of the pthread_mutex.
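 Sketched on top of the futex Mutex above, the hybrid idea is just a bounded spin before the sleeping path (the spin count here is an arbitrary illustrative constant, not what any runtime actually uses):

 func (m *Mutex) hybridLock() {
 	// Spin on the CAS a bounded number of times first...
 	for i := 0; i < 30; i++ {
 		if atomic.CompareAndSwapInt32(&m.flag, 0, 1) {
 			return
 		}
 	}
 	// ...then fall back to the sleeping (futex) path.
 	m.Lock()
 }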
  • 64. …can we do better for user-space threads?
  • 65. …can we do better for user-space threads? goroutines are user-space threads. The Go runtime multiplexes them onto threads. lighter-weight and cheaper than threads:
 goroutine switches = ~tens of ns; 
 thread switches = ~a µs. (diagram: the Go scheduler maps goroutines onto threads; the OS scheduler maps threads onto CPU cores)
  • 66. …can we do better for user-space threads? goroutines are user-space threads. The Go runtime multiplexes them onto threads. lighter-weight and cheaper than threads:
 goroutine switches = ~tens of ns; 
 thread switches = ~a µs. (diagram: the Go scheduler maps goroutines onto threads; the OS scheduler maps threads onto CPU cores) we can block the goroutine without blocking the underlying thread! to avoid the thread context switch cost.
  • 67. This is what the Go runtime’s semaphore does!
 The semaphore is conceptually very similar to futexes in Linux*, but it is used to 
 sleep/wake goroutines: a goroutine that blocks on a mutex is descheduled, but not the underlying thread. The goroutine wait queues are managed by the runtime, in user-space. * There are, of course, differences in implementation.
  • 68. the goroutine wait queues are managed by the Go runtime, in user-space. var flag int var tasks Tasks func reader() { for { // Attempt to CAS flag. if atomic.CompareAndSwap(&flag, ...) { ... }
 // CAS failed; add G1 as a waiter for flag. root.queue() // and to sleep. futex(&flag, FUTEX_WAIT, ...) } } G1’s CAS fails
 (because G2 has set the flag) G1
  • 69. the goroutine wait queues (in user-space, managed by the Go runtime): a hash table keyed on the userspace address (hash(&flag)). } the top-level waitlist for a hash bucket is implemented as a treap } there’s a second-level wait queue 
 for each unique address
  • 70. the goroutine wait queues are managed by the Go runtime, in user-space. var flag int var tasks Tasks func reader() { for { // Attempt to CAS flag. if atomic.CompareAndSwap(&flag, ...) { ... }
 // CAS failed; add G1 as a waiter for flag. root.queue() // and suspend G1. gopark() } } G1’s CAS fails
 (because G2 has set the flag) G1 the Go runtime deschedules the goroutine; keeps the thread running!
  • 71. G2’s done
 (accessing the shared data) G2 func writer() { for { if atomic.CompareAndSwap(&flag, 0, 1) { ... 
 // Set flag to unlocked. atomic.Xadd(&flag, ...)
 
 // If there’s a waiter, reschedule it. waiter := root.dequeue(&flag) goready(waiter) return }
 root.queue() gopark() } } find the first waiter goroutine and reschedule it ]
  • 72. this is clever. Avoids the hefty thread context switch cost in the contended case,
 up to a point.
  • 73. this is clever. Avoids the hefty thread context switch cost in the contended case,
 up to a point. but…
  • 74. func reader() { for { if atomic.CompareAndSwap(&flag, ...) { ... }
 // CAS failed; add G1 as a waiter for flag. semaroot.queue() // and suspend G1. gopark() } } once G1 is resumed, 
 it will try to CAS again. Resumed goroutines have to compete with any other goroutines trying to CAS.
 
 They will likely lose:
 there’s a delay between when the flag was set to 0 and this goroutine was rescheduled. G1
  • 75. Resumed goroutines have to compete with any other goroutines trying to CAS.
 
 They will likely lose:
 there’s a delay between when the flag was set to 0 and this goroutine was rescheduled. // Set flag to unlocked. atomic.Xadd(&flag, …)
 
 // If there’s a waiter, reschedule it. waiter := root.dequeue(&flag) goready(waiter) return
  • 76. Resumed goroutines have to compete with any other goroutines trying to CAS.
 
 They will likely lose:
 there’s a delay between when the flag was set to 0 and this goroutine was rescheduled. So, the semaphore implementation may end up:
 • unnecessarily resuming a waiter goroutine:
 results in a goroutine context switch again.
 • causing goroutine starvation:
 can result in long wait times, high tail latencies.
  • 77. Resumed goroutines have to compete with any other goroutines trying to CAS.
 
 They will likely lose:
 there’s a delay between when the flag was set to 0 and this goroutine was rescheduled. So, the semaphore implementation may end up:
 • unnecessarily resuming a waiter goroutine:
 results in a goroutine context switch again.
 • causing goroutine starvation:
 can result in long wait times, high tail latencies. the sync.Mutex implementation adds a layer that fixes these.
  • 78. go’s sync.Mutex Is a hybrid lock that uses a semaphore to sleep / wake goroutines.
  • 79. go’s sync.Mutex is a hybrid lock that uses a semaphore to sleep / wake goroutines. Additionally, it tracks extra state to: prevent unnecessarily waking up a goroutine:
 “there’s a goroutine actively trying to CAS”: an unlock in this case does not wake a waiter.
 prevent severe goroutine starvation:
 “a waiter has been waiting”: if a waiter is resumed but loses the CAS again, it’s queued at the head of the wait queue.
 If a waiter fails to lock for 1ms, switch the mutex to “starvation mode”.
  • 80. go’s sync.Mutex is a hybrid lock that uses a semaphore to sleep / wake goroutines. Additionally, it tracks extra state to: prevent unnecessarily waking up a goroutine:
 “there’s a goroutine actively trying to CAS”: an unlock in this case does not wake a waiter.
 prevent severe goroutine starvation:
 “a waiter has been waiting”: if a waiter is resumed but loses the CAS again, it’s queued at the head of the wait queue.
 If a waiter fails to lock for 1ms, switch the mutex to “starvation mode”: other goroutines cannot CAS, they must queue. The unlock hands the mutex off to the first waiter,
 i.e. the waiter does not have to compete.
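 Using it is much simpler than building it; with "sync" imported, the reader/writer from earlier becomes roughly the following (tasks here is a plain slice standing in for the shared ring buffer):

 var (
 	mu    sync.Mutex
 	tasks []string // stands in for the shared Tasks ring buffer
 )

 func writer(t string) {
 	mu.Lock()
 	tasks = append(tasks, t) // write to tasks
 	mu.Unlock()
 }

 func reader() (string, bool) {
 	mu.Lock()
 	defer mu.Unlock()
 	if len(tasks) == 0 {
 		return "", false
 	}
 	t := tasks[0] // read a task
 	tasks = tasks[1:]
 	return t, true
 }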
  • 81. how does it perform? Run on a 12-core x86_64 SMP machine.
 Lock & unlock a Go sync.Mutex 10M times in a loop
 (lock, increment an integer, unlock).
 Measure average time taken per lock/unlock pair
 (from within the program). uncontended case (1 goroutine): ~13ns contended case (12 goroutines): ~0.8us
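 A sketch of an equivalent measurement with Go's testing package (run with something like go test -bench=Mutex -cpu=1,12; the package and variable names are illustrative):

 package locks

 import (
 	"sync"
 	"testing"
 )

 var (
 	mu sync.Mutex
 	n  int
 )

 // Uncontended: a single goroutine locking and unlocking in a loop.
 func BenchmarkMutexUncontended(b *testing.B) {
 	for i := 0; i < b.N; i++ {
 		mu.Lock()
 		n++
 		mu.Unlock()
 	}
 }

 // Contended: GOMAXPROCS goroutines fighting over the same mutex.
 func BenchmarkMutexContended(b *testing.B) {
 	b.RunParallel(func(pb *testing.PB) {
 		for pb.Next() {
 			mu.Lock()
 			n++
 			mu.Unlock()
 		}
 	})
 }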
  • 82. how does it perform? Contended case performance of C vs. Go:
 Go initially performs better than C,
 but they ~converge as concurrency gets high enough.
  • 83. how does it perform? Contended case performance of C vs. Go:
 Go initially performs better than C,
 but they ~converge as concurrency gets high enough.
  • 85. the Go runtime semaphore’s hash table for waiting goroutines: each hash bucket needs a lock. …and it’s a futex!
  • 86. the Go runtime semaphore’s hash table for waiting goroutines: each hash bucket needs a lock. …it’s a futex!
  • 87. the Go runtime semaphore’s hash table for waiting goroutines: each hash bucket needs a lock. …it’s a futex! the Linux kernel’s futex hash table for waiting threads: each hash bucket needs a lock. …it’s a spinlock!
  • 88. the Go runtime semaphore’s hash table for waiting goroutines: each hash bucket needs a lock. …it’s a futex! the Linux kernel’s futex hash table for waiting threads: each hash bucket needs a lock. …it’s a spinlock!
  • 89. It’s locks all the way down! sync.Mutex uses a semaphore, which uses futexes, which use spin-locks.
  • 90. let’s analyze its performance! performance models for contention.
  • 91. uncontended case
 Cost of the atomic CAS. contended case In the worst-case, cost of failed atomic operations + spinning + goroutine context switch + 
 thread context switch. …but really, it depends on the degree of contention.
  • 92. how many threads do we need to support a target throughput? 
 while keeping response time the same. how does response time change with the number of threads? assuming a constant workload. “How does application performance change with concurrency?”
  • 93. Amdahl’s Law Speed-up depends on the fraction of the workload that can be parallelized (p). speed-up with N threads = 1 / ((1 − p) + p/N)
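 Plugging in numbers shows the ceiling: with p = 0.75, 12 threads give a speed-up of only 1 / (0.25 + 0.75/12) ≈ 3.2×, and even infinitely many threads top out at 1 / (1 − p) = 4×. As a one-line helper (a sketch):

 // speedup returns Amdahl’s-Law speed-up for parallel fraction p on n threads.
 func speedup(p float64, n int) float64 {
 	return 1 / ((1 - p) + p/float64(n))
 }

 // speedup(0.75, 12) ≈ 3.2; speedup(0.25, 12) ≈ 1.3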
  • 94. a simple experiment Measure time taken to complete a fixed workload.
 serial fraction holds a lock (sync.Mutex). scale parallel fraction (p) from 0.25 to 0.75 measure time taken for number of goroutines (N) = 1 —> 12.
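 A sketch of that experiment (assuming sync and time are imported; work, run, and the loop bounds are made up for illustration — the talk's exact benchmark isn't shown):

 var sink int

 // work stands in for one unit of CPU-bound work.
 func work() {
 	s := 0
 	for i := 0; i < 1000; i++ {
 		s += i * i
 	}
 	sink = s
 }

 // run executes a fixed workload of `iters` units split across `goroutines`
 // goroutines; a fraction p runs in parallel, the rest runs under a mutex.
 func run(p float64, goroutines, iters int) time.Duration {
 	var mu sync.Mutex
 	start := time.Now()

 	perG := iters / goroutines
 	parallelPerG := int(p * float64(perG))

 	var wg sync.WaitGroup
 	for g := 0; g < goroutines; g++ {
 		wg.Add(1)
 		go func() {
 			defer wg.Done()
 			for i := 0; i < perG; i++ {
 				if i < parallelPerG {
 					work() // parallel fraction
 				} else {
 					mu.Lock() // serial fraction: one goroutine at a time
 					work()
 					mu.Unlock()
 				}
 			}
 		}()
 	}
 	wg.Wait()
 	return time.Since(start)
 }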
  • 95. (plot: measured speed-up vs. number of goroutines, for p = 0.25 and p = 0.75) Amdahl’s Law Speed-up depends on the fraction of the workload that can be parallelized (p).
  • 96. Universal Scalability Law (USL) Scalability depends on contention and cross-talk. • contention penalty (the αN term):
 due to serialization for shared resources.
 examples: lock contention, database contention.
 • crosstalk penalty:
 due to coordination for coherence. examples: servers coordinating to synchronize
 mutable state.
  • 97. Universal Scalability Law (USL) Scalability depends on contention and cross-talk. • contention penalty (the αN term):
 due to serialization for shared resources.
 examples: lock contention, database contention.
 • crosstalk penalty (the βN² term):
 due to coordination for coherence. examples: servers coordinating to synchronize
 mutable state.
  • 98. Universal Scalability Law (USL) throughput of N threads = N / (αN + βN² + C). With contention only: N / (αN + C); ideal linear scaling: N / C. (plot: throughput vs. concurrency for the three curves)
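 In code form, using the formula as the slide states it (α, β and C would be fitted from measurements, for example with the R usl package mentioned on the next slide):

 // throughput returns the USL-predicted throughput for n threads, given the
 // contention coefficient alpha, the crosstalk coefficient beta, and the
 // constant c from the slide’s formula.
 func throughput(n, alpha, beta, c float64) float64 {
 	return n / (alpha*n + beta*n*n + c)
 }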
  • 99. (plots: p = 0.25 and p = 0.75) USL curves plotted using the R usl package. p = parallel fraction of workload
  • 100. let’s use it, smartly! a few closing strategies.
  • 101. but first, profile! Go: • Go mutex contention profiler (the pprof mutex profile)
 https://golang.org/doc/diagnostics.html Linux: • perf lock:
 perf examples by Brendan Gregg;
 Brendan Gregg’s article on off-CPU analysis • eBPF:
 example bcc tool to measure user lock contention • DTrace, SystemTap • mutrace, Valgrind DRD
  • 102. strategy I: don’t use a lock • remove the need for synchronization from hot-paths:
 typically involves rearchitecting. • reduce the number of lock operations:
 doing more thread-local work, buffering, batching, copy-on-write. • use atomic operations (see the sketch after this slide). • use lock-free data structures
 see: http://www.1024cores.net/
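 For example, a counter that only ever needs an increment doesn’t need a mutex at all (a sketch, using sync/atomic):

 var hits int64

 // Instead of mu.Lock(); hits++; mu.Unlock():
 func recordHit() {
 	atomic.AddInt64(&hits, 1)
 }

 func currentHits() int64 {
 	return atomic.LoadInt64(&hits)
 }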
  • 103. strategy II: granular locks • shard data:
 but ensure no false sharing, by padding to cache line size (see the sketch after this slide).
 examples: 
 go runtime semaphore’s hash table buckets;
 Linux scheduler’s per-CPU runqueues;
 Go scheduler’s per-CPU runqueues; • use read-write locks. (benchmark: CreateGoroutineParallel — a modified scheduler with a global lock and runqueue vs. the Go scheduler’s per-CPU-core, lock-free runqueues)
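 A sketch of sharding with padding (assuming a 64-byte cache line, which is typical on x86_64; the shard-per-worker scheme and names here are illustrative, not the runtime's, and sync/atomic is assumed imported):

 const cacheLineSize = 64 // assumed; x86_64 typically uses 64-byte lines

 // One shard per worker; the padding helps keep shards on separate cache
 // lines so they don't false-share.
 type shard struct {
 	n int64
 	_ [cacheLineSize - 8]byte
 }

 type ShardedCounter struct {
 	shards []shard
 }

 func NewShardedCounter(n int) *ShardedCounter {
 	return &ShardedCounter{shards: make([]shard, n)}
 }

 // Add increments the shard chosen by the caller (e.g. a worker ID).
 func (c *ShardedCounter) Add(worker int, delta int64) {
 	atomic.AddInt64(&c.shards[worker%len(c.shards)].n, delta)
 }

 // Sum reads across all shards.
 func (c *ShardedCounter) Sum() int64 {
 	var total int64
 	for i := range c.shards {
 		total += atomic.LoadInt64(&c.shards[i].n)
 	}
 	return total
 }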
  • 104. strategy III: do less serial work. (plots: lock contention causes ~10x latency; a smaller critical section reduces it) • move computation out of the critical section:
 typically involves rearchitecting.
  • 105. bonus strategy: • contention-aware schedulers example: Contention-aware scheduling in MySQL 8.0 Innodb
  • 106. Special thanks to Eben Freeman, Justin Delegard, Austin Duffield for reading drafts of this. @kavya719 speakerdeck.com/kavya719/lets-talk-locks References
 Jeff Preshing’s excellent blog series
 Memory Barriers: A Hardware View for Software Hackers
 LWN.net on futexes
 The Go source code; The Universal Scalability Law Manifesto, Neil Gunther