Rust - Synchronization and Concurrency
Safe synchronization abstractions and their implementation

Corey (Rust) Richardson, February 17, 2014

These are slides for a presentation I gave to my OS class. It was mostly me talking, using the code to drive the examples and show how Rust approaches the problems of synchronization. It is not super detailed and doesn't get into the meatier pieces (Condvar, mutex::Mutex, etc.).

What is Rust?

Rust is a systems language, aimed at replacing C++, with the following design goals, in roughly descending order of importance:

Zero-cost abstraction
Easy, safe concurrency and parallelism
Memory safety (no data races)
Type safety (no willy-nilly casting)
Simplicity
Compilation speed

Concurrency model

"Tasks" as unit of computation
No observable shared memory
No race conditions! †
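
A minimal sketch of what "no observable shared memory" means in practice. It is hedged: it assumes the task-spawning API of the period (a spawn function taking a proc) and previews the Chan/Port message-passing types shown later in the deck. Data is moved between tasks rather than shared:

let (port, chan) = Chan::new();
spawn(proc() {
    let data = ~[1, 2, 3];   // owned by the child task
    chan.send(data);         // ownership moves with the message
});
let data = port.recv();      // now owned by this task; nothing is shared
assert!(data.len() == 3);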

No race conditions?

How can we avoid race conditions?

A type system which enables safe sharing of data
Careful design of concurrency abstractions

Hello World

fn main() {
    println("Hello, world!");
}

UnsafeArc

Unsafe data structure. Provides atomic reference counting of a type. Ensures memory does not leak.

pub struct UnsafeArc<T> {
    priv data: *mut ArcData<T>
}

struct ArcData<T> {
    count: AtomicUint,
    data: T
}

UnsafeArc cont.

fn new_inner<T>(data: T, initial_count: uint) -> *mut ArcData<T> {
    let data = box ArcData {
        count: AtomicUint::new(initial_count),
        data: data
    };
    cast::transmute(data)
}

impl<T: Send> UnsafeArc<T> {
    pub fn new(data: T) -> UnsafeArc<T> {
        unsafe { UnsafeArc { data: new_inner(data, 1) } }
    }

    pub fn new2(data: T) -> (UnsafeArc<T>, UnsafeArc<T>) {
        unsafe {
            let ptr = new_inner(data, 2);
            (UnsafeArc { data: ptr }, UnsafeArc { data: ptr })
        }
    }

UnsafeArc cont.

    pub fn get(&self) -> *mut T {
        unsafe {
            // problems?
            assert!((*self.data).count.load(Relaxed) > 0);
            return &mut (*self.data).data as *mut T;
        }
    }

    pub fn get_immut(&self) -> *T {
        unsafe {
            // problems?
            assert!((*self.data).count.load(Relaxed) > 0);
            return &(*self.data).data as *T;
        }
    }

    pub fn is_owned(&self) -> bool {
        unsafe {
            // problems?
            (*self.data).count.load(Relaxed) == 1
        }
    }
}
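
A deliberately dangerous sketch of what these methods allow, assuming code that can see UnsafeArc directly: new2 hands back two handles to the same ArcData, and get exposes a raw pointer, so only discipline (not the type system) prevents a data race:

let (a, b) = UnsafeArc::new2(5);
unsafe {
    // Both handles alias the same allocation, so a write through `a`
    // is visible through `b`. If `b` lived on another task and were
    // used concurrently, this would be a data race; that is why the
    // type is unsafe and only used to build safe abstractions.
    *a.get() = 10;
    assert!(*b.get() == 10);
}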

UnsafeArc cloning

impl<T: Send> Clone for UnsafeArc<T> {
    fn clone(&self) -> UnsafeArc<T> {
        unsafe {
            let old_count = (*self.data).count
                                .fetch_add(1, Acquire);
            //                               ^~~~~~~ Why?
            assert!(old_count >= 1);
            return UnsafeArc { data: self.data };
        }
    }
}

Adding Safety

Arc: wraps UnsafeArc, provides read-only access.

pub struct Arc<T> { priv x: UnsafeArc<T> }

impl<T: Freeze + Send> Arc<T> {
    pub fn new(data: T) -> Arc<T> {
        Arc { x: UnsafeArc::new(data) }
    }

    pub fn get<'a>(&'a self) -> &'a T {
        unsafe { &*self.x.get_immut() }
    }
}
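
A minimal single-task usage sketch: because new requires Freeze + Send and get only hands out a shared borrow, the wrapped data can be read through an Arc but never mutated through it:

let shared = Arc::new(42);      // int is Freeze + Send
let value: &int = shared.get(); // read-only borrow, no unsafe needed here
assert!(*value == 42);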

Mutexes?

pub struct Mutex { priv sem: Sem<~[WaitQueue]> }

impl Mutex {
    pub fn new() -> Mutex { Mutex::new_with_condvars(1) }

    pub fn new_with_condvars(num: uint) -> Mutex {
        Mutex { sem: Sem::new_and_signal(1, num) }
    }

    pub fn lock<U>(&self, blk: || -> U) -> U {
        // magic?
        (&self.sem).access(blk)
    }

    pub fn lock_cond<U>(&self, blk: |c: &Condvar| -> U) -> U {
        (&self.sem).access_cond(blk)
    }
}

Mutexes!

Mutexes in Rust are implemented on top of semaphores, using 100…
No 'unlock' operation? Closures!
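
A small sketch of the closure-as-unlock style: the critical section is the closure passed to lock, and the semaphore is released when access's finally clause runs, even if the closure fails, so there is nothing to forget to unlock:

let m = Mutex::new();
let answer = m.lock(|| {
    // exclusive section: the underlying semaphore was acquired before entry
    40 + 2
}); // released here, with no explicit unlock call
assert!(answer == 42);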

Wait Queues

Wait queues provide an ordering when waiting on a lock.

// Each waiting task receives on one of these.
type WaitEnd = Port<()>;
type SignalEnd = Chan<()>;

// A doubly-ended queue of waiting tasks.
struct WaitQueue {
    head: Port<SignalEnd>,
    tail: Chan<SignalEnd>
}

Channels and Ports

Message passing. Provides a way to send 'Send' data to another task. Very efficient, single-reader, single-writer.

impl<T: Send> Chan<T> {
    fn send(&self, data: T) { ... }
    fn try_send(&self, data: T) -> bool { ... }
}

impl<T: Send> Port<T> {
    fn recv(&self) -> T { ... }
    fn try_recv(&self) -> TryRecvResult<T> { ... }
}
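
A small sketch of the non-blocking half of this API (assuming std::comm is in scope), since the wait-queue code below leans on try_recv and the comm::Data case:

let (port, chan) = Chan::new();
chan.send(7);                   // sending does not block
match port.try_recv() {         // receiving without blocking
    comm::Data(n) => assert!(n == 7),
    _ => fail!("expected a message")
}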

Wait Queue Implementation

Given Ports and Chans, how can we express wait queues?

impl WaitQueue {
    fn signal(&self) -> bool {
        match self.head.try_recv() {
            comm::Data(ch) => {
                // Send a wakeup signal. If the waiter was killed,
                // its port will have closed. Keep trying until we
                // get a live task.
                if ch.try_send(()) { true } else { self.signal() }
            }
            _ => false
        }
    }

Wait Queue Impl Cont.

    fn broadcast(&self) -> uint {
        let mut count = 0;
        loop {
            match self.head.try_recv() {
                comm::Data(ch) => {
                    if ch.try_send(()) {
                        count += 1;
                    }
                }
                _ => break
            }
        }
        count
    }

Wait Queue Impl End

    fn wait_end(&self) -> WaitEnd {
        let (wait_end, signal_end) = Chan::new();
        assert!(self.tail.try_send(signal_end));
        wait_end
    }
}
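
To see why a "channel of channels" behaves like a wait queue, here is a standalone, single-task re-creation of the idea using only the Chan/Port API above (illustrative only, not library code); sends are buffered, so the ordering below works without a second task:

// head: Port<Chan<()>>, tail: Chan<Chan<()>>, as in WaitQueue
let (head, tail) = Chan::new();

// A would-be waiter creates a private wakeup channel, enqueues the
// sending half, and keeps the receiving half to block on.
let (wait_end, signal_end) = Chan::new();
tail.send(signal_end);

// A signaller dequeues exactly one waiter and wakes it with a unit message.
let waiter = head.recv();
waiter.send(());

// The waiter's recv() now returns: it has been signalled.
wait_end.recv();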

Raw Semaphores

We have a way to express order and waiting, now to build some actual *synchronization*.

struct Sem<Q>(UnsafeArc<SemInner<Q>>);

struct SemInner<Q> {
    lock: LowLevelMutex,
    count: int,
    waiters: WaitQueue,
    // Can be either unit or another waitqueue.
    // Some sems shouldn't come with a condition
    // variable attached, others should.
    blocked: Q
}

Semaphore Implementation

impl<Q: Send> Sem<Q> {
    pub fn access<U>(&self, blk: || -> U) -> U {
        (|| {
            self.acquire();
            blk()
        }).finally(|| {
            self.release();
        })
    }

    unsafe fn with(&self, f: |&mut SemInner<Q>|) {
        let Sem(ref arc) = *self;
        let state = arc.get();
        let _g = (*state).lock.lock(); // unlock????
        f(cast::transmute(state));
    }

Acquiring a semaphore (P)

    pub fn acquire(&self) {
        unsafe {
            let mut waiter_nobe = None;
            self.with(|state| {
                state.count -= 1;
                if state.count < 0 {
                    // Create waiter nobe, enqueue ourself, and tell
                    // outer scope we need to block.
                    waiter_nobe = Some(state.waiters.wait_end());
                }
            });
            // Need to wait outside the exclusive.
            if waiter_nobe.is_some() {
                let _ = waiter_nobe.unwrap().recv();
            }
        }
    }

Releasing a Semaphore (V)

    pub fn release(&self) {
        unsafe {
            self.with(|state| {
                state.count += 1;
                if state.count <= 0 {
                    state.waiters.signal();
                }
            })
        }
    }
}
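
To make the counting concrete, here is a walk-through of the state for a semaphore created with count = 1 (i.e. a mutex), with two tasks A and B:

// count starts at 1
// A: acquire()  ->  count 1 -> 0    not negative, A proceeds immediately
// B: acquire()  ->  count 0 -> -1   negative: B enqueues a WaitEnd via
//                                   wait_end() and blocks on recv()
// A: release()  ->  count -1 -> 0   count <= 0: signal() wakes one waiter,
//                                   so B's recv() returns and B proceeds
// B: release()  ->  count 0 -> 1    nobody is waiting, nothing to signal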

Filling in the last pieces

impl Sem<~[WaitQueue]> {
    fn new_and_signal(count: int, num_condvars: uint) -> Sem<~[WaitQueue]> {
        let mut queues = ~[];
        for _ in range(0, num_condvars) {
            queues.push(WaitQueue::new());   // one WaitQueue per condvar
        }
        Sem::new(count, queues)
    }
}

And more?

On top of these primitives, as we have seen in class, every other synchronization primitive can be constructed. In particular, we also provide starvation-free Reader-Writer locks, Barriers, and Copy-on-Write Arcs.
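
As one tiny illustration of that claim (illustrative only, and again assuming the period's spawn/proc task API), a two-task rendezvous, which is a degenerate Barrier, can be built from nothing but two channels:

let (port_a, chan_a) = Chan::new();
let (port_b, chan_b) = Chan::new();

spawn(proc() {
    chan_a.send(());   // announce arrival
    port_b.recv();     // wait for the other task to arrive
    // both tasks are past the rendezvous point here
});

chan_b.send(());       // announce arrival
port_a.recv();         // wait for the child task
// both tasks are past the rendezvous point here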

Thank You

Thanks for your time!