Rust Synchronization Primitives

These are slides for a presentation I gave to my OS class. It was mostly me talking, using the code to drive the examples and show how Rust approaches the problems of synchronization. Not super detailed, doesn't get into the meaty stuff (Condvar, mutex::Mutex, etc)

License: CC Attribution-ShareAlike

Presentation Transcript

  • Rust - Synchronization and Concurrency
    Safe synchronization abstractions and their implementation
    Corey (Rust) Richardson, February 17, 2014
  • What is Rust?
    Rust is a systems language, aimed at replacing C++, with the following design goals, in roughly descending order of importance:
      - Zero-cost abstraction
      - Easy, safe concurrency and parallelism
      - Memory safety (no data races)
      - Type safety (no willy-nilly casting)
      - Simplicity
      - Compilation speed
  • Concurrency model
    "Tasks" as the unit of computation
    No observable shared memory
    No race conditions! †
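    For comparison, a minimal sketch of this model in current (post-1.0) Rust, where tasks are plain threads and data is moved into them rather than shared:

        use std::thread;

        fn main() {
            // Ownership of `data` moves into the new thread; the parent can no
            // longer touch it, so there is nothing to race on.
            let data = vec![1, 2, 3];
            let handle = thread::spawn(move || data.iter().sum::<i32>());
            println!("sum = {}", handle.join().unwrap());
        }
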
  • No race conditions?
    How can we avoid race conditions?
      - A type system which enables safe sharing of data
      - Careful design of concurrency abstractions
  • Hello World

        fn main() {
            println("Hello, world!");
        }

  • UnsafeArc
    Unsafe data structure. Provides atomic reference counting of a type. Ensures memory does not leak.

        pub struct UnsafeArc<T> {
            priv data: *mut ArcData<T>
        }

        struct ArcData<T> {
            count: AtomicUint,
            data: T
        }

  • UnsafeArc cont.

        fn new_inner<T>(data: T, initial_count: uint) -> *mut ArcData<T> {
            unsafe {
                let data = box ArcData {
                    count: AtomicUint::new(initial_count),
                    data: data
                };
                cast::transmute(data)
            }
        }

        impl<T: Send> UnsafeArc<T> {
            pub fn new(data: T) -> UnsafeArc<T> {
                unsafe { UnsafeArc { data: new_inner(data, 1) } }
            }

            pub fn new2(data: T) -> (UnsafeArc<T>, UnsafeArc<T>) {
                unsafe {
                    let ptr = new_inner(data, 2);
                    (UnsafeArc { data: ptr }, UnsafeArc { data: ptr })
                }
            }

  • UnsafeArc cont.

            pub fn get(&self) -> *mut T {
                unsafe {
                    // problems?
                    assert!((*self.data).count.load(Relaxed) > 0);
                    return &mut (*self.data).data as *mut T;
                }
            }

            pub fn get_immut(&self) -> *T {
                unsafe {
                    // problems?
                    assert!((*self.data).count.load(Relaxed) > 0);
                    return &(*self.data).data as *T;
                }
            }

            pub fn is_owned(&self) -> bool {
                unsafe {
                    // problems?
                    (*self.data).count.load(Relaxed) == 1
                }
            }
        }

  • UnsafeArc cloning

        impl<T: Send> Clone for UnsafeArc<T> {
            fn clone(&self) -> UnsafeArc<T> {
                unsafe {
                    let old_count = (*self.data).count.fetch_add(1, Acquire);
                    //                                               ^~~~~~~ Why?
                    assert!(old_count >= 1);
                    return UnsafeArc { data: self.data };
                }
            }
        }

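    For comparison, a self-contained sketch of the same idea in current (post-1.0) Rust. MiniArc and its layout are made up for illustration; the orderings shown (Relaxed on the increment, Release on the decrement plus an Acquire fence) are what today's std::sync::Arc uses, rather than the Acquire the slide asks about:

        use std::ops::Deref;
        use std::ptr::NonNull;
        use std::sync::atomic::{fence, AtomicUsize, Ordering};
        use std::thread;

        // The inner allocation: a count plus the payload, like the slides' ArcData.
        struct Inner<T> {
            count: AtomicUsize,
            data: T,
        }

        pub struct MiniArc<T> {
            ptr: NonNull<Inner<T>>,
        }

        // Sharing a MiniArc across threads is only safe if T itself is.
        unsafe impl<T: Send + Sync> Send for MiniArc<T> {}
        unsafe impl<T: Send + Sync> Sync for MiniArc<T> {}

        impl<T> MiniArc<T> {
            pub fn new(data: T) -> Self {
                let inner = Box::new(Inner { count: AtomicUsize::new(1), data });
                MiniArc { ptr: NonNull::from(Box::leak(inner)) }
            }
        }

        impl<T> Clone for MiniArc<T> {
            fn clone(&self) -> Self {
                // Relaxed is enough here: holding a clone already proves the object is alive.
                unsafe { self.ptr.as_ref() }.count.fetch_add(1, Ordering::Relaxed);
                MiniArc { ptr: self.ptr }
            }
        }

        impl<T> Deref for MiniArc<T> {
            type Target = T;
            fn deref(&self) -> &T {
                &unsafe { self.ptr.as_ref() }.data
            }
        }

        impl<T> Drop for MiniArc<T> {
            fn drop(&mut self) {
                // Release on the decrement, then an Acquire fence before freeing,
                // so every thread's writes happen-before the deallocation.
                if unsafe { self.ptr.as_ref() }.count.fetch_sub(1, Ordering::Release) == 1 {
                    fence(Ordering::Acquire);
                    unsafe { drop(Box::from_raw(self.ptr.as_ptr())) };
                }
            }
        }

        fn main() {
            let a = MiniArc::new(vec![1, 2, 3]);
            let b = a.clone();
            let handle = thread::spawn(move || b.len());
            assert_eq!(handle.join().unwrap(), 3);
            assert_eq!(a.len(), 3);
        }
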
  • Adding Safety
    Arc: wraps UnsafeArc, provides read-only access.

        pub struct Arc<T> { priv x: UnsafeArc<T> }

        impl<T: Freeze + Send> Arc<T> {
            pub fn new(data: T) -> Arc<T> {
                Arc { x: UnsafeArc::new(data) }
            }

            pub fn get<'a>(&'a self) -> &'a T {
                unsafe { &*self.x.get_immut() }
            }
        }

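    The safe wrapper lives in std today as std::sync::Arc; a small usage sketch of read-only sharing across threads:

        use std::sync::Arc;
        use std::thread;

        fn main() {
            let shared = Arc::new(vec![1, 2, 3, 4]);

            let handles: Vec<_> = (0..3)
                .map(|_| {
                    let shared = Arc::clone(&shared); // bump the refcount, share read-only
                    thread::spawn(move || shared.iter().sum::<i32>())
                })
                .collect();

            for h in handles {
                assert_eq!(h.join().unwrap(), 10);
            }
        }
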
  • Mutexes?

        pub struct Mutex { priv sem: Sem<~[WaitQueue]> }

        impl Mutex {
            pub fn new() -> Mutex {
                Mutex::new_with_condvars(1)
            }

            pub fn new_with_condvars(num: uint) -> Mutex {
                Mutex { sem: Sem::new_and_signal(1, num) }
            }

            pub fn lock<U>(&self, blk: || -> U) -> U {
                // magic?
                (&self.sem).access(blk)
            }

            pub fn lock_cond<U>(&self, blk: |c: &Condvar| -> U) -> U {
                (&self.sem).access_cond(blk)
            }
        }

  • Mutexes!
    Mutexes in Rust are implemented on top of semaphores, using an initial count of 1.
    No 'unlock' operation? Closures!
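    Today's std::sync::Mutex unlocks via an RAII guard instead, but the closure style is easy to recover on top of it. A sketch, where with_lock is a made-up helper mirroring the slides' lock(blk):

        use std::sync::{Arc, Mutex};
        use std::thread;

        // Run `blk` with the mutex held; the guard unlocks automatically when dropped.
        fn with_lock<T, U>(m: &Mutex<T>, blk: impl FnOnce(&mut T) -> U) -> U {
            let mut guard = m.lock().unwrap();
            blk(&mut *guard)
        }

        fn main() {
            let counter = Arc::new(Mutex::new(0));

            let handles: Vec<_> = (0..4)
                .map(|_| {
                    let counter = Arc::clone(&counter);
                    thread::spawn(move || {
                        for _ in 0..1000 {
                            with_lock(&counter, |n| *n += 1);
                        }
                    })
                })
                .collect();

            for h in handles {
                h.join().unwrap();
            }
            assert_eq!(*counter.lock().unwrap(), 4000);
        }
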
  • Wait Queues
    Wait queues provide an ordering when waiting on a lock.

        // Each waiting task receives on one of these.
        type WaitEnd = Port<()>;
        type SignalEnd = Chan<()>;

        // A doubly-ended queue of waiting tasks.
        struct WaitQueue {
            head: Port<SignalEnd>,
            tail: Chan<SignalEnd>
        }

  • Channels and Ports
    Message passing. Provides a way to send 'Send' data to another task. Very efficient, single-reader, single-writer.

        impl<T: Send> Chan<T> {
            fn send(&self, data: T) { ... }
            fn try_send(&self, data: T) -> bool { ... }
        }

        impl<T: Send> Port<T> {
            fn recv(&self) -> T { ... }
            fn try_recv(&self) -> TryRecvResult<T> { ... }
        }

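    The modern descendant is std::sync::mpsc; a small sketch of sending data between threads:

        use std::sync::mpsc::channel;
        use std::thread;

        fn main() {
            // `tx` plays the role of the slides' Chan, `rx` the role of the Port.
            let (tx, rx) = channel();

            thread::spawn(move || {
                for i in 0..5 {
                    tx.send(i).expect("receiver hung up");
                }
                // `tx` is dropped here, which closes the channel.
            });

            // `rx` yields values until the channel closes.
            let received: Vec<i32> = rx.iter().collect();
            assert_eq!(received, vec![0, 1, 2, 3, 4]);
        }
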
  • Wait Queue Implementation
    Given Ports and Chans, how can we express wait queues?

        impl WaitQueue {
            fn signal(&self) -> bool {
                match self.head.try_recv() {
                    comm::Data(ch) => {
                        // Send a wakeup signal. If the waiter was killed,
                        // its port will have closed. Keep trying until we
                        // get a live task.
                        if ch.try_send(()) { true } else { self.signal() }
                    }
                    _ => false
                }
            }

  • Wait Queue Impl Cont.

            fn broadcast(&self) -> uint {
                let mut count = 0;
                loop {
                    match self.head.try_recv() {
                        comm::Data(ch) => {
                            if ch.try_send(()) {
                                count += 1;
                            }
                        }
                        _ => break
                    }
                }
                count
            }

  • Wait Queue Impl End

            fn wait_end(&self) -> WaitEnd {
                let (wait_end, signal_end) = Chan::new();
                assert!(self.tail.try_send(signal_end));
                wait_end
            }
        }

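    A rough sketch of the same structure over today's std::sync::mpsc. The names mirror the slides', but this is an illustrative reconstruction; as in the slides, the queue is assumed to be used only while holding the semaphore's low-level lock:

        use std::sync::mpsc::{channel, Receiver, Sender};

        // Each waiter blocks on its own Receiver; the queue stores the matching Senders.
        struct WaitQueue {
            head: Receiver<Sender<()>>,
            tail: Sender<Sender<()>>,
        }

        impl WaitQueue {
            fn new() -> WaitQueue {
                let (tail, head) = channel();
                WaitQueue { head, tail }
            }

            // Wake one waiter; returns false if nobody was queued.
            fn signal(&self) -> bool {
                while let Ok(signal_end) = self.head.try_recv() {
                    // A dead waiter's receiving end is gone, so send fails; try the next one.
                    if signal_end.send(()).is_ok() {
                        return true;
                    }
                }
                false
            }

            // Wake everyone currently queued, returning how many woke up.
            fn broadcast(&self) -> usize {
                let mut count = 0;
                while let Ok(signal_end) = self.head.try_recv() {
                    if signal_end.send(()).is_ok() {
                        count += 1;
                    }
                }
                count
            }

            // Enqueue the caller, who then blocks by recv()-ing on the returned end.
            fn wait_end(&self) -> Receiver<()> {
                let (signal_end, wait_end) = channel();
                self.tail.send(signal_end).expect("wait queue closed");
                wait_end
            }
        }

        fn main() {
            let q = WaitQueue::new();
            let wait = q.wait_end();
            assert!(q.signal());   // wakes the single queued waiter
            wait.recv().unwrap();  // the waiter observes the wakeup
            assert!(!q.signal());  // nobody left to wake
        }
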
  • Raw Semaphores
    We have a way to express order and waiting, now to build some actual *synchronization*.

        struct Sem<Q>(UnsafeArc<SemInner<Q>>);

        struct SemInner<Q> {
            lock: LowLevelMutex,
            count: int,
            waiters: WaitQueue,
            // Can be either unit or another waitqueue. Some sems shouldn't
            // come with a condition variable attached, others should.
            blocked: Q
        }

  • Semaphore Implementation

        impl<Q: Send> Sem<Q> {
            pub fn access<U>(&self, blk: || -> U) -> U {
                (|| {
                    self.acquire();
                    blk()
                }).finally(|| {
                    self.release();
                })
            }

            unsafe fn with(&self, f: |&mut SemInner<Q>|) {
                let Sem(ref arc) = *self;
                let state = arc.get();
                let _g = (*state).lock.lock(); // unlock????
                f(cast::transmute(state));
            }

  • Acquiring a semaphore (P)

            pub fn acquire(&self) {
                unsafe {
                    let mut waiter_nobe = None;
                    self.with(|state| {
                        state.count -= 1;
                        if state.count < 0 {
                            // Create waiter nobe, enqueue ourself, and tell
                            // outer scope we need to block.
                            waiter_nobe = Some(state.waiters.wait_end());
                        }
                    });
                    // Need to wait outside the exclusive.
                    if waiter_nobe.is_some() {
                        let _ = waiter_nobe.unwrap().recv();
                    }
                }
            }

  • Releasing a Semaphore (V)

            pub fn release(&self) {
                unsafe {
                    self.with(|state| {
                        state.count += 1;
                        if state.count <= 0 {
                            state.waiters.signal();
                        }
                    })
                }
            }
        }

  • Filling in the last pieces

        impl Sem<~[WaitQueue]> {
            fn new_and_signal(count: int, num_condvars: uint) -> Sem<~[WaitQueue]> {
                let mut queues = ~[];
                for _ in range(0, num_condvars) {
                    queues.push(WaitQueue::new());
                }
                Sem::new(count, queues)
            }
        }

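    A compact sketch of the same counting semaphore in current Rust, built on std::sync::Mutex and Condvar instead of hand-rolled wait queues. Semaphore here is a made-up type, not a std API, and unlike the slides' finally-based access it does not release on panic:

        use std::sync::{Condvar, Mutex};

        struct Semaphore {
            count: Mutex<isize>,
            cond: Condvar,
        }

        impl Semaphore {
            fn new(count: isize) -> Semaphore {
                Semaphore { count: Mutex::new(count), cond: Condvar::new() }
            }

            // P: block until a permit is available, then take it.
            fn acquire(&self) {
                let mut count = self.count.lock().unwrap();
                while *count <= 0 {
                    count = self.cond.wait(count).unwrap();
                }
                *count -= 1;
            }

            // V: return a permit and wake one waiter.
            fn release(&self) {
                *self.count.lock().unwrap() += 1;
                self.cond.notify_one();
            }

            // The slides' `access`: run `blk` with the semaphore held.
            fn access<U>(&self, blk: impl FnOnce() -> U) -> U {
                self.acquire();
                let result = blk();
                self.release();
                result
            }
        }

        fn main() {
            let sem = Semaphore::new(1);
            let answer = sem.access(|| 42);
            assert_eq!(answer, 42);
        }
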
  • And more?
    On top of these primitives, as we have seen in class, every other synchronization primitive can be constructed. In particular, we also provide starvation-free Reader-Writer locks, Barriers, and Copy-on-Write Arcs.
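    Their modern counterparts live in std::sync. A brief usage sketch (note: today's std RwLock does not promise the starvation-freedom the slide mentions; fairness depends on the OS primitives):

        use std::sync::{Arc, Barrier, RwLock};
        use std::thread;

        fn main() {
            let table = Arc::new(RwLock::new(vec![0u32; 8]));
            let barrier = Arc::new(Barrier::new(4));

            let handles: Vec<_> = (0..4)
                .map(|i| {
                    let table = Arc::clone(&table);
                    let barrier = Arc::clone(&barrier);
                    thread::spawn(move || {
                        table.write().unwrap()[i] = i as u32;     // exclusive writer
                        barrier.wait();                           // wait until everyone has written
                        table.read().unwrap().iter().sum::<u32>() // many concurrent readers
                    })
                })
                .collect();

            for h in handles {
                assert_eq!(h.join().unwrap(), 6);
            }
        }
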
  • Thank You Thanks for your time!