Rust Synchronization Primitives

These are slides for a presentation I gave to my OS class. It was mostly me talking, using the code to drive the examples and show how Rust approaches the problems of synchronization. Not super detailed; it doesn't get into the meaty stuff (Condvar, mutex::Mutex, etc.).

Transcript

  • 1. Rust - Synchronization and Concurrency
     Safe synchronization abstractions and their implementation
     Corey (Rust) Richardson, February 17, 2014
  • 2. What is Rust? Rust is a systems language, aimed at replacing C++, with the following design goals, in roughly descending order of importance:
     - Zero-cost abstraction
     - Easy, safe concurrency and parallelism
     - Memory safety (no data races)
     - Type safety (no willy-nilly casting)
     - Simplicity
     - Compilation speed
  • 3. Concurrency model
     - "Tasks" as unit of computation
     - No observable shared memory
     - No race conditions! †
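     A minimal sketch of that model in today's terms: the slides use the pre-1.0 task and Chan/Port APIs, so this example substitutes modern std::thread and std::sync::mpsc to show "no shared memory, pass messages instead".

     use std::sync::mpsc;
     use std::thread;

     fn main() {
         let (tx, rx) = mpsc::channel();

         // The spawned thread owns `tx`; the value it sends is moved, not shared.
         let handle = thread::spawn(move || {
             tx.send("hello from another task").unwrap();
         });

         // The parent receives the value; the two threads never alias it.
         println!("{}", rx.recv().unwrap());
         handle.join().unwrap();
     }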
  • 4. No race conditions? How can we avoid race conditions?
     - A type system which enables safe sharing of data
     - Careful design of concurrency abstractions
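     A sketch of what "the type system enables safe sharing" looks like in modern Rust (the names here are today's std, not the 2014 APIs on the slides): data crossing threads must be Send, and shared mutation has to go through a lock.

     use std::sync::{Arc, Mutex};
     use std::thread;

     fn main() {
         let counter = Arc::new(Mutex::new(0u32));

         let handles: Vec<_> = (0..4)
             .map(|_| {
                 let counter = Arc::clone(&counter);
                 thread::spawn(move || {
                     // The lock is the only way to reach the data, so no data race.
                     *counter.lock().unwrap() += 1;
                 })
             })
             .collect();

         for h in handles {
             h.join().unwrap();
         }

         // Swapping Arc for Rc here would not compile: Rc<Mutex<u32>> is not Send,
         // so the compiler rejects moving it into thread::spawn.
         assert_eq!(*counter.lock().unwrap(), 4);
     }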
  • 5. Hello World
     fn main() {
         println("Hello, world!");
     }
  • 6. UnsafeArc Unsafe data structure. Provides atomic reference counting of a type. Ensures memory does not leak.
     pub struct UnsafeArc<T> {
         priv data: *mut ArcData<T>
     }
     struct ArcData<T> {
         count: AtomicUint,
         data: T
     }
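     In today's Rust, std::sync::Arc plays this role: clones bump an atomic count, drops decrement it, and the data is freed when the count reaches zero. A small sketch of that behaviour, using the modern API rather than the UnsafeArc shown above:

     use std::sync::Arc;

     fn main() {
         let a = Arc::new(String::from("shared"));
         assert_eq!(Arc::strong_count(&a), 1);

         let b = Arc::clone(&a);      // atomic increment
         assert_eq!(Arc::strong_count(&a), 2);

         drop(b);                     // atomic decrement
         assert_eq!(Arc::strong_count(&a), 1);
         // When `a` goes out of scope the count hits 0 and the String is freed.
     }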
  • 7. UnsafeArc cont.
     fn new_inner<T>(data: T, initial_count: uint) -> *mut ArcData<T> {
         let data = box ArcData { count: AtomicUint::new(initial_count), data: data };
         cast::transmute(data)
     }
     impl<T: Send> UnsafeArc<T> {
         pub fn new(data: T) -> UnsafeArc<T> {
             unsafe { UnsafeArc { data: new_inner(data, 1) } }
         }
         pub fn new2(data: T) -> (UnsafeArc<T>, UnsafeArc<T>) {
             unsafe {
                 let ptr = new_inner(data, 2);
                 (UnsafeArc { data: ptr }, UnsafeArc { data: ptr })
             }
         }
         // (impl continues on the next slide)
  • 8. UnsafeArc cont.
         pub fn get(&self) -> *mut T {
             unsafe {
                 // problems?
                 assert!((*self.data).count.load(Relaxed) > 0);
                 return &mut (*self.data).data as *mut T;
             }
         }
         pub fn get_immut(&self) -> *T {
             unsafe {
                 // problems?
                 assert!((*self.data).count.load(Relaxed) > 0);
                 return &(*self.data).data as *T;
             }
         }
         pub fn is_owned(&self) -> bool {
             unsafe {
                 // problems?
                 (*self.data).count.load(Relaxed) == 1
             }
         }
     }
  • 9. UnsafeArc cloning
     impl<T: Send> Clone for UnsafeArc<T> {
         fn clone(&self) -> UnsafeArc<T> {
             unsafe {
                 let old_count = (*self.data).count
                     .fetch_add(1, Acquire);
                 //              ^~~~~~~ Why?
                 assert!(old_count >= 1);
                 return UnsafeArc { data: self.data };
             }
         }
     }
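     The interesting question on this slide is the memory ordering on the refcount. As a point of comparison, today's std::sync::Arc uses Relaxed for the increment on clone (the cloner already has access to the data) and Release for the decrement on drop, paired with an Acquire fence before freeing, so writes made through other handles are visible to whichever thread drops last. A toy sketch of that convention (illustrative names, not the slide's code):

     use std::sync::atomic::{fence, AtomicUsize, Ordering};

     struct ToyCount {
         count: AtomicUsize,
     }

     impl ToyCount {
         fn clone_handle(&self) {
             // Relaxed is enough here: only the count itself must be atomic.
             let old = self.count.fetch_add(1, Ordering::Relaxed);
             assert!(old >= 1);
         }

         // Returns true if the caller dropped the last handle and may free the data.
         fn drop_handle(&self) -> bool {
             if self.count.fetch_sub(1, Ordering::Release) != 1 {
                 return false;
             }
             // Synchronize with all earlier Release decrements before deallocating.
             fence(Ordering::Acquire);
             true
         }
     }

     fn main() {
         let c = ToyCount { count: AtomicUsize::new(1) };
         c.clone_handle();
         assert!(!c.drop_handle());
         assert!(c.drop_handle()); // last one out would free the allocation
     }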
  • 10. Adding Safety Arc: wraps UnsafeArc, provides read-only access.
     pub struct Arc<T> {
         priv x: UnsafeArc<T>
     }
     impl<T: Freeze + Send> Arc<T> {
         pub fn new(data: T) -> Arc<T> {
             Arc { x: UnsafeArc::new(data) }
         }
         pub fn get<'a>(&'a self) -> &'a T {
             unsafe { &*self.x.get_immut() }
         }
     }
  • 11. Mutexes?
     pub struct Mutex {
         priv sem: Sem<~[WaitQueue]>
     }
     impl Mutex {
         pub fn new() -> Mutex {
             Mutex::new_with_condvars(1)
         }
         pub fn new_with_condvars(num: uint) -> Mutex {
             Mutex { sem: Sem::new_and_signal(1, num) }
         }
         pub fn lock<U>(&self, blk: || -> U) -> U {
             // magic?
             (&self.sem).access(blk)
         }
         pub fn lock_cond<U>(&self, blk: |c: &Condvar| -> U) -> U {
             (&self.sem).access_cond(blk)
         }
     }
  • 12. Mutexes! Mutexes in Rust are implemented on top of semaphores. No 'unlock' operation? Closures!
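     Modern std::sync::Mutex answers the same "where is unlock?" question with an RAII guard instead of a closure: the lock is released when the guard goes out of scope. A small sketch:

     use std::sync::Mutex;

     fn main() {
         let m = Mutex::new(vec![1, 2, 3]);

         {
             let mut guard = m.lock().unwrap(); // acquire
             guard.push(4);
         } // guard dropped here => mutex released, no explicit unlock call

         assert_eq!(m.lock().unwrap().len(), 4);
     }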
  • 13. Wait Queues Wait queues provide an ordering when waiting on a lock.
     // Each waiting task receives on one of these.
     type WaitEnd = Port<()>;
     type SignalEnd = Chan<()>;
     // A doubly-ended queue of waiting tasks.
     struct WaitQueue {
         head: Port<SignalEnd>,
         tail: Chan<SignalEnd>
     }
  • 14. Channels and Ports Message passing. Provides a way to send 'Send' data to another task. Very efficient, single-reader, single-writer.
     impl<T: Send> Chan<T> {
         fn send(&self, data: T) { ... }
         fn try_send(&self, data: T) -> bool { ... }
     }
     impl<T: Send> Port<T> {
         fn recv(&self) -> T { ... }
         fn try_recv(&self) -> TryRecvResult<T> { ... }
     }
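     The modern equivalent is std::sync::mpsc; Port and Chan became Receiver and Sender. A sketch of the try_recv behaviour the wait queue below relies on: an empty channel and a channel whose sender has gone away are distinguishable.

     use std::sync::mpsc::{channel, TryRecvError};

     fn main() {
         let (tx, rx) = channel::<()>();

         // Nothing sent yet: the queue is merely empty.
         assert_eq!(rx.try_recv(), Err(TryRecvError::Empty));

         tx.send(()).unwrap();
         assert_eq!(rx.try_recv(), Ok(()));

         drop(tx);
         // Sender dropped: the channel is closed, like a killed waiter's port.
         assert_eq!(rx.try_recv(), Err(TryRecvError::Disconnected));
     }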
  • 15. Wait Queue Implementation Given Ports and Chans, how can we express wait queues?
     impl WaitQueue {
         fn signal(&self) -> bool {
             match self.head.try_recv() {
                 comm::Data(ch) => {
                     // Send a wakeup signal. If the waiter
                     // was killed, its port will
                     // have closed. Keep trying until we
                     // get a live task.
                     if ch.try_send(()) { true } else { self.signal() }
                 }
                 _ => false
             }
         }
  • 16. Wait Queue Impl Cont.
         fn broadcast(&self) -> uint {
             let mut count = 0;
             loop {
                 match self.head.try_recv() {
                     comm::Data(ch) => {
                         if ch.try_send(()) {
                             count += 1;
                         }
                     }
                     _ => break
                 }
             }
             count
         }
  • 17. Wait Queue Impl End
         fn wait_end(&self) -> WaitEnd {
             let (wait_end, signal_end) = Chan::new();
             assert!(self.tail.try_send(signal_end));
             wait_end
         }
     }
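     The same wait-queue idea can be sketched on top of std::sync::mpsc (the names below are illustrative, mirroring the slides rather than quoting any current library): each waiter parks on its own one-shot channel; signal wakes one live waiter, broadcast wakes everyone currently queued.

     use std::sync::mpsc::{channel, Receiver, Sender};

     struct WaitQueue {
         head: Receiver<Sender<()>>,
         tail: Sender<Sender<()>>,
     }

     impl WaitQueue {
         fn new() -> WaitQueue {
             let (tail, head) = channel();
             WaitQueue { head, tail }
         }

         // Enqueue ourselves; the returned receiver blocks until someone signals.
         fn wait_end(&self) -> Receiver<()> {
             let (signal_end, wait_end) = channel();
             self.tail.send(signal_end).expect("queue closed");
             wait_end
         }

         // Wake one waiter, skipping any whose receiver has been dropped.
         fn signal(&self) -> bool {
             while let Ok(ch) = self.head.try_recv() {
                 if ch.send(()).is_ok() {
                     return true;
                 }
             }
             false
         }

         // Wake every queued waiter, returning how many were actually woken.
         fn broadcast(&self) -> usize {
             let mut count = 0;
             while let Ok(ch) = self.head.try_recv() {
                 if ch.send(()).is_ok() {
                     count += 1;
                 }
             }
             count
         }
     }

     fn main() {
         let q = WaitQueue::new();
         let w = q.wait_end();
         assert!(q.signal());   // wakes our waiter
         w.recv().unwrap();
         assert!(!q.signal());  // nobody left to wake
         assert_eq!(q.broadcast(), 0);
     }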
  • 18. Raw Semaphores We have a way to express order and waiting; now to build some actual *synchronization*.
     struct Sem<Q>(UnsafeArc<SemInner<Q>>);
     struct SemInner<Q> {
         lock: LowLevelMutex,
         count: int,
         waiters: WaitQueue,
         // Can be either unit or another waitqueue.
         // Some sems shouldn't come with a condition variable attached,
         // others should.
         blocked: Q
     }
  • 19. Semaphore Implementation
     impl<Q: Send> Sem<Q> {
         pub fn access<U>(&self, blk: || -> U) -> U {
             (|| {
                 self.acquire();
                 blk()
             }).finally(|| {
                 self.release();
             })
         }
         unsafe fn with(&self, f: |&mut SemInner<Q>|) {
             let Sem(ref arc) = *self;
             let state = arc.get();
             let _g = (*state).lock.lock(); // unlock????
             f(cast::transmute(state));
         }
  • 20. Acquiring a semaphore (P)
         pub fn acquire(&self) {
             unsafe {
                 let mut waiter_nobe = None;
                 self.with(|state| {
                     state.count -= 1;
                     if state.count < 0 {
                         // Create waiter nobe, enqueue ourself, and tell
                         // outer scope we need to block.
                         waiter_nobe = Some(state.waiters.wait_end());
                     }
                 });
                 // Need to wait outside the exclusive.
                 if waiter_nobe.is_some() {
                     let _ = waiter_nobe.unwrap().recv();
                 }
             }
         }
  • 21. Releasing a Semaphore (V)
         pub fn release(&self) {
             unsafe {
                 self.with(|state| {
                     state.count += 1;
                     if state.count <= 0 {
                         state.waiters.signal();
                     }
                 })
             }
         }
     }
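     Today's std has no counting semaphore, so here is a minimal sketch of the same acquire/release (P/V) logic built on Mutex + Condvar instead of the slides' channel-per-waiter scheme; the names are illustrative only, and the closure-based access below omits the finally guard the slides use for panic safety.

     use std::sync::{Condvar, Mutex};

     struct Semaphore {
         count: Mutex<isize>,
         cond: Condvar,
     }

     impl Semaphore {
         fn new(count: isize) -> Semaphore {
             Semaphore { count: Mutex::new(count), cond: Condvar::new() }
         }

         // P: block until a unit of the resource is available, then take it.
         fn acquire(&self) {
             let mut count = self.count.lock().unwrap();
             while *count <= 0 {
                 count = self.cond.wait(count).unwrap();
             }
             *count -= 1;
         }

         // V: return a unit and wake one waiter, if any.
         fn release(&self) {
             *self.count.lock().unwrap() += 1;
             self.cond.notify_one();
         }

         // Closure-based access, as on the slides, without panic handling.
         fn access<U>(&self, blk: impl FnOnce() -> U) -> U {
             self.acquire();
             let result = blk();
             self.release();
             result
         }
     }

     fn main() {
         let sem = Semaphore::new(1);
         let answer = sem.access(|| 42);
         assert_eq!(answer, 42);
     }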
  • 22. Filling in the last pieces
     impl Sem<~[WaitQueue]> {
         fn new_and_signal(count: int, num_condvars: uint) -> Sem<~[WaitQueue]> {
             let mut queues = ~[];
             for _ in range(0, num_condvars) {
                 queues.push(WaitQueue::new());
             }
             Sem::new(count, queues)
         }
     }
  • 23. And more? On top of these primitives, as we have seen in class, every other synchronization primitive can be constructed. In particular, we also provide starvation-free Reader-Writer locks, Barriers, and Copy-on-Write Arcs.
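     Two of those primitives survive in today's standard library as std::sync::RwLock and std::sync::Barrier; a quick sketch of each (modern API, not the 2014 one the talk covered):

     use std::sync::{Arc, Barrier, RwLock};
     use std::thread;

     fn main() {
         // Many readers xor one writer.
         let table = RwLock::new(vec!["a", "b"]);
         {
             let r1 = table.read().unwrap();
             let r2 = table.read().unwrap(); // concurrent readers are fine
             assert_eq!((r1.len(), r2.len()), (2, 2));
         }
         table.write().unwrap().push("c"); // exclusive writer

         // All threads meet at the barrier before any proceeds.
         let barrier = Arc::new(Barrier::new(3));
         let handles: Vec<_> = (0..3)
             .map(|i| {
                 let barrier = Arc::clone(&barrier);
                 thread::spawn(move || {
                     barrier.wait();
                     i
                 })
             })
             .collect();
         for h in handles {
             h.join().unwrap();
         }
     }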
  • 24. Thank You Thanks for your time!
