Some Rough Fibrous Material
 

A rough guide to Fibers in Ruby 1.9. Given at the February 2010 meeting of LRUG, using the 20x20 format, hence the brevity.

  • I'm going to talk about Fibers in Ruby 1.9. The key word in my title is "rough": I'm coming to the material with little or no practical knowledge. I'm not using 1.9 in my day job, and although I'd probably use it for spare-time hacking, I rarely get round to any because I'm, basically, lazy. I researched this in the last week, so apologies to anyone who already knows this stuff; I might get things wrong, or not cover everything in enough detail.
  • Fibers are an implementation of two important ideas, both of which should sound familiar. The first is co-routines, which are related to the sub-routines you already know. The second is co-operative multitasking, which echoes the pre-emptive multitasking you'll have heard of. We'll take a quick detour to cover each of these in turn, then come back to Ruby.
  • Pretty much every method or function you've ever written is a sub-routine: a package of code with an entry point and a single exit point. Exceptions and multiple return paths might make it seem like there are many exit points, but for each single run through the code there's one path: you go in, do something, come out, and that's it. A co-routine is a little different. When invoked it also starts on the first line of code, but it can halt execution and exit before it terminates; later you can re-enter and resume execution from where you left off. Co-routines have been around for a while, but not many languages provide them as a feature, so it's unlikely you'll have written one yet.
  • Here's a simple sub-routine example. When you call a method, the flow of control enters it and is trapped there until the method terminates: with an explicit return, with an exception, or by running off the last executable statement of the code path. Only then is the flow of control released to the caller. The only way back into the method is to go back to the start by calling it again.
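    A minimal Ruby sketch of that single-entry, single-exit flow (the method and its name are mine, not from the slides):

      # Control enters at the top and is only released at the return.
      def sum_to(n)
        total = 0
        (1..n).each { |i| total += i }
        total            # control goes back to the caller here, for good
      end

      puts sum_to(10)    # => 55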
  • Once you exit a sub-routine, the door is closed; you can't go back inside the way you came out. That invocation is dead. If you want to re-run it, your only option is to re-invoke it and enter at the start. This sets up a fresh stack frame, so nothing is shared between this invocation and any previous or future ones, except what you pass in. Depending on your code, that can be expensive.
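    A small sketch of what that means in practice (example mine): local state is rebuilt on every invocation, so nothing persists between calls.

      # Each call starts from scratch; the counter never survives a return.
      def next_number
        n = 0
        n += 1
        n
      end

      puts next_number   # => 1
      puts next_number   # => 1 again; the previous invocation's state is gone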
  • How are co-routines different? The start is the same: you invoke the method, the flow of control enters, and statements execute. But instead of terminating, the co-routine can exit with a yield, handing the flow of control back to the caller. Later the caller can resume it, re-entering the co-routine at the exact point where it left off: same stack, same everything, and execution simply continues.
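    In Ruby 1.9 this is what Fiber gives you, via Fiber.yield and Fiber#resume. A minimal sketch (the counter is my example, not from the slides):

      counter = Fiber.new do
        n = 0
        loop do
          n += 1
          Fiber.yield n   # hand control back, keeping our stack alive
        end
      end

      puts counter.resume   # => 1; runs the fiber up to the yield
      puts counter.resume   # => 2; re-enters at the yield, n still intact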
  • What makes co-routines even more interesting is that this isn't a one-time deal: the routine can yield to the caller, and the caller can resume it, many times, until the co-routine comes to a natural termination. Better still, we can yield from multiple places, not always the same one, and resume knows which yield to go back to. Having yielded at a given point we resume at exactly that point, though; we can't choose some other yield point to re-enter at.
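    A sketch with several yield points (names mine); each resume picks up at whichever yield the fiber stopped at last:

      journey = Fiber.new do
        Fiber.yield :first_stop
        Fiber.yield :second_stop
        :destination            # natural termination
      end

      puts journey.resume   # => first_stop
      puts journey.resume   # => second_stop
      puts journey.resume   # => destination; resuming again raises FiberError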
  • The second idea is multitasking. You should be familiar with pre-emptive multitasking, as it's the standard model of concurrency used by most Thread implementations: several tasks run at the same time, scheduled by the OS or language runtime. The gotcha is access to shared objects; because tasks don't know when they'll be scheduled, they need locks. Fibers use the co-operative model instead. No two tasks run at the exact same time, and it's up to the programmer to decide when each task gives up control and who to pass control on to.
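    A sketch of that co-operative hand-off (task names mine): the "scheduler" is just our own code, and control changes hands only where we say so.

      alpha = Fiber.new do
        puts "alpha: part one"
        Fiber.yield             # voluntarily give up control
        puts "alpha: part two"
      end

      beta = Fiber.new do
        puts "beta: part one"
        Fiber.yield
        puts "beta: part two"
      end

      alpha.resume              # we decide who runs, and when
      beta.resume
      alpha.resume
      beta.resume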
  • Take two threads, alpha and beta, on a single-core machine. The scheduler gives each some CPU time to do its work, but neither knows when in its life-cycle that will happen. So when alpha wants the shared data it has to lock it, to stop it changing while the CPU is elsewhere. Unfortunately that means the shared data can still be locked when beta gets the CPU; beta can't use it, and probably can't do anything at all: wasted effort.
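In Ruby that dance looks something like this; a minimal sketch of my own, not from the slides:

shared = []
lock   = Mutex.new

alpha = Thread.new do
  10.times do
    # we can be pre-empted at any moment, so we must lock before touching shared
    lock.synchronize { shared << :alpha }
  end
end

beta = Thread.new do
  10.times do
    # if alpha holds the lock when we get scheduled, we just sit and wait
    lock.synchronize { shared << :beta }
  end
end

[alpha, beta].each(&:join)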
  • with co-op
fibers, not threads
no external scheduler
when a fiber has the CPU it has the CPU
can use shared data without a lock
nothing else is running
when done
or done enough
transfers the CPU away
another fiber picks up and starts work

---

On the other hand, in co-operative multitasking, the fiber itself has explicit control of when the CPU transfers away. This means it doesn't need to lock anything, because it's safe in the knowledge that no other fiber will be running unless it says it's done.

When the fiber is done (or happy that it's done enough for now), it stops accessing the shared data and simply transfers control away to some other fiber.
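The same producer/consumer shape with fibers needs no Mutex at all; again a sketch of mine, not a slide:

shared = []

beta = Fiber.new do
  loop do
    shared << :beta  # safe: nothing else can possibly be running right now
    Fiber.yield      # explicitly hand control back to whoever resumed us
  end
end

alpha = Fiber.new do
  5.times do
    shared << :alpha  # no lock needed here either
    beta.resume       # we decide exactly when beta gets the CPU
  end
end

alpha.resume
# shared is now [:alpha, :beta, :alpha, :beta, ...] with no locking in sight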
  • science over. code now.

simple example of creating a Fiber
familiar if you've worked with threads
the block is the workload for the fiber
illustrates 3 things...
Fiber.yield is the exit point
shared stack (local var i kept between yields)
infinite loop (!)

---

So, I've bored you with the science part; how about looking at some code?

If you've used threads in Ruby this should be familiar. You create a Fiber by passing a block to a constructor. The block is the "workload" for that Fiber; in this case an infinite loop to generate increasingly excited hellos to the LRUG crowd. Don't worry about that pesky "infinite" though...
  • after creating a fiber
like a thread, it's not running
call "resume" (chicken-before-egg)
makes the fiber run from the start to Fiber.yield
which returns a value
each successive .resume goes back in
resumes from the Fiber.yield
with the previous stack intact

----

So, when you create a Fiber, again just like a thread, it won't do anything until you ask it to. To start it you call the somewhat chicken-before-the-egg "resume" method. This causes hello_lrug to run until it hits that Fiber.yield, which pauses execution of the Fiber and returns the value passed to it. You also use "resume" to re-enter the Fiber to do some more work.
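The whole life-cycle in miniature (my own sketch, not a slide):

f = Fiber.new do
  Fiber.yield :first  # pause here; :first becomes resume's return value
  :second             # the block's final value is returned by the last resume
end

f.resume #=> :first
f.resume #=> :second
f.resume # raises FiberError: the fiber has terminated ("dead fiber called")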
  • 3rd interesting thing
that pesky infinite loop
it's OK
the Fiber only runs up to .yield
then exits
the CPU is out and nothing is running
we only call resume 5 times, so it never gets a 6th run
no longer need to think about explicit termination
lazy eval = super easy

----

So although we gave hello_lrug a workload that *will never end*, it's not a problem, because we use the yield and resume methods to explicitly schedule when hello_lrug runs. If we only want to run it 5 times and never come back to it, that's OK; it won't eat up CPU time. This gives us an interesting new way to think about writing functions: if they don't have to end, lazy evaluation becomes super easy...
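For instance (a sketch of mine): an endless stream of numbers that only costs CPU when you ask for the next one:

naturals = Fiber.new do
  n = 0
  loop do
    Fiber.yield n  # hand back one value, then go dormant
    n += 1
  end
end

3.times { puts naturals.resume }  # prints 0, 1, 2; the loop never "runs away"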
  • Fibonacci

standard fib method using recursion
can be hard to get your head around
have to worry about termination clauses
can be expensive
(this impl will calc fib(1) several times)

---

Hey, so what's a talk without Fibonacci?

Here's the standard implementation for generating a number in the Fibonacci sequence in Ruby. It uses recursion, which is something you have to get your head around before you see how it works (and that can be hard sometimes), and you have to take care to have correct guard clauses to make sure you terminate the recursion.
  • same thing, with fibers
understanding co-routines is probably hard too
both have a mental roadblock
but the definition is more natural

advantage, unlike recursion:
getting fib 6 gives us fib 1 - 5 as well
recursion calculates them,
but doesn't share

---

Here's the Fibrous way of doing it. Again, there is a fundamental concept you need to understand first (co-routines), but I do think this is a slightly more natural way of defining the sequence.

The difference is that to get the 6th number, we call resume on the fiber 6 times, with the side-effect of being handed all 5 preceding numbers in the sequence along the way. The recursive version calculates many of them too, but never shares them with us.
  • lazy eval = fibers!
most use, I think

it's where they're used in the 1.9 stdlib
.each, .map &c without a block = enumerator
can be chained
under the hood it's all done with fibers

--

This sort of lazy evaluation is where Fibers shine, and probably where they'll see the most use.

And, in fact, it's exactly this sort of thing that Fibers are being used for in the Ruby 1.9 stdlib. Things like .each and .map have been reworked so that without a block they now return enumerators that you can chain together. And under the hood these enumerators are implemented using fibers.
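A quick illustration (mine, not a slide) of both halves of that claim:

e = [1, 2, 3].each  # no block, so we get an Enumerator back
e.next #=> 1        # external iteration; in MRI 1.9 this is driven by a fiber
e.next #=> 2

%w[L R U G].each_with_index.map { |c, i| "#{c}#{i}" }
#=> ["L0", "R1", "U2", "G3"]  # chained enumerators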
  • and in the real world?

(I dunno)
github search
plenty of results

on closer inspection
most are forks/copies of the rubyspecs for fibers
(a good resource to read
if you want to
know ruby)

the first non-rubyspec result though...

---

So, that's all a bit theoretical. What real use are fibers?

Well, I don't know, so I did a quick search on GitHub, and to my surprise there were actually plenty of results.

But... on closer inspection, the first few pages are entirely forks and copies of the Ruby specs for fibers. Which, by the way, I totally recommend reading if you want to get an idea of how something in Ruby actually works.

The first result that wasn't a rubyspec requires a detour first...
  • another quick detour
if you've done it
you know
evented programming is different

example: reading a webpage
the normal way is simple, call a couple of methods

evented - much more complex:
define state-recording objects
use callback methods
you gain performance & flexibility
but you lose simplicity and familiarity

---

Well... another quick detour. If you've ever done any evented programming you'll know that the code looks very different from normal code.

Here's a simplified example of how to read a webpage. The normal case is really simple: you just call a couple of methods.

The evented case, not so much. You have to rely on callback methods and keep some object around to hold the result of those callbacks. What you lose in a simple API you gain in performance and flexibility, but it's hard to get your head around.
  • that first non-rubyspec github hit?

Neverblock - fibers + EventMachine + async libs
gives you a sync-style API for async programming
you get the performance (not the flexibility)
without changing much code
or what the code feels like
just replace blocking libraries with Neverblock

not going to cover it in detail. 1 more slide!

--

The first non-rubyspec result on GitHub that uses fibers was: Neverblock.

This library uses Fibers, EventMachine and other non-blocking APIs to present you with an API for doing asynchronous programming that looks remarkably synchronous. So you don't have to change your code to get the benefit of asynchronous performance.

I won't go into details (I only have 1 more slide!), but you should check it out if you're interested.
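The underlying trick looks roughly like this. This is a sketch of my own, assuming the em-http-request gem, and not actual Neverblock code: the fiber yields while EventMachine waits on the socket, and the callback resumes it.

require 'eventmachine'
require 'em-http-request'  # assumed gem, not part of the stdlib
require 'fiber'

# looks synchronous to the caller, but never blocks the reactor
def fetch(url)
  fiber = Fiber.current
  http = EventMachine::HttpRequest.new(url).get
  http.callback { fiber.resume(http.response) }
  http.errback  { fiber.resume(nil) }
  Fiber.yield  # park here until one of the callbacks resumes us
end

EventMachine.run do
  Fiber.new do
    page = fetch('http://lrug.org/')
    puts page ? page[0, 40] : 'request failed'
    EventMachine.stop
  end.resume
end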
  • plenty I didn't cover

the remaining API
transfer - yield this fiber + resume another fiber in one go
don't go back to the caller
the others are simple enough

lightweight - less memory than the same number of threads

single core only (all fibers run in the same thread)

--

Last slide. There's loads I didn't cover, but I think I got the basics.

There are 3 remaining API methods (apart from resume and yield). Transfer is like yield, but instead of giving the CPU back to the caller, you give it to the Fiber you called transfer on. The other two are simple enough.

Fibers are supremely lightweight: spinning up fibers takes much less memory than spinning up the same number of threads (there are good comparisons around).

And they're really a single-core solution: all your fibers run within a single thread.

I'll put a resource slide up when I post these slides...
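A quick sketch of those extras (mine, not a slide); note that in 1.9 you have to require 'fiber' to get them:

require 'fiber'  # transfer, alive? and Fiber.current live here in 1.9

root = Fiber.current  # grab the root fiber so others can hand control back

b = Fiber.new do
  puts 'in b'
  root.transfer  # straight back to the root fiber, not to whoever started us
end

a = Fiber.new do
  puts 'in a'
  b.transfer     # pass the CPU directly to b instead of yielding to our caller
end

a.transfer       # prints "in a", then "in b", then we carry on here

f = Fiber.new { :done }
f.alive?  #=> true, it hasn't been run yet
f.resume  #=> :done
f.alive?  #=> false, it has terminated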
  • (...and breathe!)
  • Bonus slide for the internet.

Some Rough Fibrous Material: Presentation Transcript

  • Some rough fibrous material
A 20x20 guide to Fibers in Ruby 1.9
Murray Steele - LRUG February 2010
  • What are fibers? • coroutines • cooperative multitasking
  • Detour #1: subroutines vs. coroutines
    • sub-routines: every function you've ever written; single entry-point & single exit
    • co-routines: single entry-point, multiple exit & re-entry points
  • Detour #1.a-b: subroutines (diagram): the CPU enters a_sub_routine at the top, runs through the body, and leaves again at the single return; once it has returned, that invocation is over.
  • Detour #1.c-d: coroutines (diagram): the CPU enters a_co_routine at the top, but can leave early at any of several yield points and be sent back in later with resume, picking up exactly where it left off.
  • Detour #2: multitasking
    • pre-emptive multitasking: standard thread model; locking & state issues
    • co-operative multitasking: programmer control
  • Detour #2.a: pre-emptive (diagram): a timeline of thread α and thread β, each handed CPU time arbitrarily by the scheduler while both touch the shared data.
  • Detour #2.b: co-operative (diagram): the same timeline with fiber α and fiber β, but control only moves when a fiber explicitly yields, so access to the shared data never overlaps.
  • Back on track: finally some code

hello_lrug = Fiber.new do
  i = 1
  loop do
    Fiber.yield "Hello LR#{'U' * i}G!"
    i += 1
  end
end
  • Using fibers

hello_lrug.resume #=> "Hello LRUG!"

4.times { puts hello_lrug.resume }
# outputs:
# "Hello LRUUG!"
# "Hello LRUUUG!"
# "Hello LRUUUUG!"
# "Hello LRUUUUUG!"
  • Using fibers means never having to say you’re finished
  • Detour #1.1.2.3.5

def fib(n)
  if (0..1).include? n
    n
  elsif n > 1
    fib(n - 1) + fib(n - 2)
  end
end

puts fib(6)
  • Detour #1.1.2.3.5.8

fib = Fiber.new do
  x, y = 0, 1
  loop do
    Fiber.yield y
    x, y = y, x + y
  end
end

6.times { puts fib.resume }
  • What use is a fiber?

lrug = ['L', 'R', 'U', 'G']
enum = lrug.map.with_index
enum.each { |*l| p l }
# outputs:
# ["L", 0]
# ["R", 1]
# ...
  • What practical use is a fiber?
  • Detour.do { |d| talk << 4 }

# Non Evented
open('http://lrug.org/').read #=> '<html....'

# Evented
class HTTPClient
  def receive_body(data)
    @data << data
  end
end

http_client = HTTPClient.new
EventMachine::run do
  EventMachine::connect 'lrug.org', 80, http_client
end
http_client.data #=> '<html....'
  • So…what is a practical use for a fiber? http://www.espace.com.eg/neverblock/
  • What I didn’t say
    • The rest of the API
      • fiber_instance.transfer - invoke on a Fiber to pass control to it, instead of yielding to the caller
      • fiber_instance.alive? - can we safely resume this Fiber, or has it terminated?
      • Fiber.current - get the current Fiber so we can play with it
    • Lightweight - less memory overhead than threads
    • The downsides - single core only really
  • It’s over! Thanks for listening, any questions?
  • Resources
    • http://delicious.com/hlame/fibers (most of the stuff I researched is here)
    • http://github.com/oldmoe/neverblock
    • http://en.wikipedia.org/wiki/Fiber_(computer_science)
    • http://en.wikipedia.org/wiki/Coroutine