Some Rough Fibrous Material
A rough guide to Fibers in Ruby 1.9. Given at the February 2010 meeting of LRUG, using the 20x20 format, hence the brevity.

  • I'm going to talk about Fibers in Ruby 1.9.

    Keyword is rough

    no knowledge
    not ruby 1.9 dayjob
    nor in spare time (lazy)
    researched in last week
    apologies for any omissions (there will be some)
    if you know it, don’t ask mean questions
    sorry


    --

    The key word in my title is "rough"; I'm coming to the material with little or no practical knowledge. I'm not using 1.9 in my day job, and although I probably would use it for any spare-time hacking, it's very rare that I get down to any as I'm, basically, lazy.

    So apologies to anyone who knows this stuff already; I might get things wrong, or not cover everything in enough detail. I'm sorry.
  • 2 ideas that should sound familiar:

    co-routines & co-operative multitasking
    co-routines = familiar bcz sub-routines
    co-operative multitasking = familiar bcz pre-emptive multitasking

    detour to cover each of these ideas then onto ruby





    --

    Fibers are an implementation of 2 important ideas:

    1. The first idea is “co-routines” (and this should sound familiar, as you’ll have heard of sub-routines which are related)
    and
    2. The second idea is “co-operative multitasking” (and again, you should recognise this as similar-sounding to “pre-emptive multitasking”).

    So, we'll take a quick detour to cover these in turn and then we'll come back to Ruby.
  • sub-routine invoke =
    start on first line, proceed to end, STOP
    go-in, come out

    co-routines are different
    start on first line, eventually proceed to end, STOP
    but in between = can take a detour, come back later

    been around a bit
    but hardly implemented




    --

    So pretty much every method or function you’ve ever written is a sub-routine. When you invoke them you start at the first line and run through them till they terminate and give you their result.

    A co-routine is a little bit different. When you invoke them they also start on the first line of code but they can halt execution and exit before they terminate. Later you can then re-enter and resume execution from where you left off.

    It’s also unlikely you’ll have written one yet, as, despite being around for a while, not many languages provide them as a feature.
    -----

    Every method or function you write is a sub-routine. It's a package of code that has an entry point, and a single exit point. Admittedly things like exceptions and multiple return paths might confuse this and make it seem like you have many exit points, but for each *single run through the code* there's one path: you go in, do something and you come out and that's it.
  • To clarify:

    invoke subroutine-style method.
    CPU enters method
    bounce around
    until execution stops
    with return
    (or exception)
    (or implicit last statement)
    and release CPU to caller


    ---

    So, here’s a simple subroutine example.

    When you call a method the flow of control enters the function, and is trapped until the method terminates.

    Once the method terminates, here with an explicit return, but it could be an exception, or simply stopping after the last executable statement of the code path, the flow of control is finally released to the caller.

    The only way to go back into the function is to go back to the start by calling it again.
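
    The slide's own code isn't preserved in this transcript, but a minimal sketch of the sub-routine flow described above might look like this (the method and names are illustrative, not from the slides):

      def greet(name)          # flow of control enters at the first line
        message = "Hello, #{name}!"
        return message         # explicit return releases control to the caller
      end

      greet("LRUG")            # => "Hello, LRUG!"
      greet("LRUG")            # re-running means re-invoking from the start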
  • once exited,
    no going back inside
    that method is dead

    want to re-run?
    have to re-invoke
    create new copy of stack (expensive)
    and enter at start
    nothing shared (‘cept pass-ins)


    ---


    So, once you exit a sub-routine, the door is closed; you can’t return to it the way you came out.

    To re-use the sub-routine, your only option is to re-invoke it and go back to the first line of code. This creates a new copy of the entire stack, so there’s nothing shared between this invocation and the previous ones, or any future ones. Depending on your code, this could be expensive.
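
    Again the slide's code is lost, but a small sketch of the "nothing shared" point might be (the method is hypothetical):

      def counter
        count = 0     # fresh stack frame: count starts from scratch each call
        count += 1
      end

      counter   # => 1
      counter   # => 1, nothing survives between invocations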
  • how are co-routines different?

    start’s same
    invoke method
    CPU trapped
    execute statements
    exit with yield
    gives caller back CPU

    caller later resume
    re-enter co-routine
    at EXACT POINT WHERE WE LEFT OFF
    same stack, same everything
    continue exec


    ---

    And here’s a similar example for a co-routine.

    It starts pretty much the same way. The flow of control enters the method and is trapped until it provides a result, this time with a yield. However, unlike before, we can resume the method and send the flow of control back in to continue working, picking up where we were when we left off.
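
    The slide's code isn't in this transcript either, but the flow described maps directly onto Ruby 1.9's Fiber API; a minimal sketch (the strings are illustrative):

      fiber = Fiber.new do
        puts "step 1"
        Fiber.yield        # exit here, handing control back to the caller
        puts "step 2"      # resume re-enters at exactly this point
      end

      fiber.resume   # prints "step 1", then comes out at the yield
      fiber.resume   # picks up where we left off: prints "step 2"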
  • more interesting is...
    not a one-time deal
    yield to the caller
    caller resume routine
    many times!

    even more interesting
    yield from multiple places
    and resume knows which yield to go back to


    ---

    What makes co-routines even more interesting is that we can yield and resume as many times as we want, until, of course, the co-routine comes to a natural termination.

    We can also have as many yields as we want; we don’t always have to yield from the same place. Note, though, that having yielded at a given point, we resume at that point; we can’t choose some other yield point to re-enter at.
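
    A small sketch of the multiple-yield behaviour, using Ruby 1.9's Fiber API (the values are illustrative):

      seasons = Fiber.new do
        Fiber.yield "spring"    # first yield point
        Fiber.yield "summer"    # second yield point
        "autumn"                # natural termination
      end

      seasons.resume   # => "spring"
      seasons.resume   # => "summer"
      seasons.resume   # => "autumn"
      seasons.resume   # raises FiberError: the fiber is dead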
  • 2nd idea - multitasking

    1st = thread model
    several running tasks
    OS or lang runtime schedules
    don’t know when so access shared objects = pain (locks)

    Fibers = 2nd
    programmer has control
    choose when in each task to give up CPU
    and who to give it to


    --


    You should be familiar with pre-emptive multitasking as it’s the standard model of concurrency used by most Thread implementations.
    You have several tasks running at the same time, scheduled by the OS or language runtime.
    The gotcha is access to shared objects.

    Fibers, however, use the co-operative model.
    With this, no two tasks run at the exact same time, and it’s up to the programmer to decide when each task gives up control and which task to pass control to.
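
    To make the co-operative model concrete, here's a sketch (not from the slides) of a toy round-robin scheduler built from two fibers; each task decides for itself when to give up control by yielding:

      tasks = [
        Fiber.new { 3.times { |i| puts "alpha #{i}"; Fiber.yield } },
        Fiber.new { 3.times { |i| puts "beta #{i}";  Fiber.yield } }
      ]

      tasks.cycle do |task|
        break if tasks.none?(&:alive?)   # stop once every task has finished
        task.resume if task.alive?       # the program, not the OS, schedules
      end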
  • 2 threads, alpha, beta
    scheduler gives each some CPU time
    for work
    they don’t know when
    so alpha wants shared data
    locks it
    stops changes when CPU elsewhere

    when beta gets the CPU
    if shared data is locked, it can’t use it,
    probably can’t do anything, wasted effort



    ---

    The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don’t know when in their life-cycle this’ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can’t do anything.
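
    For contrast, a minimal sketch of the locking dance threads force on you (the names alpha and beta come from the slide; the rest is illustrative):

      shared = []
      lock   = Mutex.new

      alpha = Thread.new do
        lock.synchronize do
          shared << :from_alpha   # while alpha holds the lock...
          sleep 0.1               # ...beta may get CPU time but can't proceed
        end
      end

      beta = Thread.new do
        lock.synchronize { shared << :from_beta }  # blocks until alpha releases
      end

      [alpha, beta].each(&:join)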
  • 2 threads, alpha, beta
    scheduler gives each some CPU time
    for work
    they don’t know when
    so alpha wants shared data
    locks it
    stops changes when CPU elsewhere

    when beta gets the CPU
    if shared data is locked, it can’t use it,
    probably can’t do anything, wasted effort



    ---

    The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don’t know when in their life-cycle this’ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can’t do anything.
  • 2 threads, alpha, beta
    scheduler gives each some CPU time
    for work
    they don’t know when
    so alpha wants shared data
    locks it
    stops changes when CPU elsewhere

    when beta gets the CPU
    if shared data is locked, it can’t use it,
    probably can’t do anything, wasted effort



    ---

    The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don’t know when in their life-cycle this’ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can’t do anything.
  • 2 threads, alpha, beta
    scheduler gives each some CPU time
    for work
    they don’t know when
    so alpha wants shared data
    locks it
    stops changes when CPU elsewhere

    when beta gets the CPU
    if shared data is locked, it can’t use it,
    probably can’t do anything, wasted effort



    ---

    The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don’t know when in their life-cycle this’ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can’t do anything.
  • 2 threads, alpha, beta
    scheduler gives each some CPU time
    for work
    they don’t know when
    so alpha wants shared data
    locks it
    stops changes when CPU elsewhere

    when beta gets the CPU
    if shared data is locked, it can’t use it,
    probably can’t do anything, wasted effort



    ---

    The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don’t know when in their life-cycle this’ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can’t do anything.
  • 2 threads, alpha, beta
    scheduler gives each some CPU time
    for work
    they don’t know when
    so alpha wants shared data
    locks it
    stops changes when CPU elsewhere

    when beta gets the CPU
    if shared data is locked, it can’t use it,
    probably can’t do anything, wasted effort



    ---

    The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don’t know when in their life-cycle this’ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can’t do anything.
  • 2 threads, alpha, beta
    scheduler gives each some CPU time
    for work
    they don’t know when
    so alpha wants shared data
    locks it
    stops changes when CPU elsewhere

    when beta gets the CPU
    if shared data is locked, it can’t use it,
    probably can’t do anything, wasted effort



    ---

    The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don’t know when in their life-cycle this’ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can’t do anything.
  • with co-op
    fibers, not threads
    no external scheduler
    when fiber has CPU it has CPU
    can use shared data without lock
    nothing else running.
    when done
    or done enough
    transfers CPU away
    other fiber picks up and starts work


    ---


    On the other hand, in co-operative multitasking, the fiber itself has explicit control of when the CPU will transfer away. This means it doesn’t need to lock anything because it’s safe in the knowledge that no other fiber will be running unless it says it’s done.

    When the fiber is done (or happy that it’s done enough for now), it stops accessing the shared data and simply transfers control away to some other fiber.
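
    To make that concrete, here is a minimal sketch (my own, not from the slides) of two fibers co-operating over shared data with no locks, each one explicitly handing the CPU over when it has done enough:

        shared = []

        beta = nil
        alpha = Fiber.new do
          3.times do |n|
            shared << "alpha #{n}"  # safe: nothing else is running right now
            beta.resume             # explicitly hand control to beta
          end
        end

        beta = Fiber.new do
          loop do
            shared << "beta saw #{shared.size} items"
            Fiber.yield             # hand control back to whoever resumed us
          end
        end

        alpha.resume                # drive the pair; alpha finishes after 3 rounds
        p shared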
  • science over. code now.

    simple example of creating Fiber.
    familiar if worked with threads
    block is workload for fiber
    illustrates 3 things...
    Fiber.yield is exit point
    shared stack (local var i same between yields)
    infinite loop (!)


    ---


    So, I’ve bored you with the science part, how about looking at some code?

    If you’ve used threads in ruby this should be familiar. You create a Fiber by passing a block to the constructor. The block is the “work load” for that Fiber. In this case it’s an infinite loop that generates increasingly excited hellos to the LRUG crowd. Don’t worry about that pesky “infinite” though...
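
    (The code on the slide, reproduced from the transcript below:)

        hello_lrug = Fiber.new do
          i = 1
          loop do
            Fiber.yield "Hello LR#{'U' * i}G!"
            i += 1
          end
        end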
  • after create fiber
    like thread, not running
    call “resume” (chicken-before-egg)
    makes fiber run from start to Fiber.yield
    returns value
    each successive .resume goes back in
    resumes from Fiber.yield
    with previous stack intact


    ---


    So, when you create a Fiber, again just like a thread, it won’t do anything until you ask it to. To start it you call the somewhat chicken-before-the-egg “resume” method. This causes hello_lrug to run until it hits that Fiber.yield. This pauses execution of the Fiber and returns the value passed to it. You also use “resume” to re-enter the Fiber to do some more work.
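
    (Again, the slide’s code: each resume runs the fiber up to the next Fiber.yield and hands back the yielded value:)

        hello_lrug.resume #=> "Hello LRUG!"
        4.times { puts hello_lrug.resume }
        # outputs:
        # "Hello LRUUG!"
        # "Hello LRUUUG!"
        # "Hello LRUUUUG!"
        # "Hello LRUUUUUG!"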
  • 3rd interesting thing
    that pesky infinite loop
    it’s ok
    Fiber only runs up to .yield
    then exit
    CPU is out and nothing running
    only call resume 5 times, never get our 6th
    no longer need to think about explicit termination
    lazy eval = super easy


    ---

    So although we gave hello_lrug a workload that *will never end*, it’s not a problem, because we use the yield and resume methods to explicitly schedule when hello_lrug runs. If we only want to run it 5 times and never come back to it, that’s ok; it won’t eat up CPU time. This gives us an interesting new way to think about writing functions: if they don’t have to end, lazy evaluation becomes super easy...
  • Fibonacci

    standard fib method using recursion
    can be hard to get head around
    have to worry about termination clauses
    can be expensive
    (this impl will calc fib(1) several times)

    ---

    Hey, so what’s a talk without Fibonacci?

    Here’s the standard implementation for generating a number in the fibonacci sequence in ruby. It uses recursion, which can be hard to get your head around before you see how it works, and you have to take care to write correct guard clauses to make sure the recursion terminates.
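
    (The slide’s recursive version:)

        def fib(n)
          if (0..1).include? n
            n
          elsif n > 1
            fib(n-1) + fib(n-2)
          end
        end

        puts fib(6) #=> 8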
  • same thing, with fibers
    understanding co-routines is probably hard
    both have mental roadblock
    but the def is more natural

    advantage, unlike recursion
    get fib 6, gives us fib 1 - 5 as well
    recursion calcs,
    but doesn’t share



    ---


    Here’s the Fibrous way of doing it. Again, there is a fundamental concept you need to understand first (co-routines), but I do think this is a slightly more natural way of defining the sequence.

    The difference is that to get the 6th number we have to call resume on the fiber 6 times, with the side-effect of being given all five preceding numbers in the sequence as well. The recursive version calculates them too, but never shares them.
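
    (The slide’s fiber version: resume it as many times as you want numbers:)

        fib = Fiber.new do
          x, y = 0, 1
          loop do
            Fiber.yield y
            x, y = y, x + y
          end
        end

        6.times { puts fib.resume } # 1, 1, 2, 3, 5, 8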
  • lazy eval = fibers!
    most use I think

    is where used in 1.9 stdlib
    .each, .map &c without block = enumerator
    can be chained
    under the hood all done with fibers


    --

    This sort of lazy evaluation is where Fibers shine, and probably where they’ll see the most use.

    And, in fact, it’s exactly this sort of thing that Fibers are being used for in the ruby 1.9 stdlib. Things like .each and .map have been reworked so that without a block they now return enumerators that you can chain together. And under the hood these enumerators are implemented using fibers.
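
    (The slide’s example, lightly adapted to use p so the pairs are visible; calling .map without a block returns an Enumerator, and chaining .with_index makes it yield each element with its index:)

        lrug = ['L', 'R', 'U', 'G']
        enum = lrug.map.with_index  # no block given, so we get an Enumerator back
        enum.each { |*l| p l }
        # outputs:
        # ["L", 0]
        # ["R", 1]
        # ["U", 2]
        # ["G", 3]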
  • and in the real world?

    (I dunno)
    github search
    plenty results

    on closer inspection
    most forks/copies of rubyspec for fibers
    (a good resource to read
    if you want to
    know ruby)

    the first non-rubyspec result though...



    ---

    So, that’s all a bit theoretical. What real use are fibers?

    Well, I don’t know, so I did a quick search on github, and to my surprise there were actually plenty of results.

    But... on closer inspection, the first few pages are entirely forks and copies of the Ruby specs for fibers. Which, by the way, I totally recommend reading if you want to get an idea of how something in ruby actually works.

    The first result that wasn’t a ruby spec requires a detour first...
  • another quick detour
    if you’ve done it
    you know
    evented programming is different

    example reading a webpage
    normal is simple, call a couple of methods

    evented - much more complex.
    define state recording models
    use callback methods
    you gain performance & flexibility
    but you lose simplicity and familiarity


    ---


    Well.. another quick detour. If you’ve ever done any evented programming you’ll know that the code looks very different from normal code.

    Here’s a simplified example of how to read a webpage. For the normal case it’s really simple: you just call a couple of methods.

    The evented case, not so much. You have to rely on callback methods and keep some object around to hold the result of those callbacks. What you lose in API simplicity you gain in performance and flexibility, but it’s harder to get your head around.
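
    To see why fibers help here, here’s a minimal sketch (my own; the helper names are hypothetical, not Neverblock’s real API) of the trick: a fiber pauses itself at Fiber.yield and lets the callback resume it with the result, so the call site reads synchronously:

        require 'fiber'  # for Fiber.current in 1.9

        PENDING = []  # a pretend event loop: callbacks fire "later"

        def fetch_async(url, &callback)  # stand-in for an evented client
          PENDING << lambda { callback.call("<html>... from #{url}") }
        end

        def sync_fetch(url)
          fiber = Fiber.current
          fetch_async(url) { |data| fiber.resume(data) }  # callback resumes us
          Fiber.yield  # pause here; returns whatever resume passes in
        end

        Fiber.new { puts sync_fetch('http://lrug.org/') }.resume
        PENDING.shift.call  # the "event loop" fires the callback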
  • that first non-rubyspec github hit?

    Neverblock - fibers + eventmachine + async libs
    give you sync style API for async programming
    get performance (not flex)
    without changing much code
    or what it feels like
    just replace blocking libraries with neverblock

    not going to cover in detail. 1 more slide!


    --




    The first non-rubyspec result on github that uses fibers was Neverblock.

    This library uses Fibers, Event Machine and other non-blocking APIs to present you with an API for doing asynchronous programming that looks remarkably synchronous. So you don’t have to change your code to get the benefit of asynchronous performance.

    I won’t go into details (I only have 1 more slide!), but you should check it out if you’re interested.
  • plenty I didn’t cover

    remaining API
    transfer - yield this fiber + resume another fiber in one go
    don’t go back to caller
    others simple enough

    lightweight - less mem than same num threads

    single core only (all fibers run in same thread)


    --

    Last slide. There’s loads I didn’t cover, but I think I got the basics.

    3 remaining API methods (apart from resume and yield).
    Transfer is like yield, but instead of giving CPU back to the caller, you give it to the Fiber you called transfer on. The other two are simple enough.

    Supremely lightweight: spinning up fibers takes much less memory than spinning up the same number of threads (there’s a good comparison out there).

    Single core solution really.

    I’ll put a resource slide up when I post these slides....
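
    One footnote worth knowing (a fact about 1.9, with a tiny sketch of my own): transfer, alive? and Fiber.current aren’t loaded by default; you need to require 'fiber' first.

        require 'fiber'  # Fiber#transfer, Fiber#alive? and Fiber.current live here in 1.9

        f = Fiber.new { Fiber.yield }
        puts f.alive?  #=> true  (created but not yet finished)
        f.resume       # run up to the Fiber.yield
        f.resume       # run from the yield to the end of the block
        puts f.alive?  #=> false (terminated; another resume would raise FiberError)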
  • (...and breathe!)
  • Bonus slide for the internet.

Transcript

  • 1. Some rough fibrous material
    A 20x20 guide to Fibers in Ruby 1.9
    Murray Steele - LRUG February 2010
  • 2. What are fibers?
    • coroutines
    • cooperative multitasking
  • 3. Detour #1: subroutines vs. coroutines
    • sub-routines
      • every function you’ve ever written
      • single entry-point & single exit
    • co-routines
      • single entry-point, multiple exit & re-entry points
  • 4.–7. Detour #1.a/#1.b: subroutines (diagram slides: the CPU enters a_sub_routine and leaves at its single return)
  • 8.–11. Detour #1.c/#1.d: coroutines (diagram slides: the CPU enters a_co_routine, leaves at each yield, and comes back with each resume)
  • 12. Detour #2: multitasking
    • pre-emptive multitasking
      • standard thread model
      • locking & state issues
    • co-operative multitasking
      • programmer control
  • 13.–14. Detour #2.a: pre-emptive (diagram slides: over time, threads α and β are given the CPU arbitrarily around the shared data)
  • 15.–16. Detour #2.b: co-operative (diagram slides: fiber α yields the CPU to fiber β around the shared data)
  • 17. Back on track: finally some code

    hello_lrug = Fiber.new do
      i = 1
      loop do
        Fiber.yield "Hello LR#{'U' * i}G!"
        i += 1
      end
    end
  • 18. Using fibers

    hello_lrug.resume #=> "Hello LRUG!"
    4.times { puts hello_lrug.resume }
    # outputs:
    # "Hello LRUUG!"
    # "Hello LRUUUG!"
    # "Hello LRUUUUG!"
    # "Hello LRUUUUUG!"
  • 19. Using fibers means never having to say you’re finished
  • 20. Detour #1.1.2.3.5

    def fib(n)
      if (0..1).include? n
        n
      elsif n > 1
        fib(n-1) + fib(n-2)
      end
    end

    puts fib(6)
  • 21. Detour #1.1.2.3.5.8

    fib = Fiber.new do
      x, y = 0, 1
      loop do
        Fiber.yield y
        x, y = y, x + y
      end
    end

    6.times { puts fib.resume }
  • 22. What use is a fiber?

    lrug = ['L','R','U','G']
    enum = lrug.map.with_index
    enum.each { |*l| puts l }
    # outputs:
    # “[‘L’, 0]”
    # “[‘R’, 1]”
    # ...
  • 23. What practical use is a fiber?
  • 24. Detour.do {|d| talk << 4}

    # Non Evented
    open('http://lrug.org/').read #=> ‘<html....

    # Evented
    class HTTPClient
      def receive_body(data)
        @data << data
      end
    end

    http_client = HTTPClient.new
    EventMachine::run do
      EventMachine::connect 'lrug.org', 80, http_client
    end
    http_client.data #=> ‘<html....
  • 25. So…what is a practical use for a fiber?
    http://www.espace.com.eg/neverblock/
  • 26. What I didn’t say
    • The rest of the API
      • fiber_instance.transfer - invoke on a Fiber to pass control to it, instead of yielding to the caller
      • fiber_instance.alive? - can we safely resume this Fiber, or has it terminated?
      • Fiber.current - get the current Fiber so we can play with it
    • Lightweight - less memory overhead than threads
    • The downsides - single core only really
  • 27. It’s over! Thanks for listening, any questions?
  • 28. Resources
    • http://delicious.com/hlame/fibers
      • (most of the stuff I researched is here)
    • http://github.com/oldmoe/neverblock
    • http://en.wikipedia.org/wiki/Fiber_(computer_science)
    • http://en.wikipedia.org/wiki/Coroutine