Some rough fibrous material
A 20x20 guide to Fibers in Ruby 1.9
Murray Steele - LRUG February 2010
What are fibers?


• coroutines
• cooperative multitasking
Detour #1: subroutines vs. coroutines

• sub-routines
 • every function you’ve ever written
 • single entry-point & single exit
• co-routines
 • single entry-point, multiple exit & re-entry points
Detour #1.a / #1.b: subroutines

[Diagram slides: the CPU enters a_sub_routine at the top, bounces around inside, and only leaves when return hands control back to the caller.]
Detour #1.c / #1.d: coroutines

[Diagram slides: the CPU enters a_co_routine, leaves at each yield, and re-enters at the same point via resume.]
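In Ruby terms the coroutine diagrams boil down to this (a minimal sketch of my own, not a slide from the talk): each `Fiber.yield` is an exit point, and each `resume` re-enters exactly where the fiber left off.

```ruby
# Each Fiber.yield is an exit point; each resume re-enters there.
steps = Fiber.new do
  Fiber.yield :first_exit   # exit point 1
  Fiber.yield :second_exit  # exit point 2
  :finished                 # value of the final resume
end

steps.resume #=> :first_exit
steps.resume #=> :second_exit
steps.resume #=> :finished
```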
Detour #2: multitasking

• pre-emptive multitasking
 • standard thread model
 • locking & state issues
• co-operative multitasking
 • programmer control
Detour #2.a: pre-emptive

[Diagram slides: threads α and β run over shared data; the scheduler interrupts and interleaves them at arbitrary points in time.]

Detour #2.b: co-operative

[Diagram slides: fibers α and β share data too, but control only passes when a fiber explicitly yields.]
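Sketching the co-operative diagram in code (my illustration, not the talk's): control moves between fibers only when one explicitly yields, so the interleaving over shared data is fully deterministic.

```ruby
# Two fibers take turns over shared data; nothing pre-empts them,
# so the interleaving is decided entirely by our resume/yield calls.
log = []

fiber_a = Fiber.new { 2.times { |i| log << "a#{i}"; Fiber.yield } }
fiber_b = Fiber.new { 2.times { |i| log << "b#{i}"; Fiber.yield } }

# a trivial round-robin "scheduler"
2.times do
  fiber_a.resume
  fiber_b.resume
end

log #=> ["a0", "b0", "a1", "b1"]
```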
Back on track: finally some code

hello_lrug = Fiber.new do
  i = 1
  loop do
    Fiber.yield "Hello LR#{'U' * i}G!"
    i += 1
  end
end
Using fibers
hello_lrug.resume #=> "Hello LRUG!"

4.times { puts hello_lrug.resume }

#   outputs:
#   "Hello LRUUG!"
#   "Hello LRUUUG!"
#   "Hello LRUUUUG!"
#   "Hello LRUUUUUG!"
Using fibers means never having to say you’re finished
Detour #1.1.2.3.5
def fib(n)
 if (0..1).include? n
   n
 elsif n > 1
   fib(n-1) + fib(n-2)
 end
end

puts fib(6)
Detour #1.1.2.3.5.8
fib = Fiber.new do
 x, y = 0, 1
 loop do
   Fiber.yield y
   x, y = y, x + y
 end
end

6.times { puts fib.resume }
What use is a fiber?
lrug = ['L','R','U','G']
enum = lrug.map
       .with_index

enum.each { |*l| p l }

#   outputs:
#   ["L", 0]
#   ["R", 1]
#   ...
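That works because Enumerator's external iteration is itself built on fibers in Ruby 1.9; a hand-rolled equivalent of `enum` might look like this (my sketch, roughly what `Enumerator#next` does under the hood):

```ruby
# A hand-rolled external iterator: the fiber suspends the
# each_with_index block between elements, so the caller pulls
# values one at a time instead of being pushed all of them.
lrug = ['L', 'R', 'U', 'G']

iterator = Fiber.new do
  lrug.each_with_index { |letter, i| Fiber.yield [letter, i] }
end

iterator.resume #=> ["L", 0]
iterator.resume #=> ["R", 1]
```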
What practical use is a fiber?

Detour.do {|d| talk << 4}
# Non Evented
require 'open-uri'
open('http://lrug.org/').read #=> '<html....

# Evented
class HTTPClient
  attr_reader :data

  def initialize
    @data = ''
  end

  def receive_body(data)
    @data << data
  end
end

http_client = HTTPClient.new
EventMachine::run do
  EventMachine::connect 'lrug.org', 80, http_client
end
http_client.data #=> '<html....
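The trick NeverBlock (next slide) builds on can be shown without EventMachine: run the blocking-looking code inside a fiber, register a callback, yield, and let the event loop resume the fiber with the data. Everything here (`deferred`, `fake_async_fetch`) is a stand-in of mine, not a real EventMachine API.

```ruby
# Toy fiber + event-loop trick: the caller writes straight-line code,
# but under the hood we yield while "waiting" and resume on the callback.
deferred = [] # stands in for the event loop's callback queue

fake_async_fetch = lambda do |url, &callback|
  # a real event loop would fire this later, when IO completes
  deferred << lambda { callback.call("<html>#{url}</html>") }
end

synchronous_looking_fetch = lambda do |url|
  fiber = Fiber.current
  fake_async_fetch.call(url) { |data| fiber.resume(data) }
  Fiber.yield # suspend until the "event loop" fires our callback
end

request = Fiber.new { synchronous_looking_fetch.call('http://lrug.org/') }
request.resume             # runs up to the Fiber.yield
body = deferred.shift.call # event loop delivers the data, resuming the fiber
body #=> "<html>http://lrug.org/</html>"
```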
So…what is a practical use for a fiber?

http://www.espace.com.eg/neverblock/
What I didn’t say

•   The rest of the API

    •   fiber_instance.transfer - invoke on a Fiber to pass control to it, instead of yielding to the caller

    •   fiber_instance.alive? - can we safely resume this Fiber, or has it terminated?

    •   Fiber.current - get the current Fiber so we can play with it

•   Lightweight - less memory overhead than threads

•   The downsides - single-core only, really
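For completeness, here are the three API calls the slide lists in action (my example, not the talk's; note that 1.9 needs `require 'fiber'` before `transfer` and `alive?` are available):

```ruby
require 'fiber' # Fiber#transfer, Fiber#alive? and Fiber.current in 1.9

greeter = Fiber.new do
  Fiber.yield "hello"
  "goodbye"
end

greeter.alive? #=> true, safe to resume
greeter.resume #=> "hello"
greeter.resume #=> "goodbye", the block has now terminated
greeter.alive? #=> false, resuming again would raise FiberError

# transfer hands control straight to another fiber, bypassing
# the resume/yield caller relationship:
main  = Fiber.current
other = Fiber.new { main.transfer(:handed_back) }
other.transfer #=> :handed_back
```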
It’s over!
Thanks for listening, any questions?
Resources

•   http://delicious.com/hlame/fibers

    •   (most of the stuff I researched is here)

•   http://github.com/oldmoe/neverblock

•   http://en.wikipedia.org/wiki/Fiber_(computer_science)

•   http://en.wikipedia.org/wiki/Coroutine

Editor's Notes

  1. I'm going to talk about Fibers in Ruby 1.9. Keyword is "rough": no knowledge, not using ruby 1.9 at the dayjob nor in spare time (lazy), researched in the last week, apologies for any omissions (there will be some); if you know it, don't ask mean questions, sorry -- The key word in my title is "rough"; I'm coming to the material with little or no practical knowledge. I'm not using 1.9 in my day job, and although I probably would use it for any spare-time hacking, it's very rare that I get down to any as I'm, basically, lazy. So apologies to anyone that knows this stuff already; I might get things wrong, or not cover everything in enough detail. I'm sorry.
  2. 2 ideas that should sound familiar: co-routines & co-operative multitasking. Co-routines = familiar because of sub-routines; co-operative multitasking = cf. pre-emptive multitasking. Detour to cover each of these ideas, then onto ruby -- Fibers are an implementation of 2 important ideas: 1. The first idea is "co-routines" (and this should sound familiar, as you'll have heard of sub-routines which are related) and 2. The second idea is "co-operative multitasking" (and again, you should recognise this as similar sounding to "pre-emptive multitasking"). So, we'll take a quick detour to cover these in turn and then we'll come back to Ruby.
  3. sub-routine invoke = start on first line, proceed to end, STOP. go in, come out. co-routines are different: start on first line, proceed to end, STOP in between = take detour, come back later. been around a bit but hardly implemented -- So pretty much every method or function you've ever written is a sub-routine. When you invoke them you start at the first line and run through them till they terminate and give you their result. A co-routine is a little bit different. When you invoke them they also start on the first line of code, but they can halt execution and exit before they terminate. Later you can re-enter and resume execution from where you left off. It's also unlikely you'll have written one yet, as despite being around for a while, not many languages provide them as a feature. ----- Every method or function you write is a sub-routine. It's a package of code that has an entry point, and a single exit point. Admittedly things like exceptions and multiple return paths might confuse this and make it seem like you have many exit points, but for each *single run through the code* there's one path: you go in, do something and you come out and that's it.
  4. To clarify: invoke subroutine-style method. CPU enters method, bounces around until execution stops with return (or exception) (or implicit last statement) and releases CPU to caller --- So, here's a simple subroutine example. When you call a method the flow of control enters the function, and is trapped until the method terminates. Once the method terminates, here with an explicit return, but it could be an exception, or simply stopping after the last executable statement of the code path, the flow of control is finally released to the caller. The only way to go back into the function is to go back to the start by calling it again.
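In Ruby terms, the subroutine flow described here might look like the following (a made-up illustrative method, not from the slides): control enters at the top and leaves exactly once, via an explicit return or the last expression.

```ruby
# a plain method: control enters at the top and leaves once
def a_sub_routine(n)
  doubled = n * 2
  return doubled if doubled > 10 # explicit early return releases control
  doubled + 1                    # otherwise the last expression is the exit
end

a_sub_routine(3)  # => 7
a_sub_routine(6)  # => 12
```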
  16. once exited, no going back inside; that method is dead. want to re-run? have to re-invoke: create new copy of stack (expensive) and enter at start. nothing shared ('cept pass-ins) --- So, once you exit a sub-routine, the door is closed; you can't return to it the way you came out. To re-use the sub-routine, your only option is to re-invoke it and go back to the first line of code. This creates a new copy of the entire stack, so there's nothing shared between this invocation and the previous ones, or any future ones. Depending on your code, this could be expensive.
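The "nothing shared between invocations" point can be seen in a tiny sketch (an illustrative method, not from the slides): the local variable gets a fresh stack slot every call.

```ruby
# every call gets a fresh stack: the local counter never survives
# between invocations
def count_up
  i = 0
  i += 1
  i  # always 1 -- re-invoking starts from the first line again
end

count_up  # => 1
count_up  # => 1, not 2; nothing is shared between calls
```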
  36. how are co-routines different? start's the same: invoke method, CPU trapped, execute statements. exit with yield gives caller back CPU. caller later resumes, re-enters co-routine at EXACT POINT WHERE WE LEFT OFF. same stack, same everything, continue exec --- And here's a similar example for a co-routine. It starts pretty much the same way. The flow of control enters the method and is trapped until it provides a result, this time with a yield. However, unlike before, we can resume the method and send the flow of control back in to continue working, picking up where we were when we left off.
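The yield/resume flow described here can be sketched with Ruby 1.9's actual Fiber API (a minimal illustrative fiber, not from the slides):

```ruby
# yield hands control back to the caller; resume re-enters at the
# exact yield point with the stack intact
steps = Fiber.new do
  Fiber.yield :first
  Fiber.yield :second
  :finished
end

steps.resume  # => :first    (ran to the first yield)
steps.resume  # => :second   (picked up after the first yield)
steps.resume  # => :finished (ran to the end of the block)
```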
  52. more interesting is... not a one-time deal. yield to the caller, caller resumes routine many times! even more interesting: yield from multiple places and resume knows which yield to go back to --- What makes co-routines even more interesting is that we can yield and resume as many times as we want, until, of course, the co-routine comes to a natural termination. We can also have as many yields as we want; we don't always have to yield from the same place. Having yielded at a given point, we resume from that point; we can't choose some other yield point to re-enter at.
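A small sketch of multiple yield points (an illustrative fiber, not from the slides): each resume continues from whichever yield suspended the fiber last, so the two exit points alternate.

```ruby
# two yield points inside a loop: resume always picks up after the
# yield that last suspended the fiber -- we can't choose a different
# re-entry point
weather = Fiber.new do
  loop do
    Fiber.yield :sunny # first exit point
    Fiber.yield :rainy # second exit point
  end
end

4.times.map { weather.resume }  # => [:sunny, :rainy, :sunny, :rainy]
```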
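A hypothetical fiber with two distinct yield points makes this concrete: each resume continues from whichever yield last suspended it, and once the body finishes, the fiber is dead and cannot be resumed again.

```ruby
# Multiple exit points: the caller sees the values in source order,
# because resume always re-enters at the most recent yield.
journey = Fiber.new do
  Fiber.yield :first_stop    # first exit point
  Fiber.yield :second_stop   # a second, different exit point
  :terminus                  # natural termination
end

journey.resume #=> :first_stop
journey.resume #=> :second_stop
journey.resume #=> :terminus
# A further journey.resume raises FiberError: the fiber is dead.
```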
73. 2nd idea - multitasking. 1st = thread model: several running tasks, OS or lang runtime schedules, don't know when, so access to shared objects = pain (locks). Fibers = 2nd: programmer has control, chooses when in each task to give up the CPU and who to give it to --- You should be familiar with pre-emptive multitasking as it's the standard model of concurrency used by most Thread implementations. You have several tasks running at the same time, scheduled by the OS or language runtime. The gotcha is access to shared objects. Fibers, however, use the co-operative model. With this no two tasks run at the exact same time, and it's up to the programmer to decide when each task will give up control and who to pass control onto.
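The co-operative side of that comparison can be sketched with two fibers and a hand-rolled "scheduler" (the `alpha`/`beta` names echo the slides; the driver loop is an assumption for illustration):

```ruby
# Co-operative multitasking in miniature: each task yields at a point
# of its OWN choosing, so the shared log is never touched by two tasks
# at once and no locking is needed.
log = []

alpha = Fiber.new do
  log << "alpha: step 1"
  Fiber.yield               # alpha decides to give up the CPU here
  log << "alpha: step 2"
end

beta = Fiber.new do
  log << "beta: step 1"
  Fiber.yield               # so does beta
  log << "beta: step 2"
end

# The "scheduler" is just the caller, resuming each fiber in turn.
[alpha, beta, alpha, beta].each(&:resume)

log #=> ["alpha: step 1", "beta: step 1", "alpha: step 2", "beta: step 2"]
```

Because the interleaving is fixed by the program text rather than by an OS scheduler, the output order is deterministic, which is exactly the control the slide says the programmer gains.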
74. 2 threads, alpha, beta; scheduler gives each some CPU time for work; they don't know when, so when alpha wants the shared data it locks it, stopping changes while the CPU is elsewhere; when beta gets the CPU, if the shared data is locked it can't use it, probably can't do anything: wasted effort --- The main problem with pre-emptive multitasking is that (on a single-core machine) these two threads are given CPU time arbitrarily by some scheduler. They don't know when in their life-cycle this'll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can't do anything.
  75. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  76. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  77. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  78. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  79. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  80. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  81. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  82. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  83. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  84. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  85. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  86. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  87. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  88. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  89. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  90. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  91. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  92. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  93. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  94. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  95. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  96. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
  97. 2 threads, alpha, beta scheduler gives each some CPU time for work they don&amp;#x2019;t know when so alpha wants shared data locks it stops changes when CPU elsewhere when beta gets the CPU if shared data is locked, it can&amp;#x2019;t use it, probably can&amp;#x2019;t do anything, wasted effort --- The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&amp;#x2019;t know when in their life-cycle this&amp;#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&amp;#x2019;t do anything.
104. With co-op it's fibers, not threads, and there's no external scheduler. When a fiber has the CPU, it has the CPU: it can use the shared data without a lock because nothing else is running. When it's done (or done enough) it transfers the CPU away, and the other fiber picks up and starts work. --- On the other hand, in co-operative multitasking, the fiber itself has explicit control of when the CPU will transfer away. This means it doesn't need to lock anything, because it's safe in the knowledge that no other fiber will be running unless it says it's done. When the fiber is done (or happy that it's done enough for now), it stops accessing the shared data and simply transfers control away to some other fiber.
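The co-operative hand-off described above can be sketched in a few lines (a hypothetical example, not code from the slides): two fibers append to shared data without any locks, because only one fiber ever runs at a time and each passes the CPU on explicitly.

```ruby
# Sketch: two fibers co-operating over shared data, no locks needed.
shared = []

fiber_b = Fiber.new do
  shared << "beta writes"
  # when this fiber finishes, control returns to whoever resumed it
end

fiber_a = Fiber.new do
  shared << "alpha writes"
  fiber_b.resume            # explicitly hand the CPU to the other fiber
  shared << "alpha writes again"
end

fiber_a.resume
shared # => ["alpha writes", "beta writes", "alpha writes again"]
```

Because `fiber_a` only gives up the CPU at the `fiber_b.resume` call, it can safely read and write `shared` on either side of that point without worrying about interleaving.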
118. Science over, code now. A simple example of creating a Fiber; familiar if you've worked with threads. The block is the workload for the fiber. It illustrates 3 things: Fiber.yield is the exit point; a shared stack (the local var i persists between yields); an infinite loop (!) --- So, I've bored you with the science part, how about looking at some code? If you've used threads in Ruby this should be familiar. You create a Fiber by passing a block to the constructor. The block is the "work load" for that Fiber. In this case it's an infinite loop generating increasingly excited hellos for the LRUG crowd. Don't worry about that pesky "infinite" though...
119. After you create a fiber, like a thread, it's not running. You call the (chicken-before-egg) "resume" method, which makes the fiber run from the start to Fiber.yield and return a value. Each successive .resume goes back in, resuming from Fiber.yield with the previous stack intact. --- So, when you create a Fiber, again just like a thread, it won't do anything until you ask it to. To start it you call the somewhat chicken-before-the-egg "resume" method. This causes hello_lrug to run until it hits that Fiber.yield. This pauses execution of the Fiber and returns the value passed to it. You also use "resume" to re-enter the Fiber to do some more work.
120. The 3rd interesting thing: that pesky infinite loop. It's OK: the Fiber only runs up to .yield, then it exits, the CPU is out and nothing is running. If we only call resume 5 times, we never get our 6th iteration. We no longer need to think about explicit termination; lazy eval becomes super easy. --- So although we gave hello_lrug a workload that *will never end*, it's not a problem, because we use the yield and resume methods to explicitly schedule when hello_lrug runs. If we only want to run it 5 times and never come back to it, that's OK: it won't eat up CPU time. This gives us an interesting new way to think about writing functions; if they don't have to end, lazy evaluation becomes super easy...
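Putting the resume/yield dance together (re-creating the hello_lrug fiber from the slides): the infinite loop is harmless because the fiber only ever runs between a resume and the next Fiber.yield.

```ruby
# The slides' fiber: an infinite loop that we only ever run 5 steps of.
hello_lrug = Fiber.new do
  i = 1
  loop do
    Fiber.yield "Hello LR#{'U' * i}G!"
    i += 1
  end
end

greetings = 5.times.map { hello_lrug.resume }
greetings.first # => "Hello LRUG!"
greetings.last  # => "Hello LRUUUUUG!"
```

After the fifth resume the fiber just sits suspended at Fiber.yield; nothing runs and no CPU is consumed until (unless) someone resumes it again.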
121. Fibonacci. The standard fib method using recursion; it can be hard to get your head around, you have to worry about termination clauses, and it can be expensive (this implementation will calculate fib(1) several times). --- Hey, what's a talk without Fibonacci? Here's the standard implementation for generating a number in the Fibonacci sequence in Ruby. It uses recursion, which is something you have to get your head around before you see how it works, and that can be hard sometimes; you also have to take care to have correct guard clauses to make sure the recursion terminates.
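The slide's code isn't in this transcript, but the standard recursive definition it describes looks like this (a sketch; the guard clause is the termination condition, and the repeated recalculation is the expense the note mentions):

```ruby
# Classic recursive Fibonacci: the `n < 2` guard terminates the
# recursion, but small values like fib(1) get recomputed many times.
def fib(n)
  return n if n < 2
  fib(n - 1) + fib(n - 2)
end

fib(6) # => 8
```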
122. The same thing, with fibers. Understanding co-routines is probably hard too; both approaches have a mental roadblock, but this definition is more natural. One advantage over recursion: getting fib 6 gives us fib 1-5 as well. The recursion calculates them, but doesn't share them. --- Here's the fibrous way of doing it. Again, there's a fundamental concept you need to understand first (co-routines), but I do think this is a slightly more natural way of defining the sequence. The difference is that to get the 6th number, we call resume on the fiber 6 times, with the side-effect of being provided with all 5 preceding numbers in the sequence.
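A fibrous version in the spirit of the slide (a sketch, not the slide's exact code): the fiber yields each number in turn, so resuming it 6 times yields the 6th number and every number before it.

```ruby
# Fiber-based Fibonacci: no guard clauses, no recursion; each resume
# advances the sequence by exactly one step.
fib_fiber = Fiber.new do
  a, b = 0, 1
  loop do
    Fiber.yield a
    a, b = b, a + b
  end
end

first_six = 6.times.map { fib_fiber.resume }
first_six # => [0, 1, 1, 2, 3, 5]
```

Each step does constant work because `a` and `b` persist on the fiber's stack between yields, unlike the recursive version which recomputes subproblems.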
123. Lazy eval = fibers! The most use, I think, is where they're used in the 1.9 stdlib: .each, .map &c without a block = an enumerator, which can be chained; under the hood it's all done with fibers. --- This sort of lazy evaluation is where Fibers shine, and probably where they'll see the most use. In fact, it's exactly this sort of thing that Fibers are being used for in the Ruby 1.9 stdlib. Things like .each and .map have been reworked so that without a block they now return enumerators that you can chain together. And under the hood these enumerators are implemented using fibers.
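Concretely, that stdlib behaviour looks like this: an iterator called without a block returns an Enumerator, which pauses and resumes the underlying iteration (using a fiber internally when you step it externally with `next`).

```ruby
# Block-less iterators return Enumerators in Ruby 1.9...
enum   = [1, 2, 3].each   # => #<Enumerator: [1, 2, 3]:each>
first  = enum.next        # => 1
second = enum.next        # => 2

# ...and those enumerators can be chained:
labelled = %w[a b c].each_with_index.map { |letter, i| "#{i}:#{letter}" }
# => ["0:a", "1:b", "2:c"]
```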
124. And in the real world? (I dunno.) A GitHub search gives plenty of results, but on closer inspection most are forks/copies of the RubySpec for fibers (a good resource to read if you want to know Ruby). The first non-RubySpec result, though... --- So, that's all a bit theoretical. What real use are fibers? Well, I don't know, so I did a quick search on GitHub, and to my surprise there were actually plenty of results. But... on closer inspection, the first few pages are entirely forks and copies of the Ruby specs for fibers. Which, by the way, I totally recommend reading if you want to get an idea of how something in Ruby actually works. The first result that wasn't a Ruby spec requires a detour first...
125. Another quick detour. If you've done it, you know: evented programming is different. Example: reading a webpage. The normal way is simple, you call a couple of methods; the evented way is much more complex: you define state-recording models and use callback methods. You gain performance and flexibility, but you lose simplicity and familiarity. --- Well... another quick detour. If you've ever done any evented programming you'll know that the code looks very different to normal code. Here's a simplified example of how to read a webpage. The normal case is really simple: you just call a couple of methods. The evented case, not so much. You have to rely on callback methods and keep some object around to hold the result of those callbacks. What you lose in a simple API you gain in performance and flexibility, but it's hard to get your head around.
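The shape of that contrast can be shown with a toy illustration (these methods are made up for the example; they are not EventMachine's real API or the slide's actual code): the same lookup written in the direct style and in the callback style.

```ruby
# A fake "web" so the example needs no network.
PAGES = { "http://lrug.org" => "<html>LRUG</html>" }

# Normal (blocking) style: the result is simply the return value.
def read_page(url)
  PAGES[url]
end

# Evented style: you register a callback, and something else (a reactor,
# faked here by an immediate call) invokes it later; any state you need
# has to live on in the callback's closure or some holder object.
def read_page_evented(url, &on_done)
  on_done.call(PAGES[url])
end

body = read_page("http://lrug.org")

evented_body = nil
read_page_evented("http://lrug.org") { |b| evented_body = b }
body == evented_body # => true
```

Even in this tiny case the evented version forces you to invert the control flow: instead of the result coming back to you, you have to go to where the result will eventually be delivered.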
126. That first non-RubySpec GitHub hit? NeverBlock: fibers + EventMachine + async libs give you a sync-style API for async programming. You get the performance (not the flexibility) without changing much code, or what it feels like to write: just replace blocking libraries with NeverBlock ones. Not going to cover it in detail; 1 more slide! --- The first non-RubySpec result on GitHub that uses fibers was NeverBlock. This library uses Fibers, EventMachine and other non-blocking APIs to present you with an API for doing asynchronous programming that looks remarkably synchronous, so you don't have to change your code to get the benefit of asynchronous performance. I won't go into details (I only have 1 more slide!), but you should check it out if you're interested.
127. Plenty I didn't cover. The remaining API: transfer (yield this fiber + resume another fiber in one go, without going back to the caller); the others are simple enough. Lightweight: less memory than the same number of threads. Single core only (all fibers run in the same thread). --- Last slide. There's loads I didn't cover, but I think I got the basics. There are 3 remaining API methods (apart from resume and yield). Transfer is like yield, but instead of giving the CPU back to the caller, you give it to the Fiber you called transfer on. The other two are simple enough. Fibers are supremely lightweight: spinning up fibers takes much less memory than threads; there's a good comparison out there. And they're really a single-core solution, since all fibers run in the same thread. I'll put a resources slide up when I post these slides...
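A sketch of that transfer behaviour (Fiber#transfer and Fiber.current live in the 'fiber' stdlib extension in 1.9; transferring back to the root fiber explicitly keeps the control flow unambiguous):

```ruby
require 'fiber'

log  = []
root = Fiber.current

second = Fiber.new do
  log << :second
  root.transfer       # hand control straight back to the root fiber
end

first = Fiber.new do
  log << :first
  second.transfer     # go directly to `second`, not back to the caller
end

first.transfer
log # => [:first, :second]
```

Note that `first` never runs to completion here: control left it via transfer and nothing ever comes back, which is exactly the "don't go back to the caller" behaviour the note describes.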
  128. (...and breathe!)
  129. Bonus slide for the internet.