These are the slides for the COSCUP[1] 2013 Hands-on[2] session, "Learning Python from Data".
They use examples to show the world of Python. I hope they help you learn Python.
[1] COSCUP: http://coscup.org/
[2] COSCUP Hands-on: http://registrano.com/events/coscup-2013-hands-on-mosky
Kernel Recipes 2019 - GNU poke, an extensible editor for structured binary data - Anne Nicolas
GNU poke is a new interactive editor for binary data. Not limited to editing basic entities such as bits and bytes, it provides a full-fledged procedural, interactive programming language designed to describe data structures and to operate on them. Once a user has defined a structure for binary data (usually matching some file format) she can search, inspect, create, shuffle and modify abstract entities such as ELF relocations, MP3 tags, DWARF expressions, partition table entries, and so on, with primitives resembling simple editing of bits and bytes. The program comes with a library of already written descriptions (or “pickles” in poke parlance) for many binary formats.
GNU poke is useful in many domains. It is very well suited to aid in the development of programs that operate on binary files, such as assemblers and linkers. This was in fact the primary inspiration that brought me to write it: easily injecting flaws into ELF files in order to reproduce toolchain bugs. Due to its flexibility, poke is also very useful for reverse engineering, where the real structure of the data being edited is discovered by experiment, interactively. It is also good for the fast development of prototypes for programs like linkers, compressors or filters, and it provides a convenient foundation for writing other utilities such as diff and patch tools for binary files.
This talk (unlike Gaul) is divided into four parts. First I will introduce the program and show what it does: from simple bits/bytes editing to user-defined structures. Then I will show some of the internals, and how poke is implemented. The third block will cover how to use poke to describe user data, which is to say the art of writing “pickles”. The presentation ends with the status of the project, a call for hackers, and a hint at future work.
Jose E. Marchesi
The slides from my July Django-District presentation. They show some of the basics of using the new Fabric. I have uploaded the example fabfile.py to SlideShare as well.
Ansible, Simplicity, and the Zen of Python - toddmowen
Slides from the following talk presented at PyCon Australia 2015:
https://www.youtube.com/watch?v=JlrkizEBjXk
Ansible is a configuration management tool, written in Python, that has taken the world of IT automation by storm. Its most remarkable quality is simplicity.
The Zen of Python is a set of aphorisms which capture the design philosophy of the Python language, one being "Simple is better than complex".
With the increasing adoption of cloud-native technologies and containerization, the gap between Java development and system administration is shrinking. Whether you use Docker Swarm, Kubernetes, or Mesos as your container orchestrator, the fundamental challenges of running Docker in production are the same.
In this talk, I would like to share some basic Linux concepts (memory management, CPU, IO, sockets, file descriptors, signals, the OOM killer) that every Java developer should know in order to configure and troubleshoot Docker containers effectively.
With the increasing adoption of cloud-native technologies and containerization, the gap between Java development and system administration is shrinking. Whether you use Docker Swarm, Kubernetes, or Mesos/Marathon as your container orchestrator, the fundamental challenges of running Docker in production are the same.
In this talk, I would like to share some basic Linux concepts about CPU scheduling that every Java developer should know in order to configure and troubleshoot Docker containers effectively.
Yes, Docker provides isolation, but only if you know how best to configure it.
Kubernetes + Docker + Elixir - Alexei Sholik, Andrew Dryga | Elixir Club Ukraine - Elixir Club
Kubernetes + Docker + Elixir - Alexei Sholik, Andrew Dryga
Slides by Alexei Sholik and Andrew Dryga from the Lightning Talk session at Elixir Club Ukraine, Kyiv, 28.09.2019.
Next conference - http://www.elixirkyiv.club/
Follow us on social networks @ElixirClubUA and #ElixirClubUA
Announcements and materials from the conference - https://www.fb.me/ElixirClubUA
News - https://twitter.com/ElixirClubUA
Photo and free atmosphere - https://www.instagram.com/ElixirClubUA
*Organizer’s channel - https://t.me/incredevly
A talk I gave at the BOF session for the Erlang Exchange on the 26th June 2008. The talk was about how LRUG (the London Ruby User Group) manages its community and why that might be of interest to the Erlang community.
Wild & Weird Ideas: An Overview of Ruby 1.9 - Murray Steele
A presentation I gave at the London Ruby User Group (LRUG) in December 2007 about the changes coming to the Ruby programming language in the soon-to-be-released 1.9 version.
Note that Ruby 1.9 was still in development at the time I wrote this talk, so it's possible the stuff I say in it is completely inaccurate with respect to any currently released version of Ruby 1.9.
A talk I gave at the June 2010 meeting of the London Ruby User Group. It's about the first bit of ruby I ever wrote, way back in 2003. A little bit of personal history, a little bit of ruby history, a whole lot of terrible code for you to learn from.
Training course for occupational hygienists and consultants in occupational hygiene: identification and health effects of asbestos, MMMF, aramids, and other fibres; sampling of airborne fibres; containment and removal in buildings; occupational exposure to fibres and the public-health risk of asbestos and other fibres.
Igor Fesenko, "Direction of C# as a High-Performance Language" - Fwdays
There are a lot of upcoming performance changes in .NET, starting from code generation (JIT, AOT) and the optimizations the compiler can perform (inlining, flowgraph & loop analysis, dead-code elimination, SIMD, stack allocation, and so on). In this talk we will cover some features of C# 7 that move towards enabling low-level optimization.
I will share not only how we can improve performance with the next version of .NET, but also how we can do it today using different techniques and tools, such as Roslyn analyzers, Channels (push-based streams), System.Slices, System.Buffers and System.Runtime.CompilerServices.Unsafe.
Writing a concurrent program is hard; maintaining one is even more of a nightmare. Fortunately, there is a pattern that helps us write good concurrent code: using “channels” to communicate.
This talk introduces the channel concept alongside common libraries, like threading and multiprocessing, to make concurrent code elegant.
This talk was given at PyCon TW 2017 [1] and PyCon APAC/MY 2017 [2].
[1]: https://tw.pycon.org/2017
[2]: https://pycon.my/pycon-apac-2017-program-schedule/
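The talk's examples are in Python, but the channel idea is language-independent. As a hedged sketch (my own, not from the talk), here it is in Ruby, the language of the other code in this document, with a thread-safe Queue standing in for the channel:

```ruby
# Queue (Thread::Queue) is core on modern Rubies; Ruby 1.9 needs `require "thread"`.
channel = Queue.new   # a thread-safe FIFO used as a channel

# Producer: sends values down the channel, then a :done sentinel.
producer = Thread.new do
  3.times { |i| channel << i }
  channel << :done
end

# Consumer: receives until the sentinel arrives.
consumer = Thread.new do
  results = []
  while (msg = channel.pop) != :done
    results << msg
  end
  results
end

producer.join
puts consumer.value.inspect  # => [0, 1, 2]
```

Because the threads share no state except the channel, neither side needs explicit locking; that is the elegance the talk is pointing at.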
This presentation is for Go developers and operators of Go applications who are interested in reducing costs and latency, or debugging problems such as memory leaks, infinite loops, performance regressions, etc. of such applications. We'll start with a brief description of the unique aspects of the Go runtime, and then take a look at the builtin profilers as well as Go's execution tracer. Additionally we'll look at the interoperability with popular observability tools such as Linux perf and bpftrace. After this presentation you should have a good idea of the various tools you can use, and which ones might be the most useful to you in a production environment.
Slides of my talk on Devel::NYTProf and optimizing perl code at YAPC::NA in June 2014. It covers use of NYTProf and outlines a multi-phase approach to optimizing your perl code.
A video of the talk and questions is available at https://www.youtube.com/watch?v=T7EK6RZAnEA&list=UU7y4qaRSb5w2O8cCHOsKZDw
What we Learned Implementing Puppet at Backstop - Puppet
"What We Learned Implementing Puppet at Backstop" by Bill Weiss at Puppet Camp Chicago 2013. Learn about upcoming Puppet Camps at http://puppetlabs.com/community/puppet-camp/
Dynamic Instrumentation - OpenEBS Golang Meetup July 2017 - OpenEBS
The slides were presented by Jeffry Molanus, CTO of OpenEBS, at the Golang Meetup. OpenEBS is open-source, cloud-native storage. OpenEBS delivers storage and storage services to containerized environments and allows stateful workloads to be managed more like stateless containers. OpenEBS storage services include: per-container (or pod) QoS SLAs, tiering and replica policies across AZs and environments, and predictable, scalable performance. Our vision is simple: let storage and storage services for persistent workloads be so fully integrated into the environment, and hence managed automatically, that they almost disappear into the background as just another infrastructure service that works.
A short, introductory talk to the world of debuggers. During the talk, we write a simple debugger application in Rust.
Video at: https://www.youtube.com/watch?v=qS51kIHWARM
Devel::NYTProf v3 - 200908 (OUTDATED, see 201008) - Tim Bunce
Slides of my talk on Devel::NYTProf and optimizing perl code at the Italian Perl Workshop (IPW09). It covers the new features in NYTProf v3 and a new section outlining a multi-phase approach to optimizing your perl code.
30 mins long plus 10 mins of questions. Best viewed fullscreen.
Design Summit - Migrating to Ruby 2 - Joe Rafaniello - ManageIQ
ManageIQ currently runs on Ruby 1.9.3. This presentation is about the effort to move ManageIQ to Ruby 2.x to take advantage of new features and performance in the language and runtime engine.
For more on ManageIQ, see http://manageiq.org/
Linux Performance Analysis: New Tools and Old Secrets - Brendan Gregg
Talk for USENIX/LISA2014 by Brendan Gregg, Netflix. At Netflix performance is crucial, and we use many high to low level tools to analyze our stack in different ways. In this talk, I will introduce new system observability tools we are using at Netflix, which I've ported from my DTraceToolkit, and are intended for our Linux 3.2 cloud instances. These show that Linux can do more than you may think, by using creative hacks and workarounds with existing kernel features (ftrace, perf_events). While these are solving issues on current versions of Linux, I'll also briefly summarize the future in this space: eBPF, ktap, SystemTap, sysdig, etc.
Devel::NYTProf 2009-07 (OUTDATED, see 201008) - Tim Bunce
The slides of my "State-of-the-art Profiling with Devel::NYTProf" talk at OSCON in July 2009.
I'll upload a screencast and give the link in a blog post at http://blog.timbunce.org
A story of how we went about packaging perl and all of the dependencies that our project has.
Where we were before, the chosen path, and the end result.
The pitfalls and a view on the pros and cons of the previous state of affairs versus the pros/cons of the end result.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference, 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in the different parts of the DevOps infinity loop.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how does this fancy AI technology get managed from an infrastructure and operations point of view? Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and take you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
3. Detour #1: subroutines vs. coroutines
• sub-routines
  • every function you’ve ever written
  • single entry-point & single exit
• co-routines
  • single entry-point, multiple exit & re-entry points
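The contrast above can be made concrete with Ruby 1.9's Fiber class. This is my own minimal sketch, not taken from the slides: the fiber has a single entry point, but Fiber.yield gives it multiple exit and re-entry points.

```ruby
# A coroutine in Ruby 1.9+: each Fiber.yield is an exit point,
# and each #resume re-enters exactly where the fiber left off.
fiber = Fiber.new do
  Fiber.yield 1   # first exit: hand 1 back to the caller
  Fiber.yield 2   # re-entered here, exits again with 2
  3               # final exit: the fiber terminates
end

puts fiber.resume  # => 1
puts fiber.resume  # => 2
puts fiber.resume  # => 3
```

Contrast with a plain method: calling it again would always restart from the first line.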
4. Detour #1.a: subroutines

def a_sub_routine( )
  # (slide diagram: the CPU enters here, runs through, and exits at the end)
end
24. Detour.do {|d| talk << 4}
# Non-evented (uses open-uri)
open('http://lrug.org/').read #=> '<html....

# Evented
class HTTPClient
  attr_reader :data          # expose the accumulated body to the caller

  def initialize
    @data = ''
  end

  def receive_body(data)
    @data << data
  end
end

http_client = HTTPClient.new
EventMachine::run do
  EventMachine::connect 'lrug.org', 80, http_client
end
http_client.data #=> '<html....
25. So…what is a practical use for a fiber?
http://www.espace.com.eg/neverblock/
26. What I didn’t say
• The rest of the API
  • fiber_instance.transfer - invoke on a Fiber to pass control to it, instead of yielding to the caller
  • fiber_instance.alive? - can we safely resume this Fiber, or has it terminated?
  • Fiber.current - get the current Fiber so we can play with it
• Lightweight - less memory overhead than threads
• The downsides - single core only, really
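A rough illustration (mine, not from the talk) of Fiber.current and fiber_instance.alive?. Note that on Ruby 1.9 these, along with transfer, need `require "fiber"`; on modern Rubies they are built in.

```ruby
# Fiber.current names whichever fiber is running right now,
# including the implicit root fiber of the program.
puts Fiber.current.inspect

f = Fiber.new { Fiber.yield :half_way }

puts f.alive?   # true  - created but not yet finished
f.resume        # runs up to Fiber.yield, then suspends
puts f.alive?   # true  - suspended, safe to resume again
f.resume        # runs to the end and terminates
puts f.alive?   # false - resuming now would raise FiberError
```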
28. Resources
• http://delicious.com/hlame/fibers
• (most of the stuff I researched is here)
• http://github.com/oldmoe/neverblock
• http://en.wikipedia.org/wiki/Fiber_(computer_science)
• http://en.wikipedia.org/wiki/Coroutine
Editor's Notes
I'm going to talk about Fibers in Ruby 1.9.
Keyword is rough
no knowledge
not ruby 1.9 dayjob
nor in spare time (lazy)
researched in last week
apologies for any omissions (there will be some)
if you know it, don't ask mean questions
sorry
--
The key word in my title is "rough"; I'm coming to the material with little or no practical knowledge. I'm not using 1.9 in my day job, and although I probably would use it for any spare-time hacking, it's very rare that I get down to any as I'm, basically, lazy.
So apologies to anyone that knows this stuff already; I might get things wrong, or not cover everything in enough detail. I'm sorry.
2 ideas that should sound familiar:
co-routines & co-operative multitasking
co-routines = familiar bcz sub-routines
co-operative multitasking = pre-emptive multitasking
detour to cover each of these ideas then onto ruby
--
Fibers are an implementation of 2 important ideas:
1. The first idea is “co-routines” (and this should sound familiar, as you’ll have heard of sub-routines, which are related)
and
2. The second idea is “co-operative multitasking” (and again, you should recognise this as similar-sounding to “pre-emptive multitasking”).
So, we'll take a quick detour to cover these in turn and then we'll come back to Ruby.
sub-routine invoke =
start on first line, proceed to end, STOP
go-in, come out
co-routines are different
start on first line, proceed to end, STOP
in between = take detour come back later
been around a bit
but hardly implemented
--
So pretty much every method or function you’ve ever written is a sub-routine. When you invoke them you start at the first line and run through them till they terminate and give you their result.
A co-routine is a little bit different. When you invoke them they also start on the first line of code but they can halt execution and exit before they terminate. Later you can then re-enter and resume execution from where you left off.
It’s also unlikely you’ll have written one yet, as despite being around for a while not many languages provide them as a feature.
-----
Every method or function you write is a sub-routine. It's a package of code that has an entry point, and a single exit point. Admittedly things like exceptions and multiple return paths might confuse this and make it seem like you have many exit points, but for each *single run through the code* there's one path: you go in, do something and you come out and that's it.
To clarify:
invoke subroutine -style method.
CPU enters method
bounce around
until execution stops
with return
(or exception)
(or implicit last statement)
and release CPU to caller
---
So, here’s a simple subroutine example.
When you call a method the flow of control enters the function, and is trapped until the method terminates.
Once the method terminates, here with an explicit return, but it could be an exception, or simply stopping after the last executable statement of the code path, the flow of control is finally released to the caller.
The only way to go back into the function is to go back to the start by calling it again.
To clarify:
invoke subroutine -style method.
CPU enters method
bounce around
until execution stops
with return
(or exception)
(or implicit last statement)
and release CPU to caller
---
So, here&#x2019;s a simple subroutine example.
When you call a method the flow of control enters the function, and is trapped until the method terminates.
Once the method terminates, here with an explicit return, but it could be an exception, or simply stopping after the last executable statement of the code path, the flow of control is finally released to the caller.
The only way to go back into the function is to go back to the start by calling it again.
once exited,
no going back inside
that method is dead
want to re-run?
have to re-invoke
create new copy of stack (expensive)
and enter at start
nothing shared ('cept pass-ins)
---
So, once you exit a subroutine, the door is closed; you can't return to it the way you came out.
To re-use the subroutine, your only option is to re-invoke it, going back to the first line of code. Each invocation gets a fresh stack frame, so nothing is shared between this invocation and the previous ones, or any future ones, except what you pass in. Depending on your code, this can be expensive.
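A small illustrative sketch of the "nothing shared" point: each call gets its own fresh local state, so the counter below never gets past 1 (the function name is made up for this example).

```python
def counter():
    # a fresh stack frame on every invocation; 'count' is not shared
    count = 0
    count += 1
    return count

print(counter())  # 1
print(counter())  # 1 again -- the previous frame is gone
```

Locals die with the frame; only the arguments you pass in connect one invocation to the outside world.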
how are co-routines different?
starts the same
invoke method
CPU trapped
execute statements
exit with yield
gives caller back CPU
caller later resume
re-enter co-routine
at EXACT POINT WHERE WE LEFT OFF
same stack, same everything
continue exec
---
And here's a similar example for a co-routine.
It starts pretty much the same way: the flow of control enters the method and is trapped until it provides a result, this time with a yield. However, unlike before, we can resume the method and send the flow of control back in to continue working, picking up exactly where we left off.
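In Python, a generator function gives exactly this co-routine behaviour; a minimal sketch (the prints mark where control is on each side of the `yield`):

```python
def ticker():
    print("entered")   # runs on the first next()
    yield 1            # control goes back to the caller here
    print("resumed")   # re-entry lands exactly after the yield
    yield 2

gen = ticker()
print(next(gen))  # prints "entered", then 1
print(next(gen))  # prints "resumed", then 2
```

Between the two `next()` calls the generator's stack frame is suspended, not destroyed: same locals, same position, same everything.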
more interesting is...
not a one-time deal
yield to the caller
caller resume routine
many times!
even more interesting
yield from multiple places
and resume knows which yield to go back to
---
What makes co-routines even more interesting is that we can yield and resume as many times as we want, until, of course, the co-routine comes to a natural termination.
We can also have as many yields as we want; we don't always have to yield from the same place. Note, though, that having yielded at a given point, we resume at that point; we can't choose some other yield point to re-enter at.
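A quick illustrative sketch of multiple yield points: each resume continues from whichever `yield` suspended us last (the function name is made up).

```python
def phases():
    yield "start"    # first suspension point
    yield "middle"   # second suspension point
    yield "end"      # third suspension point

gen = phases()
print(next(gen))  # start
print(next(gen))  # middle -- resumed right after the first yield
print(next(gen))  # end    -- resumed right after the second
```

The interpreter tracks the suspension point for us; the caller never chooses where to re-enter, it simply resumes.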
more interesting is...
not a one-time deal
yield to the caller
caller resume routine
many times!
even more interesting
yield from multiple places
and resume knows which yield to go back to
---
What makes co-routines even more interesting is that we can yield and resume as many times as we want, until, of course, the co-routine comes to a natural termination.
We can also have as many yield&#x2019;s as we want, we don&#x2019;t always have to yield from the same place. Although having yielded at a given point, we resume at that point, we can&#x2019;t choose some other yield point to re-enter at.
more interesting is...
not a one-time deal
yield to the caller
caller resume routine
many times!
even more interesting
yield from multiple places
and resume knows which yield to go back to
---
What makes co-routines even more interesting is that we can yield and resume as many times as we want, until, of course, the co-routine comes to a natural termination.
We can also have as many yield&#x2019;s as we want, we don&#x2019;t always have to yield from the same place. Although having yielded at a given point, we resume at that point, we can&#x2019;t choose some other yield point to re-enter at.
more interesting is...
not a one-time deal
yield to the caller
caller resume routine
many times!
even more interesting
yield from multiple places
and resume knows which yield to go back to
---
What makes co-routines even more interesting is that we can yield and resume as many times as we want, until, of course, the co-routine comes to a natural termination.
We can also have as many yield&#x2019;s as we want, we don&#x2019;t always have to yield from the same place. Although having yielded at a given point, we resume at that point, we can&#x2019;t choose some other yield point to re-enter at.
more interesting is...
not a one-time deal
yield to the caller
caller resume routine
many times!
even more interesting
yield from multiple places
and resume knows which yield to go back to
---
What makes co-routines even more interesting is that we can yield and resume as many times as we want, until, of course, the co-routine comes to a natural termination.
We can also have as many yield&#x2019;s as we want, we don&#x2019;t always have to yield from the same place. Although having yielded at a given point, we resume at that point, we can&#x2019;t choose some other yield point to re-enter at.
more interesting is...
not a one-time deal
yield to the caller
caller resume routine
many times!
even more interesting
yield from multiple places
and resume knows which yield to go back to
---
What makes co-routines even more interesting is that we can yield and resume as many times as we want, until, of course, the co-routine comes to a natural termination.
We can also have as many yield&#x2019;s as we want, we don&#x2019;t always have to yield from the same place. Although having yielded at a given point, we resume at that point, we can&#x2019;t choose some other yield point to re-enter at.
more interesting is...
not a one-time deal
yield to the caller
caller resume routine
many times!
even more interesting
yield from multiple places
and resume knows which yield to go back to
---
What makes co-routines even more interesting is that we can yield and resume as many times as we want, until, of course, the co-routine comes to a natural termination.
We can also have as many yield&#x2019;s as we want, we don&#x2019;t always have to yield from the same place. Although having yielded at a given point, we resume at that point, we can&#x2019;t choose some other yield point to re-enter at.
2nd idea - multitasking
1st = thread model
several running tasks
OS or lang runtime schedules
don&#x2019;t know when so access shared objects = pain (locks)
Fibers = 2nd
programmer has control
choose when in each task to give up CPU
and who to give it to
---
You should be familiar with pre-emptive multitasking, as it&#x2019;s the standard model of concurrency used by most Thread implementations.
You have several tasks running at the same time, scheduled by the OS or language runtime.
The gotcha is access to shared objects.
Fibers, however, use the co-operative model.
With this, no two tasks run at the exact same time, and it&#x2019;s up to the programmer to decide when each task will give up control and whom to pass control to.
2 threads, alpha, beta
scheduler gives each some CPU time
for work
they don&#x2019;t know when
so alpha wants shared data
locks it
stops changes when CPU elsewhere
when beta gets the CPU
if shared data is locked, it can&#x2019;t use it,
probably can&#x2019;t do anything, wasted effort
---
The main problem with pre-emptive multitasking is that (on a single core machine) these two threads are given CPU time arbitrarily by some scheduler. They don&#x2019;t know when in their life-cycle this&#x2019;ll happen, so when thread alpha wants to access the shared data, it has to lock it. Unfortunately this means the shared data could remain locked while thread beta has the CPU time, so thread beta can&#x2019;t do anything.
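A minimal sketch of that locking dance in Ruby (the shared list and the message strings are made up for illustration; `Thread` and `Mutex` are core Ruby):

```ruby
shared = []           # data both threads want to touch
lock   = Mutex.new    # neither thread knows when it'll be preempted,
                      # so every access to `shared` must take the lock

alpha = Thread.new do
  lock.synchronize { shared << "alpha was here" }
end

beta = Thread.new do
  # if alpha holds the lock when beta gets the CPU, beta just blocks here
  lock.synchronize { shared << "beta was here" }
end

[alpha, beta].each(&:join)
shared.sort  # => ["alpha was here", "beta was here"] (arrival order varies)
```

The sort at the end is the tell: the scheduler decides who ran first, so without it the result is non-deterministic.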
with co-op
fibers, not threads
no external scheduler
when fiber has CPU it has CPU
can use shared data without lock
nothing else running.
when done
or done enough
transfers CPU away
other fiber picks up and starts work
---
On the other hand, in co-operative multitasking, the fiber itself has explicit control of when the CPU will transfer away. This means it doesn&#x2019;t need to lock anything because it&#x2019;s safe in the knowledge that no other fiber will be running unless it says it&#x2019;s done.
When the fiber is done (or happy that it&#x2019;s done enough for now), it stops accessing the shared data and simply transfers control away to some other fiber.
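Here is the same alpha/beta pair sketched as fibers (core Fiber API; the step labels are made up). The &#x201C;scheduler&#x201D; is just our own calls to resume, so no locks are needed and the interleaving is fully deterministic:

```ruby
shared = []

alpha = Fiber.new do
  shared << "alpha step 1"   # safe: nothing else is running right now
  Fiber.yield                # done enough for now; hand the CPU back
  shared << "alpha step 2"
end

beta = Fiber.new do
  shared << "beta step 1"
  Fiber.yield
  shared << "beta step 2"
end

# We decide exactly who runs and when: no external scheduler.
alpha.resume
beta.resume
alpha.resume
beta.resume

shared
# => ["alpha step 1", "beta step 1", "alpha step 2", "beta step 2"]
```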
with co-op
fibers, not threads
no external scheduler
when fiber has CPU it has CPU
can use shared data without lock
nothing else running.
when done
or done enough
transfers CPU away
other fiber picks up and starts work
---
On the other hand, in co-operative multitasking, the fiber itself has explicit control of when the CPU will transfer away. This means it doesn&#x2019;t need to lock anything because it&#x2019;s safe in the knowledge that no other fiber will be running unless it says it&#x2019;s done.
When the fiber is done (or happy that it&#x2019;s done enough for now), it stops accessing the shared data and simply transfers control away to some other fiber.
science over. code now.
simple example of creating Fiber.
familiar if worked with threads
block is workload for fiber
illustrates 3 things...
Fiber.yield is exit point
shared stack (local var i same between yields)
infinite loop (!)
---
So, I've bored you with the science part, how about looking at some code?
If you've used threads in ruby this should be familiar. You create a Fiber by passing a block to the constructor. The block is the "work load" for that Fiber. In this case an infinite loop to generate increasingly excited hellos to the LRUG crowd. Don't worry about that pesky "infinite" though...
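The notes name the fiber hello_lrug; here's a reconstruction of what the slide's code probably looked like (the exact greeting string is my guess):

```ruby
# Creating a Fiber looks much like creating a Thread: pass the
# workload as a block. Nothing runs until the fiber is resumed.
hello_lrug = Fiber.new do
  i = 1                 # lives on the fiber's own stack...
  loop do               # ...inside an infinite loop, deliberately (!)
    Fiber.yield "Hello LRUG#{'!' * i}"
    i += 1              # ...and survives between yields
  end
end
```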
after create fiber
like thread, not running
call "resume" (chicken-before-egg)
makes fiber run from start to Fiber.yield
returns value
each successive .resume goes back in
resumes from Fiber.yield
with previous stack intact
----
So, when you create a Fiber, again just like a thread, it won't do anything until you ask it to. To start it you call the somewhat chicken-before-the-egg "resume" method. This causes hello_lrug to run until it hits that Fiber.yield, which pauses execution of the Fiber and returns the value passed to it. You also use "resume" to re-enter the Fiber to do some more work.
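A minimal sketch of those resume semantics (a throwaway counter fiber of my own, not from the slides):

```ruby
# Each resume runs the fiber until the next Fiber.yield; the value
# passed to Fiber.yield becomes the return value of resume.
counter = Fiber.new do
  n = 0
  loop do
    n += 1
    Fiber.yield n   # pause here; hand n back to the caller
  end
end

counter.resume  # => 1 (ran from the start up to the first yield)
counter.resume  # => 2 (re-entered at the yield, stack intact)
```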
3rd interesting thing
that pesky infinite loop
it&#x2019;s ok
Fiber only runs up to .yield
then exit
CPU is out and nothing running
only call resume 5 times, never get our 6th
no longer need to think about explicit termination
lazy eval = super easy
----
So although we gave hello_lrug a workload that *will never end*, it's not a problem, because we use the yield and resume methods to explicitly schedule when hello_lrug runs. If we only want to run it 5 times and never come back to it, that's ok, it won't eat up CPU time. This gives us an interesting new way to think about writing functions; if they don't have to end, lazy evaluation becomes super easy...
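That "only call resume 5 times, never get our 6th" point is easy to demonstrate (my example, not the slide's):

```ruby
runs = 0

endless = Fiber.new do
  loop do          # a workload that *will never end* on its own
    runs += 1
    Fiber.yield    # ...but it only runs when explicitly resumed
  end
end

5.times { endless.resume }
runs  # => 5; the loop body simply never ran a 6th time
```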
Fibonacci
standard fib method using recursion
can be hard to get head around
have to worry about termination clauses
can be expensive
(this impl will calc fib(1) several times)
---
Hey, so what's a talk without Fibonacci?
Here's the standard implementation for generating a number in the Fibonacci sequence in ruby. It uses recursion, which can be hard to get your head around before you see how it works, and you have to take care to have correct guard clauses so the recursion terminates.
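For reference, that standard recursive version (a plain reconstruction; the slide itself isn't in these notes):

```ruby
# Classic recursive definition: note the guard clause needed to stop
# the recursion, and that small values like fib(1) get recomputed
# over and over as the call tree branches.
def fib(n)
  return n if n < 2   # termination clause
  fib(n - 1) + fib(n - 2)
end

fib(6)  # => 8
```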
same thing, with fibers
understanding co-routines is probably hard
both have mental roadblock
but the def is more natural
advantage, unlike recursion
get fib 6, gives us fib 1 - 5 as well
recursion calcs,
but doesn&#x2019;t share
---
Here's the Fibrous way of doing it. Again, there's a fundamental concept you need to understand first (co-routines), but I do think this is a slightly more natural way of defining the sequence.
The difference is that to get the 6th number we have to call resume on the fiber 6 times, with the side-effect of being handed all 5 preceding numbers in the sequence along the way.
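One way of writing that fiber-based version (my reconstruction, indexing from fib(0) = 0):

```ruby
fib = Fiber.new do
  a, b = 0, 1
  loop do
    Fiber.yield a     # hand out the next number in the sequence
    a, b = b, a + b   # a and b persist on the fiber's stack
  end
end

# Six resumes give the first six numbers - no guard clauses, and
# unlike the recursive version, nothing is recomputed along the way.
6.times.map { fib.resume }  # => [0, 1, 1, 2, 3, 5]
```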
lazy eval = fibers!
most use I think
is where used in 1.9 stdlib
.each, .map &c without block = enumerator
can be chained
under the hood all done with fibers
--
This sort of lazy evaluation is where Fibers shine, and probably where they'll see the most use.
And, in fact, it's exactly this sort of thing that Fibers are being used for in the ruby 1.9 stdlib. Things like .each and .map have been reworked so that without a block they now return enumerators that you can chain together. And under the hood these enumerators are implemented using fibers.
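For example (MRI 1.9+ behaviour):

```ruby
# Without a block, .each returns an Enumerator...
enum = [10, 20, 30].each

# ...and external iteration with .next is, in MRI, driven by a
# fiber under the hood.
enum.next  # => 10
enum.next  # => 20

# Enumerators chain, so you can mix and match iteration methods:
[1, 2, 3].each_with_index.map { |x, i| x * i }  # => [0, 2, 6]
```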
and in the real world?
(I dunno)
github search
plenty results
on closer inspection
most forks/copies of rubyspec for fibers
(a good resource to read
if you want to
know ruby)
the first non-rubyspec result though...
---
So, that's all a bit theoretical. What real use are fibers?
Well, I don't know, so I did a quick search on github, and to my surprise there were actually plenty of results.
But... on closer inspection, the first few pages are entirely forks and copies of the Ruby specs for fibers. Which, by the way, I totally recommend reading if you want to get an idea of how something in ruby actually works.
The first result that wasn't a rubyspec requires a detour first...
another quick detour
if you've done it
you know
evented programming is different
example reading a webpage
normal is simple, call a couple of methods
evented - much more complex.
define state recording models
use callback methods
you gain performance & flexibility
but you lose simplicity and familiarity
---
Well... another quick detour. If you've ever done any evented programming you'll know that the code looks very different from normal code.
Here's a simplified example of how to read a webpage. For the normal case it's really simple, you just call a couple of methods.
The evented case, not so much. You have to rely on callback methods and keep some object around to hold the result of those callbacks. What you lose in simplicity you gain in performance and flexibility, but it's hard to get your head around.
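A toy illustration of the two shapes (fetch_page and fetch_page_evented are stand-ins that return a canned string, not a real HTTP client):

```ruby
# Normal, blocking style: call a method, get the body straight back.
def fetch_page
  "<html>hello</html>"   # stand-in for a real, blocking HTTP call
end

body = fetch_page

# Evented style: hand over a callback and stash state until it fires.
# (A real event loop would invoke the callback later, from its own
# scheduling; this toy fires it immediately.)
def fetch_page_evented(&on_complete)
  on_complete.call("<html>hello</html>")
end

result = nil
fetch_page_evented { |page| result = page }  # state kept via the closure
```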
that first non-rubyspec github hit?
Neverblock - fibers + eventmachine + async libs
give you sync style API for async programming
get performance (not flex)
without changing much code
or how it feels
just replace blocking libraries with neverblock
not going to cover in detail. 1 more slide!
--
The first non-rubyspec result on github that uses fibers was: Neverblock.
This library uses Fibers, EventMachine and other non-blocking APIs to present you with an API for doing asynchronous programming that looks remarkably synchronous. So you don't have to change your code to get the benefit of asynchronous performance.
I won't go into details (I only have 1 more slide!), but you should check it out if you're interested.
plenty I didn't cover
remaining API
transfer - yield this fiber + resume another fiber in one go
don't go back to caller
others simple enough
lightweight - less mem than same num threads
single core only (all fibers run in same thread)
--
Last slide. There's loads I didn't cover, but I think I got the basics.
3 remaining API methods (apart from resume and yield).
Transfer is like yield, but instead of giving the CPU back to the caller, you give it to the Fiber you called transfer on. The other two are simple enough.
Supremely lightweight. Spinning up fibers takes much less memory than threads; there's a good comparison.
Single core solution really.
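A small transfer sketch (my own; note that Fiber#transfer lived in the 'fiber' stdlib before Ruby 3.0 and is core after that):

```ruby
# Fiber#transfer needs this require on older Rubies; harmless
# elsewhere.
begin
  require 'fiber'
rescue LoadError
  # core on modern Rubies; nothing to do
end

log = []
root = Fiber.current

b = Fiber.new do
  log << :b
  root.transfer   # hand the CPU straight back to the root fiber
end

a = Fiber.new do
  log << :a
  b.transfer      # go directly to b, not back to our caller
end

a.transfer        # unlike resume, control never returns here from a;
log               # it comes back via b's transfer => [:a, :b]
```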
I'll put a resource slide up when I post these slides....