This document discusses coroutines and provides examples of using coroutines to generate Fibonacci sequences and format output text. Some key points:
- A coroutine can be suspended and later resumed from the point of suspension, retaining its execution state. This allows breaking execution into multiple calls.
- Examples show generating Fibonacci sequences using direct code, routines, classes, and coroutines. Coroutines avoid needing explicit execution state.
- Additional examples format a block of input text into structured output blocks using direct code and coroutines. Coroutines simplify retaining state across multiple calls.
Coroutine (Concurrency)
1. 2 Coroutine
• A coroutine is a routine that can also be suspended at some point and
later resumed from that point when control returns.
• The state of a coroutine consists of:
– an execution location, starting at the beginning of the coroutine and
remembered at each suspend.
– an execution state holding the data created by the code the coroutine
is executing.
⇒ each coroutine has its own stack, containing its local variables and
those of any routines it calls.
– an execution status—active or inactive or terminated—which
changes as control resumes and suspends in a coroutine.
• Hence, a coroutine does not start from the beginning on each activation;
it is activated at the point of last suspension.
• In contrast, a routine always starts execution at the beginning and its
local variables only persist for a single activation.
Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License.
[Figure, pages 8–14: a cocaller and a coroutine shown side by side, each with its own program (locations 10–30) and state. A cocall makes the coroutine active and the cocaller inactive; each suspend in the coroutine reactivates the cocaller just after its last resume, and each resume in the cocaller reactivates the coroutine just after its last suspend. When the coroutine's program finally returns, its state becomes terminated.]
• A coroutine handles the class of problems that need to retain state
between calls (e.g. plugin, device driver, finite-state machine).
• A coroutine executes synchronously with other coroutines; hence, no
concurrency among coroutines.
• Coroutines are the precursor to concurrent tasks, and introduce the
complex concept of suspending and resuming on separate stacks.
9. 15
• Two different approaches are possible for activating another coroutine:
1. A semi-coroutine acts asymmetrically, like non-recursive routines, by
implicitly reactivating the coroutine that previously activated it.
2. A full-coroutine acts symmetrically, like recursive routines, by
explicitly activating a member of another coroutine, which directly or
indirectly reactivates the original coroutine (activation cycle).
• These approaches accommodate two different styles of coroutine usage.
2.1 Semi-Coroutine
2.1.1 Fibonacci Sequence
$$f(n) = \begin{cases} 0 & n = 0 \\ 1 & n = 1 \\ f(n-1) + f(n-2) & n \ge 2 \end{cases}$$
• 3 states, producing the sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, . . .
2.1.1.1 Direct
• Assume output can be generated at any time.
10. 16
int main() {
int fn, fn1, fn2;
fn = 0; fn1 = fn; // 1st case
cout << fn << endl;
fn = 1; fn2 = fn1; fn1 = fn; // 2nd case
cout << fn << endl;
for ( ;; ) { // infinite loop
fn = fn1 + fn2; fn2 = fn1; fn1 = fn; // general case
cout << fn << endl;
}
}
• Convert program into a routine that generates a sequence of Fibonacci
numbers on each call (no output in routine):
int main() {
for ( int i = 1; i <= 10; i += 1 ) { // first 10 Fibonacci numbers
cout << fibonacci() << endl;
}
}
• Examine different solutions.
11. 17
2.1.1.2 Routine
int fn1, fn2, state = 1; // global variables
int fibonacci() {
int fn;
switch (state) {
case 1:
fn = 0; fn1 = fn;
state = 2;
break;
case 2:
fn = 1; fn2 = fn1; fn1 = fn;
state = 3;
break;
case 3:
fn = fn1 + fn2; fn2 = fn1; fn1 = fn;
break;
}
return fn;
}
• unencapsulated global variables necessary to retain state between calls
• only one fibonacci generator can run at a time
• execution state must be explicitly retained
12. 18
2.1.1.3 Class
class fibonacci {
int fn, fn1, fn2, state; // global class variables
public:
fibonacci() : state(1) {}
int next() {
switch (state) {
case 1:
fn = 0; fn1 = fn;
state = 2;
break;
case 2:
fn = 1; fn2 = fn1; fn1 = fn;
state = 3;
break;
case 3:
fn = fn1 + fn2; fn2 = fn1; fn1 = fn;
break;
}
return fn;
}
};
13. 19
int main() {
fibonacci f1, f2;
for ( int i = 1; i <= 10; i += 1 ) {
cout << f1.next() << " " << f2.next() << endl;
} // for
}
• unencapsulated program global variables become encapsulated object
global variables
• multiple fibonacci generators (objects) can run at a time
• execution state must still be explicitly retained
14. 20
2.1.1.4 Coroutine
#include <uC++.h> // first include file
#include <iostream>
using namespace std;
_Coroutine fibonacci { // : public uBaseCoroutine
int fn; // used for communication
void main() { // distinguished member
int fn1, fn2; // retained between resumes
fn = 0; fn1 = fn;
suspend(); // return to last resume
fn = 1; fn2 = fn1; fn1 = fn;
suspend(); // return to last resume
for ( ;; ) {
fn = fn1 + fn2; fn2 = fn1; fn1 = fn;
suspend(); // return to last resume
}
}
public:
int next() {
resume(); // transfer to last suspend
return fn;
}
};
15. 21
void uMain::main() { // argc, argv class variables
fibonacci f1, f2;
for ( int i = 1; i <= 10; i += 1 ) {
cout << f1.next() << " " << f2.next() << endl;
}
}
• no explicit execution state! (see direct solution)
• distinguished member main (coroutine main) can be suspended and
resumed
• first resume starts main on a new stack (cocall); subsequent resumes
restart at the last suspend
• suspend restarts at the last resume
• object becomes a coroutine on first resume; coroutine becomes an object
when main ends
• both statements cause a context switch between coroutine stacks
[Figure, pages 22–29: step-by-step execution trace of the fibonacci coroutine, repeating the coroutine code beside stack snapshots. umain, f1, and f2 each have a separate stack; every call to next does a resume that context switches to the coroutine's stack, and every suspend context switches back to the next frame that issued the resume. Successive snapshots show fn, fn1, and fn2 in f1 and f2 advancing through 0, 1, 1, 2, . . . as i increases in umain.]
24. 30
• routine frame at the top of the stack knows where to activate execution
• coroutine main does not have to return before coroutine object is deleted
• uMain is the initial coroutine started by µC++
• argc and argv are implicitly defined in uMain::main
• <uC++.h> must be the first include
• compile with the u++ command (a minimal example follows)
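A minimal complete µC++ program combining these points might look as follows (a sketch, assuming a standard µC++ installation; the file name hello.cc is illustrative):

    #include <uC++.h>   // must be the first include
    #include <iostream>
    using namespace std;

    void uMain::main() {   // uMain replaces main; argc and argv are class variables
        cout << "argument count: " << argc << endl;
    }

Compile and run, e.g.: u++ hello.cc && ./a.out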
2.1.2 Formatted Output
Unstructured input:
abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
Structured output:
abcd efgh ijkl mnop qrst
uvwx yzab cdef ghij klmn
opqr stuv wxyz
blocks of 4 letters, separated by 2 spaces, grouped into lines of 5 blocks.
25. 31
2.1.2.1 Direct
• Assume input can be obtained at any time.
int main() {
int g, b;
char ch;
for ( ;; ) { // for as many characters
for ( g = 0; g < 5; g += 1 ) { // groups of 5 blocks
for ( b = 0; b < 4; b += 1 ) { // blocks of 4 chars
cin >> ch; // read one character
if ( cin.eof() ) goto fini; // eof ? multi-level exit
cout << ch; // print character
}
cout << " "; // print block separator
}
cout << endl; // print group separator
}
fini: ;
if ( g != 0 || b != 0 ) cout << endl; // special case
}
• Convert program into a routine passed one character at a time to
generate structured output (no input in routine).
26. 32
2.1.2.2 Routine
int g, b; // global variables
void fmtLines( char ch ) {
if ( ch == EOF ) { cout << endl; return; }
if ( ch == '\n' ) return; // ignore newline characters
cout << ch; // print character
b += 1;
if ( b == 4 ) { // block of 4 chars
cout << " "; // block separator
b = 0;
g += 1;
}
if ( g == 5 ) { // group of 5 blocks
cout << endl; // group separator
g = 0;
}
}
27. 33
int main() {
char ch;
cin >> noskipws; // turn off white space skipping
for ( ;; ) { // for as many characters
cin >> ch;
if ( cin.eof() ) break; // eof ?
fmtLines( ch );
}
fmtLines( EOF );
}
• must retain variables b and g between successive calls.
• only one instance of the formatter
• routine fmtLines must flatten two nested loops into assignments and if
statements.
28. 34
2.1.2.3 Class
class FmtLines {
int g, b; // global class variables
public:
FmtLines() : g( 0 ), b( 0 ) {}
~FmtLines() { if ( g != 0 || b != 0 ) cout << endl; }
void prt( char ch ) {
cout << ch; // print character
b += 1;
if ( b == 4 ) { // block of 4 chars
cout << " "; // block separator
b = 0;
g += 1;
}
if ( g == 5 ) { // group of 5 blocks
cout << endl; // group separator
g = 0;
}
}
};
29. 35
int main() {
FmtLines fmt;
char ch;
for ( ;; ) { // for as many characters
cin >> ch; // read one character
if ( cin.eof() ) break; // eof ?
fmt.prt( ch );
}
}
• Solves the encapsulation and multiple-instance issues, but execution
state is still explicitly managed.
30. 36
2.1.2.4 Coroutine
_Coroutine FmtLines {
char ch; // used for communication
int g, b; // global because used in destructor
void main() {
for ( ;; ) { // for as many characters
for ( g = 0; g < 5; g += 1 ) { // groups of 5 blocks
for ( b = 0; b < 4; b += 1 ) { // blocks of 4 characters
suspend();
cout << ch; // print character
}
cout << " "; // block separator
}
cout << endl; // group separator
}
}
public:
FmtLines() { resume(); } // start coroutine
~FmtLines() { if ( g != 0 || b != 0 ) cout << endl; }
void prt( char ch ) { FmtLines::ch = ch; resume(); }
};
31. 37
void uMain::main() {
FmtLines fmt;
char ch;
for ( ;; ) {
cin >> ch; // read one character
if ( cin.eof() ) break; // eof ?
fmt.prt( ch );
}
}
• resume in constructor allows coroutine main to get to 1st input suspend.
[Figure: umain's stack (fmt, ch) calls fmt.prt, whose resume context switches to the fmt coroutine's stack (main, using ch, g, b); suspend switches back into prt.]
32. 38
2.1.3 Correct Coroutine Usage
• Eliminate unnecessary computation or flag variables used to retain
information about execution state.
• E.g., sum the even and odd digits of a 10-digit number, where each digit
is passed to the coroutine:
BAD: Explicit Execution State

    for ( int i = 0; i < 10; i += 1 ) {
        if ( i % 2 == 0 ) // even ?
            even += digit;
        else
            odd += digit;
        suspend();
    }

GOOD: Implicit Execution State

    for ( int i = 0; i < 5; i += 1 ) {
        even += digit;
        suspend();
        odd += digit;
        suspend();
    }
• The right example illustrates the “Zen” of the coroutine: let it do the work (a completed sketch follows).
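To make the GOOD fragment above concrete, here is a hedged sketch that completes it into a µC++ coroutine; the coroutine name DigitSum and the members next, evenSum, and oddSum are illustrative assumptions, not from the slides:

    _Coroutine DigitSum {
        int digit;              // used for communication
        int even, odd;
        void main() {           // 1st resume starts here
            for ( int i = 0; i < 5; i += 1 ) { // 10 digits = 5 even/odd pairs
                even += digit;  // digit in even position
                suspend();      // wait for next digit
                odd += digit;   // digit in odd position
                suspend();
            }
        }
      public:
        DigitSum() : even( 0 ), odd( 0 ) {}
        void next( int d ) { digit = d; resume(); } // deliver one digit
        int evenSum() { return even; }
        int oddSum() { return odd; }
    };

As with fibonacci, the first resume (from the first next call) starts main, so the first digit is already available when main begins; after the 10th digit, main is left suspended, which is fine because a coroutine main need not return before the object is deleted.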
• E.g., a BAD solution for the previous Fibonacci generator is:
33. 39
void main() {
int fn1, fn2, state = 1;
for ( ;; ) {
switch (state) {
case 1:
fn = 0; fn1 = fn;
state = 2;
break;
case 2:
fn = 1; fn2 = fn1; fn1 = fn;
state = 3;
break;
case 3:
fn = fn1 + fn2; fn2 = fn1; fn1 = fn;
break;
}
suspend();
}
}
• Uses explicit flag variables to control execution state and a single
suspend at the end of an enclosing loop.
34. 40
• None of the coroutine’s capabilities are used, and the program structure
is lost in the switch statement.
• Must do more than just activate the coroutine main to demonstrate an
understanding of retaining data and execution state within a coroutine.
2.1.4 Device Driver
• Called by interrupt handler for hardware serial port.
• Parse transmission protocol and return message text.
. . . STX . . . message . . . ESC ETX . . . message . . . ETX 2-byte CRC . . .
35. 41
_Coroutine SerialDriver {
unsigned char byte;
int status;
unsigned char *msg;
public:
SerialDriver( unsigned char *msg ) : msg( msg ) { resume(); }
int next( unsigned char b ) { // called by interrupt handler
byte = b;
resume();
return status;
}
private:
void main() {
newmsg:
for ( ;; ) { // parse messages
status = CONT;
int lnth = 0, sum = 0;
do {
suspend();
} while ( byte != STX ); // look for start of message
eomsg:
for ( ;; ) {
suspend(); // parse message bytes
switch ( byte ) {
case STX: // protocol violation
status = ERROR;
continue newmsg; // uC++ labelled continue
case ETX: // end of message
break eomsg; // uC++ labelled break
case ESC: // escape next character
suspend(); // get escaped character
break;
} // switch
msg[lnth] = byte; // store message
lnth += 1;
sum += byte; // compute CRC
} // for
suspend(); // obtain 1st CRC byte
int crc = byte;
suspend(); // obtain 2nd CRC byte
crc = (crc << 8) | byte;
status = crc == sum ? MSG : ERROR;
msgcomplete( msg, lnth ); // return message to OS
} // for
} // main
}; // SerialDriver
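The slides do not show the interrupt handler itself; the following is a hedged sketch of how it might drive SerialDriver, where msgBuffer and readDataRegister are illustrative assumptions:

    unsigned char msgBuffer[256];           // assumed message buffer
    SerialDriver driver( msgBuffer );

    void serialInterrupt() {                // assumed to run once per received byte
        unsigned char b = readDataRegister(); // hypothetical hardware read
        int status = driver.next( b );      // resume coroutine with the new byte
        if ( status == MSG ) { /* complete message with matching CRC in msgBuffer */ }
        else if ( status == ERROR ) { /* protocol violation or CRC mismatch */ }
        // status == CONT : message still being parsed
    }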
37. 43
2.1.5 Producer-Consumer
_Coroutine Cons {
int p1, p2, status; bool done;
void main() { // starter prod
int money = 1;
status = 0;
// 1st resume starts here
for ( ;; ) {
if ( done ) break;
cout << "receives:" << p1 << ", " << p2;
cout << " and pays $" << money << endl;
status += 1;
suspend(); // activate delivery or stop
money += 1;
}
cout << "Cons stops" << endl;
} // suspend / resume(starter)
public:
Cons() : done(false) {}
int delivery( int p1, int p2 ) {
Cons::p1 = p1; Cons::p2 = p2;
resume(); // activate main
return status;
}
void stop() { done = true; resume(); } // activate main
};
38. 44
_Coroutine Prod {
Cons &c;
int N;
void main() { // starter umain
int i, p1, p2, status;
// 1st resume starts here
for ( i = 1; i <= N; i += 1 ) {
p1 = rand() % 100;
p2 = rand() % 100;
cout<< "delivers:"<< p1<< ", "<< p2<< endl;
status = c.delivery( p1, p2 );
cout << " gets status:" << status << endl;
}
cout << "Prod stops" << endl;
c.stop();
} // suspend / resume(starter)
public:
Prod( Cons &c ) : c(c) {}
void start( int N ) {
Prod::N = N;
resume(); // activate main
}
};
void uMain::main() { // instance called umain
Cons cons; // create consumer
Prod prod( cons ); // create producer
prod.start( 5 ); // start producer
} // resume(starter)
39. 45
[Figure, page 45: stacks and control flow for the semi-coroutine producer-consumer. umain calls prod.start (resume 1, starting Prod's main); each c.delivery call does resume 2, activating Cons's main, whose suspend returns control to delivery; stop does resume 3 to reactivate Cons one last time.]
40. 46
• Do both Prod and Cons need to be coroutines?
• When coroutine main returns, it activates the coroutine that started main.
• prod started cons.main, so control goes to prod suspended in stop.
• uMain started prod.main, so control goes back to uMain suspended in
start.
2.2 Full Coroutines
• Semi-coroutine activates the member routine that activated it.
• Full coroutine has a resume cycle; semi-coroutine does not form a
resume cycle.
[Figure: a routine uses call/return on a single stack; a semi-coroutine adds suspend, returning control to its resumer; a full coroutine uses resume to form a cycle across stacks.]
41. 47
• A full coroutine is allowed to perform semi-coroutine operations
because it subsumes the notion of semi-coroutine.
_Coroutine fc {
    void main() { // starter umain
        mem();    // ?
        resume(); // ?
        suspend(); // ?
    } // ?
  public:
    void mem() { resume(); }
};
void uMain::main() {
    fc x;
    x.mem();
}

Control-flow semantics (each transition is a context switch):

              inactive          active
    suspend   uThisCoroutine    last resumer
    resume    uThisCoroutine    this

[Figure: the stacks of umain and coroutine x during the call to mem.]
42. 48
[Figure: umain's call to mem does a resume that activates fc's coroutine main.]
• Suspend inactivates the current active coroutine (uThisCoroutine), and
activates the last resumer.
• Resume inactivates the current active coroutine (uThisCoroutine), and
activates the current object (this).
• Hence, the current object must be a non-terminated coroutine.
• Note, this and uThisCoroutine change at different times.
• Exception: the last resumer is not changed when a coroutine resumes
itself, because there is no practical value in doing so.
43. 49
• Full coroutines can form an arbitrary topology with an arbitrary number
of coroutines.
• There are 3 phases to any full coroutine program:
1. starting the cycle
2. executing the cycle
3. stopping the cycle
• Starting the cycle requires each coroutine to know at least one other
coroutine.
• The problem is mutually recursive references:
fc x(y), y(x);
• One solution is to make closing the cycle a special case:
fc x, y(x);
x.partner( y );
• Once the cycle is created, execution around the cycle can begin (see the sketch below).
• Stopping can be as complex as starting, because a coroutine goes back to
its starter.
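To make the three phases concrete, here is a minimal sketch of starting, executing, and stopping a two-coroutine cycle; it reuses the fc/partner idea above, and the step counter n is an illustrative assumption, not from the slides:

    _Coroutine fc {
        fc *part;   // other coroutine in the cycle
        int n;      // remaining steps
        void main() { // returning activates this coroutine's starter
            for ( ; n > 0; n -= 1 ) {
                // ... one step of work ...
                part->cycle(); // explicitly activate partner: resume cycle
            }
        }
      public:
        fc( int n ) : part( nullptr ), n( n ) {}
        fc( fc &p, int n ) : part( &p ), n( n ) {}
        void partner( fc &p ) { part = &p; } // close the cycle (special case)
        void cycle() { resume(); }           // activate this coroutine
    };
    void uMain::main() {
        fc x( 5 ), y( x, 5 ); // cannot write fc x( y ), y( x );
        x.partner( y );       // close the cycle after creation
        x.cycle();            // start execution around the cycle
    }

When x's main returns after its last step, control goes back to its starter uMain, and y is simply deleted without ever terminating.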
44. 50
• In many cases, it is unnecessary to terminate all coroutines, just delete
them.
• But it is necessary to activate uMain for the program to finish (unless exit
is used).
[Figure: the creation, starter, and execution relationships among umain, x, and y.]
45. 51
2.2.1 Producer-Consumer
_Coroutine Prod {
    Cons *c;
    int N, money, receipt;
    void main() { // starter umain
        int i, p1, p2, status;
        // 1st resume starts here
        for ( i = 1; i <= N; i += 1 ) {
            p1 = rand() % 100;
            p2 = rand() % 100;
            cout << "delivers:" << p1 << ", " << p2 << endl;
            status = c->delivery( p1, p2 );
            cout << " gets status:" << status << endl;
            receipt += 1;
        }
        cout << "Prod stops" << endl;
        c->stop();
    }
  public:
    int payment( int money ) {
        Prod::money = money;
        cout << " gets payment $" << money << endl;
        resume(); // activate Cons::delivery
        return receipt;
    }
    void start( int N, Cons &c ) {
        Prod::N = N; Prod::c = &c;
        receipt = 0;
        resume(); // activate main
    }
};
47. 53
[Figure, page 53: stacks and control flow for the full-coroutine producer-consumer. umain calls prod.start (resume 1, starting Prod's main); Prod's main calls cons.delivery (resume 2, starting Cons's main); Cons's main calls prod.payment (resume 3, reactivating Prod inside delivery); stop finally does resume 4.]
48. 54
2.3 Nonlocal Exceptions
• Exception handling is based on traversing a call stack.
• With coroutines there are multiple call stacks.
• Nonlocal exceptions move from one coroutine stack to another.
_Throw [ throwable-event [ _At coroutine-id ] ] ;
• Hence, exceptions can be handled locally within a coroutine or
nonlocally among coroutines.
• Local exceptions within a coroutine are the same as for exceptions
within a routine/class, with one nonlocal difference:
– An unhandled exception raised by a coroutine raises a nonlocal
exception of type uBaseCoroutine::UnhandledException at the
coroutine’s last resumer and then terminates the coroutine.
49. 55
_Event E {}; // uC++ exception type
_Coroutine C {
void main() { _Throw E(); }
public:
void mem() { resume(); }
};
void uMain::main() {
C c;
try { c.mem();
} catch( uBaseCoroutine::UnhandledException ) {. . .}
}
– Call to c.mem resumes coroutine c and then coroutine c throws
exception E but does not handle it.
– When the base of c’s stack is reached, an exception of type
uBaseCoroutine::UnhandledException is raised at uMain, since it last
resumed c.
– The original exception’s (E) default terminate routine is not called
because it has been caught and transformed.
– The coroutine terminates but control returns to its last resumer rather
than its starter.
50. 56
_Coroutine C {
void main() {
for ( int i = 0; i < 5; i += 1 ) {
try {
_Enable { // allow nonlocal exceptions
. . . suspend(); . . .
}
} catch( E ) { . . . }
}
}
public:
C() { resume(); } // prime loop
void mem() { resume(); }
};
void uMain::main() {
C c;
for ( int i = 0; i < 5; i += 1 ) {
_Throw E() _At c; // exception pending
c.mem(); // trigger exception
}
}
51. 57
• Nonlocal delivery is initially disabled for a coroutine, so handlers can
be set up before any exception can be delivered.
• Hence, nonlocal exceptions must be explicitly enabled before delivery
can occur with _Enable.
• µC++ allows dynamic enabling and disabling of nonlocal event delivery.
_Enable <E1><E2>. . . {
// exceptions E1, E2 are enabled
}
_Disable <E1><E2>. . . {
// exceptions E1, E2 are disabled
}
• Specifying no exceptions enables/disables all nonlocal exceptions.
• _Enable and _Disable blocks can be nested, turning delivery on/off on
entry and reestablishing the delivery state to its prior value on exit.
• The source coroutine delivers the nonlocal exception immediately but
does not propagate it; propagation only occurs when the faulting
coroutine becomes active.
⇒ must call one of the faulting coroutine’s members that does a resume.