The document discusses asynchronous programming and patterns in .NET. It covers several key points:
- Async methods should return Tasks rather than void to allow proper composition.
- Libraries should expose asynchronous APIs that are naturally asynchronous rather than wrapping synchronous code in Tasks unnecessarily.
- Parallelization can hurt performance by adding overhead if not used correctly, such as when parallelizing I/O-bound work.
- Asynchronous patterns like Task.WhenAll can improve performance over sequential code by allowing asynchronous operations to overlap and avoid unnecessary waiting.
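The Task.WhenAll point can be sketched in TypeScript, used here only as a stand-in for C#: `Promise.all` plays the role of `Task.WhenAll`, and `fetchA`/`fetchB` are hypothetical I/O calls invented for illustration.

```typescript
// Two independent async operations: awaiting them one after another serializes
// the waiting, while starting both first lets the operations overlap.
const fetchA = async (): Promise<number> => 1;
const fetchB = async (): Promise<number> => 2;

async function sequential(): Promise<number> {
  const a = await fetchA(); // the second call does not start until this finishes
  const b = await fetchB();
  return a + b;
}

async function overlapped(): Promise<number> {
  // Both operations are started immediately, then awaited together.
  const [a, b] = await Promise.all([fetchA(), fetchB()]);
  return a + b;
}
```

Both functions return the same result; the overlapped version simply avoids the unnecessary waiting the bullet above describes.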
Async best practices - DotNet Conference 2016, Lluis Franco
1. The document discusses best practices for asynchronous programming in .NET.
2. It recommends using async Task methods instead of async void methods, except for event handlers, so exceptions can be caught and callers know when the method finishes.
3. For CPU-bound work, using Parallel.ForEach or Task.Run to put work on the thread pool is recommended, while for I/O work using await is preferred over background threads.
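A loose TypeScript analogy for the "return Task, not void" rule: a function that returns its promise lets callers observe completion and catch errors, while a fire-and-forget call drops both. `save` is a hypothetical failing operation, not from the original talk.

```typescript
// A hypothetical async operation that fails.
async function save(): Promise<void> {
  throw new Error("disk full");
}

// Fire-and-forget (the async-void analog): the caller never sees the error
// and cannot tell when the work finished.
function fireAndForget(): void {
  save().catch(() => { /* error invisible to the caller */ });
}

// Returning the promise (the async Task analog): the caller can await it
// and handle the failure.
function returnsPromise(): Promise<void> {
  return save();
}

fireAndForget();
returnsPromise().catch((e) => console.log("caught:", (e as Error).message));
```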
Understanding reactive programming with Microsoft Reactive Extensions, Oleksandr Zhevzhyk
We all want our applications to be responsive, reliable and testable. But the event-driven paradigm can sometimes lead us to obscure or, even worse, messy code. Let's look into a world where generated data, background tasks and events are stitched together as asynchronous data streams to achieve a better result.
Topic #2: Expanding your mind with the reactive approach. RxJava and Android
Speaker: Vladimir Artemenko, Android developer at Rooky Pro
Audience level: theory covered, initial hands-on experience
Goal of the talk: education
SOLID principles in practice: the Clean Architecture - Droidcon Italy, Fabio Collini
The Clean Architecture was formalized by Robert C. Martin in 2012; it is quite new even though it is based on the SOLID principles (first presented in the early 2000s). The biggest benefit we get from this architecture is code testability: it separates the application code from the code tied to external factors (which is usually harder to test).
In this talk we'll see a practical example of how to apply the SOLID principles, in particular dependency inversion.
Functional Reactive Programming (FRP): Working with RxJS, Oswald Campesato
Functional Reactive Programming (FRP) combines functional programming and reactive programming by treating asynchronous data streams as basic elements. FRP uses Observables to represent these streams, which emit values over time that can be composed together using operators like map and filter. Popular libraries for FRP include RxJS, which supports asynchronous and event-based programs by modeling push-based data streams with Observables. Operators allow transforming and combining Observable streams to build reactive applications.
RxJS provides a paradigm for dealing with asynchronous operations in a way that resembles synchronous code. It uses Observables to represent asynchronous data streams over time that can be composed using operators. This allows events, asynchronous code, and other reactive sources to be handled in a declarative way. Key points are:
- Observables represent asynchronous data streams that can be subscribed to.
- Operators allow manipulating and transforming streams through methods like map, filter, switchMap.
- Schedulers allow controlling virtual time for testing asynchronous behavior.
- Promises represent single values while Observables represent continuous streams, making Observables more powerful for reactive programming.
- Cascading asynchronous calls can be modeled elegantly using switch
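The push-based model in the bullets above can be made concrete with a minimal Observable sketch; this is an illustration only, not the real RxJS API.

```typescript
// A stream is just a function that pushes values to a subscriber; operators
// return new streams that wrap the source.
type Subscriber<T> = (value: T) => void;
type Stream<T> = (next: Subscriber<T>) => void;

const fromArray = <T>(items: T[]): Stream<T> =>
  (next) => items.forEach((v) => next(v));

const map = <T, U>(src: Stream<T>, f: (v: T) => U): Stream<U> =>
  (next) => src((v) => next(f(v)));

const filter = <T>(src: Stream<T>, pred: (v: T) => boolean): Stream<T> =>
  (next) => src((v) => { if (pred(v)) next(v); });

// Compose: double each value, then keep only results greater than 4.
const out: number[] = [];
filter(map(fromArray([1, 2, 3, 4]), (n) => n * 2), (n) => n > 4)((v) => out.push(v));
// out is now [6, 8]
```

Subscribing runs the whole pipeline; real Observables add subscription management, error and completion signals on top of this shape.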
Programming Sideways: Asynchronous Techniques for Android, Emanuele Di Saverio
Android apps need to respond fast and support highly parallel execution and multi-component architectures.
Learn some tricks of the trade for these problems!
as presented at www.mobileconference.it (2013 edition)
A presentation given to Overstock.com IT at an annual conference. Twitter @TECHknO, 2015. The goal of the presentation is to provide a good introduction to the reactive programming model with RxJava.
RxJS - demystified provides an overview of reactive programming and RxJS. The key points covered are:
- Reactive programming focuses on propagating changes without explicitly specifying how propagation happens.
- Observables are at the heart of RxJs and emit values in a push-based manner. Operators allow transforming, filtering, and combining observables.
- Common operators include map, filter, reduce, buffer, and switchMap. Over 120 operators exist for tasks like error handling, multicasting, and conditional logic.
- Marble diagrams visually demonstrate how operators transform observable streams.
- Creating observables from events, promises, arrays and iterables allows wrapping different data sources in a uniform API
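The "uniform API" bullet can be sketched as follows: an array and a promise wrapped in the same subscribe-function shape. `ofArray`/`ofPromise` are hypothetical helpers invented here, not actual RxJS creation functions.

```typescript
// One consumption interface for very different data sources.
type Next<T> = (value: T) => void;
type Source<T> = (next: Next<T>) => void;

const ofArray = <T>(xs: T[]): Source<T> =>
  (next) => xs.forEach((v) => next(v));

const ofPromise = <T>(p: Promise<T>): Source<T> =>
  (next) => { p.then(next); };

// Both sources are consumed identically, even though one delivers its
// values synchronously and the other asynchronously.
const seen: string[] = [];
ofArray(["a", "b"])((v) => seen.push(v));
ofPromise(Promise.resolve("c"))((v) => seen.push(v)); // arrives on a microtask
```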
Android architecture component - FbCircleDev Yogyakarta Indonesia, Pratama Nur Wijaya
The document discusses Android Architecture Components (AAC). It describes AAC as a library, guidelines and standards that aim to standardize architecture and reduce boilerplate code. It discusses key components of AAC like Lifecycles, LiveData, ViewModel and Room that help address issues like lifecycle handling, data persistence and offline support. It provides code examples to demonstrate how these components can be used to build more robust Android applications that properly handle lifecycles and data.
This document provides an overview of RxJS (Reactive Extensions for JavaScript). It begins by explaining why RxJS is useful for dealing with asynchronous code in a synchronous-looking way and provides one paradigm for asynchronous operations. It then discusses the history of callbacks and promises for asynchronous code. The bulk of the document explains key concepts in RxJS including Observables, Operators, error handling, and testing with Schedulers, and compares Promises to RxJS. It provides examples of many common RxJS operators and patterns.
This slide deck (partly German) covers async and parallel programming topics for .NET and C#. For details see http://www.software-architects.com/devblog/2014/02/18/BASTA-2014-Spring-C-Workshop
The document discusses the benefits of using RxJS observables over promises and events for managing asynchronous and reactive code in Angular applications. It explains key concepts like observers, subscriptions, operators, cold vs hot observables, and using RxJS with services and components. Example code is provided for creating observable data services to share data between components, composing asynchronous logic with operators, and best practices for managing subscriptions and preventing memory leaks. Overall, the document promotes a reactive programming style with RxJS for building maintainable and testable Angular applications.
RxJava is a library for composing asynchronous and event-based programs using observable sequences. It provides APIs for asynchronous programming with observable streams and the ability to chain operations and transformations on these streams using reactive extensions. The basic building blocks are Observables, which emit items, and Subscribers, which consume those items. Operators allow filtering, transforming, and combining Observable streams. RxJava helps address problems with threading and asynchronous operations in Android by providing tools to manage execution contexts and avoid callback hell.
RxJS Operators - Real World Use Cases (FULL VERSION), Tracy Lee
This document provides an overview and explanation of various RxJS operators for working with Observables, including:
- The map, filter, and scan operators for transforming streams of data. Map applies a function to each value, filter filters values, and scan applies a reducer function over time.
- Flattening operators like switchMap, concatMap, mergeMap, and exhaustMap for mapping Observables to other Observables.
- Error handling operators like catchError, retry, and retryWhen for catching and handling errors.
- Additional explanation of use cases and common mistakes for each operator discussed. The document is intended to explain these essential operators for real world reactive programming use.
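The scan bullet above ("a reducer applied over time") can be illustrated with a plain-array simulation of a stream; this `scan` is a sketch of the idea, not the actual RxJS operator.

```typescript
// Like reduce, scan folds incoming values with a reducer, but it emits every
// intermediate accumulated result instead of only the final one.
function scan<T, A>(values: T[], reducer: (acc: A, v: T) => A, seed: A): A[] {
  const out: A[] = [];
  let acc = seed;
  for (const v of values) {
    acc = reducer(acc, v);
    out.push(acc); // emit each intermediate value
  }
  return out;
}

const runningTotal = scan([1, 2, 3, 4], (acc, v) => acc + v, 0);
// runningTotal: [1, 3, 6, 10]
```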
Taming Core Data by Arek Holko, Macoscope
The document discusses best practices for working with Core Data in iOS applications. It covers 9 steps: 1) setting up Core Data, 2) accessing the managed object context, 3) creating NSManagedObject subclasses, 4) creating fetch requests, 5) integrating networking, 6) using NSFetchedResultsController, 7) protocolizing models, 8) using immutable models, and 9) modularizing the code. The overall message is that Core Data code should be organized cleanly using small, single-purpose classes and protocols to improve testability, separation of concerns, and code reuse.
A practical guide to using RxJava on Android. Tips for improving your app architecture with reactive programming. What are the advantages and disadvantages of using RxJava over standard architecture? And how to connect with other popular Android libraries?
Presented at Droidcon Greece 2016.
Lecture on Reactive programming on Android, mDevCamp 2016.
RxJS is a library for reactive programming that allows composing asynchronous and event-based programs using observable sequences. It provides the Observable type for pushing multiple values to observers over time asynchronously. Operators allow transforming and combining observables. Key types include Observable, Observer, Subject, BehaviorSubject, and ReplaySubject. Subjects can multicast values to multiple observers. Overall, RxJS is useful for handling asynchronous events as collections in a declarative way.
This document provides an overview of various JavaScript concepts and techniques, including:
- Prototypal inheritance allows objects in JavaScript to inherit properties from other objects. Functions can act as objects and have a prototype property for inheritance.
- Asynchronous code execution in JavaScript is event-driven. Callbacks are assigned as event handlers to execute code when an event occurs.
- Scope and closures - variables are scoped to functions. Functions create closures where they have access to variables declared in their parent functions.
- Optimization techniques like event delegation and requestAnimationFrame can improve performance of event handlers and animations.
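Two of the concepts in the list above, closures and prototypal inheritance, can be sketched briefly; the names are invented for illustration.

```typescript
// Closure: the inner function keeps access to `count`, a variable declared
// in its parent function, even after makeCounter has returned.
function makeCounter(): () => number {
  let count = 0;
  return () => ++count;
}
const next = makeCounter();
next(); // 1
next(); // 2

// Prototype: `child` has no own `greet` property; the lookup is delegated
// to `base` via the prototype chain.
const base = { greet(): string { return "hello"; } };
const child = Object.create(base);
// child.greet() === "hello"
```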
Reactive programming with RxJS - ByteConf 2018, Tracy Lee
Reactive programming paradigms are all around us. So why are they awesome? We'll explore reactive programming in standards, frameworks and libraries and talk about how to think reactively.
Then we'll take a more practical approach and talk about how to utilize reactive programming patterns with an abstraction like RxJS, a domain-specific language for reacting to events, and how using this abstraction can make your development life much easier in React Native.
The document discusses reactive programming concepts using RxJava. It introduces observables and observers, where observables push asynchronous events to observers via subscriptions. It explains how to create observables that return asynchronous values, and how operators like map, filter, and flatMap can transform and combine observable streams. Key lessons are that performance depends on operator implementations, debugging subscriptions can be difficult, and IDE support for reactive code is still maturing.
This document discusses callbacks, promises, and generators for handling asynchronous code in JavaScript. It begins by explaining callbacks and the issues they can cause like "callback hell". It then introduces promises as an alternative using libraries like Q that allow chaining asynchronous operations together. Generators are also covered as a way to write asynchronous code that looks synchronous when combined with promises through libraries like CO. Overall, it recommends using an asynchronous pattern supported by a library to manage complex asynchronous code.
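The callback-versus-promise contrast above can be shown with a tiny pipeline; `step` is a hypothetical async operation (it doubles a number), not code from the talk.

```typescript
// A hypothetical async step.
const step = (n: number): Promise<number> => Promise.resolve(n * 2);

// Node-style callback wrapper around the same step.
function stepCb(n: number, cb: (err: Error | null, result: number) => void): void {
  step(n).then((r) => cb(null, r));
}

// Callback style nests one level per step ("callback hell"):
stepCb(1, (_, a) =>
  stepCb(a, (_, b) =>
    stepCb(b, (_, c) => console.log("callbacks:", c)))); // 8

// The same pipeline as a flat promise chain:
step(1).then(step).then(step).then((c) => console.log("promises:", c)); // 8
```

Generators plus a runner (or modern async/await) flatten this further while keeping the error channel of promises.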
apidays LIVE Australia 2020 - Strangling the monolith with a reactive GraphQL..., apidays
apidays LIVE Australia 2020 - Building Business Ecosystems
Strangling the monolith with a reactive GraphQL gateway
Martin Varga, Senior Software Developer at Atlassian
1. RxJS provides a better way of handling asynchronous code through observables, which are streams of values over time. Observables allow for cancellable, retryable operations and easy composition of different asynchronous sources.
2. Common RxJS operators like map, filter, and flatMap allow transforming and combining observable streams. Operators make observables quite powerful for tasks like async logic, event handling, and API requests.
3. In Angular, observables are used extensively for tasks like HTTP requests, routing, and component communication. Key practices are using async pipes for subscriptions and unsubscribing in lifecycle hooks. RxJS greatly simplifies many common asynchronous patterns in Angular applications.
SOLID principles in practice: the Clean Architecture, Fabio Collini
The Clean Architecture was formalized by Robert C. Martin in 2012; it is quite new even though it is based on the SOLID principles (first presented in the early 2000s). The biggest benefit we get from this architecture is code testability: it separates the application code from the code tied to external factors (which is usually harder to test).
In this talk we'll see a practical example of how to apply the SOLID principles, in particular dependency inversion.
The document discusses async/await in .NET and C#. It is aimed at .NET library developers to help them understand when and how to properly implement async methods. The key topics covered are getting the right mental model for async/await, knowing when not to use async, avoiding allocations, and minimizing suspensions. Examples are provided of manually caching tasks to improve performance compared to unnecessary allocations.
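As a loose TypeScript analogy for the task-caching idea (promises standing in for .NET Tasks; `loadConfig` and its values are invented for illustration):

```typescript
// Reuse one cached promise per key instead of starting a new operation
// (and allocating a new promise) on every call.
const cache = new Map<string, Promise<string>>();
let fetchCount = 0;

function loadConfig(key: string): Promise<string> {
  let p = cache.get(key);
  if (p === undefined) {
    fetchCount += 1; // the real work happens only once per key
    p = Promise.resolve(`value-for-${key}`);
    cache.set(key, p);
  }
  return p; // later callers share the same already-settled promise
}

loadConfig("db");
loadConfig("db");
// fetchCount is 1: the second call reused the cached promise
```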
Amazon has been using and building workflow services for years. It uses Simple Workflow (SWF) internally to lay down the OS and all required software onto a new Amazon server before it joins the Amazon fleet; every Amazon server being put into service is provisioned by a workflow using SWF.
During this brown-bag session you will be taken through an example of a real application that uses SWF.
The document discusses multithreading in Java, including the evolution of threading support across Java releases and examples of implementing multithreading using Threads, ExecutorService, and NIO channels. It also provides examples of how to make operations thread-safe using locks and atomic variables when accessing shared resources from multiple threads. References are included for further reading on NIO-based servers and asynchronous channel APIs introduced in Java 7.
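As a loose TypeScript/Node analogy for the atomic-variable idea (Java's `AtomicInteger` and friends), `Atomics` operations on a `SharedArrayBuffer` give lock-free, thread-safe updates to shared memory; this sketch only shows the API shape on a single thread.

```typescript
// A shared 32-bit counter; in a real program this buffer would be shared
// with worker threads.
const shared = new Int32Array(new SharedArrayBuffer(4));

// Atomics.add reads, adds, and writes back as one indivisible operation,
// so concurrent workers could not lose increments.
for (let i = 0; i < 5; i++) {
  Atomics.add(shared, 0, 2);
}
// Atomics.load(shared, 0) is now 10
```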
Threads, Queues, and More: Async Programming in iOS, TechWell
To keep your iOS app running butter-smooth at 60 frames per second, Apple recommends doing as many tasks as possible asynchronously, or “off the main thread.” Joe Keeley introduces some basic concepts of asynchronous programming in iOS. He discusses what threads and queues are, how they are related, and the special significance of the main queue to iOS. He looks at the options available in the iOS SDK for working asynchronously, including NSOperationQueue and Grand Central Dispatch, and takes an in-depth look at how to implement some common use cases for those options in Swift. Joe pays special attention to networking, one of the most common asynchronous use cases, and spends some time discussing common asynchronous programming pitfalls and how to avoid them. Leave this session ready to try out asynchronous programming in your iOS app.
The history of asynchronous programming in .NET began with threads and the thread pool to handle concurrency. Later, patterns like the Asynchronous Programming Model (APM) and Event-based Asynchronous Pattern (EAP) simplified asynchronous code but it remained complex. The Task Parallel Library (TPL) and async/await further abstracted asynchronous operations so code more closely resembled synchronous code. Now, asynchronous code no longer requires Wait() or GetAwaiter().GetResult() and avoids potential deadlocks.
Think Async: Asynchronous Patterns in NodeJS, Adam L Barrett
JavaScript is single threaded, so understanding the async patterns available in the language is critical to creating maintainable NodeJS applications with good performance. In order to master “thinking in async”, we’ll explore the async patterns available in node and JavaScript including standard callbacks, promises, thunks/tasks, the new async/await, the upcoming asynchronous iteration features, streams, CSP and ES Observables.
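Two of the patterns listed above, async/await and asynchronous iteration, can be sketched together; `produce` is a hypothetical async generator, not code from the talk.

```typescript
// An async generator yields values that may each arrive asynchronously.
async function* produce(): AsyncGenerator<number> {
  yield 1;
  yield 2;
  yield 3;
}

// for await ... of consumes the stream one value at a time, suspending
// the function between values instead of blocking the (single) thread.
async function consume(): Promise<number> {
  let total = 0;
  for await (const n of produce()) {
    total += n;
  }
  return total;
}
```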
The document discusses using asynchronous SQL queries in Flex applications to avoid freezing the user interface. It proposes using a StatementList class to encapsulate executing multiple SQL statements as a transaction in an asynchronous manner. The key points are:
1) Synchronous SQL queries can freeze the UI, so asynchronous queries are preferable.
2) A StatementList class is created to execute multiple SQL statements as a transaction asynchronously without locking the UI.
3) An ExecutionQueue class is introduced to schedule StatementList objects to ensure proper execution order without locking.
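The ExecutionQueue idea can be sketched in TypeScript with promise chaining; the class and method names here are illustrative, not from the original Flex/ActionScript code.

```typescript
// Each enqueued task is chained onto one tail promise, so tasks run strictly
// in submission order and never overlap.
class ExecutionQueue {
  private tail: Promise<void> = Promise.resolve();

  enqueue(task: () => Promise<void>): Promise<void> {
    // Start the task only after every previously enqueued task has settled,
    // whether it succeeded or failed.
    this.tail = this.tail.then(task, task);
    return this.tail;
  }
}

const order: number[] = [];
const q = new ExecutionQueue();
q.enqueue(async () => { order.push(1); });
q.enqueue(async () => { order.push(2); });
q.enqueue(async () => { order.push(3); });
// order ends up [1, 2, 3] regardless of individual task timing
```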
This document discusses three examples of using AJAX: 1) displaying tooltips with asynchronously retrieved server data, 2) autocompleting text fields, and 3) conducting surveys. For each example, it describes the key functions used, such as making asynchronous requests to the server and processing the responses. It concludes that AJAX can be useful for various applications and that one example required adding a database for it to work properly. The activity took approximately 5 hours for a three-person team to complete.
An experience report (REX) on JavaFX 8 as used in SlideshowFX. This presentation covers concepts from JavaFX as well as technologies like OSGi, Vert.x, LeapMotion, Nashorn and friends, making them communicate inside one application developed in JavaFX.
This presentation was given at the ElsassJUG.
Paco de la Cruz gave a presentation on Durable Functions 2.0 and stateful serverless functions. The presentation covered an overview of serverless computing on Azure, a recap of Azure Functions and an introduction to Durable Functions. It discussed new features in Durable Functions 2.0 including Durable Entities, additional function types and patterns. The presentation also provided demonstrations of common Durable Functions patterns and a stateful serverless request bin application. It concluded with a Q&A section.
This document summarizes Paco de la Cruz's presentation on Azure Durable Functions. The presentation covered the evolution of application platforms from on-premises to serverless. It then discussed Azure Functions and some challenges it faces with stateful orchestrations. Durable Functions were introduced as an extension of Azure Functions that uses a Durable Task Framework to implement stateful workflows in a serverless manner. Key patterns demonstrated include function chaining, fan-out/fan-in, and using an orchestration client to start and monitor orchestrations. Code samples and demos were provided to illustrate approval workflows using Durable Functions.
Manufacturers have hit limits for single-core processors due to physical constraints, so parallel processing using multiple smaller cores is now common. The .NET framework includes classes like Task Parallel Library (TPL) and Parallel LINQ (PLINQ) that make it easy to take advantage of multi-core systems while abstracting thread management. TPL allows executing code asynchronously using tasks, which can run in parallel and provide callbacks to handle completion and errors. PLINQ allows parallelizing LINQ queries.
Reactive Programming Patterns with RxSwiftFlorent Pillet
In this introduction to reactive programming and RxSwift you'll learn how common problems are solved in a reactive way to improve your architecture and write more reliable code.
This document discusses socket programming in Java. It begins by explaining the key classes for socket programming - InetAddress, Socket, ServerSocket, DatagramSocket, DatagramPacket, and MulticastSocket. It then provides examples of TCP client-server applications using Sockets and ServerSockets, UDP client-server applications using DatagramSockets and DatagramPackets, and multicast applications using MulticastSockets. The examples demonstrate how to send and receive data over sockets in both text and binary formats.
Introduction to the New Asynchronous API in the .NET DriverMongoDB
The document provides an introduction to the new asynchronous (async) API in the .NET driver for MongoDB. It begins with an overview of async programming in C# and the benefits of using an async approach. It then demonstrates how to use the new async API in the .NET driver through sample data import and web applications. The applications show how to asynchronously drop, load, and index data. The document also includes code examples for building asynchronous queries and aggregations.
Your website is done. Your webpages access data from a database or a web service and have a 1 to 2 second response time. After deploying the application your user interface is unresponsive and your server doesn’t scale.
In this presentation we will find out what’s happening to our website under scale and how we can use the new async/await support in .NET 4.5 to make our application more responsive under load.
El documento presenta una agenda para una presentación sobre programación paralela con .NET Framework 4.5. La agenda incluye introducir conceptos clave como paralelismo, multithreading y problemas de escalabilidad, y demostrar las nuevas herramientas de .NET 4.5 como Parallel, PLINQ, tasks, concurrencia y async/await que permiten aprovechar la computación paralela de forma más sencilla. El objetivo es mostrar cómo estas herramientas hacen la programación paralela más accesible para desarrolladores sin necesidad de ser expertos.
Este documento presenta una introducción a la computación paralela con .NET 4.0. Cubre conceptos como multithreading vs paralelismo, y las nuevas características de paralelismo en .NET 4.0 como PLINQ, Parallel y Task. Incluye varias demostraciones de cómo usar estas características para mejorar el rendimiento de aplicaciones mediante el aprovechamiento de procesadores multi-núcleo.
Este documento resume la evolución de Visual Basic desde su primera versión en 1991 hasta Visual Basic 2010. Se describe la transición de VB a .NET, resaltando características clave como LINQ, parámetros nombrados y continuación implícita de línea en versiones recientes. Expertos en VB discutirán estas novedades y realizarán demostraciones.
1. Lluis Franco & Alex Casquete
.NET Conference 2015
2.
3. Evolution of the async model
Async void is only for top-level event handlers.
Use the threadpool for CPU-bound code, but not IO-bound.
Libraries shouldn't lie, and should be chunky.
Micro-optimizations: Consider ConfigureAwait(false)
4. It seems you’re calling an async method without awaiting… Can I help you with something?
Yep! Return a Task object, please.
Nope. Maybe later.
7. // Legacy APM pattern: BeginInvoke/EndInvoke on a delegate.
private int myExpensiveMethod()
{
...
return 42;
}
private void Button1_Click(object sender, EventArgs e)
{
var function = new Func<int>(myExpensiveMethod);
// Starts the work on a ThreadPool thread; whenFinished is the completion callback.
IAsyncResult result = function.BeginInvoke(whenFinished, function);
}
private void whenFinished(IAsyncResult ar)
{
// Note: this callback runs on a ThreadPool thread, not the UI thread.
var function = ar.AsyncState as Func<int>;
int result = function.EndInvoke(ar);
resultTextBox.Text = string.Format("The answer is... {0}!", result);
}
28. // Q. It sometimes shows PixelWidth and PixelHeight are both 0 ???
BitmapImage m_bmp;
protected override async void OnNavigatedTo(NavigationEventArgs e) {
base.OnNavigatedTo(e);
await PlayIntroSoundAsync();
image1.Source = m_bmp;
Canvas.SetLeft(image1, Window.Current.Bounds.Width - m_bmp.PixelWidth);
}
protected override async void LoadState(Object nav, Dictionary<String, Object> pageState) {
m_bmp = new BitmapImage();
var file = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///pic.png"));
using (var stream = await file.OpenReadAsync()) {
await m_bmp.SetSourceAsync(stream);
}
}
class LayoutAwarePage : Page
{
private string _pageKey;
protected override void OnNavigatedTo(NavigationEventArgs e)
{
if (this._pageKey != null) return;
this._pageKey = "Page-" + this.Frame.BackStackDepth;
...
this.LoadState(e.Parameter, null);
}
}
29. // A. Use a task
Task<BitmapImage> m_bmpTask;
protected override async void OnNavigatedTo(NavigationEventArgs e) {
base.OnNavigatedTo(e);
await PlayIntroSoundAsync();
var bmp = await m_bmpTask; image1.Source = bmp;
Canvas.SetLeft(image1, Window.Current.Bounds.Width - bmp.PixelWidth);
}
protected override void LoadState(Object nav, Dictionary<String, Object> pageState) {
m_bmpTask = LoadBitmapAsync();
}
private async Task<BitmapImage> LoadBitmapAsync() {
var bmp = new BitmapImage();
...
return bmp;
}
30. ' In VB, the expression itself determines void- or Task-returning (not the context).
Dim void_returning = Async Sub()
Await LoadAsync() : m_Result = "done"
End Sub
Dim task_returning = Async Function()
Await LoadAsync() : m_Result = "done"
End Function
' If both overloads are offered, you must give it Task-returning.
Await Task.Run(Async Function() ... End Function)
// In C#, the context determines whether async lambda is void- or Task-returning.
Action a1 = async () => { await LoadAsync(); m_Result="done"; };
Func<Task> a2 = async () => { await LoadAsync(); m_Result="done"; };
// Q. Which one will it pick?
await Task.Run( async () => { await LoadAsync(); m_Result="done"; });
// A. If both overloads are offered, it will pick Task-returning. Good!
class Task
{
static public Task Run(Action a) {...}
static public Task Run(Func<Task> a) {...}
...
}
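To see why the Action/Func&lt;Task&gt; distinction matters, consider a hypothetical helper that only accepts an Action (not from the slides): the async lambda silently becomes async void, and the caller cannot wait for it.

```csharp
using System;
using System.Threading.Tasks;

static class LambdaTrap
{
    public static string Result = "not set";

    // Only an Action overload: an async lambda here compiles to async void.
    public static void RunActionOnly(Action a) => a();

    public static async Task<(string before, string after)> DemoAsync()
    {
        // Fire-and-forget: RunActionOnly returns at the first await.
        RunActionOnly(async () => { await Task.Delay(50); Result = "done"; });
        string before = Result;                     // still "not set"

        // Task.Run prefers the Func<Task> overload, so this await really waits.
        await Task.Run(async () => { await Task.Delay(50); Result = "done"; });
        string after = Result;                      // "done"
        return (before, after);
    }
}
```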
31.
32. // table1.DataSource = LoadHousesSequentially(1,5);
// table1.DataBind();
public List<House> LoadHousesSequentially(int first, int last)
{
var loadedHouses = new List<House>();
for (int i = first; i <= last; i++) {
House house = House.Deserialize(i);
loadedHouses.Add(house);
}
return loadedHouses;
}
(timeline diagram: work1 → work2 → work3 → work4 → work5 run one after another)
33. // table1.DataSource = LoadHousesInParallel(1,5);
// table1.DataBind();
public List<House> LoadHousesInParallel(int first, int last)
{
var loadedHouses = new BlockingCollection<House>();
Parallel.For(first, last+1, i => {
House house = House.Deserialize(i);
loadedHouses.Add(house);
});
return loadedHouses.ToList();
}
(timeline diagram: with Parallel.For, work1–work5 run concurrently across threads; response out at ~300ms)
Parallelization hurts scalability!
38. // table1.DataSource = await LoadHousesAsync(1,5);
// table1.DataBind();
public async Task<List<House>> LoadHousesAsync(int first, int last)
{
var tasks = new List<Task<House>>();
for (int i = first; i <= last; i++)
{
Task<House> t = House.LoadFromDatabaseAsync(i);
tasks.Add(t);
}
House[] loadedHouses = await Task.WhenAll(tasks);
return loadedHouses.ToList();
}
When… methods minimize awaits + exceptions
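The “When… methods minimize awaits + exceptions” note can be made concrete with a sketch (the failing tasks here are hypothetical, not the slide’s code): a single await observes all the tasks, and the combined task records every fault, not just the first.

```csharp
using System;
using System.Threading.Tasks;

static class WhenAllFaults
{
    static async Task<int> FailAsync(int id)
    {
        await Task.Delay(10 * id);
        throw new InvalidOperationException($"task {id} failed");
    }

    public static async Task<int> CountFaultsAsync()
    {
        Task<int[]> all = Task.WhenAll(FailAsync(1), FailAsync(2), FailAsync(3));
        try
        {
            await all;                     // one await for all three tasks
        }
        catch (InvalidOperationException)
        {
            // 'await' rethrows only the first fault, but the combined
            // task still carries all of them.
            return all.Exception.InnerExceptions.Count;
        }
        return 0;
    }
}
```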
39. public async void btnPayout_Click(object sender, RoutedEventArgs e)
{
double initialPrice, strikePrice, drift, volatility; // values read from the UI
double[] prices = new double[252]; double total_payout = 0;
for (int i = 0; i < 1000000; i++) {
Quant.SimulateStockPrice(prices, initialPrice, drift, volatility);
total_payout += Quant.Payout_AsianCallOption(prices, strikePrice);
}
txtExpectedPayout.Text = (total_payout / 1000000).ToString();
}
//Box-Muller technique, generates "standard normal" distribution (mean=0, variance=1)
let private NextNormal () =
    let u1 = RND.NextDouble()
    let u2 = RND.NextDouble()
    sqrt(-2.0 * log u1) * sin(2.0 * System.Math.PI * u2)
//Geometric Brownian Motion, a common technique to model stock price
let SimulateStockPrice (prices:double[], initialPrice, drift, volatility) =
    let dt = 1.0 / float prices.Length
    let rec sim i value =
        prices.[i] <- value
        let nextval = value * (1.0 + drift*dt + volatility*NextNormal()*sqrt dt)
        if i+1 < prices.Length then sim (i+1) (if nextval < 0.0 then 0.0 else nextval)
    sim 0 initialPrice
//An Asian Call Option gives payout if strike price is lower than the average stock price
let Payout_AsianCallOption (prices, strikePrice) =
    let av = Array.average prices
    max (av - strikePrice) 0.0
40. public async void btnPayout_Click(object sender, RoutedEventArgs e)
{
double initialPrice, strikePrice, drift, volatility; // values read from the UI
var expectedPayout = await Task.Run(() => {
double[] prices = new double[252]; double total_payout = 0;
for (int i = 0; i < 1000000; i++) {
Quant.SimulateStockPrice(prices, initialPrice, drift, volatility);
total_payout += Quant.Payout_AsianCallOption(prices, strikePrice);
}
return total_payout / 1000000;
});
txtExpectedPayout.Text = expectedPayout.ToString();
}
41. public async void btnPayout_Click(object sender, RoutedEventArgs e)
{
double initialPrice, strikePrice, drift, volatility; // values read from the UI
IProgress<int> progress = new Progress<int>(i => progressBar1.Value = i);
var expectedPayout = await Task.Run(() => {
double[] prices = new double[252]; double total_payout = 0;
for (int i = 0; i < 1000000; i++) {
Quant.SimulateStockPrice(prices, initialPrice, drift, volatility);
total_payout += Quant.Payout_AsianCallOption(prices, strikePrice);
if(i % 1000 == 0) progress.Report(i);
}
return total_payout / 1000000;
});
txtExpectedPayout.Text = expectedPayout.ToString();
}
42.
43.
44. Foo();               // synchronous: perform; returns when it’s done
var task = FooAsync();   // asynchronous: initiate; returns immediately
...
await task;
45. public static void PausePrint2() {
Task t = PausePrintAsync();
t.Wait();
}
// “I’m not allowed an async signature,
// but my underlying library is async”
public static Task PausePrint2Async() {
return Task.Run(() =>
PausePrint());
}
// “I want to offer an async signature,
// but my underlying library is synchronous”
public static Task PausePrintAsync() {
    var tcs = new TaskCompletionSource<bool>();
    new Timer(_ => {
        Console.WriteLine("Hello");
        tcs.SetResult(true);
    }).Change(10000, Timeout.Infinite);
    return tcs.Task;
}
public static async Task PausePrintAsync() {
await Task.Delay(10000);
Console.WriteLine("Hello");
}
Synchronous version:
public static void PausePrint() {
    var end = DateTime.Now + TimeSpan.FromSeconds(10);
    while (DateTime.Now < end) { }
    Console.WriteLine("Hello");
}
“Should I expose async wrappers for synchronous methods?” – generally no!
http://blogs.msdn.com/b/pfxteam/archive/2012/03/24/10287244.aspx
“How can I expose sync wrappers for async methods?” – if you absolutely have to, you can use a nested message-loop…
http://blogs.msdn.com/b/pfxteam/archive/2012/04/13/10293638.aspx
46.
47. The threadpool is an app-global resource.
In a server app, spinning up threads hurts scalability.
The app is in the best position to manage its threads.
Synchronous code blocks the current thread; asynchronous code completes without spawning new threads.
54. // var x = await GetNextIntAsync();
// ...expands roughly to:
var $awaiter = GetNextIntAsync().GetAwaiter();
if (!$awaiter.IsCompleted) {
    // DO THE AWAIT/RETURN AND RESUME
}
var x = $awaiter.GetResult();
57. The heap is an app-global resource.
Like all heap allocations, async allocations can contribute to hurting GC performance.
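A common mitigation for those allocations (an addition here, not on the slide) is to hand out cached, already-completed tasks on hot paths via Task.FromResult; the lookup method below is hypothetical:

```csharp
using System;
using System.Threading.Tasks;

static class CachedTasks
{
    // Allocated once; every caller that hits the hot path shares them.
    static readonly Task<bool> TrueTask = Task.FromResult(true);
    static readonly Task<bool> FalseTask = Task.FromResult(false);

    // Hypothetical check that usually completes synchronously.
    public static Task<bool> StartsWithVowelAsync(string s) =>
        s.Length > 0 && "aeiou".IndexOf(char.ToLowerInvariant(s[0])) >= 0
            ? TrueTask
            : FalseTask;
}

class Program
{
    static async Task Main()
    {
        Console.WriteLine(await CachedTasks.StartsWithVowelAsync("async")); // True
        // No new Task is allocated per call: the same instance comes back.
        Console.WriteLine(ReferenceEquals(
            CachedTasks.StartsWithVowelAsync("async"),
            CachedTasks.StartsWithVowelAsync("await")));                    // True
    }
}
```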
58.
59. Sync context represents a “target for work”.
“Await task” uses the sync context to resume “where you were before”.
But for library code, it’s rarely needed!
You can use “await task.ConfigureAwait(false)”.
This suppresses the sync-context resumption; if possible, the method instead resumes “on the thread that completed the task”.
Result: slightly better performance. It can also avoid deadlock when a badly-written caller blocks.
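A minimal sketch of the guideline (the file-reading helper is hypothetical): every await inside the library adds ConfigureAwait(false), while application code awaiting the library still resumes on its own context.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public static class TextFileReader
{
    // Library code: nothing after these awaits touches the UI,
    // so skip the sync-context hop with ConfigureAwait(false).
    public static async Task<string> ReadTrimmedAsync(string path)
    {
        using (var reader = new StreamReader(path))
        {
            string text = await reader.ReadToEndAsync().ConfigureAwait(false);
            // Now running "on the thread that completed the task".
            return text.Trim();
        }
    }
}

// Application code keeps awaiting normally and resumes on its own
// synchronization context, e.g. in a UI handler:
//
//   textBox1.Text = await TextFileReader.ReadTrimmedAsync("notes.txt");
```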
63. Lluis Franco & Alex Casquete
Async best practices
If you liked it, don’t forget to fill in the survey!
Thanks
Editor's Notes
* And so on to the first of four sections of this talk.
* Async void is only for event-handlers.
* I'll motivate it with developer stories.
* Actually, all the scenarios in this talk come straight from developers, and most of the code.
[CLICK]
* "I have a Silverlight page that uses RIA services async to load the data for the page."
* "This works fine if the user waits for a few seconds before selecting the print button."
* "But does *not* work if the user prints right away."
* "If the user clicks the Print button before all of the page data is loaded, the printed output does not have all of the data."
* Diagnosis: she was using async void deep inside her code.
* Fix: should return Task from her internal async methods, not void.
* Let me put that more strongly
* For goodness' sake, stop using async void everywhere.
* (At first that was going to be the title of my talk)
* Actually, I won't show her code because it was a bit involved.
* I'll show someone else who made the exact same mistake.
* Their async method returns void rather than Task.
* This code actually comes from Microsoft's own official Win8 SDK samples!
* Goes to show that it's a common mistake that anyone can make.
[CLICK]
* Clicks a button, invokes the handler
[CLICK]
* Invokes SendData, which kicks off request for data and then awaits response
* You know what happens now. At the first await, control returns straight back to the caller.
[CLICK]
* Normally at this point the caller would await until SendData finishes.
* But SendData returned void, not Task, so the caller can’t do that.
* Instead it awaits a Task Delay, so returns back to its own caller, the UI message loop
[CLICK]
* Some time later, the response will come back. Or the delay will finish.
* Don't know which will happen first.
* Maybe it'll assign to m_GetResponse first. Or maybe not.
* That's what the developer said "My code doesn't work 100% reliably".
* Had obviously experimented with Task.Delay until they got the right delay to work on their dev network!
* The problem is all down to this async void SendData.
* It's a void, right. It doesn't return anything to its caller.
* The caller can't do anything with it.
* The caller is UNABLE to know when SendData has finished.
* It's basically fire-and-forget.
* That's the crux of the problem, fire-and-forget.
* Actually, before we go on to fix it, I want to highlight another problem with fire-and-forget async voids.
* Let's comment out the problematic race condition
* and see how exceptions behave from a fire-and-forget method.
* We'll focus on the try/catch, to catch exceptions arising from SendData.
[CLICK]
* Once again, we invoke the handler
[CLICK]
* It calls SendData.
[CLICK]
* As you know, at the first await, it returns to its caller.
* The thing is, at this stage, there's been no exception yet, so nothing gets caught.
* We breeze through the catch block and return to the UI.
[CLICK]
* Now the network request comes back, maybe with a 404 error.
* And SendData throws an exception.
* But where can an exception go out of a fire-and-forget method?
* Can't go back to Button1_Click, because that's already finished.
* Answer is that all exceptions from these fire-and-forget async voids
get posted straight to the UI thread.
* In Win8, terminates app. In Phone, silently swallowed. In WPF, dialog.
* In no cases is that desirable.
* We've seen that async void is a "fire-and-forget" mechanism
* Meaning: the caller is *unable* to know when an async void has finished.
* And the caller is *unable* to catch exceptions from an async void method.
* Guidance is to use async void solely for top-level event handlers.
* Everywhere else in code, like SendData, async methods should return Task.
* There’s one other danger, about async void and lambdas, I’ll come to it in a moment.
* But first let’s fix SendData.
* SendData should return Task, not void.
* Convention: every method that returns Task has a name ending with Async
* The caller sees that name and knows he should await it.
* And we can get rid of that awful Task.Delay rubbish.
* Well, we've said async void is for fire-and-forget
* And the only place that's appropriate is for event-handlers, or event-like things.
* What do I mean by "event-like things"? Sometimes it's hard to know.
* Let's look at this case.
* I was wondering how bad the problem is of people misusing async void.
* I looked through the MSDN forums for "async void" and "problem".
* A lot of hits came back from this function "async void LoadState"
* You might not know about it.
* In Win8 apps. When you get to a page, it fires the NavigatedTo event
* The base class handles the event with an overridable void-returning method OnNavigatedTo.
* So that method's basically like an event-handler, fire-and-forget. It's fine to be async.
[CLICK]
* First thing it does is call its base method.
* If the page had already been shown before, it just returns.
* But just for the first time that a page is shown, it invokes the virtual void method LoadState.
* So LoadState is also basically like an event-handler, fire-and-forget. It's fine to be async void.
* OnNavigatedTo is called every time you navigate to a page. LoadState is called only once per time the page has been constructed.
[CLICK]
* Maybe you can see where this is going...
* Let's trace it out.
* We get to a page. Invoke the OnNavigatedTo virtual method, fire-and-forget.
* Which calls its base.
[CLICK]
* Which kicks off LoadState, fire-and-forget, which we’ve overridden.
* It does an await
[CLICK]
* Which goes back to its caller, who does an await, and returns to the UI message-loop
[CLICK]
* But now, there are two fire-and-forget async voids in flight. Which one will go first?
* Will it be the bottom one who loads the bitmap?
* Or the top one, which uses the bitmap and assumes it's already loaded?
* The forums question you hear is "Why is PixelWidth 0?"
* It's because they're querying a bitmap that hasn't loaded yet.
* Well, the answer here is to use a task.
* It would have been easier to change LoadState to be a Task-returning async method.
* But we can’t do that. We don’t control the signature. We’re just overriding it.
* So instead we’ll have to pass the Task back an alternative way.
* Here, LoadState kicks off an async method that loads a bitmap
* But not fire-and-forget.
* Oh no. Instead, it'll remember that task, saving it in the m_bmpTask field.
* That way, OnNavigatedTo can await for the same task to finish.
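* A sketch of that pattern (the field and the LoadBitmapAsync helper are illustrative names):

```csharp
Task<BitmapImage> m_bmpTask;  // the "alternative way" to pass the Task back

protected override void LoadState(object navParam, Dictionary<string, object> pageState)
{
    // Not fire-and-forget: kick off the async work and REMEMBER its task.
    m_bmpTask = LoadBitmapAsync();  // LoadBitmapAsync: hypothetical async loader
}

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);             // first time through, this invokes LoadState
    var bmp = await m_bmpTask;         // await the SAME task the override started
    Debug.WriteLine(bmp.PixelWidth);   // no longer 0 — the bitmap has loaded
}
```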
* There's just one other surprise place where async voids will bite you
* In C#, when you write an async lambda, it can be either void-returning or Task-returning.
* The syntax of the lambda doesn't tell you which.
* Instead, it's the context that tells you.
* Here I’ve assigned the same async lambda to both a void-returning Action delegate and a Task-returning Func<Task> delegate. No compiler errors. Both work fine.
* Look at this call to Task.Run. It passes an async lambda.
* Will that be void-returning or Task-returning?
[CLICK]
* Well, if both overloads are offered, it'll pick Task-returning. Good!
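* In code, the C# situation looks like this (Task.Delay stands in for real work):

```csharp
// The same async lambda converts to either delegate type; the context decides.
Action v = async () => { await Task.Delay(100); };      // becomes async VOID: fire-and-forget
Func<Task> t = async () => { await Task.Delay(100); };  // becomes async Task: awaitable

// Task.Run offers both overloads, so overload resolution picks the Func<Task> one,
// and the outer task genuinely waits for the inner await to finish:
await Task.Run(async () => { await Task.Delay(100); });
```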
[CLICK]
* In VB, the situation's different
* Here it's not the context that decides if it's void-returning Sub or Task-returning Function.
* Instead the expression itself says which it is.
* But the conclusion's the same. The method you call should generally offer both overloads.
* Let's see async lambda problems in practice
* Here I'm writing a Win8 app which invokes Dispatcher.RunAsync
* I'm passing it an async lambda.
* Whenever I see an async lambda being passed to a function, I always check that function.
* Look at the bottom of the slide.
* In this case, it takes something called a DispatchedHandler,
* which is void-returning.
* So it's passing a void-returning async.
[CLICK]
* We can imagine what will happen.
* The dispatcher will kick off the lambda, fire-and-forget.
* The lambda will get to the first await.
[CLICK]
* It'll return to its caller. And the dispatcher will think it's done.
[CLICK]
* So our await on the dispatcher will finish, and we'll plow through the rest of our method.
[CLICK]
* Meanwhile, m_Result doesn't get set until too late,
* and the exception from our async lambda was never caught.
* That's because our async lambda was void-returning, fire-and-forget.
[CLICK]
* You understand the problem.
* We’ll touch on a solution in next section. It’s subtle.
* But first, let's sum up.
* "For goodness' sake, stop using async void"
* That's because async void means fire-and-forget.
* And fire-and-forget is only appropriate for event-handlers.
* Now section two of four.
* I want to talk about the threadpool, about IO- and CPU-bound workloads.
* Let's hear what the developer had to say. He said:
[CLICK]
* "I'm now looking at the biggest user complaint about a slow running operation in an ASP.NET WebForms page."
* "Essentially, the page loads some data and I'm wondering if it'd be the best approach to use the Task Parallel Library."
* "The method itself deserializes an object and depending on user choices can call the method in a foreach 26+ times, the result of which I bind to a gridpanel."
* "The deserialization itself is where 99% of the time is being spent."
* Well, that's brilliant! He used profiling first. He identified the problem area.
* The punchline is that his code turned out not to be CPU-bound, and so he should have been using await.
* But let’s look at what the developer started with.
* It's a zillow-like housing app. He's deserializing a load of houses,
and databinding them to his webform.
* And we’ll start by taking him at his word that the deserialization work is CPU-bound.
[CLICK]
* If we draw a flow chart of it, the request comes in, then it does
one house after another, and then finishes.
* If each house takes 100ms to deserialize, and he does five houses, then
it'll be 500ms before the user sees anything in his web-browser.
* This is what the developer tried using the TaskParallelLibrary.
* He used Parallel.For, to deserialize all the houses in parallel.
* This lambda is the work for each house that has to be done.
[CLICK]
* Let's draw a flow-chart for it.
* First a request comes in.
* Then he does Parallel.For, which means that five lambdas will have to be executed eventually.
* Then the threadpool does those five pieces of work.
* The threadpool will run them on as many threads as will be fastest.
* My laptop has two CPU cores -- a boy and a girl, you can see, so two cores will be fastest.
* If each request takes 100ms, then we'll get the answer out in 300ms.
That's an improvement!
* Actually, it might not be. If we're running on a server that has other workload as well,
then one of the cores will probably be taken, so we'll only have one.
* Oh. Just hold on there a moment.
* What the heck kind of deserialization takes so long? 100ms per house? That's an eternity.
* Well, I checked with the developer.
* Turns out his deserialization wasn't really what I'd call deserialization.
* It was looking up tables in a database.
* That's why it took so long. It was network-bound, not CPU-bound.
* So this is what his first sequential code was actually doing.
* It was downloading data for each house, one after the other.
* But it only took a minuscule amount of time to kick off each request,
then it was idle for about 100ms,
then it got back the response from the network.
* But let's look back at how his Parallel.For code was behaving.
* Well, as we said, it had five workitems in the threadpool.
* Let's say the threadpool started with two threads, because of my two cores.
[CLICK]
* Gradually it'll realize that its threads aren't really being used,
and it'll add an extra thread to do some more work.
[CLICK]
* Maybe an extra thread as well.
* The threadpool will gradually find the optimum number of threads to run
a given workload, but it's fairly slow to respond.
[CLICK]
* In this case maybe it only ended up growing by two extra threads.
* Well, this result came in about 200ms.
* In general, threadpool growth isn't the right way to get responsive code.
* That's because it does take time to get there.
* Sometimes you'll see it adding just one new thread a second.
* Let's draw a flow diagram about how this code should ideally work.
* We should kick off all five requests in one go.
* We might as well issue the requests in sequence, since it's so quick to issue a request.
* Later on, about 100ms later, the responses trickle in.
* They might come out of order. That doesn't matter. We'll get them all.
* And we should have them all done within about 100ms.
* That's the fastest "Time To First Byte" of all our solutions.
* Back to the developer's scenario.
* This is the code that the developer should have used
* He can kick off tasks for all the database loads.
* And then await Task.WhenAll, until they're all finished.
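* Sketched out, assuming a hypothetical DeserializeHouseAsync that does the database lookups:

```csharp
async Task<House[]> LoadHousesAsync(IEnumerable<int> houseIds)
{
    // Kick off ALL the requests up front — each call returns a Task almost instantly...
    List<Task<House>> tasks = houseIds.Select(id => DeserializeHouseAsync(id)).ToList();

    // ...then wait for them all. Total time ≈ one network round-trip, not five.
    return await Task.WhenAll(tasks);
}
```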
* What would we do if there weren’t 5 houses but 500?
* Can’t make 500 requests all at once. We need to throttle the rate of requests.
* Here’s what I think is the easiest idiom.
* A queue of work-items.
* Async method WorkerAsync runs through the queue and fires requests, one after the other.
* And if I kick off three of these workers, then I’ll have throttled it to 3 at a time!
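* A sketch of that idiom (the queue contents and the ProcessAsync helper are illustrative):

```csharp
var queue = new ConcurrentQueue<Uri>(fiveHundredUris);  // fiveHundredUris: hypothetical

async Task WorkerAsync()
{
    // Each worker drains the queue, one request at a time.
    while (queue.TryDequeue(out Uri uri))
        await ProcessAsync(uri);  // ProcessAsync: the per-item work, hypothetical
}

// Three workers => at most three requests in flight at any moment.
await Task.WhenAll(WorkerAsync(), WorkerAsync(), WorkerAsync());
```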
Calling F# (CPU-bound demo)
IProgress interface
* So let's review.
* It's vital to distinguish between what is CPU-bound work and what is IO-bound work.
* CPU-bound means things like LINQ-to-objects, or iterations, or computationally-intensive inner loops.
* Parallel.ForEach and Task.Run are good ways to put these CPU-bound workloads on the threadpool.
* But it's important to understand that adding threads doesn't increase scalability
* Scalability is about not wasting resources
* One of those resources is threads
* Let's say your server can handle 1000 threads.
* If you had 1 thread per request, then you could handle 1000 requests at a time.
* But if you created 2 threads per request, then only 500 requests at a time.
* The first library tip is that library method signatures shouldn't "lie".
* (that's not why they're called lie-braries).
* If a method looks async, if it smells async, then it should be async.
* Let's see what that means in practice.
Here I’ve written two different methods that someone might call.
The first, in yellow, is synchronous.
The second, in blue, is asynchronous.
* So what are the assumptions people will make about how to call these two?
* Imagine someone comes up to your API.
* They're going to read the documentation.
* Hah! Who am I kidding? They might read the XML doc-comments if we're lucky.
For the first one, they’ll see its name, and expect it to be synchronous.
Everyone knows what that means.
They think it will perform something right away
and will only return once it’s finished its work
It’ll probably be using CPU all the time it’s running.
[CLICK]
For the second one, they’ll see its name ends in Async.
They’ll think they can call the method to initiate something, but they’ll get back control immediately
Maybe they’re writing a server app. They’ll expect that the method isn’t going to spawn new threads or use up CPU in the background. They can trust it to be a good citizen on their server.
* They also know that they can parallelize it.
* Maybe it's a download API. They can kick off 10 downloads simultaneously, just by invoking it 10 times and then awaiting Task.WhenAll.
And it's not going to hurt their scalability to do so.
* The thing is, your callers will look at the signature of your method, and they'll make assumptions, right or wrong, about how it's implemented underneath.
It'll be your job to stay in line with those expectations.
This distinction between sync and async is important, because it will affect how you architect your async APIs.
* Let's spell it out with some concrete examples
* Just some code to pause 10 seconds, then print "Hello".
* I know, it's not much of a library, but it's a start!
[CLICK]
* This is just an example… I'm not suggesting you do this at home!
* It’s an example of an API that's synchronous in both senses...
* Its signature looks synchronous, and its implementation really does block the calling thread. Actually it’s even worse than that, it burns CPU cycles to do so.
[CLICK]
* And here's an example of an API that's asynchronous in both senses.
* It uses TaskCompletionSource to generate a Task
* It schedules a timer to wake it up in 10 seconds time
* And when the timer wakes up, it prints to the screen and marks the Task as completed.
Hardly any CPU used at all. It's all just scheduling.
(Some people ask, “What about Timer itself? Are you saying that Timer’s own internal implementation is Async as well? Is it just turtles all the way down?
Well, yes. You know in the Task Manager where it shows System Idle taking up 95% of your CPU? It’s not really burning CPU. It’s probably switched the CPU to a low-power state and is waiting for the next hardware interrupt.”)
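* Sketched, the two versions side by side (method bodies are illustrative reconstructions):

```csharp
// "True async" via TaskCompletionSource + a timer — no thread is blocked while waiting.
Task PausePrintAsync()
{
    var tcs = new TaskCompletionSource<object>();
    Timer timer = null;
    timer = new Timer(_ =>
    {
        Console.WriteLine("Hello");
        tcs.TrySetResult(null);   // mark the returned Task as completed
        timer.Dispose();
    }, null, 10000, Timeout.Infinite);
    return tcs.Task;
}

// Equivalently, with await — the compiler writes the same plumbing for us:
async Task PausePrintAsync2()
{
    await Task.Delay(10000);
    Console.WriteLine("Hello");
}
```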
[CLICK]
* Actually, we wouldn't write it that way. We'd write it using an "await".
* But the two implementations here are basically the same.
[CLICK]
* It's what I'd call "true async"
[CLICK]
* Here's another piece of code. Let's study this.
It uses Task.Run to run the synchronous code on the threadpool
So, it’s blocking up a threadpool thread.
I see people doing this quite a lot. Maybe they’ve heard that async is good, and they want to offer up an async signature, but they’re calling an underlying library that is synchronous.
[CLICK]
* There's something fishy about this, isn't there?
The signature looks async, it smells async, but the implementation is burning CPU.
It’s fine for an application to use the threadpool in this way if it wants, but it’s bad for a library to secretly use Task.Run internally. I’ll discuss why.
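* For contrast, here's a sketch of that fishy pattern:

```csharp
// Looks async from the outside, but secretly burns a threadpool thread:
Task PausePrintAsync()
{
    return Task.Run(() => PausePrint());  // PausePrint blocks for the full 10 seconds
}
```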
[CLICK]
* I want to show you one last example.
This routine initiates an async operation.
But then it blocks its thread, with the blocking call to Wait().
I see this often, or equivalently using the blocking property Task.Result.
Often it’s because people are writing code that fits into a larger synchronous framework, but they need to make one small call to an Async method.
Sometimes people do this because they wanted to offer a synchronous API from their library, as well as an async one.
[CLICK]
* But there's something fishy about this too.
* The signature looks synchronous, it smells synchronous, but the underlying implementation is async.
* When we're writing library APIs, we should try to stick to the top-left or bottom-right.
The other two styles, with the arrows, are dangerous. Confusing to users of the API.
[CLICK]
I want to dig deeper into the two fishy patterns, the orange and the red.
Just a tip, if you’re forced into the bottom-left scenario, this link has some workarounds.
So what’s so fishy about libraries that use Task.Run internally?
[VS: 1.LibrariesShouldntLie]
* Simple app, console app, but I'm just spinning up a Winforms dialog here.
* This is the minimal code I need to get a UI message-loop. (don't want rest of plumbing)
class Library
* It's a demo of a library, so we'll have three layers: the app that uses the library, then the library itself, then the underlying framework functionality that the library uses.
* Here's one way to write the library. It wants to offer up an asynchronous API
* And in this version, it's using the synchronous OS API. Maybe that's the only one available.
* So to become async, my API needs to wrap it, with await Task.Run
* It's the top-right quadrant. It looks async, but it's wrapping an implementation that's synchronous.
* Probably to avoid blocking the calling thread.
b.Click += async delegate
* And here's what the app developer wrote, the user of my library.
* They want to be asynchronous, they want to stay responsive.
* But say they don't want just one, but they want to download 100 files.
* They saw that it was an async method, so they trusted they could just kick off all the tasks and then await Task.WhenAll
[RUN]
* Now it has kicked off all 100 of those tasks.
* But because each one wants to use a background thread, it's actually going in bursts.
* I have four logical cores on this laptop, so the threadpool starts by giving me four threads. As many threads as we have cores.
* Then a second later it decides you've made poor use of those threads, most of them were idle, waiting on IO
* So it looks like you need more threads
[RUN]
* See the first batch was 4, then next batch was 5, then 6
* The threadpool has this predefined scaling behavior, hill-climbing
* So I've had to wait until the threadpool catches up to me, until it eventually finds its optimal number.
* But actually my app didn't need any threads.
* As an app author, I didn't even think any threads were involved.
* That's the key. You don't want to go messing with things that aren't yours, global resources.
* And the threadpool is one of those things.
* It belongs to the app developer, not to you the library author.
* They might have their own ideas about how they want to use the threadpool.
var contents = await IO.DownloadFileAsync()
* Now this one's pure async
[RUN]
* And this time all 100 files can download at the same time.
* This is what we'd expect.
* I shouldn't have to block waiting for the threadpool to grow
* I just have the assumption that I'm just kicking off work from the UI thread.
* You don't want to be a library author who violates that assumption
[SLIDES]
We have to think of the threadpool as an app-global resource.
Remember that hill-climbing that we saw. It’s done across all code across all libraries in the app.
* In a server app, spinning up a bunch of threads hurts scalability.
* I don't want to create new threads, because my caller might be relying on those other threads to be request-threads, to handle new incoming requests.
* And imagine if my library uses Task.Run deep inside - then it'll be a pain for users to diagnose.
* It wasn't a mistake they made. It was a mistake for them to trust my API.
* The app is in the best position to manage its threads.
Let the user use their own domain-knowledge about what they're building to decide how they want to manage threads.
* If your library's using Task.Run, you're putting in roadblocks that prevent the app from using its threads effectively
* If the caller wants to go make some synchronous work happen on a background thread, let them do that themselves. It’s fine for your caller to use Task.Run. Just you shouldn’t do it in your library.
* You should expose something that looks like what it is.
If you only have an implementation that's synchronous, then expose as an API that's synchronous.
Only provide async methods when you can implement them asynchronously.
That will help your callers make the call on how to call you.
That showed you the dangers of the top-right quadrant.
I want to show you the even worse dangers of the bottom-left quadrant:
blocking code, that uses task.Wait() or similar.
At the start of this “essential tips on Async” series, I explained how the message-loop works in a UI app with this diagram.
[CLICK]
The user clicks a button in the UI, and it invokes the message-handler, which calls a LoadAsync method
[CLICK]
That creates a “Task” and returns to its caller, where the task is assigned to variable t
But now we did a terrible thing. We blocked, waiting until that task had completed.
[CLICK]
If you remember how Async works under the hood, once the DownloadAsync task has finished, it moves into the message-queue so that the message-pump can handle it.
But our message-pump is now blocked, stuck on the task.Wait() call!
This is a deadlock.
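* A minimal sketch of that deadlock (the handler and field names are illustrative):

```csharp
void Button1_Click(object sender, EventArgs e)
{
    Task<string> t = LoadAsync();  // runs up to its first await, then returns the Task
    t.Wait();                      // UI thread blocks here, so the message-pump stops
    m_Result = t.Result;
}

async Task<string> LoadAsync()
{
    string s = await DownloadAsync();  // when this completes, the continuation is
                                       // POSTed to the UI message-queue...
    return s;                          // ...whose pump is stuck inside Wait(). Deadlock.
}
```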
* Principle is that the threadpool is a global resource.
* You as a library developer have to play nice, help the app author use their domain-specific knowledge as to how to create threads using Task.Run if they want.
* Don't do it yourself.
* Only show an async signature when your method is truly async.
* Also, don't block in your library calls.
* If you block on the UI thread, disaster.
* If you block on a threadpool thread, you're hurting the threadpool because of that hill-climbing, hurting everyone else in the app who uses the threadpool.
* The next thing to address for async libraries is some aspects of async perf.
* Can I use await in an inner loop?
* If I have a method "ReadByte" in my API, would I also want "ReadByteAsync()"? if I'm getting a million bytes one-at-a-time?
* It's a question of how chatty vs chunky to design my API. Like peanut butter.
* Most importantly, we'll understand how to optimize around some common special cases.
* Just to step back, we're all used to synchronous methods.
* We know they're cheap. That's why we're happy to factor out our code at will, take out these lines of code and put them into a separate method
[CLICK]
* And if we look at the IL, we can see how simple it is.
* I know, I know, IL isn't the best way to judge the cost of something. It’s just a starting point for comparison.
* For async methods, though, the compiler puts in plumbing, and it's not so simple
* Here's the same method as the last slide, but it uses the async modifier
[CLICK]
* First, the compiler generates this code for the method
[CLICK]
* And actually this is only part of it.
* Part of what the compiler generates is a structure with a MoveNext method, and here’s a call to this MoveNext method
[CLICK]
* And let's look at the MoveNext it generates.
* It's the plumbing that lets an async method pause and resume.
[CLICK]
* If I highlight it, you can still see the core bit that corresponds to what I wrote
* Let's set this in context. The code overhead isn't much. Equivalent to running an empty for loop about 200 times.
* If you're doing it a few hundred times a second, doesn't matter.
* It's just if you'll be using it in a tight inner loop, then you need to think.
* And you know what? This IL might look scary, but it's actually more efficient than anything you'd write if you tried to do async callbacks by hand.
* We've been able to optimize everything around await, use internal methods on Task, use detailed understanding of JIT.
* It's not that the await keyword is particularly slow.
* It's just there's a slight inherent overhead to async APIs.
* Usually that overhead will be negligible.
* It's just in a tight loop that it adds up.
[CLICK]
* Actually, the real mental model I want you have in your mind is that it's ALLOCATION that's expensive.
* Technically, allocation is cheap, it's the garbage-collection afterwards that's expensive. Like getting drunk and then getting a hangover.
* If you want to play nice with memory, you want to avoid allocating memory as much as possible.
* I think of the heap as another app-global shared resource, and it's our job as library authors not to trample on it unfairly.
* For async methods, there are three particular allocations that show up.
* It allocates a "state machine" class which holds all the method's local variables, and remembers which await it's got up to (so it can resume after it).
* It allocates a delegate. Delegates in .NET are heap objects. That delegate is signed up as the continuation to execute when the awaited Task completes.
* And your async method returns a Task object. Which is a heap object. So each async method allocates that.
[CLICK]
* But there are some really powerful optimizations here.
* The core point is that async methods start executing immediately.
* Like in this case, my method gets the next integer, but it downloaded the integers in chunks, in buffers
* So 99.9% of the time it can return immediately. It doesn't need any awaits.
* It's only when a chunk runs out that it needs the next one.
* It's only when we get to the first await point that we return to our caller, and incur all those allocation costs.
So if it happens that you never even get to an await point, never need to return to the caller until the end, it avoids the first two allocations entirely!
Now there’s another important under-the-hood optimization which makes this optimization much more powerful…
We call it the "fast path"
[CLICK]
* Here's an await operator, and here's the codegen that the compiler makes for it.
* When you await something, it first checks "is that thing already completed"
* You might wonder, what kinds of tasks will be completed already before I even await them?
* Well I just showed you one on the previous slide!
* 99% of the time, GetNextIntAsync has already completed, so the await operator can fly over it really fast.
* And because we flew over it, we didn't even return to the caller.
* That's important. You might have heard the message "At the first await in an async method, it returns to its caller."
* But that's not precisely true.
* Really, "At the first await WHICH HAS NOT YET COMPLETED, it returns to its caller".
* And this fast path has a great synergy with the previous slide.
* Because if all our awaits take the fast path, then we're basically skipping over them, and we avoid those two heap allocations for the async method.
You might wonder, “what kind of Task would already be complete at the time I await for it?”
Well, here’s a great example – 1023 times out of 1024, anyone who invokes GetNextIntAsync, they’ll get back a Task object that has already completed.
So they’ll benefit from all the built-in fast-path optimizations themselves. It’s a virtuous circle.
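* Roughly, the codegen for "var x = await FooAsync();" behaves like this sketch:

```csharp
var awaiter = FooAsync().GetAwaiter();
if (!awaiter.IsCompleted)
{
    // SLOW PATH: sign up a continuation, return to the caller, and resume here
    // later — this is where the state-machine and delegate allocations happen.
    SuspendAndResumeLater(awaiter);  // stands in for the generated plumbing
}
int x = awaiter.GetResult();  // FAST PATH: already completed — just read the result
```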
There’s one final memory allocation in an Async method: that is allocation of the returned Task object.
[CLICK]
* But if the method managed to take the fast path on all its awaits,
* and if the returned value in your Task<T> was one of the "common" ones like 0, 1, true, false, null, then it avoids even allocating the task.
* Also if your async method just returned non-generic Task.
* That's because the framework keeps just a singleton copy of about ten common Task objects
[CLICK]
* That'll be good if you want to return Task<bool>, or an empty string in Task<string>
* If you're returning some other value, like arbitrary integers, it doesn't make sense for the framework to have a singleton Task<int> for every single possible integer.
* Or if you're returning a string, the framework can't have a singleton Task<string> for every possible string.
* So in those cases you can cache a Task object yourself.
So the first optimizations we saw around the fast-path and common values, they all happen automatically.
* But this final optimization, caching the returned Task if it's not one of the common ones, that requires some work on your part.
[VS: 3.CacheTasks]
* Here I'm going to show you a typical pattern you can use to cache the returned Task, to avoid having to allocate a new Task object every single time.
byte[] data = new byte[0x10000000];
* I'm going to allocate a quarter of a gig, and measure how many allocations it needs to copy it.
input.CopyToAsync(Stream.Null).Wait();
* For the copying, I'll be using the .NET framework method Stream.CopyToAsync
int newGen0 = GC.CollectionCount(0);
* And I'll be measuring how many times the GC had to run
class MemStream1 : MemoryStream
* What I'm testing is two different implementations of MemoryStream
return Read(buffer, offset, count);
* My test used Stream.CopyToAsync, so I know it's going to call into ReadAsync
* This first implementation is just a simple async method
* no awaits, so it always takes the fast path
* But it returns the number of bytes read.
* This isn't one of the common values, so it's not a singleton.
* Instead it's going to allocate a new Task object every time this is called.
* It happens that Stream.CopyToAsync is using buffers of size 80k each time, so every Task<int> that it allocates will be a Task<int> with value 81920
* But it's still allocating a new copy of that every single time.
private Task<int> m_cachedTask;
* Let's look at this second memory-stream implementation.
* This one keeps a cache of the last Task<int> it returned.
if (m_cachedTask != null && m_cachedTask.Result == numRead)
* And if the Task it's cached has the right value, well, it might as well return that.
* A single Task object can be used as many times as you like after it's been completed.
* After the Task has completed, it's immutable.
m_cachedTask = Task.FromResult(numRead);
* But if the cache wasn't there, or had the wrong value,
* then we'll generate an already-completed task with the right value.
* That's what Task.FromResult does.
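* Assembled into one place, the cached-task override looks roughly like this (a sketch of the MemStream2 implementation described above):

```csharp
class MemStream2 : MemoryStream
{
    private Task<int> m_cachedTask;  // single-element cache of the last returned Task

    public override Task<int> ReadAsync(byte[] buffer, int offset, int count,
                                        CancellationToken cancellationToken)
    {
        int numRead = Read(buffer, offset, count);  // synchronous; no awaits needed
        if (m_cachedTask != null && m_cachedTask.Result == numRead)
            return m_cachedTask;                    // reuse the completed Task<int>
        m_cachedTask = Task.FromResult(numRead);    // otherwise allocate once and cache
        return m_cachedTask;
    }
}
```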
[RUN, CTRL+F5]
* And there we see that we've saved an appreciable number of allocations.
* I want to stress, it doesn't cache the last INTEGER it returned.
* That'd miss the point. Our goal is to reduce the number of Task objects we allocate.
* So we have to cache the Task<int>, not just the int.
* One thing to ask, how big should our cache be?
* Here I've just used a single-element cache. It only stores the previous one.
* And what you'll find is that, generally, just a single-element cache works great!
TrackGcs(new MemoryStream(data));
* I just wanted to show you some more perf numbers.
* Here I'm going to use the standard built-in MemoryStream
[RUN IT, CTRL+F5]
* What we see is that MemoryStream actually has some further internal optimizations to eliminate all GCs in this test.
* You can go a long way. It's a question of how much time you want to spend as a library author, and how frequently your library APIs will be used, for how worthwhile it is.
* In general, you as a library-author should use domain-knowledge about the nature of your API, to decide whether and how it makes sense to cache tasks.
* We used it in the .NET framework to dramatically improve the performance of BufferedStream and MemoryStream.
* We've talked about perf considerations. They largely relate to the heap and GC, and avoiding unnecessary allocations.
* Async/await keywords are as fast as they can be, and the inherent overheads are only noticeable in a tight inner loop. We're talking millions of iterations, not just a few hundred or thousand.
* If you can't help it, and the shape of your API means it has to be called frequently, there are some great built-in perf features that happen automatically.
* First, there’s the "Fast Path". If an await has already completed, then it just plows right through it.
And if you get to the end of the method without any "slow-path" awaits, then you avoid a bunch of memory allocations.
* Guidance is, try to avoid chatty APIs. Make APIs where the consumer of your library doesn't have to await in an inner loop.
* You can GetNextKilobyteAsync() instead of GetNextBitAsync().
* If you have to have a chatty API, we saw how to cache the returned Task<T> to remove the one last allocation on the fast path.
* And a cache size of just "1" is often the right choice!
* But remember, don’t prematurely optimize.
* Async is not a bottleneck if you’re only doing it a few hundred or thousand times a second.
* It’s only when you get more that you’ll need to think about Async perf.
* Final tip for Async library developers is to consider task.ConfigureAwait(false)
* I need to get technical. Talk about "SynchronizationContext".
It represents a target for work
It’s been in the framework for a while, but we generally haven’t had to worry about it.
* For example, in Winforms, if you get the current SynchronizationContext and do Post on it, it does a Control.BeginInvoke. That's how Winforms gets onto the UI thread.
* And ASP.NET's current synchronization context, when you do Post() on it, schedules work to be done in its own way.
* There are about 10 in the framework, and you can create more.
And this is the key way that the await keyword knows how to put you back where you were.
[CLICK]
So when you do await, it first captures the current SyncContext before awaiting.
[CLICK]
* When it resumes, it uses SyncContext.Post() to resume "in the same place" as it was before
[CLICK]
* For app code, this is the behavior that you almost always want.
* When you await a download, say, you want to come back and update the UI.
* But when you're a library author, it's rarely needed.
* Say you’ve got a library method with an await in the middle of it.
* You usually don't care which threading context you come back on to finish up the second half of your library method.
* It doesn't matter if your library method finishes off on a different thread either, maybe the IO completion port thread.
* That's because when the user awaited on your Async library method, then their own await is going to put them back where they wanted. They don't need to rely on you doing it for them.
[CLICK]
* And so in the framework we provide this helper method Task.ConfigureAwait.
* Use it on your await operator.
* Default, true, means the await should use SynchronizationContext.Post to resume back where it left off
* If you pass in false, then if possible it'll skip that and just continue where it is, maybe the IO completion-port thread
* Let's just stay there! It's as good a place as any!
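* In library code, that looks something like this (a sketch; the byte-counting loop is illustrative):

```csharp
public async Task<int> CountBytesAsync(Stream s)
{
    var buffer = new byte[4096];
    int total = 0, n;
    // ConfigureAwait(false): resume wherever we happen to be (likely a threadpool
    // or IO-completion thread) instead of Post-ing back to the caller's UI context.
    while ((n = await s.ReadAsync(buffer, 0, buffer.Length).ConfigureAwait(false)) > 0)
        total += n;
    return total;
}
```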
[CLICK]
If your library doesn't do this, and you're using await in an inner loop, then you're wasting the user's message-loop
The user’s message-loop is an app-global resource.
* You’re being a bad citizen, flooding THEIR UI thread with messages that don't have anything to do with them
* demo...
[VS: 2.ConfigureAwait]
const int ITERS = 20000;
* Repeat inner loop 20,000 times
await t
* This one does the default - it does capture and resume on the captured synchronization context
await t.ConfigureAwait(false)
* This one, same code, just resumes on whichever thread it left off. Likely the threadpool thread.
[RUN, CTRL+F5]
* We see a fifteen-fold difference.
* Each individual await only costs a few microseconds. That's not much.
* But doing the loop 20,000 times, it adds up to half a second.
* And it's completely irrelevant if you're only doing 10 or 100 awaits in your library method
* But if you have an await inside your inner loop, or if your user will call you inside their inner loop, that's when it adds up
Again, this is a micro-optimization.
If you only have a few tens of awaits per second, nothing to worry about. But otherwise…
Principle is that the UI message-queue is an app-global resource.
If the internal implementation of your library routine has awaits inside it, and your routine was called from the UI context, then it’ll wind up posting each of its awaits back to the UI thread.
This is an abuse of the UI thread, which will hurt responsiveness.
So you can use ConfigureAwait(false) to avoid that.