The presentation by Joonas Westlin delves into Durable Functions in Azure, emphasizing the role of orchestrators and activity functions in creating long-running workflows with persistent state. It covers various durability providers, their unique features, and considerations for debugging applications using this framework. Key takeaways include the advantages of different storage options and the mechanisms for activity and orchestrator communication.
Key takeaways from this presentation
• Orchestrator and activity communication
• State storage using the Azure Storage provider
• Differences in other durability providers
• Goal: improve your ability to debug Durable Functions
Introduction/refresher
• Durable Functions = an extension for Azure Functions that allows execution of Durable Task orchestrations
• The Durable Task framework lets developers build long-running workflows with persistent state, using the familiar async-await constructs in .NET
• Orchestrator functions define the workflow; activity functions do the non-deterministic work (see the sketch below)
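For orientation, here is a minimal sketch of an orchestrator and an activity using the in-process Durable Functions C# model; the function names and payload are made up for illustration.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class HelloWorkflow
{
    // Orchestrator: defines the workflow. Must be deterministic, so all
    // I/O and other non-deterministic work is delegated to activities.
    [FunctionName("HelloOrchestrator")]
    public static async Task<string> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var name = context.GetInput<string>();
        // Schedules the activity through the durability provider and
        // suspends the orchestrator until the result comes back.
        return await context.CallActivityAsync<string>("SayHello", name);
    }

    // Activity: the place for non-deterministic work (HTTP calls,
    // database access, random values, current time, and so on).
    [FunctionName("SayHello")]
    public static string SayHello([ActivityTrigger] string name)
    {
        return $"Hello, {name}!";
    }
}
```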
Durable Task workers
• Workers execute orchestrators and activities by requesting work items from the durability provider
• Multiple workers run in parallel = increased throughput
• Activity work items can run on any worker, but orchestrator work items cannot
• Danger of state corruption if more than one worker takes a work item for the same orchestration simultaneously
Storage durability provider
• Convenient choice for Azure Functions, since a Function App already has a Storage account
• Cheap option with consumption-based pricing in Storage
• Tip: using a v1 Storage account is cheaper (not in newer regions)
• Can run the Storage emulator locally
Orchestrator and activity communication
• The orchestrator gets executed several times; each time, a set of outbound messages is generated for the activities to be triggered
• These messages are sent through the durability provider, and workers receive them
• Once a worker is done with an activity, it sends a message back to the orchestrator (see the fan-out sketch below)
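A hedged sketch of how a single orchestrator execution can generate several outbound activity messages at once (fan-out/fan-in); the function names and inputs are invented for illustration.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class FanOutOrchestrator
{
    [FunctionName("ProcessBatch")]
    public static async Task<int> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var items = context.GetInput<List<string>>();

        // Each CallActivityAsync produces an outbound message via the
        // durability provider; any available worker can pick it up.
        var tasks = new List<Task<int>>();
        foreach (var item in items)
        {
            tasks.Add(context.CallActivityAsync<int>("ProcessItem", item));
        }

        // The orchestrator is replayed each time an activity result message
        // comes back, until every awaited task has completed.
        var results = await Task.WhenAll(tasks);

        int total = 0;
        foreach (var r in results) total += r;
        return total;
    }

    [FunctionName("ProcessItem")]
    public static int ProcessItem([ActivityTrigger] string item)
    {
        // Placeholder for real, non-deterministic work.
        return item.Length;
    }
}
```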
Durable Entities
• We looked at orchestrators and activities, but where are Durable Entities?
• Entities are just orchestrators
• At a high level, you can think of an entity as an orchestrator that (see the sketch after this list):
1. Gets state from its input
2. Waits for an external event (operation)
3. Computes new state based on the event
4. Restarts itself with the new state as input
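As a concrete illustration, here is a minimal function-based Durable Entity in C#; the entity name ("Counter") and its operations are hypothetical.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class CounterEntity
{
    // Conceptually: load state, wait for an operation (external event),
    // compute new state, persist it, and wait for the next operation.
    [FunctionName("Counter")]
    public static void Run([EntityTrigger] IDurableEntityContext ctx)
    {
        int current = ctx.GetState<int>();

        switch (ctx.OperationName)
        {
            case "add":
                ctx.SetState(current + ctx.GetInput<int>());
                break;
            case "reset":
                ctx.SetState(0);
                break;
            case "get":
                ctx.Return(current);
                break;
        }
    }
}
```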
Other things provided by the Durable Functions extension
• Durable HTTP (see the sketch below)
• [Deterministic] attribute
• Replay-safe logger
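A hedged sketch of Durable HTTP and the replay-safe logger inside an orchestrator (in-process model); the URL is a placeholder.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class HttpOrchestrator
{
    [FunctionName("CallApiOrchestrator")]
    public static async Task<string> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context,
        ILogger log)
    {
        // Replay-safe logger: only writes when the orchestrator is not
        // replaying, so log lines are not duplicated on every replay.
        ILogger safeLog = context.CreateReplaySafeLogger(log);
        safeLog.LogInformation("Starting Durable HTTP call");

        // Durable HTTP: the extension performs the call via a built-in
        // activity, so the orchestrator itself stays deterministic.
        DurableHttpResponse response = await context.CallHttpAsync(
            HttpMethod.Get,
            new Uri("https://example.com/api/status")); // placeholder URL

        return response.Content;
    }
}
```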
Netherite (public preview)
• Higher performance + higher cost
• In high-throughput scenarios, cheaper than other providers at high scale
• Supported by Durable Functions
• Not on the Consumption plan, though
• Uses Event Hubs for orchestrator/activity messaging
• Even though Event Hubs persists data, it is replicated to Azure Storage after being received
• Max 32 partitions + 20 throughput units = 20 MB/s
• Uses Azure Storage blobs for storing partition state
• Harder to inspect current state
SQL (public preview)
• Uses SQL tables for everything
• Supported by Durable Functions
• Not on the Consumption plan, though
• Stored procedures contain most of the logic
• Portable; no Azure connection required
• Can do data encryption, backups, etc.
• Multi-tenant scenarios with a schema per tenant
Service Bus (+ Storage)
• Mature and transactionally consistent
• Not supported by Durable Functions
• Uses 3 Service Bus queues:
• Orchestrator = messages for orchestrators
• Worker = messages for activities
• Tracking = orchestration state tracking
• Requires an instance store + blob store; comes with Storage implementations for them
• Instance store: state and history tracking
• Blob store: oversized messages and sessions
Redis
• Uses a Redis database
• Not supported by Durable Functions
• Workers notified of new messages through Redis channels
• Actual messages sent through Redis lists
• Portable since you can run Redis pretty much anywhere
Service Fabric
• Supported only within a Service Fabric cluster
• Uses SF reliable collections for state storage
• Some features are limited, e.g. you cannot query instance state if the orchestration completed over an hour ago
Emulator
• Used for testing DTFx
• Fully in-memory
• Can be used for integration testing of your own orchestrations if you use raw DTFx (see the sketch below)
• Not designed for production use
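A rough sketch of that integration-testing idea with raw DTFx, assuming the DurableTask.Core and DurableTask.Emulator NuGet packages; the GreetingOrchestration/GreetingActivity pair is hypothetical, and the emulator type name should be verified against the package version you use.

```csharp
using System;
using System.Threading.Tasks;
using DurableTask.Core;
using DurableTask.Emulator;

public class GreetingOrchestration : TaskOrchestration<string, string>
{
    public override async Task<string> RunTask(OrchestrationContext context, string input)
    {
        // Schedule the activity just like a production orchestration would.
        return await context.ScheduleTask<string>(typeof(GreetingActivity), input);
    }
}

public class GreetingActivity : TaskActivity<string, string>
{
    protected override string Execute(TaskContext context, string input)
    {
        return $"Hello, {input}!";
    }
}

public static class EmulatorTest
{
    public static async Task RunAsync()
    {
        // In-memory orchestration service from DurableTask.Emulator
        // (assumption: LocalOrchestrationService is the emulator type).
        var service = new LocalOrchestrationService();

        var worker = new TaskHubWorker(service);
        worker.AddTaskOrchestrations(typeof(GreetingOrchestration));
        worker.AddTaskActivities(typeof(GreetingActivity));
        await worker.StartAsync();

        var client = new TaskHubClient(service);
        OrchestrationInstance instance =
            await client.CreateOrchestrationInstanceAsync(typeof(GreetingOrchestration), "World");

        // Wait for completion and assert on the result in your test framework.
        OrchestrationState state =
            await client.WaitForOrchestrationAsync(instance, TimeSpan.FromSeconds(30));
        Console.WriteLine(state.Output);

        await worker.StopAsync(true);
    }
}
```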
Speaker notes
• #8: Regions that came online after Oct 1, 2020 have similar prices for v1 and v2 Storage accounts
• #9: Control queues trigger orchestrators; the number of partitions equals the number of control queues, and at most one worker listens to each queue (controlled by blob leases). The work item queue triggers activities; there is only one such queue, with many listeners. Two tables are used to store state: one for current orchestration statuses and one for orchestration history.