From Event to Action: Accelerate Your Decision Making with Real-Time Automation
1. "SERVERLESS" CQRS USING AZURE EVENT GRID
Using Azure Event Grid as the backbone for a serverless CQRS architecture
DUBLIN
Let's trend! …
#GlobalAzure
@Merrion
2. AGENDA
• What is CQRS?
• What is event sourcing?
• How to do it on Event Grid
• Q & A
4. WHAT IS EVENT SOURCING?
[Diagram: three bank accounts (Aidan 00001, Margaret 00002, Fearghal 00003), each modelled as an append-only stream of events — Open Account, Deposit Money, Withdraw Money, Close Account — with balances of $10.00 and $5.00 derived by replaying each stream.]
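The event-sourcing idea in the diagram above can be sketched in a few lines: an account is nothing but an append-only stream of events, and the current balance is derived by replaying that stream. This is a minimal illustration, not the demo's actual code, and the particular stream shown is one plausible reading of the diagram's amounts.

```python
def replay_balance(event_stream):
    """Fold over an account's events to derive its current balance."""
    balance = 0.0
    for event in event_stream:
        kind = event["type"]
        if kind == "OpenAccount":
            balance = 0.0                    # stream starts with an empty account
        elif kind == "DepositMoney":
            balance += event["amount"]
        elif kind == "WithdrawMoney":
            balance -= event["amount"]
        elif kind == "CloseAccount":
            balance = 0.0                    # closing zeroes the account out
    return balance

# One account's stream: open, deposit $10.00, withdraw $5.00
aidan = [
    {"type": "OpenAccount"},
    {"type": "DepositMoney", "amount": 10.00},
    {"type": "WithdrawMoney", "amount": 5.00},
]
```

The key property is that the balance is never stored directly; it is always recomputed (or cached as a snapshot) from the events, so the full history of every account is preserved.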
17. LESSONS LEARNED
• Keep your functions small (do only one business thing per
function)
• Hook up Application Insights to see inside your business
application
• Be wary of knitting together domains – you can build an anti-corruption layer in Event Grid
• Be careful with what can be parallel vs what cannot. (If things
have to happen in a sequence then have them trigger each
other as a function pipeline.)
• Only write the code that only you can write (the business logic)
22. XBOX & OTHER PRIZES…
FILL IN THE FORM!
https://aka.ms/azurebootcampireland
Editor's Notes
The key words are “CQRS” which is what we are going to do, and “Event Grid” which is how.
I’ll start by describing the CQRS architecture pattern, with a side track into a concept called “event sourcing” (both are applicable to many different technologies), then go ahead and show how you would go about doing this on top of Event Grid.
At its simplest, CQRS – Command Query Responsibility Segregation – is an architecture whereby commands (those things that alter the state of the system) are kept separate from queries (those that read the state of the system).
Commands map to the things that can cause the state of your application to change – the event publishers.
1) Blob storage could be used as a sort of “in tray” that converts a file arrival into a command.
2) Event Hubs can be used to react to commands sent from other domains (or internally within this domain)
3) Custom topics can carry application-specific commands raised within, or sent into, this domain
Event Grid is now in “General Availability” so can be run from whichever region you have chosen for your domain.
There are SDKs for a number of programming languages – C#, Python, Node.js – and the REST API means you can access it from a large range of other programming environments.
You should also have some idea of the commands and events that make up your application – although it is as easy (maybe arguably easier) to add new events as your business domain requires them.
For commands with a data payload greater than the maximum size of an event (64 KB), saving the data as a file into a designated blob storage location can trigger the event, with the file location being a parameter of that command. You can envisage these blob storage locations as application in-trays.
For smaller commands you can wrap the entire business payload in an application-specific event that goes in a custom topic.
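The routing decision described above can be sketched as a small helper: payloads under the size limit travel inside the event itself, while larger ones are parked in a blob "in-tray" and referenced by location. The function and parameter names here are illustrative assumptions, and `save_to_intray` stands in for a real blob upload.

```python
import json

MAX_EVENT_BYTES = 64 * 1024  # Event Grid's per-event size limit

def build_command_event(command_type, payload, save_to_intray):
    """Wrap small payloads in the event; park large ones in the in-tray.

    `save_to_intray` is a stand-in for a blob upload; it takes the
    serialised payload and returns the stored file's location.
    """
    body = json.dumps(payload)
    if len(body.encode("utf-8")) <= MAX_EVENT_BYTES:
        # Small enough: the whole business payload rides in the event
        return {"eventType": command_type, "data": payload}
    # Too big: store the payload and send only its location as a parameter
    location = save_to_intray(body)
    return {"eventType": command_type, "data": {"payloadLocation": location}}
```

In the real system the event built here would then be published to the domain's custom topic; that call is omitted to keep the sketch self-contained.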
For events relating to an actual race we can envisage an IoT Hub being used to send data from sensors (or Fitbit-type devices), and here we rely on the massive throughput capabilities of IoT Hub.
Alternatively for an occasionally connected model you could record the events on a tablet and fire them up to your system by saving the results file to a storage location.
When something (a domain event) happens and is ingested by the system, we need to do three things to persist that event’s occurrence:
Turn the event into an event classification
Append it to whichever entity’s event stream it applies to
Trigger any notifications that other parts of the system may be listening for
The event stream can be stored as an Append Blob, or in Azure Tables, Azure Files, Cosmos DB, etc.
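The three steps above can be sketched with an in-memory event store standing in for whichever backing store (Append Blob, Tables, Cosmos DB) is chosen. All names here are illustrative assumptions, not the demo's code.

```python
from collections import defaultdict

class EventStore:
    """In-memory stand-in for the append-only event streams.

    Persisting a domain event is: classify it, append it to the right
    entity's stream, then fan the notification out to any listeners.
    """
    def __init__(self):
        self.streams = defaultdict(list)   # entity id -> list of events
        self.subscribers = []              # callbacks listening for events

    def append(self, entity_id, event_type, data):
        event = {"type": event_type, "data": data}   # 1) classification
        self.streams[entity_id].append(event)        # 2) append to stream
        for notify in self.subscribers:              # 3) notify listeners
            notify(entity_id, event)
        return event
```

In the Event Grid version, step 3 is what raising an event on a topic gives you for free: subscriptions deliver the notification without the store knowing who is listening.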
To trigger a query, an event is raised on the custom topic for that specific query type, with the appropriate parameters.
This is then routed to an appropriate query handler, which can be an Azure function, a Logic App, or even an external program invoked via webhooks.
Results are calculated by running the appropriate projections over the event streams and performing any map-reduce collation on the returned data.
Results can then be distributed – either back to the caller’s application by webhooks, or by email, Twitter, etc. using a Logic App.
(This could be triggered by the results being written to an “out-tray” location in Azure storage for example)
This approach works best for predefined queries with matching predefined query handlers. However you can, of course, persist the output of these queries to a denormalised relational database store for more ad hoc querying.
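The projection-and-collation step described above can be sketched as follows: a projection folds one entity's event stream into a value, and a map-reduce pass runs it over every stream and collates the per-entity results. Function names and the sample streams are illustrative assumptions.

```python
def total_deposits(events):
    """Projection: sum the deposits in a single account's stream."""
    return sum(e["amount"] for e in events if e["type"] == "DepositMoney")

def run_query(streams, projection, collate):
    """Map the projection over every entity's stream, then reduce."""
    per_entity = {eid: projection(events) for eid, events in streams.items()}
    return collate(per_entity)

# Two sample account streams keyed by account number
streams = {
    "00001": [{"type": "DepositMoney", "amount": 10.0}],
    "00002": [{"type": "DepositMoney", "amount": 10.0},
              {"type": "WithdrawMoney", "amount": 5.0}],
}

# Collate by summing the per-account totals into one figure
grand_total = run_query(streams, total_deposits, lambda r: sum(r.values()))
```

Because each projection touches only one stream, the map phase parallelises naturally; only the final collation needs to see all the partial results.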
OK – a quick demo to show how these ideas sit on top of Azure Event Grid…
The application for this demo represents a rudimentary system for use by a running organisation – if you are familiar with the “Meet and train league” or “Parkruns” you will know the general idea.
This is split up into four domains: League, Team, Race and Runner, and each domain is represented as its own Azure Functions app.
There is also a linked storage account for each of these domains that holds the event streams and data snapshots for that domain.
Each domain has its own distinct storage account in order to keep everything separate.
1) Each of these has a number of containers, e.g.:
Command logs / query logs to store incoming command details for debugging and audit purposes
An in-tray for message payloads that are greater than 64 KB
Documentation to describe the commands/queries and parameters for the public interface to your domain
2) Separate event hubs for routing commands and queries around
3) Custom event grid topics for each specific command or query
We send a command into the system by posting a message to the command endpoint with the event detail contained in the HTTP request body.
Security is performed by having an access token passed in the HTTP request headers.
The event grid topic triggers a subscription for the “Command Logger”, an Azure function which gives the command a unique identifier and then writes it to the “command-log”.
This is so we have an audit trail of every command and can deal with failed commands.
Then the command is passed to the function or functions that comprise the command handling.
This could include validation and any set-up actions that need to be performed.
Finally the command is turned into one or more events which are persisted to the event stream to cause the change to occur.
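The command path just described (log with a unique identifier for audit, validate, then persist the resulting event or events) can be sketched as below. The names and the single-field validation are illustrative assumptions, not the demo's code.

```python
import uuid

def handle_command(command, command_log, streams):
    """Sketch of the command path: log for audit, validate, emit events."""
    # Command Logger: stamp a unique id and keep an audit-trail copy
    command = dict(command, id=str(uuid.uuid4()))
    command_log.append(command)

    # Validation / set-up actions before anything changes state
    if "account" not in command:
        raise ValueError("command has no target account")

    # The handler turns the command into one or more events and
    # appends them to the target entity's stream to effect the change
    events = [{"type": command["commandType"], "data": command.get("data")}]
    streams.setdefault(command["account"], []).extend(events)
    return command["id"]
```

Keeping the logged copy even when validation later fails is what makes failed commands diagnosable from the audit trail.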
Running a query is triggered in a very similar manner to the command – we pass a message to the event grid topic for the query type with the parameters in the HTTP request body.
In addition we pass parameters to tell the query handler where and how to deliver the results when the query completes.
The query parameters are logged and then passed to a query handler function.
The query handler is tasked with picking up the identity group over which the query will run and then running the projection (or projections) that comprise the query before collating the results.
When a query completes, the results are written to a results cache (so that we do not process the same query twice).
The results are then returned to wherever the request specified.
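The results cache mentioned above can be sketched as a lookup keyed on the query type plus its parameters, so an identical question is answered from the cache rather than re-running the projections. The class and method names are illustrative assumptions.

```python
import json

class QueryCache:
    """Cache completed query results so the same query runs only once."""
    def __init__(self):
        self._results = {}

    def _key(self, query_type, params):
        # Sort keys so logically identical parameter dicts hash the same
        return (query_type, json.dumps(params, sort_keys=True))

    def get_or_run(self, query_type, params, run):
        """Return a cached result, or run the query handler and cache it."""
        key = self._key(query_type, params)
        if key not in self._results:
            self._results[key] = run(params)
        return self._results[key]
```

A real implementation would also expire entries as new events arrive, since a cached result is only valid for the event streams as they stood when it was computed.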
For the latest pricing, check the above link because prices do change fairly rapidly (nearly always downwards though).