Richard Banks
@rbanks54
richard-banks.org
1. Why? (the short version)
2. Architecture
3. Implementation
4. Deployment
You probably don’t need them
Tooling is still improving
Many implementations aren’t ‘pure’… that’s OK
I’m showing ONE way, not THE ONLY way.
There’s plenty of other approaches:
• AWS Lambda / Azure Functions
• Azure Service Fabric
• Akka/Akka.NET
Shiny! Uber! Shiny! Shiny! Shiny!
Shiny! Shiny! Shiny! Netflix!! Shiny!
Shiny! Shiny! Amazon!! Shiny! Shiny! Shiny!
Shiny! Unicorns!! Shiny! Shiny! Shiny! Shiny!
Greater flexibility & scalability
More evolvable
Independently deployable services
Improved technical agility
Independent development teams
Resilience. A failure in one service shouldn’t
wipe out the whole system.
Tech flexibility. Right tool for the right job.
Smaller services are easier to understand and
maintain.
A potential migration approach for legacy
systems
Isn’t this meant to be easy?!
I can’t tell how it fits together anymore!
It’s more brittle now than it ever was!
Performance is terrible!!
I need to deploy all my services together and in
a specific order!
Distributed systems are HARD!!
Eventual consistency is a paradigm shift
Legacy habits create a distributed “big ball of
mud”
People and culture problems.
Architecture is never just about the technology.
Can your team(s) create a well built monolith?
Are you agile, do you “do agile”, or is it neither?
Have you got a DevOps culture?
Is there an underlying business reason driving
the change?
Keep it simple! Always.
Don’t build what you don’t need.
Don’t build what you might need.
ROI & TCO are still incredibly important!
If those warnings didn’t scare you off, we’ll
continue.
YOU HAVE BEEN WARNED :-)
Independent, loosely coupled services
Cheap to replace, easy to scale
Fault tolerant, version tolerant services
http://blog.mattwynne.net/2012/05/31/hexagonal-rails-objects-values-and-hexagons/
http://www.slideshare.net/fabricioepa/hexagonal-architecture-for-java-applications/10
http://www.kennybastani.com/2015/08/polyglot-persistence-spring-cloud-docker.html
http://www.slideshare.net/adriancockcroft/monitorama-please-no-more/31
Be language & platform agnostic
One synchronous approach (JSON over HTTP)
One asynchronous approach (AMQP via RabbitMQ)
Why? Consistency reduces complexity.
Client applications should not call
microservices directly.
Have clients call an API/Application Gateway.
This then calls your microservices.
Why? Encapsulate and isolate change.
If you use synchronous comms, you need to
handle failures and timeouts.
Use a circuit breaker pattern & design with
failures in mind (and test for it!)
Why? Uptime is the product of the individual
components (99.99%^30 ≈ 99.7% ≈ 2+ hrs downtime/mth)
http://www.lybecker.com/blog/2013/08/07/automatic-retry-and-circuit-breaker-made-easy/
http://techblog.netflix.com/2012/02/fault-tolerance-in-high-volume.html
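As one possible sketch (not from the sample code): the circuit breaker can be implemented with the Polly library, breaking after a few consecutive failures and failing fast while the downstream service recovers. The URL reuses the localhost endpoint from the later Web API example.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;                  // NuGet: Polly
using Polly.CircuitBreaker;

public static class ProductsClient
{
    private static readonly HttpClient client = new HttpClient();

    public static async Task<string> GetProductsAsync()
    {
        // In real code the policy would be created once and shared,
        // so failure counts accumulate across calls.
        var breaker = Policy
            .Handle<HttpRequestException>()
            .CircuitBreakerAsync(exceptionsAllowedBeforeBreaking: 2,
                                 durationOfBreak: TimeSpan.FromSeconds(30));
        try
        {
            // While the circuit is open, calls fail fast with
            // BrokenCircuitException instead of waiting on timeouts.
            return await breaker.ExecuteAsync(
                () => client.GetStringAsync("http://localhost:8181/api/products"));
        }
        catch (BrokenCircuitException)
        {
            return null; // fall back to cached data or a default response
        }
    }
}
```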
One client request may trigger hundreds of
microservice calls. How do we trace a request?
Treat each client request as a logical business
transaction.
Add a Correlation ID to every client request and
include it in all internal communications.
Why? Traceability aids debugging and performance
tuning.
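A minimal sketch of the correlation ID idea, using a Web API DelegatingHandler. The header name X-Correlation-ID is an assumed convention, not part of the sample code.

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical edge handler: ensures every inbound request carries a
// correlation id, creating one if the client didn't supply it.
public class CorrelationIdHandler : DelegatingHandler
{
    private const string HeaderName = "X-Correlation-ID"; // assumed header name

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (!request.Headers.Contains(HeaderName))
        {
            request.Headers.Add(HeaderName, Guid.NewGuid().ToString("N"));
        }
        // The same id would be copied onto any outbound HTTP calls and
        // bus messages this request produces, so logs join end-to-end.
        return base.SendAsync(request, cancellationToken);
    }
}
```

Registered once via config.MessageHandlers.Add(new CorrelationIdHandler()) on the HttpConfiguration, it runs for every inbound request.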
Loose coupling implies no hard coded URLs.
Service discovery isn’t new (remember UDDI?)
Microservices need a discovery mechanism.
E.g. Consul.io & Microphone
https://github.com/rogeralsing/Microphone
For services to be independent…
…they cannot rely on another service being
available (temporal coupling), and
…they should cache any external data they
need.
Be prepared for this in your design.
“Services aren't really loosely coupled if all parties to a
piece of service functionality must change at the same
time.”
Consumer Driven Contracts are a concept from the SOA
days:
WSDLs and XSDs were the SOAP attempt to solve this.
With synchronous HTTP calls, have a look at Pact
http://www.infoq.com/articles/consumer-driven-contracts
https://github.com/SEEK-Jobs/pact-net
https://www.youtube.com/watch?v=SMadH_ALLII
Domain Driven Design
Align microservices to Domain Contexts, Aggregates &
Domain Services
CQRS
Command Query Responsibility Segregation.
Scale reads and writes independently.
SQL or NoSQL
Use persistent, easily rebuilt caches for query services.
Versioning
APIs are your contracts, not versions of binaries.
Message Bus
Reliable, async comms.
Optimistic Concurrency
Avoid locking of any kind.
Event Sourcing
Persist events, not state. Avoid 2-PC hassles.
API Gateway
Encapsulate access to microservices; optimise for client
needs.
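The optimistic concurrency item above can be sketched as a version check on the aggregate: commands carry the version the caller last saw, and a stale caller is rejected rather than blocked. The class and exception type here are illustrative; the sample code's ValidateVersion may differ.

```csharp
using System;

public abstract class VersionedAggregate
{
    public int Version { get; protected set; }

    // No locks: conflicting writers are detected, not serialised.
    protected void ValidateVersion(int originalVersion)
    {
        if (originalVersion != Version)
            throw new InvalidOperationException(
                string.Format("Concurrency conflict: caller has version {0}, aggregate is at {1}.",
                              originalVersion, Version));
    }
}
```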
When a domain object is updated, we need to
communicate the domain event(s) to all the other
interested microservices.
We could use 2-phase commit for this… and we
could also drink battery acid.
Why not just persist these events to a database
instead of state, and publish those same events on
the message bus?
The “100 line” rule is a bit silly.
Nano-services are effectively a service-per-method.
Don’t turn your app into thousands of RPC calls!
(unless you want to use AWS Lambda?)
It’s about units of functionality, not lines of code.
Have a single purpose
E.g. manage state of aggregates/entities
E.g. send emails
E.g. calculate commissions
Be unaware of other services (in the “core”)
Think about your Use Cases/Bounded Contexts
It’s not architecture if there’s no boxes and lines!
Application Services
(Gateway/Edge Service)
UI Request (HTTP)
Query MicroService
Data Cache
(Redis)
Overall Approach
Commands & Queries
Database
(EventStore)
Domain MicroService
Message Bus
(RabbitMQ)
Commands Queries
Event Sourcing Domain Events
Precomputed
Results
Web API Controller
Request (HTTP)
Aggregate
Event Handler(s)
Event Store
Domain MicroService
Command
Message Bus (publish)
Command Handler
Command(s)
Event Store Repository
Save New Events
Event(s) Event(s)
Web API Controller
Query (HTTP)
Query Handler
Event Handler(s)
Message Bus (subscribe)
Query Micro Service
Event(s)
Data Cache
(Redis)
Consider splitting
here when
scaling beyond a
single instance to
avoid competing
consumers
Query
Updates
RabbitMQ + EasyNetQ
EventStore
Redis + StackExchange.Redis
ASP.NET Web API
Sample code is for inspiration, not duplication
https://github.com/rbanks54/microcafe
Inspired by:
Starbucks does not use two phase commit
http://www.enterpriseintegrationpatterns.com/docs/IEEE_Software_Design_2PC.pdf
• Cashier/Barista/Customer?
• Coffee Shop/Customer?
What about ‘Master Data’?
Which context owns the
“product” entity?
User Story?
As the coffee shop
owner
I want to define the
products that are
offered for sale
So I can set my menu
Use Cases?
Manage Products
(CRUD)
View Menu
Run a promotion
Domain entities form the application core.
Commands & Queries are the adapters and
ports of our services
Use CQRS; separate
microservices for
commands and queries
Products
Admin Domain
Command
Handlers
Web API
Repository
Bus Publisher
Event
Store
Event
Handlers
Bus Subscriber
Admin
Microservice
Memory
Store
Event
Store
RabbitMQ
Memory
Bus
Commands do not update the state of any domain
objects. They raise domain events.
Events are processed by domain objects, who
update their own internal state.
This pattern makes it very easy to replay events
and rebuild state quickly.
public class Product : Aggregate
{
private Product() { }
public Product(Guid id, string name, string description, decimal price)
{
ValidateName(name);
ApplyEvent(new ProductCreated(id, name, description, price));
}
private void Apply(ProductCreated e)
{
Id = e.Id;
Name = e.Name;
Description = e.Description;
Price = e.Price;
}
Methods Create Events
Apply an Event to change state
Holds unsaved events.
Helper method to reapply events when
rehydrating an object from an event stream.
Provides a helper method to apply an event of
any type and increment the entity’s version
property.
public abstract class Aggregate
{
public void LoadStateFromHistory(IEnumerable<Event> history)
{
foreach (var e in history) ApplyEvent(e, false);
}
protected internal void ApplyEvent(Event @event) { ApplyEvent(@event, true); }
protected virtual void ApplyEvent(Event @event, bool isNew)
{
this.AsDynamic().Apply(@event);
if (isNew)
{
@event.Version = ++Version;
events.Add(@event);
}
else Version = @event.Version;
}
Cast as Dynamic so we don’t need to know
all strongly typed Events beforehand
New Events cause
version to increment
Replaying events
public class Product : Aggregate
{
private void Apply(ProductNameChanged e)
{
Name = e.NewName;
}
public void ChangeName(string newName, int originalVersion)
{
ValidateName(newName);
ValidateVersion(originalVersion);
ApplyEvent(new ProductNameChanged(Id, newName));
}
Domain Command
Commands raise
Events
We separate the commands from the queries in
our design. CQRS approach.
Ports: Command Handlers/Services
Adapters: HTTP API (ASP.NET Web API)
Commands do not have to map 1:1 to our internal
domain methods.
Command Handlers (the ports) act on the inbound
contract our adapters (the API) expose.
Internal implementation and any created domain
events are up to us.
Command objects are just POCOs. No behaviour.
public class ProductCommandHandlers
{
private readonly IRepository repository;
public ProductCommandHandlers(IRepository repository)
{
this.repository = repository;
}
public void Handle(CreateProduct message)
{
var product = new Products.Domain.Product(message.Id, message.Name,
message.Description, message.Price);
repository.Save(product);
}
Outgoing “Port”
Incoming “Port”
Commands don’t return values
Act on the domain
Persist
Doesn’t need to be RESTful.
Could also have a SOAP API.
Could also have a Web Sockets API.
Secure your adapters. Flow identity to your
microservices
[HttpPost]
public IHttpActionResult Post(CreateProductCommand cmd)
{
if (string.IsNullOrWhiteSpace(cmd.Name))
{
var response = new HttpResponseMessage(HttpStatusCode.BadRequest) { /* … */ };
throw new HttpResponseException(response);
}
try
{
var command = new CreateProduct(Guid.NewGuid(), cmd.Name, cmd.Description, cmd.Price);
handler.Handle(command);
var link = new Uri(string.Format("http://localhost:8181/api/products/{0}", command.Id));
return Created<CreateProduct>(link, command);
}
catch (AggregateNotFoundException) { return NotFound(); }
catch (AggregateDeletedException) { return Conflict(); }
}
Incoming “adapter”
Pass through to the
internal “port”
Commands either
succeed or throw an error
Repository Interface for data persistence
Message Bus interface for publishing events
Ports: Repository / Message Bus
Adapters: EventStore API / EasyNetQ
Repository pattern to encapsulate data access
Event sourcing; persist events not state.
Immediately publish an event on the bus
Note: This approach may fail to publish an event
Can be prevented by using Event Store as the pub/sub mechanism
Can be prevented by only publishing to the bus. Use a separate microservice to persist events to the
EventStore (extra complexity)
Personal choice: RabbitMQ for ease of use & HA/clustering.
public async Task SaveAsync<TAggregate>(TAggregate aggregate) where TAggregate : Aggregate
{
//...
var streamName = AggregateIdToStreamName(aggregate.GetType(), aggregate.Id);
var eventsToPublish = aggregate.GetUncommittedEvents();
//...
if (eventsToSave.Count < WritePageSize)
{
await eventStoreConnection.AppendToStreamAsync(streamName, expectedVersion, eventsToSave);
}
else { /* ... multiple writes to the event store, in a transaction */ }
if (bus != null)
{
foreach (var e in eventsToPublish) { bus.Publish(e); }
}
aggregate.MarkEventsAsCommitted();
}
Repository method
Persist via the event
store “adapter”
Publish events onto
the bus
Product
View
Read Model(s)
Query
Handlers
Web API
Repository
Persistence
Event
Handlers
Bus
Subscriber
Admin Read Model
Microservice
Redis
RabbitMQ
Subscribe to domain events, and
Update their read models based on those
events (i.e. their cached data)
Optimise for querying with minimal I/O
Subscribe to messages from the message bus at
startup
Use Topic Filters to only subscribe to events of
interest
var eventMappings = new EventHandlerDiscovery().Scan(productView).Handlers;
var subscriptionName = "admin_readmodel";
var topicFilter1 = "Admin.Common.Events";
var b = RabbitHutch.CreateBus("host=localhost");
b.Subscribe<PublishedMessage>(subscriptionName, m =>
{
Aggregate handler;
var messageType = Type.GetType(m.MessageTypeName);
var handlerFound = eventMappings.TryGetValue(messageType, out handler);
if (handlerFound)
{
var @event = JsonConvert.DeserializeObject(m.SerialisedMessage, messageType);
handler.AsDynamic().ApplyEvent(@event, ((Event)@event).Version);
}
},
q => q.WithTopic(topicFilter1));
Uses reflection and convention
over configuration
All events subclass this
Dynamic call to avoid tight
coupling with types
Filter to subset of events
Query microservices determine events they are
interested in.
Handle events using the same Event Handling
pattern as used in the domain objects.
Consistency reduces complexity.
public class ProductView : ReadModelAggregate,
IHandle<ProductCreated>, IHandle<ProductDescriptionChanged>,
IHandle<ProductNameChanged>, IHandle<ProductPriceChanged>
{
//...
public void Apply(ProductCreated e)
{
var dto = new ProductDto
{
Id = e.Id,
Name = e.Name,
Description = e.Description,
Price = e.Price,
Version = e.Version,
DisplayName = string.Format(displayFormat, e.Name, e.Description),
};
repository.Insert(dto);
}
Interested in 4 events
Look familiar?
Queries return DTOs/Result Objects.
Not domain objects.
Persist the DTOs. Denormalised
data is OK.
Queries are simply WebAPI methods
Simple lookups of precomputed result(s) in the
cached data.
Redis: A key/value store, with fries
Collections stored as ‘sets’
Convention approach to ease implementation
Single objects stored using FQ type name
Key = MyApp.TypeName:ID | Value = JSON serialised object
All keys stored in a set, named using FQTN
Key = MyApp.TypeNameSet | Values = MyApp.TypeName:ID1, MyApp.TypeName:ID2, etc
Redis can dereference keys in a Set, avoiding N+1
queries.
public IEnumerable<T> GetAll()
{
var get = new RedisValue[] { InstanceName() + "*" };
var result = database.SortAsync(SetName(), sortType: SortType.Alphabetic, by: "nosort", get: get).Result;
var readObjects = result.Select(v => JsonConvert.DeserializeObject<T>(v)).AsEnumerable();
return readObjects;
}
public void Insert(T t)
{
var serialised = JsonConvert.SerializeObject(t);
var key = Key(t.Id);
var transaction = database.CreateTransaction();
transaction.StringSetAsync(key, serialised);
transaction.SetAddAsync(SetName(), t.Id.ToString("N"));
var committed = transaction.ExecuteAsync().Result;
if (!committed)
{
throw new ApplicationException("transaction failed. Now what?");
}
}
Updating the Redis Cache
We cache JSON strings.
Simple Redis query
Return the DTOs we’d
previously persisted
Before we deploy to <environment />, how do
we test our microservices in concert?
Consider having an environment configuration file
List the version of each microservice that has been
tested as part of a “known good” configuration
-- OR --
Ignore versioning!
Rely on production monitoring to discover problems,
and quickly rollback changes
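A "known good" environment configuration file might look like this (file name, format and service names are illustrative only):

```json
{
  "environment": "staging",
  "services": {
    "admin-microservice": "1.4.2",
    "admin-readmodel-microservice": "1.4.0",
    "api-gateway": "2.1.7"
  }
}
```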
Microservices are small, replaceable units of
functionality, right?
Stop thinking about upgrading them.
You don’t upgrade them; you replace them.
Best approach? Isolate the service and its
execution environment. Replace both at once.
Image: a read only template for a container. Not
runnable.
Container: a runnable instance of an image.
Registry: a collection of Docker images
Containers are immutable.
You don’t upgrade them; you replace them.
No binary promotion to a production container.
You promote the container itself to production.
Use a repository to store images (e.g. Artifactory)
Use Docker-Compose to automatically build
and run a set of containers that matches
production.
You may be limited by the resources of your dev
box (RAM, CPU cores, disk).
You could use Azure Container Service to spin
up your configuration in the cloud instead.
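A minimal docker-compose sketch for a stack shaped like this talk's (the app service and its image name are hypothetical; the infrastructure images are the public ones):

```yaml
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports: ["5672:5672", "15672:15672"]
  redis:
    image: redis:3
  eventstore:
    image: eventstore/eventstore
    ports: ["1113:1113", "2113:2113"]
  admin-service:
    image: myregistry/admin-service:latest   # hypothetical app image
    depends_on: [rabbitmq, redis, eventstore]
    ports: ["8181:8181"]
```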
Use test/mock containers or microservices.
Only spin up the services you need to test your
work, and avoid all the other services that exist.
Requires a bit more knowledge around what
services to start, what to mock and what to ignore.
Could also use tools like WireMock to intercept and
respond to HTTP requests. (more complex)
If you’ve proven your microservice supports the
defined contracts…
- HTTP API (consumer based contracts)
- Events on a Message Bus
…then your microservice should work with everything
else. Just deploy it!
But you MUST have great testing, and strong operational
monitoring in place.
1. Build and test locally in a container
2. Push code to source control. Automated build
creates new container image.
3. Image is pushed to image repository
4. Image gets promoted through environments
to prod.
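Scripted roughly, the pipeline steps above might look like this (registry and image names are placeholders):

```shell
# Automated build: create an immutable, versioned image
docker build -t registry.example.com/admin-service:1.4.2 .

# Push the image to the image repository
docker push registry.example.com/admin-service:1.4.2

# Each environment runs that exact image by tag;
# nothing is rebuilt between environments.
docker run -d -p 8181:8181 registry.example.com/admin-service:1.4.2
```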
That’s cool. You don’t need Docker (or containers).
1. Always get the latest code you need.
2. Manually build & run all of the services on your dev box each
time you test.
3. Use scripting to make it a little less painful.
Side-effect: Encourages a low number of services.
1. Why?
2. Architecture
3. Implementation
4. Deployment
Microservices with .Net - NDC Sydney, 2016