Good afternoon everyone. Today I will talk about Lattice, but before starting: how many of you know Docker? How many of you have already heard about Cloud Foundry? Again, this is a beginner talk, so I will not go too deep into the details.
My name is Gwenn Etourneau. I am French, so apologies to the English listeners and to the translator… I work as a Solution Architect at a company called Pivotal, the same company as Casey West, the guy who gave the remote-working presentation yesterday.
Pivotal is the company behind the Spring Framework (Java), RabbitMQ, Cloud Foundry, Pivotal Tracker, and other things.
This is today's agenda.
Before talking about Lattice we need to talk about its origin: Cloud Foundry. Cloud Foundry is a Platform as a Service; basically, it is a platform to deploy your application without caring about the underlying IaaS, and where you get many things out of the box. Cloud Foundry is a truly open-source Platform as a Service, which means everyone in this room can contribute to it. Cloud Foundry aims to be the de facto enterprise PaaS. As a PaaS, Cloud Foundry includes a lot of features such as authentication, services, high availability, and so on. Cloud Foundry supports buildpacks as well as Docker images.
To sum up, that's the way of deploying an application in Cloud Foundry: you push your code, your application gets deployed, and you can access it.
This is the Cloud Foundry architecture, well, 80% of it…
And this is the Lattice architecture. As you can see, we keep the routing, the logs, and the scheduling parts; the scheduler is named Diego. I know people don't like diagrams, so let's keep it simple.
This is Cloud Foundry…
and this is Lattice… Simple to understand, right?
So why Lattice? Why did Pivotal create Lattice? Well, Cloud Foundry is hard to install; not really hard technically, but it's just too much for local development. So we wanted something where developers can focus on development without wasting time installing and setting up many things in their local environment.
Yeah, something easy to use and install, and of course able to run Docker images, buildpacks, or even your own workload.
As you may know, there is a pretty famous architecture called microservices coming up, and Lattice, as well as Cloud Foundry, will help you run this new kind of workload.
But Lattice is not suitable for everything.
Lattice is for development, not for production. If you are looking at a production environment, you should look at Cloud Foundry instead.
If you want persistent data, for a database for example, Cloud Foundry is again the better option, and the same goes if you have strong policies.
What about other solutions? DIY is basically doing everything yourself: installing Apache and so on… Docker? Well, for one or two applications it's fine, but again: no self-healing, no load balancing, no log aggregation; you need to set up those features yourself.
Kubernetes is pretty nice, but as Kelsey said yesterday, it is not a PaaS, it is just a scheduler; you will still have to set up, for example, load balancing and log aggregation.
So let's check what Lattice can do for you. How easy is it to install? Well, you just need Vagrant and the `vagrant up` command.
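From memory of the Lattice README, the local setup looks roughly like this; the repository URL and the default target address are what I recall and may have changed, so treat them as assumptions:

```shell
# Clone the Lattice repository and bring up the local VM with Vagrant.
# (Repo URL and default target address are from memory -- check the docs.)
git clone https://github.com/cloudfoundry-incubator/lattice.git
cd lattice
vagrant up

# Point the ltc CLI at the freshly started cluster.
ltc target 192.168.11.11.xip.io
```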
Sometimes, when you have a small team of 3-4 people, you want to share your development environment, so you can easily cluster Lattice using Terraform: just `terraform apply`. Currently OpenStack, AWS, and DigitalOcean are supported.
Scheduling. As I said, the scheduler is called Diego, and of course the scheduler manages the distribution of your workload across the cluster. Diego uses an auction algorithm; Kubernetes, for example, uses a bin-packing algorithm.
So Diego understands two things: Tasks, which are guaranteed to run at most once, and Long Running Processes (LRPs), which can run forever and can have multiple instances. And of course they should survive crashes, network failures, etc.
Some examples of Tasks: a cron job, or a database migration, like the Ruby task ‘rake db:migrate’.
For Long Running Processes, a website, a worker, or a database / key-value store are good examples.
Now I will try to quickly explain the scheduling algorithm. To win the bids we use a score based on memory, disk space, the number of running containers, and the number of instances of the same application already running; the lowest score wins the bid.
Here I have 1 instance of the blue application and 2 instances of the green application to place on my cluster, more precisely on what we call Cells.
So the first placement is easy: we use the empty Cell.
Now the scores change, because I am trying to place a green one, so the Cells where a green application is already running get their score increased. For HA, Diego tries to avoid placing instances of the same application on the same Cell or host.
Again, the scores change.
And that's all: our application instances are in place.
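To make the auction concrete, here is a toy sketch of the scoring idea in shell. The weights are entirely invented for illustration; this is not Diego's real formula. The point is only that fuller Cells, and Cells already running the same app, score higher, and the lowest score wins.

```shell
#!/bin/sh
# Hypothetical score for a Cell's bid: lower is better.
# Arguments: used memory (MB), used disk (MB), running containers,
# and instances of the *same* app already on this Cell.
# All weights below are made up for illustration only.
score() {
  mem=$1; disk=$2; containers=$3; same_app=$4
  echo $(( mem / 100 + disk / 100 + containers * 10 + same_app * 100 ))
}

# Compare two bids: an empty Cell vs. a busy Cell already running the app.
empty_cell=$(score 0 0 0 0)        # -> 0
busy_cell=$(score 2000 4000 3 1)   # -> 20 + 40 + 30 + 100 = 190
if [ "$empty_cell" -lt "$busy_cell" ]; then
  echo "empty cell wins the bid"
fi
```

Note how the `same_app` term dominates: that is the HA bias against co-locating instances of the same application on one Cell.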
About self-healing: Diego is a state machine, which means it is always trying to keep the actual state equal to the desired state; it is constantly calculating the delta between the actual state and the desired state.
For example, I wanted to run 6 blue application instances and I have 6 blue instances running, so the delta is zero: no need to add or remove application instances.
Let's say we now lose a Cell: we lost 3 instances and the delta becomes minus 3.
Diego will just reschedule the three missing instances on the Cells that are still alive. And if you don't have enough resources, Diego will keep trying to place them until you add new resources.
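The convergence idea can be sketched in a few lines of shell. Again, this is a toy illustration of desired-vs-actual reconciliation, not Diego's actual code:

```shell
#!/bin/sh
# Toy desired-vs-actual convergence: compute the delta and act on it.
desired=6
actual=3   # we just lost a Cell that held 3 instances

delta=$(( actual - desired ))

if [ "$delta" -lt 0 ]; then
  echo "start $(( -delta )) instance(s)"   # delta = -3 -> start 3
elif [ "$delta" -gt 0 ]; then
  echo "stop $delta instance(s)"
else
  echo "nothing to do"
fi
```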
As I said, you get load balancing out of the box, so there is no need to set up nginx or HAProxy: your instances will be automatically added to the HTTP routing layer.
For the logs, you get aggregated logs, and these logs can be streamed to your terminal using the command line. Again, nothing to set up…
Now I will show some commands, only the basic and common ones.
Deploying your Docker image is pretty easy… and of course you can use a lot of options. Scaling, well, that's pretty straightforward…
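As I remember them from the Lattice docs, the `ltc` commands look like this; the image name is just an example, and the exact flags may differ from what I show here:

```shell
# Deploy a Docker image as a long-running process
# (app name and image are example values).
ltc create my-app cloudfoundry/lattice-app

# Scale the application to 5 instances.
ltc scale my-app 5
```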
For the buildpack we need to do it in two steps. First we need to build what we call the droplet: basically, we merge your application with the buildpack (your runtime).
And after that we just need to run this droplet using…
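The two-step flow looks roughly like this; the command names are as I recall them from the Lattice docs, and the buildpack URL is just an example:

```shell
# Step 1: build the droplet -- merge the application in the current
# directory with a buildpack (the URL here is an example).
ltc build-droplet my-droplet https://github.com/cloudfoundry/go-buildpack.git

# Step 2: launch the droplet as a long-running process.
ltc launch-droplet my-app my-droplet
```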
And eventually you can even run your own workload; for example, you can execute code inside a Docker image.
Concerning the routing: by default, your application will be routed at your application name plus your domain.
Of course you can update or change the route. To do that you simply say: I want to route the container port, here 8080 for example, to my-api, so my-api.mydomain.com.
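A small sketch of how the route is composed; the domain value is an example, and the `ltc` syntax in the comment is from memory, so double-check it against the CLI help:

```shell
#!/bin/sh
# The default route is <app-name>.<domain>; the domain here is an example.
app=my-api
domain=mydomain.com
url="http://${app}.${domain}"
echo "$url"    # -> http://my-api.mydomain.com

# To remap container port 8080 to this hostname you would run something like
#   ltc update-routes my-app my-api:8080
# (syntax from memory -- check `ltc help update-routes`).
```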
Well, for the logs it's pretty obvious: `ltc logs`…
X-Ray. X-Ray is a tool to help you visualize Lattice events and distributions.
You will be able to: ……..
Of course, like every Pivotal product, X-Ray is open source.
So that's the end of my presentation. Thank you everyone!
Run containerized workloads
Sr Solution Architect
Previously: Platform Architect
Born from Cloud Foundry
• Truly open-source Platform as a Service
• Aims to be the de facto enterprise PaaS
• Huge community
• Authentication, Services, High Availability …
• Supports Buildpacks and Docker as well
Cloud Foundry is “hard” to install
Focused on local development
Easy to use and install
Able to run Docker, Buildpacks, and your own workloads
Suitable for microservice architecture
Why not Lattice
Production workload, use Cloudfoundry
Strong security policies
• DIY (do it yourself)
• Kubernetes / Docker
Public / Private cloud
Distribution across the cluster
A Task is guaranteed to be run at most once
LRP may have multiple instances.
Diego is a state machine maintaining the desired LRP vs. actual LRP
Keep the correct number of instances running in the face of network failures
One off task
Database migration ‘rake db:migrate’
Database / Key-Value store
Based on:
• Memory
• Disk space
• Number of running containers
• Number of instances of the same running application…
Lowest score wins the bid