The last decade belonged to virtual machines; the next one belongs to containers. CoreOS is a new Linux distribution designed specifically for application containers and for running them at scale. This talk will examine the major components of CoreOS (etcd, fleet, Docker, systemd) and how they work together.
with Ada.Text_IO;

procedure Hello_World is
   use Ada.Text_IO;
begin
   Put_Line("Hello, world!");
end Hello_World;
#include <stdio.h>

int main(void)
{
    printf("Hello, world!\n");
    return 0;
}
package main

import "fmt"

func main() {
	fmt.Println("Hello, world!")
}
What is ignition?
Utility for configuring a machine on boot.
how is this different from cloudinit?
What is CoreOS? It is a tool that is packaged like a server OS.
In particular it is a Linux server OS. I wouldn’t be here at a Linux Foundation event if it wasn’t.
JOKE about hardware! PXE, Install to disk, iPXE, etc
Why build another Linux? Google released a paper called “The Datacenter as a Computer”. It describes a system where:
- You add more machines and get more capacity
- Individual servers don’t matter
- The application is the focus
- There are no maintenance windows
- Use smart software on commodity hardware
OK, so let’s get started building this thing!
**JOKE**: The goal of this talk is to talk about the most important person in the room: you
And really the different ways that people have been interacting with our software
as a sw engineer you will be interacting with our software in dev/test
taking your application code
and converting it into ACIs (App Container Images)
as an ops engineer you will be interacting with our products as a user
we also have a number of open source tools that can be used independently
In order to achieve this we need to make the individual server less special.
- Who here likes large complex API contracts?
- Who likes maintaining complex inter-dependent systems?
The current state of server infra makes it hard not to treat things as special.
The current distribution model offers a large API contract. The server provides a complex pre-configured platform for your app to run against.
Distros are forced to freeze versions of things for fear of breaking this API contract.
How do we avoid this situation?
But if we rewrite the contract, then the OS can be dumber.
How can we get away with this?
- The application brings its entire userspace from libc up
- Kernel syscall API is very stable for nearly all server app needs
How do we do this?
Using containers we can start to run apps side-by-side with conflicting versions
JOKE I would not recommend having lots of OpenSSL versions; consider NOT embedding OpenSSL in applications.
And to clear everything else up we have containers on the right. Nice isolated bundles of userspace code running on top of a minimal system.
Now that we have reduced the API contract we are able to start doing interesting things. Let’s talk about updates.
Now, just because we have reduced the responsibilities of the OS doesn’t mean we can forget about it completely. Keeping an up-to-date kernel, init system, ssh, etc. is good hygiene.
How does CoreOS handle this?
Remember how hard it was to update IE?
Firefox was better, but still annoying
Versions before Firefox 15 and IE 8 didn’t do automatic updates
Then Chrome just did it for you
And we saw the greatest step forward in web-security to date
and we got HTML5 soon thereafter
being able to update unlocked all this
In order to make shipping updates to CoreOS as automated as possible we have atomic updates with rollback
There are two parts of configuration:
- machine configuration
- cluster configuration
The machine configuration is mostly about how to get into the cluster
- SSH certificates to add
- bootstrapping etcd
- any cluster agents to run
- configure networking
This is generally specified in CoreOS as a cloud-config file, because on nearly all platforms you can only get a string of bytes into the system:
- Kernel command line
- AWS user-data
- etc
For machines in almost all environments we are limited to a string of bytes. This is OK because the things we need to do are really simple! We have just a few goals.
Service discovery through API or DNS. Also used by the scheduler to figure out if work needs to be rescheduled.
You can think of etcd as /etc distributed across lots of machines.
- What should I be running?
- Can I reboot for an upgrade now?
Transition: For cluster configuration we have a data store called etcd.
Scheduling is really the user interface we are getting towards:
What’s next?
Active development.
A few months away.
Supersede cloudinit. Use one or the other.
user_data
cloudinit is not going anywhere.