Because if you are doing it right, just building and testing will require a dozen or more computers.
As I got used to controlling a handful of computers, I started thinking about what more we could do. If you don’t think more computers are helpful, you are doing it wrong / the same can’t be said about people.
Don’t build up capacity that’s only needed a few days a year but goes idle most of the time.
One of the reasons I needed so many computers was that I needed all the different environments / some combinations were very rare and old, and keeping them pristine was hard.
Needing to have diversity in the environment adds to the capacity planning problem.
But you don’t want to make everything too slow by over-subscribing. I’ve seen hypervisors used to run many virtual machines.
Hey Kohsuke, my builds are failing. Can you take a look?
So the lesson and the best practice = isolate builds and tests / treat them like untrusted code
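As an illustration (not Jenkins’ actual mechanism), one way to treat a build as untrusted code is to give it a private throwaway workspace, a minimal environment, and a hard timeout. The `run_isolated` helper here is hypothetical, a minimal sketch of the idea:

```python
import os
import shutil
import subprocess
import tempfile

def run_isolated(build_cmd, timeout=600):
    """Run a build command as if it were untrusted: it gets its own
    throwaway workspace, a stripped-down environment, and a hard timeout."""
    workspace = tempfile.mkdtemp(prefix="build-")
    try:
        result = subprocess.run(
            build_cmd,
            cwd=workspace,             # never the shared checkout
            env={"PATH": os.defpath},  # minimal, predictable environment
            timeout=timeout,           # a runaway build can't hang the agent
            capture_output=True,
        )
        return result.returncode
    finally:
        # leftovers never leak into the next build
        shutil.rmtree(workspace, ignore_errors=True)

# e.g. run_isolated(["make", "test"])
```

The same idea scales up: replace the temp directory with a chroot, a container, or a whole VM, and the contract stays the same.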
Various techniques have been deployed successfully today
but as I found out the hard way, this isn’t enough to solve this problem
Hey Kohsuke, my builds are failing. Can you take a look?
Turns out isolation in the time dimension is just as important / somewhat like a human body --- if you live long enough, things tend to break down / beyond a certain point it becomes unsalvageable, as Windows users know all too well!
Turns out elasticity solves this problem, too, by allowing you to simply throw away and create new instances in the same predictable state /
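To make the time-dimension point concrete, here is a minimal sketch (names are made up) of the “golden image” pattern: every run starts from an identical copy of a pristine template and is destroyed afterwards, so state can never rot. A directory stands in for a VM snapshot or container image:

```python
import pathlib
import shutil
import tempfile

# A pristine "golden image" of the build environment (a directory here,
# standing in for a VM snapshot or container image).
TEMPLATE = pathlib.Path(tempfile.mkdtemp(prefix="golden-"))
(TEMPLATE / "settings.conf").write_text("threads=4\n")

def with_fresh_instance(job):
    """Run a job against a fresh clone of the template, then throw the
    clone away. Each run starts from the same predictable state."""
    instance = pathlib.Path(tempfile.mkdtemp(prefix="instance-"))
    shutil.copytree(TEMPLATE, instance, dirs_exist_ok=True)
    try:
        return job(instance)
    finally:
        shutil.rmtree(instance, ignore_errors=True)

# A job that corrupts its environment...
def messy_job(env):
    (env / "settings.conf").write_text("threads=broken\n")
    return (env / "settings.conf").read_text()

# ...still leaves the template untouched for the next run.
```

With copy-on-write storage the clone step is nearly free, which is what makes this practical at scale.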
Episode from scalability summit / everyone explains their monitoring system
Either this slide or more details on Jenkins.
Another common mode of deployment is… Even if it’s static…
If you are willing to invest in creating a great slave virtualization environment, you can.
HS: if somebody misses the CoW concept, he’d be lost for the next two slides
Milestone in build environment elasticity / you’ve reached a new level of mental peace, enlightenment / all is well, let’s pack up and head home, right?
But this story doesn’t end there. The case for elasticity applies equally well to tests and test environments, as that’s really the heart of continuous integration / the hard problem.
A traditional attack on testing is to test individual pieces one at a time, then hope they still work when put together. We do this in Jenkins & CB a lot. Runs fast, anywhere, great!
Especially in a connected world
In Jenkins & CloudBees I do both all the time / Jetty / access token. And sometimes it’s a major accomplishment just to do it. Subversion server / OpenID service. But sometimes you just can’t do it. LDAP server / Active Directory
A single “runtime environment” definition and just multiple copies of it.
Take the load balancer as an example. Having Chef configure HAProxy is now a well-understood problem.
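For flavor, the kind of HAProxy backend such a Chef recipe might render looks like this (a sketch only; backend name and addresses are made up):

```
backend build_agents
    balance roundrobin
    server agent1 10.0.0.11:8080 check
    server agent2 10.0.0.12:8080 check
```

The recipe just iterates over the current set of agents and regenerates this block, so scaling the pool is a data change, not a config-editing session.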
Provision and dispose through an API / creating the box that Chef/Puppet runs in. The usefulness of such elasticity is not just about running tests / it’s for development and review, too.
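The provision-use-dispose lifecycle can be sketched as below. `CloudAPI` is a stand-in for a real IaaS API (EC2, a hypervisor’s REST API, …); the method names are made up for illustration:

```python
import uuid

class CloudAPI:
    """Stand-in for a real infrastructure API; methods are hypothetical."""
    def __init__(self):
        self.instances = {}

    def provision(self, image):
        # In reality: an API call that boots a box from the given image.
        instance_id = str(uuid.uuid4())
        self.instances[instance_id] = image
        return instance_id

    def dispose(self, instance_id):
        # In reality: an API call that terminates the box.
        del self.instances[instance_id]

def run_in_fresh_box(cloud, image, task):
    """Provision, use, and dispose entirely through the API. The same
    flow serves test runs, development sandboxes, and review environments."""
    box = cloud.provision(image)
    try:
        return task(box)
    finally:
        cloud.dispose(box)  # nothing is left running (or billing) afterwards
```

Once everything is behind an API like this, Chef/Puppet only has to worry about what runs *inside* the box, not where the box comes from.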
Nowadays, that’s how I think of CloudBees. An elastic build environment taken to its logical conclusion demands an elastic platform as a service.