A SaltConf16 use case talk by Steven Braverman of Dun & Bradstreet. Testing configuration changes for multiple server roles is time-consuming when real instances or legacy container systems are used, and applying changes to each role in parallel is difficult. So what is the best way to test configuration changes efficiently, quickly, and securely before applying them? See how an integrated test setup built on AWS EC2 Container Service (ECS), an AWS Auto Scaling group, and SaltStack simplifies applying configuration changes and lets you test them in parallel, reducing the time spent testing.
6. TESTING SALT STATES
It is important to test applying states to roles efficiently, quickly, and securely prior to applying them to production servers.
20. PROBLEMS
• Linear integration testing is slow
• Have to maintain legacy virtual machines
• No generic way to run integration tests for SaltStack exists
22. HYPOTHESES
Applying Salt states to a series of Docker images will:
• Speed up the time it takes to apply state changes to roles
• Allow for concurrent builds
• Be easier to maintain
26. JENKINS NODES
SIT flow diagram (Jenkins nodes and the Auto Scaling group)
1. Create a pull-request
2. Pull the Salt repository and run unit tests
3. “Initiate” the Jenkins node to be a master
4. Launch the integration test
5. “Teardown” the node back to a minion
6. Rinse and repeat
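The flow above can be sketched as a single pipeline function. This is a minimal illustration of the ordering only, not SIT's actual code; the step names and the `run_step` callback are assumptions.

```python
# Minimal sketch of the SIT flow on a Jenkins node (steps 1-6 above).
# `run_step` is an injected callback standing in for real build steps;
# the step names here are illustrative, not SIT's actual API.
def sit_flow(run_step):
    run_step("pull salt repo")        # fetch the Salt repository
    run_step("run unit tests")        # unit tests gate the integration test
    run_step("initiate master")       # promote the Jenkins node to a Salt master
    try:
        run_step("run integration test")  # launch SIT
    finally:
        run_step("teardown to minion")    # demote the node even if the test failed,
                                          # so it returns to the pool (rinse and repeat)

# Exercise the flow by recording the steps it runs.
steps = []
sit_flow(steps.append)
```

The try/finally mirrors the point of step 5: the node must be torn back down to a minion whether or not the integration test passes, or it cannot be reused for the next pull-request.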
28. INITIATION AND TEARDOWN
• Pull down your Salt repo into the workspace
• Run lint tests
• Run unit tests/coverage
Pre-SIT Tasks → Initiate SIT → Run SIT → Teardown SIT
29. INITIATION AND TEARDOWN
• Use sed to write the Jenkins node’s Salt master config
• Start the salt-master service
• Start the Redis service
• Install SIT requirements
Pre-SIT Tasks → Initiate SIT → Run SIT → Teardown SIT
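A rough sketch of what the "Initiate SIT" step does on the node. The sed expression, paths, and service names below are assumptions about the environment, not SIT's real implementation; the commands are collected through an injected executor rather than run directly, so the shape of the step stays visible.

```python
# Illustrative "Initiate SIT" step. The sed expression, paths, and
# service names are assumptions, not SIT's actual configuration.
INITIATE_CMDS = [
    # edit the node's Salt master config to point at the build workspace
    "sed -i 's|^#file_roots:|file_roots:|' /etc/salt/master",
    "service salt-master start",        # start the Salt master service
    "service redis start",              # start the Redis service
    "pip install -r requirements.txt",  # install SIT's requirements
]

def initiate(execute):
    """Run each initiation command through an injected executor."""
    for cmd in INITIATE_CMDS:
        execute(cmd)
```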
30. INITIATION AND TEARDOWN
Pre-SIT Tasks → Initiate SIT → Run SIT → Teardown SIT
1. SIT requests the Auto Scaling group to provision a new instance
2. The instance provisions and SIT discovers it
3. The ASG instance is registered into the ECS cluster
4. ECS tasks begin (each runs a Docker container); states are applied to the minions
5. Tasks return the applied states’ results to the Jenkins node
6. When all tasks have stopped, the ASG instance is terminated
7. Results are analyzed
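The seven steps can be sketched as one loop on the Jenkins node. The `cloud` object below stands in for the real AWS calls (the Auto Scaling and ECS APIs); every method name on it is hypothetical, chosen only to mirror the numbered steps.

```python
# Sketch of the integration test (steps 1-7). `cloud` abstracts the AWS
# side; its method names are hypothetical, not a real SDK surface.
def run_integration_test(cloud, roles):
    instance = cloud.scale_up()                  # 1-2. ASG provisions an instance; SIT discovers it
    cloud.register(instance)                     # 3. the instance joins the ECS cluster
    tasks = [cloud.run_task(instance, role)      # 4. one ECS task (Docker container) per role;
             for role in roles]                  #    states are applied to the containerized minions
    results = [cloud.wait_for(task)              # 5. each task returns its applied-state results
               for task in tasks]
    cloud.terminate(instance)                    # 6. all tasks stopped: terminate the ASG instance
    return all(r["failed"] == 0 for r in results)  # 7. analyze the results

class FakeCloud:
    """Minimal stand-in so the flow can be exercised without AWS."""
    def scale_up(self):             return "i-0abc123"
    def register(self, instance):   pass
    def run_task(self, i, role):    return role
    def wait_for(self, task):       return {"role": task, "failed": 0}
    def terminate(self, instance):  pass
```

Running one task per role is what buys the concurrency from the hypotheses slide: every role's states are applied in its own container on the same ASG instance.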
31–37. INTEGRATION TEST
(Diagram builds of the seven steps above, highlighting in turn the Jenkins node, the ASG instance, and the ECS cluster during “Run SIT”.)
38. INITIATION AND TEARDOWN
• Use sed to restore the Jenkins node’s master config to its original, easily editable state
• Flush all data from Redis
• Remove the Salt keys
• Stop the Redis service
• Stop the salt-master service
Pre-SIT Tasks → Initiate SIT → Run SIT → Teardown SIT
44. Use the SaltConf16 event app to provide feedback
for this presentation.
(we’re all ears)
QUESTIONS AND FEEDBACK
Editor's Notes
Good morning everyone!
My name is Steven Braverman and this is: “Integration Testing for Salt States Using AWS EC2 Container Service.”
I’d like you guys to keep a couple things in mind:
The Salt Integration Testing tool I will discuss today is real. It is an open-source tool that anyone can use starting today.
Please hold all questions until the end; I have a lot to share and demo.
So SIT back and let’s get started
Dun & Bradstreet, not Dave & Busters
Add links here to your contributions/pull-requests
But really, most people that know Dun & Bradstreet probably don’t consider it a giant technology company, and that impression is about right.
The DevOps team I work with in Malibu comes from a company that was recently acquired by D&B.
The DevOps team works on a lot of really cool projects: an agile scrum workflow, one-week sprints, and a slim team.
We use SaltStack for configuration management. It provides us with a stable environment while we move at a fast pace, and it keeps our codebase transparent to our developers. We accept pull-requests. About 98% of our infrastructure is hosted on AWS, which offers a lot of really great tools and services.
For our CI/CD we use Jenkins. Kohsuke is a smart guy, and we are looking forward to version 2.0, which is currently in beta.
As a DevOps shop, it is important to test applying states to roles efficiently, quickly, and securely.
This is what we want
Just to make sure we are all on the same page about what it means to apply salt states to roles, I want to go over a quick analogy I came up with.
Imagine you are the owner of a fruit factory, and you have built a machine that can clone fruits.
You coined it the Fruit Master
The fruit master contains blueprints to turn these things called minions into fruits.
Now what kind of fruit are we going to get?
That depends on the role of the minion.
When the fruit master is ready to work, it sends its blueprints (the states) over to the minion based on its role.
After this, we are left with the fruits
In the real world, our Salt masters and minions are servers.
And we usually have clusters of servers, and each can have several roles.
For example, our PHP minions can have the role of server, php, and a role for each specific app
For the most part, our CI nodes are immutable. This means they are each configured the same way and can all function the same way. It also means we can delete and provision them as necessary
One of our servers was an OpenVZ host that contained several virtual machines. OpenVZ is an open-source, container-based alternative to hypervisor-based virtualization.
(i.e. run full integration tests on every pull request)
We implement a thing called freezes. We have a feature freeze and code freeze. Let’s say that It is Friday morning, and feature freeze is a few hours away. One might say you have plenty of time, but in reality, if all of your coworkers create a pull-request, and you are the last person to create a pull-request, your changes are probably not going into this sprint
Generic does not exist
Maintenance is manual. Manual processes are destined to have errors
Alternative to a virtual host
The Docker Engine runs in the same operating system as its host. This lets a container share many of the host OS’s resources while running its own isolated set of processes. Docker also uses layered filesystems.
Talk about what DevOps is and why it is important.
Describe what has been lacking in testing States
(i.e. run full integration tests on every pull request)
Real-life example
OpenVZ – a lot of overhead.
It is a virtual box of a server
that needs to be set up and maintained.
Friday Freezes
{{TALK ABOUT SIT}}
{{TALK ABOUT SIT}}
Explain the autoscaling group
Initiate: turn a slave into a master
Teardown: turn a master back into a slave
(i.e. run full integration tests on every pull request)
This may just be an image. Some sort of visual tool here would be nice
Quick note: changing the workspace project each time is specific to our environment.
(i.e. run full integration tests on every pull request)
Explain the autoscaling group
Demo time
I would like to show this working from start to finish.