Today I’m going to show you how I used Salt to simplify the process of installing metric collectors. The machinery I’ve set up is generic enough to be used for all of our standard metric collectors, but to avoid scope creep I’m going to focus on MySQL. Before I set this up, there was a lot of manual work involved in setting up a MySQL server: installing MySQL, configuring it, installing autometrics_framework, and dealing with config files that were scattered all over the place. Between what I’m about to present to you and building an RPM for MySQL, we took a process that was taking 5-6 hours and condensed it down to 20-30 minutes.
Let’s start by taking a brief look at the architecture we’ll be working with. On the left we have a typical application server running MySQL, a Java application, and the autometrics_framework. Autometrics_framework is a small agent that can be configured to run plugins that collect and report various application metrics. Most of our Java services emit performance metrics directly to our metrics pipeline, but we use autometrics_framework to collect and report metrics from other sources, such as the machine’s resources or third-party software like MySQL.
First let’s look at how we use this tool; later on I’ll dive into how it’s implemented. As you can see, there are four basic steps needed to deploy monitoring for a new MySQL cluster.
The first thing we need to do is create a configuration file for the MySQL autometrics_framework plugin. This tells the plugin which metrics to report and supplies a username and password to the collector. This is a minimal configuration; most of our configurations are more complex and need to supply information like port numbers and the path to the correct binary. We check this file into SVN so that our Salt module can fetch it.
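A minimal plugin configuration along these lines might look like the following. This is a hypothetical sketch: the actual file format, section names, and option names used by autometrics_framework weren’t shown, so every key here is illustrative.

```ini
; Hypothetical autometrics_framework plugin config -- format and keys are
; assumptions, not the real file.
[mysql]
metrics  = qps, slow_queries, threads_connected
username = metrics
password = example_password
```

A more realistic config would also carry things like the port number and the path to the correct mysql binary, as mentioned above.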
Before we can monitor MySQL we need to create the monitoring user we defined in our configuration. Some clusters have this user set up as part of their bootstrap process, so those clusters can skip this step.
Next we run a simple Salt command to install and configure the autometrics_framework package. If there were more plugins I wanted to set up on those boxes, I could specify them as additional arguments.
Finally, I have a command which tests the plugin to make sure that everything works correctly. The test function returns a list of the metrics that will be reported.
So now that you’ve seen how we use this infrastructure, what do you say we take a look under the hood? Before I can show you the next slide, I have to give you a little caveat. At the time I wrote all of this there was a problem where we couldn’t easily use the Python MySQL bindings for some of our clusters. That has since been resolved but I have not had a chance to rewrite that code.
Having said that, here is how we create the MySQL user. Parameterized code allows us to use this function in a variety of situations. I tried to protect against SQL injection, but I’m sure someone out there can break it. :)
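As a rough sketch of what that user-creation code could look like: since the Python MySQL bindings weren’t usable everywhere, this version shells out to the mysql CLI. The privilege list, helper names, and escaping scheme are my assumptions, not the original code.

```python
import re
import subprocess

# Only allow simple usernames; anything else is rejected outright.
_NAME_RE = re.compile(r"^[A-Za-z0-9_]+$")


def _quoted(value):
    """Render a MySQL string literal, escaping backslashes and quotes."""
    return "'" + value.replace("\\", "\\\\").replace("'", "\\'") + "'"


def monitoring_user_sql(user, password, host="localhost"):
    """Build the statements that create a read-only monitoring user.

    Parameterized so the same function works for different clusters;
    the GRANT list (PROCESS, REPLICATION CLIENT) is an assumption about
    what a monitoring user needs.
    """
    if not _NAME_RE.match(user):
        raise ValueError("suspicious username: %r" % user)
    account = "%s@%s" % (_quoted(user), _quoted(host))
    return [
        "CREATE USER %s IDENTIFIED BY %s;" % (account, _quoted(password)),
        "GRANT PROCESS, REPLICATION CLIENT ON *.* TO %s;" % account,
    ]


def create_monitoring_user(user, password, host="localhost"):
    """Feed the statements to the mysql CLI instead of a Python driver."""
    sql = "\n".join(monitoring_user_sql(user, password, host))
    subprocess.run(["mysql", "-e", sql], check=True)
```

Validating the username against a whitelist and escaping the literals covers the obvious injection paths, though as the speaker says, someone can probably still break it.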
Now we get into the real meat of this presentation. In the next 3 slides we’ll take a look at how we install, configure and enable the autometrics framework and its plugins in only a single command.
This is the basic logic for installing the autometrics_framework and its plugins. We use plugin_args to get the list of plugins we wish to enable; this is what allows us to specify an arbitrary list of plugins when we run our Salt command, and to pass arguments down to those plugins so they can be configured. First we make sure we have somewhere to put our autometrics framework configuration, then we install the autometrics_framework RPM. Switcher is a Salt module that our infrastructure team wrote for maintaining symlinks to the current and previous version of a particular application. Finally, for each plugin specified, we configure and, if successful, enable that plugin.
Next we have the function which configures the application. This function looks for a registered plugin, and if it finds one it runs the plugin’s configure function passing in the argument supplied on the salt command line.
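A minimal sketch of that registry-and-dispatch pattern, assuming a module-level dict and a decorator for registration (the real module’s internals weren’t shown in full):

```python
# Hypothetical plugin registry -- names are illustrative.
PLUGIN_REGISTRY = {}


def register_plugin(name):
    """Decorator that records a plugin's configure function by name."""
    def wrapper(func):
        PLUGIN_REGISTRY[name] = func
        return func
    return wrapper


def configure_plugin(name, arg):
    """Look up a registered plugin and run its configure function.

    `arg` is whatever was supplied for this plugin on the salt command line.
    """
    configure = PLUGIN_REGISTRY.get(name)
    if configure is None:
        return False  # unknown plugin: nothing to configure
    return configure(arg)
```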
This is the function which we registered for the plugin. Here we can see that the argument specifies the location in SVN we need to pull our configuration from, and that the function essentially puts that single file into the proper location for autometrics framework.
So now that everything is installed and enabled we’re done, right? (hold up hands and kinda shrug) I suppose we could test our collectors, and to do that we use this function.
Oh my, it seems I didn’t use a list comprehension here. That’s a little embarrassing. (Wait for laugh) Our autometrics_framework plugins are simply Python modules with a single class, “Collector.” This function imports the plugin, creates an instance of Collector, and returns a list of metric names from that collector.
That wraps up what I set up to replace a very manual process. Before this, rolling out a new MySQL cluster took us upwards of 6 hours: installing MySQL, setting up the autometrics framework, and wrangling config files that were scattered all over the place. Between this work and building an RPM for MySQL, we got that time down to 20-30 minutes. We can also reuse this same infrastructure to set up collectors for any other plugins we may want to install. Any questions?
1. 2013 June 17
Deploying MySQL Metric Collectors W/Salt
Zach White, Sr. Site Reliability Engineer, LinkedIn
2. Monitoring Infrastructure Overview
3. MySQL Monitoring: Overview
4. MySQL Monitoring: Create Configuration
5. MySQL Monitoring: Create Monitoring User
6. Install autometrics_framework
7. MySQL Monitoring: Test Collectors