This presentation has some animations and content to help tell stories as you go. Feel free to change ANY of this to your own liking! I would definitely practice your flow once or twice before a presentation. There is A LOT of content to get through in 1 hour. The slides with search examples can be unhidden if needed.
Here is what you need for this presentation:
You should have the following installed:
PowerOfSPL App - https://splunkbase.splunk.com/app/3353/
Custom Cluster Map Visualization - https://splunkbase.splunk.com/app/3122/
Clustered Single Value Map Visualization - https://splunkbase.splunk.com/app/3124/
Geo Heatmap Custom Visualization - https://splunkbase.splunk.com/app/3217/
Timewrap Custom Command (NOTE this command is now included in CORE) - https://splunkbase.splunk.com/app/1645/
Haversine Custom Command - https://splunkbase.splunk.com/app/936/
Levenshtein Custom Command - https://splunkbase.splunk.com/app/1898/
Optional:
Splunk Search Reference Guide handouts
Mini buttercups or other prizes to give out for answering questions during the presentation
Shake! Demo can be used for interactivity on some of these search examples if you want… it definitely adds some flair to the presentation
Intro
Mention to people to start downloading Splunk as it’s a large download if they haven’t already. If they already have Splunk installed, then they just need to download the Power Of SPL App.
How to get help (recommended to have 1-2 staffers in room to help with install problems etc)
Intro:
1. Get everyone set up
2. This will essentially be a follow-along session – I’ll demo and you follow along. After today’s session you’ll have a useful, live instance with everything we do today on it.
3. We’ll walk through a bunch of different SPL commands, how SPL works with Custom Visualizations and finally SPL with a light Machine Learning intro.
- We call them search commands, but they really do so much more and that’s what I hope to get across with you today.
“The Splunk search language has over 140 commands, is very expressive, and can perform a wide variety of tasks ranging from filtering data, to munging or modifying it, to reporting.”
“The Syntax was …”
“Why? Because SQL is good for certain tasks and the Unix pipeline is amazing!”
This is great BUT… WHY WOULD WE WANT TO CREATE A NEW LANGUAGE AND WHY DO YOU CARE?
<Engage audience here.. Before showing bullet points ask “Why do you think we would want to create a new language?”>
<Also Feel free to change pictures or flow of this slide..> -- have buttercups to throw out if anyone answers correctly?
- Today we require the ability to quickly search and correlate through large amounts of data, sometimes in an unstructured or semi-unstructured way.
Conventional query languages (such as SQL or MDX) simply do not provide the flexibility required for effective searching of big data – and not just big data, but streaming data. (SQL can be great at joining lots of small tables together, but really large joins across datasets can be a problem; Hadoop can be great with larger data sets, but it is sometimes inefficient when it comes to many small files or datasets.)
- Machine Data is different:
- It is voluminous unstructured time series data with no predefined schema
- It is generated by all IT systems – from applications and servers, to networks and RFIDs.
- It is non-standard data and characterized by unpredictable and changing formats
Traditional approaches are just not engineered for managing this high volume, high velocity, and highly diverse form of data.
Splunk’s NoSQL query approach does not involve or impose any predefined schema. This enables the increased flexibility mentioned above, as there are:
No limits on the formats of data
No limits on where you can collect it from
No limits on the questions that you can ask of it
And no limits on scale
Methods of Correlation enabled by SPL
Time & GeoLocation: Identify relationships based on time and geographic location
Transactions: Track a series of events as a single transaction
Subsearches: Results of one search as input into other searches
Lookups: Enhance, enrich, validate or add context to event data
SQL-like joins between different data sets
In addition to flexible searching and correlation, the same language is used to rapidly construct reports, dashboards, trendlines and other visualizations. This is useful because you can understand and leverage your data without the cost associated with formally structuring or modeling the data first. (With Hadoop or SQL you run a job or query to generate results, but then you need to integrate more software to actually visualize it!)
“OK.. Let’s move on..”
“Let’s take a closer look at the syntax, notice the unix pipeline”
“The structure of SPL creates an easy way to stitch a variety of commands together to solve almost any question you may ask of your data.”
“Search and Filter”
- The search and filter piece allows you to use fields or keywords to reduce the data set. It’s an important but often overlooked part of the search due to the performance implications.
“Munge”
- The munge step is a powerful piece because you can “re-shape” data on the fly. In this example we show creating a new field called KB from an existing field “bytes”.
“Report”
- Once we’ve shaped and massaged the data we now have an abundant set of reporting commands that are used to visualize results through charts and tables, or even send to a third party application in whatever format they require.
“Cleanup”
- Lastly there are some cleanup options to help you create better labeling and add or remove fields.
Again, stitching commands together makes it easier to utilize and understand advanced commands, gives better flow, etc. Additionally, the implicit join on time and automatic granularity help reduce complexity compared to what you would have to do in SQL, Excel, or other tools.
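If you want a single search that shows all four stages stitched together, something along these lines works (a sketch against the access* sourcetype from the Power of SPL app; it assumes the standard bytes and clientip fields are present):
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS total_KB by clientip
| sort - total_KB
| rename clientip AS "Client IP", total_KB AS "Total KB"
Each pipe maps to one of the stages on the slide: search and filter, munge, report, then cleanup.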
“Let’s look at some more in depth examples”
“In this next section we’ll take a more in-depth look at some search examples and recipes. It would be impossible for us to go over every command and use case, so the goal here is to show a few different commands that can help solve most problems and generate quick time to value in the following areas.”
Note how the search assistant shows the number of both exact and similar matched terms before you even click search. This can be very useful when exploring and previewing your data sets without having to run searches over and over again to find a result.
Additionally we can further filter our data set down to a specific host.
Lastly we can combine filters and keyword searches very easily.
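Concrete searches you could type while building this up (the host value is a placeholder – swap in whatever hosts exist in your data):
sourcetype=access*
sourcetype=access* host=www1
sourcetype=access* host=www1 error OR fail*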
“This is pretty basic, but the key here is that SPL makes it incredibly easy and flexible to filter your searches down and reduce your data set to exactly what you’re looking for.”
Remember munging or re-shaping our data on the fly? Talk about eval and its importance.
sourcetype=access* | eval KB=bytes/1024
“There are tons of eval functions to help you shape or manipulate your data the way you want it.”
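A couple of common eval functions you could show here (a sketch using core eval functions; the thresholds are arbitrary):
sourcetype=access*
| eval KB=round(bytes/1024,2)
| eval status_class=case(status>=500,"server error", status>=400,"client error", true(),"ok")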
Optional
<Click on image to go to and scroll through the online quick reference guide>
Show difference between stats and timechart (adds _time buckets, visualize, etc.)
Why is this awesome? We can do all of the same statistical calculations over time with almost any level of granularity. For example…
<change timepicker from 60min to 15min, add span=1s to search and zoom in>
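For reference, the kind of search this stage direction is pointing at (the span value is just an example – pick whatever granularity fits your data):
sourcetype=access*
| timechart span=1s count by status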
Add below?
Due to the implicit time dimension, it’s very easy to use timechart to visualize disparate data sets with varying time frequencies.
SQL vs Timechart actual comparison?
Walk through trendline basic options
Walk through predict basic options
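Example searches for these two walkthroughs (window size and forecast length are arbitrary; both trendline and predict are core SPL commands):
sourcetype=access*
| timechart count
| trendline sma5(count) AS smoothed_count

sourcetype=access*
| timechart count
| predict count future_timespan=10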
“The timechart command plus other SPL commands make it very easy to visualize your data any way you want.”
“Again, don’t forget about the quick reference guide. There are many more statistical functions you can use with these commands on your data.”
Context is everything when it comes to building successful operational intelligence.
When you are stuck analyzing events from a single data source at a time, you might be missing out on rich contextual information or new insights that other data sources can provide.
Let’s take a quick look at a few powerful SPL commands that can help make this happen.
Make sure everyone is on the “4.1 Geographic” Dashboard. The bottom 3 visualizations will show errors.
Each example headline can be clicked to go to the correct Splunkbase page for download. Have each user (including yourself) download each app (3 of them), install them from file, and restart Splunk to ensure correct functionality.
Walkthrough the SPL required to generate each custom visualization. Point out the different commands used and why.
Now let’s install one more … Search for Missile Map on Splunkbase
Download and install
Complete the following search to get the visualization to work (starting coordinates can be whatever you like)
index=power_of_spl | stats count by clientip | iplocation clientip | dedup Country | eval start_lat="-104.99", start_lon="39.76" |
Let’s add some of the custom options specific to this visualization:
Make it animated
Make it colored
Make it pulse
index=power_of_spl
| stats count by clientip
| iplocation clientip
| dedup Country
| tail 10
| eval start_lat="-104.99", start_lon="39.76", end_lat=lon, end_lon=lat
| eval color=if(Country="Iran","#FF0000","")
| eval pulse_at_start="false"
| eval animate=if(Country="Iran","true","false")
| table color, animate, pulse_at_start, start_lat, start_lon, end_lat, end_lon
Walk through the dashboard which points out several uses of anomalydetection
NOTE: Many transactions can be re-created using stats. Transaction is easy, but stats is way more efficient and it’s a mappable command (more work will be distributed to the indexers).
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
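For comparison, here is a transaction-based version of the same question (a sketch, not from the deck; transaction creates the duration field for you, but it runs on the search head rather than being distributed like stats):
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)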
Feel free to change this and use your own story!
“My interpretation of Data Exploration when it comes to Splunk is the process of characterizing and researching behavior of both existing and new data sources.”
“For example, while you may have an existing data source you are already used to, there could still be some unknown value in it in terms of patterns, relationships between fields, and rare events that could point you to new insights or help with predictive analytics. This capability gives you confidence to explore new data sources as well, because you can quickly look for relationships and nuggets that stick out or help classify the data. A friend once asked me to look at some biomedical data with DNA information. The vocabulary and field definitions were way above me, but I was able to quickly understand patterns and relationships with Splunk and provide them value instantly. With Splunk you literally become afraid of no data!”
Let’s look at a few quick examples.
“The cluster command is used to find common and/or rare events within your data”
<Show simple table search first and point out # of events, then run cluster and sort on cluster count to show common vs rare events>
* | table _raw _time
* | cluster showcount=t t=.1
| table _raw cluster_count
| sort - cluster_count
Fieldsummary gives you a quick breakdown of your numerical fields, such as count, min, max, stdev, etc. It also shows you example values from the events. I used maxvals to limit the number of samples it shows per field.
sourcetype=access_combined
| fields - date* source* time*
| fieldsummary maxvals=5
“The correlate command is used to find co-occurrence between fields. Basically a matrix showing the ‘Field1 exists 80% of the time when Field2 exists’”
sourcetype=access_combined
| fields - date* source* time*
| correlate
“This can be useful both for making sure your field extractions are correct (if you expect a field to exist 100% of the time when another field exists) and for helping you identify potential patterns and trends between different fields.”
“The contingency command is used to look for relationships between two fields. Basically: for these two fields, how many different value combinations are there, what are they, and which are most common?”
sourcetype=access_combined
| contingency uri status
This command is extremely useful not only for finding meaningful fields in your data, but also for determining which fields to use in linear or logistic regression algorithms in the machine learning app.
sourcetype=access_combined
| analyzefields classfield=status
These commands are not only useful for learning about the patterns and characteristics of your data, but they are a stepping stone into machine learning as well.
If you want to learn more about Data Science, Exploration and Machine Learning, download the Machine Learning App! You’ll use new SPL commands like “fit” and “apply” to train models on data in Splunk. We will take a look at this in more detail after the last SPL section, custom commands.
New SPL commands: fit, apply, sample, summary, listmodels, and deletemodel (a quick fit/apply sketch follows the list below)
* Predict Numeric Fields (Linear Regression): e.g. predict median house values.
* Predict Categorical Fields (Logistic Regression): e.g. predict customer churn.
* Detect Numeric Outliers (distribution statistics): e.g. detect outliers in IT Ops data.
* Detect Categorical Outliers (probabilistic measures): e.g. detect outliers in diabetes patient records.
* Forecast Time Series: e.g. forecast data center growth and capacity planning.
* Cluster Events (K-means, DBSCAN, Spectral Clustering, BIRCH).
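If you want a minimal fit/apply illustration outside the assistant, something along these lines works (the lookup, field, and model names here are hypothetical placeholders, not from the deck or the MLTK showcase data):
| inputlookup my_training_data.csv
| fit LogisticRegression vehicle_type from mpg horsepower weight into example_vehicle_model
| apply example_vehicle_model
The general shape is: fit <algorithm> <target field> from <feature fields> into <model name>, then apply <model name> on new data.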
Let’s talk about custom commands before we get into Machine Learning.
Only need to go through one example here, I usually just do haversine.
“We’ve gone over a variety of Splunk search commands… but what happens when we can’t find a command that fits our needs, OR we want to use a complex algorithm someone has already written, OR we even want to create our own?? Enter custom commands.”
Additional Text:
Splunk's search language includes a wide variety of commands that you can use to get what you want out of your data and even to display the results in different ways. You have commands to correlate events and calculate statistics on your results, evaluate fields and reorder results, reformat and enrich your data, build charts, and more. Still, Splunk enables you to expand the search language to customize these commands to better meet your needs or to write your own search commands for custom processing or calculations.
Let’s see Haversine in action.
<Pull up search>
*Note – The origin coordinates in this Haversine example are currently set to “Seattle”
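If anyone asks what the command is actually computing, you can reproduce the haversine formula with plain eval functions (a sketch, assuming iplocation has already produced lat/lon fields; 47.61/-122.33 is used here as the Seattle origin):
sourcetype=access*
| iplocation clientip
| eval lat1=47.61*pi()/180, lon1=-122.33*pi()/180, lat2=lat*pi()/180, lon2=lon*pi()/180
| eval a=pow(sin((lat2-lat1)/2),2)+cos(lat1)*cos(lat2)*pow(sin((lon2-lon1)/2),2)
| eval distance_km=2*6371*atan2(sqrt(a),sqrt(1-a))
| table clientip lat lon distance_km
The custom command just packages this math up so you don’t have to write it by hand.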
In this final section today we’re going to do a brief overview of the ML Toolkit. More specifically since this whole workshop is based on SPL, we’ll focus in on the SPL aspects of Machine Learning.
First things first, Let’s install it!
The purpose of this slide is to point out some of the SPL commands and how they contribute or can be applied to a Machine Learning use case or process every step of the way.
Key Points:
- We’ve gone over some of these commands already, but here’s where they fit in the ML process, from cleaning, to stats, etc.
- Some of these commands such as predict and anomalydetection are part of core Splunk
- As of today, there are 6 commands specific to and added by the MLTK – all highlighted in blue
- Let’s go over them briefly
The demo will show each of the 6 commands in use. I decided to use the “Predict Vehicle Make and Model” example under “Predict Categorical Fields” as it’s very easy to understand. Feel free to use any other example you prefer. The backup slides below are of this example. The next slide contains the directions to help everyone install the MLTK so they can follow along.
Once everyone is ready to go, explain the Showcase at a high level before drilling into the example.
You’ll go in the following order and can access most of the SPL just by using the assistant.
1. fit
Start on the Showcase tab. Click “Predict Vehicle Make and Model” under “Predict Categorical Fields”. Wait for page to load. Explain the basic operations of the page. Then click on “Show SPL” next to fit model to show the SPL portion. Then show them how to open this in search.
2. sample
Once you’ve opened this in search, remove all of the SPL except for the inputlookup statement. Notice the 50,000 results. Now add | sample ratio=.12 to the query and show how the results have been cut down to a randomized sample.
3. apply
Go back to the assistant. Scroll down to look at the prediction results and other statistical information. Hover over the helpers and explain some of the features. Now click “open in search” button in the “Prediction Results Table”. This will show the Apply command in action. Point out how there is a new field called “predicted(vehicleType)”
4. listmodels
Remove all of the SPL and type “| listmodels”. Explain how this lists all of the models you’ve created using the fit command.
5. summary
Remove all of the SPL and type “| summary example_vehicle_type”. Explain how this shows additional detail about the model. The results and what is shown depends on what algorithm has been used. This is a good time to play with other examples in the MLTK to see the differences.
6. deletemodel
Remove all of the SPL and type “| deletemodel example_vehicle_type”. This will simply delete the model.
Again, you can either install through the App Management GUI using “Browse for Apps” or you can download them from Splunkbase and choose “Install from File”
You’ll most likely have to restart after installing the apps. (You can wait to restart until you’ve installed both of them)
References:
Make sure to mention that this app is now available for download!!