8. Put your hand up if…
● You've heard of Dashboard Studio
● You've tried Dashboard Studio (just once counts!)
● You've built multiple dashboards in Dashboard Studio
● You like building with Dashboard Studio
10. Put your hand up if…
● There are features you need that are missing in Dashboard Studio
● You just prefer Classic (SimpleXML) dashboards
11. Dashboard Studio is the next generation of Splunk dashboards
Designed for intuitive point-and-click building, while maintaining flexibility for advanced use cases.
12. Why is Splunk building a new dashboard framework?
We listened to customers and heard the following about Classic dashboards:
● Hard to get something polished enough for execs or high visibility monitors
● Hard for less technical users to do much beyond the basics
● Easy to get started, but hard to master optimizing dashboards or building more advanced use cases
15. First we need to understand how the dashboard definition is structured
Every dashboard has:
● title
● description
● dataSources
● visualizations
● defaults
● inputs
● layout
19. dataSources
● Data sources include ad-hoc searches, base and chain searches, and saved searches
○ Chain searches are easier to configure now too!
● Data sources are now independent from visualizations and inputs
○ This means that data sources can be referenced by multiple visualizations and inputs
● Data sources are identified by a unique identifier (e.g. ds_fWuYtYEz)
"ds_fWuYtYEz": {
  "type": "ds.search",
  "options": {
    "query": "index=tutorial action=purchase status=200 | stats count(productName) as \"Quantity\" values(price) as Price by productName, clientip, categoryId | eval Revenue=Quantity*Price"
  },
  "name": "Purchases"
}
21. visualizations
● Visualizations reference data sources via data source ID
○ Secondary data sources may be added for annotations, or field summaries for the Events Viewer viz
● Visualizations allow more flexibility in which parts of the data source are displayed
○ sparklineValues
○ majorValue
○ trendValue
"viz_LcdCtHCD": {
  "type": "splunk.singlevalue",
  "dataSources": {
    "primary": "ds_lRYLqjC2"
  },
  "title": "Total unique customers",
  "options": {
    "majorValue": "> sparklineValues | lastPoint()",
    "trendValue": "> sparklineValues | delta(-2)",
    "sparklineValues": "> primary | seriesByName('customers')"
  }
}
24. defaults
● Set options once to apply to multiple data sources or visualizations
○ Data source time range
○ Visualization options
● Defaults can be set at a global or type-specific level
○ Global: showProgressBar
○ Single values: backgroundColor
● Specify default token values
○ Except input defaults, which are set in the inputs section
"defaults": {
  "dataSources": {
    "ds.search": {
      "options": {
        "queryParameters": {
          "latest": "0",
          "earliest": ""
        }
      }
    }
  },
  "visualizations": {
    "global": {
      "showProgressBar": true
    },
    "splunk.singlevalue": {
      "backgroundColor": "#ffffff"
    }
  },
  "tokens": {
    "default": {
      "customer": {
        "value": "*"
      }
    }
  }
}
26. New paradigms in Dashboard Studio
1. Data sources are independent from inputs and visualizations, and you can specify which parts of the data source are displayed in each visualization.
This means you can often use fewer searches that return more fields, for reuse by multiple visualizations. This can help with performance and resource utilization.
2. You can reference search results and metadata directly as tokens.
This means you can move tokenization logic into a search, and set search results as token values.
29. Classic (Simple XML) example
Let's consider how we might set search results as tokens in a Classic dashboard:
<search>
  <query>...</query>
  <done>
    <set token="user_error">$result.UserError$</set>
    <set token="server_error">$result.ServerError$</set>
  </done>
</search>
This requires manual source code editing and setting multiple token values.
30. Dashboard Studio example
In Dashboard Studio, you just need to select "Use search results or job status as tokens"
Then reference results using the format $datasource name:result.<fieldname>$
Examples:
● $Interaction status:result.UserError$
● $Interaction status:result.ServerError$
No manual source code editing required, no additional token logic to define.
32. Classic (Simple XML) example
Let's consider how we might show/hide panels in a Classic dashboard:
● Specify logic to set and unset a token
● Add a "depends" to the desired visualization to display when the token is set and hide when it is unset
This requires manual source code editing and possibly adding unset logic to multiple places in the dashboard.
33. Dashboard Studio example
In Dashboard Studio, you just need to select "When data is unavailable, hide element"
● For many use cases, this is likely all you need
● For more complex use cases, you can set up your search so that it does not return results when you want to hide the element
No manual source code editing required, no additional token logic to define.
35. Classic (Simple XML) example
Let's consider how we might apply visual designs in a Classic dashboard:
● Custom JS
● Custom CSS
● Custom HTML panels
This requires higher technical skills, bundling .js and .css with your app, and manual source code editing.
36. Dashboard Studio example
In Dashboard Studio, this is all built in:
● Point-and-click support to edit layout, size, and layering of objects
● Add images via upload or URL reference
○ Use images to add corporate logos
○ Use images to layer metrics on top
● GUI for changing colors, adding markdown, and other styling
38. What's next for Dashboard Studio?
● Advanced interactivity + layouts: show/hide panels, tabbed dashboards, token logic builder
● Ease of use improvements: UI for all key options and workflows, templates, grouping + layering objects
● More sharing options: export to .json, .html, easier image export, scheduled email export
● Classic to Studio conversion: automated conversion, post-conversion report
Subject to change
42. About CSV lookups
● Splunk provides handy CSV lookups.
https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/ConfigureCSVlookups
CSV lookups match field values from your events to field values in the static table represented by a CSV file. Then they output corresponding field values from that table to your events.
44. How do I add lookups to Splunk?
1. Run a Splunk search that has the data and use outputlookup
index=animal_data | outputlookup ug_demo.csv
2. Use Settings -> Lookups -> Add New
45. How do I add lookups to Splunk? (continued)
3. Use the Splunkbase lookup editor https://splunkbase.splunk.com/app/1724
The app provides an endpoint for uploading lookups that is search head cluster aware. You can upload once and the lookup is stored on all the heads in your search head cluster!
46. But how do we upload CSV via the command line?
● Splunk community user mthcht created a Python 3 script to upload a directory of lookup files to Splunk
https://github.com/mthcht/lookup-editor_scripts
● The upload script enumerates all files in a given directory
● For each file:
○ Opens and reads the lookup file into memory
○ Sends a POST request to the Splunk server/management port using the endpoint /services/data/lookup_edit/lookup_contents, with the contents of the file in JSON format (see the sketch below)
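To make that mechanism concrete, here is a minimal Python sketch of the upload step; this is not mthcht's actual code. The payload field names (namespace, lookup_file, contents) and the JSON row-list encoding are assumptions about how the Lookup Editor endpoint is typically called, and the host name is made up, so check the app's documentation before relying on them.
import csv
import getpass
import json
import os

import requests

def upload_lookup(splunk_url, lookup_path, app, username, password):
    # Read the CSV lookup into memory as a list of rows.
    with open(lookup_path, newline="") as f:
        rows = list(csv.reader(f))

    # POST the JSON-encoded contents to the Lookup Editor endpoint on the
    # management port (e.g. https://splunk.example.com:8089). The field
    # names in the payload are assumptions about the endpoint's parameters.
    resp = requests.post(
        f"{splunk_url}/services/data/lookup_edit/lookup_contents",
        data={
            "namespace": app,                              # app to store the lookup in
            "lookup_file": os.path.basename(lookup_path),  # lookup name on the search head
            "contents": json.dumps(rows),                  # file contents in JSON format
        },
        auth=(username, password),
        verify=False,  # adjust to match your certificate setup
    )
    resp.raise_for_status()

if __name__ == "__main__":
    user = input("Splunk username: ")
    password = getpass.getpass("Splunk password: ")
    upload_lookup("https://splunk.example.com:8089", "ug_demo.csv", "search", user, password)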
47. My modifications
● https://github.com/beckyburwell/splunk_rest_upload_lookups
splunk_rest_upload_lookups.py splunk_head_url lookup_file splunk_app (example invocation below)
● Copied mthcht's upload script and modified it as follows:
○ Upload a single lookup file, not a directory of lookups
○ Let the user pass in the Splunk host URL along with the management port
○ Pass in the name of one lookup file
○ Pass in the name of the Splunk app to upload to
○ Replaced the hard-coded Splunk username and password with a prompt for the user
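A hypothetical invocation, reusing the ug_demo.csv lookup and the search app from the earlier slides (the host name is made up):
python3 splunk_rest_upload_lookups.py https://splunk.example.com:8089 ug_demo.csv search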
49. How to make it more useful?
● Use in a script:
○ The script prompts for the Splunk admin username and password
○ Change that to a secure way of obtaining the credentials; don't prompt for username/password (one possible approach is sketched below)
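One possible approach, just a sketch and not part of the published script: read the credentials from environment variables populated by your scheduler or secrets manager instead of prompting. The variable names here are arbitrary.
import os
import sys

def get_credentials():
    # SPLUNK_USERNAME / SPLUNK_PASSWORD are arbitrary names; populate them
    # from a secrets manager or vault rather than hard-coding them anywhere.
    try:
        return os.environ["SPLUNK_USERNAME"], os.environ["SPLUNK_PASSWORD"]
    except KeyError as missing:
        sys.exit(f"Missing required environment variable: {missing}")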
50. Notes on Permissions
● In order to use the script, the user needs to be able to store knowledge objects in the app
● By default, the search app is only writable by the power and admin roles
● Users should upload to an app they have write access to
51. Summary of Requirements
● Access to Python 3
● Splunk Lookup Editor installed on Splunk search heads
● User access to the app you want to store the lookups in
52. Acknowledgements and Thanks
● Thanks to community user mthcht
● Thanks to my colleague Paras Jain, who tested my script and gave me
feedback