Splunk Ninjas:
New Features, Pivot
and Search Dojo
Safe Harbor Statement
During the course of this presentation, we may make forward-looking statements regarding future events or the expected performance of the company. We caution you that such statements reflect our current expectations and estimates based on factors currently known to us and that actual events or results could differ materially. For important factors that may cause actual results to differ from those contained in our forward-looking statements, please review our filings with the SEC. The forward-looking statements made in this presentation are being made as of the time and date of its live presentation. If reviewed after its live presentation, this presentation may not contain current or accurate information. We do not assume any obligation to update any forward-looking statements we may make. In addition, any information about our roadmap outlines our general product direction and is subject to change at any time without notice. It is for informational purposes only and shall not be incorporated into any contract or other commitment. Splunk undertakes no obligation either to develop the features or functionality described or to include any such feature or functionality in a future release.
Agenda
What's new in 6.1
– New features and capabilities
Data Models and Pivot
– Analyze data without using search commands
Harness the power of search
– The 5 search commands that can solve most problems
Introducing Splunk Enterprise 6.1
ENABLING THE MISSION-CRITICAL ENTERPRISE
ENHANCED INTERACTIVE ANALYTICS
EMBEDDING OPERATIONAL INTELLIGENCE
Mission-critical Availability
New Clustering Features
• Location-aware replication
• Search Head Affinity
(Diagram: replication between the Portland and New York datacenters)
Load and Preview Structured Data
Data Preview with Structured Inputs
• Easily onboard structured data
• Preview the fields before indexing
• Configure from the GUI
Adjust configurations in the UI (delimiters, headers, time stamp) and preview the results before committing.
Integrated Mainframe Insights
New Forwarder
• Collect data from mainframes
• Correlate with the rest of the stack
More Actionable Alerting
Customized Alerts
• Add tokens to the alerts from the search results
• Select preferred format and delivery of results
• Customize recipients, message, and delivery method
Add Splunk Insights to Business Apps
Embedded Reporting
• Embed scheduled reports into web applications
• Share with users who don't have access to Splunk
• 1-line copy/paste to embed in an external application
(Diagram: a visualization in Splunk is delivered via an iframe into a non-Splunk UI)
Splunk Mobile App
Native Mobile Experience
• Access dashboards and reports
• Annotate dashboards and share with others
• Receive push notifications
Download the Overview App
http://apps.splunk.com/app/1773
Data Models and Pivot
Model, Report, and Accelerate
• Data Model – Provides a more meaningful representation of the underlying raw machine data
• Pivot – Build complex reports without the search language
• Analytics Store – Acceleration technology delivers up to 1000x faster analytics over Splunk 5
Creating a Data Model
Basic Steps
1. Have a use for a Data Model
2. Write a base search
3. Select the fields to include
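Once a data model exists, you can sanity-check it from the search bar with the datamodel command. A minimal sketch, assuming a hypothetical model named Web_Access with a root object HTTP_Requests (fields returned by the command are namespaced by the object, so adjust the names to your model):
| datamodel Web_Access HTTP_Requests search
| search HTTP_Requests.status=404
| stats count by HTTP_Requests.clientip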
Data Model Acceleration
• Automatically collected and maintained
• Stored on the indexers
• Must share the Data Model
• Cost is additional disk space
Makes reporting crazy fast
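Once acceleration is enabled, the summaries can also be queried directly with tstats, which is what makes Pivot so fast. A hedged sketch against the same hypothetical Web_Access model:
| tstats summariesonly=true count from datamodel=Web_Access by _time span=1h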
Pivot
Build Reports without SPL
• Drag-and-drop interface
• No need to understand underlying data
• Click to visualize
(UI callouts: select fields from the data model, set the time window, pick any chart type from the chart toolbox, and save the report to share)
Harness the Power of
Search
Search Processing Language
search and filter | munge | report | cleanup
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) dc(clientip)
| rename sum(KB) AS "Total KB" dc(clientip) AS "Unique Customers"
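As one more illustration of the filter | munge | report | cleanup pattern (a sketch that is not from the deck; field names assume standard web access logs): filter to the sourcetype, munge an error flag with eval, report with stats, then clean up with a second eval, sort, and head.
sourcetype=access*
| eval error=if(status >= 400, 1, 0)
| stats sum(error) AS errors count AS requests by clientip
| eval error_rate=round(errors/requests*100, 1)
| sort -error_rate
| head 10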
Five Commands that will Solve Most Data Questions
eval - Modify or Create New Fields and Values
stats - Calculate Statistics Based on Field Values
eventstats - Add Summary Statistics to Search Results
streamstats - Cumulative Statistics for Each Event
transaction - Group Related Events Spanning Time
eval - Modify or Create New Fields and Values
Examples
• Calculation:
sourcetype=access*
| eval KB=bytes/1024
• Evaluation:
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
• Concatenation:
sourcetype=access*
| eval connection = clientip.":".port
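eval functions can also be combined into richer expressions; a small extra sketch (not from the deck) that buckets response sizes with case():
sourcetype=access*
| eval size_bucket = case(bytes < 1024, "small", bytes < 1048576, "medium", true(), "large")
| stats count by size_bucket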
stats – Calculate Statistics Based on Field Values
Examples
• Calculate stats and rename
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Total KB"
• Multiple statistics
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB)
• By another field
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB) by clientip
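stats also supports functions such as dc() and values(); a hedged extra sketch along the same lines:
sourcetype=access*
| stats dc(clientip) AS unique_clients values(useragent) AS agents by status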
eventstats – Add Summary Statistics to Search Results
Examples
• Overlay Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
• Moving Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
• By created field
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
| eventstats avg(bytes) AS avg_bytes by http_response
| timechart latest(avg_bytes) avg(bytes) by http_response
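Because eventstats writes its summary back onto every event, it is also handy for filtering against a global figure; a sketch (not from the deck) that keeps only unusually large responses:
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| where bytes > 2 * avg_bytes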
streamstats – Cumulative Statistics for Each Event
Examples
• Cumulative Sum
sourcetype=access*
| timechart sum(bytes) as bytes
| streamstats sum(bytes) as cumulative_bytes
| timechart max(cumulative_bytes)
• Cumulative Sum by Field
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
• Moving Average
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
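streamstats can also number events as they stream by; a small extra sketch (not from the deck) that adds a running request count per client:
sourcetype=access*
| streamstats count AS request_number by clientip
| table _time clientip request_number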
transaction – Group Related Events Spanning Time
Examples
• Group by Session ID
sourcetype=access*
| transaction JSESSIONID
• Calculate Session Durations
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)
• Stats is Better
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
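transaction also accepts constraints such as maxspan and maxpause to keep groups from sprawling, and it adds duration and eventcount fields to each grouped event; a hedged variation on the session example:
sourcetype=access*
| transaction JSESSIONID maxspan=1h maxpause=15m
| stats avg(duration) avg(eventcount)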
Learn Them Well and Become a Ninja
eval - Modify or Create New Fields and Values
stats - Calculate Statistics Based on Field Values
eventstats - Add Summary Statistics to Search Results
streamstats - Cumulative Statistics for Each Event
transaction - Group Related Events Spanning Time
See many more examples and neat tricks at docs.splunk.com and answers.splunk.com
Questions?
Bonus Command
cluster – Find Common and/or Rare Events
Examples
• Find the most common events
*
| cluster showcount=t t=0.1
| table cluster_count _raw
| sort -cluster_count
• Select a field to cluster on
sourcetype=access*
| cluster field=bc_uri showcount=t
| table cluster_count bc_uri _raw
| sort -cluster_count
• Most or least common errors
index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t
| table cluster_count _raw
| sort -cluster_count
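To surface the rarest events instead of the most common, sort ascending, and optionally raise t so that only near-identical events group together; a hedged variation:
index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t t=0.8
| table cluster_count _raw
| sort cluster_count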
Thank You
Speaker Notes
  • The Enhanced Dashboard Editor makes it easier to build advanced dashboards, adding visualizations and charts – all without Advanced XML.

    You can now easily add new inputs and panels to drive a richer experience and create advanced visualizations all in the UI – without any coding.

  • With Contextual Drill-down, a primary panel can drive the charts, tables, and visualizations on the rest of a dashboard.
  • Splunk 6.1 delivers new controls for an even more focused analytics experience with the machine data in Splunk Enterprise.

    Chart Overlay: Improves data analysis by providing the ability to overlay one chart on top of another.
    Pan and Zoom Controls: Enables more focused analysis by letting you select a range of interest on a chart and zoom in for deeper analysis.
  • Alerts are triggered when certain conditions are met – a feature Splunk Enterprise has had for some time.

    Now with Splunk Enterprise 6.1 you can deliver alerts with embedded machine data context. This includes fields and values from the result set that triggered the alert, as well as search artifacts such as the time range the search ran over.

    You can also choose what to include or exclude in the email.
  • Embedded Reports enables any Splunk report or table to be embedded in a third-party business application such as salesforce.com, WordPress, a wiki, or Microsoft® SharePoint.

    With Embedded Reports users are connected to the critical insights using tools they are already familiar with – all without having access to Splunk.

    Simply copy the iframe code provided by Splunk and paste it into your webpage. The authentication is handled in the URL.
  • For more information, or to try out the features yourself, check out the Overview app, which explains each of the features and includes code samples and examples where applicable.
  • This section should take ~10 minutes
  • Data Model – A data model is like a map of the underlying data. It defines meaningful relationships in the data.
    Pivot – An interface for analyzing data without using the Splunk search language.
    Analytics Store – An option that can be applied to Data Models to make Pivot searches extremely fast. Think of it as our third-generation acceleration technology.

    Let’s dig into each of these features
  • sourcetype=access*
    | eval http_response = if(status == 200, "OK", "Error")
    | eventstats avg(bytes) AS avg_bytes by http_response
    | timechart latest(avg_bytes) avg(bytes)
  • Note: Chart is just stats visualized. Timechart is just stats by _time visualized.
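  • A pair of roughly equivalent searches illustrates the point (a sketch, not from the deck). The chart form:
    sourcetype=access* | chart sum(bytes) by clientip
    produces the same table as:
    sourcetype=access* | stats sum(bytes) by clientip
    and the timechart form:
    sourcetype=access* | timechart span=1h sum(bytes)
    is roughly:
    sourcetype=access* | bin _time span=1h | stats sum(bytes) by _time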
  • sourcetype=access*
    | eval KB=bytes/1024
    | stats sum(KB) AS "Sum of KB"
  • sourcetype=access*
    | stats values(useragent) avg(bytes) max(bytes) by clientip
  • Eventstats lets you add statistics about the entire result set and makes those statistics available as fields on each event.

    <Walk through the examples with a demo. Hidden slides are available as backup>
  • Eventstats lets you add statistics about the entire result set and makes those statistics available as fields on each event.


    Let’s use eventstats to create a timechart of the average bytes on top of the overall average.
    index=* sourcetype=access*
    | eventstats avg(bytes) AS avg_bytes
    | timechart latest(avg_bytes) avg(bytes)
  • We can turn this into a moving average simply by adding “by date_hour” to calculate the average per hour instead of the overall average.
    index=* sourcetype=access*
    | eventstats avg(bytes) AS avg_bytes by date_hour
    | timechart latest(avg_bytes) avg(bytes)

  • Decrease the threshold of similarity and see the change in results
    sourcetype=access*
    | cluster field=bc_uri showcount=t t=0.1
    | table cluster_count bc_uri _raw
    | sort -cluster_count