Copyright © 2015 Splunk Inc.
Splunk Ninjas:
New Features, Pivot,
and Search Dojo
2
Safe Harbor Statement
During the course of this presentation, we may make forward looking statements regarding future events
or the expected performance of the company. We caution you that such statements reflect our current
expectations and estimates based on factors currently known to us and that actual events or results could
differ materially. For important factors that may cause actual results to differ from those contained in our
forward-looking statements, please review our filings with the SEC. The forward-looking statements
made in this presentation are being made as of the time and date of its live presentation. If reviewed
after its live presentation, this presentation may not contain current or accurate information. We do not
assume any obligation to update any forward looking statements we may make. In addition, any
information about our roadmap outlines our general product direction and is subject to change at any
time without notice. It is for informational purposes only and shall not be incorporated into any contract
or other commitment. Splunk undertakes no obligation either to develop the features or functionality
described or to include any such feature or functionality in a future release.
3
Agenda
What’s new in 6.2
– New features and capabilities
Harness the power of search
– The 5 search commands that can solve most problems
4
Introducing Splunk Enterprise 6.2
Getting Data In
Advanced Field Extractor
Instant Pivot
Event Pattern Detection
Prebuilt Panels
Search Head Clustering
Distributed
Management Console
Powerful
Analytics for Broader
Number of Users
Faster Data
Onboarding
Breakthrough
Scalability and
Centralized Mgmt.
5
Introducing Splunk Enterprise 6.2
Getting Data In
Advanced Field Extractor
Instant Pivot
Event Pattern Detection
Prebuilt Panels
Search Head Clustering
Distributed
Management Console
Powerful
Analytics for Broader
Number of Users
Faster Data
Onboarding
Breakthrough
Scalability and
Centralized Mgmt.
6
Getting Data In
New interface makes it easier and faster to onboard any data
Intuitive wizard-style interface
Configurable inputs on forwarders
Improved data preview
Context-specific FAQs
7
Advanced Field Extractor
Simplified field extractor enables rapid data analysis
Highlight-to-extract multiple fields
at once
Apply keyword search filters
Specify required text in extractions
View diverse and rare events
Validate extracted values with
field stats
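As a rough illustration only (the log format and the user/action field names below are hypothetical, not from the deck), the kind of regex-based extraction AFX builds for you can also be written inline with the rex command:
sourcetype=access*
| rex field=_raw "user=(?<user>\w+)\s+action=(?<action>\w+)"
| stats count by user action
AFX saves its generated extraction persistently, so the fields simply appear at search time without writing rex by hand.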
8
Demo
9
Introducing Splunk Enterprise 6.2
Getting Data In
Advanced Field Extractor
Instant Pivot
Event Pattern Detection
Prebuilt Panels
Search Head Clustering
Distributed
Management Console
Powerful
Analytics for Broader
Number of Users
Faster Data
Onboarding
Breakthrough
Scalability and
Centralized Mgmt.
10
Prebuilt Panels
Build dashboards faster using reusable building blocks
Enhanced dashboard edit workflow
– Browse or search across reports,
panels, dashboards and more
– Preview before adding to dashboard
Personalize your dashboards
Collaborate using a library of prebuilt panels
Convert panels to inline to further
customize
11
Event Pattern Detection
Auto-discover meaningful patterns in your data with a single click
Search data without having to know
specific terms to search on
No need to sift through similar events; just select the “Patterns” tab
Intuitive interface
12
Instant Pivot
Pivot directly on any search to discover relationships, build reports
From any search, simply select
the Statistics tab and click on the
pivot icon
Explore and analyze data from
the Pivot interface
Quickly discover relationships in
the data and build powerful
reports
13
Download the Overview App
14
Demo
Harness the Power of Search
16
search and filter | munge | report | cleanup
Search Processing Language
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) dc(clientip)
| rename sum(KB) AS "Total KB" dc(clientip) AS "Unique Customers"
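Mapping that example onto the stages: the first line searches and filters, eval munges, stats reports, and rename cleans up for presentation. A minimal variation on the same pattern (a sketch assuming the same access* data, with the status and clientip fields used elsewhere in this deck) computes an error rate per client:
sourcetype=access*
| eval is_error=if(status >= 400, 1, 0)
| stats sum(is_error) AS errors count AS requests by clientip
| eval error_rate=round(errors/requests*100, 2)
| rename clientip AS "Client IP" error_rate AS "Error %"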
17
Five Commands that will Solve Most Data Questions
eval - Modify or Create New Fields and Values
stats - Calculate Statistics Based on Field Values
eventstats - Add Summary Statistics to Search Results
streamstats - Cumulative Statistics for Each Event
transaction - Group Related Events Spanning Time
18
eval - Modify or Create New Fields and Values
Examples
• Calculation:
sourcetype=access*
| eval KB=bytes/1024
• Evaluation:
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
• Concatenation:
sourcetype=access*
| eval connection = clientip.":".port
19
eval - Modify or Create New Fields and Values
Examples
• Calculation:
sourcetype=access*
| eval KB=bytes/1024
• Evaluation:
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
• Concatenation:
sourcetype=access*
| eval connection = clientip.":".port
20
eval - Modify or Create New Fields and Values
Examples
• Calculation:
sourcetype=access*
| eval KB=bytes/1024
• Evaluation:
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
• Concatenation:
sourcetype=access*
| eval connection = clientip.":".port
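One more eval sketch, not from the deck: the status-class labels below are made up, but case() and round() are standard eval functions, and they combine a calculation with a multi-way evaluation in one search:
sourcetype=access*
| eval KB=round(bytes/1024, 1)
| eval status_class=case(status<300, "Success", status<400, "Redirect", status<500, "Client Error", true(), "Server Error")
| stats count avg(KB) by status_class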
21
stats – Calculate Statistics Based on Field Values
Examples
• Calculate stats and rename
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Total KB"
• Multiple statistics
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB)
• By another field
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB) by clientip
22
stats – Calculate Statistics Based on Field Values
Examples
• Calculate stats and rename
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Total KB"
• Multiple statistics
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB)
• By another field
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB) by clientip
23
stats – Calculate Statistics Based on Field Values
Examples
• Calculate statistics
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Total KB”
• Multiple statistics
sourcetype=access*
| eval KB=bytes/1024
| stats avg(KB) sum(KB)
• By another field
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB) by clientip
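A further stats sketch, assuming the same access* data: several aggregations, including a distinct count, can be computed in a single pass and given readable names:
sourcetype=access*
| eval KB=bytes/1024
| stats count AS requests dc(clientip) AS unique_clients sum(KB) AS total_KB avg(KB) AS avg_KB by status
| sort - total_KB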
24
eventstats – Add Summary Statistics to Search Results
Examples
• Overlay Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
• Moving Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
• By created field
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
| eventstats avg(bytes) AS avg_bytes by http_response
| timechart latest(avg_bytes) avg(bytes) by http_response
25
Examples
• Overlay Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
• Moving Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
• By created field
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
| eventstats avg(bytes) AS avg_bytes by http_response
| timechart latest(avg_bytes) avg(bytes) by http_response
eventstats – Add Summary Statistics to Search Results
26
eventstats – Add Summary Statistics to Search Results
Examples
• Overlay Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
• Moving Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
• By created field
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
| eventstats avg(bytes) AS avg_bytes by http_response
| timechart latest(avg_bytes) avg(bytes) by http_response
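Because eventstats writes the summary back onto every event, each event can be compared against the overall average. A minimal sketch (assuming the same bytes field; the 2x threshold is arbitrary) that keeps only unusually large requests:
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| where bytes > 2*avg_bytes
| stats count AS large_requests by clientip
| sort - large_requests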
27
streamstats – Cumulative Statistics for Each Event
Examples
• Cumulative Sum
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total
| timechart max(bytes_total)
• Cumulative Sum by Field
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
• Moving Average
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
28
streamstats – Cumulative Statistics for Each Event
Examples
• Cumulative Sum
sourcetype=access*
| timechart sum(bytes) as bytes
| streamstats sum(bytes) as cumulative_bytes
| timechart max(cumulative_bytes)
• Cumulative Sum by Field
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
• Moving Average
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
29
streamstats – Cumulative Statistics for Each Event
Examples
• Cumulative Sum
sourcetype=access*
| timechart sum(bytes) as bytes
| streamstats sum(bytes) as cumulative_bytes
| timechart max(cumulative_bytes)
• Cumulative Sum by Field
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
• Moving Average
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
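The speaker notes at the end of this deck point out that the same window=10 moving average can also be produced with trendline's simple moving average; that variant is sketched here, along with a running per-client request count (the output field names are illustrative):
sourcetype=access*
| timechart avg(bytes) AS avg_bytes
| trendline sma10(avg_bytes) AS moving_avg_bytes
| timechart latest(moving_avg_bytes) latest(avg_bytes)

sourcetype=access*
| streamstats count AS requests_so_far by clientip
| table _time clientip requests_so_far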
30
transaction – Group Related Events Spanning Time
Examples
• Group by Session ID
sourcetype=access*
| transaction JSESSIONID
• Calculate Session Durations
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)
• Stats is Better
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
31
transaction – Group Related Events Spanning Time
Examples
• Group by Session ID
sourcetype=access*
| transaction JSESSIONID
• Calculate Session Durations
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)
• Stats is Better
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
32
transaction – Group Related Events Spanning Time
Examples
• Group by Session ID
sourcetype=access*
| transaction JSESSIONID
• Calculate Session Durations
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)
• Stats is Better
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
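Two hedged variations on the examples above: the addtocart/purchase markers below are invented, but maxpause, startswith, and endswith are standard transaction options for bounding a transaction explicitly, and the stats alternative can be written even more compactly with range(_time):
sourcetype=access*
| transaction JSESSIONID maxpause=30m startswith="addtocart" endswith="purchase"
| stats avg(duration) avg(eventcount)

sourcetype=access*
| stats range(_time) AS duration by JSESSIONID
| stats min(duration) max(duration) avg(duration)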
33
Learn Them Well and Become a Ninja
eval - Modify or Create New Fields and Values
stats - Calculate Statistics Based on Field Values
eventstats - Add Summary Statistics to Search Results
streamstats - Cumulative Statistics for Each Event
transaction - Group Related Events Spanning Time
See many more examples and neat tricks at docs.splunk.com and answers.splunk.com
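As a closing sketch (not from the original deck, and assuming the same access* data used throughout), several of these commands chain naturally in one search: eval derives a field, eventstats supplies a baseline, and stats summarizes per client:
sourcetype=access*
| eval KB=bytes/1024
| eventstats avg(KB) AS overall_avg_KB
| eval above_avg=if(KB > overall_avg_KB, 1, 0)
| stats count AS requests sum(above_avg) AS heavy_requests avg(KB) AS avg_KB by clientip
| sort - heavy_requests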
Closing Thoughts
The 6th Annual Splunk Worldwide Users’ Conference
September 21-24, 2015 • The MGM Grand Hotel, Las Vegas
• 50+ Customer Speakers
• 50+ Splunk Speakers
• 35+ Apps in Splunk Apps Showcase
• 65 Technology Partners
• 4,000+ IT & Business Professionals
• 2 Keynote Sessions
• 3 days of technical content (150+ Sessions)
• 3 days of Splunk University
– Get Splunk Certified
– Get CPE credits for CISSP, CAP, SSCP, etc.
– Save thousands on Splunk education!
35
Register at: conf.splunk.com
The 6th Annual Splunk Worldwide Users’ Conference
September 21-24, 2015 • The MGM Grand Hotel, Las Vegas
Did you like this Splunk Ninjas session? You should check out
these sessions at .conf2015:
• Search Efficiency Optimization - Andrew Landen, Splunk SME, National Oilwell Varco
• Notes on Optimizing Splunk Performance - Dritan Bitincka, Principal Solutions Architect, Splunk
• Onboarding data with Splunk - Andrew Duca, Sr. Professional Services Consultant
Register at: conf.splunk.com
37
www.splunk.com/apptitude
Submission deadline: July 20, 2015
38
We Want to Hear Your Feedback!
After the Breakout Sessions conclude
Text Splunk to 878787
And be entered for a chance to win a $100 AMEX gift card!
Questions?
Bonus Command
41
cluster – Find Common and/or Rare Events
Examples
• Find the most common events
*
| cluster showcount=t t=0.1
| table cluster_count, _raw
| sort - cluster_count
• Select a field to cluster on
sourcetype=access*
| cluster field=bc_uri showcount=t
| table cluster_count bc_uri _raw
| sort -cluster_count
• Most or least common errors
index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t
| table cluster_count _raw
| sort -cluster_count
42
cluster – Find Common and/or Rare Events
Examples
• Find the most common events
*
| cluster showcount=t t=0.1
| table cluster_count, _raw
| sort - cluster_count
• Select a field to cluster on
sourcetype=access*
| cluster field=bc_uri showcount=t
| table cluster_count bc_uri _raw
| sort -cluster_count
• Most or least common errors
index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t
| table cluster_count _raw
| sort -cluster_count
43
cluster – Find Common and/or Rare Events
Examples
• Find the most common events
*
| cluster showcount=t t=0.1
| table cluster_count, _raw
| sort - cluster_count
• Select a field to cluster on
sourcetype=access*
| cluster field=bc_uri showcount=t
| table cluster_count bc_uri _raw
| sort -cluster_count
• Most or least common errors
index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t
| table cluster_count _raw
| sort -cluster_count
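One more cluster sketch: to surface the rarest patterns instead of the most common ones, sort ascending and keep the top few. The head count here is arbitrary, and cluster also offers labelonly=t to tag every event with its cluster instead of collapsing them:
index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t
| table cluster_count _raw
| sort + cluster_count
| head 20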
44
Splunk Mobile App
EMBEDDING
OPERATIONAL
INTELLIGENCE
• Access dashboards and
reports
• Annotate dashboards and
share with others
• Receive push notifications
Native Mobile Experience
Thank You
Editor's Notes
  1. Intro: - Who you are – cover groupon, box among others - Objective of this track   Here is what you need for this presentation: Link to videos on box: <coming soon> You should have the following installed: 6.2 Overview OI Demo– Get it from the Technical Enablement Portal under SE tools –> Demos https://splunk--c.na2.visual.force.com/apex/LMS_TechnicalEnablementPortal NOTE: Configure your role to search the oidemo index by default, otherwise you will have to type “index=oidemo” for the examples later on. There is a lot to cover in this presentation! Try to go quickly and at a pretty high level. When you get through the presentation judge the audience’s interest and go deeper in whichever section. For example, if they want to know more about Pivot and Data Models then unhide those slides and walk through them, or if they want to go deeper on the search commands talk through the extra examples. If running locally on 8000, these are the links to have ready in the background: http://127.0.0.1:8000/en-US/app/oidemo/content_dashboard?form.track_name=Headlines&earliest=0&latest= http://127.0.0.1:8000/en-US/app/oidemo/data_model_editor?model=%2FservicesNS%2Fnobody%2Foidemo%2Fdatamodel%2Fmodel%2FOIDemo http://127.0.0.1:8000/en-US/app/oidemo/search
  2. Splunk safe harbor statement.
  3. Features we may not have used. Making data accessible to everyone. New ways of looking at old commands.
  4. The goal of 6.X is to make your experience more fluid, make the product easier to use, and ultimately to make you more effective. How do we achieve this? – 3 major focus areas. Let me pause and ask: how many people are using 6.2? Quick poll: how many are using these features?
  5. Splunk Enterprise is the industry-leading platform for Operational Intelligence. Version 6.2 enables organizations to onboard, enrich and analyze machine data faster than ever before, scale to higher numbers of concurrent users and searches, and spend less time managing their large, distributed deployments. Easier data onboarding and preparation Getting Data In radically simplifies onboarding of any data source Advanced Field Extractor enables better preparation of machine data for further analysis
  6. Context-specific FAQs In Splunk 6.2, we’ve completely remodeled the pages and workflows for adding data, and added new features like Forwarder Inputs a new Data Preview. Consolidated Workflow: We’ve made it much easier to find your way to the appropriate input configuration. Instead of selecting from a confusing list of sources, start with a simple choice of “upload, monitor, or forward” and you’ll find yourself in a simple wizard-style workflow of defining the appropriate parameters for the data you want to add. Data Preview The new Data Preview will make it easier for you to create the right sourcetype for your data. In the advanced section, you’ll be able to choose a charset from a list, and see how changes you make to your sourcetype are reflected in props.conf. Forwarder Inputs With Forwarder Inputs, you are able to push input configurations to Splunk instances configured as deployment clients. Simply select one or more forwarders and provide a group name, and you’ll be able to create data inputs on them in the same way you create inputs through the UI on your indexers.
  7. Step-by-step walkthrough of your new fields: 1. Easy highlight 2. Extract multiple fields 3. Validate. With this enhancement, we’ve made it easier to extract fields from your data with the Advanced Field Extractor (AFX). A replacement of the existing field extraction utility, AFX enables you to easily capture multiple fields in a single extraction and specify required text to filter events for extraction (improving accuracy and efficiency). AFX also provides a number of methods for detecting false positives in order to help you validate your field extractions and improve their accuracy.
  8. Demo GDI and AFE David used access_combined Walk through the options Create a new index on the fly… Example and info here: https://splunk.box.com/s/zg6964cc15nj9kcldd9w
  9. Purpose of one click pivot enabling anyone to Pivot directly on data, bypassing the Data Model step Splunk Enterprise is the industry-leading platform for Operational Intelligence. Version 6.2 enables organizations to onboard, enrich and analyze machine data faster than ever before, scale to higher numbers of concurrent users and searches, and spend less time managing their large, distributed deployments. More powerful analytics for everyone Instant Pivot makes analytics easier by enabling anyone to Pivot directly on data, bypassing the Data Model step Event Pattern Detection speeds analysis by identifying meaningful patterns in machine data Prebuilt Panels enables faster dashboard creation by providing the ability to create and package re-usable dashboard building blocks
  10. Reusable building blocks Panels allow users to build custom dashboards faster, leveraging pre-built dashboard panels packaged within apps. A user can select from pre-built reports and dashboards or create their own from the new Add Panel interface.
  11. - Auto-discover meaningful patterns in your data. - A slider lets you set the threshold of similarity of the events, so you can tune whether the patterns are more or less specific, which will increase or reduce the number of patterns. - The default grouping method is to break down the events into terms (match=termlist) and compute the vector between events. - t = threshold value; use a higher value for t if you want the command to be more discriminating about which events are grouped together. - Patterns can be saved as eventtypes. Event Pattern Detection reduces massive sets of data to their essence rather than making you sift through all events. This can be used to identify common and rare events quickly, or to search your data without having to know specific terms to search on. If you already understand the “cluster” command in Splunk then you know what this is capable of. A slider lets you set the threshold of similarity of the events, so you can tune whether the patterns are more or less specific, which will increase or reduce the number of patterns.
  12. How do we use it? Why? To quickly discover relationships. Flexible – it sits between search and the data model. Instant Pivot enables you to open any query in the Pivot interface, without requiring the creation of a data model. This means that you have the flexibility to choose which interface to use to explore your data. This also creates another method to construct data models, starting with search. When a user clicks on the Pivot icon, an ephemeral data model is created that collects user-specified fields within Pivot as a single, flat object. The user can save their Pivot (which additionally prompts the user to save the data model). Users can choose to instantly Pivot on their data, modify fields, columns, etc. in Pivot and then convert it back to a search if they need to use advanced search commands. Instant Pivot allows users to interact with their data faster.
  13. For more information, or to try out the features yourself, check out the Overview app, which explains each of the features and includes code samples and examples where applicable. SF Dept of Health food violation report:
  14. Let's take another interesting data sample: SF Dept of Health food violation report: index=* sourcetype="sf_food_violations" – fields of interest: neighbourhood, description, name, postal_code, risk_category. Question we want to ask of our data: number of violations by name? OK, now tell me the most Starbucks violations by neighbourhood? (STARBUCKS*) Example and info here: https://splunk.box.com/s/zg6964cc15nj9kcldd9w
  15. <This section should take ~15 minutes> Search is the most powerful part of Splunk.
  16. The Splunk search language is very expressive and can perform a wide variety of tasks, ranging from filtering data, to munging, to reporting. The results can be used to answer questions, visualize results, or even be sent to a third-party application in whatever format it requires. Although there are 135 documented search commands, most questions can be answered by using just a handful.
  17. These are the five commands you should get very familiar with. If you know how to use these well, you will be able to solve most data questions that come your way. Let’s take a quick look at each of these.
  18. sourcetype=access*| eval http_response = if(status == 200, "OK", "Error") sourcetype=access* | eval http_response = if(status == 200, "OK", "Error") | eventstats avg(bytes) AS avg_bytes by http_response | timechart latest(avg_bytes) avg(bytes)
  19. sourcetype=access* | eval KB=bytes/1024 | stats sum(KB) AS "Sum of KB"
  20. Stats: Average througput Multiple: Average vs max by ip Multiple: by another field sourcetype=access* | stats avg(bytes) max(bytes) by clientip
  21. sourcetype=access* | stats values(useragent) avg(bytes) max(bytes) by clientip
  22. Eventstats lets you add statistics about the entire search results and makes the statistics available as fields on each event. sourcetype=access* | eventstats avg(bytes) as avg_bytes | timechart latest(avg_bytes) avg(bytes) Let’s use eventstats to create a timechart of the average bytes on top of the overall average.
  23. We can turn this into a moving average simply by adding “by date_hour” to calculate the average per hour instead of the overall average. index=* sourcetype=access* | eventstats avg(bytes) AS avg_bytes by date_hour | timechart latest(avg_bytes) avg(bytes)
  24. sourcetype=access* | eval http_response = if(status == 200, "OK", "Error”) | eventstats avg(bytes) AS avg_bytes by http_response | timechart latest(avg_bytes) avg(bytes) by http_response
  25. Download level changes over time Temp variations over time To create a cumulative sum: sourcetype=access* | timechart sum(bytes) as bytes | streamstats sum(bytes) as cumulative_bytes | timechart max(cumulative_bytes)
  26. sourcetype=access* | reverse | streamstats sum(bytes) as bytes_total by status | timechart max(bytes_total) by status
  27. sourcetype=access* | timechart avg(bytes) as avg_bytes | streamstats avg(avg_bytes) AS moving_avg_bytes window=10 | timechart latest(moving_avg_bytes) latest(avg_bytes) Bonus: This could also be completed using the trendline command with the simple moving average (sma) parameter: sourcetype=access* | timechart avg(bytes) as avg_bytes | trendline sma10(avg_bytes) as moving_average_bytes | timechart latest(avg_bytes) latest(moving_average_bytes) Double Bonus: Cumulative sum by period sourcetype=access* | timechart span=15m sum(bytes) as cumulative_bytes by status | streamstats global=f sum(cumulative_bytes) as bytes_total
  28. sourcetype=access* | transaction JSESSIONID
  29. sourcetype=access*| transaction JSESSIONID | stats min(duration) max(duration) avg(duration)
  30. NOTE: Many transactions can be re-created using stats. Transaction is easy but stats is way more efficient and it’s a mapable command (more work will be distributed to the indexers). sourcetype=access* | stats min(_time) AS earliest max(_time) AS latest by JSESSIONID | eval duration=latest-earliest | stats min(duration) max(duration) avg(duration)
  31. There is much more each of these commands can be used for. Check out answers.splunk.com and docs.splunk.com for many more examples.
  32. And finally, I would like to encourage all of you to attend our user conference in September. The energy level and passion that our customers bring to this event is simply electrifying. Combined with inspirational keynotes and 150+ breakout sessions across all areas of operational intelligence, it is simply the best forum to bring our Splunk community together, to learn about new and advanced Splunk offerings, and most of all to learn from one another.
  33. <If you have time, feel free to show one of your favorite commands or a neat use case of a command. The cluster command is provided here as an example > “There are over 135 splunk commands, the five you have just seen are incredibly powerful. Here is another to add to your arsenal.”
  34. You can use the cluster command to learn more about your data and to find common and/or rare events in your data. For example, if you are investigating an IT problem and you don't know specifically what to look for, use the cluster command to find anomalies. In this case, anomalous events are those that aren't grouped into big clusters or clusters that contain few events. Or, if you are searching for errors, use the cluster command to see approximately how many different types of errors there are and what types of errors are common in your data. index=* sourcetype=linux_secure | cluster t=.9 showcount=t | table cluster_count _raw | sort -cluster_count
  35. Decrease the threshold of similarity and see the change in results sourcetype=access* | cluster field=bc_uri showcount=t t=0.1| table cluster_count bc_uri _raw | sort -cluster_count
  36. index=_internal source=*splunkd.log* log_level!=info | cluster showcount=t | table cluster_count _raw | sort -cluster_count
  37. Android coming soon!