This session will unveil the power of the Splunk Search Processing Language (SPL). See how to use Splunk's simple search language for searching and filtering through data, charting statistics and predicting values, converging data sources and grouping transactions, and finally data science and exploration. We'll begin with basic search commands and build up to more powerful advanced tactics to help you harness your Splunk Fu!
The simplicity and variability of searches can be a blessing and a curse. How can you tell if searches are really efficient? Splunk has a job inspector, but what do all the options mean? Are you using the right commands for your goal? Is there a better way to do this? This session will review the internals of how a search is performed, the use of the job inspector and the search log, and where and when to use certain commands.
Explore, Analyze and Visualize Data in Hadoop and NoSQL. Make massive quantities of machine data accessible, usable and valuable for the people who need it, at the speed they need it. Use Hunk to turn underutilized data into valuable insights in minutes, not weeks or months.
Splunk Ninjas: New features, pivot, and search dojo | Splunk
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
Using Postgres and Citus for Lightning Fast Analytics, also ft. Rollups | Liv... | Citus Data
Watch Sai Srirampur, Solutions Engineer at Citus Data (now part of the Microsoft family), give a live demo of how you can use Postgres and the Citus extension to Postgres to manage real-time analytics workloads.
Watch if you and your application need:
>> A relational database that scales for customer-facing analytics dashboards, with real-time data ingest and a large volume of queries
>> A way to scale out Postgres horizontally, to address the performance hiccups you’re experiencing as you run into the resource limits of single-node Postgres
>> A way to roll-up and pre-aggregate data to build fast data pipelines and enable sub-second response times.
>> A way to consolidate your database platforms, to avoid having separate stores for your transactional and analytics workloads
Using a 4-node Citus database cluster in the cloud, Sai will show you how Citus shards Postgres to give you lightning fast performance, at scale. Also featuring rollups.
SplunkLive! London: Splunk ninjas – new features and search dojo | Splunk
Besides seeing the newest features in Splunk software and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a Splunk ninja.
Data analysis using hive ql & tableau | pkale1708
The purpose of this study is to develop a system which will assist a user in determining whether a location can be classified as a "Safe" residence or not. The output is based on an analysis of the city's local crime history. This involves examining a huge amount of geolocation data and zeroing in on a single area. The area with the majority of crime incidents is highlighted as Unsafe. Clicking or hovering on a single record displays the name, the associated crime and its rank based on the number of crimes that occurred. Big Data Hadoop and Hive systems are implemented in Azure for the analysis.
An overview of crime reporting and analysis shows a significant amount of information related to crime. Multiple factors need to be considered while studying the different aspects of crime. These multiple measures are found in Uniform Crime Reports data and the National Crime Victimization Survey, a survey that interviews victims about their experience. Our paper depicts the nature and characteristics of crime using Hadoop Big Data systems, especially Hive in Azure. In addition, a geolocation map presents which areas are safe or unsafe. The results of different Hive queries are visualized using Tableau.
In addition to seeing the latest features in Splunk Enterprise, learn some of the top commands that will solve most search and analytics needs. Ninjas can use these blindfolded. New features will be demonstrated in the following areas: TCO and Performance Improvements, Platform Management and New Interactive Visualizations.
Did you know that the Splunk Search Processing Language (SPL) can do far more than "just" search? In our session we will show you the full range of what SPL can do! Learn how you can use it to search, transform and visualize any machine data with more than 140 commands. In this breakout session you will get to know new techniques that can help you discover further use cases. Find out how you can do the following better:
- "Finding the needle in the haystack" and root cause analysis
- Joining disparate data sets and exploring relationships between fields
- Geographic data visualizations in near real time
- Calculating statistics, finding anomalies and predicting results.
Julian Harty, Sr. Sales Engineer at Splunk, reviews the internals of how a Splunk search is performed, the use of the job inspector and the search log, and where and when to use certain commands.
Data mining guest lecture (CSE6331 University of Texas, Arlington) 2004 | Alan Walker
I was invited to talk about some of the data mining and knowledge discovery work that was going on at Sabre. This is an overview of some of the projects that I could talk about. The photo for the title slide was home-made; that's my wife's geologist hammer.
Rethinking Online SPARQL Querying to Support Incremental Result Visualization | Olaf Hartig
These are the slides of my invited talk at the 5th Int. Workshop on Usage Analysis and the Web of Data (USEWOD 2015): http://usewod.org/usewod2015.html
The abstract of this talk is as follows:
To reduce user-perceived response time many interactive Web applications visualize information in a dynamic, incremental manner. Such an incremental presentation can be particularly effective for cases in which the underlying data processing systems are not capable of completely answering the users' information needs instantaneously. An example of such systems are systems that support live querying of the Web of Data, in which case query execution times of several seconds, or even minutes, are an inherent consequence of these systems' ability to guarantee up-to-date results. However, support for an incremental result visualization has not received much attention in existing work on such systems. Therefore, the goal of this talk is to discuss approaches that enable query systems for the Web of Data to return query results incrementally.
What makes a search engine "intelligent"? In this talk I discuss MarkLogic's full text search features and demonstrate how to enhance search functionality using MarkLogic's new Search API to deliver better, faster results automatically. You will learn how to use Search API to include indexed facets alongside results and perform query expansion to add robust automatic semantic search for known entities and expand thesaurus terms to reduce false negatives.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu... | Splunk
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?" ("The right recipe for the digital (security) revolution towards the Telematik Infrastruktur 2.0 in healthcare?")
Speaker: Stefan Stein
CERT Team Lead | gematik GmbH, M.Eng. IT-Sicherheit & Forensik,
doctoral student at TH Brandenburg & Universität Dresden
.conf Go 2023 presentation:
De NOC a CSIRT (From NOC to CSIRT)
Speakers:
Daniel Reina - Country Head of Security Cellnex (España) & Global SOC Manager Cellnex
Samuel Noval - Global CSIRT Team Leader, Cellnex
Splunk - BMW connects business and IT with data driven operations SRE and O11y | Splunk
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
Data foundations building success, at city scale – Imperial College London | Splunk
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen... | Splunk
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out the SOC – and significantly increasing its maturity level. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
- Top challenges faced in improving security posture
- Key KPIs implemented in order to measure success
- Strategies and approaches applied in the SOC
- How MITRE ATT&CK and Splunk Enterprise Security were utilised
- Next steps in their maturity journey ahead
Generating a custom Ruby SDK for your web service or Rails API using Smithy | g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
UiPath Test Automation using UiPath Test Suite series, part 3 | DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4 | DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
- Execution from the test manager
- Orchestrator execution result
- Defect reporting
- SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T... | BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Accelerate your Kubernetes clusters with Varnish Caching | Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... | UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Essentials of Automations: Optimizing FME Workflows with Parameters | Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality | Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
2. About Me
• My name is Brian Heffernan and I have Splunked:
  • BBQ Smokers - predicted cook times
  • Nest Thermostats - alerts when my wife turns on the heat
  • Children's Browsing History
• Splunking for 4+ years
• Northeastern University
3. Rules & Agenda
Goal: Get Gooder!
Ask questions – there will be prizes
Don't take notes – I will provide slides
No texting and searching
No sleeping – I will make fun of you.
• Overview & Anatomy of a Search
  – Quick refresher on search language and structure
• SPL Commands and Examples
  – Doing More with Less
  – Searching, charting, converging, transactions, anomalies, exploring
• Custom Commands
  – Doing Less with More
  – Extend the capabilities of SPL
• Q&A
5. SPL Overview
● More than 140 search commands
● Syntax was originally based on the Unix pipeline and SQL, and is optimized for time-series data
● The scope of SPL includes data searching, filtering, modification, manipulation, enrichment, insertion and deletion
● Includes anomaly detection and machine learning
6. Why Create a New Query Language?
● Flexibility and effectiveness on small and big data
● Late-binding schema
● More/better methods of correlation
● Not just analyze, but visualize
7. SPL Basic Structure
search and filter | munge | report | cleanup
new pipe = new line + space + pipe

sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) dc(clientip)
| rename sum(KB) AS "Total KB" dc(clientip) AS "Unique Customers"
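The four pipeline stages above can be sketched in plain Python to show what each pipe segment contributes. This is only an illustration of the pipeline semantics over a handful of invented events, not how Splunk actually executes searches.

```python
# Sketch of the SPL pipeline semantics over a list of event dicts.
# Field names (sourcetype, bytes, clientip) mirror the slide; data is invented.
events = [
    {"sourcetype": "access_combined", "bytes": 2048, "clientip": "10.0.0.1"},
    {"sourcetype": "access_combined", "bytes": 1024, "clientip": "10.0.0.2"},
    {"sourcetype": "syslog", "bytes": 4096, "clientip": "10.0.0.3"},
]

# sourcetype=access*          -- search and filter
filtered = [e for e in events if e["sourcetype"].startswith("access")]

# | eval KB=bytes/1024        -- munge: derive a new field per event
for e in filtered:
    e["KB"] = e["bytes"] / 1024

# | stats sum(KB) dc(clientip) -- report: aggregate over all events
report = {
    "sum(KB)": sum(e["KB"] for e in filtered),
    "dc(clientip)": len({e["clientip"] for e in filtered}),
}

# | rename ...                -- cleanup: friendlier column names
result = {"Total KB": report["sum(KB)"], "Unique Customers": report["dc(clientip)"]}
print(result)  # {'Total KB': 3.0, 'Unique Customers': 2}
```

Each stage consumes the previous stage's output, which is exactly what the pipe does in SPL.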
20. Stats – Calculate Statistics Based on Field Values
Examples
● Calculate stats and rename
sourcetype=netapp:perf
| stats avg(read_ops) AS "Read OPs"
● Multiple statistics
sourcetype=netapp:perf
| stats avg(read_ops) AS Read_OPs sparkline(avg(read_ops)) AS Read_Trend
● By another field
sourcetype=netapp:perf
| stats avg(read_ops) AS Read_OPs sparkline(avg(read_ops)) AS Read_Trend BY instance
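What `stats avg(read_ops) BY instance` computes can be sketched as a grouped aggregation in Python. The field names mirror the slide; the instance names and values are invented for the example.

```python
# Grouped average, analogous to: | stats avg(read_ops) BY instance
from collections import defaultdict

events = [
    {"instance": "vol0", "read_ops": 100},
    {"instance": "vol0", "read_ops": 300},
    {"instance": "vol1", "read_ops": 50},
]

# instance -> [running sum, running count]
totals = defaultdict(lambda: [0, 0])
for e in events:
    totals[e["instance"]][0] += e["read_ops"]
    totals[e["instance"]][1] += 1

avg_by_instance = {k: s / n for k, (s, n) in totals.items()}
print(avg_by_instance)  # {'vol0': 200.0, 'vol1': 50.0}
```

The BY clause simply partitions the events before the aggregate function runs, so each group gets its own result row.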
23. Timechart – Visualize Statistics Over Time
Examples
● Visualize stats over time
sourcetype=netapp:perf
| timechart avg(read_ops)
● Add a trendline
sourcetype=netapp:perf
| timechart avg(read_ops) AS read_ops
| trendline sma5(read_ops)
● Add a prediction overlay
sourcetype=netapp:perf
| timechart avg(read_ops) AS read_ops
| predict read_ops
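The `trendline sma5(read_ops)` step is a 5-point simple moving average over the timechart series. A small Python sketch (series values invented; Splunk's exact handling of the leading points may differ, here they are left empty):

```python
# Simple moving average, analogous to: | trendline sma5(read_ops)
def sma(series, window=5):
    """Return the windowed mean at each point; points before a full
    window is available get None (no trendline value yet)."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window : i + 1]) / window)
    return out

read_ops = [10, 20, 30, 40, 50, 60, 70]
print(sma(read_ops))  # [None, None, None, None, 30.0, 40.0, 50.0]
```

A moving average smooths spikes so the underlying trend in the chart is easier to read.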
27. SPL Examples and Recipes
● Search and filter + creating/modifying fields
● Charting statistics and predicting values
● Converging data sources
● Identifying transactions and anomalies
● Data exploration & finding relationships between fields
28. Converging Data Sources
Index Untapped Data: Any Source, Type, Volume
[Slide graphic: data sources – online services, web services, servers, security, GPS location, storage, desktops, networks, packaged applications, custom applications, messaging, telecoms, online shopping cart, web clickstreams, databases, energy meters, call detail records, smartphones and devices, RFID – running on-premises, in private cloud and in public cloud]
Ask Any Question
[Use cases: application delivery; security, compliance, and fraud; IT operations; business analytics; industrial data and the Internet of Things]
29. Converging Data Sources
Examples
● Implicit join on time
index=* http
| timechart count BY sourcetype
● Enrich data with lookup
sourcetype=access_combined status=503
| lookup customer_info uid
| stats count BY customer_value
● Append results from another search
… | appendcols [search earliest=-1h sourcetype=Kepware units=W row=A
    | stats stdev(Value) AS hr_stdev] …
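The `lookup customer_info uid` enrichment behaves like a left join of each event against a table keyed on `uid`. A Python sketch with an invented lookup table and invented events:

```python
# Enrichment analogous to: | lookup customer_info uid | stats count BY customer_value
# The lookup table and events are invented for illustration.
lookup = {
    "u1": {"customer_value": "gold"},
    "u2": {"customer_value": "bronze"},
}

events = [
    {"uid": "u1", "status": 503},
    {"uid": "u2", "status": 503},
    {"uid": "u1", "status": 503},
]

# Left-join each event against the lookup table on uid
for e in events:
    e.update(lookup.get(e["uid"], {}))

# | stats count BY customer_value
counts = {}
for e in events:
    counts[e["customer_value"]] = counts.get(e["customer_value"], 0) + 1
print(counts)  # {'gold': 2, 'bronze': 1}
```

Enriching before aggregating is what lets you report on fields (like customer value) that never appear in the raw events.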
32. SPL Examples and Recipes
● Search and filter + creating/modifying fields
● Charting statistics and predicting values
● Converging data sources
● Identifying transactions and anomalies
● Data exploration & finding relationships between fields
33. Transaction – Group Related Events Spanning Time
Examples
● Group by session ID
sourcetype=access*
| transaction JSESSIONID
● Calculate session durations
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)
● Stats is better
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest BY JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
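The "stats is better" recipe above can be sketched in Python, under the assumption that events are (timestamp, session ID) pairs; all values here are invented:

```python
# Sketch of the stats-based duration recipe: for each session ID,
# duration = max(_time) - min(_time), then summarize durations.
# Timestamps and session IDs are invented for illustration.
from collections import defaultdict

events = [  # (_time in seconds, JSESSIONID)
    (100, "s1"), (160, "s1"), (220, "s1"),
    (105, "s2"), (125, "s2"),
]

spans = defaultdict(lambda: [float("inf"), float("-inf")])
for t, sid in events:            # min(_time), max(_time) BY session
    lo, hi = spans[sid]
    spans[sid] = [min(lo, t), max(hi, t)]

durations = {sid: hi - lo for sid, (lo, hi) in spans.items()}
print(durations)                                  # {'s1': 120, 's2': 20}
print(min(durations.values()), max(durations.values()),
      sum(durations.values()) / len(durations))   # 20 120 70.0
```

This mirrors why stats scales better than transaction: it only tracks two numbers per session instead of holding every event in the group.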
36. Anomaly Detection – Find anomalies in your data
Examples
● Find anomalies
| inputlookup car_data.csv
| anomalydetection
● Summarize anomalies
| inputlookup car_data.csv
| anomalydetection action=summary
● Use IQR and remove outliers
| inputlookup car_data.csv
| anomalydetection method=iqr action=remove
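To illustrate the IQR idea behind `anomalydetection method=iqr action=remove`, here is a hedged Python sketch: drop values outside [Q1 - 1.5·IQR, Q3 + 1.5·IQR]. The data, the 1.5 multiplier, and the quartile method (statistics.quantiles with its default exclusive method) are illustrative choices, not Splunk's exact implementation.

```python
# Sketch of IQR-based outlier removal: keep only values inside
# [Q1 - 1.5*IQR, Q3 + 1.5*IQR]. Data and quartile method are
# illustrative; Splunk's internals may differ.
import statistics

values = [10, 12, 11, 13, 12, 95, 11, 10, 12, 13]

q1, _, q3 = statistics.quantiles(values, n=4)   # quartile cut points
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr

kept = [v for v in values if lo <= v <= hi]
print(kept)   # 95 is removed as an outlier
```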
37. SPL Examples and Recipes
● Search and filter + creating/modifying fields
● Charting statistics and predicting values
● Converging data sources
● Identifying transactions and anomalies
● Data exploration & finding relationships between fields
45. Custom Commands
● What is a Custom Command?
– “| haversine origin="47.62,-122.34" outputField=dist lat lon”
● Why do we use Custom Commands?
– Run other/external algorithms on your Splunk data
– Save time munging data (see Timewrap!)
– Because you can!
● Create your own or download as Apps
– Haversine (Distance between two GPS coordinates)
– Timewrap (Enhanced Time overlay)
– Levenshtein (Fuzzy string compare)
– R Project (Utilize R!)
47. Custom Commands – Haversine
Examples
● Download and install the Haversine app
● Read the documentation, then use it in SPL!
sourcetype=access*
| iplocation clientip
| search City=A*
| haversine origin="47.62,-122.34" units=mi outputField=dist lat lon
| table clientip, City, dist, lat, lon
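The great-circle math the command wraps can be sketched in a few lines of Python. The Earth-radius constant (3958.8 miles) is an approximation, and the second coordinate pair (Portland, OR) is invented for the example; the origin is the slide's Seattle coordinates.

```python
# Sketch of the haversine great-circle distance between two
# lat/lon points. Earth radius in miles is approximate; the
# origin is Seattle (from the slide), the other point is an
# invented example (Portland, OR).
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    r = 3958.8  # mean Earth radius in miles (approximate)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

dist = haversine_miles(47.62, -122.34, 45.52, -122.68)
print(round(dist, 1))   # roughly 146 miles, Seattle to Portland
```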
49. Tools
Add Splunk search to Chrome:
This is a handy way to search for a .conf file or a Splunk command directly from the Chrome search bar. Install the Chrome extension, then add the following search engines under Chrome > Settings > Manage Search Engines.
1. Add "Bookmark Search": https://chrome.google.com/webstore/detail/bookmark-search/hhmokalkpaiacdofbcddkogifepbaijk?utm_source=gmail
2. Add the following search engines:
Splunk Apps (SA): https://splunkbase.splunk.com/apps/#/order/relevance/search/%s
Splunk Conf (SC): http://docs.splunk.com/Documentation/Splunk/latest/admin/%sconf
Splunk Search Commands (SS): http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/%s
Splunk Docs (SD): http://docs.splunk.com/Special:SplunkSearch/docs?q=%s
53. For More Information
● Additional information can be found in:
– Search Manual
– Blogs
– Answers
– Operational Intelligence Cookbook
– Exploring Splunk
This presentation has some animations and content to help tell stories as you go. Feel free to change ANY of this to your own liking! I found it is best to pre-load all of the demo dashboards with the search examples instead of clicking on each picture (link to the search) from the slides and moving between the PowerPoint presentation and a Splunk demo instance too frequently. I would definitely practice your flow once or twice before a presentation. There is A LOT of content to get through in 1 hour. The slides with search examples can be unhidden if needed.
Here is what you need for this presentation:
You should have the following installed:
6.3 Overview App - https://splunkbase.splunk.com/app/2828/
OI Demo 3.1 – Get it from box: https://splunk.box.com/s/vlt3qve9hmil8gsgxjouizjceu8h33uf
Optional:
Splunk Search Reference Guide handouts
Mini buttercups or other prizes to give out for answering questions during the presentation
Add your own About Me info, if you want to include it.
“There are tons of EVAL commands to help you shape or manipulate your data the way you want it.”
Optional
<Click on image to show and scroll through the online quick reference guide>
Next we’ll talk about Splunk’s charting and statistical commands.
Notes:
Stats
Timechart
Trendline
Predict
Add streamstats and eventstats or keep simple?
There are 3 commands that form the basis of calculating statistics and visualizing results. Essentially, chart is just stats visualized, and timechart is stats by _time, visualized. These SPL commands are extremely powerful and easy to use.
“Let’s go through some examples – additionally we’ll make it more interesting and pull apart some searches and visualizations from one of the demo’s you saw on stage”
<Go to IT Ops Visibility, click on Storage indicator>
1. Use Read/Write OPs by instance for STATS, bonus w/ sparkline
2. Use Read/Write OPs for TIMECHART
Walk through predict basic options
“The timechart command plus other SPL commands make it very easy to visualize your data any way you want.”
“Again, don’t forget about the quick reference guide. There are many more statistical functions you can use with these commands on your data.”
“The contingency command is used to look for relationships between two fields. Basically, for these two fields: how many different value combinations are there, what are they, and which are most common?”
sourcetype=access_combined
| contingency uri status
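Conceptually, a contingency table is just co-occurrence counts for every (uri, status) value pair, as in this Python sketch on invented events:

```python
# Sketch of what `contingency uri status` produces: a table of
# co-occurrence counts for every (uri, status) value pair.
# Events are invented for illustration.
from collections import Counter

events = [
    {"uri": "/cart", "status": 200},
    {"uri": "/cart", "status": 503},
    {"uri": "/home", "status": 200},
    {"uri": "/cart", "status": 200},
]

table = Counter((e["uri"], e["status"]) for e in events)
for (uri, status), n in table.most_common():   # most common first
    print(uri, status, n)
```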
Depending on remaining time can show 1 or more custom command examples.
“We’ve gone over a variety of Splunk search commands… but what happens when we can’t find a command that fits our needs, or want to use a complex algorithm someone already wrote, or even create our own? Enter Custom Commands.”
Additional Text:
Splunk's search language includes a wide variety of commands that you can use to get what you want out of your data and even to display the results in different ways. You have commands to correlate events and calculate statistics on your results, evaluate fields and reorder results, reformat and enrich your data, build charts, and more. Still, Splunk enables you to expand the search language to customize these commands to better meet your needs or to write your own search commands for custom processing or calculations.
<This slide can be optional, again feel free to use your own story>
Customer Story:
“A while back I was working on a project where I was Splunking live aircraft data. I was giving a demonstration to both my manager and customers, showing real-time movement of aircraft. In the middle of the presentation the customer asked if they could see the real-time distance between any two aircraft, or even between an aircraft and the airport. While I had lat/lon as fields, I knew I couldn’t write an accurate distance algorithm in a timely manner. I quickly searched “distance” on Splunkbase just for the heck of it and, whaddya know, there was a custom command for it called Haversine. I asked the audience to give me 5 minutes, downloaded the app, and plugged in my lat/lon fields for two different planes, just like a regular search command.”
Click #2: “This was my manager’s reaction”
Click #3: “And this was what was going through the customer’s head”
Let’s see Haversine in action.
<Pull up search>
*Note – The origin coordinates in this Haversine example are currently Seattle’s.
If you want to learn more about Data Science, Exploration and Machine Learning, download the Machine Learning App! You’ll use new SPL commands like “fit” and “apply” to train models on data in Splunk.
New SPL commands: fit, apply, summary, listmodels, and deletemodel
* Predict Numeric Fields (Linear Regression): e.g. predict median house values.
* Predict Categorical Fields (Logistic Regression): e.g. predict customer churn.
* Detect Numeric Outliers (distribution statistics): e.g. detect outliers in IT Ops data.
* Detect Categorical Outliers (probabilistic measures): e.g. detect outliers in diabetes patient records.
* Forecast Time Series: e.g. forecast data center growth and capacity planning.
* Cluster Events (K-means, DBSCAN, Spectral Clustering, BIRCH).
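As a conceptual illustration of the first bullet, here is the math behind a one-variable linear regression in pure Python. The ML Toolkit's fit/apply commands wrap real estimators; this is only a minimal ordinary-least-squares sketch on made-up data.

```python
# Minimal sketch of the math behind "Predict Numeric Fields
# (Linear Regression)": ordinary least squares for y = a*x + b
# on invented data. The ML Toolkit's `fit`/`apply` commands wrap
# full estimators; this only shows the idea.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]                  # exactly y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)                        # 2.0 1.0

def apply_model(a, b, x):          # "apply": predict a new value
    return a * x + b

print(apply_model(a, b, 10))       # 21.0
```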