Splunk is a powerful platform that harnesses your machine data and turns it into valuable information, enabling your business to make informed decisions and taking your organization from reactive to proactive. Like any other platform, Splunk is only as powerful as the data it has access to, so in this session we will walk through how to successfully onboard data, with samples ranging from simple to complex. We will also look at how to use common TAs (technology add-ons) to bring valuable data into Splunk. This session is designed to give you a better understanding of how to onboard data into Splunk, enabling you to unlock the power of your data.
Splunk Data Onboarding Overview - Splunk Data Collection Architecture (Splunk)
Splunk's Naman Joshi and Jon Harris presented the Splunk Data Onboarding overview at SplunkLive! Sydney. This presentation covers:
1. Splunk Data Collection Architecture
2. Apps and Technology Add-ons
3. Demos / Examples
4. Best Practices
5. Resources and Q&A
Here’s your chance to get hands-on with Splunk for the first time! Bring your modern Mac, Windows, or Linux laptop and we’ll go through a simple install of Splunk. Then, we’ll load some sample data, and see Splunk in action – we’ll cover searching, pivot, reporting, alerting, and dashboard creation. At the end of this session you’ll have a hands-on understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll experience practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
Splunk Tutorial for Beginners - What is Splunk | Edureka (Edureka!)
This Splunk tutorial will help you understand what Splunk is, the benefits of using Splunk, Splunk vs. ELK vs. Sumo Logic, Splunk architecture (Splunk Forwarder, Indexer and Search Head) with the help of a Domino's use case, and Splunk careers and jobs. Check the Splunk tutorial video here: https://www.youtube.com/watch?v=Ekai8Ln11Iw. You can also read the tutorial blog here: https://goo.gl/eoZFWV.
The slides cover the following topics:
Need for Data Management & Analytics
What is Splunk and Why Splunk?
Splunk vs ELK vs Sumo Logic
Splunk Use Case: Domino's
How Splunk Works: Splunk Architecture
Heavy Forwarders
Splunk Architecture Diagram
Splunk Jobs & Careers
Splunk for Enterprise Security and User Behavior Analytics (Splunk)
This session will review Splunk’s two premium solutions for information security organizations: Splunk for Enterprise Security (ES) and Splunk User Behavior Analytics (UBA). Splunk ES is Splunk's award-winning security intelligence solution that brings immediate value for continuous monitoring across SOC and incident response environments – allowing you to quickly detect and respond to external and internal attacks, simplifying threat management while decreasing risk. Splunk UBA is a new technology that applies unsupervised machine learning and data science to solving one of the biggest problems in information security today: insider threat. You’ll learn how Splunk UBA works in tandem with ES, or third-party data sources, to bring significant automated analytical power to your SOC and Incident Response teams. We’ll discuss each solution and see them integrated and in action through detailed demos.
If you are looking to gain all the benefits of Splunk software with all the benefits of a cloud service, this is a must-attend session. In this session, learn why Splunk Cloud is the industry-leading SaaS platform for operational intelligence and hear how Splunk Cloud customers use Splunk software with zero operational overhead. You will also learn how Splunk Cloud offers the full feature set of Splunk Enterprise, access to 500+ apps, and single-pane-of-glass visibility across Splunk Cloud and Splunk Enterprise deployments.
Splunk Enterprise Security (ES) is a SIEM solution that provides insight into machine data generated by security technologies, such as data about networks, endpoints, access, malware, vulnerabilities and identities. It lets security teams quickly detect and fend off internal and external attacks, simplifying threat management, minimizing risk and protecting your business. Splunk Enterprise Security streamlines every aspect of security operations and is suitable for organizations of any size and level of expertise.
The volume and complexities of today’s security incidents can tax even the largest security teams. This leaves big gaps in incident detection and response workflows that can put organisations at great risk. Your team can’t scale to manually catch and address every incident, so which ones should you focus on and which ones should you ignore? You shouldn’t be forced to make a choice. In this session, find out how Splunk’s SIEM and SOAR technologies deliver security analytics, machine learning, and automation capabilities to increase the efficiency of security teams and reduce the enterprise’s exposure to risk. Learn how to achieve big results from intelligently streamlined incident detection and response workflows—accelerating your actions, scaling your resources, and optimizing your security operations.
How to Design, Build and Map IT and Business Services in Splunk (Splunk)
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand. We will design a sample service model and map it to performance indicators that track operational and business objectives. We will also show you how to make Splunk service-aware with Splunk IT Service Intelligence (ITSI).
This session will explore best practices for monitoring and observing Splunk deployments. There will be a focus on how to instrument your deployment and understand how your users' workloads may affect performance. Guidance will be provided on how to observe these behaviours, investigate them and then take the right corrective action.
Getting Started with Splunk Enterprise
What is Splunk? At the end of this session you’ll have a high-level understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll see practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
Field Extractions: Making Regex Your Buddy (Michael Wilde)
This presentation was given by Michael Wilde, Splunk Ninja at Splunk's Worldwide User Conference 2011. A demonstration accompanied this presentation. Link is forthcoming.
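As a taste of what regex-based field extraction looks like in practice, here is a minimal search-time sketch using SPL's `rex` command (the index, sourcetype and field names are hypothetical, not from the presentation):

```
index=web sourcetype=access_combined
| rex field=_raw "user=(?<user>\w+)\s+status=(?<status>\d+)"
| stats count by user status
```

The same pattern can be made permanent as an `EXTRACT-<class>` setting in props.conf so every search gets the fields for free.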
Splunk conf2014 - Onboarding Data Into Splunk (Splunk)
It's important to get data into Splunk right the first time. This session shows you how to get the 'important' things right the first time, sometimes using .conf files. Those important things include timestamp and timezone, host extractions (which host to extract), sourcetype, line breaking and index. Splunk's "schema-on-the-fly" allows flexibility in field extractions, but we need to index things properly to find the data. This presentation walks customers through getting different data sources – e.g., logs, databases, API calls (JIRA, SFDC), FIX data – into Splunk with the correct parsing rules.
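For illustration, a minimal props.conf stanza covering the items the abstract names – timestamp, timezone, line breaking and sourcetype – might look like this (the sourcetype name, time format and regex are hypothetical, not from the presentation):

```ini
# props.conf, applied on the parsing tier (heavy forwarder or indexer)
[my_custom_app]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
TZ = UTC
# single-line events: break the stream before each leading date
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
```

Getting these settings right at index time is what makes the later search-time flexibility possible.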
SplunkLive! Frankfurt 2018 - Data Onboarding Overview (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Splunk Data Collection Architecture
Apps and Technology Add-ons
Demos / Examples
Best Practices
Resources and Q&A
LOGGING - About Needles in the Modern Haystack
https://www.macsysadmin.se/program.html
Every once in a while you'll read "collect the log files". How will this work with your cloud service, identity provider, and SaaS solution? What are the challenges, and what are the options at hand for monitoring macOS effectively for compliance?
In this session we talk about practices in storing and retrieving event information for monitoring, and review applications to build and process rich audit trails. This session aims to share our experiences made with commercial and open source backends applied to various client scenarios.
Machine Data Is EVERYWHERE: Use It for Testing (TechWell)
As more applications are hosted on servers, they produce immense quantities of logging data. Quality engineers should verify that apps are producing log data that is existent, correct, consumable, and complete. Otherwise, apps in production are not easily monitored, have issues that are difficult to detect, and cannot be corrected quickly. Tom Chavez presents the four steps that quality engineers should include in every test plan for apps that produce log output or other machine data. First, test that the data is being created. Second, ensure that the entries are correctly formatted and complete. Third, make sure the data can be consumed by your company’s log analysis tools. And fourth, verify that the app will create all possible log entries from the test data that is supplied. Join Tom as he presents demos including free tools. Learn the steps you need to include in your test plans so your team’s apps not only function but also can be monitored and understood from their machine data when running in production.
Machine-generated data is one of the fastest growing and complex areas of big data. It's also one of the most valuable, containing a definitive record of all user transactions, customer behavior, machine behavior, security threats, fraudulent activity and more. Join us as we explore the basics of machine data analysis and highlight techniques to help you turn your organization’s machine data into valuable insights. This introductory workshop includes a hands-on(bring your laptop) demonstration of Splunk’s technology and covers use cases both inside and outside IT. Learn why more than 13,000 customers in over 110 countries use Splunk to make business, government, and education more efficient, secure, and profitable.
SplunkLive! Amsterdam 2015 Breakout - Getting Started with Splunk (Splunk)
What is Splunk? At the end of this session you’ll have a high-level understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of big data. You’ll see practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu... (Splunk)
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?" ("The right recipe for the digital (security) revolution toward Telematik Infrastruktur 2.0 in healthcare?")
Speaker: Stefan Stein – CERT Team Lead, gematik GmbH; M.Eng. IT Security & Forensics, doctoral student at TH Brandenburg & Universität Dresden
.conf Go 2023 presentation:
De NOC a CSIRT (From NOC to CSIRT)
Speakers:
Daniel Reina - Country Head of Security Cellnex (España) & Global SOC Manager Cellnex
Samuel Noval - Global CSIRT Team Leader, Cellnex
Splunk - BMW connects business and IT with data-driven operations, SRE and O11y (Splunk)
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
Data foundations building success, at city scale – Imperial College London (Splunk)
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen... (Splunk)
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out the SOC – and significantly increasing its maturity level. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
Top challenges faced in improving security posture
Key KPIs implemented in order to measure success
Strategies and approaches applied in the SOC
How MITRE ATT&CK and Splunk Enterprise Security were utilised
Next steps in their maturity journey ahead
2. • Major components involved in data indexing
• What happens to data within Splunk
• What the data pipeline is & how to influence it
• Shaping data understanding via props.conf
• Configuring data inputs via inputs.conf
• What goes where
• Heavy Forwarders vs. Universal Forwarders
• How to get your data into Splunk (mostly correctly)
~60 minutes from now...
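As a concrete example of the inputs.conf side of that agenda, a minimal file-monitoring stanza on a forwarder could look like this (the path, sourcetype and index names are hypothetical):

```ini
# inputs.conf on a Universal Forwarder
[monitor:///var/log/myapp/app.log]
sourcetype = my_custom_app
index = app_logs
disabled = false
```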
3. What is the Data Onboarding Process?
• Systematic way to bring new data sources into Splunk
• Make sure that new data is instantly usable & has maximum value for users
• Goes hand-in-hand with the User Onboarding process (sold separately)
4. Machine Data > Business Value
Index untapped data – any source, type, volume: online services, web services, servers, security, GPS location, storage, desktops, networks, packaged applications, custom applications, messaging, telecoms, online shopping carts, web clickstreams, databases, energy meters, call detail records, smartphones and devices, RFID – on-premises, private cloud, public cloud.
Ask any question: application delivery; security, compliance and fraud; IT operations; business analytics; industrial data and the Internet of Things.
5. Flavors of Machine Data
[Diagram: sample event streams – order processing, Twitter, care IVR, middleware error.]
6. Getting Data Into Splunk – Agent and Agent-less Approach for Flexibility
[Diagram, two halves:]
• Agent-less data input: mounted file systems; syslog over TCP/UDP from syslog-compatible hosts and network devices; WMI (Event Logs, performance, Active Directory) from Windows hosts; custom apps and scripted API connections (perf, shell, code)
• Splunk Forwarder on Unix, Linux and Windows hosts: local file monitoring (log files, config files, dumps and trace files); Windows inputs (Event Logs, performance counters, registry monitoring, Active Directory monitoring, virtual hosts); scripted inputs (shell scripts, custom parsers, batch loading)
7. Splunk Data Ingest
[Diagram: Universal Forwarders (UF) and a Heavy Forwarder (HF) feeding an indexer (IDX), which is searched by a search head (SH).]
Summary: when it comes to "core" Splunk, there are two distinct products: the Splunk Universal Forwarder and Splunk Enterprise (with optional configs). "Everything else" – Indexer, Search Head, License Server, Deployment Server, Cluster Master, Deployer, Heavy Forwarder, etc. – all of these are instances of Splunk Enterprise with varying configs.
12. Inputs – where it all starts
• Input processors: Monitor, FIFO, UDP, TCP, Scripted
• No events yet – just a stream of bytes
• Break the data stream into 64KB blocks
• Annotate the stream with metadata keys (host, source, sourcetype, index, etc.)
• Can happen on UF, HF or indexer
13. Parsing
• Check character set
• Break lines
• Process headers
• Can happen on HF or indexer
14. Aggregation/Merging
• Merge lines for multi-line events
• Identify events (finally!)
• Extract timestamps
• Exclude events based on timestamp (MAX_DAYS_AGO, ..)
• Can happen on HF or indexer
15. Typing
• Do regex replacement (field extraction, punctuation extraction, event routing, host/source/sourcetype overrides)
• Annotate events with metadata keys (host, source, sourcetype, ..)
• Can happen on HF or indexer
16. Indexing
• Output processors: TCP, syslog, HTTP
• indexAndForward
• Sign blocks
• Calculate license volume and throughput metrics
• Index: [write to disk] / [forward elsewhere] / ...
• Can happen on HF or indexer
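The output stage described above is driven by outputs.conf. A minimal sketch of forwarding plus local indexing – the group and server names are illustrative, not from the deck:

```ini
# outputs.conf -- forward events to a pool of indexers
[tcpout]
defaultGroup = primary_indexers
# Also keep a local copy (only meaningful on an HF or indexer)
indexAndForward = true

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```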
22. Splunk Data Ingest – Parsing vs. Not Parsing
[Diagram: UFs (not parsing) and an HF (parsing) feeding an indexer (IDX) and search head (SH).]
Note: the data is parsed at the first component that has a parsing engine – and not again. This affects where you put certain props.conf and transforms.conf files (i.e., sometimes they go on the forwarder).
24. On-boarding Process
• Identify the specific sourcetype(s) – onboard each separately
• Check for a pre-existing app/TA on splunk.com – don't reinvent the wheel!
• Gather info:
  • Where does this data originate/reside? How will Splunk collect it?
  • Which users/groups will need access to this data? Access controls?
  • Determine the indexing volume and data retention requirements
  • Will this data need to drive existing dashboards (ES, PCI, etc.)?
  • Who is the SME for this data?
• Map it out:
  • Get a "big enough" sample of the event data
  • Identify and map out fields
  • Assign sourcetype and TA names according to CIM conventions
25. On-boarding Process
1. Dev: create (or use) an app; define props/inputs; define the sourcetype; use the data import wizard; import, tweak, repeat; oneshot; [hook up monitor]
2. Test: deploy the app; oneshot; validate; hook up monitor; validate
3. Prod: deploy the app; validate; monitor
26. Good Hygiene
• General:
  • Use apps for configs
  • Use TAs/add-ons from Splunk if possible
  • Use dev, test, prod (dev can be a laptop, test can be ephemeral)
• UF when possible
  • HF only if filtering/transforming is required in foreign land
• Unique sourcetype per event stream
• Don't send data through Search Heads
• Don't send data direct to Indexers
27. Good Hygiene
• inputs.conf:
  • Be as specific as possible
  • Set the sourcetype, if possible – don't let Splunk auto-sourcetype (no ...too_small)
  • Specify the index if possible
• props.conf:
  • Set: TIME_PREFIX, TIME_FORMAT, MAX_TIMESTAMP_LOOKAHEAD
  • Optimally: SHOULD_LINEMERGE = false, LINE_BREAKER, TRUNCATE
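The props.conf settings named above can be sketched for a hypothetical single-line, syslog-style sourcetype (the stanza name, time format, and limits are illustrative, not from the deck):

```ini
# props.conf -- lock down timestamping and line breaking
# so Splunk never has to guess at index time
[fubar:log]
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 15
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
```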
29. Pre-Board
(Same checklist as the On-boarding Process slide: identify the specific sourcetype(s), check splunk.com for a pre-existing app/TA, gather info, and map it out.)
30. Tangent: What is the CIM and why should I care?
• The Common Information Model (CIM) defines relationships in the underlying data, while leaving the raw machine data intact
• A naming convention for fields, eventtypes & tags
• More advanced reporting and correlation requires that the data be normalized, categorized, and parsed
• CIM-compliant data sources can drive CIM-based dashboards (ES, PCI, others)
31. Build the index-time configs
• Identify the configs (inputs, props and transforms) needed to properly handle:
  • timestamp extraction, timezone, event breaking, sourcetype/host/source assignments
• Do events contain sensitive data (e.g., PII, PAN, etc.)? Create masking transforms if necessary
• Package all index-time configs into the TA
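A masking transform of the kind mentioned above can be sketched with SEDCMD in props.conf (the sourcetype name and the 13-16-digit PAN pattern are illustrative assumptions):

```ini
# props.conf -- mask anything that looks like a card number (PAN)
# before it is written to the index; index-time masking is irreversible
[fubar:log]
SEDCMD-mask_pan = s/\d{13,16}/<PAN-MASKED>/g
```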
32. Tangent: Best & Worst Practices
• Assign sourcetypes according to event format; events with a similar format should have the same sourcetype
• When do I need a separate index?
  • When the data volume will be very large, or when it will be searched exclusively a lot
  • When access to the data needs to be controlled
  • When the data requires a specific data retention policy
• Resist the temptation to create lots of indexes
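When a dedicated index is justified, retention is set per index in indexes.conf. A minimal sketch – the index name, paths, and 90-day retention period are illustrative:

```ini
# indexes.conf (on the indexers) -- dedicated index with its own retention
[fubar]
homePath   = $SPLUNK_DB/fubar/db
coldPath   = $SPLUNK_DB/fubar/colddb
thawedPath = $SPLUNK_DB/fubar/thaweddb
# 90 days, then buckets are frozen (deleted by default)
frozenTimePeriodInSecs = 7776000
```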
33. Best & Worst Practices – [monitor]
• Always specify a sourcetype and index
• Be as specific as possible: use /var/log/fubar.log, not /var/log/
• Arrange your monitored filesystems to minimize unnecessary monitored logfiles
• Use a scratch index while testing new inputs
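The specific-vs-broad contrast above, as inputs.conf stanzas (the file path and scratch index name are illustrative):

```ini
# inputs.conf -- good: one file, explicit metadata, scratch index while testing
[monitor:///var/log/fubar.log]
sourcetype = fubar:log
index = scratch

# bad: recursively monitors everything under /var/log/
# with auto-guessed sourcetypes -- avoid
# [monitor:///var/log/]
```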
34. Best & Worst Practices – [monitor]
• Look out for inadvertent, runaway monitor clauses
• Don't monitor thousands of files unnecessarily – that's the NSA's job
• From the CLI: splunk show monitor
• From your browser: https://your_splunkd:8089/services/admin/inputstatus/TailingProcessor:FileStatus
35. Another Tangent! Your friend, the Data Previewer
• Find & fix index-time problems BEFORE polluting your index
• A try-it-before-you-fry-it interface for figuring out:
  • Event breaking
  • Timestamp recognition
  • Timezone assignment
• Provides the necessary props.conf parameter settings
37. Build the search-time configs: eventtypes & tags
• Identify "interesting" events which should be tagged with an existing CIM tag (http://docs.splunk.com/Documentation/CIM/latest/User/Alerts)
• Get a list of all current tags:
  | rest splunk_server=local /services/admin/tags
  | rename tag_name as tag, field_name_value AS definition, eai:acl.app AS app
  | eval definition_and_app = definition . " (" . app . ")"
  | stats values(definition_and_app) as "definitions (app)" by tag
  | sort +tag
• Get a list of all eventtypes (with associated tags):
  | rest splunk_server=local /services/admin/eventtypes
  | rename title as eventtype, search AS definition, eai:acl.app AS app
  | table eventtype definition app tags
  | sort +eventtype
• Examine the current list of CIM tags. For each "interesting" event, identify which tags should be applied to it. A particular event may have multiple tags.
• Are there new tags which should be created, beyond those in the current CIM tag library? If so, add them to the CIM library
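Eventtype-plus-tag pairs of this kind live in eventtypes.conf and tags.conf inside the TA. A minimal sketch – the eventtype name, search, and tags are illustrative, not from the deck:

```ini
# eventtypes.conf -- define an "interesting" class of events
[fubar_auth_failure]
search = sourcetype=fubar:log "authentication failure"

# tags.conf -- attach CIM tags to that eventtype
[eventtype=fubar_auth_failure]
authentication = enabled
failure = enabled
```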
38. Build the search-time configs: extractions & lookups
• Extract "interesting" fields
  • If already in your CIM library, name or alias them appropriately
  • If not already in your CIM library, name them according to CIM conventions
• Add lookups for missing/desirable fields
  • Lookups may be required to supply CIM-compliant fields/field values (for example, to convert 'sev=42' to 'severity=medium')
  • Make the values more readable for humans
• Put everything into the TA package
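The 'sev=42' to 'severity=medium' conversion mentioned above could be wired up as follows (the stanza, regex, field names, and lookup file are all illustrative):

```ini
# props.conf -- search-time extraction, alias, and automatic lookup
[fubar:log]
EXTRACT-sev     = sev=(?<sev>\d+)
FIELDALIAS-user = uname AS user
LOOKUP-severity = fubar_severity_lookup sev OUTPUT severity

# transforms.conf -- the lookup table itself (a CSV shipped in the TA)
[fubar_severity_lookup]
filename = fubar_severity.csv
```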
39. Keep Going
• Create data models. What will be interesting for end users?
• Document! (Especially the fields, eventtypes & tags)
• Test:
  • Does this data drive relevant existing dashboards correctly?
  • Do the data models work properly / produce correct results?
  • Is the TA packaged properly?
• Check with the originating user/group; is it OK?
40. Get ready to deploy
• Determine the additional Splunk infrastructure required; can the existing infrastructure & license support this?
• Will new forwarders be required? If so, initiate CR process(es)
• Will firewall changes be required? If so, initiate CR process(es)
• Will new Splunk roles be required? Create & map to AD roles
• Will new app contexts be required? Create app(s) as necessary
• Will new users be added? Create the accounts
41. Bring it!
• Deploy new search heads & indexers as needed
• Install new forwarders as needed
• Deploy the new app & TA to search heads & indexers
• Deploy the new TA to relevant forwarders
42. Test & Validate
• All sources reporting?
• Event breaking, timestamp, timezone, host, source, sourcetype?
• Field extractions, aliases, lookups?
• Eventtypes, tags?
• Data model(s)?
• User access?
• Confirm with the original requesting user/group: looks OK?
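A quick coverage check for the first item – "all sources reporting?" – can be sketched in SPL (the index and sourcetype names are illustrative):

```
| tstats count latest(_time) as last_seen
    where index=fubar sourcetype=fubar:log by host, source
| eval minutes_since_last = round((now() - last_seen) / 60)
| sort - minutes_since_last
```

Any expected host/source pair that is missing from the results, or whose minutes_since_last is unexpectedly large, deserves a look before sign-off.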
44. Gee, this seems like a lot of work…
• Bring new data sources in correctly the first time
• Reduce the amount of "bad" data in your indexes – and the time spent dealing with it
• Make the new data immediately useful to ALL users – not just the ones who originally requested it
• Allow the data to drive all sorts of dashboards without extra modifications
45. Reference
• What Splunk can monitor: http://docs.splunk.com/Documentation/Splunk/latest/Data/WhatSplunkcanmonitor
• How data moves through Splunk: http://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline
• Components of the data pipeline: http://docs.splunk.com/Documentation/Splunk/latest/Deploy/Componentsofadistributedenvironment
• Common Information Model app: https://splunkbase.splunk.com/app/1621
• Common Information Model docs: http://docs.splunk.com/Documentation/CIM/latest/User/Overview
• Where do I put configs: http://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings