In addition to seeing the latest features in Splunk Enterprise, learn some of the top commands that will solve most search and analytics needs. Ninjas can use these blindfolded. New features will be demonstrated in the following areas: TCO and Performance Improvements, Platform Management, and New Interactive Visualizations.
Splunk Enterprise for Information Security Hands-On - Splunk
Splunk is the ultimate tool for the InfoSec hunter. In this unique session, we’ll dive straight into the Splunk search interface, and interact with wire data harvested from various interesting and hostile environments, as well as some web access logs. We’ll show how you can use Splunk Enterprise with a few free Splunk applications to hunt for attack patterns. We’ll also demonstrate some ways to add context to your data in order to reduce false positives and more quickly respond to information. Bring your laptop – you’ll need a web browser to access our demo systems!
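As a flavor of this kind of hunting, here is a minimal SPL sketch that surfaces clients generating unusual numbers of 404s in web access logs; the access_combined sourcetype and its status and clientip fields follow standard Apache-style extractions, so your field names may differ:

    sourcetype=access_combined status=404
    | stats count by clientip
    | sort -count
    | head 10

Scanners and brute-force tools tend to rack up failed requests quickly, so a simple top-N of 404 sources is often a productive first hunt.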
SplunkLive! London: Splunk Ninjas - New Features and Search Dojo - Splunk
The document discusses new features and enhancements in Splunk 6.4, including improvements to reduce storage costs through TSIDX reduction, enhance platform security and management through features like improved DMC and new SSO options, and new interactive visualizations. It also covers search commands like eval, stats, eventstats, streamstats, and transaction that can solve most data analysis problems, and provides examples of using these commands. Finally, it discusses some tips and tricks for Splunk searches.
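As a taste of those commands, a minimal sketch combining eval and stats over web access data (the sourcetype and the bytes and status fields are assumptions based on standard Apache-style extractions):

    sourcetype=access_combined
    | eval resp_kb = bytes / 1024
    | stats avg(resp_kb) AS avg_kb perc95(resp_kb) AS p95_kb by status

Here eval derives a new field per event, and stats aggregates it per HTTP status code.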
Getting Started with Splunk Enterprise Hands-On - Splunk
This document provides an overview and demonstration of Splunk software. The agenda includes downloading Splunk and an overview of its key features for searching machine data, field extraction, dashboards, alerting, and analytics. The presenter then demonstrates installing Splunk, onboarding sample data, performing searches, and using pivots. Deployment architectures are discussed, along with scaling to hundreds of terabytes per day. Resources such as documentation, support, and the Splunk user conference are also mentioned.
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
This document provides an overview and examples of data onboarding in Splunk. It discusses best practices for indexing data, such as setting the event boundary, date, timestamp, sourcetype and source fields. Examples are given for onboarding complex JSON, simple JSON and complex CSV data. Lessons learned from each example highlight issues like properly configuring settings for nested or multiple timestamp fields. The presentation also introduces Splunk capabilities for collecting machine data beyond logs, such as the HTTP Event Collector, Splunk MINT and the Splunk App for Stream.
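Event boundaries and timestamps of the kind discussed are typically controlled in props.conf; the stanza below is an illustrative sketch for a hypothetical newline-delimited JSON sourcetype, with the stanza name and timestamp format assumed rather than taken from the presentation:

    # hypothetical sourcetype; adjust the name and formats to your data
    [my_json_events]
    # one event per newline-delimited JSON record
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    # anchor timestamp parsing to the JSON "timestamp" key
    TIME_PREFIX = "timestamp":\s*"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
    # keep Splunk from matching a later timestamp in the event
    MAX_TIMESTAMP_LOOKAHEAD = 40

Pinning LINE_BREAKER, TIME_PREFIX, and TIME_FORMAT explicitly avoids exactly the nested and multiple-timestamp pitfalls the lessons learned call out.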
Splunk: How to Design, Build and Map IT Services - Splunk
This document discusses how to design, build, and map IT and business services in Splunk to gain "service intelligence." It describes a methodology for bringing subject matter experts together to design services top-down before configuration. Specifically, it discusses deconstructing a company's supply chain, online store, and ERP systems into a service map to gain insights on key performance indicators and improve issue resolution, efficiency, and customer satisfaction.
Elevate your Splunk Deployment by Better Understanding your Value Breakfast S... - Splunk
This document discusses how to better understand the value of a Splunk deployment through assessing data sources. It presents a data source assessment tool to map data sources to use cases and organizational groups to identify opportunities. The tool shows which data sources are indexed and overlap between groups. It aims to maximize benefits from machine data by supporting business objectives and enabling broader impact.
The document summarizes Splunk Enterprise 6.3, highlighting key new features and capabilities. It discusses breakthrough performance and scale improvements including doubled search and indexing speed and 20-50% increased capacity. It also covers advanced analysis and visualization features like anomaly detection, geospatial mapping, and single-value display. New capabilities for high-volume event collection and an enterprise-scale platform with expanded management, custom alert actions, and data integrity control are also summarized.
Introduced in Splunk 6.2, the Distributed Management Console helps Splunk Admins deal with the monitoring and health of their Splunk deployment. In Splunk 6.3, we built views for Splunk Index and Volume Usage, Forwarder Monitoring, Search Head Cluster Monitoring, Index Cluster Monitoring, and tools for visualizing your Splunk Topology. Leverage Splunk DMC and come see the forest -and- the trees in your Splunk deployment!
Explain the Value of your Splunk Deployment Breakout Session - Splunk
This document provides best practices for documenting value realization from a Splunk deployment. It recommends aligning Splunk use with key organizational objectives. Steps include identifying current success stories, quantifying benefits realized using key metrics, and outlining additional value that can be achieved. Metrics to track include reduced incidents, faster issue resolution, and improved efficiencies. Adoption curves and staff training plans should be defined to fully realize potential value. The document aims to help customers justify further Splunk investment and expansion.
This document provides a summary of the Distributed Management Console (DMC) 6.2 from Splunk. It discusses the continuous investment in management and monitoring capabilities. It provides a history of Splunk's monitoring tools and describes the DMC architecture. It demonstrates the DMC's search head clustering, indexer clustering, indexes and volumes, and forwarder monitoring views which provide insights into deployments. It also shows the topology view that visually represents distributed Splunk installations.
This document provides an overview of Splunk capabilities including knowledge objects, tags, event types, saved searches, alerts, and the search pipeline. It demonstrates how to use these features to better organize and analyze IT data through examples such as monitoring server activity, detecting suspicious login attempts, and tracking software sales. Advanced searching techniques including comparison operators, stats, and transaction commands are also explained to help users leverage Splunk's powerful search language.
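For instance, once failed logins are saved as an event type, a short search can surface the suspicious login attempts mentioned above; the failed_login event type name and the threshold here are hypothetical:

    eventtype=failed_login
    | stats count by user, src_ip
    | where count > 5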
SplunkLive! Tampa: Using Value to Fuel Adoption - Splunk
This document discusses how to drive adoption of Splunk by positioning and documenting business value. It recommends quantifying value using metrics to show how Splunk saves time and money. For example, one customer saved 27,000 hours per year and reduced downtime by 50% while stopping over $10 million in fraud. The document provides best practices for measuring success, aligning with business objectives, and creating an incremental adoption plan across IT operations, security, and other teams by positioning specific value opportunities for each. Challenges to documenting value like lack of benchmarks, tools, and time are also addressed.
Power of Splunk Search Processing Language (SPL) - Splunk
The document discusses Splunk's Search Processing Language (SPL) for searching and analyzing machine data. It provides an overview of SPL and its commands, and gives examples of how SPL can be used for tasks like searching, charting, enriching data, identifying anomalies, transactions, and custom commands. The presentation aims to showcase the power and flexibility of SPL for tasks like searching large datasets, visualizing data, combining different data sources, and extending SPL's capabilities through custom commands.
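As a small taste of the transaction examples mentioned, this sketch groups web events into user sessions; JSESSIONID is the session field in Splunk's tutorial data, so substitute your own session identifier:

    sourcetype=access_combined
    | transaction JSESSIONID maxpause=15m
    | stats avg(duration) AS avg_session_secs avg(eventcount) AS avg_events_per_session

The transaction command adds the duration and eventcount fields automatically, which makes session-level statistics a one-liner.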
How to Design, Build and Map IT and Business Services in Splunk - Splunk
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand. We will design a sample service model and map it to performance indicators that track operational and business objectives. We will also show you how to make Splunk service-aware with Splunk IT Service Intelligence (ITSI).
1) Cisco has been using Splunk Enterprise for over 7 years across many business units and teams, with daily indexing volume growing from 300GB in 2010 to over 2TB currently.
2) Cisco's Computer Security Incident Response Team (CSIRT) uses Splunk as their security information and event management (SIEM) platform to monitor 350TB of stored data across 60 global users.
3) The presentation discusses how Cisco and some of its customers have successfully deployed Splunk on Cisco Unified Computing System (UCS) servers to scale their Splunk environments and gain benefits of simplified and repeatable deployments.
Get your Service Intelligence off to a Flying Start - Splunk
The document provides guidance to customers on getting started with Splunk IT Service Intelligence. It recommends bringing subject experts together to identify a problem worth solving, such as issues impacting critical business services. It also suggests designing service models before configuring tools to help map business, application, and infrastructure layers and define key performance indicators. The document offers to help customers with workshops, assessments, and best practices to maximize their investment in Splunk IT Service Intelligence.
Machine Data 101: Turning Data Into Insight is a presentation about using Splunk software to analyze machine data. It discusses topics such as:
- What machine data is and examples of common sources like log files, social media, call center systems
- How Splunk indexes machine data from various sources in real-time regardless of format
- Techniques for enriching data in Splunk like tags, field aliases, calculated fields, event types, and lookups from external data sources
- Examples of collecting non-traditional data sources into Splunk like network data, HTTP events, databases, and mobile app data
The presentation provides an overview of Splunk's machine data platform and techniques for analyzing and enriching machine data.
Leverage Machine Data and Deliver New Insights for Business Analytics - Shannon Cuthbertson
Splunk can provide real-time insights from machine data to complement existing business intelligence technologies. It allows users to enrich machine data with structured data for business analytics purposes. Examples include gaining insights into customer experience, business processes, product usage, and digital marketing efforts. Splunk provides faster insights by analyzing data from Hadoop and NoSQL systems.
How to Design, Build and Map IT and Business Services in Splunk - Splunk
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand.
Taking Splunk to the Next Level - Architecture - Splunk
This session, led by Michael Donnelly, will teach you how to take your Splunk deployment to the next level. Learn about Splunk high availability architectures with Splunk Search Head Clustering and Index Replication. Additionally, learn how to use Splunk's operational and management controls to manage capacity and the end-user experience.
Drive more value through data source and use case optimization - Splunk
Do you wish you had a way to better illustrate the value of Splunk to your leadership? Better yet, do you wish you had a way to illustrate how much MORE value is possible from Splunk leveraging the data you plan on indexing in one functional area or are already indexing today? We can help with that! Come learn the how and why of data source and use case optimization.
Covance is a global drug development company headquartered in Princeton, NJ with over 12,000 employees worldwide. Jessie Ridge is a senior security engineer who helped build Covance's security program from the ground up. Previously, Covance had limited security visibility due to outdated tools and data silos. They implemented Splunk to gain a single source of visibility across their systems. Splunk provided improved security and faster investigations by ingesting various log types. It has since expanded to other teams and grown from processing 10GB of data per day to over 900GB with 25+ users.
Taking Splunk to the Next Level - Management Breakout Session - Splunk
Taking Splunk to the Next Level for Management outlines how Splunk can help organizations quantify the business value of machine data. It provides benchmarks from 400+ customer engagements that show potential efficiencies in IT operations, application delivery, and security and compliance. These include reduced incident resolution times, increased developer productivity, and faster security incident response. The document also offers best practices for aligning a Splunk deployment with key objectives, qualifying issues it can address, quantifying anticipated benefits, and measuring success based on key metrics and customer stories.
Splunk is an industry-leading platform for machine data that allows users to access, analyze, and take action on data from any source. It uses universal indexing to ingest data in real-time from various sources without needing predefined schemas. This enables search, reporting, and alerting across all machine data. Splunk can scale to handle large volumes and varieties of data, provides a developer platform for customization, and supports both on-premises and cloud deployments.
Splunk is a powerful platform for understanding your data. The preview of the Machine Learning Toolkit and Showcase App extends Splunk with a rich suite of advanced analytics and machine learning algorithms. In this session, we'll present an overview of the app architecture and API and show you how to use Splunk to easily perform a variety of tasks, including outlier and anomaly detection, predictive analytics, and event clustering. We’ll use real data to explore these techniques and explain the intuition behind the analytics.
Splunk for IT Operations Breakout Session - Georg Knon
This document discusses how IT complexity is a challenge for CIOs due to siloed technologies, disconnected point solutions, and time spent maintaining rather than innovating. It presents Splunk as a solution that provides comprehensive visibility across infrastructure, applications, databases, and more through centralized data collection and analysis. Splunk reduces problem resolution time by 67% and escalations by 90% by enabling "first responders" to search across all IT data from a single interface. The document also outlines how Splunk apps can provide insights by role and technology and its capabilities for various IT functions like virtualization, storage, and operating systems.
How to Align Your Daily Splunk Activities Breakout Session - Splunk
This document discusses how organizations can align their daily Splunk activities to key business services to increase value and visibility. It recommends that organizations start with identifying a critical problem related to an important business service. It then suggests conducting a workshop with subject matter experts to collaboratively design a Splunk dashboard to monitor the key performance indicators for that service before configuring the dashboard. The document provides an example of how this approach helped a company called Buttercup Games address issues with their supply chain visibility.
Danfoss - Splunk for Vulnerability Management - Splunk
This document summarizes a presentation about Danfoss' use of Splunk for vulnerability management. It provides an overview of Danfoss, the background and experience of the presenter, how Danfoss got started with Splunk in 2008 to meet log collection and retention requirements, and how their use of Splunk has evolved over time to include dashboards, security, automated alerting, and a Sophos antivirus case study. It outlines next steps of expanding Splunk's use to more teams and exploring advanced analytics.
The document discusses the experience of migrating from an old SIEM to Splunk Enterprise Security (ES). Key points include:
- The old SIEM was difficult to maintain, slow, and lacked community support. Splunk provided better performance and capabilities.
- Logs were migrated to Splunk one source at a time after normalization. Analysts found Splunk easier to use.
- A proof of concept with ES showed its advanced correlations, dashboards, and incident management capabilities beyond core Splunk.
- ES provides templates for searches, alerts, and workflows that would have taken months to recreate. It is a more complete SIEM solution.
Splunk .conf2011: Splunk for Fraud and Forensics at Intuit - Erin Sweeney
Splunk can help organizations detect security threats and attacks by analyzing patterns in large volumes of machine data. As attacks have evolved beyond simple signatures to target behaviors, a behavioral approach is needed to understand adversary goals and methods. Splunk supports pattern modeling and adaptation to anticipate attack vectors. It detects suspicious patterns and anomalies by establishing baselines of normal behavior and monitoring for deviations. This helps security analysts take an "actor view" to gain insights into persistent threats.
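The baseline-and-deviation idea can be sketched in SPL; the index, field names, and three-sigma threshold below are illustrative assumptions, not the presenters' exact searches:

    index=auth action=failure
    | bin _time span=1h
    | stats count AS failures by _time, user
    | eventstats avg(failures) AS avg_f stdev(failures) AS sd_f by user
    | where failures > avg_f + 3 * sd_f

eventstats computes each user's own normal failure rate, so the final filter flags only hours that deviate sharply from that user's baseline.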
The document discusses machine learning and analytics capabilities in Splunk. It provides an overview of machine learning concepts like supervised vs. unsupervised learning. It then introduces the ML Toolkit and Showcase App, which adds machine learning commands to Splunk's Search Processing Language (SPL). The app uses popular Python machine learning libraries behind the scenes. The document demonstrates how to fit and apply models to data in Splunk using these new commands. It also outlines some limitations and future plans for the preview release of the app. Example use cases for predictive modeling in areas like capacity planning, insider threat detection, and customer churn prediction are presented.
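As a rough sketch of how fit and apply read in practice (the CSV lookup, field names, and model name below are hypothetical; LinearRegression is one of the toolkit's bundled algorithms), a model is first trained and saved:

    | inputlookup server_metrics.csv
    | fit LinearRegression cpu_load from mem_used io_wait into cpu_model

New events can then be scored against the saved model in a later search:

    sourcetype=server_metrics
    | apply cpu_model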
In this session, you’ll learn about security on AWS and why logging in the cloud is different than on-premises. We’ll explore AWS Cloudtrail, the logging service built into AWS. We’ll discuss Amazon Cloudwatch, a monitoring service for AWS cloud resources and the applications you run on AWS. We’ll also talk about Amazon Inspector, which is the recently announced application security assessment service from AWS. We’ll examine the AWS Config service and how you can use it to improve security and resource management on AWS. Finally, we will look at how the Splunk App for AWS ties all of these services together into deep insight and useful visualizations.
Softcat Splunk Discovery Day Manchester, March 2017 - Splunk
This document provides an agenda for a Splunk conference on March 15th, 2017 in Manchester. The agenda includes:
- An introduction and welcome from 09:30-09:45
- Two sessions from 09:45-12:15 on data-driven IT operations and best practices for security investigations
- A lunch break from 12:30-13:30
- The event concludes at 13:30
Splunk Ninjas: New Features, Pivot, and Search Dojo - Splunk
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
Splunk Ninjas: New features, pivot, and search dojo - Splunk
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
The document provides an overview of new features in Splunk Enterprise 6.1 including enhanced interactive analytics, embedding operational intelligence, and enabling the mission-critical enterprise. It discusses data models and pivot which allow analyzing data without search commands. Finally, it highlights five powerful search commands (eval, stats, eventstats, streamstats, transaction) that can solve most data analysis problems.
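Of those five, streamstats tends to be the least familiar; a minimal sketch that keeps a running revenue total over purchase events (action and price are fields from Splunk's tutorial data, so yours may differ):

    sourcetype=access_combined action=purchase
    | streamstats sum(price) AS running_revenue
    | table _time price running_revenue

Unlike stats, streamstats emits a value per event as it walks the results, which is what makes running totals and moving windows possible.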
SplunkLive! Tampa: Splunk Ninjas: New Features, Pivot, and Search Dojo - Splunk
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
This document introduces Splunk Enterprise & Splunk Cloud Release 6.4. It highlights new features including unlimited custom visualizations, enhanced predictive analytics, expanded cloud services monitoring, improved platform security and management, and storage cost reductions of up to 80% for historical data in Splunk Enterprise. The release aims to help users get more value from big data while lowering storage costs.
Regardless of the source of the data, the integration and analysis of event streams is becoming more important in a world of sensors, social media streams and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analysed, often with many consumers or systems interested in all or part of the events.
So far this has mostly been a development experience, with frameworks such as Oracle Event Processing, Apache Storm or Spark Streaming. With Oracle Stream Analytics, analytics on event streams can be put in the hands of the business analyst. It simplifies the implementation of event processing solutions so that every business analyst is able to graphically and declaratively define event stream processing pipelines, without having to write a single line of code or continuous query language (CQL). Event processing is no longer "complex"! This session presents Oracle Stream Analytics through some selected demo use cases.
This document provides an overview and demonstration of Splunk Enterprise. The agenda includes an overview of Splunk, a live demonstration of installing and using Splunk to search, analyze and visualize machine data, a discussion of Splunk deployment architectures, and information on Splunk communities and support resources. The demonstration walks through importing sample data, performing searches, creating a field extraction, building a dashboard, and exploring Splunk's alerting, analytics and pivot interface capabilities.
This document provides an overview and examples of using the Splunk Search Processing Language (SPL). It discusses SPL commands for searching, filtering, modifying fields, calculating statistics, charting data over time, converging different data sources, identifying transactions, and exploring relationships between fields. Examples are given for common SPL commands like stats, timechart, lookup, appendcols, transaction, and eval. The document is intended to refresh the audience on SPL and provide recipes for common search tasks.
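As one example of these recipes, a sketch that enriches events through a lookup and charts them over time; the product_lookup table and its fields are hypothetical:

    sourcetype=access_combined
    | lookup product_lookup productId OUTPUT product_name
    | timechart span=1h count by product_name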
An overview of Splunk Enterprise 6.3. Presented by Splunk's Jim Viegas at GTRI's Splunk Tech Day, December 8, 2015.
Visit http://www.gtri.com/ for more information.
Sumo Logic QuickStart Webinar - Jan 2016 - Sumo Logic
QuickStart your Sumo Logic service with this exclusive webinar. At these monthly live events you will learn how to capitalize on critical capabilities that can amplify your log analytics and monitoring experience while providing you with meaningful business and IT insights.
This document provides an overview and examples of using the Splunk Search Processing Language (SPL). It begins with a safe harbor statement noting that forward-looking statements may differ from actual results. The agenda then outlines an overview of SPL anatomy, commands and examples, custom commands, and a Q&A section. Examples are provided for various SPL commands like search, eval, stats, and timechart. It also discusses converging and exploring data through commands like lookup, appendcols, transaction, cluster, correlate and associate. Finally, it briefly introduces the concept of custom commands and examples like Haversine.
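As an example of the converging commands, appendcols can place results from two unrelated searches side by side; a minimal sketch under assumed sourcetypes and fields:

    sourcetype=access_combined status>=500
    | stats count AS web_errors
    | appendcols [ search sourcetype=syslog log_level=ERROR | stats count AS app_errors ]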
Azure Stream Analytics: Analyse Data in Motion - Ruhani Arora
The document discusses evolving approaches to data warehousing and analytics using Azure Data Factory and Azure Stream Analytics. It provides an example scenario of analyzing game usage logs to create a customer profiling view. Azure Data Factory is presented as a way to build data integration and analytics pipelines that move and transform data between on-premises and cloud data stores. Azure Stream Analytics is introduced for analyzing real-time streaming data using a declarative query language.
Most data visualisation solutions today still work on data sources which are stored persistently in a data store, using so-called "data at rest" paradigms. More and more data sources today provide a constant stream of data, from IoT devices to social media streams. These data streams publish at high velocity, and messages often have to be processed as quickly as possible. For processing and analytics on the data, so-called stream processing solutions are available. But these provide minimal or no visualisation capabilities. One way is to first persist the data into a data store and then use a traditional data visualisation solution to present the data.
If latency is not an issue, such a solution might be good enough. Another question is which data store solution is necessary to keep up with the high load on write and read. If it is not an RDBMS but a NoSQL database, then not all traditional visualisation tools may integrate with the specific data store. Another option is to use a streaming visualisation solution. These are specially built for streaming data and often do not support batch data. A much better solution would be to have one tool capable of handling both batch and streaming data. This talk presents different architecture blueprints for integrating data visualisation into a fast data solution and highlights some of the products available to implement these blueprints.
How to monitor the whole Citrix environment from bottom to top using Splunk. It was presented during the 6th Polish Citrix User Group meeting in Cracow (October 26th, 2017).
SplunkLive! Frankfurt 2018 - Data Onboarding Overview - Splunk
Presented at SplunkLive! Frankfurt 2018:
Splunk Data Collection Architecture
Apps and Technology Add-ons
Demos / Examples
Best Practices
Resources and Q&A
Managing your Black Friday Logs - Antonio Bonuccelli - Codemotion Rome 2018 - Codemotion
Monitoring an entire application is not a simple task, but with the right tools it is not a hard task either. However, events like Black Friday can push your application to the limit and even cause crashes. As the system is stressed, it generates a lot more logs, which may crash the monitoring system as well. In this talk I will walk through the best practices for using the Elastic Stack to centralize and monitor your logs. I will also share some tricks to help you with the huge increase in traffic typical of Black Friday.
Similar to Splunk Ninjas: New Features and Search Dojo (20)
.conf Go 2023 - Raiffeisen Bank International - Splunk
This document discusses standardizing security operations procedures (SOPs) to increase efficiency and automation. It recommends storing SOPs in a code repository for versioning and referencing them in workbooks, which are lists of standard tasks to follow during investigations. The goal is to have investigation playbooks in the security orchestration, automation and response (SOAR) tool perform the predefined investigation steps from the workbooks, automating incident response. Standard, vendor-agnostic procedures help analysts automate faster without wasting time.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu... - Splunk
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?"
Speaker: Stefan Stein -
Teamleiter CERT | gematik GmbH M.Eng. IT-Sicherheit & Forensik,
doctorate student at TH Brandenburg & Universität Dresden
The document describes Cellnex's transition from a Security Operations Center (SOC) to a Computer Security Incident Response Team (CSIRT). The transition was driven by Cellnex's growth and the need to automate processes and tasks to improve efficiency. Cellnex implemented Splunk SIEM and SOAR to automate the creation, remediation and closure of incidents. This allowed staff to focus on strategic tasks and improved KPIs such as resolution times and analyzed emails.
.conf Go 2023 - The Road to Cybersecurity (ABANCA) - Splunk
This document summarizes ABANCA's journey toward cybersecurity with Splunk, from adding dedicated roles in 2016 to becoming a monitoring and response center with more than 1TB of daily ingest and 350 use cases aligned with MITRE ATT&CK. It also describes mistakes made and the fixes applied, such as normalizing sources and training operators, as well as the current pillars: automation, visibility and alignment with MITRE ATT&CK. Finally, it points out the challenges ahead.
Splunk - BMW connects business and IT with data-driven operations, SRE and O11y - Splunk
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
The document is a presentation on cyber security trends and Splunk security products from Matthias Maier, Product Marketing Director for Security at Splunk. The presentation covers trends in security operations like the evolution of SOCs, new security roles, and data-centric security approaches. It also provides updates on Splunk's security portfolio including recognition as a leader in SIEM by Gartner and growth in the SIEM market. Maier highlights some breakout sessions from the conference on topics like asset defense, machine learning, and building detections.
Data foundations building success, at city scale - Imperial College London - Splunk
Universities have more in common with modern cities than with traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services: from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen... - Splunk
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a cyber security engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out the SOC and significantly increasing its maturity level. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
Top challenges faced in improving security posture
Key KPIs implemented in order to measure success
Strategies and approaches applied in the SOC
How MITRE ATT&CK and Splunk Enterprise Security were utilised
Next steps in their maturity journey ahead
This document summarizes a presentation about observability using Splunk. It includes an agenda introducing observability and why Splunk for observability. It discusses the need for modernization initiatives in companies and the thousands of changes they require. It shows how Splunk provides end-to-end visibility across metrics, traces and logs to detect, troubleshoot and optimize systems. It shares a customer case study of Accenture using Splunk observability in their hybrid cloud environment. Finally, it concludes that observability with Splunk can drive results like reduced downtime and faster innovation.
This document contains slides from a Splunk presentation covering the following topics:
- Updated Splunk logo and information about meetings in Zurich and sales engineering leads
- Ideas for confused or concerned human figures in design concepts
- Three buckets of challenges around websites slowing, apps being down, and supply chain issues
- Accelerating mean time to detect, identify, respond and resolve through cyber resilience with Splunk
- Unifying security, IT and DevOps teams
- Splunk's technology vision focusing on customer experience, hybrid/edge, unleashing data lakes, and ubiquitous machine learning
- Gaining operational resilience through correlating infrastructure, security, application and user data with business outcomes
This document summarizes a presentation about Splunk's platform. It discusses Splunk's mission of helping customers create value faster with insights from their data. It provides statistics on Splunk's daily ingest and users. It highlights examples of how Splunk has helped customers in areas like internet messaging and convergent services. It also discusses upcoming challenges and new capabilities in Splunk like federated search, flexible indexing, ingest actions, improved data onboarding and management, and increased platform resilience and security.
The document appears to be a presentation from Splunk on security topics. It includes sections on cyber security resilience, the data-centric modern SOC, application monitoring at scale, threat modeling, security monitoring journeys, self-service Splunk infrastructure, the top 3 CISO priorities of risk based alerting, use case development, a security content repository, security PVP (posture, vision, and planning) and maturity assessment, and concludes with an overview of how Splunk can provide end-to-end visibility across an organization.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf - Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Letter and Document Automation for Bonterra Impact Management (fka Social Sol... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
4. 4
Agenda
What’s new in 6.4 (and a few goodies from 6.3!)
– TCO & Performance Improvements
– Platform Security and Management
– New Interactive Visualizations
Harness the power of search
– The 5 Search Commands That Can Solve Most Problems
6. 6
Safe Harbor Statement
During the course of this presentation, we may make forward-looking statements regarding future events
or the expected performance of the company. We caution you that such statements reflect our current
expectations and estimates based on factors currently known to us and that actual events or results could
differ materially. For important factors that may cause actual results to differ from those contained in our
forward-looking statements, please review our filings with the SEC. The forward-looking statements
made in this presentation are being made as of the time and date of its live presentation. If reviewed
after its live presentation, this presentation may not contain current or accurate information. We do not
assume any obligation to update any forward-looking statements we may make. In addition, any
information about our roadmap outlines our general product direction and is subject to change at any
time without notice. It is for informational purposes only and shall not be incorporated into any contract
or other commitment. Splunk undertakes no obligation either to develop the features or functionality
described or to include any such feature or functionality in a future release.
7. 7
Splunk Enterprise & Cloud 6.4
Storage TCO Reduction
– TSIDX Reduction reduces historical data storage TCO by 40%+
Platform Security & Management
– Improved DMC
– New SSO Options
– Improved Event Collector
New Interactive Visualizations
– New Pre-built Visualizations
– Open Community Library
– Event Sampling and Predict
8. 8
TSIDX Reduction
Provides 40-80% storage reduction
Retention policy on TSIDX files creates "mini" TSIDX files
Trade-off between storage cost and search performance
– Rare vs. dense searches
No functionality loss
Original TSIDX files can be restored if needed
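Enabling it is a per-index setting. A minimal indexes.conf sketch — the setting names are from the 6.4 indexes.conf spec, while the index name and 90-day threshold are illustrative only, so verify against the docs for your version:

[web_archive]
# Reduce tsidx files for buckets older than ~90 days (value is in seconds)
enableTsidxReduction = true
timePeriodInSecBeforeTsidxReduction = 7776000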
9. 9
Splunk Enterprise & Cloud 6.4
Storage TCO Reduction
– TSIDX Reduction reduces historical data storage TCO by 40%+
Platform Security & Management
– Improved DMC
– New SSO Options
– Improved Event Collector
New Interactive Visualizations
– New Pre-built Visualizations
– Open Community Library
– Event Sampling and Predict
10. 10
Management & Platform Enhancements
Management
– Distributed Management Console: new monitoring views for the scheduler, Event Collector, and system I/O performance
– Delegated admin roles
HTTP Event Collector
– Unrestricted data formats for payloads
– Data indexing acknowledgement
SAML Identity Provider Support
– Okta, Azure AD, ADFS, Ping Federate
[Diagram: SAML identity providers (Okta, Azure AD, ADFS, Ping Federate) providing SSO, and sources such as AWS IoT feeding the HTTP Event Collector]
11. 11
Splunk Enterprise & Cloud 6.4
Storage TCO Reduction
– TSIDX Reduction reduces historical data storage TCO by 40%+
Platform Security & Management
– Improved DMC
– New SSO Options
– Improved Event Collector
New Interactive Visualizations
– New Pre-built Visualizations
– Open Community Library
– Event Sampling and Predict
12. 12
Custom Visualizations
Unlimited new ways to visualize your data
15 new interactive visualizations useful for IT, security, IoT, and business analysis
Open framework to create or customize any visual
Visuals shared via the Splunkbase library
Available for any use: search, dashboards, reports…
13. 13
New Custom Visualizations
Treemap, Sankey Diagram, Punchcard, Calendar Heat Map, Parallel Coordinates, Bullet Graph, Location Tracker, Horseshoe Meter, Machine Learning Charts, Timeline, Horizon Chart
Multiple use cases across IT, security, IoT, and business analytics
14. 14
Event Sampling
• Powerful search option provides
unbiased sample results
• Useful to quickly determine dataset
characteristics
• Speeds large-scale data investigation
and discovery
Optimizes query performance for big data analysis
15. 15
Predict Command Enhancements
• Time-series forecasting
• New algorithms:
• Support bivariate time series
with covariance
• Predict multiple series independently
• Predict missing values within series
• 80-100X performance improvement
Forecast Trends and Predict Missing Values
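The command plugs into any time series. A minimal sketch on the deck's sample web logs — future_timespan is a documented predict option, while the hourly span and 24-point horizon are arbitrary choices:

sourcetype=access*
| timechart span=1h count AS hits
| predict hits future_timespan=24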
19. 19
Search Processing Language
search and filter | munge | report | cleanup
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) dc(clientip)
| rename sum(KB) AS "Total KB" dc(clientip) AS "Unique Customers"
20. 20
Five Commands That Will Solve Most Data Questions
eval - Modify or Create New Fields and Values (see the sketch after this list)
stats - Calculate Statistics Based on Field Values
eventstats - Add Summary Statistics to Search Results
streamstats - Cumulative Statistics for Each Event
transaction - Group Related Events Spanning Time
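A quick eval sketch to set the stage, assuming the same access* web logs used in the examples that follow (bytes and status are standard access-log fields):

• Create a calculated field
sourcetype=access*
| eval KB=bytes/1024

• Create a conditional field
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")

Either derived field can then feed stats, eventstats, or timechart, as the worked examples below show.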
24. 26
stats – Calculate Statistics Based on Field Values
Examples
• Calculate stats and rename
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Total KB"
• Multiple statistics
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB)
• By another field
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB) by clientip
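Worth remembering alongside stats: as the speaker notes put it, chart is just stats visualized, and timechart is just stats by _time visualized. A minimal sketch of the timechart form of the same calculation:

sourcetype=access*
| eval KB=bytes/1024
| timechart avg(KB)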
27. 30
eventstats – Add Summary Statistics to Search Results
Examples
• Overlay Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
• Moving Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
• By created field
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
| eventstats avg(bytes) AS avg_bytes by http_response
| timechart latest(avg_bytes) avg(bytes) by http_response
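Because eventstats writes the summary statistic onto every event, it also works well for outlier filtering. A minimal sketch — the two-standard-deviation cutoff is an arbitrary illustration, not part of the deck:

sourcetype=access*
| eventstats avg(bytes) AS avg_bytes stdev(bytes) AS sd_bytes
| where bytes > avg_bytes + 2*sd_bytes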
30. 34
streamstats – Cumulative Statistics for Each Event
Examples
• Cumulative Sum
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total
| timechart max(bytes_total)
• Cumulative Sum by Field
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
• Moving Average
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
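streamstats can also compare each event with the one before it — the temperature-difference pattern mentioned in the speaker notes. A minimal sketch, where current=f and window=1 restrict the calculation to the single previous event:

sourcetype=access*
| streamstats current=f window=1 last(bytes) AS prev_bytes
| eval bytes_change = bytes - prev_bytes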
33. 38
transaction – Group Related Events Spanning Time
Examples
• Group by Session ID
sourcetype=access*
| transaction JSESSIONID
• Calculate Session Durations
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)
• Stats is Better
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
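transaction also accepts constraints that keep groupings from sprawling. A minimal sketch — maxspan and maxpause are documented transaction options, though the values here are arbitrary:

sourcetype=access*
| transaction JSESSIONID maxspan=30m maxpause=5m
| stats avg(duration) avg(eventcount)

duration and eventcount are fields transaction adds to each grouped event automatically.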
36. 41
Learn Them Well and Become a Ninja
eval - Modify or Create New Fields and Values
stats - Calculate Statistics Based on Field Values
eventstats - Add Summary Statistics to Search Results
streamstats - Cumulative Statistics for Each Event
transaction - Group Related Events Spanning Time
See many more examples and neat tricks at docs.splunk.com and answers.splunk.com
43. 48
cluster – Find Common and/or Rare Events
Examples
• Find the most common events
*
| cluster showcount=t t=0.1
| table cluster_count, _raw
| sort -cluster_count
• Select a field to cluster on
sourcetype=access*
| cluster field=bc_uri showcount=t
| table cluster_count bc_uri _raw
| sort -cluster_count
• Most or least common errors
index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t
| table cluster_count _raw
| sort -cluster_count
Editor's Notes
Here is what you need for this presentation:
Link to videos on box: <coming soon>
You should have the following installed:
6.4 Overview
OI Demo 3.2 – Note this is not on Enablement yet. Please request this from sluedtke@splunk.com. The enablement link will be placed here once available.
NOTE: Configure your role to search the oidemo index by default, otherwise you will have to type “index=oidemo” for the examples later on.
There is a lot to cover in this presentation! Try to move quickly and stay at a fairly high level. Once you are through the presentation, judge the audience's interest and go deeper into whichever section draws it. For example, if they want to know more about choropleths and polygons, spend some time there; or if they want to go deeper on the search commands, talk through the extra examples.
Objective: We want to help you change from this..
Today I’m going to show you some of the new features available in Splunk 6.4.
For TCO & Performance Improvements we’ve created new options to reduce your storage footprint as well as a new event sampling feature to optimize query performance and help you answer questions faster.
For Platform Security and Management we have added new single sign-on capabilities, new features to the HTTP Event Collector and finally new views and dashboards to the Distributed Management Console.
Then for my favorite part, the new Interactive Visualizations. Not only did we double the number of visualizations available in Splunk, but we've provided a way for developers, partners and the community to create their own and integrate them with the Splunk interface natively.
Lastly we will go through some of the most commonly used search commands and how they are used so you can become a Splunk Ninja in 6.4!
To this…
Splunk safe harbor statement.
Let’s start with TCO & Performance Improvements.
Extra Material:
Q: How does it affect performance? Can I still search the data?
A: You can access the data in all of the normal ways, and for many search and reporting activities there is little impact. But for “needle in the haystack” ad-hoc searches, the performance will no longer be optimal. For “dense” searches (searches whose results return most of the data for the time range searched), the performance impact will be minimal. For “sparse” or “needle in the haystack” searches (searches that return very few results), searches that typically return in seconds will now return in minutes. Note: This feature can be selectively applied to any index to provide the greatest amount of flexibility to our customers.
The goal is to apply this feature to data that is less frequently accessed – data for which you are willing to sacrifice some performance in order to gain a very significant cost savings. Splunk specialists can help you set the right policies for the right data.
Q: Do apps and Premium Solutions still work?
A: Yes. Apps and Premium Solutions will work.
Q: How do I control what data is minimized? Can I bring data back to the standard state?
A: You set policy by data age and by the type of data (index). Different data can have different time criteria for minimization. You can return data to the original state if needed. Splunk specialists can help you set the right policies for the right data.
Q: Why does your optimization data take up so much space?
A: Even including the optimization data, Splunk compression techniques have already reduced the customer’s storage requirements by over 50% during indexing. The optimization metadata (TSIDX – time-series index) is what enables the customer to ask any question of their data and handle any type of investigation or use case in real time.
By keeping data in its original unstructured state, Splunk offers the flexibility to ask any question of the data, handling any type of investigation or use case. Splunk structures the answer to each query on the fly, rather than forcing the customer to create a fixed data structure that limits the questions that can be asked. The TSIDX data enables us to deliver this unique flexibility with real-time speed.
Q: Why is the savings range so large (40-80%)?
A: The storage used by TSIDX varies depending on the nature and cardinality (uniqueness) of the data indexed. So the savings will vary as well across data types. Repetitive data fields will have a lower savings while unique (high cardinality) data will see a higher savings. Typical syslog data, for example will fall in the middle – about 60-70%.
High cardinality data returns a higher savings because it requires more index entries to describe it, so when the TSIDX is reduced, the savings are larger. We expect most customers to see average savings of 60% or more.
Platform Security & Management
DMC
In 6.3 we re-worked the Distributed Management Console. In 6.4 we enhanced it further, adding new views and monitoring capabilities for things such as:
- HTTP Event Collector Views - Performance tracking for the HTTP Event Collector feature including breakdowns by authorization token.
- TCP Inputs - A partner to the Forwarder performance views in DMC tracking TCP queue health and other TCP input statistics.
- Deployment Wide Search Statistics - Identify top Search Users across a multi-Search Head deployment including frequent and long running searches.
- Distributed Search View - A dashboard dedicated to tracking metrics for search in distributed deployments. Includes views for bundle replication performance and dispatch directory statistics.
- Resource Usage, I/O - In addition to useful data on CPU and Memory consumption, now also see I/O bandwidth utilization for any Splunk host or across hosts.
- Index Performance, Multi-pipeline - Updated views in the Deployment-wide and Instance-scoped Indexing Performance pages to accommodate multi-pipeline indexing.
- Threshold Control - Fine-grain controls for visual thresholds for DMC displays containing CPU, Memory, Indexing Rate, Search Concurrency, and Up/Down Status.
HTTP Event Collector
In 6.3 we added the HTTP Event Collector. Now we've improved it by enabling unrestricted data formats for payloads (beyond JSON) and data indexing acknowledgements so customers can verify data was received.
SAML
And finally we've added additional single sign-on options for greater flexibility.
Platform Security & Management
Release 6.4 delivers an array of new pre-built visualizations, a visualization developer framework, and an open library to make it simple for customers to access, develop and share interactive visualizations
15 new pre-built visualizations help customers analyze and interact with data sets commonly found in IT, security, and machine learning analysis
A new developer framework allows customers and partners to easily create or customize any visualization to suit their needs
Splunkbase now contains a growing library of visualizations provided by Splunk, our partners and our community
Doubles the visualizations in Splunk today and creates an open environment for the unlimited creation and sharing of new visualizations
Once a visual is imported from Splunkbase it is treated the same as any native Splunk feature, and is available for general use in the Visualizations dropdown.
15 new pre-built visualizations help customers analyze and interact with data sets commonly found in IT, security, and machine learning analysis. We surveyed our customers and the field to choose an initial set that would meet many common needs.
The new Event Sampling feature makes it faster to characterize very large datasets and focus your investigations. It is an integrated option of Search, offering a dropdown menu to control sampling: 1 per 10, 100, 1,000, 10,000, etc.
Performance scales accordingly – a 1-per-1,000 sample search runs roughly 1,000x faster.
Main algorithm used – Kalman filter
Algorithmic improvements
Support bi-variate time series by taking covariance between the individual time series into account.
Predict for multiple time series at the same time - this treats individual time series independently, i.e. without computing covariance
Predicting missing values in time series and accounting for that during prediction via missing value imputation methods (i.e., “No value was recorded, but it was most likely 5”)
Use Splunk Ninja App and Demo Instructions
For more information, or to try out the features yourself. Check out the overview app which explains each of the features and includes code samples and examples where applicable.
<This section should take ~15 minutes> Search is the most powerful part of Splunk.
The Splunk search language is very expressive and can perform a wide variety of tasks, ranging from filtering data, to munging it, to reporting on it. The results can be used to answer questions, visualize results, or be sent to a third-party application in whatever format it requires.
There are over 135 documented search commands; however, most questions can be answered by using just a handful.
These are the five commands you should get very familiar with. If you know how to use these well, you will be able to solve most data questions that come your way. Let’s take a quick look at each of these.
<Walk through the examples with a demo. Hidden slides are available as backup. NOTE: Each of the grey boxes is clickable. If you are running Splunk on port 8000 you won’t have to type in the searches, this will save time.>
Note: Chart is just stats visualized. Timechart is just stats by _time visualized.
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Sum of KB"
sourcetype=access*
| stats values(useragent) avg(bytes) max(bytes) by clientip
Eventstats lets you add statistics about the entire result set and makes those statistics available as fields on each event.
<Walk through the examples with a demo. Hidden slides are available as backup>
Let’s use eventstats to create a timechart of the average bytes on top of the overall average.
index=* sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
We can turn this into a moving average simply by adding “by date_hour” to calculate the average per hour instead of the overall average.
index=* sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
Streamstats calculates statistics for each event at the time the event is seen. For example, if I had an event with a temperature reading, I could use streamstats to create a new field showing the temperature difference between that event and one or more previous events. Similar to the delta command, but more powerful. In this example, I'm going to take the bytes field of my access logs and see how much total data is being transferred over time.
To create a cumulative sum:
sourcetype=access*
| timechart sum(bytes) as bytes
| streamstats sum(bytes) as cumulative_bytes
| timechart max(cumulative_bytes)
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
Bonus: This could also be completed using the trendline command with the simple moving average (sma) parameter:
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| trendline sma10(avg_bytes) as moving_average_bytes
| timechart latest(avg_bytes) latest(moving_average_bytes)
Double Bonus: Cumulative sum by period
sourcetype=access*
| timechart span=15m sum(bytes) as cumulative_bytes by status
| streamstats global=f sum(cumulative_bytes) as bytes_total
A transaction is any group of related events that span time. It's quite useful for finding overall durations — for example, how long it took a user to complete a transaction. This really shows the power of Splunk: if you are sending all your data to Splunk, you have data from multiple subsystems (think database, web server, and app server), so you can see both the overall time a transaction takes AND how long each subsystem takes. Many customers use this to quickly pinpoint whether slowness comes from the network, the database, or the app server.
NOTE: Many transactions can be re-created using stats. Transaction is easy, but stats is far more efficient and it's a mappable command (more work will be distributed to the indexers).
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
There is much more each of these commands can be used for. Check out answers.splunk.com and docs.splunk.com for many more examples.
Android coming soon!
Now go do this Fu in your own environment!
But don’t just say you know the “Fu”…
<If you have time, feel free to show one of your favorite commands or a neat use case of a command. The cluster command is provided here as an example >
“There are over 135 splunk commands, the five you have just seen are incredibly powerful. Here is another to add to your arsenal.”
You can use the cluster command to learn more about your data and to find common and/or rare events in your data. For example, if you are investigating an IT problem and you don't know specifically what to look for, use the cluster command to find anomalies. In this case, anomalous events are those that aren't grouped into big clusters or clusters that contain few events. Or, if you are searching for errors, use the cluster command to see approximately how many different types of errors there are and what types of errors are common in your data.
Decrease the threshold of similarity and see the change in results
sourcetype=access* | cluster field=bc_uri showcount=t t=0.1 | table cluster_count bc_uri _raw | sort -cluster_count