Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
This document discusses how Staples uses Splunk to gain insights from machine data across their organization. It provides details on:
- Staples' Splunk infrastructure consisting of 8 index servers and 9 search heads that can handle 1TB of data per day.
- The key use cases of operational support, application insights, and business intelligence.
- How Splunk provides a single pane of glass for visibility across their web apps, servers, monitoring tools, and more.
- Examples of how Splunk has helped identify issues, reduced resolution times, and optimized website searches to improve the customer experience.
The document discusses how Staples uses Splunk for operational support, application insights, and business intelligence across their infrastructure. Staples relies on Splunk for real-time visibility into the health of their Advantage website and business/operational analytics. Splunk provides comprehensive insights into Staples' infrastructure and helps map application performance to user experience. It has saved Staples numerous times by quickly detecting issues. Adoption of Splunk at Staples has grown organically as more teams see its benefits.
Taking Splunk to the Next Level - Architecture Breakout Session (Splunk)
This document provides an agenda for scaling a Splunk deployment beyond initial use cases. It discusses growing use cases and data volume over time. As Splunk becomes mission critical, the document recommends implementing high availability through indexer and search head clustering. It also suggests using a distributed management console and centralized configuration management. Finally, the document briefly discusses Splunk Cloud and hybrid deployments as options to scale without waiting for additional on-premise hardware.
WestJet Airlines is a Canadian airline founded in 1996 that has grown to operate over 425 flights per day to over 90 destinations across North America and Central America. The Solutions Architect at WestJet discusses how they implemented Splunk to gain visibility into their various systems like websites and apps. Splunk has helped WestJet troubleshoot issues faster, identify performance problems, and answer ad-hoc questions by consolidating their logs in one place.
Greg Dostatni is the team lead for application hosting at the University of Alberta. He manages a 10-person team responsible for managing applications and databases across the university. The university implemented Splunk in 2013 to address challenges with siloed data and reactive troubleshooting after restructuring IT operations. Splunk provides centralized logging, real-time monitoring and alerts, and customizable dashboards. This has cut initial incident response from half an hour spent gathering data to investigating issues immediately. Splunk also allows tracking of key metrics, such as authentication system transactions, and performance monitoring across the university's systems.
The document discusses how the U.S. Social Security Administration uses Splunk to gain insights from large amounts of machine data. It summarizes Splunk's implementation at SSA, including indexing over 400 GB of data daily from various sources, and deploying search heads, indexers, and universal forwarders. It also discusses challenges around managing IT assets and how Splunk helps with security, compliance, and asset management.
This document discusses log centralization in cloud environments. It describes FINRA's role as an independent financial industry regulator and how it monitors the stock market and registered brokers. It then discusses challenges of collecting logs from various cloud services (SaaS, IaaS, PaaS) and providers (AWS, Cisco, etc.). It provides examples of using AWS services like CloudTrail, CloudWatch, and Elastic MapReduce with Hadoop to collect and analyze logs and metrics in the cloud.
This document provides an overview of how Garmin International uses Splunk to monitor and analyze machine data. It introduces Tyler Rutschman, a Linux systems administrator at Garmin, and describes how Garmin started using Splunk in 2009 to help with Sarbanes-Oxley compliance. Splunk has provided benefits like reduced mean time to resolution, better reporting capabilities, cost savings, and improved compliance. The implementation collects up to 150 GB of data per day from sources like servers, databases, and load balancers. Future plans include indexer upgrades and adding more Garmin application data to Splunk.
How to Design, Build and Map IT and Business Services in Splunk (Splunk)
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand.
Justin Hardeman is a Unix administrator at Availity LLC, a company that processes over 2 billion healthcare transactions annually. He has over 5 years of experience using Splunk for monitoring Availity's large, multi-datacenter infrastructure consisting of 500+ virtual machines. Splunk has allowed Availity to move from a reactive to proactive approach by providing real-time visibility into issues, transactions, and workflows across their environment.
The document summarizes Battelle's use of Splunk for security monitoring and log management. It describes how Splunk replaced three disparate and difficult to manage log systems, providing a single interface for all security logs. Splunk reduced complexity, increased efficiency of the security team, and allowed them to spend more time on security and less on tool management. The security team uses Splunk for central logging, alerts and monitoring, queries and searches, and reporting to share security information.
This document summarizes Patrick Farrell's role as the Sr. Software Engineer and Splunk administrator at Cardinal Health, a Fortune 500 healthcare company. It describes how Splunk has helped Cardinal Health improve root cause analysis, gather customer usage statistics, increase efficiencies, and provide more proactive customer support. Specifically, Splunk reduced the time to resolve issues from hours to seconds, improved systems uptime and performance, and increased customer satisfaction. The document provides recommendations on best practices for implementing Splunk and describes Cardinal Health's plans to expand Splunk usage.
Getting Started with Splunk Enterprise Hands-On (Splunk)
This document provides an overview and demonstration of Splunk software. The agenda includes downloading Splunk, an overview of its key features for searching machine data, field extraction, dashboards, alerting, and analytics. The presenter then demonstrates installing Splunk, onboarding sample data, performing searches, and using pivot. Deployment architectures are discussed, along with scaling to hundreds of terabytes per day. Documentation, support, and the Splunk user conference are also mentioned.
Power of Splunk Search Processing Language (SPL) (Splunk)
The document discusses Splunk's Search Processing Language (SPL) for searching and analyzing machine data. It provides an overview of SPL and its commands, and gives examples of how SPL can be used for tasks like searching, charting, enriching data, identifying anomalies, transactions, and custom commands. The presentation aims to showcase the power and flexibility of SPL for tasks like searching large datasets, visualizing data, combining different data sources, and extending SPL's capabilities through custom commands.
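To give a feel for that flexibility, here is a minimal SPL sketch in the spirit of the examples described; the index, sourcetype, and field names are placeholders, not taken from the deck:

    index=web sourcetype=access_combined
    | transaction clientip maxpause=10m
    | anomalydetection duration eventcount

Here transaction stitches events from the same client into sessions, and anomalydetection flags sessions whose duration or event count is statistically unusual.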
SplunkLive! Customer Presentation - Penn State Hershey Medical Center (Splunk)
This document discusses Jeff Campbell's role as the Information Security Architect at Penn State Hershey Medical Center and their use of Splunk. It describes how Penn State Hershey Medical Center has over 9,000 employees and a combined $1.5 billion budget across its institutes and hospitals. It outlines some of the challenges they faced with decentralized logging prior to Splunk, and how Splunk provided a centralized log repository allowing for faster searching and correlation across systems. It provides examples of how Penn State Hershey is using Splunk for security use cases, operational improvements, and additional sources. It also discusses their Splunk architecture and future plans to expand Splunk usage.
This document provides an overview and examples of data onboarding in Splunk. It discusses best practices for indexing data, such as setting the event boundary, date, timestamp, sourcetype and source fields. Examples are given for onboarding complex JSON, simple JSON and complex CSV data. Lessons learned from each example highlight issues like properly configuring settings for nested or multiple timestamp fields. The presentation also introduces Splunk capabilities for collecting machine data beyond logs, such as the HTTP Event Collector, Splunk MINT and the Splunk App for Stream.
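As a hedged illustration of those index-time settings, a minimal props.conf sketch for a JSON source; the sourcetype name, regexes, and time format are assumptions, not the presentation's actual config:

    [my_json_app]
    LINE_BREAKER = ([\r\n]+)\{"
    SHOULD_LINEMERGE = false
    TIME_PREFIX = "timestamp":\s*"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
    MAX_TIMESTAMP_LOOKAHEAD = 40
    KV_MODE = json

Setting the event boundary with LINE_BREAKER and pinning the timestamp with TIME_PREFIX and TIME_FORMAT avoids exactly the nested- and multiple-timestamp pitfalls the lessons learned call out.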
This document discusses how Herbalife, a company that produces health and wellness products, uses Splunk to monitor their global ecommerce website and applications. It describes how Splunk has improved their operational visibility and issue resolution by enabling logging of web, SQL, application, and development data across their four data centers. Splunk has helped them scale from 10GB to 50GB of data in six months, improve mean time to resolution from days to minutes, and support over 250 users accessing logs and metrics.
In addition to seeing the latest features in Splunk Enterprise, learn some of the top commands that will solve most search and analytics needs. Ninjas can use these blindfolded. New features will be demonstrated in the following areas: TCO and Performance Improvements, Platform Management, and New Interactive Visualizations.
Splunk Enterprise 6.4 delivers a new library of interactive visualizations, faster analytics, and can reduce your historical data storage costs by up to 80%.
See how you can:
• Use new interactive visualizations to view results, and easily create and share your own
• Speed investigation and discovery of large-scale data with event sampling
• Reduce storage costs by up to 80% for aged data
• Get wider visibility into system performance and health with new management views
With the new features and lower storage costs offered by Splunk Enterprise 6.4, doing big data analysis is now easier than ever. See it in action by attending this webinar.
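The aged-data savings come from TSIDX reduction, enabled per index in indexes.conf; a minimal sketch, with the index name and seven-day window as illustrative assumptions:

    [web_archive]
    enableTsidxReduction = true
    timePeriodInSecBeforeTsidxReduction = 604800

Buckets older than the window keep their raw data searchable but shed most of their index files, trading some search speed on old data for the storage reduction.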
Splunk: How to Design, Build and Map IT Services (Splunk)
This document discusses how to design, build, and map IT and business services in Splunk to gain "service intelligence." It describes a methodology for bringing subject matter experts together to design services top-down before configuration. Specifically, it discusses deconstructing a company's supply chain, online store, and ERP systems into a service map to gain insights on key performance indicators and improve issue resolution, efficiency, and customer satisfaction.
SplunkLive! London: Splunk Ninjas - New Features and Search Dojo (Splunk)
The document discusses new features and enhancements in Splunk 6.4, including improvements to reduce storage costs through TSIDX reduction, enhance platform security and management through features like improved DMC and new SSO options, and new interactive visualizations. It also covers search commands like eval, stats, eventstats, streamstats, and transaction that can solve most data analysis problems, and provides examples of using these commands. Finally, it discusses some tips and tricks for Splunk searches.
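A small sketch of how three of those commands compose; the field, sourcetype, and threshold values are illustrative, not from the session:

    sourcetype=access_combined
    | eval is_error=if(status>=500, 1, 0)
    | stats count, sum(is_error) as errors by host
    | eventstats avg(errors) as avg_errors
    | where errors > avg_errors

eval derives a per-event flag, stats aggregates it per host, and eventstats appends the overall average so each row can be compared against it.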
Exact is a Dutch software company that provides business management software. They implemented Splunk to gain operational visibility, business insights, proactive monitoring, and search/investigation capabilities across their infrastructure supporting 350,000 companies in 7 countries. Splunk helped Exact lower their resolution times by 75% and scale their infrastructure while keeping the same team size to support exponential growth of adding 250 new companies per day.
Cardinal Health is a large healthcare company headquartered in Ohio with 37,000 employees worldwide. The author, Patrick Farrell, is a senior software engineer at Cardinal Health who helped introduce Splunk to the company in 2011. Before Splunk, Cardinal Health struggled to efficiently search and analyze logs across many servers to troubleshoot issues. Splunk helped improve mean time to resolution by consolidating logs and enabling faster search. Cardinal Health now uses Splunk extensively across many teams and platforms for operational support, monitoring, dashboards, and business intelligence. Specific use cases include supporting a large e-commerce platform and improving visibility into an electronic data interchange platform.
The document summarizes Splunk adoption at athenahealth, a cloud-based healthcare services company. It discusses how Splunk has provided athenahealth's security teams visibility into various data sources to help prioritize threats and incidents. Specifically, Splunk Enterprise Security is used by the Security Incident Response Team. Over 10 power users consume 400GB of data per day from hundreds of forwarders. Splunk has improved efficiency, reduced alert fatigue, and allowed for better investigation and correlation of security information.
How to Design, Build and Map IT and Business Services in Splunk (Splunk)
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand. We will design a sample service model and map it to performance indicators that track operational and business objectives. We will also show you how to make Splunk service-aware with Splunk IT Service Intelligence (ITSI).
1) Cisco has been using Splunk Enterprise for over 7 years across many business units and teams, with daily indexing growing from 300GB in 2010 to over 2TB currently.
2) Cisco's Computer Security Incident Response Team (CSIRT) uses Splunk as their security information and event management (SIEM) platform to monitor 350TB of stored data across 60 global users.
3) The presentation discusses how Cisco and some of its customers have successfully deployed Splunk on Cisco Unified Computing System (UCS) servers to scale their Splunk environments and gain benefits of simplified and repeatable deployments.
John Villacres works in network automation and tools at Nationwide. He demonstrated Splunk to colleagues in 2012 and they now use it extensively. Splunk has improved their ability to troubleshoot issues by providing timely access to network data through custom dashboards. It has reduced resolution times for problems from days to minutes by integrating data from sources like firewalls, routers, and packet captures. More teams now use Splunk as its efficiency has allowed employees to take on new tasks while maintaining productivity.
Yahoo Enabling Exploratory Analytics of Data in Shared-service Hadoop Clusters (Brett Sheppard)
The document discusses Hunk, a self-service analytics platform for exploring, visualizing, and analyzing data stored in Hadoop clusters and other data stores. Hunk allows users to rapidly interact with data through an interactive search interface and preview results without waiting for full queries to finish. It provides integrated visualization of data through built-in graphs and charts. Hunk deployment is fast, requiring under 60 minutes to connect to Hadoop clusters and begin searching data.
The document provides an overview of new features in Splunk Enterprise 6.1, including enhanced interactive analytics, embedding operational intelligence, and enabling the mission-critical enterprise. It discusses data models and pivot, which let users analyze data without writing search commands. Finally, it highlights five powerful search commands (eval, stats, eventstats, streamstats, transaction) that can solve most data analysis problems.
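For the running-total and session-grouping patterns among those five, two hedged sketches with placeholder names:

    sourcetype=access_combined action=purchase
    | streamstats count as purchase_num by clientip

    sourcetype=access_combined
    | transaction clientip maxspan=30m
    | table clientip duration eventcount

streamstats keeps a running count per client as events stream by, while transaction groups a client's events into a session and derives duration and eventcount fields.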
SplunkLive! Tampa: Splunk Ninjas: New Features, Pivot, and Search Dojo (Splunk)
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
Splunk Ninjas: New Features, Pivot, and Search Dojo (Splunk)
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
Intershop Commerce Management with Microsoft SQL Server (Mauro Boffardi)
This document discusses Intershop Commerce Management's support for Microsoft SQL Server and Azure SQL Database as operational databases. Key points include:
- Intershop Commerce Management version 7.10 now supports Microsoft SQL Server and Azure SQL Database in addition to Oracle Database.
- Microsoft SQL Server and Azure SQL Database provide features for business intelligence, advanced analytics, data management, and machine learning.
- Organizations have options to use SQL Server on-premises, Azure SQL Database on Azure, or let Intershop manage the database through their commerce-as-a-service offering.
- The document outlines the steps taken to migrate an existing Intershop implementation from Oracle to Microsoft SQL Server, including
This document provides an overview and examples of using the Splunk Search Processing Language (SPL). It discusses SPL commands for searching, filtering, modifying fields, calculating statistics, charting data over time, converging different data sources, identifying transactions, and exploring relationships between fields. Examples are given for common SPL commands like stats, timechart, lookup, appendcols, transaction, and eval. The document is intended to refresh the audience on SPL and provide recipes for common search tasks.
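One such recipe, enriching web events from a lookup and charting them over time; the lookup file and field names are assumptions for illustration:

    sourcetype=access_combined
    | lookup product_info.csv product_id OUTPUT product_name
    | timechart span=1h count by product_name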
Data models provide a hierarchical structure for mapping raw machine data onto conceptual objects and relationships. They encapsulate domain knowledge needed to build searches and reports. Data models allow non-technical users to interact with data via a pivot interface without understanding the underlying data structure or search syntax. When reports are generated from a data model, the search strings are automatically constructed based on the model. Model acceleration can optimize searches by pre-computing search results.
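For example, once a model is accelerated, tstats can query the pre-computed summaries directly instead of scanning raw events; the "Web" model and field names below are placeholders:

    | tstats count from datamodel=Web where Web.status=404 by Web.site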
This document outlines an agenda for an advanced Splunk user training workshop. The workshop covers topics like field aliasing, common information models, event types, tags, dashboard customization, index replication for high availability, report acceleration, and lookups. It provides overviews and examples for each topic and directs attendees to additional documentation resources for more in-depth learning. The workshop also includes demonstrations of dashboard customization techniques and discusses support options through the Splunk community.
Cortana Analytics Workshop: Real-Time Data Processing -- How Do I Choose the ... (MSAdvAnalytics)
Benjamin Wright-Jones, Simon Lidberg. Are you interested in near real-time data processing but confused about Azure capabilities and product positioning? Spark, StreamInsight, Storm (HDInsight) and Stream Analytics offer ways to ingest data but there is uncertainty about when and how we should use these capabilities. For example, what are the differences and key solution design decision points? Come to this session to learn about current and new near real-time data processing engines. Go to https://channel9.msdn.com/ to find the recording of this session.
This document introduces Splunk Enterprise & Splunk Cloud Release 6.4. It highlights new features including unlimited custom visualizations, enhanced predictive analytics, expanded cloud services monitoring, improved platform security and management, and storage cost reductions of up to 80% for historical data with Splunk Enterprise. The release aims to help users get more value from big data while lowering storage costs.
This document provides an overview and examples of using the Splunk Search Processing Language (SPL). It begins with a safe harbor statement noting that forward-looking statements may differ from actual results. The agenda then outlines an overview of SPL anatomy, commands and examples, custom commands, and a Q&A section. Examples are provided for various SPL commands like search, eval, stats, and timechart. It also discusses converging and exploring data through commands like lookup, appendcols, transaction, cluster, correlate and associate. Finally, it briefly introduces the concept of custom commands and examples like Haversine.
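A custom command drops into a pipeline like any built-in one; this sketch assumes a hypothetical haversine command whose argument names may differ from the real app's:

    sourcetype=access_combined
    | iplocation clientip
    | haversine origin="37.7749,-122.4194" units=mi

iplocation is a built-in command that adds latitude and longitude fields, which the custom command can then use to compute distance from the given origin.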
SplunkLive! Presentation - Data Onboarding with Splunk (Splunk)
- The data onboarding process involves systematically bringing new data sources into Splunk to make the data instantly usable and valuable for users
- The process includes pre-boarding activities like identifying the data, mapping fields, and building index-time and search-time configurations
- It also involves deploying any necessary infrastructure, deploying the configurations, testing and validating the data, and getting user approval before the process is complete
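To make the deployment step concrete, a minimal inputs.conf monitor stanza as it might be pushed to forwarders; the path, index, and sourcetype are placeholders:

    [monitor:///var/log/myapp/app.log]
    index = myapp
    sourcetype = myapp:log
    disabled = false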
SplunkLive! Frankfurt 2018 - Data Onboarding Overview (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Splunk Data Collection Architecture
Apps and Technology Add-ons
Demos / Examples
Best Practices
Resources and Q&A
Apache Beam is a unified programming model for batch and streaming data processing. It defines concepts for describing what computations to perform (the transformations), where the data is located in time (windowing), when to emit results (triggering), and how to accumulate results over time (accumulation mode). Beam aims to provide portable pipelines across multiple execution engines, including Apache Flink, Apache Spark, and Google Cloud Dataflow. The talk will cover the key concepts of the Beam model and how it provides unified, efficient, and portable data processing pipelines.
How to Turn Your Data into Actionable Insights (Elasticsearch)
Discover strategic capabilities of the Elastic Stack, including Elasticsearch, a data engine like no other, and Kibana, your window into the Elastic Stack.
In this session, you will learn how to:
- ingest data into the Elastic Stack
- store data
- analyze data
- put your data to work
This document provides an overview and demonstration of Splunk Enterprise. The agenda includes an overview of Splunk, a live demonstration of installing and using Splunk to search, analyze and visualize machine data, a discussion of Splunk deployment architectures, and information on Splunk communities and support resources. The demonstration walks through importing sample data, performing searches, creating a field extraction, building a dashboard, and exploring Splunk's alerting, analytics and pivot interface capabilities.
SplunkLive! Analytics with Splunk Enterprise - Part 2 (Splunk)
This document discusses Splunk's data modeling capabilities and how they enable faster analytics over raw machine data. It introduces data models, which allow domain knowledge to be shared and reused. Data models map data onto hierarchical structures and enable non-technical users to build reports without using the Splunk search language. The document covers best practices for building data models and how pivot searches are generated from the underlying data model objects. It also discusses managing, securing, and accelerating analytics with data models.
Similar to Splunk Ninjas: New Features, Pivot, and Search Dojo
.conf Go 2023 - Raiffeisen Bank International (Splunk)
This document discusses standardizing security operations procedures (SOPs) to increase efficiency and automation. It recommends storing SOPs in a code repository for versioning and referencing them in workbooks, which are lists of standard tasks to follow during investigations. The goal is to have investigation playbooks in the security orchestration, automation and response (SOAR) tool perform the predefined investigation steps from the workbooks, automating incident response. Standard, vendor-agnostic procedures help analysts automate faster without wasted effort.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu... (Splunk)
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?" ("The right recipe for the digital (security) revolution toward Telematik Infrastruktur 2.0 in healthcare?")
Speaker: Stefan Stein, Team Lead CERT, gematik GmbH; M.Eng. IT Security & Forensics; doctoral student at TH Brandenburg & Universität Dresden
The document describes Cellnex's transition from a Security Operations Center (SOC) to a Computer Security Incident Response Team (CSIRT). The transition was driven by Cellnex's growth and the need to automate processes and tasks to improve efficiency. Cellnex implemented Splunk SIEM and SOAR to automate the creation, remediation, and closure of incidents. This allowed staff to focus on strategic tasks and improved KPIs such as resolution times and emails analyzed.
.conf Go 2023 - The Road to Cybersecurity (ABANCA) (Splunk)
This document summarizes ABANCA's journey toward cybersecurity with Splunk, from adding dedicated security roles in 2016 to becoming a monitoring and response center with more than 1TB of daily ingest and 350 use cases aligned with MITRE ATT&CK. It also describes mistakes made and the fixes applied, such as normalizing data sources and training operators, along with its current pillars: automation, visibility, and alignment with MITRE ATT&CK. Finally, it outlines the challenges that remain.
Splunk - BMW connects business and IT with data-driven operations, SRE and O11y (Splunk)
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
The document is a presentation on cyber security trends and Splunk security products from Matthias Maier, Product Marketing Director for Security at Splunk. The presentation covers trends in security operations like the evolution of SOCs, new security roles, and data-centric security approaches. It also provides updates on Splunk's security portfolio including recognition as a leader in SIEM by Gartner and growth in the SIEM market. Maier highlights some breakout sessions from the conference on topics like asset defense, machine learning, and building detections.
Data foundations building success, at city scale – Imperial College London (Splunk)
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen... (Splunk)
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out – and significantly increasing the maturity level – of the SOC. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
- Top challenges faced in improving security posture
- Key KPIs implemented in order to measure success
- Strategies and approaches applied in the SOC
- How MITRE ATT&CK and Splunk Enterprise Security were utilised
- Next steps in their maturity journey ahead
This document summarizes a presentation about observability using Splunk. The agenda introduces observability and the case for Splunk as the observability platform. It discusses the modernization initiatives companies are undertaking and the thousands of changes they require, and shows how Splunk provides end-to-end visibility across metrics, traces, and logs to detect, troubleshoot, and optimize systems. It shares a customer case study of Accenture using Splunk observability in their hybrid cloud environment, and concludes that observability with Splunk can drive results such as reduced downtime and faster innovation.
This document contains slides from a Splunk presentation covering the following topics:
- Updated Splunk logo and information about meetings in Zurich and sales engineering leads
- Ideas for confused or concerned human figures in design concepts
- Three buckets of challenges around websites slowing, apps being down, and supply chain issues
- Accelerating mean time to detect, identify, respond and resolve through cyber resilience with Splunk
- Unifying security, IT and DevOps teams
- Splunk's technology vision focusing on customer experience, hybrid/edge, unleashing data lakes, and ubiquitous machine learning
- Gaining operational resilience through correlating infrastructure, security, application and user data with business outcomes
This document summarizes a presentation about Splunk's platform. It discusses Splunk's mission of helping customers create value faster with insights from their data. It provides statistics on Splunk's daily ingest and users. It highlights examples of how Splunk has helped customers in areas like internet messaging and convergent services. It also discusses upcoming challenges and new capabilities in Splunk like federated search, flexible indexing, ingest actions, improved data onboarding and management, and increased platform resilience and security.
The document appears to be a presentation from Splunk on security topics. It includes sections on cyber security resilience, the data-centric modern SOC, application monitoring at scale, threat modeling, security monitoring journeys, self-service Splunk infrastructure, the top 3 CISO priorities of risk based alerting, use case development, a security content repository, security PVP (posture, vision, and planning) and maturity assessment, and concludes with an overview of how Splunk can provide end-to-end visibility across an organization.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leverage this data for RAG and other GenAI use cases, and finally chart your course to production.
Ocean Lotus Threat Actors project by John Sitima 2024 (1).pptx (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
leewayhertz.com - AI in Predictive Maintenance: Use Cases, Technologies, Benefits ... (alexjohnson7307)
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
A Comprehensive Guide to DeFi Development Services in 2024 (Intelisync)
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
NUnit vs XUnit vs MSTest: Differences Between These Unit Testing Frameworks (flufftailshop)
When it comes to unit testing in the .NET ecosystem, developers have a wide range of options available. Among the most popular choices are NUnit, XUnit, and MSTest. These unit testing frameworks provide essential tools and features to help ensure the quality and reliability of code. However, understanding the differences between these frameworks is crucial for selecting the most suitable one for your projects.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf (Chart Kalyan)
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Dive into the realm of operating systems (OS) with Pravash Chandra Das, a seasoned Digital Forensic Analyst, as your guide. 🚀 This comprehensive presentation illuminates the core concepts, types, and evolution of OS, essential for understanding modern computing landscapes.
Beginning with the foundational definition, Das clarifies the pivotal role of OS as system software orchestrating hardware resources, software applications, and user interactions. Through succinct descriptions, he delineates the diverse types of OS, from single-user, single-task environments like early MS-DOS iterations, to multi-user, multi-tasking systems exemplified by modern Linux distributions.
Crucial components like the kernel and shell are dissected, highlighting their indispensable functions in resource management and user interface interaction. Das elucidates how the kernel acts as the central nervous system, orchestrating process scheduling, memory allocation, and device management. Meanwhile, the shell serves as the gateway for user commands, bridging the gap between human input and machine execution. 💻
The narrative then shifts to a captivating exploration of prominent desktop OSs, Windows, macOS, and Linux. Windows, with its globally ubiquitous presence and user-friendly interface, emerges as a cornerstone in personal computing history. macOS, lauded for its sleek design and seamless integration with Apple's ecosystem, stands as a beacon of stability and creativity. Linux, an open-source marvel, offers unparalleled flexibility and security, revolutionizing the computing landscape. 🖥️
Moving to the realm of mobile devices, Das unravels the dominance of Android and iOS. Android's open-source ethos fosters a vibrant ecosystem of customization and innovation, while iOS boasts a seamless user experience and robust security infrastructure. Meanwhile, discontinued platforms like Symbian and Palm OS evoke nostalgia for their pioneering roles in the smartphone revolution.
The journey concludes with a reflection on the ever-evolving landscape of OS, underscored by the emergence of real-time operating systems (RTOS) and the persistent quest for innovation and efficiency. As technology continues to shape our world, understanding the foundations and evolution of operating systems remains paramount. Join Pravash Chandra Das on this illuminating journey through the heart of computing. 🌟
2. Safe Harbor Statement
During the course of this presentation, we may make forward-looking statements regarding future events or the expected performance of the company. We caution you that such statements reflect our current expectations and estimates based on factors currently known to us and that actual events or results could differ materially. For important factors that may cause actual results to differ from those contained in our forward-looking statements, please review our filings with the SEC. The forward-looking statements made in this presentation are being made as of the time and date of its live presentation. If reviewed after its live presentation, this presentation may not contain current or accurate information. We do not assume any obligation to update any forward-looking statements we may make. In addition, any information about our roadmap outlines our general product direction and is subject to change at any time without notice. It is for informational purposes only and shall not be incorporated into any contract or other commitment. Splunk undertakes no obligation either to develop the features or functionality described or to include any such feature or functionality in a future release.
3. Agenda
What’s new in 6.2
– New features and capabilities
Data Models and Pivot
– Analyze data without using search commands
Harness the power of search
– The 5 search commands that can solve most problems
4. Introducing Splunk Enterprise 6.2
Faster Data Onboarding: Getting Data In, Advanced Field Extractor
Powerful Analytics for a Broader Number of Users: Instant Pivot, Event Pattern Detection, Prebuilt Panels
Breakthrough Scalability and Centralized Mgmt.: Search Head Clustering, Distributed Management Console
6. Getting Data In
New interface makes it easier and faster to onboard any data
• Intuitive wizard-style interface
• Configurable inputs on forwarders
• Improved data preview
• Context-specific FAQs
7. Advanced Field Extractor
Simplified field extractor enables rapid data analysis
• Highlight-to-extract multiple fields at once
• Apply keyword search filters
• Specify required text in extractions
• View diverse and rare events
• Validate extracted values with field stats
10. Instant Pivot
Pivot directly on any search to discover relationships, build reports
• From any search, simply select the Statistics tab and click on the pivot icon
• Explore and analyze data from the Pivot interface
• Quickly discover relationships in the data and build powerful reports
11. Prebuilt Panels
Build dashboards faster using reusable building blocks
• Enhanced dashboard edit workflow
– Browse or search across reports, panels, dashboards and more
– Preview before adding to dashboard
• Personalize your dashboards
• Collaborate using a library of prebuilt panels
• Convert panels to inline to further customize
12. Event Pattern Detection
Auto-discover meaningful patterns in your data with a single click
• Search data without having to know specific terms to search on
• No need to sift through similar events, just select the "Patterns" tab
• Intuitive interface
16. Model, Report, and Accelerate
Data Model – provides a more meaningful representation of underlying raw machine data
Pivot – build complex reports without the search language
Analytics Store – acceleration technology delivers up to 1000x faster analytics over Splunk 5
17. Creating a Data Model
Basic Steps
1. Have a use for a Data Model
2. Write a base search
3. Select the fields to include
18. Data Model Acceleration
• Automatically collected and maintained
• Stored on the indexers
• Must share the Data Model
• Cost is additional disk space
Makes reporting crazy fast
19. Pivot
Build Reports without SPL
• Drag-and-drop interface
• No need to understand underlying data
• Click to visualize
[Screenshot callouts: select fields from the data model, pick the time window, choose any chart type from the chart toolbox, save the report to share]
21. Search Processing Language
search and filter | munge | report | cleanup
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) dc(clientip)
| rename sum(KB) AS "Total KB" dc(clientip) AS "Unique Customers"
22. Five Commands that will Solve Most Data Questions
eval - Modify or Create New Fields and Values
stats - Calculate Statistics Based on Field Values
eventstats - Add Summary Statistics to Search Results
streamstats - Cumulative Statistics for Each Event
transaction - Group Related Events Spanning Time
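The slides that follow show worked examples for stats, eventstats, streamstats, and transaction; eval has no example slide of its own, so here is a minimal sketch using the same access-log fields that appear throughout (illustrative only):
sourcetype=access*
| eval KB=bytes/1024
| eval http_response = if(status == 200, "OK", "Error")
Both calculated fields can then feed the reporting commands, e.g. | stats sum(KB) by http_response.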
26. stats – Calculate Statistics Based on Field Values
Examples
• Calculate stats and rename
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Total KB"
• Multiple statistics
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB)
• By another field
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB) by clientip
29. eventstats – Add Summary Statistics to Search Results
Examples
• Overlay Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
• Moving Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
• By created field
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
| eventstats avg(bytes) AS avg_bytes by http_response
| timechart latest(avg_bytes) avg(bytes) by http_response
32. streamstats – Cumulative Statistics for Each Event
Examples
• Cumulative Sum
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total
| timechart max(bytes_total)
• Cumulative Sum by Field
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
• Moving Average
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
35. transaction – Group Related Events Spanning Time
Examples
• Group by Session ID
sourcetype=access*
| transaction JSESSIONID
• Calculate Session Durations
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)
• Stats is Better
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
38. Learn Them Well and Become a Ninja
eval - Modify or Create New Fields and Values
stats - Calculate Statistics Based on Field Values
eventstats - Add Summary Statistics to Search Results
streamstats - Cumulative Statistics for Each Event
transaction - Group Related Events Spanning Time
See many more examples and neat tricks at docs.splunk.com and answers.splunk.com
Here is what you need for this presentation:
Link to videos on box: <coming soon>
You should have the following installed:
6.2 Overview
OI Demo – Get it from the Technical Enablement Portal under SE Tools -> Demos: https://splunk--c.na2.visual.force.com/apex/LMS_TechnicalEnablementPortal
NOTE: Configure your role to search the oidemo index by default; otherwise you will have to type "index=oidemo" for the examples later on.
There is a lot to cover in this presentation! Try to go quickly and at a pretty high level. When you get through the presentation, judge the audience's interest and go deeper in whichever section they care about: if they want to know more about Pivot and Data Models, unhide those slides and walk through them; if they want to go deeper on the search commands, talk through the extra examples.
If running locally on port 8000, these are the links to have ready in the background:
http://127.0.0.1:8000/en-US/app/oidemo/content_dashboard?form.track_name=Headlines&earliest=0&latest=
http://127.0.0.1:8000/en-US/app/oidemo/data_model_editor?model=%2FservicesNS%2Fnobody%2Foidemo%2Fdatamodel%2Fmodel%2FOIDemo
http://127.0.0.1:8000/en-US/app/oidemo/search
Splunk safe harbor statement.
Splunk Enterprise is the industry-leading platform for Operational Intelligence. Version 6.2 enables organizations to onboard, enrich and analyze machine data faster than ever before, scale to higher numbers of concurrent users and searches, and spend less time managing their large, distributed deployments.
Easier data onboarding and preparation
Getting Data In radically simplifies onboarding of any data source
Advanced Field Extractor enables better preparation of machine data for further analysis
More powerful analytics for everyone
Instant Pivot makes analytics easier by enabling anyone to Pivot directly on data, bypassing the Data Model step
Event Pattern Detection speeds analysis by identifying meaningful patterns in machine data
Prebuilt Panels enables faster dashboard creation by providing the ability to create and package re-usable dashboard building blocks
Simplified management at scale
Search Head Clustering enables horizontal scaling of the search head, doubling the number of concurrent users and searches on the same hardware
Distributed Management Console delivers new management interface to centrally monitor distributed Splunk Enterprise deployments
In Splunk 6.2, we’ve completely remodeled the pages and workflows for adding data, and added new features like Forwarder Inputs and a new Data Preview.
Consolidated Workflow:
We’ve made it much easier to find your way to the appropriate input configuration. Instead of selecting from a confusing list of sources, you start with a simple choice of “upload, monitor, or forward” and then follow a wizard-style workflow to define the appropriate parameters for the data you want to add.
Data Preview
The new Data Preview will make it easier for you to create the right sourcetype for your data. In the advanced section, you’ll be able to choose a charset from a list, and see how changes you make to your sourcetype are reflected in props.conf.
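As a rough sketch of what that looks like on disk, a sourcetype definition in props.conf might resemble the following (the stanza name and every value here are invented for illustration):
# illustrative stanza – name and values are made up
[my_custom_sourcetype]
# charset chosen from the list in the advanced section
CHARSET = UTF-8
# break events on newlines, one event per line
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# timestamp sits at the start of each event
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S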
Forwarder Inputs
With Forwarder Inputs, you are able to push input configurations to Splunk instances configured as deployment clients. Simply select one or more forwarders and provide a group name, and you’ll be able to create data inputs on them in the same way you create inputs through the UI on your indexers.
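The pushed configuration is ordinary inputs.conf content; a monitor stanza like this one (path, sourcetype, and index invented for illustration) is what would land on the selected forwarders:
# illustrative stanza – path, sourcetype, and index are made up
[monitor:///var/log/myapp/access.log]
sourcetype = access_combined
index = oidemo
disabled = false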
With this enhancement, we’ve made it easier to extract fields from your data with the Advanced Field Extractor (AFX). A replacement for the existing field extraction utility, AFX enables you to easily capture multiple fields in a single extraction and specify required text to filter events for extraction (improving accuracy and efficiency). AFX also provides a number of methods for detecting false positives in order to help you validate and improve the accuracy of your field extractions.
Demo GDI and AFX
Example and info here:
https://splunk.box.com/s/zg6964cc15nj9kcldd9w
Instant Pivot enables you to open any query in the Pivot interface, without requiring the creation of a data model. This means you have the flexibility to choose which interface to use to explore your data. It also creates another way to construct data models, starting from search.
When a user clicks on the Pivot icon, an ephemeral data model is created that collects user-specified fields within Pivot as a single, flat object. The user can save their Pivot (and is additionally prompted to save the data model).
Users can choose to instantly Pivot on their data, modify fields, columns, etc. in Pivot, and then convert it back to a search if they need to use advanced search commands.
Instant Pivot allows users to interact with their data faster.
Panels allow users to build custom dashboards faster, leveraging pre-built dashboard panels packaged within apps. A user can select from pre-built reports and dashboards or create their own from the new Add Panel interface.
Event Pattern Detection reduces massive sets of data to their essence rather than making you sift through all events. It can be used to identify common and rare events quickly, or to search your data without having to know specific terms to search on.
If you already understand the “cluster” command in Splunk then you know what this is capable of. A slider bar lets you set the similarity threshold for the events, so you can tune whether patterns are more or less specific, which will increase or reduce the number of patterns.
Example and info here:
https://splunk.box.com/s/zg6964cc15nj9kcldd9w
For more information, or to try out the features yourself, check out the overview app, which explains each of the features and includes code samples and examples where applicable.
<This section should take ~10 minutes>
Data Model – A data model is like a map of the underlying data. It defines meaningful relationships in the data.
Pivot – An interface for analyzing data without using the Splunk search language.
Analytics Store – An option that can be applied to Data Models to make Pivot searches extremely fast. Think of it as our 3rd-generation acceleration technology.
Let’s dig into each of these features
A data model is created by someone who has the domain knowledge of the underlying data. But first, why even create a data model?
Image is clickable
One great reason is so that others can leverage the domain knowledge without having to understand it. Think about it like this: if you are the expert on a particular data set, say web logs, you could build a data model that others can use, and they won’t have to bother you when they want to analyze the data. For example, they won’t have to ask you what a “purchase” looks like in the underlying data; they will be able to simply click on a “Purchase” object. Another bonus of data models is that anyone will be able to analyze the data faster with Pivot. More on Pivot in a bit.
At a high level there are 3 steps to creating a Data Model.
Have a use – If you want to make it easier for users to analyze data themselves, or you want to take advantage of the transparent acceleration technology of the High Performance Analytics Store (HPAS), then you have a good case for a Data Model. Data Models are very cheap: each is just a small JSON file, so by itself it consumes an insignificant amount of resources. Don’t be afraid to make multiple data models, even if they are very similar. For example, you might want one accelerated and one unaccelerated data model over the same data, since you cannot modify an accelerated data model without re-accelerating it.
Write a base search by adding an additional constraint via the “Add Object” dropdown.
Select the fields you want to include using “Add Attribute” dropdown.
Let’s take a look at a data model…
http://127.0.0.1:8000/en-US/app/oidemo/data_model_editor?model=%2FservicesNS%2Fnobody%2Foidemo%2Fdatamodel%2Fmodel%2FOIDemo
Show the OI Data Model
Data Models don’t have to be complex, even having just one root object is fine. Use “Root Event” whenever possible instead of “Root Search”. These searches can be optimized better.
Root search is for generating or streaming commands such as searches that begin with a |
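For instance (an invented example), a Root Search object could sit on a generating search such as:
| inputlookup asset_inventory.csv
Generating commands like inputlookup start with a pipe and return results that are not raw events, which is why they need a Root Search rather than a Root Event.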
Create a simple one if you have time:
Root event: sourcetype=*
Child “Good responses” -> status<400
Child “Bad responses” -> status>=400
If you use instant pivot you can save the underlying data model that is automatically created.
What are the important “things” in your data?
How are they related?
There’s more than one “right” way to define your objects
Constraints filter down to a set of data
Attributes are the fields and knowledge associated with the object
Both are inherited!
A child object is a type of its parent object
Adding a child object is essentially a way of adding a filter on the parents
A parent-child relationship makes it easy to do queries like “What percentage of my downloads are successful?”
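As a sketch of that kind of question in raw SPL (the uri_path constraint and its values are invented; assumes web access logs with a status field):
sourcetype=access* uri_path="/download/*"
| eval outcome=if(status<400, "success", "failure")
| stats count(eval(outcome="success")) AS successful count AS total
| eval pct_successful=round(100*successful/total, 1)
In a data model, a parent "Downloads" object would carry the uri_path constraint and a child "Successful downloads" object would add status<400, letting Pivot users answer the same question without writing this search.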
Event – maps to Splunk events; requires constraints and attributes
Search – maps to an arbitrary Splunk search (may include generating, transforming and reporting search commands); requires search string attributes
Transaction – maps to groups of Splunk events or groups of Splunk search results; requires objects to group, fields/conditions to group by, and attributes
<Briefly mention: it’s easy, it’s fast>
Automatically collected – handles timing issues, backfill…
Automatically maintained – uses the acceleration window
Stored on the indexers – peer to the buckets
Must share the Data Model – acceleration can only be enabled if the Data Model is shared
Cost is additional disk space – roughly 25% additional
New in 6.2: data models with multiple root events are fully accelerated. In 6.1, only the first root event and its children were accelerated.
Why use Pivot?
The Pivot interface enables non-technical and technical users alike to quickly generate charts, visualizations, and dashboards using simple drag and drop, without learning the Search Processing Language (SPL), and without needing domain knowledge of the underlying data.
Queries in the Pivot interface are powered by the underlying “data models” we just spoke about, which define the relationships in machine data.
<demo building a report using pivot, an example is provided in the hidden slides>
<This section should take ~15 minutes>
Search is the most powerful part of Splunk.
The Splunk search language is very expressive and can perform a wide variety of tasks, ranging from filtering data, to munging, to reporting. The results can be used to answer questions, visualize results, or even be sent to a third-party application in whatever format it requires.
There are over 135 documented search commands; however, most questions can be answered with just a handful.
These are the five commands you should get very familiar with. If you know how to use these well, you will be able to solve most data questions that come your way. Let’s take a quick look at each of these.
<Walk through the examples with a demo. Hidden slides are available as backup. NOTE: Each of the grey boxes is clickable. If you are running Splunk on port 8000 you won’t have to type in the searches, this will save time.>
Note: Chart is just stats visualized. Timechart is just stats by _time visualized.
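To make that concrete, these three searches (illustrative, against the same access logs) compute the same averages in different shapes:
sourcetype=access* | stats avg(bytes) by status
sourcetype=access* | chart avg(bytes) by status
sourcetype=access* | timechart avg(bytes)
stats returns a plain table, chart returns the same split ready to visualize, and timechart is effectively stats avg(bytes) by _time drawn over time.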
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Sum of KB"
sourcetype=access*
| stats values(useragent) avg(bytes) max(bytes) by clientip
Eventstats lets you add statistics about the entire search result set and makes those statistics available as fields on each event.
<Walk through the examples with a demo. Hidden slides are available as backup>
Let’s use eventstats to create a timechart of the average bytes on top of the overall average.
index=* sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
We can turn this into a moving average simply by adding “by date_hour” to calculate the average per hour instead of the overall average.
index=* sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
Streamstats calculates statistics for each event at the time the event is seen, similar to the delta command but more powerful. For example, if I had events with a temperature reading, I could use streamstats to create a new field giving the temperature difference between each event and one or more previous events. In the examples below, I’ll take the bytes field of my access logs and see how much total data is being transferred over time.
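That temperature example might look like this (a sketch; the sourcetype and temperature field are invented for illustration):
sourcetype=sensor_readings
| reverse
| streamstats current=f window=1 last(temperature) AS prev_temp
| eval temp_delta=temperature-prev_temp
Here reverse puts events in chronological order, current=f window=1 pulls the previous event’s reading, and eval computes the difference.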
To create a cumulative sum:
sourcetype=access*
| timechart sum(bytes) as bytes
| streamstats sum(bytes) as cumulative_bytes
| timechart max(cumulative_bytes)
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
Bonus: This could also be completed using the trendline command with the simple moving average (sma) parameter:
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| trendline sma10(avg_bytes) as moving_average_bytes
| timechart latest(avg_bytes) latest(moving_average_bytes)
Double Bonus: Cumulative sum per 15-minute period, split by status:
sourcetype=access*
| bin _time span=15m
| stats sum(bytes) as period_bytes by _time, status
| streamstats sum(period_bytes) as bytes_total by status
| timechart span=15m max(bytes_total) by status
A transaction is any group of related events that span time, and it’s quite useful for finding overall durations: for example, how long it took a user to complete a transaction. This really shows the power of Splunk. Think about it: if you are sending all your data to Splunk, you have data from multiple subsystems (think database, web server, and app server), so you can see the overall time a request takes AND how long each subsystem is taking. Many customers use this to quickly pinpoint whether slowness is caused by the network, the database, or the app server.
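A sketch of that cross-tier timing (the app/db sourcetypes and the shared JSESSIONID field are assumptions for illustration):
sourcetype=access* OR sourcetype=app_server OR sourcetype=db_audit
| transaction JSESSIONID
| timechart avg(duration) AS avg_session_duration
transaction’s auto-created duration field spans the first to the last event in each group, so a rising average flags end-to-end slowness; re-running the search per sourcetype narrows the problem to a tier.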
NOTE: Many transactions can be re-created using stats. Transaction is easy, but stats is far more efficient and is a mappable command (more of the work is distributed to the indexers).
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
There is much more each of these commands can be used for. Check out answers.splunk.com and docs.splunk.com for many more examples.
<If you have time, feel free to show one of your favorite commands or a neat use case of a command. The cluster command is provided here as an example>
“There are over 135 Splunk commands; the five you have just seen are incredibly powerful. Here is another to add to your arsenal.”
You can use the cluster command to learn more about your data and to find common and/or rare events in your data. For example, if you are investigating an IT problem and you don't know specifically what to look for, use the cluster command to find anomalies. In this case, anomalous events are those that aren't grouped into big clusters or clusters that contain few events. Or, if you are searching for errors, use the cluster command to see approximately how many different types of errors there are and what types of errors are common in your data.
Decrease the threshold of similarity and see the change in results
sourcetype=access* | cluster field=bc_uri showcount=t t=0.1 | table cluster_count bc_uri _raw | sort -cluster_count
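To surface the rare events instead (an illustrative variation on the same search), sort ascending and keep the smallest clusters:
sourcetype=access* | cluster field=bc_uri showcount=t t=0.1 | table cluster_count bc_uri _raw | sort cluster_count | head 10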