Splunk Enterprise 6.4 delivers a new library of interactive visualizations and faster analytics, and can reduce your historical data storage costs by up to 80%.
See how you can:
• Use new interactive visualizations to view results, and easily create and share your own
• Speed investigation and discovery of large-scale data with event sampling
• Reduce storage costs by up to 80% for aged data
• Get wider visibility into system performance and health with new management views
With the new features and lower storage costs offered by Splunk Enterprise 6.4, doing big data analysis is now easier than ever. See it in action by attending this webinar.
How to Design, Build and Map IT and Business Services in Splunk (Splunk)
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand.
Taking Splunk to the Next Level - Architecture (Splunk)
This session, led by Michael Donnelly, teaches you how to take your Splunk deployment to the next level. Learn about Splunk high-availability architectures with Splunk Search Head Clustering and Index Replication, and how to use Splunk's operational and management controls to manage capacity and the end-user experience.
Here are some key considerations for architecting a Splunk application:
- Define a data model and taxonomy - Map data sources to common schemas and entities. This allows for unified search, reporting and alerts.
- Partition data appropriately - Separate apps by function, team, data type or other logical boundaries. Consider security, scalability and maintenance.
- Choose input methods based on data volume and type - Streaming for high volume, modular/scripted for custom parsing. Consider HTTP Event Collector, TCP or file monitors.
- Design for scalability - Distribute data and workloads across multiple Splunk instances. Consider sharding, clustering, load balancing.
- Implement modular and reusable components - Custom searches, lookups
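One of the input methods named above, the HTTP Event Collector (HEC), accepts JSON events over HTTP with token authentication. A minimal sketch of a client for it follows; the host, port, token, and sourcetype name are all placeholders for illustration, not values from the text:

```python
# Minimal sketch of posting an event to the Splunk HTTP Event Collector.
# The URL, token, and sourcetype below are hypothetical placeholders.
import json
import urllib.request

def build_hec_request(base_url, token, event, sourcetype="my_app:json"):
    """Build an HTTP POST request for HEC's /services/collector endpoint."""
    payload = json.dumps({"event": event, "sourcetype": sourcetype}).encode("utf-8")
    return urllib.request.Request(
        url=base_url + "/services/collector",
        data=payload,
        headers={
            "Authorization": "Splunk " + token,  # HEC uses token auth
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_hec_request(
    "https://splunk.example.com:8088",
    "REPLACE-WITH-TOKEN",
    {"action": "login", "user": "alice", "status": "success"},
)
# urllib.request.urlopen(req)  # uncomment to actually send the event
```

Building the request separately from sending it keeps the example runnable without a live Splunk instance; in practice you would batch events and reuse the connection for high-volume sources.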
This document discusses how Splunk can be used for DevOps. It defines DevOps as integrating development and operations. It then discusses some common DevOps metrics like culture, process, quality, systems, activity, and impact metrics. It explains that machine data from across the development lifecycle and IT operations is a critical source of DevOps metrics. The document provides examples of how Splunk can provide visibility and collect machine data from various parts of the development and operations environments, like code review, version control, CI/build servers, testing, releases, and infrastructure systems. It discusses how Splunk can be used to increase delivery velocity, improve code quality, and enable data-driven continuous delivery for DevOps teams.
This document discusses how Herbalife, a company that produces health and wellness products, uses Splunk to monitor their global ecommerce website and applications. It describes how Splunk has improved their operational visibility and issue resolution by enabling logging of web, SQL, application, and development data across their four data centers. Splunk has helped them scale from 10GB to 50GB of data in six months, improve mean time to resolution from days to minutes, and support over 250 users accessing logs and metrics.
Data-Driven DevOps: Mining Machine Data for "Metrics that Matter" (Splunk)
Splunk's Andi Mann addresses what he refers to as the real core of DevOps: increasing collaboration, communication, integration and delivery of better, faster software; the human side of DevOps, combined with the business impacts.
What's New in Splunk Enterprise 6.5 (Splunk)
Guided ML
Splunk Enterprise
Splunk Cloud
Splunk Light
Splunk Analytics for Hadoop
Splunk User Behavior Analytics
Splunk IT Service Intelligence
Splunk Security Essentials
Splunk App for AWS
Splunk App for Cisco
Splunk App for VMware
Splunk App for Microsoft
Splunk App for PCI
Splunk App for ServiceNow
Splunk App for SAP
Splunk App for Oracle
Splunk App for Salesforce
Splunk App for Workday
Splunk App for Marketo
This document provides a summary of the Distributed Management Console (DMC) 6.2 from Splunk. It discusses the continuous investment in management and monitoring capabilities. It provides a history of Splunk's monitoring tools and describes the DMC architecture. It demonstrates the DMC's search head clustering, indexer clustering, indexes and volumes, and forwarder monitoring views which provide insights into deployments. It also shows the topology view that visually represents distributed Splunk installations.
Justin Hardeman is a Unix administrator at Availity LLC, a company that processes over 2 billion healthcare transactions annually. He has over 5 years of experience using Splunk for monitoring Availity's large, multi-datacenter infrastructure consisting of 500+ virtual machines. Splunk has allowed Availity to move from a reactive to proactive approach by providing real-time visibility into issues, transactions, and workflows across their environment.
This document summarizes Marc Chipouras' presentation on how CA Technologies uses Splunk to gain insights from log data generated by their Agile Central SaaS application. Originally, CA Technologies captured Apache logs and moved the large volumes of log data to a data warehouse, which created ETL challenges. They introduced the Kafka messaging system to decouple log production from consumption. Splunk then became a log consumer from Kafka, addressing data access, insight dashboarding, and customer problem identification needs without requiring complex ETL processes. With Splunk, CA Technologies' teams can now make faster, data-driven decisions to better serve customers from log data.
Elevate your Splunk Deployment by Better Understanding your Value Breakfast S... (Splunk)
This document discusses how to better understand the value of a Splunk deployment through assessing data sources. It presents a data source assessment tool to map data sources to use cases and organizational groups to identify opportunities. The tool shows which data sources are indexed and overlap between groups. It aims to maximize benefits from machine data by supporting business objectives and enabling broader impact.
Getting Started with Splunk Enterprise Hands-On Breakout Session (Splunk)
This document provides an overview and demonstration of Splunk Enterprise. It discusses what machine data is and Splunk's mission to make it accessible. The presentation covers installing and onboarding data into Splunk, performing searches, creating dashboards and alerts. It also summarizes deployment architectures for Splunk and options for support and learning more.
Exact is a Dutch software company that provides business management software. They implemented Splunk to gain operational visibility, business insights, proactive monitoring, and search/investigation capabilities across their infrastructure supporting 350,000 companies in 7 countries. Splunk helped Exact lower their resolution times by 75% and scale their infrastructure while keeping the same team size to support exponential growth of adding 250 new companies per day.
Getting Started with Splunk Enterprise Hands-On (Splunk)
This document provides an overview and demonstration of Splunk software. The agenda includes downloading Splunk and an overview of its key features for searching machine data, field extraction, dashboards, alerting, and analytics. The presenter then demonstrates installing and onboarding sample data, performing searches, and using pivots. Deployment architectures are discussed, along with scaling to hundreds of terabytes per day. Resources such as documentation, support, and the Splunk user conference are also mentioned.
Splunk FISMA for Continuous Monitoring (Greg Hanchin)
Splunk for Continuous Monitoring provides visibility, reporting, and search capabilities across IT systems and infrastructure using a single solution. It reduces IT costs by solving various challenges with one tool that runs on modern platforms and indexes machine-generated data from various sources and formats. Dashboards and views are tailored for different roles like executives, compliance, security, and IT operations to monitor security control effectiveness and changes over time in compliance with NIST guidelines for continuous monitoring.
This document provides an overview and examples of data onboarding in Splunk. It discusses best practices for indexing data, such as setting the event boundary, date, timestamp, sourcetype and source fields. Examples are given for onboarding complex JSON, simple JSON and complex CSV data. Lessons learned from each example highlight issues like properly configuring settings for nested or multiple timestamp fields. The presentation also introduces Splunk capabilities for collecting machine data beyond logs, such as the HTTP Event Collector, Splunk MINT and the Splunk App for Stream.
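The onboarding settings described above (event boundary, timestamp, sourcetype) are typically configured in a props.conf stanza. A hypothetical sketch for a JSON sourcetype with an embedded ISO-8601 timestamp follows; the stanza name and field layout are illustrative, not from the talk:

```ini
# Hypothetical props.conf stanza for onboarding single-line JSON events.
[my_app:json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Anchor timestamp extraction to the "timestamp" field to avoid
# picking up other date-like values nested in the event.
TIME_PREFIX = "timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
KV_MODE = json
```

Pinning `TIME_PREFIX` and `MAX_TIMESTAMP_LOOKAHEAD` addresses the multiple-timestamp pitfall the lessons learned call out: without them, Splunk may latch onto the wrong date in the event.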
During the presentation, forward-looking statements were made regarding Splunk's plans and estimates that are subject to risks and uncertainties. Any information about Splunk's roadmap outlines general product direction but is subject to change without notice. Splunk undertakes no obligation to develop or include any described feature in a future release. The presentation demonstrated Splunk's IoT analytics capabilities for manufacturing including predictive maintenance, advanced monitoring, and self-service analytics.
Getting Started with Splunk Enterprise Hands-On (Splunk)
This document provides an overview and demonstration of Splunk software. It outlines an agenda to discuss why Splunk, how to install and use Splunk through a live demonstration, Splunk deployment architectures, and communities for help. The live demonstration shows importing sample data, performing searches, creating alerts and dashboards. It also discusses pivoting, field extraction, analytics, and scaling Splunk deployments.
Splunk is a time-series data platform that handles the three V's of data (volume, velocity, and variety) very well. It collects, indexes, and allows searching and analysis of data. Splunk can collect data from files, directories, network ports, programs/scripts, and databases. It breaks data down into searchable events and builds a high-performance index. This allows users to search, manipulate, and visualize data in reports, charts, and dashboards. Splunk can analyze structured, unstructured, and multistructured data from various sources like logs, networks, clicks, and more.
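As a sketch of that search-and-visualize workflow, an SPL query over a hypothetical web-access sourcetype (the index and sourcetype names are assumptions for illustration) might look like:

```spl
index=web sourcetype=access_combined status>=500
| timechart span=5m count BY host
```

The first line filters the indexed events down to server errors; `timechart` then buckets them into five-minute intervals per host, ready to render as a chart or dashboard panel.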
This document discusses DevOps concepts and how Splunk can be used to power DevOps initiatives. It defines key DevOps terms like continuous deployment, continuous delivery, push vs. pull deployments. It also outlines how Splunk provides visibility across the application development lifecycle from coding to testing to production. Example use cases are presented that leverage Splunk data and analytics to improve developer productivity, deployment health, and operational efficiency. The document promotes transforming organizations to DevOps using Splunk to provide a unified platform for data-driven insights.
Splunk Ninjas: New Features, Pivot, and Search Dojo (Splunk)
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
This document discusses Splunk for developers. It provides an overview of empowering developers with Splunk, building Splunk apps, and gaining application intelligence across the development lifecycle. Key points include instrumenting application logs for insights, integrating and extending Splunk, building unit testing and code integration, and gaining end-to-end visibility across development tools. The document also discusses resources for Splunk developers including tutorials, code samples, SDKs, and developer licenses.
This document provides a summary of new features and enhancements in Splunk Enterprise & Cloud version 6.3. Key highlights include improved performance and scale through search and index parallelization, intelligent job scheduling, expanded support for DevOps and IoT through the new HTTP Event Collector, and enhanced analytics and visualization capabilities such as anomaly detection and geospatial mapping. The documentation was also redesigned to be more user-friendly.
Machine Learning and Analytics Breakout Session (Splunk)
This document provides an overview of machine learning and how it can be used with Splunk. It discusses what machine learning is, the different types of machine learning, and common use cases in IT operations, security, and business analytics. It also summarizes how machine learning can be implemented using Splunk, including exploring data, building models, applying and validating models, and operationalizing models. The document encourages attendees to try out the free Splunk Machine Learning Toolkit and Showcase app.
This document discusses Splunk's developer platform and resources for building applications on Splunk. It provides an overview of empowering developers through application intelligence, building Splunk apps, and integrating and extending Splunk. The document discusses Splunk for application development and challenges such as lack of visibility and limited insights. It describes gaining end-to-end visibility across development tools using Splunk and pushing better code using analytics in Splunk. Resources mentioned include Splunk's developer license, tutorials on their developer website, GitHub, and blogs.
Come and learn from our experts on ways to improve you IT Operational Visibility by using Splunk for monitoring environment health. In this hands-on session we will cover recommended approaches for end to end monitoring, across applications, OSes, and devices. Topics will include: critical services to monitor, use of the Splunk Common Information Model (CIM) for cross-dataset normalization, commonly deployed apps and TAs to gather data for IT infrastructure uses, and use of pre-made dashboard panels to quickly build dashboards for monitoring your environment.
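CIM normalization pays off in accelerated searches against the shared data models. As a hedged sketch (the `Web` data model is standard CIM, but the destination field values are illustrative), a tstats query counting server errors per destination might be:

```spl
| tstats count FROM datamodel=Web WHERE Web.status=5* BY Web.dest
```

Because `tstats` runs against the accelerated data model rather than raw events, the same question can be answered across many sourcetypes at once, which is the point of normalizing to the CIM.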
This document discusses how Staples uses Splunk to gain insights from machine data across their organization. It provides details on:
- Staples' Splunk infrastructure consisting of 8 index servers and 9 search heads that can handle 1TB of data per day.
- The key use cases of operational support, application insights, and business intelligence.
- How Splunk provides a single pane of glass for visibility across their web apps, servers, monitoring tools, and more.
- Examples of how Splunk has helped identify issues, reduced resolution times, and optimized website searches to improve the customer experience.
Splunk is a software company headquartered in San Francisco with additional offices in London and Hong Kong. They have over 2,100 employees and annual revenue of $668.4 million, growing 49% year-over-year. Their products include Splunk Enterprise, Splunk Cloud, and other solutions for collecting, analyzing, and visualizing machine-generated data from websites, applications, sensors, and other sources. Splunk has over 11,000 customers across more than 110 countries, including 80 of the Fortune 100. Their largest customer indexes over 1 petabyte of data per day.
The document provides an overview of new features in Splunk Enterprise 6, including more powerful analytics capabilities for both technical and non-technical users. Key updates include an intuitive pivot interface that allows drag-and-drop report building without knowledge of the search language, defined data models to represent relationships in machine data, and an analytics store that can accelerate searches and reports up to 1000 times faster than previous versions. The release also includes simplified cluster management for large enterprise deployments and enhanced developer tools.
SplunkLive! Splunk Enterprise 6.3 - Data On-boarding (Splunk)
This document discusses Splunk Enterprise 6.3, a platform for machine data that provides breakthrough performance, scale, and total cost of ownership reductions. Key features highlighted include doubling search and indexing speed, increasing capacity by 20-50%, and reducing TCO by over 20%. Advanced analysis and visualization capabilities are improved, along with support for high-volume event collection, enterprise-scale requirements, and development tools. Demo apps showcase custom visualizations and machine learning functionality.
Justin Hardeman is a Unix administrator at Availity LLC, a company that processes over 2 billion healthcare transactions annually. He has over 5 years of experience using Splunk for monitoring Availity's large, multi-datacenter infrastructure consisting of 500+ virtual machines. Splunk has allowed Availity to move from a reactive to proactive approach by providing real-time visibility into issues, transactions, and workflows across their environment.
This document summarizes Marc Chipouras' presentation on how CA Technologies uses Splunk to gain insights from log data generated by their Agile Central SaaS application. Originally, CA Technologies captured Apache logs and moved the large volumes of log data to a data warehouse, which created ETL challenges. They introduced the Kafka messaging system to decouple log production from consumption. Splunk then became a log consumer from Kafka, addressing data access, insight dashboarding, and customer problem identification needs without requiring complex ETL processes. With Splunk, CA Technologies' teams can now make faster, data-driven decisions to better serve customers from log data.
Elevate your Splunk Deployment by Better Understanding your Value Breakfast S...Splunk
This document discusses how to better understand the value of a Splunk deployment through assessing data sources. It presents a data source assessment tool to map data sources to use cases and organizational groups to identify opportunities. The tool shows which data sources are indexed and overlap between groups. It aims to maximize benefits from machine data by supporting business objectives and enabling broader impact.
Getting Started with Splunk Enterprise Hands-On Breakout SessionSplunk
This document provides an overview and demonstration of Splunk Enterprise. It discusses what machine data is and Splunk's mission to make it accessible. The presentation covers installing and onboarding data into Splunk, performing searches, creating dashboards and alerts. It also summarizes deployment architectures for Splunk and options for support and learning more.
Exact is a Dutch software company that provides business management software. They implemented Splunk to gain operational visibility, business insights, proactive monitoring, and search/investigation capabilities across their infrastructure supporting 350,000 companies in 7 countries. Splunk helped Exact lower their resolution times by 75% and scale their infrastructure while keeping the same team size to support exponential growth of adding 250 new companies per day.
Getting Started with Splunk Enterprise Hands-OnSplunk
This document provides an overview and demonstration of Splunk software. The agenda includes downloading Splunk, an overview of its key features for searching machine data, field extraction, dashboards, alerting, and analytics. The presenter then demonstrates installing and onboarding sample data, performing searches, and using pivots. deployment architectures are discussed along with scaling to hundreds of terabytes per day. Questions areas like documentation, support, and the Splunk user conference are also mentioned.
Splunk FISMA for Continuous Monitoring Greg Hanchin
Splunk for Continuous Monitoring provides visibility, reporting, and search capabilities across IT systems and infrastructure using a single solution. It reduces IT costs by solving various challenges with one tool that runs on modern platforms and indexes machine-generated data from various sources and formats. Dashboards and views are tailored for different roles like executives, compliance, security, and IT operations to monitor security control effectiveness and changes over time in compliance with NIST guidelines for continuous monitoring.
This document provides an overview and examples of data onboarding in Splunk. It discusses best practices for indexing data, such as setting the event boundary, date, timestamp, sourcetype and source fields. Examples are given for onboarding complex JSON, simple JSON and complex CSV data. Lessons learned from each example highlight issues like properly configuring settings for nested or multiple timestamp fields. The presentation also introduces Splunk capabilities for collecting machine data beyond logs, such as the HTTP Event Collector, Splunk MINT and the Splunk App for Stream.
During the presentation, forward-looking statements were made regarding Splunk's plans and estimates that are subject to risks and uncertainties. Any information about Splunk's roadmap outlines general product direction but is subject to change without notice. Splunk undertakes no obligation to develop or include any described feature in a future release. The presentation demonstrated Splunk's IoT analytics capabilities for manufacturing including predictive maintenance, advanced monitoring, and self-service analytics.
Getting Started with Splunk Enterprise Hands-OnSplunk
This document provides an overview and demonstration of Splunk software. It outlines an agenda to discuss why Splunk, how to install and use Splunk through a live demonstration, Splunk deployment architectures, and communities for help. The live demonstration shows importing sample data, performing searches, creating alerts and dashboards. It also discusses pivoting, field extraction, analytics, and scaling Splunk deployments.
Splunk is a time-series data platform that handles the three V's of data (volume, velocity, and variety) very well. It collects, indexes, and allows searching and analysis of data. Splunk can collect data from files, directories, network ports, programs/scripts, and databases. It breaks data down into searchable events and builds a high-performance index. This allows users to search, manipulate, and visualize data in reports, charts, and dashboards. Splunk can analyze structured, unstructured, and multistructured data from various sources like logs, networks, clicks, and more.
This document discusses DevOps concepts and how Splunk can be used to power DevOps initiatives. It defines key DevOps terms like continuous deployment, continuous delivery, push vs. pull deployments. It also outlines how Splunk provides visibility across the application development lifecycle from coding to testing to production. Example use cases are presented that leverage Splunk data and analytics to improve developer productivity, deployment health, and operational efficiency. The document promotes transforming organizations to DevOps using Splunk to provide a unified platform for data-driven insights.
Splunk Ninjas: New Features, Pivot, and Search DojoSplunk
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
This document discusses Splunk for developers. It provides an overview of empowering developers with Splunk, building Splunk apps, and gaining application intelligence across the development lifecycle. Key points include instrumenting application logs for insights, integrating and extending Splunk, building unit testing and code integration, and gaining end-to-end visibility across development tools. The document also discusses resources for Splunk developers including tutorials, code samples, SDKs, and developer licenses.
This document provides a summary of new features and enhancements in Splunk Enterprise & Cloud version 6.3. Key highlights include improved performance and scale through search and index parallelization, intelligent job scheduling, expanded support for DevOps and IoT through the new HTTP Event Collector, and enhanced analytics and visualization capabilities such as anomaly detection and geospatial mapping. The documentation was also redesigned to be more user-friendly.
Machine Learning and Analytics Breakout SessionSplunk
This document provides an overview of machine learning and how it can be used with Splunk. It discusses what machine learning is, the different types of machine learning, and common use cases in IT operations, security, and business analytics. It also summarizes how machine learning can be implemented using Splunk, including exploring data, building models, applying and validating models, and operationalizing models. The document encourages attendees to try out the free Splunk Machine Learning Toolkit and Showcase app.
This document discusses Splunk's developer platform and resources for building applications on Splunk. It provides an overview of empowering developers through application intelligence, building Splunk apps, and integrating and extending Splunk. The document discusses Splunk for application development and challenges such as lack of visibility and limited insights. It describes gaining end-to-end visibility across development tools using Splunk and pushing better code using analytics in Splunk. Resources mentioned include Splunk's developer license, tutorials on their developer website, GitHub, and blogs.
Come and learn from our experts on ways to improve you IT Operational Visibility by using Splunk for monitoring environment health. In this hands-on session we will cover recommended approaches for end to end monitoring, across applications, OSes, and devices. Topics will include: critical services to monitor, use of the Splunk Common Information Model (CIM) for cross-dataset normalization, commonly deployed apps and TAs to gather data for IT infrastructure uses, and use of pre-made dashboard panels to quickly build dashboards for monitoring your environment.
This document discusses how Staples uses Splunk to gain insights from machine data across their organization. It provides details on:
- Staples' Splunk infrastructure consisting of 8 index servers and 9 search heads that can handle 1TB of data per day.
- The key use cases of operational support, application insights, and business intelligence.
- How Splunk provides a single pane of glass for visibility across their web apps, servers, monitoring tools, and more.
- Examples of how Splunk has helped identify issues, reduced resolution times, and optimized website searches to improve the customer experience.
Splunk is a software company headquartered in San Francisco with additional offices in London and Hong Kong. They have over 2,100 employees and annual revenue of $668.4 million, growing 49% year-over-year. Their products include Splunk Enterprise, Splunk Cloud, and other solutions for collecting, analyzing, and visualizing machine-generated data from websites, applications, sensors, and other sources. Splunk has over 11,000 customers across more than 110 countries, including 80 of the Fortune 100. Their largest customer indexes over 1 petabytes of data per day.
The document provides an overview of new features in Splunk Enterprise 6, including more powerful analytics capabilities for both technical and non-technical users. Key updates include an intuitive pivot interface that allows drag-and-drop report building without knowledge of the search language, defined data models to represent relationships in machine data, and an analytics store that can accelerate searches and reports up to 1000 times faster than previous versions. The release also includes simplified cluster management for large enterprise deployments and enhanced developer tools.
SplunkLive! Splunk Enterprise 6.3 - Data On-boarding (Splunk)
This document discusses Splunk Enterprise 6.3, a platform for machine data that provides breakthrough performance, scale, and total cost of ownership reductions. Key features highlighted include doubling search and indexing speed, increasing capacity by 20-50%, and reducing TCO by over 20%. Advanced analysis and visualization capabilities are improved, along with support for high-volume event collection, enterprise-scale requirements, and development tools. Demo apps showcase custom visualizations and machine learning functionality.
The document discusses a security session presented by Philipp Drieger. It begins with a safe harbor statement noting that any forward-looking statements are based on current expectations and actual results could differ. The agenda includes Splunk for security, Enterprise Security, and Splunk User Behavior Analytics. It provides examples of how Splunk can be used to detect threats like fraud and advanced persistent threats by analyzing machine data from various sources. It also discusses how threat intelligence can be incorporated using STIX/TAXII standards and open IOCs. Customer examples show how Nasdaq and Cisco have replaced their SIEMs with Splunk to gain better scalability and flexibility.
What's New in Splunk Cloud and Enterprise 6.5 (Splunk)
This document provides an overview and agenda for what's new in Splunk Cloud and Enterprise 6.5. It introduces new features for easier data preparation and analysis through intuitive table views. Extended platform and management capabilities include integrated Hadoop features for storage flexibility and automated management tools. New machine learning analytics allow for predictive analytics through packaged and custom models. Additional developer resources are introduced to simplify app development and certification. The presentation concludes with details on liberalized licensing terms and resources for getting started with Splunk.
Splunk is a big data company founded in 2004 that provides a platform for collecting, indexing, and analyzing machine-generated data. It has over 5,000 customers in over 80 countries across various industries. Splunk's software can handle large volumes of machine data, scaling to terabytes per day and thousands of users. It collects and indexes machine data from various sources like logs, metrics, and applications without needing prior knowledge of schemas or custom connectors.
Machine Data 101: Turning Data Into Insight is a presentation about using Splunk software to analyze machine data. It discusses topics such as:
- What machine data is and examples of common sources like log files, social media, call center systems
- How Splunk indexes machine data from various sources in real-time regardless of format
- Techniques for enriching data in Splunk like tags, field aliases, calculated fields, event types, and lookups from external data sources
- Examples of collecting non-traditional data sources into Splunk like network data, HTTP events, databases, and mobile app data
The presentation provides an overview of Splunk's machine data platform and techniques for analyzing and enriching machine data.
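The lookup-based enrichment described above can be illustrated with a minimal sketch. This is not Splunk's implementation, just the core idea: events carry a terse code, and an external table (the analogue of a Splunk CSV lookup) maps each code to a human-readable field that gets merged into the event. All names here (`enrich`, `status_lookup`, the sample hosts) are hypothetical.

```python
def enrich(events, lookup):
    """Attach extra fields to each event by looking up its status code.

    Mirrors the idea of a Splunk lookup: unmatched codes fall back to a
    default, so every enriched event has the same set of fields.
    """
    enriched = []
    for event in events:
        extra = lookup.get(event["status"], {"description": "unknown"})
        # Merge the original event with the looked-up fields.
        enriched.append({**event, **extra})
    return enriched


# External "lookup table" mapping status codes to descriptions.
status_lookup = {
    "404": {"description": "Not Found"},
    "500": {"description": "Internal Server Error"},
}

events = [
    {"host": "web01", "status": "404"},
    {"host": "web02", "status": "200"},  # no match -> "unknown"
]

for e in enrich(events, status_lookup):
    print(e)
```

In Splunk itself the same effect is achieved declaratively (e.g. with the `lookup` search command or an automatic lookup), with no per-event code to write.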
Covering some of the latest announcements from Splunk's user conference (.conf), an add-on created for Splunk config files, and the presentation delivered at .conf18 on SplDevOps!
Splunk Enterprise is a software platform for searching, monitoring, and analyzing machine-generated big data, such as logs, metrics, and mobile data. The presentation provided an overview of Splunk Enterprise capabilities including: live demonstrations of installing Splunk, searching data, creating dashboards and alerts. It also covered Splunk deployment architectures for scaling from single instances to distributed environments supporting hundreds of terabytes per day.
This document provides an overview and demonstration of Splunk Enterprise. It discusses Splunk's capabilities for indexing, searching, and analyzing machine data from various sources. The live demonstration shows how to install Splunk, import sample data, perform searches, create dashboards and alerts. It also covers Splunk's deployment architecture and scalability options. Attendees are encouraged to ask questions on Splunk's online communities and support channels.
This document introduces Splunk Enterprise & Splunk Cloud Release 6.4. It highlights new features including unlimited custom visualizations, enhanced predictive analytics, expanded cloud services monitoring, improved platform security and management, and storage cost reductions of up to 80% for historical data in Splunk Enterprise. The release aims to help users get more value from big data while lowering storage costs.
An overview of Splunk Enterprise 6.3. Presented by Splunk's Jim Viegas at GTRI's Splunk Tech Day, December 8, 2015.
Visit http://www.gtri.com/ for more information.
Here’s your chance to get hands-on with Splunk for the first time! Bring your modern Mac, Windows, or Linux laptop and we’ll go through a simple install of Splunk. Then, we’ll load some sample data, and see Splunk in action – we’ll cover searching, pivot, reporting, alerting, and dashboard creation. At the end of this session you’ll have a hands-on understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll experience practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
The document discusses Nordstrom's use of Splunk for log aggregation and analytics across their IT systems. Some key points:
- Nordstrom uses Splunk to consolidate machine data from various systems like point-of-sale devices, web servers, and monitoring tools for unified visibility.
- Splunk has been adopted organically by over 300 users across Nordstrom who use it for tasks like performance monitoring, troubleshooting, and building custom reports.
- Nordstrom applies DevOps principles and tools to manage their large and distributed Splunk deployment, with components like search heads, indexers, and deployment servers. Configuration is managed through version control.
The document discusses how Staples uses Splunk for operational support, application insights, and business intelligence across their infrastructure. Staples relies on Splunk for real-time visibility into the health of their Advantage website and business/operational analytics. Splunk provides comprehensive insights into Staples' infrastructure and helps map application performance to user experience. It has saved Staples numerous times by quickly detecting issues. Adoption of Splunk at Staples has grown organically as more teams see its benefits.
This document summarizes a presentation about Splunk's platform. It discusses Splunk's mission of helping customers create value faster with insights from their data. It provides statistics on Splunk's daily ingest and users. It highlights examples of how Splunk has helped customers in areas like internet messaging and convergent services. It also discusses upcoming challenges and new capabilities in Splunk like federated search, flexible indexing, ingest actions, improved data onboarding and management, and increased platform resilience and security.
Getting Started with Splunk Breakout Session (Splunk)
This document provides an overview and introduction to Splunk Enterprise. It begins with an agenda that outlines discussing Splunk Enterprise, a live demonstration of using Splunk, deployment architecture, the Splunk community, and a Q&A. It then discusses how Splunk can unlock insights from machine data generated from various sources. The live demo shows installing Splunk, forwarding sample data, and performing searches. It also discusses deploying Splunk at scale, distributed architectures, and support resources available through the Splunk community.
This summary provides an overview of a presentation about Splunk:
1. The presentation introduces Splunk, an enterprise software platform that allows users to search, monitor, and analyze machine-generated big data for security, IT and business operations.
2. Key components of Splunk include universal forwarders for data collection, indexers for data storage and search heads for data visualization. Splunk supports data ingestion from various sources like servers, databases, applications and sensors.
3. A demo section shows how to install Splunk, ingest sample data, perform searches, set up alerts and reports. It also covers dynamic field extraction, the search command language and Splunk applications.
A Lap Around Developer Awesomeness in Splunk 6.3 (Glenn Block)
The document discusses new developer features in Splunk 6.3, including the HTTP Event Collector for sending events directly to Splunk, custom alert actions for integrating with external systems, and improved custom search commands. It also demonstrates some of these features, such as sending events using the HTTP Event Collector and creating custom alert actions. Additional sessions are recommended for learning more about modular inputs, reference apps, and other developer tools in Splunk.
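The HTTP Event Collector (HEC) mentioned above accepts JSON events over HTTPS, authenticated with a token in the `Authorization: Splunk <token>` header, at the `/services/collector/event` endpoint. A minimal sketch of building such a request with only the standard library follows; the host and token are placeholders you would replace with your own.

```python
import json
import urllib.request

# Placeholder endpoint and token -- substitute your own HEC settings.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"


def build_hec_request(event, sourcetype="demo"):
    """Build a POST request in the shape HEC expects.

    The body wraps the event payload in an {"event": ...} envelope and can
    carry metadata such as sourcetype alongside it.
    """
    payload = json.dumps({"event": event, "sourcetype": sourcetype})
    return urllib.request.Request(
        HEC_URL,
        data=payload.encode("utf-8"),
        headers={
            "Authorization": "Splunk " + HEC_TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_hec_request({"message": "deployment finished", "status": "ok"})
print(req.get_full_url())
# urllib.request.urlopen(req) would send the event to a live HEC endpoint.
```

Splunk also provides SDKs and logging-library integrations that wrap this endpoint, but the raw HTTP shape above is what they all produce.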
Splunk for Industrial Data and the Internet of Things (aliciasyc)
The IoT is a natural evolution of the world's networks. Just as people became more connected by devices and applications during the explosion of the social media revolution, devices, sensors and industrial equipment are also becoming more connected, consuming and generating data at an unprecedented pace. Disparate, widely deployed connected devices can provide a unique touchpoint to real-world operations and conditions. Only a few architectures and applications are designed to handle the constant streams of real-time events, sensor readings, user interactions and application data produced by massive numbers of connected devices. Use Splunk to collect, index and harness the power of the machine data generated by connected devices and machines deployed on your local network or around the world.
Splunk Cloud and Splunk Enterprise 7.2 provide enhanced capabilities for data ingestion, visualization, and analytics powered by artificial intelligence and machine learning. New features include guided data onboarding, metrics search performance improvements, smart data tiering for cost optimization, and accessibility enhancements. These updates aim to empower more users and accelerate business value from machine learning.
Splunk Cloud and Splunk Enterprise 7.2 provide breakthrough performance, scale, and manageability. Key features include SmartStore for cost-effective data management, workload management to prioritize analytics workloads, and accessibility enhancements to enable more users. The release also expands AI/ML capabilities and delivers intuitive metrics visualization and search.
Splunk Cloud and Splunk Enterprise 7.2 provide enhanced capabilities for data ingestion, visualization, and analytics powered by artificial intelligence and machine learning. New features include guided data onboarding, metrics search performance improvements, workload management for prioritizing queries, and accessibility enhancements. The presentation highlights how these updates help users gain more insights from their machine data and empower more people to explore and analyze data.
What's New with the Latest Splunk Platform Release (Splunk)
This presentation + demo provides an overview of Splunk Cloud and Splunk Enterprise version 7.2, and Splunk Machine Learning Toolkit 4.0 – the customer value proposition, supporting customer stories, and high-level technical details.
What is Splunk? At the end of this session you’ll have a high-level understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll see practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
Alle Neuigkeiten im letzten Plattform Release (Splunk)
This session and demo provide an overview of Splunk Cloud and Splunk Enterprise version 7.2 and Splunk Machine Learning Toolkit 4.0: the value for users, customer examples, and high-level technical details.
.conf Go 2023 - Raiffeisen Bank International (Splunk)
This document discusses standardizing security operations procedures (SOPs) to increase efficiency and automation. It recommends storing SOPs in a code repository for versioning and referencing them in workbooks which are lists of standard tasks to follow for investigations. The goal is to have investigation playbooks in the security orchestration, automation and response (SOAR) tool perform the predefined investigation steps from the workbooks to automate incident response. This helps analysts automate faster without wasting time by having standard, vendor-agnostic procedures.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu... (Splunk)
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?" ("The right recipe for the digital (security) revolution toward Telematics Infrastructure 2.0 in healthcare?")
Speaker: Stefan Stein -
Team Lead CERT | gematik GmbH, M.Eng. IT Security & Forensics,
doctoral student at TH Brandenburg & Universität Dresden
The document describes Cellnex's transition from a Security Operations Center (SOC) to a Computer Security Incident Response Team (CSIRT). The transition was driven by Cellnex's growth and the need to automate processes and tasks to improve efficiency. Cellnex implemented Splunk SIEM and SOAR to automate the creation, remediation, and closure of incidents. This allowed staff to focus on strategic tasks and improved KPIs such as resolution times and emails analyzed.
.conf Go 2023 - El camino hacia la ciberseguridad (ABANCA) (Splunk)
This document summarizes ABANCA's journey toward cybersecurity with Splunk, from adding dedicated security profiles in 2016 to becoming a monitoring and response center with more than 1TB of daily ingest and 350 use cases aligned with MITRE ATT&CK. It also describes mistakes made and solutions implemented, such as normalizing data sources and training operators, and current pillars such as automation, visibility, and alignment with MITRE ATT&CK. Finally, it points out remaining challenges.
Splunk - BMW connects business and IT with data driven operations SRE and O11y (Splunk)
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
The document is a presentation on cyber security trends and Splunk security products from Matthias Maier, Product Marketing Director for Security at Splunk. The presentation covers trends in security operations like the evolution of SOCs, new security roles, and data-centric security approaches. It also provides updates on Splunk's security portfolio including recognition as a leader in SIEM by Gartner and growth in the SIEM market. Maier highlights some breakout sessions from the conference on topics like asset defense, machine learning, and building detections.
Data foundations building success, at city scale – Imperial College London (Splunk)
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen... (Splunk)
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out the SOC and significantly increasing its maturity level. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
Top challenges faced in improving security posture
Key KPIs implemented in order to measure success
Strategies and approaches applied in the SOC
How MITRE ATT&CK and Splunk Enterprise Security were utilised
Next steps in their maturity journey ahead
This document summarizes a presentation about observability using Splunk. It includes an agenda introducing observability and the case for Splunk as an observability platform. It discusses the need for modernization initiatives in companies and the thousands of changes they require. It shows that Splunk provides end-to-end visibility across metrics, traces, and logs to detect, troubleshoot, and optimize systems. It shares a customer case study of Accenture using Splunk observability in their hybrid cloud environment. Finally, it concludes that observability with Splunk can drive results like reduced downtime and faster innovation.
This document contains slides from a Splunk presentation covering the following topics:
- Updated Splunk logo and information about meetings in Zurich and sales engineering leads
- Ideas for confused or concerned human figures in design concepts
- Three buckets of challenges around websites slowing, apps being down, and supply chain issues
- Accelerating mean time to detect, identify, respond and resolve through cyber resilience with Splunk
- Unifying security, IT and DevOps teams
- Splunk's technology vision focusing on customer experience, hybrid/edge, unleashing data lakes, and ubiquitous machine learning
- Gaining operational resilience through correlating infrastructure, security, application and user data with business outcomes
The document appears to be a presentation from Splunk on security topics. It includes sections on cyber security resilience, the data-centric modern SOC, application monitoring at scale, threat modeling, security monitoring journeys, self-service Splunk infrastructure, the top 3 CISO priorities of risk based alerting, use case development, a security content repository, security PVP (posture, vision, and planning) and maturity assessment, and concludes with an overview of how Splunk can provide end-to-end visibility across an organization.
John Eccleshare, Head of Compliance and Information Security at bet365, discusses bet365's migration of their Splunk deployment to Splunk Cloud. Some key points:
- bet365 processed 3 TB of data per day in their on-prem Splunk deployment but scaling it for new use cases was challenging.
- Migrating to Splunk Cloud improved performance, enhanced security capabilities, and freed up 4 FTEs by reducing maintenance and upgrade work.
- Lessons learned included needing more business input on requirements and migrating sooner for increased agility. Recommendations included running parallel deployments during migration and using professional services.
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
Northern Engraving | Nameplate Manufacturing Process - 2024 (Northern Engraving)
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
What is an RPA CoE? Session 2 – CoE Roles (DianaGray10)
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
"$10 thousand per minute of downtime: architecture, queues, streaming and fin... (Fwdays)
Direct losses from downtime in 1 minute = $5-$10 thousand dollars. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for developing highly loaded fintech solutions. We will focus on using queues and streaming to work efficiently with large amounts of data in real time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
Must Know Postgres Extension for DBA and Developer during Migration (Mydbops)
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities.
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow up the below links.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
Essentials of Automations: Exploring Attributes & Automation Parameters (Safe Software)
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor Ivaniuk (Fwdays)
At this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022. We'll see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
QA or the Highway - Component Testing: Bridging the gap between frontend appl... (zjhamm304)
These are the slides for the presentation, "Component Testing: Bridging the gap between frontend applications" that was presented at QA or the Highway 2024 in Columbus, OH by Zachary Hamm.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an... (Jason Yip)
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc... (DanBrown980551)
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors (DianaGray10)
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors, including:
- Creating a compelling user experience for any software, without the limitations of APIs
- Accelerating the app creation process, saving time and effort
- Enjoying high-performance CRUD (create, read, update, delete) operations for seamless data management
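The CRUD surface described above can be sketched with a minimal in-memory stand-in. This is illustrative only; a real Integration Service connector proxies these calls to the target system (Salesforce, ServiceNow, etc.) rather than storing records locally.

```python
# Minimal sketch of the CRUD surface a prebuilt connector exposes to an app.
# An in-memory stand-in for illustration; a real connector would forward
# these operations to the underlying SaaS API.
class ConnectorSketch:
    def __init__(self):
        self._records = {}
        self._next_id = 1

    def create(self, fields: dict) -> int:
        rid = self._next_id
        self._next_id += 1
        self._records[rid] = dict(fields)
        return rid

    def read(self, rid: int) -> dict:
        return self._records[rid]

    def update(self, rid: int, fields: dict) -> None:
        self._records[rid].update(fields)

    def delete(self, rid: int) -> None:
        del self._records[rid]
```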
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
High performance Serverless Java on AWS - GoTo Amsterdam 2024 (Vadym Kazulkin)
Java has been one of the most popular programming languages for many years, but it used to have a hard time in the serverless community. Java is known for high cold-start times and a high memory footprint compared to other languages like Node.js and Python. In this talk I'll look at the general best practices and techniques we can use to decrease memory consumption and cold-start times for Java serverless development on AWS, including GraalVM Native Image and AWS's own offering SnapStart, which is based on Firecracker microVM snapshot and restore and CRaC (Coordinated Restore at Checkpoint) runtime hooks. I'll also provide extensive benchmarking of Lambda functions, trying out various deployment package sizes, Lambda memory settings, Java compilation options, and HTTP (a)synchronous clients, and measure their impact on cold and warm start times.
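The cold/warm-start distinction the talk benchmarks can be sketched as follows. The talk's subject is Java, but this Python stand-in just shows the pattern: in AWS Lambda, module-level code runs once per execution environment, so a module-level flag lets a handler report whether an invocation paid the cold-start cost. Names here are illustrative.

```python
import time

# Module-level initialization runs once per execution environment in Lambda;
# only the first invocation in this process sees _COLD == True.
_COLD = True

def handler(event: dict) -> dict:
    global _COLD
    started = time.perf_counter()
    cold, _COLD = _COLD, False  # flip the flag after the first call
    # ... real work would happen here ...
    return {
        "cold_start": cold,
        "duration_ms": (time.perf_counter() - started) * 1000,
    }
```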
2. Participate in the live demo! Get prepared now!
● We want YOU! To generate live data in this webinar!
● Go with your mobile phone to: http://splunk.com/shake
– Shake your mobile phone once we say it during the webinar!
– On iOS 9, allow access to the sensor:
‣ Settings > General > Accessibility
‣ Half-way down -> “Shake to undo” -> Set to off
4. Notes
• After the webinar you’ll get an E-Mail containing:
• Recording of the Webinar
• Link to Slideshare with this Presentation
• Link to the Splunk 6.4 What’s new App on Splunkbase
• Ask your questions during the webinar and we will go through them in a Q&A session at the end
5. Safe Harbor Statement
During the course of this presentation, we may make forward-looking statements regarding future events
or the expected performance of the company. We caution you that such statements reflect our current
expectations and estimates based on factors currently known to us and that actual events or results could
differ materially. For important factors that may cause actual results to differ from those contained in our
forward-looking statements, please review our filings with the SEC. The forward-looking statements
made in this presentation are being made as of the time and date of its live presentation. If reviewed
after its live presentation, this presentation may not contain current or accurate information. We do not
assume any obligation to update any forward looking statements we may make. In addition, any
information about our roadmap outlines our general product direction and is subject to change at any
time without notice. It is for informational purposes only and shall not be incorporated into any contract
or other commitment. Splunk undertakes no obligation either to develop the features or functionality
described or to include any such feature or functionality in a future release.
6. Agenda
● Short introduction into Splunk
● What’s new in Splunk Enterprise 6.4
● Live Demo of new functionalities
● Q&A Session
7. Splunk Company Overview
Company
• Global HQs: San Francisco, London, Hong Kong
• 1,800+ employees globally
• Annual revenue: $450.9M (YoY +49%)
• NASDAQ: SPLK
Products
• Free trial to massive scale
• Splunk products: Splunk Enterprise, Splunk Cloud, Hunk, Splunk Light, Splunk MINT, Premium Solutions
Customers
• 10,000+ customers
• Across 100 countries
• Small to large organizations
• More than 80 of the Fortune 100
• Largest license: 400+ Terabytes/day
9. Turning Machine Data Into Business Value
Index Untapped Data: Any Source, Type, Volume
• Data sources: online services, web services, servers, security, GPS location, storage, desktops, networks, packaged applications, custom applications, messaging, telecoms, online shopping cart, web clickstreams, databases, energy meters, call detail records, smartphones and devices, RFID
• Deployment: on-premises, private cloud, public cloud
Ask Any Question
• Use cases: application delivery; security, compliance and fraud; IT operations; business analytics; industrial data and the Internet of Things
10. Splunk Enterprise & Cloud 6.4
● Storage TCO Reduction
– TSIDX Reduction reduces historical data storage TCO by 40%+
● Platform Security & Management
– Improved DMC
– New SSO Options
– Improved Event Collector
● New Interactive Visualizations
– New Pre-built Visualizations
– Open Community Library
– Event Sampling and Predict
11. TSIDX Reduction
Provides up to 40-80% storage reduction
● Retention policy on TSIDX files
● Creates “mini” TSIDX files
● Trade-off between storage costs and search performance
– Rare vs. dense searches
● *Limited functionality loss
● Can restore original TSIDX files if needed
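A per-index retention policy can be sketched in indexes.conf. The setting names below follow the Splunk 6.4 documentation for TSIDX reduction; treat this as a sketch and verify the names against the documentation for your version before use.

```ini
# indexes.conf -- sketch of a per-index TSIDX reduction policy
# (setting names per Splunk 6.4 docs; verify against your version)
[aged_data_index]
enableTsidxReduction = true
# reduce tsidx files for buckets older than ~30 days (value in seconds)
timePeriodInSecBeforeTsidxReduction = 2592000
```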
12. Splunk Enterprise & Cloud 6.4
● Storage TCO Reduction
– TSIDX Reduction reduces historical data storage TCO by 40%+
● Platform Security & Management
– Improved DMC
– New SSO Options
– Improved Event Collector
● New Interactive Visualizations
– New Pre-built Visualizations
– Open Community Library
– Event Sampling and Predict
13. Management & Platform Enhancements
● Management
– Distributed Management Console
‣ New monitoring views for scheduler, Event Collector, system I/O performance
– Delegated admin roles
● HTTP Event Collector
– Unrestricted data for payloads
– Data indexing acknowledgement
● SAML Identity Provider Support
– OKTA, Azure AD, ADFS
[Diagram: SAML support (OKTA, Azure AD, ADFS, Ping Federate); Event Collector receiving from AWS IoT]
14. Splunk Enterprise & Cloud 6.4
● Storage TCO Reduction
– TSIDX Reduction reduces historical data storage TCO by 40%+
● Platform Security & Management
– Improved DMC
– New SSO Options
– Improved Event Collector
● New Interactive Visualizations
– New Pre-built Visualizations
– Open Community Library
– Event Sampling and Predict
15. Custom Visualizations
Unlimited new ways to visualize your data
● 15 new interactive visualizations useful for IT, security, IoT, business analysis
● Open framework to create or customize any visual
● Visuals shared via Splunkbase library
● Available for any use: search, dashboards, reports…
16. New Custom Visualizations
Treemap, Sankey Diagram, Punchcard, Calendar Heat Map, Parallel Coordinates, Bullet Graph, Location Tracker, Horseshoe Meter, Machine Learning Charts, Timeline, Horizon Chart
Multiple use cases across IT, security, IoT, and business analytics
17. Event Sampling
• Powerful search option provides unbiased sample results
• Useful to quickly determine dataset characteristics
• Speeds large-scale data investigation and discovery
Optimizes query performance for big data analysis
18. Predict Command Enhancements
• Time-series forecasting
• New algorithms:
– Support bivariate time series with covariance
– Predict multiple series independently
– Predict missing values within series
• 80-100X performance improvement
Forecast Trends and Predict Missing Values
21. Participate in the live demo! Get prepared now!
● We want YOU! To generate live data sent to the new HTTP Event
Collector in this webinar!
● Go with your mobile phone to: http://splunk.com/shake
– Shake your mobile phone once we say it during the webinar!
– On iOS 9, allow access to the sensor:
‣ Settings > General > Accessibility
‣ Half-way down -> “Shake to undo” -> Set to off
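As a sketch of what the demo's shake events look like on the wire, the snippet below builds an HTTP Event Collector request. The endpoint path and `Authorization: Splunk <token>` header follow the HEC API; the host, token, and sourcetype are placeholders you would replace with your own.

```python
import json
from urllib import request

def build_hec_request(host: str, token: str, event: dict):
    """Assemble the URL, headers, and JSON body for a HEC event POST."""
    url = f"https://{host}:8088/services/collector/event"
    headers = {"Authorization": f"Splunk {token}"}
    body = json.dumps({"event": event, "sourcetype": "mobile:shake"})
    return url, headers, body

def send_event(host: str, token: str, event: dict) -> int:
    """POST one event to HEC; requires a reachable collector endpoint."""
    url, headers, body = build_hec_request(host, token, event)
    req = request.Request(url, data=body.encode(), headers=headers)
    with request.urlopen(req) as resp:
        return resp.status
```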
Splunk has more than 1,800 employees worldwide, with our global headquarters in San Francisco. Our 10,000+ customers in 100 countries are using Splunk software and cloud services to improve service levels, reduce operations costs, mitigate security risks, enable compliance, enhance DevOps collaboration and create new product and service offerings.
Our products are designed to fit your needs and are built to be as frictionless to deploy as possible. Simply download Splunk software, or sign up for the online sandbox, point it at your data, and you'll be up and running in minutes.
Please always refer to latest company data found here: http://www.splunk.com/company.
At Splunk, our mission is to make machine data accessible, usable and valuable to everyone. And this overarching mission is what drives our company and product priorities.
Splunk products are being used for data volumes ranging from gigabytes to hundreds of terabytes per day. Splunk software and cloud services reliably collect and index machine data, from a single source to tens of thousands of sources, all in real time. Once data is in Splunk Enterprise, you can search, analyze, report on and share insights from your data. The Splunk Enterprise platform is optimized for real-time, low-latency interactivity, making it easy to explore, analyze and visualize your data. This is described as Operational Intelligence.
The insights gained from machine data support a number of use cases and can drive value across your organization.
[In North America]
Splunk Cloud is available in North America and offers Splunk Enterprise as a cloud-based service – essentially empowering you with Operational Intelligence without any operational effort.
Let’s start with TCO & Performance Improvements.
*Limited functionality loss refers to not being able to use tstats on TSIDX-reduced data. This is because you no longer have the full tsidx files.
Extra Material:
Q: How does it affect performance? Can I still search the data?
A: You can access the data in all of the normal ways, and for many search and reporting activities there is little impact. But for “needle in the haystack” ad-hoc searches, the performance will no longer be optimal. For “dense” searches (searches whose results return most of the data for the time range searched), the performance impact will be minimal. For “sparse” or “needle in the haystack” searches (searches that return very few results), searches that typically return in seconds will now return in minutes. Note: This feature can be selectively applied to any index to provide the greatest amount of flexibility to our customers.
The goal is to apply this feature to data that is less frequently accessed – data for which you are willing to sacrifice some performance in order to gain a very significant cost savings. Splunk specialists can help you set the right policies for the right data.
Q: Do apps and Premium Solutions still work?
A: Yes. Apps and Premium Solutions will work.
Q: How do I control what data is minimized? Can I bring data back to the standard state?
A: You set policy by data age and by the type of data (index). Different data can have different time criteria for minimization. You can return data to the original state if needed. Splunk specialists can help you set the right policies for the right data.
Q: Why does your optimization data take up so much space?
A: Even including the optimization data, Splunk compression techniques have already reduced the customer’s storage requirements by over 50% during indexing. The optimization metadata (TSIDX – time-series index) is what enables the customer to ask any question of their data and handle any type of investigation or use case in real time.
By keeping data in its original unstructured state, Splunk offers the flexibility to ask any question of the data, handling any type of investigation or use case. Splunk structures the answer to each query on the fly, rather than forcing the customer to create a fixed data structure that limits the questions that can be asked. The TSIDX data enables us to deliver this unique flexibility with real-time speed.
Q: Why is the savings range so large (40-80%)?
A: The storage used by TSIDX varies depending on the nature and cardinality (uniqueness) of the data indexed. So the savings will vary as well across data types. Repetitive data fields will have a lower savings while unique (high cardinality) data will see a higher savings. Typical syslog data, for example will fall in the middle – about 60-70%.
High cardinality data returns a higher savings because it requires more index entries to describe it, so when the TSIDX is reduced, the savings are larger. We expect most customers will see an overall savings of 60% or more.
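The cardinality effect can be illustrated with a toy inverted index. A time-series index is, in essence, an inverted index mapping each distinct term to the events that contain it, so more unique terms mean more index entries. This is an illustrative stand-in, not Splunk's tsidx format.

```python
from collections import defaultdict

def build_inverted_index(events):
    """Map each distinct term to the list of event ids containing it."""
    index = defaultdict(list)
    for event_id, text in enumerate(events):
        for term in set(text.split()):
            index[term].append(event_id)
    return index

repetitive = ["status ok"] * 4                      # low cardinality
unique = [f"user{i} session{i}" for i in range(4)]  # high cardinality

low = build_inverted_index(repetitive)   # only 2 distinct terms
high = build_inverted_index(unique)      # 8 distinct terms: 4x the entries
```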
Platform Security & Management
DMC
In 6.3 we re-worked the Distributed Management Console. In 6.4 we enhanced it even more adding new views and monitoring capabilities for things such as:
- HTTP Event Collector Views - Performance tracking for the HTTP Event Collector feature including breakdowns by authorization token.
- TCP Inputs - A partner to the Forwarder performance views in DMC tracking TCP queue health and other TCP input statistics.
- Deployment Wide Search Statistics - Identify top search users across a multi-search-head deployment, including frequent and long-running searches.
- Distributed Search View - A dashboard dedicated to tracking metrics for search in distributed deployments. Includes views for bundle replication performance and dispatch directory statistics.
- Resource Usage, I/O - In addition to useful data on CPU and Memory consumption, now also see I/O bandwidth utilization for any Splunk host or across hosts.
- Index Performance, Multi-pipeline - Updated views in the Deployment-wide and Instance-scoped Indexing Performance pages to accommodate multi-pipeline indexing.
- Threshold Control - Fine-grain controls for visual thresholds for DMC displays containing CPU, Memory, Indexing Rate, Search Concurrency, and Up/Down Status.
HTTP Event Collector
In 6.3 we added the HTTP Event Collector. Now we've improved it by enabling unrestricted payload data (beyond JSON) and data indexing acknowledgements so customers can verify data was received.
SAML
And finally, we've added additional single sign-on options for added flexibility.
Release 6.4 delivers an array of new pre-built visualizations, a visualization developer framework, and an open library to make it simple for customers to access, develop and share interactive visualizations
15 new pre-built visualizations help customers analyze and interact with data sets commonly found in IT, security, and machine learning analysis
A new developer framework allows customers and partners to easily create or customize any visualization to suit their needs
Splunkbase now contains a growing library of visualizations provided by Splunk, our partners and our community
Doubles the visualizations in Splunk today and creates an open environment for the unlimited creation and sharing of new visualizations
Once a visual is imported from SplunkBase it is treated the same as any native Splunk feature, and is available for general use in the Visualizations dropdown.
We surveyed our customers and field teams to choose an initial set of 15 pre-built visualizations that meets many common needs across IT, security, and machine learning analysis.
The new Event Sampling feature makes it faster to characterize very large datasets and focus your investigations. It is an integrated option in Search, offering a dropdown menu to control the sampling ratio: 1 per 10, 100, 1,000, 10,000, etc.
Performance scales accordingly: a 1-per-1,000 sampled search runs roughly 1,000x faster.
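The sampling idea can be sketched in Python: keep each event with probability 1/N, then scale the count back up by N for an unbiased estimate. Only about 1/N of the events are examined, which is where the roughly Nx speedup comes from. This is an illustrative estimator, not Splunk's implementation.

```python
import random

def sampled_count(events, predicate, n, rng=random):
    """Estimate the number of matching events from a 1-in-n sample."""
    kept = [e for e in events if rng.random() < 1.0 / n]
    matches = sum(1 for e in kept if predicate(e))
    return matches * n  # unbiased estimate of the true match count

random.seed(42)
events = ["error" if i % 10 == 0 else "ok" for i in range(100_000)]
estimate = sampled_count(events, lambda e: e == "error", 1000)
# true count is 10,000; the estimate fluctuates around it
```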
Main algorithm used: Kalman filter
Algorithmic improvements
Support bi-variate time series by taking covariance between the individual time series into account.
Predict for multiple time series at the same time - this treats individual time series independently, i.e. without computing covariance
Predicting missing values in time series and accounting for that during prediction via missing value imputation methods (i.e., “No value was recorded, but it was most likely 5”)
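The missing-value idea above can be sketched with a toy 1-D Kalman filter under a random-walk model: when an observation is missing, the filter skips the update step and carries the prediction forward as the imputed value. This is illustrative only, not Splunk's implementation.

```python
def kalman_impute(series, process_var=1.0, obs_var=1.0):
    """Fill missing values (None) in a series with Kalman predictions."""
    estimate, variance = series[0], 1.0
    filled = [estimate]
    for obs in series[1:]:
        variance += process_var      # predict step (random-walk model)
        if obs is not None:          # update step, skipped when missing
            gain = variance / (variance + obs_var)
            estimate += gain * (obs - estimate)
            variance *= (1 - gain)
        filled.append(estimate)
    return filled
```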
Use Splunk Ninja App and Demo Instructions
For more information, or to try out the features yourself, check out the overview app, which explains each of the features and includes code samples and examples where applicable.
We’re headed to the East Coast!
2 inspired Keynotes – General Session and Security Keynote + Super Sessions with Splunk Leadership in Cloud, IT Ops, Security and Business Analytics!
165+ Breakout sessions addressing all areas and levels of Operational Intelligence – IT, Business Analytics, Mobile, Cloud, IoT, Security…and MORE!
30+ hours of invaluable networking time with industry thought leaders, technologists, and other Splunk Ninjas and Champions waiting to share their business wins with you!
Join the 50%+ of Fortune 100 companies who attended .conf2015 to get hands on with Splunk. You’ll be surrounded by thousands of other like-minded individuals who are ready to share exciting and cutting edge use cases and best practices. You can also deep dive on all things Splunk products together with your favorite Splunkers.
Head back to your company with both practical and inspired new uses for Splunk, ready to unlock the unimaginable power of your data! Arrive in Orlando a Splunk user, leave Orlando a Splunk Ninja!
REGISTRATION OPENS IN MARCH 2016 – STAY TUNED FOR NEWS ON OUR BEST REGISTRATION RATES – COMING SOON!