SplunkLive! Munich 2018: Get More From Your Machine Data Splunk & AI (Splunk)
Presented at SplunkLive! Munich 2018:
- Why AI & Machine Learning?
- What is Machine Learning?
- Splunk's Machine Learning Tour
- Use Cases & Customer Stories
SplunkLive! Munich 2018: Intro to Security Analytics Methods (Splunk)
The document provides an introduction and agenda for a presentation on security analytics methods. The agenda includes an intro to analytics methods from 11:40-12:40 followed by a lunch break from 12:40-13:40. The presentation may include forward-looking statements and disclaimers are provided. Information presented is subject to change and any information about product roadmaps is for informational purposes only.
SplunkLive! Munich 2018: Predictive, Proactive, and Collaborative ML with IT ... (Splunk)
This document discusses how machine learning (ML) can be used with IT service intelligence (ITSI) to enable predictive, proactive, and collaborative IT operations. It describes how ML can be applied to analyze machine data using ITSI to predict failures and other notable events. This allows operations teams to be notified earlier of potential issues. The document provides an example of using ITSI's built-in ML and event analytics to cluster similar alerts from thousands of events into meaningful, actionable alerts to improve response time. It also discusses integrating ITSI with chat tools like Slack to immediately notify teams to further reduce resolution times.
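The alert-clustering idea described above can be sketched in a few lines. The normalize-and-bucket approach below is a generic illustration of grouping similar alerts, not ITSI's actual event analytics algorithm; the sample alert strings are invented.

```python
import re
from collections import defaultdict

def normalize(message):
    """Collapse variable parts (numbers, hex IDs) so similar alerts share a key."""
    msg = re.sub(r"0x[0-9a-fA-F]+|\d+", "#", message)
    return re.sub(r"\s+", " ", msg).strip().lower()

def cluster_alerts(alerts):
    """Group raw alert strings into buckets of 'the same' alert."""
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[normalize(alert)].append(alert)
    return clusters

alerts = [
    "Disk /dev/sda1 at 91% on host web-01",
    "Disk /dev/sda1 at 95% on host web-02",
    "Connection timeout after 30s to db-3",
]
clusters = cluster_alerts(alerts)
# The two disk alerts collapse into a single actionable cluster.
```

Real event analytics would use richer similarity measures, but the payoff is the same: thousands of raw events reduce to a handful of clusters an operator can act on.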
SplunkLive! Munich 2018: Getting Started with Splunk Enterprise (Splunk)
The document provides an agenda for a SplunkLive! presentation on installing and using Splunk. It includes downloading required files, importing sample data, conducting searches on the data, and exploring various Splunk features through a live demonstration. Common installation problems are also addressed. The presentation aims to provide attendees with the knowledge and skills to get started using Splunk through hands-on learning and a question and answer session.
SplunkLive! Frankfurt 2018 - Get More From Your Machine Data with Splunk AI (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Why AI & Machine Learning?
What is Machine Learning?
Splunk's Machine Learning Tour
Use Cases & Customer Stories
Wrap Up
SplunkLive! Frankfurt 2018 - Intro to Security Analytics Methods (Splunk)
The document discusses an introductory presentation on security analytics methods. It includes an agenda that covers an introduction to analytics methods, an example scenario, and next steps. It also discusses common security challenges, different analytics methods and types of use cases, and how analytics can be applied to different stages of an attack.
SplunkLive! Frankfurt 2018 - Legacy SIEM to Splunk, How to Conquer Migration ... (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Introduction
SIEM Migration Methodology
Use Cases
Datasources & Data Onboarding
ES Architecture
Third-Party Integrations
You Got This!
SplunkLive! Zurich 2018: Get More From Your Machine Data with Splunk & AI (Splunk)
This presentation discusses how Splunk and machine learning can help organizations get more value from their machine data. It describes how machine learning can improve decision making, uncover hidden trends, alert on deviations, and forecast incidents. The presentation provides an overview of Splunk's machine learning capabilities, including search, packaged solutions, and the machine learning toolkit. It also showcases several customer use cases that have benefited from Splunk's machine learning offerings, such as network incident detection, security/fraud prevention, and optimizing operations.
SplunkLive! Zurich 2018: Legacy SIEM to Splunk, How to Conquer Migration and ... (Splunk)
This document provides an overview of best practices for migrating from a legacy SIEM to Splunk Enterprise Security. It discusses identifying high-value use cases to prioritize for migration. Proper data source onboarding using technologies like the Universal Forwarder and Technology Add-ons is also covered. The presentation recommends planning the target architecture and identifying any necessary third-party integrations. Some preparatory steps customers can take today to get ready for the replacement are also listed.
SplunkLive! Zurich 2018: Monitoring the End User Experience with Splunk (Splunk)
This document discusses using Splunk to gain insights into end user experience and the factors that influence experience. Splunk provides a platform approach to monitor applications across the full technology stack from networks to databases. It can ingest data from various sources, including APM tools, and provide visibility into both instrumented and non-instrumented applications and environments. Splunk also offers predictive analytics capabilities and allows various stakeholders like operations and business teams to access and analyze data. The document demonstrates how Splunk can help organizations improve user experience, application performance, and collaboration between teams.
SplunkLive! Zurich 2018: Integrating Metrics and Logs (Splunk)
This document discusses integrating metrics and logs in Splunk for enhanced troubleshooting and monitoring. It provides an overview of metrics and how they are defined, compared to events. Metrics support in Splunk allows for more efficient aggregation, storage, and analysis of time-series data. Example use cases mentioned include IT operations, application performance monitoring, and IoT. Pricing is still based on uncompressed data volume ingested, with each metrics measurement licensed at around 150 bytes.
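Taking the ~150 bytes per measurement figure above at face value, a quick sizing sketch shows how metric collection translates into daily license volume. The fleet size, metric count, and interval below are made-up examples, and the real charge is whatever your license meter reports.

```python
# Back-of-the-envelope license estimate for metrics ingest,
# assuming ~150 bytes per measurement as quoted in the talk.
BYTES_PER_MEASUREMENT = 150

def daily_metric_volume_gb(hosts, metrics_per_host, interval_seconds):
    """Uncompressed daily volume for a fleet emitting metrics on a fixed interval."""
    measurements_per_day = hosts * metrics_per_host * (86_400 / interval_seconds)
    return measurements_per_day * BYTES_PER_MEASUREMENT / 1e9

# 500 hosts, 100 metrics each, collected every 10 seconds:
volume = daily_metric_volume_gb(500, 100, 10)
print(f"{volume:.1f} GB/day")  # prints "64.8 GB/day"
```

Halving the collection interval doubles the estimate, which is why polling frequency is usually the first knob to turn when sizing a metrics deployment.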
SplunkLive! Zurich 2018: Use Splunk for Incident Response, Orchestration and ... (Splunk)
This document discusses using Splunk for incident response, orchestration, and automation. It notes that incident response currently takes significant time, with containment and response phases accounting for 72% of the time spent on incidents. It proposes that security operations need to change through orchestration and automation using adaptive response. Adaptive response aims to accelerate detection, investigation, and response by centrally automating data retrieval, sharing, and response actions across security tools and domains. This improves efficiency and extracts new insights through leveraging shared context and actions.
This document summarizes information about the Splunk Usergroup Zurich. It mentions that the group has regular Splunk user get-togethers throughout major German-speaking cities, not just Zurich. It hosts frequent Splunk presentations in German and English. The group is not a sales-focused organization and provides a space for users to meet and learn from each other. Interested users can join the group by visiting the listed URL.
SplunkLive! Frankfurt 2018 - Getting Hands On with Splunk Enterprise (Splunk)
This presentation introduces Splunk software. It provides an overview of Splunk capabilities including indexing and searching machine data from various sources. The presentation demonstrates how to install Splunk, onboard sample data, and perform searches including field extractions, dashboards and alerts. It concludes with information on Splunk documentation, support and community resources.
Splunk Discovery: Milan 2018 - Get More From Your Machine Data with Splunk AI (Splunk)
This document discusses machine learning and artificial intelligence capabilities provided by Splunk. It begins by explaining why organizations are adopting AI and machine learning to improve decision making, uncover hidden trends, forecast incidents, and more using diverse real-time data. It then provides an overview of Splunk's machine learning toolkit and capabilities including search, packaged solutions, algorithms, and commands. Examples of applications include anomaly detection, predictive analytics, dynamic thresholding and more. Customer stories demonstrate how organizations are using Splunk's machine learning for security, operations, and other use cases.
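Dynamic thresholding, one of the applications mentioned above, can be illustrated with a trailing-window check: a point is anomalous when it falls outside the recent mean plus or minus a few standard deviations. This is a generic sketch, not the Machine Learning Toolkit's exact implementation, and the sample series is invented.

```python
from statistics import mean, stdev

def dynamic_threshold_outliers(series, window=20, k=3.0):
    """Flag indices whose value falls outside mean +/- k*stdev of the trailing window."""
    outliers = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma and abs(series[i] - mu) > k * sigma:
            outliers.append(i)
    return outliers

steady = [10.0, 10.5, 9.8, 10.2, 10.1] * 5  # 25 quiet points
spiked = steady + [25.0]                    # one obvious spike
print(dynamic_threshold_outliers(spiked))   # prints "[25]"
```

Because the threshold follows the recent data rather than a fixed constant, the same rule adapts to metrics with very different baselines.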
SplunkLive! Paris 2018: Use Splunk for Incident Response, Orchestration and A... (Splunk)
Presented at SplunkLive! Paris 2018:
- Challenges with Security Operations Today
- Overview of Splunk Adaptive Response Initiative
- Technology behind the Adaptive Response Framework
- Demonstrations
- How to build your own AR Action
- Resources
Presented at SplunkLive! Paris 2018: Get More From Your Machine Data With Splunk AI
- Why AI & Machine Learning?
- What is Machine Learning?
- Splunk's Machine Learning Tour
- Use Cases & Customer Stories
SplunkLive! Frankfurt 2018 - Data Onboarding Overview (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Splunk Data Collection Architecture
Apps and Technology Add-ons
Demos / Examples
Best Practices
Resources and Q&A
Latest Updates to Splunk from .conf 2017 Announcements (Harry McLaren)
Session detailing some of the best announcements from the recent Splunk users conference. Delivered at the Splunk User Group in Edinburgh on October 16, 2017.
Why Your Data Science Architecture Should Include a Data Virtualization Tool ... (Denodo)
Watch full webinar here: https://bit.ly/35FUn32
Presented at CDAO New Zealand
Advanced data science techniques, like machine learning, have proven an extremely useful tool to derive valuable insights from existing data. Platforms like Spark, and complex libraries for R, Python, and Scala put advanced techniques at the fingertips of the data scientists.
However, most architecture laid out to enable data scientists miss two key challenges:
- Data scientists spend most of their time looking for the right data and massaging it into a usable format
- Results and algorithms created by data scientists often stay out of the reach of regular data analysts and business users
Watch this session on-demand to understand how data virtualization offers an alternative to address these issues and can accelerate data acquisition and massaging. And a customer story on the use of Machine Learning with data virtualization.
Big Data, Physics, and the Industrial Internet: How Modeling & Analytics are ... (mattdenesuk)
1) The document discusses how big data, analytics, and physics-based modeling can transform industrial sectors like power, manufacturing, and transportation by making machines more intelligent and efficient.
2) It argues that connecting millions of industrial machines to collect massive amounts of data, and applying advanced analytics, will improve productivity, optimize operations, and reduce costs across industries.
3) A key enabler is developing "software-defined machines" that can easily connect to the internet, run analytics apps in the cloud to become self-aware, and update capabilities without hardware changes.
Splunk Discovery: Warsaw 2018 - Legacy SIEM to Splunk, How to Conquer Migrati... (Splunk)
Presented at Splunk Discovery Warsaw 2018:
SIEM Replacement Methodology
Use Cases
Data Sources & Data Onboarding
Architecture
Third Party Integration
You Got This!
Gain New Insights by Analyzing Machine Logs using Machine Data Analytics and BigInsights.
Half of Fortune 500 companies experience more than 80 hours of system downtime annually. Spread evenly over a year, that amounts to approximately 13 minutes every day. As a consumer, the thought of online bank operations being inaccessible so frequently is disturbing. As a business owner, when systems go down, all processes come to a stop. Work in progress is destroyed, and failure to meet SLAs and contractual obligations can result in expensive fees, adverse publicity, and loss of current and potential future customers. Ultimately, the inability to provide a reliable and stable system results in lost revenue. While the failure of these systems is inevitable, the ability to predict failures in time and intercept them before they occur is now a requirement.
A possible solution to the problem can be found in the huge volumes of diagnostic big data generated at the hardware, firmware, middleware, application, storage, and management layers indicating failures or errors. Machine analysis and understanding of this data is becoming an important part of debugging, performance analysis, root cause analysis, and business analysis. In addition to preventing outages, machine data analysis can also provide insights for fraud detection, customer retention, and other important use cases.
This presentation gives an overview of StreamCentral technology targeted for IT professionals. StreamCentral is software to model and build Big Data Solutions. StreamCentral consists of a Big Data Solutions Modeler that not only makes it easy to model traditional BI/DW and Big Data solutions but also auto deploys the model on the latest innovations in Big Data Management solutions (like HP Vertica and SQL Server Parallel Data Warehouse). StreamCentral Big Data Server executes the model definition in real-time. StreamCentral drastically reduces the time to market, risk and cost associated with building traditional BI/DW and Big Data solutions!
Primo Reporting: Using 3rd Party Software to Create Primo Reports & Analyze P... (Alison Hitchens)
This document discusses using third party software like Cognos and Google Analytics to analyze usage statistics from the Primo discovery system. It describes how the University of Waterloo created PowerPlay cubes in Cognos to interactively analyze and visualize data from the Primo Reporting Schema views. Impromptu reports were also created in Cognos for specific predefined reporting needs. Google Analytics was used to supplement Primo statistics with additional web usage details. Working with Ex Libris support helped clarify questions about the Primo data views.
Intro of Key Features of SoftCAAT Ent Software (rafeq)
This presentation provides a brief overview of SoftCAAT Ent with use cases. SoftCAAT Ent is a data analytics/BI software used by CAs and CXOs for Assurance, Compliance and Fraud Investigations.
Splunk is a time-series data platform that handles the three V's of data (volume, velocity, and variety) very well. It collects, indexes, and allows searching and analysis of data. Splunk can collect data from files, directories, network ports, programs/scripts, and databases. It breaks data down into searchable events and builds a high-performance index. This allows users to search, manipulate, and visualize data in reports, charts, and dashboards. Splunk can analyze structured, unstructured, and multistructured data from various sources like logs, networks, clicks, and more.
Monitoring and Measuring SharePoint to Guarantee Your ROI (Christian Buckley)
Whether on-premises or online, the business value of SharePoint can be hard to articulate to your management team if you are not taking the time to monitor and measure. This session identifies what is available out-of-the-box in SharePoint and in Office 365, how Microsoft uses telemetry and analytics to improve the platform, and options available for identifying your ROI by using these tools and more.
The document outlines a three-phase approach to developing an intelligent monitoring platform:
Phase 1 involves interviewing dev and ops teams to understand current monitoring practices.
Phase 2 focuses on improving the postmortem process and outage understanding.
Phase 3 aims to reduce the time to identify and resolve outages through expanded data collection, correlation analysis, and predictive capabilities.
How a Time Series Database Contributes to a Decentralized Cloud Object Storag... (InfluxData)
In this presentation, you'll learn how InfluxDB is a component of Storj's Tardigrade service and workflows. John Gleeson and Ben Sirb of Storj Labs will discuss Storj's redefinition of a cloud object storage network, how InfluxData fits into Storj's Open Source Partner Program, and how to collect and manage high-volume, real-time telemetry data from a distributed network.
Data Analytics in your IoT Solution - Fukiat Julnual, Technical Evangelist, Mic... (BAINIDA)
Data Analytics in your IoT Solution, presented by Fukiat Julnual, Technical Evangelist, Microsoft (Thailand) Limited, at THE FIRST NIDA BUSINESS ANALYTICS AND DATA SCIENCES CONTEST/CONFERENCE, organized by the Faculty of Applied Statistics and DATA SCIENCES THAILAND.
SplunkLive! Paris 2018: Legacy SIEM to Splunk (Splunk)
Presented at SplunkLive! Paris 2018: Legacy SIEM to Splunk, How to Conquer Migration and Not Die Trying:
- Why?
- SIEM Replacement
- Use Cases
- Data Sources & Data Onboarding
- Architecture
- Third Party Integrations
- You Got This
The document discusses how analytics and data mining can be used to gain insights from data generated by business processes. It describes how event data from processes can be analyzed in real-time for monitoring and over time to identify patterns and opportunities for process improvement. Key applications discussed include predictive modeling, simulation, optimization, and automated recommendations for resource allocation and process changes.
Machine Learning and Analytics Breakout Session (Splunk)
This document provides an overview of machine learning and how it can be used with Splunk. It discusses what machine learning is, the different types of machine learning, and common use cases in IT operations, security, and business analytics. It also summarizes how machine learning can be implemented using Splunk, including exploring data, building models, applying and validating models, and operationalizing models. The document encourages attendees to try out the free Splunk Machine Learning Toolkit and Showcase app.
The document discusses how traditional analytics approaches are no longer sufficient due to new data sources like machine data that are unstructured and from external sources. It introduces Splunk as a platform that can collect, index, and analyze massive amounts of machine data in real-time to provide operational intelligence and business insights. Splunk uses late binding schema to allow ad-hoc queries over heterogeneous machine data without needing to design schemas upfront. It can complement traditional BI tools by focusing on real-time analytics over machine data while traditional tools focus on structured data.
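The late-binding idea (store events raw, extract fields only at search time) can be shown in miniature. The log lines and the key=value pattern below are invented for illustration; a real indexer adds time-based indexing on top of this.

```python
import re

# Schema-on-read in miniature: raw lines are stored as-is, and fields are
# extracted only when a "search" runs, so no upfront schema design is needed.
raw_events = [
    '2018-03-01T10:00:01 host=web-01 status=200 bytes=512',
    '2018-03-01T10:00:02 host=web-02 status=500 bytes=0',
    'Mar  1 10:00:03 web-03 sshd[412]: Failed password for root',
]

def search(events, pattern=r'(\w+)=(\S+)'):
    """Yield each raw line with fields extracted at query time; lines
    without key=value pairs still flow through with an empty field dict."""
    for line in events:
        fields = dict(re.findall(pattern, line))
        yield line, fields

# Ad-hoc query over heterogeneous events, no schema designed upfront:
errors = [f for _, f in search(raw_events) if f.get('status') == '500']
```

Swapping in a different extraction pattern changes the "schema" without touching the stored data, which is the contrast with a traditional warehouse where the schema is fixed at load time.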
Are you collecting just about every metric under the sun, and the kitchen sink too? Understanding the cost of collecting metrics and the usefulness of those metrics is the only way to scale in a cloud-native world. You can't get away with just collecting everything as you grow. Your observability teams need to make decisions about what to collect, what to drop, and what to aggregate, while still being able to alert, triage, remediate, and do their root cause analysis on a daily basis. Gain immediate insights into high-cost data (data points per second, DPPS), when to drop time series data, and how to determine when the value of that data is at its lowest. The session includes a recorded demo video of it in action.
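The collect-versus-drop reasoning above can be made concrete with a toy data-points-per-second model. The series counts and intervals are invented numbers, and real pricing depends on your vendor's meter:

```python
def dpps(series_count, scrape_interval_s):
    """Steady-state data points per second for a set of time series."""
    return series_count / scrape_interval_s

# 200k time series scraped every 10s, versus rolled up to 60s resolution:
raw = dpps(200_000, 10)        # 20,000 data points/sec
rolled_up = dpps(200_000, 60)  # ~3,333 data points/sec
savings = 1 - rolled_up / raw
print(f"{raw:.0f} -> {rolled_up:.0f} DPPS, {savings:.0%} fewer points")
```

A model this simple is still enough to rank candidate metrics by cost and decide which high-cardinality series to aggregate before ingest.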
Similar to SplunkLive! Munich 2018: Integrating Metrics and Logs
.conf Go 2023 - Raiffeisen Bank International (Splunk)
This document discusses standardizing security operations procedures (SOPs) to increase efficiency and automation. It recommends storing SOPs in a code repository for versioning and referencing them in workbooks which are lists of standard tasks to follow for investigations. The goal is to have investigation playbooks in the security orchestration, automation and response (SOAR) tool perform the predefined investigation steps from the workbooks to automate incident response. This helps analysts automate faster without wasting time by having standard, vendor-agnostic procedures.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu... (Splunk)
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?"
Speaker: Stefan Stein,
Team Lead CERT at gematik GmbH, M.Eng. IT-Sicherheit & Forensik,
doctoral candidate at TH Brandenburg & Universität Dresden
The document describes Cellnex's transition from a Security Operations Center (SOC) to a Computer Security Incident Response Team (CSIRT). The transition was driven by Cellnex's growth and the need to automate processes and tasks to improve efficiency. Cellnex implemented Splunk SIEM and SOAR to automate the creation, remediation, and closure of incidents. This allowed staff to focus on strategic tasks and improve KPIs such as resolution times and analyzed emails.
.conf Go 2023 - El camino hacia la ciberseguridad (ABANCA) (Splunk)
This document summarizes ABANCA's journey toward cybersecurity with Splunk, from bringing dedicated profiles on board in 2016 to becoming a monitoring and response center with more than 1 TB of daily ingest and 350 use cases aligned with MITRE ATT&CK. It also describes mistakes made and the solutions implemented, such as normalizing data sources and training operators, as well as the current pillars such as automation, visibility, and alignment with MITRE ATT&CK. Finally, it points out challenges
Splunk - BMW connects business and IT with data driven operations SRE and O11y (Splunk)
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
The document is a presentation on cyber security trends and Splunk security products from Matthias Maier, Product Marketing Director for Security at Splunk. The presentation covers trends in security operations like the evolution of SOCs, new security roles, and data-centric security approaches. It also provides updates on Splunk's security portfolio including recognition as a leader in SIEM by Gartner and growth in the SIEM market. Maier highlights some breakout sessions from the conference on topics like asset defense, machine learning, and building detections.
Data foundations building success, at city scale – Imperial College London (Splunk)
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen... (Splunk)
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out – and significantly increasing the maturity level – of the SOC. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
Top challenges faced in improving security posture
Key KPIs implemented in order to measure success
Strategies and approaches applied in the SOC
How MITRE ATT&CK and Splunk Enterprise Security were utilised
Next steps in their maturity journey ahead
This document summarizes a presentation about observability using Splunk. It includes an agenda introducing observability and why Splunk for observability. It discusses the need for modernization initiatives in companies and the thousands of changes they require. It explains that Splunk provides end-to-end visibility across metrics, traces and logs to detect, troubleshoot and optimize systems. It shares a customer case study of Accenture using Splunk observability in their hybrid cloud environment. Finally, it concludes that observability with Splunk can drive results like reduced downtime and faster innovation.
This document contains slides from a Splunk presentation covering the following topics:
- Updated Splunk logo and information about meetings in Zurich and sales engineering leads
- Ideas for confused or concerned human figures in design concepts
- Three buckets of challenges around websites slowing, apps being down, and supply chain issues
- Accelerating mean time to detect, identify, respond and resolve through cyber resilience with Splunk
- Unifying security, IT and DevOps teams
- Splunk's technology vision focusing on customer experience, hybrid/edge, unleashing data lakes, and ubiquitous machine learning
- Gaining operational resilience through correlating infrastructure, security, application and user data with business outcomes
This document summarizes a presentation about Splunk's platform. It discusses Splunk's mission of helping customers create value faster with insights from their data. It provides statistics on Splunk's daily ingest and users. It highlights examples of how Splunk has helped customers in areas like internet messaging and convergent services. It also discusses upcoming challenges and new capabilities in Splunk like federated search, flexible indexing, ingest actions, improved data onboarding and management, and increased platform resilience and security.
The document appears to be a presentation from Splunk on security topics. It includes sections on cyber security resilience, the data-centric modern SOC, application monitoring at scale, threat modeling, security monitoring journeys, self-service Splunk infrastructure, the top 3 CISO priorities of risk based alerting, use case development, a security content repository, security PVP (posture, vision, and planning) and maturity assessment, and concludes with an overview of how Splunk can provide end-to-end visibility across an organization.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transformation (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD within UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar, with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
5. Raw Event Search on Log Events
Splunk 1.0: Find the “Needle in the Haystack”
(diagram layer: Raw Event Search)
6. Statistical Analysis on Log Events
Splunk 3.0 and 5.0: Scan through and report on many events
(diagram layers: Raw Event Search + Optimization for Statistical Queries)
7. Metric Analysis on Metric Data Points
Splunk 7.0: Perform statistical calculations
(diagram layers: Raw Event Search + Optimization for Statistical Queries + Optimization for Metrics Queries)
9. Why Metrics?
… when you already use logs?
▶ Metrics
• Structured data
• Best way to observe a process or device
• Easy way to do monitoring: you know what you want to measure (e.g., performance, CPU, number of users, memory used, network latency, disk usage)
▶ Events (e.g., Logs)
• Unstructured data
• Needle in the haystack
• Can tell you all about the “why”
• Answers questions you might not even have yet
• Very versatile
10. What Does a Metric Consist of?
Numerical data points captured over time that can be compressed, stored, processed and retrieved far more efficiently than events
▶ Time
▶ Metric Name (e.g., system.cpu.idle)
▶ Measure (aka Value): a numeric data point of different types, e.g., count, gauge, timing, sample
▶ Dimensions, e.g.:
• Host (10.1.1.100, web01.splunk.com)
• Region (us-east-1, emea-1, apac-2)
• InstanceTypes (t2.medium, t2.large, m3.large)
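This anatomy maps naturally onto a small data structure. A minimal sketch in Python; the `MetricPoint` class and the sample values are illustrative only, not a Splunk API:

```python
import time
from dataclasses import dataclass, field

@dataclass
class MetricPoint:
    """One measurement: metric name, numeric value, dimensions, timestamp."""
    metric_name: str                # e.g. "system.cpu.idle"
    value: float                    # the measure: count, gauge, timing, sample
    dimensions: dict = field(default_factory=dict)   # context: host, region, ...
    timestamp: float = field(default_factory=time.time)

point = MetricPoint(
    metric_name="system.cpu.idle",
    value=87.5,
    dimensions={"host": "web01.splunk.com", "region": "us-east-1",
                "instance_type": "t2.medium"},
)
print(point.metric_name, point.value, point.dimensions["region"])
```

Because the shape is fixed (name, value, dimensions, time), such points compress and index far better than free-form log events.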
13. Splunk Enterprise 7.0
The easiest way to aggregate, analyze and get answers from your machine data
MONITOR | INVESTIGATE | BUILD INTELLIGENCE
▶ Automate, collect, index and visualize your machine data in real time
▶ Discover insights from any machine data, structured or unstructured
▶ Analyze, predict and act on outcomes from your machine data
17. Project Waitomo
Built for Infrastructure Monitoring; deploys in minutes and is easy to maintain
▶ Seamless Monitoring and Troubleshooting: metrics and logs in one unified experience
▶ Automated Investigations: find trends and root cause easier and faster with purpose-built workflows
▶ Expandable: start monitoring for free, then expand across teams, use cases and large hybrid environments
▶ Install to Insight in Minutes
21. Save the Date 2018
October 1-4, 2018, Walt Disney World Swan and Dolphin Resort in Orlando
▶ 8,750+ Splunk Enthusiasts
▶ 300+ Sessions
▶ 100+ Customer Speakers
Plus Splunk University:
▶ Three Days: September 29-October 1, 2018
▶ Get Splunk Certified for FREE!
▶ Get CPE credits for CISSP, CAP, SSCP
conf.splunk.com
24. Use Cases
IT Ops and Application Performance are driven by Metrics
▶ IT Ops & Application Performance: metrics provide usage, performance and availability data (by OS, storage, apps, clouds, etc.)
• Trends can identify where there is a problem
• When trends and thresholds illustrate performance issues, other data sources are correlated to determine the root causes
25. Metrics – The New Way
Ingest metrics natively
▶ Metric Store: the ability to ingest and store metric measurements at scale
▶ mstats: an SPL command, the tstats equivalent for querying time series from metrics indexes
▶ Metrics Catalog: REST APIs to query lists of ingested metrics and dimensions
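Native ingestion accepts standard wire formats such as statsd. A minimal sketch in Python of emitting a statsd gauge over UDP; this is the generic statsd line protocol, not a Splunk-specific client, and the host, port and metric name are placeholders:

```python
import socket

def statsd_line(name: str, value: float, metric_type: str = "g") -> str:
    """Format one statsd datagram: <metric.name>:<value>|<type>
    Common types: g = gauge, c = counter, ms = timing."""
    return f"{name}:{value}|{metric_type}"

def send_metric(line: str, host: str = "127.0.0.1", port: int = 8125) -> None:
    # Fire-and-forget UDP datagram, as statsd clients typically do.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(line.encode("utf-8"), (host, port))

line = statsd_line("system.cpu.idle", 87.5)
print(line)  # system.cpu.idle:87.5|g
```

A listener configured for the statsd source type would parse the name into the metric name and the number into the measure.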
26. Metrics – Status Quo
Here: Windows Perfmon
▶ 06/29/2017 16:45:15.170 collection="Available Memory" object=Memory counter="Pages/sec" Value=264 host=10.0.8.156
▶ 06/29/2017 16:47:47.170 collection="MSExchangeIS_Mailbox" object="MSExchangeIS Mailbox" counter="Messages Submitted/sec" instance="_Total" Value=185.3656 host=10.0.8.156
Each event already carries the metric building blocks: timestamp, metric name, measurement value and dimensions.
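Pulling those building blocks out of such an event is a simple parsing exercise. A sketch in Python; the field mapping mirrors the sample events above and is illustrative, not Splunk's actual extraction logic:

```python
import re

# key=value pairs, with values either quoted or a bare token
KV_RE = re.compile(r'(\w+)=(?:"([^"]*)"|(\S+))')

def parse_perfmon(event: str) -> dict:
    """Split a Perfmon-style event into timestamp, name, value and dimensions."""
    first_kv = KV_RE.search(event)
    fields = {m.group(1): m.group(2) or m.group(3) for m in KV_RE.finditer(event)}
    value = float(fields.pop("Value"))
    # Build a metric name from collection/counter; remaining fields are dimensions.
    metric_name = f'{fields.pop("collection")}.{fields.pop("counter")}'
    return {
        "timestamp": event[:first_kv.start()].strip(),
        "metric_name": metric_name,
        "value": value,
        "dimensions": fields,  # e.g. object, instance, host
    }

sample = ('06/29/2017 16:45:15.170 collection="Available Memory" '
          'object=Memory counter="Pages/sec" Value=264 host=10.0.8.156')
print(parse_perfmon(sample))
```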
27. Dimensions
Fields that help describe and add context to a metric
▶ For example, a metric named “cpu.usage” might have dimensions for host, IP address or asset location
▶ Use dimensions to split by and filter metric data, but not as the primary way to query the metric store
▶ Standard fields such as host, source, sourcetype and index can be treated as dimensions
▶ There is no limit to the number of dimensions you can have; that said, be mindful and consider best practices
▶ Examples
• Temp Sensor – Dimensions: time, latitude, longitude / Value: temperature
• Pressure Sensor – Dimensions: time, valve_id / Value: pressure (psi)
• IT Monitoring – Dimensions: time, host, pid / Value: cpu, memory
• Splunk Internal Metrics – Dimensions: time, user / Value: search_count
• Web Access – Dimensions: time, requester_ip, request_method, request_url / Value: request_duration, count
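Splitting and filtering by a dimension is conceptually just a group-by over the data points. A sketch in Python with invented sample points, roughly what a BY clause does for you on the server side:

```python
from collections import defaultdict
from statistics import mean

# Invented sample points: (metric_name, value, dimensions)
points = [
    ("cpu.usage", 40.0, {"host": "web01", "region": "us-east-1"}),
    ("cpu.usage", 60.0, {"host": "web01", "region": "us-east-1"}),
    ("cpu.usage", 10.0, {"host": "web02", "region": "emea-1"}),
]

def avg_by(points, metric_name, dimension):
    """Average one metric's values, split by a single dimension."""
    groups = defaultdict(list)
    for name, value, dims in points:
        if name == metric_name and dimension in dims:
            groups[dims[dimension]].append(value)
    return {key: mean(vals) for key, vals in groups.items()}

print(avg_by(points, "cpu.usage", "host"))  # {'web01': 50.0, 'web02': 10.0}
```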
28. Why Metrics Matter
Metrics support helps customers aggregate, store and analyze data more efficiently
▶ Customers want to aggregate, store, analyze and stream-process time-series metrics data in an efficient manner. Furthermore, the system has to scale to handle data rates that may be orders of magnitude larger than current rates, and work seamlessly in Cloud and on-prem deployments.
▶ The current technology stack already supports ingestion, search and analytics over time-series data, so much of the existing machinery can be leveraged. However, the use cases around a metrics data store differ from log data in some fundamental ways, to list a few:
• Metrics data is voluminous
• Metrics data is structured, with dimensions and a numerical measure field
• Lower latency and higher search concurrency requirements
▶ Today, customers and solutions engineers employ workarounds on the current system to satisfy these requirements, but these are stop-gap measures that won't scale to the next level and often don't meet the latency/performance, TCO and scaling requirements.
29. Metrics versus Events
Two distinct machine data sources that have been hard to integrate… until now
Metrics
▶ Numbers describing a particular process or activity
▶ Measured over intervals of time, i.e., time-series data
▶ Common metrics sources:
• System metrics (CPU, memory, disk)
• Infrastructure metrics (AWS CloudWatch)
• Web tracking scripts (Google Analytics)
• Application agents (APM, error tracking)
Events
▶ Immutable records of discrete events that happen over time
▶ Come in three forms: plain text, structured, binary
▶ Common event sources:
• System and server logs (syslog, journald)
• Firewall and intrusion detection system logs
• Social media feeds (Twitter…)
• Application, platform and server logs (log4j, log4net, Apache, MySQL, AWS)
Sample Metric
Timestamp: 1481050800 | Metric Name: os.cpu.user | Value: 42.12345 | Dimensions: hq:us-west-1
Sample Log (equivalent to 1 metric value)
[29/Aug/2017 08:47:05:316503] "POST /cart.do?uid=84e8d742-a31d69&action=remove&&product_id=BS-2&JSESSIONID=SD6SAL4FF1ADFF9 HTTP 1.1" 200 2569 "http://www.buttercupenterprises.com/product.screen?product_id=BS-2" "Mozilla/5.0 (Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2957.0 Safari/537.36" 98
30. Metrics Boosts Splunk Enterprise
Boosts performance of monitoring and alerting on metrics by up to 200x; requires ~50% less disk space
▶ Millions of CPUs in data centers and billions of connected devices produce an ever-increasing amount of metrics data
• According to Gartner, the number of IoT endpoint devices (devices = metrics) will total 20.4 billion by 2020, up from 6.4 billion in 2016
• With more workloads moving to the cloud and more devices coming online every day, metrics data is a foundational and strategic data source. As structured, time-series data, metrics do not benefit from “schema-on-read” and are far more efficient than log data.
▶ Improved performance and scalability for monitoring and alerting
• With Splunk Enterprise 7.0, the performance of monitoring and alerting on metrics data is boosted by up to 200x vs. previous Splunk releases.
• When ingesting typical metrics payloads with supported metrics source types (collectd_http, statsd, metrics_csv), a metrics index requires about 50% less disk storage space compared to storing the same payload in an events index.
• Because metrics queries now return faster, monitoring in Enterprise 7.0 puts less strain on the deployment and uses fewer resources. In the past you had no choice but to use events; now you can choose the right tool for your particular analytics task.
▶ Splunk is a real-time data analytics platform delivering a unified experience between logs and metrics
• Splunk metrics removes context-switching time between separate monitoring and troubleshooting tools by correlating metrics and logs, and provides the flexibility to ingest these different data types in the most efficient way.
• This is a significant step toward end-to-end monitoring (starting with metrics) and investigation (pinpointing issues with events) in the same platform.
31. mstats
▶ New SPL command
▶ Optimized for fast retrieval of metrics aggregations (only aggregations on _value)
▶ Like tstats, it is a generating command that produces reports without transforming events
▶ Unlike tstats, it can search both on-disk data (historical search) and in-memory data (real-time search)
▶ mstats cannot search an event index; tstats and the search command cannot search a metrics index
▶ mstats is a reporting command
Syntax
| mstats <stats-function> …
  [WHERE index=<metric_index> AND metric_name=<metricname> …]
  [span=<timespan>] [BY <metricname|dimension>]
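Putting the syntax together, a typical query might look like the following; the index name is an example, and system.cpu.idle stands in for whatever metric you have ingested:

```
| mstats avg(_value) WHERE index=my_metrics AND metric_name=system.cpu.idle span=1m BY host
```

This returns the per-minute average of the metric, split by the host dimension, without touching any event index.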
32. Metrics Catalog
▶ New SPL command: mcatalog
▶ Optimized to list catalog information (e.g., metric names, dimensions) of the metric store
Syntax
| mcatalog values(<field>) …
  [WHERE index=<metric_index> AND metric_name=<metricname> …]
  [BY <metricname|dimension>]
▶ New REST endpoints
• List metric names: /services/catalog/metricstore/metrics
• List dimension names: /services/catalog/metricstore/dimensions
• List dimension values: /services/catalog/metricstore/dimensions/{dimension-name}/values
▶ You can also use filters with these endpoints to limit results by index, dimension, and dimension values.
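The endpoint paths above compose mechanically. A small sketch in Python that builds catalog URLs; the base URL is a placeholder for your splunkd management endpoint, and only the paths listed above are assumed:

```python
from typing import Optional
from urllib.parse import quote

BASE = "https://localhost:8089"  # placeholder splunkd management port

def catalog_url(kind: str = "metrics", dimension: Optional[str] = None) -> str:
    """Build a Metrics Catalog REST URL.
    kind: "metrics" to list metric names, "dimensions" to list dimension names;
    pass dimension= to list the values of one dimension."""
    if dimension is not None:
        return f"{BASE}/services/catalog/metricstore/dimensions/{quote(dimension)}/values"
    return f"{BASE}/services/catalog/metricstore/{kind}"

print(catalog_url("metrics"))
print(catalog_url(dimension="host"))
```

An authenticated GET against these URLs (with the usual filters as query parameters) then returns the catalog listings.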