Presented at SplunkLive! Munich 2018: Get More From Your Machine Data with Splunk & AI:
- Why AI & Machine Learning?
- What is Machine Learning?
- Splunk's Machine Learning Tour
- Use Cases & Customer Stories
SplunkLive! Munich 2018: Intro to Security Analytics Methods (Splunk)
The document provides an introduction and agenda for a presentation on security analytics methods. The agenda includes an intro to analytics methods from 11:40-12:40 followed by a lunch break from 12:40-13:40. The presentation may include forward-looking statements and disclaimers are provided. Information presented is subject to change and any information about product roadmaps is for informational purposes only.
SplunkLive! Munich 2018: Predictive, Proactive, and Collaborative ML with IT ... (Splunk)
This document discusses how machine learning (ML) can be used with IT service intelligence (ITSI) to enable predictive, proactive, and collaborative IT operations. It describes how ML can be applied to analyze machine data using ITSI to predict failures and other notable events. This allows operations teams to be notified earlier of potential issues. The document provides an example of using ITSI's built-in ML and event analytics to cluster similar alerts from thousands of events into meaningful, actionable alerts to improve response time. It also discusses integrating ITSI with chat tools like Slack to immediately notify teams to further reduce resolution times.
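The alert-clustering idea described above can be illustrated with a minimal sketch. This is not ITSI's actual algorithm; it simply groups raw alerts by a normalized message signature so that many similar events collapse into a few actionable clusters. The masking patterns (numbers, a `host-` prefix) are assumptions for illustration.

```python
import re
from collections import defaultdict

def signature(message):
    """Normalize an alert message into a cluster key by masking
    volatile parts (numbers, hostnames) that differ per event."""
    sig = re.sub(r"\d+", "N", message)        # mask numbers
    sig = re.sub(r"host-\w+", "host-X", sig)  # mask hostnames (assumed naming pattern)
    return sig

def cluster_alerts(alerts):
    """Group many raw alerts into a few clusters keyed by signature."""
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[signature(alert)].append(alert)
    return clusters

alerts = [
    "CPU usage 97% on host-web01",
    "CPU usage 99% on host-web02",
    "Disk full on host-db01",
]
clusters = cluster_alerts(alerts)
print(len(clusters))  # → 2 clusters instead of 3 separate alerts
```

An operator would then triage two cluster-level alerts instead of every raw event, which is the response-time win the summary describes.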
SplunkLive! Frankfurt 2018 - Get More From Your Machine Data with Splunk AI (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Why AI & Machine Learning?
What is Machine Learning?
Splunk's Machine Learning Tour
Use Cases & Customer Stories
Wrap Up
SplunkLive! Munich 2018: Getting Started with Splunk Enterprise (Splunk)
The document provides an agenda for a SplunkLive! presentation on installing and using Splunk. It includes downloading required files, importing sample data, conducting searches on the data, and exploring various Splunk features through a live demonstration. Common installation problems are also addressed. The presentation aims to provide attendees with the knowledge and skills to get started using Splunk through hands-on learning and a question and answer session.
SplunkLive! Zurich 2018: Monitoring the End User Experience with Splunk (Splunk)
This document discusses using Splunk to gain insights into end user experience and the factors that influence experience. Splunk provides a platform approach to monitor applications across the full technology stack from networks to databases. It can ingest data from various sources, including APM tools, and provide visibility into both instrumented and non-instrumented applications and environments. Splunk also offers predictive analytics capabilities and allows various stakeholders like operations and business teams to access and analyze data. The document demonstrates how Splunk can help organizations improve user experience, application performance, and collaboration between teams.
SplunkLive! Frankfurt 2018 - Intro to Security Analytics Methods (Splunk)
The document discusses an introductory presentation on security analytics methods. It includes an agenda that covers an introduction to analytics methods, an example scenario, and next steps. It also discusses common security challenges, different analytics methods and types of use cases, and how analytics can be applied to different stages of an attack.
SplunkLive! Zurich 2018: Get More From Your Machine Data with Splunk & AI (Splunk)
This presentation discusses how Splunk and machine learning can help organizations get more value from their machine data. It describes how machine learning can improve decision making, uncover hidden trends, alert on deviations, and forecast incidents. The presentation provides an overview of Splunk's machine learning capabilities, including search, packaged solutions, and the machine learning toolkit. It also showcases several customer use cases that have benefited from Splunk's machine learning offerings, such as network incident detection, security/fraud prevention, and optimizing operations.
SplunkLive! Zurich 2018: Legacy SIEM to Splunk, How to Conquer Migration and ... (Splunk)
This document provides an overview of best practices for migrating from a legacy SIEM to Splunk Enterprise Security. It discusses identifying high-value use cases to prioritize for migration. Proper data source onboarding using technologies like the Universal Forwarder and Technology Add-ons is also covered. The presentation recommends planning the target architecture and identifying any necessary third-party integrations. Some preparatory steps customers can take today to get ready for the replacement are also listed.
SplunkLive! Zurich 2018: Integrating Metrics and Logs (Splunk)
This document discusses integrating metrics and logs in Splunk for enhanced troubleshooting and monitoring. It provides an overview of metrics and how they are defined, compared to events. Metrics support in Splunk allows for more efficient aggregation, storage, and analysis of time-series data. Example use cases mentioned include IT operations, application performance monitoring, and IoT. Pricing is still based on uncompressed data volume ingested, with each metrics measurement licensed at around 150 bytes.
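Given the roughly 150 bytes quoted above for each licensed metrics measurement, daily license usage can be estimated with back-of-the-envelope arithmetic. The sketch below is an illustration of that calculation, not an official Splunk sizing tool; the host counts and intervals are made-up inputs.

```python
BYTES_PER_MEASUREMENT = 150  # approximate licensed size of one metric data point

def daily_license_gb(hosts, metrics_per_host, interval_seconds):
    """Estimate daily metrics license volume in GB for a polling setup."""
    measurements_per_day = hosts * metrics_per_host * (86_400 // interval_seconds)
    return measurements_per_day * BYTES_PER_MEASUREMENT / 1e9

# e.g. 500 hosts, 20 metrics each, collected every 60 seconds
print(round(daily_license_gb(500, 20, 60), 2))  # → 2.16 (GB/day)
```

Halving the collection interval doubles the estimate, which is why polling frequency is usually the first knob to tune when license volume matters.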
SplunkLive! Frankfurt 2018 - Legacy SIEM to Splunk, How to Conquer Migration ... (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Introduction
SIEM Migration Methodology
Use Cases
Datasources & Data Onboarding
ES Architecture
Third-Party Integrations
You Got This!
This document summarizes information about the Splunk Usergroup Zurich. It mentions that the group has regular Splunk user get-togethers throughout major German-speaking cities, not just Zurich. It hosts frequent Splunk presentations in German and English. The group is not a sales-focused organization and provides a space for users to meet and learn from each other. Interested users can join the group by visiting the listed URL.
Splunk Discovery: Warsaw 2018 - Reimagining IT with Service Intelligence (Splunk)
Presented at Splunk Discovery Warsaw 2018:
What's Service Intelligence and Why You Should Care
Introduction to Splunk IT Service Intelligence
IT Service Intelligence Key Concepts
Demo
Presented at SplunkLive! Paris 2018: Get More From Your Machine Data With Splunk AI
- Why AI & Machine Learning?
- What is Machine Learning?
- Splunk's Machine Learning Tour
- Use Cases & Customer Stories
SplunkLive! Frankfurt 2018 - Data Onboarding Overview (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Splunk Data Collection Architecture
Apps and Technology Add-ons
Demos / Examples
Best Practices
Resources and Q&A
SplunkLive! Paris 2018: Use Splunk for Incident Response, Orchestration and A... (Splunk)
Presented at SplunkLive! Paris 2018:
- Challenges with Security Operations Today
- Overview of Splunk Adaptive Response Initiative
- Technology behind the Adaptive Response Framework
- Demonstrations
- How to build your own AR Action
- Resources
SplunkLive! Zurich 2018: Use Splunk for Incident Response, Orchestration and ... (Splunk)
This document discusses using Splunk for incident response, orchestration, and automation. It notes that incident response currently takes significant time, with containment and response phases accounting for 72% of the time spent on incidents. It proposes that security operations need to change through orchestration and automation using adaptive response. Adaptive response aims to accelerate detection, investigation, and response by centrally automating data retrieval, sharing, and response actions across security tools and domains. This improves efficiency and extracts new insights through leveraging shared context and actions.
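The central idea above, routing a detected event to automated response actions across security tools from one place, can be sketched as a tiny dispatcher. This is a hypothetical illustration of the pattern, not the Adaptive Response framework's API; the event types and action names are invented.

```python
# Hypothetical sketch of the adaptive-response pattern: map notable-event
# types to an ordered list of automated response actions.
def quarantine_host(event):
    return f"quarantined {event['host']}"

def disable_account(event):
    return f"disabled {event['user']}"

ACTIONS = {  # illustrative mapping, event type -> response actions
    "malware_detected": [quarantine_host],
    "credential_abuse": [disable_account, quarantine_host],
}

def respond(event):
    """Run every registered action for the event's type, collecting results."""
    return [action(event) for action in ACTIONS.get(event["type"], [])]

event = {"type": "credential_abuse", "host": "web01", "user": "svc-backup"}
print(respond(event))  # → ['disabled svc-backup', 'quarantined web01']
```

The efficiency claim in the summary comes from exactly this shape: the mapping lives in one place, so adding a tool means registering one more action rather than rewiring every runbook.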
Splunk Discovery: Milan 2018 - Get More From Your Machine Data with Splunk AI (Splunk)
This document discusses machine learning and artificial intelligence capabilities provided by Splunk. It begins by explaining why organizations are adopting AI and machine learning to improve decision making, uncover hidden trends, forecast incidents, and more using diverse real-time data. It then provides an overview of Splunk's machine learning toolkit and capabilities including search, packaged solutions, algorithms, and commands. Examples of applications include anomaly detection, predictive analytics, dynamic thresholding and more. Customer stories demonstrate how organizations are using Splunk's machine learning for security, operations, and other use cases.
Splunk is a powerful platform for understanding your data. This session will provide an overview of machine learning capabilities available across Splunk’s portfolio. We'll dive deeply into Splunk's Machine Learning Toolkit App, which extends Splunk Enterprise with a rich suite of advanced analytics, machine learning algorithms, and rich visualizations. It also provides customers with a guided model-building and operationalization environment. The demonstration will include the guided model-building UI for tasks such as predictive analytics, outlier detection, event clustering, and anomaly detection. We’ll also review typical use cases and real-world customers who are using the Toolkit to drive business results.
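Of the Toolkit tasks listed above, outlier detection is the simplest to illustrate. The sketch below is a minimal stand-in, flagging points far from the mean in standard-deviation terms, and is not the Toolkit's actual assistant logic.

```python
from statistics import mean, stdev

def detect_outliers(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean.
    A simplified stand-in for the Toolkit's outlier-detection assistants."""
    mu, sigma = mean(values), stdev(values)
    return [x for x in values if abs(x - mu) > threshold * sigma]

latencies = [102, 98, 101, 99, 103, 100, 97, 450]  # ms; one spike
print(detect_outliers(latencies, threshold=2.0))  # → [450]
```

Production tools typically prefer robust statistics (median and interquartile range) over mean and standard deviation, since a large spike inflates both and can mask itself.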
Die Rolle von KI in der digitalen Widerstandsfähigkeit (The Role of AI in Digital Resilience) - Splunk Public Sector Summit 2024 in Frankfurt (Splunk EMEA)
Speaker: Philipp Drieger (Global Principal Machine Learning Architect)
Splunk Discovery: Warsaw 2018 - Legacy SIEM to Splunk, How to Conquer Migrati... (Splunk)
Presented at Splunk Discovery Warsaw 2018:
SIEM Replacement Methodology
Use Cases
Data Sources & Data Onboarding
Architecture
Third Party Integration
You Got This!
The document provides an overview of Splunk, including:
- Splunk allows users to search and analyze machine-generated data from websites, applications, sensors and other sources to gain operational intelligence and security insights.
- Splunk's platform can index and correlate data from various sources in real-time to enable log search, monitoring, and analytics across IT, security, and business functions.
- Splunk provides solutions for IT operations, security, IoT and industrial data, and business analytics to help customers address challenges in those areas.
SplunkLive! Paris 2018: Legacy SIEM to Splunk (Splunk)
Presented at SplunkLive! Paris 2018: Legacy SIEM to Splunk, How to Conquer Migration and Not Die Trying:
- Why?
- SIEM Replacement
- Use Cases
- Data Sources & Data Onboarding
- Architecture
- Third Party Integrations
- You Got This
Splunk Webinar: IT Operations Demo for Troubleshooting & Dashboarding (Georg Knon)
This document provides an overview of Splunk's IT operations software. It discusses the challenges facing IT operations, including siloed tools and reactive problem solving. It presents Splunk as a solution, with its ability to index and analyze machine data from any source in real-time. Key benefits highlighted include faster troubleshooting to reduce downtime, proactive monitoring to address issues before they become problems, and increased operational visibility across the IT environment. The document concludes with a demonstration of Splunk's IT service intelligence capabilities.
Splunk AI & Machine Learning Roundtable 2019 - Zurich (Splunk)
Splunk Artificial Intelligence and Machine Learning Roundtable held in Zurich on November 6th 2019. Presented by Philipp Drieger, Staff Machine Learning Architect.
These are the slides from the webinar broadcast on April 1st 2020, presented by Philipp Drieger. Content covers:
- Introduction to AI and ML Features in Splunk
- Customer Use Case Examples
- Live Demo of Machine Learning Toolkit, with examples for:
Methods for Anomaly Detection, Predictive Analytics and Forecasting, and Clustering
- Custom Machine Learning, incl.: Advanced Containerization and Expansion with MLSPL API
Splunk ITOA Roundtable - Zurich: 30th November 2017 (Splunk)
Presentation slides from the Splunk ITOA roundtable event that took place in Zurich, November 2017. Attendees learnt:
- What is machine learning
- Why machine learning is critical for today's IT
- The challenges you will need to overcome
- Some real examples of machine learning use cases
- How to get started with machine learning
Splunk Webinar – Taking IT Operations to the Next Level (Splunk)
Gain actionable insights into your data and take IT operations to the next level.
In our webinar, a demo shows you:
- how to gain service context by combining behavioral and performance data
- how to get an accurate picture of your environment so you can optimize processes
- how to accelerate root-cause analysis and thereby counteract customer-facing outages
- how to prioritize incident investigations and shorten time-to-resolution through behavioral and event analytics
- how analytics and machine learning can improve service intelligence
How to Move from Monitoring to Observability, On-Premises and in a Multi-Clou... (Splunk)
With the acceleration of customer and business demands, site reliability engineers and IT Ops analysts now require operational visibility into their entire architecture, something that traditional APM tools, dev logging tools, and SRE tools aren't equipped to provide. Observability enables you to inspect and understand your IT stack on premises and in the cloud(s); it is no longer just about whether your system works (monitoring), but about being able to ask why it is not working (observability). This presentation will outline key steps to take to move from monitoring to observability.
Splunk provides a platform for operational intelligence that allows users to analyze machine data from any source. The document discusses Splunk products and solutions for IT service management, security intelligence, and Internet of Things applications. Splunk has over 11,000 customers across various industries.
How to analyze text data for AI and ML with Named Entity Recognition (Skyl.ai)
About the webinar
The Internet is a rich source of data, mainly textual data. But making use of huge quantities of data is a complex and time-consuming task. NLP can help with this problem through the use of Named Entity Recognition (NER) systems. Named entities are terms that refer to names, organizations, locations, values, etc. NER annotates texts, marking where and what type of named entities occur in them. This step significantly simplifies further use of such data, allowing for easy categorization of documents, sentiment analysis, improved automatically generated summaries, and more.
Further, in many industries the vocabulary keeps changing and growing with new research, abbreviations, and long, complex constructions, which makes it difficult to get accurate results or use rule-based methods. Named Entity Recognition and Classification can help to effectively extract, tag, index, and manage this fast and ever-growing knowledge.
Through this webinar, we will understand how NER can be used to extract key entities from large volumes of text data
What you will learn
- How organizations are leveraging Named Entity Recognition across various industries
- Live demo - Identify & classify complex terms with NERC (Named Entity Recognition & Categorization)
- Best practices for automating machine learning models in hours, not months
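The annotation output NER produces can be made concrete with a toy sketch. Real systems use statistical or neural models that generalize beyond a fixed list; the gazetteer below is a deliberately simplified illustration with made-up entries, showing only the shape of the result: which term, which label, at which offset.

```python
import re

# Toy gazetteer-based NER: matches a fixed term list and labels each hit.
GAZETTEER = {
    "Splunk": "ORG",
    "Munich": "LOC",
    "Zurich": "LOC",
}

def annotate(text):
    """Return (entity, label, start_offset) triples found in `text`,
    sorted by position, mimicking the output shape of an NER annotator."""
    entities = []
    for term, label in GAZETTEER.items():
        for match in re.finditer(re.escape(term), text):
            entities.append((term, label, match.start()))
    return sorted(entities, key=lambda e: e[2])

print(annotate("Splunk presented in Munich and Zurich."))
```

Downstream tasks mentioned in the blurb, document categorization or index building, consume exactly these triples rather than the raw text.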
This summary provides an overview of a presentation about Splunk:
1. The presentation introduces Splunk, an enterprise software platform that allows users to search, monitor, and analyze machine-generated big data for security, IT and business operations.
2. Key components of Splunk include universal forwarders for data collection, indexers for data storage and search heads for data visualization. Splunk supports data ingestion from various sources like servers, databases, applications and sensors.
3. A demo section shows how to install Splunk, ingest sample data, perform searches, set up alerts and reports. It also covers dynamic field extraction, the search command language and Splunk applications.
The document provides an overview of the Splunk data platform. It discusses how Splunk helps organizations overcome challenges in turning real-time data into action. Splunk provides a single platform to investigate, monitor, and take action on any type of machine data from any source. It enables multiple use cases across IT, security, and business domains. The document highlights some of Splunk's products, capabilities, and customer benefits.
Splunk Discovery: Warsaw 2018 - Solve Your Security Challenges with Splunk En... (Splunk)
This document summarizes how Splunk Enterprise Security can help organizations strengthen their security posture and operationalize security processes. It discusses how Splunk ES allows organizations to centralize analysis of endpoint, network, identity, and threat data for improved visibility. It also emphasizes developing an investigative mindset when handling alerts to efficiently determine the root cause. Finally, it explains how Splunk ES can operationalize security processes by providing a single source of truth and integrating security technologies to automate responses.
Splunk's machine learning framework combined with Splunk's Event Management capabilities gives operations teams the opportunity to act proactively and automate responses to an event before it becomes an IT outage. This session will detail and demonstrate how to predict the health score of your business service, proactively take action based on those predictions, and publish to your collaborative messaging and automation solutions.
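The predict-then-act loop described above can be sketched minimally: forecast the next health score from recent history and alert when the forecast, rather than the current value, crosses a threshold. This naive trend-adjusted average is an illustration only, not ITSI's actual predictive model; the window and threshold are assumed values.

```python
from statistics import mean

def predict_health(scores, window=3):
    """Naively forecast the next health score as the trend-adjusted
    mean of the last `window` observations (illustrative model)."""
    recent = scores[-window:]
    trend = recent[-1] - recent[0]
    return mean(recent) + trend / window

def should_alert(scores, threshold=50, window=3):
    """Proactively alert when the *predicted* score drops below threshold,
    i.e. before the observed score actually gets there."""
    return predict_health(scores, window) < threshold

history = [90, 80, 70, 60, 48, 36]  # steadily degrading service health
print(predict_health(history))  # → 40.0
print(should_alert(history))    # → True
```

The proactive part is the `should_alert` check: it fires on the forecast, which is the point at which a notification could be pushed to a chat or automation tool.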
Similar presentations to SplunkLive! Munich 2018: Get More From Your Machine Data with Splunk & AI:
.conf Go 2023 - Raiffeisen Bank International (Splunk)
This document discusses standardizing security operations procedures (SOPs) to increase efficiency and automation. It recommends storing SOPs in a code repository for versioning and referencing them in workbooks, which are lists of standard tasks to follow during investigations. The goal is to have investigation playbooks in the security orchestration, automation and response (SOAR) tool perform the predefined investigation steps from the workbooks to automate incident response. Standard, vendor-agnostic procedures help analysts automate faster without wasting time.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu... (Splunk)
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?" ("The right recipe for the digital (security) revolution toward Telematik Infrastruktur 2.0 in healthcare?")
Speaker: Stefan Stein, Team Lead CERT | gematik GmbH, M.Eng. in IT Security & Forensics, doctoral student at TH Brandenburg & Universität Dresden
This document describes Cellnex's transition from a Security Operations Center (SOC) to a Computer Security Incident Response Team (CSIRT). The transition was driven by Cellnex's growth and the need to automate processes and tasks to improve efficiency. Cellnex implemented Splunk SIEM and SOAR to automate the creation, remediation, and closure of incidents. This allowed staff to focus on strategic tasks and improve KPIs such as resolution times and emails analyzed...
.conf Go 2023 - El camino hacia la ciberseguridad (ABANCA) (Splunk)
This document summarizes ABANCA's journey toward cybersecurity with Splunk, from bringing on dedicated profiles in 2016 to becoming a monitoring and response center with more than 1 TB of daily ingest and 350 use cases aligned with MITRE ATT&CK. It also describes mistakes made and solutions implemented, such as normalizing data sources and training operators, and current pillars such as automation, visibility, and alignment with MITRE ATT&CK. Finally, it notes remaining challenges...
Splunk - BMW connects business and IT with data driven operations, SRE and O11y (Splunk)
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
The document is a presentation on cyber security trends and Splunk security products from Matthias Maier, Product Marketing Director for Security at Splunk. The presentation covers trends in security operations like the evolution of SOCs, new security roles, and data-centric security approaches. It also provides updates on Splunk's security portfolio including recognition as a leader in SIEM by Gartner and growth in the SIEM market. Maier highlights some breakout sessions from the conference on topics like asset defense, machine learning, and building detections.
Data foundations building success, at city scale – Imperial College London (Splunk)
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen... (Splunk)
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out – and significantly increasing the maturity level – of the SOC. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
Top challenges faced in improving security posture
Key KPIs implemented in order to measure success
Strategies and approaches applied in the SOC
How MITRE ATT&CK and Splunk Enterprise Security were utilised
Next steps in their maturity journey ahead
This document summarizes a presentation about observability using Splunk. It includes an agenda introducing observability and why Splunk for observability. It discusses the need for modernization initiatives in companies and the thousands of changes required. It presents that Splunk provides end-to-end visibility across metrics, traces and logs to detect, troubleshoot and optimize systems. It shares a customer case study of Accenture using Splunk observability in their hybrid cloud environment. Finally, it concludes that observability with Splunk can drive results like reduced downtime and faster innovation.
This document contains slides from a Splunk presentation covering the following topics:
- Updated Splunk logo and information about meetings in Zurich and sales engineering leads
- Ideas for confused or concerned human figures in design concepts
- Three buckets of challenges around websites slowing, apps being down, and supply chain issues
- Accelerating mean time to detect, identify, respond and resolve through cyber resilience with Splunk
- Unifying security, IT and DevOps teams
- Splunk's technology vision focusing on customer experience, hybrid/edge, unleashing data lakes, and ubiquitous machine learning
- Gaining operational resilience through correlating infrastructure, security, application and user data with business outcomes
This document summarizes a presentation about Splunk's platform. It discusses Splunk's mission of helping customers create value faster with insights from their data. It provides statistics on Splunk's daily ingest and users. It highlights examples of how Splunk has helped customers in areas like internet messaging and convergent services. It also discusses upcoming challenges and new capabilities in Splunk like federated search, flexible indexing, ingest actions, improved data onboarding and management, and increased platform resilience and security.
The document appears to be a presentation from Splunk on security topics. It includes sections on cyber security resilience, the data-centric modern SOC, application monitoring at scale, threat modeling, security monitoring journeys, self-service Splunk infrastructure, the top 3 CISO priorities of risk based alerting, use case development, a security content repository, security PVP (posture, vision, and planning) and maturity assessment, and concludes with an overview of how Splunk can provide end-to-end visibility across an organization.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
4. Humans are good at learning, but we get lost in volume and details…
5. Why AI & Machine Learning?
▶ Improve decision-making
▶ Uncover hidden trends or relationships
▶ Alert on deviations
▶ Forecast or anticipate incidents
All of this requires diverse data from across many silos: lots of unstructured, real-time data.
6. Run the Business in Real Time
[Diagram: a timeline running from "T – a few days" (data from the past; historical reporting with BI tools and data lakes) through real-time data (Security, IT and Business Operations Centers) to "T + a few days" (statistical forecast; predictive models), with a "grey space" between historical reporting and prediction.]
8. Splunk Customers Have ML Problems
Anomaly Detection:
▶ Deviation from past behavior
▶ Deviation from peers (aka multivariate AD or cohesive AD)
▶ Unusual change in features
Predictive Analytics:
▶ Predicting churn
▶ Predicting events
▶ Trend forecasting
▶ Detecting influencing entities
▶ Early warning of failure (predictive maintenance)
Clustering:
▶ Identify peer groups
▶ Event correlation
▶ Reduce alert noise
9. The ML Process
1. Get and explore data
2. Select and fit an algorithm, generating a model
3. Apply and validate models
4. Surface models to consumers to solve problems
5. Operationalize
Value hypothesis. Problem: <stuff in the world> causes big time and money expense. Solution: build an ML model to forecast <possible incidents>, act pre-emptively and learn.
11. Overview of AI Powered by ML at Splunk
▶ Core platform: Search
▶ Packaged premium solutions
▶ Machine Learning Toolkit
12. Search Includes Machine Learning
Core platform search is a powerful and highly flexible interface built with ML
13. Splunk IT Service Intelligence
Data-defined, data-driven service insights:
▶ Get data
▶ Define services, entities and KPIs
▶ Monitor and troubleshoot
▶ Analyze and detect
Adaptive thresholds and anomaly detection.
14. Splunk User Behavior Analytics
An out-of-the-box solution that helps organizations find anomalous behavior, risky users and unknown threats with the use of machine learning.
15. Splunk Machine Learning Toolkit
Extends Splunk platform functions and provides a guided modeling environment to build custom analytics for any use case.
▶ Assistants: guided model building, testing and deployment for common objectives
▶ Showcases: interactive examples for typical IT, security, business and IoT use cases
▶ Algorithms: 25+ standard algorithms included with the Toolkit
▶ ML Commands: new SPL commands to fit, test and operationalize models
▶ Python for Scientific Computing Library: access to 300+ open source algorithms
16. Custom Machine Learning – Success Formula
Operational success sits at the intersection of three kinds of expertise:
▶ Domain expertise (IT, security…): identify use cases, drive decisions, set business/ops priorities
▶ Splunk expertise: SPL, data prep
▶ Data science expertise: statistics/math background, algorithm selection, model building
The Splunk ML Toolkit facilitates and simplifies this via examples and guidance.
17. Continuous Data Ingest at Scale
Input channels:
▶ Native inputs: TCP, UDP, logs, scripts, wire, mobile
▶ Industrial data: SCADA, AMI, meter reads
▶ Modular inputs: MQTT, AMQP, CoAP, REST, JMS
▶ HTTP Event Collector: token-authenticated events
▶ Technology partnerships: Kepware, AWS IoT, Cisco, Palo Alto
Data arrives in real time from OT (industrial assets) and IT (consumer and mobile devices), enriched by external lookups (maintenance info, asset info, data stores). Engineers, data analysts, security analysts and business users can then search, alert, visualize, predict and develop on top of it.
23. Machine Learning Customer Success
Optimizing operations and business results:
▶ Network incident detection
▶ Service degradation detection
▶ Security/fraud prevention
▶ Machine learning consulting services
▶ Analytics app built on ML Toolkit
▶ Entertainment company: predict gaming outages, fraud prevention
▶ Cell tower incident detection, optimize repair operations
▶ Prioritize website issues and predict root cause
Hello, my name is Dirk Nitschke and I work as a Sales Engineer at Splunk.
This talk is titled "Get More Out of Your Machine Data with Splunk and Artificial Intelligence".
Artificial intelligence is a broad field; here we look at the concept of machine learning.
The first question, of course, is why we need the help of machines at all to get more out of our machine data.
After all, we humans are quite good at learning things, applying what we have learned, and learning from experience.
It becomes difficult, however, when we are asked to process large volumes of data. Working through a mountain of data is time-consuming for us; we are simply not fast enough.
It is also difficult when we have to keep many different details in mind for a relatively short time. Many of you have probably heard of the magic number 7: most people can hold a random sequence of about seven letters in short-term memory, give or take a letter or two. Doing better takes tricks and practice, such as frequent repetition (which takes time) or managing to put the letter sequence into another context. Remembering whole words, for example, is much easier for us, even when they are longer than seven characters.
The trend is to make decisions not by gut feeling but in a traceable way, based on data, and ideally promptly, perhaps even in near real time, not only after weeks or months.
Even better than merely reacting is to recognize new developments early, so that you can act proactively and gain an advantage over your competitors.
The data required for this is diverse, comes from different areas, and is mostly unstructured.
Machine learning can help us make decisions based on large volumes of such data, for example by detecting trends and unusual behavior or by making forecasts, in real time if needed.
What do you need for this? A platform that can capture and analyze the relevant unstructured machine data at scale and deliver insights that you can then act on.
The machine data needed for this comes from very different areas. Perhaps you already run an IT or security operations center today, or even an overarching business operations center. Besides the current data of the last few days or weeks, other data is also of interest, for example historical data, for learning from it and recognizing patterns.
Or for enriching current data. Think of a web shop, for example: from the data you collect you can see which goods sit in shopping carts that your visitors never checked out. If you enrich this data with the prices and production costs of the products, you can see what these abandoned carts are worth, i.e. how much revenue you failed to capture.
The nice thing is that Splunk provides exactly such a platform for analyzing machine data.
But what actually is machine learning?
Looking at the formal definition, machine learning is concerned with algorithms that perform a certain task, learn from experience while doing so, and can thus perform their task better in the future.
How do you put this into practice? First, you formulate a problem you want to solve. Take the failure of a production machine, which means that a number of employees cannot work and no goods can be produced.
In addition, you define the goal you want to achieve. In our case, the machine's failure is to be predicted, ideally with enough lead time to intervene.
Which data should be used to investigate the problem? This data is first examined, for example for completeness and quality of collection (i.e. does it contain nonsense, such as a car's year of manufacture lying in the future), and cleaned if necessary.
Then you build a model based on a mathematical algorithm; that is, you describe the relationship between the collected data and the event "machine failure". This model is then applied to the test data and the result is checked. If necessary, you adjust the algorithm's parameters to obtain a better model. Finally, you present the results.
This is not the end, however. Usually you operationalize the whole thing: the users of the model give feedback on its accuracy, on changed requirements, and on other findings, which flow back and lead to a refinement of the model. The model "learns".
What options do you now have for applying machine learning in Splunk?
We want to make using machine learning as easy as possible. In Splunk, machine learning algorithms and functionality come in three forms, reflecting different levels of prior knowledge and different tasks.
Namely in Splunk Core itself, packaged in our premium solutions ITSI and UBA, and additionally in the so-called Machine Learning Toolkit. Let us now take a closer look at these three variants.
The search language in Splunk Core already contains a number of commands that can be used for the three typical use case families, for example:
anomalydetection, for detecting outliers
predict, for forecasting values over time
cluster, for grouping events (this, by the way, is what powers the "Patterns" tab in the GUI)
and several more. And for outlier detection you can also use the classic statistical functions, such as mean and standard deviation.
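As a minimal sketch of these built-in commands (the index names, field names and lookup file here are illustrative assumptions, not taken from the slides):

```spl
index=web status=200 | timechart span=1h count | predict count future_timespan=24

index=_internal log_level=ERROR | cluster showcount=true | table cluster_count _raw

| inputlookup firewall_traffic.csv | anomalydetection bytes_out
```

Each of these is an ordinary search, so its results can feed dashboards or alerts like any other search result.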
In our premium solutions, Splunk has built machine learning in for dedicated use cases, which simplifies its use.
Splunk ITSI is an extension designed for end-to-end monitoring of services.
In ITSI you therefore define services along with key performance indicators that describe the health of those services. Machine learning is integrated here for the following three areas in particular, and its use is greatly simplified by a graphical interface:
* Adaptive thresholds: a fixed threshold for a KPI is not always what you want. Think, for example, of the number of login attempts on a system. In the morning we expect a much higher number than in the evening. A fixed threshold then either produces false-positive notifications every morning, or we raise the threshold and get almost no alerts at all, which makes the whole KPI pointless. Wouldn't it be helpful to define thresholds that adapt to the usual behavior over time, i.e. a high threshold in the morning and a lower one in the evening? That is exactly what adaptive thresholds achieve: using historical data as a baseline, different thresholds are set for different time windows.
* Splunk ITSI also includes capabilities for detecting anomalies, i.e. deviations from expected behavior.
* Last but not least, Splunk ITSI can group so-called "notable events" using machine learning, helping to reduce the number of notifications to be worked through to a manageable level and, in particular, linking them to the affected services so that handling can be prioritized according to each service's importance.
Splunk User Behavior Analytics (Splunk UBA) is another premium solution that uses machine learning out of the box. It ships with a large number of algorithms designed to detect unknown attacks and insider threats. This helps security operations centers, for example, to proactively analyze unusual user behavior, such as an unusually high number of file accesses.
If you want to pursue your own approach, have full flexibility, and dive deeper into machine learning, you should look at the Machine Learning Toolkit. This is an app that extends the Splunk search language with new commands and thereby provides access to more than 30 typical machine learning algorithms.
In addition, so-called assistants help you get started by guiding you through the individual steps of building, testing and applying a model. The assistants cover predicting numeric and categorical fields, detecting numeric or categorical outliers, and clustering data.
There is also a set of examples to experiment with and learn from.
Which variant should you choose? We want you to apply machine learning successfully, so here is some guidance.
Applying machine learning successfully requires certain skills. Besides Splunk knowledge, you need knowledge of the domain under investigation and, of course, data science skills. Choose depending on the problem at hand and the expertise available: if you have access to data science skills, want full flexibility, or want to engage more deeply with machine learning yourself, the MLTK is a good choice. If you have no access to data science, one of our packaged solutions may be the better fit.
So far we have talked about needing different kinds of machine data, but we have not yet looked at how that data gets into Splunk in the first place and what Splunk can then do with it.
Here, too, Splunk is very flexible. Besides reading classic log files, data can reach Splunk in other ways, for example via REST API. Applications can also send data themselves, for example via HTTP to the so-called HTTP Event Collector. There are likewise interfaces for data produced in cloud environments such as Amazon Web Services.
The ability to ingest network data is certainly interesting as well; the keyword here is Splunk Stream.
All this data is indexed in Splunk and is immediately available for analysis, i.e. it can be searched. Search results then form the basis for alerts or can be visualized. As discussed, the collected data also serves as the source for forecasts and outlier detection.
The data is used by different users, and each gets the view of the data that is relevant to them.
The data collected in Splunk can be enriched from external sources, for example information held in relational databases. Conversely, Splunk can itself send data to other systems.
Everything in Splunk is based on a search, and such a search can use machine learning functionality, whether through the commands already included in the Splunk search language or through the search commands added by the Machine Learning Toolkit.
That means, for example, that you can trigger an alert based on the result of a search, telling you that there is an unusually high number of failed login attempts on one of your systems. Such an alert can be sent as an email or a messenger notification, or it can generate a ticket in your ticketing system; examples include integrations with BMC Remedy and ServiceNow.
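A minimal sketch of such an alert search, assuming an `auth` index with an `action=failure` field (both names are illustrative): it flags any hour whose failed-login count exceeds the historical mean by three standard deviations, and could be saved as a scheduled alert.

```spl
index=auth action=failure
| timechart span=1h count AS failures
| eventstats avg(failures) AS mean stdev(failures) AS sd
| where failures > mean + 3 * sd
```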
I would now like to walk through the Machine Learning Toolkit with its showcases and assistants.
MLTK demo: You first land in the showcases. These are divided into several categories: predicting numeric values, detecting numeric outliers, and so on.
Each category briefly describes the kind of problem it addresses, and we see the individual examples that are available.
We pick one, Server Power Consumption. What happens now is this: we land in the assistant for predicting numeric fields, the sample data is loaded, and some parameters are set.
In the upper part the data is read in, here simply from a CSV file. You can, however, use any Splunk search here to select the data you need.
Below that there is the option to preprocess the data. It might make sense to scale the data, for instance; we don't need that here.
Then we select the algorithm for solving the problem, the variable we want to predict, and the variables we want to use for the prediction.
On the right we define how to split the loaded data set: we can define a so-called training set and a test set. What does that mean?
The model is built from the training data. The test set is then used to validate the model and assess how well it describes the test data.
"Show SPL" shows us what this would look like in the search language.
Preview Data: "predicted(ac_power)" shows us the result of the model applied to the data, and the residual shows us the error.
"Show SPL" shows us the SPL for this, and "Scheduled Alert" can define an alert right away!
Fit, apply, evaluate: the resulting model can then be used in other searches.
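The assistant's workflow boils down to two MLTK search commands, `fit` and `apply`. A hedged sketch of the Server Power Consumption example follows; the lookup file, model name and predictor field names are illustrative assumptions, as only `ac_power` and `predicted(ac_power)` appear in the demo:

```spl
| inputlookup server_power.csv
| fit LinearRegression ac_power from total_cpu_util total_disk_util into power_model

| inputlookup server_power.csv
| apply power_model
| eval residual = ac_power - 'predicted(ac_power)'
```

Once saved with `into`, the model can be applied in any other search, including scheduled ones, which is how it gets operationalized.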
Industry: Technology
Splunk Use Cases: IT Operations
Challenges:
- Monitoring and response required for 24/7 customer access
- Separate silos created a balkanized IT department
- Needed to pare down thousands of alerts and events
Splunk Products: Splunk Enterprise, Splunk ITSI
Data Sources: Application, Device, Firewall, Network, Server
Case Study: https://www.splunk.com/en_us/customers/success-stories/leidos.html
Nasdaq is a global exchange operator.
They use the Splunk Enterprise Security premium solution for security investigations. With Splunk ES they have gained an efficiency improvement of over 50% in analysts' ability to track down data.
Splunk has also cut their security investigation time by 50%.
Splunk gives them a skill set that is common across the organization: it is reusable by analysts at different levels and provides a deep understanding of the organization's overall security posture.
Our Early Adopter customers have had much success creating and operationalizing ML models. Some examples include:
Zillow makes hundreds of website updates daily, including content from several partners nationally. These updates can often cause issues in the site. Zillow built an ML model that predicts which of these changes is likely to result in an issue to allow the team to fix them proactively. Once a potential or actual issue has been identified, the model can also provide guidance on likely root cause and resolution.
TELUS has thousands of mobile phone towers across Canada; when one of these goes offline it can cause significant disruption for their customers. TELUS built a model to predict which towers are likely to fail so that they can proactively fix issues before they occur.
To summarize: Splunk provides the platform for collecting and analyzing machine data, in real time where needed. Machine learning adds further insights and findings on top, which can form the basis of decisions, and its different delivery forms (core search, premium solutions, the ML Toolkit) adapt to your use cases.
As usual, you can give feedback on this talk; please use our Pony Poll for that. The URL hides behind the QR code.
This concludes the "Splunk Overview" track. I hope it was informative for you. Outside you now have the opportunity to talk with each other, with our partners, or with my Splunk colleagues, or to join the discussion at the "Machine Learning Roundtable" I mentioned.