Splunk's Andi Mann addresses what he refers to as the real core of DevOps: increasing collaboration, communication, integration and delivery of better, faster software; the human side of DevOps, combined with the business impacts.
DevOps is powering the computing environments of tomorrow. When properly configured, the Splunk platform provides real-time visibility into the velocity, quality, and business impact of DevOps-driven application delivery across all roles, departments, processes, and systems. DevOps practitioners can use Splunk to support continuous integration/deployment and the real-time feedback that helps the organization with its operational intelligence. Join us for an exciting talk about Splunk's current approach to DevOps, and for examples of how customers are using Splunk today to transform DevOps initiatives.
Data-Driven DevOps: Mining Machine Data for 'Metrics that Matter' in a DevOps... - Splunk
IT organizations are increasingly using machine data - including in DevOps practices - to get away from 'vanity metrics' and instead to generate 'metrics that matter'. These metrics provide visibility into the delivery of new application code and the business value of DevOps, to both IT and business stakeholders.
Machine data provides DevOps teams and others - including QA, secops, CxOs and LOB leaders - with meaningful and actionable metrics. This allows stakeholders to monitor, measure, and continuously improve the velocity and quality of code throughout the software lifecycle, from dev/test to customer-facing outcomes and business impact.
In this session Andi Mann, chief technology advocate at Splunk, will share core methodologies, interesting case studies, key success factors and 'gotcha' moments from real-world experience with mining machine data to produce 'metrics that matter' in a DevOps context.
How to Design, Build and Map IT and Business Services in Splunk - Splunk
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand.
Listen to Your Machines: DevOps Analytics for Better Feedback Loops - Splunk
Effective DevOps Practices for collecting, correlating and analyzing DevOps data. First presented by Splunk's Andi Mann at DevOps Summit at Cloud Expo.
Splunk Enterprise 6.4 delivers a new library of interactive visualizations, faster analytics, and can reduce your historical data storage costs by up to 80%.
See how you can:
• Use new interactive visualizations to view results, and easily create and share your own
• Speed investigation and discovery of large-scale data with event sampling
• Reduce storage costs by up to 80% for aged data
• Get wider visibility into system performance and health with new management views
With the new features and lower storage costs offered by Splunk Enterprise 6.4, doing big data analysis is now easier than ever. See it in action by attending this webinar.
What's New in Splunk Enterprise 6.5 - Splunk
Machine learning, simplified data analysis, a lower TCO and much more
We are delighted to announce that Splunk Enterprise 6.5 is now available for download!
Join our November 4 webinar to discover all the new features and highlights of Splunk Enterprise 6.5:
Machine learning analytics to better detect, predict and prevent critical incidents
Simplified data analysis thanks to structured table views that let you prepare and analyze data without using SPL
Automated management to simplify monitoring of common operational issues and their escalation
A lower TCO with a zero-cost option for moving historical data into Hadoop
Download the latest version today!
Learn how Nationwide scaled enterprise DevOps with New Relic.
Be sure to subscribe and follow New Relic at:
https://twitter.com/NewRelic
https://www.facebook.com/NewRelic
https://www.youtube.com/NewRelicInc
<p>From <a href="https://en.wikipedia.org/wiki/Site_reliability_engineering" target="_blank">Wikipedia</a>: Site reliability engineering (SRE) is a discipline that incorporates aspects of software engineering and applies them to operations, with the goal of creating ultra-scalable and highly reliable software systems.</p>
<p>Over the past year we at Acquia have built our own SRE team to help our products and services scale with the demand of our growing number of customers. We wish to share our experience so that others are enabled to do the same and reap the rewards.</p>
<p>This presentation will discuss how the SRE team came about at Acquia, what achievements we have made so far, and the lessons we have learned along the way. We will then show the steps on how to introduce SRE to your workplace so you can deliver more reliable and scalable services to your customers! We will specifically cover:</p>
<ul>
<li>SRE's basic concepts and history from Google</li>
<li>The management support you will need to get started</li>
<li>Introducing the idea of service level objectives and error budgets</li>
<li>Operational Responsibility Assessments as a tool to measure risk</li>
<li>Creating a Launch Readiness Checklist to standardize and improve product launches</li>
<li>Finding ideal candidates for your SRE team</li></ul>
<p>The intended audience is software engineers, system administrators, and managers who want to improve how they do their work and how their products/services perform.</p>
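The service level objectives and error budgets mentioned in the session outline can be sketched as a small calculation. This is a minimal illustration of the general concept, assuming a simple availability SLO; the function names and numbers are not from the talk:

```python
# Illustrative sketch of an availability SLO and its error budget.

def error_budget_minutes(slo: float, window_minutes: int) -> float:
    """Total downtime allowed over the window for a given SLO (e.g. 0.999)."""
    return (1.0 - slo) * window_minutes

def budget_remaining(slo: float, window_minutes: int, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means the SLO is breached)."""
    budget = error_budget_minutes(slo, window_minutes)
    return (budget - downtime_minutes) / budget

# A 99.9% availability SLO over a 30-day window allows ~43.2 minutes of downtime.
budget = error_budget_minutes(0.999, 30 * 24 * 60)
remaining = budget_remaining(0.999, 30 * 24 * 60, downtime_minutes=10.0)
```

A team can then gate risky launches on `remaining`: plenty of budget left means room to ship, a depleted budget means prioritizing reliability work.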
Boston DevOps Days 2016: Implementing Metrics Driven DevOps - Why and How - Andreas Grabner
How can we detect a bad deployment before it hits production? By automatically checking the right architectural metrics in your CI/CD pipeline, you can stop a build before it's too late. Let's hook up your test automation with app metrics and use them as quality gates to stop bad builds early!
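A metrics-driven quality gate of the kind this abstract describes can be sketched in a few lines. The metric names and thresholds below are hypothetical, purely to illustrate the pattern of comparing build metrics against limits:

```python
# Illustrative CI quality gate: flag a build when key architectural metrics regress.

THRESHOLDS = {
    "db_calls_per_request": 50,   # e.g. catches N+1 query problems
    "avg_response_time_ms": 200,
    "bytes_transferred_kb": 512,
}

def quality_gate(build_metrics: dict) -> list:
    """Return the list of violated metrics; an empty list means the build passes."""
    return [
        name for name, limit in THRESHOLDS.items()
        if build_metrics.get(name, 0) > limit
    ]

violations = quality_gate({"db_calls_per_request": 180, "avg_response_time_ms": 95})
# A CI job would fail the pipeline here if `violations` is non-empty.
```

The point of the pattern is that test automation feeds real measurements into the gate, so a build with, say, an exploding database call count is rejected long before production.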
Taking Splunk to the Next Level - Architecture - Splunk
This session, led by Michael Donnelly, will teach you how to take your Splunk deployment to the next level. Learn about Splunk high-availability architectures with Splunk Search Head Clustering and Index Replication. Additionally, learn how to use Splunk's operational and management controls to manage capacity and the end-user experience.
AWS and Dynatrace: Moving your Cloud Strategy to the Next Level - Dynatrace
AWS and Dynatrace: Moving your Cloud Strategy to the Next Level
On-Demand Webcast
AWS re:Invent was an exciting time for Dynatrace and we received a lot of “Wows” on our capabilities. We got to demonstrate the only AI-based, full-stack monitoring solution to thousands of AWS prospects and users. We announced our AWS Certified DevOps Competency partnership, and we introduced DAVIS, our natural-language voice interface, to thousands of attendees.
We know that many of you couldn’t attend the event in Las Vegas, so we wanted to share some of the key highlights from the show. And for those of you who were there, you may not have seen all of the benefits Dynatrace provides in the AWS ecosystem due to time constraints of sessions and the large tradeshow floor.
Listen to this 30-minute webcast where Alois Reitbauer and Franz Karlsberger recap some of the highlights of the event, including:
How Dynatrace, as an AWS certified Migration Competency partner, uniquely supports enterprise migrations to AWS
How to achieve faster feedback and improved lead times with AWS CodePipeline and Dynatrace
An overview of the first ever VoiceOps and ChatOps interface via DAVIS, based on our AI approach to full-stack monitoring
Integrating SAP into DevOps Pipelines: Why and How - DevOps.com
Teams practicing DevOps don’t usually have to spend much time thinking about applications like SAP, and SAP often remains a DevOps-free zone that is resolutely difficult to change. But SAP systems enable critical operational processes and, in an increasingly interconnected technology stack, they need to adapt at high speed if a business is going to be truly agile.
DevOps expertise from outside SAP teams is helping to accelerate change in SAP so that digital transformation of products, processes and business models isn’t held back by dependence on slow, unresponsive ‘systems of record’. In this webinar we’ll look at why it’s important to include SAP in cross-application CI/CD pipelines, and how to do so. Join us to learn:
Why DevOps teams should care about SAP
Key SAP differences that DevOps teams need to understand
How to get started with DevOps for SAP and successfully integrate SAP into wider DevOps pipelines
Real-world examples of SAP DevOps adoption
Splunk: How to Design, Build and Map IT Services - Splunk
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand.
An introduction to and demo of the Rancher Docker orchestrator.
The demo portion is available on video (DevoxxFrance 2016 talk): https://www.youtube.com/watch?v=QFqt8xMTChY
Announcing AWS Batch - Run Batch Jobs At Scale - December 2016 Monthly Webina... - Amazon Web Services
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems.
Learning Objectives:
• Learn about the capabilities and features of AWS Batch
• Learn about the benefits of AWS Batch
• Learn about the different use cases
• Learn how to get started using AWS Batch
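The dynamic provisioning idea behind AWS Batch, matching each job's resource requirements to a suitable compute resource, can be illustrated with a toy scheduler. The instance specs below are examples for illustration only, not an authoritative AWS catalog, and this is not the AWS Batch API itself:

```python
# Toy illustration of requirement-based provisioning: pick the smallest listed
# instance type that satisfies a job's vCPU and memory requirements.

INSTANCE_TYPES = [
    ("c5.large",   2,  4),   # (name, vCPUs, memory GiB) - compute optimized
    ("m5.large",   2,  8),   # general purpose
    ("r5.large",   2, 16),   # memory optimized
    ("m5.2xlarge", 8, 32),
]

def pick_instance(vcpus_needed: int, mem_gib_needed: int) -> str:
    """Return the first (smallest listed) instance type that fits the job."""
    for name, vcpus, mem in INSTANCE_TYPES:
        if vcpus >= vcpus_needed and mem >= mem_gib_needed:
            return name
    raise ValueError("no instance type large enough")

# A memory-hungry job lands on a memory-optimized instance.
chosen = pick_instance(2, 12)
```

In the real service this matching, plus scaling the fleet up and down with the job queue, is exactly what AWS Batch manages for you, which is why no batch scheduling software needs to be installed.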
Multi Security Checkpoints on DevOps Platform - Sonatype
Hasan Yasar, Carnegie Mellon University
“Software security” often evokes negative feelings among developers because it is linked with challenges and uncertainty around rapid releases. The burgeoning concepts of DevOps can be applied to increase the security of developed applications, improving resiliency by enforcing security at multiple checkpoints. This talk explains how, with a live demo.
FollowFridays, Session 2: The Power of Customer Data and Metrics - BBDO Belgium
In the near future, 15% of your marketing budget will be spent on data analytics. Seems like a lot? Actually, it isn't.
Ask yourself these questions. How do you measure the total impact of your campaign? Which KPIs and metrics really count? Are digital media absolutely ideal for one-to-one communication? What is the upside of testing everything, of clinch rates, of constantly optimizing?
Lots of questions. Lots of answers. Data analytics teach us that the lessons learned make all the difference when applied properly.
Treating operational aspects of software as 'non-functional requirements' and 'an Ops problem' rather than a core part of the software product leads to poor live service and unexplained errors in Production.
Deployability, recoverability, diagnosability, monitorability, and high quality logging are simply features of a software system, along with user-visible features surfaced via the UI, or a capability of an API endpoint.
However, many Product Managers understandably feel uneasy about taking on the (necessary) responsibility for prioritising operational features alongside user-visible and API features.
This session aims to bring Scrum Masters and Product Owners up to speed on operational features, empowering them to make effective prioritisation choices about all kinds of product features, whether user-visible or operational.
Metrics - You are what you measure (DevOps Perth) - Rob Crowley
DevOps is no longer just the concern of cutting-edge start-ups in Silicon Valley; it is gaining wide-scale adoption within established industries. This session focuses on the Metrics pillar of DevOps and explores how we can leverage metrics to drive the software delivery process based on data rather than gut feel and opinions.
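One common delivery metric of the kind this session discusses is lead time for changes: the elapsed time from commit to deployment. A minimal sketch, using made-up sample timestamps rather than any data from the talk:

```python
from datetime import datetime

# Illustrative median lead time for changes (commit time -> deploy time).

changes = [
    # (commit time, deploy time) - sample data for illustration
    (datetime(2016, 11, 1, 9, 0),  datetime(2016, 11, 1, 15, 0)),   # 6 hours
    (datetime(2016, 11, 2, 10, 0), datetime(2016, 11, 3, 10, 0)),   # 24 hours
    (datetime(2016, 11, 4, 8, 0),  datetime(2016, 11, 4, 12, 0)),   # 4 hours
]

def median_lead_time_hours(pairs):
    """Median hours between each commit and its deployment."""
    hours = sorted((deploy - commit).total_seconds() / 3600 for commit, deploy in pairs)
    mid = len(hours) // 2
    if len(hours) % 2:
        return hours[mid]
    return (hours[mid - 1] + hours[mid]) / 2

lead = median_lead_time_hours(changes)
```

Tracked over time, a number like this gives teams an objective trend to act on instead of opinions about whether delivery is getting faster.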
DevOps and ITSM intersect, they fit one model, they can be reconciled - we need to find the common ground.
See http://www.itskeptic.org/content/unified-theory
Taming the Technology of Digital Transformation - Splunk
Andi Mann explains how to tame digital transformation: Establish new roles, teams and processes to support digital; adopt new technology to deliver new digital experiences and rebuild service delivery capability with a "digital first" approach.
Data-Driven DevOps: Improve Velocity and Quality of Software Delivery with Me... - Splunk
Much of the value of DevOps comes from a (renewed) focus on measurement, sharing, and continuous feedback loops. In increasingly complex DevOps workflows and environments, and especially in larger, regulated, or more crystallized organizations, these core concepts become even more critical.
This session will show how, by focusing on 'metrics that matter,' you can provide objective, transparent, and meaningful feedback on DevOps processes to all stakeholders. Learn from real-life examples how to use the data generated throughout application delivery to continuously identify, measure, and improve deployment speed, code quality, process efficiency, outsourcing value, security coverage, audit success, customer satisfaction, and business alignment.
Innovate Better Through Machine Data Analytics - Hal Rottenberg
This talk was presented at IP Expo Manchester in May 2016. The themes discussed are:
- how does machine data relate to devops?
- how can tracking this data lead to better outcomes?
- what types of data are important to track?
As humans, we face an increasing amount of data and information every day. To derive meaning and make sense of this complex world, we constantly scan the world around us and select what we believe is important and what is not. In this session I will walk through an end-to-end framework for turning data into business actions.
The ultimate container monitoring bake-off - Rancher Online Meetup October 2016 - Shannon Williams
In our October online meetup we asked three of our engineers to demonstrate their favorite container monitoring tools and show how to deploy and use them. They provided an in-depth overview of Sysdig, Datadog and Prometheus to help you determine which tool is best for your workload, and demonstrated:
- Deploying container monitoring tools from the Rancher catalog.
- Key differences between container and server monitoring.
- How Sysdig, Datadog and Prometheus compare with one another.
Please register for the next Rancher online meetup: http://rancher.com/meetup
In this slidedeck, Infochimps Director of Product, Tim Gasper, discusses how Infochimps tackles business problems for customers by deploying a comprehensive Big Data infrastructure in days; sometimes in just hours. Tim unlocks how Infochimps is now taking that same aggressive approach to deliver faster time to value by helping customers develop analytic applications with impeccable speed.
Code to Release using Artificial Intelligence and Machine Learning - STePINForum
by Nataraj Narayan, Managing Director, AutonomIQ, at STeP-IN SUMMIT 2018, the 15th International Conference on Software Testing, on August 31, 2018 at Taj, MG Road, Bengaluru
Turning Analysis into Action with APIs - Superweek 2017 - Mark Edmondson
Presentation given by Peter Meyer and Mark Edmondson at Superweek 2017 in Hungary. Includes three examples of using APIs in a tag management solution to give better data to make decisions and use predictions.
Turning Analysis into Action with APIs - Superweek 2017 - Peter Meyer
Presentation given by Mark Edmondson and Peter Meyer at Superweek 2017 in Hungary. Includes three examples of using APIs in a tag management solution to give better data to make decisions and use predictions.
Expert data analytics prove to be highly transformative when applied in context to corporate business strategies.
This webinar covers various approaches and strategies that will give you a detailed insight into planning and executing your Data Analytics projects.
This presentation introduces a new DevOps reference architecture published by IBM. This technology-agnostic reference architecture was developed by harvesting solution architectures from dozens of clients who have been successful in adopting DevOps at scale. The presentation covers the capabilities - across practices, tools, platforms and organizational considerations - that are required for large-scale DevOps adoption in an enterprise.
Building an Open Source AppSec Pipeline - 2015 Texas Linux Fest - Matt Tesauro
Take the ideas of DevOps and the notion of a delivery pipeline and combine them for an AppSec Pipeline. This talk covers the open source components used to create an AppSec Pipeline and the benefits we received from its implementation.
Turn Data Into Actionable Insights - StampedeCon 2016 - StampedeCon
At Monsanto, emerging technologies such as IoT, advanced imaging and geo-spatial platforms; molecular breeding, ancestry and genomics data sets have made us rethink how we approach developing, deploying, scaling and distributing our software to accelerate predictive and prescriptive decisions. We created a Cloud based Data Science platform for the enterprise to address this need. Our primary goals were to perform analytics@scale and integrate analytics with our core product platforms.
As part of this talk, we will share our journey of transformation, showing how we enabled a collaborative discovery analytics environment for data science teams to perform model development; provisioned data through APIs and streams; deployed models to production through our auto-scaling big-data compute in the cloud to perform streaming, cognitive, predictive, prescriptive, historical and batch analytics at scale; and integrated analytics with our core product platforms to turn data into actionable insights.
.conf Go 2023 - The right recipe for the digital (security) revolution for... - Splunk
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?"
Speaker: Stefan Stein -
Team Lead CERT | gematik GmbH, M.Eng. IT Security & Forensics,
doctoral student at TH Brandenburg & Universität Dresden
.conf Go 2023 presentation:
From NOC to CSIRT
Speakers:
Daniel Reina - Country Head of Security Cellnex (España) & Global SOC Manager Cellnex
Samuel Noval - Global CSIRT Team Leader, Cellnex
Splunk - BMW connects business and IT with data-driven operations, SRE and O11y - Splunk
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
Data foundations building success, at city scale – Imperial College London - Splunk
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen... - Splunk
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out – and significantly increasing the maturity level – of the SOC. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
Top challenges faced in improving security posture
Key KPIs implemented in order to measure success
Strategies and approaches applied in the SOC
How MITRE ATT&CK and Splunk Enterprise Security were utilised
Next steps in their maturity journey ahead
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Smart TV Buyer Insights Survey 2024 by 91mobiles - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I have been wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I will give an overview of the infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will share some insights into the approaches I have already got working for real.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
5. Andi Mann @AndiMann @Splunk
Yeah, but … what are you achieving?
I’m gonna need you to come in Sunday.
6. Andi Mann @AndiMann @Splunk
Gartner’s DevOps ‘Metrics that Matter’
Gartner Inc., Data-Driven DevOps: Use Metrics to Help Guide Your Journey, 29 May 2014 G00264319, Analyst(s): Cameron Haight | Tapati Bandopadhyay
8. Andi Mann @AndiMann @Splunk
Some DevOps Metrics that Might Matter
• Culture, e.g.: Retention, Satisfaction, Callouts
• Process, e.g.: Idea-to-cash, MTTR, Deliver time
• Quality, e.g.: Tests passed, Tests failed, Best/worst
• Systems, e.g.: Throughput, Uptime, Build times
• Activity, e.g.: Commits, Tests run, Releases
• Impact, e.g.: Signups, Checkouts, Revenue
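The metric families above all come down to the same move: mining timestamped machine-data events into a single number a stakeholder can act on. As a minimal, hypothetical sketch (the event names and timestamps are illustrative, not a real Splunk schema), here is how one process metric, MTTR, could be derived from raw outage events:

```python
# Hypothetical sketch: deriving one "metric that matters" (MTTR) from
# machine-data events. Event names, fields, and timestamps are
# illustrative assumptions, not a real Splunk schema.
from datetime import datetime

def mttr_minutes(events):
    """Mean time to repair: average minutes between each outage's
    'down' event and the matching 'restored' event."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    durations, start = [], None
    for ts, kind in sorted(events):
        t = datetime.strptime(ts, fmt)
        if kind == "down":
            start = t
        elif kind == "restored" and start is not None:
            durations.append((t - start).total_seconds() / 60)
            start = None
    return sum(durations) / len(durations) if durations else 0.0

events = [
    ("2024-06-01T10:00:00", "down"), ("2024-06-01T10:30:00", "restored"),
    ("2024-06-02T14:00:00", "down"), ("2024-06-02T14:10:00", "restored"),
]
print(mttr_minutes(events))  # 20.0
```

The same pairing pattern works for most of the families listed: culture metrics pair hire/leave events, activity metrics count commit or release events, and impact metrics sum signup or checkout events over a window.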
9. Andi Mann @AndiMann @Splunk
But DevOps Is Always a Unique Journey
What Are Your ‘Metrics That Matter’?
10. Andi Mann @AndiMann @Splunk
One Constant: Machine Data
Every tool, every process, every component, on-prem or off
11. Andi Mann @AndiMann @Splunk
Common Data Fabric: Visibility Across the Whole Dev Lifecycle
[Diagram: machine data from every lifecycle stage (Plan, Code, Build, Test/QA, Stage, Release, Config, Monitor) feeds a common data fabric, exposed via API, SDKs, and UI to other tools and escalation/collaboration systems]
12. Andi Mann @AndiMann @Splunk
Common Data Fabric: Visibility Across the Whole Ops Environment
[Diagram: machine data from the infrastructure layer (server, storage, network, server virtualisation, operating systems), the applications layer (mobile applications, custom applications, cloud services, API services), and other tools such as ticketing/help desk feeds the same common data fabric, exposed via API, SDKs, and UI]
13. Andi Mann @AndiMann @Splunk
Use Machine Data To Identify ‘Waste’
[Diagram: a value-stream map of delivery activities, including Budget, Plan, Architect, Develop (UI, Db, M’ware, Backend, APIs), Build (Dev, Prod), Unit Test, Integration Test, System Test, Security Test, Secure/Comply, CAB, Configure, Deploy, Accept, Launch, Document, Train, Cap Plan, Feedback, Monitor, and Mgmt/Tooling; per-activity timings mined from machine data mark the wasteful wait states (‘W’) between activities]
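The waste on a value-stream map like this is mostly the idle time between activities, which machine data exposes directly. A minimal sketch, assuming illustrative stage names and hour-based timings (nothing here is taken from the slide's actual numbers):

```python
# Hypothetical sketch of value-stream "waste" detection: for each
# consecutive pair of stages, compare the wait before the next stage
# starts against a threshold. Stage names and timings are illustrative.

def waste_report(stages, threshold_hours=8):
    """stages: list of (name, start_hour, end_hour) in pipeline order.
    Flags gaps between stages longer than threshold_hours as waste."""
    wasteful = []
    for (name, _, end), (nxt, start, _) in zip(stages, stages[1:]):
        wait = start - end
        if wait > threshold_hours:
            wasteful.append((f"{name} -> {nxt}", wait))
    return wasteful

pipeline = [("Code", 0, 4), ("Build", 5, 6), ("Test", 30, 33), ("Deploy", 34, 35)]
print(waste_report(pipeline))  # [('Build -> Test', 24)]
```

In practice the start/end timestamps would come from build-server, ticketing, and deployment logs rather than a hand-built list.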
14. Andi Mann @AndiMann @Splunk
Use Machine Data To Manage Testing and QA
• Release when ready, not a date!
• Best / worst developers
• Best / worst providers
• Impact of new code on ops
• Impact of new code on biz
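"Release when ready, not a date" implies a readiness check computed from test results rather than a calendar. A minimal sketch of such a gate, with hypothetical thresholds (the 98% pass rate and zero-critical rule are illustrative assumptions, not Splunk guidance):

```python
# Hypothetical sketch of a "release when ready" gate: decide on release
# from test results mined from build logs, not from a date.
# The thresholds are illustrative assumptions.

def ready_to_release(passed, failed, critical_failures, min_pass_rate=0.98):
    """True only when there are no critical failures and the overall
    pass rate meets the minimum bar."""
    total = passed + failed
    if total == 0 or critical_failures > 0:
        return False
    return passed / total >= min_pass_rate

print(ready_to_release(500, 5, 0))  # True  (pass rate ~99%)
print(ready_to_release(500, 5, 1))  # False (a critical failure blocks)
```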
15. Andi Mann @AndiMann @Splunk
Use Machine Data To Enable Continuous Improvement
[Diagram: feedback loops spanning Application Development, Test and Acceptance, and Production (Plan, Code, Build, Test/QA, Stage, Release, Config, Monitor): defect information, capacity planning, quality standards, enhancement requests, integration requirements, acceptance metrics, service levels and KPIs, and infrastructure dependencies flow back upstream]
16. Andi Mann @AndiMann @Splunk
Use Machine Data To Accelerate Velocity
[Diagram: a continuous delivery loop: product managers identify new opportunities, changes are continuously delivered to market, teams pivot and improve with continuous insights, and auditors are “happy”]
17. Andi Mann @AndiMann @Splunk
Use Machine Data To Improve Quality
[Diagram: a quality pipeline: a developer checks in code, which passes through white-box checks (code quality scans, static security scans) and black-box checks (automated acceptance tests, dynamic security scans, “Chaos Monkey” tests); on test fail the code returns to the developer, on test pass it is promoted, ultimately to production; a QA pattern library, fed from production patterns, is used for test and QA]
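The promote-or-return flow on this slide can be sketched as a sequence of gates, each of which either promotes the code forward or returns it to the developer. The check names mirror the slide; the pass/fail results and the dict-based interface are illustrative assumptions:

```python
# Hypothetical sketch of the slide's promote/return quality pipeline:
# each check either passes (code moves forward) or fails (code returns
# to the developer). Check names mirror the slide; results are invented.

WHITE_BOX = ["code quality scan", "static security scan"]
BLACK_BOX = ["automated acceptance tests", "dynamic security scan",
             "chaos monkey tests"]

def run_pipeline(results):
    """results: dict mapping check name -> bool (pass/fail).
    Returns the stage reached: 'returned: <check>' or 'production'."""
    for check in WHITE_BOX + BLACK_BOX:
        if not results.get(check, False):
            return f"returned: {check}"   # Test Fail: Return
    return "production"                   # Test Pass: Promote to Production

all_pass = {c: True for c in WHITE_BOX + BLACK_BOX}
print(run_pipeline(all_pass))  # production
all_pass["dynamic security scan"] = False
print(run_pipeline(all_pass))  # returned: dynamic security scan
```

Running the white-box checks before the black-box ones matches the slide's ordering: cheap static gates reject bad code before the expensive dynamic and chaos tests run.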