This document discusses using Splunk to analyze logs from Akamai Cloud Monitor in near real time. Akamai Cloud Monitor delivers logs within 60 seconds, enabling real-time operational monitoring and analytics. The document outlines how logs are configured and delivered in JSON format to a Splunk receiver, and describes building a Splunk application that provides insight into availability, performance, security, and release monitoring using the enriched Akamai log data.
DataEd Slides: Data Management Best Practices (DATAVERSITY)
It is clear that Data Management best practices exist, and so does a useful process for improving existing Data Management practices. The question arises: since we understand the goal, how does one design a process to achieve it? This program describes what must be done at the programmatic level to achieve better data use, and a way to implement this as part of your data program. The approach combines DMBoK content with CMMI/DMM processes, giving organizations the benefit of both. It also helps organizations understand:
• Their current Data Management practices
• Strengths that should be leveraged
• Remediation opportunities
Performance Troubleshooting Using Apache Spark Metrics (Databricks)
Performance troubleshooting of distributed data processing systems is a complex task. Apache Spark comes to the rescue with a large set of metrics and instrumentation that you can use to understand and improve the performance of your Spark-based applications. You will learn about the available metric-based instrumentation in Apache Spark: executor task metrics and the Dropwizard-based metrics system. The talk covers how the Hadoop and Spark service at CERN uses Apache Spark metrics for troubleshooting performance and measuring production workloads. Notably, it covers how to deploy a performance dashboard for Spark workloads and the use of sparkMeasure, a tool based on the Spark Listener interface. The speaker will discuss the lessons learned so far and what improvements you can expect in this area in Apache Spark 3.0.
As cloud computing continues to gather speed, organizations with years’ worth of data stored on legacy on-premises technologies are facing issues with scale, speed, and complexity. Your customers and business partners are likely eager to get data from you, especially if you can make the process easy and secure.
Challenges with performance are not uncommon and ongoing interventions are required just to “keep the lights on”.
Discover how Snowflake empowers you to meet your analytics needs by unlocking the potential of your data.
Webinar agenda:
~Understand Snowflake and its Architecture
~Quickly load data into Snowflake
~Leverage the latest in Snowflake’s unlimited performance and scale to make the data ready for analytics
~Deliver secure and governed access to all data – no more silos
Running Apache Spark Jobs Using Kubernetes (Databricks)
Apache Spark has introduced a powerful engine for distributed data processing, providing unmatched capabilities to handle petabytes of data across multiple servers. Its capabilities and performance unseated other technologies in the Hadoop world, but while Spark provides a lot of power, it also comes with a high maintenance cost, which is why we now see innovations to simplify the Spark infrastructure.
A recurring pattern we see in centralized data platforms is essentially extracting data from several operational systems, then cleaning or processing the data, and finally figuring out how to derive value from it. The problem is that data is ubiquitous and changes constantly over time, and this kind of centralized architecture is decomposed into technical capabilities and simply does not scale. This talk explains the theory and the experiments carried out at ThoughtWorks around Data Mesh, a paradigm based on modern distributed architecture that considers division into domains, platform thinking to create self-service data infrastructure, and treating data as a product.
OLAP performs multidimensional analysis of business data and provides the capability for complex calculations, trend analysis, and sophisticated data modeling.
This session covers how to work with the PySpark interface to develop Spark applications, from loading and ingesting data to applying transformations. It covers working with different data sources, applying transformations, and Python best practices for developing Spark apps. The demo covers integrating Apache Spark apps, in-memory processing capabilities, working with notebooks, and integrating analytics tools into Spark applications.
MySQL Backup and Security Best Practices (Lenz Grimmer)
Slides of my talk about MySQL Backup and Security at phpDay in Verona, Italy:
http://www.phpday.it/site/phpday-2009/calendario-conferenze/canale-developers/mysql-backup-and-security-best-practices/
A 30 day plan to start ending your data struggle with Snowflake (Snowflake Computing)
Organizations everywhere are struggling to load, integrate, analyze, and collaborate with data. This is largely due to antiquated data platforms, designed in a time when few people had the desire or need to interact with the database. Snowflake, the data warehouse built for the cloud, can help.
Machine-generated data is one of the fastest-growing and most complex areas of big data. It's also one of the most valuable, containing some of the most important insights: where things went wrong, how to optimize the customer experience, the fingerprints of fraud. Join us as we explore the basics of machine data analysis and highlight techniques to help you turn your organization’s machine data into valuable insights—across IT and the business. This introductory workshop includes a hands-on (bring your laptop) demonstration of Splunk’s technology and covers use cases both inside and outside IT. Learn why more than 13,000 customers in over 110 countries use Splunk to make their organizations more efficient, secure, and profitable.
Enabling Airbus Digital Transformation with Splunk
Learn how Airbus are turning their data into doing across their organisation. From real time monitoring to IT Service Management to security operations – Airbus are maximising their use of data to deliver more services and continuous process improvement.
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
This presentation is an attempt to demystify the practice of building reliable data processing pipelines. We go through the pieces needed to build a stable processing platform: data ingestion, processing engines, workflow management, schemas, and pipeline development processes. The presentation also includes component choice considerations and recommendations, as well as best practices and pitfalls to avoid, most learnt through expensive mistakes.
Rakuten’s Journey with Splunk - Evolution of Splunk as a Service (Rakuten Group, Inc.)
This is presentation material from SplunkLive! 2016 Tokyo.
An introduction to Splunk as a Service, the shared Splunk platform deployed within Rakuten.
Along with user case studies, we look back on the evolution of the Splunk service so far and discuss future plans.
We also introduce the web tools, now open-sourced and further improved, and share tips for growing the user base and operating the service.
The latest cloud-based distributed systems have very complicated configurations that span many components. Customer-facing applications are part of the product, and service quality targets directly linked to business indicators are needed. Legacy monitoring based on traditional system management is tied neither to business indicators nor to measuring service quality. Google advocates the idea of site reliability engineering (SRE) and describes efforts to measure quality of service. Based on the concept of SRE, a service quality monitoring system collects and analyzes logs from all components, not only application code but the whole infrastructure. Since very large amounts of data must be processed in real time, the system must be designed carefully with reference to big data architectures. With such a system you can measure service quality and continuously improve it.
Hidden Gems for Oracle EBS Automation in the UiPath Marketplace (AuraPlayer)
Learn how AuraPlayer's patented RPA solution can be seamlessly plugged into UiPath Studio to help you create stable Oracle automations with ease. Find out how AuraPlayer can help ensure ROI from your Oracle RPA strategy. www.auraplayer.com
Digi-Key, the top-rated and most visited Web site in the electronic distribution industry, migrated their catalog to a digital form a few years ago. Although site visitors were impressed with the design of the dynamic catalog, the retailer received complaints about poor performance. Join this session to hear how Digi-Key is working with Akamai and leveraging Aqua Ion to gain insight into user performance and help boost their online performance for their customers, while freeing up technical resources to work on more sophisticated site functionality and high-value projects. The speaker will share best practices on how they implement Akamai solutions, as well as the benefits they realize from using Akamai’s front-end optimization and real user monitoring features. See Chris Schultz's Edge Presentation: http://www.akamai.com/html/custconf/edgetv-commerce.html#beyond-middle-mile
The Akamai Edge Conference is a gathering of the industry revolutionaries who are committed to creating leading edge experiences, realizing the full potential of what is possible in a Faster Forward World. From customer innovation stories, industry panels, technical labs, partner and government forums to Web security and developers' tracks, there’s something for everyone at Edge 2013.
Learn more at http://www.akamai.com/edge
Spark makes it easy to build and deploy complex data processing applications onto shared compute platforms, but tuning them is a skill in itself and can get overlooked. Uncontrolled, this leads to over-specified resource requirements and unnecessary platform load, and increases the chances of resource contention, degrading overall performance. By identifying inefficient jobs, development teams and platform administrators can wrestle back control of system resources, improve efficiency, and lessen the effect of contention across the cluster.
Sparklint uses the Spark metrics stream and a custom event listener to analyze individual Spark jobs for over-specified or unbalanced resources, incorrect partitioning, and suboptimal worker locality. It is easily attached to any Spark job and can also run standalone against historical event logs, presenting data for analysis through a web UI and providing a unique resource-focused view of the application runtime.
AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly (SolarWinds Loggly)
April 2014 update to this presentation: Loggly removed Storm from its architecture. Details here: https://www.loggly.com/blog/what-we-learned-about-scaling-with-apache-storm/
This is a technical architect's case study of how Loggly has employed the latest social-media-scale technologies as the backbone ingestion processing for our multi-tenant, geo-distributed, and real-time log management system. Given by Jim Nisbet and Philip O'Toole, this presentation describes design details of how we built a second-generation system fully leveraging AWS services including Amazon Route 53 DNS with heartbeat and latency-based routing, multi-region VPCs, Elastic Load Balancing, Amazon Relational Database Service, and a number of pro-active and re-active approaches to scaling computational and indexing capacity.
The talk includes lessons learned in our first generation release, validated by thousands of customers; speed bumps and the mistakes we made along the way; various data models and architectures previously considered; and success at scale: speeds, feeds, and an unmeltable log processing engine.
Service-Level Objective for Serverless Applications (alekn)
Deploying commercial applications that meet their expected business needs is challenging due to the differences between how business goals are specified and how the system is evaluated. Furthermore, business goals are dynamic, requiring the deployment to change constantly over time. Such difficulties make it costly to maintain application quality, as the underlying infrastructure is not always fast enough to keep up with business changes. Nowadays, serverless opens a new approach to building applications. By abstracting out the deployment details, a serverless application can be implemented with minimal deployment effort. Serverless also reduces maintenance cost with auto-scaling and pay-as-you-go. Such abilities make us believe that by adopting serverless, we can build applications that meet and quickly adapt to business goals.
However, simply writing applications with serverless is not sufficient. Due to best-effort invocation mechanisms and the lack of application structure awareness, serverless performance is highly variable and often fails to support applications with rigorous quality-of-service requirements. In this study, we aim to mitigate such limitations by coupling serverless deployment with business needs. In particular, we define a Serverless Service-Level Objective (SLO) interface that allows developers to describe their application structure and business goals in terms of software-level objectives. We implement an SLO enforcer, which uses this information in combination with system performance metrics to decide a proper serverless deployment and resource allocation for meeting business goals. The Serverless SLO leverages a blueprint model, which allows developers to describe an application's architecture and runtime characteristics, to map the application description to serverless function deployment on top of Knative. We deploy our proposed system on KinD, a tool to run a Kubernetes cluster in local Docker containers, and evaluate it with different system configurations. Evaluation results show that SLO definition and enforcement help serverless applications use resources in accordance with business goals.
How Netskope Mastered DevOps with Sumo Logic (Sumo Logic)
This webinar discusses how the leader in cloud app analytics and policy enforcement uses Sumo Logic to ensure optimal performance, availability and security of their cloud platform.
Sumo Logic Co-Founder & VP of Engineering, Kumar Saurabh, joins Netskope VP of Engineering, Abhay Kulkarni, to run a live demo and discuss how Netskope:
- Was able to set up the Sumo Logic service within a single day in various data centers across the world
- Rapidly identifies and troubleshoots issues across hundreds of servers and virtual machines
- Leverages real-time alerts to fix issues to deliver a reliable service
- Makes informed business decisions by analyzing core user behaviors
- Uses out-of-the-box applications such as Nginx and Apache
2 Speed IT powered by Microsoft Azure and Minecraft (Sriram Hariharan)
In this session, Mike will show how a model reference architecture in Azure and Minecraft can be used by architects to visualize solutions that you want your teams to build.
Advanced Application Monitoring and Management in Microsoft Azure with KEMP360 (Kemp)
Enterprise-level management and support tools for simplified and seamless management of your application deployment fabric: from managing LoadMaster, F5 BIG-IP, Amazon ELB, HAProxy, and NGINX to a 24/7/365 proactively monitored support team, alert management, issue diagnosis, and escalation.
Integrating IBM Z and IBM i Operational Intelligence Into Splunk, Elastic, and Kafka (Precisely)
Whether your organization is moving more IT operations to the cloud or enhancing its deployments on-premises, it’s critical to understand the impact that excluding essential data points from legacy systems has on your bottom line. That said, optimizing your cloud deployment isn’t just about cost reduction; it can also positively enhance how you deliver services to your business.
Whether you are just getting started on your cloud journey or are looking to make more data available for your IT operations, Precisely Ironstream can help. In this on-demand webinar, learn how Precisely Ironstream helps customers make IBM Z platform and IBM i operational intelligence available in top cloud IT operational platforms like Splunk, Elastic, and Kafka.
Join this webinar to learn:
- Best practices for easily integrating IBM Z platform (mainframe) and IBM i operational metrics into Splunk, Elastic, and Kafka
- The operational and financial ROI of using Ironstream to integrate legacy systems into modern IT platforms
- How Ironstream customers have benefited from bringing in IBM Z platform (mainframe) and IBM i machine/log data into Splunk, Elastic, and Kafka
Mobile User Experience: Auto Drive through Performance Metrics (Andreas Grabner)
Believe it or not - 85% of mobile apps are removed after first use! In this presentation - given at the APM Meetup in Singapore in April 2015 - I talked about the challenges, best practices, and especially the metrics to avoid this situation.
Key Points of the Presentation
The two key trends, the "Internet of Things" and "DevOps", play a big role in our lives when we talk about user experience, and especially mobile user experience. In this presentation I tell you which metrics to use to make sure you deliver your ideas faster to your mobile end users, while also ensuring the right quality and user experience so that your users stay loyal and don't delete the mobile app after first use.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen? (Splunk)
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?"
Speaker: Stefan Stein,
CERT Team Lead, gematik GmbH; M.Eng. IT Security & Forensics,
doctoral student at TH Brandenburg & Universität Dresden
.conf Go 2023 presentation:
De NOC a CSIRT (From NOC to CSIRT)
Speakers:
Daniel Reina - Country Head of Security, Cellnex (Spain) & Global SOC Manager, Cellnex
Samuel Noval - Global CSIRT Team Leader, Cellnex
Splunk - BMW connects business and IT with data driven operations SRE and O11y (Splunk)
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
Data foundations building success, at city scale – Imperial College London (Splunk)
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environment (Splunk)
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out the SOC and significantly increasing its maturity level. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
Top challenges faced in improving security posture
Key KPIs implemented in order to measure success
Strategies and approaches applied in the SOC
How MITRE ATT&CK and Splunk Enterprise Security were utilised
Next steps in their maturity journey ahead
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We finished with a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GraphRAG is All You need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for document processing (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
7. CM Data Delivery
• Select data sets to include from default & optional data sets
– Default: CP, Format, Message, Type, Version
– Optional: Akadebug, network, netPerf, Geo, WAF, PPCustomData
• Configure the aggregation options:
– Time: 60 seconds max
– Line count: max 3000 records
– Message size: max 900 KB of data
• Configure delivery endpoint, distribution & failover options:
– Primary receiver gets 100%
– Secondary receiver gets 100% if the primary receiver is unavailable
– NetStorage gets 100% if primary & secondary are unavailable (with scheduled FTP download hourly)
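Because each aggregated message is capped at 60 seconds, 3000 records, or 900 KB (whichever is hit first), a receiver typically splits one POST body into many individual log records. Below is a minimal, hypothetical Python sketch of that step; the newline-delimited layout and any field names are assumptions based on the data sets listed above, not a confirmed wire format:

    import json

    def parse_cloud_monitor_batch(body: bytes):
        """Yield one dict per log record in an aggregated CM message (layout assumed)."""
        for line in body.splitlines():
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                # A truncated trailing record could appear if the 900 KB
                # cap cuts a batch short; skip it rather than fail.
                continue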
13. Hurdles
• Performance issues
– Searches were slow
– Dashboards were taking longer to load
– Greater than 25K events/sec
• Summary index
– Loses the rich information in the data
• Report acceleration
– Acceleration is suspended as the summary reaches 10% of its total size
– No control over the summary schedule
Data models & acceleration came to the rescue
19. Final Analysis
• The initial business needs for real-time logging have been met:
– Real-time monitoring enables us to know the current state of the site
– Dashboards allow us to track the effect of site changes
– JSON data formatting makes it easier to do analytics on the customer experience
• The additional benefits add significant value that is not possible to get any other way:
– Dashboards for NOC and support teams to see the current state of the site
– Header data used for troubleshooting site and code issues
– Header information providing customer experience analytics data
– Performance data on a regional and network level
• Download the Splunk App for Akamai CloudMonitor
– https://splunkbase.splunk.com/app/2923/
We get more than a billion hits per day through Akamai.
Akamai is business critical, and monitoring Akamai is critical:
- state of the site
- impact of changes
- end user experience
Akamai's standard log collection is through NetStorage, and it takes hours before logs can be analyzed.
That delay in monitoring is not acceptable.
CM results
Akamai has some built-in monitoring, but it does not provide granular details.
Real-time availability of data enables us to monitor and analyze the site
and meet our operational goals - to monitor in real time.
The Cloud Monitor structure also provides improved usability:
the JSON structure makes the logs more usable,
and it is more intuitive.
It enables not only richer analytics for the business but also richer data for troubleshooting, like connection details.
CM also lets us add custom fields from the HTTP header or body, which provide deeper analytics.
1. How to set up Akamai
2. How we set up the receiver
3. Splunk setup
CM allows you to customize the data you want to receive.
Default data sets:
CP (unique identifier), Format, Message, Type, Version
Optional data sets:
Akadebug, network (time for each segment), netPerf, Geo, WAF, PPCustomData
You cannot opt out of individual data elements within an optional data set,
which results in duplicates of some data, or unwanted data.
Custom data field:
selected HTTP header data is included;
other header info is excluded - like large cookies.
The aggregation criteria are set by you;
as soon as one of the criteria is fulfilled, the message gets posted to the receiver.
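As a hedged illustration of that flush-on-first-threshold behavior (using the limits from the CM Data Delivery slide: 60 seconds, 3000 records, 900 KB), here is a minimal Python sketch; it shows the logic only and is not Akamai's implementation:

    import time

    # Illustrative thresholds taken from the CM Data Delivery slide.
    MAX_AGE_SECONDS = 60
    MAX_LINES = 3000
    MAX_BYTES = 900 * 1024

    class Aggregator:
        """Buffers log lines and flushes when any single limit is hit."""

        def __init__(self, post_batch):
            self.post_batch = post_batch  # callable that POSTs one batch to the receiver
            self.lines, self.size, self.started = [], 0, time.monotonic()

        def add(self, line: str):
            self.lines.append(line)
            self.size += len(line)
            if (len(self.lines) >= MAX_LINES
                    or self.size >= MAX_BYTES
                    or time.monotonic() - self.started >= MAX_AGE_SECONDS):
                self.flush()

        def flush(self):
            if self.lines:
                self.post_batch("\n".join(self.lines))
            self.lines, self.size, self.started = [], 0, time.monotonic()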
1. CM posts the messages to a VIP
2. SSL termination happens at the firewall (?)
3. The receiver writes to files, and a forwarder monitors the logs and forwards them to the indexers
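A minimal sketch of step 3, assuming the receiver simply appends each POST body to a local file that a Splunk Universal Forwarder monitors; the port, endpoint, and spool path are hypothetical:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    LOG_PATH = "/var/log/cloudmonitor/events.log"  # hypothetical spool file

    class CMReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Append the aggregated Cloud Monitor message to the file
            # that the Universal Forwarder is monitoring.
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            with open(LOG_PATH, "ab") as spool:
                spool.write(body)
                if not body.endswith(b"\n"):
                    spool.write(b"\n")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), CMReceiver).serve_forever()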
Scale to all properties.
We plan to add redundancy and load balancing
by building multiple receivers in multiple DCs;
that adds more capacity to the receivers and also adds redundancy.
We plan to divide the traffic 50% to each DC.
The HTTP Event Collector in newer versions:
an easier configuration UI for setting up the event collectors,
token-based authentication for posting the messages,
and it supports both HTTP & HTTPS.
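For reference, posting one parsed record to the HTTP Event Collector looks roughly like this; the host, token, and sourcetype name are placeholders, not values from the deck:

    import json
    import requests

    HEC_URL = "https://splunk.example.com:8088/services/collector/event"
    HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

    def send_to_hec(record: dict) -> None:
        """POST a single parsed Cloud Monitor record to Splunk HEC."""
        resp = requests.post(
            HEC_URL,
            headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            data=json.dumps({"event": record, "sourcetype": "akamai:cm"}),
            timeout=10,
        )
        resp.raise_for_status()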
Taking full advantage of the data:
we started building basic dashboards.
The data is rich and very useful,
but the dashboards were too slow for monitoring and troubleshooting
to give acceptable performance
at 25K events/sec with rich JSON objects.
Our first thought was summary indexing;
post-processing also didn't help,
and scheduled searches were not an option:
with summaries you lose the rich content.
Report acceleration has limitations too - it is suspended based on bucket or index size.
The real issue is that we need to see the information in near real time.
The availability dashboard monitors the availability of the site - a real-time
view of successes and failures:
monitor the traffic and failures by geo,
top countries with failures and successes,
failed URLs and statuses.
We started seeing issues that were not known before.
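As an illustration of the availability view, a hedged sketch that tallies failures by country from parsed records; the "geo" and "message.status" field names are assumptions carried over from the parsing sketch above:

    from collections import Counter

    def failure_rate_by_country(records):
        """Compute the share of HTTP 5xx responses per country (field names assumed)."""
        total, failed = Counter(), Counter()
        for rec in records:
            country = rec.get("geo", {}).get("country", "unknown")
            status = int(rec.get("message", {}).get("status", 0))
            total[country] += 1
            if status >= 500:
                failed[country] += 1
        return {c: failed[c] / total[c] for c in total}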
Performance
view the total round-trip time,
origin latency,
last-mile RTT,
mid-mile latency,
latency by geo.
Origin
traffic routing (ratio),
errors and successes by origin,
origin latency,
origin timeouts.
Edge
Akamai cache:
issues with the origin - mid-mile latency;
issues with the edge - first byte not served;
edge servers with high error rates. An interesting point we found: a few edge servers in the same geo have different latencies; it might be due to the ISP or an issue with the edge server itself.
Malicious vs. genuine:
monitor the WAF rules,
top deny rules,
top warn rules,
warns and denies by geo,
top URLs with denies and warns,
top denied IPs.
It is a very good tool for monitoring releases:
look at the cache content served by property,
success % and failure %,
a lookup to resolve origins to DCs (see the sketch after these notes),
origin performance,
availability and performance by geo,
and which DC is taking traffic.
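A minimal sketch of that origin-to-DC lookup, assuming a hand-maintained CSV mapping; in Splunk this would typically be a lookup table, and the file name, columns, and the "fwdHost" field are all hypothetical:

    import csv

    def load_origin_dc_map(path="origin_to_dc.csv"):
        """Read a hypothetical CSV with columns: origin_host,datacenter."""
        with open(path, newline="") as f:
            return {row["origin_host"]: row["datacenter"]
                    for row in csv.DictReader(f)}

    def add_datacenter(record: dict, dc_map: dict) -> dict:
        """Enrich a parsed record with the DC behind its origin host."""
        origin = record.get("message", {}).get("fwdHost", "")  # assumed field name
        record["datacenter"] = dc_map.get(origin, "unknown")
        return record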