This talk covers the challenges we tackled while building our new service-oriented system: what we realized would be bad ideas, what the better approaches to data consistency are, how we used the Apache Avro technology, and what other supporting infrastructure we created to help us achieve the goal of a consistent yet flexible system.
Amihay Zer-Kavod is a Senior Software Architect at LivePerson.
If you are building a service-oriented system and you want to build it for scale as well as flexibility, there are a few questions you need to make sure are asked and answered regarding the data interchange between services and the offline persistence of service data. Questions such as:
- How can I change a service API without breaking other services?
- How do I keep data from services consistent over time?
This talk covers the challenges we tackled while building our new service-oriented system, summarizing what we realized would be bad ideas and what the better approaches to data consistency are.
It includes a dive into the Apache Avro technology and how we used it.
It also covers the other supporting infrastructure we created to help us achieve the goal of a consistent yet flexible system.
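One way to answer the first question is Avro-style schema resolution: a reader can add a field with a default value and still consume records written with the old schema. Below is a minimal Python sketch of that resolution rule, not LivePerson's actual code; the `User` schema and field names are invented for illustration:

```python
# Writer's (old) schema and reader's (new) schema, Avro-style.
# The reader added "email" with a default, so old records still resolve.
writer_schema = {"name": "User", "fields": [
    {"name": "id", "type": "long"},
    {"name": "name", "type": "string"},
]}
reader_schema = {"name": "User", "fields": [
    {"name": "id", "type": "long"},
    {"name": "name", "type": "string"},
    {"name": "email", "type": "string", "default": ""},
]}

def resolve(record, writer, reader):
    """Simplified version of Avro's schema-resolution rule:
    fields absent from the writer's data take the reader's default."""
    writer_names = {f["name"] for f in writer["fields"]}
    out = {}
    for f in reader["fields"]:
        if f["name"] in writer_names:
            out[f["name"]] = record[f["name"]]
        elif "default" in f:
            out[f["name"]] = f["default"]
        else:
            raise ValueError("no value or default for " + f["name"])
    return out

old_record = {"id": 1, "name": "dana"}
print(resolve(old_record, writer_schema, reader_schema))
# {'id': 1, 'name': 'dana', 'email': ''}
```

The key design point is that compatibility lives in the schemas, not in the services: as long as every new field carries a default, old producers never break new consumers.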
Thrift vs Protocol Buffers vs Avro - Biased Comparison
Igor Anishchenko
Odessa Java TechTalks
Lohika - May, 2012
Let's take a step back and compare data serialization formats, of which there are plenty. What are the key differences between Apache Thrift, Google Protocol Buffers, and Apache Avro? Which is "the best"? The truth of the matter is that they are all very good and each has its own strong points. Hence, the answer is as much a personal choice as it is a matter of understanding the historical context of each and correctly identifying your own individual requirements.
JSON has by now become a regular part of most applications and services. Do we, however, really want to transfer human-readable text, or are we looking for a binary protocol that is as debuggable as JSON? CBOR, the Concise Binary Object Representation, offers the best of JSON plus an extremely efficient binary representation.
http://www.cbor.io
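To make the size argument concrete, here is a small Python sketch of CBOR's unsigned-integer encoding (major type 0, as specified in RFC 8949) compared against JSON text. It covers only unsigned integers, not the full format:

```python
import json
import struct

def cbor_encode_uint(n):
    """Encode an unsigned integer as CBOR major type 0 (RFC 8949)."""
    if n < 24:
        return bytes([n])                      # value fits in the initial byte
    elif n < 2**8:
        return b"\x18" + struct.pack(">B", n)  # additional info 24: 1-byte uint
    elif n < 2**16:
        return b"\x19" + struct.pack(">H", n)  # additional info 25: 2-byte uint
    elif n < 2**32:
        return b"\x1a" + struct.pack(">I", n)  # additional info 26: 4-byte uint
    else:
        return b"\x1b" + struct.pack(">Q", n)  # additional info 27: 8-byte uint

n = 100000
print(len(json.dumps(n)))        # 6 bytes as JSON text ("100000")
print(len(cbor_encode_uint(n)))  # 5 bytes as CBOR (initial byte + uint32)
```

Small values pay even less: integers 0-23 are a single CBOR byte, while remaining fully self-describing, which is the "debuggable like JSON" property the slides allude to.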
RESTLess Design with Apache Thrift: Experiences from Apache Airavata
smarru
Apache Airavata is software for providing services to manage scientific applications on a wide range of remote computing resources. Airavata can be used by both individual scientists to run scientific workflows as well as communities of scientists through Web browser interfaces. It is a challenge to bring all of Airavata’s capabilities together in the single API layer that is our prerequisite for a 1.0 release. To support our diverse use cases, we have developed a rich data model and messaging format that we need to expose to client developers using many programming languages. We do not believe this is a good match for REST style services. In this presentation, we present our use and evaluation of Apache Thrift as an interface and data model definition tool, its use internally in Airavata, and its use to deliver and distribute client development kits.
Do you want to upgrade your GWT application or write a sizable web application? Dart is the efficient choice.
As a brief example, check out http://lightningdart.com
This presentation is updated October 2015 for Silicon Valley Code Camp
This is the fourteenth (and last for now) set of slides from a Perl programming course that I held some years ago.
I want to share it with everyone looking for intransitive Perl knowledge.
A table of contents for all presentations can be found at i-can.eu.
The source code for the examples and the presentations in ODP format are on https://github.com/kberov/PerlProgrammingCourse
If you want to stay up to date, subscribe to our newsletter here: https://bit.ly/3tiw1I8
An introduction to Apache Flume that comes from Hadoop Administrator Training delivered by GetInData.
Apache Flume is a distributed, reliable, and available service for collecting, aggregating, and moving large amounts of log data. By reading these slides, you will learn about Apache Flume, its motivation, the most important features, architecture of Flume, its reliability guarantees, Agent's configuration, integration with the Apache Hadoop Ecosystem and more.
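As a rough illustration of what an Agent's configuration looks like, here is a minimal sketch in the style of the Flume user guide; the agent name `a1` and component names `r1`/`c1`/`k1` are placeholders. A netcat source feeds a logger sink through a memory channel:

```properties
# One agent (a1) with a netcat source, a memory channel, and a logger sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

a1.sinks.k1.type = logger

# Wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

The channel is what gives Flume its reliability guarantee: a source's events sit in the channel until the sink has delivered them.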
Galerie des Beaux Arts.
A unique gallery where, in stylish surroundings, you can enjoy a special collection of old and contemporary art objects: modern paintings and sculptures, antique furniture and clocks, precious jewellery, and refined trinkets in silver and gold …
We warmly invite you for a stroll through our treasure rooms, which bring together some four centuries of fine arts & antiques. Let yourself be amazed, charmed, and inspired. Perhaps even tempted … you are welcome at any time at Galerie des Beaux Arts in Heeze.
SplunkLive! Frankfurt 2018 - Data Onboarding Overview
Splunk
Presented at SplunkLive! Frankfurt 2018:
Splunk Data Collection Architecture
Apps and Technology Add-ons
Demos / Examples
Best Practices
Resources and Q&A
Tech Talk: ONOS - A Distributed SDN Network Operating System
nvirters
This event takes us to the cusp of Distributed Software Development and SDN Controllers. We will be hosting Madan and Brian who have been involved in the architecture and development of ONOS (Open Network Operating System).
Synopsis
ONOS is a distributed SDN network operating system architected to provide performance, scale-out, resiliency, and well-defined northbound and southbound abstractions. Madan and Brian, both from ON.Lab, will start the talk with a deep dive into the ONOS architecture, including the key technical challenges that were solved to build this platform. They will also walk us through a live demo of building an SDN application on ONOS.
Details:
ONOS Architecture
ONOS Abstractions and Modularity
ONOS Distributed architecture
ONOS APIs and their usage
Live demo - Building an SDN app on ONOS
Speaker Bios
Madan Jampani, Distributed Systems Architect, ONOS
Madan is Distributed Systems Architect at ON.Lab focusing on the core distributed systems problems for ONOS. Prior to joining ON.Lab in Sep 2014, Madan worked at Amazon for around 10 years. At Amazon, Madan was instrumental in building several key technologies ranging from Amazon retail ordering systems, distributed data stores and shared compute clusters for running large-scale data processing and machine learning workloads.
Brian O’Connor, Lead Developer, ONOS
Brian is the ONOS Application Intent Framework lead and a core developer at ON.Lab, working on ONOS and Mininet. Brian O’Connor received Bachelor’s and Master’s degrees in Computer Science from Stanford University. At Stanford, he helped develop “An Introduction to Computer Networking,” one of Stanford’s first MOOCs (Massively Open Online Courses).
ABOUT ON.LAB and ONOS
Open Networking Lab (ON.Lab) is a non-profit organization founded by SDN inventors and leaders from Stanford University and UC Berkeley to foster an open source community for developing tools and platforms to realize the full potential of SDN. ON.Lab brings innovative ideas from leading edge research and delivers high quality open source platforms on which members of its ecosystem and the industry can build real products and solutions.
ONOS, an SDN network operating system for service provider and mission-critical networks, was open sourced on Dec 5th, 2014. ONOS delivers a highly available, scalable SDN control plane featuring northbound and southbound abstractions and interfaces for a diversity of management, control, and service applications and network devices. The ONOS ecosystem comprises ON.Lab; organizations funding and contributing to the ONOS initiative, including AT&T, NTT Communications, SK Telecom, Ciena, Cisco, Ericsson, Fujitsu, Huawei, Intel, and NEC; members collaborating and contributing to ONOS, including ONF, Infoblox, SRI, Internet2, Happiest Minds, CNIT, Black Duck, and Create-Net; and the broader ONOS community. Learn how you can get involved with ONOS at onosproject.org.
.conf2011: Web Analytics Throwdown: with NPR and Intuit
Erin Sweeney
Splunk for Web Intelligence? Why not! This session provides a plain-English tour that shows how NPR uses Splunk to track audio and video traffic across our web sites and digital apps (Android, iPhone, etc.). Then Intuit will share how they capture greater insight into visitor sessions and answer questions traditional web analytics tools can't provide. Intuit will cover a variety of use cases, taking each from the raw data through Splunk "under the hood" to the strategic questions and answers.
Choreo Community Call 1: How to Create a Service in Choreo!
Agenda:
1. Introduction to Choreo
2. Introduction to Service Components
- What Service components are and their use cases
- Capabilities
3. Demo
- Deploy a Todo list web application
Watch the video at: https://youtube.com/live/FX06RgpNUB4
21 people attended the July 2014 program meeting hosted by BDPA Cincinnati chapter. The topic was 'Open Source Tools and Resources'. The guest speaker was Greg Greenlee (Blacks In Technology).
'Open source' refers to a computer program in which the source code is available to the general public for use or modification from its original design. Open source code is typically created as a collaborative effort in which programmers improve upon the code and share the changes within the community. Open source sprouted in the technological community as a response to proprietary software owned by corporations. Over 85% of enterprises are using open source software. Managers are quickly realizing the benefit that community-based development can have on their businesses. This month, we put on our geek hats and detective gloves to learn how we can monitor our computers’ environments using open source tools. This meetup covered some of the most popular ‘Free and Open Source Software’ (FOSS) tools used to monitor various aspects of your computer environment.
Creating a Context-Aware solution, Complex Event Processing with FIWARE PerseoFernando Lopez Aguilar
Introduction to Complex Event Processing (CEP). How FIWARE deals with CEP through FIWARE Perseo. How to connect FIWARE Perseo with FIWARE Orion Context Broker. How can we define an event with Event Processing Language (EPL) and what are the predefined actions to include in FIWARE Perseo.
Presentation by Maria Isabel Gandia, Head of Communications at CSUC, given during the 9th SIG-NOC meeting on April 8, 2019 in Ljubljana, Slovenia.
The presentation describes the study being carried out within GÉANT to learn about the tools used for flow monitoring and the needs of the NRENs with a view to a possible development.
LOGGING - About Needles in the Modern Haystack
https://www.macsysadmin.se/program.html
Every once in a while you'll read "collect the log files". How will this work with your cloud service, identity provider, and SaaS solution? What are the challenges, and what are the options at hand, when monitoring macOS effectively for compliance?
In this session we talk about practices in storing and retrieving event information for monitoring, and review applications to build and process rich audit trails. This session aims to share our experiences made with commercial and open source backends applied to various client scenarios.
Similar to Apache Avro and Messaging at Scale in LivePerson
Kubernetes your tests! Automation with Docker on Google Cloud Platform
LivePerson
Arik Lerner, Automation Team Leader, and Waseem Hamshawi, Automation Infra Developer, present how to build a large scale automated testing platform by leveraging containers orchestration over GCP, with the ability to scale out and provide fast feedback while maintaining a highly reliable test infrastructure.
The presentation includes a new approach to managing a scalable testing platform of distributed automated tests with Kubernetes and Docker on Google Cloud Platform.
Topics:
• GCP and Kubernetes introduction for automated testing
• Traditional Selenium Grid vs Selenium Standalone with Kubernetes and Docker for Web and Mobile tests
• Distributed and containerized testing environment over container cluster - different use cases
Ephemerals - "Short-lived Testing Endpoints": an open-source project by LivePerson that makes automation testing at large scale feel like a walk in the park.
In this Meetup, Yaar Reuveni – Team Leader, and Nir Hedvat – Software Engineer, from the Liveperson Data Platform R&D team, will talk about the journey we made from the early days of the data platform in production, with high friction and low awareness of issues, to a mature, measurable data platform that is visible and trustworthy.
In this Meetup, Arik Lerner – Liveperson team lead of Java Automation, Performance & Resilience – will talk about how we measure our services through End2End testing, which has become one of the most critical monitoring tools at LivePerson.
Over 200K test runs per day provide statistics and insights into problems as they happen.
Arik will go through different topics and stages of the journey and share details that led to the current results.
Part of the menu: The Awakening of the End2End Insights
• How we measure our services using synthetic user experience
• Measuring through analytics & insights
• How we collect our data
• How we debug our services? Hint: video recording, HAR (HTTP Archive), Kibana, dashboard analytics & insights
• Future logs App correlation with End2End data
• Our tools: Selenium, Jenkins and cutting-edge technologies such as Kafka & ELK (Elasticsearch, Logstash and Kibana)
In this Meetup, Arik will host Ali AbuAli – NOC Team Leader – who will talk about E2E usage in his day-to-day work.
video: https://www.youtube.com/watch?v=IBC9gcYqNR4
In this talk Efim Dimenstein, Chief Architect at Liveperson will cover the rules and guidelines of building resilient systems, implementing them in real life and lessons learned during the process. The talk will focus on achieving resilience in real life and will feature a lot of examples and lessons learned from building systems currently in production running at extreme scale.
Efim will talk about:
· General resilience guidelines
· How they are implemented in practice
· What changes needed to be implemented to achieve
resilience
· Lessons learned
· Summary
My name is Victor Perepelitsky. I'm an R&D Technical Leader at LivePerson, leading the 'Real Time Event Processing Platform' team.
In this Meetup I talked about the journey of creating the platform from scratch - challenges, design decisions, technology choices and more.
During the last 3 years the team has built a Real Time Event Processing Platform which is currently running in production with thousands of new and migrated customers. It is built to handle hundreds of thousands of requests per second with low-latency response times (under 30 ms round trip).
I went through different topics and stages of this journey and shared details that led to specific choices and results.
“Stateful or Stateless”, “CEP”, “Rules engine”, “Automated performance testing”, “Locking”, “Timing” were a part of the menu.
In this meetup, Kobi Salant – Data Platform Technical Lead, and Vladi Feigin – Data System Architect, both from Liveperson, will talk about making scale a non-issue for real-time data apps.
Have you ever tried to build a system that processes hundreds of thousands of events per second in real time and services more than 1M concurrent visitors?
We're going to talk about the LivePerson real-time stream processing solution doing exactly that. Learn how we empower digital call centers with insights for their critical decision making processes and never-ending efficiency goals.
In this talk Sergei Koren, Production Architect at LivePerson will present HTTP/2, the official successor of HTTP 1.1, and how it would influence Web as we know it.
Sergei will talk about:
- HTTP/2 history
- The major changes - the dos and don'ts
- Expected changes to the Web as we use it today
- Proposed checklist for implementation: how and when, from a production point of view
Mobile app real-time content modifications using websockets
LivePerson
We are happy to host Benny Weingarten-Gabbay, Senior Software Engineer at eBay at our offices.
Benny presents BetterContent, a tool that allows editing an iOS mobile app at runtime, in a fun and easy way.
Read more on our DevBlog:
https://connect.liveperson.com/community/developers/blog/2015/03/26/mobile-app-real-time-content-modifications-using-websockets
Mobile SDK: Considerations & Best Practices
LivePerson
Mobile SDKs are a great way to make your service or API easily consumable by the large number of developers out there looking for state-of-the-art tools to make their apps stand out in competitive marketplaces, but building a stable, compatible and successful SDK is quite a challenge.
In this talk we discuss the technical and design challenges involved in developing an efficient mobile SDK that is highly compatible with its host mobile app, the various considerations we took into account, and the lessons we've learned while designing and building LivePerson's native mobile SDK.
In this Meetup, Victor Perepelitsky – R&D Technical Leader at LivePerson, leading the 'Real Time Event Processing Platform' team – will talk about Java 8: the Stream API, lambdas, and method references.
Victor will clarify what functional programming is and how you can use Java 8 to create better software.
Victor will also cover some pain points that Java 8 did not solve regarding functionality, and show how you can work around them.
In this lecture, Sergei Koren, System Architect on the LivePerson production team, presents data & image compression and its effective use in modern web and data flows.
Support Office Hour Webinar - LivePerson API
LivePerson
Course description and agenda
LivePerson enables the creation of innovative applications designed to enhance and extend the functionality of your LivePerson solution, as well as cooperate with partners worldwide.
In this session we will demonstrate the LivePerson API offerings and the development process, and give a quick overview of the CHAT API and its basic usage. You will also have an opportunity to ask questions relevant to your business.
Host: Nitay Bartal
Date: July 17, 2014
Time: 11:00 AM - 12:00 PM EST
Duration: 60 minutes
Agenda:
- Leveraging LivePerson APIs to your benefit
- Overview of LivePerson API offerings
- Introduction to LivePerson Developers Network
- Overview of the Development process
- Tools and best practices
- Helpful tips and tricks
- Q&A
SIP - More than meets the eye
Speakers:
Ofer Cohen - VOIP Group Leader, LivePerson
Yossi Maimon - VOIP Technical Leader, LivePerson
An Introduction to the SIP protocol.
SIP's position in telecommunication networks and content services.
What is SIP:
The Session Initiation Protocol (SIP) is a signaling communications protocol, widely used for controlling multimedia communication sessions such as voice and video calls over Internet Protocol (IP) networks.
The protocol defines the messages that are sent between peers which govern establishment, termination and other essential elements of a call. SIP can be used for creating, modifying and terminating sessions consisting of one or several media streams. SIP can be used for two-party (unicast) or multiparty (multicast) sessions. Other SIP applications include video conferencing, streaming multimedia distribution, instant messaging, presence information, file transfer, fax over IP and online games.
(Source: Wikipedia)
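To make the signaling concrete, a minimal INVITE request looks roughly like this; the addresses, tag, branch, and Call-ID values are made up for illustration, and the SDP body is omitted (hence Content-Length: 0):

```text
INVITE sip:bob@example.com SIP/2.0
Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bK776asdhds
Max-Forwards: 70
From: Alice <sip:alice@example.com>;tag=1928301774
To: Bob <sip:bob@example.com>
Call-ID: a84b4c76e66710@client.example.com
CSeq: 314159 INVITE
Contact: <sip:alice@client.example.com>
Content-Length: 0
```

The text format is deliberately HTTP-like: a request line, headers that route and identify the dialog (Via, From/To tags, Call-ID, CSeq), and an optional body that usually carries the media description.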
My name is Neta Barkay, and I'm a data scientist at LivePerson.
I'd like to share with you a talk I presented at the Underscore Scala community on "Efficient MapReduce using Scalding".
In this talk I reviewed why Scalding fits big data analysis and how it enables writing quick, intuitive code with the full functionality vanilla MapReduce has, without compromising efficient execution on the Hadoop cluster. In addition, I presented some examples of Scalding jobs which can be used to get you started, and talked about how you can use Scalding's ecosystem, which includes Cascading and the monoids from the Algebird library.
Read more & Video: https://connect.liveperson.com/community/developers/blog/2014/02/25/scalding-reaching-efficient-mapreduce
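The monoid idea behind Algebird - an associative combine operation with an identity element, which lets partial aggregates be merged in any order across mappers and reducers - can be sketched in plain Python. This is a sketch of the concept only, not Scalding or Algebird itself:

```python
from functools import reduce

# A monoid = an associative combine operation plus an identity element.
# Word count: map each word to {word: 1}; combining dicts sums the counts.
def combine(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return out

identity = {}  # combining with {} changes nothing

words = "to be or not to be".split()
mapped = [{w: 1} for w in words]
counts = reduce(combine, mapped, identity)
print(counts)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

Because `combine` is associative, the runtime is free to pre-aggregate on each mapper and merge the partial results on reducers, which is exactly what makes these jobs efficient on a Hadoop cluster.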
Building Enterprise Level End-To-End Monitor System with Open Source Solution...
LivePerson
Recently, LivePerson's production moved from traditional monitoring to a new enterprise monitoring system using only open source tools.
Oren Katz (Production Monitoring Team Leader) and Ittiel Savir (Automation Team Leader) will describe the road from concept to implementation at LivePerson.
In the lecture we will talk about the chosen tools, the development process, tips, and how to avoid pitfalls.
Check out Oren's recent blog post on the Subject: http://bit.ly/16i5lDS
Ofer Ron, senior data scientist at LivePerson.
Recently, I've had the pleasure of presenting an introduction to data science and data-driven products at DevconTLV.
I focused this talk on the basic ideas of data science, not the technology used, since I think that far too often companies and developers rush to play around with "big data" technologies instead of figuring out what questions they want to answer, and whether those answers form a successful product.
From a Kafkaesque Story to The Promised Land at LivePerson
LivePerson
Ran Silberman, developer & technical leader at LivePerson, presents how LivePerson moved their data platform from a legacy ETL concept to the new "Data Integration" concept of our era.
Kafka is the main infrastructure that forms the backbone of data flow in the new Data Integration. That said, Kafka cannot stand by itself; other supporting systems like Hadoop, Storm, and the Avro protocol were also integrated.
In this lecture Ran will describe the implementation at LivePerson and will share some tips on how to avoid pitfalls.
Read More: https://connect.liveperson.com/community/developers/blog/2013/11/21/from-a-kafkaesque-story-to-the-promised-land
LivePerson Developers is proud to host a meetup about A/B testing by Shlomo Lahav, Chief Scientist at LivePerson.
The lecture will focus on testing and the ability to draw conclusions, especially on the web.
- What is an A/B test?
- How to construct an A/B test properly?
- What are the metrics that can be used?
- Can the results be misleading?
- Errors: bias and statistical errors
- First- and second-type errors
- Measuring lift, and why lift is a biased measure
- Is it possible to change the test settings during the test?
- How to run multivariate testing effectively?
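As a small illustration of the lift metric discussed in the lecture (a sketch with made-up numbers, not Shlomo's actual method):

```python
def lift(control_conversions, control_visitors,
         variant_conversions, variant_visitors):
    """Relative lift of the variant's conversion rate over the control's."""
    p_control = control_conversions / control_visitors
    p_variant = variant_conversions / variant_visitors
    return (p_variant - p_control) / p_control

# Control: 100 of 1000 visitors converted; variant: 120 of 1000 converted.
print(lift(100, 1000, 120, 1000))  # 0.2, i.e. a 20% relative lift
```

Note that lift divides by an *estimated* control rate, making it a ratio estimator; by Jensen's inequality E[1/p̂] > 1/E[p̂] for a noisy positive estimate, so lift tends to be biased upward in small samples, which is one reason it is called a biased measure.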
UiPath Test Automation using UiPath Test Suite series, part 3
DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
"Impact of front-end architecture on development cost", Viktor Turskyi
Fwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen developers implement features on the front-end just by following the standard rules of a framework, thinking that this is enough to successfully launch the project, and then the project fails. How can this be prevented, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Tobias Schneck
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure-operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT Security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
1. Apache Avro in LivePerson
Collecting and saving data is easy
keeping it consistent is tough
DevCon TLV, June 2014
Amihay Zer-Kavod, Software Architect
2. Who am I?
Amihay Zer-Kavod
Software Architect
Been in software since 1989
4. Communication & Meaning
● Consistent but decoupled communication between services, such as:
o Monitoring, Interaction
o Predictive, Sentiment
o Reporting & Analysis
o History
● Consistent meaning over time
o BigData Store (Hadoop)
o Reporting
(the word "event" in several languages: event, evento, 事件, घटना, حدث, ארוע, событие)
5. What can’t we use?
Don’t use Direct APIs!
They are the wrong tool for this problem, since:
• They produce too much coupling between services
• APIs are synchronous by nature
• They add irrelevant complexity to the called service
6. So what is needed?
The Message is the API!
● A unified event model (schema) for all reported events
● Management tools for the unified schema
● Tools for sending events over the wire
● Tools for reading/writing events in the big data store
● Backward and forward compatibility
7. The Event model
From generic to specific structure, with:
• Common header - data common to all events
• Logical Entities - a common header for all logical entities (such as Visitor)
• Dynamic specific headers
• Specific event body
8. Apache Avro to the rescue
● Avro - a schema-based serialization/deserialization framework
● Avro IDL - a schema definition language
● Avro files - Hadoop integration
● Avro schema resolution
● Apache Avro was created by Doug Cutting
10. Avro IDL - LivePerson Event
/** Base for all LivePerson Events
*/
@namespace("com.liveperson.global")
record LPEvent {
/** Common Header of the event */
CommonHeader header = null;
/** Logical entity details participating in this event - Visitor, Agent, etc... */
array<Participant> participants = null;
/** Holding specific platform info as node name (machine) cluster Id etc... */
PlatformHeader platformSpecificHeader = null;
/** Auditing Header, Optional - adds data for auditing of the events flow in the platform*/
union {null, AuditingHeader } auditingHeader = null;
/** The event body */
EventBody eventBody = null;
}
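As an illustration of how a producing service fills in this envelope, here is a minimal Python sketch. Plain dicts stand in for Avro's generated classes; the field names mirror the LPEvent record above, while the helper name and the example values are invented for this sketch:

```python
# Illustrative only: plain dicts stand in for Avro's generated classes.
# Field names mirror the LPEvent record; the values are invented examples.

def make_lp_event(event_body, participants, node_name):
    """Wrap a specific event body in the shared LPEvent envelope."""
    return {
        "header": {"eventId": "evt-1", "timestamp": 1402000000000},
        "participants": participants,          # e.g. the Visitor entity
        "platformSpecificHeader": {"node": node_name},
        "auditingHeader": None,                # optional: union {null, AuditingHeader}
        "eventBody": event_body,
    }

event = make_lp_event({"type": "PageView", "url": "/pricing"},
                      [{"role": "Visitor", "id": "v-42"}],
                      "node-7")
print(event["eventBody"]["type"])  # PageView
```

In the real system such a structure would be handed to Avro for binary encoding against the LPEvent schema rather than kept as a dict.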
11. Backward & Forward Compatibility
Avro schema evolution
● Avro resolves data written with one schema against a reader using another (writer vs. reader schema)
● You need to follow a set of rules:
o Every field must have a default value
o A field can be added (make sure to give it a default value)
o Field types cannot be changed (add a new field instead)
o enum symbols can be added but never removed
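These rules are what make Avro's schema resolution work: when the reader's schema has a field that the writer's data lacks, the field's default fills the gap. A toy, stdlib-only sketch of that projection step (not the real Avro implementation; the field names and defaults are hypothetical):

```python
# Toy model of Avro schema resolution: the reader's schema wins, and
# every reader field must carry a default so old records still decode.

READER_SCHEMA = {            # field name -> default value (hypothetical)
    "eventId":   "",
    "eventBody": None,
    "sessionId": "unknown",  # field added later, with a default
}

def resolve(record, reader_schema):
    """Project a record onto the reader's schema, filling in defaults."""
    return {name: record.get(name, default)
            for name, default in reader_schema.items()}

# A record written before "sessionId" existed still resolves cleanly.
old_record = {"eventId": "evt-1", "eventBody": {"type": "PageView"}}
print(resolve(old_record, READER_SCHEMA)["sessionId"])  # unknown
```

This is also why every field needs a default: without one, an old record would have no value to supply for the new field and resolution would fail.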
13. How well does it work?
● Cyber Monday 2013 (one day):
o More than 320,000 events per second
o 7 Storm topologies consuming the events within seconds of real time
o 2TB of data saved to Hadoop
● 2014 preparation:
o x2 the number of events per second, to ~640,000
14. So how did we do it?
1. Use an event-driven system, don't use direct APIs
2. Create a unified schema for all events
3. Use Avro to implement the schema
4. Add some supporting infrastructure