This document summarizes the evolution of Swisscom's network analytics capabilities from 2015 to the present. It discusses the move from basic network monitoring to a data mesh architecture enabling closed-loop network operations. Key developments include onboarding more platforms and metrics, anomaly detection, visualization, and collaboration with the IETF on standards such as the BGP Monitoring Protocol, IPFIX, and YANG push notifications. The goal is network visibility that supports informed decisions and recognizes service interruptions before customers do. Future work involves standardizing extensions for additional RIB coverage, segment routing, and route policies.
Mainframe Integration, Offloading and Replacement with Apache Kafka | Kai Wähner
Video recording of this presentation:
https://youtu.be/upWzamacOVQ
Blog post with more details:
https://www.kai-waehner.de/blog/2020/04/24/mainframe-offloading-replacement-apache-kafka-connect-ibm-db2-mq-cdc-cobol/
Mainframes are still hard at work, processing over 70 percent of the world’s most essential computing transactions every day. Very high costs, monolithic architectures, and a shortage of experts are the key challenges for mainframe applications. Time to get more innovative, even with the mainframe!
Mainframe offloading with Apache Kafka and its ecosystem can be used to keep a more modern data store in real-time sync with the mainframe. At the same time, the event data is persisted on the bus, enabling microservices and delivering the data to other systems such as data warehouses and search indexes.
But the final goal and ultimate vision is to replace the mainframe with new applications using modern and less costly technologies. Stand up to the dinosaur, but keep in mind that legacy migration is a journey! Kai will guide you to the next step of your company’s evolution!
You will learn:
- how to not only reduce operational expenses but provide a path for architecture modernization, agility and eventually mainframe replacement
- what steps some of Confluent’s customers already took, leveraging technologies like Change Data Capture (CDC) or MQ for mainframe offloading
- how an event streaming platform enables cost reduction, architecture modernization, and a combination of a mainframe with new technologies
Building a Logical Data Fabric using Data Virtualization (ASEAN) | Denodo
Watch full webinar here: https://bit.ly/3FF1ubd
In the recent Building the Unified Data Warehouse and Data Lake report by leading industry analysts TDWI, 64% of organizations stated that the objective of a unified data warehouse and data lake is to get more business value, and 84% of organizations polled felt that a unified approach to data warehouses and data lakes was either extremely or moderately important.
In this session, you will learn how your organization can apply a logical data fabric, and how the associated technologies of machine learning, artificial intelligence, and data virtualization can reduce time to value, increasing the overall business value of your data assets.
KEY TAKEAWAYS:
- How a Logical Data Fabric is the right approach to help organizations unify their data.
- The advanced features of a Logical Data Fabric that assist with the democratization of data, providing an agile and governed approach to business analytics and data science.
- How a Logical Data Fabric with Data Virtualization enhances your legacy data integration landscape to simplify data access and encourage self-service.
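As a concrete illustration of the data-virtualization idea in these takeaways, the sketch below federates two live sources at query time instead of copying data into a central store. The sources, table, and field names are all invented for illustration; a real logical data fabric would connect heterogeneous enterprise systems through a virtualization layer such as Denodo.

```python
# A "virtual view" answers queries by joining two live sources on demand;
# no data is materialized up front.
import sqlite3

# First source: a relational warehouse, here an in-memory SQLite database.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales (customer_id TEXT, amount REAL)")
warehouse.executemany("INSERT INTO sales VALUES (?, ?)",
                      [("c1", 50.0), ("c2", 20.0), ("c1", 30.0)])

# Second source: a REST-style CRM service, stubbed as a dict.
crm_service = {"c1": {"name": "Acme"}, "c2": {"name": "Globex"}}

def virtual_customer_spend() -> dict:
    """Federate both sources at query time: spend per customer name."""
    rows = warehouse.execute(
        "SELECT customer_id, SUM(amount) FROM sales GROUP BY customer_id")
    return {crm_service[cid]["name"]: total for cid, total in rows}

print(virtual_customer_spend())  # {'Acme': 80.0, 'Globex': 20.0}
```

The point of the pattern is that consumers see one logical view while each source keeps its own storage, security, and lifecycle.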
Dell Technologies - The Complete ISG Hardware Portfolio | Smarter.World
To give an idea of the huge hardware product portfolio of Dell Technologies, I will showcase the entire Dell EMC ISG (Infrastructure Solutions Group) server, storage, backup, converged, hyperconverged, and network portfolio in this presentation.
I do not speak about hero numbers or magic quadrants, nor do I present revenue or employee numbers.
The focus will be on the Dell EMC ISG hardware products of Dell Technologies that are needed for IT transformation.
We will introduce the hardware products from Dell EMC at a semi-high-level view (e.g. product highlights, use cases/workloads, and a primary set of key capabilities).
VMware Tanzu Application Service as an Integration Platform | VMware Tanzu
SpringOne 2021
Session Title: VMware Tanzu Application Service as an Integration Platform
Speakers: Manoj Thekumpurath, Sr. Manager at Deloitte; Siddharth Mehrotra, Senior Manager at Deloitte
Domain Driven Data: Apache Kafka® and the Data Mesh | Confluent
James Gollan, Confluent, Senior Solutions Engineer
From digital banking to Industry 4.0, the nature of business is changing. Increasingly, businesses are becoming software, and the lifeblood of software is data. Dealing with data at the enterprise level is tough, and there have been some missteps along the way.
This session will consider the increasingly popular idea of a 'data mesh' - the problems it solves and, perhaps most importantly, how an event streaming platform forms the bedrock of this new paradigm.
Recording to be available at cnfl.io/meetup-hub
https://www.meetup.com/KafkaMelbourne/events/277076626/
Benefits of Stream Processing and Apache Kafka Use Cases | Confluent
Watch this talk here: https://www.confluent.io/online-talks/benefits-of-stream-processing-and-apache-kafka-use-cases-on-demand
This talk explains how companies are using event-driven architecture to transform their business and how Apache Kafka serves as the foundation for streaming data applications.
Learn how major players in the market are using Kafka in a wide range of use cases such as microservices, IoT and edge computing, core banking and fraud detection, cyber data collection and dissemination, ESB replacement, data pipelining, ecommerce, mainframe offloading and more.
Also discussed in this talk are the differences between Apache Kafka and Confluent Platform.
This session is part 1 of 4 in our Fundamentals for Apache Kafka series.
In this webinar you'll learn how to quickly and easily improve your business using Snowflake and Matillion ETL for Snowflake. Webinar presented by Solution Architects Craig Collier (Snowflake) and Kalyan Arangam (Matillion).
In this webinar:
- Learn to optimize Snowflake and leverage Matillion ETL for Snowflake
- Discover tips and tricks to improve performance
- Get invaluable insights from data warehousing pros
Cloud Architecture with the ArchiMate Language | Iver Band
Today's commercial cloud platforms enable the migration of on-premises architectures to environments that offer increased flexibility, resilience, and security. These platforms also offer innovative managed services that enable architects, designers and developers to focus on business logic and user experience rather than underlying infrastructure.
Enterprise Architects can use the ArchiMate language to guide the use of cloud platforms to meet business and technical goals. This presentation models an architecture based on a leading cloud platform. The model uses all layers and aspects of the ArchiMate language as well as its customization mechanisms, which express vendor-specific platform elements and relationships. It provides an appreciation of the depth and versatility of the ArchiMate 3.0 language, and an introduction to developing architectures that use commercial cloud platforms.
The Future of Data Science and Machine Learning at Scale: A Look at MLflow, D... | Databricks
Many have dubbed the 2020s the decade of data. This is indeed an era of data zeitgeist.
From code-centric software development 1.0, we are entering software development 2.0, a data-centric and data-driven approach, where data plays a central theme in our everyday lives.
As the volume and variety of data garnered from myriad data sources continue to grow at an astronomical scale and as cloud computing offers cheap computing and data storage resources at scale, the data platforms have to match in their abilities to process, analyze, and visualize at scale and speed and with ease — this involves data paradigm shifts in processing and storing and in providing programming frameworks to developers to access and work with these data platforms.
In this talk, we will survey some emerging technologies that address the challenges of data at scale, how these tools help data scientists and machine learning developers with their data tasks, why they scale, and how they facilitate the future data scientists to start quickly.
In particular, we will examine in detail two open-source tools MLflow (for machine learning life cycle development) and Delta Lake (for reliable storage for structured and unstructured data).
Other emerging tools such as Koalas help data scientists to do exploratory data analysis at scale in a language and framework they are familiar with as well as emerging data + AI trends in 2021.
You will understand the challenges of machine learning model development at scale, why you need reliable and scalable storage, and what other open source tools are at your disposal to do data science and machine learning at scale.
The success of application deployment in the cloud depends a lot on the architecture style, which in turn depends on your business needs. This presentation talks about commonly used architectures and business use cases.
Apache Camel v3, Camel K and Camel Quarkus | Claus Ibsen
In this session, we will explore key challenges with function interactions and coordination, addressing these problems using Enterprise Integration Patterns (EIP) and modern approaches with the latest innovations from the Apache Camel community:
Apache Camel is the Swiss army knife of integration, and the most powerful integration framework. In this session you will hear about the latest features in the brand new 3rd generation.
Camel K is a lightweight integration platform that enables Enterprise Integration Patterns to be used natively on any Kubernetes cluster. When used in combination with Knative, a framework that adds serverless building blocks to Kubernetes, and the subatomic execution environment of Quarkus, Camel K can mix serverless features such as auto-scaling, scaling to zero, and event-based communication with the outstanding integration capabilities of Apache Camel.
- Apache Camel 3
- Camel K
- Camel Quarkus
We will show how Camel K works. We’ll also use examples to demonstrate how Camel K makes it easier to connect to cloud services or enterprise applications using some of the 300 components that Camel provides.
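The Enterprise Integration Patterns referenced above can be illustrated independently of Camel's DSL. Below is a minimal content-based router, one of the classic EIPs that Camel implements, written in plain Python for brevity rather than in Camel's actual Java/YAML DSL; the queue names and message shapes are invented.

```python
# Content-based router EIP: each message is sent to a destination chosen
# by inspecting its content, with a default for unmatched messages.
def content_based_router(message: dict, routes: list, default: str) -> str:
    """Return the destination whose predicate matches the message."""
    for predicate, destination in routes:
        if predicate(message):
            return destination
    return default

routes = [
    (lambda m: m.get("type") == "order",   "queue:orders"),
    (lambda m: m.get("type") == "invoice", "queue:invoices"),
]

print(content_based_router({"type": "order", "id": 7}, routes, "queue:dead-letter"))
# queue:orders
print(content_based_router({"type": "refund"}, routes, "queue:dead-letter"))
# queue:dead-letter
```

In Camel, the same logic is a route with a `choice()/when()` block, and the destinations would be real endpoints such as Kafka topics or JMS queues.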
Apache Kafka and the Data Mesh | Michael Noll, Confluent | Hosted by Confluent
Data mesh is a relatively recent term that describes a set of principles that good modern data systems uphold. A kind of “microservices” for the data-centric world. While the data mesh is not a technology-specific pattern, building systems that adopt and implement data mesh principles has a relatively long history under different guises.
In this talk, we share our recommendations and picks of what every developer should know about building a streaming data mesh with Kafka. We introduce the four principles of the data mesh: domain-driven decentralization, data as a product, self-service data platform, and federated governance. We then cover topics such as the differences between working with event streams versus centralized approaches and highlight the key characteristics that make streams a great fit for implementing a mesh, such as their ability to capture both real-time and historical data. We’ll examine how to onboard data from existing systems into a mesh, modelling the communication within the mesh, how to deal with changes to your domain’s “public” data, give examples of global standards for governance, and discuss the importance of taking a product-centric view on data sources and the data sets they share.
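A key claim above is that event streams can serve both historical and real-time reads. A toy append-only log makes the mechanics concrete; this is an illustrative sketch, not Kafka's API, and the class and topic names are invented.

```python
# An append-only log lets a new consumer replay history from offset 0 and
# then keep tailing new events from where it left off.
class EventLog:
    def __init__(self):
        self.events = []

    def append(self, event: dict) -> None:
        self.events.append(event)

    def read_from(self, offset: int):
        """Return events at or after `offset`, plus the next offset to resume from."""
        return self.events[offset:], len(self.events)

orders = EventLog()
orders.append({"order": 1, "status": "created"})
orders.append({"order": 1, "status": "shipped"})

# A consumer joining late still sees the full history...
history, next_offset = orders.read_from(0)
print(len(history))  # 2

# ...and subsequent polls deliver only new events.
orders.append({"order": 2, "status": "created"})
fresh, next_offset = orders.read_from(next_offset)
print(fresh)  # [{'order': 2, 'status': 'created'}]
```

This dual read path is what lets a stream act as a domain's shared "data product": new consumers bootstrap from history, existing ones stay current in real time.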
Integrating Apache NiFi and Apache Flink | Hortonworks
Hortonworks DataFlow delivers data to streaming analytics platforms, including Storm, Spark, and Flink.
These are slides from an Apache Flink meetup: Integration of Apache Flink and Apache NiFi, February 4, 2016.
What’s New in OpenText Content Suite 16 EP2 | OpenText
OpenText Content Suite is a unified set of ECM solutions that are transforming the digital workplace. The OpenText Content Suite EP2 release recognizes that the ways knowledge workers access, create, and collaborate on business content are changing, and enables them with simple, effective tools that don't sacrifice control and governance. This presentation highlights new features added to our Content Suite 16.2 releases that build on the revolutionary functionality first launched a year ago in Content Suite 16.
• OpenText Content Suite 16 EP2 continues to put productivity first by making the user experience even more intuitive, and strengthening integration with the business applications users work in every day.
• OpenText Content Suite 16 EP2 enhances information flows with more ways to extend ECM to lead applications, bridging silos and automating business processes while strengthening control and governance.
• And OpenText Content Suite 16 EP2 further streamlines deployment, configuration and maintenance whether deployed on-premises or in the cloud.
With these updates and more in OpenText Content Suite EP2, your organization will be better enabled to leverage the power of its information, and work faster and smarter towards its digital transformation.
Learn more: www.opentext.com/what-we-do/opentext-release-16/opentext-content-suite-16
Meetup 4/2/2016 - Functional and Technical IoT Architecture | Digipolis Antwerpen
A meetup where, together with everyone who is interested, we think through an open IoT architecture for Antwerp.
http://www.meetup.com/DigAnt-Cafe/events/228254825/
Chair: Ewan Quibell, management systems and service leader, Jisc.
16:15-16:55 - The autonomous network
Speaker: Simon Parry, CTO UK public sector, Ciena.
You’ve virtualised your servers, virtualised your storage, maybe even virtualised an application, but what about the network that joins it all together? How do you build an agile, open network that responds to the new world of on-demand services, without impacting current performance and while delivering greater efficiencies?
Find out how a network operator can save money and deliver a more responsive experience and outcome for your users.
Detecting Hacks: Anomaly Detection on Networking Data | James Sirota
See https://medium.com/@jamessirota for a series of blog entries that goes with this deck...
Defense in Depth for Big Data
Network Anomaly Detection Overview
Volume Anomaly Detection
Feature Anomaly Detection
Model Architecture
Deployment on OpenSOC Platform
Questions
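As a taste of the volume anomaly detection item in the agenda above, a simple approach flags time buckets whose traffic count deviates strongly from a trailing window. This is a minimal sketch under stated assumptions: the window size, threshold, and traffic numbers below are illustrative only, not from the deck.

```python
# Flag buckets whose volume sits more than `k` standard deviations from
# the mean of the preceding `window` buckets.
from statistics import mean, stdev

def volume_anomalies(counts: list, window: int = 5, k: float = 3.0) -> list:
    """Return indices of buckets that deviate strongly from recent history."""
    flagged = []
    for i in range(window, len(counts)):
        hist = counts[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(counts[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# Steady flows-per-minute with one exfiltration-like spike at index 8.
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 900, 101]
print(volume_anomalies(traffic))  # [8]
```

Production systems layer this with feature-based models (as the agenda suggests), since volume alone misses low-and-slow attacks.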
Packet processing in the fast path involves looking up bit patterns and deciding on an action at line rate. The complexity of these functions at line rate has traditionally been handled by ASICs and NPUs. However, with the availability of faster and cheaper CPUs and hardware/software accelerations, it is possible to move these functions onto commodity hardware. This tutorial will talk about the various building blocks available to speed up packet processing, both hardware based (e.g. SR-IOV, RDT, QAT, VMDq, VT-d) and software based (e.g. DPDK, FD.io/VPP, OVS), and give hands-on lab experience with DPDK and FD.io fast-path lookup in the following sessions. 1: Introduction to Building Blocks: Sujata Tibrewala
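The "looking up bit patterns and deciding on an action" step above can be made concrete with a longest-prefix-match route lookup. The sketch below is pure Python and nowhere near line rate, since real fast paths use TCAM or DPDK trie lookups, and the routes and next-hop names are invented.

```python
# Longest-prefix match: pick the most specific route covering the
# destination address and return its action (here, a next hop).
import ipaddress

ROUTES = [  # (prefix, next hop) - illustrative entries only
    (ipaddress.ip_network("10.0.0.0/8"),  "core-1"),
    (ipaddress.ip_network("10.1.0.0/16"), "tor-7"),
    (ipaddress.ip_network("0.0.0.0/0"),   "default-gw"),
]

def lookup(dst: str) -> str:
    """Return the next hop for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    _, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(lookup("10.1.2.3"))  # tor-7  (the /16 beats the /8 and /0)
print(lookup("8.8.8.8"))   # default-gw
```

The hardware and software accelerations the tutorial covers exist precisely to make this kind of per-packet decision fast enough for tens of millions of packets per second.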
An experience is a personal and emotional event we remember. Every experience is established upon pre-determined expectations we conceive and create in our minds. It’s personal, and therefore remains a moving and evolving target in every scenario. When our experience concludes and the moment has passed, the outcome remains in our memory. Think about what makes you happy when connecting with your own device, and then think about what makes you really upset when things are hard, complicated, and slow. If the user has a bad experience in any one of these areas (simple, fast, and smart), they are likely to leave, share their negative experience, and potentially never return. Users might forget facts or details about their computing environment, but they find it difficult to forget the feeling behind a bad network experience. When something goes wrong with the network or an application, do you always get the blame?
So what can ultra-low, consistent latency deliver? Low latency is a requirement for intensive, time-critical applications. Latency is measured on a port-to-port basis: once a frame is received on an ingress port, how long does it take the frame to pass through the internal switching infrastructure and leave an egress port? The Summit X670 top-of-rack switch supports latency of around 800-900 ns, while the BlackDiamond chassis, BDX8, can switch frames in as little as 3 usec.
We’re big believers in the value of disaggregation - of breaking down traditional data center technologies into their core components so we can build new systems that are more flexible, more scalable, and more efficient. This approach has guided Facebook from the beginning, as we’ve grown and expanded our infrastructure to connect more than 1.28 billion people around the world.
Flatter networks. Traditional data center networks have a minimum of three tiers: top of rack (ToR), aggregation and core. Often, there is more than one aggregation tier, meaning the data center could have three or more network tiers. When network traffic is primarily best effort, this is sufficient. But as more mission-critical, real-time traffic flows into the data center, it becomes critical that organizations move to two-tier networks.
An increase in east-west traffic flows. Legacy data center networks are designed for traffic to flow from the edge of the network into the core and then back to the edge in a north-south direction. Today, however, factors such as workforce mobility, Hadoop, big data and other applications are driving east-west traffic flows from server to server.
Virtualization of other IT assets. Historically, compute resources such as processor, memory and storage were resident in the server itself. Over time, more and more of these resources are being put into “pools” that can be accessed on demand. In this case, the data center network becomes a “fabric” that acts as the backplane for the virtualized data center.
A Pragmatic Reference Architecture for The Internet of Things | Rick G. Garibay
We already know that the Internet of Things is big. It isn't something that's coming. It's already here. From manufacturing to healthcare, retail and hospitality, transportation, utilities and energy, the shift from Information Technology to Operational Technology and the value that this massive explosion of data can provide is taking the world by storm.
But IoT isn't a product. It's not something you can buy. As with any gold rush, snake oil abounds. The potential is massive and the good news is that the technology and platforms are already here!
But how do you get started? What are the application and networking protocols at play? How do you handle the ingestion of massive, real-time streams of data? Where do you land the data? What kind of insights does the data at scale provide? How do you make sense of it and/or take action on the data in real time scaling to hundreds if not hundreds of thousands of devices per deployment?
In this session, Rick G. Garibay will share a pragmatic reference architecture based on his experience working with dozens of customers in the field and provide an insider’s view on some real-world IoT solutions he's led. He'll demystify what IoT is and what it isn't, discuss patterns for addressing the challenges inherent in IoT projects and how the most popular public cloud vendors are already providing the capabilities you need to build real-world IoT solutions today.
NetBrain Consultant Edition (CE) is designed to make a Consultant’s job easier by providing instant network discovery, document automation, and visual troubleshooting. NetBrain enables consultants to:
1. Carry out deep discovery of the customer network
2. Automate documentation for network assessments
3. Analyze network design visually
4. Automatically troubleshoot and collect data without custom scripts
In short, NetBrain’s visual workbench allows consultants to complete network assessment tasks much faster and with much more accuracy.
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...confluent
In our exclusive webinar, you'll learn why event-driven architecture is the key to unlocking cost efficiency, operational effectiveness, and profitability. Gain insights on how this approach differs from API-driven methods and why it's essential for your organization's success.
Unlocking the Power of IoT: A comprehensive approach to real-time insightsconfluent
In today's data-driven world, the Internet of Things (IoT) is revolutionizing industries and unlocking new possibilities. Join Data Reply, Confluent, and Imply as we unveil a comprehensive solution for IoT that harnesses the power of real-time insights.
Workshop híbrido: Stream Processing con Flinkconfluent
El Stream processing es un requisito previo de la pila de data streaming, que impulsa aplicaciones y pipelines en tiempo real.
Permite una mayor portabilidad de datos, una utilización optimizada de recursos y una mejor experiencia del cliente al procesar flujos de datos en tiempo real.
En nuestro taller práctico híbrido, aprenderás cómo filtrar, unir y enriquecer fácilmente datos en tiempo real dentro de Confluent Cloud utilizando nuestro servicio Flink sin servidor.
Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and Spark...confluent
Our talk will explore the transformative impact of integrating Confluent, HiveMQ, and SparkPlug in Industry 4.0, emphasizing the creation of a Unified Namespace.
In addition to the creation of a Unified Namespace, our webinar will also delve into Stream Governance and Scaling, highlighting how these aspects are crucial for managing complex data flows and ensuring robust, scalable IIoT-Platforms.
You will learn how to ensure data accuracy and reliability, expand your data processing capabilities, and optimize your data management processes.
Don't miss out on this opportunity to learn from industry experts and take your business to the next level.
La arquitectura impulsada por eventos (EDA) será el corazón del ecosistema de MAPFRE. Para seguir siendo competitivas, las empresas de hoy dependen cada vez más del análisis de datos en tiempo real, lo que les permite obtener información y tiempos de respuesta más rápidos. Los negocios con datos en tiempo real consisten en tomar conciencia de la situación, detectar y responder a lo que está sucediendo en el mundo ahora.
Eventos y Microservicios - Santander TechTalkconfluent
Durante esta sesión examinaremos cómo el mundo de los eventos y los microservicios se complementan y mejoran explorando cómo los patrones basados en eventos nos permiten descomponer monolitos de manera escalable, resiliente y desacoplada.
Purpose of the session is to have a dive into Apache, Kafka, Data Streaming and Kafka in the cloud
- Dive into Apache Kafka
- Data Streaming
- Kafka in the cloud
Build real-time streaming data pipelines to AWS with Confluentconfluent
Traditional data pipelines often face scalability issues and challenges related to cost, their monolithic design, and reliance on batch data processing. They also typically operate under the premise that all data needs to be stored in a single centralized data source before it's put to practical use. Confluent Cloud on Amazon Web Services (AWS) provides a fully managed cloud-native platform that helps you simplify the way you build real-time data flows using streaming data pipelines and Apache Kafka.
Q&A with Confluent Professional Services: Confluent Service Meshconfluent
No matter whether you are migrating your Kafka cluster to Confluent Cloud, running a cloud-hybrid environment or are in a different situation where data protection and encryption of sensitive information is required, Confluent Service Mesh allows you to transparently encrypt your data without the need to make code changes to you existing applications.
Citi Tech Talk: Event Driven Kafka Microservicesconfluent
Microservices have become a dominant architectural paradigm for building systems in the enterprise, but they are not without their tradeoffs. Learn how to build event-driven microservices with Apache Kafka
Confluent & GSI Webinars series - Session 3confluent
An in depth look at how Confluent is being used in the financial services industry. Gain an understanding of how organisations are utilising data in motion to solve common problems and gain benefits from their real time data capabilities.
It will look more deeply into some specific use cases and show how Confluent technology is used to manage costs and mitigate risks.
This session is aimed at Solutions Architects, Sales Engineers and Pre Sales, and also the more technically minded business aligned people. Whilst this is not a deeply technical session, a level of knowledge around Kafka would be helpful.
Transforming applications built with traditional messaging solutions such as TIBCO, MQ and Solace to be scalable, reliable and ready for the move to cloud
How can applications built with traditional messaging technologies like TIBCO, Solace and IBM MQ be modernised and be made cloud ready? What are the advantages to Event Streaming approaches to pub/sub vs traditional message queues? What are the strengeths and weaknesses of both approaches, and what use cases and requirements are actually a better fit for messaging than Kafka?
This session will show why the old paradigm does not work and that a new approach to the data strategy needs to be taken. It aims to show how a Data Streaming Platform is integral to the evolution of a company’s data strategy and how Confluent is not just an integration layer but the central nervous system for an organisation
Vous apprendrez également à :
• Créer plus rapidement des produits et fonctionnalités à l’aide d’une suite complète de connecteurs et d’outils de gestion des flux, et à connecter vos environnements à des pipelines de données
• Protéger vos données et charges de travail les plus critiques grâce à des garanties intégrées en matière de sécurité, de gouvernance et de résilience
• Déployer Kafka à grande échelle en quelques minutes tout en réduisant les coûts et la charge opérationnelle associés
Confluent Partner Tech Talk with Synthesisconfluent
A discussion on the arduous planning process, and deep dive into the design/architectural decisions.
Learn more about the networking, RBAC strategies, the automation, and the deployment plan.
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptxrickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Enhancing Project Management Efficiency_ Leveraging AI Tools like ChatGPT.pdfJay Das
With the advent of artificial intelligence or AI tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT, and Bard organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
Large Language Models and the End of ProgrammingMatt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Anthony Dahanne
Les Buildpacks existent depuis plus de 10 ans ! D’abord, ils étaient utilisés pour détecter et construire une application avant de la déployer sur certains PaaS. Ensuite, nous avons pu créer des images Docker (OCI) avec leur dernière génération, les Cloud Native Buildpacks (CNCF en incubation). Sont-ils une bonne alternative au Dockerfile ? Que sont les buildpacks Paketo ? Quelles communautés les soutiennent et comment ?
Venez le découvrir lors de cette session ignite
top nidhi software solution freedownloadvrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. It’s here, custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
AI Pilot Review: The World’s First Virtual Assistant Marketing SuiteGoogle
AI Pilot Review: The World’s First Virtual Assistant Marketing Suite
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTier1 app
Even though at surface level ‘java.lang.OutOfMemoryError’ appears as one single error; underlyingly there are 9 types of OutOfMemoryError. Each type of OutOfMemoryError has different causes, diagnosis approaches and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
SOCRadar Research Team: Latest Activities of IntelBrokerSOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntekBroker. We have compiled for you what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
3. « The customer knows before Swisscom that there is a service interruption. We are unable to recognize impact and root cause when configurational or operational network changes occur. Swisscom suffers reputation damage. We need to work together to remediate. »
Markus Reber
Head of Networks at Swisscom
4. « At IETF, only 9.85% of the activities are related to network automation and monitoring. We are still using protocols designed 40 years ago to manage networks. IP network protocols are not made to expose metrics for analytics. IPFIX and the BGP Monitoring Protocol are the rare exceptions. »
Thomas Graf
Distinguished Network Engineer and Network Analytics Architect at Swisscom
5. “ It is our duty to recognize service interruption before our customer does. Why do we still often fail to be first? ”
6. Daisy Network Analytics Transforms Swisscom DevOps Mindset
From device monitoring to network analytics with closed loop operation

2015-2016: Flow Aggregation Proof of Concept. Internet Distribution Core and TV 2.0.
2017-2018: Swisscom Big Data onboarded, Meerkat Anomaly Detection feasibility. 10 active users. 9 platforms. 87 nodes. 250'000 metrics per second.
2019: BGP Monitoring Protocol and YANG Push. IETF engagement started. 40 active users. 17 platforms. 233 nodes. 1'200'000 metrics per second.
2020: Pivot Migration, Druid Scale Out, Unyte IETF collaboration established. 160 active users. 34 platforms. 2500 nodes. 3'000'000 metrics per second. Active probing with 1'500'000 broadband subscribers.
2021: Taking over end-to-end Daisy Chain responsibility. 215 active users. 40 platforms. 2700 nodes. 20'000'000 metrics per second. Active probing with >1'500'000 broadband subscribers.
2022: L3 VPN Anomaly Detection and Network Visualization Proof of Concept. SLO Reporting. 400 active users. 47 platforms. 7000 nodes. 25'000'000 metrics per second.

(Adoption-curve figure: capabilities roll out from early adopters to laggards in the order platform onboarding, change verification and troubleshooting, capacity management and trend detection, anomaly detection, network visualization and SLO reporting, supported by IETF vendor, operator and university collaboration.)

Key Points
> From bottom up to mainstream. From IETF to Swisscom DevOps teams.
> From network verification and troubleshooting to visualization with anomaly detection and SLO reporting.
> From capacity management to trend detection.
> From network automation to closed loop operation.
7. Evolving Big Data architecture
Domain oriented, like networks

1st generation: Proprietary Enterprise Data Warehouse.
2nd generation: Data lake. Big data ecosystem.
3rd generation (current): Kappa. Adds streaming for real-time data.
4th generation (next step): Data Mesh. Distributed and organized in domains.

From principles to logical architecture:
> Domains (Domain A, Domain B, Domain C), each with its own collect, publish and serve stages, spanning an operational data plane and an analytical data plane.
> Data Infra as a Platform: an operational delivery platform and an analytical data platform.
> Federated computational governance for global interoperability.
> Data product as an architectural quantum.
8. Network Analytics as a product
Domain ownership: the domain collects metrics from the forwarding plane, control plane, device and topology; transforms and aggregates them in the operational data plane; normalizes and correlates them in the analytical data plane; and publishes and serves alerts and reports.

Products
• Verification and Troubleshooting enables change and incident management.
• Visualization makes routing and peering topologies accessible to humans.
• Capacity Management enables proactivity for key performance metrics.
• Anomaly Detection automates incident management. Alerts users to important events with context.
• Service Level Objective reports delay and loss for a time period.
• Trend Detection automates capacity management. Alerts users early before running out of capacity.
• Closed Loop Operation validates network orchestration. Controlled configuration deployments.
9. Data Collection with Network Telemetry
Structured metrics enable informed decision-making

Network Telemetry:
> A data collection framework where the network device pushes its metrics to Big Data. Defined in RFC 9232.

Data Modelling:
> Key for Big Data correlation to understand and react in the right context.
> Are interface drops bad?
> How should we react?

Data models by plane:
> Forwarding plane data models: how customers are using our network and services. Active and passive delay measurement.
> Control plane data models: how networks are provisioned and how redundancy adjusts to topology.
> Topology data models: how logical and physical network devices are connected with each other and carry load.
> Swisscom service models: translate between what customers wish and the intent which should be fulfilled. Reality vs. intent (e.g. Thor LC ID 54654, BGP community 64497:12220, VRF and interface config).
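The "Are interface drops bad?" question above can be made concrete with a small sketch. This is purely illustrative (the field names, topology model and thresholds are hypothetical, not Swisscom's data model); it shows how modelled context, here redundancy from a topology model, turns a raw metric into a decision:

```python
# Illustrative only: how a topology data model gives a raw drop counter
# the context needed to decide how to react (field names are invented).

def classify_drop_event(record: dict, topology: dict) -> str:
    """Return an action for an interface-drop telemetry record."""
    iface = record["interface"]
    drops = record["out-discards"]
    if drops == 0:
        return "ignore"
    # Context from the (hypothetical) topology model: is the service on
    # this interface still protected by a redundant member?
    redundant = topology.get(iface, {}).get("redundant-member") is not None
    if redundant and drops < 100:
        return "observe"   # low impact, service still protected
    return "alert"         # unprotected or heavy loss: raise an alert

record = {"interface": "eth-0/0/1", "out-discards": 12}
topology = {"eth-0/0/1": {"redundant-member": "eth-0/0/2"}}
print(classify_drop_event(record, topology))  # observe
```

The same drop count on an unprotected interface would yield an alert, which is exactly the "react in the right context" point the slide makes.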
10. Self-serve data platform
Enabling SLO Reporting, Trend and Anomaly Detection

Key Assets
Data Infra shared among domains. Provides:
> Message Broker (Apache Kafka) for accessibility.
> Schema Registry (YANG, BMP, IPFIX and analytical schemas) for discoverability.
> Alert Broker for alert unification; issues the Anomaly Detection alert ID.
> Time Series Database (Apache Druid) for normalization and the ability to correlate. Supports "hot" and "warm" storage.
> Report and alert generation run independently, without dependencies.
Enabling collaboration among domains and agile teams.

Pipelines on the platform:
> Anomaly Detection: collects device, topology, control-plane and forwarding-plane metrics; transforms and aggregates; serves data correlated with inventory; alerts on deterministic domain rules and pattern recognition.
> SLO Reporting: collects device, topology and forwarding-plane metrics; transforms and aggregates; serves data to manage error budget and burn rate; reports aggregated and correlated Service Level Objective results.
> Trend Detection: collects device and topology metrics; transforms; serves data to manage capacity; reports aggregated and predicted trends.
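The error budget and burn rate mentioned for SLO reporting follow from the SLO target itself. A minimal sketch (generic SRE-style arithmetic, not Swisscom's reporting code) for an availability-style objective:

```python
def error_budget(slo_target: float, total: int, bad: int):
    """Remaining error budget and burn rate for an availability-style SLO.

    slo_target: e.g. 0.999 means 99.9% of events must be good.
    total:      total events (probes, requests) in the reporting window.
    bad:        events that violated the objective.
    """
    allowed = (1.0 - slo_target) * total        # failures the SLO permits
    remaining = allowed - bad                   # budget left in the window
    burn_rate = bad / allowed if allowed else float("inf")
    return remaining, burn_rate

remaining, rate = error_budget(0.999, total=1_000_000, bad=250)
# remaining ≈ 750 events of budget left, burn rate ≈ 0.25 of the budget used
```

A burn rate above 1.0 over the window means the objective will be missed if the trend continues, which is what the report would flag.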
11. L3 VPN Network Anomaly Detection
Networks are deterministic; customers only partially

Analytical Perspectives
Monitors the network service and whether or not it is congested:
> BGP updates and withdrawals.
> UDP vs. TCP missing traffic.
> Interface state changes.

Network Events
1. VPN orange lost connectivity. VPN blue lost redundancy.
2. VPN blue lost connectivity.

Key Point
> AI/ML requires network intent and network-modelled data to deliver dependable results.
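As a toy baseline for the perspectives above (deliberately simpler than the pattern-recognition the deck describes, and not Swisscom's detector), a z-score test on a counter such as BGP updates per interval shows the basic mechanics of flagging a deviation from recent history:

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag a value that deviates more than `threshold` standard
    deviations from its recent history (toy z-score detector)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# BGP updates per minute on a hypothetical L3 VPN session
history = [120, 118, 131, 125, 122, 127, 119, 124]
print(is_anomalous(history, 126))    # False: within normal variation
print(is_anomalous(history, 5000))   # True: an update storm
```

The slide's key point applies here: without intent and modelled context, such a statistical flag alone cannot distinguish a planned maintenance from a real outage.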
12. “ Without network visibility, no informed decisions can be made. ”
16. Facebook Incident, October 4/5th
The Swisscom perspective

At 17:39 prefixes from Facebook BGP ASN 32934 were withdrawn. Outbound traffic steadily increased twofold until 20:20. Inbound traffic decreased by 85%. Between 19:25 and 00:51, BGP updates and withdrawals were received. At 00:41 the traffic rate restored to normal.
17. “ The solution comes with innovators. That's why Swisscom cooperates at IETF with network operators, vendors and universities. ”
19. • Support for Local RIB in BGP Monitoring Protocol
https://datatracker.ietf.org/doc/draft-ietf-grow-bmp-local-rib

YANG Datastores enable Closed Loop Operation
Automated data correlation – what else?

Automated networks can only run with a common data model. A digital-twin YANG datastore enables a comparison between intent and reality: network configuration is pushed with NETCONF <edit-config>, network state is streamed back with YANG Push, so the YANG datastore on the Big Data lake mirrors the YANG datastore on the network device. Schema preservation enables closed loop operation. Closed loop is like the autopilot of an airplane: we need to understand the flight envelope to keep the airplane within it. Without that, we crash.

YANG is a data modelling language which will not only transform how we manage our networks; it will also transform how we manage our services.

News: 17 industry-leading colleagues from 4 network operators, 2 network and 3 analytics providers, and 3 universities committed to a project to integrate YANG and CBOR into the data mesh. Starts November 2022.
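The intent-versus-reality comparison at the heart of closed loop operation reduces to diffing two datastores. A minimal sketch (flat path/value maps instead of real YANG trees, invented paths) of that comparison:

```python
def diff_datastores(intent: dict, reality: dict) -> dict:
    """Report configuration drift between the intended datastore and the
    operational state, as a map of path -> {intent, reality}."""
    drift = {}
    for path, wanted in intent.items():
        actual = reality.get(path)
        if actual != wanted:
            drift[path] = {"intent": wanted, "reality": actual}
    return drift

intent  = {"/interfaces/eth0/mtu": 9000, "/interfaces/eth0/enabled": True}
reality = {"/interfaces/eth0/mtu": 1500, "/interfaces/eth0/enabled": True}
print(diff_datastores(intent, reality))
# {'/interfaces/eth0/mtu': {'intent': 9000, 'reality': 1500}}
```

In a closed loop, a non-empty drift result is what would stop or roll back a controlled configuration deployment: the autopilot noticing the airplane leaving the flight envelope.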
20. When Data Mesh and Network become one
A simple, scalable approach to YANG push

Simplify YANG push network data collection with high scale and low impact, suited for today's distributed forwarding systems. Preserve the YANG data model schema definition throughout the data processing chain. Enable automated data correlation among device, forwarding plane and control plane.

• An HTTPS-based Transport for YANG Notifications
https://datatracker.ietf.org/doc/html/draft-ietf-netconf-https-notif
• UDP-based Transport for Configured Subscriptions
https://datatracker.ietf.org/doc/draft-unyte-netconf-udp-notif
• Subscription to Distributed Notifications
https://datatracker.ietf.org/doc/draft-unyte-netconf-distributed-notif

Data flow: the network device sends YANG push notification messages (JSON/CBOR payload plus a schema ID) to data collection. The collector parses the notification message header and maintains the mapping from schema ID to YANG model and version, fetching schemas via NETCONF <get-schema>. Messages are published to the message broker; the YANG schema registry on the Big Data lake exposes a REST API to get schemas, and the YANG datastore on the Big Data lake stores the JSON/CBOR data keyed by schema ID.
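The schema-ID bookkeeping described above can be sketched in a few lines. This is an assumption-laden illustration (the message layout and class are invented; the real drafts define the wire format), showing only the idea that each notification carries a compact schema ID that the collector resolves back to a YANG model and version:

```python
# Hypothetical sketch of the collector's schema-id mapping; the actual
# notification layout is defined by the YANG push drafts, not by this code.

class SchemaRegistry:
    def __init__(self):
        self._by_id = {}

    def register(self, schema_id: int, model: str, version: str):
        """Record the YANG model and revision behind a schema id
        (learned e.g. via NETCONF <get-schema>)."""
        self._by_id[schema_id] = (model, version)

    def resolve(self, message: dict):
        """Map a notification's schema id back to (model, version)."""
        return self._by_id[message["schema-id"]]

reg = SchemaRegistry()
reg.register(42, "ietf-interfaces", "2018-02-20")
msg = {"schema-id": 42, "payload": {"in-octets": 1234}}
print(reg.resolve(msg))  # ('ietf-interfaces', '2018-02-20')
```

Keeping only an ID on the wire is what makes schema preservation cheap at 25'000'000 metrics per second: the full model reference travels once, not in every message.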
21. • Support for Adj-RIB-Out in BGP Monitoring Protocol
https://tools.ietf.org/html/rfc8671
• Support for Local RIB in BGP Monitoring Protocol
https://datatracker.ietf.org/doc/html/rfc9069

BMP Covering all RIBs
Extends much needed RIB coverage

BGP route exposure without BMP is a challenge of the first order:
> Only the best path is exposed (missing best-external and ECMP routes).
> The next-hop attribute is not preserved all the time.
> Filtering between RIBs is not visible.

Adj-RIB-Out has been an RFC since November 2019, Local RIB since February 2022. Juniper, Huawei and Nokia have public releases available supporting both. Cisco has test code available but has not released it yet.

(Diagram: per-peer Adj-RIB-In pre- and post-policy, shaped by in policies, plus static, connected and IGP redistribution feed the Local-RIB; a table policy installs routes into the FIB; per-peer out policies produce Adj-RIB-Out pre- and post-policy.)
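The "filtering between RIBs is not visible" point becomes tangible once both pre- and post-policy views exist: the policy's effect is simply the set difference between them. A minimal sketch (prefix lists stand in for full RIB entries; the prefixes are documentation examples, not real routes):

```python
def filtered_by_policy(pre_policy, post_policy):
    """Prefixes present before a policy but absent after it, i.e. what
    the policy filtered out (a set difference over simplified RIBs)."""
    return sorted(set(pre_policy) - set(post_policy))

adj_rib_in_pre  = ["10.0.0.0/8", "192.0.2.0/24", "198.51.100.0/24"]
adj_rib_in_post = ["10.0.0.0/8", "198.51.100.0/24"]
print(filtered_by_policy(adj_rib_in_pre, adj_rib_in_post))
# ['192.0.2.0/24']
```

This is exactly the comparison that only becomes possible once BMP exposes both sides of each policy, as the RFCs above provide.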
22. • Support for Enterprise-specific TLVs in the BGP Monitoring Protocol
https://tools.ietf.org/html/draft-lucente-grow-bmp-tlv-ebit
• BMP Extension for Path Marking TLV
https://tools.ietf.org/html/draft-cppy-grow-bmp-path-marking-tlv
• BGP Route Policy and Attribute Trace Using BMP
https://tools.ietf.org/html/draft-xu-grow-bmp-route-policy-attr-trace
• TLV support for BMP Route Monitoring and Peer Down Messages
https://tools.ietf.org/html/draft-ietf-grow-bmp-tlv

BMP with extended TLV support
Brings visibility into FIBs and route policies

Knowing all the routes in all the RIBs brings new challenges:
> We don't know how they are being used in the FIB/RIB (which one is best, best-external, ECMP, backup).
> We don't know which route policy permitted/denied/changed which prefix/attribute.

For the IETF 110 Hackathon, the IETF lab network with Big Data integration was further extended for collaborative development research with ETHZ, INSA, Cisco, Huawei and pmacct (open-source data collection by Paolo Lucente).
23. Export of MPLS Segment Routing Label Type Information in IPFIX
https://datatracker.ietf.org/doc/html/rfc9160
Export of Segment Routing IPv6 Information in IPFIX
https://datatracker.ietf.org/doc/html/draft-tgraf-opsawg-ipfix-srv6-srh
Export of Forwarding Path Delay in IPFIX
https://datatracker.ietf.org/doc/html/draft-tgraf-opsawg-ipfix-inband-telemetry

IPFIX Covering Segment Routing
For MPLS-SR, SRv6 and On-path Delay

SRv6 is commonly standardized, network vendor implementations are available and network operators are at various stages in their deployments, yet data plane visibility is still missing.

Segment Routing coverage in IPFIX brings visibility for:
> Which routing protocol provided the label or IPv6 segment in the SR domain.
> The active segment where the packet is forwarded to in the SRv6 domain.
> The segment list the packet is going to be forwarded through in the SRv6 domain.
> The endpoint behavior describing how the packet is being forwarded in the SRv6 domain.
> The min, max and average on-path delay at each hop in the SR domain.

(Diagram: IOAM nodes perform node-based flow aggregation; pmacct data collection performs data-collection-based flow aggregation; Apache Kafka, the message broker, performs message-broker-based consolidation; the time-series database performs the database join.)
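The min/max/average on-path delay per segment listed above is, at its core, a grouped aggregation over flow records. A small sketch (record field names and segment IDs are illustrative, not the IPFIX information elements themselves) of the database-join step:

```python
from collections import defaultdict

def aggregate_delay(flows):
    """Min/max/avg on-path delay per active segment over a batch of
    simplified flow records (field names illustrative)."""
    buckets = defaultdict(list)
    for flow in flows:
        buckets[flow["active-segment"]].append(flow["delay-us"])
    return {seg: {"min": min(d), "max": max(d), "avg": sum(d) / len(d)}
            for seg, d in buckets.items()}

flows = [
    {"active-segment": "fc00::1", "delay-us": 120},
    {"active-segment": "fc00::1", "delay-us": 180},
    {"active-segment": "fc00::2", "delay-us": 90},
]
print(aggregate_delay(flows))
# {'fc00::1': {'min': 120, 'max': 180, 'avg': 150.0},
#  'fc00::2': {'min': 90, 'max': 90, 'avg': 90.0}}
```

Whether this aggregation runs on the IOAM node, in the collector or in the database is exactly the trade-off the diagram on this slide lays out.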
24. IETF 114 / MWC 2022 – Network Analytics Development
IPv6 Forum, SRv6 Data Plane Visibility

> 5x BMP drafts and 1 RFC at the GROW working group, bringing RIB and route-policy dimensions into BMP and increasing scale.
> 2x YANG push drafts at the NETCONF working group.
> 2x IPFIX Segment Routing and on-path delay drafts and 1 RFC at the OPSAWG working group.
> Network Anomaly Detection code development.
> YANG push udp-notif open-source running code.

https://www.linkedin.com/pulse/network-analytics-ietf-development-mwc-2022-thomas-graf/
https://www.linkedin.com/pulse/ietf-114-network-analytics-bmp-ipfix-yang-push-thomas-graf/