The document discusses declarative programming as it relates to network programmability. It provides examples of declarative versus imperative code and explains key concepts of declarative programming like lack of side effects, referential transparency, and idempotence. It also discusses how declarative programming can provide benefits like robustness, scalability, and reusability for network systems, which often operate in uncertain distributed environments. Finally, it outlines some declarative programming approaches being used for network control, orchestration, and automation.
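These properties are easy to demonstrate in a small sketch (illustrative Python; the network-flavored names are hypothetical, not from any real system):

```python
# A pure function: no side effects and referentially transparent,
# so any call can be replaced by its result.
def desired_vlans(hosts):
    """Declaratively derive the set of VLANs from a host inventory."""
    return sorted({h["vlan"] for h in hosts})

# An idempotent operation: applying it once or many times yields the
# same final state, which makes retries safe in unreliable networks.
def ensure_vlans(state, hosts):
    state["vlans"] = desired_vlans(hosts)
    return state

hosts = [{"name": "a", "vlan": 10}, {"name": "b", "vlan": 20},
         {"name": "c", "vlan": 10}]

once = ensure_vlans({}, hosts)
twice = ensure_vlans(ensure_vlans({}, hosts), hosts)
assert once == twice  # idempotence: f(f(x)) == f(x)
```

Idempotence is exactly what makes declarative control robust in distributed environments: a controller can reapply the desired state after a failure without fear of corrupting it.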
This document discusses parallel programming and some common approaches used in .NET. It explains that parallel programming involves partitioning work into chunks that can execute concurrently on multiple threads or processor cores. It then describes several common .NET APIs for parallel programming: the Task Parallel Library (TPL) for general parallelism, PLINQ for parallel LINQ queries, Parallel class methods for data parallelism, and lower-level task parallelism using the Task class.
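As a rough analogue of the data-parallel pattern these APIs provide (illustrative Python using the standard library, not the .NET TPL itself; the helper name is made up):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(func, items, workers=4):
    """Partition work across a pool of threads, akin to data
    parallelism with Parallel.For or PLINQ in .NET."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))

squares = parallel_map(lambda x: x * x, range(8))
```

The key idea is the same in both ecosystems: the programmer expresses the per-item work, and the runtime decides how to partition it across threads or cores.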
If you missed the SpringOne Conference this year, don't fret! In this session you'll get the highlights of Jeroen and Tim's trip to Las Vegas, and they'll show you the coolest stuff from Spring and Cloud Foundry!
The document discusses writing Neutron plugins, including:
- Neutron plugins implement plugin interfaces and handle API requests, interacting with backend agents/appliances.
- Core plugins implement the core Neutron API while service plugins provide additional network services. Plugins can use drivers to delegate backend interaction.
- Plugins exist independently in the "Neutron stadium" with their own development teams and releases.
- The document uses the fictional HDN (Human Defined Networking) plugin as an example to illustrate plugin architecture, integration with DevStack, use of callbacks, and testing considerations for Neutron plugins.
Streaming your Lyft Ride Prices - Flink Forward SF 2019 - Thomas Weise
At Lyft we dynamically price our rides using a combination of data sources, machine learning models, and streaming infrastructure for low latency, reliability and scalability. Dynamic pricing allows us to adapt quickly to real-world changes and to be fair to drivers (for example, by raising rates when demand is high) and fair to passengers (for example, by offering the option to come back 10 minutes later for a cheaper rate). The streaming platform powers pricing by bringing together the best of two worlds with Apache Beam: ML algorithms in Python and Apache Flink as the streaming engine.
https://sf-2019.flink-forward.org/conference-program#streaming-your-lyft-ride-prices
Python is popular amongst data scientists and engineers for data processing tasks. The big data ecosystem, however, has traditionally been rather JVM-centric, and Java (or Scala) is often the only viable option for implementing data processing pipelines. That can pose an adoption barrier for organizations that have already invested in other language ecosystems. The Apache Beam project provides a unified programming model for data processing, and its ongoing portability effort aims to enable multiple language SDKs (currently Java, Python and Go) on a common set of runners. Python streaming on the Apache Flink runner is one example of this combination. Let's take a look at how the Flink runner translates the Beam model into the native DataStream (or DataSet) API, how the runner is changing to support portable pipelines, how Python user code execution is coordinated with gRPC-based services, and how a sample pipeline runs on Flink.
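As a conceptual sketch of the model the runner translates (plain Python, not the actual Beam SDK), a FlatMap-based word count, the classic Beam example shape, looks like this:

```python
def flat_map(fn, pcollection):
    """Apply fn to each element; fn yields zero or more outputs
    (the essence of Beam's ParDo/FlatMap transform)."""
    return [out for elem in pcollection for out in fn(elem)]

def run_pipeline(lines):
    # tokenize, then count words, mirroring the word-count pipeline shape
    words = flat_map(lambda line: line.split(), lines)
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

counts = run_pipeline(["to be or not", "to be"])
```

In the portable architecture, the runner would execute the user's `fn` in a separate Python worker process coordinated over gRPC rather than calling it in-process as here.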
In the last decade, many distributed stream processing engines (SPEs) were developed to perform continuous queries on massive online data. The central design principle of these engines is to handle queries that potentially run forever on data streams with a query-at-a-time model, i.e., each query is optimized and executed separately. In many real applications, streams are processed not only with long-running queries but also with thousands of short-running ad-hoc queries. To support this efficiently, it is essential to share resources and computation for ad-hoc stream queries in a multi-user environment.
The goal of this talk is to bridge the gap between stream processing and ad-hoc queries in SPEs by sharing computation and resources. We define three main requirements for ad-hoc shared stream processing: (1) Integration: Ad-hoc query processing should be a composable layer which can extend stream operators, such as join, aggregation, and window operators; (2) Consistency: Ad-hoc query creation and deletion must be performed in a consistent manner and ensure exactly-once semantics and correctness; (3) Performance: In contrast to state-of-the-art SPEs, an ad-hoc SPE should maximize not only data throughput but also query throughput via incremental computation and resource sharing. Based on these requirements, we have developed AStream, an ad-hoc, shared computation stream processing framework.
To the best of our knowledge, AStream is the first system that supports distributed ad-hoc stream processing. AStream is built on top of Apache Flink. Our experiments show that AStream achieves results comparable to Flink for single-query deployments and outperforms it by orders of magnitude with multiple queries.
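The shared-computation idea can be sketched in a few lines of illustrative Python (not AStream's implementation; all names are made up): one incrementally maintained aggregate serves any number of concurrently registered ad-hoc queries, rather than one pipeline per query.

```python
class SharedWindowAggregate:
    """One incrementally maintained sum shared by all registered queries."""
    def __init__(self):
        self.total = 0
        self.queries = {}

    def register(self, name, predicate):
        # ad-hoc query creation: attach a function over the shared state
        self.queries[name] = predicate

    def on_event(self, value):
        self.total += value  # incremental computation, done once per event

    def results(self):
        # every query reuses the same aggregate instead of recomputing it
        return {name: q(self.total) for name, q in self.queries.items()}

agg = SharedWindowAggregate()
agg.register("over_100", lambda t: t > 100)
agg.register("total", lambda t: t)
for v in [40, 50, 30]:
    agg.on_event(v)
res = agg.results()
```

The real system must additionally handle consistent query creation/deletion mid-stream with exactly-once semantics, which this sketch omits entirely.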
We present a web service named FLOW to let users do FLink On Web. FLOW aims to minimize the effort of handwriting streaming applications, similar in spirit to Hortonworks Stream Analytics Manager, StreamAnalytix, and Nussknacker, by letting users drag and drop graphical icons representing streaming operators on a GUI.
FLOW builds on the Flink Table API and lets users assemble graphical icons associated not only with basic SQL operations but also with advanced ones like window aggregation, temporal join, and pattern recognition (the MATCH_RECOGNIZE clause). Its data preview function lets users observe on screen how sample data changes before and after each operation is applied. In addition, FLOW shows the sample data as time-series charts and geographical maps by interacting with Elasticsearch and Kibana. Domain experts with a basic knowledge of SQL can therefore design their streaming applications easily on the GUI without understanding the Flink DataStream API or the Flink CEP library.
In this talk, we first present what motivated the development of FLOW, then show how FLOW can be used to work through the "Popular Places" exercise in its own style, and lastly explain how FLOW leverages the Flink Table API.
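As an illustration of what a graphical window-aggregation operator computes, here is a plain-Python sketch of a tumbling window sum (conceptual only; FLOW itself generates Flink Table API programs, and the function name here is made up):

```python
def tumbling_window_sum(events, size):
    """Group (timestamp, value) events into fixed, non-overlapping
    windows of `size` time units and sum the values in each window."""
    windows = {}
    for ts, value in events:
        start = (ts // size) * size  # window the event falls into
        windows[start] = windows.get(start, 0) + value
    return dict(sorted(windows.items()))

sums = tumbling_window_sum([(1, 10), (4, 5), (11, 7), (14, 3)], size=10)
```

A GUI like FLOW lets a user pick the window size and aggregate function visually, then previews exactly this kind of before/after transformation on sample data.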
Future of Apache Flink Deployments: Containers, Kubernetes and More - Flink F... - Till Rohrmann
Container technology is seeing ever-increasing adoption across many industries. Not only does it make applications portable across different machines and operating systems, it also allows them to scale in a matter of seconds. Moreover, it significantly simplifies and speeds up deployments, which decreases development and operations costs. Consequently, more and more Flink deployments run in containerized environments, which poses new challenges for Flink.
In this talk, we will take a look at Flink's current and future container support, which will make it a first-class citizen of the container world. First, we will explain how the new reactive execution mode solves the problem of seamless application scaling and how it blends in with any environment. Complementary to the reactive mode, the active execution mode demonstrates its strengths when it comes to changing workloads such as batch jobs. Last but not least, we will look beyond Flink itself and investigate how Flink can be used together with Kubernetes operators or data Artisans' Application Manager. We will conclude the talk with a short demo of Flink's native Kubernetes support and an outlook on future developments in the container realm.
A tour of scalability improvements between Havana and Juno.
The presentation discusses results from an experimental campaign and the various features that enable the scalability improvements.
Presentation from Aaron Rose and Salvatore Orlando.
Recent Advances in Machine Learning: Bringing a New Level of Intelligence to ... - Brocade
Presentation by Brocade Chief Scientist and Fellow, David Meyer, given at Orange Gardens July 2016. What is Machine Learning and what is all the excitement about?
An associated blog is available here: http://community.brocade.com/t5/CTO-Corner/Networking-Meets-Artificial-Intelligence-A-Glimpse-into-the-Very/ba-p/88196
The document discusses the Open Data Plane (ODP) project, which aims to create an open source framework for data plane applications. ODP provides a standardized API to enable networking applications across different architectures like ARM, Intel and PowerPC. It is based on the Event Machine model of work-driven processing. ODP implementations optimize the API for different hardware platforms while providing application portability. The project aims to support functions like dynamic load balancing, power management, and virtual switch integration.
Toward Hybrid Cloud Serverless Transparency with Lithops Framework - LibbySchulze
The document describes using the Lithops framework to simplify serverless data pre-processing of images by extracting faces and aligning them. Lithops allows processing millions of images located in different storage locations in a serverless manner without having to write boilerplate code to access storage or partition data. It handles parallel execution, data access, and coordination to run a user-defined function that pre-processes each image on remote servers near the data. This avoids having to move large amounts of data and allows leveraging serverless cloud compute resources to speed up processing times significantly compared to running everything locally.
Learn more about the tremendous value Open Data Plane brings to NFV
Bob Monkman, Networking Segment Marketing Manager, ARM
Bill Fischofer, Senior Software Engineer, Linaro Networking Group
Moderator:
Brandon Lewis, OpenSystems Media
This document describes a Graphical Packet Generator (GPG) software application developed with the onePK API to enable network administration from a central point. GPG automatically discovers the network topology using the CDP or LLDP protocols and offers ICMP, UDP, and IP packet generation for troubleshooting networks. The design of GPG includes modules for topology discovery, a packet generator window for the GUI, and a packet generator that establishes a connection between the onePK API and virtual routers. GPG respects SDN principles and provides abstraction at the software level for network administration.
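To give a flavor of what packet generation involves, here is a small, self-contained Python sketch (not GPG's code, which uses the onePK API) that crafts an ICMP echo request with the standard RFC 1071 checksum:

```python
import struct

def internet_checksum(data):
    """RFC 1071 ones'-complement checksum used by IP, ICMP and UDP."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:  # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_icmp_echo(ident, seq, payload=b"ping"):
    """Craft an ICMP echo request, the kind of probe a packet
    generator sends when troubleshooting a network."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # type 8 = echo
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_icmp_echo(0x1234, 1)
# a correctly checksummed message verifies to zero over its full length
assert internet_checksum(pkt) == 0
```

Actually sending such a packet would require a raw socket (and the appropriate privileges); the sketch only covers construction.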
Virtual Flink Forward 2020: Integrate Flink with Kubernetes natively - Yang Wang - Flink Forward
Flink currently supports the resource management systems YARN and Mesos. However, they were not designed for fast-moving cloud-native architectures and do not support mixed workloads (e.g. batch, streaming, deep learning, web services) particularly well. At the same time, Kubernetes is evolving very fast to fill those gaps and has become the de-facto orchestration framework, so running Flink on Kubernetes is a basic requirement for many users. In this talk, we will first quickly go through the Kubernetes architecture and the efforts that have been made to run Flink on Kubernetes. Then we deep-dive into the technical details of making Flink run natively on Kubernetes: native means that Flink's KubernetesResourceManager calls the Kubernetes APIs directly to allocate and release TaskManager pods. Next we will share some practices for application lifecycle management and production optimizations (e.g. high availability, storage, network). Finally, we will conclude with the advantages of Flink on Kubernetes and a simple demo. This talk is aimed at users and companies who are looking to run Flink on a Kubernetes cluster. We assume the listener has some basic knowledge of cluster orchestration and containers.
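The reconcile logic at the heart of such native integration can be sketched in a few lines of illustrative Python (not Flink's actual code; the function and pod names are hypothetical): the resource manager compares the desired number of TaskManager pods against what is running and decides what to create or delete via the Kubernetes API.

```python
def reconcile(desired, running):
    """Decide which pods to create or delete so the running set
    matches the desired number of TaskManagers."""
    to_create = max(0, desired - len(running))
    # surplus pods beyond the desired count are released
    to_delete = running[desired:] if len(running) > desired else []
    return to_create, to_delete

# scale up: one TaskManager running, three needed
create, delete = reconcile(3, ["tm-1"])
assert (create, delete) == (2, [])

# scale down: three running, one needed
create, delete = reconcile(1, ["tm-1", "tm-2", "tm-3"])
```

In the real system the create/delete decisions become pod creation and deletion requests against the Kubernetes API server, plus watch events to track pod status.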
Transformacion e innovacion digital Meetup - Application Modernization and Mi... - José Román Martín Gil
This document discusses application modernization and migration strategies for moving applications to the cloud. It begins by defining application modernization and migration as focusing on business workloads and digital transformation. It then discusses why modernization is important, including reasons like disruption, embracing digital transformation, and gaining business agility. The document outlines various migration strategies like rehosting, replatforming, refactoring, repurchasing, and retiring applications. It also discusses modern application concepts like better software architecture, streamlining the application lifecycle, and continuous innovation. Finally, it covers cloud-readiness and demonstrates migrating an application to OpenShift through repackaging, deploying with containers, and modernizing the architecture.
The document outlines the roadmap for ONOS, an open source SDN controller, in 2015. Key points include:
- A regular 3-month release cadence, with release names like Avocet and Blackbird; the Blackbird release in February 2015 will focus on stability, performance, and high availability.
- Areas of focus for 2015 include building out the distributed core using RAFT, improving the intent framework, adding southbound drivers, and exploring new use cases.
- Planned proof of concepts and deployments include AT&T use cases, NTT/NEC optical networking, an Internet2 deployment, and community labs.
- Goals for 2015 include expanding the developer community, influencing standards,
Flink Forward San Francisco 2019: Towards Flink 2.0: Rethinking the stack and... - Flink Forward
Flink currently features different APIs for bounded/batch (DataSet) and streaming (DataStream) programs. And while the DataStream API can handle batch use cases, it is much less efficient at them than the DataSet API. The Table API was built as a unified API on top of both, covering batch and streaming with the same API and delegating under the hood to either DataSet or DataStream.
In this talk, we present the latest on the Flink community's efforts to rework the APIs and the stack for better unified batch & streaming experience. We will discuss:
- The future roles and interplay of DataSet, DataStream, and Table API
- The new Flink stack and the abstractions on which these APIs will build
- The new unified batch/streaming sources
- How batch and streaming optimizations differ in the runtime, and what the future interplay of batch and streaming execution could look like.
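The unification goal described above, one program over both bounded and unbounded data, can be illustrated with plain Python generators (a conceptual sketch, not the Flink API): the same operator consumes a finite list or an endless stream unchanged.

```python
import itertools

def running_sum(source):
    """One transformation that works on bounded and unbounded input:
    emits the cumulative sum after each element."""
    total = 0
    for value in source:
        total += value
        yield total

# bounded ("batch") input: the pipeline terminates on its own
batch = list(running_sum([1, 2, 3]))

# unbounded ("streaming") input: same code, take the first results
stream = list(itertools.islice(running_sum(itertools.count(1)), 3))
```

Both runs produce the same prefix of results, which is exactly the property a unified batch/streaming stack wants: batch is the special case of a stream that happens to end.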
Flink Forward San Francisco 2018: Dave Torok & Sameer Wadkar - "Embedding Fl..." - Flink Forward
This document discusses using Apache Flink to operationalize a streaming machine learning lifecycle. It describes Comcast's need to improve customer experiences through predictive analytics over streaming data. Flink is used to orchestrate feature engineering, model training/evaluation, and real-time predictions. Key aspects of the solution include a metadata-driven pipeline, automated deployments, consistent feature stores for training and prediction, and monitoring of multiple models. The document outlines the various components of the ML lifecycle and pipeline implemented on Flink and discusses next steps around UI/UX, continuous monitoring, and supporting multiple feature stores.
SDN models can be categorized as canonical/OpenFlow, broker/API-based, proactive/declarative, overlay, and hybrid models. The canonical model uses a logically centralized controller and "dumb" switches. Broker models use an API to interact between applications and the network. Proactive models use a compiler to translate high-level network definitions. Overlay models program edge devices to manage tunnels. Hybrid models combine centralized and distributed control. Future work is needed to maximize the benefits of combining models while limiting complexity.
Open source cloud native security with ThreatMapper - LibbySchulze
ThreatMapper is an open source security observability platform that helps users secure their cloud native applications from development through production. It discovers an application's topology and attack surface, scans for vulnerabilities, and provides indicators of attacks. ThreatMapper is currently focused on mapping the attack surface, but future versions will incorporate additional features like gathering attack intelligence and providing indicators of compromise. It is part of Deepfence's overall strategy to help users "Shift Left" to build security into development and also "Secure Right" once applications are in production.
Flink Forward San Francisco 2018: Andrew Gao & Jeff Sharpe - "Finding Bad Ac..." - Flink Forward
Within fintech, catching fraudsters is one of the primary opportunities for us to use streaming applications that apply ML models in real time. This talk reviews our journey to bring fraud decisioning to our tellers at Capital One using Kafka, Flink and AWS Lambda. We will share our learnings and experiences with common problems such as custom windowing, breaking down a monolithic app into small queryable-state apps, feature engineering with Jython, dealing with back pressure from combining two disparate streams, model/feature validation in a regulatory environment, and running Flink jobs on Kubernetes.
Serving Deep Learning Models At Scale With RedisAI: Luca Antiga - Redis Labs
This document provides an overview and roadmap for RedisAI, which allows serving deep learning models using Redis. Key points:
- RedisAI turns Redis into a full-fledged deep learning runtime by introducing tensors as a new data type and enabling model execution on CPU and GPU.
- Models can be exported from frameworks like TensorFlow and PyTorch and served using the RedisAI API. Scripts can also be used to define computations directly in RedisAI.
- RedisAI aims to keep models hot in memory, run anywhere Redis runs, and optimize resource usage. Future plans include DAG execution, auto-batching, ONNX support, and advanced monitoring.
- A demo of RedisAI will be provided.
Flink Forward Berlin 2018: Thomas Weise & Aljoscha Krettek - "Python Streamin..." - Flink Forward
Python is popular amongst data scientists and engineers for data processing tasks. The big data ecosystem, however, has traditionally been rather JVM-centric, and Java (or Scala) is often the only viable option for implementing data processing pipelines. That can pose an adoption barrier for organizations that have already invested in other language ecosystems. The Apache Beam project provides a unified programming model for data processing, and its ongoing portability effort aims to enable multiple language SDKs (currently Java, Python and Go) on a common set of runners. Python streaming on the Apache Flink runner is one example of this combination. Let's take a look at how the Flink runner translates the Beam model into the native DataStream (or DataSet) API, how the runner is changing to support portable pipelines, how Python user code execution is coordinated with gRPC-based services, and how a sample pipeline runs on Flink.
This document discusses YANG data models and automation. It provides background on the speaker and their focus on manageability. It describes how automation is required for today's large, dynamic networks. YANG is introduced as the standard data modeling language that can be used to define management information for protocols like NETCONF and RESTCONF. Examples of organizations developing YANG models are provided. The document emphasizes that data model-driven APIs are key to enabling automation.
Future of Apache Flink Deployments: Containers, Kubernetes and More - Flink F...Till Rohrmann
Container technology experiences an ever increasing adoption throughout many industries. Not only does this technology make your applications portable across different machines and operating systems, it also allows to scale applications in a matter of seconds. Moreover, it significantly simplifies and speeds up deployments which decreases development and operation costs. Consequently, more and more Flink deployments run in containerized environments which poses new challenges for Flink.
In this talk, we will take a look at Flink's current and future container support which will make it a first class citizen of the container world. First of all, we will explain how the new reactive execution mode will solve the problem of seamless application scaling and how it blends in with any environment. Complementary to the reactive mode, the active execution mode demonstrates its strengths when it comes to changing workloads such as batch jobs. Last but not least, we will take a look beyond Flink's own nose and investigate how Flink can be used together with Kubernetes operators or data Artisans' Application Manager. We will conclude the talk with a short demo of Flink's native Kubernetes support and giving an outlook on future developments in the container realm.
A tour of scalability improvements between Havana and Juno.
The presentation discusses results from an experimental campaign and the various features that enable the scalability improvements
Presentation from Aaron Rose and Salvatore Orlando.
Recent Advances in Machine Learning: Bringing a New Level of Intelligence to ...Brocade
Presentation by Brocade Chief Scientist and Fellow, David Meyer, given at Orange Gardens July 2016. What is Machine Learning and what is all the excitement about?
An associated blog is available here: http://community.brocade.com/t5/CTO-Corner/Networking-Meets-Artificial-Intelligence-A-Glimpse-into-the-Very/ba-p/88196
The document discusses the Open Data Plane (ODP) project, which aims to create an open source framework for data plane applications. ODP provides a standardized API to enable networking applications across different architectures like ARM, Intel and PowerPC. It is based on the Event Machine model of work-driven processing. ODP implementations optimize the API for different hardware platforms while providing application portability. The project aims to support functions like dynamic load balancing, power management, and virtual switch integration.
Toward Hybrid Cloud Serverless Transparency with Lithops FrameworkLibbySchulze
The document describes using the Lithops framework to simplify serverless data pre-processing of images by extracting faces and aligning them. Lithops allows processing millions of images located in different storage locations in a serverless manner without having to write boilerplate code to access storage or partition data. It handles parallel execution, data access, and coordination to run a user-defined function that pre-processes each image on remote servers near the data. This avoids having to move large amounts of data and allows leveraging serverless cloud compute resources to speed up processing times significantly compared to running everything locally.
Learn more about the tremendous value Open Data Plane brings to NFV
Bob Monkman, Networking Segment Marketing Manager, ARM
Bill Fischofer, Senior Software Engineer, Linaro Networking Group
Moderator:
Brandon Lewis, OpenSystems Media
This document describes a Graphical Packet Generator (GPG) software application that was developed using onePK API to enable network administration from a central point. GPG allows for automatic discovery of network topology by using CDP or LLDP protocols. It offers packet generation capabilities for troubleshooting networks as well as ICMP, UDP, and IP packet generation. The design of GPG includes modules for topology discovery, a packet generator window for the GUI, and a packet generator to establish a connection between onePK API and virtual routers. The GPG respects SDN principles and provides abstraction at the software level for network administration.
Virtual Flink Forward 2020: Integrate Flink with Kubernetes natively - Yang WangFlink Forward
Currently Flink supports the resource management system YARN and Mesos. However, they were not designed for fast moving cloud native architectures, and they could not support mixed workloads (e.g. batch, streaming, deep learning, web services, etc.) relatively well. At the same time, Kubernetes is evolving very fast to fill those gaps and become the de-facto orchestration framework. So running Flink on Kubernetes is a very basic requirement for many users. In this talk, firstly we will quickly go through Kubernetes architecture and the efforts we have been made to run Flink on Kubernetes. Then we deep dive into the technical details about how to make Flink natively run on Kubernetes. Native means Flink KubernetesResourceManager calls directly the Kubernetes APIs to allocate and release TaskManager pods. Next we will share some practices of application lifecycle management and production optimizations (e.g. high-availability, storage, network, etc.). Finally, we will conclude the talk with advantages for Flink on Kubernetes and a simple demo. This talk is aimed at users and companies who are looking to run Flink on Kubernetes cluster. We assume that the listener has some basic knowledge of cluster orchestration and containers.
Transformacion e innovacion digital Meetup - Application Modernization and Mi...José Román Martín Gil
This document discusses application modernization and migration strategies for moving applications to the cloud. It begins by defining application modernization and migration as focusing on business workloads and digital transformation. It then discusses why modernization is important, including reasons like disruption, embracing digital transformation, and gaining business agility. The document outlines various migration strategies like rehosting, replatforming, refactoring, repurchasing, and retiring applications. It also discusses modern application concepts like better software architecture, streamlining the application lifecycle, and continuous innovation. Finally, it covers cloud-readiness and demonstrates migrating an application to OpenShift through repackaging, deploying with containers, and modernizing the architecture.
The document outlines the roadmap for ONOS, an open source SDN controller, in 2015. Key points include:
- Regular 3 month release cadence, with names like Avocet and Blackbird. Blackbird release in February 2015 will focus on stability, performance, and high availability.
- Areas of focus for 2015 include building out the distributed core using RAFT, improving the intent framework, adding southbound drivers, and exploring new use cases.
- Planned proof of concepts and deployments include AT&T use cases, NTT/NEC optical networking, an Internet2 deployment, and community labs.
- Goals for 2015 include expanding the developer community and influencing standards.
Flink Forward San Francisco 2019: Towards Flink 2.0: Rethinking the stack and...Flink Forward
Flink currently features different APIs for bounded/batch (DataSet) and streaming (DataStream) programs. And while the DataStream API can handle batch use cases, it is much less efficient at them than the DataSet API. The Table API was built as a unified API on top of both, covering batch and streaming with the same API and delegating under the hood to either DataSet or DataStream.
In this talk, we present the latest on the Flink community's efforts to rework the APIs and the stack for better unified batch & streaming experience. We will discuss:
- The future roles and interplay of DataSet, DataStream, and Table API
- The new Flink stack and the abstractions on which these APIs will build
- The new unified batch/streaming sources
- How batch and streaming optimizations differ in the runtime, and what the future interplay of batch and streaming execution could look like
Flink Forward San Francisco 2018: Dave Torok & Sameer Wadkar - "Embedding Fl...Flink Forward
This document discusses using Apache Flink to operationalize a streaming machine learning lifecycle. It describes Comcast's need to improve customer experiences through predictive analytics over streaming data. Flink is used to orchestrate feature engineering, model training/evaluation, and real-time predictions. Key aspects of the solution include a metadata-driven pipeline, automated deployments, consistent feature stores for training and prediction, and monitoring of multiple models. The document outlines the various components of the ML lifecycle and pipeline implemented on Flink and discusses next steps around UI/UX, continuous monitoring, and supporting multiple feature stores.
SDN models can be categorized as canonical/OpenFlow, broker/API-based, proactive/declarative, overlay, and hybrid models. The canonical model uses a logically centralized controller and "dumb" switches. Broker models use an API to interact between applications and the network. Proactive models use a compiler to translate high-level network definitions. Overlay models program edge devices to manage tunnels. Hybrid models combine centralized and distributed control. Future work is needed to maximize the benefits of combining models while limiting complexity.
Open source cloud native security with threat mapperLibbySchulze
ThreatMapper is an open source security observability platform that helps users secure their cloud native applications from development through production. It discovers an application's topology and attack surface, scans for vulnerabilities, and provides indicators of attacks. ThreatMapper is currently focused on mapping the attack surface, but future versions will incorporate additional features like gathering attack intelligence and providing indicators of compromise. It is part of Deepfence's overall strategy to help users "Shift Left" to build security into development and also "Secure Right" once applications are in production.
Flink Forward San Francisco 2018: Andrew Gao & Jeff Sharpe - "Finding Bad Ac...Flink Forward
Within fintech, catching fraudsters is one of the primary opportunities for us to use streaming applications to apply ML models in real time. This talk is a review of our journey to bring fraud decisioning to our tellers at Capital One using Kafka, Flink and AWS Lambda. We will share our learnings and experiences with common problems such as custom windowing, breaking down a monolith app into small queryable-state apps, feature engineering with Jython, dealing with back pressure from combining two disparate streams, model/feature validation in a regulatory environment, and running Flink jobs on Kubernetes.
Serving Deep Learning Models At Scale With RedisAI: Luca AntigaRedis Labs
This document provides an overview and roadmap for RedisAI, which allows serving deep learning models using Redis. Key points:
- RedisAI turns Redis into a full-fledged deep learning runtime by introducing tensors as a new data type and enabling model execution on CPU and GPU.
- Models can be exported from frameworks like TensorFlow and PyTorch and served using the RedisAI API. Scripts can also be used to define computations directly in RedisAI.
- RedisAI aims to keep models hot in memory, run anywhere Redis runs, and optimize resource usage. Future plans include DAG execution, auto-batching, ONNX support, and advanced monitoring.
- A demo of RedisAI will be provided
Flink Forward Berlin 2018: Thomas Weise & Aljoscha Krettek - "Python Streamin...Flink Forward
Python is popular amongst data scientists and engineers for data processing tasks. The big data ecosystem has traditionally been rather JVM centric. Often Java (or Scala) are the only viable option to implement data processing pipelines. That sometimes poses an adoption barrier for organizations that have already invested in other language ecosystems. The Apache Beam project provides a unified programming model for data processing and its ongoing portability effort aims to enable multiple language SDKs (currently Java, Python and Go) on a common set of runners. The combination of Python streaming on the Apache Flink runner is one example. Let’s take a look how the Flink runner translates the Beam model into the native DataStream (or DataSet) API, how the runner is changing to support portable pipelines, how Python user code execution is coordinated with gRPC based services and how a sample pipeline runs on Flink.
This document discusses YANG data models and automation. It provides background on the speaker and their focus on manageability. It describes how automation is required for today's large, dynamic networks. YANG is introduced as the standard data modeling language that can be used to define management information for protocols like NETCONF and RESTCONF. Examples of organizations developing YANG models are provided. The document emphasizes that data model-driven APIs are key to enabling automation.
The document discusses the requirements and architecture of an SDN controller. It states that an SDN controller should be a flexible platform that can accommodate diverse applications through common APIs and extensibility. It should also scale to support independent development and integration of applications. The OpenDaylight controller satisfies these requirements through its use of YANG modeling and the Model-Driven Service Abstraction Layer (MD-SAL). MD-SAL generates Java classes from YANG models and provides messaging between controller components.
This document provides a summary of a YANG tutorial presentation on advanced YANG statements, including must statements, augment statements, when statements, choice statements, identity statements, feature statements, deviations, and YANG modeling strategies. It discusses topics like restricting valid values with XPath expressions in must statements, adding new data with augment statements, making data conditional with when statements, modeling related enumerations with identity statements, and marking data as optional with feature statements. The presentation aims to help people understand and properly apply these important YANG modeling constructs.
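As an illustration of the constructs this tutorial covers, here is a small, hypothetical YANG fragment (the module name "example-interfaces" and its nodes are invented for the example) combining augment, when, and must:

```yang
// Hypothetical extension module; "example-interfaces" and its nodes are invented.
module example-interface-ext {
  namespace "urn:example:interface-ext";
  prefix ext;

  import example-interfaces { prefix if; }

  // augment: add new data to an existing model
  augment "/if:interfaces/if:interface" {
    // when: the new leaf only exists for ethernet interfaces
    when "if:type = 'ethernet'";
    leaf mtu {
      type uint16;
      // must: restrict valid values with an XPath expression
      must ". >= 68 and . <= 9216" {
        error-message "MTU must be between 68 and 9216";
      }
    }
  }
}
```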
Synopsis: A discussion of the requirements for next generation network management identified in RFC 3535 which lead to the development of NETCONF and YANG.
Slides from Talk by Jan Medved on Yang modeling and its support in OpenDaylight meetup
http://www.meetup.com/OpenDaylight-Silicon-Valley/events/212834752
YANG is a data modeling language that is rapidly being adopted to model data for NETCONF, an IETF-standardized network management protocol, as well as other data interfaces in OpenDaylight. Join us for the talk by expert Jan Medved to learn about YANG and its usage within OpenDaylight.
Synopsis: A high-level technical introduction to ConfD. Introduction to ConfD architecture, data model driven paradigm, core engine features and northbound interfaces.
Synopsis: Part 1 of a tutorial on the YANG data modeling language. The basics of YANG are taught in this module. More advanced YANG statements are taught in Part 2.
Synopsis: A tutorial on the NETCONF protocol. The operations of the core NETCONF protocol are taught. This is followed by examination of traces of NETCONF sessions.
DEVNET-1152 OpenDaylight YANG Model Overview and ToolsCisco DevNet
This document provides an overview of a workshop held by the IAB on network management. The goal of the workshop was to continue discussions between network operators and protocol developers and guide the IETF's focus on future work regarding network management.
Introduction to YANG data models and their use in OpenDaylight: an overviewCisco DevNet
A session in the DevNet Zone at Cisco Live, Berlin. YANG is a data modeling language for defining device and service configuration and operations. This session describes what YANG is (with examples), its relationship to OpenDaylight, and how it is used there. Several tools that make it easier for application developers to work with YANG will be discussed. It concludes with a demonstration of YANGUI and YANG Visualizer, two new OpenDaylight applications that auto-generate a user interface and a directed graph respectively, both based on selected YANG models.
This document provides a tutorial on NETCONF and YANG, which are standards for network configuration and management. NETCONF was designed to address operators' requirements for easier network-wide configuration, validation of changes, and transactional management across multiple devices. It uses SSH for secure transport and XML encoding, while YANG provides the data models that define configuration and state data. The tutorial covers the background and motivation for these standards, an overview of NETCONF operations with examples, and a demonstration of YANG data modeling. It explains how NETCONF enables network-wide atomic transactions, fulfilling a key operator need and reducing the cost and complexity of network management.
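Since the tutorial walks through NETCONF operations, here is a minimal sketch of what assembling an <edit-config> request might look like, using only Python's standard library (the interface subtree is a stand-in for a real YANG-defined model, not any actual device schema):

```python
# Sketch: building a NETCONF <edit-config> payload with the standard library.
# The interface name/description subtree is hypothetical; real payloads are
# defined by the device's YANG modules.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(interface: str, description: str) -> str:
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}candidate")  # edit the candidate datastore
    config = ET.SubElement(edit, f"{{{NC}}}config")
    # Hypothetical interface config subtree (stands in for a YANG-defined model)
    ifs = ET.SubElement(config, "interfaces")
    intf = ET.SubElement(ifs, "interface")
    ET.SubElement(intf, "name").text = interface
    ET.SubElement(intf, "description").text = description
    return ET.tostring(rpc, encoding="unicode")

payload = build_edit_config("GigabitEthernet0/1", "uplink to core")
print(payload)
```

In a real session a client library (e.g. ncclient) would frame and send this RPC over SSH; the sketch only shows the XML structure the tutorial discusses.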
A 30-minute Introduction to NETCONF and YANGTail-f Systems
This is a live document that I use to present the state of NETCONF and YANG in various contexts. I use it to inform and get conversation going, not to provide complete and final documentation of NETCONF and YANG. I update this document almost monthly, mostly with regards to industry support and working group timelines, check back!
Building and deploying LLM applications with Apache AirflowKaxil Naik
Behind the growing interest in Generative AI and LLM-based enterprise applications lies an expanded set of requirements for data integrations and ML orchestration. Enterprises want to use proprietary data to power LLM-based applications that create new business value, but they face challenges in moving beyond experimentation. The pipelines that power these models need to run reliably at scale, bringing together data from many sources and reacting continuously to changing conditions.
This talk focuses on the design patterns for using Apache Airflow to support LLM applications created using private enterprise data. We’ll go through a real-world example of what this looks like, as well as a proposal to improve Airflow and to add additional Airflow Providers to make it easier to interact with LLMs such as the ones from OpenAI (such as GPT4) and the ones on HuggingFace, while working with both structured and unstructured data.
In short, this shows how these Airflow patterns enable reliable, traceable, and scalable LLM applications within the enterprise.
https://airflowsummit.org/sessions/2023/keynote-llm/
This document discusses Relay, a new project that aims to provide a complete solution for continuously deploying applications and infrastructure. Relay will orchestrate actions across existing tools and services by listening to cloud events and triggers from services. It will use a taxonomy of reusable, modular workflow steps that can be combined to build workflows as code in YAML. The document provides examples of step types like trigger steps, action steps, and query steps. It also outlines Relay's integration ecosystem and upcoming release timeline.
This document discusses web application architecture and frameworks. It argues that frameworks should not dictate project structure, and that the code should separate domain logic from infrastructure logic. This allows focusing on the core problem domain without concerning itself with technical details like databases or web requests. It also advocates splitting code into ports that define intentions like persistence, and adapters that provide framework-specific implementations, allowing for independence of the domain logic from any particular framework or technology. This architecture, known as hexagonal or ports and adapters, facilitates testing, replacement of parts, and future-proofing of the application.
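A minimal sketch of the ports-and-adapters split described above, in Python (all names are illustrative):

```python
# Sketch of ports and adapters (hexagonal architecture); names are illustrative.
from abc import ABC, abstractmethod

class UserRepository(ABC):            # port: an intention of the domain
    @abstractmethod
    def save(self, name: str) -> None: ...
    @abstractmethod
    def all(self) -> list[str]: ...

class InMemoryUserRepository(UserRepository):  # adapter: one implementation
    def __init__(self) -> None:
        self._users: list[str] = []
    def save(self, name: str) -> None:
        self._users.append(name)
    def all(self) -> list[str]:
        return list(self._users)

def register_user(repo: UserRepository, name: str) -> None:
    # Domain logic depends only on the port, never on a framework or database.
    if not name:
        raise ValueError("name required")
    repo.save(name)

repo = InMemoryUserRepository()
register_user(repo, "ada")
print(repo.all())
```

A database- or web-framework-backed adapter could replace the in-memory one without touching `register_user`, which is exactly the independence and testability the document argues for.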
This document provides an overview and summary of OpenShift v3 and containers. It discusses how OpenShift v3 uses Docker containers and Kubernetes for orchestration instead of the previous "Gears" system. It also summarizes the key architectural changes in OpenShift v3, including using immutable Docker images, separating development and operations, and abstracting operational complexity.
2016 - Open Mic - IGNITE - Open Infrastructure = ANY Infrastructuredevopsdaysaustin
The document discusses the need for hybrid infrastructure and hybrid DevOps to manage different cloud platforms and physical infrastructure in a consistent way. It notes that while no single API or platform can meet all needs, AWS dominance means its operational patterns have become the benchmark. The key is developing composable infrastructure modules that can be orchestrated together to provide portability across environments using a common operational process.
OpenStack Preso: DevOps on Hybrid Infrastructurerhirschfeld
Discusses the approach for making hybrid DevOps workable including what obstacles must be overcome. Includes demo of multiple OpenStack clouds & Kubernetes deploy on AWS, Google and OpenStack
Triangle Devops Meetup covering Netflix open source, cloud architecture, and what Andrew did in his first year working as a senior software engineer in the cloud platform group.
This document provides a summary of Christian Esteve Rothenberg, a professor researching network functions virtualization and software defined infrastructures. It outlines his professional experience which includes positions at University of Campinas and CPqD R&D Center in Telecommunication. It also lists his research interests such as SDN, NFV, ICN and various open source projects he has led like Mininet-WiFi and libfluid. The document discusses some of his research questions around NFV/SDN including VNF benchmarking and multi-domain orchestration.
Data Parallel and Object Oriented ModelNikhil Sharma
All the content is taken from the Advanced Computer Architecture book (sections 10.1.3 and 10.1.4).
This PPT covers the basics of the Data-Parallel Model and the Object-Oriented Model.
SpringBoot and Spring Cloud Service for MSAOracle Korea
This session covers how to develop applications for MSA in a cloud environment using Service Discovery, Circuit Breakers, and related patterns with Spring Boot and Spring Cloud Services, and examines how the container ecosystem in the cloud, led by Kubernetes, is influencing MSA.
This document discusses microservices architecture using Spring Cloud and related technologies. It provides an overview of microservices and cloud native applications. It then covers Spring Boot, Spring Cloud, and Netflix OSS projects that can be used to build microservices. Specific Spring Cloud features like service registration, circuit breakers, and API gateways are demonstrated. The role of Pivotal in contributing to open source projects and providing Spring Cloud services is also mentioned.
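To make the circuit-breaker pattern mentioned above concrete, here is a minimal hand-rolled sketch in Python (in a real Spring Cloud application this would come from a library such as Resilience4j or the former Netflix Hystrix, not hand-written code):

```python
# Minimal circuit-breaker sketch (illustrative only).
class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.state = "CLOSED"

    def call(self, fn, *args):
        if self.state == "OPEN":
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state = "OPEN"   # stop hammering the failing service
            raise
        self.failures = 0             # a success resets the failure count
        return result

breaker = CircuitBreaker(failure_threshold=2)

def flaky():
    raise IOError("downstream service unavailable")

for _ in range(2):
    try:
        breaker.call(flaky)
    except IOError:
        pass

print(breaker.state)  # the breaker has opened after repeated failures
```

Once open, further calls fail fast instead of waiting on a dead downstream service; production breakers add half-open probing and timeouts on top of this idea.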
KCD Italy 2022 - Application driven infrastructure with Crossplanesparkfabrik
Crossplane allows users to extend their Kubernetes clusters using CRDs. The CRDs map to any infrastructure or managed service, making the creation process for users as simple as creating ordinary Kubernetes resources. Using a collection of YAML manifests, development teams can assemble the cloud services their applications need, removing this duty from operations teams: this is "shift left" at its best. All this power comes at a cost in terms of security, governance, cognitive load, and maintenance. In this talk we'll discuss strategies and techniques to better map the complexity of this infrastructure.
A presentation on the Netflix Cloud Architecture and NetflixOSS open source. For the All Things Open 2015 conference in Raleigh 2015/10/19. #ATO2015 #NetflixOSS
This document summarizes an SDN and cloud computing presentation given by Affan Basalamah and Dr.-Ing. Eueung Mulyana from Institut Teknologi Bandung. It discusses SDN and cloud computing research activities at ITB, including implementing OpenFlow networks, developing SDN courses, and student projects involving OpenFlow, OpenStack, and IPsec VPNs. It also describes forming an SDN research group at ITB to facilitate collaboration between academia, network operators, and vendors on SDN topics.
This document provides a summary of Netflix's architecture and use of open source software. It discusses:
- Why Netflix open sources software, including gathering feedback, collaboration, and improving retention and recruiting
- Popular Netflix open source projects like Eureka, Ribbon, and Hystrix that are widely used in cloud architectures
- Netflix's microservices architecture and emphasis on automation, high availability, and continuous delivery
- How Netflix ensures operational visibility and security at scale through open source tools like Turbine, Atlas, and Security Monkey
- Getting started resources for understanding and running Netflix's technologies like ZeroToCloud and ZeroToDocker workshops
Concurrency Programming in Java - 01 - Introduction to Concurrency ProgrammingSachintha Gunasena
This session gives a basic, high-level introduction to concurrency programming with Java, covering:
programming basics, OOP concepts, concurrency, concurrent programming, parallel computing, concurrent vs. parallel, why concurrency, a real-world example, terminology, Moore's Law, Amdahl's Law, types of parallel computation, MIMD variants, the shared memory model, the distributed memory model, the client-server model, the SCOOP mechanism, a SCOOP preview (a sequential program, then the same program in a concurrent setting using SCOOP), and programming then & now: sequential versus concurrent programming.
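Amdahl's Law, one of the terms listed above, is easy to state concretely: if a fraction p of a program can be parallelised over n processors, the overall speedup is 1 / ((1 - p) + p/n). A small Python sketch:

```python
# Amdahl's Law: the speedup from parallelising a fraction p of a program
# across n processors.
def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup when fraction p of the work runs n-way parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallelisable, even unlimited cores cap out near 20x.
print(round(amdahl_speedup(0.95, 8), 2))      # speedup on 8 cores
print(round(amdahl_speedup(0.95, 10**9), 2))  # asymptotic limit ≈ 1/(1-p) = 20
```

The serial fraction, however small, bounds the achievable speedup, which is why the session's distinction between concurrent and parallel design matters in practice.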
On SDN Research Topics - Christian Esteve RothenbergCPqD
This document summarizes Christian Esteve Rothenberg's research interests in software-defined networking topics. It outlines his background and experience in SDN and lists several areas of focus, including SDN in the WAN with a focus on software-defined IP routing. It also discusses high performance SDN stacks, building high availability into SDNs, and exploring the integration of optics and electronics with SDN programmable abstractions and datapaths. Rothenberg's research aims to advance these topics through ongoing work and collaborations.
Current & Future Use-Cases of OpenDaylightabhijit2511
OpenDaylight Overview and Architecture
• OpenDaylight Use Cases (Partial List)
I. Network Abstraction
II. ONAP
III. Network Virtualization
IV. AI/ML with OpenDaylight
V. ODL in OSS
• OpenDaylight: Getting Involved
EU-Taiwan Workshop on 5G Research, PRISTINE introductionICT PRISTINE
The PRISTINE project aims to explore programmability in RINA (the Recursive InterNetwork Architecture) by developing a RINA software development kit. It will demonstrate RINA's applicability and benefits in three use cases: datacenters, distributed clouds, and carrier networks. The project is building a RINA simulator and working towards commoditizing networking equipment through standardized programmability APIs, with the goals of increasing flexibility, automation, and innovation while reducing costs.
Presentation of the status of my PhD in 2012, given to the ABLE group at Carnegie Mellon. Years later, this work led to
https://github.com/iTransformers/netTransformer
The document discusses the benefits of model-driven automation over traditional imperative and procedural approaches for networking. It argues that a model-driven approach is more suitable for large-scale parallel distributed systems like networks that have high uncertainty. The key benefits include being more declarative by describing "what to be" through models rather than prescribing "how" through scripts or commands. This makes model-driven systems more robust, reusable, maintainable and scalable. However, some complaints about modeling are that it takes more time and effort and is not as human-friendly as sequential approaches. The document counters that the modeling process has benefits and standardization can help address issues over time.
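The "what to be" versus "how" distinction can be sketched as a toy reconciler: desired state is plain data, and a generic engine derives the imperative steps, which also makes repeated runs idempotent (all names here are illustrative, not any real tool's API):

```python
# Toy illustration of "describe what to be, not how": desired state is plain
# data, and a generic reconciler derives the imperative actions. Re-running it
# on an already-converged device yields no actions (idempotence).
def reconcile(desired: dict, actual: dict) -> list[str]:
    actions = []
    for vlan, name in desired.items():
        if actual.get(vlan) != name:
            actions.append(f"configure vlan {vlan} name {name}")
    for vlan in actual:
        if vlan not in desired:
            actions.append(f"remove vlan {vlan}")
    return actions

desired = {10: "users", 20: "voice"}
actual = {10: "users", 30: "legacy"}

steps = reconcile(desired, actual)
print(steps)                        # only the delta is acted on
print(reconcile(desired, desired))  # [] once converged: nothing to do
```

The model ("desired") stays declarative and reusable, while the script-like steps are derived mechanically, which is the robustness and maintainability argument the document makes.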
The document summarizes a panel discussion on service provider architecture and NFV held by Cisco Systems in April 2015. It includes presentations from NTT Communications, KDDI, and SoftBank Mobile on their NFV strategies and experiences. Some key points discussed are:
- NFV promises benefits like reduced CAPEX/OPEX but challenges remain around performance, maintenance costs, and immature standards.
- Telecom operators are working to automate testing and operations through techniques like DevOps, model-driven management, and abstracting existing networks.
- While NFV offers opportunities, realities of the technology include issues meeting throughput demands on commercial off-the-shelf hardware, increased maintenance complexity, and a lack of interoperability.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
The International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, AI, and machine learning. The conference seeks substantial contributions across all key domains of these fields, aiming to foster both theoretical advancements and real-world implementations, and serves as a nexus for collaboration between researchers and practitioners from academia and industry.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research, focused on analytical tools, at United Technical College, supported by the University Grants Commission, Nepal. 24-26 May 2024
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon reserves and the ancient Silk Road trade route, along with China's diplomatic endeavours in the area, has been referred to as the "New Great Game." This research centres on that power struggle, considering geopolitical, geostrategic, and geoeconomic variables, and explores topics including trade, political hegemony, oil politics, and traditional and non-traditional security. Using Mackinder's Heartland, Spykman's Rimland, and hegemonic stability theories, it examines China's role in Central Asia. The study adheres to an empirical epistemological method, takes care to remain objective, and critically analyzes primary and secondary documents to elaborate the role of China's geo-economic outreach in Central Asian countries and its future prospects. It finds that China is seeing significant success in trade, pipeline politics, and gaining influence over other governments, thanks to the effective use of key instruments such as the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA in new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life-cycle costs. Compared to natural aggregate (NA), however, RCA pavement has been the subject of fewer comprehensive studies and sustainability assessments.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimizing water usage in agriculture and landscaping. By integrating sensors, actuators, and data analysis, it monitors and controls irrigation precisely using real-time environmental conditions, with the goals of maximizing water efficiency, minimizing expenses, and fostering sustainable water management. This paper conducts a systematic risk assessment of the smart irrigation system by exploring its key components/assets and their functionalities. Sensors gather data on soil moisture, weather patterns, and plant well-being, enabling intelligent irrigation scheduling and water distribution; actuators automate the control of irrigation devices, ensuring precise, targeted water delivery to plants. The paper then addresses the potential threats and vulnerabilities of smart irrigation systems, discusses limitations such as power and computational constraints, calculates the potential security risks, and suggests risk treatment methods for secure system operation. It concludes by emphasizing the benefits of smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact, and, based on the security analysis conducted, recommends countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system.
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Embedded machine learning-based road conditions and driving behavior monitoring
Mk network programmability-03_en
1. 19 Feb. 2015
Miya Kohno, miya.kohno@gmail.com
“Declarative Programming” and a form of SDN
Network Programmability Study Group Workshop #3
http://network-programmability.connpass.com/
2. About me
• Miya Kohno, Principal Engineer, Cisco Systems
• Used to be a software engineer
- Love of programming style discussion
• After that, I have been a network engineer
- Protocol
- Network Architecture
• Official Blog - http://gblogs.cisco.com/jp/author/miyakohno/
• Twitter @mkohno
3. Agenda
• Programming Paradigm Discussion in the Networking Discipline
• A Form of SDN: OpenDaylight -- BGP-LS/PCEP and MD-SAL
4. What is Network Programmability ?!
• Neutron
• IETF: NETCONF/YANG, I2RS, FORCES
• + any network protocols!
To be programmed / orchestrated by network engineers
To program network devices (virtual, physical)
5. Programming Paradigm Trend in the Networking Discipline (hypothesis)
• Not Imperative but Declarative
• Not Procedural but Model-driven
• Not Waterfall but Agile
6. What’s Declarative Programming ?
• A program that describes what computation should be performed, and not how to compute it
• Any programming language that lacks side effects (or more specifically, is referentially transparent)
• A language with a clear correspondence to mathematical logic
• Any style of programming that is not imperative
http://en.wikipedia.org/wiki/Declarative_programming
7. What’s Declarative Programming ?
“Add all the integers from 1 to 10”

Imperative code (JavaScript):
var s = 0;
for (var n = 1; n <= 10; n++) {
  s = s + n;
}
console.log(s);
// 55

Declarative code (Clojure):
(->> (range 1 11)
     (reduce +)
     (println))
;; 55

Imperative thinking is a flowchart: add, increment n, test “n = 10?”.
Declarative thinking is a model: the sum over the set of integers in the range 1…10.
http://karari.tumblr.com/post/61067682037/clojure
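The same contrast can be sketched in Python (an illustration of ours, not from the original slides): an explicit loop that says how, versus a single expression that says what.

```python
# Imperative: spell out how to compute it, step by step.
s = 0
for n in range(1, 11):
    s = s + n
print(s)  # 55

# Declarative: state what is wanted, the sum of the integers 1..10.
total = sum(range(1, 11))
print(total)  # 55
```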
8. What’s Declarative Programming ?
Lack of side effects: referentially transparent, idempotent

Referential Transparency
A property whereby an expression can be replaced by its value without affecting the program.
e.g. using global variables makes code referentially opaque

Idempotence
A quality of an action such that repetitions of the action have no further effect on the outcome.
e.g. n++; (incrementing) is not idempotent

→ These concepts are important for networking / distributed parallel computing, where the environment is uncertain and things like retries or duplicates are more likely to happen.
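Both properties can be shown in a few lines of Python (an illustrative sketch of ours, not from the slides): a pure function is referentially transparent, while a function that mutates a global is not, so repeated calls to it keep changing the outcome.

```python
# Referentially transparent: add(2, 3) can always be replaced by its value, 5.
def add(a, b):
    return a + b

# Referentially opaque: the result depends on hidden mutable state,
# so the same call gives different answers over time.
counter = 0

def add_and_count(a, b):
    global counter
    counter += 1        # side effect: mutates global state
    return a + b + counter

print(add(2, 3) == add(2, 3))                      # True: always the same
print(add_and_count(2, 3) == add_and_count(2, 3))  # False: results drift
```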
9. Idempotence
An example from Puppet:

group { 'sysadmin':
  ensure => present
}

# First Puppet Run
notice: /Group[sysadmin]/ensure: created
notice: Finished catalog run in 0.08 seconds

# Second Puppet Run
notice: Finished catalog run in 0.03 seconds

We state the desired status: “present”. The second run does nothing, because the group is already “present”.
We could do this with a shell script (imperative), but only with conditional branches:

if [ "`getent group sysadmin | awk -F: '{print $1}'`" == "" ]
then
  groupadd sysadmin
fi
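The same convergence can be sketched in Python (our illustration; it manipulates an in-memory set rather than the real system group database): the function declares the desired state, so repeating it has no further effect.

```python
# Hypothetical stand-in for the system group database.
groups = set()

def ensure_present(name, state):
    """Idempotent: converge to the desired state 'name is present'."""
    if name not in state:
        state.add(name)
        return "created"
    return "already present"

print(ensure_present("sysadmin", groups))  # created
print(ensure_present("sysadmin", groups))  # already present
```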
10. What’s Declarative Programming ?
[Pros]
• Robustness and scalability
- In uncertain and complex environments
- In distributed parallel systems
• Reusability, maintainability
[Cons]
• Tends to be Turing incomplete
• Better to restrict the domain/scope
• Not good at controlling details
Key ideas: agree on “what” (the Model); referential transparency and idempotence
11. Turing completeness?
• Definition of Turing completeness
- A computational system that can compute every Turing-computable function is called Turing complete (or Turing powerful). Alternatively, such a system is one that can simulate a universal Turing machine. http://en.wikipedia.org/wiki/Turing_completeness
- Imperative languages are all Turing complete. (e.g. C, Java, Perl, PHP, Python..)
• Declarative languages tend to be Turing incomplete
- This does not mean declarative languages cannot be Turing complete.
- It’s better not to be universally powerful. Instead, by limiting the scope or context, the power of declarativeness is optimized. (e.g. SQL, HTML, JSON, YANG..)
12. What’s Declarative Programming ?
• Programming Language – Imperative: Procedural Programming. Declarative: Functional Programming, Domain Specific Language
• Network Control – Imperative: OpenFlow, OVS DB. Declarative: NETCONF/RESTCONF, Control Plane Protocols, OVSDB
• Orchestration/Automation – Imperative: Workflow, Script. Declarative: Model-driven Configuration Management, Puppet, CFEngine
13. Hierarchy in Network Programmability
Various forms of programmability appear at each layer (diagram):
• Service, Application
• Orchestration: Service Orchestration, Domain Orchestration
• Assurance
• Control: (Centralized) Control Plane, (Distributed) Control Plane
• Transport: Forwarding Plane
• Infrastructure: Physical, Virtual
14. E.g. OpenDaylight Controller Architecture
• Addition of Model-Driven SAL (Service Adaptation Layer)
• Various southbound protocols (BGP-LS, PCEP..)
• Supports both physical and virtual devices
(Diagram labels: Declarative / Imperative)
http://www.opendaylight.org/
15. E.g. ETSI NFV Orchestration Architecture
• NFVO (NFV Service Orchestrator)
• VNFM (VNF Manager)
• VIM (Virtual Infrastructure Manager) – OpenStack, etc.
(Diagram: OSS/BSS (SID, Workflow, Script); EMS elements over VNF1/VNF2/VNF3; Virtualization Layer on NFVI (virtual computing, storage, and network over computing, storage, and network hardware); NFV Management and Orchestration (MANO): NFVO, VNFM (e.g. Tail-f NCS), VIM)
Imperative: Workflow, Script. Declarative: YANG models for VNF/VNFM interface definitions and service definitions.
16. Imperative vs Declarative – which fits where
• For a deterministic environment → Imperative
• For an uncertain(*) environment → Declarative
(*) What causes uncertainty:
• Logical and physical distance
• Scale-up, growth
• Various kinds of components
• Distributed parallel systems
• Multi-agent systems
17. (Appendix) Programming Paradigm discussion in the Computing discipline
Imperative: Procedural, Object Oriented. Declarative: Functional.
Conflict ?!
• Object Oriented and Functional Programming are conflicting.
• Due to the difference in their principles?
18. (Appendix) Imperative vs Declarative discussion in the Cloud Management area
http://docs.oasis-open.org/tosca/TOSCA/v1.0/cs01/TOSCA-v1.0-cs01.pdf
Proceedings of the IEEE International Conference on Cloud Engineering (IEEE IC2E 2014), March 2014, pp. 87-96, DOI 10.1109/IC2E.2014.56
19. (Appendix – yet another icing on the cake) Human and Machine
Imperative Paradigm
• The human who writes the program knows everything
Declarative Paradigm
• The human may NOT know everything
- Machine learning / Deep learning
- Agent-based systems
• Network-centric programming
- A module programs another module via the network
20. Agenda
• Programming Paradigm Discussion in the Networking Discipline
• A Form of SDN: OpenDaylight -- BGP-LS/PCEP and MD-SAL
21. “Network” from the viewpoint of Network Engineers ?!
Server engineers’ view:
• Only if we announce the endpoint information and requirements, then it will be connected!
• GW, IP addr/subnet, VLAN, port; External Network, Internal Network, Security
Network engineers’ view:
• Network consists of nodes and links.
• Topology matters, bandwidth matters..
• Cost, delay, jitter trade-offs..
Image source: http://www.dreamstime.com/royalty-free-stock-images-3d-white-people-system-administrator-image28585969, http://www.sudarshansoftech.com/chnt3.htm
22. BGP-LS and PCEP – SDN for Network Engineering
(Diagram: SDN controller with an NB interface over routers R1–R7; Collection via BGP-LS etc.; Programming via PCEP)
• Collect information: topology, bandwidth, usage.. (Congestion!)
• TE path calculation and setup
• Paths that satisfy the SLA
• Disjoint paths based on QoS requirements
23. Implementation of BGP-LS, PCEP in OpenDaylight
• TCP MD5 Signature Option (RFC 2385) has been separated from the BGPCEP project
• SDNi (SDN interface) depends on the BGP implementation
http://www.opendaylight.org/
24. Topology Learning by BGP-LS
https://wiki.opendaylight.org/images/e/e3/Os2014-md-sal-tutorial.pdf
25. Path (Tunnel) setup by PCEP
https://wiki.opendaylight.org/view/BGP_LS_PCEP:Programmer_Guide
(Diagram: SDN controller with an NB interface over routers R1–R7; Collection via BGP-LS etc.; Programming via PCEP)
• draft-ietf-pce-stateful-pce-02 and draft-crabbe-initiated-00
• draft-ietf-pce-stateful-pce-07, draft-ietf-pce-pce-initiated-lsp-00
• draft-sivabalan-pce-segment-routing-02
Operations:
• Create: node, name, arguments, endpoints-obj, ero, lsp
• Update: node, name, arguments, operational, ero, lsp
• Remove: node, name
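To make the Create operation's parameter list concrete, here is a rough Python sketch (ours; the field names come from the slide, the node id and addresses are invented placeholders, and the authoritative RESTCONF payload layout is the Programmer Guide linked above):

```python
import json

# Assemble the parameters the slide lists for the PCEP "Create" operation.
# Values below (node id, addresses, tunnel name) are hypothetical placeholders.
create_tunnel = {
    "node": "pcc://192.0.2.1",
    "name": "tunnel-r1-r7",
    "arguments": {
        "endpoints-obj": {                # source/destination endpoints
            "source-ipv4-address": "192.0.2.1",
            "destination-ipv4-address": "192.0.2.7",
        },
        "ero": {"subobject": []},         # explicit route object: path hops
        "lsp": {"delegate": True, "administrative": True},
    },
}

print(json.dumps(create_tunnel, indent=2))
```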
26. (Appendix: Segment Routing)
(Diagram: cross-domain orchestration controller, one collector, and APIs over DC / IPv4/IPv6 MPLS network / DC)
Control Plane – MPLS: LDP and RSVP for label distribution. Segment Routing: IGP extension to distribute Segment IDs.
Traffic Engineering – MPLS: RSVP-TE signaling. Segment Routing: explicit path is expressed by the header stack.
Protection – MPLS: RSVP-TE FRR (IP FRR/LFA has a topology restriction). Segment Routing: topology-independent FRR.
Segment Routing:
• Simple
• No extra control plane (RSVP, LDP)
• No RSVP state in the network
• Application centric
27. Model Driven SAL
AD-SAL → MD-SAL
• The model-driven approach to service abstraction presents an opportunity to unify both northbound and southbound APIs and the data structures used in various services and components of an SDN Controller.
http://www.opendaylight.org/
28. Model-Driven SAL
topology-tunnel-pcep-programming.yang → YANG Tools plugin → Model, APIs

module topology-tunnel-pcep-programming {
    yang-version 1;
    namespace "urn:opendaylight:params:xml:ns:yang:topology:tunnel:pcep:programming";
    prefix ttpp;

    import pcep-types { prefix pcep; revision-date 2013-10-05; }
    import topology-tunnel-programming { prefix ttp; revision-date 2013-09-30; }
    import topology-tunnel-p2p { prefix p2p; revision-date 2013-08-19; }
    import topology-tunnel-pcep { prefix ptp; revision-date 2013-08-20; }

    organization "Cisco Systems, Inc.";
    contact "Robert Varga <rovarga@cisco.com>";
    description
        "This module contains the programming extensions for tunnel topologies.

         Copyright (c) 2013 Cisco Systems, Inc. All rights reserved.
         This program and the accompanying materials are made available under the
         terms of the Eclipse Public License v1.0 which accompanies this
         distribution, and is available at http://www.eclipse.org/legal/epl-v10.html";

    rpc pcep-create-p2p-tunnel {
        input {
            uses ttp:create-p2p-tunnel-input;
            uses p2p:tunnel-p2p-path-cfg-attributes;
            uses ptp:tunnel-pcep-link-cfg-attributes;
        }
        output {
            uses ttp:create-p2p-tunnel-output;
        }
    }

    rpc pcep-destroy-tunnel {
        input {
            uses ttp:destroy-tunnel-input;
        }
        output {
            uses ttp:destroy-tunnel-output;
        }
    }

    rpc pcep-update-tunnel {
        input {
            uses ttp:base-tunnel-input;
            uses p2p:tunnel-p2p-path-cfg-attributes;
            uses ptp:tunnel-pcep-link-cfg-attributes;
        }
        output {
            uses ttp:base-tunnel-output;
        }
    }
}
29. Model-Driven SAL
• Controller SAL is used to communicate with other controller components, applications, and plugins.
30. Why Model?
• A model is a representation of a part of the function, structure and/or behavior of a system (*)
(*) Architectural Board ORMSC, “Model Driven Architecture”, July 2001
• Advantages of a model:
- Declarative: agree on “what”, not “how”
- Commonality: abstract away diversity
- Reusability, maintainability, portability: conversion from model to model
- Robustness in an uncertain environment
31. Agenda
• Programming Paradigm Discussion in the Networking Discipline
• A Form of SDN: OpenDaylight -- BGP-LS/PCEP and MD-SAL
→ Declarative programming and model-drivenness have an advantage in networking computing, where the environment is more uncertain.