The document discusses high performance networking and summarizes a presentation about improving network performance. It describes drawbacks of the current Linux network stack, including kernel overhead and data copying. It then discusses approaches like DPDK and RDMA that can help improve performance by reducing overhead and enabling zero-copy data transfers. A case study is presented on using RDMA to improve TensorFlow performance by eliminating unnecessary data copies between devices.
This talk shows how to implement application-based routing on a common Linux distribution. We first use nDPI to perform deep packet inspection (DPI) and categorize the packets, then use the Linux kernel's built-in mark (fwmark) to pass that information from user space to kernel space, and finally let the policy routing system use the mark to route the packets to different destinations or interfaces.
Introduces the basic concepts of Open vSwitch. In these slides, we talk about how the Linux kernel and networking stack work together to forward and process network packets, and compare that Linux networking stack functionality with Open vSwitch and OpenFlow.
At the end of the slides, we talk about the challenges of integrating Open vSwitch with Kubernetes, which networking functions we need to address, and what benefits we can get from Open vSwitch.
[20200720] Cloud Native Development - Nelson Lin, HanLing Shen
There is now no shortage of development and CI/CD tools for cloud-native application development. But how do we bring the cloud-native concept, and cloud-native thinking, to the leftmost side of the CI/CD pipeline?
During the development phase, tools such as Cloud Code can help you speed up source-code iteration and run and debug cloud-native applications quickly and easily, turning cloud-native development into a real-time process and narrowing the gap between development and deployment.
There is now no shortage of development and CI/CD tools for cloud-native application development. But how do we place the cloud-native concept at the leftmost side of the CI/CD pipeline?
In the development phase, how can Cloud Code help you speed up source-code iteration and run and invoke cloud-native applications quickly and easily, turning cloud-native development into a real-time process and narrowing the gap between development and deployment?
How Networking Works with Data Science - HungWei Chiu
Introduces the basic concepts of networking models, including the OSI model and the TCP/IP model.
Also introduces basic ideas and functions in networking, such as routing, classification, and security.
Scaling OpenStack Networking Beyond 4000 Nodes with Dragonflow - Eshed Gal-Or...Cloud Native Day Tel Aviv
As OpenStack matures, more users move from “dipping a toe” to deploying at large scale, with thousands of nodes.
OpenStack networking has long been a limiting factor in scaling beyond a few hundred nodes, forcing users to turn to cell splitting, or to completely offload the networking to the underlay systems and forfeit the overlay network altogether.
Dragonflow is a fully distributed, open-source SDN implementation of Neutron that handles large-scale deployments without splitting into cells.
In testing we've conducted, we were able to scale to 4000+ controllers (each controller is typically deployed on a compute node) while maintaining the same performance we had on a small 30-node environment.
Control Your Network ASICs: What Benefits switchdev Can Bring Us - HungWei Chiu
In these slides, I introduce what switchdev is and what problem it aims to solve. To this day, most hardware switches' application-specific integrated circuits (ASICs) can only be controlled through the vendor's proprietary binary SDK, which is inconvenient for system administrators and developers. switchdev was designed to break this chip-vendor lock-in. With the help of switchdev, we can develop a general solution for hardware switch chips and cut the dependency on the vendor's binary-blob SDK.
In other words, the Linux kernel can now communicate directly with the vendor's proprietary ASIC, and software programmers and system administrators can easily control that ASIC to provide more flexible, powerful, and programmable network functions.
Introduces the basic concepts of load balancing, common load-balancing implementations, and the details of the Kubernetes Service. Finally, demonstrates how to modify the Linux iptables kernel module to implement layer-7 load balancing for Kubernetes.
Presentation delivered at LinuxCon China 2017.
Open vSwitch (OVS) is a multilayer open source virtual switch. OVS is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces. OVN is a new network virtualization project that brings virtual networking to the Open vSwitch user community. OVN includes logical switches and routers, security groups, and L2/L3/L4 ACLs, implemented on top of a tunnel-based overlay network.
In this presentation, we will provide an overview of the current state of the projects and their future plans, such as:
- The current state of the Linux, DPDK, and Hyper-V ports
- A status update on a portable BPF-based datapath
- The latest stateful and OpenFlow features available in OVS
- Performance and debugging enhancements to OVN
- OVN features under development such as ACL logging and encrypted tunnels
Orchestration Tool Roundup: Kubernetes vs. Docker vs. Heat vs. Terraform vs... - Nati Shalom
Video recording: https://www.youtube.com/watch?v=tGlIgUeoGz8
It's no news that containers represent a portable unit of deployment, and OpenStack has proven an ideal environment for running container workloads. However, things usually become more complex because an application is often built out of multiple containers. What's more, setting up a cluster of container images can be fairly cumbersome, because you need to make one container aware of another and expose the intimate details required for them to communicate, which is not trivial, especially if they are not on the same host.
These scenarios have instigated the demand for some kind of orchestrator. The list of container orchestrators is growing fairly fast. This session will compare the different orchestration projects out there - from Heat to Kubernetes to TOSCA - and help you choose the right tool for the job.
Session link from the summit: https://openstacksummitmay2015vancouver.sched.org/event/abd484e0dedcb9774edda1548ad47518#.VV5eh5NViko
Can the Open vSwitch (OVS) bottleneck be resolved? - Erez Cohen - OpenStack D...Cloud Native Day Tel Aviv
OpenStack practitioners who have deployed cloud at scale would frown when they hear the mention of Open Virtual Switch (OVS), which has been a bottleneck for cloud network performance and scalability. As emerging technologies such as NFV keep pushing for higher data forwarding performance across the network infrastructure, it becomes critical to improve OVS performance without compromising flexibility, network programmability, and cost.
We will present a novel way to offload the entire OVS dataplane onto the embedded switch (eSwitch) implemented in the server NIC. This approach maximizes the effective bandwidth that applications can use to communicate with each other or fetch data from storage, and enhances the efficiency of the cloud. Accelerated Switching And Packet Processing (ASAP2) Direct works seamlessly within the framework of SDN, and allows controllers to configure and update flows on OVS the same way as before, so that network programmability remains intact.
Docker Network Performance in the Public Cloud - Arjan Schaaf
Presentation from Container Camp London 2015 comparing the network performance of containers on both AWS and Azure. The SDN solutions included in these tests are Flannel, Weave, and Project Calico.
Enterprise data centers have to support a diverse set of workloads: cloud native, big data, high performance computing, and legacy applications. While cloud native applications are ideal to run in Docker clusters, bare metal and virtualization infrastructures must still be supported in the data center. The result is a proliferation of clusters and technologies running in individual silos, resulting in high management costs and low utilization. This talk describes the challenges and experiences in implementing a shared cluster infrastructure based on Kubernetes to support big data, high performance computing, and VM-based workloads. The talk will show the deployment and scaling of a high performance computing workload manager, Spark, and OpenStack, and how VM and Docker management can be integrated together.
Tech Talk by Gal Sagie: Kuryr - Connecting containers networking to OpenStack...nvirters
These are slides from the Tech Talk at http://www.meetup.com/openvswitch/events/226518209/
Synopsis
Kuryr is a new project under Neutron's big tent that makes Neutron networking available to Docker containers by means of a Docker plugin.
In this session Gal will introduce Kuryr and show how it provides networking for containers in plain Docker environments and in mixed Docker/OpenStack environments. He will also present Kuryr's roadmap and its integration with networking models in other orchestration engines such as Kubernetes and Docker.
About Gal Sagie
Gal Sagie is an open source software architect at the Huawei European Research Centre, focusing on OpenStack networking and container networking. He works on various projects in the community, such as Dragonflow, OVN, Kuryr, and multisite/hybrid clouds in OpenStack, and blogs about anything SDN/NFV/OpenStack related at http://galsagie.github.io
Network administration overhead is currently one of the major obstacles preventing customers from moving OpenStack into production for wider adoption and efficient utilization by applications. Cloud operators may lack visibility into the common operations of the underlying workers and a coherent representation of physical and virtual network elements and their interconnections. They may find it hard to estimate the impact of micro-failures in their infrastructure and to react quickly to failures. Some try to manage the complexity of operating, discovering, and monitoring their cloud through manual processes and/or complex batch operations. I offer a journey through the troubleshooting and discovery cycles of a typical cloud that we run today and suggest elegant ways to overcome these overheads. Substantially simplifying networking operations, troubleshooting, and monitoring can be achieved through a unified Operations API and operations agent; these concepts will be presented, accompanied by practical demos.
Writing a Container Network Interface (CNI) Plugin in Golang - HungWei Chiu
An introduction to the Container Network Interface (CNI), including what problems it aims to solve and how it works.
Also contains an example of how to write a simple CNI plugin in Go.
The attached is a summary of terms, description of constructs, integration alternatives and more in the networking world of Kubernetes, Openshift and AWS
OpenStack Israel Meetup - Project Kuryr: Bringing Container Networking to Neu...Cloud Native Day Tel Aviv
Kuryr is a new project, started by Gal Sagie, that makes Neutron networking available to the container networking used by Docker, Kubernetes, and others.
Kuryr aims to bridge the gap between container orchestration engines and their models and the OpenStack networking abstraction, exposing Neutron's flexibility, features, and advanced services to container networking.
Ariel Waizel discusses the Data Plane Development Kit (DPDK), an API for developing fast packet processing code in user space.
* Who needs this library? Why bypass the kernel?
* How does it work?
* How good is it? What are the benchmarks?
* Pros and cons
Ariel worked on kernel development at the IDF, Ben Gurion University, and several companies. He is interested in networking, security, machine learning, and basically everything except UI development. Currently a Solution Architect at ConteXtream (an HPE company), which specializes in SDN solutions for the telecom industry.
DPDK Summit 2015 - Aspera - Charles Shiflett - Jim St. Leger
DPDK Summit 2015 in San Francisco.
Presentation by Charles Shiflett, Aspera.
For additional details and the video recording please visit www.dpdksummit.com.
In-memory processing has started to become the norm in large-scale data handling. This is a close-to-the-metal analysis of highly important but often neglected aspects of memory access times and how they impact big data and NoSQL technologies. We cover aspects such as the TLB, transparent huge pages, the QPI link, hyper-threading, and the impact of virtualization on high-memory-footprint applications. We present benchmarks of various technologies ranging from Cloudera's Impala to Couchbase and how they are impacted by the underlying hardware. The key takeaway is a better understanding of how to size a cluster, how to choose a cloud provider and an instance type for big data and NoSQL workloads, and why not every core or GB of RAM is created equal.
OSDC 2016 - Tuning Linux for your Database by Colin CharlesNETWAYS
Many operations folk know that performance varies depending on using one of the many Linux filesystems like EXT4 or XFS. They also know of the schedulers available, they see the OOM killer coming and more. However, appropriate configuration is necessary when you're running your databases at scale.
Learn best practices for Linux performance tuning for MariaDB/MySQL (where MyISAM uses the operating system cache, and InnoDB maintains its own aggressive buffer pool), as well as PostgreSQL and MongoDB (more dependent on the operating system). Topics that will be covered include: filesystems, swap and memory management, I/O scheduler settings, using and understanding the tools available (like iostat/vmstat/etc), practical kernel configuration, profiling your database, and using RAID and LVM.
There is a focus on bare metal as well as on configuring your cloud instances.
Learn from practical examples from the trenches.
Tuning Linux for Your Database - FLOSSUK 2016 - Colin Charles
Some best practices about tuning Linux for your database workloads. The focus is not just on MySQL or MariaDB Server but also on understanding the OS from hardware/cloud, I/O, filesystems, memory, CPU, network, and resources.
OpenPOWER Acceleration of HPCC Systems - HPCC Systems
JT Kellington, IBM and Allan Cantle, Nallatech present at the 2015 HPCC Systems Engineering Summit Community Day about porting HPCC Systems to the POWER8-based ppc64el architecture.
A Dataflow Processing Chip for Training Deep Neural Networks - inside-BigData.com
In this deck from the Hot Chips conference, Chris Nicol from Wave Computing presents: A Dataflow Processing Chip for Training Deep Neural Networks.
Watch the video: https://wp.me/p3RLHQ-k6W
Learn more: https://wavecomp.ai/
and
http://www.hotchips.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Accelerated Dataplanes Integration and Deployment - OPNFV
Tim Rozet, Red Hat, Feng Pan, Red Hat
This session will explore the challenges and lessons learned with integrating accelerated dataplanes into OPNFV deployments. More specifically the talk will focus on FD.IO (VPP) and OVS DPDK integration into Apex, including different types of configuration options, platform requirements, performance tuning, and deployment challenges. This talk will also provide context to how OpenStack functions differently with these types of dataplanes, and how integration with the OpenDaylight controller works.
These slides describe what KIND (Kubernetes IN Docker) is and how to set it up to get a simple and quick environment for Kubernetes testing. They also address a few issues that KIND fixes to make it work, such as the certificate issue and the DNS issue.
Kubernetes is a container orchestration platform, not a Docker platform. That means we can switch to different container solutions in a Kubernetes environment, and the key point is the CRI, the Container Runtime Interface. We talk about what the CRI is and how to use it in the Kubernetes world, and we also introduce the OCI and its basic concepts, including the runtime spec and the image spec.
In these slides, we discuss IPVS, including an introduction, a demonstration, the implementation, and its integration in Kubernetes.
IPVS is based on netfilter; we discuss how it works together with iptables and compare the implementation details in Kubernetes to show why IPVS performs better than iptables.
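The core of that performance difference can be illustrated with a toy Go sketch (purely an analogy, not kernel code, and the service counts are invented): kube-proxy's iptables mode appends rules that netfilter evaluates as a linear list, so lookup cost grows with the number of Services, while IPVS keeps its virtual servers in a hash table with roughly constant lookup cost.

```go
package main

// Toy analogy for iptables-mode vs IPVS-mode service lookup:
// a linear rule scan (O(n)) versus a hash-table probe (O(1) average).

import "fmt"

type rule struct{ vip, backend string }

// linearLookup mimics iptables: scan every rule until one matches,
// counting how many rules were checked.
func linearLookup(rules []rule, vip string) (string, int) {
	checks := 0
	for _, r := range rules {
		checks++
		if r.vip == vip {
			return r.backend, checks
		}
	}
	return "", checks
}

// hashLookup mimics IPVS: a single hash-table probe.
func hashLookup(table map[string]string, vip string) (string, int) {
	return table[vip], 1
}

func main() {
	var rules []rule
	table := map[string]string{}
	for i := 0; i < 10000; i++ { // pretend cluster with 10k Services
		vip := fmt.Sprintf("10.96.%d.%d", i/256, i%256)
		backend := fmt.Sprintf("10.244.0.%d", i%254+1)
		rules = append(rules, rule{vip, backend})
		table[vip] = backend
	}
	target := "10.96.39.15" // the last Service added
	_, n1 := linearLookup(rules, target)
	_, n2 := hashLookup(table, target)
	fmt.Printf("linear scan checked %d rules; hash table needed %d probe\n", n1, n2)
}
```

The real picture has more nuance (iptables chains, conntrack, rule-update cost), but the asymptotic gap is the headline reason IPVS mode scales to many Services.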
In these slides, we go through Google's Dapper, OpenTracing, and Jaeger, up to OpenTelemetry. By reading and studying the history of Dapper, we can learn from the experience and design theory of a large-scale distributed tracing system and then see how it influenced other solutions, such as OpenTracing and Jaeger.
We also discuss the differences between OpenTracing and Jaeger, and demonstrate how Jaeger works and what it looks like.
Afterwards, we talk about the future of OpenTracing: the new organization called OpenTelemetry, what its goals are, and how it plans to achieve them.
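The heart of Dapper's model, which OpenTracing and OpenTelemetry inherit, can be sketched in a few lines of Go (hypothetical types, not any real tracing API): every span carries the trace ID plus its own span ID and its parent's span ID, which is enough for a collector to rebuild the whole request tree.

```go
package main

// Minimal sketch of Dapper-style trace context. Real tracers also
// record timestamps, tags, and logs, and propagate context over RPC
// headers; here we only model the parent/child identifiers.

import "fmt"

type Span struct {
	TraceID  string
	SpanID   string
	ParentID string // empty for the root span
	Op       string
}

var nextID int

func newID() string { nextID++; return fmt.Sprintf("span-%d", nextID) }

// StartSpan creates the root span of a new trace.
func StartSpan(traceID, op string) Span {
	return Span{TraceID: traceID, SpanID: newID(), Op: op}
}

// ChildOf creates a span that records its parent, propagating the trace ID.
func ChildOf(parent Span, op string) Span {
	return Span{TraceID: parent.TraceID, SpanID: newID(), ParentID: parent.SpanID, Op: op}
}

func main() {
	root := StartSpan("trace-1", "GET /checkout")
	auth := ChildOf(root, "auth.Verify")
	db := ChildOf(root, "db.Query")
	for _, s := range []Span{root, auth, db} {
		fmt.Printf("%s %s parent=%q op=%s\n", s.TraceID, s.SpanID, s.ParentID, s.Op)
	}
}
```

Because the trace ID is copied into every child, sampling decisions made at the root propagate consistently through the whole tree, one of Dapper's key design points.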
In these slides, we discuss the architecture of iptables and show how to implement your own iptables module.
Building on that understanding of iptables, we implement a layer-7 DNS parser as an iptables module.
After that, we study how the Kubernetes Service works and explain why Kubernetes can do layer-7 load balancing for UDP but not for TCP connections.
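To give a feel for what "parsing DNS at layer 7" involves, here is a sketch in Go (the talk's module is kernel C, but the wire-format logic is the same): after the 12-byte DNS header, the query name is encoded as length-prefixed labels, e.g. `3www7example3com0`, which the module must walk to recover the hostname. This sketch ignores header flags and compression pointers.

```go
package main

// Extract the first query name from a raw DNS message, as a layer-7
// DNS-matching iptables module would have to do on each UDP payload.

import (
	"errors"
	"fmt"
	"strings"
)

// parseQName walks the length-prefixed labels after the DNS header.
func parseQName(msg []byte) (string, error) {
	const headerLen = 12 // fixed-size DNS header
	if len(msg) < headerLen {
		return "", errors.New("message shorter than DNS header")
	}
	var labels []string
	i := headerLen
	for {
		if i >= len(msg) {
			return "", errors.New("truncated name")
		}
		n := int(msg[i])
		if n == 0 { // zero-length label terminates the name
			break
		}
		i++
		if i+n > len(msg) {
			return "", errors.New("label runs past message end")
		}
		labels = append(labels, string(msg[i:i+n]))
		i += n
	}
	return strings.Join(labels, "."), nil
}

func main() {
	// 12 zero bytes of header, then "3www7example3com0".
	msg := append(make([]byte, 12),
		3, 'w', 'w', 'w', 7, 'e', 'x', 'a', 'm', 'p', 'l', 'e', 3, 'c', 'o', 'm', 0)
	name, err := parseQName(msg)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // www.example.com
}
```

This also hints at the TCP limitation discussed above: over UDP each datagram is a complete DNS message, while over TCP the payload is a byte stream that may split messages across segments, so a per-packet hook cannot reliably parse it.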
In these slides, we discuss the concepts of iptables and ebtables and then show how they work in a simple Docker environment.
To track the packet flow in container-to-container communication, we use the LOG target in iptables/ebtables to record the information.
An introduction to what containers are and how to use them, starting from a comparison with virtual machines and showing how to use persistent storage and port mapping in containers.
The last part shows what Kubernetes is, what kinds of problems it aims to solve, and how it solves them.
In these slides, I briefly introduce containers and how Docker implements them, including the image and the container itself. I also show how Docker sets up networking connectivity with the default bridge network.
Build Your Own CaaS (Container as a Service) - HungWei Chiu
In these slides, I introduce Kubernetes and show through an example what CaaS is and what it can provide.
I also introduce how to set up continuous integration and continuous deployment for the CaaS platform.
Overview of Kubernetes Network Functions - HungWei Chiu
In these slides, I briefly introduce the network functions in Kubernetes and explain how Kubernetes implements them.
Those functions include the Container Network Interface (CNI) and the Kubernetes Service.
Finally, I introduce the Multus CNI, which is designed to attach multiple networks to a container and is necessary in some use cases, such as SDN/NFV/5G.
Shows how iptables works, using the source code to explain its workflow step by step, including the file lock, the system calls, and the commands related to iptables rules.
Finally, I also show the architecture of the iptables extension mechanism and demonstrate how to write your own iptables modules.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Cosmetic shop management system project report.pdf - Kamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of the general workflow and administration processes of the shop. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It should help the employees to quickly identify the cosmetic products that have reached the minimum quantity and also keep track of the expiration date of each cosmetic product. It should help the employees to find the rack number in which a product is placed. It is also a faster and more efficient way of working.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
2. WHO AM I
• Hung-Wei Chiu (邱宏瑋)
• hwchiu@linkernetworks.com
• hwchiu.com
• Experience
• Software Engineer at Linker Networks
• Software Engineer at Synology (2014~2017)
• Co-Founder of SDNDS-TW
• Open Source experience
• SDN related projects (mininet, ONOS, Floodlight, awesome-sdn)
3. WHAT WE DISCUSS TODAY
The Drawbacks of the Current Network Stack
High Performance Network Model
DPDK
RDMA
Case Study
4. DRAWBACK OF CURRENT NETWORK STACK
• Linux Kernel Stack
• TCP Stack
• Packets Processing in Linux Kernel
5. LINUX KERNEL TCP/IP NETWORK STACK
• Have you ever imagined how applications communicate over the network?
16. HOW ABOUT THE KERNEL ?
RECEIVE MESSAGE
• User Space -> read(data….)
• SYSCALL_DEFINE3(….) Kernel Space
• …..
18. WHAT IS THE PROBLEM
• TCP
• Linux Kernel Network Stack
• How Linux processes packets.
19. THE PROBLEM OF TCP
• Designed for WAN network environment
• Hardware today is very different from back then.
• Modify the implementation of TCP to improve its performance
• DCTCP (Data Center TCP)
• MPTCP (Multi Path TCP)
• Google BBR (Modify Congestion Control Algorithm)
• New Protocol
• [Paper review]
• Re-architecting datacenter networks and stacks for low latency and high performance
20. THE PROBLEM OF LINUX NETWORK STACK
• Increasing network speeds: 10G 40G 100G
• Time between packets get smaller
• For 1538 bytes:
• 10 Gbit/s == 1230.4 ns
• 40 Gbit/s == 307.6 ns
• 100 Gbit/s == 123.0 ns
• Refer to http://people.netfilter.org/hawk/presentations/LCA2015/net_stack_challenges_100G_LCA2015.pdf
• Network stack challenges at increasing speeds: The 100Gbit/s challenge
21. THE PROBLEM OF LINUX NETWORK STACK
• For the smallest frame size, 84 bytes:
• At 10 Gbit/s == 67.2 ns (14.88 Mpps, packets per second)
• For 3GHz CPU, 201 CPU cycles for each packet.
• System call overhead
• 75.34 ns (Intel CPU E5-2630 )
• Spinlock + unlock
• 16.1ns
22. THE PROBLEM OF LINUX NETWORK STACK
• A single cache-miss:
• 32 ns
• Atomic operations
• 8.25 ns
• Basic sync mechanisms
• Spin (16ns)
• IRQ (2 ~ 14 ns)
23. SO..
• For the smallest frame size, 84 bytes:
• At 10 Gbit/s == 67.2 ns (14.88 Mpps, packets per second)
• 75.34 + 16.1 + 32 + 8.25 + 14 = 145.69 ns, far over the 67.2 ns budget
27. PACKET PROCESSING
• When a network card receives a packet.
• Sends the packet to its receive queue (RX)
• The kernel needs to know a packet has arrived and copy the data into an allocated buffer.
• Polling/Interrupt
• Allocate an sk_buff for the packet
• Copy the data to user space
• Free the sk_buff
28. PACKETS PROCESSING IN LINUX
User Space
Kernel Space
NIC TX/RX Queue
Application
Socket Driver Ring Buffer
29. PROCESSING MODE
• Polling Mode
• Busy Looping
• CPU overloading
• High Network Performance/Throughput
30. PROCESSING MODE
• Interrupt Mode
• Read packets when an interrupt arrives
• Reduces CPU overhead.
• Servers didn't have many CPU cores back then.
• Worse network performance than polling mode.
31. MIX MODE
• Polling + Interrupt mode (NAPI) (New API)
• Interrupt first, then poll to fetch packets
• Combines the advantages of both modes.
32. SUMMARY
• Linux Kernel Overhead (System calls, locking, cache)
• Context switching on blocking I/O
• Interrupt handling in kernel
• Data copy between user space and kernel space.
• Too many unused network stack features.
• Additional overhead for each packet
33. HOW TO SOLVE THE PROBLEM
• Out-of-tree network stack bypass solutions
• Netmap
• PF_RING
• DPDK
• RDMA
34. HOW TO SOLVE THE PROBLEM
• How did those models handle a packet in 67.2 ns?
• Batching, preallocation, prefetching
• Staying CPU/NUMA local, avoiding locking
• Reducing syscalls
• Faster, cache-optimal data structures
36. HOW TO SOLVE.
• Nowadays there are more and more CPU cores per server.
• We can dedicate some cores to handling network packets.
• Polling mode
• Zero-Copy
• Copy to user space only if the application needs to modify the data.
• sendfile(…)
• UIO (User Space I/O)
• mmap (memory mapping)
38. DPDK
• Backed by Intel
• Only Intel NICs were supported at first.
• Processor affinity / NUMA
• UIO
• Polling Mode
• Batch packet handling
• Kernel Bypass
• …etc
39. PACKETS PROCESSING IN DPDK
User Space
Kernel Space
NIC TX/RX Queue
Application DPDK
UIO (User Space IO)
Driver
Ring Buffer
40. COMPARE
Network Interface Card
Linux Kernel
Network Stack
Network Driver
Application
Network Interface Card
Linux Kernel
Network Stack
Network Driver
Application
Kernel
Space
User Space
41. WHAT’S THE PROBLEM.
• Without the Linux Kernel Network Stack
• How do we know what kind of packets we received?
• Layer2 (MAC/Vlan)
• Layer3 (IPv4, IPv6)
• Layer4 (TCP,UDP,ICMP)
42. USER SPACE NETWORK STACK
• We need to build the user space network stack
• For each application, we need to handle the following issues:
• Parse packets
• Mac/Vlan
• IPv4/IPv6
• TCP/UDP/ICMP
• For TCP, we also need to handle the three-way handshake
43. FOR ALL EXISTING NETWORK APPLICATIONS
• Rewrite all socket-related APIs to the DPDK API
• DIY
• Find some OSS to help you
• dpdk-ans (C)
• mTCP (C)
• YANFF (Go)
• These projects provide a BSD-like socket interface.
51. WHAT IT PROVIDES
• Low CPU usage
• High throughput
• Low-latency
• You can't have all of those features at the same time.
• Refer to: Tips and tricks to optimize your RDMA code
56. SUPPORT RDMA
• Storage
• Ceph
• DRBD (Distributed Replicated Block Device)
• Tensorflow
• Case Study - Towards Zero Copy Dataflows using RDMA
57. CASE STUDY
• Towards Zero Copy Dataflows using RDMA
• 2017 SIGCOMM poster
• Introduction
• What is the problem?
• How was it solved?
• How was it implemented?
• Evaluation
58. INTRODUCTION
• Based on Tensorflow
• Distributed
• Based on RDMA
• Zero Copy
• Copy problem
• Contributed to Tensorflow (merged)
59. WHAT PROBLEMS
• Dataflow
• Directed Acyclic Graph
• Large data
• Hundreds of MB
• Some data is never modified.
• Too many copy operations
• User Space <-> User Space
• User Space <-> Kernel Space
• Kernel Space -> Physical devices
60. WHY DATA COPY IS THE BOTTLENECK
• The data buffer is bigger than the system's L1/L2/L3 caches
• Too many cache misses (increased latency)
• A single application is unlikely to congest the network bandwidth.
• The authors report:
• 20-30 GB/s for 4 KB data buffers
• 2-4 GB/s for data buffers > 4 MB
• Too many cache misses.
61. HOW TO SOLVE
• Too many data copy operations.
• Same device:
• Use DMA to pass data.
• Different device:
• Use RDMA.
• In order to read/write a remote GPU:
• GPUDirect RDMA (from NVIDIA)
63. HOW TO IMPLEMENT
• Implement a memory allocator
• Parse the computational graph/distributed graph partition
• Register the memory with RDMA/DMA according to the node's type.
• In Tensorflow
• Replace the original gRPC transport with RDMA
64. EVALUATION (TARGET)
• Tensorflow v1.2
• Based on gRPC
• RDMA zero copy Tensorflow
• Yahoo open RDMA Tensorflow (still has some copy operations)
65. EVALUATION (RESULT)
• RDMA (zero copy) vs. gRPC
• 2.43x
• RDMA (zero copy) vs. the Yahoo version
• 1.21x
• Number of GPUs: 16 vs. 1
• 13.8x
67. EVALUATION (HARDWARE)
• Server * 4
• Dual 6-core Intel Xeon E5-2603 v4 CPUs
• 4 Nvidia Tesla K40m GPUs
• 256 GB DDR4-2400MHz
• Mellanox MT27500 40GbE NIC
• Switch
• 40GbE RoCE Switch
• Priority Flow Control
68. EVALUATION (SOFTWARE)
• VGG16 CNN Model
• Model parameter size is 528 MB
• Synchronous
• Number of PS == Number of Workers
• Workers
• Use CPU+GPU
• Parameter Server
• Only CPU