Using Kubernetes and TensorFlow to build a fog computing platform that can dynamically deploy deep learning applications onto IoT devices (Raspberry Pi).
Overview of Kubernetes network functions - HungWei Chiu
In these slides, I briefly introduce the network functions in Kubernetes and explain how Kubernetes implements them.
Those functions include the Container Network Interface (CNI) and the Kubernetes Service.
Finally, I introduce Multus CNI, which is designed to provide multiple networks to a container and is necessary in some use cases, such as SDN/NFV/5G.
This document discusses CNI and the Linen CNI plugin. It begins with an introduction to CNI and how it allows plugins to configure network interfaces in containers. It then discusses the Linen CNI plugin, which is designed for overlay networks and uses Open vSwitch. It explains how Linen CNI works with Kubernetes and provides packet processing between nodes. The document also compares Linen CNI to other overlay networking solutions like OVN-Kubernetes.
Introduction to CNI (Container Network Interface) - HungWei Chiu
A brief introduction to the CNI (Container Network Interface), the implementation of the Docker bridge network, and CNI usage, including why CNI was developed, how to use it, and what it is.
We also introduce the pause container, the Kubernetes Pod, and how to use CNI in Kubernetes.
In the end, we use Flannel as an example to show how to install a CNI plugin into your Kubernetes cluster.
[20200720] Cloud-native development - Nelson Lin - HanLing Shen
There is now no shortage of development and CI/CD tools for cloud-native application development. But how do we bring the cloud-native concept, and a cloud-native way of thinking, to the leftmost side of the CI/CD pipeline?
During the development phase, tools such as Cloud Code can help you speed up source-code iteration and run and debug cloud-native applications easily and quickly, turning cloud-native development into a real-time process and narrowing the gap between development and deployment.
Integrating Kubernetes with a private Docker registry - HungWei Chiu
What problems arise when we want to use a private registry in Kubernetes?
We also want to run a Docker-in-Docker Pod to push a private image to that private registry, and have the Kubernetes node pull the private image to run it.
How to handle second-interface service discovery and load balancing in Kubernetes - Meng-Ze Lee
This document discusses how to deal with a second network interface in Kubernetes. It explains that having multiple interfaces is necessary for network functions and OpenStack deployments. CNI plugins like Multus and Genie allow containers to have multiple interfaces. The challenges are that Kubernetes does not have service and endpoint resources for the second interface, and components like CoreDNS and kube-proxy lack related information. To address this, the document proposes establishing a service mechanism, DNS resolution, and load balancing for the second interface. It provides examples of using projects to record pod IPs, DNS servers like CoreDNS, and discusses load balancing algorithms.
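The proposal above can be sketched in a few lines. This is a hypothetical illustration (service names, IPs, and class names are invented, not from the talk): a registry records pod IPs observed on the second interface, answers DNS-style lookups, and does round-robin load balancing, the simplest of the algorithms the talk discusses.

```python
import itertools

class SecondInterfaceRegistry:
    """Hypothetical service mechanism for a second pod interface."""

    def __init__(self):
        self._endpoints = {}   # service name -> list of pod IPs
        self._cursors = {}     # service name -> round-robin iterator

    def record_pod_ip(self, service, pod_ip):
        # Record a pod IP observed on the second interface.
        self._endpoints.setdefault(service, []).append(pod_ip)
        self._cursors[service] = itertools.cycle(self._endpoints[service])

    def resolve(self, service):
        # DNS-style lookup: return every backend for a service.
        return list(self._endpoints.get(service, []))

    def pick_backend(self, service):
        # Round-robin load balancing over the recorded backends.
        return next(self._cursors[service])

registry = SecondInterfaceRegistry()
registry.record_pod_ip("nf-a", "10.10.0.5")
registry.record_pod_ip("nf-a", "10.10.0.6")
print(registry.resolve("nf-a"))        # ['10.10.0.5', '10.10.0.6']
print(registry.pick_backend("nf-a"))   # 10.10.0.5
print(registry.pick_backend("nf-a"))   # 10.10.0.6
```

In a real system the registry would be fed by a controller watching pod annotations and queried by a DNS server such as CoreDNS; this sketch only shows the data flow.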
This document provides an introduction to Kubernetes and Container Network Interface (CNI). It begins with an introduction to the presenter and their background. It then discusses the differences between VMs and containers before explaining why Kubernetes is needed for container orchestration. The rest of the document details the architecture of Kubernetes, including the master node, worker nodes, pods, labels, replica sets, deployments, services, and how to build a Kubernetes cluster. It concludes with a brief introduction to CNI and a call for questions.
High-performance networking - Cloud Native Taiwan User Group - HungWei Chiu
The document discusses high performance networking and summarizes a presentation about improving network performance. It describes drawbacks of the current Linux network stack, including kernel overhead and data copying. It then discusses approaches like DPDK and RDMA that can help improve performance by reducing overhead and enabling zero-copy data transfers. A case study is presented on using RDMA to improve TensorFlow performance by eliminating unnecessary data copies between devices.
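The zero-copy idea behind RDMA and DPDK can be illustrated with a loose Python analogy (this is not RDMA itself, just the copy-avoidance principle): slicing a byte buffer makes a copy, while a `memoryview` exposes the same memory without copying, much as RDMA lets the NIC touch application buffers directly instead of staging data through intermediate copies.

```python
# A Python analogy for zero-copy data transfer (not actual RDMA/DPDK).
payload = bytearray(b"tensor-gradient-chunk" * 1000)

copied = bytes(payload[:1024])       # slicing copies the data
view = memoryview(payload)[:1024]    # zero-copy window onto the same buffer

# Mutate the underlying buffer: the copy goes stale, the view does not.
payload[0] = ord("X")
print(copied[0:1])        # b't'  (the copy still holds the old byte)
print(bytes(view[0:1]))   # b'X'  (the view sees the update)
```

The performance argument in the talk is the same in spirit: every avoided copy saves CPU cycles and memory bandwidth, which is where RDMA's gains for TensorFlow come from.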
How to integrate Kubernetes in OpenStack - Meng-Ze Lee
The document discusses various open source projects for integrating Kubernetes and containers into OpenStack including:
- Kolla provides production-ready containers and deployment tools for operating OpenStack clouds using Kubernetes in a scalable and reliable way.
- Magnum allows deploying and managing container orchestration engines like Docker Swarm, Mesos and Kubernetes on OpenStack.
- Zun is an OpenStack service for managing containers on OpenStack using projects like Docker and Kuryr.
- Kuryr-Kubernetes provides networking between Kubernetes and OpenStack Neutron.
Introduction to the Container Network Interface (CNI) - Weaveworks
CNI, the Container Network Interface, is a standard API between container runtimes and container network implementations. These slides are from the Cloud Native Computing Foundation's Webinar, and explain what CNI is, how you use it, and what lies ahead on the roadmap.
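Concretely, the API is JSON in and JSON out. A minimal sketch of the two payloads, built in Python (field values here are illustrative; `cniVersion`, `name`, and `type` are the core fields of a network configuration in the CNI spec):

```python
import json

# Network configuration the runtime hands to the plugin on stdin.
net_conf = {
    "cniVersion": "0.4.0",
    "name": "demo-net",
    "type": "bridge",          # the plugin binary to invoke
    "bridge": "cni0",          # plugin-specific key (illustrative)
    "ipam": {
        "type": "host-local",  # delegate address allocation
        "subnet": "10.22.0.0/16",
    },
}

# On a successful ADD, the plugin prints a result describing what it
# configured (shape sketched after the 0.4.0 result format).
add_result = {
    "cniVersion": "0.4.0",
    "interfaces": [{"name": "eth0", "sandbox": "/var/run/netns/demo"}],
    "ips": [{"version": "4", "address": "10.22.0.5/16"}],
}

print(json.dumps(net_conf, indent=2))
assert {"cniVersion", "name", "type"} <= net_conf.keys()
```

The runtime also passes the operation (ADD/DEL/CHECK), container ID, and network-namespace path through environment variables; the JSON above is only the configuration half of the contract.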
Integrate Kubernetes into CORD (Central Office Re-architected as a Datacenter) - inwin stack
- CORD aims to virtualize telecom central offices using open source software and commodity hardware. Kubernetes could help integrate NFV apps but challenges remain.
- Issues include converting existing VM-based NFVs to containers, supporting both OpenStack and Kubernetes, and ensuring the SDN controller ONOS can communicate with Kubernetes network components.
- The presenter's team addressed these by designing a multi-interface CNI plugin and centralized IPAM using Etcd to integrate ONOS and provide pod networking. Further work is needed to fully integrate ONOS control and test the solution.
How to build a Kubernetes networking solution from scratch - All Things Open
Presented by: Antonin Bas & Jianjun Shen, VMware
Presented at All Things Open 2020
Abstract: For the non-initiated, Kubernetes (K8s) networking can be a bit like dark magic. Many clusters have requirements beyond what the default network plugin, kubenet, can provide and require the use of a third-party Container Network Interface (CNI) plugin. But what exactly is the role of these plugins, how do they differ from each other and how does the choice of one affect your cluster?
In this talk, Antonin and Jianjun will describe how a group of developers was able to build a CNI plugin - an open source project called Antrea - from scratch and bring it to production in a matter of months. This velocity was achieved by leveraging existing open-source technologies extensively: Open vSwitch, a well-established programmable virtual switch for the data plane, and the K8s libraries for the control plane. Antonin and Jianjun will explain the responsibilities of a CNI plugin in the context of K8s and will walk the audience through the steps required to create one. They will show how Antrea integrates with the rest of the cloud-native ecosystem (e.g. dashboards such as Octant and Prometheus) to provide insight into the network and ensure that K8s networking is not just dark magic anymore.
Secure your K8s cluster from multiple layers - Jiantang Hao
The document discusses securing a Kubernetes cluster from multiple layers of risk. It covers securing the infrastructure layer by limiting access and exposure, the control plane layer by enabling TLS and RBAC, the workload layer using pod security policies and network policies, the container runtime layer with tools like Kata Containers, the user misconfiguration layer by avoiding defaults and validating configurations, and useful security tools. The presenter then provides contact information for potential job opportunities.
Deploying VNFs with Kubernetes Pods and VMs - LibbySchulze1
This document discusses deploying virtual network functions (VNFs) using Kubernetes pods and VMs. It covers using single root I/O virtualization (SR-IOV) and Open vSwitch with Data Plane Development Kit (OVS-DPDK) for high performance networking. SR-IOV allows VNFs direct access to network interface cards to bypass the hypervisor. OVS-DPDK processes packets in userspace using DPDK for accelerated performance compared to native Linux networking or SR-IOV for some workloads. The document provides configuration details for enabling SR-IOV and OVS-DPDK on the host and specifying network interfaces in KubeVirt virtual machine instances.
In these slides, I briefly introduce containers and how Docker implements them, including the image and the container itself. I also show how Docker sets up networking connectivity with the default bridge network.
Besides its huge success in mobile, ARM is also ambitious in the server field. The software ecosystem is currently a barrier to wide deployment of ARM servers in the data center. The ARM Shanghai Workloads team is working on cloud and big-data software enablement and optimization on the ARM64 platform.
In this presentation, Yibo Cai will introduce the status and challenges of running OpenStack on ARM servers, with emphasis on OpenStack compute, storage and networking.
Kubernetes uses containers managed by container engines like Docker. Containers are isolated from the host machine using namespaces and cgroups. Docker containers share the host kernel and use aufs as the union filesystem. Virtual machines (VMs) run a full guest operating system with virtualization provided by hypervisors like KVM/QEMU. Containers are more lightweight than VMs because they share the host kernel, have smaller base images, launch faster, and use fewer resources.
Docker Networking with Container Orchestration Engines [Docker Meetup Santa C... - Debra Robertson
The Docker container ecosystem is growing very fast and networking has taken an interesting direction with different networking models being introduced and it becomes even more interesting when container orchestration engines like Swarm, Mesos, Kubernetes have to implement networking for Docker containers. At this Meetup, we will talk about the networking capabilities for Docker, networking models like CNM (Container Network Model), how they fit into container orchestration frameworks, what's ready for production and what's in the design/discussion phase expected to be available in near future.
In this deck from the Docker Workshop at ISC 2015, Andreas Schmidt from Cassini Consulting describes Docker in a Nutshell
"As the newest flavor of Linux Containers, Docker gained a lot of momentum in the last 12 months. With a very convenient and open API-driven architecture Docker is able to help decrease the complexity of operations and increase the productivity of computation. During the last two years Andreas, Christian, and Wolfgang gained a lot of experience with Docker and were thrilled by its possible impact early on. Andreas started working with Docker in mid-2013 and is interested in developing tools for solving Enterprise IT requirements on networking and security. In 2014 he held talks and workshops about these topics. Christian started using Docker in 2013 to virtualize a complete HPC cluster stack and since then held multiple talks about how Docker might impact HPC. Wolfgang and his partner Burak Yenier introduced Docker as a corner-stone of the UberCloud Marketplace to drastically improve and simplify access to HPC cloud resources. UberCloud just announced their new containers for computational fluid dynamics software like Fluent, STAR-CCM+ and OpenFOAM."
Watch the video presentation: http://wp.me/p3RLHQ-enP
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Containers require a new approach to networking. How are your containers communicating with each other? This talk will go through the different network topologies of Kubernetes. How Kubernetes addresses networking compared to traditional physical networking concepts. What are your options for networking using Kubernetes. What is the CNI (Container Network Interface) and how it affects Kubernetes networking.
The document discusses iptables and network packet filtering. It begins with an introduction of the presenter and overview of iptables. It then covers how iptables works, including the tables, chains and communication between userspace and kernelspace. It discusses common iptables commands like flush and check. It also covers iptables extensions, how they are implemented in both userspace and kernelspace, and provides an example of custom TCP matching. The presentation aims to explain how the iptables userspace tools interact with the kernelspace netfilter system and custom extensions can be added.
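To make the userspace side of that interaction concrete, here is a small Python helper that assembles (but does not execute) an iptables command: a DNAT rule in the nat table's PREROUTING chain, the kind of translation rule kube-proxy-style service routing is built from. The flags are standard iptables options; the addresses are made up for illustration.

```python
def dnat_rule(dport, target_ip, target_port):
    """Build an iptables DNAT command as an argument list."""
    return [
        "iptables",
        "-t", "nat",              # operate on the nat table
        "-A", "PREROUTING",       # append to the PREROUTING chain
        "-p", "tcp",              # match TCP packets
        "--dport", str(dport),    # ...arriving on this port
        "-j", "DNAT",             # jump to the DNAT target
        "--to-destination", f"{target_ip}:{target_port}",
    ]

cmd = dnat_rule(80, "10.0.0.5", 8080)
print(" ".join(cmd))
# iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.5:8080
```

When this command runs, the iptables binary translates the match and target options into the netfilter structures the presentation describes and pushes them into kernelspace via a netlink/setsockopt interface; the custom TCP-matching extension in the talk plugs into exactly that translation path.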
The Contrail Virtual Execution Platform (VEP) allows Cloud administrators to manage data centers and monitor the usage of resources. Users can manage their distributed applications on IaaS Cloud providers under the control of Service Level Agreements (SLA). VEP applications are packaged in the standard OVF format and they are deployed inside Constrained Execution Environments (CEE) derived from the SLA, to support the specification of SLA contracts between users and providers.
These CEE environments allow to define constraints concerning virtual hardware performance, localization and affinity allowing the administrator to configure the monitoring system in order to feed external SLA enforcement services. VEP integrates elasticity management capabilities which can be controlled by external SLA enforcement services. A resource allocator service is integrated to dispatch the virtual components on the physical resources of the provider in accordance with the SLA terms.
The first version of VEP is currently implemented on OpenNebula. This talk presents the implementation of VEP on OpenNebula and discusses some implementation choices such as the resource allocator.
Kubernetes and OpenStack at Scale at OpenStack Summit Boston 2017
Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster and elastic infrastructure. Now, take that one step further - all of those images being updatable through a single upload to the registry, and with zero downtime. In this session, you will see just that.
In this presentation, we will walk through a recent benchmarking deployment using Kubernetes and OpenStack on the Cloud Native Computing Foundation’s (CNCF's) 1,000 node cluster with OpenStack and Red Hat’s OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
You'll also hear what's been happening in subsequent rounds of testing in Red Hat's own SCALE lab and the CNCF cluster, and how we are working with the relevant open source communities, including OpenStack, Kubernetes, and Ansible, to continue to raise the bar for horizontal scaling of these platforms via community-powered innovation.
This document summarizes an upcoming Kubernetes meetup in Geneva. It discusses the history of the Kubernetes meetup group in Geneva, including changes in leadership and growth of the community. The meetup occurs quarterly and covers a wide range of topics related to Kubernetes and the surrounding ecosystem. Speakers and sponsors for the upcoming meetup are also mentioned.
This document provides an overview of machine learning in cyber security. It discusses definitions of machine learning, cyber security, and how machine learning can be used for cyber security tasks like malware detection. It also covers theoretical concepts, hands-on materials like necessary software and lab setup, and guidance for projects. Specific machine learning and security tools are mentioned, like Docker for containerization. The document aims to explain the importance and applications of machine learning in cyber security.
This document introduces BioCloud, a tool for using cloud computing platforms like Hadoop to process large biological datasets in parallel. It discusses how biology applications are becoming more resource-intensive and how cloud platforms can provide scalable computing resources at a lower cost than local hardware. It provides an overview of Hadoop and MapReduce as a framework for processing vast amounts of data across clusters of machines. Examples of companies using Hadoop include Google, Yahoo, and Facebook for applications involving terabytes of data.
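The MapReduce model behind Hadoop fits in a few lines of Python. This miniature version runs the three phases in one process (a real Hadoop job distributes them across machines): a map phase emits (key, value) pairs, a shuffle groups values by key, and a reduce phase combines each group. Word count is the canonical example.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit (word, 1) for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a final count.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big clusters", "big data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)   # {'big': 3, 'data': 2, 'clusters': 1}
```

For genomics workloads the same shape applies with sequence reads as documents and, say, k-mers as keys; the framework's job is to run the map and reduce functions near the data on each cluster node.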
This document provides an introduction to Kubernetes and Container Network Interface (CNI). It begins with an introduction to the presenter and their background. It then discusses the differences between VMs and containers before explaining why Kubernetes is needed for container orchestration. The rest of the document details the architecture of Kubernetes, including the master node, worker nodes, pods, labels, replica sets, deployments, services, and how to build a Kubernetes cluster. It concludes with a brief introduction to CNI and a call for questions.
High performace network of Cloud Native Taiwan User GroupHungWei Chiu
The document discusses high performance networking and summarizes a presentation about improving network performance. It describes drawbacks of the current Linux network stack, including kernel overhead and data copying. It then discusses approaches like DPDK and RDMA that can help improve performance by reducing overhead and enabling zero-copy data transfers. A case study is presented on using RDMA to improve TensorFlow performance by eliminating unnecessary data copies between devices.
How to Integrate Kubernetes in OpenStack Meng-Ze Lee
The document discusses various open source projects for integrating Kubernetes and containers into OpenStack including:
- Kolla provides production-ready containers and deployment tools for operating OpenStack clouds using Kubernetes in a scalable and reliable way.
- Magnum allows deploying and managing container orchestration engines like Docker Swarm, Mesos and Kubernetes on OpenStack.
- Zun is an OpenStack service for managing containers on OpenStack using projects like Docker and Kuryr.
- Kuryr-Kubernetes provides networking between Kubernetes and OpenStack Neutron.
Introduction to the Container Network Interface (CNI)Weaveworks
CNI, the Container Network Interface, is a standard API between container runtimes and container network implementations. These slides are from the Cloud Native Computing Foundation's Webinar, and explain what CNI is, how you use it, and what lies ahead on the roadmap.
Integrate Kubernetes into CORD(Central Office Re-architected as a Datacenter)inwin stack
- CORD aims to virtualize telecom central offices using open source software and commodity hardware. Kubernetes could help integrate NFV apps but challenges remain.
- Issues include converting existing VM-based NFVs to containers, supporting both OpenStack and Kubernetes, and ensuring the SDN controller ONOS can communicate with Kubernetes network components.
- The presenter's team addressed these by designing a multi-interface CNI plugin and centralized IPAM using Etcd to integrate ONOS and provide pod networking. Further work is needed to fully integrate ONOS control and test the solution.
How to build a Kubernetes networking solution from scratchAll Things Open
Presented by: Antonin Bas & Jianjun Shen, VMware
Presented at All Things Open 2020
Abstract: For the non-initiated, Kubernetes (K8s) networking can be a bit like dark magic. Many clusters have requirements beyond what the default network plugin, kubenet, can provide and require the use of a third-party Container Network Interface (CNI) plugin. But what exactly is the role of these plugins, how do they differ from each other and how does the choice of one affect your cluster?
In this talk, Antonin and Jianjun will describe how a group of developers was able to build a CNI plugin - an open source project called Antrea - from scratch and bring it to production in a matter of months. This velocity was achieved by leveraging existing open-source technologies extensively: Open vSwitch, a well-established programmable virtual switch for the data plane, and the K8s libraries for the control plane. Antonin and Jianjun will explain the responsibilities of a CNI plugin in the context of K8s and will walk the audience through the steps required to create one. They will show how Antrea integrates with the rest of the cloud-native ecosystem (e.g. dashboards such as Octant and Prometheus) to provide insight into the network and ensure that K8s networking is not just dark magic anymore.
Secure your K8s cluster from multi-layersJiantang Hao
The document discusses securing a Kubernetes cluster from multiple layers of risk. It covers securing the infrastructure layer by limiting access and exposure, the control plane layer by enabling TLS and RBAC, the workload layer using pod security policies and network policies, the container runtime layer with tools like Kata Containers, the user misconfiguration layer by avoiding defaults and validating configurations, and useful security tools. The presenter then provides contact information for potential job opportunities.
Deploying vn fs with kubernetes pods and vmsLibbySchulze1
This document discusses deploying virtual network functions (VNFs) using Kubernetes pods and VMs. It covers using single root I/O virtualization (SR-IOV) and Open vSwitch with Data Plane Development Kit (OVS-DPDK) for high performance networking. SR-IOV allows VNFs direct access to network interface cards to bypass the hypervisor. OVS-DPDK processes packets in userspace using DPDK for accelerated performance compared to native Linux networking or SR-IOV for some workloads. The document provides configuration details for enabling SR-IOV and OVS-DPDK on the host and specifying network interfaces in KubeVirt virtual machine instances.
In this slide, I briefly introduce the container and how docker implement it, including the image and container itself. also show how docker setup the networking connectivity by default bridge network.
Besides huge success in mobile, ARM is also ambitious in server field. Software ecosystem is now a barrier for wide deployment of ARM servers in data center. ARM Shanghai Workloads team is working on clouding and big data software enablement and optimization on ARM64 platform.
In this presentation, Yibo Cai will introduce the status and challenges of running OpenStack on ARM servers, with emphasis on OpenStack compute, storage and networking.
Kubernetes uses containers managed by container engines like Docker. It separates containers from the host machine using namespaces and cgroups for isolation. Docker containers share the host kernel and use aufs for the union filesystem. Virtual machines (VMs) run a full guest operating system with virtualization provided by hypervisors like KVM/QEMU. Containers are more lightweight than VMs as they share the host kernel and have smaller base images and faster launch times and resource usage.
Docker Networking with Container Orchestration Engines [Docker Meetup Santa C...Debra Robertson
The Docker container ecosystem is growing very fast and networking has taken an interesting direction with different networking models being introduced and it becomes even more interesting when container orchestration engines like Swarm, Mesos, Kubernetes have to implement networking for Docker containers. At this Meetup, we will talk about the networking capabilities for Docker, networking models like CNM (Container Network Model), how they fit into container orchestration frameworks, what's ready for production and what's in the design/discussion phase expected to be available in near future.
In this deck from the Docker Workshop at ISC 2015, Andreas Schmidt from Cassini Consulting describes Docker in a Nutshell
"As the newest flavor of Linux Containers, Docker gained a lot of momentum in the last 12 months. With a very convenient and open API-driven architecture Docker is able to help decrease the complexity of operations and increase the productivity of computation. During the last two years Andreas, Christian, and Wolfgang gained a lot of experience with Docker and were thrilled by its possible impact early on. Andreas started working with Docker in mid-2013 and is interested in developing tools for solving Enterprise IT requirements on networking and security. In 2014 he held talks and workshops about these topics. Christian started using Docker in 2013 to virtualize a complete HPC cluster stack and since then held multiple talks about how Docker might impact HPC. Wolfgang and his partner Burak Yenier introduced Docker as a corner-stone of the UberCloud Marketplace to drastically improve and simplify access to HPC cloud resources. UberCloud just announced their new containers for computational fluid dynamics software like Fluent, STAR-CCM+ and OpenFOAM."
Watch the video presentation: http://wp.me/p3RLHQ-enP
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Containers require a new approach to networking. How are your containers communicating with each other? This talk will go through the different network topologies of Kubernetes. How Kubernetes addresses networking compared to traditional physical networking concepts. What are your options for networking using Kubernetes. What is the CNI (Container Network Interface) and how it affects Kubernetes networking.
The document discusses iptables and network packet filtering. It begins with an introduction of the presenter and overview of iptables. It then covers how iptables works, including the tables, chains and communication between userspace and kernelspace. It discusses common iptables commands like flush and check. It also covers iptables extensions, how they are implemented in both userspace and kernelspace, and provides an example of custom TCP matching. The presentation aims to explain how the iptables userspace tools interact with the kernelspace netfilter system and custom extensions can be added.
The Contrail Virtual Execution Platform (VEP) allows Cloud administrators to manage data centers and monitor the usage of resources. Users can manage their distributed applications on IaaS Cloud providers under the control of Service Level Agreements (SLA). VEP applications are packaged in the standard OVF format and they are deployed inside Constrained Execution Environments (CEE) derived from the SLA, to support the specification of SLA contracts between users and providers.
These CEE environments allow to define constraints concerning virtual hardware performance, localization and affinity allowing the administrator to configure the monitoring system in order to feed external SLA enforcement services. VEP integrates elasticity management capabilities which can be controlled by external SLA enforcement services. A resource allocator service is integrated to dispatch the virtual components on the physical resources of the provider in accordance with the SLA terms.
The first version of VEP is currently implemented on OpenNebula. This talk presents the implementation of VEP on OpenNebula and discusses some implementation choices such as the resource allocator.
Kubernetes and OpenStack at Scale at OpenStack Summit Boston 2017
Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster and elastic infrastructure. Now, take that one step further - all of those images being updatable through a single upload to the registry, and with zero downtime. In this session, you will see just that.
In this presentation, we will walk through a recent benchmarking deployment using Kubernetes and OpenStack on the Cloud Native Computing Foundation’s (CNCF's) 1,000 node cluster with OpenStack and Red Hat’s OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
You'll also what's been happening in subsequent rounds of testing in Red Hat's own SCALE lab and the CNCF cluster and how we are working with the relevant open source communities including OpenStack, Kubernetes, and Ansible to continue to raise the bar for horizontal scaling of these platforms via community powered innovation.
This document summarizes an upcoming Kubernetes meetup in Geneva. It discusses the history of the Kubernetes meetup group in Geneva, including changes in leadership and growth of the community. The meetup occurs quarterly and covers a wide range of topics related to Kubernetes and the surrounding ecosystem. Speakers and sponsors for the upcoming meetup are also mentioned.
This document provides an overview of machine learning in cyber security. It discusses definitions of machine learning, cyber security, and how machine learning can be used for cyber security tasks like malware detection. It also covers theoretical concepts, hands-on materials like necessary software and lab setup, and guidance for projects. Specific machine learning and security tools are mentioned, like Docker for containerization. The document aims to explain the importance and applications of machine learning in cyber security.
This document introduces BioCloud, a tool for using cloud computing platforms like Hadoop to process large biological datasets in parallel. It discusses how biology applications are becoming more resource-intensive and how cloud platforms can provide scalable computing resources at a lower cost than local hardware. It provides an overview of Hadoop and MapReduce as a framework for processing vast amounts of data across clusters of machines. Examples of companies using Hadoop include Google, Yahoo, and Facebook for applications involving terabytes of data.
This document provides an introduction to Kubernetes, including definitions of key concepts like pods, services, labels, replica sets, deployments, and horizontal pod autoscaling. It explains how Kubernetes abstracts and virtualizes resources to run and manage containers across a cluster. Examples and diagrams illustrate concepts like pod networking and canary deployments. The document recommends resources for learning more about Kubernetes and getting started, including Google Cloud Platform and a demo of Kubernetes capabilities.
Webinar: OpenEBS - Still Free and now FASTEST Kubernetes storageMayaData Inc
Webinar Session - https://youtu.be/_5MfGMf8PG4
In this webinar, we share how the Container Attached Storage pattern makes performance tuning more tractable, by giving each workload its own storage system, thereby decreasing the variables needed to understand and tune performance.
We then introduce MayaStor, a breakthrough in the use of containers and Kubernetes as a data plane. MayaStor is the first containerized data engine available that delivers near the theoretical maximum performance of underlying systems. MayaStor performance scales with the underlying hardware and has been shown, for example, to deliver in excess of 10 million IOPS in a particular environment.
This document discusses containerization and the Docker ecosystem. It begins by describing the challenges of managing different software stacks across multiple environments. It then introduces Docker as a solution that packages applications into standardized units called containers that are portable and can run anywhere. The rest of the document covers key aspects of the Docker ecosystem like orchestration tools like Kubernetes and Docker Swarm, networking solutions like Flannel and Weave, storage solutions, and security considerations. It aims to provide an overview of the container landscape and components.
Docker allows creating isolated environments called containers from images. Containers provide a standard way to develop, ship, and run applications. The document discusses how Docker can be used for scientific computing including running different versions of software, automating computations, sharing research environments and results, and providing isolated development environments for users through Docker IaaS tools. K-scope is a code analysis tool that previously required complex installation of its Omni XMP dependency, but could now be run as a containerized application to simplify deployment.
This talk was given at a workshop entitled "Cybersecurity Engagement in a Research Environment" at Rady School of Management at UCSD. The workshop was organized by Michael Corn, the UCSD CISO. It tries to provoke discussion around the cybersecurity features and requirements of international science collaborations, as well as more generally, federated cyberinfrastructure systems.
Secure Your Containers: What Network Admins Should Know When Moving Into Prod...Cynthia Thomas
This session offers techniques for securing Docker containers and hosts using open source network virtualization technologies to implement microsegmentation. Come learn real tips and tricks that you can apply to keep your production environment secure.
Dataverse can be deployed using Docker containers to improve maintainability and portability. The document discusses how Docker can isolate applications and their dependencies into portable containers. It provides an example of deploying Dataverse as a set of microservices within Docker containers. Instructions are included on building Docker images, running containers, and managing the containers and images through commands and tools like Docker Desktop, Docker Hub, and Docker Compose.
Machine Learning , Analytics & Cyber Security the Next Level Threat Analytics...PranavPatil822557
This document provides an overview of machine learning, analytics, and cyber security presented by Manjunath N V. It includes definitions of key concepts like machine learning, data analytics, and cyber security. It also discusses how machine learning, data analytics, and cyber security are related and can be combined. The document outlines topics that will be covered, including theoretical foundations, hands-on materials, career opportunities, and demonstration of a final output.
Docker Orchestration: Welcome to the Jungle! Devoxx & Docker Meetup Tour Nov ...Patrick Chanezon
In two years, Docker hit the sweet spot for devs and ops, with tools for building, shipping, and running distributed apps architected as sets of collaborating microservices packaged as Linux containers. One area of the Docker ecosystem that saw a lot of innovation in the past year is container orchestration systems. This session compares and contrasts various Docker orchestration systems: Swarm, Machine, and Compose (the batteries included with Docker itself), Mesos, Kubernetes, CoreOS/Fleet, Deis, Cloud Foundry, and Tutum. It includes a demo of how to deploy a Java 8 app with MongoDB on several of these systems. The goal of the session is to give you a framework to help evaluate how these systems can meet your particular requirements.
Demo code at https://github.com/chanezon/docker-tips/blob/master/orchestration-networking/README.md
(DVO311) Containers, Red Hat & AWS For Extreme IT AgilityAmazon Web Services
Red Hat is helping organizations like Duke University become more efficient by delivering environmental parity for container-based applications across physical, virtual, private cloud, and public cloud environments. Red Hat delivers a comprehensive, integrated, and modular platform for containerized application delivery across the open hybrid cloud - from the OS platform, to software-defined storage, to development and deployment, and management. Through its work with Certified Cloud Service Providers like AWS, Red Hat ensures that application containers built for Red Hat Enterprise Linux can seamlessly move across public clouds. In this session, you will learn how Duke University used containers on Red Hat Enterprise Linux and AWS to combat a denial-of-service attack; how companies are using containers to increase the quality and speed of software delivery; key considerations for implementing container-based applications that can be moved across public clouds; and challenges organizations experience when using containers and how to address them. This session is sponsored by Red Hat.
Demystifying Containerization Principles for Data ScientistsDr Ganesh Iyer
Demystifying Containerization Principles for Data Scientists - An introductory tutorial on how Docker can be used as a development environment for data science projects
The document discusses Docker's platform and ecosystem. It describes Docker's mission to build tools for mass innovation by providing a software layer to program the internet. It outlines key components of Docker including Docker Engine, Swarm for clustering multiple Docker hosts, Compose for defining and running multi-container applications, and Docker Hub for sharing images. It also discusses the Linux container ecosystem underpinning Docker and roadmaps for continued development.
Latest (storage IO) patterns for cloud-native applications OpenEBS
Applying micro service patterns to storage giving each workload its own Container Attached Storage (CAS) system. This puts the DevOps persona within full control of the storage requirements and brings data agility to k8s persistent workloads. We will go over the concept and the implementation of CAS, as well as its orchestration.
Using the concept of fog to implement a unified IoT platform
Dynamically replacing the applications or algorithms
Managing the resources of the IoT devices
Collecting the data to analyze and improve the performance
Business Insider puts Docker at no. 22 on its list of 40 tech skills that will land you a $120K-plus salary. A good factoid to know if you are driven by money. On the other hand, Docker's technology is just flat-out fun if you are a Linux techie, delight in good DevOps, or just like cutting-edge innovation. This talk covers both the fun and the funds of Docker technology. You'll learn essential container concepts and see them in action, and you'll also get practical insight for applying container technology at your company.
Intro to cloud computing — MegaCOMM 2013, JerusalemReuven Lerner
What is cloud computing? This is an introduction that I gave at MegaCOMM 2013, a conference for technical writers in Jerusalem. The talk describes how the combination of Internet access, virtualization, and open source have made computing a utility that we can turn on and off at will -- similar in some ways to electricity, water, and other utilities with which we're familiar.
This document summarizes Dr. Anita Goel's presentation on cloud computing infrastructure at the Workshop on Big Data and Cloud Computing in India in 2016. The presentation included an introduction to cloud computing concepts like virtualization and software defined networking and storage. It discussed the need for cloud computing to improve processor and energy efficiency. It also defined cloud computing according to NIST and described common cloud service models. The remainder of the presentation outlined research directions in cloud computing architecture and control layers and listed several related publications by Dr. Goel and collaborators.
Similar to Raspberry pi x kubernetes x tensorflow (20)
Use PyCharm for remote debugging of WSL on a Windows machine (shadow0702a)
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
AI for Legal Research with applications, toolsmahaffeycheryld
AI applications in legal research include rapid document analysis, case law review, and statute interpretation. AI-powered tools can sift through vast legal databases to find relevant precedents and citations, enhancing research accuracy and speed. They assist in legal writing by drafting and proofreading documents. Predictive analytics help foresee case outcomes based on historical data, aiding in strategic decision-making. AI also automates routine tasks like contract review and due diligence, freeing up lawyers to focus on complex legal issues. These applications make legal research more efficient, cost-effective, and accessible.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is object orientation? What is OO development? OO themes; evidence for the usefulness of OO development; OO modeling history. Modeling as a design technique: modeling, abstraction, the three models. Class Modeling: object and class concepts, link and association concepts, generalization and inheritance, a sample class model, navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...PIMR BHOPAL
A Variable Frequency Drive (VFD) is an electronic device used to control the speed and torque of an electric motor by varying the frequency and voltage of its power supply. VFDs are widely used in industrial applications for motor control, providing significant energy savings and precise motor operation.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Build the Next Generation of Apps with the Einstein 1 Platform.
Join Philippe Ozil for a workshop session that will guide you through the details of the Einstein 1 platform, the importance of data for building artificial intelligence applications, and the various tools and technologies Salesforce offers to bring you all the benefits of AI.
Gas agency management system project report.pdfKamal Acharya
The project entitled "Gas Agency" is done to make the manual process easier by making it a computerized system for billing and maintaining stock. The gas agencies get order requests through phone calls or in person from their customers and deliver the gas cylinders to their addresses based on demand and the previous delivery date. This process is computerized, and the customer's name, address, and stock details are stored in a database. Based on this, billing for a customer is made simple and easier, since a customer's order for gas can be accepted only after a certain period has passed since the previous delivery. This can be calculated and billed easily through the system. There are two types of delivery: domestic-use delivery and commercial-use delivery. The bill rate and capacity differ for both, and this can be easily maintained and charged accordingly.
4. Where Is the Parking Spot: A Parking-Space Search and Reservation System
▸ OpenMTC
• A prototype implementation of an IoT/M2M middleware aiming to provide a standard-compliant platform for IoT services
▸ Devices
• Arduino UNO R3
• Distance Measuring Sensor
• Arduino WiFi Shield
• Wireless router
https://github.com/WakeupTsai/IOT_Projects
7. Motivation
▸ The Internet of Things (IoT) grows rapidly
▸ Produces an incredible amount of data
• Seriously overloads data centers and congests networks
8. Limitations of Current Solution
(Diagram: a huge amount of data is analyzed and computed in the data center)
9. Edge Analytics - Pre-processing
▸ Reduce latency
▸ Reduce network traffic
▸ Reduce the load of data centers
11. ▸ Fog computing simultaneously leverages devices in data centers, edge networks, and end devices
(Diagram: fog computing spanning end devices, edge networks, and data centers)
12. Advantages: Fog >> Cloud
▸ Reduce network traffic
▸ Short response time
▸ Diverse kinds of resources
• Computations, communications, storage, and sensors
▸ Utilize wasted resources
▸ Low cost
▸ Low carbon footprint
▸ …
13. Requirement of Dynamic Deployment
▸ Frequently updating or replacing the applications
• Application virtualization
▸ Managing lots of fog devices and applications
• Orchestration tool
▸ Triggering another application when an event occurs
• Event-driven mechanism
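The event-driven mechanism above can be sketched as a small dispatcher that maps event topics to handler callbacks. The topic name and handler below are hypothetical, purely for illustration; the slides do not specify an implementation.

```python
# Minimal event-driven dispatcher sketch: handlers are registered per
# event topic and fired whenever a matching event is published.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a callback for a topic.
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        # Fire every handler registered for the topic, collecting results.
        return [handler(payload) for handler in self._handlers[topic]]

# Hypothetical usage: a motion event triggers deployment of another app.
bus = EventBus()
bus.subscribe("motion/detected",
              lambda p: f"deploy surveillance app near {p['sensor']}")
print(bus.publish("motion/detected", {"sensor": "streetlight-3"}))
```

A real platform would back this with a message broker (the deck uses MQTT later on), but the register-then-trigger shape is the same.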
14. Virtualization Technology
▸ Virtualized modules
• Dynamically placed on the fog devices
• Migrated among the fog devices
• Allocated resources on demand
• More private
15. Traditional VM vs. Container
▸ Container
• Shares the same OS kernel and uses namespaces to distinguish one container from another
▸ Traditional Virtual Machine
• Needs large storage space and more computing power
What is a Container [Digital image]. (n.d.). Retrieved from https://www.docker.com/what-container
17. Docker Image
▸ A Docker image is built up from a series of layers. Each layer represents an instruction in the image's Dockerfile. Every layer except the very last one is read-only.
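As an illustration of the layer-per-instruction idea, a hypothetical Dockerfile like the one below would produce one image layer per filesystem-changing instruction (the base image, file names, and command are made up for the example):

```dockerfile
# Base image: its layers are pulled read-only.
FROM python:3.9-slim
# Each of the following instructions adds one new layer.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
# CMD changes image metadata only; it adds no filesystem layer.
CMD ["python", "app.py"]
```

When the container runs, a thin writable layer is stacked on top of these read-only layers, which is why containers start quickly and share storage.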
27. SCALE - Safe Community Alert Network
▸ Using the available, cheap sensors we have to build an affordable, practical security system
https://github.com/WakeupTsai/SmartAmericaSensors
28. MQTT
▸ MQTT is an M2M/IoT connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport.
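A distinctive part of MQTT's publish/subscribe model is its hierarchical topics with wildcards: `+` matches exactly one topic level and `#` matches all remaining levels. The function below is a simplified sketch of that matching rule in plain Python, not a full implementation of the MQTT specification:

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Check an MQTT-style topic against a subscription filter.

    '+' matches exactly one topic level; '#' matches all remaining levels.
    """
    filter_levels = topic_filter.split("/")
    topic_levels = topic.split("/")
    for i, part in enumerate(filter_levels):
        if part == "#":
            return True  # multi-level wildcard swallows the rest
        if i >= len(topic_levels):
            return False  # topic is shorter than the filter
        if part != "+" and part != topic_levels[i]:
            return False  # literal level mismatch
    # All filter levels matched; topic must not have extra levels left.
    return len(filter_levels) == len(topic_levels)

print(topic_matches("sensors/+/motion", "sensors/streetlight3/motion"))  # True
print(topic_matches("sensors/#", "sensors/streetlight3/temp"))           # True
```

In the deck's scenario, a subscription like `sensors/+/motion` (a hypothetical topic name) would receive motion events from every street light with a single filter.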
30. Kubernetes Dashboard
▸ Fog device status monitoring
• Resource usage (CPU, memory)
• Container status
Provides important information for the deployment strategy.
https://github.com/WakeupTsai/FogComputingPlatform-dashboard
https://github.com/kubernetes/dashboard
34. Requirement of Edge IoT Analytics
▸ Raw sensor data are huge
• Deep learning
▸ Resource-constrained fog computing devices
• Distributed computing
35. Pre-processing with Deep Learning
▸ TensorFlow
• An open-source software library for Machine Intelligence
• Data flow graphs
❖ Nodes - mathematical operations
❖ Edges - multidimensional data arrays (tensors)
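To illustrate the data-flow-graph idea (nodes are operations, edges carry tensors), here is a toy evaluator in plain Python; it is not TensorFlow itself, and plain lists stand in for tensors:

```python
# Toy data-flow graph: each node is an operation; edges are the values
# flowing between nodes. Evaluation walks the graph from the output back.
class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function applied to the input values
        self.inputs = inputs  # upstream nodes (the incoming edges)

    def eval(self):
        return self.op(*(n.eval() for n in self.inputs))

# Element-wise ops over lists, standing in for tensor ops.
const = lambda v: Node(lambda: v)
add = lambda a, b: Node(lambda x, y: [i + j for i, j in zip(x, y)], a, b)
mul = lambda a, b: Node(lambda x, y: [i * j for i, j in zip(x, y)], a, b)

# Graph for (x + y) * y, with x = [1, 2] and y = [3, 4].
x, y = const([1, 2]), const([3, 4])
out = mul(add(x, y), y)
print(out.eval())  # [12, 24]
```

Real TensorFlow builds the same kind of graph, but its nodes run optimized kernels and the graph can be partitioned across devices, which is what makes distributed edge analytics possible.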
41. Master
▸ User interface
▸ Operator deployment algorithm
• Decides which operators to deploy on which minions
▸ Device manager
• Collects crucial device status
▸ Deployment manager
• Launches specific Docker images on chosen minions
▸ Image pool
• Docker images are stored in the image pool at the server
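The slides do not spell out the operator deployment algorithm; one plausible sketch is a greedy placement that assigns each operator to the minion with the most free CPU that can still host it. All names and capacity numbers below are hypothetical:

```python
def place_operators(operators, minions):
    """Greedy sketch: assign each operator (name, cpu_demand), largest
    demand first, to the minion with the most remaining free CPU."""
    free = dict(minions)  # minion name -> free CPU (copied, not mutated)
    placement = {}
    for name, demand in sorted(operators, key=lambda o: -o[1]):
        best = max(free, key=free.get)  # minion with most free capacity
        if free[best] < demand:
            raise RuntimeError(f"no minion can host {name}")
        placement[name] = best
        free[best] -= demand
    return placement

plan = place_operators(
    [("object-detection", 2.0), ("image-capture", 0.5)],
    {"minion-1": 1.0, "minion-2": 4.0},
)
print(plan)  # {'object-detection': 'minion-2', 'image-capture': 'minion-2'}
```

A production algorithm would also weigh memory, network locality to the sensors, and the device status reported by the client agents, but the shape of the decision (match demand against reported capacity) is the same.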
42. Minions
▸ TensorFlow-enabled container
• Docker containers including TensorFlow and its analytic libraries
▸ Client agent
• Monitors and reports the status of minions and pods to the device manager
▸ Local image pool
45. How to Implement
▸ Kubernetes YAML
▸ Docker images (TensorFlow base image)
▸ Python code (import the TensorFlow library)
https://hub.docker.com/r/wakeup706/rpi-tensorflow/
https://github.com/samjabrahams/tensorflow-on-raspberry-pi
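A minimal sketch of what such a Kubernetes YAML might look like, using the rpi-tensorflow image listed above; the pod name, label, entry script, and resource limit are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tf-preprocess          # hypothetical pod name
  labels:
    app: edge-analytics
spec:
  containers:
  - name: tensorflow
    image: wakeup706/rpi-tensorflow     # TensorFlow base image for Raspberry Pi
    command: ["python", "analytics.py"] # hypothetical analytics entry script
    resources:
      limits:
        memory: "512Mi"        # fog devices are resource-constrained
```

Applying this with `kubectl apply -f pod.yaml` would schedule the TensorFlow container onto one of the minions.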
46. Helm
▸ Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.
https://github.com/helm/helm
https://github.com/WakeupTsai/FogComputingPlatform-Auto-Deploy
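A chart is just a directory of templated Kubernetes resources plus metadata; a minimal, hypothetical chart for the platform could be laid out like this:

```
fog-platform/
├── Chart.yaml            # chart name, description, and version
├── values.yaml           # configurable defaults (image, replica count, ...)
└── templates/
    └── deployment.yaml   # Kubernetes resources with {{ .Values.* }} placeholders
```

Installing the chart renders the templates with the values and submits the resulting resources to Kubernetes, which is how the auto-deploy repository above packages the platform.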
47. Scenario
(Diagram: the Master runs the Alerter, Kubernetes, and an MQTT Broker; three Minions run the Environment Monitor, Image Capture, and Object Detection apps)
▸ Deploy an environment monitor app on every street light.
▸ A monitor publishes a motion-detect event.
▸ The Alerter subscribes to the event from the broker, then determines it is an intrusion.
▸ The Alerter asks Kubernetes to launch the surveillance service.
▸ Kubernetes deploys an image capture app and an object detection app on the minions.
▸ The image capture app captures an image and publishes it.
▸ The object detection app subscribes to the image, performs the analysis, and publishes the result.
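The Alerter's decision step in the scenario can be sketched as a callback that turns a motion event into deployment requests. The event fields, confidence threshold, app names, and `deploy` function are all hypothetical stand-ins for whatever the platform actually uses:

```python
def make_alerter(deploy, threshold=0.8):
    """Return a callback that, on a sufficiently confident motion event,
    asks the orchestrator (via `deploy`) to launch the surveillance apps."""
    def on_event(event):
        if event.get("type") == "motion" and event.get("confidence", 0) >= threshold:
            # Launch the surveillance pipeline near the reporting sensor.
            deploy("image-capture", node=event["source"])
            deploy("object-detection")
            return "intrusion"
        return "ignored"
    return on_event

# Record deployment requests instead of calling a real orchestrator.
launched = []
alerter = make_alerter(lambda app, **kw: launched.append(app))
print(alerter({"type": "motion", "confidence": 0.93, "source": "streetlight-7"}))
print(launched)
```

In the real platform the `deploy` callback would go through the Kubernetes API (or Helm) rather than appending to a list, but the subscribe-decide-launch flow is the one shown in the slide.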