SDN x Cloud Native Meetup #38
Introduces the VSCode Remote Development tools and demonstrates how to build a cross-language, container-based development environment with VSCode Development Containers. Languages such as Java, Python, Node.js, and Go can all be developed this way, with one container per project, so the local machine stays uncontaminated and development work can proceed safely.
Open Source Software and Hardware Integration Trend for Embedded Smart Applications, by William Liang
These slides describe development trends in smart applications and the role and importance of open source software and open platforms.
Presented at: Open Source Software and Hardware Integration Trend for Embedded Smart Applications, at the Intel 2015 Embedded Applications Forum, hosted by Digitimes at the Grand Victoria Hotel Taipei ballroom on 2015/03/24.
In the PIXNET research team, many data and AI research results need to be offered as services to the rest of the company, but in the early days, service design and deployment required the backend and operations teams to step in. After deeply integrating the various services offered by Google Cloud Platform, system development and operations can now be controlled by the research team itself, and App Engine's traffic-splitting feature opens up various A/B testing possibilities for optimizing the AI services.
The document discusses migrating to cloud native solutions. It defines cloud native as an approach that exploits the advantages of cloud computing using containers, microservices, and other modern technologies. This allows applications to be scalable, resilient, and manageable. The document outlines the benefits of cloud native and provides a "trail map" to transitioning applications. It also discusses common challenges like technical debt and failing to meet CI/CD expectations, and provides recommendations to address them such as automating processes and simplifying architectures.
This document discusses how Citrix Application Delivery Management (ADM) can be used to manage Citrix ADC instances at scale in cloud-native environments. Key points include:
- Citrix ADM allows controlling and gaining insights from one to thousands of Citrix ADC instances (VPX, MPX, CPX), across container platforms like Mesos/Marathon and Kubernetes.
- Metadata from Citrix ADCs provides valuable information to Citrix ADM for an "App Health Score", including user experience metrics, security threats, and device health.
- Citrix ADM provides capabilities for app-centric lifecycles, configuration at scale, visibility, and security across Citrix ADC instances.
This document summarizes a presentation about electronic invoices and Google Cloud Functions. It discusses electronic invoices in Taiwan, how Google Cloud Functions work and can be used to process events like changes to files in Cloud Storage buckets or messages published to Pub/Sub topics. It also provides examples of using Cloud Functions to process electronic invoice notifications received via Gmail, extract data from the CSV attachments, and insert the data into a Google Sheet. The document concludes with some tips, like needing to decode the CSV attachments which use ANSI encoding, and how to set permissions for Cloud Pub/Sub.
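One of the deck's tips, decoding the ANSI-encoded CSV attachments, can be sketched from the shell. CP950 (Big5) as the concrete "ANSI" codepage and the invoice.csv filename are assumptions for illustration, not details from the deck:

```shell
# Sketch: convert an "ANSI"-encoded e-invoice CSV to UTF-8 before parsing.
# CP950 (Big5) is assumed here; invoice.csv is a placeholder attachment name.
printf 'id,amount\n1,100\n' > invoice.csv      # stand-in for the real attachment
iconv -f CP950 -t UTF-8 invoice.csv > invoice-utf8.csv
cat invoice-utf8.csv
```

For real attachments the same `iconv` invocation applies; only the input file changes.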
1. The document discusses using a new high-efficiency and agile platform to quickly implement cloud-native development.
2. It introduces Pivotal Container Service (PKS) which provides a production-ready container platform for deploying Kubernetes clusters on VMware vSphere.
3. PKS leverages technologies like VMware NSX-T and BOSH to provide capabilities like security, high availability, and auto-scaling for container workloads.
The Last Mile of Digital Transformation: Democratizing AI (inwin stack)
1) The document discusses democratizing AI and digital transformation through building an AI cloud platform in Taiwan with ASUS, Gigabyte, and Taiwan Mobile to provide computing resources.
2) It describes ASUS WebStorage, a personal cloud storage service with features like cross-device syncing, backup, photo uploading, sharing, and file searching. Storage is free up to 5GB with more for ASUS device owners.
3) The last mile of digital transformation is the democratization of AI through addressing challenges of computing power, tools, data, and commercialization. ASUS is working to provide cloud infrastructure, data platforms, AI platforms, and develop vertical ecosystems and applications.
The document discusses the history and development of Cloud Foundry and Kubernetes container technologies. It provides steps to deploy SUSE Cloud Foundry on Kubernetes, including adding Helm charts, deploying UAA and Cloud Foundry, and checking pod statuses. The benefits highlighted are providing the familiar Cloud Foundry developer experience on Kubernetes, leveraging innovations to improve agility, and allowing customers to simplify their digital transformations.
An Open, Open Source Way to Enable Your Cloud Native Journey (inwin stack)
The document discusses SUSE's open source strategy and product portfolio. It outlines that SUSE is committed to open source, being a leader in the community, and delivering open and flexible technology to customers. It then provides an overview of SUSE's products for application delivery, infrastructure management, software-defined infrastructure, and container management.
This document provides an overview and outline of topics related to operating and maintaining Kubernetes in production environments. It discusses considerations for self-hosting Kubernetes versus using managed Kubernetes services, techniques for managing stateful applications, logging and monitoring, continuous delivery practices, troubleshooting, and trends in Kubernetes technologies. The document also provides references and advice for using kubectl effectively when managing multiple Kubernetes clusters and namespaces.
This document summarizes Fission, an open source Kubernetes-native serverless framework for running serverless applications on Kubernetes clusters both on-premises and on cloud providers. It offers portability, a flexible cost model through optimizations such as autoscaling, and integration with DevOps pipelines. Fission treats functions as core objects, alongside triggers and environments, and stores configuration in Kubernetes custom resource definitions. It supports various programming languages and event sources, and its pool-based and new-deployment execution models provide a tunable cost/performance tradeoff. The document demonstrates creating and testing functions through Fission's development workflow and deployment features such as canary deployments.
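The development workflow the deck demonstrates might look like the following, assuming a cluster with Fission already installed; the environment, function name, source file, and route are illustrative:

```shell
# Hypothetical Fission workflow: create a language environment, register a
# function from source, invoke it once, then expose it over HTTP.
fission env create --name python --image fission/python-env
fission function create --name hello --env python --code hello.py
fission function test --name hello
fission route create --method GET --url /hello --function hello
```

These commands require a live Kubernetes cluster with Fission, so they are a sketch of the workflow rather than a self-contained script.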
This document discusses the evolution of web backend technologies. It covers the history and concepts of infrastructure as code, immutable infrastructure, blue-green deployments, and canary deployments. It also discusses tools for physical delivery, virtual machines, configuration management, continuous integration/delivery, Docker, and Kubernetes. Kubernetes makes it easy to implement infrastructure as code practices and deployment strategies like blue-green and canary deployments through features like deployments and services.
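The blue-green strategy mentioned above can be sketched with Deployments and a Service whose selector is flipped between versions; every name and label here is hypothetical:

```shell
# Hypothetical blue-green cutover: two Deployments (version=blue / version=green)
# sit behind one Service; flipping the Service selector moves all traffic at once.
kubectl apply -f myapp-green.yaml                 # roll out the new (green) version alongside blue
kubectl rollout status deployment myapp-green     # wait until green is fully available
kubectl patch service myapp \
  -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'
```

If green misbehaves, patching the selector back to version=blue is an immediate rollback, which is the main appeal of the pattern.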
This document summarizes using Kubernetes to deploy a Spark big data computing environment. It discusses why Kubernetes is preferable to other solutions like Cloudera for managing Spark. The architecture of running Spark on Kubernetes is shown, with the Spark master and worker controllers. Performance is compared between Spark on Kubernetes and standalone Spark using the SparkPI and WordCount examples. Support for Spark 2.3.0 on Kubernetes is now official.
Setup Hybrid Clusters Using Kubernetes Federation (inwin stack)
This document summarizes how to setup hybrid clusters using Kubernetes Federation. It discusses the benefits of federation such as keeping applications synced across multiple clusters and configuring network resources to route traffic. It then describes the federation architecture including the federation control plane and federated resources/clusters. Finally, it provides steps to setup a demo federation including initializing the control plane and joining clusters from different regions.
This document introduces riff, an open source project that provides a serverless platform for executing functions in response to events. It discusses using riff to run functions on Kubernetes in a polyglot way and scale them based on concurrent event load. Example use cases for event-driven functions include stream processing, web events, integration and FaaS. The document also covers integrating riff with Spring Cloud Function and using riff to transform a monolithic application into microservices.
The document discusses monitoring Kubernetes clusters using Prometheus and Grafana. It describes how Prometheus scrapes metrics using exporters like Node Exporter and stores them in a time series database. Grafana is used to build dashboards and visualize the metrics collected by Prometheus. It provides configuration details for deploying Prometheus, Node Exporter, and Grafana as Kubernetes deployments and accessing the services.
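The scrape setup described above can be sketched as a prometheus.yml fragment; the job name and target address are assumptions for illustration:

```yaml
# Hypothetical scrape config: pull Node Exporter metrics every 15 seconds.
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: node-exporter
    static_configs:
      - targets: ['node-exporter.monitoring.svc:9100']
```

Grafana then points at the Prometheus service as a data source and builds dashboards over the collected time series.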
This document discusses using Kubernetes to implement highly reliable applications. It begins with an agenda that includes an overview of microservices, an introduction to Kubernetes, and using NodeRed and Kubernetes to build a chatbot. It then provides background on microservices architecture, explaining how applications have evolved from huge monolithic applications to independent microservices that can be deployed and updated more quickly. It introduces Kubernetes concepts like pods, deployments, statefulsets, daemonsets and jobs. It also discusses using Kubernetes to run NodeRed chatbot containers as a deployment, including load balancing, self-healing and scaling benefits. Challenges with logging and maintaining chat conversations across containers are noted.
Integrate Kubernetes into CORD (Central Office Re-architected as a Datacenter) (inwin stack)
- CORD aims to virtualize telecom central offices using open source software and commodity hardware. Kubernetes could help integrate NFV apps but challenges remain.
- Issues include converting existing VM-based NFVs to containers, supporting both OpenStack and Kubernetes, and ensuring the SDN controller ONOS can communicate with Kubernetes network components.
- The presenter's team addressed these by designing a multi-interface CNI plugin and centralized IPAM using Etcd to integrate ONOS and provide pod networking. Further work is needed to fully integrate ONOS control and test the solution.
This document discusses running distributed TensorFlow on Kubernetes. It provides an introduction to Kubernetes and how it can schedule GPUs. It then discusses distributed TensorFlow, how to set it up to run across multiple workers and parameter servers. Finally, it discusses how to package the TensorFlow code into a Docker container and deploy it on Kubernetes, taking advantage of Kubernetes' scaling, load balancing and fault tolerance.
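The worker/parameter-server layout described above is commonly wired up through a TF_CONFIG environment variable injected into each container; the hostnames, ports, and trainer.py script here are placeholders, not from the deck:

```shell
# Hypothetical TF_CONFIG for worker 0 of a 2-worker / 1-PS TensorFlow job;
# each pod receives the same cluster spec but its own task type and index.
export TF_CONFIG='{
  "cluster": {
    "ps": ["ps-0.tfjob:2222"],
    "worker": ["worker-0.tfjob:2222", "worker-1.tfjob:2222"]
  },
  "task": {"type": "worker", "index": 0}
}'
python trainer.py
```

On Kubernetes, a headless Service typically provides the stable per-pod hostnames that the cluster spec references.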
Containers isolate processes using cgroups to limit resource utilization and namespaces to restrict what each process can see. Containers run on a single operating system kernel, shared with other containers, so they use fewer resources than virtual machines, which each run an entire guest operating system. Docker is the most common container platform; it packages applications and their dependencies into portable containers that can run on any Linux server.
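The cgroup-backed limits mentioned above are visible directly in a docker run invocation; the image, name, and values are illustrative:

```shell
# Illustrative cgroup limits: cap memory at 256 MiB, CPU at half a core,
# and the number of processes inside the container at 100.
docker run -d --name limited --memory=256m --cpus=0.5 --pids-limit=100 nginx
```

This requires a running Docker daemon, so it is a sketch of the flags rather than a portable script.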
This document discusses using Kubernetes to implement highly reliable applications. It begins with an introduction to microservices and containers. It then provides an overview of Kubernetes, including Kubernetes clusters, concepts like deployments and pods. It concludes by demonstrating running a NodeRed chatbot application on Kubernetes, comparing the Kubernetes architecture to a traditional VM architecture. The benefits of the Kubernetes approach for this application are around resource utilization, scalability, load balancing and self-healing.
How to Integrate Kubernetes in OpenStack: You Need to Know These Projects (inwin stack)
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications, while OpenStack is a free and open-source software platform for cloud computing, networking, and storage. The document discusses different ways to integrate Kubernetes and OpenStack, including using Zun to provide an OpenStack API for launching and managing containers, Magnum to offer container orchestration engines for deploying and managing containers, Kolla and Kolla Kubernetes to deploy OpenStack on Kubernetes, Kuryr Kubernetes to bridge networking models between containers and OpenStack, and Stackube which uses Kubernetes as the compute fabric controller instead of Nova.
4. Copyright 2015 ITRI (Industrial Technology Research Institute)
Structure
User chooses custom settings (image, GPU)
The DNN API creates the container
User uploads the training data
User SSHes into the container
Model trains on the data
User downloads the finished trained model
From Development to Production
Manual pain points:
Source code storage
Setting configs, run scripts
Version control
Server IP and port conflicts
From Development to Production
Fix for "setting configs, run scripts": bake them into the container start command.
command: [ "/bin/bash", "-c", "sh /adduser.sh; service ssh start" ]
Fix for "source code storage" and "version control": push tagged images and roll them out.
docker push ${IMAGE_NAME}:${Version}
kubectl set image deployment ${name} ${name}=${IMAGE_NAME}:${Version} --record
kubectl rollout status deployment ${name}
From Development to Production
apiVersion: v1
kind: Service
metadata:
  labels:
    app: dnn-server
  name: dnn-server
spec:
  ports:
  - name: tcp
    nodePort: 30001
    port: 8000
  selector:
    app: dnn-server
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: dnn-api
  name: dnn-api
spec:
  ports:
  - name: tcp
    nodePort: 30002
    port: 8000
  selector:
    app: dnn-api
  type: LoadBalancer
Fix for "server IP, port conflicts": each component gets its own Service with a stable name and a distinct NodePort.