Kubernetes Story - Day 2: Quay.io Container Registry for Publishing, Building... (Mihai Criveti)
Friday Brunch - a Kubernetes Story - Day 2: Build containers with Buildah, Skopeo and Quay.io https://www.youtube.com/watch?v=ygJrzMIZiWQ
In this workshop you'll learn how to build and manage containers, publish images to Quay, then install and deploy containers onto OpenShift.
Experience new tools to build, manage and deploy containerized applications following best practices. Learn how to build containers locally with podman, skopeo and buildah, publish and scan containers for vulnerabilities, and deploy containerized applications locally or in the cloud using Kubernetes and OpenShift!
Mihai will take you through the process of:
Day 1 = Build: Building and running container images locally with podman, skopeo and buildah. Whether you've been building containers for years or are just getting started, check out these new tools that help you build and run containers locally, and see how they can help you get started with Kubernetes and OpenShift.
Learn some of the best practices on how you can build containers that run as regular users and how to automate the container build process using buildah. Learn about the Universal Base Image and how you can start your image builds from a known, trusted source.
and then over the next two Fridays the story will evolve as follows...
Day 2 = Publish: Publishing container images to quay.io and scanning containers for vulnerabilities and container best practices
Day 3 = Deploy: Getting started with OpenShift using CodeReady Containers or OKD and deploying containers on a Kubernetes Platform (Red Hat OpenShift / OKD / CRC)
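As a rough illustration of the Day 1 build-stage practices (trusted base image, unprivileged user), a Containerfile along these lines could be used; the image tag, user ID and `app.sh` path are placeholder assumptions, not taken from the workshop:

```dockerfile
# Start from a known, trusted source: the Red Hat Universal Base Image.
FROM registry.access.redhat.com/ubi9/ubi-minimal

# Copy the application in (app.sh is a placeholder for your workload).
COPY app.sh /opt/app/app.sh

# Create an unprivileged user so the container runs as a regular user,
# not as root. shadow-utils provides useradd on ubi-minimal.
RUN microdnf install -y shadow-utils && \
    useradd --uid 1001 --create-home appuser && \
    chmod +x /opt/app/app.sh
USER 1001

ENTRYPOINT ["/opt/app/app.sh"]
```

Such a file can be built with `buildah bud -t myapp .` or `podman build -t myapp .`, and the resulting image inspected or copied between registries with skopeo.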
Kubernetes Story - Day 1: Build and Manage Containers with Podman (Mihai Criveti)
OpenShift Workshop Day 1: https://www.youtube.com/watch?v=3IuaZu8-fsY - Build and Manage Containers with Podman
This document discusses Puppeteer, an open-source Node library which provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol. It covers using Puppeteer to take screenshots, generate PDFs, emulate user interactions like keyboard input, and run end-to-end tests with Jest. Examples are provided for common tasks like navigating pages, selecting elements, and interacting with pages in an automated fashion.
Get started with Ansible - an introduction for Python developers
Ansible: Provisioning and Configuration Management
Molecule: Test your Ansible Playbooks on Docker, Vagrant or Cloud
Vagrant: Test images with vagrant
Kubernetes Story - Day 3: Deploying and Scaling Applications on OpenShift (Mihai Criveti)
Day 3: OpenShift, CodeReady Containers and Operators https://www.youtube.com/watch?v=0txK3icU2Pg
Lessons Learned: Using Concourse In Production (Shingo Omura)
The document summarizes ChatWork's experience using Concourse for continuous integration and delivery (CI/CD) of their infrastructure projects. It describes ChatWork's context and use case, highlighting the benefits of using Concourse such as reduced operational load and easier development and testing of deployment processes. It also provides tips for developing pipelines in Concourse and notes some limitations around authorization and parameterized jobs that could be improved.
Docker is a relatively new technology, but it is based on solid underpinnings of the Linux kernel. It can provision instances in a fraction of the time of a traditional virtual machine, which makes it a great candidate for development teams that want consistent test benches for their developers. Bring a laptop, set up your own disposable Docker environments, and make your development a pleasurable experience.
- The document discusses Docker, a tool that allows users to package applications into standardized units called containers for development, shipping and running applications.
- It provides an overview of Docker concepts like images, containers, the Dockerfile and Docker Hub registry. It also includes examples of Docker commands and a sample Dockerfile.
- The document encourages readers to use Docker for benefits like continuous integration/delivery, distributed applications and easy application deployment in a platform-as-a-service model.
This presentation provides an overview of Docker concepts and commands for Java developers. It covers creating Docker hosts, running containers, building images, running applications in Docker, linking containers, composing with Docker Compose, and an overview of Docker Swarm clustering. Live coding examples are shown for many of the Docker commands and concepts.
Openshift: The power of kubernetes for engineers - Riga Dev Days 18 (Jorge Morales)
1. The document introduces OpenShift as a container application platform based on Kubernetes that provides developers with tools for building, deploying and managing containerized applications.
2. It discusses key OpenShift concepts like pods, services, projects and image registries that allow grouping and connecting container workloads as well as storing and distributing container images.
3. Hands-on examples and tutorials are provided to demonstrate how developers can use OpenShift to develop multi-container applications from source code to deployment through features like source-to-image builds, deployments and routes.
Docker at Djangocon 2013 | Talk by Ken Cochrane (dotCloud)
Ken Cochrane gave a presentation on Docker and Docker's suitability for Django projects. He began with an introduction to Docker, explaining how it uses Linux containers to package applications into lightweight portable containers. He then discussed several common use cases for Docker like local development, continuous integration/deployment, and testing. The presentation concluded with a demo of Docker commands and a discussion of upcoming Docker 1.0 features.
Slides from my Jfokus 2015 talk.
# Abstract
Does your application deployment rely on an unhealthy amount of shell scripting glue code? Start encapsulating your application and its runtime environment in Docker containers.
This talk will give you a brief introduction to the concepts behind Docker and a handful of tips to get you started on the exciting journey towards a more robust and reliable application deployment. By the end of the talk you will have learned how to build and deploy Docker images, how to let your containers talk to each other and why the JVM and Docker are a perfect match.
Continuous Integration using Docker & Jenkins (B1 Systems GmbH)
This document discusses using Docker and Jenkins for continuous integration. It introduces B1 Systems and their areas of expertise including virtualization, configuration management, and cloud technologies. It then describes how Docker is used to build and deploy applications into containers and how Fig, GitLab, Jenkins, and Puppet are integrated to provide continuous integration, collaboration on code, and configuration management capabilities. Use cases are presented for automatically testing Puppet modules and integrating/testing a simple web application.
Introductory seminar on Docker and its components (networks and Compose in particular). Focused on going through some basic concepts, mention some more advanced topics, and introduce a practical workshop held on the same evening.
Docker-hanoi meetup #1: introduction about Docker (Nguyen Anh Tu)
This document provides an overview of Docker's growth and ecosystem over the past 15 months since its launch in March 2013. It highlights the large community of over 460 contributors and 250+ meetup groups, along with over 2.75 million downloads and 6,700 projects on GitHub using Docker. The document also thanks the individuals and projects that helped make Docker possible, including various open source projects, as well as some of the early adopter users and partners helping to build the Docker ecosystem.
Considerable improvements can be achieved by automating the integration of Kamailio-based projects: automated builds, tests and deployments save time and increase reliability. This presentation focuses on common practices to automate the build of Kamailio (and RTPEngine) on various distributions and deploy them, together with their configuration, on testing and production environments.
Docker plays an important role in providing flexible, clean build environments and keeping the process reproducible. We'll see how Jenkins can orchestrate the builds with Docker slaves and perform the deployments with a combination of platform-specific packages, Fabric, Puppet and Ansible.
Docker has created enormous buzz in the last few years. Docker is an open-source software containerization platform that provides the ability to package software into standardised units for software development. In this hands-on introductory session, I introduce the concept of containers, provide an overview of Docker, and take the participants through the steps for installing Docker. The main session involves using the Docker CLI (Command Line Interface): all the concepts, such as images, managing containers, and getting useful work done, are illustrated step by step by running commands.
Docker allows applications to be packaged with all their dependencies and run consistently across computing environments. It provides isolation, security and portability for applications. This document discusses setting up an Eh Avatar application to run in Docker containers for Postgres, Redis and the application itself. It covers bringing up the dependency containers, building a custom Docker image for the application, and using Docker Compose to define and run the multi-container application. While this provides an introduction, there is still more to learn about optimizing Docker usage and avoiding common pitfalls.
1. The document discusses setting up a private Docker registry using Docker Registry and Nginx on local, AWS EC2, and adding authentication with basic auth and HTTPS.
2. Key steps include running Docker Registry with port 5000, linking it to Nginx, and configuring Nginx as a reverse proxy. Authentication is added using htpasswd and securing access with HTTPS and self-signed certificates.
3. The process involves building a test image, pushing it to the local registry, then pushing it to the registry accessible at an external URL after configuring the necessary network, domain name, and security settings.
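A minimal sketch of the Nginx side of such a private-registry setup might look like the following; the hostname, certificate paths and htpasswd location are placeholder assumptions, not details from the original deck:

```nginx
server {
    listen 443 ssl;
    server_name registry.example.com;                   # placeholder external URL

    ssl_certificate     /etc/nginx/certs/registry.crt;  # self-signed for testing
    ssl_certificate_key /etc/nginx/certs/registry.key;

    # Basic auth backed by an htpasswd file
    auth_basic           "Docker Registry";
    auth_basic_user_file /etc/nginx/conf.d/registry.htpasswd;

    # Docker clients push large image layers; don't cap the body size
    client_max_body_size 0;

    location / {
        proxy_pass http://localhost:5000;               # the linked registry container
        proxy_set_header Host            $http_host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The credentials file referenced above can be generated with `htpasswd -c registry.htpasswd <user>`, and clients then log in with `docker login registry.example.com`.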
Adrian Otto from Rackspace will present his perspective of "Docker 101", for Docker novices. Learn the difference between Dockerfiles, containers, running containers, terminated containers, container images, Docker Registry, and a demo of the Docker CLI that goes beyond what you learn from the online simulator.
The document discusses Python virtual environments (virtualenv) and the pip package manager. It introduces virtualenv and pip, explains why they are useful tools for isolating Python environments and managing packages, and provides exercises for creating virtual environments, using pip to install/uninstall packages, creating your own pip packages, and sharing packages on PyPI. The goal is to help users understand and learn to use these tools in 90 minutes.
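The core of the virtualenv exercise can be reproduced with the standard library's `venv` module. This sketch (the directory name is arbitrary) creates an isolated environment and checks for its `pyvenv.cfg` marker file:

```python
import tempfile
import venv
from pathlib import Path

def create_env(path: str) -> Path:
    """Create an isolated Python environment and return its config file path."""
    # with_pip=False skips bootstrapping pip, keeping the example fast and
    # offline; real environments usually want with_pip=True.
    builder = venv.EnvBuilder(with_pip=False)
    builder.create(path)
    return Path(path) / "pyvenv.cfg"

env_dir = Path(tempfile.mkdtemp()) / "demo-env"
cfg = create_env(str(env_dir))
print(cfg.exists())  # pyvenv.cfg marks the directory as a virtual environment
```

In day-to-day use you would activate the environment (`source demo-env/bin/activate`) and then `pip install` packages into it without touching the system Python.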
The document provides an agenda for a workshop on RabbitMQ. It introduces RabbitMQ and its core concepts like exchanges, queues, bindings and message passing. It then outlines 5 exercises demonstrating key RabbitMQ patterns including hello world, work queues, publish/subscribe, routing and topics. Environmental setup using Docker is also covered.
This document provides an introduction to Docker presented by Adrian Otto. It defines Docker components like the Docker Engine (CLI and daemon), images, containers and registries. It explains how containers combine cgroups, namespaces and images. It demonstrates building images with Dockerfiles, committing container changes to new images, and the full container lifecycle. Finally, it lists Docker CLI commands and promises a demo of building/running containers.
Enable Fig to deploy to multiple Docker servers by Willy Kuo (Docker, Inc.)
Fig (http://www.fig.sh/) is a Docker-based development environment tool owned by Docker. Originally, it could only deploy to one host at a time. My hack in Docker Global Hack Day #2 was to enable Fig to deploy to multiple hosts at one time. In this talk, I'll give a brief introduction to Fig first, then describe my hack from the hack day. Finally, I'll give a short demo of deploying apps to multiple hosts at one time.
Automate App Container Delivery with CI/CD and DevOps (Daniel Oh)
This document discusses how to automate application container delivery with CI/CD and DevOps. It describes building and deploying container images using Source-to-Image (S2I) to deploy source code or application binaries. OpenShift automates deploying application containers across hosts via Kubernetes. The document also discusses continuous integration, continuous delivery, and how OpenShift supports CI/CD with features like Jenkins-as-a-Service and OpenShift Pipelines.
This document discusses 10 things not to forget before deploying Docker in production. It covers logging, monitoring, secrets, container access, filesystem choices, disk space usage, build optimizations, download speeds, backups, and Docker clusters. Overall, Docker provides benefits for portability and workflows but has some challenges to address for system-wide deployments in production environments.
I’m a developer and I get frustrated when things are harder than they should be. Our product (Mir) was harder to release and to use than it should be but no-one cared enough to do anything about it.
My employer allows time for (approved) "side projects". Exploiting this, I started writing an "Abstraction Layer" (MirAL) as a proof-of-concept that these problems could be solved.
Over time it became apparent that this approach solved other problems, and management interest grew until MirAL became my "day job" and was adopted as part of the product.
This talk covers both the technical and organisational aspects of the problem and the solution. Hopefully, comparisons can be made with the experience of attendees.
Ultimate Guide to Microservice Architecture on Kubernetes (kloia)
This document provides an overview of microservice architecture on Kubernetes. It discusses:
1. Benefits of microservice architecture like independent deployability and scalability compared to monolithic applications.
2. Best practices for microservices including RESTful design, distributed configuration, client code generation, and API gateways.
3. Tools for microservices on Kubernetes including Prometheus for monitoring, Elasticsearch (ELK) stack for logging, service meshes, and event sourcing with CQRS.
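To make the independent-deployability point concrete, each microservice on Kubernetes is typically described by its own Deployment, sketched below; the service name, labels and image reference are placeholder assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # one independently deployable microservice
spec:
  replicas: 3                     # scale this service without touching the others
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

Scaling is then a single command, e.g. `kubectl scale deployment orders-service --replicas=5`, and leaves every other service untouched, which is the contrast with redeploying a monolith.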
The document discusses using Google App Engine and Google Web Toolkit to develop a simple stock market analysis program. It provides an overview of cloud computing and the key aspects of Google App Engine, including its architecture, data storage via Bigtable, and development process. It then describes how the stock analysis program was built with GWT to allow users to search for stock quotes, view their portfolio, and remove stocks, all while taking advantage of Google's cloud infrastructure. Code snippets demonstrate integrating GWT with App Engine for user login/logout and accessing data from the cloud.
This deck was presented at Lendingkart meetup in Bangalore covering our experiences with creating CI/CD Pipeline with Kubernetes. Here is the video link of the meetup.
https://youtu.be/YraPL_NGmcs
IBM Monitoring and Event Management SolutionsIBM Danmark
This document discusses IBM's monitoring and event management solutions including IBM Tivoli Monitoring, IBM Event Management, and IBM Business Service Management. It provides an agenda for the Nordic Pulse conference on May 28-29 including presentations on new technologies, IBM monitoring solutions, customer examples, and more. Specific topics covered include IBM Tivoli Monitoring dashboards, IBM SmartCloud Monitoring, IBM SmartCloud Application Performance Management, benefits of analytics, and solutions for monitoring workloads in cloud environments.
This document provides an overview of Kubernetes and microservices architecture. It discusses the challenges with monolithic applications and benefits of microservices. Key Kubernetes concepts are explained like masters, nodes, objects, pods, services and deployments. Azure Kubernetes Service (AKS) is introduced as a way to simplify deploying and managing Kubernetes clusters on Azure without having to self-host the Kubernetes infrastructure.
Cloud Composer workshop at Airflow Summit 2023.pdfLeah Cole
Cloud Composer workshop agenda includes:
- Introductions from engineering managers and staff
- Setting up workshop projects and GCP credits for participants
- Introduction to Cloud Composer architecture and features
- Disaster recovery process using Cloud Composer snapshots for high availability
- Demonstrating data lineage capabilities between Cloud Composer, BigQuery and Dataproc
Skill Petals - Google Associate Cloud Engineer GCP-ACE Syllabus.pdfthinkcomtech
Skill Petals is providing a GCP - Associate Cloud Engineer Certificate after successful completion of Google Cloud Platform- Associate Cloud Engineer Course. Find the complete syllabus of this cloud computing global certification training program. Add this global certification to your cv along with all your relevant skills.
Krishna Divagar Kumaresan has over 12 years of experience managing IT projects involving software design, development, and support. He has a strong technical background working with technologies like Oracle, .NET, and Crystal Reports. Kumaresan has worked on projects for clients like GIC and Citibank, managing teams and taking on roles like technical lead and project management. His experience includes developing systems for performance attribution, document management, and timesheet tracking.
Continuous Delivery of a Cloud Deployment at a Large Telecommunications ProviderM Kevin McHugh
This document discusses how a large telecommunications provider implemented continuous delivery for a cloud deployment. It defines continuous delivery as automating the process of software delivery through techniques like continuous integration, automated testing, and continuous deployment. It then describes the specific components and tools used in the telecom provider's implementation, including adopting agile methodology, integrating rational team concert, automated testing with a REST API, and using SmartCloud Orchestrator for automated builds and deployment.
Building Autonomous Operations for Kubernetes with keptnJohannes Bräuer
Keptn is a framework for automating continuous delivery and operations of Kubernetes applications. It uses a GitOps-based approach with event-driven automation to enable unbreakable delivery pipelines and self-healing deployments. Keptn provides autonomous control plane capabilities including automated testing, deployment, evaluation and operations through reusable services. The demo shows how keptn can onboard a service, deploy new versions through the stages, and enable automated remediation through integration with monitoring and runbook tools.
A fresh look at Google’s Cloud by Mandy Waite Codemotion
Google, one of the early PaaS (Platform as a Service) pionneers, has recently substantially improved AppEngine, expanded its Cloud Platform to include CloudStorage, BigQuery and soon Google Compute Engine (still in early access as of this writing).
ACDKOCHI19 - Turbocharge Developer productivity with platform build on K8S an...AWS User Group Kochi
AWS Community Day Kochi 2019 - Technical Session
Turbocharge Developer productivity with platform build on K8S and AWS services by - Laks , Principal Engineer - Intuit
GDG Cloud Southlake #16: Priyanka Vergadia: Scalable Data Analytics in Google...James Anderson
Do you know The Cloud Girl? She makes the cloud come alive with pictures and storytelling.
The Cloud Girl, Priyanka Vergadia, Chief Content Officer @Google, joins us to tell us about Scaleable Data Analytics in Google Cloud.
Maybe, with her explanation, we'll finally understand it!
Priyanka is a technical storyteller and content creator who has created over 300 videos, articles, podcasts, courses and tutorials which help developers learn Google Cloud fundamentals, solve their business challenges and pass certifications! Checkout her content on Google Cloud Tech Youtube channel.
Priyanka enjoys drawing and painting which she tries to bring to her advocacy.
Check out her website The Cloud Girl: https://thecloudgirl.dev/ and her new book: https://www.amazon.com/Visualizing-Google-Cloud-Illustrated-References/dp/1119816327
Building and scaling a B2D service, the bootstrap wayNadav Soferman
This presentation was given at Daho.am, Munich developers conference. It tells the Cloudinary story of building and scaling a service for developers in the bootstrap way with zero external funding.
The presentation shares some insights we have, behind-the-scenes details including the evolution of Cloudinary's internal architecture and some interesting numbers.
Cloudinary provides a cloud-based service for image and video management: uploading media files to the cloud directly from the browser or mobile device, perform image manipulation and video transcoding on the fly using URL-based API and deliver the media content optimized to your users via a fast CDN.
Similar to Deploying Apache Kylin on AWS and designing a task scheduler for it (20)
This document discusses Strikingly Analytics, an analytics platform built using Amazon Web Services (AWS) technologies like Apache Kylin, Elastic MapReduce, and DynamoDB. It collects and analyzes clickstream data from Strikingly's website. Key points discussed include how it uses Kylin to enable SQL queries on large datasets, runs ETL processes on AWS, and scales elastically using services like ECS, ALB, and Auto-Scaling. The system provides interactive queries with sub-4 millisecond latency while maintaining high availability and scalability.
Pregel In Graphs - Models and InstancesChase Zhang
An introduction to Google's large scale graph computing model Pregel. Tons of system design graphs are provided along with a real world application instance.
Intro to Hadoop ecosystem and Apache KylinChase Zhang
The document provides an introduction to the Hadoop ecosystem and Apache Kylin. It discusses how technologies like MapReduce, HDFS, Hive, and HBase were developed based on Google papers to address the need for distributed data processing. It introduces Apache Kylin as an OLAP system that performs automatic ETL to enable fast multi-dimensional analysis on large datasets. Key concepts of Kylin like models, cubes, jobs and segments are explained. Comparisons are made between Kylin and alternatives like Hive/SparkSQL and Druid for suitability for multi-tenant analytics use cases requiring sub-second queries.
This document provides an intermediate guide to various Git commands and techniques. It begins with basic commands for status, commit, branch, merge, and remote operations. It then covers more advanced history management techniques like reset, cherry-pick, and interactive rebase. Other sections discuss Git's internal object model and file structure, as well as additional tools and optimizations. The guide aims to explain both common and less familiar Git features while also addressing related questions and examples.
PyData London 2024: Mistakes were made (Dr. Rebecca Bilbro)Rebecca Bilbro
To honor ten years of PyData London, join Dr. Rebecca Bilbro as she takes us back in time to reflect on a little over ten years working as a data scientist. One of the many renegade PhDs who joined the fledgling field of data science of the 2010's, Rebecca will share lessons learned the hard way, often from watching data science projects go sideways and learning to fix broken things. Through the lens of these canon events, she'll identify some of the anti-patterns and red flags she's learned to steer around.
We are pleased to share with you the latest VCOSA statistical report on the cotton and yarn industry for the month of March 2024.
Starting from January 2024, the full weekly and monthly reports will only be available for free to VCOSA members. To access the complete weekly report with figures, charts, and detailed analysis of the cotton fiber market in the past week, interested parties are kindly requested to contact VCOSA to subscribe to the newsletter.
We are pleased to share with you the latest VCOSA statistical report on the cotton and yarn industry for the month of May 2024.
Starting from January 2024, the full weekly and monthly reports will only be available for free to VCOSA members. To access the complete weekly report with figures, charts, and detailed analysis of the cotton fiber market in the past week, interested parties are kindly requested to contact VCOSA to subscribe to the newsletter.
06-18-2024-Princeton Meetup-Introduction to MilvusTimothy Spann
06-18-2024-Princeton Meetup-Introduction to Milvus
tim.spann@zilliz.com
https://www.linkedin.com/in/timothyspann/
https://x.com/paasdev
https://github.com/tspannhw
https://github.com/milvus-io/milvus
Get Milvused!
https://milvus.io/
Read my Newsletter every week!
https://github.com/tspannhw/FLiPStackWeekly/blob/main/142-17June2024.md
For more cool Unstructured Data, AI and Vector Database videos check out the Milvus vector database videos here
https://www.youtube.com/@MilvusVectorDatabase/videos
Unstructured Data Meetups -
https://www.meetup.com/unstructured-data-meetup-new-york/
https://lu.ma/calendar/manage/cal-VNT79trvj0jS8S7
https://www.meetup.com/pro/unstructureddata/
https://zilliz.com/community/unstructured-data-meetup
https://zilliz.com/event
Twitter/X: https://x.com/milvusio https://x.com/paasdev
LinkedIn: https://www.linkedin.com/company/zilliz/ https://www.linkedin.com/in/timothyspann/
GitHub: https://github.com/milvus-io/milvus https://github.com/tspannhw
Invitation to join Discord: https://discord.com/invite/FjCMmaJng6
Blogs: https://milvusio.medium.com/ https://www.opensourcevectordb.cloud/ https://medium.com/@tspann
Expand LLMs' knowledge by incorporating external data sources into LLMs and your AI applications.
Deploying Apache Kylin on AWS and designing a task scheduler for it
1. Deploying Apache Kylin on AWS
And designing a task scheduler for it
Chase Zhang
Strikingly
2. Outline
Introduction
Strikingly
Analytics Service of Strikingly
Deploy Apache Kylin on AWS
Overview
Containerizing Kylin
Maintenance
Scheduler for Kylin System
Designing Goals
Basic Idea & Implementation
Tasks, Executors and Services
Concurrency and Fault Tolerance
Maintenance and Monitoring
Conclusion
5. Introduction
Analytics Service of Strikingly
The version 0 of our analytics service is Google Analytics
Figure: Google Analytics (Strikingly registers a track ID and generates the user's website; Google Analytics collects page-view data from user pages and serves user queries)
6. Introduction
Analytics Service of Strikingly
The version 1 of our analytics service is through Keen IO, a 3rd party service
Figure: Keen IO (Strikingly generates the user's website; Keen.IO collects page-view data from user pages and serves the user's analytics queries)
7. Introduction
Analytics Service of Strikingly
The version 2 of our analytics service combines Keen IO and Apache Kylin
Figure: Keen IO + Apache Kylin (Keen.IO collects page-view data from user pages; Apache Kylin serves the user's analytics queries)
10. Deploy Apache Kylin on AWS
Containerizing Kylin
Problem
We’d like to
▶ Deploy Kylin on multiple regions
▶ Customize behaviors with environment variables
▶ Build a single docker image and run everywhere
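The "customize with environment variables" bullet can be sketched as a tiny entrypoint helper. This is an illustrative Python sketch, not the actual entrypoint we shipped, and the `KYLIN_OVERRIDE_*` naming is a hypothetical convention: properties are rendered from the environment at container start, so a single image can be configured per region and per role.

```python
# Illustrative sketch: render kylin.properties overrides from environment
# variables at container start. The KYLIN_OVERRIDE_* prefix is hypothetical.

def render_overrides(environ, prefix="KYLIN_OVERRIDE_"):
    """Turn e.g. KYLIN_OVERRIDE_kylin_server_mode=query into the
    property line 'kylin.server.mode=query'."""
    lines = []
    for key, value in sorted(environ.items()):
        if key.startswith(prefix):
            # underscores in the env-var name stand in for dots
            prop = key[len(prefix):].replace("_", ".")
            lines.append(f"{prop}={value}")
    return "\n".join(lines)
```

An entrypoint script would append this output to Kylin's config before launching the server, e.g. `render_overrides(os.environ)` piped into `conf/kylin.properties`.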
12. Deploy Apache Kylin on AWS
Maintenance
Problem
Two problems while maintaining this system:
▶ Auto scale and dynamic ports
▶ Clean-up and back-up
13. Deploy Apache Kylin on AWS
Maintenance
Figure: Auto Scale and Dynamic Listening Ports (an Application Load Balancer receives query requests on port 80 and forwards them through a Target Group to Kylin query containers on dynamic ECS host ports such as 33345 and 33347; the single Kylin job node listens on 7070)
14. Deploy Apache Kylin on AWS
Maintenance
./bin/metastore.sh backup
./bin/metastore.sh restore
./bin/metastore.sh clean
./bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob
Figure: Clean-up and back-up tools
15. Deploy Apache Kylin on AWS
Maintenance
Solution
A customized task scheduler.
16. Scheduler for Kylin System
Designing Goals
▶ Customizing task scheduling
▶ Making system robust and fault tolerant
▶ Solving both previously mentioned maintenance problems
17. Scheduler for Kylin System
Basic Idea & Overall Design
The Systemd (Anti-UNIX) philosophy
▶ Scheduler works as a central service
▶ Other components work as RPC services
18. Scheduler for Kylin System
Basic Idea & Overall Design
Figure: Scheduler as a central service, with the Target Group of Kylin (query) instances, the Kylin (job) instance, HBase, Hive, DynamoDB, S3 and Keen.IO attached as RPC services
19. Scheduler for Kylin System
Basic Idea & Overall Design
Implementation details:
▶ Applying FP (https://en.wikipedia.org/wiki/Functional_programming) and Actor Model (https://en.wikipedia.org/wiki/Actor_model) ideas
▶ Implemented with Scala (http://scala-lang.org/) and Akka (https://akka.io/)
▶ Interact with Hadoop components through Java libraries
20. Scheduler for Kylin System
Basic Idea & Overall Design
Figure: Scheduler's Actor System (a Control Actor sends task messages through a Consistent Hashing Router to Task Actors, which run Executors against Services; control messages and task messages flow on separate paths)
21. Scheduler for Kylin System
Tasks, Executors and Services
▶ Task = immutable message
▶ Each task has a type that selects its executor
▶ Executors call services to do the actual work
▶ Task categories: planning tasks, working tasks, maintaining tasks
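The bullets above can be sketched in a few lines. The real system is Scala/Akka; this is a hedged Python sketch of the same shape, with hypothetical names: tasks are immutable messages carrying a type, the type selects an executor, and the executor does nothing itself except call a service.

```python
from dataclasses import dataclass, field
import uuid

@dataclass(frozen=True)   # frozen ~ immutable message
class Task:
    task_type: str        # e.g. "KylinCubeBuild" — selects the executor
    cube: str
    task_id: str = field(default_factory=lambda: uuid.uuid4().hex)

EXECUTORS = {}

def executor(task_type):
    """Register an executor for a task type."""
    def register(fn):
        EXECUTORS[task_type] = fn
        return fn
    return register

@executor("KylinCubeBuild")
def build_cube(task, kylin_service):
    # The executor holds no logic of its own; it calls a service.
    return kylin_service(f"build segment for cube {task.cube}")

def dispatch(task, services):
    """Look up the executor by the task's type and hand it its service."""
    return EXECUTORS[task.task_type](task, services[task.task_type])
```

Planning, working and maintaining tasks would all be instances of this pattern, differing only in type and in which service the executor calls.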
22. Scheduler for Kylin System
Tasks, Executors and Services
▶ Planning tasks: PlanDataRefresh (hourly) and PlanCubeMaintenance (daily) answer questions such as "need import new data?", "need build a new segment?", "need refresh old segments?", "need fill holes between segments?", "need merge segments?" and "need fill holes in hive table?"
▶ Working tasks: HiveTableRefresh, KylinCubeBuild, KylinCubeRefresh and KylinCubeMerge; once a hive table has been refreshed, the corresponding segment is refreshed
▶ Services: Hive Service, Kylin Service and the Message Storage Service
Figure: Planning Tasks and Working Tasks
23. Scheduler for Kylin System
Tasks, Executors and Services
Maintaining tasks: KylinMetadataBackup, KylinMetadataCleanup, KylinMetadataRestore and KylinHBaseTableCleanup, built on the HBase Service, Kylin Service and AWS Service. They get cube info from Apache Kylin, read metadata and delete rows in the HBase tables (kylin_metadata and cube tables such as KYLIN_XWFQ12), delete stale tables, write and read backup ZIP files in the kylin-metadata-backups S3 bucket, and update Kylin's cache after a restore.
Figure: Maintaining Tasks
24. Scheduler for Kylin System
Concurrency and Fault Tolerance
Problem
We’d like to execute tasks in order
▶ Maintaining tasks run exclusively
▶ Tasks of the same cube run exclusively
25. Scheduler for Kylin System
Concurrency and Fault Tolerance
Solution
Two mechanisms solve this problem:
▶ ReadWriteLock
▶ ConsistentHashingRouter
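Both mechanisms can be sketched briefly. This is an illustrative stdlib-only Python sketch, not our Scala/Akka code: a consistent-hashing router sends every task for a given cube to the same single-threaded worker (so same-cube tasks serialize), and a read-write lock lets working tasks overlap (read side) while maintaining tasks run exclusively (write side). The sketch favors readers and ignores writer starvation, which a production lock would handle.

```python
import hashlib
import threading

class ConsistentHashingRouter:
    """Route by a hash of the cube name: same cube -> same worker."""
    def __init__(self, n_workers):
        self.n_workers = n_workers

    def route(self, cube):
        digest = hashlib.md5(cube.encode()).hexdigest()
        return int(digest, 16) % self.n_workers

class ReadWriteLock:
    """Minimal RW lock: many readers (working tasks) OR one writer
    (maintaining task). Simplified; writers can starve."""
    def __init__(self):
        self._readers = 0
        self._lock = threading.Lock()
        self._no_readers = threading.Condition(self._lock)

    def acquire_read(self):
        with self._lock:
            self._readers += 1

    def release_read(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:
                self._no_readers.notify_all()

    def acquire_write(self):
        self._lock.acquire()
        while self._readers > 0:
            self._no_readers.wait()   # releases _lock while waiting

    def release_write(self):
        self._lock.release()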
26. Scheduler for Kylin System
Concurrency and Fault Tolerance
Problem
We’d like to be fault tolerant:
1. Recovering from failures
2. Filling missed segment gaps
3. Recording history
27. Scheduler for Kylin System
Concurrency and Fault Tolerance
Solution
We take several measures to solve this problem:
1. Assigning each task a unique ID
2. Persisting the task message, with its progress, to DynamoDB
3. Implementing planning and working tasks carefully so they can detect and repair these issues
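Measures 1 and 2 can be sketched together. The real system persists to DynamoDB; this hypothetical Python sketch uses an in-memory dict as a stand-in and shows the point of persisting every transition: after a crash, a restarted scheduler can scan for tasks that never reached a terminal state and re-run them.

```python
import uuid

class TaskStore:
    """Stand-in for a DynamoDB table keyed by task ID."""
    def __init__(self):
        self.rows = {}

    def persist(self, task_id, payload, status):
        self.rows[task_id] = {"payload": payload, "status": status}

    def unfinished(self):
        # Recovery scan: anything not in a terminal state is a candidate
        # for re-execution after a scheduler restart.
        return [tid for tid, row in self.rows.items()
                if row["status"] not in ("finish", "error")]

def run(store, payload, executor):
    """Run one task, persisting each transition: init -> running -> finish/error."""
    task_id = uuid.uuid4().hex   # measure 1: unique ID per task
    store.persist(task_id, payload, "init")
    store.persist(task_id, payload, "running")
    try:
        executor(payload)
        store.persist(task_id, payload, "finish")
    except Exception:
        store.persist(task_id, payload, "error")
    return task_id
```

Measure 3 lives inside the planning tasks themselves: when they see gaps (e.g. missing segments), they emit the working tasks that fill them, so a missed run is repaired on the next cycle.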
28. Scheduler for Kylin System
Concurrency and Fault Tolerance
Figure: Concurrency and Message Persistence (the ControlActor routes a TaskMessage through the ConsistentHashingRouter to a TaskActor, which acquires a lock, runs the Executor, and releases the lock; the task state, init → running → finish/error, is persisted to DynamoDB at each transition)
29. Scheduler for Kylin System
Maintenance and Monitoring
Problem
We still have two trivial problems to solve:
▶ Manually performing actions
▶ Task monitoring and error notification
30. Scheduler for Kylin System
Maintenance and Monitoring
How to design the user interface of scheduler?
31. Scheduler for Kylin System
Maintenance and Monitoring
Introducing scheduler slack bot...
32. Scheduler for Kylin System
Maintenance and Monitoring
Figure: Scheduler Slack Bot (user commands enter through a SlackBot Actor, travel over the Event Bus to the Control Actor, and are routed through the Consistent Hashing Router to a Task Actor, Executor and Service)
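The bot's front end is essentially a command parser that turns chat text into control messages for the event bus. A minimal sketch, with entirely hypothetical command and message names (the real bot's vocabulary is not shown in the deck):

```python
# Illustrative sketch: map a chat command to a control message dict.
# Command names ("status", "backup", "build") are hypothetical examples.

def parse_command(text):
    parts = text.strip().split()
    if not parts:
        return {"type": "Help"}
    verb, args = parts[0], parts[1:]
    commands = {
        "status": lambda a: {"type": "ListTaskStatus"},
        "backup": lambda a: {"type": "KylinMetadataBackup"},
        "build":  lambda a: {"type": "KylinCubeBuild", "cube": a[0]},
    }
    handler = commands.get(verb)
    # Unknown commands fall back to a help message.
    return handler(args) if handler else {"type": "Help"}
```

The SlackBot Actor would publish the resulting message on the event bus, and the same bus carries task status and error notifications back to the channel.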
33. Scheduler for Kylin System
Maintenance and Monitoring
Figure: List task status
34. Scheduler for Kylin System
Maintenance and Monitoring
Figure: List Kylin Job Progress
35. Conclusion
▶ With Apache Kylin, we’re providing a sub-second web analytics service
▶ With little effort, we managed to deploy Apache Kylin in Docker containers
▶ With the scheduler, we deployed the system on AWS without loss of features
▶ We’ve made the system concurrency safe and robust
39. Thank you!
BTW, we’re still hiring Data Platform
Engineer:
1. Writing Scala
2. Working on AWS
3. Working with Apache Kylin
4. Working on our “Project Manhattan”