Oracle Developers APAC Meetup #1 - Working with Wercker
This is the hands-on exercise material used for the Oracle APAC Developers Meetup #1, held in Singapore on 7th February 2018.
The worksheets are intended to be used in conjunction with the slides.
You can find the slides at:
https://www.slideshare.net/DarrelChia1/oracle-apac-developers-meetup-1-working-with-wercker-slides
Meetup Site:
https://www.meetup.com/Oracle-Developers-APAC/events/247111220/
The document discusses two serverless computing platforms that support Swift - OpenWhisk and Fn.
OpenWhisk is an open source system that is event-driven, containerized, and allows chaining of actions. It is hosted on Bluemix but can be difficult to deploy elsewhere. Fn is container-native and deploys functions as containers communicating via standard input/output. Both allow simple Swift functions to be deployed and called remotely with REST APIs or command line tools. The document provides examples of writing, deploying and calling functions on each platform.
Dockerfiles: building Docker images automatically (WORKDIR, ENV, ADD, and ...) — ansonjonel
The document discusses Dockerfile instructions for building Docker images automatically. It covers the WORKDIR instruction for setting the working directory, the ENV instruction for defining environment variables, and the ADD instruction for copying files into an image. Examples are provided for each instruction to demonstrate how to use them in a Dockerfile to automate the image building process.
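The three instructions summarized above can be combined in a short Dockerfile; the base image, paths, and archive name below are illustrative, not taken from the original slides:

```dockerfile
# Illustrative base image
FROM ubuntu:18.04

# ENV defines environment variables available at build time and at runtime
ENV APP_HOME=/opt/app \
    APP_ENV=production

# WORKDIR sets the working directory for the following instructions
# (the directory is created automatically if it does not exist)
WORKDIR $APP_HOME

# ADD copies files from the build context into the image; unlike COPY,
# ADD can also unpack local tar archives and fetch remote URLs
ADD app.tar.gz $APP_HOME/

CMD ["./start.sh"]
```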
Considerable improvements can be achieved by automating the integration of Kamailio-based projects: automated builds, tests and deployments save time and increase reliability. This presentation focuses on common practices to automate the build of Kamailio (and RTPEngine) on various distributions and deploy them, together with their configuration, on testing and production environments.
Docker plays an important role in providing flexible, clean building environments and keeping the process reproducible. We’ll see how Jenkins can orchestrate the builds with Docker slaves, and perform the deployments with a combination of platform-specific packages, Fabric, Puppet and Ansible.
This document summarizes a presentation on using Vagrant for development. The presentation covers motivation for using Vagrant, basic Vagrant usage, provisioning Vagrant machines with Chef cookbooks, and creating custom base images with Packer. The agenda includes an introduction to Vagrant, demonstrating common Vagrant commands, modifying Vagrantfiles to configure VMs, provisioning VMs with Chef recipes, and using Packer to build reusable base images.
Introductory seminar on Docker and its components (networks and Compose in particular). Focused on going through some basic concepts, mention some more advanced topics, and introduce a practical workshop held on the same evening.
The document discusses Python virtual environments (virtualenv) and the pip package manager. It introduces virtualenv and pip, explains why they are useful tools for isolating Python environments and managing packages, and provides exercises for creating virtual environments, using pip to install/uninstall packages, creating your own pip packages, and sharing packages on PyPI. The goal is to help users understand and learn to use these tools in 90 minutes.
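The core workflow described above fits in a few commands. This sketch uses the built-in `venv` module (Python 3.3+) rather than the third-party `virtualenv` package; the directory name is arbitrary:

```shell
# Create an isolated environment in ./demo-venv
python3 -m venv demo-venv

# Use the environment's own interpreter and pip directly
# (equivalent to activating it first with `source demo-venv/bin/activate`)
demo-venv/bin/python -m pip --version

# Install and remove a package inside the environment only
# (commented out here because it needs network access):
# demo-venv/bin/pip install requests
# demo-venv/bin/pip uninstall -y requests
```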
The document provides an agenda for a workshop on RabbitMQ. It introduces RabbitMQ and its core concepts like exchanges, queues, bindings and message passing. It then outlines 5 exercises demonstrating key RabbitMQ patterns including hello world, work queues, publish/subscribe, routing and topics. Environmental setup using Docker is also covered.
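Of the patterns listed, topic routing is the least obvious: a topic exchange matches a message's routing key against binding patterns in which `*` stands for exactly one dot-separated word and `#` for zero or more words. A minimal, broker-free sketch of that matching rule (not RabbitMQ's actual implementation):

```python
def topic_match(pattern: str, routing_key: str) -> bool:
    """Return True if an AMQP-style topic binding pattern matches a routing key.

    '*' matches exactly one dot-separated word; '#' matches zero or more words.
    """
    def match(p, k):
        if not p:
            return not k
        if p[0] == "#":
            # '#' can absorb zero words, or absorb one word and stay in place
            return match(p[1:], k) or (bool(k) and match(p, k[1:]))
        if not k:
            return False
        if p[0] == "*" or p[0] == k[0]:
            return match(p[1:], k[1:])
        return False

    return match(pattern.split("."), routing_key.split("."))


print(topic_match("kern.*", "kern.critical"))     # True: '*' matches one word
print(topic_match("kern.#", "kern.disk.error"))   # True: '#' matches many words
print(topic_match("kern.*", "kern.disk.error"))   # False: '*' cannot span two words
```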
A high-level overview of Docker concepts and getting started with Docker.
Outline:
1. What is a container?
2. What is Docker?
3. Running containers
4. Working with containers
5. Building your own containers
6. Improving your containers
Kamailio World 2018 - Workshop: kamailio-tests — Giacomo Vacca
This document discusses kamailio-tests, a testing framework for Kamailio. It aims to provide unit tests for Kamailio's core and module functionality to reduce the need for end-to-end testing. The framework uses Docker to allow tests to run across different operating systems and distributions. Unit tests are contained in directories and test scripts, and a control script can run all tests or specific ones. The document outlines the project structure, current test units, and future development plans to expand testing.
AtoM and Vagrant: Installing and Configuring the AtoM Vagrant Box for Local T... — Artefactual Systems - AtoM
These slides introduce AtoM users to Vagrant, and walk users through the process of installing the AtoM Vagrant box for local testing and development on a home computer or laptop, regardless of what operating system you use.
WARNINGS:
These slides were last updated in May 2017, using the AtoM 2.4 Vagrant box, which is installed using Ubuntu 16.04 and PHP 7.0. Future versions of AtoM may use a different version of Ubuntu and PHP, which might change some of the command-line tasks used to update the box in Part 2. Be sure to check the AtoM documentation for the most up-to-date information: https://www.accesstomemory.org/docs/latest/
The AtoM Vagrant box is designed for local testing and development - it is NOT PRODUCTION READY and should not be used for long-term data storage. Please see the AtoM documentation for instructions on how to install AtoM on a server for use in your institution.
Preparation study for Docker Event
Mulodo Open Study Group (MOSG) @Ho chi minh, Vietnam
http://www.meetup.com/Open-Study-Group-Saigon/events/229781420/
Mukta Aphale presented at ChefConf 2015. She discussed her background transitioning from developer to DevOps architect. She contributed to Chef development and created several Chef knife plugins. Aphale also discussed using Docker and Chef together to automate container management and deployment. She showed how to build a Docker image using Chef recipes and push it to a registry for deployment using Chef push jobs.
Cargo is the package manager for Rust. It is used to create new projects, manage dependencies, compile code, and publish packages. Cargo uses the Cargo.toml file to define metadata and dependencies, and Cargo.lock to lock dependency versions. Dependencies can come from Crates.io, Git repositories, or local paths. Build scripts allow generating code during compilation. Cargo can publish packages to Crates.io and supports workspaces, environment variables, custom commands, and unstable features for nightly builds.
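The metadata file and the three dependency sources mentioned above can be illustrated with a minimal Cargo.toml; the crate names, URL, and paths in the commented lines are placeholders:

```toml
[package]
name = "demo-app"
version = "0.1.0"
edition = "2018"

[dependencies]
# From Crates.io (the default registry)
serde = "1.0"

# From a Git repository (illustrative URL)
# my-lib = { git = "https://example.com/my-lib.git", branch = "main" }

# From a local path
# helpers = { path = "../helpers" }
```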
Using Kubernetes for Continuous Integration and Continuous Delivery — Carlos Sanchez
This document summarizes how to use Kubernetes for continuous integration and continuous delivery. It discusses using the Jenkins Kubernetes plugin to run Jenkins agents as Kubernetes pods for infinite scalability. It provides examples of defining pods with multiple containers for multi-language pipelines. It also covers using persistent volumes, resource limits, and deploying applications to Kubernetes from Jenkins pipelines.
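A multi-container agent pod of the kind described can be declared for the Jenkins Kubernetes plugin as a plain pod template; this is a sketch, and the image tags, labels, and resource values are illustrative:

```yaml
# Pod template for a Jenkins agent with two build containers
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: agent
spec:
  containers:
  - name: maven
    image: maven:3-jdk-8
    command: ["cat"]        # keep the container alive so build steps can exec into it
    tty: true
    resources:
      limits:
        memory: "1Gi"       # resource limits, as mentioned in the talk
  - name: golang
    image: golang:1.10
    command: ["cat"]
    tty: true
```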
Developing and Deploying PHP with Docker — Patrick Mizer
The document discusses using Docker for developing and deploying PHP applications. It begins with an introduction to Docker, explaining that Docker allows applications to be assembled from components and eliminates friction between development, testing and production environments. It then covers some key Docker concepts like containers, images and the Docker daemon. The document demonstrates building a simple PHP application as a Docker container, including creating a Dockerfile and building/running the container. It also discusses some benefits of Docker like portability, separation of concerns between developers and DevOps, and immutable build artifacts.
GDG-ANDROID-ATHENS Meetup: Build in Docker with Jenkins — Mando Stam
The document discusses automating an Android application build process using Docker and Jenkins. It describes how previously the build was done manually across multiple machines. The proposed solution is to create Docker images with the Android SDK, NDK and other build tools. These images would be used as build agents in Jenkins. Several challenges are addressed such as setting environment variables and running builds interactively in Docker containers. Defining properties files and caching downloads are techniques used to optimize the build process.
Vagrant allows developers to quickly set up uniform development environments for Node.js projects. It uses configuration files to define and provision virtual machines with all necessary tools and libraries. Chef is used for configuration management, ensuring environments are identical. Vagrant provides portability and abstraction, allowing environments to run on different providers like VirtualBox or cloud services.
Deploying an application with Chef and Docker — Daniel Ku
Material prepared for a talk at Docker Casual Talk #1 (2014-10-15).
The original plan was to present "Introducing Docker into a production service", but since that rollout was not yet complete, the talk instead picks out the meaningful parts of the process so far.
It explains how the existing way of updating applications at StudyGPS changes once Chef and Docker are adopted, and reflects on what that change means.
The document discusses different platforms for deploying microservices using containers including Docker, Kubernetes, AWS ECS, AWS Elastic Beanstalk, OpenShift, and Fabric8. Docker allows deploying containers but does not provide orchestration capabilities. Kubernetes provides orchestration of containers across clusters and can be deployed on-premises or on cloud providers. AWS ECS and Elastic Beanstalk integrate Docker containers with AWS but lack portability. OpenShift is a distribution of Kubernetes that can be used to deploy and manage containerized applications. Fabric8 builds upon Docker and Kubernetes to provide a full Platform as a Service with DevOps capabilities.
The document provides an introduction and overview of Docker. It begins with the speaker introducing himself as a Delivery Manager at Bank of America with over 10 years of experience in banking and financial services. The rest of the document covers the basics of Docker, including what Docker is, why it is needed, Docker architecture, working with Docker, and the Docker ecosystem. Key points are made about how Docker provides isolation and portability for applications and their dependencies through containers.
Vagrant is a tool that allows users to create and configure lightweight, reproducible, and portable development environments. It works with virtualization software like VirtualBox to allow developers to run virtual machines that match production environments. Key features include adding "boxes" which are preconfigured virtual machine images, provisioning boxes using configuration files and tools like Chef and Puppet, and portability across different operating systems.
This presentation provides an overview of Docker concepts and commands for Java developers. It covers creating Docker hosts, running containers, building images, running applications in Docker, linking containers, composing with Docker Compose, and an overview of Docker Swarm clustering. Live coding examples are shown for many of the Docker commands and concepts.
Docker can be used to build custom images. The document describes how to:
1. Pull a base Ubuntu 18.04 image and install Miniconda3 to create a first custom image version 0.1.
2. Run a container from this image, copy libraries into it, and install packages to create an updated image version 0.2.
3. Define a Dockerfile to script the image creation process and build a final custom image that includes Java, pulling from multiple sources.
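Step 3 — scripting the build with a Dockerfile that pulls from multiple sources — can be sketched with a multi-stage build. This is only an assumed reconstruction: the image names, JDK path, and package list below are not from the original document:

```dockerfile
# Stage 1: borrow a JDK from an official Java image (path is tag-dependent)
FROM openjdk:8-jdk-slim AS java

# Stage 2: the actual image, based on a Miniconda3 image
FROM continuumio/miniconda3:latest

# Copy the JDK out of the first stage and put it on the PATH
COPY --from=java /usr/local/openjdk-8 /usr/local/openjdk-8
ENV JAVA_HOME=/usr/local/openjdk-8
ENV PATH=$JAVA_HOME/bin:$PATH

# Install extra packages into the conda environment (illustrative selection)
RUN conda install -y numpy pandas && conda clean -afy
```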
This document provides an introduction to Docker and containerization. It covers:
1. The differences between virtual machines and containers, and the container lifecycle.
2. An overview of the Docker ecosystem tools.
3. Instructions for installing and using the Docker Engine and Docker CLI to build, run, and manage containers.
4. A demonstration of using Docker Hub to build and store container images.
5. An introduction to Docker networking and volumes.
6. A demonstration of using Docker Compose to define and run multi-container applications.
7. Suggestions for further learning resources about Docker.
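For item 6 in the list above, a minimal docker-compose.yml defining a two-container application might look like this; the service names, images, ports, and paths are illustrative:

```yaml
# docker-compose.yml — a web frontend plus a Redis backend
version: "3"
services:
  web:
    build: .              # build the app image from a local Dockerfile
    ports:
      - "8080:80"         # host:container
    depends_on:
      - redis
    volumes:
      - ./src:/app        # bind-mount source for development
  redis:
    image: redis:alpine
    volumes:
      - redis-data:/data  # named volume for persistence
volumes:
  redis-data:
```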
Setting up the Fabric v0.6 in Hyperledger — kesavan N B
This document provides instructions for setting up Hyperledger Fabric version 0.6 using Docker containers. It outlines downloading the necessary software, cloning the Fabric codebase, running a network of four peer nodes, registering and enrolling users, and deploying and invoking a sample chaincode.
Using Docker to build and test in your laptop and Jenkins — Micael Gallego
Docker is changing the way we create and deploy software. This presentation is a hands-on introduction to using Docker to build and test software, on your laptop and in your Jenkins CI server.
The document outlines a 90-minute introduction to Ansible using Docker. It discusses setting up the environment with Docker, using ad-hoc commands and playbooks to automate tasks like installing Apache and configuring variables. Exercises demonstrate inventory management, templating configurations with Jinja2, and other core Ansible concepts. The document provides an overview but does not cover more advanced topics like dynamic inventory, roles, writing custom modules, or Ansible Tower.
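The Apache exercise mentioned above might look like the following playbook; the host group, variable, template name, and the Debian-family package name are assumptions:

```yaml
# install_apache.yml — run with: ansible-playbook -i inventory install_apache.yml
- hosts: webservers
  become: true
  vars:
    http_port: 8080
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present
        update_cache: true

    - name: Deploy the port configuration from a Jinja2 template
      template:
        src: ports.conf.j2
        dest: /etc/apache2/ports.conf
      notify: Restart Apache

  handlers:
    - name: Restart Apache
      service:
        name: apache2
        state: restarted
```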
Infrastructure-As-Code means that infrastructure should be treated as code – a really powerful concept. Server configuration, packages installed, relationships with other servers, etc. should be modeled with code to be automated and have a predictable outcome, removing manual steps prone to errors. That doesn’t sound bad, does it?
The goal is to automate all the infrastructure tasks programmatically. In an ideal world you should be able to start new servers, configure them, and, more importantly, be able to repeat it over and over again, in a reproducible way, automatically, by using tools and APIs.
Have you ever had to upgrade a server without knowing whether the upgrade was going to succeed or not for your application? Are the security updates going to affect your application? There are so many system factors that can indirectly cause a failure in your application, such as different kernel versions, distributions, or packages.
BLCN532 Lab 1: Set up your development environment, V2.0.docx — moirarandell
BLCN532 Lab 1
Set up your development environment
V2.0
Introduction
This course introduces students to blockchain development for enterprise environments. Before you can develop software applications, you need to ensure your development environment is in place. That means you’ll need all the tools and infrastructure installed and configured to support enterprise blockchain software development projects.
In this lab you’ll set up your own Hyperledger Fabric development environment and install the course software from the textbook. When you finish this lab, you’ll have a working development environment and will be ready to start running and modifying blockchain applications.
The instructions in your textbook are for Mac and Linux computers.
However, there is no guarantee that your installation of MacOS or Linux is completely compatible with the environment in which the commands from the textbook work properly. For that reason, I STRONGLY SUGGEST that you acquire an Ubuntu 16.04 Virtual Machine (VM) for your labs. Using an Ubuntu 16.04 VM will make the labs far easier to complete.
The instructions in this course’s labs assume that your computer runs the Windows operating system. If you run MacOS or Linux, you can get Vagrant and VirtualBox for those operating systems and follow the gist of the “Initial setup for Windows computers”.
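One way to get the suggested Ubuntu 16.04 VM with Vagrant and VirtualBox is a minimal Vagrantfile. The box name `ubuntu/xenial64` is the official Ubuntu 16.04 box on Vagrant Cloud; the memory and CPU values are arbitrary choices, not course requirements:

```ruby
# Vagrantfile — bring the VM up with `vagrant up`, log in with `vagrant ssh`
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"   # Ubuntu 16.04 LTS

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096                  # blockchain tooling benefits from extra RAM
    vb.cpus = 2
  end
end
```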
Lab Deliverables:
To complete this lab, you must create a Lab Report file and submit the file in iLearn. The Lab Report file must be in Microsoft Word format (.docx) and have a filename with the following format:
BLCN532_SECTION_STUDENTID_LASTNAME_FIRSTNAME_Lab01.docx
· SECTION is the section number of your current course (2 digits)
· STUDENTID is your student ID number (with leading zeros)
· LASTNAME is your last name, FIRSTNAME is your first name
To get started, create a Microsoft Word document (.docx) with the correct filename for this lab. You’ll be asked to enter text and paste screenshots into the lab report file.
NOTE: All screenshots MUST be readable. Use the Ubuntu Screen Capture utility (see the lab video). Make sure that you label each screenshot (e.g. Step 2.1.3) and provide screenshots in order. For commands that produce lots of output, I only want to see the last full screen when the command finishes. Provide FULL screenshots, NOT cropped images.
SECTION 1: Initial setup for Windows computers (Chapter 3)
Step 1.1: Install Oracle Virtualbox (Windows, Linux, MacOS)
Oracle VirtualBox is an open source virtualization environment that allows you to run multiple virtual machines on a single personal computer. VirtualBox is free and easy to install.
In your favorite web browser, navigate to:
https://www.virtualbox.org/
and click the “Download VirtualBox” button. Click the “Windows hosts” link to download the main installation executable. You should also click “All supported platforms” under the “Extension Pack” heading to download the extra software support.
Get started with Vagrant! This basic intro assumes no prior knowledge of the platform, and it is universally applicable regardless of the provider (hypervisor/cloud platform) that you are using.
We also delve briefly into provisioners (e.g. Shell, Puppet, Chef) to unlock the true power of Vagrant: quick & easy templates of production systems.
Vagrant allows users to easily create and configure virtual development environments. The document outlines a 5 step process to get started with Vagrant: 1) select a virtualization provider like VirtualBox, 2) install Vagrant, 3) download a virtual machine image or "box", 4) initialize and start the VM with Vagrant commands, and 5) log into the VM via SSH. It also discusses additional features like version controlling Vagrant files, customizing the VM, using multiple VMs, and provisioning VMs with tools like Puppet, Chef, or Ansible.
Docker is a platform for building, shipping and running applications. It provides lightweight virtual containers that allow applications to run consistently regardless of environment. Key Docker concepts include images, containers, Docker Engine and tools like Docker Compose and Docker Machine. The document then provides steps for setting up WordPress and Laravel projects using Docker, including using Docker Compose to define services and Docker Machine to provision and manage Docker hosts.
The document provides an overview of Docker fundamentals, including an introduction to Docker and containerization, how to install Docker on various platforms, and how to use basic Docker commands to run containers from images. It covers topics such as Docker architecture, images vs containers, managing containers, networking, Docker Compose, and how Docker is implemented using Linux kernel features like namespaces and cgroups.
The document discusses Docker containers and images. It explains that Docker containers allow applications to be packaged and run in isolation. Images contain the build files and metadata for containers. The document provides examples of creating, running, stopping, restarting, and removing Docker containers based on images. It also discusses viewing container logs and committing changed containers back to new images.
Create Development and Production Environments with VagrantBrian Hogan
Need a Linux box to test a Wordpress site or a Windows VM to test a web site on IE 10? Creating a virtual machine to test or deploy your software doesn’t have to be a manual process. Bring one up in seconds with Vagrant, software for creating and managing virtual machines. With Vagrant, you can bring up a new virtual machine with the software you need, share directories, copy files, and configure networking using a friendly DSL. You can even use shell scripts or more powerful provisioning tools to set up your software and install your apps. Whether you need a Windows machine for testing an app, or a full-blown production environment for your apps, Vagrant has you covered.
In this talk you’ll learn to script the creation of multiple local virtual machines. Then you’ll use the same strategy to provision production servers in the cloud.
I work with Vagrant, Terraform, Docker, and other provisioning systems daily and am excited to show others how to bring this into their own workflows.
Docker Essentials Workshop— Innovation Labs July 2020CloudHero
This presentation was the foundation of our Docker Essentials workshop hosted by CloudHero CEO & founder Andrei Manea for the Innovation Labs team on the 23rd of July 2020.
This presentation covers the following topics:
-Getting started with containers
-A bit of history about orchestration
-Introduction to services (what they are, how to create and scale them).
To find out more about this topic, check https://cloudhero.io/
Evolving to serverless
How the applications are transforming
A note on CI/CD
Architecture of Docker
Setting up a docker environment
Deep dive into DockerFile and containers
Tagging and publishing an image to docker hub
A glimpse from session one
Services: scale our application and enable load-balancing
Swarm: Deploying application onto a cluster, running it on multiple machines
Stack: A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together.
Deploy your app: Compose file works just as well in production as it does on your machine.
Extras: Containers and VMs together
Docker is a containerization platform that packages applications and dependencies into containers that can run on any infrastructure. Containers are more lightweight than virtual machines and provide operating-system-level virtualization. The key Docker components are the Docker Engine (including the daemon and client), images, containers, registries, and networks. Dockerfiles define how to build images automatically by running commands. Images act as templates for containers, which are lightweight and portable environments for applications.
Il s’agit dans un premier temps de présenter Docker, ses cas d’usage et quelques bonnes pratiques d’utilisation.
Le but est de présenter Docker, son mode de fonctionnement et son écosystème.
Ce qu’il peut apporter et les pièges à éviter
https://github.com/kanedafromparis/prez-fabric8-dmp
This document summarizes a Docker workshop that covers the basics of Docker including:
- What Docker is and how it differs from virtual machines
- Installing Docker Desktop on Windows
- Running simple Docker containers like Redmine
- Creating a custom Docker image from a Dockerfile
- Binding a local folder to a container for development
- Common Docker commands
- Next steps like using Docker Compose and hosting one's own Docker registry
This document discusses how Docker can be used to improve the Java development environment. It outlines problems with traditional development environments like long setup times and differences between local and production environments. Docker Toolbox allows running Docker on Windows and Macs. Examples show setting up multiple apps with different stacks using Docker Compose. Use cases demonstrated include debugging, continuous deployment from IDEs, integration testing, and reproducing production issues. Best practices recommend using Docker Machine and volumes. The overall message is that Docker can make the development environment more consistent with production.
Do any VM's contain a particular indicator of compromise? E.g. Run a YARA signature over all executables on my virtual machines and tell me which ones match.
Docker introduction.
References : The Docker Book : Containerization is the new virtualization
http://www.amazon.in/Docker-Book-Containerization-new-virtualization-ebook/dp/B00LRROTI4/ref=sr_1_1?ie=UTF8&qid=1422003961&sr=8-1&keywords=docker+book
Be a happier developer with Docker: Tricks of the tradeNicola Paolucci
This document provides an overview of how Docker can make developers happy by providing clean and perfect development environments, fast application mobility and repeatability, and enabling great collaboration through microservices architecture. It then discusses various workflows and techniques for using Docker, including developing inside a single running container, leveraging containers to modularize code, reusing Dockerfiles, sharing data between containers through volumes, accessing Docker in a VM through methods like NFS or Samba, using linked containers for simple service connections, and opening ports on containers using techniques like port forwarding, VBoxManage port exposure, and iptables.
Similar to Oracle Developers APAC Meetup #1 - Working with Wercker Worksheets (20)
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
20 Comprehensive Checklist of Designing and Developing a Website
Oracle Developers APAC Meetup #1 - Working with Wercker Worksheets
Wercker Hands On Session
Objectives
The objective of this worksheet is to introduce users to the general features of Wercker.
We hope that the session builds up a sufficient foundation on the product that users can
easily explore the more advanced features of Wercker on their own.
Pre-requisites
The following software and services are required to complete the hands-on exercise:
1. Hashicorp Vagrant (https://vagrantup.com)
2. Oracle VirtualBox (VirtualBox is used as the provider for Vagrant)
3. GitHub account (http://GitHub.com)
4. DockerHub account (http://hub.docker.com)
5. Oracle Container Cloud Services (optional)
6. Your preferred Text Editor.
In addition, the user should have some familiarity with:
git
docker
basic Linux shell commands
What the session is not intended to cover
The session takes the user through a simple exercise to build a wercker pipeline. In the example we
make use of a variety of tools, such as vagrant, git and docker. The tutorial will NOT cover the
use of these tools in detail.
Working with Vagrant
The hands-on session uses Vagrant to provision a working environment for the
session.
To help reduce the setup time, we’ve prepared a vagrant box that uses the VirtualBox provider. The
box can be downloaded at:
The box comes installed with the following:
1. Oracle Enterprise Linux 7.4
2. Wercker CLI
3. golang 1.9.4
4. git 1.9.3
The above toolset is sufficient to get through the entire hands-on session.
See Appendix A for some commonly used commands used by the tools.
Additional Notes and Observations about Vagrant
For our exercises, we use Vagrant with the VirtualBox provider. Before we get started, we
should ensure that we are able to get VirtualBox running, especially on Windows machines.
Windows uses Hyper-V as the underlying hypervisor for Docker, and Hyper-V conflicts with
VirtualBox directly. To use VirtualBox, users need to disable the Hyper-V service, and ensure
that the VT-x setting in the BIOS is enabled.
Getting started
Provision a container cloud instance
In the last few steps of the hands-on, we will extend the wercker pipeline to deploy to an
Oracle Container Cloud service. Before we start working on the exercise proper, we should provision
the service, as it may take some time for the service to start up. Use the following steps to create a
container cloud.
Note: Wercker does support pipelines to other container services, but for our purposes, we have
chosen to work with Oracle Container Cloud Service.
Hands On
The hands-on involves building a simple wercker workflow that compiles a simple golang program,
containerizes it, then uploads the container image to a container repository, in our case DockerHub.
Stage 1: Setup the working environment
Note:
Our Vagrantfile uses virtualbox as the provider. You need to ensure that you are able to run VirtualBox before starting these steps,
especially if you are working on a windows environment. VirtualBox is known to use technologies that conflict with Docker.
To fix these docker-virtualbox issues, you may need to ensure the following:
Docker services are stopped
Hyper-V services are stopped
VT-x options are enabled. You’ll need to change the settings in your BIOS, and reboot.
Let’s work on getting the working environment up first. To do this step, you need to have Vagrant
and VirtualBox installed, and a working GitHub account.
1. Create your own GitHub repository, and import the code from
https://github.com/darrelchia/wercker-meetup-exercise into your new repository.
2. Clone the repository onto your working machine. The repo contains a simple “Hello World”
golang program (main.go), a unit test file (main_test.go) and a Vagrantfile.
3. Now open up the Vagrantfile. It should look something like the text below.
 1  # -*- mode: ruby -*-
 2  # vi: set ft=ruby :
 3
 4  # All Vagrant configuration is done below. The "2" in Vagrant.configure
 5  # configures the configuration version (we support older styles for
 6  # backwards compatibility). Please don't change it unless you know what
 7  # you're doing.
 8
 9  Vagrant.configure("2") do |config|
10
11    config.vm.box = "darrelchia/wercker-meetup-go"
12    config.vm.box_version = "1.0.0"
13
14    # config.vm.box = "wercker-meetup-go"
15    # config.vm.box_url = "file://D:\\wercker-meetup-go.box"
16
17    config.vm.network "forwarded_port", guest: 9000, host: 9000, "id": "portainer-port"
18    config.vm.network "forwarded_port", guest: 5000, host: 5000, "id": "golang-port"
19    config.vm.network "forwarded_port", guest: 22, host: 2222, "id": "ssh"
20    # You may want to install the vagrant proxyconf plugin.
21    # At the command line, type : vagrant plugin install vagrant-proxyconf
22    if Vagrant.has_plugin?("vagrant-proxyconf")
23      config.proxy.http = ""
24      config.proxy.https = ""
25      config.proxy.no_proxy = ""
26    end
27    config.vm.synced_folder "D:\\Exercise\\wercker-meetup", "/home/vagrant/wercker-exercise"
28
29    config.vm.box_check_update = false
30
31    # Example for VirtualBox:
32    #
33    config.vm.provider "virtualbox" do |vbox|
34      vbox.memory = 2048
35      vbox.cpus = 2
36      vbox.name = "Wercker-Meetup-Golang2"
37    end
38  end
4. If you have received a box image (either through a USB drive, or via a cloud download link) and
have access to the .box file, do the steps below. If you have not, and want to download the Vagrant
Box from Vagrant Cloud, you may skip this step.
Comment out line 11 (config.vm.box) and line 12 (config.vm.box_version).
Uncomment line 14 (config.vm.box) and line 15 (config.vm.box_url), and update the value
to point to the location of the downloaded box. Use the file:// syntax. If you are doing this
on Windows, please use a double backslash as the directory separator (e.g. "file://C:\\Program
Files\\wercker-meetup-go.box").
What these steps do is make sure vagrant uses the local box instead of downloading it from
Vagrant Cloud.
5. Change the value of config.vm.synced_folder to point to the location where you have
cloned the GitHub repository. By doing this, we’re sharing the directory on our host
machine with the vagrant box. This gives us the convenience of using our preferred editing tool
on the host machine.
For example, if we cloned our git repository to the directory D:\Exercise\Wercker-Meetup, the
config.vm.synced_folder entry on line 27 will look like this:
27    config.vm.synced_folder "D:\\Exercise\\Wercker-Meetup", "/home/vagrant/wercker-exercise"
6. Start up the development environment from the command prompt, running the following command
as an Administrator. This will import the box into VirtualBox and start up the environment.
vagrant up
Note: All vagrant commands should be run in the same directory as the Vagrantfile.
7. When the command has finished executing without any errors, you’re ready to start. Log into
the environment using the following command:
vagrant ssh
This opens up an ssh session to the running Vagrant box. Note: you can open multiple ssh
sessions into the same box.
Stage 2: Building a simple wercker.yml file
For the second stage, we’re going to write a simple workflow to test and build our sample code. We
will start off with using the Wercker CLI tool on our local environment to make sure we get the
workflow right first.
8. Create an empty text file called wercker.yml. This file should be located in your working
directory (the place where you cloned the repository).
Note
wercker.yml uses the YAML format. YAML is very strict on formatting, and does not accept the <tab> character. For
indenting the code, we MUST use spaces (two per level). If you fail to do so, you will find that your wercker workflows may
fail.
9. In the wercker.yml file, add the following :
 1  box:
 2    id: golang
 3    ports:
 4      - "5000"
10. This defines the base docker image that we will be working with. For our example, we’re working
with golang, hence, we defined a box with the id: golang. When wercker runs, it will retrieve the
golang docker image from DockerHub.
11. Now, let’s add some pipelines. Add the following into the wercker.yml file.
 1  box:
 2    id: golang
 3    ports:
 4      - "5000"
 5
 6  build:
 7    # The steps that will be executed on build
 8    steps:
 9      # golint step!
10      - wercker/golint
11
12      # Test the project
13      - script:
14          name: go test
15          code: |
16            go test ./...
17
18      # Build the project
19      - script:
20          name: go build
21          code: |
22            go build -o wercker-example main.go
23
24  dev:
25    steps:
26      - internal/watch:
27          code: |
28            go build -o wercker-example main.go
29            ./wercker-example
30          reload: true
12. Save the file, and we can test it locally.
13. Go to the vagrant ssh session, and execute the command wercker build in the same
location as the wercker.yml file. This executes the build pipeline defined with the
build: label (line 6).
14. The initial build will take some time, as the Wercker CLI needs to download the container
images it needs to work.
15. Now, let’s run the dev: pipeline using the command: wercker dev --expose-ports
16. Once you see that the pipeline is running, open a browser on the host machine, and access
http://localhost:5000. You should see the output of the go application: “Hello World”. Also
access http://localhost:5000/cities.json. You should see a list of cities in JSON format.
17. When using the internal/watch step, the CLI watches for changes to the source code, and
rebuilds and reloads the running code. You can try this by opening another vagrant ssh session.
Modify main.go:15 by adding one more city to the array. If you access
http://localhost:5000/cities.json again, you will find that the list of cities has been updated.
Note: This does not work if you are using a shared folder between the vagrant box and the host
machine. We’re not sure if it’s an issue with VirtualBox synced folders, or with Wercker.
18. Terminate the wercker dev session with a ctrl+c.
19. Now that we’re able to execute the wercker pipeline using the CLI, let’s bring it into wercker.
20. First, let’s push the code changes to our repository. If you’re going to push the code from the
vagrant ssh session, you’ll need to configure git first. Run the following set of commands to
push the code back to GitHub.
git config --global user.name "<your name>"
git config --global user.email "<your email address>"
git add wercker.yml
git commit -m "Added wercker.yml"
git push
Once this is done, let’s go to the wercker web console.
21. Open a browser and log in to http://app.wercker.com. You’ll see a screen similar to the one
below.
22. Click on “Create your first application” to create the application.
23. In the Select user and SCM screen, select GitHub, and click Next.
If it shows “Not Connected”, you may need to link your GitHub account to Wercker first.
24. In the Select Repository page, select the repository that you pushed your wercker.yml to,
and click Next.
25. In the Configure Access page, select “wercker will check out the code without using an SSH key”
and click Next.
26. Click on Create to create the application. You should see a popup saying the Application was
created successfully. Wercker will import your code from GitHub and bring you to a
configuration page.
On the page, click on “I already have a wercker.yml, trigger a build now”.
27. You’ll see your workflow running. (Hopefully successfully)
Stage 3: Adding a new pipeline to push to DockerHub
28. By now you will have realized that just building is not enough; we need to push the artifacts
somewhere. Let’s add one more step to push the build artifacts to a container repository.
29. Let’s get back to our wercker.yml file. Add the following pipeline into the wercker file to push
the built container onto DockerHub.
31
32  push-docker:
33    steps:
34      - internal/docker-push:
35          username: $DOCKER_USERNAME
36          password: $DOCKER_PASSWORD
37          repository: $DOCKER_USERNAME/wercker-sg-meetup
38          ports: "5000"
39          registry: https://registry.hub.docker.com/v2
40          tags: latest
41          cmd: "/pipeline/source/wercker-example"
30. Save and push the file into the GitHub repo. Pushing changes to GitHub automatically triggers
the wercker workflow to run. Wercker automatically creates a web hook that picks up code
commits and executes the build.
31. Let’s modify our workflow now. Go back to the browser and access the Workflows tab.
32. In the editor, you should be able to see the build step. What we want to do is add a new
pipeline.
33. Click on the Add new pipeline button. Enter a name, and the pipeline name (as per what you
have defined in the wercker.yml file). Click Create.
34. Then you will be asked to add pipeline environment variables. The pipeline environment
variables are the ones we referenced in the wercker.yml file by prefixing the $ symbol (e.g.
$DOCKER_USERNAME). Adding the variables here means only this pipeline can use these
variables. If you wish to create workflow-wide variables, click on the Environment tab instead.
35. Create the variables for DOCKER_USERNAME and DOCKER_PASSWORD. When you’re done, click
on the Workflows tab at the top again.
36. Now, let’s add the pipeline into our workflow. In the Editor, click on the + icon after the build
step.
37. On the popup, enter the following details:
On branch(es): *
Not on branch(es): <blank>
Execute pipeline: push-docker
Click on the add button.
38. Now, let’s trigger the build again.
39. We’ll see that the wercker workflow executes and successfully publishes the build artifacts to
DockerHub.
40. You may wish to pull the image down to test it. Our vagrant box has docker installed, so you can
pull it there to check that it’s working.
Stage 4: Complete it with Oracle Container Cloud
41. As a last step, we want to configure our container to run on a container cloud service. For our
exercise, we have chosen to work with Oracle Container Cloud.
42. Before we start, we need to configure an OCCS service first. We could make use of wercker’s
custom steps, or use the script function to do this automatically, but for simplicity’s sake, we’re
going to do it manually.
This assumes that you already have an Oracle Container Cloud instance provisioned. If you don’t,
you may need to log in and provision it first.
43. Log in to your Oracle Container Cloud Service.
44. In the OCCS console, click on the Services menu on the sidebar, and click the New Service
button.
45. Let’s create a new service called wercker-example. Enter the location of the image to use for
this service. The image should be the same one that we have used in the previous stage.
Configure the exposed ports by clicking on the Ports checkbox under Available Options.
46. Click on the Add button from the Ports section. In the popup, expose the host port 5000 to the
container port 5000 over TCP. This is where our Hello World is running.
47. Click on the Deploy button to start up the container service. This may take a minute or two.
48. Click on the hosts menu on the sidebar and locate our service. Click on the hostname, and look
for the public_ip for our instance. We’ll need this information to access our example later.
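Once the service is up, you can verify it from any machine using the public_ip noted above (a sketch; OCCS_IP is a placeholder for that address):

```shell
# OCCS_IP is a placeholder: set it to the public_ip from the OCCS hosts page.
if [ -n "${OCCS_IP:-}" ]; then
  curl -s "http://$OCCS_IP:5000/"   # should return the Hello World response
else
  echo "set OCCS_IP to your instance's public_ip first"
fi
```
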
49. Now to work on our pipeline. Add the following pipeline to your wercker.yml file. We’re using a
custom step that interacts with the Oracle Container Cloud service.
occs-restart:
  steps:
    # Manage Oracle Container Cloud Service container
    - peternagy/oracle-occs-container-util:
        occs_user: $OCCS_USERNAME
        occs_password: $OCCS_PASSWORD
        rest_server_url: $REST_SERVER_URL
        function: stop
        deployment_name: $DEPLOYMENT_NAME
50. Using the steps we’ve learnt in the earlier phases, do the following:
- Push the changes into the repository.
- Log in to the wercker web console, and create the pipeline.
- Remember to add the environment variables. The values map in the following way:
o $OCCS_USERNAME – The username to log into the OCCS instance. This is different
from your Oracle Cloud account name.
o $OCCS_PASSWORD – The password for the OCCS instance.
o $REST_SERVER_URL – The address used to access your OCCS instance. See step
43. The format of this should be https://<OCCS IP address> (e.g. https://129.10.11.12).
o $DEPLOYMENT_NAME – The name of the service we created in step 45.
51. Add the pipeline to our workflow and we’re done!
You can try editing main.go on our vagrant box, making some changes to the code, and pushing it
back to our GitHub repository, and you can watch the process deploy.
For this particular stage, what we’re doing is making our OCCS deployment restart. On restart, it
fetches the latest image from DockerHub and re-deploys it, completing our CD workflow.
There are also different types of steps that you can choose to explore, e.g. deploying to a rolling
router for blue-green deployments, or deploying to a Kubernetes cluster. Just browse through the
marketplace for interesting steps that others have contributed.
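Taken together, the whole chain is driven by a single wercker.yml with one pipeline per stage; a skeletal sketch of the layout (step bodies omitted here, the full versions are in the Code Blocks appendix):

```yaml
# One wercker.yml, one pipeline per stage of this exercise.
build:        # lint, test and compile; runs on every push
  steps: []
push:         # publish the built image to DockerHub
  steps: []
occs-restart: # restart the OCCS deployment so it pulls the new image
  steps: []
```
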
This concludes the hands-on.
Appendix A: Commonly used commands for the various tools
This section lists the commands and resources that we have used throughout this exercise. For the
hands-on, please print out this section and distribute it to the attendees.
Today’s WIFI Access:
SSID: clear-guest
Username: guest
Password:
GitHub Repository containing the sample codes:
https://github.com/darrelchia/oracledeveloper-apac-wercker.git
Vagrant
vagrant up Initializes the box, downloads and/or imports any images and starts up the
instance.
vagrant ssh Starts an SSH connection to our vagrant session.
vagrant halt Stops (shuts down) the vagrant session.
vagrant destroy Cleans up and deletes the vagrant session. The next time you run a vagrant
up, it will be an entirely new instance.
GIT
git config --global user.email <email address> Configures the email address of the user
using git.
git config --global user.name <user name> Configures the username of the user using git.
git add <file> Stages a file for commit in the local repository.
git commit -m <commit message> Commits staged files into the local repository.
git push Pushes the changes from our local
repository to our remote one (GitHub).
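As a worked example of the commands above, the following creates a throwaway local repository and runs through the configure, add and commit cycle (the identity values and directory name are placeholders; git push is left out since it needs a remote such as our GitHub repo):

```shell
# Create a scratch repository to practice the git workflow locally.
rm -rf git-demo-repo
mkdir git-demo-repo
(
  cd git-demo-repo
  git init -q .
  git config user.email "attendee@example.com"   # placeholder identity
  git config user.name "Meetup Attendee"
  echo 'package main' > main.go
  git add main.go
  git commit -q -m "Add main.go"
  git log --oneline   # shows our commit; 'git push' would send it to a remote
)
```
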
Code Blocks
This section provides the code snippets used in the example
dev:
  steps:
    - internal/watch:
        code: |
          go build -o wercker_example main.go
          ./wercker_example
        reload: true
# Build definition
build:
  # The steps that will be executed on build
  steps:
    # golint step. This lints the Go code for style problems
    - wercker/golint
    # Run the go unit tests
    - script:
        name: go test
        code: |
          go test ./...
    # Build the project
    - script:
        name: go build
        code: |
          go build -o wercker_example main.go
push:
  steps:
    # Push to public docker repo
    - internal/docker-push:
        ports: "5000"
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        tag: latest
        repository: $DOCKER_REPOSITORY
        registry: https://index.docker.io/v2/
        cmd: /pipeline/source/wercker_example
You can use the following OCC account:
Address: _________________________________________
Username: occ_admin
Password: ________________________________________
occs-restart:
  steps:
    # Manage Oracle Container Cloud Service container
    - peternagy/oracle-occs-container-util:
        occs_user: $OCCS_USERNAME
        occs_password: $OCCS_PASSWORD
        rest_server_url: $REST_SERVER_URL
        function: stop
        deployment_name: $DEPLOYMENT_NAME