Presentation on Pesantren Kilat Code Security
Tangerang, 2016-06-06
We talk about Docker: what it is, why it matters, and how it can benefit us.
This presentation is an introduction, delivered at a local meetup in Indonesia.
This document summarizes a presentation about using Docker for development. It discusses installing Docker, running a "Hello World" Docker image, building a custom Python Docker image, and composing a more complex Docker application with PHP, MySQL, and Apache. The benefits of Docker like lightweight containers, easy environment setup, and scalability are highlighted. Some challenges with scaling and orchestration are also mentioned, along with solutions like Docker Swarm and Kubernetes.
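The "custom Python Docker image" step the presentation walks through usually boils down to a short Dockerfile. A minimal sketch, assuming a typical `app.py` entry point and a `requirements.txt` (both filenames are placeholders, not taken from the slides):

```shell
# Write a minimal, hypothetical Dockerfile for a Python app
cat > Dockerfile.python-demo <<'EOF'
FROM python:3-slim
WORKDIR /app
# install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF

# With a Docker daemon available, build and run would be:
#   docker build -f Dockerfile.python-demo -t my-python-app .
#   docker run --rm my-python-app
cat Dockerfile.python-demo
```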
This document provides an overview of Docker basics including requirements, software, architecture, and concepts. It discusses traditional servers, virtual machines, and containers. Key advantages and disadvantages of each approach are listed. Docker concepts like images, containers, layers, Dockerfile, registry, and hub are defined. Common Docker commands are also outlined.
Most people think "adopting containers" means deploying Docker images to production. In practice, adopting containers in the continuous integration process provides visible benefits even if the production environment consists of VMs.
In this webinar, we will explore this pattern by packaging all build tools inside Docker containers.
Container-based pipelines allow us to create and reuse building blocks to make pipeline creation and management MUCH easier. It's like building with Legos instead of clay.
This not only makes pipeline creation and maintenance much easier, it also solves a myriad of classic CI/CD problems such as:
Putting an end to version conflicts in build machines
Eliminating build machine management in general
Step portability and maintenance
In a very real sense, Docker-based pipelines reflect lessons learned from microservices in CI/CD pipelines. We will share tips and tricks for running these kinds of pipelines using Codefresh as the CI/CD solution, since it fully supports pipelines where each build step runs in its own Docker image.
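The "build tools live in containers" pattern described above can be sketched as a pipeline script in which each step mounts the workspace into a throwaway tool image. The image tags, build goals, and filename are illustrative assumptions, not taken from the webinar:

```shell
# Sketch: each CI step runs in its own tool container, so nothing is
# installed on the build machine and steps cannot conflict.
cat > ci-step.sh <<'EOF'
#!/bin/sh
set -e
# Step 1: compile with a pinned Maven/JDK image -- no Maven on the host
docker run --rm -v "$PWD":/src -w /src maven:3-eclipse-temurin-17 mvn -q package
# Step 2: a different toolchain entirely, with no version conflict with step 1
docker run --rm -v "$PWD":/src -w /src node:20 npm test
EOF
chmod +x ci-step.sh
cat ci-step.sh
```

Because each step pins its own image, "works on my build machine" problems disappear and the step definition is portable between pipelines.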
This document provides an introduction to Docker, including what Docker is, why it matters, and how it works. Some key points:
- Docker implements lightweight containers that provide process isolation using features of the Linux kernel like cgroups and namespaces. It allows building and shipping applications without dependency and compatibility issues.
- Docker solves the "N times N" compatibility problem that arises when applications need to run in different environments. Its portable containers and standardized operations help automate development and deployment workflows.
- Containers isolate applications from one another and their dependencies without the overhead of virtual machines. This makes them lightweight and efficient while still providing isolation of applications and flexibility to run anywhere.
This document introduces Docker and discusses its benefits. Docker is an open platform that allows developers and administrators to build, ship, share, and run distributed applications. It allows building applications from any programming language or framework. Docker provides portability, automation, standardization, and the ability to rapidly scale applications up or down. It also helps support microservices architectures.
DockerCon EU 2015: Persistent, stateful services with docker cluster, namespa..., by Docker, Inc.
This document discusses providing persistent, stateful services with Docker clusters. It covers using Docker volumes and namespaces to manage storage, implementing "storage engines" to back up volumes for different clouds, and using supercontainers to control the host and peer containers. It summarizes setting up stateful Docker clusters using Mesos/Marathon and scheduling a supercontainer volume service for each host to support backups across multiple storage engines.
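The core problem that talk addresses, state that must outlive a container, is illustrated most simply today by a named volume. The Compose file below is a modern, hypothetical sketch of that idea, not the talk's Mesos/Marathon setup:

```shell
# Write a minimal Compose file where a named volume persists MySQL data
# across container replacement (service and volume names are placeholders).
cat > docker-compose.stateful.yml <<'EOF'
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder only, never hardcode in practice
    volumes:
      - dbdata:/var/lib/mysql        # survives `docker compose down` without -v
volumes:
  dbdata:
EOF
cat docker-compose.stateful.yml
```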
Docker and containers: Disrupting the virtual machine (VM), by Rama Krishna B
This document discusses Docker containers and how they are disrupting virtual machines. It begins with definitions of key terms like virtualization, virtual machines, and hypervisors. It then compares virtual machines to containers, noting that containers are more lightweight and efficient since they share the host operating system and resources, while still providing isolation. The document traces the evolution of containers from early technologies like chroot to modern implementations in Docker. It positions Docker as an open source tool that packages and runs applications in portable software containers. While containers increase efficiency over virtual machines, the document argues both technologies can coexist in cloud environments.
Docker provides security features to secure content, access, and platforms. It delivers integrated security through content trust, authorization and authentication, and runtime containment using cgroups, namespaces, capabilities, seccomp profiles, and Linux security modules.
This slide deck is for beginners who are eager to learn Docker but don't know where to start or how it works. Here I try to explain all the basics of Docker as simply as possible.
Docker uses virtualization techniques like namespaces and cgroups to isolate processes and share resources efficiently across multiple Linux containers. Namespaces isolate things like process IDs, network interfaces, and mounted filesystems between containers, while cgroups limit resources like CPU and memory for containers. AuFS combines multiple filesystem layers into one for containers. Docker builds on these technologies to package applications and their dependencies into lightweight Linux containers that can run virtually anywhere.
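The namespaces described above are not Docker magic; every Linux process already belongs to a set of them, and you can inspect them from any shell without Docker installed:

```shell
# Each symlink below is a namespace the current shell belongs to
# (pid, net, mnt, uts, ipc, user, ...). Containers simply get fresh ones.
ls -l /proc/$$/ns

# The cgroup membership of the current process is visible the same way
cat /proc/$$/cgroup
```

A containerized process shows different inode numbers on these links than the host shell, which is exactly what "isolation via namespaces" means at the kernel level.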
Introduction to Containers - SQL Server and Docker, by Chris Taylor
Containers provide lightweight virtualization that packages applications and dependencies together. The document introduces containers and Docker, discusses the differences between containers and virtual machines, and covers key Docker concepts like images, Dockerfiles, Docker Hub, and running SQL Server in containers. It also addresses container setup, licensing, and performance considerations for using containers with SQL Server.
Dockerized containers are the current wave promising to revolutionize IT. Everybody is talking about containers, but a lot of people remain confused about how they work and why they are different from, or better than, virtual machines. In this session, Black Duck container and virtualization expert Tim Mackey will demystify containers, explain their core concepts, and compare and contrast them with the virtual machine architectures that have been the staple of IT for the last decade.
Docker allows for the use of lightweight containers that share the host operating system kernel. Containers isolate applications from one another and provide a way to package applications with their dependencies. Containers use resource isolation features and union file systems for efficiency. Docker images are built from layers and can be distributed. The Docker ecosystem includes tools for the container lifecycle, networking, storage, and distribution of images.
Docker Security - Secure Container Deployment on Linux, by Michael Boelen
How to securely deploy your containers, by the author of rkhunter and auditing tool Lynis.
Many introductory talks about Docker and its container technology have been given. This attention to the subject is not surprising, given the number of people "doing DevOps" now.
With container technology being fairly new on the Linux platform, the security aspects of containers are often overlooked. While Linux containers still do not fully contain from a security point of view, we can definitely improve their security level.
In this talk, we look at the underlying Linux security measures, followed by the features Docker itself has to offer. The goal is to understand how we can deploy containers in a secure way. After all, Docker is no longer just a toy, and our precious data is involved.
Container security involves securing the host, container content, orchestration, and applications. The document discusses how container isolation evolved over time through namespaces, cgroups, capabilities, and other Linux kernel features. It also covers securing container images, orchestrators, and applications themselves. Emerging technologies like LinuxKit, Katacontainers, and MirageOS aim to provide more lightweight and secure container environments.
Lightweight virtualization uses container technology to isolate processes and their resources through namespaces and cgroups. Docker is a container management system that provides lightweight virtualization. Baidu chose Docker for its BAE platform because containers provide better isolation than sandboxes with fewer restrictions and lower costs. Docker meets BAE's needs but was extended with additional security and resource constraints for its PaaS platform.
DockerCon EU 2015: Docker and PCI-DSS - Lessons learned in a security sensiti..., by Docker, Inc.
This document summarizes Udo Seidel's presentation on Docker and PCI compliance at Amadeus. It discusses how Amadeus implemented Docker while meeting PCI requirements for security, access controls, logging, and more. Some key lessons included reusing existing security tools, having a dedicated security architect role, and emphasizing communication between security, operations and development teams. Docker provided benefits like abstraction, ease of use and mobility while allowing Amadeus to port more applications over time in compliance with PCI standards.
This document discusses Docker, including what it is, why it is used, and how it works. Docker provides lightweight software containers that package code and its dependencies so the application runs quickly and consistently on any computing infrastructure. It allows applications to be easily deployed and migrated across computing environments. The document outlines how Docker addresses issues like managing multiple software stacks and hardware environments by creating portable containers that can be run anywhere without reconfiguration. Examples of using Docker for microservices, DevOps, and data centers are also provided.
The document introduces containers and Docker. It discusses the problems with traditional virtualization approaches for managing and deploying code. Containers provide a lightweight virtualization method that packages code and dependencies together so the application runs reliably from one computing environment to another. Docker is a tool that makes it easy to create, deploy and run containers. The document provides examples of using Docker to build container images from a Dockerfile, run containers, link containers together using Docker Compose, and share container images publicly on Docker Hub.
This document provides an overview of Docker technologies including Docker Engine, Docker Machine, Docker Kitematic, Docker Compose, Docker Swarm, Docker Registry, Docker Content Trust, Docker Networking, and Docker Universal Control Plane. It describes what each technology is used for, provides examples, and references additional resources for further information.
This document summarizes Docker security features as of release 1.12. It discusses key security modules like namespaces, cgroups, capabilities, seccomp, AppArmor/SELinux that provide access control and isolation in Docker containers. It also covers multi-tenant security, image signing, TLS for daemon access, and best practices like using official images and regular updates.
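The best practices that deck covers translate into a handful of `docker run` hardening flags. The flags below are all standard Docker CLI options; the image name `myapp` is a placeholder:

```shell
# Write a hypothetical hardened launch script (not executed here, since it
# needs a Docker daemon): drop all capabilities, re-add only what the app
# needs, forbid privilege escalation, and cap resources.
cat > run-hardened.sh <<'EOF'
#!/bin/sh
docker run --rm \
  --read-only \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  --pids-limit 100 \
  --memory 256m \
  myapp
EOF
chmod +x run-hardened.sh
cat run-hardened.sh
```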
Docker 1.11 Meetup: Containerd and runc, by Arnaud Porterie and Michael Crosby (Michelle Antebi)
In this talk, Michael Crosby will present on runC and containerd, their internals, and how they work together to start and manage containers in Docker. Afterwards, Arnaud Porterie will touch on what was shipped in 1.11 and how it will enable some of the things we are working on for 1.12.
Docker is a system for running applications in isolated containers. It addresses issues with traditional virtual machines by providing lightweight containers that share resources and allow applications to run consistently across different environments. Docker eliminates inconsistencies in development, testing and production environments. It allows applications and their dependencies to be packaged into a standardized unit called a container that can run on any Linux server. This makes applications highly portable and improves efficiency across the entire development lifecycle.
This document summarizes a presentation on container security given by Phil Estes. It identifies several threat vectors for containers including risks from individual containers, interactions between containers, external attacks, and application security issues. It then outlines various security tools and features in Docker like cgroups, Linux Security Modules, capabilities, seccomp, and user namespaces that can help mitigate these threats. Finally, it discusses some future directions for improving container security through more secure defaults, image signing, and network security enhancements.
Docker is a system for running applications securely isolated in a container to provide a consistent deployment environment. The document introduces Docker, discusses the challenges of deploying applications ("the matrix from hell"), and how Docker addresses these challenges by allowing applications and their dependencies to be packaged into lightweight executable containers that can run on any infrastructure. It also summarizes key Docker tools like Docker Compose for defining and running multi-container apps, Docker Machine for provisioning remote Docker hosts in various clouds, and Docker Swarm for clustering Docker hosts.
Docker is a system for running applications in lightweight containers that can be deployed across machines. It allows developers to package applications with all dependencies into standardized units for software development. Docker eliminates inconsistencies in environments and allows applications to be easily deployed on virtual machines, physical servers, public clouds, private clouds, and developer laptops through the use of containers.
The document provides an introduction to Docker, containers, and the problems they aim to solve. It discusses:
- Why Docker was created - to address the "matrix from hell" of developing and deploying applications across different environments and platforms.
- How Docker works at a high level, using lightweight containers that package code and dependencies to run consistently on any infrastructure.
- Some key Docker concepts like images, containers, the Dockerfile for building images, and common Docker commands.
- Benefits of Docker for developers and operations in simplifying deployment, reducing inconsistencies, and improving portability of applications.
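The "common Docker commands" such intros typically cover can be collected as a quick reference. All commands and flags below are standard Docker CLI usage; `<id>` and `user/name` are placeholders:

```shell
# Write a cheat sheet of the everyday Docker workflow
cat > docker-cheatsheet.txt <<'EOF'
docker build -t name .              # build an image from the Dockerfile here
docker images                       # list local images
docker run -d -p 80:80 name        # run detached, publish a port
docker ps                           # list running containers
docker logs <id>                    # container stdout/stderr
docker exec -it <id> sh             # open a shell in a running container
docker stop <id> && docker rm <id>  # stop and remove a container
docker push user/name               # push an image to a registry
EOF
cat docker-cheatsheet.txt
```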
This document provides an introduction to Docker, including why it was created, how it works, and its growing ecosystem. Docker allows applications to be packaged with all their dependencies and run consistently across any Linux server by using lightweight virtual containers rather than full virtual machines. It solves the problem of differences between development, testing, and production environments. The document outlines the technical details and advantages of Docker, examples of how companies are using it, and the growing support in tools and platforms.
This document provides an agenda and overview for a hands-on workshop on container and Docker technologies. It begins with a brief introduction to containers and Docker, then covers installing and managing Docker containers using tools like Portainer and OpenShift Origin. It also discusses building simple Docker applications and has sections on container and Docker concepts like images, containers, registries, advantages, and the Docker ecosystem. The document aims to explain containers and Docker for both developers and IT administrators.
Demystifying Containerization Principles for Data Scientists, by Dr Ganesh Iyer
An introductory tutorial on how Docker can be used as a development environment for data science projects.
Introduction to Docker and Kubernetes. Learn how these help you build scalable and portable applications in the cloud. It introduces the basic concepts of Docker and its differences from virtualization, then explains the need for orchestration and walks through some hands-on experiments with Docker.
Write Once and REALLY Run Anywhere | OpenStack Summit HK 2013, by dotCloud
The document outlines the agenda for the OpenStack Summit in November 2013. The agenda includes sessions on Docker and its ecosystem, using Docker with OpenStack and Rackspace, and a cross-cloud deployment demo. Docker is presented as a solution for developing and deploying applications across multiple environments by encapsulating code and dependencies in portable containers. It can help eliminate inconsistencies between development, testing, and production environments.
Docker - Demo on PHP Application deployment Arun prasath
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
In this demo, I will show how to build a Apache image from a Dockerfile and deploy a PHP application which is present in an external folder using custom configuration files.
The document outlines the agenda for the OpenStack Summit in November 2013, including presentations on Docker and its ecosystem, how Docker can be used with OpenStack and Rackspace, and a demonstration of cross-cloud application deployment using Docker. Docker is presented as a solution to the "matrix from hell" of running applications across different environments by providing lightweight, portable containers that can run anywhere regardless of the operating system. The summit aims to educate attendees on Docker and showcase its integration with OpenStack for simplified and efficient application deployment and management across multiple clouds.
This document provides an introduction and overview of Docker. It discusses why Docker was created to address issues with managing applications across different environments, and how Docker uses lightweight containers to package and run applications. It also summarizes the growth and adoption of Docker in its first 7 months, and outlines some of its core features and the Docker ecosystem including integration with DevOps tools and public clouds.
This document provides an overview of containers and Docker for automating DevOps processes. It begins with an introduction to containers and Docker, explaining how containers help break down silos between development and operations teams. It then covers Docker concepts like images, containers, and registries. The document discusses advantages of containers like low overhead, environment isolation, quick deployment, and reusability. It explains how containers leverage kernel features like namespaces and cgroups to provide lightweight isolation compared to virtual machines. Finally, it briefly mentions Docker ecosystem tools that integrate with DevOps processes like configuration management and continuous integration/delivery.
In this talk Ben will walk you through running Cassandra in a docker environment to give you a flexible development environment that uses only a very small set of resources, both locally and with your favorite cloud provider. Lessons learned running Cassandra with a very small set of resources are applicable to both your local development environment and larger, less constrained production deployments.
Configuration management tools like Chef, Puppet, and Ansible aim to reduce inconsistencies by imposing and managing consistent configurations across environments. However, they do not fully address issues related to dependencies, isolation, and portability. Docker containers build on these tools by adding standard interfaces and a lightweight virtualization layer that encapsulates code and dependencies, allowing applications and their environments to be packaged together and run consistently on any infrastructure while also providing isolation.
This document provides an introduction to Docker, including:
- Docker allows developers to package applications with all dependencies into standardized units called containers that can run on any infrastructure.
- Docker uses namespaces and control groups to provide isolation and security between containers while allowing for more efficient use of resources than virtual machines.
- The Docker architecture includes images which are templates for creating containers, a Dockerfile to automate image builds, and Docker Hub for sharing images.
- Kubernetes is an open-source platform for automating deployment and management of containerized applications across clusters of hosts.
Docker, Containers, and the Future of Application Delivery document discusses:
- The challenges of running applications across different environments due to variations in stacks and hardware ("N x N" compatibility problem).
- How Docker addresses this by allowing applications and their dependencies to be packaged into standardized software containers that can run consistently across any infrastructure similar to how shipping containers standardized cargo transportation.
- The benefits of Docker for developers in building applications once and running them anywhere without dependency or compatibility issues, and for operations in simplifying configuration management and automation.
Docker-Hanoi @DKT , Presentation about Docker EcosystemVan Phuc
The document provides an overview of Docker Platform and Ecosystem. It begins with introductions and background on Docker, explaining how Docker solves the problem of dependency hell and portability issues by allowing applications to run in isolated containers that package code and dependencies. It then discusses key components of Docker including Engine, Registry, Machine, Swarm, Compose and tools like Toolbox and Cloud. The document concludes with examples of using Docker for continuous integration pipelines and microservices architectures.
Docker, Containers and the Future of Application DeliveryDocker, Inc.
Docker containers provide a standardized way to package applications and their dependencies to run consistently regardless of infrastructure. This solves the "N x N" compatibility problem caused by multiple applications, stacks, and environments. Containers allow applications to be built once and run anywhere while isolating components. Docker eliminates inconsistencies between development, testing and production environments and improves automation of processes like continuous integration and delivery.
Docker, Containers and the Future of Application DeliveryDocker, Inc.
This document discusses Docker and containers as a solution to challenges in application delivery caused by the multiplicity of hardware environments and software stacks. It describes how Docker solves this "N x N" compatibility problem by allowing applications and their dependencies to be packaged into standardized, self-sufficient containers that can run on any infrastructure. The document outlines why Docker is gaining excitement from developers and operations teams by enabling "build once, run anywhere", continuous integration/deployment, and consistent application environments. It also summarizes some alternative approaches and real-world use cases being developed by the Docker community.
Newt Global provides DevOps transformation, cloud enablement, and test automation services. It was founded in 2004 and is headquartered in Dallas, Texas with locations in the US and India. The company is a leader in DevOps transformations and has been one of the top 100 fastest growing companies in Dallas twice. The document discusses an upcoming webinar on Docker 101 that will be presented by two Newt Global employees: Venkatnadhan Thirunalai, the DevOps Practice Leader, and Jayakarthi Dhanabalan, an AWS Solution Specialist.
6. Why is Docker Awesome?
◉ It's like a Virtual Machine, but much more lightweight.
◉ Can be up and running in a few seconds.
◉ Easy to deploy, easy to remove.
◉ Clear separation of concerns.
◉ Scales more easily.
◉ Gets higher density and runs more workloads.
9. Some notes

Virtual Machine
VMs are very large, which makes them impractical to store and transfer. If you want to replicate a VM used as a service, you need a full VM for each instance: 1 GB of space for 1 instance = 1 TB for 1000 instances.

Container
A single bulk of space is shared by hundreds or thousands of containers, thanks to the union file system.
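The storage arithmetic on this slide can be sketched in a few lines of Python. This is an illustrative model only: the 1 GB image size and the 8 MB per-container writable layer are assumed figures, not Docker measurements.

```python
# Illustrative storage model: full VM copies vs. containers sharing a
# read-only base image through a union file system.
# Sizes are assumptions for illustration, not measurements.

GB = 1024  # work in MB for simplicity

def vm_storage_mb(instances, image_mb=1 * GB):
    # Each VM instance needs its own full copy of the image.
    return instances * image_mb

def container_storage_mb(instances, image_mb=1 * GB, writable_layer_mb=8):
    # Containers share one read-only image; each adds only a thin
    # writable layer on top.
    return image_mb + instances * writable_layer_mb

print(vm_storage_mb(1000) // GB)         # 1000 (GB, i.e. ~1 TB)
print(container_storage_mb(1000) // GB)  # 8 (GB: shared base + thin layers)
```

The multiplicative cost of full copies versus the additive cost of shared layers is the whole point of the slide's 1 TB figure.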
10. Resource utilization

Virtual Machine
A fully virtualized system means resources are allocated to a specific VM. Heavier!

Container
No need to create virtual devices. All containers share the host, running on top of the same kernel but isolated.
17. …and spawned an Intermodal Shipping Container Ecosystem
• 90% of all cargo now shipped in a standard container
• Order-of-magnitude reduction in cost and time to load and unload ships
• Massive reduction in losses due to theft or damage
• Huge reduction in freight cost as a percent of final goods (from >25% to <3%), driving massive globalization
• 5000 ships deliver 200M containers per year
18. Did you figure it out?
◉ It's like our code and the environment to run the code.
◉ A problem in development and deployment.
19. Meet Code and Environment

Multiplicity of stacks:
• Static website: nginx 1.5 + modsecurity + openssl + bootstrap 2
• Web frontend: Ruby + Rails + sass + Unicorn
• User DB: postgresql + pgv8 + v8
• Queue: Redis + redis-sentinel
• Analytics DB: hadoop + hive + thrift + OpenJDK
• Background workers: Python 3.0 + celery + pyredis + libcurl + ffmpeg + libopencv + nodejs + phantomjs
• API endpoint: Python 2.7 + Flask + pyredis + celery + psycopg + postgresql-client

Multiplicity of hardware environments:
• Development VM
• QA server
• Public Cloud
• Disaster recovery
• Contributor's laptop
• Production Servers
• Production Cluster
• Customer Data Center

Do services and apps interact appropriately? Can I migrate smoothly and quickly?
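The two multiplicities above form the "matrix from hell": every stack must be validated against every environment. A small Python sketch (names taken from the slide) shows how a standard container interface turns that product into a sum:

```python
# The "matrix from hell": every stack must be made to work in every
# environment, so the number of combinations grows multiplicatively.

stacks = [
    "static website", "web frontend", "user DB", "queue",
    "analytics DB", "background workers", "API endpoint",
]
environments = [
    "development VM", "QA server", "public cloud", "disaster recovery",
    "contributor's laptop", "production servers", "production cluster",
    "customer data center",
]

# Without containers: each (stack, environment) pair is its own problem.
pairings = len(stacks) * len(environments)
print(pairings)  # 56

# With a standard container interface: each stack targets the container
# format once, and each environment learns to run containers once.
standardized = len(stacks) + len(environments)
print(standardized)  # 15
```

This is the same N x M vs. N + M argument the intermodal shipping container made for cargo.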
20. We need a shipping container system for code

The same multiplicity of stacks (static website, web frontend, user DB, queue, analytics DB) meets a multiplicity of hardware environments (development VM, QA server, public cloud, contributor's laptop, production cluster, customer data center). Do services and apps interact appropriately? Can I migrate smoothly and quickly?

An engine that enables any payload to be encapsulated as a lightweight, portable, self-sufficient container… that can be manipulated using standard operations and run consistently on virtually any hardware platform.
21. Why do containers matter?

Content agnostic
• Physical containers: the same container can hold almost any type of cargo.
• Docker: can encapsulate any payload and its dependencies.

Hardware agnostic
• Physical containers: standard shape and interface allow the same container to move from ship to train to semi-truck to warehouse to crane without being modified or opened.
• Docker: using operating system primitives (e.g. LXC), containers run consistently on virtually any hardware (VMs, bare metal, OpenStack, public IaaS, etc.) without modification.

Content isolation and interaction
• Physical containers: no worry about anvils crushing bananas; containers can be stacked and shipped together.
• Docker: resource, network, and content isolation; avoids dependency hell.

Automation
• Physical containers: standard interfaces make it easy to automate loading, unloading, moving, etc.
• Docker: standard operations to run, start, stop, commit, search, etc. Perfect for DevOps: CI, CD, autoscaling, hybrid clouds.
22. Why do containers matter? (continued)

Highly efficient
• Physical containers: no opening or modification, quick to move between waypoints.
• Docker: lightweight, virtually no performance or start-up penalty, quick to move and manipulate.

Separation of duties
• Physical containers: the shipper worries about the inside of the box, the carrier worries about the outside.
• Docker: developers worry about code; Ops worries about infrastructure.
24. For Developers
• Build once… run anywhere
• A clean, safe, hygienic, and portable runtime environment for your app
• No worries about missing dependencies, packages, and other pain points during subsequent deployments
• Run each app in its own isolated container, so you can run different versions of libraries and other dependencies for each app without conflicts
• Automate testing, integration, packaging… anything you can script
• Reduce or eliminate concerns about compatibility on different platforms, whether your own or your customers’
• Cheap, zero-penalty containers to deploy services? A VM without the overhead of a VM? Instant replay and reset of image snapshots? That’s the power of Docker
25. For Ops / DevOps
• Configure once… run anything
• Make the entire lifecycle more efficient, consistent, and repeatable
• Increase the quality of code produced by developers
• Eliminate inconsistencies between development, test, production, and customer environments
• Support segregation of duties
• Significantly improve the speed and reliability of continuous deployment and continuous integration systems
• Because containers are so lightweight, address significant performance, cost, deployment, and portability issues normally associated with VMs
29. Terminology
Image
A read-only layer used to build a container. Images do not change.
Container
A self-contained runtime environment built from one or more images. You can commit the changes made in a container to create a new image.
Hub / Registry
Public or private servers that act as repositories where people can upload images and share what they have made.
30. First Interaction
• xathrya@bluewyvern$ docker run -ti ubuntu:12.04 /bin/bash
• $ cat /etc/issue
Ubuntu 12.04
We run a container, attach to it in interactive mode, and execute a command inside it.
31. What does Docker actually do?
• Downloaded the image from the Hub / Registry
• Generated a new container
• Created a new file system
• Mounted a read/write layer
• Allocated a network interface
• Set up an IP address
• Set up NAT
• Executed a bash shell in the container
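Some of these steps can be observed after the fact with docker inspect; a sketch, assuming the Docker daemon is running and a container named web exists (the name here is hypothetical):
• $ docker inspect --format '{{ .NetworkSettings.IPAddress }}' web
This prints the IP address Docker allocated to the container (for example, something like 172.17.0.2 on the default bridge network).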
33. Let’s Try An App
• $ docker run -d -P training/webapp python app.py
• $ docker ps
You should see something like: 0.0.0.0:32768->5000/tcp
Go to your web browser and enter the URL: localhost:32768
Docker exposed port 5000 (the default Python Flask port) to our host on port 32768.
• $ docker run -d -p 8080:5000 training/webapp
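We can also verify the mapping from the command line instead of a browser; a sketch, assuming the container above is still running (the host port will differ per run):
• $ curl localhost:32768
This should return the webapp’s response (for training/webapp, a short “Hello world” page).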
36. Actually, two ways
◉ Update a container created from an image and commit the result as a new image
◉ Create a Dockerfile
If you have experience with Vagrant, it’s a similar concept.
A Dockerfile is a file that creates and configures a new image so it can be instantiated as a container.
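The first way can be sketched as a transcript (the image and container names here are hypothetical):
• $ docker run -ti --name mybox ubuntu:12.04 /bin/bash
• (inside the container) install or change whatever you need, then exit
• $ docker commit mybox xathrya/ubuntu-custom
• $ docker images
The committed image now appears in the local image list and can be run like any other image.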
44. A specially designated directory within one or more containers that bypasses the Union File System. Useful for persistent or shared data.
• Initialized when the container is created
• Can be shared and reused among containers
• Changes to a data volume are not included when you update the image
• Data volumes persist even if the container is deleted
45. • $ docker run -d -P --name web -v /webapp training/webapp python app.py
The host path is chosen automagically by the Docker engine.
• $ docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
Maps /src/webapp (host) to /opt/webapp (container).
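A volume can also be declared in a Dockerfile, so every container created from the image gets one; a minimal sketch (base image chosen only for illustration):
FROM training/webapp
VOLUME /opt/webapp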
49. Like Playing Lego
• Add the containers you need, like components, by their function.
• Every container follows the same uniform concept.
• Stack containers to create a complex system.
• No need to worry about details; focus on what you need.
• Need to change a component? Just change it.
• Upgrade a version?
• Roll back?
50. Stack: MySQL
• $ docker run -p 3900:3306 --name mysql -e MYSQL_ROOT_PASSWORD=toorsql -d mysql:latest
• $ mysql -u root -p -h 127.0.0.1 -P 3900
• mysql> CREATE USER 'php'@'%' IDENTIFIED BY 'pass';
• mysql> GRANT ALL PRIVILEGES ON *.* TO 'php'@'%' WITH GRANT OPTION;
• mysql> FLUSH PRIVILEGES;
51. Stack: Apache (Dockerfile)
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y apache2
RUN apt-get install -y php5 php5-common php5-cli php5-mysql php5-curl
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
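The whole stack can also be declared in one place with Docker Compose; a docker-compose.yml sketch, assuming the Apache Dockerfile above is in the current directory (the service names are my own):

web:
  build: .
  ports:
    - "8080:80"
  links:
    - mysql
mysql:
  image: mysql:latest
  environment:
    MYSQL_ROOT_PASSWORD: toorsql

Then $ docker-compose up -d starts both containers together, Lego-style.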
55. DevOps and Modern-Day Software Engineering
Meet Dave the developer and Oscar the operations engineer.
56. DevOps is
• Development + Operations
• A culture, movement, or practice that emphasizes the collaboration and communication of both software developers and other IT professionals while automating the process of software delivery and infrastructure changes
• An environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably
57. A Set of Toolchains
• Code – code development and review, continuous integration tools
• Build – version control tools, code merging, build status
• Test – testing tools; results determine performance and quality
• Package – artifact repository, application pre-deployment staging
• Release – change management, release approvals, release automation
• Configure – infrastructure configuration and management
• Monitor – application performance monitoring
58. To name a few
• Docker (containerization)
• Jenkins (continuous integration)
• Puppet (infrastructure as code)
• Vagrant (virtualization platform)
59. Effectiveness
To practice DevOps effectively, a software application has to meet a set of Architecturally Significant Requirements (ASRs):
• Deployability
• Modifiability
• Testability
• Monitorability
Most of the time, the microservice architectural style is the emerging standard for building continuously deployed systems.
60. Three Ways Principle
• Systems Thinking
• Amplify Feedback Loops
• Culture of Continual Experimentation and
Learning
62. Continuous Integration
• A practice of agile development
• A developer or a team of developers is given a subtask
• A large project might have multiple teams developing different tasks
• In the end, all tasks must be integrated to build the whole application
• CI forces devs to integrate their individual work with each other as early as possible
63. Continuous Delivery
• The step after integration: deliver to the next stage of the application delivery lifecycle
• The goal is to get the new features that devs created to QA and to production as soon as possible
• Not every integration should reach QA: only good ones, one at a time, in terms of functionality, stability, and other NFRs
• In essence: the practice of regularly delivering the application to QA and operations for validation and potential release to customers
64. Continuous Testing
• The process of executing automated tests
• Scope of testing: from validating bottom-up requirements and user stories to assessing system requirements associated with overarching business goals
• The object under test is provided by the previous phase
• Gives (fast) feedback to development regarding the level of business risk in the latest build
65. Continuous Monitoring
• Detect compliance and risk issues associated with an organization’s financial and operational environment
• Correct or replace weak or poorly designed controls
67. A Continuous Integration setup consists of:
• Running unit tests
• Compiling the service
• Building the Docker image that we run and deploy
• Pushing the final image to a Docker registry
The Docker registry might be a local repository:
https://docs.docker.com/registry/deploying/
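Strung together, the steps above might look like this in a build script (the image tag, registry address, and test command are placeholders):
• $ python -m unittest discover tests
• $ docker build -t localhost:5000/webapp:1.0 .
• $ docker push localhost:5000/webapp:1.0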
68. Deployment might depend on the infrastructure or cloud provider. A few cloud providers support Docker images:
• Amazon EC2 Container Service
• Digital Ocean
• Giant Swarm