Docker is a platform for developing, delivering, and running applications. It separates applications from infrastructure so that users can deliver software quickly. Docker provides tools and a platform to manage the lifecycle of containers: developing applications with containers, using containers as the unit for distributing and testing applications, and deploying applications to production environments as containers. Key benefits of Docker include speed, consistency, portability, and the ability to manage workloads dynamically.
Jenkins can be used to automate the software development code pipeline using a Domain Specific Language (DSL) scripted in Groovy, which provides flexibility. A Jenkins pipeline consists of three main parts - building and testing the software artifact, assuring quality, and orchestrating deployment to different environments like development, staging, and production. Quality assurance checks are run at each stage of the deployment process.
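The build/QA/deploy structure described above can be sketched as a small conceptual model: stages run in order, and a quality gate is evaluated after each one. This is an illustration in Python, not Jenkins' Groovy DSL, and all names here are made up for the example.

```python
# Conceptual sketch of a staged pipeline with a quality gate after each
# stage, mirroring the build -> QA -> deploy structure described above.

def run_pipeline(stages, quality_gate):
    """Run each (name, action) stage in order; stop at the first failed QA check."""
    completed = []
    for name, action in stages:
        result = action()
        if not quality_gate(name, result):
            return completed, f"failed at {name}"
        completed.append(name)
    return completed, "success"

stages = [
    ("build",      lambda: {"artifact": "app.jar"}),
    ("test",       lambda: {"passed": 42, "failed": 0}),
    ("staging",    lambda: {"deployed": True}),
    ("production", lambda: {"deployed": True}),
]

def quality_gate(name, result):
    # A stage passes QA if it produced no failures.
    return result.get("failed", 0) == 0

done, status = run_pipeline(stages, quality_gate)
print(done, status)
```

In a real Jenkins pipeline the stages would be `stage { ... }` blocks in a Jenkinsfile and the gates would be test reports, coverage thresholds, or manual approvals.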
The document discusses Terraform, an infrastructure as code tool. It covers installing Terraform, deploying infrastructure like EC2 instances using Terraform configuration files, destroying resources, and managing Terraform state. Key topics include authentication with AWS for Terraform, creating a basic EC2 instance, validating and applying configuration changes, and storing state locally versus remotely.
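The role of Terraform state can be illustrated with a toy plan computation: compare the desired configuration against the recorded state and derive create/destroy/update actions. This is a conceptual sketch only, not Terraform's implementation, and the resource addresses are invented for the example.

```python
# Toy illustration of what `terraform plan` conceptually does: diff the
# desired configuration against the recorded state.

def plan(config, state):
    """config and state map a resource address to an attribute dict."""
    actions = []
    for addr in sorted(set(config) | set(state)):
        if addr not in state:
            actions.append(("create", addr))      # in config, not yet real
        elif addr not in config:
            actions.append(("destroy", addr))     # real, no longer wanted
        elif config[addr] != state[addr]:
            actions.append(("update", addr))      # drifted or changed
    return actions

config = {"aws_instance.web": {"ami": "ami-123", "type": "t3.micro"}}
state  = {"aws_instance.web": {"ami": "ami-123", "type": "t2.micro"},
          "aws_instance.old": {"ami": "ami-999", "type": "t2.nano"}}
print(plan(config, state))
```

This is also why storing state remotely matters: whoever runs the plan needs the same state file, or the diff (and therefore the applied actions) will be wrong.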
The document provides an agenda for a workshop on RabbitMQ. It introduces RabbitMQ and its core concepts like exchanges, queues, bindings and message passing. It then outlines 5 exercises demonstrating key RabbitMQ patterns including hello world, work queues, publish/subscribe, routing and topics. Environmental setup using Docker is also covered.
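The topics pattern from the workshop's fifth exercise can be sketched without a broker: a topic exchange matches a dot-separated routing key against a binding pattern where `*` matches exactly one word and `#` matches zero or more. The broker does this matching internally; this standalone sketch just shows the rule.

```python
# Topic-exchange matching as RabbitMQ defines it: '*' matches exactly one
# dot-separated word, '#' matches zero or more words.

def topic_match(pattern, key):
    return _match(pattern.split("."), key.split("."))

def _match(pat, words):
    if not pat:
        return not words
    if pat[0] == "#":
        # '#' may absorb zero or more words; try every split point.
        return any(_match(pat[1:], words[i:]) for i in range(len(words) + 1))
    if not words:
        return False
    return (pat[0] == "*" or pat[0] == words[0]) and _match(pat[1:], words[1:])

print(topic_match("kern.*", "kern.critical"))           # True
print(topic_match("*.rabbit", "quick.orange.rabbit"))   # False: '*' is one word
print(topic_match("lazy.#", "lazy.pink.rabbit"))        # True
```

In the actual exercise you would declare the binding with a client library such as pika (`channel.queue_bind(exchange=..., queue=..., routing_key="kern.*")`) and let the broker route messages accordingly.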
Tempest is the OpenStack integration test suite. It uses the unittest and nose frameworks to run API calls against OpenStack services such as Nova, Glance, and Keystone, and to validate the responses. Tempest tests include smoke, positive, negative, stress, and white-box tests. It has a modular structure with common, services, and tests directories. Tempest plays an important role in OpenStack continuous integration by running against proposed code changes to check for regressions.
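The positive/negative test style can be sketched with plain unittest and a stub client standing in for a real OpenStack service. The class and method names below are illustrative, not Tempest's actual classes.

```python
# Sketch of positive vs. negative API tests in the unittest style Tempest
# builds on, using a stub client instead of a live Nova endpoint.
import unittest

class StubNovaClient:
    def __init__(self):
        self.servers = {}

    def create_server(self, name, flavor):
        if not name:
            raise ValueError("name must not be empty")
        self.servers[name] = {"flavor": flavor, "status": "ACTIVE"}
        return self.servers[name]

class ServerApiTest(unittest.TestCase):
    def setUp(self):
        self.client = StubNovaClient()

    def test_create_server_positive(self):
        # Positive test: a valid request succeeds and the response validates.
        server = self.client.create_server("web-1", "m1.small")
        self.assertEqual(server["status"], "ACTIVE")

    def test_create_server_negative(self):
        # Negative test: invalid input must be rejected.
        with self.assertRaises(ValueError):
            self.client.create_server("", "m1.small")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ServerApiTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

Real Tempest tests make the same kind of assertions against live service endpoints, which is what lets CI catch regressions in proposed changes.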
Terraform modules and some of best-practices - March 2019 | Anton Babenko
This document summarizes best practices for using Terraform modules. It discusses:
- Writing resource modules to version infrastructure instead of individual resources
- Using infrastructure modules to enforce tags, standards and preprocessors
- Calling modules in a 1-in-1 structure for smaller blast radii and dependencies
- Using Terragrunt for orchestration to call modules dynamically
- Working with Terraform code by using lists, JSONnet, and preparing for Terraform 0.12
Beyond static configuration management discusses how containerization and distributed configuration management are disrupting traditional system engineering. Key developments include specialized container-centric operating systems like CoreOS; container and orchestration tools like Docker, Mesos, and Kubernetes; and configuration stores like etcd, Consul, and ZooKeeper that enable dynamic configuration of distributed systems. The talk argues this represents an exciting transition period for development and operations.
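The dynamic-configuration idea behind stores like etcd, Consul, and ZooKeeper can be shown with a toy in-memory key-value store: clients register watches and are notified when a key changes, instead of re-reading static files. This is an illustration only; the real systems are distributed, persistent, and consistent.

```python
# Toy watch-based configuration store, illustrating the pattern that etcd,
# Consul, and ZooKeeper provide for dynamic configuration.

class ConfigStore:
    def __init__(self):
        self._data = {}
        self._watchers = {}

    def watch(self, key, callback):
        # Register a callback fired whenever `key` changes.
        self._watchers.setdefault(key, []).append(callback)

    def put(self, key, value):
        changed = self._data.get(key) != value
        self._data[key] = value
        if changed:
            for cb in self._watchers.get(key, []):
                cb(key, value)

    def get(self, key):
        return self._data.get(key)

seen = []
store = ConfigStore()
store.watch("service/web/backends", lambda k, v: seen.append(v))
store.put("service/web/backends", ["10.0.0.1:8080"])
store.put("service/web/backends", ["10.0.0.1:8080", "10.0.0.2:8080"])
print(seen)
```

A load balancer watching that key would reconfigure itself the moment a backend is added or removed, with no redeploy and no static config file edit.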
The document discusses Python virtual environments (virtualenv) and the pip package manager. It introduces virtualenv and pip, explains why they are useful tools for isolating Python environments and managing packages, and provides exercises for creating virtual environments, using pip to install/uninstall packages, creating your own pip packages, and sharing packages on PyPI. The goal is to help users understand and learn to use these tools in 90 minutes.
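One of the workshop's first exercises, creating an isolated environment, can be done with the standard-library `venv` module (the modern counterpart of the `virtualenv` tool discussed above). A minimal sketch:

```python
# Create an isolated Python environment with the stdlib `venv` module and
# locate its interpreter. Packages installed into this environment do not
# affect the system Python.
import os
import tempfile
import venv

env_dir = os.path.join(tempfile.mkdtemp(), "demo-env")
# with_pip=False keeps the demo fast; pass with_pip=True to get pip installed.
venv.EnvBuilder(with_pip=False).create(env_dir)

bin_dir = "Scripts" if os.name == "nt" else "bin"
exe = "python.exe" if os.name == "nt" else "python"
env_python = os.path.join(env_dir, bin_dir, exe)
print("environment interpreter exists:", os.path.exists(env_python))
```

From a shell the equivalent is `python -m venv demo-env`, after which activating the environment puts its interpreter and pip first on the PATH.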
Raffaele Rialdi discusses building plugins for Node.js that interface with .NET Core. He covers hosting the CoreCLR from C++, building a C++ V8 addon, and introduces xcore which allows calling .NET from JavaScript/TypeScript by analyzing metadata and optimizing performance. Examples show loading CLR, creating a .NET object, and calling methods from JavaScript using xcore. Potential use cases include Node.js apps, Electron apps, scripting Powershell, and Nativescript mobile apps.
Infrastructure testing with Jenkins, Puppet and Vagrant - Agile Testing Days ... | Carlos Sanchez
Extend Continuous Integration to automatically test your infrastructure.
Continuous Integration can be extended to test deployments and production environments in a Continuous Delivery cycle. Using infrastructure-as-code tools like Puppet, teams can manage multiple servers and their configurations, and test the infrastructure the same way continuous-integration tools test developers' code.
Puppet is an infrastructure-as-code tool that enables easy, automated provisioning of servers, defining the packages, configuration, services, and more in code. By enabling a DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration. Together with continuous-integration tools like Jenkins, Puppet is a key piece for achieving repeatability and continuous delivery: it automates the operations side during development, QA, and production, and enables testing of system configuration.
Using Vagrant, a command-line automation layer for VirtualBox, we can easily spin up virtual machines with the same configuration as production servers, run our test suite, and tear them down afterwards.
We will show how to set up automated testing of an application and associated infrastructure and configurations, creating on demand virtual machines for testing, as part of your continuous integration process.
I put this together both as internal presentation material and for anyone planning to use ECS Fargate, trying to make it as detailed and easy to follow as possible.
Deployment can admittedly be done more conveniently with CloudFormation and similar tools, but for company reasons, and because I learned about those technologies too late, they are not covered here.
Seven perilous pitfalls to avoid with Java | DevNation Tech Talk | Red Hat Developers
Developers and security: It’s a lot more than just turning on SSL. In this session we’re going to learn to think differently about designing and coding in Java so that the application is less open to being attacked and (bonus) is often of higher quality. This talk will cover seven types of development issues that can get your application into trouble. With code examples (of course), we’ll explore a series of common code pitfalls and explain how to design and code differently. There is much to learn when creating a secure application - take your first steps here.
This document discusses using Docker containers and Chef configuration management together. It begins by showing how to build Docker images that include Chef using Dockerfiles. It then explains how Chef can be used to configure containers during the image build process, essentially "baking" the configuration into the images. This allows immutable infrastructure where configured containers can be started without needing to rerun Chef provisioning. The document also discusses using multi-stage Dockerfiles and Chef runs to fully configure images. It briefly covers tools for deploying Docker containers, such as using Chef on EC2 instances or with OpenStack Heat orchestration.
The document discusses two serverless computing platforms that support Swift - OpenWhisk and Fn.
OpenWhisk is an open source system that is event-driven, containerized, and allows chaining of actions. It is hosted on Bluemix but can be difficult to deploy elsewhere. Fn is container-native and deploys functions as containers communicating via standard input/output. Both allow simple Swift functions to be deployed and called remotely with REST APIs or command line tools. The document provides examples of writing, deploying and calling functions on each platform.
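Fn's contract, a function is a process that reads its input from stdin and writes its result to stdout, can be demonstrated in a single self-contained file. The function body and the invoker below are illustrative; in Fn the function would be packaged in a container and the platform would do the invoking.

```python
# Sketch of the Fn-style stdin/stdout function contract: the "function" is
# a child process; the invoker writes the payload to its stdin and reads
# the result from its stdout.
import json
import subprocess
import sys

FUNCTION_SOURCE = r"""
import json, sys
payload = json.load(sys.stdin)          # read the invocation payload
name = payload.get("name", "world")
json.dump({"greeting": "Hello, " + name + "!"}, sys.stdout)
"""

def invoke(payload):
    # What a function server does per request: spawn, write stdin, read stdout.
    proc = subprocess.run(
        [sys.executable, "-c", FUNCTION_SOURCE],
        input=json.dumps(payload), capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

print(invoke({"name": "Swift"}))  # {'greeting': 'Hello, Swift!'}
```

Because the interface is just stdin/stdout, the same contract works for a Swift binary in a container exactly as it does for this Python stand-in.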
This document discusses using the continuous integration platform Drone to test Ansible configuration files. It provides steps to install Drone and Docker on an Ubuntu server and configure Drone to run builds and tests whenever code is pushed to a GitHub repository. An example .drone.yml file and drone_it script are also described that lint, deploy, and verify Ansible configurations through Serverspec tests.
The document discusses using Play Framework, Docker, CircleCI, and AWS together to create an automated microservices build pipeline. Key aspects include using GitHub for source control, CircleCI for continuous integration to build Docker images, pushing images to Docker Hub, and deploying to AWS using ECS for container orchestration. The author demonstrates setting up each part of the pipeline live.
Docker Swarm v1.0 provides resource management, advanced scheduling, and multi-host networking capabilities. It uses multiple discovery backends like etcd and consul for leader election and a key-value store. The scheduler uses filters and strategies to schedule containers across managers and followers using constraints, affinities, health checks, and dependencies. Resource management controls CPU, memory, ports, and more. Tutum provides Docker services for the cloud while UCP is Docker's on-premises clustering solution based on Swarm.
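The filter-then-strategy scheduling flow can be modeled in a few lines: filters eliminate nodes that fail constraints, then a strategy ranks the survivors. The field names and the "spread" strategy below are illustrative simplifications, not Swarm's API.

```python
# Toy model of Swarm-style scheduling: constraint filters first, then a
# placement strategy over the remaining candidates.

def schedule(nodes, container):
    # Filter step: enforce resource and label constraints.
    candidates = [
        n for n in nodes
        if n["free_mem"] >= container["mem"]
        and all(n["labels"].get(k) == v
                for k, v in container.get("constraints", {}).items())
    ]
    if not candidates:
        return None  # no node satisfies the constraints
    # Strategy step: "spread" places work on the least-loaded node.
    return min(candidates, key=lambda n: n["containers"])["name"]

nodes = [
    {"name": "node-1", "free_mem": 512,  "containers": 3, "labels": {"zone": "a"}},
    {"name": "node-2", "free_mem": 2048, "containers": 1, "labels": {"zone": "b"}},
    {"name": "node-3", "free_mem": 4096, "containers": 2, "labels": {"zone": "b"}},
]
print(schedule(nodes, {"mem": 1024, "constraints": {"zone": "b"}}))  # node-2
```

Swarm's real scheduler adds affinities, health checks, and dependency filters on top of this basic shape, and offers binpack and random strategies alongside spread.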
This document provides an introduction to running Docker containers with Mesos. It discusses Mesos' architecture and components like masters, slaves, frameworks and tasks. It also briefly outlines Mesos' benefits for running Docker at scale and provides a high-level overview of topics covered in the document, including the Mesos world, a tiny demo, and next steps like service abstraction and dynamic scaling.
Elixir/Phoenix releases and research about common deployment strategies | Michael Dimmitt
Phoenix is a web framework for the Elixir programming language which runs on the Erlang VM.
This talk covers Phoenix releases, which bundle your application together with the parts of the Erlang VM it needs into a single package, along with research about common deployment strategies.
Often this includes building the release into a Docker image and deploying on AWS.
One strategy not covered in the presentation: Gigalixir, another common deployment option.
This document provides an introduction to Docker presented by Adrian Otto. It defines Docker components like the Docker Engine (CLI and daemon), images, containers and registries. It explains how containers combine cgroups, namespaces and images. It demonstrates building images with Dockerfiles, committing container changes to new images, and the full container lifecycle. Finally, it lists Docker CLI commands and promises a demo of building/running containers.
Docker Security Deep Dive by Ying Li and David Lawrence | Docker, Inc.
Securing software supply chains and deployed systems is important. The document discusses using Docker Content Trust (DCT) to ensure authenticity, integrity, and freshness of container images. It also recommends validating dependencies, signing applications, scanning for vulnerabilities, and using features in Docker 1.12 like mutual TLS and certificate rotation to securely manage Docker clusters.
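The integrity property behind content trust can be shown with plain digest pinning: content is addressed by its hash, so any tampering is detectable. DCT itself builds on TUF/Notary with signing keys and freshness guarantees; this hashlib sketch illustrates only the digest-pinning idea.

```python
# Digest pinning: the core integrity idea behind content-addressed images.
# (Docker Content Trust adds signatures and freshness on top of this.)
import hashlib

def digest(blob: bytes) -> str:
    return "sha256:" + hashlib.sha256(blob).hexdigest()

image_blob = b'FROM alpine:3.19\nCMD ["echo", "hello"]\n'
pinned = digest(image_blob)  # what a signed tag would pin

def verify(blob, expected):
    return digest(blob) == expected

print(verify(image_blob, pinned))                  # True
print(verify(image_blob + b"#tampered", pinned))   # False
```

This is why pulling by digest (`image@sha256:...`) is stronger than pulling by tag: the tag can be repointed, but the digest cannot match altered content.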
Slides from Ansible Oxford meetup on 29th July 2015: Cows and Containers. How does Ansible play with Docker? How can we use Ansible to build, ship and run Docker containers?
1. The document discusses an upcoming meetup on Terraform 0.12. It provides an agenda that includes an overview of Terraform 0.12 features, examples of using Terraform 0.12, and a Q&A session.
2. The speaker, Anton Babenko, is introduced. He is described as a Terraform and AWS expert who contributes to open source Terraform projects.
3. New features in Terraform 0.12 discussed include first-class expressions, for expressions, dynamic blocks, generalized splat operators, conditional improvements, and references as first-class values. Backward compatibility and impacts to providers and modules are also covered.
CI and CD at Scale: Scaling Jenkins with Docker and Apache Mesos | Carlos Sanchez
In this presentation Carlos Sanchez will share his experience running Jenkins at scale, using Docker and Apache Mesos to create one of the biggest (if not the biggest) Jenkins clusters to date.
By taking advantage of Apache Mesos, the Jenkins platform is dynamically scaled to run jobs across hundreds of Jenkins masters, on Docker containers distributed across the Mesos cluster. Jenkins slaves are dynamically created based on load, using the Jenkins Mesos and Docker plugins, running in containers distributed across multiple hosts, and isolating job execution.
This presentation will allow a better understanding of Apache Mesos and the challenges of running Docker containerized and distributed applications, particularly JVM ones, by sharing a real world use case, including good and bad decisions and how they affected the development.
From Monolith to Docker Distributed Applications | Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed microservice architectures. But migrating an existing Java application to a distributed microservice architecture is no easy task, requiring a shift in the software development, networking, and storage to accommodate the new architecture. This presentation provides insights into the experience of the speaker and his colleagues in creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon and applicable to all types of applications, especially Java- and JVM-based ones.
The document summarizes the speaker's experience at ApacheCon NA 2011. It discusses several keynotes and sessions attended, including talks on building secure software, the success of Hadoop, Watson's use of Apache technologies, and new features in Lucene 4.0 like improved performance through UTF8 encoding and faster querying through new query types and indexing approaches. The document also mentions several projects that build user interfaces and experiences on top of Solr, like Prism, Blacklight, TwigKit, and Ajax Solr.
Introduction to Networking | Linux-Unix and System Administration | Docker an... | andega
Linux/Unix is an operating system that supports multitasking and multi-user functionality. It consists of a kernel, shell, and programs. Unix is widely used on servers, desktops, and embedded in other operating systems. Docker is a tool that allows users to package applications into containers that can run on any infrastructure. It provides a way to deploy applications easily and consistently from development to production. Docker uses a client-server architecture, with a Docker daemon managing containers and images based on requests from a Docker client.
The document provides an overview of the UNIX operating system, including its history of development at Bell Labs in the late 1960s, its key features like multitasking and multi-user capabilities that allow multiple users to run multiple applications simultaneously, and how it became widely adopted for its flexibility, portability, and ability to be extended through open source development. It also discusses the kernel and shell components of UNIX systems and the large library of applications that have been developed for UNIX over time across various implementations like Linux.
Linux and Java - Understanding and Troubleshooting | Jérôme Kehrli
Linux is an open-source operating system that powers many devices from supercomputers to smartphones. It uses a kernel developed by Linus Torvalds and combines with software from the GNU project to form a complete operating system. The Java Virtual Machine (JVM) allows Java programs to run on different platforms by executing Java bytecode. It uses just-in-time compilation to convert bytecode to native machine code. Both Linux and the JVM use memory management techniques like virtual memory and garbage collection to support multi-tasking of processes and applications.
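The garbage-collection idea mentioned above can be illustrated with a toy mark-and-sweep collector: objects reachable from the roots survive, everything else is reclaimed. Real JVM collectors are generational and far more sophisticated; this is only the reachability principle.

```python
# Toy mark-and-sweep: mark everything reachable from the roots, then sweep
# away whatever was never marked.

class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []        # outgoing references to other objects
        self.marked = False

def mark(roots):
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if not obj.marked:
            obj.marked = True
            stack.extend(obj.refs)

def sweep(heap):
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False      # reset marks for the next collection cycle
    return live

a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)              # a -> b; c is unreachable from the root set
mark([a])
survivors = sweep([a, b, c])
print([o.name for o in survivors])  # ['a', 'b']
```

In the JVM the roots are thread stacks, static fields, and JNI references rather than an explicit list, but the reachability rule is the same.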
Every time you switch on your computer, you see a screen where you can perform different activities: writing, browsing the internet, or watching a video. What is it that makes the computer hardware work like that? How does the processor on your computer know that you are asking it to play an MP3 file?
Well, it is the operating system or the kernel which does this work. A kernel is a program at the heart of any operating system that takes care of fundamental stuff, like letting hardware communicate with software.
So, to work on your computer you need an operating system (OS). In fact, you are using one as you read this on your computer. Now, you may have used popular OSes like Windows or Apple OS X, but here we look at what Linux is and what benefits it offers over other OS choices.
This document provides an introduction to UNIX/Linux operating systems. It discusses what an operating system is and its main functions. It then covers the history of UNIX, its general characteristics, and popular flavors including Linux. The document outlines the main parts of UNIX like the kernel, shell, and utilities. It compares Linux and Windows and describes UMBC's computing environment including graphical and command line interfaces. Finally, it lists some common programming tools available under Linux.
This document provides an introduction to the UNIX operating system. It discusses the history and development of UNIX, the key components of the UNIX system architecture including the kernel, shells/GUIs, and file system. It also outlines common UNIX commands and sessions, describing how to log in and out, change passwords, and view system information. The document is intended to explain the basic concepts and components of UNIX to new users.
History of Linux
Brain behind development
Why Linux
GNU
Why GNU ?
Where can you find Linux?
Linux is Best!!
Core components of Linux
File system
Drive letters
Security
Facts about Linux
Linux is an open-source operating system kernel created by Linus Torvalds. It can run on a variety of systems including servers, desktops, embedded devices, and more. Since its initial release in 1991, the Linux kernel has grown significantly with contributions from thousands of programmers. It is free to use, modify, and distribute, driving its widespread adoption for servers, embedded systems, and as an alternative to other proprietary operating systems.
This document provides an overview of the Linux operating system. It discusses that Linux is an open-source version of UNIX with a freely available source code. It then describes the three main components of Linux - the kernel, system libraries, and system utilities. It explains that the kernel executes in kernel mode for high performance, while other programs run in user mode. The document also includes sections on the architecture of Linux, its history and evolution, features like security and portability, and why Linux is commonly used. It contrasts Linux with UNIX and Windows operating systems.
The document discusses operating systems and provides details about several types of operating systems. It begins by defining an operating system as a collection of programs that provide services like disk, file, and device management to allow users and other programs to interact with a computer. It then provides information about graphical user interfaces, how operating systems manage hardware resources using drivers, and how they govern data input/output and task management. The document also discusses characteristics of different types of operating systems like real-time, single-user, multi-user, and network operating systems. Specific examples of operating systems are given like DOS, Windows, Mac OS, Linux, and UNIX.
This document discusses network operating systems. It begins by defining key concepts like systems, networks, and operating systems. It then introduces network operating systems, which allow users to access remote resources by logging into other machines or transferring files between computers. Example features of network operating systems are described like security, directory services, and file/print sharing. Specific network operating systems are also outlined, such as Novell NetWare, Linux, and Windows XP. The document concludes by summarizing the differences between a regular operating system and a network operating system.
This document provides an overview of the Linux operating system. It discusses that Linux is an open-source, multi-user operating system that can run on 32-bit or 64-bit hardware. It then describes some key features of Linux like portability, security, and its hierarchical file system. The document also outlines the architecture of Linux, including its hardware layer, kernel, shell, and utilities. It compares Linux to Unix and Windows, noting Linux is free while Unix is not and that Linux supports multi-tasking better than Windows. Finally, it lists some advantages like free/open-source nature and stability as well as disadvantages such as lack of standard edition and less gaming support.
This document provides an overview of Linux by discussing its history, key components, licensing, and how it differs from Windows. It covers:
- The origins and development of Linux from 1991 by Linus Torvalds, influenced by Minix and UNIX.
- The major components of a Linux distribution including the Linux kernel, GNU tools, desktop environments like KDE and GNOME, servers like Apache and Samba.
- Open source licensing models like GPL and LGPL that ensure freedom of use, modification, and redistribution of Linux software.
- Architectural differences like its ability to run on different hardware, support for multiple users, and the optional client-server X Window System.
- Popular distributions like
A simple PowerPoint presentation giving a brief, simplified introduction to the Linux operating system. It is based on the Ubuntu version of Linux.
The document provides an overview of the Unix operating system and its components. It discusses:
- Unix is a multi-user, multi-tasking operating system made up of a kernel, shell, and programs. The kernel manages hardware access and allocation of resources while the shell acts as an interface between the user and kernel.
- The history of Unix, which was first created in 1969 at Bell Labs. Key developments included it being rewritten in C in 1973 and the origins of Linux in 1991.
- The core components of Unix - the kernel, shell, utilities, and applications. The kernel handles processes and resources while the shell interprets commands. There are standard utilities and custom applications.
The document provides an introduction to UNIX and Linux operating systems. It discusses what an operating system is and its main tasks like controlling hardware, running applications, and managing files and data. It then covers the history of UNIX, its characteristics, parts like the kernel and shell, flavors including open source like Linux and proprietary like Solaris, interfaces, and programming tools available in Linux.
The document provides information about the UNIX operating system. It begins with an introduction to UNIX and defines an operating system. It then discusses key aspects of UNIX like allocating computer resources, built-in task scheduling, the history and development of UNIX over time by researchers at Bell Labs and the University of California, Berkeley. The document also covers different flavors of UNIX, including proprietary and open-source variations, and summarizes the core components and architecture of the UNIX operating system.
This document provides an introduction to UNIX/Linux operating systems. It discusses what an operating system is and its main functions. It then covers the history of UNIX, developed in the 1960s at Bell Labs. Characteristics of UNIX include being multi-user, multi-tasking, having a large number of free and commercial applications, and being less resource intensive than other operating systems. The document outlines the main parts of the UNIX OS and popular flavors including proprietary and open source versions like Linux. It also describes graphical and command line interfaces and provides an overview of UMBC's computing environment.
3. Networking for data engineers
Some reasons data engineers need to understand computer networking:
- Data engineers frequently access servers that may physically sit anywhere (in the cloud or on premises).
- Data engineers need to understand how communication between machines on a data platform happens, and how traffic enters and leaves the network.
- When building ETL pipelines, data engineers often work with infrastructure/network or SRE (DevOps) teams, so they need to understand computer networking terminology.
4. OSI Model
What is the OSI Model?
The Open Systems Interconnection (OSI) model describes the seven layers that computer systems use to communicate over a network. It was the first standard model for network communications, adopted by all major computer and telecommunications companies in the early 1980s. The modern internet is not based on OSI, but on the simpler TCP/IP model. However, the seven-layer OSI model is still widely used, because it helps visualize and explain how networks operate and helps isolate and troubleshoot networking problems. OSI was introduced in 1983 by representatives of the major computer and telecommunications companies, and was adopted by ISO as an international standard in 1984.
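The seven layers can be summarized in a small table; a sketch in Python (the layer names are standard, and the example protocols listed are the conventional textbook placements, some of which, such as TLS, are debatable):

```python
# The seven OSI layers, top to bottom, with conventional textbook
# examples of protocols or technologies at each layer.
OSI_LAYERS = {
    7: ("Application",  ["HTTP", "Telnet", "SSH", "DNS"]),
    6: ("Presentation", ["TLS", "JPEG"]),
    5: ("Session",      ["RPC", "NetBIOS"]),
    4: ("Transport",    ["TCP", "UDP"]),
    3: ("Network",      ["IP", "ICMP"]),
    2: ("Data link",    ["Ethernet", "ARP"]),
    1: ("Physical",     ["cables", "radio"]),
}

# Print the model top-down, the way it is usually drawn.
for number in sorted(OSI_LAYERS, reverse=True):
    name, examples = OSI_LAYERS[number]
    print(f"Layer {number}: {name:<12} e.g. {', '.join(examples)}")
```

Note how the tools discussed below map onto this model: ping works at the network layer (ICMP), while Telnet and SSH are application-layer protocols carried over TCP.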
5. Network tools
Ping:
A utility, commonly known as PING (short for Packet InterNet Groper), used to reach another computer on a TCP/IP network and check that a connection to it exists. The user runs the utility with a computer name or IP address, and it then sends a series of messages asking the remote computer to reply, producing a short report on whether the connection was reached. Most operating systems include a simple PING utility; many commercial and shareware versions are also available.
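As a hedged sketch of scripting a reachability check: real ping sends ICMP echo requests, which typically requires raw-socket privileges, so this example instead checks whether a host accepts TCP connections on a given port, a common scripted substitute (the function name is our own, not a standard tool):

```python
# Ping-like reachability probe. ICMP needs raw sockets (root), so this
# sketch checks TCP connectivity to a specific port instead.
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if `host` accepts a TCP connection on `port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `is_reachable("127.0.0.1", 22)` reports whether a local SSH server is listening.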
6. Telnet:
What is Telnet?
Telnet, developed in 1969, is a protocol that provides a command-line interface for communication with a remote device or server. It is sometimes used for remote management, and also for the initial setup of devices such as network hardware. Telnet stands for Teletype Network, but it can also be used as a verb: "to telnet" is to establish a connection using the Telnet protocol.

Is Telnet secure?
Because it was developed before the mainstream adoption of the internet, Telnet does not use any form of encryption, making it obsolete in terms of modern security. It has largely been superseded by the Secure Shell (SSH) protocol (which has its own security considerations around remote access), at least on the public internet; but for instances where Telnet is still used, there are a few methods for securing its communication.

How does Telnet work?
Telnet provides the user with an interactive, text-oriented communication system over a virtual terminal connection. User data is interspersed in-band with Telnet control information over the Transmission Control Protocol (TCP). Often, Telnet is used on a terminal to execute functions remotely. The user connects to the server using the Telnet protocol by entering a command that follows this syntax: telnet hostname port. The user then executes commands on the server by entering specific Telnet commands at the Telnet prompt, and ends the session and logs off from that same prompt.

What is Telnet used for?
Telnet can be used to test or troubleshoot remote web or mail servers, as well as for remote access to MUDs (multi-user dungeon games) and trusted internal networks.
7. SSH:
SSH, or Secure Shell, is a network communication protocol that enables two computers to communicate and share data (compare HTTP, the Hypertext Transfer Protocol, which is used to transfer hypertext such as web pages). An inherent feature of SSH is that the communication between the two computers is encrypted, which makes it suitable for use on insecure networks.
9. Unix
Unix is an operating system. It supports multi-user functionality, and it is widely used in many kinds of computing systems such as desktops, laptops, and servers. Unix offers a graphical user interface, similar to Windows, that supports easy navigation; still, you should know the Unix commands for cases where a GUI is not available, such as a telnet session.
There are several different versions of UNIX, but they share many similarities. The most popular varieties of UNIX systems are Sun Solaris, GNU/Linux, and Mac OS X. The UNIX operating system is made up of three parts: the kernel, the shell, and the programs.
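A minimal sketch of running classic Unix commands for the no-GUI case described above, here driven from Python's subprocess module (`uname` and `whoami` are standard utilities; the output varies by system):

```python
# Invoke classic Unix shell utilities from Python and capture their output.
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its trimmed standard output."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

kernel = run(["uname", "-s"])   # kernel name, e.g. "Linux" or "Darwin"
user = run(["whoami"])          # the current login name
print(f"Kernel: {kernel}, user: {user}")
```

The same two commands typed directly at a shell prompt produce the same output; the wrapper only illustrates that the shell is just one of several ways to invoke Unix programs.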
10. Linux:
Just like Windows, iOS, and Mac OS, Linux is an operating system. In fact, one of the most popular platforms on the planet, Android, is powered by the Linux operating system. An operating system is software that manages all of the hardware resources associated with your desktop or laptop. Simply put, the operating system manages the communication between your software and your hardware. Without the operating system (OS), the software would not function.
11. The Linux operating system comprises several distinct pieces:
1. Bootloader – The software that manages the boot process of your computer. For most users, this will simply be a splash screen that pops up and eventually goes away to boot into the operating system.
2. Kernel – The kernel is the core of the system and manages the CPU, memory, and peripheral devices. The kernel is the lowest level of the OS.
3. Init system – This is a sub-system that bootstraps the user space and is charged with controlling daemons. One of the most widely used init systems is systemd, which also happens to be one of the most controversial. It is the init system that manages the boot process, once the initial booting is handed over from the bootloader.
4. Daemons – These are background services (printing, sound, scheduling, etc.) that start up during boot or after you log into the desktop.
5. Graphical server – This is the sub-system that displays the graphics on your monitor. It is commonly referred to as the X server or just X.
6. Desktop environment – This is the piece that users actually interact with. There are many desktop environments to choose from (GNOME, Cinnamon, Mate, Pantheon, Enlightenment, KDE, Xfce, etc.). Each desktop environment includes built-in applications (such as file managers, configuration tools, web browsers, and games).
7. Applications – Desktop environments do not offer the full array of apps. Just like Windows and Mac OS, Linux offers thousands upon thousands of high-quality software titles that can be easily found and installed. Most modern Linux distributions (more on this below) include App Store-like tools that centralize and simplify application installation.
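The kernel layer described above can be observed from user space; a minimal sketch using only Python's standard library, mirroring what `uname` reports at a shell:

```python
# Probe the running kernel from user space via the stdlib platform module.
import platform

print(platform.system())    # kernel name, e.g. "Linux"
print(platform.release())   # kernel release string, e.g. "6.8.0-45-generic"
print(platform.machine())   # hardware architecture, e.g. "x86_64"
```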
13. Docker Basics
Docker is an open platform for developing, shipping, and running applications. Docker makes it possible to separate applications from infrastructure so that users can deliver software quickly. With Docker, users can manage their infrastructure in the same way they manage their applications, as images. By using Docker's methodology for shipping, testing, and deploying code quickly, users can significantly reduce the delay between writing code and running it in production.
14. The Docker platform
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow users to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so users do not need to rely on what is currently installed on the host. Users can easily share containers, and everyone they share with gets the same container that works in the same way.
Docker provides tooling and a platform to manage the lifecycle of user containers:
• Develop the application and its supporting components using containers.
• The container becomes the unit for distributing and testing the application.
• When ready, deploy the application into the production environment, as a container. This works the same whether the production environment is a local data center, a cloud provider, or a hybrid of the two.
16. Why use Docker?
Fast, consistent delivery of applications. Docker streamlines the development lifecycle by allowing developers to work in standardized environments, using local containers that provide their applications and services. Containers are great for continuous integration and continuous delivery (CI/CD) workflows.
Consider the following example:
• Developers write code locally and share their work with their colleagues using Docker containers.
• They use Docker to push their applications into a test environment and execute automated and manual tests.
• When developers find bugs, they can fix them in the development environment and redeploy them to the test environment for testing and validation.
• When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the production environment.
17. Responsive deployment and scaling
Docker's container-based platform allows for highly portable workloads that are lightweight to manage. Docker containers can run on a developer's local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of environments. Docker's portability and lightweight nature also make it easy to dynamically manage workloads, applications, and other services.
Running more workloads on the same hardware
Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so users can use more of their compute capacity. Docker is well suited to high-density environments and to small and medium deployments where users need to do more with fewer resources.
18. Docker architecture
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing the user's Docker containers. The Docker client and daemon can run on the same system, or the user can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker Compose, which lets users work with applications consisting of a set of containers.
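A hedged sketch of this client/daemon pattern over a UNIX domain socket, the default local transport between the docker client and dockerd. The socket path and the request/reply "protocol" below are throwaway illustrations, not the real Docker API (which is REST over /var/run/docker.sock):

```python
# Toy client/daemon exchange over a UNIX domain socket, mimicking the
# shape of docker-CLI-to-dockerd communication.
import os
import socket
import tempfile
import threading

def send_command(command: bytes) -> bytes:
    """Start a one-shot 'daemon', send it a command, return its reply."""
    path = os.path.join(tempfile.mkdtemp(), "demo.sock")
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(path)
    server.listen(1)

    def daemon():
        conn, _ = server.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"OK: " + request)   # the daemon does the work

    worker = threading.Thread(target=daemon)
    worker.start()
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
        client.connect(path)                  # like `docker ps` contacting dockerd
        client.sendall(command)
        reply = client.recv(1024)
    worker.join()
    server.close()
    os.unlink(path)
    return reply

print(send_command(b"ps"))
```

The split matters because the client is just a thin messenger: the daemon holds the state (images, containers, networks), which is why a client can equally well point at a remote daemon.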
19. Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
Docker client
The Docker client (docker) is the primary way users interact with Docker. When a user runs a command such as docker run, the client sends the command to dockerd, which carries it out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
Docker Desktop
Docker Desktop is an easy-to-install application for Mac or Windows that enables users to build and share containerized applications and microservices. Docker Desktop includes the Docker daemon (dockerd), the Docker client (docker), Docker Compose, Docker Content Trust, and Kubernetes.
Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. Users can even run their own private registry. When a user runs the docker pull or docker run commands, the required images are pulled from the configured registry. When a user runs the docker push command, the image is pushed to the configured registry.
20. Docker objects
When using Docker, users create and use images, containers, networks, volumes, plugins, and other objects.
Images
An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. Users can create their own images, or use images created by others and published in a registry. To build an image, the user creates a Dockerfile with a simple syntax that defines the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only the layers that have changed are rebuilt. This is part of what makes images so lightweight, small, and fast compared to other virtualization technologies.
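A minimal hypothetical Dockerfile illustrating the layering just described (the base image, file names, and start command are assumptions for the example, not taken from the slides):

```dockerfile
# Each instruction below produces one layer in the resulting image.
# Base image, file names, and start command are illustrative.
FROM python:3.12-slim
# Set the working directory inside the image.
WORKDIR /app
# Copy the dependency list first so this layer stays cached until it changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code; editing code only rebuilds from this layer on.
COPY . .
# Default command when a container is started from the image.
CMD ["python", "app.py"]
```

Built with, for example, `docker build -t myapp .`; after a code-only change, rebuilding reuses the cached dependency layers and only rebuilds from the second COPY onward, which is the layer-caching behavior the slide describes.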
21. Containers
A container is a runnable instance of an image. Users can create, start, stop, move, or delete a container using the Docker API or CLI. Users can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state. A container is relatively well isolated from other containers and from its host machine. Users can control how isolated a container's network, storage, or other underlying sub-systems are from other containers or from the host machine. A container is defined by its image as well as any configuration options the user provides when creating or starting it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.