Docker Containers for Wireless Networks Explained
Clint Smith, P.E.
csmith@nextgconnect.com
Abstract:
Containers have many advantages that help enable the delivery of services for wireless operators in 4G and, soon, 5G environments. This paper explains what a container is and explores its role in 4G and 5G wireless environments to improve edge services.
Containers are making significant headway into the wireless telecom space, for valid reasons. Containers are not new; they have been used in the IT domain for quite some time, and they offer many unique and important advantages that can help with service delivery in 4G and 5G. Containers are becoming an essential part of the overall edge network solution for 4G, 5G and beyond.
So, what is a container?
Containers enable you to separate your applications from the underlying infrastructure, making them infrastructure agnostic. Containers utilize virtual environments, and there are several different types of containers that can be used. A service provider, enterprise, or utility using containers can deploy, replicate, move, and even back up processes and workloads faster and more efficiently than when utilizing virtual machines (VMs) alone.
One of the more prolific container platforms that utilize a virtualized environment is Docker, which is based on the Linux operating system. Containers using Docker run in a virtual environment, as in NFV, but without the need for a hypervisor. Instead of utilizing a hypervisor, the container uses the host machine's kernel.
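As a quick illustration (a minimal sketch, assuming a Linux host with Docker installed and the public alpine image), a container reports the host's kernel version because it shares that kernel rather than booting its own:

  # Print the kernel release on the host
  uname -r
  # Print the kernel release inside a throwaway Alpine container;
  # the output matches the host because the kernel is shared
  docker run --rm alpine uname -r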
Containers have many attributes, some of which are:
• Lightweight: containers include only what is needed and share the host kernel.
• Scalable: using an orchestrator, you can automatically replicate containers (see the sketch after this list).
• Stackable: services can be stacked, allowing applications to be composed of multiple containers.
• Portable: containers can be run locally or on a cloud.
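For example (a hedged sketch, assuming a host running Docker in Swarm mode and a hypothetical image name mmeapp:1.0), an orchestrator can replicate a containerized service automatically:

  # Create a service backed by three identical container replicas
  docker service create --name mme --replicas 3 mmeapp:1.0
  # Scale the service up as traffic grows
  docker service scale mme=10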
One of the strengths of containers is that they allow applications to be distributed, making them more efficient. The individual portions of a distributed application are called services. For example, if you provide a video game application, it will likely be made up of many components. The video game application can include a service for the user interface, a service for online collaboration, a service for storing application data in a database, and so forth. Previously these components of the application would all sit in one vertical, bound together. However, by leveraging the distributed application approach using containers, the application can operate more efficiently with fewer resources being consumed.
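To make this concrete (a hedged sketch; game-ui and game-collab are hypothetical image names, postgres is the public database image), each of the game's services could run as an independently deployable container on a shared network:

  # A private network over which the game's services communicate
  docker network create gamenet
  # Each service runs as its own container
  docker run -d --name db --network gamenet -e POSTGRES_PASSWORD=secret postgres:16
  docker run -d --name collab --network gamenet game-collab:1.0
  docker run -d --name ui --network gamenet -p 80:8080 game-ui:1.0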
Containers, in short, are individual instances of an application. You can have one image of an application from which many instances of that application run. This allows multiple containers to use the same image at the same time, improving efficiency. Also, every container is isolated from the others, because each has its own software, libraries and configuration files needed to run the application itself. Because containers are created from images, the image contains specific content detailing exactly how the container and its application are supposed to function and behave.
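For instance (a minimal sketch, with mygame:1.0 as a hypothetical image name), several isolated containers can be started from the same image:

  # Three containers, all instances of the same image
  docker run -d --name inst1 mygame:1.0
  docker run -d --name inst2 mygame:1.0
  docker run -d --name inst3 mygame:1.0
  # Each appears as a separate, isolated instance
  docker ps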
So why not just use a virtual machine (VM) instead of a container, or a container in place of a VM? The answer lies in what you are solving.
Both containers and VMs are part of a wireless telecom solution for delivering services, because each has unique characteristics that favor specific solutions. Containers as well as VMs are an essential part of the evolving solution for edge devices.
VMs are part of a Network Function Virtualization (NFV) network. NFV allows an operator to virtualize functions like a Packet Gateway (PGW) or Mobility Management Entity (MME). Depending on the resources, an NFV deployment can have several VMs running on the same physical hardware. A VM, in short, is a virtual server that emulates a hardware server. A VM, however, relies on the physical hardware it is running on. The requirements for the VM typically include a separate and specific operating system (OS) as well as hardware dependencies.
An example of a VM would be an MME in 4G LTE. A VM in this case is a separate instance of the MME itself. The advantage here is that you can scale the number of VM MMEs based on traffic loads and other requirements, in software, through the hypervisor. VMs for the other components that make up a 4G or 5G network can run on the same server or across different servers. The hypervisor in an NFV environment controls which VMs are spun up and down, depending on the rules or policies it has been instructed to follow. Therefore, in an NFV environment, VMs are managed by rules for how they are allocated.
Containers, on the other hand, while virtual themselves, do not control the underlying hardware. Instead, containers virtualize the OS of the host. Containers can also be set up to perform the same functions as a VM. For instance, a container can be created that has the functionality of an MME. The containerized MME could have all the functionality of an MME or a subset of a full MME, depending on what needs to be solved. However, containers are best used for delivering services in a virtual distributed network (vDN). Containers in a vDN can deliver services to edge devices with low latency, facilitating mobile edge computing and other requirements.
Both containers and VMs have similar resource isolation and allocation schemes. However, containers virtualize the operating system, while VMs virtualize the hardware.
To help show the power of Docker containers for applications, a simple diagram is presented in Figure 1, showing the difference in the stacks between an NFV and a Docker environment.
Figure 1: Virtual and Docker comparison
The NFV and Docker platforms have some items in common. Specifically, the server is the entity used to host multiple VMs or containers, and the Host OS is the OS for the server, which is either Linux or Windows. Between NFV and Docker, however, the differences begin above the Host OS layer.
With an NFV platform there are multiple instances of operating systems: the Guest OS resides on top of the hypervisor, which in turn sits above the host OS. In Docker, by contrast, the host OS is shared among the containers. Each application in an NFV network uses its own Guest OS, unique to that application. With Docker, however, the application is a container that runs on the shared OS.
In addition, the Guest OS in an NFV is a full implementation of the OS that the VM uses. With NFV the VM could run Linux or Microsoft Windows, meaning that every VM that is spun up also carries a separate OS, consuming valuable resources and time. With Docker, however, only the resources required to run the application are needed, since the container bundles just the libraries the application depends on and shares the host OS. Containers can therefore be spun up faster and consume fewer resources than VMs.
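The difference in spin-up time is easy to observe (a hedged sketch, assuming Docker and the public alpine image are available); a container's full create-run-remove cycle typically completes in about a second, versus the minutes a VM may need to boot its guest OS:

  # Time a complete container lifecycle: create, run, and remove
  time docker run --rm alpine true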
[Figure 1 shows the two stacks side by side. NFV: Server Infrastructure, Host OS, Hypervisor, then a Guest OS and an App for each of VM1, VM2 and VM3. Docker: Server Infrastructure, Host OS, Docker Engine, then an App for each of Container 1, Container 2 and Container 3.]
However, you can also run containers within a VM. Figure 2 shows an example of containers within a VM environment. For instance, if you had both Linux and Microsoft Windows applications, they could not share the same host OS directly, due to kernel incompatibility; running containers inside separate VMs addresses this. Obviously, capacity is sacrificed for flexibility in Figure 2.
Figure 2: Docker
Therefore, whether to use a VM or a container should be decided based on what needs to be solved. Using both containers and VMs in a deployment can be helpful, because each has unique attributes that should be leveraged based on the design criteria.
The focus of this article is containers; however, it was necessary to review, and attempt to put into context, some high-level conceptual points related to virtual environments.
The container platform most applicable to wireless networks is Docker. Docker is an open platform that uses containers for creating, delivering and running applications in a very efficient and flexible way. Docker, through the use of containers, enables you to separate your applications from the infrastructure, making them more portable and quicker to spin up when needed.
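In practice (a hedged sketch; the image name and registry address are hypothetical), an image built once can be shipped through a registry and run unchanged on any Docker host:

  # Build an image from the application's Dockerfile
  docker build -t registry.example.com/mmeapp:1.0 .
  # Push it to a registry...
  docker push registry.example.com/mmeapp:1.0
  # ...and run it on any other Docker host, on premises or in a cloud
  docker run -d registry.example.com/mmeapp:1.0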
Containers are considered lightweight because they need only the host machine's kernel and not a hypervisor, meaning fewer computing resources are required. Because containers are lightweight, many containers can be run with the computing resources required for a single VM.
Recapping: containers virtualize the operating system, while VMs virtualize the hardware.
Therefore, the strength of containers is that they provide a common method to run your code or application, making it hardware agnostic and more efficient.
The reason Docker containers (DCs) are more efficient for running applications is that they decouple the application from the operating system. Because a DC decouples the application from the operating system, it enables a more efficient deployment of the application.
[Figure 2 shows an NFV environment hosting Docker inside the VMs: Server Infrastructure, Host OS and Hypervisor, then for each of VM1, VM2 and VM3 a Guest OS, a Docker Engine, and two containers (Container 1 and Container 2), each running an App.]
In other words, a user utilizing a DC can run an application with only the resources it needs, instead of having additional resources assigned that it does not require to function (the bloat).
A DC is designed to be isolated from other DCs by running the application entirely within itself. A key attribute of a DC is that it shares the kernel of the host operating system (OS). Because the DC shares the kernel, the container can run in a host of environments without worry about software dependencies, because everything the application needs is contained in the container.
However, the Docker platform performs more functions than just running containers. The Docker platform, written in Go, enables the creation and use of images, networks, plug-ins, and a host of other items. Using the Docker platform, wireless operators, enterprises, and utilities can easily create applications and ship them in containers that can be deployed anywhere.
Figure 3: Docker Environment
However, what exactly is a container? Specifically, a container is a runtime environment that uses an image: a package containing all the items the application needs to function, such as config files, code, and libraries.
Figure 3 represents a Docker environment and shows more than one container along with several other components. Docker has multiple components, each with a specific function to perform.
The main components of Docker are:
• Docker Client
• Docker Server (engine), or DE
• Docker Containers (DC)
• Images (stored in public or private repositories)
• Volumes (these hold the data)
• Networking
Docker uses a client-server architecture, where the server receives commands from a client over a socket or a network interface (when the server is remote) to run and interact with a container. The Docker client program can be remote, or it can reside on the same host as the server.
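As an illustrative sketch of this client-server split (the remote host name and port are placeholders, not from this article):
    docker info                                 # talks to the local daemon over /var/run/docker.sock
    docker -H tcp://remote-host:2375 info       # talks to a remote daemon over the network
    export DOCKER_HOST=tcp://remote-host:2375   # or select the target daemon via an environment variable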
A Docker container is unique in that it does not use more resources (CPU, memory, and so on) than it needs. Think of it as running the minimum operating system necessary for the application. What controls the resources used by the container is the kernel.
At the heart of a DC is the kernel, and the kernel manages the DC. In essence, the same kernel is shared across all containers in the same hardware environment.
So, what is a kernel?
[Figure 3 diagram: a Docker client connected over a socket to the Docker server (engine), which runs Docker containers backed by a volume, memory, and a repository of images.]
A kernel is a key component that any software platform and application needs in order to operate. A kernel does many things; the following are its more relevant attributes here:
• Defines the libraries and other attributes of a program/application.
• Controls the underlying hardware by allocating computing resources such as memory, CPU, and network interfaces.
• Starts and schedules programs.
• Controls and organizes storage.
• Facilitates messages being passed between programs/applications.
Kernels not only enable the running of containers in a Docker environment; they also restrict capabilities to just what is needed, making the container lightweight, portable, and secure. For example, not all the libraries for a particular OS are included with the application in a container. A kernel can also be used to restrict who has access or what is allowed to be changed.
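As an illustration of that restriction, Docker exposes the kernel's capability mechanism directly; a minimal sketch, with the image name purely illustrative:
    # drop every kernel capability, then add back only the one the app needs
    docker run --cap-drop ALL --cap-add NET_BIND_SERVICE nginx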
The Docker Engine (DE) is an essential element of the Docker system that enables applications to be deployed and run within containers. The DE is a client-server application whose server side is a daemon called dockerd. The DE exposes a REST API, which the CLI uses to interact with the other Docker elements. The DE components are shown in Figure 4, which highlights some of the interactions between the client, host, and registry.
Figure 4: Docker Engine components
In Figure 4, three distinct functions are shown: (1) the build process flow, (2) the pull process, and (3) run. Each is represented in Figure 4 by a different line format showing how it relates to the client, the Docker host (where the daemon manages containers and images), and the registry (which stores images).
The Docker platform has many components, as you would suspect. Referring to Figure 4, Docker components can be thought of in three layers: software, objects, and registries (SOR). Tools like Swarm and Kubernetes are used to help manage the SOR.
The three SOR layers are:
1. Software: This involves two main software functions.
• Daemon – used by the Docker Engine to manage and handle container objects (networks and volumes).
• Client – uses the CLI to interact with the daemon. The client is the primary method for interacting with Docker via the Docker API and can communicate with more than one Docker daemon.
a. LCOW (Linux Containers on Windows) – allows you to run Linux containers on a Windows host.
b. WCOW (Windows Containers on Windows) – allows you to run Windows containers on a Windows host.
c. Docker Desktop for Mac – allows you to run Docker containers on macOS.
2. Objects: There are three types of objects in Docker: images, containers, and services.
• Image: This is what is used to store and then ship applications. In short, the image is used to build the container and is read-only.
• Container: This runs the application and is the execution layer for Docker. Containers are images with a writable layer added. Several containers can be linked together for a tiered application.
• Services: This is a series of containers functioning in a distributed fashion and is how you scale containers across multiple Docker daemons. You can also define the number of replicas allowed for any instance at any given time. The service is load-balanced across all nodes.
3. Registries: This is a repository for Docker images and can be public or private. Images can be pulled from or pushed to the registry by the Docker client.
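As a sketch of that interaction (the private registry host and repository names are placeholders):
    docker pull nginx:latest                                      # pull an image from the default public registry (Docker Hub)
    docker tag nginx:latest registry.example.com/team/nginx:1.0   # re-tag it for a private registry
    docker push registry.example.com/team/nginx:1.0               # push it to that registry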
A container, as mentioned previously, is a runtime implementation of an image. Many containers can be started from one image, which is a key attribute for service delivery to the edge of the network. Another key attribute is that a container starts and stops much faster than a VM and consumes fewer resources.
A container is what comes to mind when Docker is mentioned; however, it is only one part. In essence a container is a runnable instance of a particular image. In Docker, a container is isolated from other containers, and the degree of isolation is determined by how you configure it. Therefore, a container is basically defined by its image and the associated configuration.
A container’s life cycle is shown in figure 5.
Figure 5: Container Life
Looking at Figure 5, a container's life begins when it is created. What is stored in the repository is really the image; it becomes a container when it is run. During its life a container can be run, stopped, paused, and killed over and over again. However, as with all software/applications, it too has a life span, and changes are needed over time for a host of reasons.
To make a container from an image, the Docker Engine takes the image from the repository, adds a read-write filesystem on top of it, and initializes the various settings, including network ports, container name, ID, and resource limits.
[Figure 5 diagram: the container lifecycle states (created, running, paused, stopped) and the transitions between them (run, pause, unpause, stop, restart, kill).]
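The lifecycle in Figure 5 maps onto Docker CLI commands; a minimal sketch, with the container name purely illustrative:
    docker create --name web nginx   # Created: settings and the read-write layer exist, nothing runs yet
    docker start web                 # Running
    docker pause web                 # Paused: the container's processes are frozen
    docker unpause web               # Running again
    docker restart web               # Restarted
    docker kill web                  # Killed immediately, with no graceful shutdown
    docker rm web                    # removed; the data in its read-write layer is gone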
Several kernel features also help enable containers and their functionality: namespaces, control groups, and UnionFS.
• Namespaces are used by Docker to provide isolated workspaces, and that isolated workspace is the container itself. Each container runs in its own namespaces, and the container's access is limited to those namespaces. Using namespaces, a container can have its own unique network interfaces, IP address, and so on. Processes running inside a container's namespace have no privileges outside that namespace.
• Control groups, also called cgroups, limit the resources an application is allowed to use. cgroups also allow the Docker Engine to share hardware resources between containers. For instance, through cgroups you can limit the amount of CPU or memory a particular container can utilize, keeping one container from impacting another through resource consumption (see the sketch after this list).
• Union File System (UnionFS) is the file system structure used by a DC. There are multiple file system options depending on the application being run, such as the virtual file system (vfs) and the B-tree file system (btrfs).
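As promised above, a minimal sketch of cgroup-backed limits set at run time (the values and image name are illustrative):
    # cap the container at half a CPU core and 256 MB of RAM via cgroups
    docker run -d --name capped --cpus=0.5 --memory=256m nginx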
DCs utilize copy-on-write (COW) as a method of improving the efficiency of file sharing and copying for the application running in the container. Images used to create containers are constructed in layers, and COW reuses files already stored in lower layers rather than duplicating them in the writable layer. If a file or directory exists in a lower layer of the image and another layer needs read access to it, COW maps the request to the existing file. Only if the file needs to be modified is it copied up into that layer and changed.
Images are used in software all the time, and Docker uses them as well. Images are read-only sets of instructions used to create containers in Docker. What is interesting is that an image can be nested in, or based on, another image. Images are built in Docker using a dockerfile; each layer of the image is created and defined by a dockerfile instruction. If you need to change or update the image, only the layer that changed needs to be rebuilt. This way of changing and modifying images is an advantage Docker brings to applications and also helps keep them lightweight.
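You can inspect this layered construction for any local image; for instance (image name illustrative):
    docker history nginx:latest   # lists each layer, the instruction that created it, and its size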
With Docker there are several ways to modify an image, which contains the application. Figure 6 shows the interactions an image can have with regard to changing it. The three primary methods for modifying an image are (method 3 is sketched after Figure 6):
1. Pull from the repository
2. Dockerfile
3. Commit a container that has been modified
Figure 6: Docker Image
[Figure 6 diagram: a container is created from an image with docker run and can be committed back to an image with docker commit; images are pushed to and pulled from the Docker repository with docker push and docker pull; and docker build creates an image from a dockerfile, a text file of statements edited in a text editor and contained in a Docker directory.]
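A minimal sketch of method 3, committing a modified container back to a new image (all names illustrative):
    docker run -it --name base ubuntu bash   # start a container and make changes inside it
    # ... inside the container: install packages, edit files, then exit ...
    docker commit base myrepo/custom:1.0     # snapshot the container's read-write layer as a new image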
What goes into a container and its environment is determined by the dockerfile. A dockerfile is a text file that defines a Docker image and is essentially the manifest for the container. Everything the application needs to function is defined in the dockerfile, which allows the application to be ported to other locations and run identically. Specifically, a dockerfile defines how the application interfaces to the outside world, what is needed to run the application, any virtual resources needed within the container, network interfaces, and external drives and/or volumes.
If you open a dockerfile with an editor, it includes the following directives:
• FROM – the base image, which provides the portable runtime environment
• WORKDIR – the working directory in the container for the application
• COPY – copies the contents of the current directory into the container
• RUN – installs the packages the app needs, for example those listed in a text file called requirements.txt
• EXPOSE – exposes a port (or ports) for the container
• ENV – names or defines an environment variable
• CMD – the command the container runs when it launches, e.g. app.py
Therefore, when the dockerfile is built into an image, the image is made by copying the contents of the current directory, which includes both the app defined in CMD and the requirements installed by RUN. The output of the app is available on the port defined in EXPOSE.
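Putting those directives together, an illustrative dockerfile for a small Python web app might look like the following; the base image, file names, port, and variable are assumptions for this sketch, not taken from the article:
    FROM python:3.7-slim                    # base image: the portable runtime environment
    WORKDIR /app                            # working directory inside the container
    COPY . /app                             # copy the current directory's contents into the container
    RUN pip install -r requirements.txt     # install the packages the app needs
    EXPOSE 80                               # port on which the app's output is available
    ENV NAME World                          # define an environment variable
    CMD ["python", "app.py"]                # command run when the container launches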
When creating an image there is also a YAML file, with the extension .yml. The YAML file is not part of the dockerfile but a separate file included when building and deploying the image. The YAML file is typically called docker-compose.yml and provides the instructions for running and scaling the containers and services.
The main functions docker-compose.yml instructs the DC to perform include (see the sketch after this list):
• Defining how many replications (replicas) of the image are allowed
• Limiting the type and amount of resources used
• Restarting the container if it fails
• Mapping ports
• Others
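An illustrative docker-compose.yml covering those functions, written in the swarm-mode (docker stack deploy) style; the image name, replica count, and limits are placeholders:
    version: "3"
    services:
      web:
        image: myrepo/myapp:1.0       # the image to run
        deploy:
          replicas: 5                 # how many replicas of the image are allowed
          resources:
            limits:
              cpus: "0.1"             # limit the type and amount of resources used
              memory: 50M
          restart_policy:
            condition: on-failure     # restart the container if it fails
        ports:
          - "4000:80"                 # map host port 4000 to container port 80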
So, what is the difference between an image and a container? There are several differences; however, the one that stands out is that a container has a writable top layer while an image does not. All data that is read or modified is stored in this read-write layer. When the container is killed or deleted, the data in the read-write layer is deleted with it. However, the image that was used to create the container is unchanged from its initial state, regardless of what data was collected.
With applications it's all about the data. What you do with the data created depends on the type of data involved, which is either stateless or stateful.
• Stateless is where you do not save or store any of the data collected or created. If the container restarts, everything is lost.
• Stateful is where the application reads and writes data directly to and from a database. When the container restarts or stops, the data written is still available.
Docker volumes are where the data used and/or created by the container application is stored. Utilizing volumes is the preferred method for handling data generated and used by DCs. A volume managed by Docker is different from a bind mount: a bind mount depends on the host machine's directory structure, whereas volumes do not depend on the host machine. Typically, volumes are external to the container.
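The difference shows up directly on the command line; a sketch with illustrative paths and names:
    docker run -v /srv/app/config:/etc/app nginx   # bind mount: tied to this host's directory layout
    docker volume create app-data                  # a volume managed by Docker, independent of the host layout
    docker run -v app-data:/var/lib/data nginx     # mount the managed volume into the container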
To the application, a volume acts like a file folder to which it can read and write data. This use of volumes is meant to ensure that the container remains immutable. However, if allowed, a container can also store data it creates in local memory, although this data is not persistent.
Figure 7 depicts several different methods for storing or reading data with containers.
Figure 7: Docker Volumes
The data that the application generates and uses is very important. The ability to read and write data to and from external devices increases the application's functionality and commercial viability.
When a container runs an image, the container has its own writable container layer, where all changes and data are stored. Multiple containers can and do share access to the same image, but each has its own data state, making each unique.
Some of the advantages volumes have in a Docker environment are:
• They work on both Linux and Windows containers.
• They can be shared among multiple containers.
• They can be stored on remote hosts or with a cloud provider.
• They can have their content pre-populated by a container.
There are several ways to create and manage Docker volumes, each with its own advantages and disadvantages. The methods to create a Docker volume are (sketched after this list):
- Let Docker do it
- Define a specific directory (mount)
- Use the dockerfile
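Minimal sketches of the three methods (names illustrative):
    docker run -v /data nginx            # 1. let Docker do it: an anonymous volume is created for /data
    docker run -v app-data:/data nginx   # 2. name a specific volume (mount) yourself
    # 3. in the dockerfile, declare the mount point so every container gets a volume there:
    #    VOLUME /data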
[Figure 7 diagram: three storage arrangements: (a) memory internal to the container, (b) a file system in the Docker area of the host, external to the container, and (c) a file system external to both the container and the host.]
Docker containers need to communicate with the outside world in some fashion. Docker container communication is done using bridges to create virtual networks. The type of network and the interfaces used by the application need to be declared when the container is configured. The default Ethernet bridge in Docker is called docker0 and is created automatically.
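For example, bridge networks can be listed, created, and used from the CLI (network and container names illustrative):
    docker network ls                                   # lists networks; the default bridge rides on docker0
    docker network create app-net                       # create a user-defined bridge network
    docker run -d --name web --network app-net nginx    # attach the container to that network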
In order to scale deployed applications, some form of container orchestration is needed. Container orchestration manages and schedules the work of each of the containers that support the applications. In short, container orchestration's primary mission is to manage the life cycle of the container, which includes provisioning and load balancing.
There are many container orchestrators available; some of the more prevalent ones are Kubernetes, Docker Swarm, and Red Hat OpenShift.
A container orchestration platform sits on top of, or rather outside of, Docker. The orchestration platform should be able to perform at least the following general tasks:
• Provisioning and deploying containers
• Provisioning the host environment
• Instantiating containers
• Allocating resources between containers
• Exposing services to entities outside the cluster
• Load balancing
• Scaling the cluster automatically by adding or removing containers
• Ensuring redundancy and availability of the containers
• Rescheduling/restarting failed containers
• Health monitoring of containers and hosts
Both Docker Swarm and Kubernetes perform all the tasks above, and Docker Inc. has made Kubernetes, along with Docker Swarm, part of the Docker Community Edition (CE) and Docker Enterprise Edition (EE). Kubernetes is more complex than Docker Swarm but offers the added benefit of richer orchestration.
The following gives a brief overview of both Docker Swarm and Kubernetes.
Docker Swarm (DS) is a Docker-native, open-source container orchestration engine that coordinates container placement and management among multiple Docker Engine hosts. DS clusters Docker Engines into what is called a swarm. Through the use of clusters, DS lets you communicate directly with the swarm instead of communicating with each DE individually.
Figure 8 is an example of a DS architecture. With DS there are two types of nodes, both shown in Figure 8: Manager Nodes and Worker Nodes. In DS, a task is a Docker container (DC) and the commands to run inside it.
• The Manager Node manages the Worker Nodes. When deploying an application, the manager node delivers the work, or tasks, to the Worker Nodes by distributing and scheduling them.
• Worker Nodes run the tasks. A Worker Node also reports back to the Manager Node, informing it of the health and wellbeing of its tasks and containers.
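A sketch of standing up a small swarm and deploying a replicated service (addresses, names, and the join token are placeholders):
    docker swarm init --advertise-addr 192.168.1.10                  # on the manager node; prints a join token
    docker swarm join --token <token> 192.168.1.10:2377              # run on each worker node
    docker service create --name web --replicas 3 -p 80:80 nginx     # the manager schedules 3 tasks across workers
    docker service ls                                                # check the service and its replicas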
Figure 8: Docker Swarm
Kubernetes, however, is becoming the orchestrator of choice for Docker deployments. Kubernetes is an open-source project that utilizes a client-server architecture. There are several variants of Kubernetes offered by AWS, Google, Azure, and others; however, they are similar. A high-level diagram of Kubernetes is shown in Figure 9.
Figure 9: Kubernetes
In Figure 9 only one master and one node are shown, whereas in reality you can have many nodes associated with a master, representing a cluster. There can also be multiple masters for multiple clusters. Therefore, a cluster is a set of nodes with at least one master node and many worker nodes.
The Master manages the scheduling and deployment of an application. The Master communicates with the worker nodes and other master nodes through the API server. The main components of a Master include:
• API Server: Facilitates communication between the various components within Kubernetes.
• Scheduler: Places workloads on the appropriate nodes, assigning pods to worker nodes according to the configured rules.
• Controller Manager: Ensures that the cluster's desired state matches the current state by scaling workloads up and down to match what the configuration calls for.
• etcd: Stores the configuration data, which can be accessed by the API Server.
The worker node hosts the Docker containers and communicates with the repository to retrieve images. The worker node has several components, and its pods hold the containers.
[Figure 8 diagram: the Docker CLI drives a set of manager nodes sharing an internal distributed state store, which dispatch tasks to many worker nodes.]
[Figure 9 diagram: a master node (API server, scheduler, controller manager, etcd), driven by kubectl or a UI, manages a worker node (kubelet, kube-proxy, Docker) whose pods hold containers, with images pulled from Docker Hub and traffic arriving from the Internet.]
• Kubelet: There is one per Worker Node; it is responsible for managing the state of the node, including functions like starting and stopping containers. The kubelet gets its instructions from the API server in the Master Node.
• Pods: Kubernetes deploys and schedules containers in groups called pods. The containers in a pod run on the same node and share resources such as volumes, kernel namespaces, and an IP address.
• Containers: These are the Docker containers that reside in a pod.
• Kube-Proxy: A network proxy on each worker node that routes traffic to the pods and provides their interface to the rest of the cluster and the outside world.
However, all of this has to start with an ask of some sort. The kubectl command or a UI is used to instruct the Master Node, via the REST API, to create a pod. As noted earlier, a pod can contain multiple containers.
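A sketch of that ask: a minimal pod manifest and the kubectl commands that deliver it to the master's API server (names illustrative):
    # pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80

    kubectl apply -f pod.yaml   # the API server stores the spec in etcd; the scheduler picks a node
    kubectl get pods            # the kubelet on that node reports the pod's status back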
This article, while somewhat long, is just a brief introduction to containers for use in a virtual environment by wireless operators, enterprises, utilities, and public safety. The use of containers will facilitate edge computing and service delivery.
An excellent place to continue with containers and Docker is www.docker.com, where you can get more detailed information on any of the topics discussed here.
I hope this information has been of some use; I have several more articles that you may also want to read.
Clint Smith, P.E.
Next G Connect
CTO
csmith@nextgconnect.com
Who we are:
NGC is a consulting team of highly skilled and experienced professionals. Our background is in wireless communications for both the commercial and public safety sectors. The team has led deployments and operations spanning decades in wireless technology. We have designed software and hardware for both network infrastructure and edge devices, from concept to POC/FOA. Our current areas of focus include 4G/5G, IoT, and security.
The team has collectively been granted over 160 patents in the wireless communication space during their careers. We have also written multiple books on wireless technology, used extensively in the industry and published by McGraw-Hill.
Feel free to utilize this information in any presentation or article, with the simple request that you reference its origin.
If you see something that should be added or changed, or you simply want to talk about your potential needs, please contact us at info@nextgconnect.com or call us at 1.845.987.1787.