Historically, IT administrators deployed applications at a 1:1 application-to-server ratio. When the business required a new application, it was deployed onto a newly provisioned physical system, to ensure no conflicts with existing applications and workloads. This resulted in a huge number of physical servers, all with very low utilization.
Fast forward to a more modern datacenter, where virtualization is now prevalent, and you’ll find significantly higher consolidation ratios, much greater utilization and significantly accelerated app deployment speeds as administrators deploy applications in minutes, compared with hours, days or weeks in a purely physical datacenter.
Compared with applications that ran on individual physical servers, the compatibility of those same apps to run inside virtual machines was typically very high. After all, the virtual machine just presents virtual hardware to the same operating system that was running in the physical world. The only consideration was whether the application or workload required a specific piece of hardware, such as a PCI-E card, that couldn’t be virtualized and presented through to the guest operating system. In addition, once an application was encapsulated inside a virtual machine, it benefited from higher levels of redundancy, and also mobility, through features such as live migration.
There is however, a new and increasingly popular way to build, ship, deploy and instantiate applications. Containers can further accelerate application deployment and streamline the way IT operations and development teams collaborate to deliver applications to the business.
But what are containers? Well, to give the computer science definition, containers are an operating system-level isolation method for running multiple applications on a single control host. When developers build and package their applications into containers, and provide them to IT to run on a standardized platform, the overall effort to deploy applications is reduced, and the whole dev and test cycle can be streamlined, ultimately reducing costs. As containers can run on a host OS which itself could be physical or virtual, this provides IT with flexibility, and the opportunity to drive an increased level of server consolidation, all whilst maintaining a level of isolation that allows many containers to share the same host operating system.
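As a concrete illustration of this packaging idea, a developer might describe an application and its dependencies in a Dockerfile along these lines; this is a minimal sketch, and the base image, paths and application name are placeholders, not taken from the source:

```dockerfile
# Illustrative Dockerfile - names and paths are hypothetical placeholders.
# Start from a base image that describes the underlying operating system.
FROM microsoft/windowsservercore
# Copy the application binaries and its library dependencies into the image.
COPY app/ C:/app/
WORKDIR C:/app
# The process the container runs when it starts.
CMD ["app.exe"]
```

Once built, the resulting image carries everything the app needs, which is what lets it run unmodified wherever a compatible container host is available.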
But why do we need containers? What do containers provide that virtual machines can’t? Who is driving the momentum behind containers?
Applications are fueling innovation in today’s cloud-mobile world, and developers hold the keys to the power of those applications. The more streamlined and efficient the process by which developers build and deliver their applications, the faster those applications can reach the business. This, however, has to work across both the developers and IT, who hold the keys to the infrastructure that the applications will run on.
For developers, containers unlock huge gains in productivity and freedom – the ability to build an application, package it within a container, and deploy it, knowing that wherever that container is deployed, it will run without modification, whether that is on-premises, in a service provider’s datacenter, or in the public cloud, using services such as Microsoft Azure. These containers don’t have to be deployed independently – developers can model complex multi-tier applications, with each tier packaged within a container, and these can be distributed across IaaS and PaaS models, again increasing the overall surface area the developer can aim for when releasing their application. This powerful abstraction of microservices gives developers incredible potential to deliver applications more rapidly than ever before. They can’t, however, do it without the Operations team’s support.
On the Operations side, they benefit considerably by being able to achieve even higher levels of consolidation for applications and workloads than virtualization alone could provide, and in addition, they can put in place a platform that can rapidly scale up and down to meet the changing needs of the business. This standardized platform is easier to manage, yet provides the developers with a consistent environment into which they can simply provide their app and hit ‘run’.
This integration across development and operations is what’s becoming known in the industry as DevOps. DevOps aims to integrate people, process and tools to streamline the application development and deployment process. Ops can focus on providing a standardized infrastructure and a set of resources that can be consumed by the development teams, and developers can focus on designing, building, packaging and testing their applications, utilizing the platform that IT provide.
Now that we understand a little more about why containers are important to both operations and developers, it’s important to understand just what a container is.
As you can see from the graphic on the right hand side, at the base we have a server. This could be physical, or virtual, and at this stage, it doesn’t matter. On that server, is a host operating system, which, for the purpose of a containers discussion, has container support within the kernel.
If you think about an application, each app tends to have its own dependencies. These could include software, such as services or libraries, or hardware, such as CPU, memory, or storage. The container engine that exists within the host OS is essentially a lightweight virtualization mechanism which isolates these dependencies on an application-by-application basis, by packaging them into virtual containers.
The differences in underlying OS and infrastructure are abstracted away – as long as the base image is consistent, the container can be deployed and run anywhere, which for developers, is incredibly attractive.
These containers run as isolated processes in user space on the host operating system, sharing the kernel with other containers. They can also be created almost instantly, which unlocks rapid scale-up and scale-down scenarios in response to changes in demand from the business.
Containers are attractive for developers and for IT for a number of reasons:
- Fast iteration: containers allow for rapid iteration through the development process, both because they are lightweight and because of the way the application is packaged with its dependencies.
- Defined state separation: changes to the container don’t affect other containers.
- Resource controls: the host controls how much of the host’s resources can be used by a container. Governing resources like CPU, RAM and network bandwidth ensures that a container gets the resources it expects and that it doesn’t impact the performance of other containers running on the host.
- Immutability: changes made within one container won’t affect containers running on the same host.
- Rapid deployment: since containers are lightweight in terms of resources, they are easy to move, copy, and share. This enables rapid application deployment.
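On a Docker-managed host, resource controls like those above are typically expressed as flags at container start time. A minimal sketch, assuming the Docker CLI is available on the host; the image name is an illustrative placeholder:

```shell
# Sketch: cap a container's relative CPU weight and memory at start time.
# "myapp" is a hypothetical image name, not from the source.
docker run -d --cpu-shares 512 --memory 256m myapp
```

The host owner sets these limits, so each container gets the resources it expects without starving its neighbours.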
But how do these containers differ from VMs?
Well, if you think about a VM, each VM typically includes the app itself, its required binaries and libraries, and a guest OS, which may consist of multiple GB of data. This runs on top of a hypervisor and consumes a slice of resources from the underlying host. One advantage of the virtualization approach is that the virtual machines can contain guest operating systems different from one another, and from the host operating system, which provides considerable flexibility and high utilization. In addition, virtual machines can be flexibly migrated from host to host, preserving state, granting administrators considerable flexibility, especially in scenarios such as resource optimization and maintenance. Virtual machines also offer very high levels of isolation, both resource and security, for key virtualized workloads.
You can, however, achieve a ‘best-of-both-worlds’ approach.
Containers run on a host OS, but that host OS doesn’t need to be physical. By combining containers with VMs, users can deploy multiple, different VM operating systems, and run multiple containers inside each of those guest OSs. Fewer VMs are then required to support a larger number of apps, and fewer VMs result in a reduction in storage consumption.
Each VM would support multiple isolated apps, albeit sharing the same guest operating system as the base image, while increasing overall density. This provides IT with considerable flexibility, as running containers inside VMs enables features such as live migration for optimal resource utilization and host maintenance.
So what are some of the core Windows Server container capabilities?
The first key takeaway is that core container functionality is supported natively within the kernel, and will be available in the next release of Windows Server.
Developers will use familiar development tools, such as Visual Studio, to write apps to run within containers. Instead of trying to backport existing applications, by building modular apps leveraging containers, modules can scale independently, and be updated on independent cadences, providing the developer with much greater flexibility and speed.
Applications can rely on other packages to provide core functionality. As you can see from the graphic, there are 2 containers that are sharing a number of libraries. In addition, when packaging, the packages also depend on a base package which describes the underlying operating system, such as Server Core, which carries the large set of APIs that Windows supports, such as .NET, IIS, etc. Nano Server is another, with a much smaller surface, targeting apps that have been written from the ground up with the cloud in mind.
Containers are isolated behind their own network compartment, which can be given an address via NAT, DHCP, or a static IP. Each container has an independent session namespace, which helps to provide isolation and additional security. The kernel object namespace is isolated per container.
Each container also has access to defined CPU and memory resources, along with storage and network capacity – these are controlled by the administrator, ensuring predictable and guaranteed control of processes.
These containers can be managed using tools such as PowerShell, or using the Docker management tools.
So what does a lifecycle look like?
Firstly, developers build and test their applications, in containers, on their own box. This could be using a development environment like Visual Studio, or one from a 3rd party. You’ll see in this case there are a couple of different containers, perhaps representing two tiers of an application or workload.
Once completed, these containers are pushed to a central repository. This could be a Docker repository, which you’ll learn more about later.
Operations automates deployment of the containers, from this central repository, to the target machines, which could be physical or virtual. They continue to monitor the containers…
…and collaborate with developers to provide them with insight and monitoring metrics which help the development teams gain insight into the usage of the applications.
This insight could be used to drive an update to a particular container: the developers perform the update on their own boxes, iterate a version, and deploy the updated version to the central repository, which in turn is then used to update the existing deployed containers. They could also, if they wanted, roll back to a previous version. Containers provide considerable flexibility in this space.
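The lifecycle above can be sketched with the Docker CLI. This is a hypothetical sequence; the image name, tags and registry address are placeholders, not taken from the source:

```shell
# 1. Developer builds and tests the container on their own box.
docker build -t registry.example.com/myapp:1.0 .
docker run -d registry.example.com/myapp:1.0

# 2. The image is pushed to a central repository.
docker push registry.example.com/myapp:1.0

# 3. Operations pulls and runs it on the target machines (physical or virtual).
docker pull registry.example.com/myapp:1.0
docker run -d registry.example.com/myapp:1.0

# 4. To update, build and push a new tag (e.g. 1.1) and redeploy;
#    to roll back, simply run the previous tag again.
```

Because each version lives in the repository as its own immutable tag, moving forward or rolling back is just a matter of choosing which tag to run.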
Hyper-V Containers take a slightly different approach to containerization. To create more isolation, Hyper-V Containers each have their own copy of the Windows kernel and have memory assigned directly to them, a key requirement of strong isolation. We use Hyper-V for CPU, memory and IO isolation (such as network and storage), delivering the same level of isolation found in VMs. As with VMs, the host only exposes a small, constrained interface to the container for communication and sharing of host resources. This very limited sharing means Hyper-V Containers are a bit less efficient in startup times and density than Windows Server Containers, but provide the isolation required to allow untrusted and “hostile multi-tenant” applications to run on the same host.
So aren’t Hyper-V Containers the same as VMs? Besides the optimizations to the OS that result from it being fully aware that it’s in a container and not a physical machine, Hyper-V Containers will be deployed using the magic of Docker and can use the exact same packages that run in Windows Server Containers. Thus, the tradeoff of level of isolation versus efficiency/agility is a deploy-time decision, not a development-time decision – one made by the owner of the host.
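In the Docker tooling for Windows as it later shipped, that deploy-time choice surfaces as a single flag chosen by the host owner; the same package runs either way. A sketch, with a hypothetical image name:

```shell
# Same package, two isolation levels - a deploy-time decision by the host owner.
# "myapp" is an illustrative image name.
docker run -d --isolation=process myapp   # Windows Server Container
docker run -d --isolation=hyperv  myapp   # Hyper-V Container
```

Nothing about the image changes between the two invocations, which is exactly why the isolation tradeoff never has to be made at development time.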
We’ve mentioned Docker a number of times already – what is Docker?
At a high level, Docker is an open source engine that automates the deployment of any application as a portable, self-sufficient container that can run almost anywhere.
Back in June 2014, Microsoft Azure added support for Docker containers on Linux VMs, enabling the broad ecosystem of Dockerized Linux applications to run within Azure’s industry-leading cloud.
In October 2014, Microsoft and Docker Inc. jointly announced bringing the Windows Server ecosystem to the Docker community, through investments in the next wave of Windows Server, open-source development of the Docker Engine for Windows Server, Azure support for the Docker Open Orchestration APIs and federation of Docker Hub images into the Azure Gallery and Portal.
Many customers are running a mix of Windows Server and Linux workloads, and Microsoft Azure offers customers the most choice of any cloud provider. By supporting Docker containers on the next wave of Windows Server, we are excited to make Docker open solutions available across both Windows Server and Linux. Applications can themselves be mixed, bringing together the best technologies from the Linux ecosystem and the Windows Server ecosystem. Windows Server containers will run in your datacenter, your hosted datacenter, or any public cloud provider – and, of course, Microsoft Azure.
Docker has done a fantastic job of building a vibrant open source ecosystem based on Linux container technologies, providing an easy user experience to manage the lifecycle of containers drawn from a huge collection of open and curated applications in Docker Hub. We will bring Windows Server containers to the Docker ecosystem to expand the reach of both developer communities.
As part of this, Docker Engine for Windows Server containers will be developed under the aegis of the Docker open source project, where Microsoft will participate as an active community member. Windows Server container images will also be available in the Docker Hub alongside the huge number of Docker images for Linux already available.
Finally, we are working on supporting the Docker client natively on Windows Server. As a result, Windows customers will be able to use the same standard Docker client and interface for management across multiple development environments.
Windows Server and Hyper-V Containers will both take advantage of the smaller installation options for Windows Server: Server Core and (new in Windows Server 2016) Nano Server. Nano Server is a highly-optimized, headless deployment option for Windows Server that runs at a fraction of the Windows Server footprint and is ideal for cloud services. Containers running Server Core are available now with Windows Server 2016 Technical Preview 3.
Our goal as we bring containers to Windows Server has been to offer multiple choices and tools. That means you can manage with PowerShell or with Docker. You will be able to choose in Azure between Linux containers and Windows Server Containers. And you can choose to run containers in Azure, in an on-premises datacenter or in a service provider datacenter. You can also take advantage of familiar tools to build applications.
So it’s important to think of containers as really part of your arsenal – an option just the way that virtual machines are an option.
How do the different containers and VM technologies compare?
As mentioned earlier, containers, regardless of Windows or Linux, need to share the same OS as the host they are running on, which is very different from a virtual machine, which can contain a variety of different operating systems, that don’t need to match the host itself. VMs however, do guarantee a higher level of security than containers, providing a level of hardware isolation that cannot be matched by containers today.
All 3 offerings allow resources, such as CPU, memory, disk and network, to be controlled and managed, ensuring that the administrators can deliver expected levels of performance and reliability.
When thinking about density – the number of applications that can run on a particular server – the lighter-weight nature of containers naturally leads to higher levels of density compared to virtual machines, which themselves provide a much greater level of density than the 1:1 app-to-server ratio of the physical world. In addition, that lightweight nature leads to reduced startup times for applications, versus starting up a virtual machine from cold, meaning IT can respond even more quickly to changing business needs. VMs, however, do benefit from VM-specific features, such as live migration and high availability, which do not apply to containers themselves.
Having multiple VMs providing multiple apps also consumes considerably more storage space. In a production environment, each VM typically has its own non-shared virtual hard disk; if, for example, each virtual disk is 20GB, then 20 VMs each with an application inside would consume around 400GB. Compare this with containers, which consume 20GB for the host OS plus only an incremental amount for the application binaries themselves, on a per-application basis, and it’s clear that the savings on storage are considerable.
Finally, from an application perspective, virtual machines typically have very high compatibility for running applications that were designed for physical systems. Most applications being developed today are developed and tested in virtual environments; however, these apps, in their native form, are unlikely to work in containers without significant redevelopment work. To realize the greatest benefit, apps should be designed, architected and written for containers – i.e. stateless and componentized – rather than ports of traditional, monolithic apps.
Windows Server Containers
Enable application developers to build innovative applications without affecting the management and control of servers.
Developers want to build their applications in a competitive environment, free of IT concerns.
While new applications run smoothly on developers’ machines, they run into problems on IT’s servers.
Developer productivity and application innovation are put on hold because of these kinds of problems.
IT needs minimal disruption while managing and maintaining servers.
IT needs developers’ help when integrating unfamiliar applications into its systems.
IT is responsible for application compatibility and server protection.
Traditional virtual machines = hardware virtualization (diagram: multiple VMs on a host)
Containers = operating system virtualization (diagram: multiple containers on a host)