vSphere Integrated Containers – Manageability and Security


For the last few years I’ve been noting with increasing anxiety the rise of Docker and the predicted invasion of the virtualized machine environment. When VMware previewed the open source Photon platform and the vSphere Integrated Containers initiative, I breathed a sigh of relief that they had noticed the amassing armies of container fans.

What are VMware Integrated Containers, and how do they measure up in terms of infrastructure qualities? In this post I will look at just two aspects: Manageability and Security. (For more information on infrastructure qualities see here.)

 


container layers - shared kernel

Side Note: Containerization

Containerization is a form of virtualization; however, unlike full virtual machine virtualization, the host kernel is shared among containers. A container wraps up application software in a complete filesystem along with all of its dependencies.

Because a container is a standardized unit that contains everything it needs, it can easily be transported from a developer’s desktop to production servers without having to worry about inconsistencies between environments.

The diagram shows full virtual machine virtualization side-by-side with containers: the virtual machines run on a hypervisor and each need their own guest operating system, whereas containers share the host kernel, making them lighter and quicker than full virtual machines.

 

 

 

Containers are built from lightweight images and run in layers; as changes are made to the base image, a progressive series of layers is built and stacked one on top of the other. Only the top layer is writable; the base image and lower layers are read-only.

If a developer wants to discard a change, the previous image layers are retrievable and re-buildable. This is obviously very cool for application development, as a complete working edition of the previous version is available within seconds.
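
As a rough illustration of those stacked read-only layers, the sketch below uses the Docker SDK for Python to pull a small image and print its layer history; it assumes only that a local Docker daemon is running, and the image name is just an example.

    import docker

    # Connect to the local Docker daemon (assumes Docker is installed and running).
    client = docker.from_env()

    # Pull a small base image and walk its layer history.
    image = client.images.pull("alpine:latest")

    for layer in image.history():
        # Each entry is one read-only layer: the command that created it and its size in bytes.
        print(layer.get("Id"), layer.get("Size"), layer.get("CreatedBy"))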

 

 


Manageability:

vSphere environments are ubiquitous, and a hypervisor is the easiest place to get a container project started, as virtual machines are ludicrously simple to provision and highly scalable.

VMware state that Integrated Containers accelerate container initiatives by enabling IT teams to take advantage of existing VMware infrastructure. So stop and think about that: if your priority is to get a Docker initiative up and running, how many new concepts are you willing to take on board? For anything more than a home/demo lab, production architecture needs to be thought about, along with how multiple teams can support the infrastructure. VIC (vSphere Integrated Containers) is much more than a VM on a hypervisor running Docker. A plugin will allow teams of administrators and operators to manage and monitor containers directly in the vSphere Web Client.

container layers - VIC

The concept that VMware have presented is that VIC simplifies the configuration of multi-host deployments: compute resources (i.e. a cluster of ESXi hosts), shared storage and network resources are combined as a logical entity to create a VCH (Virtual Container Host). When containers are then deployed onto a VCH, VMware’s resource management takes care of placement and resource allocation.
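
To make that concrete, here is a minimal sketch (Docker SDK for Python) of what presenting the VCH as a Docker endpoint means in practice. The endpoint address is hypothetical, and the exact connection and TLS options for a real VCH will depend on how it was deployed.

    import docker

    # Hypothetical VCH endpoint: a Virtual Container Host presents a Docker-compatible
    # API, so a standard Docker client can be pointed at it instead of a local daemon.
    vch = docker.DockerClient(base_url="tcp://vch.example.com:2376", tls=True)

    # From here the workflow is ordinary Docker; placement of the resulting container
    # across the cluster's compute, storage and network resources is left to vSphere.
    worker = vch.containers.run("busybox", "sleep 600", detach=True)
    print(worker.id, worker.status)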

What VMware have called out is that the lack of isolation of micro-instances within containers complicates performance troubleshooting and infrastructure administration. Their solution is to use Instant Clone technology to create separate virtual machines, providing a familiar experience for resource monitoring and capacity provisioning. These isolated individual machines are called “just enough VMs”, forked from the bare-bones ‘pico’ edition of VMware’s open source Photon OS Linux kernel. In other words, the jeVM is a container for the containers.

The full level of compatibility with other VMware features such as snapshots and CBT backups will become apparent over the next few weeks and months. VMware have stated that High Availability, Dynamic Resource Scheduling and vMotion will be available; added to these is integration with VSAN and NSX.

VMware are looking at Docker from an infrastructure perspective; for example, they have worked in collaboration with ClusterHQ to develop a vSphere driver for Flocker. Native Docker data volumes are tied to a single server, so when a container moves they stay put. Flocker enables volumes to follow containers when they move between different hosts in a cluster. What the vSphere driver will allow is the provisioning of storage for Docker containers using shared vSphere-compatible storage, including VSAN.
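
As a hedged sketch of what that looks like from the Docker side (Python SDK again): a named volume is created with a third-party volume driver and mounted into a container. The driver name, image and paths below are illustrative only, not the actual options of the vSphere/Flocker driver.

    import docker

    client = docker.from_env()

    # Create a named volume backed by a third-party volume driver. "flocker" is an
    # illustrative driver name; a vSphere-aware driver would place the volume on
    # shared, cluster-visible storage rather than a single host's local disk.
    data = client.volumes.create(name="orders-db", driver="flocker")

    # Mount the named volume into a container. Because the data lives on shared
    # storage, it can follow the container if it is rescheduled onto another host.
    client.containers.run(
        "postgres:13",
        detach=True,
        environment={"POSTGRES_PASSWORD": "example"},
        volumes={data.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    )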

 

 

Security:

Security and development are not two words that sit comfortably in the same sentence, and Docker is about empowering the developer. One concern that is often raised with Docker is network security and management.


container layers - single host network

Side Note: Docker Networking

Native Docker networking for single hosts is typically through the Docker bridge, which maps an internal network to the host network; docker commands run on the host are used to expose and map ports.
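
For reference, the same port mapping expressed with the Docker SDK for Python (equivalent to docker run -p 8080:80); the image and port numbers are just examples.

    import docker

    client = docker.from_env()

    # Run a web server on the default bridge network and publish container port 80
    # on host port 8080 -- the bridge NATs traffic from the host into the container.
    web = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})

    # Refresh the container's attributes and show the resulting port mapping.
    web.reload()
    print(web.attrs["NetworkSettings"]["Ports"])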

Multi-host networking is supported natively out-of-the-box, though it requires a valid key-value store service, a swarm cluster, and a VXLAN-based overlay network; the networks are then configured on the swarm master. The overlay network driver provides out-of-the-box connectivity between containers on multiple hosts within the same network, while the docker_gwbridge network gives containers external connectivity outside of their cluster.
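
And a minimal sketch of the multi-host case, assuming the swarm and key-value store prerequisites above are already in place; the network and container names are illustrative.

    import docker

    # Assumes this daemon already meets the swarm / key-value store prerequisites.
    client = docker.from_env()

    # Create a VXLAN-backed overlay network spanning the hosts in the cluster.
    overlay = client.networks.create("app-overlay", driver="overlay")

    # Containers attached to the overlay on different hosts can reach each other by
    # name; external traffic leaves via the docker_gwbridge network described above.
    client.containers.run("alpine", "sleep 300", detach=True, network=overlay.name)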

There are also a multitude of non-native solutions, such as the Google-sourced Kubernetes. However, these container networks are being provisioned by the Docker administrator rather than by infrastructure or network teams.


 

VIC-console

The potential for very large-scale container sprawl is enormous, and with it comes a huge audit headache, besides the likelihood that a less experienced Docker administrator will open the door to intrusion, especially for production web-facing applications.

With Integrated Containers, the graphical Web Client plug-in allows infrastructure and network admins, as well as security and operations teams, to see this all-important port mapping information and to know where each container is located.

 

VMware have already indicated that Integrated Containers will tie in with NSX network virtualization, and although NSX is neither open source nor cheap, it is the natural fit. NSX is much more than a VXLAN overlay; the ability to apply security to groups of machines based on policy is fundamental to large-scale micro-segmentation and security.

 

An initial version of VIC is now available on GitHub (vmware/vic), along with the first blogs on how to install it and get it running. It’s still early days, but production-ready code is now coming into view.

 

Cormac Hogan – Getting started with Photon OS and vSphere Integrated Containers

Björn Brundert – Install vSphere Integrated Containers v0.1 via VMware Photon OS TP2

 

 

 
