VMware + Containers = Containers without Compromise

For this post, I’m temporarily taking off my VMware End User Computing hat and putting back on my Software-Defined Data Center (SDDC) one. (As many of you may know, I was part of the VMware SDDC team for 9+ years, so every now and then I still need my SDDC fix!)

Today I’m at Gartner Catalyst participating in a panel session entitled “Virtual Machines: Forever Glory or Dying Breed?” The session is motivated by the surge of interest in Docker and Linux containers, and the question of whether containers might one day replace virtual machines. You’ve probably heard about containers or seen them in the news, but not everyone has had time to research container-based approaches. With that in mind, let me give a quick background on containers before going into VMware’s perspective on the whole matter.

A quick container primer

The notion of a “container” is that it provides operating system-level process isolation, similar in concept to the hardware virtualization we do at VMware. The difference is that the isolation is done in the OS rather than at the hardware abstraction layer. Containers have been around in various forms for years: FreeBSD Jails and Solaris Zones, for instance. Google also realized the potential of containers early on and started contributing process-isolation functionality to various Linux kernel subsystems. Projects like OpenVZ and LXC emerged, both to contribute to the Linux kernel and to orchestrate those kernel subsystems so that isolated processes (containers) can be run on Linux.
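To make that concrete, here’s a minimal sketch (my own illustration, not from the panel) that lists the kernel namespaces isolating a process. It assumes a reasonably modern Linux host with a `/proc` filesystem; each entry is one of the per-process isolation subsystems that container runtimes like LXC and Docker orchestrate:

```python
# List the kernel namespaces that isolate a process on Linux.
# Each file under /proc/<pid>/ns names one isolation subsystem
# (pid, net, mnt, ipc, uts, ...) -- the building blocks that
# container projects combine to form an isolated "container".
import os

def current_namespaces(pid="self"):
    """Return the namespace types (pid, net, mnt, ...) for a process."""
    return sorted(os.listdir(f"/proc/{pid}/ns"))

if __name__ == "__main__":
    # A containerized process and a host process each get their own
    # instances of these; the kernel keeps the instances separate.
    print(current_namespaces())
```

Running this inside and outside a container prints the same namespace types, but the kernel objects behind them differ, which is exactly where the isolation lives.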

Even though Linux was building a fledgling set of container technologies, and even with Google strongly advocating containers for application delivery, not many people noticed or showed interest in containers until Docker hit the scene almost a year and a half ago. Docker has taken Linux containers to the next level by creating a simple, elegant application packaging system that leverages containers and enables true portability across Linux distros and between dev and prod environments, while efficiently capturing a full application image along with its associated libraries. Docker has also crafted its APIs to fit seamlessly into the developer workflow. In other words, Docker made containers easy and approachable for any and every developer.
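As a hypothetical illustration of that packaging model (the base image, file names, and commands below are my own assumptions, not from any particular project), a complete application image can be described in a few declarative lines:

```dockerfile
# A minimal, hypothetical Dockerfile: everything the app needs --
# base OS userland, dependencies, and code -- is captured in one
# image that runs the same way on any Linux host with a Docker engine.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
COPY app.py /opt/app/app.py
CMD ["python", "/opt/app/app.py"]
```

Building the image with `docker build` and starting it with `docker run` yields the same environment on a developer laptop and a production host, which is the portability point above.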

When first hearing about containers, people often compare them to virtual machines, given the similar isolation concepts. This is in fact where the topic of today’s panel session comes from: if you have containers, why do you need virtual machines? It’s an interesting question, but in many ways it misses the point.

The important point is that we (IT vendors and R&D folks like me) need to make customers successful. Specifically, that means enabling customers to successfully run and manage applications.

Before I dive into the details, I want to be clear about two points:

  • First, VMware sees tremendous value in containers. In fact, VMware has been a huge proponent of containers for many years now. You could even call us a pioneer of containers in the enterprise space: we created a container system called Warden for Cloud Foundry back in the fall of 2011, precisely because we realized the need for simple application delivery into an isolated OS environment. Thus we’re very excited to see Docker catalyzing the industry around containers, streamlining application delivery and helping to make customers even more successful.
  • Second, we (VMware) see containers and virtual machines as technologies that function better together. As I’ll show below, it’s not just the basic runtime that containers enable, it’s about providing mature, enterprise-proven infrastructure services to make customers successful running and operating containerized applications in production. By combining containers and virtual machines, customers can improve their ability to deliver applications without compromising their enterprise IT standards.

It’s about more than just the container!

Running and managing apps is what VMware has always been about. The point is not about VMs as a container, per se, but about the infrastructure that has been developed around VMs to run applications more efficiently and securely while ensuring high availability. For example: checkpoint/restore capabilities to enable mobility; resource isolation to ensure compute, network, and storage QoS; network isolation/micro-segmentation, firewalling, load balancing, and other network services; shared persistence and mechanisms for storage snapshotting and replication; hybrid on/off-premises operation; and much more. To be sure, all apps will need these capabilities, whether they run on containers or VMs. VMware and its partners have developed this ecosystem of infrastructure services over the last decade for apps running in VMs. This is where VMware has been focused with our infrastructure and cloud products like vSphere, NSX, Virtual SAN, and vCloud Hybrid Service. And we will bring these infrastructure capabilities to containers by leveraging our experience with VMs.

Managing applications is just as important as running them, and VMware has focused on making customers successful in this area as well by providing enterprise tools for automated provisioning and lifecycle management, performance and capacity management, cost and metering, and of course a common governance model with self-service capabilities. Making customers successful means being able to support everything a customer has running in their datacenter, so all of these VMware management products and tools support not only VMware infrastructure but also physical machines and other hypervisors, not to mention hybrid and public clouds. This functionality is provided by VMware’s cloud management products, including vCloud Automation Center, vCenter Operations Management Suite, Log Insight, and IT Business Management. As with infrastructure, we can extend these enterprise management services to containers so that customers can run them with the same assurances in production.

All of these pieces together, infrastructure and management, comprise VMware’s Software-Defined Data Center (SDDC).

The value of VMware’s SDDC stack, and its continued evolution, is that all of the operational complexity I mentioned above (such as security, performance monitoring, etc.) can be masked from developers, allowing them to do their jobs more efficiently. Moreover, the benefit for IT is a single platform for running and managing all their apps and infrastructure. A great example of this is our sister company Pivotal’s Cloud Foundry-based product, Pivotal CF. Pivotal CF is an enterprise-grade system for running and managing containers, which takes full advantage of VMware’s SDDC. This allows IT to avoid creating yet another datacenter silo as new workloads enter the picture, enabling them to consistently deliver services to developers and other end-users while maintaining control, security, and compliance.

Virtual machines and containers: better together

Now let’s get back to the containers vs virtual machines question raised above. You see, this is really a question about the compute piece of the SDDC stack described above. It’s an important piece, for sure, but just one piece.

  • That’s the first thing to understand: neither containers nor VMs, by themselves, are sufficient to operate an application in production. So any conversation about containers needs to consider how they’re going to be run in an enterprise data center.
  • Containers provide great application portability, enabling the consistent provisioning of the application across infrastructures. However, applications and data alone are rarely the major barrier to workload mobility. Instead, operational requirements such as performance and capacity management, security, and various management tool integrations can make redeploying workloads to new environments a significant challenge. So while containers help with portability, they’re again only a piece of a bigger puzzle.
  • While we believe that, in the limit, virtual machines and containers can achieve equal levels of security isolation, today Linux containers are unproven in the enterprise. First, Linux containers take OS subsystems designed to work across apps and try to add isolation after the fact. This is fundamentally different from a hypervisor, which is designed from the ground up for clean virtual machine isolation. Naturally, it will take some time for these Linux OS subsystems to achieve mature isolation characteristics. Second, vSphere virtual machines are time-tested in production enterprise environments. IT security experts have vetted them, they have been subjected to, and passed, any number of regulatory compliance tests, and they are now accepted as a standard security building block in IT. We have no doubt Linux containers will achieve this as well, but we believe it will take many years before they reach that point.
  • Finally, while positioned as separate technologies, containers can of course run inside VMs. Running containers inside VMs brings all of the well-known VM benefits: the proven isolation and security properties I just mentioned, plus mobility, dynamic virtual networking, software-defined storage, and the massive ecosystem of third-party tools built on top of VMs. The argument frequently raised against this is that container-on-VM performance is worse than containers running on physical hardware. But if you do an apples-to-apples comparison between a physical host with a certain config and a VM with the same config running on an identical host, you’ll be able to run the same number of containers with the same performance on both (we’ll follow up with a blog soon with some specific data points here – stay tuned!). Regardless, we see this as a choice for the customer: the SDDC can manage containers running on physical hardware, or customers can take advantage of mature VM technology to run their containers and get all the additional benefits mentioned above.

So, in our minds, it’s not really a question of containers OR virtual machines; it’s all about containers AND virtual machines. Again, it’s all about making customers successful, and we are very excited about the potential for containers to help customers deliver the next generation of applications. Customers will need a common way to run, operate, and manage both new and existing applications, and this is exactly what VMware’s Software-Defined Data Center offers. It’s architected to provide open support for heterogeneous infrastructure and apps, including containerized ones. In the end, VMware technology enables customers to operate containerized applications without compromising their enterprise IT standards.

VMworld

We’ll be delving into all of this at a much deeper level at VMworld US in San Francisco. Be sure to check out the General Session keynotes on Monday and Tuesday, August 25th and 26th. In addition, we’ll have a number of breakout sessions giving even more detail on our perspective, customer success stories, and partner integrations.

There’s a lot of great content being presented at VMworld that will dive down to the next level of detail on virtual machines and containers.

Containers are an exciting and dynamic topic, and we’ve really only scratched the surface of the debate and discussion to be had. We’ll continue the conversation at VMworld. I hope to see you there!

Are you looking at leveraging containers in your organization? Please leave your comments and questions below.
