Performance of RDMA and HPC Applications in VMs using FDR InfiniBand on VMware vSphere

[Figure: Bare-metal and virtual MPI latencies for a range of ESX versions]

Customers often ask whether InfiniBand (IB) can be used with vSphere. The short answer is yes. VM Direct Path I/O (passthrough) makes an InfiniBand card directly visible within a virtual machine so that the guest operating system can access the device natively. With this approach, no ESX-specific driver is required — just the hardware vendor’s standard native device driver for Linux or Windows. VMware supports the VM Direct Path I/O mechanism as a basic platform feature, and our hardware partners support their hardware and device drivers.
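
As a purely illustrative sketch (not taken from the paper), once the HCA has been passed through and the vendor's driver stack (for example, an OFED-based libibverbs installation) is present in the guest, a small program like the following can confirm that the device is visible to RDMA applications inside the VM:

/* Minimal sketch: list RDMA devices visible in the guest.
 * Assumes libibverbs is installed; build with: gcc check_ib.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);

    if (!devs || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found in this VM\n");
        return 1;
    }

    /* Print the name of each HCA the guest can see, e.g. mlx4_0 */
    for (int i = 0; i < num_devices; i++)
        printf("Found device: %s\n", ibv_get_device_name(devs[i]));

    ibv_free_device_list(devs);
    return 0;
}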

Those interested in InfiniBand are, of course, primarily concerned with performance — can we achieve performance competitive with bare metal? We’ve recently published a technical white paper that answers this question for two constituencies — those interested in consuming the low-level RDMA verbs interface directly (typically, distributed databases, file systems, etc.) and those interested in running HPC applications, which typically run on top of an MPI library which in turn uses RDMA to control the InfiniBand hardware.

In addition to providing detailed RDMA and MPI micro-benchmark results across several ESX versions, the paper includes several examples of real HPC application performance to demonstrate what can be achieved when running MPI applications on vSphere.
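
To give a rough sense of the kind of MPI latency measurement summarized in the figure above, the sketch below times small-message ping-pong exchanges between two ranks. It is not the benchmark used in the paper; it only illustrates the shape of such a micro-benchmark:

/* Minimal ping-pong latency sketch between two MPI ranks.
 * Illustrative only; build and run with:
 *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char byte = 0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double start = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    /* Report one-way latency: total time / (2 legs per iteration * iterations) */
    if (rank == 0)
        printf("One-way latency: %.2f us\n", elapsed / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}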

The paper, titled Performance of RDMA and HPC Applications in Virtual Machines using FDR InfiniBand on VMware vSphere, is available here.
