The network implementations found in High Performance Computing (HPC) clusters have historically differed from those in datacenters in a few key aspects: low latency, low CPU overhead, and high cost. Recent trends in networking indicate that these distinctions are beginning to disappear as HPC network prices drop and datacenter network equipment begins to adopt features previously found only in HPC clusters. Products are already being offered that implement kernel or CPU bypass (two common HPC network features) over 10Gbps Ethernet, while prices for the popular InfiniBand HPC interconnect have dropped dramatically and are now competitive with 10Gbps Ethernet hardware.
The goal of the Pilaf project is to leverage the features of these high-performance networks, especially Remote Direct Memory Access (RDMA), to build fast and CPU-efficient cloud infrastructure such as distributed storage systems.
We thank the NSF and the team building and maintaining the NSF PRObE testbed. Most of our scale-out experiments have been run on the PRObE testbed, and the project benefited enormously from being able to run these experiments.