Designing Non-blocking Broadcast with Collective Offload on InfiniBand Clusters: A Case Study with HPL
The upcoming MPI-3.0 standard is expected to include non-blocking collective operations. Non-blocking collectives offer a new MPI interface with which an application can decouple the initiation and completion of collective operations. To be effective, however, the MPI library must provide a high-performance and scalable implementation. One of the major challenges in designing an effective non-blocking collective operation is to ensure progress of the operation while processors are busy with application-level computation. The recently introduced Mellanox ConnectX-2 InfiniBand adapters offer a task-offload interface (CORE-Direct) that enables communication progress without requiring CPU cycles.

In this paper, we present the design of a non-blocking broadcast operation (MPI_Ibcast) using the CORE-Direct offload interface. Our experimental evaluations show that our implementation delivers near-perfect overlap without penalizing the latency of the MPI_Ibcast operation. Since existing MPI implementations do not provide non-blocking collective communication, scientific applications have been modified to implement collectives on top of MPI point-to-point operations to achieve overlap. HPL is an example of an application use-case for non-blocking collectives. We have explored the benefits of our proposed network-offload-based MPI_Ibcast implementation with HPL, and we observe that HPL can achieve its peak throughput with significantly smaller problem sizes, which also improves its run-time by up to 78% on 512 processors. We also observe that our proposed designs can minimize the impact of system noise on applications.