Scale Out vs. Scale Up for Ultra-Scale Reservoir Simulation
K. Mukundakrishnan, R. Gandham, K.P. Esler, D. Dembeck, J. Shumway and V. Natoli
Event name: Third EAGE Workshop on High Performance Computing for Upstream
Publication date: 01 October 2017
It is an undisputed truth that the demand for computational performance to simulate very large models in upstream applications is ever increasing. Conceptually, this demand can be met in one of two ways: "scale-out" implies exploiting additional computational nodes, while "scale-up" implies increasing the computational power of each node, particularly its floating-point throughput and memory bandwidth. In practice, these two approaches define opposite ends of a spectrum of cluster designs, ranging from many relatively weak "thin" nodes to a smaller number of powerful "fat" nodes. The scale-out approach gained increasing dominance in HPC as scalability was preferred over absolute efficiency. Over the past decade, however, energy efficiency has become the key performance limiter. For applications with significant communication requirements, including reservoir simulation, the use of scale-up fat nodes provides an opportunity to localize communications and minimize interconnect traffic, thereby increasing energy efficiency. However, harnessing fat nodes comprising several extremely high-performance GPUs to achieve performance for implicit simulations requires careful software design and novel algorithmic approaches. We will first present the algorithmic and computational challenges faced and the approaches needed to efficiently utilize the massive parallelism offered by such scaled-up nodes.
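The traffic-localization argument can be illustrated with a back-of-envelope model that is not part of the paper itself: for a structured 3D grid split into cubic per-node subdomains, off-node halo traffic scales with the total subdomain surface area, so consolidating the same grid onto fewer, fatter nodes keeps more of the communication inside each node. The grid size and node counts below are illustrative assumptions.

```python
# Hypothetical surface-to-volume sketch (not from the paper): estimate the
# cells exchanged over the interconnect per halo swap when a cubic 3D grid
# is decomposed across `nodes` cubic subdomains, one per node.

def off_node_halo_cells(grid_cells_per_dim, nodes):
    """Upper-bound cell count crossing the interconnect per halo exchange,
    assuming a perfect cubic decomposition and ignoring domain boundaries."""
    per_dim = round(nodes ** (1 / 3))       # subdomains along each dimension
    sub = grid_cells_per_dim // per_dim     # subdomain edge length in cells
    faces_per_node = 6                      # each cubic subdomain has 6 faces
    return nodes * faces_per_node * sub * sub

# Same 1024^3 grid on 64 thin nodes vs. 8 fat (multi-GPU) nodes:
thin = off_node_halo_cells(1024, 64)
fat = off_node_halo_cells(1024, 8)
print(thin, fat, thin / fat)  # the fat-node layout halves off-node traffic
```

Under this simplified model, moving from 64 thin nodes to 8 fat nodes cuts per-swap interconnect traffic by 2x, since each doubling of subdomain edge length only quadruples (rather than octuples) the exposed surface; the remaining exchanges become on-node GPU-to-GPU transfers.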