Summary

Large-scale partial differential equation (PDE) solvers use some form of message passing to handle communications between compute nodes (Gropp et al., 2014). Message passing can be handled explicitly by the application developer or implicitly by the programming language. The prime example of explicit message passing is the Message Passing Interface (MPI), while Chapel (Chamberlain, 2007) and UPC (Draper, 1999) are examples of the PGAS programming model, which makes the communications implicit.

It could be argued that the best performing parallel applications are the ones using carefully crafted explicit message passing. The principal reason is that the message passing implementation can be made as efficient as possible for one very specific problem. That efficiency comes at the cost of flexibility, however, if changes are required in the fundamentals of the algorithm. The converse is true for implicit message passing: higher flexibility, but the very best performance remains out of reach (Coarfa et al., 2005).

For a developer, the two approaches solve different problems. In applications using MPI, usually only a handful of calls to the message passing API are present in the whole application. Indeed, most of the grunt work lies in working out how messages are exchanged between processes, and in setting up the buffers and the routines that fill or empty them.
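
To illustrate the point, a minimal one-dimensional halo-exchange sketch is given below. It is not taken from the proposed library: the function name, the decomposition and the ghost-cell layout are assumptions made only for this example. The MPI calls themselves are few; the neighbour bookkeeping and the buffer layout around them are what the developer would otherwise maintain by hand.

    /* Hypothetical 1-D halo exchange: u[0] and u[n_local+1] are ghost cells,
     * u[1..n_local] is the locally owned data. */
    #include <mpi.h>

    void exchange_halo(double *u, int n_local, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        /* Neighbour ranks; MPI_PROC_NULL turns boundary exchanges into no-ops. */
        int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        MPI_Request req[4];
        MPI_Irecv(&u[0],           1, MPI_DOUBLE, left,  0, comm, &req[0]);
        MPI_Irecv(&u[n_local + 1], 1, MPI_DOUBLE, right, 1, comm, &req[1]);
        MPI_Isend(&u[1],           1, MPI_DOUBLE, left,  1, comm, &req[2]);
        MPI_Isend(&u[n_local],     1, MPI_DOUBLE, right, 0, comm, &req[3]);
        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    }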

In this work, I propose a library that helps the PDE application developer perform those low-level tasks in an automated fashion. The library is also useful for refactoring existing PDE codes for use on supercomputers. By pairing a spatial hashing function with minimal geometrical knowledge extracted from the application, the communication pattern is discovered in P log(N/P) operations, where N is the global number of hashes and P the number of processes. This pattern is then used to create the actual buffers the developer needs, and the library handles all blocking, non-blocking and one-sided communications.
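
As a rough sketch of what the spatial hashing could look like, the snippet below encodes 3-D integer cell coordinates into a 64-bit Z-order (Morton) key; this is one of the space-filling-curve hashes mentioned in the references (Hilbert being another) and is not necessarily the one used by the library. Assuming each rank owns a contiguous, sorted range of keys, the owner of any neighbouring cell can then be found by a binary search over the P range boundaries, which is one way a P log(N/P)-style discovery cost can arise.

    #include <stdint.h>

    /* Spread the low 21 bits of v so that two zero bits separate each
     * original bit (standard 3-D Morton-code bit-interleaving trick). */
    static uint64_t spread_bits(uint64_t v)
    {
        v &= 0x1fffffULL;
        v = (v | v << 32) & 0x001f00000000ffffULL;
        v = (v | v << 16) & 0x001f0000ff0000ffULL;
        v = (v | v <<  8) & 0x100f00f00f00f00fULL;
        v = (v | v <<  4) & 0x10c30c30c30c30c3ULL;
        v = (v | v <<  2) & 0x1249249249249249ULL;
        return v;
    }

    /* Z-order (Morton) key of the cell with integer coordinates (x, y, z). */
    uint64_t morton3d(uint32_t x, uint32_t y, uint32_t z)
    {
        return spread_bits(x) | (spread_bits(y) << 1) | (spread_bits(z) << 2);
    }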

References

  1. Chamberlain, B.L. [2007] Parallel programmability and the Chapel language. International Journal of High Performance Computing Applications, 291–312.
  2. Coarfa, C. et al. [2005] An evaluation of global address space languages: Co-Array Fortran and Unified Parallel C. In: 10th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '05). ACM, New York, 36–47.
  3. Draper, J.M. [1999] Introduction to UPC and language specification. Center for Computing Sciences, Institute for Defense Analyses.
  4. Gropp, W., Hoefler, T., Thakur, R. and Lusk, E. [2014] Using Advanced MPI: Modern Features of the Message-Passing Interface. MIT Press.
  5. Hilbert curve. [n.d.] Retrieved from Wikipedia: http://en.wikipedia.org/wiki/Hilbert_curve
  6. Skilling, J. [2004] Programming the Hilbert curve. In: Erickson, G.J. (Ed.) Bayesian Inference and Maximum Entropy Methods in Science and Engineering. 381–387.
  7. Z-order curve. [n.d.] Retrieved from Wikipedia: http://en.wikipedia.org/wiki/Z-order_curve