Process arrival pattern aware algorithms for acceleration of scatter and gather operations

Abstract

Imbalanced process arrival patterns (PAPs) are ubiquitous in many parallel and distributed systems, especially HPC ones. Collective operations, e.g. in MPI, are typically designed for equal process arrival times (PATs) and are not optimized for deviations from this assumption. We propose eight new PAP-aware algorithms for the scatter and gather operations. They are binomial- or linear-tree adaptations that introduce additional process ordering and, in some cases, additional activities in a dedicated background thread. The solution was implemented in one of the most popular open-source MPI-compliant libraries (OpenMPI) and evaluated in a typical HPC environment using a specially developed benchmark as well as a real application: FFT. The experimental results show a significant advantage of the proposed approach over the default OpenMPI implementation, with good scalability and high performance: for FFT, the communication run time was reduced by 16.7% and the total application execution time by 3.3%.
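
The paper's algorithm listings are not reproduced on this page. As a minimal illustration of the core PAP-aware idea described above (serving processes in their actual arrival order rather than in fixed rank order), the C sketch below implements a flat gather whose root pre-posts receives and consumes children's contributions as they complete, via MPI_Waitany. The buffer sizes and names are illustrative assumptions, not the paper's implementation.

  /* Minimal sketch, not the paper's code: a flat ("linear") gather in which
   * the root consumes child contributions in their actual arrival order,
   * not in rank order, by pre-posting receives and using MPI_Waitany. */
  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      enum { CHUNK = 1024 };                    /* illustrative chunk size */
      int *chunk = malloc(CHUNK * sizeof(int));
      for (int i = 0; i < CHUNK; i++) chunk[i] = rank;

      if (rank == 0) {
          int *gathered = malloc((size_t)size * CHUNK * sizeof(int));
          MPI_Request *reqs = malloc((size_t)(size - 1) * sizeof(MPI_Request));
          /* Pre-post one receive per non-root rank; completions then follow
           * the order in which the senders actually reach the operation. */
          for (int r = 1; r < size; r++)
              MPI_Irecv(gathered + (size_t)r * CHUNK, CHUNK, MPI_INT,
                        r, 0, MPI_COMM_WORLD, &reqs[r - 1]);
          for (int i = 0; i < CHUNK; i++) gathered[i] = chunk[i];
          for (int done = 0; done < size - 1; done++) {
              int idx;
              MPI_Waitany(size - 1, reqs, &idx, MPI_STATUS_IGNORE);
              /* The chunk from rank idx+1 is complete here; a PAP-aware
               * algorithm could start processing or forwarding it now. */
          }
          free(reqs);
          free(gathered);
      } else {
          MPI_Send(chunk, CHUNK, MPI_INT, 0, 0, MPI_COMM_WORLD);
      }
      free(chunk);
      MPI_Finalize();
      return 0;
  }

Compared with a rank-ordered loop of blocking receives, this ordering lets the root overlap work on early arrivals while late processes are still computing, which is the effect the PAP-aware variants aim for.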

Citations

  • CrossRef: 2
  • Web of Science: 0
  • Scopus: 2

Full text

Publication version: Accepted or Published Version
License: Creative Commons CC-BY

Details

Category:
Articles
Type:
journal articles
Published in:
Cluster Computing: The Journal of Networks, Software Tools and Applications, vol. 23, pp. 2735-2751,
ISSN: 1386-7857
Language:
English
Publication year:
2020
Bibliographic description:
Proficz J.: Process arrival pattern aware algorithms for acceleration of scatter and gather operations // Cluster Computing: The Journal of Networks, Software Tools and Applications. Vol. 23 (2020), pp. 2735-2751
DOI:
10.1007/s10586-019-03040-x
Benchmark pseudocode (fragment):

  MPI_Init()
  halfTime := 100 ms + random(0...maxDelay)/2
  MPI_Barrier()
  MPI_Barrier()
  PAT_ProcessingStart()
  startTime := MPI_Wtime()
  makeOperation(algorithm, data)
  endTime := MPI_Wtime()
  checkCorrectness(data)
  MPI_Allreduce(minAT, endTime, MPI_MIN, ...)
  MPI_Allreduce(maxAT, endTime, MPI_MAX, ...)
  rtResults[i] := maxAT - minAT
  myET := endTime - startTime
  MPI_Allreduce(sumET, myET, MPI_SUM, ...)
  etResults[i] := sumET/P
  PAT_Finalize()
  MPI_Finalize()
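
A self-contained C interpretation of that skeleton is sketched below. The measured collective (MPI_Gather), the usleep-based arrival skew, and the omission of the PAT_* instrumentation helpers are assumptions; only the barrier, timing, and reduction structure follows the fragment above.

  /* Sketch of the benchmark skeleton, with assumptions noted inline:
   * each rank sleeps a random delay to force an imbalanced PAP, runs one
   * collective, then reduces completion-time spread and mean elapsed time. */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank, P;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &P);

      const int maxDelayUs = 100000;      /* assumed delay cap: 100 ms */
      enum { N = 4096 };
      int *sendbuf = malloc(N * sizeof(int));
      int *recvbuf = (rank == 0) ? malloc((size_t)P * N * sizeof(int)) : NULL;
      for (int i = 0; i < N; i++) sendbuf[i] = rank;

      srand((unsigned)rank + 1u);
      MPI_Barrier(MPI_COMM_WORLD);        /* common starting point */
      usleep(rand() % maxDelayUs);        /* impose an imbalanced PAP */

      double startTime = MPI_Wtime();     /* this rank's arrival time */
      /* makeOperation(): the measured collective; MPI_Gather is assumed. */
      MPI_Gather(sendbuf, N, MPI_INT, recvbuf, N, MPI_INT, 0, MPI_COMM_WORLD);
      double endTime = MPI_Wtime();

      double minAT, maxAT, sumET, myET = endTime - startTime;
      MPI_Allreduce(&endTime, &minAT, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD);
      MPI_Allreduce(&endTime, &maxAT, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);
      MPI_Allreduce(&myET, &sumET, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
      if (rank == 0)                      /* rtResults / etResults entries */
          printf("run-time spread: %.6f s, mean elapsed: %.6f s\n",
                 maxAT - minAT, sumET / P);

      free(sendbuf);
      free(recvbuf);
      MPI_Finalize();
      return 0;
  }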
Bibliography:
  1. Arap, O., Swany, M., Brown, G., Himebaugh, B.: Adaptive recursive doubling algorithm for collective communication. In: 2015 IEEE International Parallel and Distributed Processing Symposium Workshop, pp. 121-128. IEEE (2015)
  2. Bailey, D.H.: NAS parallel benchmarks. In: Padua, D. (ed.) Encyclopedia of Parallel Computing, pp. 1254-1259. Springer, Boston (2011)
  3. Balducci, M., Choudary, A., Hamaker, J.: Comparative analysis of FFT algorithms in sequential and parallel form. In: Mississippi State University Conference on Digital Signal Processing, pp. 5-16 (1996)
  4. Butenhof, D.R.: Programming with POSIX Threads. Addison-Wesley Professional, Boston (1997)
  5. Czarnul, P., Kuchta, J., Matuszek, M., Proficz, J., Rościszewski, P., Wójcik, M., Szymański, J.: MERPSYS: an environment for simulation of parallel application execution on large scale HPC systems. Simul. Model. Pract. Theory 77, 124-140 (2017)
  6. Dagum, L., Menon, R.: OpenMP: an industry standard API for shared-memory programming. IEEE Comput. Sci. Eng. 5(1), 46-55 (1998)
  7. Dichev, K., Rychkov, V., Lastovetsky, A.: Two algorithms of irregular scatter/gather operations for heterogeneous platforms. In: Keller, R., Gabriel, E., Resch, M., Dongarra, J. (eds.) Recent Advances in the Message Passing Interface, pp. 289-293. Springer, Berlin (2010)
  8. Faraj, A., Yuan, X., Lowenthal, D.: STAR-MPI: self tuned adaptive routines for MPI collective operations. In: Proceedings of the 20th Annual International Conference on Supercomputing, pp. 199-208 (2006)
  9. Faraj, A., Patarasuk, P., Yuan, X.: A study of process arrival patterns for MPI collective operations. Int. J. Parallel Program. 36(6), 543-570 (2008)
  10. Gabriel, E., Fagg, G.E., Bosilca, G., Angskun, T., Dongarra, J.J., Squyres, J.M., Sahay, V., Kambadur, P., Barrett, B., Lumsdaine, A., Castain, R.H., Daniel, D.J., Graham, R.L., Woodall, T.S.: Open MPI: goals, concept, and design of a next generation MPI implementation. In: Kranzlmüller, D., Kacsuk, P., Dongarra, J. (eds.) Recent Advances in Parallel Virtual Machine and Message Passing Interface, pp. 97-104. Springer, Berlin (2004)
  11. Gropp, W., Lusk, E.: User's guide for MPICH, a portable implementation of MPI. Technical Report ANL-96/6, Argonne National Laboratory (1994)
  12. Gropp, W., Lusk, E., Skjellum, A.: Using MPI: Portable Parallel Programming with the Message-Passing Interface. The MIT Press, Cambridge (1996)
  13. Hockney, R.W.: The communication challenge for MPP: Intel Paragon and Meiko CS-2. Parallel Comput. 20(3), 389-398 (1994)
  14. Kandalla, K., Subramoni, H., Vishnu, A., Panda, D.K.: Designing topology-aware collective communication algorithms for large scale InfiniBand clusters: case studies with Scatter and Gather. In: 2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW), pp. 1-8. IEEE (2010)
  15. Krawczyk, H., Krysztop, B., Proficz, J.: Suitability of the time controlled environment for race detection in distributed applications. Future Gener. Comput. Syst. 16(6), 625-635 (2000)
  16. Krawczyk, H., Nykiel, M., Proficz, J.: Tryton supercomputer capabilities for analysis of massive data streams. Polish Maritime Res. 22(3), 99-104 (2015)
  17. LAMMPS benchmarks. URL: https://lammps.sandia.gov/bench.html. Accessed 09 Dec 2018
  18. Lockwood, J.W., McKeown, N., Watson, G., Gibb, G., Hartke, P., Naous, J., Raghuraman, R., Luo, J.: NetFPGA: an open platform for gigabit-rate network switching and routing. In: 2007 IEEE International Conference on Microelectronic Systems Education (MSE'07), pp. 160-161. IEEE (2007)
  19. Marendić, P., Lemeire, J., Vučinić, D., Schelkens, P.: A novel MPI reduction algorithm resilient to imbalances in process arrival times. J. Supercomput. 72, 1973-2013 (2016)
  20. Marendić, P., Lemeire, J., Haber, T., Vučinić, D., Schelkens, P.: An investigation into the performance of reduction algorithms under load imbalance. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). LNCS, vol. 7484, pp. 439-450. Springer, Berlin (2012)
  21. Patarasuk, P., Yuan, X.: Efficient MPI Bcast across different process arrival patterns. In: 2008 IEEE International Symposium on Parallel and Distributed Processing, pp. 1-11. IEEE (2008)
  22. Petrini, F., Kerbyson, D.J., Pakin, S.: The case of the missing supercomputer performance. In: Proceedings of the 2003 ACM/IEEE Conference on Supercomputing (SC'03), p. 55. ACM Press, New York (2003)
  23. Proficz, J.: Improving all-reduce collective operations for imbalanced process arrival patterns. J. Supercomput. 74(7), 3071-3092 (2018)
  24. Proficz, J., Czarnul, P.: Performance and power-aware modeling of MPI applications for cluster computing. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9574, pp. 199-209. Springer, Berlin (2016)
  25. Qian, Y., Afsahi, A.: Process arrival pattern aware alltoall and allgather on InfiniBand clusters. Int. J. Parallel Program. 39(4), 473-493 (2011)
  26. Shanley, T.: InfiniBand Network Architecture. Addison-Wesley Professional, Boston (2003)
  27. Träff, J.L.: Practical, distributed, low overhead algorithms for irregular gather and scatter collectives. Parallel Comput. 75, 100-117 (2018)
  28. Träff, J.L.: Hierarchical gather/scatter algorithms with graceful degradation. In: 18th International Parallel and Distributed Processing Symposium (IPDPS 2004), pp. 80-89. IEEE (2004)
  29. Yoo, A.B., Jette, M.A., Grondona, M.: SLURM: simple Linux utility for resource management. In: Feitelson, D., Rudolph, L., Schwiegelshohn, U. (eds.) Job Scheduling Strategies for Parallel Processing, pp. 44-60. Springer, Berlin (2003)
Verified by:
Gdańsk University of Technology
