Abstract
Imbalanced process arrival patterns (PAPs) are ubiquitous in parallel and distributed systems, especially in HPC. Collective operations, e.g. in MPI, are designed for equal process arrival times (PATs) and are not optimized for imbalanced ones. We propose eight new PAP-aware algorithms for the scatter and gather operations. They are binomial- or linear-tree adaptations that introduce an additional process ordering and, in some cases, additional activities in a special background thread. The solution was implemented in one of the most popular open-source MPI-compliant libraries (OpenMPI) and evaluated in a typical HPC environment using a specially developed benchmark as well as a real application: FFT. The experimental results show a significant advantage of the proposed approach over the default OpenMPI implementation, with good scalability and high performance: for FFT, the communication run time was reduced by 16.7% and the total application execution time by 3.3%.
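The core idea summarized in the abstract, serving processes in the order they actually arrive instead of in a fixed rank order, can be illustrated with a small simulation (a hypothetical sketch, not the paper's implementation: the function name, the one-unit message cost, and the arrival times are made up). A root gathers one message per process over a linear tree, once in rank order and once in arrival order; with an imbalanced PAP, arrival order finishes earlier because the root overlaps the wait for the laggard with receives from already-arrived processes.

```python
def linear_gather_finish(arrivals, order, t_msg=1.0):
    """Time at which a root finishes a linear-tree gather.

    The root receives one message per process, sequentially, following
    `order`; a message can only be received once its sender has arrived.
    """
    now = 0.0
    for p in order:
        now = max(now, arrivals[p]) + t_msg
    return now

# Imbalanced process arrival times: process 0 is very late.
arrivals = [10.0, 0.0, 0.0, 0.0]

rank_order    = [0, 1, 2, 3]
arrival_order = sorted(range(len(arrivals)), key=lambda p: arrivals[p])

print(linear_gather_finish(arrivals, rank_order))     # → 14.0 (root idles waiting for rank 0)
print(linear_gather_finish(arrivals, arrival_order))  # → 11.0 (wait for rank 0 is overlapped)
```

The same reordering intuition applies to scatter, with the root sending to processes as they become ready rather than by rank.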
Citations: CrossRef: 2, Web of Science: 0, Scopus: 2
Author (1): Proficz, J.
Details
- Category: Articles
- Type: journal articles
- Published in: Cluster Computing-The Journal of Networks Software Tools and Applications, vol. 23, pages 2735-2751, ISSN: 1386-7857
- Language: English
- Publication year: 2020
- Bibliographic description: Proficz J.: Process arrival pattern aware algorithms for acceleration of scatter and gather operations // Cluster Computing-The Journal of Networks Software Tools and Applications, vol. 23, (2020), pp. 2735-2751
- DOI: 10.1007/s10586-019-03040-x
- Benchmark pseudocode (fragment recovered from the article's listing; underscores in MPI identifiers and the split variable names restored):

  MPI_Init()
  halfTime := 100 ms + random(0...maxDelay)/2
  MPI_Barrier()
  MPI_Barrier()
  PAT_ProcessingStart()
  startTime := MPI_Wtime()
  makeOperation(algorithm, data)
  endTime := MPI_Wtime()
  checkCorrectness(data)
  MPI_Allreduce(minAT, endTime, MPI_MIN...)
  MPI_Allreduce(maxAT, endTime, MPI_MAX...)
  rtResults[i] := maxAT - minAT
  myET := endTime - startTime
  MPI_Allreduce(sumET, myET, MPI_SUM...)
  etResults[i] := sumET / P
  PAT_Finalize()
  MPI_Finalize()
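The listing's measurement logic, run time taken as the spread between the earliest and latest completion (maxAT − minAT) and execution time as the average per-process elapsed time (sumET / P), can be reproduced without MPI by reducing over per-process timestamps. The sketch below mirrors the listing's variable names; the timestamp values are invented for illustration, and the min/max/sum calls stand in for the MPI_Allreduce reductions.

```python
P = 4

# Simulated per-process timestamps (seconds); in the real benchmark these
# come from MPI_Wtime() around makeOperation().
start_times = [0.00, 0.05, 0.10, 0.30]   # after the randomized arrival delay
end_times   = [0.40, 0.42, 0.45, 0.70]

# Stand-ins for MPI_Allreduce(..., MPI_MIN) / (..., MPI_MAX) over endTime:
minAT, maxAT = min(end_times), max(end_times)
rt = maxAT - minAT                        # rtResults[i] in the listing

# Stand-in for MPI_Allreduce(..., MPI_SUM) over each process's elapsed time:
sumET = sum(e - s for s, e in zip(start_times, end_times))
et = sumET / P                            # etResults[i] in the listing

print(round(rt, 2), round(et, 4))
```

Note that rt captures how spread out the completions are across processes, while et averages out per-process imbalance, which is why the benchmark reports both.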
- Bibliography:
- Arap, O., Swany, M., Brown, G., Himebaugh, B.: Adaptive recursive doubling algorithm for collective communication. In: 2015 IEEE International Parallel and Distributed Processing Symposium Workshop, pp. 121-128. IEEE (2015)
- Bailey, D.H.: NAS parallel benchmarks. In: Padua, D. (ed.) Encyclopedia of Parallel Computing, pp. 1254-1259. Springer, Boston (2011)
- Balducci, M., Choudary, A., Hamaker, J.: Comparative analysis of FFT algorithms in sequential and parallel form. In: Mississippi State University Conference on Digital Signal Processing, pp. 5-16 (1996)
- Butenhof, D.R.: Programming with POSIX Threads. Addison-Wesley Professional, Boston (1997)
- Czarnul, P., Kuchta, J., Matuszek, M., Proficz, J., Rościszewski, P., Wójcik, M., Szymański, J.: MERPSYS: an environment for simulation of parallel application execution on large scale HPC systems. Simul. Model. Pract. Theory 77, 124-140 (2017)
- Dagum, L., Menon, R.: OpenMP: an industry standard API for shared-memory programming. IEEE Comput. Sci. Eng. 5(1), 46-55 (1998)
- Dichev, K., Rychkov, V., Lastovetsky, A.: Two algorithms of irregular scatter/gather operations for heterogeneous platforms. In: Keller, R., Gabriel, E., Resch, M., Dongarra, J. (eds.) Recent Advances in the Message Passing Interface, pp. 289-293.
- Faraj, A., Yuan, X., Lowenthal, D.: STAR-MPI: self tuned adaptive routines for MPI collective operations. In: Proceedings of the 20th Annual International Conference on Supercomputing, pp. 199-208 (2006)
- Faraj, A., Patarasuk, P., Yuan, X.: A study of process arrival patterns for MPI collective operations. Int. J. Parallel Program. 36(6), 543-570 (2008)
- Gabriel, E., Fagg, G.E., Bosilca, G., Angskun, T., Dongarra, J.J., Squyres, J.M., Sahay, V., Kambadur, P., Barrett, B., Lumsdaine, A., Castain, R.H., Daniel, D.J., Graham, R.L., Woodall, T.S.: Open MPI: goals, concept, and design of a next generation MPI implementation. In: Kranzlmüller, D., Kacsuk, P., Dongarra, J. (eds.) Recent Advances in Parallel Virtual Machine and Message Passing Interface, pp. 97-104. Springer, Berlin (2004)
- Gropp, W., Lusk, E.: User's guide for MPICH, a portable implementation of MPI. Technical Report ANL-96/6, Argonne National Laboratory (1994)
- Gropp, W., Lusk, E., Skjellum, A.: Using MPI: Portable Parallel Programming with the Message-Passing Interface. The MIT Press, Cambridge (1996)
- Hockney, R.W.: The communication challenge for MPP: Intel Paragon and Meiko CS-2. Parallel Comput. 20(3), 389-398 (1994)
- Kandalla, K., Subramoni, H., Vishnu, A., Panda, D.K.: Designing topology-aware collective communication algorithms for large scale InfiniBand clusters: case studies with scatter and gather. In: 2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW), pp. 1-8. IEEE (2010)
- Krawczyk, H., Krysztop, B., Proficz, J.: Suitability of the time controlled environment for race detection in distributed applications. Future Gener. Comput. Syst. 16(6), 625-635 (2000)
- Krawczyk, H., Nykiel, M., Proficz, J.: Tryton supercomputer capabilities for analysis of massive data streams. Polish Maritime Res. 22(3), 99-104 (2015)
- LAMMPS benchmarks. URL: https://lammps.sandia.gov/bench.html. Accessed 09 Dec 2018
- Lockwood, J.W., McKeown, N., Watson, G., Gibb, G., Hartke, P., Naous, J., Raghuraman, R., Luo, J.: NetFPGA: an open platform for gigabit-rate network switching and routing. In: 2007 IEEE International Conference on Microelectronic Systems Education (MSE'07), pp. 160-161. IEEE (2007)
- Marendic, P., Lemeire, J., Vucinic, D., Schelkens, P.: A novel MPI reduction algorithm resilient to imbalances in process arrival times. J. Supercomput. 72, 1973-2013 (2016)
- Marendić, P., Lemeire, J., Haber, T., Vučinić, D., Schelkens, P.: An investigation into the performance of reduction algorithms under load imbalance. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), LNCS, vol. 7484, pp. 439-450. Springer, Berlin (2012)
- Patarasuk, P., Yuan, X.: Efficient MPI Bcast across different process arrival patterns. In: 2008 IEEE International Symposium on Parallel and Distributed Processing, pp. 1-11. IEEE (2008)
- Petrini, F., Kerbyson, D.J., Pakin, S.: The case of the missing supercomputer performance. In: Proceedings of the 2003 ACM/IEEE Conference on Supercomputing (SC'03), vol. 836, p. 55. ACM Press, New York (2003)
- Proficz, J.: Improving all-reduce collective operations for imbalanced process arrival patterns. J. Supercomput. 74(7), 3071-3092 (2018)
- Proficz, J., Czarnul, P.: Performance and power-aware modeling of MPI applications for cluster computing. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9574, pp. 199-209. Springer, Berlin (2016)
- Qian, Y., Afsahi, A.: Process arrival pattern aware alltoall and allgather on InfiniBand clusters. Int. J. Parallel Program. 39(4), 473-493 (2011)
- Shanley, T.: Infiniband Network Architecture. Addison-Wesley Professional, Boston (2003)
- Träff, J.L.: Practical, distributed, low overhead algorithms for irregular gather and scatter collectives. Parallel Comput. 75, 100-117 (2018)
- Träff, J.L.: Hierarchical gather/scatter algorithms with graceful degradation. In: 18th International Parallel and Distributed Processing Symposium, 2004. Proceedings, pp. 80-89. IEEE (2004)
- Yoo, A.B., Jette, M.A., Grondona, M.: SLURM: simple Linux utility for resource management. In: Feitelson, D., Rudolph, L., Schwiegelshohn, U. (eds.) Job Scheduling Strategies for Parallel Processing, pp. 44-60. Springer, Berlin (2003)
- Verified by:
- Gdańsk University of Technology