Abstract
Large-scale Graph Convolutional Network (GCN) inference on traditional CPU/GPU systems is challenging due to a large memory footprint, sparse computational patterns, and irregular memory accesses with poor locality. Intel’s Programmable Integrated Unified Memory Architecture (PIUMA) is designed to address these challenges for graph analytics. In this paper, a detailed characterization of GCNs is presented using the Open Graph Benchmark (OGB) datasets to determine the viability of PIUMA as a potential solution to GCN scalability. First, the extent to which sparse matrix-dense matrix multiplication (SpMM) drives GCN performance on CPU and GPU is explored, offering a methodology for predicting GCN behavior as a function of dataset characteristics. Second, an SpMM kernel optimized for PIUMA is described and investigated for sensitivity to system parameters including memory bandwidth, latency, and thread count. SpMM scalability on PIUMA is demonstrated, while the scalability limitations of a Xeon-optimized SpMM implementation are discussed. Finally, GCN performance is compared on PIUMA versus a Xeon CPU system and an Ampere GPU system, showing impressive results on PIUMA for large-scale datasets.
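For context on why SpMM dominates, GCN aggregation in each layer reduces to a sparse-dense product of the (normalized) adjacency matrix with the node-feature matrix. The sketch below is an illustrative CSR SpMM in Python, not the paper's PIUMA or Xeon kernel; the function name, shapes, and tolerance are assumptions, and the point is simply to expose the irregular, low-locality reads of the dense feature matrix that the abstract refers to.

```python
# Minimal sketch (assumed layout, not the paper's kernel): one GCN aggregation
# step H' = A_hat @ H with the normalized adjacency A_hat stored in CSR format.
import numpy as np
from scipy.sparse import random as sparse_random

def spmm_csr(indptr, indices, data, H):
    """Row-wise CSR SpMM: accumulate each vertex's neighbor feature rows."""
    num_rows = len(indptr) - 1
    out = np.zeros((num_rows, H.shape[1]), dtype=H.dtype)
    for row in range(num_rows):
        for k in range(indptr[row], indptr[row + 1]):
            # Irregular, low-locality gather from the dense feature matrix H.
            out[row] += data[k] * H[indices[k]]
    return out

# Toy usage: a small random sparse graph and a 64-dimensional feature matrix.
A = sparse_random(1000, 1000, density=0.005, format="csr", dtype=np.float32)
H = np.random.rand(1000, 64).astype(np.float32)
agg = spmm_csr(A.indptr, A.indices, A.data, H)   # one aggregation step
assert np.allclose(agg, A @ H, atol=1e-4)        # matches library SpMM
```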
Citations
- CrossRef: 2
- Web of Science: 0
- Scopus: 3
Authors (12)
Full text
Full text is not available in the portal.
Details
- Category: Conference activity
- Type: publication in a peer-reviewed collective work (including conference proceedings)
- Language: English
- Publication year: 2023
- Bibliographic description: Adiletta M. J., Tithi J. J., Farsarakis E., Gerogiannis G., Adolf R., Benke R., Kashyap S., Hsia S., Lakhotia K., Petrini F., Wei G., Brooks D.: Characterizing the Scalability of Graph Convolutional Networks on Intel® PIUMA // 2023 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2023
- DOI: 10.1109/ispass57527.2023.00025
- Verified by: Gdańsk University of Technology