Abstract
In this paper, a task-scheduling approach for efficiently computing sparse symmetric matrix-vector products on Graphics Processing Units (GPUs) is presented. The main premise is that, for many sparse symmetric matrices occurring in common applications, significant reductions in memory usage and improvements in performance can be obtained when the matrix is prepared in certain ways prior to computation. The preprocessing proposed in this paper employs task scheduling to overcome the difficulties that have hindered the development of methods exploiting the symmetry of sparse matrices. The performance of the proposed task-scheduling method is verified on a Kepler-generation graphics accelerator (Tesla K40c) and compared with cuSPARSE library functions on a GPU and with Intel MKL functions executed in parallel on Central Processing Units (CPUs). The results indicate that the proposed approach to sparse symmetric matrix-vector products reduces memory usage by up to 40% compared to non-symmetric matrix storage formats, while retaining good throughput. Compared to the cuSPARSE and Intel MKL functions for sparse symmetric matrices, the proposed TSMV approach achieved a significant speedup of over one order of magnitude.
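The difficulty referred to in the abstract is that, when only one triangle of a symmetric matrix is stored, every off-diagonal entry contributes to two entries of the result vector, so threads handling different rows can write to the same output location. The sketch below (CUDA, single precision, one thread per row; the kernel name and the naive conflict handling are our own assumptions) only illustrates this conflict by resolving it with atomic additions; it is not the authors' task-scheduling method, whose point is precisely to organize the work so that such conflicts are avoided.

```
// Illustrative CUDA kernel, NOT the paper's TSMV method: computes y = A*x for a
// symmetric matrix A whose lower triangle (diagonal included) is stored in CSR.
// Each stored off-diagonal value a(i,j) must be applied twice (to y[i] and y[j]),
// so the mirrored update is done with atomicAdd, which is exactly the kind of
// write conflict that symmetry-exploiting SpMV kernels have to deal with.
__global__ void sym_spmv_lower_csr(int n,
                                   const int   *row_ptr,  // CSR row pointers, length n+1
                                   const int   *col_idx,  // column indices, col <= row
                                   const float *val,      // stored lower-triangle values
                                   const float *x,        // input vector, length n
                                   float       *y)        // output vector, zero-initialized
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n) return;

    float sum = 0.0f;
    for (int k = row_ptr[row]; k < row_ptr[row + 1]; ++k) {
        int   col = col_idx[k];
        float a   = val[k];
        sum += a * x[col];                    // a(row,col) contributes to y[row]
        if (col != row)
            atomicAdd(&y[col], a * x[row]);   // mirrored a(col,row) contributes to y[col]
    }
    atomicAdd(&y[row], sum);                  // y[row] may also receive mirrored updates
}

// Example launch (host side), assuming device arrays are already populated:
//   int threads = 256;
//   int blocks  = (n + threads - 1) / threads;
//   cudaMemset(d_y, 0, n * sizeof(float));
//   sym_spmv_lower_csr<<<blocks, threads>>>(n, d_row_ptr, d_col_idx, d_val, d_x, d_y);
```

Single precision is used here only to keep atomicAdd portable to older GPUs such as the Kepler card mentioned in the abstract; the contention caused by these atomic updates is the kind of overhead that, according to the abstract, the proposed task-scheduling preprocessing is designed to remove.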
Full text
Full text is not available in the portal.
Details
- Category: Articles
- Type: article in a journal listed in the JCR
- Published in: SIAM JOURNAL ON SCIENTIFIC COMPUTING, Vol. 37, pages 643-666, ISSN: 1064-8275
- Language: English
- Publication year: 2015
- Bibliographic description: Mironowicz P., Dziekoński A., Mrozowski M.: A Task-Scheduling Approach for Efficient Sparse Symmetric Matrix-Vector Multiplication on a GPU // SIAM JOURNAL ON SCIENTIFIC COMPUTING, Vol. 37, No. 6 (2015), pp. 643-666
- Verified by: Gdańsk University of Technology