Limitation of Floating-Point Precision for Resource Constrained Neural Network Training - Publication - Bridge of Knowledge



Abstract

Insufficient computational power and runtime memory are a major concern in artificial-intelligence experiments. One promising solution to this problem is to optimize the neural network's internal calculations and the representation of its parameters. This work addresses that issue through neural network training with limited precision. Based on this research, the author proposes a new method of precision limitation for neural network training that leverages a custom, constrained floating-point representation with an additional rounding mechanism. Applying it limits the resources required during neural network training by reducing computational complexity and memory usage. The work shows that the proposed procedure can train commonly used benchmark networks such as LeNet, AlexNet and ResNet without significant accuracy degradation while using only 8-bit custom floating-point variables. It is also shown that the proposed method of precision limitation does not negatively affect the network's convergence; therefore, it is not necessary to extend training with additional costly epochs.
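The core idea the abstract describes, constraining values to a custom low-bit floating-point format with explicit rounding, can be sketched as a software-simulated quantizer. The sketch below is illustrative only: the field widths (`exp_bits`, `man_bits`), the round-to-nearest rule, and the lack of subnormal handling are assumptions for the example, not the format actually used in the thesis.

```python
import math

def quantize_float(x, exp_bits=4, man_bits=3):
    """Round x to the nearest value representable in a hypothetical
    1-sign / exp_bits-exponent / man_bits-mantissa floating-point
    format (normals only; subnormals are flushed toward zero range).
    Illustrative sketch, not the thesis' exact representation."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    bias = 2 ** (exp_bits - 1) - 1
    # Unbiased exponent, clamped to the representable normal range.
    e = math.floor(math.log2(mag))
    e = max(min(e, bias), 1 - bias)
    # Round the significand to man_bits fractional bits (round-to-nearest).
    scale = 2.0 ** (man_bits - e)
    q = round(mag * scale) / scale
    # Rounding up can overflow; clamp to the largest finite value.
    max_val = (2 - 2 ** -man_bits) * 2.0 ** bias
    return sign * min(q, max_val)
```

For example, with the default 8-bit layout (1+4+3 bits), `quantize_float(0.1)` snaps to `0.1015625` (binary `1.101 × 2⁻⁴`), while exactly representable values such as `1.0` or `-2.5` pass through unchanged. Simulating weights, activations and gradients through such a quantizer is a common way to study limited-precision training before committing to hardware.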


Full text

Publication version
Accepted or Published Version
License
Copyright (Author(s))

Details

Category:
Thesis, nostrification
Type:
Doctoral thesis of Gdańsk University of Technology (PG) employees and doctoral school students
Language:
English
Publication year:
2024
Verified by:
Gdańsk University of Technology
