Limitation of Floating-Point Precision for Resource Constrained Neural Network Training - Publication - MOST Wiedzy



Abstract

Insufficient computational power and runtime memory is a major concern in experiments in the field of artificial intelligence. One promising solution to this problem is to optimize a neural network's internal calculations and the representation of its parameters. This work addresses that issue through neural network training with limited precision. Based on this research, the author proposes a new method of precision limitation for neural network training that leverages a custom, constrained floating-point representation with an additional rounding mechanism. Its application limits the resources required during neural network training by reducing computational complexity and memory usage. The work shows that the proposed procedure can train commonly used benchmark networks such as LeNet, AlexNet, and ResNet without significant accuracy degradation while using only 8-bit custom floating-point variables. It is also shown that the proposed method of precision limitation does not negatively affect the network's convergence, so the training does not have to be extended with additional, costly training epochs.
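The abstract does not specify the exact bit layout or rounding rule used in the thesis, but the core idea of a constrained floating-point representation with rounding can be sketched as follows. This is an illustrative example only: the 1-4-3 sign/exponent/mantissa split, the bias of 7, the round-to-nearest rule, and the `quantize_fp8` name are all assumptions, not the author's actual format.

```python
import math

def quantize_fp8(x, exp_bits=4, man_bits=3, bias=7):
    """Round a float onto the grid of a hypothetical 8-bit format
    (1 sign, 4 exponent, 3 mantissa bits) using round-to-nearest.
    Subnormals are omitted for brevity; overflow saturates."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    e = math.floor(math.log2(abs(x)))
    # clamp the exponent to the representable range
    e_min, e_max = 1 - bias, (2 ** exp_bits - 2) - bias
    e = max(e_min, min(e_max, e))
    step = 2.0 ** (e - man_bits)          # grid spacing at this exponent
    q = round(abs(x) / step) * step       # round-to-nearest onto the grid
    max_val = (2.0 - 2.0 ** -man_bits) * 2.0 ** e_max
    return sign * min(q, max_val)         # saturate on overflow

# Values round to the nearest representable 8-bit float, e.g.:
quantize_fp8(0.1)   # -> 0.1015625
quantize_fp8(1.0)   # -> 1.0 (exactly representable)
```

In limited-precision training, a rounding step of this kind would typically be applied to weights, activations, and gradients, so that all intermediate values stay on the reduced-precision grid throughout training.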


Full text

Publication version
Accepted or Published Version
License
Copyright (Author(s))


Detailed information

Category:
Doctoral theses, habilitation dissertations, nostrifications
Type:
doctoral dissertation by employees of Gdańsk University of Technology (PG) and doctoral school students
Language:
English
Year of publication:
2024
Verification:
Gdańsk University of Technology


