Abstract
This paper focuses on the convolutional neural network quantization problem. Quantization involves a distinct stage of converting data from floating-point to integer representation. In general, quantization reduces the size of matrices by storing their values at limited precision. However, the training and inference stages of deep neural networks are constrained by available memory as well as a variety of factors, including programming complexity and even system reliability. Overall, quantization has become increasingly popular because of its significant impact on performance with minimal accuracy loss. Various techniques for network quantization have already been proposed, including quantization-aware training and integer-arithmetic-only inference. However, a detailed comparison of the various quantization configurations, combining all of the proposed methods, has not been presented yet. Such a comparison is important for understanding how to select quantization hyperparameters during training in order to optimize networks for inference while preserving their robustness. In this work, we perform an in-depth analysis of the parameters of quantization-aware training, the process of simulating precision loss in the forward pass by quantizing and dequantizing tensors. Specifically, we vary rounding modes, input preprocessing, output data signedness, quantization bitwidth, and the locations of precision-loss simulation to evaluate how they affect the accuracy of deep neural networks aimed at performing efficient computations on resource-constrained devices.
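The quantize-dequantize operation the abstract refers to can be illustrated with a minimal sketch. The code below is not from the paper; the function `fake_quantize` and its parameters (`num_bits`, `signed`, `rounding`) are hypothetical names chosen only to mirror the hyperparameters the abstract lists (bitwidth, signedness, rounding mode), and the affine scaling scheme is an assumption.

```python
import numpy as np

def fake_quantize(x, num_bits=8, signed=False, rounding=np.round):
    """Simulate integer precision loss on a float tensor (quantize-dequantize).

    The tensor stays in floating point, but its values are snapped to the
    grid that a num_bits-bit integer representation would allow.
    """
    if signed:
        qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    else:
        qmin, qmax = 0, 2 ** num_bits - 1

    # Affine (asymmetric) scheme: map the observed value range onto [qmin, qmax].
    span = float(x.max() - x.min())
    scale = span / (qmax - qmin) if span > 0 else 1.0
    zero_point = qmin - rounding(x.min() / scale)

    q = np.clip(rounding(x / scale) + zero_point, qmin, qmax)  # quantize
    return (q - zero_point) * scale                            # dequantize

# Example: simulate 4-bit signed quantization of a random activation tensor.
activations = np.random.randn(2, 3).astype(np.float32)
print(fake_quantize(activations, num_bits=4, signed=True))
```

In quantization-aware training such an operation is inserted into the forward pass so the network learns weights that remain accurate after the precision loss, while gradients flow through it unchanged during the backward pass.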
Authors (8)
- A. Kwaśniewska
- M. Szankin
- M. Ozga
- J. Wolfe
- A. Das
- A. Zajac
- J. Rumiński
- P. Rad
Full text
- Publication version: Accepted or Published Version
- License: Copyright (2019 IEEE)
Details
- Category: Conference activity
- Type: publication in a peer-reviewed collective work (including conference proceedings)
- Language: English
- Publication year: 2019
- Bibliographic description: Kwaśniewska A., Szankin M., Ozga M., Wolfe J., Das A., Zajac A., Rumiński J., Rad P.: Deep Learning Optimization for Edge Devices: Analysis of Training Quantization Parameters, 2019
- Verified by: Gdańsk University of Technology