Abstract
The goal of this thesis was to develop efficient video multi-task convolutional architectures for a range of diverse vision tasks on RGB scenes, leveraging i) task relationships and ii) motion information to improve multi-task performance. Our approach starts from the integration of diverse tasks within video multi-task learning networks. We present the first two datasets of their kind in the existing literature, featuring frame-level annotations for both visual scene enhancement and understanding. This thesis proposes novel architectures capable of accommodating multiple tasks across various hierarchy levels. The second contribution of this thesis extends those findings into the MOST (Multi-Output, -Scale, -Task) model, which exploits the inherent multi-scale nature of convolutional networks in a manner that benefits video multi-tasking. Thereafter, we propose a principled pruning approach inspired by NAS (Neural Architecture Search), named NSS (Neural Structure Search). NSS discovers a more effective MOST network, boosting performance while simultaneously reducing computational requirements and parameter count. Lastly, we introduce ATB (Adaptive Task Balancing), an efficient training method that ensures tasks are trained at consistent rates with almost no additional computational cost, enabling a more balanced multi-task training process.
Details
- Category:
- Thesis, nostrification
- Type:
- doctoral thesis of employees of Gdańsk University of Technology and students of the doctoral school
- Language:
- English
- Publication year:
- 2024
- Verified by:
- Gdańsk University of Technology