In contrast to the shallow models exploited in the past, most current deep learning systems extract features automatically, and to do so they tend to rely on huge amounts of labeled data. While the quality of the datasets used to train neural networks has a huge impact on model performance, those datasets are often noisy, biased, and sometimes even contain incorrect samples. Moreover, deep neural networks (DNNs) are black-box models that usually have tens of layers, millions of parameters, and a very complex latent space, which makes their decisions very hard to interpret. Such fragile models are increasingly used to solve sensitive and critical tasks, so the demand for clear reasoning and correct decisions is very high, especially when DNNs are used in transportation (autonomous cars), healthcare, legal systems, finance, and the military.
To address these challenges, the project aims to develop Explainable Artificial Intelligence (XAI) methods that can help uncover and reduce bias in data. The project involves investigating and integrating explainability into new and existing Artificial Intelligence systems, focusing mostly on Deep Neural Networks in the field of Computer Vision. One way of categorizing XAI methods is to divide them into local and global explanations: a local analysis aims to explain a single prediction of a model, whereas a global one tries to explain how the whole model works in general. The project aims to develop novel methods of both local and global explainability to help explain deep neural network-based systems: to justify their predictions, to control their reasoning process, and to discover new knowledge.
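To make the local/global distinction concrete, below is a minimal sketch of a common local explanation: a vanilla gradient saliency map for a single image prediction, assuming a PyTorch/torchvision setting. The model choice and the random input are placeholders for illustration only, not the project's actual pipeline.

```python
import torch
import torchvision.models as models

# Placeholder model and input; a real pipeline would load and normalize an image.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
predicted_class = logits.argmax(dim=1).item()

# Gradient of the winning class score w.r.t. the input pixels: pixels with
# large gradient magnitude are those that most influenced this one prediction.
logits[0, predicted_class].backward()
saliency = image.grad.abs().max(dim=1)[0]  # per-pixel importance, shape (1, 224, 224)
```

A global analysis would, by contrast, aggregate such signals over many samples or probe the model with controlled inputs to characterize how it behaves in general.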
The project is divided into three research tasks:
• RT1: Developing globally aware local explanations for prediction justification
• RT2: Developing global explanations for detecting undesirable bias in data
• RT3: Developing trainable attention for eliminating the influence of undesirable bias in data on the model
The main research objective of the proposed project is therefore to propose and apply methods for globally aware local explanations that justify a single prediction, global explanations that identify bias in the data collection, and, finally, the elimination of detected undesirable bias with trainable attention guidance.
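The project's trainable attention guidance is not detailed in this summary. As a rough illustration of the general idea only, the hypothetical sketch below penalizes a model for attending (via input gradients) to image regions annotated as undesirable bias, in the spirit of the input-gradient penalty of Ross et al., "Right for the Right Reasons" (IJCAI 2017). The function name, mask format, and penalty form are all assumptions, not the project's method.

```python
import torch
import torch.nn.functional as F

def guided_loss(model, images, labels, bias_masks, penalty_weight=1.0):
    """Cross-entropy plus a penalty on input sensitivity inside known-biased regions.

    bias_masks: 1 where a pixel belongs to an undesirable artifact
    (e.g. a watermark or frame), 0 elsewhere. Hypothetical annotation format.
    """
    images = images.clone().requires_grad_(True)
    logits = model(images)
    ce = F.cross_entropy(logits, labels)

    # Saliency: gradient of the summed correct-class log-probabilities w.r.t. input.
    log_probs = F.log_softmax(logits, dim=1)
    selected = log_probs.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(selected, images, create_graph=True)

    # Penalize sensitivity falling inside the bias mask, steering the model
    # away from the undesirable cue during training.
    penalty = (bias_masks * grads.pow(2)).sum() / images.shape[0]
    return ce + penalty_weight * penalty
```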
Details
- Financial Program Name: PRELUDIUM
- Organization: Narodowe Centrum Nauki (NCN) (National Science Centre)
- Agreement: UMO-2019/35/N/ST6/04052, dated 2020-06-19
- Realisation period: 2020-06-19 to 2023-06-18
- Project manager: dr inż. Agnieszka Mikołajczyk-Bareła
- Realised in: Department of Control Systems Engineering
- Request type: National Research Programmes
- Domestic: Domestic project
- Verified by: Gdańsk University of Technology