Ph.D. Proposal Defense (Heyi Li): "Understanding the Black Box: A Methodology for Interpreting Deep Neural Networks for Domain Scientists"

Friday, February 5, 2021 - 2:30pm
Zoom - contact for Zoom info.
Event Description:

Abstract: In recent years, deep learning-based methods have achieved state-of-the-art performance in many domains. However, a thorough understanding of how deep learning models work remains a major challenge, and this lack of interpretability has significantly restricted the wider adoption of deep learning.

In this proposal, we first design a CNN-like network to address the problem of CT streak artifacts. Our model is based on the structure of the denoising autoencoder with added skip connections. We then propose a novel two-step algorithm, the Salient Relevance map, to visualize the attention areas of pre-trained CNN models and explain their classification results. Our pipeline constructs a context-aware saliency map based on layer-wise relevance propagation. Beyond CNN models, we also propose an enhanced Polarized-LRP algorithm aimed at understanding the behavior of GAN models. This research focuses on one of the network's major components, the Discriminator, which plays a vital role but is often overlooked. Our method consists of two parts: a positive contribution heatmap for images classified as ground truth, and a negative contribution heatmap for those classified as generated. As a use case, we have chosen the deblending of two overlapping galaxy images via a branched GAN model. One interesting result is the detection of a problematic data augmentation procedure that would otherwise have remained hidden.
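To make the layer-wise relevance propagation (LRP) step concrete, the sketch below shows the standard LRP epsilon rule for a single linear layer, the kind of backward relevance pass that saliency pipelines like the one described above build on. The toy weights and activations are illustrative assumptions, not the actual models from the proposal.

```python
import numpy as np

def lrp_epsilon(a, w, relevance, eps=1e-6):
    """Redistribute relevance from a linear layer's outputs to its inputs
    using the LRP epsilon rule."""
    z = a @ w                        # pre-activations, shape (n_out,)
    z = z + eps * np.sign(z)         # epsilon stabilizer avoids division by ~0
    s = relevance / z                # relevance per unit of pre-activation
    return a * (w @ s)               # relevance attributed to each input

rng = np.random.default_rng(0)
a = rng.random(4)                    # toy input activations
w = rng.standard_normal((4, 3))      # toy weight matrix
out = a @ w                          # forward pass through one linear layer
r_in = lrp_epsilon(a, w, out)        # propagate the output relevance back

# LRP (approximately) conserves total relevance across layers
print(abs(r_in.sum() - out.sum()) < 1e-4)  # prints True
```

Applied layer by layer from the classifier output back to the input pixels, this conservation property is what lets the resulting heatmap be read as a decomposition of the network's decision; a polarized variant additionally separates the positive and negative parts of the relevance.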

A promising direction for future work is to integrate interpretability into model design. Our objective is to make existing network models more transparent by adding attention mechanisms.
