Visual Analytics and Imaging Laboratory (VAI Lab)
Computer Science Department, Stony Brook University, NY

NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis

Abstract: The success of deep neural networks (DNNs) can be attributed to many hours of parameter and architecture tuning by human experts. Neural Architecture Search (NAS) techniques aim to remedy this by automating the search for DNN architectures, making it possible for non-experts to work with DNNs. One-shot NAS techniques in particular have recently gained popularity because they substantially reduce search time. One-shot NAS works by training a large template network, with parameters shared among all the candidate NNs it contains, and then ranking its components by evaluating randomly chosen candidate architectures. However, as these search models become increasingly powerful and diverse, they become harder to understand. Consequently, even when the search results work well, it is hard to identify search biases or control the search progression; hence the need for explainable and human-in-the-loop (HIL) one-shot NAS. To address these problems, we present NAS-Navigator, a visual analytics (VA) system aiming to solve three problems with one-shot NAS: explainability, HIL design, and performance improvements over existing state-of-the-art (SOTA) techniques. NAS-Navigator puts full control of NAS back into the hands of users while retaining the benefits of automated search, thus assisting non-expert users. Analysts can use their domain knowledge, aided by cues from the interface, to guide the search. Evaluation results confirm that the performance of our improved one-shot NAS algorithm is comparable to that of other SOTA techniques, while adding VA through NAS-Navigator yields further improvements in search time and performance. We designed our interface in collaboration with several deep learning researchers and evaluated NAS-Navigator through a controlled experiment and expert interviews.
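To make the one-shot idea concrete, below is a minimal PyTorch sketch of a parameter-sharing template (super) network from which candidate architectures are sampled and ranked. All names here (MixedLayer, SuperNet, random_arch, evaluate) and the particular candidate operations are illustrative assumptions for this page, not the implementation from the paper:

```python
import random
import torch
import torch.nn as nn

class MixedLayer(nn.Module):
    """One supernet layer holding several candidate ops; their weights are
    shared by every candidate NN that selects them."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # candidate op 0
            nn.Conv2d(channels, channels, 5, padding=2),  # candidate op 1
            nn.Identity(),                                # candidate op 2
        ])

    def forward(self, x, choice):
        return self.ops[choice](x)  # only the sampled op runs

class SuperNet(nn.Module):
    """Template network; a candidate NN is one op choice per layer."""
    def __init__(self, channels=16, depth=4, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.layers = nn.ModuleList([MixedLayer(channels) for _ in range(depth)])
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x, arch):
        x = self.stem(x)
        for layer, choice in zip(self.layers, arch):
            x = torch.relu(layer(x, choice))
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)

def random_arch(net):
    """Sample a candidate architecture: one op index per layer."""
    return [random.randrange(len(layer.ops)) for layer in net.layers]

@torch.no_grad()
def evaluate(net, arch, loader):
    """Accuracy of one candidate NN under the shared supernet weights."""
    net.eval()
    correct = total = 0
    for x, y in loader:
        correct += (net(x, arch).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Smoke test on random data (no dataset needed):
net = SuperNet()
arch = random_arch(net)
print(net(torch.randn(2, 3, 32, 32), arch).shape)  # torch.Size([2, 10])
```

Ranking then amounts to training the supernet once and calling evaluate on many randomly sampled architectures, comparing their accuracies under the shared weights.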

Teaser: The image below shows NAS-Navigator's visual analytics interface for explainable and human-in-the-loop neural architecture search (NAS):

NAS-Navigator implements one-shot NAS using an iterative evolutionary search algorithm, and its interface supports the visualization of NAS with human-in-the-loop search control. Analysts start by designing a large template network in the Lego View (A); this network is capable of emulating the search space of candidate neural networks. The template network is then trained for a few epochs to initialize meaningful weights for the candidate NN search, with training progress monitored in the Loss Chart View (B). Next, our evolutionary search algorithm iteratively evaluates candidate NNs sampled from the template network, with progress tracked by the Iteration Counter (C); the accuracy results are presented as a projection of the candidate NNs onto a scatterplot in the Search Space View (D). Based on the fitness scores generated by our search algorithm and shown in the Candidate Information View (E), analysts can pause or stop the search and edit the template NN, either to reduce the size of the search space or to generate the final NN architecture. The fitness scores are calculated for each node of the candidate neural networks sampled from the large template network during the search. A sketch of this search loop follows below.
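The sketch below illustrates the iterative loop just described, building on the hypothetical SuperNet, random_arch, and evaluate helpers from the earlier sketch. The per-node fitness rule (mean accuracy of all sampled candidates that used a given op at a given layer) and the mutation-based population update are illustrative assumptions, not the paper's exact algorithm:

```python
from collections import defaultdict
import random

def mutate(net, arch, rate):
    """Re-sample each op choice with probability `rate`."""
    return [random.randrange(len(layer.ops)) if random.random() < rate else choice
            for layer, choice in zip(net.layers, arch)]

def search(net, loader, iterations=100, population=8, mutation_rate=0.25):
    history = []                  # (arch, accuracy) pairs -> scatterplot, view (D)
    fitness = defaultdict(list)   # (layer, op) -> accuracies of candidates using it
    pop = [random_arch(net) for _ in range(population)]
    for _ in range(iterations):   # drives the iteration counter, view (C)
        scored = [(arch, evaluate(net, arch, loader)) for arch in pop]
        history.extend(scored)
        for arch, acc in scored:
            for layer_idx, op_idx in enumerate(arch):
                fitness[(layer_idx, op_idx)].append(acc)
        # Evolutionary step: keep the fitter half, refill by mutating survivors.
        scored.sort(key=lambda pair: pair[1], reverse=True)
        survivors = [arch for arch, _ in scored[: population // 2]]
        pop = survivors + [mutate(net, random.choice(survivors), mutation_rate)
                           for _ in range(population - len(survivors))]
    # Per-node fitness: mean accuracy of every candidate that used that node.
    node_fitness = {node: sum(accs) / len(accs) for node, accs in fitness.items()}
    return history, node_fitness  # node_fitness backs view (E)
```

In this reading, the history list feeds the scatterplot projection in the Search Space View (D), for example via a dimensionality reduction over architecture encodings, while node_fitness supplies the per-node scores shown in the Candidate Information View (E); pausing between iterations is what lets analysts edit the template NN mid-search.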

Video: Watch the video below for a quick overview:

Paper: A. Tyagi, C. Xie, K. Mueller, "NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis," IEEE Trans. on Visualization and Computer Graphics, 29(1):299-309, 2023 PDF GITHUB

Funding: NSF grants CNS 1900706, IIS 1527200, and IIS 1941613.