Ph.D. Research Proficiency Presentation: Xiang Li, 'Does Self-supervised Learning Really Improve Reinforcement Learning from Pixels?'

Dates: 
Thursday, June 2, 2022 - 10:00am to 11:30am
Location: 
New Computer Science (NCS) Room 220
Event Description: 

Abstract
We investigate whether self-supervised learning (SSL) can improve online reinforcement learning (RL) from pixels. We extend the contrastive reinforcement learning framework (e.g., CURL), which jointly optimizes SSL and RL losses, and conduct extensive experiments with various self-supervised losses. Our observations suggest that the existing SSL framework for RL fails to bring meaningful improvement over baselines that use only image augmentation, when the same amount of data and augmentation is used. We further perform an evolutionary search for the optimal combination of multiple self-supervised losses for RL, but find that even such a combination fails to meaningfully outperform methods relying only on carefully designed image augmentations. Often, the use of self-supervised losses under the existing framework lowered RL performance. We evaluate the approach in multiple environments, including a real-world robot environment, and confirm that no single self-supervised loss or image augmentation method dominates all environments and that the current framework for jointly optimizing SSL and RL is limited. Finally, we empirically investigate the pretraining framework for SSL + RL and the properties of representations learned with different approaches.
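The joint objective in the CURL-style framework described above adds a contrastive (InfoNCE) loss over augmented image embeddings to the RL loss. A minimal numpy sketch of that combination is below; the weighting coefficient `lambda_ssl` and the function names are illustrative, not taken from the talk:

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.1):
    """Contrastive (InfoNCE) loss: each query's positive is the key at the
    same batch index (an augmented view of the same frame); all other keys
    serve as negatives."""
    # Normalize embeddings to unit length so similarities are cosine-based.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: maximize their log-probability.
    return -np.mean(np.diag(log_probs))

def joint_loss(rl_loss, queries, keys, lambda_ssl=1.0):
    """Total objective: RL loss plus weighted self-supervised loss,
    as in joint SSL + RL optimization frameworks such as CURL."""
    return rl_loss + lambda_ssl * info_nce_loss(queries, keys)
```

In practice both terms are backpropagated through a shared image encoder; the finding discussed in the talk is that tuning such a combination often does not beat augmentation-only baselines.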
