Representation learning has been a key factor in the success of many deep learning models. While conventional representation learning relies on large amounts of human-annotated data, learning without human annotation has become the modern trend. In this talk, I will introduce three directions for advancing representation learning with few human annotations: (1) self-supervised learning, (2) semi-supervised learning, and (3) webly-supervised learning. Directions (1) and (2) focus on the vision domain, whereas (3) focuses on vision-language understanding.
Speaker: Junnan Li
Bio: Junnan Li is a senior research scientist at Salesforce Research Asia. He obtained his PhD from the National University of Singapore in 2019. He has published in many top-tier machine learning and computer vision venues, including NeurIPS, ICLR, CVPR, and ICCV. His main research interests include self-supervised learning, semi-supervised learning, weakly-supervised learning, and vision-language learning. His ultimate research goal is to build general-purpose models that can self-learn without human involvement.