Towards Generating Human-like Deep Questions


Question Generation (QG) is the task of automatically generating questions from various inputs, such as raw text, databases, or semantic representations. People have the ability to ask deep questions about events, evaluations, opinions, syntheses, or reasons, usually in the form of Why, Why-not, How, or What-if. This requires an in-depth understanding of the input source and the ability to reason over disjoint relevant contexts. Learning to ask such deep questions has broad applications in future intelligent systems, such as dialog systems, online education, and intelligent search, among others. This talk will introduce our recent research on generating deep questions that demand high cognitive skills, including questions that require multi-hop reasoning and questions that exhibit certain human-desired properties, such as being answerable by the passage. We will also introduce how deep question generation benefits two practical applications: multi-hop question answering and fact verification.

NTU-NLP: Monthly NLP Talks
Zoom Meeting


Liangming Pan
Ph.D. Candidate, NUS.
Website | Google Scholar
Research topics: Text Generation, Knowledge Graphs, Multimedia Learning

Bio: Liangming Pan is a fourth-year Computer Science Ph.D. student at the National University of Singapore, jointly advised by Prof. Min-Yen Kan and Prof. Tat-Seng Chua. He was also a visiting Ph.D. student at UC Santa Barbara in 2020. Prior to joining NUS, he received a Master's degree from the School of Computer Science at Tsinghua University. His broad research interests include knowledge bases, natural language processing, and data mining. His Ph.D. research focuses on natural language generation with deep reasoning, including neural question generation and text style transfer. He has published several research papers in top-ranked conferences including ACL, COLING, and NAACL. He has also won the Research Achievement Award and the Dean's Graduate Award from the NUS School of Computing.