Multimodal Domain-Aware Dialogue System
虚步, a Ph.D. student at BUPT. I am fortunate to work under my advisor's guidance. My research lies in Vision, Language, and Reasoning, with a focus on Visual Dialogue. I am particularly interested in building visually grounded conversational AI (social robots) that can see the world and talk with us in natural language. My other interests include Visual/Language Grounding, Visual Reasoning, Visual Question Generation, and Visually Grounded Referring Expressions.
I am currently working on the MMD task. Please feel free to contact me at pangweitf@bupt.edu.cn or pangweitf@163.com if you have any questions or concerns.
- Amrita Saha, Mitesh M. Khapra, Karthik Sankaranarayanan. Towards Building Large Scale Multimodal Domain-Aware Conversation Systems. In AAAI 2018.
- Shubham Agarwal, Ondřej Dušek, Ioannis Konstas, and Verena Rieser. Improving Context Modelling in Multimodal Dialogue Generation. In ACL 2018.
- Chen Cui, Wenjie Wang, Xuemeng Song, Minlie Huang, Xin-Shun Xu, and Liqiang Nie. User Attention-guided Multimodal Dialog Systems. In SIGIR 2019.
- Zheng Zhang, Lizi Liao, Minlie Huang, Xiaoyan Zhu, and Tat-Seng Chua. Neural Multimodal Belief Tracker with Adaptive Attention for Dialogue Systems. In WWW 2019.
- ...