
MMD

Multimodal Domain-Aware Dialogue System

About me

I am 虚步, a Ph.D. student at BUPT, and I am very fortunate to work with my advisor. My research is in the area of Vision, Language, and Reasoning, with a focus on Visual Dialogue. I am particularly interested in building a visually grounded conversational AI (a social robot) that can see the world and talk with us in natural language. My other interests include Visual/Language Grounding, Visual Reasoning, Visual Question Generation, and Visually Grounded Referring Expressions.

I am currently working on the MMD task. Please feel free to contact me at pangweitf@bupt.edu.cn or pangweitf@163.com if you have any questions or concerns.

Experiments are in progress...

Performance

Training
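The training pipeline will be documented as the experiments progress. For orientation only, the snippet below is a minimal, hypothetical PyTorch sketch of the kind of multimodal context encoder commonly used for the MMD task (an HRED-style utterance encoder whose per-turn states are fused with pooled image features before a dialogue-level GRU, in the spirit of the systems cited under References). The module name, dimensions, and fusion scheme are illustrative assumptions, not this repository's actual model.

```python
import torch
import torch.nn as nn

class MultimodalContextEncoder(nn.Module):
    """Hypothetical HRED-style context encoder for the MMD task:
    each turn is a text utterance plus pooled image features;
    per-turn text and image vectors are fused, then a context GRU
    runs over the sequence of turns."""

    def __init__(self, vocab_size, embed_dim=256, utt_hidden=512,
                 img_feat_dim=4096, ctx_hidden=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.utterance_rnn = nn.GRU(embed_dim, utt_hidden, batch_first=True)
        self.img_proj = nn.Linear(img_feat_dim, utt_hidden)
        self.context_rnn = nn.GRU(2 * utt_hidden, ctx_hidden, batch_first=True)

    def forward(self, turns, img_feats):
        # turns:     (batch, num_turns, max_len) token ids per turn
        # img_feats: (batch, num_turns, img_feat_dim) pooled CNN features
        batch, num_turns, max_len = turns.shape
        tokens = self.embedding(turns.view(batch * num_turns, max_len))
        _, utt_h = self.utterance_rnn(tokens)           # (1, B*T, utt_hidden)
        utt_h = utt_h.squeeze(0).view(batch, num_turns, -1)
        img_h = torch.relu(self.img_proj(img_feats))    # (B, T, utt_hidden)
        fused = torch.cat([utt_h, img_h], dim=-1)       # per-turn fusion
        ctx_out, ctx_h = self.context_rnn(fused)        # dialogue-level states
        return ctx_out, ctx_h
```

The final context state `ctx_h` would typically condition a response decoder; attention over turn-level or region-level image features is a common refinement explored in the papers listed below.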

References

  1. Amrita Saha, Mitesh M. Khapra, Karthik Sankaranarayanan. Towards Building Large Scale Multimodal Domain-Aware Conversation Systems. In AAAI 2018.
  2. Shubham Agarwal, Ondřej Dušek, Ioannis Konstas, and Verena Rieser. Improving Context Modelling in Multimodal Dialogue Generation. In ACL 2018.
  3. Chen Cui, Wenjie Wang, Xuemeng Song, Minlie Huang, Xin-Shun Xu, and Liqiang Nie. User Attention-guided Multimodal Dialog Systems. In SIGIR 2019.
  4. Zheng Zhang, Lizi Liao, Minlie Huang, Xiaoyan Zhu, and Tat-Seng Chua. Neural Multimodal Belief Tracker with Adaptive Attention for Dialogue Systems. In WWW 2019.
  5. ...
