
Walter W. Chang


2022

Medical Question Understanding and Answering with Knowledge Grounding and Semantic Self-Supervision
Khalil Mrini | Harpreet Singh | Franck Dernoncourt | Seunghyun Yoon | Trung Bui | Walter W. Chang | Emilia Farcas | Ndapa Nakashole
Proceedings of the 29th International Conference on Computational Linguistics

Current medical question answering systems have difficulty processing long, detailed, and informally worded questions submitted by patients, called Consumer Health Questions (CHQs). To address this issue, we introduce a medical question understanding and answering system with knowledge grounding and semantic self-supervision. Our system is a pipeline that first summarizes a long, medical, user-written question using a supervised summarization loss. It then performs a two-step retrieval to return answers: it matches the summarized user question with an FAQ from a trusted medical knowledge base, and then retrieves a fixed number of relevant sentences from the corresponding answer document. In the absence of labels for question matching or answer relevance, we design three novel self-supervised, semantically guided losses. We evaluate our model against two strong retrieval-based question answering baselines. Evaluators ask their own questions and rate the answers retrieved by the baselines and by our system according to their relevance. They find that our system retrieves more relevant answers while running 20 times faster. Our self-supervised losses also help the summarizer achieve higher ROUGE scores as well as better results on human evaluation metrics.
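A minimal Python sketch of the two-step retrieval described in this abstract follows. It assumes the user question has already been summarized, and it uses TF-IDF cosine similarity as a stand-in for the paper's learned encoders and self-supervised losses; the FAQ knowledge base and all names are illustrative placeholders, not the system's actual components.

# Hypothetical sketch: two-step retrieval over a toy FAQ knowledge base.
# TF-IDF cosine similarity stands in for the paper's learned encoders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy knowledge base: each trusted FAQ maps to its answer document (a list of sentences).
faq_kb = {
    "What are the symptoms of anemia?": [
        "Common symptoms include fatigue and weakness.",
        "Some patients report shortness of breath.",
        "Pale skin can also be a sign.",
    ],
    "How is high blood pressure treated?": [
        "Treatment often starts with lifestyle changes.",
        "Doctors may prescribe medication such as ACE inhibitors.",
    ],
}

def retrieve_answer(summarized_question: str, top_k: int = 2) -> list[str]:
    # Step 1: match the summarized user question to the closest FAQ.
    faqs = list(faq_kb)
    vec = TfidfVectorizer().fit(faqs + [summarized_question])
    q = vec.transform([summarized_question])
    best_faq = faqs[cosine_similarity(q, vec.transform(faqs)).argmax()]

    # Step 2: retrieve a fixed number of relevant sentences from that FAQ's answer document.
    sentences = faq_kb[best_faq]
    vec = TfidfVectorizer().fit(sentences + [summarized_question])
    q = vec.transform([summarized_question])
    scores = cosine_similarity(q, vec.transform(sentences))[0]
    ranked = sorted(zip(scores, sentences), reverse=True)
    return [sentence for _, sentence in ranked[:top_k]]

print(retrieve_answer("symptoms of anemia"))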

Keyphrase Prediction from Video Transcripts: New Dataset and Directions
Amir Pouran Ben Veyseh | Quan Hung Tran | Seunghyun Yoon | Varun Manjunatha | Hanieh Deilamsalehy | Rajiv Jain | Trung Bui | Walter W. Chang | Franck Dernoncourt | Thien Huu Nguyen
Proceedings of the 29th International Conference on Computational Linguistics

Keyphrase Prediction (KP) is an established NLP task that aims to yield representative phrases summarizing the main content of a given document. Despite major progress in recent years, existing work on KP has mainly focused on formal texts such as scientific papers or weblogs; the challenges of KP in informal-text domains have not yet been fully studied. To address this gap, this work studies new challenges of KP in video transcripts, an understudied domain for KP that involves informal text and non-cohesive presentation styles. A bottleneck for KP research in this domain is the lack of high-quality, large-scale annotated data, which hinders the development of advanced KP models. To address this issue, we introduce a large-scale, manually annotated KP dataset in the domain of live-stream video transcripts obtained by automatic speech recognition tools. Concretely, transcripts of 500+ hours of videos streamed on the behance.net platform are manually labeled with important keyphrases. Our analysis of the dataset reveals the challenging nature of KP in transcripts. Moreover, for the first time in KP, we demonstrate that KP for long documents (i.e., transcripts) can be improved by feeding models paragraph-level keyphrases, i.e., hierarchical extraction. To foster future research, we will publicly release the dataset and code.
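The hypothetical Python sketch below illustrates the hierarchical-extraction idea from this abstract: candidate keyphrases are first scored within each paragraph of a transcript, and the paragraph-level keyphrases are then aggregated into document-level ones. The simple frequency heuristic is only a placeholder for the paper's actual models, and all names are illustrative.

# Hypothetical sketch of hierarchical keyphrase extraction for long transcripts:
# extract keyphrases per paragraph first, then aggregate them at the document level.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "we", "this"}

def paragraph_keyphrases(paragraph: str, k: int = 5) -> list[str]:
    # Candidate phrases: unigrams and bigrams built from non-stopword tokens.
    tokens = [t for t in re.findall(r"[a-z']+", paragraph.lower()) if t not in STOPWORDS]
    candidates = tokens + [" ".join(pair) for pair in zip(tokens, tokens[1:])]
    return [phrase for phrase, _ in Counter(candidates).most_common(k)]

def document_keyphrases(transcript: str, k: int = 10) -> list[str]:
    # Step 1: paragraph-level extraction.
    paragraphs = [p for p in transcript.split("\n\n") if p.strip()]
    per_paragraph = [paragraph_keyphrases(p) for p in paragraphs]

    # Step 2: aggregate; phrases surfaced by many paragraphs are promoted
    # to document-level keyphrases.
    counts = Counter(phrase for phrases in per_paragraph for phrase in phrases)
    return [phrase for phrase, _ in counts.most_common(k)]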