
BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model

Hongyi Yuan, Zheng Yuan, Ruyi Gan, Jiaxing Zhang, Yutao Xie, Sheng Yu


Abstract
Pretrained language models serve as important backbones for natural language processing. Recently, in-domain pretraining has been shown to benefit a variety of domain-specific downstream tasks. In the biomedical domain, natural language generation (NLG) tasks are of critical importance yet remain understudied, even though framing natural language understanding (NLU) tasks as NLG, via constrained generation or prompting, achieves strong performance in the general domain. We highlight the lack of in-domain generative language models and of systematic generative downstream benchmarks in the biomedical domain, both of which hinder progress in the research community. In this work, we introduce BioBART, a generative language model that adapts BART to the biomedical domain. We collate various biomedical language generation tasks, including dialogue, summarization, entity linking, and named entity recognition. Pretrained on PubMed abstracts, BioBART improves over BART and sets strong baselines on several tasks. Furthermore, we conduct ablation studies on BioBART's pretraining tasks and find that sentence permutation hurts downstream performance.
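The ablation finding suggests that BioBART's final pretraining objective retains BART's text infilling while dropping sentence permutation. Below is a rough, self-contained sketch of text-infilling corruption, in which spans of Poisson-distributed length are replaced by a single mask token. The parameter values (λ=3, ~30% mask ratio) follow the original BART recipe and are assumptions here, not the authors' exact implementation.

```python
# Sketch of BART-style text infilling (assumed parameters, not the
# authors' exact noising code): Poisson-length spans are collapsed
# into a single <mask> token, and the model learns to denoise.
import numpy as np

rng = np.random.default_rng(0)

def text_infill(tokens, mask_token="<mask>", mask_ratio=0.3, poisson_lambda=3.0):
    out, i = [], 0
    budget = int(len(tokens) * mask_ratio)  # rough number of tokens to corrupt
    while i < len(tokens):
        span = int(rng.poisson(poisson_lambda))
        if budget > 0 and span > 0 and rng.random() < mask_ratio:
            out.append(mask_token)  # whole span becomes one mask token
            i += span
            budget -= span
        else:
            out.append(tokens[i])
            i += 1
    return out

toks = "aspirin inhibits platelet aggregation by blocking cyclooxygenase".split()
print(" ".join(text_infill(toks)))
# e.g. "aspirin <mask> aggregation by <mask> cyclooxygenase"
```

The seq2seq model is trained to reconstruct the original sentence from such corrupted inputs; under the paper's finding, no sentence-order shuffling is applied on top of this.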
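Since BioBART shares BART's architecture, the released checkpoints can be used with standard seq2seq tooling. The sketch below loads the model through Hugging Face transformers; the checkpoint id "GanjinZero/biobart-base" is assumed from the linked code repository, so verify the exact identifier against the repository's README.

```python
# Minimal sketch: loading BioBART for biomedical generation with
# Hugging Face transformers. The checkpoint id is an assumption taken
# from the linked repository (GanjinZero/BioBART).
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("GanjinZero/biobart-base")
model = BartForConditionalGeneration.from_pretrained("GanjinZero/biobart-base")

# Hypothetical input in the style of a consumer-health question
# summarization task (one of the benchmarked task families).
text = "SUBJECT: blood pressure MESSAGE: What are the common side effects of lisinopril?"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, num_beams=5, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that the base checkpoint is only pretrained; competitive results on the benchmarked tasks require task-specific fine-tuning as described in the paper.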
Anthology ID:
2022.bionlp-1.9
Volume:
Proceedings of the 21st Workshop on Biomedical Language Processing
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Dina Demner-Fushman, Kevin Bretonnel Cohen, Sophia Ananiadou, Junichi Tsujii
Venue:
BioNLP
Publisher:
Association for Computational Linguistics
Pages:
97–109
URL:
https://aclanthology.org/2022.bionlp-1.9
DOI:
10.18653/v1/2022.bionlp-1.9
Cite (ACL):
Hongyi Yuan, Zheng Yuan, Ruyi Gan, Jiaxing Zhang, Yutao Xie, and Sheng Yu. 2022. BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model. In Proceedings of the 21st Workshop on Biomedical Language Processing, pages 97–109, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model (Yuan et al., BioNLP 2022)
PDF:
https://aclanthology.org/2022.bionlp-1.9.pdf
Video:
https://aclanthology.org/2022.bionlp-1.9.mp4
Code:
GanjinZero/BioBART
Data:
BC5CDR, COMETA, GENIA, MEDIQA-AnS, MIMIC-III, MeQSum, MedMentions, Semantic Scholar