Autoregressive Entity Retrieval

Published: 12 Jan 2021, Last Modified: 03 Apr 2024
ICLR 2021 Spotlight
Readers: Everyone
Keywords: entity retrieval, document retrieval, autoregressive language model, entity linking, end-to-end entity linking, entity disambiguation, constrained beam search
Abstract: Entities are at the center of how we represent and aggregate knowledge. For instance, encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering. One way to understand current approaches is as classifiers among atomic labels, one for each entity, whose weight vectors are dense entity representations produced by encoding entity meta information such as their descriptions. This approach has several shortcomings: (i) context and entity affinity is mainly captured through a vector dot product, potentially missing fine-grained interactions between the two; (ii) a large memory footprint is needed to store dense representations when considering large entity sets; (iii) an appropriately hard set of negative data has to be subsampled at training time. In this work, we propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token by token, in an autoregressive fashion conditioned on the context. This enables us to mitigate the aforementioned technical issues: (i) the autoregressive formulation allows us to directly capture relations between context and entity name, effectively cross-encoding both; (ii) the memory footprint is greatly reduced because the parameters of our encoder-decoder architecture scale with vocabulary size, not entity count; (iii) the exact softmax loss can be computed efficiently without the need to subsample negative data. We show the efficacy of the approach, experimenting with more than 20 datasets on entity disambiguation, end-to-end entity linking and document retrieval tasks, achieving new state-of-the-art or very competitive results while using a tiny fraction of the memory footprint of competing systems. Finally, we demonstrate that new entities can be added by simply specifying their unambiguous name. Code and pre-trained models are available at https://github.com/facebookresearch/GENRE.
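To make the constrained-generation idea concrete, here is a minimal sketch (not the authors' exact code) of decoding restricted to valid entity names. It assumes a BART-style seq2seq model loaded via Hugging Face transformers and its `prefix_allowed_tokens_fn` generation hook; the `Trie` class and the three-entity candidate set are illustrative, and the vanilla `facebook/bart-large` checkpoint stands in for the fine-tuned models shipped in the GENRE repository above.

```python
# Sketch: constrained beam search over a prefix trie of entity names.
# Assumptions: BART-style model, Hugging Face transformers; the Trie
# class and the tiny entity list below are illustrative, not GENRE's code.
from transformers import BartTokenizer, BartForConditionalGeneration

class Trie:
    """Prefix trie over token-id sequences of valid entity names."""
    def __init__(self, sequences):
        self.root = {}
        for seq in sequences:
            node = self.root
            for tok in seq:
                node = node.setdefault(tok, {})

    def allowed(self, prefix):
        # Return the token ids that may follow `prefix`; empty if off-trie.
        node = self.root
        for tok in prefix:
            if tok not in node:
                return []
            node = node[tok]
        return list(node.keys())

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Valid outputs: the tokenized canonical names of the entities in the KB.
entities = ["Armstrong (crater)", "Neil Armstrong", "Armstrong County"]
trie = Trie([
    [model.config.decoder_start_token_id] + tokenizer.encode(name)
    for name in entities
])

inputs = tokenizer("Armstrong was the first man on the Moon.",
                   return_tensors="pt")
out = model.generate(
    **inputs,
    num_beams=5,
    max_length=20,
    # At each decoding step, restrict the beam to continuations that stay
    # inside the trie, so only exact entity names can be generated.
    prefix_allowed_tokens_fn=lambda batch_id, sent: trie.allowed(sent.tolist()),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because the candidate set lives entirely in the trie, adding a new entity amounts to inserting its tokenized name; no dense entity embedding has to be trained or stored, which is where the memory savings described in the abstract come from.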
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We address entity retrieval by generating entities' unique name identifiers, left to right, in an autoregressive fashion conditioned on the context, achieving SOTA results on more than 20 datasets with a tiny fraction of the memory of recent systems.
Code: [facebookresearch/GENRE](https://github.com/facebookresearch/GENRE) + [1 community implementation on Papers with Code](https://paperswithcode.com/paper/?openreview=5k8F6UU39V)
Data: [ACE 2004](https://paperswithcode.com/dataset/ace-2004), [AIDA CoNLL-YAGO](https://paperswithcode.com/dataset/aida-conll-yago), [AQUAINT](https://paperswithcode.com/dataset/aquaint), [CoNLL](https://paperswithcode.com/dataset/conll-1), [ELI5](https://paperswithcode.com/dataset/eli5), [HotpotQA](https://paperswithcode.com/dataset/hotpotqa), [IPM NEL](https://paperswithcode.com/dataset/ipm-nel), [KILT](https://paperswithcode.com/dataset/kilt), [Natural Questions](https://paperswithcode.com/dataset/natural-questions), [T-REx](https://paperswithcode.com/dataset/t-rex), [TriviaQA](https://paperswithcode.com/dataset/triviaqa), [Wizard of Wikipedia](https://paperswithcode.com/dataset/wizard-of-wikipedia)
Community Implementations: [2 code implementations on CatalyzeX](https://www.catalyzex.com/paper/arxiv:2010.00904/code)