
Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models

  • Conference paper
  • Published in: Advances in Information Retrieval (ECIR 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13185)

Abstract

Deep retrieval models based on pre-trained language models (e.g., BERT) have achieved superior performance over lexical retrieval models (e.g., BM25) on many passage retrieval tasks. However, limited work has been done on generalizing a deep retrieval model to other tasks and domains. In this work, we carefully select five datasets, including two in-domain datasets and three out-of-domain datasets with different levels of domain shift, and study the generalization of a deep model in a zero-shot setting. Our findings show that the performance of a deep retrieval model deteriorates significantly when the target domain is very different from the source domain the model was trained on. In contrast, lexical models are more robust across domains. We thus propose a simple yet effective framework to integrate lexical and deep retrieval models. Our experiments demonstrate that the two models are complementary, even when the deep model is weaker in the out-of-domain setting. The hybrid model obtains an average relative gain of 20.4% over the deep retrieval model and 9.54% over the lexical model on three out-of-domain datasets.
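This excerpt does not spell out how the lexical and deep scores are combined, so the sketch below is an illustration only: it fuses two ranked lists by min-max normalizing each retriever's scores and linearly interpolating them with a weight `lam`. The function and parameter names (`hybrid_rank`, `lam`) are hypothetical, and linear interpolation is an assumption, one common fusion choice rather than necessarily the paper's method.

```python
def hybrid_rank(lexical_scores, deep_scores, lam=0.5, k=10):
    """Fuse lexical (e.g., BM25) and deep retriever scores for one query.

    Each input maps doc_id -> raw score. Scores are min-max normalized
    per retriever, then combined as lam * lexical + (1 - lam) * deep.
    Returns the top-k doc_ids by fused score.
    """
    def minmax(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid division by zero on constant lists
        return {d: (s - lo) / span for d, s in scores.items()}

    lex, deep = minmax(lexical_scores), minmax(deep_scores)
    # A doc missing from one retriever's list contributes 0 on that side.
    docs = set(lex) | set(deep)
    fused = {d: lam * lex.get(d, 0.0) + (1 - lam) * deep.get(d, 0.0)
             for d in docs}
    return sorted(fused, key=fused.get, reverse=True)[:k]

# Toy example: "b" scores well under both retrievers, so it ranks first.
lexical = {"a": 12.0, "b": 9.5, "c": 3.1}
deep = {"b": 0.82, "c": 0.80, "d": 0.55}
print(hybrid_rank(lexical, deep, lam=0.5, k=3))  # → ['b', 'a', 'c']
```

With `lam = 0` the fusion reduces to the deep ranking and with `lam = 1` to the lexical one; in a zero-shot setting there are no target-domain labels to tune the weight on, so a fixed value such as 0.5 is a natural default.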


Notes

  1. While we recognize that in some cases the deep retrievers are not necessarily dense, and vice versa, we loosely use these two terms interchangeably throughout the paper.

  2. Note that we focus solely on recall, since we do not apply a second re-ranking stage for optimizing early precision.
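Since note 2 says the evaluation focuses solely on recall, a minimal sketch of the standard recall@k metric may be a useful reference point (the function name is hypothetical; this is the common definition, not code from the paper):

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of the relevant documents that appear in the top-k
    retrieved results for a single query."""
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

# 2 of the 3 relevant docs appear in the top-4 results → recall@4 = 2/3.
print(recall_at_k(["d1", "d7", "d3", "d9"], {"d1", "d3", "d5"}, k=4))
```

Per-query values are then averaged over the query set; without a re-ranking stage, a larger candidate list (larger k) trades latency for recall.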


Author information

Correspondence to Tao Chen.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, T., Zhang, M., Lu, J., Bendersky, M., Najork, M. (2022). Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models. In: Hagen, M., et al. Advances in Information Retrieval. ECIR 2022. Lecture Notes in Computer Science, vol 13185. Springer, Cham. https://doi.org/10.1007/978-3-030-99736-6_7


  • DOI: https://doi.org/10.1007/978-3-030-99736-6_7


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-99735-9

  • Online ISBN: 978-3-030-99736-6

