PyGaggle provides a gaggle of deep neural architectures for text ranking and question answering. It was designed for tight integration with Pyserini, but can be easily adapted for other sources as well.
Currently, this repo contains implementations of the rerankers for CovidQA on CORD-19, as described in "Rapidly Bootstrapping a Question Answering Dataset for COVID-19".
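To give a sense of how these rerankers are meant to be driven from Python, here is a minimal sketch. The module paths and class names (`Query`, `Text`, `MonoT5`) follow PyGaggle's reranking interface as I understand it and should be treated as assumptions; check `pygaggle/rerank` in this repo for the exact API.

```python
# Minimal reranking sketch; class names are assumptions, see pygaggle/rerank.
from pygaggle.rerank.base import Query, Text
from pygaggle.rerank.transformer import MonoT5  # T5 reranker fine-tuned on MS MARCO

reranker = MonoT5()

query = Query('what are the early symptoms of COVID-19?')
passages = [
    ('doc1', 'Common early symptoms include fever, dry cough, and fatigue.'),
    ('doc2', 'CORD-19 is a corpus of scholarly articles about COVID-19.'),
]
texts = [Text(body, metadata={'docid': docid}) for docid, body in passages]

# rerank() returns the texts with relevance scores; higher is more relevant.
for text in reranker.rerank(query, texts):
    print(text.metadata['docid'], text.score)
```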
- For pip, do `pip install pygaggle`. If you prefer Anaconda, use `conda env create -f environment.yml && conda activate pygaggle`.
- Install PyTorch 1.4+ (a quick environment check appears after this list).
- Download the index: `sh scripts/update-index.sh`.
- Make sure you have an installation of Java 11+: `javac --version`.
- Install Anserini.
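Before continuing, a quick sanity check can confirm the PyTorch and Java requirements above. This snippet is only a convenience sketch, not part of PyGaggle:

```python
# Convenience sanity check for the requirements above (not part of PyGaggle).
import re
import subprocess

import torch

# PyTorch 1.4+ is required.
major, minor = (int(part) for part in torch.__version__.split('.')[:2])
assert (major, minor) >= (1, 4), f'need PyTorch 1.4+, found {torch.__version__}'
print(f'PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}')

# Anserini needs Java 11+; `javac --version` prints e.g. "javac 11.0.6".
result = subprocess.run(['javac', '--version'], capture_output=True, text=True)
java_major = int(re.search(r'\d+', result.stdout).group())
assert java_major >= 11, f'need Java 11+, found: {result.stdout.strip()}'
```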
By default, the script uses `data/lucene-index-covid-paragraph` for the index path. If this is undesirable, set the environment variable `CORD19_INDEX_PATH` to the path of the index. For a full list of mostly self-explanatory environment variables, see this file.
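If you prefer to set the variable programmatically, launching the script from Python might look like the sketch below; the index path is a placeholder, not a real location:

```python
# Run the BM25 evaluation against a custom index (path below is a placeholder).
import os
import subprocess

env = dict(os.environ, CORD19_INDEX_PATH='/path/to/lucene-index-covid-paragraph')
subprocess.run(
    ['python', '-um', 'pygaggle.run.evaluate_kaggle_highlighter', '--method', 'bm25'],
    env=env,
    check=True,
)
```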
BM25 uses the CPU. If you don't have a GPU for the transformer models, pass `--device cpu` (PyTorch device string format) to the script.
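For reference, `--device` follows PyTorch's device string format; the snippet below is a small illustration of valid values and how to detect GPU availability (it is not part of the evaluation script):

```python
# PyTorch device strings: 'cpu', 'cuda', or 'cuda:<index>' for a specific GPU.
import torch

device_str = 'cuda' if torch.cuda.is_available() else 'cpu'
device = torch.device(device_str)

# Tensors (and models) are placed on the chosen device.
x = torch.zeros(2, 3, device=device)
print(device_str, x.device)
```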
BM25:

```bash
python -um pygaggle.run.evaluate_kaggle_highlighter --method bm25
```

BERT:

```bash
python -um pygaggle.run.evaluate_kaggle_highlighter --method transformer --model-name bert-base-cased
```

SciBERT:

```bash
python -um pygaggle.run.evaluate_kaggle_highlighter --method transformer --model-name allenai/scibert_scivocab_cased
```

BioBERT:

```bash
python -um pygaggle.run.evaluate_kaggle_highlighter --method transformer --model-name biobert
```

T5 (fine-tuned on MS MARCO):

```bash
python -um pygaggle.run.evaluate_kaggle_highlighter --method t5
```
BioBERT (fine-tuned on SQuAD v1.1):

- Download the weights, vocab, and config from the BioBERT repository to the same folder.
- Rename the following files in the folder (a pure-Python equivalent appears after this list):

```bash
mv bert_config.json config.json
# Strip the step number from the checkpoint files, e.g.
# model.ckpt-123456.index -> model.ckpt.index
for filename in model.ckpt*; do
    mv "$filename" "$(python -c "import re; print(re.sub(r'ckpt-\\d+', 'ckpt', '$filename'))")"
done
```

- Evaluate the model:

```bash
python -um pygaggle.run.evaluate_kaggle_highlighter --method qa_transformer --model-name <folder path>
```
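For the rename step above, here is an equivalent pure-Python sketch using only the standard library; run it inside the model folder:

```python
# Pure-Python equivalent of the rename step (run inside the model folder).
import pathlib
import re

folder = pathlib.Path('.')

# bert_config.json -> config.json
(folder / 'bert_config.json').rename(folder / 'config.json')

# model.ckpt-123456.index -> model.ckpt.index, and likewise for the other files.
for path in folder.glob('model.ckpt*'):
    path.rename(folder / re.sub(r'ckpt-\d+', 'ckpt', path.name))
```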
BioBERT (fine-tuned on MS MARCO):

- Download the weights, vocab, and config from our Google Storage bucket. This requires an installation of gsutil.

```bash
mkdir biobert-marco && cd biobert-marco
gsutil cp "gs://neuralresearcher_data/doc2query/experiments/exp374/model.ckpt-100000*" .
gsutil cp gs://neuralresearcher_data/biobert_models/biobert_v1.1_pubmed/bert_config.json config.json
gsutil cp gs://neuralresearcher_data/biobert_models/biobert_v1.1_pubmed/vocab.txt .
```

- Rename the files:

```bash
for filename in model.ckpt*; do
    mv "$filename" "$(python -c "import re; print(re.sub(r'ckpt-\\d+', 'ckpt', '$filename'))")"
done
```

- Evaluate the model:

```bash
python -um pygaggle.run.evaluate_kaggle_highlighter --method seq_class_transformer --model-name <folder path>
```