
Voxaboxen


Voxaboxen is a deep learning framework designed to find the start and stop times of (possibly overlapping) sound events in a recording. We designed it with bioacoustics applications in mind, so it accepts annotations in the form of Raven selection tables.

If you use this software in your research, please cite it.

(Figure: example detection output, 19_AL_Naranja_1025_detect)

Read the preprint!

Installation

With uv, Voxaboxen can be run using uv run main.py.

Alternatively, install dependencies with pip install -r requirements.txt and run using python main.py.

To use the BEATs encoder, obtain the weights from here. Place this file in the weights directory, which is the default location.

Quick start

Create a directory for your data. Add to it a train_info.csv file with three columns:

  • fn: Unique filename (identifier) for each audio file
  • audio_fp: Filepath to the audio file in the train set
  • selection_table_fp: Filepath to the corresponding Raven selection table

Repeat this for the other folds of your dataset, creating val_info.csv and test_info.csv. Run project setup and model training following the template in the Example Usage below.
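As an illustration, a minimal train_info.csv for two recordings might look like the following; the filenames and paths are hypothetical.

fn,audio_fp,selection_table_fp
recording_01,/path/to/audio/recording_01.wav,/path/to/annotations/recording_01.selections.txt
recording_02,/path/to/audio/recording_02.wav,/path/to/annotations/recording_02.selections.txt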

Notes:

  • Audio will be automatically resampled to 16000 Hz mono, so no resampling is necessary prior to training.
  • Selection tables are .txt files with tab-separated columns. Only the following columns are required: Begin Time (s), End Time (s), Annotation (see the example below).
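
For illustration, a minimal selection table (tab-separated, with hypothetical times and labels) could look like:

Begin Time (s)	End Time (s)	Annotation
1.25	2.10	call_a
3.40	4.05	call_b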

Example usage:

Get the BEATs weights from the link above. Get the preprocessed Meerkat (MT) dataset:

mkdir datasets/MT_demo
wget https://storage.googleapis.com/esp-public-files/voxaboxen-demo/formatted.zip -P datasets/MT_demo
unzip datasets/MT_demo/formatted.zip -d datasets/MT_demo
wget https://storage.googleapis.com/esp-public-files/voxaboxen-demo/original_readme_and_license.md -P datasets/MT_demo

Project setup:

uv run main.py project-setup --data-dir=datasets/MT_demo/formatted --project-dir=projects/MT_demo_experiment

Train model:

uv run main.py train-model --project-config-fp=projects/MT_demo_experiment/project_config.yaml --name=demo --n-epochs=50 --batch-size=4 --encoder-type=beats --beats-checkpoint-fp=weights/BEATs_iter3_plus_AS2M_finetuned_on_AS2M_cpt2.pt --bidirectional

Use trained model to infer annotations:

python main.py inference --model-args-fp=projects/MT_demo_experiment/demo/params.yaml --file-info-for-inference=datasets/MT_demo/formatted/test_info.csv

Reproduce the experiments

Obtain the datasets from here. Place them in the datasets directory.

For some datasets, events occur above the 8 kHz Nyquist frequency of the model. To work around this, we use slowed-down versions of the files; to create them, run uv run scripts/make_slowed_version.py
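The project's script handles this for you; just for intuition, a similar slowing effect can be produced with a standard tool such as sox (the filenames here are hypothetical):

sox original.wav slowed.wav speed 0.5

Halving the speed also halves all frequencies, so, for example, a 12 kHz event is shifted to 6 kHz, below the model's 8 kHz Nyquist limit; annotation times would then need to be rescaled by the same factor.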

The main experiments from the paper can be reproduced using the shell script scripts/voxaboxen_experiments.sh

Other features

Here are some additional options that can be applied during training:

  • Flag --stereo accepts stereo audio. The order of channels matters; this is useful for, e.g., speaker diarization.
  • Flag --bidirectional predicts the ends of events in addition to their beginnings, and matches starts with ends based on IoU. This may improve box regression.
  • Flag --segmentation-based switches to a frame-based approach. If used, we recommend setting --rho=1.
  • Flag --mixup applies mixup augmentation.
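
For example, to train a frame-based model with mixup augmentation on the demo project above (the run name is arbitrary, and the other arguments mirror the Example usage section):

uv run main.py train-model --project-config-fp=projects/MT_demo_experiment/project_config.yaml --name=demo_seg --n-epochs=50 --batch-size=4 --segmentation-based --rho=1 --mixup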

Editing Project Config

After running python main.py project-setup, a project_config.yaml file is created in the project directory you specified. This config file codifies how different labels will be handled by any model within the project. Although it is generated automatically by the project setup script, you can edit it to change how labels are handled. There are a few things you can edit:

  1. label_set: This is a list of all the label types that a model will be able to output. It is automatically populated with all the label types that appear in the Annotation column of the selection table. If you want your model to ignore a particular label type, perhaps because there are few events with that label type, you must delete that label type from this list.

  2. label_mapping: This is a set of key: value pairs. Often, it is useful to group multiple types of labels into one. For example, maybe in your data there are multiple species from the same family, and you would like the model to treat this entire family with one label type. Upon training, Voxaboxen converts each annotation that appears as a key into the label specified by the corresponding value. When modifying label_mapping, you should ensure that each value that appears in label_mapping either also appears in label_set, or is the unknown_label.

  3. unknown_label: This is set to Unknown by default. Any sound event labeled with the unknown_label will be treated as an event of interest, but the label type of the event will be treated as unknown. This may be desirable when there are vocalizations that are clearly audible, but are difficult for an annotator to identify to species. When the model is trained, it learns to predict a uniform distribution across possible label types whenever it encounters an event with the unknown_label. When the model is evaluated, it is not penalized for predicting the label of events which are annotated with the unknown_label. The unknown_label should not appear in the label_set.

For example, say you annotate your audio with the labels REVI (Red-eyed Vireo), PHVI (Philadelphia Vireo), and REVI/PHVI (when unsure). To reflect your uncertainty about REVI/PHVI, your label_set would include REVI and PHVI, and your label_mapping would include the pairs REVI: REVI, PHVI: PHVI, and REVI/PHVI: Unknown. Alternatively, you could group both types of vireo together by making your label_set include only Vireo, and your label_mapping include REVI: Vireo, PHVI: Vireo, and REVI/PHVI: Vireo.
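
Putting the first of these options together, the relevant part of project_config.yaml might look like the following sketch (the exact layout of the generated file may differ):

label_set:
- REVI
- PHVI
label_mapping:
  REVI: REVI
  PHVI: PHVI
  REVI/PHVI: Unknown
unknown_label: Unknown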

The name

Voxaboxen is designed to put a box around each vocalization (vox). It also rhymes with Roxaboxen.
