Given recent issues like #348, it might be nice to have some lightweight benchmarking of a few key queries to track possible changes (either due to changes in bcdata or changes in the catalogue/BCGW side). We can use this issue to roughly outline a path. Here is one idea:
Lowest lift:
- Upon a merged PR, run a script that executes n queries, covering both tabular and spatial data, that are representative of the types of queries possible with bcdata
- capture the timings of these queries with `bench::mark`
- append those timings to a csv file as new rows and commit it directly into this repo
- rinse and repeat to build up a baseline of typical query timings
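A minimal R sketch of what such a script could look like. The record id (`"bc-airports"`) and the file name `benchmarks.csv` are placeholders, not decisions; the real script would cover n representative queries:

```r
# benchmark.R -- hypothetical sketch; record ids and file names are examples only
library(bcdata)
library(bench)

# one representative tabular query and one spatial query
results <- bench::mark(
  tabular = bcdc_get_data("bc-airports"),
  spatial = collect(bcdc_query_geodata("bc-airports")),
  check = FALSE,   # results differ by design; we only care about timings
  iterations = 3
)

# keep only the columns worth tracking over time
out <- data.frame(
  date = Sys.Date(),
  query = c("tabular", "spatial"),
  median_sec = as.numeric(results$median)
)

# append as new rows; write the header only on the first run
file <- "benchmarks.csv"
write.table(out, file,
  sep = ",", row.names = FALSE,
  col.names = !file.exists(file), append = file.exists(file)
)
```

The commit step itself would live in whatever CI runs this (e.g. a workflow triggered on push to the default branch).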
Stretch goals:
- benchmark directly in a PR to evaluate code changes before merging <- I think this should really only be done if we ever encounter a scenario where it would have been useful
- plot benchmarks and commit a png/svg to the repo for easy visualization.
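For the plotting stretch goal, a sketch of what that could look like, assuming the csv has `date`, `query`, and `median_sec` columns as in the plan above (column names are assumptions):

```r
# plot_benchmarks.R -- hypothetical sketch of the visualization step
library(ggplot2)

bm <- read.csv("benchmarks.csv")
bm$date <- as.Date(bm$date)

# one line per query type, tracking median timings over time
ggplot(bm, aes(x = date, y = median_sec, colour = query)) +
  geom_line() +
  geom_point() +
  labs(x = NULL, y = "median query time (s)")

ggsave("benchmarks.png", width = 7, height = 4)
```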