We require a unit test table which shows the output of the tests for each submodule against each backend.
We solve this by maintaining a database, to which the job output is pushed right after each workflow run. We then pull the data from the database, do some wrangling in a script, and push a result table for each of these submodules to this branch. The rows consist of each test for each module (functional/stateful), and the columns consist of each backend framework.
The dashboard script is triggered every 20 minutes and is deployed on the cloud. The script used for updating the database is added as a step in the Actions workflows.
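The wrangling step described above can be sketched as follows. This is a minimal illustration, not the actual dashboard code: the record fields (`submodule`, `test`, `backend`, `passed`) are assumed for the example and are not the real database schema.

```python
# Hypothetical sketch: pivot raw per-test results pulled from the database
# into a dashboard table with one row per test and one column per backend.
# The record fields below are illustrative, not the actual schema.
from collections import defaultdict

records = [
    {"submodule": "functional_core", "test": "test_add", "backend": "numpy", "passed": True},
    {"submodule": "functional_core", "test": "test_add", "backend": "torch", "passed": False},
    {"submodule": "functional_core", "test": "test_matmul", "backend": "numpy", "passed": True},
    {"submodule": "functional_core", "test": "test_matmul", "backend": "torch", "passed": True},
]

# Build a {test: {backend: status}} mapping
table = defaultdict(dict)
backends = sorted({r["backend"] for r in records})
for r in records:
    table[r["test"]][r["backend"]] = "✅" if r["passed"] else "❌"

# Render as a markdown table, as a dashboard script might
print("| test | " + " | ".join(backends) + " |")
print("|" + "---|" * (len(backends) + 1))
for test, row in sorted(table.items()):
    print("| " + test + " | " + " | ".join(row.get(b, "") for b in backends) + " |")
```

Each generated table then renders directly as markdown on the dashboard branch, with a row per test and a pass/fail badge per backend.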
👉 To view the status of the tests at any given time, head over to:
- Array API Tests
- Functional Core Tests
- Functional NN Tests
- Stateful Tests
- Experimental Core Tests
- Experimental NN Tests
- Torch Frontend Tests
- Jax Frontend Tests
- Tensorflow Frontend Tests
- Numpy Frontend Tests
- Miscellaneous Tests
These are higher-level, submodule-specific dashboards; for more fine-grained, individual test dashboards, click on the badges ✅ inside these submodules.
```
@article{lenton2021ivy,
  title={Ivy: Templated deep learning for inter-framework portability},
  author={Lenton, Daniel and Pardo, Fabio and Falck, Fabian and James, Stephen and Clark, Ronald},
  journal={arXiv preprint arXiv:2102.02886},
  year={2021}
}
```