Some internal tooling for running OpenAI inference. Output data ultimately lands in the `merge_dir`, with responses in the `openai_response` field.
- In the `experiments/` directory, make a config JSON.
- Then either run with the `sandbox` command to do the full thing, or do a flow of `upload`, `check`, `merge`.
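The notebook is the intended way to build a config, but as an illustration a config could also be generated directly. Every field name below is a guess at the schema, not the real one; only `merge_dir` and `openai_response` are mentioned in this tooling's description:

```python
# Illustrative config generation -- field names here are hypothetical,
# except merge_dir / openai_response, which this tooling mentions.
import json
import os

config = {
    "model": "gpt-4o-mini",                       # hypothetical field
    "input_dir": "experiments/example/inputs",    # hypothetical field
    "merge_dir": "experiments/example/merged",    # output data lands here
    "response_field": "openai_response",          # responses go in this field
}

os.makedirs("experiments/example", exist_ok=True)
with open("experiments/example/config.json", "w") as f:
    json.dump(config, f, indent=2)
```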
Use the interactive Jupyter notebook to build a config file. Then run either:
```shell
--command sandbox \
--config experiments/example/config.json \
--status-file experiments/example/status.json \
--experiment exp_name_goes_here \
--wait \
--interval 10
```
Or if you want to do this in several steps:
```shell
--command upload \
--config experiments/example/config.json \
--status-file experiments/example/status.json \
--experiment-description exp_name_goes_here
```
then
```shell
--command check \
--config experiments/example/config.json \
--status-file experiments/example/status.json \
--wait \
--interval 10
```
and finally
```shell
--command merge \
--config experiments/example/config.json \
--status-file experiments/example/status.json
```
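The three-step flow above could also be scripted. This is only a sketch: `run_inference.py` is a placeholder name for the tool's actual entry point, which isn't named in this README:

```python
# Sketch of driving the upload -> check -> merge flow from Python.
# "run_inference.py" is a hypothetical entry-point name; substitute the
# real script for this tooling.

def build_command(step, config, status_file, *flags, **options):
    """Assemble the CLI argument list for one step of the flow."""
    cmd = ["python", "run_inference.py",
           "--command", step,
           "--config", config,
           "--status-file", status_file]
    for flag in flags:                    # bare flags such as --wait
        cmd.append("--" + flag)
    for name, value in options.items():   # valued flags such as --interval 10
        cmd += ["--" + name.replace("_", "-"), str(value)]
    return cmd

# The three steps, with the same arguments as the commands above:
for cmd in (
    build_command("upload", "experiments/example/config.json",
                  "experiments/example/status.json",
                  experiment_description="exp_name_goes_here"),
    build_command("check", "experiments/example/config.json",
                  "experiments/example/status.json",
                  "wait", interval=10),
    build_command("merge", "experiments/example/config.json",
                  "experiments/example/status.json"),
):
    print(" ".join(cmd))  # swap for subprocess.run(cmd, check=True) to execute
```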
Here are some features that might be nice to have in the future:
- S3 support
- More sophisticated tracking than a JSON status file (likely important when there are many files)
- Estimated cost before submitting jobs (or after running things)
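The cost-estimate idea could start as something very simple, e.g. multiplying token counts by per-token prices. The prices below are placeholders, not current OpenAI pricing:

```python
# Rough cost estimate from request/response token counts.
# Prices per 1M tokens are placeholders; look up current OpenAI pricing.
PRICE_PER_M = {"input": 2.50, "output": 10.00}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return estimated USD cost for a batch of requests."""
    return (input_tokens * PRICE_PER_M["input"]
            + output_tokens * PRICE_PER_M["output"]) / 1_000_000
```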