Disable automatic output model snapshot tracking #146
Hello,
Is there an option to disable the output model snapshot tracking while still keeping track of the training framework?
E.g. I want to save the best-scoring model as the output model; however, because snapshot tracking is enabled there is a race condition: when a new snapshot comes in, it overwrites the best model that was set.
Thanks, and great project, BTW.
Comments
Thank you @rorph, kind words are always appreciated :)
With trains 0.15.1rc0 we added a few callbacks that let you interfere with the model registration. You can also disable the tracking automagic with:

```python
from trains import Task

Task.init('examples', 'no tracking', auto_connect_frameworks={'pytorch': False})
```

And then manually log a model:

```python
from trains import OutputModel

OutputModel().update_weights('my_best_model.bin')
```
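Putting those two pieces together, a minimal sketch of the pattern being suggested (the loop, the scoring, and the file names here are illustrative placeholders, not from this thread): disable the framework automagic, then manually register only the checkpoints that improve the score:

```python
import random
from trains import Task, OutputModel

# Disable the PyTorch save/load automagic so intermediate snapshots
# are not auto-registered as output models.
task = Task.init('examples', 'no tracking',
                 auto_connect_frameworks={'pytorch': False})

output_model = OutputModel(task=task)
best_score = float('-inf')

for iteration in range(10):
    # Placeholder for a real train + evaluate step.
    score = random.random()
    with open('checkpoint.bin', 'wb') as f:
        f.write(b'model weights')  # stand-in for torch.save(...)

    if score > best_score:
        best_score = score
        # Manually register only the best-scoring checkpoint
        # as the task's output model.
        output_model.update_weights('checkpoint.bin')
```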
Hey @bmartinn, thanks for the reply. Looking at the PR, I think I can probably circumvent this issue by setting an event that never saves it. Just to put this in context: there are two scripts running in parallel, one training and one measuring the iterations. Once a new iteration is created it is graded, and if it finds a new best score I run
FYI, if the "pre_callback" returns None, the specific model save will not be tracked :)
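For reference, a sketch of what registering such a pre-callback could look like. This assumes the WeightsFileHandler hooks added in 0.15.1rc0, with a callback that receives the operation type and the model info; the exact import path and signature should be verified against that release:

```python
from trains import Task
from trains.binding.frameworks import WeightsFileHandler

task = Task.init('examples', 'filtered tracking')

def skip_snapshot(operation_type, model_info):
    # Assumed signature: operation_type is 'save' or 'load'.
    # Returning None skips tracking for this particular save;
    # returning model_info (possibly modified) keeps the registration.
    if operation_type == 'save':
        return None  # do not track this snapshot
    return model_info

WeightsFileHandler.add_pre_callback(skip_snapshot)
```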
This is very cool! (In theory you could make it a distributed process: launch a Task to do the validation on another machine, i.e. clone & enqueue a base Task that does the inference, then plug the results back into the training Task.) @rorph, out of curiosity, which framework are you using?