ModelCheckpoint should throw an exception/provide warning if non-existing metric is being monitored · Issue #21109 · keras-team/keras
ladi-pomsar changed the title from "ModelCheckpoint should throw an exception if non-existing metric is being monitored" to "ModelCheckpoint should throw an exception/provide warning if non-existing metric is being monitored" on Mar 30, 2025.
Hi everyone,
I have been playing around with keras==3.9.1 and noticed one thing: if you make a typo in the metric monitored by ModelCheckpoint, you won't get a hard stop. A message is logged, but it apparently takes a bit more setup to get it shown in a Jupyter notebook. What is even more interesting is that I had the checkpoint's verbose set to 1 and the message still didn't show up.
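For what it's worth, part of why the logged message is so easy to miss may be Python's default warning filter, which shows a given warning only once per call site. Below is a minimal sketch; the message text and the skip-on-missing behaviour are my assumptions about what the callback roughly does, not Keras's verbatim implementation:

```python
import warnings

def fake_checkpoint_step(logs, monitor="F1"):
    # Loose mimic of what a checkpoint callback does when the monitored
    # key is missing from the logs dict: warn and skip saving.
    # The message below is an assumption, not Keras's exact text.
    current = logs.get(monitor)
    if current is None:
        warnings.warn(
            f"Can save best model only with {monitor} available, skipping.",
            stacklevel=2,
        )
        return False  # nothing saved this epoch
    return True

# Python's "default" warning filter shows a warning only once per
# call site, so across many epochs you see it at most once.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("default")
    for epoch in range(5):
        fake_checkpoint_step({"loss": 0.1, "f1_score": 0.9})
print(len(caught))  # recorded once, not five times
```

So even when the warning does fire, a long training loop surfaces it a single time, which is easy to scroll past.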
A model situation: my checkpoint was

`checkpoint = ModelCheckpoint(str(checkpoint_filepath), monitor='F1', mode='max', verbose=1, save_best_only=True, save_weights_only=True)`

but I had already migrated from my custom F1 method to keras.metrics.F1Score, so the name of the logged metric had changed.
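To make the mismatch concrete: callbacks receive a `logs` dict keyed by metric name, and (if I read the defaults right) keras.metrics.F1Score names itself "f1_score", so a checkpoint still monitoring the old custom name "F1" never finds its key. The log values below are made up for illustration:

```python
# Callbacks get a `logs` dict keyed by metric name. After migrating to
# keras.metrics.F1Score (default name "f1_score" -- an assumption about
# the default), the old monitor key "F1" no longer exists.
logs = {"loss": 0.42, "f1_score": 0.88, "val_f1_score": 0.81}  # illustrative

print(logs.get("F1"))            # None -> the checkpoint silently skips saving
print(logs.get("val_f1_score"))  # 0.81 -> the key that actually exists now
```

A plain dict lookup returning None is the whole failure mode: nothing raises, nothing saves.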
I only learned about this once I tried to load the weights and h5py threw:

`File h5py/h5f.pyx:102, in h5py.h5f.open()
FileNotFoundError: [Errno 2] Unable to synchronously open file (unable to open file: name = '/home/.../best-models/lstm.weights.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)`

since, unsurprisingly, no weights had been saved.
I am open to discussion, but I would be in favour of either throwing a hard exception when the monitored metric doesn't exist, or always showing this warning unless it is explicitly turned off (verbose=-1?). I can imagine there are some advanced, expensive metrics that people with custom checkpoints want to compute only once in a while, and a hard exception might break their setup.
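In pseudo-API terms, the proposal could look something like the hypothetical helper below; `resolve_monitor`, `strict`, and `verbose=-1` are all made-up names for illustration, not existing Keras API:

```python
import warnings

def resolve_monitor(logs, monitor, verbose=1, strict=True):
    # Hypothetical helper sketching the proposal in this issue.
    # strict=True: raise a hard error on a missing/misspelled monitor key.
    # strict=False: always warn, unless explicitly silenced with verbose=-1.
    value = logs.get(monitor)
    if value is None:
        msg = (f"Monitored metric '{monitor}' not found in logs; "
               f"available keys: {sorted(logs)}")
        if strict:
            raise ValueError(msg)
        if verbose != -1:
            warnings.warn(msg, stacklevel=2)
    return value

# Usage: a typo'd monitor now fails loudly instead of silently skipping.
try:
    resolve_monitor({"loss": 0.1, "f1_score": 0.9}, monitor="F1")
except ValueError as e:
    print(e)
```

Listing the available keys in the error message would also make the typo obvious immediately, instead of surfacing days later as a missing weights file.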