Verify the accuracy metrics reported in the release accuracy table for MiDaS v2.1 Small_256 · Issue #286 · isl-org/MiDaS
I'm evaluating the accuracy of MiDaS on the KITTI dataset. When I evaluate the v3.0 models (DPT_Large_384 and DPT_Hybrid_384) using the zero-shot error on KITTI, my results match the released figures.
However, when I evaluate MiDaS v2.1 Small (Small_256), I get 44.19, which differs significantly from the released value of 29.27. Is the evaluation method for MiDaS v2.1 Small_256 different from the one used for DPT_Large_384 and DPT_Hybrid_384, or can anyone confirm the released result?
Additionally, if anyone has official evaluation code that reproduces the released results, could you kindly share it?
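For reference, here is a minimal sketch of the protocol I am assuming: least-squares scale-and-shift alignment of the predicted inverse depth to the ground-truth disparity, followed by the percentage of pixels with δ > 1.25. The model and transform names come from the repo's torch.hub entry points; `scale_shift_align`, `delta1_error`, `kitti_rgb`, and `kitti_depth` are my own placeholder names, and the 80 m depth cap is my assumption rather than something taken from the official evaluation.

```python
import numpy as np
import torch

# Load MiDaS v2.1 Small and its matching input transform via torch.hub.
model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

def scale_shift_align(pred_disp, gt_disp, mask):
    """Least-squares scale/shift alignment of predicted disparity to GT disparity."""
    p = pred_disp[mask]
    t = gt_disp[mask]
    A = np.stack([p, np.ones_like(p)], axis=1)
    s, b = np.linalg.lstsq(A, t, rcond=None)[0]
    return s * pred_disp + b

def delta1_error(pred_disp, gt_depth, cap=80.0):
    """Percentage of valid pixels with max(d/d*, d*/d) > 1.25 after alignment."""
    mask = (gt_depth > 0) & (gt_depth < cap)
    gt_disp = np.zeros_like(gt_depth)
    gt_disp[mask] = 1.0 / gt_depth[mask]
    aligned = scale_shift_align(pred_disp, gt_disp, mask)
    aligned = np.clip(aligned, 1.0 / cap, None)  # keep disparity positive / within cap
    pred_depth = 1.0 / aligned
    ratio = np.maximum(pred_depth[mask] / gt_depth[mask],
                       gt_depth[mask] / pred_depth[mask])
    return 100.0 * np.mean(ratio > 1.25)

# Per-image usage (image and GT loading omitted; kitti_rgb / kitti_depth stand in
# for however the KITTI eval split is read):
# img = cv2.cvtColor(cv2.imread(kitti_rgb), cv2.COLOR_BGR2RGB)
# batch = transform(img)
# with torch.no_grad():
#     pred = model(batch)
#     pred = torch.nn.functional.interpolate(
#         pred.unsqueeze(1), size=img.shape[:2],
#         mode="bicubic", align_corners=False).squeeze().cpu().numpy()
# err = delta1_error(pred, kitti_depth)
```

If the official script aligns in a different space (e.g., depth instead of disparity), uses a different depth cap, or evaluates the Small model at a different resolution, that could explain the gap, so any pointers to the exact procedure would help.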