Aaron/fix pin to 0 code #686
base: main
Conversation
The big question: does it work? It seems to! Here we have three plots: data with the original calibration (no pin to origin), with our current pin-to-origin code (the big skeleton guy in orange), and with the fixed pin-to-origin code in this PR. The fixed pin to origin and the original calibration now line up exactly.
What this also means is that the true issue causing the bugs mentioned in #681 was, at its core, never related to the OS or the video order. The reason we saw alignment when we sorted the videos is just that sorting changed which video was treated as camera 0, which produced a different translation-to-origin vector. Thankfully, this also means that the video chosen as camera 0 should no longer matter.
...processes/capture_volume_calibration/anipose_camera_calibration/anipose_camera_calibrator.py
Running through a couple of test recordings, this looks good!
After a good bit of digging, I believe I've found the issue with the `pin_camera_0_to_origin` code, which lies in how we're handling different coordinate systems.

Currently we're trying to move each camera's `tvec` by applying a transformation that would place camera 0's position at [0, 0, 0]. However, each camera's `tvec` exists in its own local reference frame, not in the world coordinate system. So currently we calculate a translation in the context of camera 0's coordinate system and then try to apply that same translation to other cameras with different orientations. This means the function hasn't been rigidly moving the cameras as a set; it's been changing the spatial relationships between the cameras. This explains why the different 'translations' that have been applied (for example, which video was listed as 'camera 0' determined the `translation to origin` vector) have led to differently scaled data.

The solution I've tried to implement here is to calculate the translation needed to move camera 0 to the origin in the world coordinate system, not in camera 0's own coordinate system. Then we take this world-space translation, transform it into each camera's local coordinate system, and apply it.
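To make that concrete, here is a minimal sketch of the rigid-pinning idea described above, assuming OpenCV-style extrinsics where `rvec`/`tvec` map world points into each camera's frame; the function name and array shapes are illustrative, not the actual freemocap/anipose code.

```python
import cv2
import numpy as np


def pin_camera_0_to_origin_sketch(rvecs: np.ndarray, tvecs: np.ndarray) -> np.ndarray:
    """Rigidly shift all cameras so that camera 0 sits at the world origin.

    rvecs: (n_cameras, 3) Rodrigues rotation vectors (world -> camera)
    tvecs: (n_cameras, 3) translation vectors (world -> camera)
    Returns the new tvecs; rotations are unchanged by a pure translation.
    """
    # Camera 0's center expressed in *world* coordinates: C0 = -R0^T @ t0
    rotation_0, _ = cv2.Rodrigues(rvecs[0])
    camera_0_center_world = -rotation_0.T @ tvecs[0]

    new_tvecs = np.empty_like(tvecs)
    for i, (rvec, tvec) in enumerate(zip(rvecs, tvecs)):
        rotation_i, _ = cv2.Rodrigues(rvec)
        # Express the world-space shift that moves C0 to the origin in
        # camera i's local frame and fold it into its tvec: t_i' = t_i + R_i @ C0
        new_tvecs[i] = tvec + rotation_i @ camera_0_center_world
    return new_tvecs
```

Because the same world-space shift is re-expressed in each camera's own frame before being applied, every camera center moves by the identical world vector, so the relative geometry between cameras is preserved rather than distorted.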