Clarification on Depth Camera Viewing Angle and Gripper Visibility Requirements · Issue #5 · TEA-Lab/DemoGen · GitHub
I’m experimenting with DemoGen and have a few questions about the setup of the depth camera and the point-cloud data:
Camera viewing angle requirements
Is there a recommended or minimum field-of-view / mounting angle for the depth camera?
In particular, should the camera be positioned so that the gripper is fully visible (e.g. a side view) throughout the demonstration?
Gripper visibility in early frames
I noticed that in the provided demo datasets, the manipulated object and the gripper are always visible simultaneously.
If the gripper does not appear in the first few frames of the point-cloud sequence (e.g. it enters the scene later), will that negatively impact the synthetic demo generation or downstream policy learning?
Necessity of gripper point-cloud
Is it strictly necessary to include the gripper’s point-cloud geometry in the input demonstrations for DemoGen to work correctly?
Could I, for example, mask out or ignore the gripper cloud and still obtain robust synthetic trajectories?
Any guidance on optimal camera placement or dataset requirements would be greatly appreciated. Thanks in advance!
We recommend the L515; you can check its detailed FoV specs. In our experiments on the Panda arm, the camera was placed in front of the tabletop workspace. In the experiments on the Galaxea R1 (a bimanual humanoid), the camera was mounted on the robot's head, looking down at the workspace. We didn't try side views, but I think they would make it harder to handle the point cloud of the end-effectors.
Actually, we have encountered similar cases in the simulator, where the cropped workspace of the point cloud was set too low to see the end-effector at first. We didn't observe significant negative effects, as long as the end-effector dives down at the beginning of all the demos: the policy can overfit to the diving-down behavior even though it cannot see the end-effector.
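For reference, the workspace cropping described above is typically just an axis-aligned bounding-box filter on the point cloud. Here is a minimal sketch (the function name, bounds, and array shapes are my own illustration, not DemoGen's actual preprocessing code):

```python
import numpy as np

def crop_workspace(points, bounds):
    """Keep only points inside an axis-aligned workspace box.

    points: (N, 3) array of xyz coordinates in the camera/world frame.
    bounds: ((xmin, xmax), (ymin, ymax), (zmin, zmax)) in the same frame.
    """
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bounds
    mask = (
        (points[:, 0] >= xmin) & (points[:, 0] <= xmax)
        & (points[:, 1] >= ymin) & (points[:, 1] <= ymax)
        & (points[:, 2] >= zmin) & (points[:, 2] <= zmax)
    )
    return points[mask]

# Hypothetical bounds: raising zmax keeps a high starting end-effector
# pose inside the crop instead of cutting it out of the early frames.
pts = np.random.uniform(-1.0, 1.0, size=(1000, 3))
cropped = crop_workspace(pts, ((-0.5, 0.5), (-0.5, 0.5), (0.0, 0.8)))
```

Setting the upper z bound generously is the easy fix for the "end-effector invisible in the first frames" case mentioned above.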
I would say it depends on your particular setting. In the real world, I don't think you can easily mask off the points from the end-effector during inference, so I don't quite see the benefit of masking them out of the demonstrations. Maybe you can describe your robot setup in detail so that I can provide more concrete suggestions.
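If you do want to experiment with masking, the simplest variant is removing points within a radius of the end-effector position. This is a hedged sketch, not anything DemoGen provides; it assumes you can obtain the end-effector position (e.g. from forward kinematics) in the same frame as the point cloud, and the radius is a made-up value:

```python
import numpy as np

def mask_end_effector(points, ee_position, radius=0.08):
    """Drop points within `radius` metres of the end-effector.

    points: (N, 3) xyz array; ee_position: (3,) array, e.g. from
    forward kinematics, expressed in the same frame as `points`.
    """
    dist = np.linalg.norm(points - ee_position, axis=1)
    return points[dist > radius]
```

Note the caveat above still applies: if the gripper points are present at inference time, training on clouds with them masked out creates a train/test mismatch.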