I've been working extensively with the CALVIN simulator, and I must say it's truly impressive. Your work has significantly advanced the field of embodied AI, and CALVIN has rightfully become one of the best benchmarks in this domain. Its comprehensive design and realistic physics have been invaluable for my research. While working with this excellent tool, I've encountered a couple of questions regarding the Robot class implementation that I hope you might be able to clarify.

Question about the use_target_pose parameter:

Based on my understanding of the apply_action (https://github.com/mees/calvin_env/blob/1431a46bd36bde5903fb6345e68b5ccc30def666/calvin_env/robot/robot.py#L244) and relative_to_absolute (https://github.com/mees/calvin_env/blob/1431a46bd36bde5903fb6345e68b5ccc30def666/calvin_env/robot/robot.py#L228) functions, it appears that:

- When use_target_pose=True, the target for this step's action execution is the previous target (target_pos) plus the current action (rel_pos).
- When use_target_pose=False, the target is the current tcp_pos plus the action (rel_pos).
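For concreteness, here is a minimal sketch of the two branches as I read them; this is not the actual calvin_env code. The names rel_pos, target_pos, and tcp_pos follow robot.py, but next_target and the toy rollout (including the 60% per-step convergence rate standing in for the arm's dynamics) are hypothetical:

```python
import numpy as np

def next_target(rel_pos, prev_target_pos, tcp_pos, use_target_pose):
    """Sketch of my reading of relative_to_absolute (positions only)."""
    if use_target_pose:
        # New target = previous *commanded* target + action,
        # independent of where the arm actually ended up.
        return prev_target_pos + rel_pos
    # New target = current *measured* end-effector position + action.
    return tcp_pos + rel_pos

# Toy rollout: assume the arm only covers ~60% of the remaining distance
# per step (a stand-in for its real dynamics), so tcp lags the target.
target = np.zeros(3)
tcp = np.zeros(3)
for _ in range(5):
    rel_pos = np.array([0.02, 0.0, 0.0])  # constant commanded delta
    target = next_target(rel_pos, target, tcp, use_target_pose=True)
    tcp = tcp + 0.6 * (target - tcp)      # arm lags behind the target
    print("target:", target, "tcp:", tcp)
```

With use_target_pose=True the target advances by the full rel_pos every step regardless of where the arm is, so the gap between target and tcp can persist; with use_target_pose=False each new target is re-anchored to the measured tcp_pos.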
For policies like GR1, where the model input is the current actual state of the robot arm (tcp_pos), it seems that use_target_pose should be set to False for both training and testing. Is my understanding correct? I noticed, however, that GR1 uses use_target_pose=True for both training and evaluation.
Question about the apply_action function:
I've noticed that after executing apply_action, the robot's end effector often doesn't reach the target position (target_ee_pos) within that step. The discrepancy can be significant: the end effector typically moves only part of the way towards target_ee_pos. Is this due to the robot arm's dynamic constraints?
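In case it's useful, this is roughly how I've been measuring the gap. The tracking_error helper is mine, and the observation layout (tcp position in the first three entries of "robot_obs") and the 4-tuple return of env.step are my assumptions about the CALVIN API, so they may need adjusting:

```python
import numpy as np

def tracking_error(env, action, target_ee_pos):
    # Step once, then compare the commanded target against the measured
    # end-effector position. obs["robot_obs"][:3] == tcp position is my
    # assumption about the observation dict, not a documented guarantee.
    obs, _, _, _ = env.step(action)
    tcp_pos = np.asarray(obs["robot_obs"][:3])
    return np.linalg.norm(np.asarray(target_ee_pos) - tcp_pos)
```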
I would greatly appreciate your insights on these matters. Your expertise would be invaluable in helping me better understand the CALVIN simulator's implementation and behavior.