Thank you for open-sourcing this great work.
While reading the code, I got confused by the GTSAM retraction here. My (possibly wrong) derivation suggests that the delta chi should be negated:

```python
self.last_state = x0.retract(-gtsam_delta)
```
This follows from the observations below.
Here we are optimizing w_T_b, whose update is defined as w_T_b \leftarrow w_T_b exp(epsilon).
In DROID-SLAM, the BA instead optimizes c_T_w, whose update is defined as c_T_w \leftarrow exp(nu) c_T_w.
Comparing droid_kernels.cu in NeRF-SLAM with droid_kernels.cu in DROID-SLAM, I see that they use the same observations (GRU flow) and the same Hessians and residuals.
So, following DROID-SLAM and denoting the delta chi by nu, we should update w_T_b by w_T_b \leftarrow w_T_b exp(-nu), since inverting the DROID-SLAM update gives (exp(nu) c_T_w)^{-1} = w_T_c exp(-nu) (treating the body and camera frames as the same for simplicity).
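To make the sign flip concrete, here is a small self-contained sketch on SE(2), chosen only because its exponential map has a short closed form; the same algebra holds on SE(3). All names here are illustrative and not from either codebase; it just checks numerically that inverting the DROID-SLAM-style left update exp(nu) c_T_w is the same as right-multiplying the inverse pose by exp(-nu):

```python
import math

def se2_exp(v):
    """Closed-form exponential map se(2) -> SE(2) for a tangent v = (vx, vy, w)."""
    vx, vy, w = v
    if abs(w) < 1e-9:
        R = [[1.0, 0.0], [0.0, 1.0]]
        t = [vx, vy]
    else:
        s, c = math.sin(w), math.cos(w)
        R = [[c, -s], [s, c]]
        # V integrates the rotation along the twist: t = V @ (vx, vy)
        V = [[s / w, -(1 - c) / w], [(1 - c) / w, s / w]]
        t = [V[0][0] * vx + V[0][1] * vy, V[1][0] * vx + V[1][1] * vy]
    return [[R[0][0], R[0][1], t[0]],
            [R[1][0], R[1][1], t[1]],
            [0.0, 0.0, 1.0]]

def mat3_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def se2_inv(T):
    """Rigid-transform inverse: [R t; 0 1]^-1 = [R^T, -R^T t; 0 1]."""
    Rt = [[T[0][0], T[1][0]], [T[0][1], T[1][1]]]  # R transposed
    t = [-(Rt[0][0] * T[0][2] + Rt[0][1] * T[1][2]),
         -(Rt[1][0] * T[0][2] + Rt[1][1] * T[1][2])]
    return [[Rt[0][0], Rt[0][1], t[0]],
            [Rt[1][0], Rt[1][1], t[1]],
            [0.0, 0.0, 1.0]]

# An arbitrary pose c_T_w and an arbitrary update nu
c_T_w = se2_exp((0.3, -0.2, 0.7))
nu = (0.05, 0.02, -0.1)

# DROID-SLAM convention: left-multiplicative update of c_T_w
c_T_w_new = mat3_mul(se2_exp(nu), c_T_w)

# Equivalent update of the inverse pose: right-multiply by exp(-nu)
w_T_c_new = mat3_mul(se2_inv(c_T_w), se2_exp((-nu[0], -nu[1], -nu[2])))

# Check: (exp(nu) c_T_w)^-1 == c_T_w^-1 exp(-nu), up to float error
err = max(abs(se2_inv(c_T_w_new)[i][j] - w_T_c_new[i][j])
          for i in range(3) for j in range(3))
print(err < 1e-12)
```

This is why a right-multiplicative retraction of w_T_b would seem to need the negated delta.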
Can you please clarify this?