Hello @zwxdxcm,
Thank you for your interest in this work.
Great question. The gradient of log Q is equal to the gradient of Q divided by Q. I perform this division at line 280 of `examples/train_ngp_nerf_prop.py`.
Here we ignore `scale * eps`. `correction = 1/[Q(x)]^alpha`, and `netgrad` is the gradient of the total loss. How is that equal to `grad(Q(x)) / Q(x)`? It looks to me like `grad(L) / ([1/[Q(x)]^alpha] * L)`. Is there anything I missed? Thank you again!
Approximate `correction(x) = Q(x)`;
thus ∇log(Q(x)) = ∇Q(x)/Q(x) = ∇correction(x)/correction(x).
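The identity ∇log Q(x) = ∇Q(x)/Q(x) can be checked numerically. Below is a minimal sketch using central finite differences; `Q` here is an illustrative stand-in for the sampling density, not the function from the repository:

```python
import math

def Q(x):
    # Illustrative positive density-like function (stand-in for Q(x)).
    return math.exp(-x * x) + 0.1

def fd_grad(f, x, h=1e-6):
    # Central finite-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
lhs = fd_grad(lambda t: math.log(Q(t)), x)  # grad of log Q at x
rhs = fd_grad(Q, x) / Q(x)                  # grad of Q at x, divided by Q(x)
assert abs(lhs - rhs) < 1e-4                # the two sides agree
```

This is why the code only needs ∇Q and a division, with no explicit `log` anywhere.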
I don't know why `loss_per_pix` is multiplied again, given that there is already `loss_per_pix.mul_(correction)` at line 267.
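For context, the importance-correction weighting under discussion generally takes the following form. This is a hedged sketch of the pattern, not the repository's actual code; `corrected_loss`, `losses`, `densities`, and `alpha` are illustrative names:

```python
def corrected_loss(losses, densities, alpha=1.0):
    """Weight each per-pixel loss by correction(x) = 1 / Q(x)**alpha,
    which down-weights pixels that the sampler visits more often."""
    total = 0.0
    for loss, q in zip(losses, densities):
        correction = 1.0 / (q ** alpha)  # inverse-density importance weight
        total += loss * correction
    return total / len(losses)

# A pixel sampled with density 0.5 gets double weight at alpha = 1.
print(corrected_loss([1.0, 1.0], [1.0, 0.5]))  # -> 1.5
```

Any extra multiplication by the loss beyond this weighting would change the objective, which is presumably what the question above is probing.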
Hi,
Thanks for your contribution.
I am wondering why there is no log operator in the codebase.
Here is the code in `lmc.py`:

But equation (10) in the paper is: