
Issue on Control Point Position Optimization #18

Open
mikelovesolivia opened this issue Sep 28, 2024 · 2 comments

@mikelovesolivia commented Sep 28, 2024

Hi Sebastian,

In `pytests/tests/tf/test_tf_optimization.py`, line 106, `derivative_tf_indices` is defined as follows:

[screenshot of the `derivative_tf_indices` definition at line 106]

During training, I find that the positions of the control points do not change. How can I optimize the position of each control point during training? What should I do with this and the other parameters? Thank you.

@shamanDevel (Owner) commented

Hi, these are not actually the derivatives or the values you want to optimize. For forward-mode autodiff, you need to assign "optimization slots" to each variable you want gradients for. This is because forward-mode autodiff scales linearly with the number of parameters, so you want to keep that number small. Only the parameters with a derivative index >= 0 are optimized, and all assigned indices must be unique.

The actual TF values are in `initial_tf`; those are the values that are optimized.
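
A minimal sketch of this convention (the `[points, channels] = [5, 5]` layout with (R, G, B, A, position) channels and the particular slot assignments are illustrative, not the values from the test file):

```python
import torch

# Entries >= 0 assign a forward-mode "optimization slot"; -1 means the
# parameter is held fixed. All non-negative slots must be unique.
derivative_tf_indices = torch.tensor([
    [-1, -1, -1, -1, -1],  # first control point: fully fixed
    [ 0,  1,  2,  3, -1],  # second point: RGBA optimized, position fixed
    [ 4,  5,  6,  7,  8],  # third point: RGBA and position optimized
    [ 9, 10, 11, 12, -1],  # fourth point: RGBA optimized, position fixed
    [-1, -1, -1, -1, -1],  # last control point: fully fixed
], dtype=torch.int32)

# The parameter values themselves live in initial_tf; the indices only
# select which of them receive gradients (here, 13 forward-mode slots).
initial_tf = torch.tensor([
    [0.0, 0.0, 0.0, 0.0, 0.00],
    [0.9, 0.1, 0.1, 0.5, 0.25],
    [0.1, 0.9, 0.1, 0.8, 0.50],
    [0.1, 0.1, 0.9, 0.5, 0.75],
    [1.0, 1.0, 1.0, 1.0, 1.00],
], dtype=torch.float32)
```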

@mikelovesolivia (Author) commented Sep 29, 2024

Hi Sebastian, may I check that I understand this correctly: if I set `derivative_tf_indices = torch.tensor([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]], dtype=torch.int32)`, then both the RGBA values and the positions of all control points of `initial_tf`, including the first and last points, are optimized? Is there demo code in this repo for position optimization?

Besides, since I want to optimize the positions of the control points of `initial_tf`, the positions may no longer be in ascending order, and may even become negative, after optimization. What can I do to keep the optimized transfer function valid? Does it affect gradient computation if I sort the rows of `current_tf` during optimization? Thank you!
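
One way to keep the TF valid during optimization (a sketch, assuming a `current_tf` layout of `[points, 5]` with the position in the last column, which the thread does not confirm): clamp the positions to [0, 1] and reorder the rows by position at each step. Indexing by `torch.argsort` is differentiable with respect to the values (the permutation itself is treated as a constant), so gradients still flow back to the original entries:

```python
import torch

def make_valid_tf(current_tf: torch.Tensor) -> torch.Tensor:
    """Clamp and reorder a TF of shape [points, 5] = (R, G, B, A, position)."""
    positions = current_tf[:, 4].clamp(0.0, 1.0)  # keep positions in [0, 1]
    tf = torch.cat([current_tf[:, :4], positions.unsqueeze(1)], dim=1)
    order = torch.argsort(tf[:, 4])               # sort rows by ascending position
    return tf[order]
```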
