Add routine for modern dark scaling #172
Conversation
@jeanconn - this looks generally good, thanks! I had a short block of time where I couldn't focus otherwise, so I just made this more to my liking.
The only other substantive comment that I had was that it needs a reference to the notebook with the derivation of the key scaling formula.
I figured we'd probably want a reference to the notebook - and I couldn't find the notebook in a brief search. @javierggt?
For reference I profiled this code.
The question is whether to use the new function in proseco.
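As context for the profiling mentioned above, a minimal timing sketch along these lines could compare the scalar and per-pixel paths; the import path, argument order, and synthetic image here are assumptions for illustration, not results from this PR.
import timeit
import numpy as np
from chandra_aca.dark_model import dark_temp_scale, dark_temp_scale_img  # assumed import path

# Synthetic full-frame dark current image with roughly lognormal pixel values
img = np.random.lognormal(mean=2.0, sigma=1.0, size=(1024, 1024))

# One scalar factor applied to every pixel vs. per-pixel scaling
t_scalar = timeit.timeit(lambda: img * dark_temp_scale(-10.0, -5.0), number=20) / 20
t_img = timeit.timeit(lambda: dark_temp_scale_img(img, -10.0, -5.0), number=20) / 20

print(f"scalar scaling:    {t_scalar * 1e3:.2f} ms per call")
print(f"per-pixel scaling: {t_img * 1e3:.2f} ms per call")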
Since I now contributed code, this should be for @javierggt to review.
I note my limited speed-testing, which suggests to me that, as expected, the new code's speed depends on the number of pixels in img. So if we want to use it in proseco, where we depend on speed, we'd likely want to get the scaling for just the pixels in the acquisition boxes to avoid a performance hit. But the speed hit is probably acceptable even without that, if this scaling is preferred.
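A hedged sketch of the "just the acquisition boxes" idea above, under the assumption that dark_temp_scale_img accepts an arbitrary 2-d array; the helper name, box geometry, and argument order are illustrative only.
import numpy as np
from chandra_aca.dark_model import dark_temp_scale_img  # assumed import path

def scale_dark_in_boxes(dark_img, boxes, t_ccd, t_ccd_ref):
    """Scale only the pixels inside each acquisition box (hypothetical helper).

    boxes is a list of (row0, row1, col0, col1) index ranges into dark_img.
    Returns a list of scaled cutouts, one per box.
    """
    scaled = []
    for row0, row1, col0, col1 in boxes:
        cutout = dark_img[row0:row1, col0:col1]
        scaled.append(dark_temp_scale_img(cutout, t_ccd, t_ccd_ref))
    return scaled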
I ran the tests and checked that this function corresponds to what I have used. Seems good to me.
I went back to the notebook, and I didn't remember the details, so I added a final "summary" section where I summarized this function in words to the best of my (current) knowledge.
I note that I do not know how/why I chose the 265.15 value. In the notebook, I used 263.15 in a few places, but I don't think this is a big difference anyway.
    1.90718453e-01,
]
t = 265.15
y = np.log(np.clip(img, 20, 1e4)) - e * t
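For orientation only, here is an illustrative sketch of the general structure suggested by the snippet above and by the notebook summary discussed later in this thread; the coefficients, clip limits, argument names, and polynomial form are placeholders, not the values merged in this PR.
import numpy as np

def scale_img_sketch(img, t_ccd_from, t_ccd_to, coeffs=(-0.002, 0.12), t_ccd_alpha=265.15):
    """Hypothetical per-pixel dark current temperature scaling (sketch).

    The fitted model provides a per-pixel slope alpha of log(dark current)
    vs CCD temperature, evaluated from the clipped log dark current at a
    fixed reference temperature t_ccd_alpha.
    """
    img = np.asarray(img, dtype=float)
    # Log of the dark current, clipped to the range covered by the fit
    y = np.log(np.clip(img, 20, 1e4))
    # Per-pixel slope from a polynomial in y (placeholder coefficients)
    alpha = np.polyval(coeffs, y)
    # Scale each pixel by exp(alpha * delta_T)
    return img * np.exp(alpha * (t_ccd_to - t_ccd_from))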
You removed the np.isnan check. Right now I do not know where that might affect the result, but I can guarantee that if I put it there, it is because at some point I cared (maybe about a warning at some point).
Maybe something to just keep in mind in case weird things happen.
Regarding nans, it looks like the routine isn't breaking on them:
In [15]: img = [np.nan, -5, 0, 100, 1000, 1000]
In [16]: dark_temp_scale_img(img, -10, 5)
Out[16]:
array([ nan, -21.16578998, 0. , 504.20852555,
4645.94826402, 4645.94826402])
I think the output for negative numbers is probably unhelpful but it looks like the nans stay nans.
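A possible unit test capturing this NaN-passthrough behavior; the import path is an assumption, and this is a sketch rather than the test actually added in the PR.
import numpy as np
from chandra_aca.dark_model import dark_temp_scale_img  # assumed import path

def test_dark_temp_scale_img_nan_passthrough():
    img = np.array([np.nan, 0.0, 100.0, 1000.0])
    out = dark_temp_scale_img(img, -10, 5)
    # NaN pixels stay NaN; everything else comes back finite
    assert np.isnan(out[0])
    assert np.all(np.isfinite(out[1:]))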
I know I just approved it, but I think it would be worth it to add the reference temperature as a t_ccd_alpha argument. It is an arbitrary number, and I saw that I used at least two different values in the notebook. It would also mean it is better documented. I would document it as an optional argument with the current value as the default.
Given that we've largely settled on 265.15 as a value to use for now because it seems to work well, I think that it probably deserves a better comment or a better name for the internal variable, not a new optional argument.
The main point of having it as an argument is that it is an arbitrary number, not that it is better documented. The documentation is a good side effect. For a better reference, look at the notebook, including the summary I added. We cannot do much better now. I cannot justify the choice of 265.15 now. The only justification is "it works" based on some metric that we are not going to dig out. It might stop working, or we might need an estimate at an extreme temperature, in which case we would override it.
I just figured that if we really needed to fiddle with it because of bad behavior, then instead of plugging in new values for t_ccd_alpha we'd probably need to refit the model and get new coefficients anyway. But if you feel the current coefficients would remain stable and you'd just use a different reference t_ccd_alpha, we can add the option.
But it could go in a new PR.
The way I see it, it is a real parameter of the function that we are deciding not to use, setting it to an average value instead. Look at the notebook: there is a figure at the very bottom. What this fit does is give you the slope of the lines as a function of the temperature (x axis) and current (y axis), and when we choose a temperature we are basically choosing a vertical line in the figure. That is ok if this temperature is close to the actual temperatures. In our trending pages we use a reference temperature for dark currents, but as things heat up we could increase that temperature, and then the whole thing would be better with a higher t_ccd_alpha without re-fitting the model.
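For concreteness, the optional argument under discussion might look something like the signature sketch below; this is a proposal sketch with assumed argument names, not the merged code.
def dark_temp_scale_img(img, t_ccd, t_ccd_ref, t_ccd_alpha=265.15):
    """Scale dark current image img from t_ccd to t_ccd_ref (sketch only).

    t_ccd_alpha is the reference temperature (K) at which the fitted slope
    is evaluated; the default preserves the current hard-coded behavior,
    and callers in a hotter regime could override it without refitting.
    """
    ...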
I don't mind making the change myself. Just give me a minute.
I don't mind either; I just mean that this PR is presently tested and reviewed, and you are talking about an incremental change. We don't get charged per PR.
As you prefer. I made another PR for that change and I do not see how adding the change to this PR changes anything in terms of effort. If you prefer to merge this and then start another review for my trivial change, that's your choice, but I think it's a waste of time. I say, merge that here and be done with it, unless you disagree with that change.
I'm with @jeanconn about leaving the 265.15 fixed in this PR. I approved this PR contingent on the functional validation provided by using it in the acdc processing. If any of those parameters are changed then we need to provide supporting validation that the model works just as well and document in the function docstring how the parameter change would impact results. And document why specifically changing the t_ccd_alpha value is needed.
ok, but I am very unhappy about this, so let me note a few things:
Requesting changes would have been motivated because:
Now instead you want to start a back and forth discussing these things. OK.
Regarding "I do not see tests about acdc plots after your commits. So I don't get how that contingent approval works then." that's a fair reviewer comment. I note that the simple tests I made passed first on the "pretty-much-straight" copy of the code and those tests passed again after Tom's changes, but I have not explicitly verified that the function returns the same values as the copies in acdc, so I should do that as functional verification in this PR. |
@javierggt - good points about the lack of tests for NaN and the lack of functional testing. I've addressed this with a new commit and updates to the description. I think that for the original goal of providing a new function that is the drop-in equivalent of what was in acdc, this is now in good shape. I still have questions about the impact and use case for adding a t_ccd_alpha argument.
Description
Add routine for more accurate dark scaling.
Interface impacts
None. If this function is used as a replacement for dark_temp_scale for scaling a dark current image, note the slower speed that is documented in the comments.
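A short usage sketch of the replacement described above; the import path and argument order are assumptions and should be checked against the docstrings.
import numpy as np
from chandra_aca.dark_model import dark_temp_scale, dark_temp_scale_img  # assumed import path

dark = np.random.lognormal(mean=2.0, sigma=1.0, size=(1024, 1024))

# Previous approach: one scalar factor applied to every pixel
scaled_scalar = dark * dark_temp_scale(-10.0, -5.0)

# New approach: each pixel scaled according to its own dark current level
scaled_per_pixel = dark_temp_scale_img(dark, -10.0, -5.0)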
Testing
Unit tests
Independent check of unit tests by [REVIEWER NAME]
Functional tests
Functional testing documented in https://github.com/sot/acdc/pull/86.