Add a way to record features extracted during optimisations #215
Comments
It would be very nice to have this enhancement, for example to analyse how features extracted from good models compare to the experimental ones.
I will see when I have time to implement this. It won't be trivial, because it involves some changes to the current user-facing API. I'll try to implement it in a way that doesn't affect existing scripts.
Ok, I see. Another solution would be to re-run the models (>10, potentially hundreds of parameter combinations). Maybe this can be done efficiently in parallel (using multiprocessing or ipyparallel), re-using the existing code.
Yes, for now that might be the best solution. You could use the ipyparallel map function for that; you would only have to pass a function that returns the feature values instead of the scores.
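A minimal sketch of that workaround, assuming a user-defined helper that re-uses the existing evaluator setup; `build_evaluator`, `extract_feature_values` and `load_selected_parameter_sets` are hypothetical placeholders for code from the user's own optimisation script, not part of the existing API:

```python
import ipyparallel as ipp

def compute_feature_values(param_values):
    # Hypothetical user-side function: re-use the existing evaluator setup,
    # run the protocols for this parameter set, and return the raw eFeature
    # values (e.g. a dict feature_name -> value) instead of the scores.
    evaluator = build_evaluator()             # hypothetical: existing setup code
    responses = evaluator.run_protocols(      # assumed: same protocol run used by the optimisation
        evaluator.fitness_protocols.values(), param_values)
    return extract_feature_values(responses)  # hypothetical extraction step

# Parameter sets to re-evaluate, e.g. the best models from the optimisation
param_sets = load_selected_parameter_sets()   # hypothetical

rc = ipp.Client()                  # connect to a running ipyparallel cluster
view = rc.load_balanced_view()
# map_sync distributes the re-evaluations over the engines and returns
# one feature-value dict per parameter set
feature_values = view.map_sync(compute_feature_values, param_sets)
```

Note that the helper and anything it imports must be importable on the ipyparallel engines as well.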
@DrTaDa, could you have a look at this? It's an often-requested feature. Not urgent though.
Does #350 fully answer what was requested here?
At the moment only the scores of the parameters are returned to the master process. We should also have a way to get the raw eFeature values out, which would be useful for e.g. sensitivity analysis.
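As a rough illustration of what is being asked for, a hedged sketch of an evaluation step that would hand the raw feature values back to the master process together with the scores; `evaluate_with_features` and `extract_feature_values` are hypothetical names, not existing API:

```python
def evaluate_with_features(evaluator, param_values):
    # Hypothetical sketch of the requested behaviour: evaluate one parameter
    # set and return the raw eFeature values alongside the scores, so the
    # master process can collect them during the optimisation.
    responses = evaluator.run_protocols(      # assumed: the protocol run the optimisation already performs
        evaluator.fitness_protocols.values(), param_values)
    scores = evaluator.fitness_calculator.calculate_scores(responses)  # assumed existing scoring step
    features = extract_feature_values(responses)  # hypothetical: {feature_name: raw value}
    return scores, features
```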