
add separate fit and test functions #472

Merged: 7 commits into main from fit_predict on Dec 22, 2021
Conversation

@olliethomas (Member)

Added separate fit and predict methods for the InProcessing models.

This is mostly just splitting up what's already packaged together in various train_and_predict functions.
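To illustrate the shape of the split, here is a minimal, self-contained sketch. The class and type names (`DataTuple`, `Prediction`, `InAlgorithm`) mirror the ones visible in this PR, but the bodies, the `run` wrapper, and the `MajorityVote` toy model are hypothetical stand-ins, not the actual EthicML implementation:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


# Simplified stand-ins for illustration; the real EthicML types are richer.
@dataclass
class DataTuple:
    x: List[List[float]]
    y: List[int]


@dataclass
class Prediction:
    hard: List[int]


class InAlgorithm(ABC):
    """Base class where fit and predict are now separate steps."""

    @abstractmethod
    def fit(self, train: DataTuple) -> "InAlgorithm":
        """Train on `train` and return self so calls can be chained."""

    @abstractmethod
    def predict(self, test: DataTuple) -> Prediction:
        """Predict on a test set using the already-fitted model."""

    def run(self, train: DataTuple, test: DataTuple) -> Prediction:
        """The old train-and-predict behaviour, now a thin wrapper."""
        return self.fit(train).predict(test)


class MajorityVote(InAlgorithm):
    """Toy in-processing model: always predicts the majority class."""

    def __init__(self) -> None:
        self._majority: int = 0

    def fit(self, train: DataTuple) -> "MajorityVote":
        self._majority = max(set(train.y), key=train.y.count)
        return self

    def predict(self, test: DataTuple) -> Prediction:
        return Prediction(hard=[self._majority] * len(test.x))


train = DataTuple(x=[[0.0], [1.0], [2.0]], y=[1, 1, 0])
model = MajorityVote().fit(train)
# The same fitted model can now be evaluated on many different test sets:
for test in (DataTuple(x=[[3.0]], y=[0]), DataTuple(x=[[4.0], [5.0]], y=[1, 0])):
    print(model.predict(test).hard)
```

Keeping `run` as a wrapper over `fit(...).predict(...)` preserves the old one-shot behaviour while enabling the train-once, evaluate-many workflow mentioned below.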

The bit that needs double-checking is the async algos: a model save location now gets passed in, and the trained model is just dumped there. I'm not sure whether there's a better way to do this; I imagine there is.
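The handover described above can be sketched roughly as follows. This is a hypothetical illustration of the pattern, not the PR's actual code: the function names and the pickled payload are made up, and the real async algorithms run in a subprocess rather than in-process:

```python
import pickle
from pathlib import Path
from typing import Dict, List


def fit_and_save(params: Dict[str, float], save_path: Path) -> None:
    """Stand-in for fit(): 'train' a model and dump it to the given location."""
    trained = {"params": params, "weights": [0.1, 0.2]}  # fake training result
    with save_path.open("wb") as f:
        pickle.dump(trained, f)


def load_and_predict(save_path: Path, n_samples: int) -> List[float]:
    """Stand-in for predict(): load the dumped model and run inference."""
    with save_path.open("rb") as f:
        trained = pickle.load(f)
    return [sum(trained["weights"])] * n_samples  # fake inference


path = Path("model.pkl")
fit_and_save({"lr": 0.01}, path)
preds = load_and_predict(path, 2)
print(len(preds))
path.unlink()  # clean up the dumped model
```

The save-location argument is what lets fit and predict live in separate subprocess invocations: the only shared state is the serialized model on disk.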

There's now quite a lot of duplication; this should really be sorted out.

@olliethomas (Member, Author)

This is useful if you want to train a model once and then evaluate it on a bunch of different test sets.

@tmke8 (Member) commented Dec 22, 2021

As we discussed before, the whole async stuff should be ripped out anyway and replaced by something based on ray.

Apart from that, this seems fine.

```python
self.__is_fairness_algo = is_fairness_algo

@abstractmethod
def fit(self, train: DataTuple) -> InAlgorithm:
```
Member:

This would be the perfect opportunity to use -> Self but it's not widely supported yet: python/typeshed#6300
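For context, the common workaround before `Self` (PEP 673, added in Python 3.11) was a `TypeVar` bound to the base class, so that a subclass's `fit` is seen by the type checker as returning the subclass type. A minimal sketch, with a hypothetical `Blind` subclass for illustration:

```python
from abc import ABC, abstractmethod
from typing import List, TypeVar

# Pre-PEP-673 workaround: a TypeVar bound to the base class lets
# subclasses' fit() return their own type instead of the base type.
T = TypeVar("T", bound="InAlgorithm")


class InAlgorithm(ABC):
    @abstractmethod
    def fit(self: T, train: List[int]) -> T:
        """Train and return self so calls can be chained."""


class Blind(InAlgorithm):
    """Hypothetical subclass used only to demonstrate the typing."""

    def __init__(self) -> None:
        self.trained = False

    def fit(self, train: List[int]) -> "Blind":
        self.trained = True
        return self


model = Blind().fit([1, 2, 3])
print(type(model).__name__)  # prints "Blind"
```

With plain `-> InAlgorithm`, `Blind().fit(...)` would be typed as the base class and subclass-specific attributes would be invisible to the checker; the bound `TypeVar` (and later `Self`) fixes that.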

@olliethomas olliethomas merged commit b20421f into main Dec 22, 2021
@olliethomas olliethomas deleted the fit_predict branch December 22, 2021 17:24