Extracting feature importances #71
Description

I have been trying to produce the feature importances from a classifier (RandomForest). The feature_importances function doesn't take arguments, so the MLPipeline doesn't execute (TypeError: 'numpy.ndarray' object is not callable).

What I Did

I left the arguments empty in the produce part of the primitive definition file.
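For context, here is a minimal reproduction of the error outside of MLBlocks, assuming the primitive points at scikit-learn's RandomForestClassifier (the synthetic dataset is only illustrative). feature_importances_ is a numpy.ndarray attribute of the fitted model, so annotating it as a callable produce method makes the pipeline try to call an array:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=100, n_features=5, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X, y)

# feature_importances_ is an ndarray attribute, not a method:
print(model.feature_importances_)

# Treating it as a callable reproduces the reported error:
model.feature_importances_()  # TypeError: 'numpy.ndarray' object is not callable
```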
Comments

Thanks for reporting this @faisalaleissa. However, I think that the problem is that feature_importances_ is an attribute of the fitted RandomForest, not a method, so it cannot be used directly as a produce method. In order to do this, the only option that you have is to write a Python primitive in MLPrimitives that in its fit method calls the RandomForest fit method, and in its produce method just takes the feature_importances_ attribute from the RandomForest object and returns it.

However, a question arises: what do you intend to do with the feature importances? If what you want to do is feature selection based on them, I recommend you to have a look at the SelectFromModel class from scikit-learn and the corresponding MLBlocks integration.
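A rough sketch of the wrapper primitive described above, under the assumption that a plain class with fit and produce methods is registered as the primitive; the class name RandomForestFeatureImportances is hypothetical, not an existing MLPrimitives primitive:

```python
from sklearn.ensemble import RandomForestClassifier


class RandomForestFeatureImportances:
    """Hypothetical wrapper primitive (illustrative, not part of MLPrimitives).

    fit delegates to RandomForestClassifier.fit, and produce returns the
    fitted feature_importances_ attribute instead of trying to call it.
    """

    def __init__(self, **hyperparameters):
        self._model = RandomForestClassifier(**hyperparameters)

    def fit(self, X, y):
        self._model.fit(X, y)

    def produce(self):
        # Return the ndarray attribute as the primitive's output.
        return self._model.feature_importances_
```

The JSON annotation would then declare fit and produce as methods of this class, the same way other class-based primitives are annotated.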
Thanks for the quick reply. I'm working on a classifier reporting library in which we compute metrics and evaluate classifiers. One part is ranking the features that are most influential for the classifier. I think the feature selector you mentioned will be suitable. Thanks again.
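For the reporting use case, a minimal standalone sketch of what the suggested selector provides, using plain scikit-learn on synthetic data (everything here is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Synthetic data stands in for the real classification task.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

selector = SelectFromModel(RandomForestClassifier(random_state=0))
selector.fit(X, y)

# Boolean mask of the features kept by the importance threshold.
print(selector.get_support())

# A full ranking comes from the fitted estimator's importances.
importances = selector.estimator_.feature_importances_
print(importances.argsort()[::-1])  # feature indices, most important first
```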