feat: Implement precision_score function and test aligned with sklearn metrics #28407
Conversation
func_wrapper.py is a Python module designed to streamline the integration of Hugging Face Transformers into your natural language processing (NLP) projects. It provides a set of input and output conversion wrappers to simplify passing data between your custom functions and Transformers' data structures.

Input Conversion Wrappers:
- inputs_to_transformers_tensors: Converts input data (text, tensors, etc.) into Transformers-compatible data structures. It is particularly useful when your custom functions expect diverse input types.

Output Conversion Wrappers:
- outputs_to_pytorch_tensors: After your custom function returns data, this wrapper ensures the output is converted into PyTorch tensors or other appropriate formats.

Usage:
1. Import func_wrapper.py into your project.
2. Initialize a Hugging Face Transformers model and tokenizer.
3. Wrap your custom function with to_transformers_tensors_and_back. The wrapped function can now accept and return Transformers-compatible data.

Here's a simple example of how to use func_wrapper.py:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer
from ivy.functional.frontends.transformers.func_wrapper import (
    to_transformers_tensors_and_back,
)

# Initialize the model and tokenizer
model_name = "bert-base-uncased"
model = BertForSequenceClassification.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)

# Wrap your custom function using the conversion wrappers
wrapped_function = to_transformers_tensors_and_back(your_function, model, tokenizer)

# Prepare sample input data
sample_input_text = "This is a sample input text."
sample_input_tensor = torch.rand((3, 3))

# Call your wrapped function with the sample input data
output = wrapped_function(sample_input_text, sample_input_tensor)

# The output is automatically converted to PyTorch tensors
print(output)
```

Please note that func_wrapper.py is still in development, and further enhancements and refinements are expected. Feedback and contributions to improve its functionality are welcome.
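For intuition, here is a rough, self-contained sketch of the kind of conversion such a wrapper might perform. The names below are stand-ins: the toy tokenizer and the simplified wrapper are illustrative assumptions, not the actual func_wrapper.py code, which works with real Hugging Face tokenizers and PyTorch tensors.

```python
# Illustrative sketch only: not the real to_transformers_tensors_and_back.
# Strings are "tokenized" into id lists; outputs are normalized to lists,
# standing in for the PyTorch tensor conversion the real module performs.

def to_transformers_tensors_and_back_sketch(fn, tokenizer):
    def wrapped(*args):
        # Convert string inputs via the tokenizer; pass other types through.
        converted = [tokenizer(a) if isinstance(a, str) else a for a in args]
        out = fn(*converted)
        # Normalize the output container (stand-in for tensor conversion).
        return out if isinstance(out, list) else list(out)
    return wrapped

def toy_tokenizer(text):
    # Toy whitespace tokenizer standing in for a Hugging Face tokenizer.
    vocab = {}
    return [vocab.setdefault(tok, len(vocab)) for tok in text.lower().split()]

double_ids = to_transformers_tensors_and_back_sketch(
    lambda ids: [i * 2 for i in ids], toy_tokenizer
)
print(double_ids("hello world hello"))  # [0, 2, 0]
```

The point of the pattern is that the wrapped function never sees raw strings, only the converted representation, which mirrors how the real wrapper shields custom functions from Transformers' data structures.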
PR Compliance Checks
Thank you for your Pull Request! We have run several checks on this pull request in order to make sure it's suitable for merging into this project. The results are listed in the following section.
Issue Reference
In order to be considered for merging, the pull request description must refer to a specific issue number. This is described in our contributing guide and our PR template.
This check is looking for a phrase similar to: "Fixes #XYZ" or "Resolves #XYZ" where XYZ is the issue number that this PR is meant to address.
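A check like the one described above can be approximated with a small regex. This is a rough sketch of the kind of pattern such a bot might apply; the actual regex used by the compliance check is not shown in this thread, so the pattern and function name here are assumptions.

```python
import re

# Matches phrases like "Fixes #123" or "Resolves #123" (case-insensitive).
ISSUE_REF = re.compile(r"\b(?:fixes|resolves|closes)\s+#(\d+)", re.IGNORECASE)

def find_issue_reference(description):
    """Return the referenced issue number, or None if the PR body lacks one."""
    match = ISSUE_REF.search(description)
    return int(match.group(1)) if match else None

print(find_issue_reference("This PR fixes #1234."))  # 1234
print(find_issue_reference("No reference here"))     # None
```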
Hi @NripeshN , I have completed the backend tests, and I’m happy to report that all tests are passing successfully. I’ve attached a screenshot of the test results for your reference. Could you please prioritize the review of this pull request at your earliest convenience? Your timely feedback will be greatly appreciated.
Hi @Ishticode , I have completed the backend tests, and all tests are passing successfully. I’ve attached a screenshot of the test results for your reference. Could you please prioritize the review of this pull request at your earliest convenience? Your timely feedback will be greatly appreciated.
Hi @Ishticode , The backend tests for this PR are successful. Please review it at your convenience. Your feedback would be invaluable. Best Regards
Hi @zeus2x7 , I've submitted a new pull request for the precision score, which builds upon the recall score feature you reviewed earlier. All backend tests are passing, and I've attached a screenshot for your reference. Given your familiarity with the related work, your review would be highly beneficial and likely more efficient. Could you please take a moment to review it? Thank you for your help!
Hi @Ishticode , I hope you're doing well. I wanted to follow up on the pull request I submitted three days ago regarding the precision score. I understand you might be busy, but I would appreciate it if you could take a look when you have a chance. Your review is important to me, and I'm looking forward to your feedback. Thank you for your time and assistance. Best regards,
Hi @muzakkirhussain011
Thanks for the PR. I appreciate and understand your passion.
There are some failures for your test here in the logs: https://github.com/unifyai/ivy/actions/runs/8037964061/job/21953440940?pr=28407
Under the combined test results, it shows indentation errors for test_sklearn_precision_score,
which might be due to mixing spaces and tabs. Could you ensure this is fixed and then request my review? Also, feel free to ask any questions. Thanks.
P.S. I am not sure why the tests are passing locally but failing on CI. Do you have some fix locally that you have not pushed to your fork?
Hi @Ishticode
Hi @muzakkirhussain011
Thanks for the review request. Are you sure you can't see the failures in the updated logs here https://github.com/unifyai/ivy/actions/runs/8063721539/job/22026608952?pr=28407 under Combined Test Results?
It shows failures for dtype mismatches etc., which should also appear on your local runs; it's pretty strange if they don't with enough examples. Could you also please untangle this change from recall_score if possible? Thanks :)
for sklearn_metrics precision_score and recall_score
Thanks for the PR. Looks good. I will make a further check and merge. Thanks for your patience.
Hi @Ishticode , |
Hi @muzakkirhussain011 |
PR Description
Title: Implement precision_score Function and Tests
Description: This pull request adds the precision_score function to the Ivy framework’s metrics module, offering a standardized way to evaluate the precision of classification models. The function is compatible with scikit-learn’s precision score, ensuring users can expect similar behavior and results.
Key Features:
Compatibility with scikit-learn: The function adheres to the interface and output of scikit-learn’s precision_score, facilitating ease of use for those familiar with the popular machine learning library.
Support for Various Averaging Methods: Includes support for binary, micro, macro, samples, and weighted averaging methods to accommodate different classification tasks.
Extensive Testing: Accompanying the implementation are thorough tests that validate the function against a range of input scenarios, comparing outputs with scikit-learn to ensure accuracy and reliability.
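The averaging methods listed above can be illustrated with a minimal NumPy sketch. This is not the Ivy implementation; the function name, signature, and edge-case handling below are illustrative assumptions, while the real function is expected to match scikit-learn's semantics.

```python
import numpy as np

# Illustrative sketch only (not the Ivy implementation).
def precision_score_sketch(y_true, y_pred, average="binary", pos_label=1):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = (np.array([pos_label]) if average == "binary"
              else np.unique(np.concatenate([y_true, y_pred])))
    # Per-class true positives and predicted positives
    tp = np.array([np.sum((y_pred == c) & (y_true == c)) for c in labels])
    pp = np.array([np.sum(y_pred == c) for c in labels])
    if average == "micro":
        # Pool counts across all classes before dividing.
        return tp.sum() / pp.sum()
    # Per-class precision, with 0.0 where a class was never predicted.
    per_class = np.divide(tp, pp, out=np.zeros(len(labels)), where=pp > 0)
    if average == "binary":
        return per_class[0]
    if average == "macro":
        return per_class.mean()
    if average == "weighted":
        support = np.array([np.sum(y_true == c) for c in labels])
        return np.average(per_class, weights=support)
    return per_class  # average=None: per-class scores
```

For simple inputs such as `precision_score_sketch([0, 1, 1, 0], [0, 1, 0, 0], average="macro")`, this sketch agrees with sklearn.metrics.precision_score on the same arguments.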
Benefits:
Enhances the Ivy framework’s metrics module with a crucial evaluation metric.
Provides users with a tool to assess model performance, particularly where false positives have significant implications.
Encourages consistent evaluation practices across different frameworks.
The implementation details and test cases are documented within the code. Reviewers are invited to examine the changes and provide feedback to ensure the highest quality and utility of the function.
Closes #
Checklist