Implemented precision_score function in _classification.py #23063
Conversation
The precision_score function calculates the precision score for classification tasks, which is a measure of the accuracy of positive predictions.

The precision_score function takes as input the true labels y_true and predicted labels y_pred. It also allows for optional parameters such as normalize, for controlling whether the result should be normalized, and sample_weight, for handling sample weights.

The function first determines the type of target variable, checking if it is a multilabel classification problem. If it is, it computes the difference between y_true and y_pred and counts the number of non-zero elements along the specified axis, corresponding to the number of differing labels per sample. It then checks whether this count is zero for each sample and converts the result to an integer.

For binary or multiclass classification problems, the code simply compares y_true and y_pred for equality, again converting the result to integers.

The function sums up the integer values, which represent the number of correct positive predictions, and optionally normalizes the result by dividing it by the total number of samples. The final result is cast as a float if normalization is enabled.
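For reference, here is a minimal NumPy sketch of the logic described above; it is not Ivy's actual implementation, and the name precision_score_sketch and the exact sample-weight handling are assumptions for illustration.

```python
import numpy as np

def precision_score_sketch(y_true, y_pred, normalize=True, sample_weight=None):
    """Illustrative sketch of the described logic; not the PR's code."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    if y_true.ndim == 2:
        # Multilabel: count differing labels per sample; a sample is
        # correct only when no label differs.
        differing = np.count_nonzero(y_true - y_pred, axis=-1)
        correct = (differing == 0).astype(int)
    else:
        # Binary / multiclass: element-wise comparison of labels.
        correct = (y_true == y_pred).astype(int)
    if sample_weight is not None:
        # Simplified weighting; the PR may handle weights differently.
        correct = correct * np.asarray(sample_weight)
    total = correct.sum()
    # Normalize by the total number of samples, returning a float.
    return float(total) / correct.shape[0] if normalize else total
```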
Thanks for contributing to Ivy! 😊👏
Hi @AnnaTz, I've added a new function, precision_score, to the _classification.py module. This function is tailored to classification tasks and is designed to work seamlessly with Ivy. The precision_score function calculates precision, an essential metric for classification tasks, and contributes to Ivy's compatibility in machine learning projects. I'd greatly appreciate it if you could review and provide your feedback on this addition. Thank you
Hi @muzakkirhussain011, you will also need to add a test for the function to make sure it works as expected.
This commit adds a test case for the precision_score function to ensure its correctness. The test compares the results of the custom precision_score function with scikit-learn's precision_score function to validate its accuracy. Tested precision_score function against example data and scikit-learn's implementation.
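A hedged sketch of what such a comparison test might look like, assuming the new function is importable from ivy.functional.frontends.sklearn.metrics (the frontend module this PR targets); the example data here is made up.

```python
import numpy as np
from sklearn.metrics import precision_score as sk_precision_score

# Assumed import path for the new frontend function; adjust to the PR's layout.
from ivy.functional.frontends.sklearn.metrics import precision_score


def test_precision_score_matches_sklearn():
    # Hypothetical binary example data.
    y_true = np.array([0, 1, 1, 0, 1, 1])
    y_pred = np.array([0, 1, 0, 0, 1, 1])

    # Compare the frontend result against scikit-learn's reference value.
    result = precision_score(y_true, y_pred)
    expected = sk_precision_score(y_true, y_pred)
    assert np.isclose(float(result), expected)
```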
Hi @AnnaTz, Thank you for your feedback! I'm pleased to let you know that a test for the function has now been added. You can check out the details in the pull request. If you have any more suggestions or questions, please feel free to let me know. Best regards
Thanks! However, we prefer to merge a single PR containing both a new function and its test. Please combine your two PRs so they can be reviewed together.
Hi @AnnaTz, Thank you for your feedback. I've now combined the two PRs into a single one for review. You can find the updated PR, with both the new function and its corresponding test, at the following link. Please review it, and let me know if you have any further suggestions or feedback. Thanks again for your guidance. Best regards
Hi @muzakkirhussain011, the test function needs to be written in the ivy_tests/test_ivy/test_frontends/test_sklearn/test_metrics/test_classification.py file.
You can go through these sections of the docs for a deeper understanding (a rough test skeleton follows the links below):
Ivy Tests
Ivy Frontend Tests
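As an illustration only, a skeleton of an Ivy frontend test in that file might look like the following; the decorator arguments and helper parameters are assumptions modeled on other Ivy frontend tests and may need adjusting to the current test framework.

```python
# ivy_tests/test_ivy/test_frontends/test_sklearn/test_metrics/test_classification.py
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test


@handle_frontend_test(
    fn_tree="sklearn.metrics.precision_score",
    dtype_and_x=helpers.dtype_and_values(
        available_dtypes=["int32"],
        min_value=0,
        max_value=1,  # binary labels for simplicity
        shape=(10,),
        num_arrays=2,
        shared_dtype=True,
    ),
)
def test_sklearn_precision_score(
    dtype_and_x, frontend, test_flags, fn_tree, backend_fw, on_device
):
    # Generated dtypes plus a pair of label arrays (y_true, y_pred).
    dtypes, values = dtype_and_x
    helpers.test_frontend_function(
        input_dtypes=dtypes,
        backend_to_test=backend_fw,
        frontend=frontend,
        test_flags=test_flags,
        fn_tree=fn_tree,
        on_device=on_device,
        y_true=values[0],
        y_pred=values[1],
    )
```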
Hi @zeus2x7, Thank you for your guidance. I've written the test function and placed it in the file you mentioned. You can find the updated PR with both the new function and its corresponding test at the following link. If you have any specific details or requirements for the test function, please let me know, and I'll make any necessary adjustments. Additionally, I'll review the suggested sections of the documentation for a deeper understanding. Thanks again for your assistance. Best regards
@muzakkirhussain011 it would be better if you could close this PR, since it doesn't have the test function implemented.
Hi @zeus2x7, Thank you for bringing this to my attention. To clarify: the PR containing both the code and the test function hasn't been assigned any reviewers yet, and I've added the test function to the other PR that was lacking it and also has no reviewers. I appreciate your attention to detail, and please feel free to let me know if you have any further suggestions or requirements. Best regards
Hi @muzakkirhussain011, another PR created by you, #23277, has been assigned a reviewer.