On the inconsistency of Taxid and BERTax taxonomy labels and the calculation of evaluation metrics for AveP. #14
Sorry for the late answer.
Hope this helps!
I'm sorry, I think the
Hello! "The average precision was calculated based on micro-average Precision-Recall curves (sklearn.metrics.average_precision_score). For the accuracy, we used a balanced version due to unbalanced data: taking the mean over all superkingdom classes, as described in the paper. Additionally, there are also confusion matrices for everything here: https://github.com/f-kretschmer/bertax/tree/master/confusion_matrices."
Hi! Both the balanced accuracy calculation (sklearn.metrics.balanced_accuracy_score) and the average precision calculation (sklearn.metrics.precision_score) are used for all ranks.
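For anyone trying to reproduce these numbers, here is a minimal sketch of how the two sklearn metrics could be wired up, assuming hard labels for the balanced accuracy and per-class probabilities for the micro-averaged average precision. The class names, labels, and scores below are made up for illustration and are not from the BERTax data.

```python
# Minimal sketch, not the authors' exact evaluation script.
import numpy as np
from sklearn.metrics import balanced_accuracy_score, average_precision_score
from sklearn.preprocessing import label_binarize

classes = ["Archaea", "Bacteria", "Eukaryota", "Viruses", "unknown"]  # hypothetical superkingdom labels
y_true  = ["Bacteria", "Eukaryota", "Viruses", "Bacteria"]            # hypothetical ground truth
y_pred  = ["Bacteria", "Eukaryota", "Bacteria", "Bacteria"]           # hypothetical argmax predictions
y_score = np.array([                                                  # hypothetical class probabilities
    [0.05, 0.80, 0.05, 0.05, 0.05],
    [0.05, 0.05, 0.80, 0.05, 0.05],
    [0.10, 0.60, 0.10, 0.15, 0.05],
    [0.05, 0.70, 0.10, 0.10, 0.05],
])

# Balanced accuracy: mean recall over classes, robust to class imbalance.
print(balanced_accuracy_score(y_true, y_pred))

# Micro-averaged average precision over one-hot labels and probability scores.
y_true_bin = label_binarize(y_true, classes=classes)
print(average_precision_score(y_true_bin, y_score, average="micro"))
```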
In this table it is Average Precision (AveP), but we also have Precision-Recall plots, ROC curves, and balanced accuracy.
So comprehensive!!
The "final" dataset has a lot more data and also an additional output layer for "genus" prediction. Everything is detailed in the section "Performance of Final BERTax Model" in the PNAS Paper. See especially SFig. 2, which has a visualization trying to show why adding the genus layer leads to better performance. |
Hi!
I'm interested in your work and I'm trying to reproduce the results on the data you released, but I'm having some problems.
1. The released sequence data contains a `taxid` for each sequence, and I used NCBI to map these taxids to their taxonomic classification, so I obtained the corresponding taxonomic labels for each sequence (one possible mapping approach is sketched below). However, many of the labels obtained this way do not correspond to the labels used by the BERTax model (5 superkingdom, 44 phylum, and 156 genus classes), and I have corrected some of them manually. Although I managed to do this correction for the final dataset, the genus-level correction is difficult for the `similar dataset` and the `non-similar dataset`. I would like to ask: is this an inherent problem with the data, and is there a possible solution?
2. I would also like to ask whether the `Accuracy` and `AveP` metrics mentioned in the paper are accuracy and precision as we usually know them. Using `from sklearn.metrics import accuracy_score` and `from sklearn.metrics import precision_score`, is it possible to calculate the same metrics mentioned in the paper?

Thank you for your work.
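As a reference for the taxid mapping in point 1, one possible approach (not the authors' pipeline) is to use the ete3 toolkit, which keeps a local copy of the NCBI taxonomy. The library choice, the requested ranks, and the example taxid are assumptions for illustration only.

```python
# Hedged sketch: map an NCBI taxid to superkingdom/phylum/genus names with ete3.
from ete3 import NCBITaxa

ncbi = NCBITaxa()  # downloads/uses a local copy of the NCBI taxonomy database

def taxid_to_ranks(taxid, wanted=("superkingdom", "phylum", "genus")):
    """Return {rank: scientific name} for the requested ranks of a taxid."""
    lineage = ncbi.get_lineage(taxid)           # ancestor taxids, root to leaf
    ranks = ncbi.get_rank(lineage)              # {taxid: rank}
    names = ncbi.get_taxid_translator(lineage)  # {taxid: scientific name}
    return {rank: names[t] for t, rank in ranks.items() if rank in wanted}

# Example: 9606 is the taxid for Homo sapiens.
print(taxid_to_ranks(9606))  # {'superkingdom': 'Eukaryota', 'phylum': 'Chordata', 'genus': 'Homo'}
```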