Unable to reproduce Precision-Recall plot Supp. Fig. 5 #5
After looking more closely at the source code, as best I can tell the calculations of both precision and recall are not correct.

Line 5 in `bench.fdr.noselfhit.awk` outputs 6 column headings:

```awk
print "PREC_FAM","PREC_SFAM","PREC_FOLD","RECALL_FAM","RECALL_SFAM","RECALL_FOLD";
```

Line 37 in `bench.fdr.noselfhit.awk` outputs 7 columns:

```awk
NR % 1000 == 0 {
    print tp_fam/(tp_fam+fp), tp_sfam/(tp_sfam+fp), tp_fold/(tp_fold+fp),
          tp_fam / queries, tp_sfam / queries, tp_fold / queries, tp_fam;
}
```

Matching column headings to expressions in line 37:

1. `PREC_FAM` = `tp_fam/(tp_fam+fp)`
2. `PREC_SFAM` = `tp_sfam/(tp_sfam+fp)`
3. `PREC_FOLD` = `tp_fold/(tp_fold+fp)`
4. `RECALL_FAM` = `tp_fam / queries` (`queries` = constant = 3566)
5. `RECALL_SFAM` = `tp_sfam / queries`
6. `RECALL_FOLD` = `tp_fold / queries`

The definitions of precision and recall are (see e.g. https://scikit-learn.org/1.5/auto_examples/model_selection/plot_precision_recall.html):

```
# TP = number of true positive hits above the threshold
# FP = number of false positive hits above the threshold
# FN = number of false negatives at the threshold
Precision = TP/(TP + FP)
Recall    = TP/(TP + FN)
```

The calculation of precision appears to be wrong for family and superfamily because the single `fp` counter is shared across all three levels, even though a hit can be a false positive at the family or superfamily level without being one at the fold level. The calculation of recall appears to be wrong because the divisor should be (TP + FN) but is constant (always 3566).
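To make the contrast concrete, here is a minimal sketch in plain Python (not the repository's code) of the textbook formulas next to the script's recall denominator. The counts `tp`, `fp`, `fn` below are hypothetical; the `queries` constant is the 3566 from the script.

```python
# Textbook precision/recall vs. the script's recall denominator.
# TP, FP, FN here are hypothetical counts at some score threshold.

def precision(tp, fp):
    # Precision = TP / (TP + FP)
    return tp / (tp + fp)

def recall_textbook(tp, fn):
    # Recall = TP / (TP + FN); the denominator varies with the threshold
    return tp / (tp + fn)

def recall_script(tp, queries=3566):
    # bench.fdr.noselfhit.awk instead divides by the constant query count
    return tp / queries

tp, fp, fn = 1000, 50, 500
print(precision(tp, fp))        # 1000/1050
print(recall_textbook(tp, fn))  # 1000/1500
print(recall_script(tp))        # 1000/3566
```

The two recall values only agree when TP + FN happens to equal the number of queries, which in general it does not.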
Thanks for flagging this. We will have a look at it, but since the postdoc left the lab it might take some time. As for the TM-align results, we tried various methods to sort the hits. A reviewer recommended using the average of qTM and tTM scores, which indeed worked best. It's possible that the uploaded file does not reflect this averaging.
I have tried reproducing Supp. Fig. 5 using the scripts in this repository and also my own code but I get quite different results. For example, as shown in the figure above I find TM-align is better than DALI on Superfamily over the entire range, while your figure shows TM-align to be substantially worse (Fig. S5 on the left, my results on the right). My plot on the right was generated using your data and scripts as follows.
Hits downloaded from https://wwwuser.gwdguser.de/~compbiol/foldseek/scop.benchmark.result.tar.gz
Plot column `PREC_SFAM` (y axis) vs. `RECALL_SFAM` (x axis). As you can see, the plot for DALI looks right but TM-align looks very wrong.
Any help in resolving this discrepancy will be much appreciated.
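For reference, this is the kind of accumulation I used when reproducing the curve on my side: a hedged Python sketch (not the repository's code) of walking a ranked hit list at the family level with the textbook TP/(TP + FN) recall. The `hits` list and its label scheme are hypothetical.

```python
# Hypothetical sketch: precision/recall along a ranked hit list at the
# family level. Each hit is (query_family, target_family), and the list
# is assumed sorted best-score first. Not the repository's code.

def pr_curve(hits, total_positives):
    """total_positives = all true pairs for the queries, i.e. TP + FN."""
    tp = fp = 0
    curve = []
    for q_fam, t_fam in hits:
        if q_fam == t_fam:
            tp += 1
        else:
            fp += 1
        prec = tp / (tp + fp)
        rec = tp / total_positives  # textbook recall, not tp / n_queries
        curve.append((prec, rec))
    return curve

# Toy ranked list: three correct hits with one wrong hit in between.
hits = [("a.1", "a.1"), ("b.2", "b.2"), ("c.3", "d.4"), ("e.5", "e.5")]
print(pr_curve(hits, total_positives=5))
# → [(1.0, 0.2), (1.0, 0.4), (0.6666666666666666, 0.4), (0.75, 0.6)]
```

Dividing recall by a constant query count instead of `total_positives` rescales the x axis by a different factor for each method, which could by itself shift the curves relative to each other.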
Also, the calculation of precision and recall in `bench.fdr.noselfhit.awk` appears to apply corrections relative to the standard formulas, but I don't understand how it works. Can you clarify? In particular, what is the variable `norm` doing? Thanks!