This repository has been archived by the owner on Jun 21, 2023. It is now read-only.

Evaluation step TP53 classifier #385

Merged
22 commits, merged Jan 10, 2020

Conversation

kgaonkar6
Collaborator

@kgaonkar6 kgaonkar6 commented Dec 31, 2019

Purpose/implementation Section

What scientific question is your analysis addressing?

Evaluation of TP53/NF1 classifier scores

What was your approach?

Use TP53/NF1 inactivation scores generated for polya and stranded (run script for classifier #380)
Identify potential true TP53/NF1 losses from SNV (Get tp53 nf1 alt #381)
Run 02-evaluate-classifier.py to

  • convert SNV calls to a binary TP53/NF1 mutation status per sample
  • use the clinical sample_id column to match RNA-seq samples in the classifier score dataframe with the WGS/WES mutation dataframe
  • plot ROC curves for true calls and shuffled data
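The steps above can be sketched as follows. This is a minimal illustration, not the script's actual code; the column names (`sample_id`, `score`, `Hugo_Symbol`) and the helper name are assumptions for this sketch.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def evaluate_gene(scores_df, snv_df, gene="TP53"):
    """Merge RNA-seq classifier scores with DNA mutation status and compute AUROC."""
    # 1) collapse SNV calls to a per-sample binary status for the gene
    status = (snv_df.assign(hit=(snv_df["Hugo_Symbol"] == gene).astype(int))
                    .groupby("sample_id", as_index=False)["hit"].max())
    # 2) match RNA-seq classifier scores to WGS/WES status on sample_id
    merged = scores_df.merge(status, on="sample_id", how="inner")
    # 3) AUROC for the true (non-shuffled) scores
    return roc_auc_score(merged["hit"], merged["score"])
```

The same call on a shuffled score column would give the permutation baseline for the ROC comparison.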

What GitHub issue does your pull request address?

#165

Directions for reviewers. Tell potential reviewers what kind of feedback you are soliciting.

Which areas should receive a particularly close look?

In 02-evaluate-classifier.py I'm using sample_id to match RNA-seq samples from the classifier scores with WGS/WES samples from the SNV data (TP53/NF1 loss binary status). Does that sound reasonable?

Other than that, I've made minimal changes to get_roc_plot() to fit our data and folder structure: it reads in either the stranded or polya dataset and plots polya_NF1.png & polya_TP53.png or stranded_NF1.png & stranded_TP53.png, respectively.

Is there anything that you want to discuss further?

Are any other plots/tables required here? We are only plotting ROC curves for TP53 and NF1 in this PR.

Is the analysis in a mature enough form that the resulting figure(s) and/or table(s) are ready for review?

Yes

Results

What types of results are included (e.g., table, figure)?

figure

What is your summary of the results?

The TP53 classifier achieves ~0.8 AUROC in both the stranded and polya data.
The NF1 classifier achieves 0.7 and 0.92 AUROC in the stranded and polya data, respectively.

The count tables for loss status (1 = has a coding SNV alteration in the gene, 0 = does not) are:

Stranded data:
  TP53: 0 = 933, 1 = 81
  NF1:  0 = 978, 1 = 36

Polya data:
  TP53: 0 = 30, 1 = 28
  NF1:  0 = 53, 1 = 5
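As a sketch, these count tables are what a pandas value_counts on the binary status column would produce; the column name "status" is an assumption for this illustration, not necessarily the script's.

```python
import pandas as pd

# Illustrative reconstruction of the stranded TP53 count table above
# (933 samples without a coding SNV alteration, 81 with one).
status = pd.Series([0] * 933 + [1] * 81, name="status")
counts = status.value_counts().sort_index()
```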

Corrected "damaging" to "coding"

Reproducibility Checklist

  • The dependencies required to run the code in this pull request have been added to the project Dockerfile.
  • This analysis has been added to continuous integration.
  • This analysis is recorded in the table in analyses/README.md.

@kgaonkar6 kgaonkar6 changed the title Evaluation step Evaluation step TP53 classifier Dec 31, 2019
@jaclyn-taroni jaclyn-taroni mentioned this pull request Dec 31, 2019
3 tasks
@jharenza jharenza self-requested a review January 4, 2020 01:39
Collaborator

@jharenza jharenza left a comment


@kgaonkar6 this looks much better; however, I'm still concerned about NF1 stranded and think the selection of mutations will improve this, see #381.

I also noticed ras scores and ras shuffle scores are still being produced by the apply-classifier step. We should remove those, as we will not be using them; I think we should modify the code from PR #380.

Agree with @jaclyn-taroni to remove clinical from this step.

Can you change seqnames to chr or the like for column 1 of TP53_NF1_snv_alteration.tsv?

@kgaonkar6
Collaborator Author

@kgaonkar6 this looks much better, however still concerned about NF1 stranded, and think the selection of mutations will improve this, see #381.

I've added the newly generated TP53_NF1_snv_alteration.tsv and merged the clinical file here with only the IDs and no clinical information as suggested in #381

I also noticed ras scores and ras shuffle scores still being produced from the apply classifier step, and we should remove those, as we will not be using them - i think we should modify the code from this PR: #380

Oh yes, I'll remember to edit 01 to remove Ras in another PR, and add all the files from this analysis to the run script as well.

Collaborator

@gwaybio gwaybio left a comment


Couple of minor inline comments, looking good so far!

One additional note:

Holy smokes! The NF1 classifier is working extremely well for poly-a samples. Perhaps suspiciously well... Is there documentation in this repo of how poly-a samples were collected (vs. stranded)? Is this just a matter of sample prep? All other ROC curves are behaving as expected :)

Also, as @jharenza described in #381 (comment), defining aberrant lesions is extremely important. Was there any difference in how these lesions were defined for poly-a vs. stranded?

The classifier was originally trained by casting a wide net on loss-of-function. Here is the key piece:

[figure: loss-of-function definition from Way et al. 2018]

Gregory Way, 2018
Modified by Krutika Gaonkar for OpenPBTA, 2020

In the following notebook I evaluate the predictions made by the NF1 and TP53 classifiers in the input PDX RNAseq data.
Collaborator


No longer PDX right?

Collaborator


probably worth revising this docstring

@@ -8,6 +8,9 @@ Now published in [Rokita et al. _Cell Reports._ 2019.](https://doi.org/10.1016/j
In brief, _TP53_ inactivation, _NF1_ inactivation, and Ras activation classifiers are applied to the stranded and polya OpenPBTA RNA-seq data.
The classifiers were trained on TCGA PanCan data ([Way et al. _Cell Reports._ 2018](https://doi.org/10.1016/j.celrep.2018.03.046), [Knijnenburg et al. _Cell Reports._ 2018.](https://doi.org/10.1016/j.celrep.2018.03.076)).
See [`01-apply-classifier.py`](01-apply-classifier.py) for more information about the procedure.
To evaluate the classifier scores [`02-evaluate-classifier.py`](02-evaluate-classifier.py) uses SNV data to identify true TP53/NF1 loss samples and compares scores of shuffled data to true calls and plots ROC curves.
Collaborator


Suggested change
To evaluate the classifier scores [`02-evaluate-classifier.py`](02-evaluate-classifier.py) uses SNV data to identify true TP53/NF1 loss samples and compares scores of shuffled data to true calls and plots ROC curves.
To evaluate the classifier scores, we use [`02-evaluate-classifier.py`](02-evaluate-classifier.py) and input SNV data to identify true TP53/NF1 loss samples, compare scores of shuffled data to true calls, and plot ROC curves.

@jaclyn-taroni
Member

Thanks for the review @gwaygenomics! To answer your methodological questions:

Holy smokes! The NF1 classifier is working extremely well for poly-a samples. Perhaps suspiciously well... Is there documentation in this repo how poly-a samples were collected (vs. stranded?). Is this just a matter of sample prep? All other ROC curves are behaving as expected :)

Here's the nucleic acids prep section of the manuscript: https://github.com/AlexsLemonade/OpenPBTA-manuscript/blob/master/content/03.methods.md#nucleic-acids-extraction-and-library-preparation

Was there any difference in how these lesions were defined poly-a vs. stranded?

How aberrant lesions were defined was the same for both. Specifically, we filtered the MAF file to only include SNVs in coding regions:

# filter the MAF data.frame to only include entries that fall within the

And for NF1 specifically, we subsequently filter out any mutations that are classified as silent, classified as missense (because they are not in OncoKB: #381 (comment)), or in introns:

nf1_coding <- coding_consensus_snv %>%

I dug into whether or not the difference could be due to the experimental strategy (e.g., WGS vs. WXS), but all samples with NF1 alterations that pass this filter are WGS. However, there are only 4 samples with NF1 alterations in the poly-A dataset. So perhaps that could be what's going on - thoughts?
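For illustration, the NF1 filter described above can be sketched in pandas (the repository's actual implementation is in R/dplyr). `Variant_Classification` is the standard MAF column; the exact excluded class labels here are assumptions mirroring the description (silent, missense, intronic).

```python
import pandas as pd

# Variant classes to drop for NF1, per the discussion above (labels assumed).
EXCLUDED = {"Silent", "Missense_Mutation", "Intron"}

def filter_nf1_coding(coding_consensus_snv: pd.DataFrame) -> pd.DataFrame:
    """Keep only NF1 entries whose variant class is not silent/missense/intronic."""
    nf1 = coding_consensus_snv[coding_consensus_snv["Hugo_Symbol"] == "NF1"]
    return nf1[~nf1["Variant_Classification"].isin(EXCLUDED)]
```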

@jharenza
Collaborator

jharenza commented Jan 9, 2020

Thanks for the comments @gwaygenomics!

We did not use NF1 CNVs here, as we are concerned about using the calls in the absence of consensus calls - #128. Maybe we should hold on this for NF1 until that file is released and assess deep loss as an additional true alteration?

One thing about the polyA/stranded discrepancies: in the current v12 release (#347), the stranded cohort is not entirely pure, in that we discovered our latest batch of stranded samples was both polyA and stranded. I wonder if this is having an effect...

@jaclyn-taroni
Member

One thing about the polyA/stranded discrepancies, in the current v12 release (#347), the stranded is not entirely a pure cohort, in that we discovered our latest batch of stranded was both polyA and stranded. I wonder if this is having an effect..

I think the class imbalance (low number of positives) for NF1 means that a ROC curve might be the wrong measure/display item here.
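To illustrate the class-imbalance point with synthetic data (not project data): under heavy imbalance the ROC baseline stays at 0.5, while the precision-recall baseline drops to the positive prevalence, so average precision is often the more honest display.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Synthetic scores: 5 positives vs. 500 negatives, roughly like the NF1 case.
rng = np.random.default_rng(0)
y_true = np.array([1] * 5 + [0] * 500)
scores = np.concatenate([rng.normal(0.7, 0.10, 5),    # positives score higher
                         rng.normal(0.4, 0.15, 500)])  # negatives score lower
auroc = roc_auc_score(y_true, scores)
auprc = average_precision_score(y_true, scores)
# a random classifier would get AUROC ~0.5 but AUPRC ~5/505, about 0.01
```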

Maybe we should hold on this for NF1 until that file is released and assess deep loss as an additional true alteration?

My preference would be to get this merged and document that we need to make this update.

Collaborator

@jharenza jharenza left a comment


Note to add CNVs to NF1 classifier after consensus files are released.
