Update docs/SOMATIC_SNV_BENCHMARK.md
Co-authored-by: Dan Miller <dmiller15@users.noreply.github.com>
migbro and dmiller15 authored Jun 28, 2024
1 parent 1bfd619 commit a423d9e
Showing 1 changed file with 6 additions and 6 deletions: docs/SOMATIC_SNV_BENCHMARK.md
@@ -13,12 +13,12 @@ BAM files relevant to our workflow (BWA-aligned) were called using our standard
1. Gold standard VCFs were downloaded from https://ftp-trace.ncbi.nlm.nih.gov/ReferenceSamples/seqc/Somatic_Mutation_WG/release/v1.2/ with [SNV](https://ftp-trace.ncbi.nlm.nih.gov/ReferenceSamples/seqc/Somatic_Mutation_WG/release/v1.2/sSNV.MSDUKT.superSet.v1.2.vcf.gz) and [INDEL](https://ftp-trace.ncbi.nlm.nih.gov/ReferenceSamples/seqc/Somatic_Mutation_WG/release/v1.2/sINDEL.MDKT.superSet.v1.2.vcf.gz) VCFs merged into a single VCF
1. The merged gold standard VCF was then subset to BWA hits and split into per-sample call sets by parsing the `calledSamples` `INFO` field (the first sketch after this list illustrates this merge-and-split step)
1. Next, a custom [benchmark script](https://github.com/kids-first/kfdrc-benchmark/blob/main/scripts/benchmark_vcf_calls.py) was run, which collected the following metrics (see the second sketch after this list):
- True Positive
- False Positive
- False Negative
- F1 Score
- Precision
- Accuracy
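
For illustration, here is a minimal pysam-based sketch of the merge-and-split flow in steps 1 and 2 above. It is a sketch under assumptions, not the repository's actual code: the output naming scheme is hypothetical, and it assumes that `calledSamples` lists the sample names each record was called in and that the two superset VCFs have compatible headers.

```python
# Minimal sketch of steps 1-2: stream the downloaded SNV and INDEL gold
# standard VCFs as one merged set, then route each record into per-sample
# call sets by parsing the `calledSamples` INFO field. Output file names
# are hypothetical; the repository's own scripts are authoritative.
import pysam

snv = pysam.VariantFile("sSNV.MSDUKT.superSet.v1.2.vcf.gz")
indel = pysam.VariantFile("sINDEL.MDKT.superSet.v1.2.vcf.gz")
writers = {}

# Assumes the two supersets share compatible headers; a real merge would
# also coordinate-sort the combined records afterwards.
for vcf in (snv, indel):
    for record in vcf:
        raw = record.info.get("calledSamples")
        if raw is None:
            continue
        # pysam returns list-typed INFO fields as tuples; a Number=1 string
        # field comes back as a single, possibly comma-joined, string.
        samples = list(raw) if isinstance(raw, tuple) else str(raw).split(",")
        for sample in samples:
            if sample not in writers:
                writers[sample] = pysam.VariantFile(
                    f"{sample}.goldstandard.vcf.gz", "wz", header=snv.header
                )
            writers[sample].write(record)

for writer in writers.values():
    writer.close()
```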

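A second minimal sketch shows how the metrics listed above follow from the raw counts. Precision and F1 are the standard formulas; the accuracy shown, TP / (TP + FP + FN), sidesteps true negatives (which are ill-defined for whole-genome variant calling) and is an assumption about what the script reports, not a confirmed detail of `benchmark_vcf_calls.py`.

```python
# Minimal sketch of the reported metrics from raw counts. Precision and F1
# are standard; the "accuracy" here uses TP / (TP + FP + FN) because true
# negatives are ill-defined for variant calling. This choice is an
# assumption, not necessarily what benchmark_vcf_calls.py computes.
def summarize(tp: int, fp: int, fn: int) -> dict:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # needed for F1
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {
        "True Positive": tp,
        "False Positive": fp,
        "False Negative": fn,
        "F1 Score": round(f1, 4),
        "Precision": round(precision, 4),
        "Accuracy": round(accuracy, 4),
    }

# Hypothetical counts, for illustration only.
print(summarize(tp=9500, fp=250, fn=400))
```
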
## Cell Line Consensus Call Method Benchmarking Results
Results were collated into the following tables, broken down by confidence level aggregations for the cell line data:
