RepliTimer processes short whole-genome sequencing reads from the G1- and S-phase cells of any organism into replication timing values. We describe each of the individual data processing steps, and we provide a Snakemake pipeline, with clearly defined dependencies and Anaconda environments, that automates them. A compact dataset is included in the repository for testing the pipeline, and an example shows how RepliTimer can be used to process publicly available replication timing data from zebrafish.
- Requirements
- Description of individual steps in pipeline
- Step-by-step instructions on running Snakemake pipeline
- Output structure
- Examples
- References
Requirement | Description |
---|---|
Raw RT data | Separate .fastq files from paired-end whole-genome sequencing of S-phase cells and matched G1 cells |
Indexed genome | Your genome of interest, indexed for bwa |
RT windows | .bed file with coordinates of replication timing windows. An RT windows file for the zebrafish GRCz11 genome is provided. |
Blacklisted regions | .bed file with blacklisted regions. A blacklist file for zebrafish GRCz11 is provided. |
Greylisted regions | .bed file with greylisted regions. A greylist file for zebrafish GRCz11 that works well for the Tab5 strain is provided. |
Software | fastp/0.23.2, bwa/0.7.15, samtools/1.14, picard/2.21.2, bamtools/2.5.1, bedtools/2.30.0, R/4.1.2-mkl |
R Packages | GenomicRanges, SummarizedExperiment, pspline, bamsignals, matrixStats |
Requirement | Description |
---|---|
Software | miniconda/4.11.0 (for Conda environments), python/3.10.2, numpy/1.22.2, pandas/1.4.1, snakemake-minimal/7.3.1 |
Snakemake config file | .yml file with snakemake parameters in the config/ directory |
Snakemake cluster config file | .yml file with cluster configuration parameters in the config/ directory |
The data are processed according to Siefert, 2017 and Siefert, 2018, with some modifications. Paired-end sequencing reads are trimmed with fastp and then aligned to the genome using BWA-MEM. Aligned reads are filtered using picard tools, bamtools, and bedtools. Reads are counted in user-selected replication timing windows with a custom R script that uses the bamsignals package. The S/G1 quotients are calculated for each window with a custom R script and then smoothed with a cubic smoothing spline using the pspline R package. The smoothed quotients are then transformed into log2 values or z-scores. If replicate data are provided, median values are calculated. Individual and median data for all samples are provided in an R RangedSummarizedExperiment (.rds) file and in individual .bedgraph files.
# trim and quality-filter the paired .fastq files with fastp
fastp \
-i {input.fq1} \
-I {input.fq2} \
-o {output.trimmed1} \
-O {output.trimmed2} \
-h {output.fastp_report} \
--json {output.fastp_json} \
-R "{wildcards.sample}" \
-w {params.threads}
# align pair of .fastq files to the genome and convert the .sam output to .bam
bwa mem \
-M \
-t {params.threads} \
{params.genome} \
{input.R1} {input.R2} | \
samtools sort \
-@ {params.threads} > {output.bam}
# index the sorted .bam file, writing the index to {output.bai}
samtools index \
-@ {params.threads} \
{output.bam} {output.bai}
# mark duplicates with picard; strip the trailing unit from the memory
# string (e.g. "24G" -> "24") to get a numeric RAM value
JAVA_MEM_OPTS={params.memory}
RAM=$(echo $JAVA_MEM_OPTS | rev | cut -c2- | rev)
picard MarkDuplicates \
COMPRESSION_LEVEL=9 \
VALIDATION_STRINGENCY=LENIENT \
MAX_RECORDS_IN_RAM=$((200000*RAM)) \
CREATE_INDEX=true \
I={input.bam} \
O={output.marked_bam} \
M={output.marked_metrics}
samtools index {output.marked_bam}
# keep only high-quality reads: mapped, properly paired, primary,
# non-duplicate alignments that pass QC and exceed the mapQ cutoff
bamtools filter \
-in {input.marked_bam} \
-forceCompression \
-mapQuality '{params.mapQ}' \
-isDuplicate false \
-isFailedQC false \
-isMapped true \
-isMateMapped true \
-isPaired true \
-isPrimaryAlignment true \
-isProperPair true \
-out {output.filtered_bam}
samtools index {output.filtered_bam}
# remove reads overlapping greylisted (high-count) regions, then reads with
# at least 30% overlap with blacklisted (satellite) regions
bedtools intersect \
-v \
-split \
-abam {input.filtered_bam} \
-b {params.HiCount_BED_FILENAME} > {input.filtered_bam}_FILTERED1.BAM
bedtools intersect \
-v \
-split \
-f 0.3 \
-abam {input.filtered_bam}_FILTERED1.BAM \
-b {params.Sat_BED_FILENAME} > {output.doubleFiltered_bam}
rm {input.filtered_bam}_FILTERED1.BAM
samtools index {output.doubleFiltered_bam}
# count reads in the user-selected RT windows using a custom R script
Rscript \
workflow/scripts/CountReadsOverBed_ver02.R \
{params.RT_windows} \
{input.doubleFiltered_bam} \
{output.counts}
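The counting logic lives in CountReadsOverBed_ver02.R. As a minimal sketch of window counting with bamsignals, something like the following; the file paths and paired-end setting are illustrative assumptions, not necessarily the script's exact choices.

```r
# Sketch only: count read pairs in RT windows with bamsignals.
library(GenomicRanges)  # attaches IRanges
library(bamsignals)

# BED intervals are 0-based, half-open; GRanges are 1-based
bed <- read.table("resources/RTWindows_danRer11_noAlts_ver01.bed", sep = "\t")
windows <- GRanges(seqnames = bed[[1]],
                   ranges = IRanges(start = bed[[2]] + 1, end = bed[[3]]))

# count each properly paired fragment once, at its midpoint; the .bam must
# be indexed (the pipeline indexes every .bam it writes); path is illustrative
counts <- bamCount("results/doubleFilteredBams/sample_S.bam", windows,
                   paired.end = "midpoint", verbose = FALSE)
```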
# merge the per-sample counts into a single table: write a header line for
# each sample, extract its counts column, and paste the columns together
SAMPLES=( {params.sample_list} )
BEDGRAPHS=( {params.counts_bedgraphs_list} )
TEMP_COUNTS=( {params.counts_temp_list} )
mkdir -p results/temp_merge
for i in ${{!BEDGRAPHS[@]}}; do
echo -e "\t\t\t${{SAMPLES[i]}}" > ${{TEMP_COUNTS[i]}}.header
cut -f4 ${{BEDGRAPHS[i]}} > ${{TEMP_COUNTS[i]}}.counts
cat ${{TEMP_COUNTS[i]}}.header ${{TEMP_COUNTS[i]}}.counts > ${{TEMP_COUNTS[i]}}
rm ${{TEMP_COUNTS[i]}}.header
rm ${{TEMP_COUNTS[i]}}.counts
done
paste {params.counts_temp_list} > {output.merged}
rm -rf results/temp_merge/
# assemble the merged counts into a RangedSummarizedExperiment (.rds)
Rscript workflow/scripts/MakeCountsRSE_ver01.R \
{output.merged} \
{params.samples_table} \
{params.RT_windows} \
{output.rse_counts}
# calculate S/G1 quotients, smooth them, and scale to log2 and z-score values
Rscript workflow/scripts/CalculateQuotientsSmoothScale_ver01.R \
{input.rse_counts} \
{output.rse_processed}
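The quotient, smoothing, and scaling logic lives in CalculateQuotientsSmoothScale_ver01.R. The sketch below shows the underlying computation for one S/G1 sample pair on a single chromosome; the function name and spline settings are illustrative, not the script's exact code.

```r
# Sketch only: per-window S/G1 quotients, cubic smoothing spline, scaling.
library(pspline)

rt_transform <- function(s_counts, g1_counts, window_midpoints) {
  # depth-normalize each library, then take the per-window S/G1 quotient
  quotients <- (s_counts / sum(s_counts)) / (g1_counts / sum(g1_counts))

  # cubic smoothing spline along the chromosome (norder = 2 penalizes the
  # second derivative; method = 3 picks the smoothing parameter by GCV);
  # smoothing is applied per chromosome in practice
  fit <- smooth.Pspline(window_midpoints, quotients, norder = 2, method = 3)
  smoothed <- predict(fit, window_midpoints)[, 1]

  list(quotients = quotients,
       smoothed  = smoothed,
       log2      = log2(smoothed),                             # log2 values
       zscore    = (smoothed - mean(smoothed)) / sd(smoothed)) # z-scores
}
```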
# write bedgraphs for each value type into its own directory, then package
# each directory as a .tar.gz and clean up
mkdir Log2Ratios
mkdir ZScores
mkdir Smoothed
mkdir Quotients
Rscript workflow/scripts/Generate_RT_Bedgraphs_ver01.R \
{input.rse_processed} \
"Log2Ratios"
Rscript workflow/scripts/Generate_RT_Bedgraphs_ver01.R \
{input.rse_processed} \
"ZScores"
Rscript workflow/scripts/Generate_RT_Bedgraphs_ver01.R \
{input.rse_processed} \
"Smoothed"
Rscript workflow/scripts/Generate_RT_Bedgraphs_ver01.R \
{input.rse_processed} \
"Quotients"
tar -czvf {output.Log2Ratios_bedgraphs} Log2Ratios/
tar -czvf {output.ZScores_bedgraphs} ZScores/
tar -czvf {output.Smoothed_bedgraphs} Smoothed/
tar -czvf {output.Quotients_bedgraphs} Quotients/
rm -rf Log2Ratios
rm -rf ZScores
rm -rf Smoothed
rm -rf Quotients
Note: The commands to do this will be different on your machine. These commands are specific to an HPC using SLURM with these modules installed.
ml slurm/20.02
ml miniconda/4.11.0
git clone https://github.com/SansamLab/RepliTimer.git
# rename folder with project name
mv RepliTimer/ My_RT_Project_Folder/
# change directory into root of your project folder
cd My_RT_Project_Folder
# -f is the location of the environment .yml file.
## The relative path assumes that you are in the root directory of this repository.
# -p is the path where you want to install this environment
conda env create -f workflow/envs/SnakemakeEnv2.yml -p /s/sansam-lab/SnakemakeEnv2
conda activate /s/sansam-lab/SnakemakeEnv2
You must enter the following (an assembled example appears after this list):
- samples_table:
- path and name of the .csv file that lists sample names and paths to all fastq files
- example: "config/testSamples.csv"
- fastp_cpus:
- number of processors that fastp will use to trim your sequencing reads
- example: 8
- bwa_cpus:
- number of processors that bwa mem will use to align reads to genome
- example: 12
- picard_memory:
- amount of memory allocated to picard for marking duplicates
- example: "24G"
- bwa_genome:
- location of bwa indexed genome for the alignment
- example: "/Volumes/hts_core/Shared/zebrafish/danRer11/noAlts/danRer11_noAlts.fa"
- bamtools_filter_mapQ:
- mapQ score cutoff for quality filtering reads
- example: ">10"
- bedtools_intersect_blacklist:
- .bed file with blacklisted genomic regions for filtering out reads
- example: "resources/Satellites.bed"
- for zebrafish GRCz11
- file has regions with certain repeat types that we find are associated with anomalously high read counts
- bedtools_intersect_greylist:
- .bed file with greylisted genomic regions for filtering out reads
- example: "resources/HiCountWindows.bed"
- for zebrafish GRCz11
- file has regions that have anomalously high read counts in multiple Tab5 whole genome sequencing runs
- RT_windows:
- .bed file with regions used for calculating replication timing values
- example: "resources/RTWindows_danRer11_noAlts_ver01.bed"
- for zebrafish GRCz11
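Assembled from the example values above, the config file might look like this (a sketch, not necessarily the repository's exact file):

```yaml
samples_table: "config/testSamples.csv"
fastp_cpus: 8
bwa_cpus: 12
picard_memory: "24G"
bwa_genome: "/Volumes/hts_core/Shared/zebrafish/danRer11/noAlts/danRer11_noAlts.fa"
bamtools_filter_mapQ: ">10"
bedtools_intersect_blacklist: "resources/Satellites.bed"
bedtools_intersect_greylist: "resources/HiCountWindows.bed"
RT_windows: "resources/RTWindows_danRer11_noAlts_ver01.bed"
```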
The samples.csv file in the config folder contains paths to the test fastq files. You must replace those paths with the paths to your own fastq files. The first column of each row is the sample name, which will be used to name all output files for that sample. An illustrative example follows.
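The column headers below are placeholders; use config/testSamples.csv as the authoritative template for the expected columns.

```
sample,fastq1,fastq2
S_phase_rep1,path/to/S_rep1_R1.fastq.gz,path/to/S_rep1_R2.fastq.gz
G1_rep1,path/to/G1_rep1_R1.fastq.gz,path/to/G1_rep1_R2.fastq.gz
```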
CPU and memory requests for each rule in the pipeline are detailed in this file. If you are using SLURM, you may need to alter this file to fit your needs and system; a skeleton example follows.
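As a point of reference, cluster config files of this kind usually pair a `__default__` entry with per-rule overrides. The keys below mirror those referenced by the sbatch command later in this document; all values are placeholders.

```yaml
__default__:
  account: your_account
  partition: your_partition
  cpus-per-task: 1
  mem: 8G
  output: logs/%j.out
```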
A dry run produces a text output showing exactly what commands will be executed. Look this over carefully before submitting the full job. It is normal to see warnings about changes made to the code, input, and params.
snakemake -npr
snakemake --dag | dot -Tpdf > dag.pdf
This Snakemake pipeline can be executed without SLURM, but if an HPC with SLURM is used, the following will start the pipeline with the parameters defined in the config/cluster_config.yml file.
If conda is to be used for rule-specific environments, you may find it useful to create the environments first. Running snakemake with the '--conda-create-envs-only' option creates the environments without running the pipeline. The '--conda-prefix' option sets the directory in which the 'conda' and 'conda-archive' directories are created; this may be changed to a stable or shared location.
sbatch --mem 32G \
--wrap="\
snakemake \
--cores all \
--use-conda \
--conda-prefix ../condEnvs/ \
--conda-create-envs-only \
--conda-frontend conda"
Once the environments are set up, you may execute the pipeline with conda environments using the following command:
sbatch --constraint=westmere \
--wrap="\
snakemake \
-R \
-j 999 \
--use-conda \
--conda-prefix ../condEnvs/ \
--conda-frontend conda \
--latency-wait 100 \
--cluster-config config/cluster_config.yml \
--cluster '\
sbatch \
-A {cluster.account} \
-p {cluster.partition} \
--cpus-per-task {cluster.cpus-per-task} \
--mem {cluster.mem} \
--output {cluster.output}'"
Rather than using conda environments, you may prefer to use modules installed on your computing cluster. These modules are defined for each rule in 'workflow/Snakefile'. They must be customized for your environment, and you must modify the Snakefile yourself.
To execute the pipeline with environment modules, enter the following:
sbatch --constraint=westmere \
--wrap="\
snakemake \
-R \
-j 999 \
--use-envmodules \
--latency-wait 100 \
--cluster-config config/cluster_config.yml \
--cluster '\
sbatch \
-A {cluster.account} \
-p {cluster.partition} \
--cpus-per-task {cluster.cpus-per-task} \
--mem {cluster.mem} \
--output {cluster.output}'"
The results will be saved to the "results" folder. Look over the log files generated in either the logs/ or logs/snakelogs folders (depending on whether SLURM was used).
conda deactivate
Folder | Description |
---|---|
logs | Log files for each snakemake rule executed |
qc | Quality control files generated by fastp |
trimmed | .fastq files trimmed by fastp |
aligned | .bam files aligned by bwa mem |
duplicatesMarkedBam | .bam files with duplicates marked by picard |
filteredBams | .bam files filtered for read quality |
doubleFilteredBams | .bam files also filtered using blacklisted and greylisted regions |
counts | .bedgraph files with counts for each sample |
merged | All counts in a tab-delimited .txt table |
rse | .rds file with a RangedSummarizedExperiment object containing the counts |
processed_rse | .rds file with a RangedSummarizedExperiment object containing quotients, smoothed, log2, and z-score values for all S-phase samples. Median values for replicates are also included |
bedgraphs | Zipped bedgraphs with calculated values from processed_rse |
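The .rds outputs can be loaded directly into R for downstream analysis. A minimal sketch follows, with an illustrative file path and index (use assayNames() to see the actual value types stored in the object):

```r
library(SummarizedExperiment)

rse <- readRDS("results/processed_rse/rse_processed.rds")  # illustrative path
assayNames(rse)   # stored value types (quotients, smoothed, log2, z-scores)
rowRanges(rse)    # RT window coordinates as a GRanges object
head(assay(rse, 1))  # matrix of values: RT windows (rows) x samples (columns)
```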
- Run the Snakemake pipeline with the sample data included in the repository
- Run the Snakemake pipeline with the Siefert, 2017 data
Siefert JC, Georgescu C, Wren JD, Koren A, Sansam CL. DNA replication timing during development anticipates transcriptional programs and parallels enhancer activation. Genome Res. 2017;27(8):1406-1416.
Siefert JC, Clowdus EA, Goins D, Koren A, Sansam CL. Profiling dna replication timing using zebrafish as an in vivo model system. JoVE. 2018;(134):57146.
Chen S, Zhou Y, Chen Y, Gu J. fastp: an ultra-fast all-in-one FASTQ preprocessor. Bioinformatics. 2018;34(17):i884-i890. doi:10.1093/bioinformatics/bty560
Danecek P, Bonfield JK, Liddle J, Marshall J, Ohan V, Pollard MO, Whitwham A, Keane T, McCarthy SA, Davies RM, Li H. Twelve years of SAMtools and BCFtools. GigaScience. 2021;10(2):giab008. doi:10.1093/gigascience/giab008
Li H. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. arXiv:1303.3997 [q-bio]. 2013. http://arxiv.org/abs/1303.3997
Picard Toolkit. Broad Institute. http://broadinstitute.github.io/picard/
Barnett DW, Garrison EK, Quinlan AR, Stromberg MP, Marth GT. BamTools: a C++ API and toolkit for analyzing and managing BAM files. Bioinformatics. 2011;27(12):1691-1692.
Quinlan AR, Hall IM. BEDTools: a flexible suite of utilities for comparing genomic features. Bioinformatics. 2010;26(6):841-842.
R Core Team (2021). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.
Alessandro Mammana and Johannes Helmuth (2021). bamsignals: Extract read count signals from bam files. R package version 1.26.0. https://github.com/lamortenera/bamsignals
Ramsey J (S original) and Ripley B (R port) (2022). pspline: Penalized Smoothing Splines. R package version 1.0-19. https://CRAN.R-project.org/package=pspline
Martin Morgan, Valerie Obenchain, Jim Hester and Hervé Pagès (2021). SummarizedExperiment: SummarizedExperiment container. R package version 1.24.0. https://bioconductor.org/packages/SummarizedExperiment
Lawrence M, Huber W, Pagès H, Aboyoun P, Carlson M, et al. (2013) Software for Computing and Annotating Genomic Ranges. PLoS Comput Biol 9(8): e1003118. doi:10.1371/journal.pcbi.1003118
Henrik Bengtsson (2021). matrixStats: Functions that Apply to Rows and Columns of Matrices (and to Vectors). R package version 0.61.0. https://CRAN.R-project.org/package=matrixStats
Köster J, Rahmann S. Snakemake—a scalable bioinformatics workflow engine. Bioinformatics. 2012;28(19):2520-2522. doi:10.1093/bioinformatics/bts480
Anaconda Software Distribution. (2020). Anaconda Documentation. Anaconda Inc. Retrieved from https://docs.anaconda.com/