From 65f8fe48d57e65d181a66e49840188784140c096 Mon Sep 17 00:00:00 2001
From: Victor Lin <13424970+victorlin@users.noreply.github.com>
Date: Thu, 15 Aug 2024 14:57:14 -0700
Subject: [PATCH] Add sections on sampling methods

Reword the subsampling introduction with *what* it is, followed by
examples on *why* paired with *how*. This also allows future sampling
methods such as weighted sampling to be added by simply including a new
section.
---
 .../filtering-and-subsampling.rst | 66 +++++++++++++++++--
 1 file changed, 61 insertions(+), 5 deletions(-)

diff --git a/src/guides/bioinformatics/filtering-and-subsampling.rst b/src/guides/bioinformatics/filtering-and-subsampling.rst
index cfdbefab..40afd2a9 100644
--- a/src/guides/bioinformatics/filtering-and-subsampling.rst
+++ b/src/guides/bioinformatics/filtering-and-subsampling.rst
@@ -114,11 +114,56 @@ Subsampling within ``augur filter``
 Subsampling is applied after all standard filter options and before
 force-inclusive filter options.
 
-Another common filtering operation is subsetting of data to achieve a more
-even spatio-temporal distribution or to cut-down data set size to more
-manageable numbers. The filter command allows you to partition the data into
-groups based on column values and sample uniformly. For example, target one
-sequence per month from each country:
+Another common filtering operation is **subsampling**: selection of data using
+rules based on output size rather than individual sequence attributes. These
+are the sampling methods supported by ``augur filter``, followed by caveats:
+
+.. contents::
+   :local:
+
+Random sampling
+---------------
+
+The simplest scenario is reducing the dataset to a more manageable size.
+For example, limit the output to 100 sequences:
+
+.. code-block:: bash
+
+  augur filter \
+    --sequences data/sequences.fasta \
+    --metadata data/metadata.tsv \
+    --min-date 2012 \
+    --exclude exclude.txt \
+    --subsample-max-sequences 100 \
+    --output-sequences subsampled_sequences.fasta \
+    --output-metadata subsampled_metadata.tsv
+
+Random sampling is easy to define but reflects any sampling bias present in
+the input data. Consider uniform sampling to reduce that bias.
+
+Uniform sampling
+----------------
+
+``--group-by`` allows you to partition the data into groups based on column
+values and sample uniformly from each group. For example, sample evenly
+across countries over time:
+
+.. code-block:: bash
+
+  augur filter \
+    --sequences data/sequences.fasta \
+    --metadata data/metadata.tsv \
+    --min-date 2012 \
+    --exclude exclude.txt \
+    --group-by country year month \
+    --subsample-max-sequences 100 \
+    --output-sequences subsampled_sequences.fasta \
+    --output-metadata subsampled_metadata.tsv
+
+An alternative to ``--subsample-max-sequences`` is ``--sequences-per-group``.
+This is useful if you care less about total sample size and more about having
+a fixed number of sequences from each group. For example, target one sequence
+per month from each country:
 
 .. code-block:: bash
 
@@ -132,6 +177,17 @@ sequence per month from each country:
     --output-sequences subsampled_sequences.fasta \
     --output-metadata subsampled_metadata.tsv
 
+Caveats
+-------
+
+For these sampling methods, the number of targeted sequences per group does not
+take into account the actual number of sequences available in the input data.
+For example, consider a dataset with 200 sequences available from 2023 and 100
+sequences available from 2024. ``--group-by year --subsample-max-sequences 300``
+is equivalent to ``--group-by year --sequences-per-group 150``. This will take
+150 sequences from 2023 and all 100 sequences from 2024 for a total of 250
+sequences, which is less than the target of 300.
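The arithmetic behind this caveat can be sketched as follows. This is only an
illustration of the documented behavior, not augur's internal implementation;
`uniform_sample_sizes` is a hypothetical helper name:

```python
# Sketch of the caveat above: an even per-group target is capped by how many
# sequences each group actually has, so the total can fall short.

def uniform_sample_sizes(available, max_sequences):
    """Per-group sample sizes for an even split of max_sequences across groups."""
    per_group = max_sequences // len(available)  # e.g. 300 // 2 == 150
    return {group: min(count, per_group) for group, count in available.items()}

sizes = uniform_sample_sizes({"2023": 200, "2024": 100}, 300)
print(sizes)                 # {'2023': 150, '2024': 100}
print(sum(sizes.values()))   # 250, short of the 300 target
```

The 2024 group contributes all 100 of its sequences rather than the targeted
150, and the shortfall is not redistributed to other groups.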
+
 Subsampling using multiple ``augur filter`` commands
 ====================================================