* Cellxgene export (#315)
  * updated count rounding warning in streamlining
  * improved meta data streamlining
  * updated DOIs to distinguish preprint and journal
* CLI improvements #321 #314 (#332)
  * add new adding-datasets figure
  * add sample_source
  * renamed assay to assay_sc
  * fix assay_sc template
  * add cell_types_original_obs_key
  * add sfaira annotate-dataloader hints
  Signed-off-by: zethson <lukas.heumos@posteo.net>
* added lazy ontology loading in OCS (#334, #335)
* reassigned gamma cell in pancreas to pancreatic PP cell CL:0002275 (#338)
  - affects d10_1016_j_cmet_2016_08_020, d10_1016_j_cels_2016_08_011
* added new edge types (#341)
* Improve CLI documentation (#320)
  * improved error reporting in annotate
  * improved file-not-found reporting in annotate
  * update template creation workflow
  * fix doi prompting
  * update download urls
  * fix data path handling in CLI
  * fix disease default in CLI
  * fix test-dataloader [skip ci]
  * fix CI (#339)
  Co-authored-by: david.seb.fischer <david.seb.fischer@gmail.com>
  Co-authored-by: le-ander <20015434+le-ander@users.noreply.github.com>
  Co-authored-by: Lukas Heumos <lukas.heumos@posteo.net>
* Feature/dao improvements (#318)
  * updated rounding warning in cellxgene format export
  * updated DOIs to distinguish preprint and journal
  * fixed issue with ethnicity handling in cellxgene export
  * reordered obs in cellxgene streamlining
  * added store benchmark script
  * added multi-organism store
  * updated doi setting in DatasetInteractive
  * added mock data for unit tests
  * added MSLE metric
  * enabled in-memory handling of h5ad-backed store
  * added infrastructure for ontology re-caching
  * fixed all unit tests and optimised run time a bit
  Co-authored-by: Abdul Moeed <abdulmoeed444@gmail.com>
  Co-authored-by: le-ander <20015434+le-ander@users.noreply.github.com>
* store improvements (#346)
  * improvements to store API
  * added retrieval index sort to dask store
  * fixed bug in single store generator if index input was None
  * added sliced X and adata object emission to single store
  * moved memory footprint into store base class
  * fixed h5ad store indexing
* restructured meta data streamlining code (#347)
  - includes a bug fix for missing meta data import from cellxgene-structured data sets
  - simplified meta data streamlining code and enhanced code readability
  - deprecated the distinction between cell type and original cell type in the data set definition in favor of a single attribute
  - allowed all ontology-constrained meta data items to be supplied in any format (original + map, symbol, or ID) via the `*_obs_col` attribute of the loader
  - removed resetting of _obs_col attributes in streamlining in favor of AdataIds-controlled obs column names that extend to IDs and original labels
  - updated cell type entry in all data loaders
* added attribute check for dictionary-formatted attributes from YAML
* added processing of obs columns in cellxgene import
* extended error reporting in data loader discovery
* fixed value protection in meta data streamlining
* fixed cellxgene obs adapter
* added additional mock data set with little meta data annotation
* refactored cellxgene streamlining and added HANCESTRO support via EBI
* fixed handling of missing ethnicity ontology for mouse
* fixed EBI EFO backend
* ontology unit tests now check that ontologies can be downloaded
* added new generator interface, restructured batch index design interface and fixed adata uns merge in DatasetGroup (#351)
  - Iterators for tf datasets and similar are now emitted as an instance of a class that has a property that emits the iterator. This class keeps a pointer to the data set that is iterated over in its attributes. Thus, if this instance stays in the namespace in which tensorflow uses the iterator, the iterator can be restarted without creating a new pointer. This had previously delayed training because tensorflow restarted the validation data set for each epoch, creating a new dask data set in each epoch at relatively high cost.
  - There is now only one iterator end point for stores (before, there were separate base and balanced end points). The different index shuffling / sampling schedules are now refactored into functions and can be chosen by string name. This makes creation and addition of new index schedules ("batch designs") easier.
  - Direct conversion of in-memory adata objects to a store is now supported via a new multi-store class.
  - Estimators no longer contain adata processing code but still accept adata next to store instances; the adata are directly converted to an adata store instance. All previous adata processing code in the estimators is deprecated.
  - The store-to-estimator interface is heavily simplified through the new generator interface of the store. Generator instances are placed in the train namespace for efficiency, but not in the testing and evaluation namespaces, in which only a single pass over the data is required.
* Added new batch index design code
  - Batch schedules are now classes rather than functions.
  - Introduced epoch-wise reshuffling of indices in batch schedules: the reshuffling is achieved by moving the schedule from a one-time function evaluation in the generator constructor to the evaluation of a schedule instance property that shuffles at the beginning of each iterator.
* Fixed balanced batch schedule.
* Added merging of shared uns fields in DatasetGroup so that uns streamlining is maintained across merges of adatas.
* passed empty store index validation
* passed zero-length index processing in batch schedule
* allowed re-indexing of generator and batch schedule
* added uberon versioning (#354)
* added data life cycle rst (#355)

Co-authored-by: Lukas Heumos <lukas.heumos@posteo.net>
Co-authored-by: le-ander <20015434+le-ander@users.noreply.github.com>
Co-authored-by: Abdul Moeed <abdulmoeed444@gmail.com>
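The epoch-wise reshuffling of batch indices described above can be illustrated with a short sketch. This is a hypothetical reimplementation of the pattern, not the actual sfaira classes; all names here are illustrative. Moving the permutation from the constructor into a property means that every freshly started iterator sees a new shuffle:

```python
import numpy as np

class ShuffledBatchSchedule:
    """Hypothetical batch schedule class (not the sfaira implementation):
    the index permutation lives in a property, so it is re-evaluated each
    time an iterator is started, i.e. once per epoch."""

    def __init__(self, idx: np.ndarray, batch_size: int):
        self.idx = np.asarray(idx)
        self.batch_size = batch_size

    @property
    def design(self) -> np.ndarray:
        # Re-evaluated at the start of every iterator: a one-time shuffle
        # in __init__ would instead fix the order for all epochs.
        return np.random.permutation(self.idx)

    def __iter__(self):
        idx = self.design
        for i in range(0, len(idx), self.batch_size):
            yield idx[i:i + self.batch_size]
```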
1 parent dab9cb3 · commit 6c4dbff · 180 changed files with 5,818 additions and 3,100 deletions.
@@ -0,0 +1,33 @@
.. _data_life_cycle_rst:

The data life cycle
===================

The life cycle of a single-cell count matrix often looks as follows:

1. **Generation** from primary read data in a read alignment pipeline.
2. **Annotation** with cell types and sample meta data.
3. **Publication** of the annotated data, often together with a manuscript.
4. **Curation** of this public data set for the purpose of a meta study. In a python workflow, this curation step could for example be a scanpy script based on the data from step 3.
5. **Usage** of the data curated specifically for the use case at hand, for example for a targeted analysis or for training a machine learning model.

Steps 1-3 are often only performed once, by the original authors of the data set,
while steps 4 and 5 are repeated many times in the community for different meta studies.
Sfaira offers the following functionality groups that accelerate steps along this pipeline:

I) Data loaders
~~~~~~~~~~~~~~~
We maintain streamlined data loader code that improves **Curation** (step 4) and makes this step sharable and iteratively improvable.
Read more in our guide to data contribution :ref:`adding_data_rst`.

II) Dataset, DatasetGroup, DatasetSuperGroup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using the data loaders from (I), we built an interface that can flexibly download, subset and curate data sets from the sfaira data zoo, thus improving **Usage** (step 5).
This interface can yield adata instances to be used in a scanpy pipeline, for example, as in the sketch below.
Read more in our guide to data consumption :ref:`consuming_data_rst`.
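A minimal sketch of this interface, assuming the `sfaira.data.Universe` entry point; argument and attribute names may differ between sfaira releases, and all paths are placeholders:

.. code-block:: python

    import sfaira

    # Assemble the interface to the sfaira data zoo; all paths are placeholders.
    ds = sfaira.data.Universe(data_path="data/", meta_path="meta/", cache_path="cache/")
    ds.subset(key="organism", values=["human"])  # subset by meta data before loading
    ds.download()  # download raw files that are not yet on disk
    ds.load()      # load the selected data sets
    adata = ds.adata  # merged AnnData instance, ready for a scanpy pipeline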

III) Stores
~~~~~~~~~~~
Using the streamlined data set collections from (II), we built a computationally efficient data interface for machine learning on such large, distributed data set collections, thus improving **Usage** (step 5):
this interface is optimised for out-of-core, observation-centric indexing in scenarios that are typical of machine learning on single-cell data.
Read more in our guide to data stores :ref:`distributed_data_rst`.
@@ -0,0 +1,42 @@
.. _distributed_data_rst:

Distributed data
================

For a high-level overview of data management in sfaira, read :ref:`data_life_cycle_rst` first.
Sfaira supports the use of distributed data for model training and execution.
The corresponding tools are summarized under `sfaira.data.store`.
In contrast to working with an AnnData instance in memory, these tools allow data sets that are saved
in different files (because they come from different studies) to be used flexibly and out-of-core,
that is, without loading them into memory.
A typical use case is training a model on a large set of data sets, subsetted by particular cell-wise meta
data, without first creating a merged AnnData instance in memory.

Build a distributed data repository
-----------------------------------

You can use the sfaira dataset API to write streamlined groups of adata instances to a particular disk location,
which then serves as the store directory.
Some of the array backends used for loading stores, such as dask, can read arrays from cloud servers.
Therefore, these store directories can in some cases also reside on cloud servers.
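Writing a store might look as follows. This is a sketch that assumes the dataset API from (II); the streamlining method names, their arguments, and the genome assembly shown are indicative only and may differ between releases:

.. code-block:: python

    import sfaira

    ds = sfaira.data.Universe(data_path="data/", meta_path="meta/", cache_path="cache/")
    ds.subset(key="organism", values=["human"])
    ds.load()
    # Streamline feature and meta data spaces so that all data sets in the
    # store share one feature space and one meta data schema.
    ds.streamline_features(match_to_reference={"human": "Homo_sapiens.GRCh38.104"})
    ds.streamline_metadata(schema="sfaira")
    # Write the streamlined group into the store directory.
    ds.write_distributed_store(dir_cache="store/", store_format="dao")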

Reading from a distributed data repository
------------------------------------------

The core use case is the consumption of data in batches from a python iterator (a "generator").
In contrast to using the full data matrix, this allows for workflows that never require the full data matrix in memory.
These generators can, for example, be used directly in tensorflow or pytorch stochastic mini-batch learning pipelines.
The core interface is `sfaira.data.load_store()`, which can be used to initialise a store instance that exposes such a
generator.
An important concept in store reading is that the data sets are already streamlined on disk, which means, for example,
that they share the same feature space.
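A minimal sketch of store reading; `sfaira.data.load_store()` is the interface named above, while the subsetting and generator arguments shown here are assumptions that may differ from the actual signature:

.. code-block:: python

    import sfaira

    # Initialise a store instance from the on-disk store directory.
    store = sfaira.data.load_store(cache_path="store/", store_format="dao")
    store.subset(attr_key="organism", values=["human"])  # optional cell-wise subsetting
    # Obtain a generator object; its iterator property emits mini-batches out-of-core.
    g = store.generator(batch_size=128)
    for x, obs in g.iterator:
        ...  # feed the mini-batch into a tensorflow or pytorch training step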

Distributed access optimised (DAO) store
----------------------------------------

The DAO store format is an on-disk representation of single-cell data that is optimised for generator-based and
distributed access.
In brief, DAO stores optimise memory consumption and data batch access speed.
Right now, we are using zarr and parquet; this may change in the future as we continue to work on this format under
the project name "dao".
Note that data sets represented as DAO on disk can still be read into AnnData instances in memory if you wish!
@@ -1,8 +1,10 @@
anndata>=0.7.6 | ||
crossref_commons | ||
cellxgene-schema | ||
dask | ||
docutils | ||
fuzzywuzzy | ||
IPython | ||
loompy | ||
matplotlib | ||
networkx | ||