Commit

Centralise repeated base URLs
lochhh committed Nov 2, 2023
1 parent 440f667 commit 4719743
Showing 7 changed files with 63 additions and 37 deletions.
32 changes: 16 additions & 16 deletions CONTRIBUTING.md
@@ -4,8 +4,8 @@

### Creating a development environment

-It is recommended to use [conda](https://docs.conda.io/en/latest/)
-or [mamba](https://mamba.readthedocs.io/en/latest/index.html) to create a
+It is recommended to use [conda](conda:)
+or [mamba](mamba:) to create a
development environment for movement. In the following we assume you have
`conda` installed, but the same commands will also work with `mamba`/`micromamba`.

@@ -47,7 +47,7 @@ We recommend, and adhere, to the following conventions:
- One approval of a PR (by a repo owner) is enough for it to be merged.
- Unless someone approves the PR with optional comments, the PR is immediately merged by the approving reviewer.
- Ask for a review from someone specific if you think they would be a particularly suited reviewer.
-- PRs are preferably merged via the ["squash and merge"](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/about-pull-request-merges#squash-and-merge-your-commits) option, to keep a clean commit history on the _main_ branch.
+- PRs are preferably merged via the ["squash and merge"](github-docs:pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/about-pull-request-merges#squash-and-merge-your-commits) option, to keep a clean commit history on the _main_ branch.

A typical PR workflow would be:
* Create a new branch, make your changes, and stage them.
@@ -103,7 +103,7 @@ See [sample data](#sample-data) for more information.


### Continuous integration
-All pushes and pull requests will be built by [GitHub actions](https://docs.github.com/en/actions).
+All pushes and pull requests will be built by [GitHub actions](github-docs:actions).
This will usually include linting, testing and deployment.

A GitHub actions workflow (`.github/workflows/test_and_deploy.yml`) has been set up to run (on each push/PR):
@@ -118,9 +118,9 @@ We use [semantic versioning](https://semver.org/), which includes `MAJOR`.`MINOR`
* MINOR = new feature
* MAJOR = breaking change

-We use [setuptools_scm](https://github.com/pypa/setuptools_scm) to automatically version movement.
+We use [setuptools_scm](setuptools-scm:) to automatically version movement.
It has been pre-configured in the `pyproject.toml` file.
-`setuptools_scm` will automatically [infer the version using git](https://github.com/pypa/setuptools_scm#default-versioning-scheme).
+`setuptools_scm` will automatically [infer the version using git](setuptools-scm:usage#default-versioning-scheme).
To manually set a new semantic version, create a tag and make sure the tag is pushed to GitHub.
Make sure you commit any changes you wish to be included in this version. E.g. to bump the version to `1.0.0`:
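A rough scripted sketch of the equivalent steps (the exact commands are collapsed in this view; the tag name and remote are illustrative assumptions):

```python
import subprocess

# Sketch: commit outstanding changes, create an annotated tag, push the tag.
# The tag name "v1.0.0" and the remote "origin" are illustrative assumptions.
for cmd in (
    ["git", "commit", "-am", "Bump version to v1.0.0"],
    ["git", "tag", "-a", "v1.0.0", "-m", "Release v1.0.0"],
    ["git", "push", "origin", "v1.0.0"],
):
    subprocess.run(cmd, check=True)  # check=True stops at the first failure
```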

@@ -138,7 +138,7 @@ The version number is automatically determined from the latest tag on the _main_
## Contributing documentation

The documentation is hosted via [GitHub pages](https://pages.github.com/) at
-[movement.neuroinformatics.dev](https://movement.neuroinformatics.dev).
+[movement.neuroinformatics.dev](movement-website:).
Its source files are located in the `docs` folder of this repository.
They are written in either [reStructuredText](https://docutils.sourceforge.io/rst.html) or
[markdown](https://myst-parser.readthedocs.io/en/stable/syntax/typography.html).
@@ -179,7 +179,7 @@ my_new_file
### Updating the API reference
If your PR introduces new public-facing functions, classes, or methods,
make sure to add them to the `docs/source/api_index.rst` page, so that they are
-included in the [API reference](https://movement.neuroinformatics.dev/api_index.html),
+included in the [API reference](movement-website:api_index),
e.g.:

```rst
@@ -198,7 +198,7 @@ that follow the [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html

### Updating the examples
We use [sphinx-gallery](https://sphinx-gallery.github.io/stable/index.html)
-to create the [examples](https://movement.neuroinformatics.dev/examples/index.html).
+to create the [examples](movement-website:examples).
To add new examples, you will need to create a new `.py` file in `examples/`.
The file should be structured as specified in the relevant
[sphinx-gallery documentation](https://sphinx-gallery.github.io/stable/syntax.html).
@@ -226,11 +226,11 @@ rm -rf docs/build && sphinx-build docs/source docs/build
## Sample data

We maintain some sample data to be used for testing, examples and tutorials on an
[external data repository](https://gin.g-node.org/neuroinformatics/movement-test-data).
Our hosting platform of choice is called [GIN](https://gin.g-node.org/) and is maintained
[external data repository](gin:neuroinformatics/movement-test-data).
Our hosting platform of choice is called [GIN](gin:) and is maintained
by the [German Neuroinformatics Node](https://www.g-node.org/).
GIN has a GitHub-like interface and git-like
-[CLI](https://gin.g-node.org/G-Node/Info/wiki/GIN+CLI+Setup#quickstart) functionalities.
+[CLI](gin:G-Node/Info/wiki/GIN+CLI+Setup#quickstart) functionalities.

Currently the data repository contains sample pose estimation data files
stored in the `poses` folder. Each file name starts with either "DLC" or "SLEAP",
@@ -256,13 +256,13 @@ This can be changed by setting the `DATA_DIR` variable in the `movement.datasets`
Only core movement developers may add new files to the external data repository.
To add a new file, you will need to:

-1. Create a [GIN](https://gin.g-node.org/) account
-2. Ask to be added as a collaborator on the [movement data repository](https://gin.g-node.org/neuroinformatics/movement-test-data) (if not already)
-3. Download the [GIN CLI](https://gin.g-node.org/G-Node/Info/wiki/GIN+CLI+Setup#quickstart) and set it up with your GIN credentials, by running `gin login` in a terminal.
+1. Create a [GIN](gin:) account
+2. Ask to be added as a collaborator on the [movement data repository](gin:neuroinformatics/movement-test-data) (if not already)
+3. Download the [GIN CLI](gin:G-Node/Info/wiki/GIN+CLI+Setup#quickstart) and set it up with your GIN credentials, by running `gin login` in a terminal.
4. Clone the movement data repository to your local machine, by running `gin get neuroinformatics/movement-test-data` in a terminal.
5. Add your new files and commit them with `gin commit -m <message> <filename>`.
6. Upload the committed changes to the GIN repository, by running `gin upload`. Latest changes to the repository can be pulled via `gin download`. `gin sync` will synchronise the latest changes bidirectionally.
7. Determine the sha256 checksum of each new file, by running `sha256sum <filename>` in a terminal. Alternatively, you can use `pooch` to do this for you: `python -c "import pooch; pooch.file_hash('/path/to/file')"`. If you wish to generate a text file containing the hashes of all the files in a given folder, you can use `python -c "import pooch; pooch.make_registry('/path/to/folder', 'sha256_registry.txt')"` (see the sketch after this list).
-8. Update the `movement.datasets.py` module on the [movement GitHub repository](https://github.com/SainsburyWellcomeCentre/movement) by adding the new files to the `POSE_DATA` registry. Make sure to include the correct sha256 hash, as determined in the previous step. Follow all the usual [guidelines for contributing code](#contributing-code). Make sure to test whether the new files can be fetched successfully (see [fetching data](#fetching-data) above) before submitting your pull request.
+8. Update the `movement.datasets.py` module on the [movement GitHub repository](movement-github:) by adding the new files to the `POSE_DATA` registry. Make sure to include the correct sha256 hash, as determined in the previous step. Follow all the usual [guidelines for contributing code](#contributing-code). Make sure to test whether the new files can be fetched successfully (see [fetching data](#fetching-data) above) before submitting your pull request.

You can also perform steps 3-6 via the GIN web interface, if you prefer to avoid using the CLI.
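For steps 7 and 8, the `pooch` calls look roughly as follows. This is a sketch: the file paths are placeholders, and the registry layout and GIN raw-file URL are assumptions about `movement.datasets`, not a copy of it.

```python
import pooch

# Step 7: hash one new file, or write a registry file for a whole folder.
file_hash = pooch.file_hash("/path/to/file")  # placeholder path
pooch.make_registry("/path/to/folder", "sha256_registry.txt")

# Step 8 (assumed layout): a pooch registry maps file names to sha256 hashes
# and fetches them from the GIN repository on demand.
POSE_DATA = pooch.create(
    path=pooch.os_cache("movement"),
    base_url="https://gin.g-node.org/neuroinformatics/movement-test-data/raw/master/poses/",
    registry={"new_file.h5": file_hash},  # illustrative file name
)
local_path = POSE_DATA.fetch("new_file.h5")  # check the file can be fetched
```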
6 changes: 3 additions & 3 deletions docs/source/community/mission-scope.md
@@ -3,11 +3,11 @@

## Mission

-[movement](https://movement.neuroinformatics.dev/) aims to **facilitate the study of animal behaviour in neuroscience** by providing a suite of **Python tools to analyse body movements** across space and time.
+[movement](movement-website:) aims to **facilitate the study of animal behaviour in neuroscience** by providing a suite of **Python tools to analyse body movements** across space and time.

## Scope

-At its core, movement handles trajectories of *keypoints*, which are specific body parts of an *individual*. An individual's posture or *pose* is represented by a set of keypoint coordinates, given in 2D (x,y) or 3D (x,y,z). The sequential collection of poses over time forms *pose tracks*. In neuroscience, these tracks are typically extracted from video data using software like [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) or [SLEAP](https://sleap.ai/).
+At its core, movement handles trajectories of *keypoints*, which are specific body parts of an *individual*. An individual's posture or *pose* is represented by a set of keypoint coordinates, given in 2D (x,y) or 3D (x,y,z). The sequential collection of poses over time forms *pose tracks*. In neuroscience, these tracks are typically extracted from video data using software like [DeepLabCut](dlc:) or [SLEAP](sleap:).
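To make these terms concrete, a single individual's pose tracks can be pictured as a 3D array. This is a hypothetical illustration; movement wraps such arrays in richer `xarray` objects:

```python
import numpy as np

# Hypothetical illustration: 100 frames, 3 keypoints, 2D space (x, y).
keypoints = ["snout", "centre", "tail_base"]
pose_tracks = np.zeros((100, len(keypoints), 2))  # (time, keypoints, space)
pose_at_t0 = pose_tracks[0]  # one pose: all keypoint coordinates at frame 0
```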

With movement, our vision is to present a **consistent interface for pose tracks** and to **analyse them using modular and accessible tools**. We aim to accommodate data from a range of pose estimation packages, in **2D or 3D**, tracking **single or multiple individuals**. The focus will be on providing functionalities for data cleaning, visualisation and motion quantification (see the [Roadmap](target-roadmap) for details).

@@ -24,4 +24,4 @@ movement is committed to:
- __Performance and responsiveness__, especially for large datasets, using parallel processing where appropriate.
- __Modularity and flexibility__. We envision movement as a platform for new tools and analyses, offering users the building blocks to craft their own workflows.

-Some of these principles are shared with, and were inspired by, napari's [Mission and Values](https://napari.org/stable/community/mission_and_values.html) statement.
+Some of these principles are shared with, and were inspired by, napari's [Mission and Values](napari:community/mission_and_values) statement.
12 changes: 6 additions & 6 deletions docs/source/community/roadmap.md
@@ -3,14 +3,14 @@

The roadmap outlines **current development priorities** and aims to **guide core developers** and to **encourage community contributions**. It is a living document and will be updated as the project evolves.

-The roadmap is **not meant to limit** movement features, as we are open to suggestions and contributions. Join our [Zulip chat](https://neuroinformatics.zulipchat.com/#narrow/stream/406001-Movement/topic/Welcome!) to share your ideas. We will take community demand and feedback into account when planning future releases.
+The roadmap is **not meant to limit** movement features, as we are open to suggestions and contributions. Join our [Zulip chat](movement-zulip:) to share your ideas. We will take community demand and feedback into account when planning future releases.

## Long-term vision
The following features are being considered for the first stable version `v1.0`.

- __Import/Export pose tracks from/to diverse formats__. We aim to interoperate with leading tools for animal pose estimation and behaviour classification, and to enable conversions between their formats.
-- __Standardise the representation of pose tracks__. We represent pose tracks as [xarray data structures](https://docs.xarray.dev/en/latest/user-guide/data-structures.html) to allow for labelled dimensions and performant processing.
-- __Interactively visualise pose tracks__. We are considering [napari](https://napari.org/) as a visualisation and GUI framework.
+- __Standardise the representation of pose tracks__. We represent pose tracks as [xarray data structures](xarray:user-guide/data-structures.html) to allow for labelled dimensions and performant processing.
+- __Interactively visualise pose tracks__. We are considering [napari](napari:) as a visualisation and GUI framework.
- __Clean pose tracks__, including, but not limited to, handling of missing values, filtering, smoothing, and resampling.
- __Derive kinematic variables__ like velocity, acceleration, joint angles, etc., focusing on those prevalent in neuroscience.
- __Integrate spatial data about the animal's environment__ for combined analysis with pose tracks. This covers regions of interest (ROIs) such as the arena in which the animal is moving and the location of objects within it.
@@ -19,8 +19,8 @@ The following features are being considered for the first stable version `v1.0`.
## Short-term milestone - `v0.1`
We plan to release version `v0.1` of movement in early 2024, providing a minimal set of features to demonstrate the project's potential and to gather feedback from users. At minimum, it should include the following features:

-- Importing pose tracks from [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) and [SLEAP](https://sleap.ai/) into a common `xarray.Dataset` structure. This has been largely accomplished, but some remaining work is required to handle special cases.
-- Visualisation of pose tracks using [napari](https://napari.org/). We aim to represent pose tracks via the [napari tracks layer](https://napari.org/stable/howtos/layers/tracks.html) and overlay them on a video frame. This should be accompanied by a minimal GUI widget to allow selection of a subset of the tracks to plot. This line of work is still in a pilot phase. We may decide to use a different visualisation framework if we encounter roadblocks.
+- Importing pose tracks from [DeepLabCut](dlc:) and [SLEAP](sleap:) into a common `xarray.Dataset` structure. This has been largely accomplished, but some remaining work is required to handle special cases.
+- Visualisation of pose tracks using [napari](napari:). We aim to represent pose tracks via the [napari tracks layer](napari:howtos/layers/tracks) and overlay them on a video frame. This should be accompanied by a minimal GUI widget to allow selection of a subset of the tracks to plot. This line of work is still in a pilot phase. We may decide to use a different visualisation framework if we encounter roadblocks.
- At least one function for cleaning the pose tracks. Once the first one is in place, it can serve as a template for others.
- Computing velocity and acceleration from pose tracks. Again, this should serve as a template for other kinematic variables.
-- Package release on PyPI and conda-forge, along with documentation. The package is already available on [PyPI](https://pypi.org/project/movement/) and the [documentation website](https://movement.neuroinformatics.dev/) is up and running. We plan to also release it on conda-forge to enable one-line installation.
+- Package release on PyPI and conda-forge, along with documentation. The package is already available on [PyPI](https://pypi.org/project/movement/) and the [documentation website](movement-website:) is up and running. We plan to also release it on conda-forge to enable one-line installation.
26 changes: 26 additions & 0 deletions docs/source/conf.py
@@ -132,3 +132,29 @@
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']

+# The linkcheck builder will skip verifying that anchors exist when checking
+# these URLs
+linkcheck_anchors_ignore_for_url = [
+    "https://gin.g-node.org/G-Node/Info/wiki/",
+    "https://neuroinformatics.zulipchat.com/",
+]
+
+myst_url_schemes = {
+    "http": None,
+    "https": None,
+    "ftp": None,
+    "mailto": None,
+    "movement-github": "https://github.com/neuroinformatics-unit/movement/{{path}}",
+    "movement-website": "https://movement.neuroinformatics.dev/{{path}}",
+    "movement-zulip": "https://neuroinformatics.zulipchat.com/#narrow/stream/406001-Movement/topic/Welcome!",
+    "conda": "https://docs.conda.io/en/latest/",
+    "dlc": "https://www.mackenziemathislab.org/deeplabcut/",
+    "gin": "https://gin.g-node.org/{{path}}#{{fragment}}",
+    "github-docs": "https://docs.github.com/en/{{path}}#{{fragment}}",
+    "mamba": "https://mamba.readthedocs.io/en/latest/",
+    "napari": "https://napari.org/dev/{{path}}",
+    "setuptools-scm": "https://setuptools-scm.readthedocs.io/en/latest/{{path}}#{{fragment}}",
+    "sleap": "https://sleap.ai/",
+    "xarray": "https://docs.xarray.dev/en/stable/{{path}}#{{fragment}}",
+}
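The effect of these schemes: a markdown link such as `[docs](xarray:user-guide/indexing.html)` is expanded against the matching template. Below is a rough, illustrative re-implementation of that substitution (the real logic lives in `myst-parser`; single-brace placeholders stand in for the `{{path}}`/`{{fragment}}` templates above):

```python
# Illustrative sketch of MyST's custom URL scheme expansion, not myst-parser code.
SCHEMES = {
    "movement-website": "https://movement.neuroinformatics.dev/{path}",
    "xarray": "https://docs.xarray.dev/en/stable/{path}#{fragment}",
}

def resolve(url: str) -> str:
    scheme, _, rest = url.partition(":")
    path, _, fragment = rest.partition("#")
    return SCHEMES[scheme].format(path=path, fragment=fragment).rstrip("#")

print(resolve("xarray:user-guide/indexing.html#label-based-indexing"))
# -> https://docs.xarray.dev/en/stable/user-guide/indexing.html#label-based-indexing
print(resolve("movement-website:api_index"))
# -> https://movement.neuroinformatics.dev/api_index
```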
20 changes: 10 additions & 10 deletions docs/source/getting_started.md
@@ -2,8 +2,8 @@

## Installation

-We recommend you install movement inside a [conda](https://docs.conda.io/en/latest/)
-or [mamba](https://mamba.readthedocs.io/en/latest/index.html) environment.
+We recommend you install movement inside a [conda](conda:)
+or [mamba](mamba:) environment.
In the following we assume you have `conda` installed,
but the same commands will also work with `mamba`/`micromamba`.

@@ -36,7 +36,7 @@ pip install --upgrade movement

:::{tab-item} Developers
To get the latest development version, clone the
-[GitHub repository](https://github.com/neuroinformatics-unit/movement/)
+[GitHub repository](movement-github:)
and then run from inside the repository:

```sh
@@ -53,7 +53,7 @@ Please see the [contributing guide](target-contributing) for more information.

## Loading data
You can load predicted pose tracks from the pose estimation software packages
-[DeepLabCut](http://www.mackenziemathislab.org/deeplabcut) or [SLEAP](https://sleap.ai/).
+[DeepLabCut](dlc:) or [SLEAP](sleap:).

First import the `movement.io.load_poses` module:

@@ -67,7 +67,7 @@ Then, use the `from_dlc_file` or `from_sleap_file` functions to load the data.

:::{tab-item} SLEAP

-Load from [SLEAP analysis files](https://sleap.ai/tutorials/analysis.html) (`.h5`):
+Load from [SLEAP analysis files](sleap:tutorials/analysis) (`.h5`):
```python
ds = load_poses.from_sleap_file("/path/to/file.analysis.h5", fps=30)
```
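For DeepLabCut predictions, the analogous call uses `from_dlc_file`; a sketch, with the file path and `fps` value as placeholders:

```python
ds = load_poses.from_dlc_file("/path/to/file.h5", fps=30)
```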
@@ -132,7 +132,7 @@ and load the data, as shown above.
## Working with movement datasets

Loaded pose estimation data are represented in movement as
-[`xarray.Dataset`](https://docs.xarray.dev/en/stable/generated/xarray.Dataset.html) objects.
+[`xarray.Dataset`](xarray:generated/xarray.Dataset.html) objects.

You can view information about the loaded dataset by printing it:
```python
@@ -156,13 +156,13 @@ list of unique names (str) for `individuals` and `keypoints`,
in seconds if `fps` is provided, otherwise they are in frame numbers.

The dataset contains two data variables stored as
-[`xarray.DataArray`](https://docs.xarray.dev/en/latest/generated/xarray.DataArray.html#xarray.DataArray) objects:
+[`xarray.DataArray`](xarray:generated/xarray.DataArray.html#xarray.DataArray) objects:
- `pose_tracks`: with shape (`time`, `individuals`, `keypoints`, `space`)
- `confidence`: with shape (`time`, `individuals`, `keypoints`)

You can think of a `DataArray` as a `numpy.ndarray` with `pandas`-style
indexing and labelling. To learn more about `xarray` data structures, see the
-relevant [documentation](https://docs.xarray.dev/en/latest/user-guide/data-structures.html).
+relevant [documentation](xarray:user-guide/data-structures.html).

The dataset may also contain the following attributes as metadata:
- `fps`: the number of frames per second in the video
@@ -197,11 +197,11 @@ resulting in a `DataArray` rather than a `Dataset`:
```python
pose_tracks = ds.pose_tracks.sel(individuals="individual1", keypoints="snout")
```
-You may also use all the other powerful [indexing and selection](https://docs.xarray.dev/en/latest/user-guide/indexing.html) methods provided by `xarray`.
+You may also use all the other powerful [indexing and selection](xarray:user-guide/indexing.html) methods provided by `xarray`.
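For instance, position-based selection with `isel` complements the label-based `sel` shown above. A sketch, using the dimension names described earlier:

```python
# Position-based: the first 100 frames, for all individuals and keypoints.
first_100 = ds.pose_tracks.isel(time=slice(0, 100))

# Label- and position-based selection can be mixed freely.
snout_xy = ds.pose_tracks.sel(keypoints="snout").isel(individuals=0)
```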

### Plotting

-You can also use the built-in [`xarray` plotting methods](https://docs.xarray.dev/en/latest/user-guide/plotting.html)
+You can also use the built-in [`xarray` plotting methods](xarray:user-guide/plotting.html)
to visualise the data. Check out the [Load and explore pose tracks](./examples/load_and_explore_poses.rst)
example for inspiration.
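As a quick sketch (the individual, keypoint and dimension names follow the dataset described above):

```python
import matplotlib.pyplot as plt

# One line per keypoint: the x-coordinate of individual1 over time.
ds.pose_tracks.sel(individuals="individual1", space="x").plot.line(x="time")
plt.show()
```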
