Update overview, mission, scope, and roadmaps (#352)
* update project overview

* update mission statement

* updated scope

* update roadmaps and consistently use `movement` (monospace)

* Add wheel as a dependency (#344)

* implement Adam's suggestions

* Apply some suggestions outright

Co-authored-by: sfmig <33267254+sfmig@users.noreply.github.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update project overview based on feedback

* clarify statement about action recognition

* updated scope

* mention "keypoints" for SLEAP and DLC representations in "scope".

---------

Co-authored-by: sfmig <33267254+sfmig@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
3 people authored Nov 28, 2024
1 parent e85c9f5 commit 85904d9
Showing 5 changed files with 80 additions and 30 deletions.
19 changes: 14 additions & 5 deletions README.md
@@ -9,7 +9,7 @@

# movement

-A Python toolbox for analysing body movements across space and time, to aid the study of animal behaviour in neuroscience.
+A Python toolbox for analysing animal body movements across space and time.


![](docs/source/_static/movement_overview.png)
@@ -27,10 +27,19 @@ conda activate movement-env
## Overview

-Pose estimation tools, such as [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) and [SLEAP](https://sleap.ai/) are now commonplace when processing video data of animal behaviour. There is not yet a standardised, easy-to-use way to process the pose tracks produced from these software packages.

-movement aims to provide a consistent modular interface to analyse pose tracks, allowing steps such as data cleaning, visualisation and motion quantification.
-We aim to support a range of pose estimation packages, along with 2D or 3D tracking of single or multiple individuals.
+Deep learning methods for motion tracking have revolutionised a range of
+scientific disciplines, from neuroscience and biomechanics, to conservation
+and ethology. Tools such as
+[DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) and
+[SLEAP](https://sleap.ai/) now allow researchers to track animal movements
+in videos with remarkable accuracy, without requiring physical markers.
+However, there is still a need for standardised, easy-to-use methods
+to process the tracks generated by these tools.

+`movement` aims to provide a consistent, modular interface for analysing
+motion tracks, enabling steps such as data cleaning, visualisation,
+and motion quantification. We aim to support all popular animal tracking
+frameworks and file formats.
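To give a flavour of the intended workflow, here is a minimal sketch that loads SLEAP predictions and computes velocity. Treat it as an illustration rather than the definitive API: the module paths and dimension names are assumptions based on `movement` releases around the time of this commit, and `predictions.h5` is a hypothetical file.

```python
from movement.io import load_poses

# Assumed import path: kinematics helpers have lived under both
# movement.kinematics and movement.analysis.kinematics across releases.
from movement.kinematics import compute_velocity

# Load SLEAP predictions into movement's common xarray.Dataset
# ("predictions.h5" is a hypothetical file path).
ds = load_poses.from_sleap_file("predictions.h5", fps=30)

# Position data carries labelled dimensions,
# e.g. time, individuals, keypoints, and space.
print(ds.position.dims)

# Velocity is the first derivative of position with respect to time.
velocity = compute_velocity(ds.position)
```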

Find out more on our [mission and scope](https://movement.neuroinformatics.dev/community/mission-scope.html) statement and our [roadmap](https://movement.neuroinformatics.dev/community/roadmaps.html).

2 changes: 1 addition & 1 deletion docs/source/community/index.md
@@ -1,6 +1,6 @@
# Community

-Contributions to movement are absolutely encouraged, whether to fix a bug,
+Contributions to `movement` are absolutely encouraged, whether to fix a bug,
develop a new feature, or improve the documentation.
To help you get started, we have prepared a statement on the project's [mission and scope](target-mission),
a [roadmap](target-roadmaps) outlining our current priorities, and a detailed [contributing guide](target-contributing).
46 changes: 38 additions & 8 deletions docs/source/community/mission-scope.md
@@ -3,25 +3,55 @@

## Mission

-[movement](target-movement) aims to **facilitate the study of animal behaviour in neuroscience** by providing a suite of **Python tools to analyse body movements** across space and time.
+`movement` aims to **facilitate the study of animal behaviour**
+by providing a suite of **Python tools to analyse body movements**
+across space and time.

## Scope

-At its core, movement handles trajectories of *keypoints*, which are specific body parts of an *individual*. An individual's posture or *pose* is represented by a set of keypoint coordinates, given in 2D (x,y) or 3D (x,y,z). The sequential collection of poses over time forms *pose tracks*. In neuroscience, these tracks are typically extracted from video data using software like [DeepLabCut](dlc:) or [SLEAP](sleap:).

-With movement, our vision is to present a **consistent interface for pose tracks** and to **analyze them using modular and accessible tools**. We aim to accommodate data from a range of pose estimation packages, in **2D or 3D**, tracking **single or multiple individuals**. The focus will be on providing functionalities for data cleaning, visualisation and motion quantification (see the [Roadmap](target-roadmaps) for details).

-While movement is not designed for behaviour classification or action segmentation, it may extract features useful for these tasks. We are planning to develop separate packages for this purpose, which will be compatible with movement and the existing ecosystem of related tools.
+At its core, `movement` handles the position and/or orientation
+of one or more individuals over time.

+There are a few common ways of representing animal motion from video
+recordings: an animal's position could be reduced to that of a single keypoint
+tracked on its body (usually the centroid), or instead a set of keypoints
+(often referred to as the pose) to better capture its orientation as well as
+the positions of limbs and appendages. The animal's position could also be
+tracked as a bounding box drawn around each individual, or as a segmentation
+mask that indicates the pixels belonging to each individual. Depending on the
+research question or the application, one or other format may be more
+convenient. The spatial coordinates of these representations may be defined
+in 2D (x, y) or 3D (x, y, z).

+Animal tracking frameworks such as [DeepLabCut](dlc:) or [SLEAP](sleap:) can
+generate keypoint representations from video data by detecting body parts and
+tracking them across frames. In the context of `movement`, we refer to these
+trajectories as _tracks_: we use _pose tracks_ to refer to the trajectories
+of a set of keypoints, _bounding boxes' tracks_ to refer to the trajectories
+of bounding boxes' centroids, or _motion tracks_ in the more general case.
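To make these representations concrete, the toy sketch below contrasts the array shapes they imply; it is purely illustrative (the variable names and dimension orders are assumptions, not `movement`'s actual data model):

```python
import numpy as np

n_frames, n_individuals, n_keypoints = 100, 2, 6

# Single-keypoint (centroid) tracking: one 2D point
# per individual per frame.
centroid_tracks = np.zeros((n_frames, n_individuals, 2))

# Pose tracking: a full set of keypoints per individual per frame.
pose_tracks = np.zeros((n_frames, n_individuals, n_keypoints, 2))

# Bounding-box tracking: a centroid plus a width and height per box.
bbox_positions = np.zeros((n_frames, n_individuals, 2))
bbox_shapes = np.zeros((n_frames, n_individuals, 2))
```

For 3D data, the trailing spatial dimension would have size 3 instead of 2.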

+Our vision is to present a **consistent interface for representing motion
+tracks** along with **modular and accessible analysis tools**. We aim to
+support data from a range of animal tracking frameworks, in **2D or 3D**,
+tracking **single or multiple individuals**. As such, `movement` can be
+considered as operating downstream of tools like DeepLabCut and SLEAP.
+The focus is on providing functionalities for data cleaning, visualisation,
+and motion quantification (see the [Roadmap](target-roadmaps) for details).

+In the study of animal behaviour, motion tracks are often used to extract and
+label discrete actions, sometimes referred to as behavioural syllables or
+states. While `movement` is not designed for such tasks, it can be used to
+generate features that are relevant for action recognition.

## Design principles

-movement is committed to:
+`movement` is committed to:
- __Ease of installation and use__. We aim for a cross-platform installation and are mindful of dependencies that may compromise this goal.
- __User accessibility__, catering to varying coding expertise by offering both a GUI and a Python API.
- __Comprehensive documentation__, enriched with tutorials and examples.
- __Robustness and maintainability__ through high test coverage.
- __Scientific accuracy and reproducibility__ by validating inputs and outputs.
- __Performance and responsiveness__, especially for large datasets, using parallel processing where appropriate.
-- __Modularity and flexibility__. We envision movement as a platform for new tools and analyses, offering users the building blocks to craft their own workflows.
+- __Modularity and flexibility__. We envision `movement` as a platform for new tools and analyses, offering users the building blocks to craft their own workflows.

Some of these principles are shared with, and were inspired by, napari's [Mission and Values](napari:community/mission_and_values) statement.
23 changes: 13 additions & 10 deletions docs/source/community/roadmaps.md
@@ -1,28 +1,31 @@
(target-roadmaps)=
# Roadmaps

-The roadmap outlines **current development priorities** and aims to **guide core developers** and to **encourage community contributions**. It is a living document and will be updated as the project evolves.
+This page outlines **current development priorities** and aims to **guide core developers** and to **encourage community contributions**. It is a living document and will be updated as the project evolves.

-The roadmap is **not meant to limit** movement features, as we are open to suggestions and contributions. Join our [Zulip chat](movement-zulip:) to share your ideas. We will take community demand and feedback into account when planning future releases.
+The roadmaps are **not meant to limit** `movement` features, as we are open to suggestions and contributions. Join our [Zulip chat](movement-zulip:) to share your ideas. We will take community feedback into account when planning future releases.

## Long-term vision
The following features are being considered for the first stable version `v1.0`.

-- __Import/Export pose tracks from/to diverse formats__. We aim to interoperate with leading tools for animal pose estimation and behaviour classification, and to enable conversions between their formats.
-- __Standardise the representation of pose tracks__. We represent pose tracks as [xarray data structures](xarray:user-guide/data-structures.html) to allow for labelled dimensions and performant processing.
-- __Interactively visualise pose tracks__. We are considering [napari](napari:) as a visualisation and GUI framework.
-- __Clean pose tracks__, including, but not limited to, handling of missing values, filtering, smoothing, and resampling.
-- __Derive kinematic variables__ like velocity, acceleration, joint angles, etc., focusing on those prevalent in neuroscience.
-- __Integrate spatial data about the animal's environment__ for combined analysis with pose tracks. This covers regions of interest (ROIs) such as the arena in which the animal is moving and the location of objects within it.
+- __Import/Export motion tracks from/to diverse formats__. We aim to interoperate with leading tools for animal tracking and behaviour classification, and to enable conversions between their formats.
+- __Standardise the representation of motion tracks__. We represent tracks as [xarray data structures](xarray:user-guide/data-structures.html) to allow for labelled dimensions and performant processing (see the sketch after this list).
+- __Interactively visualise motion tracks__. We are experimenting with [napari](napari:) as a visualisation and GUI framework.
+- __Clean motion tracks__, including, but not limited to, handling of missing values, filtering, smoothing, and resampling.
+- __Derive kinematic variables__ like velocity, acceleration, joint angles, etc., focusing on those prevalent in neuroscience and ethology.
+- __Integrate spatial data about the animal's environment__ for combined analysis with motion tracks. This covers regions of interest (ROIs) such as the arena in which the animal is moving and the location of objects within it.
- __Define and transform coordinate systems__. Coordinates can be relative to the camera, environment, or the animal itself (egocentric).
- __Provide common metrics for specialised applications__. These applications could include gait analysis, pupillometry, spatial
navigation, social interactions, etc.
- __Integrate with neurophysiological data analysis tools__. We eventually aim to facilitate combined analysis of motion and neural data.
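
As a sketch of the xarray-based representation mentioned in the list above, the snippet below builds a toy pose-tracks dataset; the dimension and coordinate names are assumptions for illustration, not a guaranteed schema:

```python
import numpy as np
import xarray as xr

rng = np.random.default_rng(seed=42)

# Toy pose tracks: 100 frames, 2 individuals, 3 keypoints, 2D space.
position = xr.DataArray(
    rng.normal(size=(100, 2, 3, 2)),
    dims=("time", "individuals", "keypoints", "space"),
    coords={
        "individuals": ["id_0", "id_1"],
        "keypoints": ["snout", "centre", "tail_base"],
        "space": ["x", "y"],
    },
)
ds = xr.Dataset({"position": position})

# Label-based indexing is what motivates xarray over plain NumPy:
snout_x = ds["position"].sel(keypoints="snout", space="x")
```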

## Short-term milestone - `v0.1`
-We plan to release version `v0.1` of movement in early 2024, providing a minimal set of features to demonstrate the project's potential and to gather feedback from users. At minimum, it should include:
+We plan to release version `v0.1` of `movement` in early 2025, providing a minimal set of features to demonstrate the project's potential and to gather feedback from users. At minimum, it should include:

- [x] Ability to import pose tracks from [DeepLabCut](dlc:), [SLEAP](sleap:) and [LightningPose](lp:) into a common `xarray.Dataset` structure.
- [x] At least one function for cleaning the pose tracks.
- [x] Ability to compute velocity and acceleration from pose tracks.
- [x] Public website with [documentation](target-movement).
- [x] Package released on [PyPI](https://pypi.org/project/movement/).
- [x] Package released on [conda-forge](https://anaconda.org/conda-forge/movement).
-- [ ] Ability to visualise pose tracks using [napari](napari:). We aim to represent pose tracks via napari's [Points](napari:howtos/layers/points) and [Tracks](napari:howtos/layers/tracks) layers and overlay them on video frames.
+- [ ] Ability to visualise pose tracks using [napari](napari:). We aim to represent pose tracks as napari [layers](napari:howtos/layers/index.html), overlaid on video frames.
20 changes: 14 additions & 6 deletions docs/source/index.md
@@ -1,7 +1,7 @@
(target-movement)=
# movement

-A Python toolbox for analysing body movements across space and time, to aid the study of animal behaviour in neuroscience.
+A Python toolbox for analysing animal body movements across space and time.

::::{grid} 1 2 2 3
:gutter: 3
@@ -17,7 +17,7 @@ Installation, first steps and key concepts.
:link: examples/index
:link-type: doc

-A gallery of examples using movement.
+A gallery of examples using `movement`.
:::

:::{grid-item-card} {fas}`comments;sd-text-primary` Join the movement
@@ -32,10 +32,18 @@

## Overview

-Pose estimation tools, such as [DeepLabCut](dlc:) and [SLEAP](sleap:) are now commonplace when processing video data of animal behaviour. There is not yet a standardised, easy-to-use way to process the *pose tracks* produced from these software packages.

-movement aims to provide a consistent modular interface to analyse pose tracks, allowing steps such as data cleaning, visualisation and motion quantification.
-We aim to support a range of pose estimation packages, along with 2D or 3D tracking of single or multiple individuals.
+Deep learning methods for motion tracking have revolutionised a range of
+scientific disciplines, from neuroscience and biomechanics, to conservation
+and ethology. Tools such as [DeepLabCut](dlc:) and [SLEAP](sleap:)
+now allow researchers to track animal movements
+in videos with remarkable accuracy, without requiring physical markers.
+However, there is still a need for standardised, easy-to-use methods
+to process the tracks generated by these tools.

+`movement` aims to provide a consistent, modular interface for analysing
+motion tracks, enabling steps such as data cleaning, visualisation,
+and motion quantification. We aim to support all popular animal tracking
+frameworks and file formats.

Find out more on our [mission and scope](target-mission) statement and our [roadmap](target-roadmaps).

