
Add the option for multiple views loading #197

Closed
wants to merge 66 commits

Conversation

@vigji (Collaborator) commented May 30, 2024

Description

This PR attempts to define a loading mode for supporting multi-camera data.

  • [ ] Bug fix
  • [x] Addition of a new feature
  • [ ] Other

This kind of data structure would be needed by anyone in the process of extracting 3D poses. It is only needed as long as you do not yet have 3D coordinates, which in my case (and, I assume, in anyone's) are the end goal of having multiple views. So, if you want to support only the final, "clean" data where some sort of triangulation has already happened, I would understand!

References

Discussion started in #196

How has this PR been tested?

Currently in the process of defining tests; at the moment I am just testing with already existing data, pretending it is multi-camera.

Is this a breaking change?

If we decide to approve this, it will require some core redefinition in the dataset validator. Currently it seems to be quite strict; if the view dimension ends up being optional, it would require some work in the Validator class. I am not familiar with the processing functions to compute kinematics; in principle, I assume that xarray's semantic indexing should allow for smooth handling of this optionality - but I am not yet an xarray pro.
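To illustrate what I mean (a toy sketch; the dimension names, order, and shapes are my assumptions, not this PR's actual schema):

```python
import numpy as np
import xarray as xr

# Toy position array with an extra, optional "view" dimension
# (names and shapes are illustrative assumptions).
position = xr.DataArray(
    np.random.rand(100, 2, 3, 2),
    dims=["time", "view", "keypoints", "space"],
    coords={"view": ["side", "top"]},
)

# Dimensions are addressed by name, so code that operates on "time"
# or "space" does not need to know whether "view" exists:
displacement = position.diff("time")   # broadcast across both views
side_only = position.sel(view="side")  # select a single view if present
assert "view" in displacement.dims
```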

Does this PR require an update to the documentation?

If accepted, yes.

If any features have changed or have been added, please explain how the
documentation has been updated.

No changes yet

Checklist:

  • [x] The code has been tested locally
  • [ ] Tests have been added to cover all new functionality
  • [ ] The documentation has been updated to reflect any changes
  • [ ] The code has been formatted with pre-commit

@vigji marked this pull request as draft on May 30, 2024 09:28
Quality Gate passed (SonarCloud)

Issues: 0 new issues, 0 accepted issues
Measures: 0 security hotspots, no data about coverage, 0.0% duplication on new code

See analysis details on SonarCloud

codecov bot commented May 30, 2024

Codecov Report

Attention: Patch coverage is 99.32318% with 4 lines in your changes missing coverage. Please review.

Project coverage is 99.34%. Comparing base (426003c) to head (8c6b213).
Report is 58 commits behind head on main.

Files with missing lines Patch % Lines
movement/io/load_poses.py 87.09% 4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #197      +/-   ##
==========================================
- Coverage   99.68%   99.34%   -0.35%     
==========================================
  Files          11       14       +3     
  Lines         638      912     +274     
==========================================
+ Hits          636      906     +270     
- Misses          2        6       +4     

☔ View full report in Codecov by Sentry.

@niksirbi (Member)

Cross-linking this thread on Zulip, where we are discussing matters relevant to this PR.

niksirbi and others added 5 commits May 31, 2024 16:18
…ormatics-unit#193)

* example usage for the median filter

* formatting and wording tweaks

* make example titles and subtitles consistently imperative

* finished first full draft of the example

* added section about combining smoothing filters

* ignore doi link during linkcheck

* Apply suggestions from code review

Co-authored-by: Chang Huan Lo <changhuan.lo@ucl.ac.uk>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix line lengths

* use px2 instead of dB

---------

Co-authored-by: Chang Huan Lo <changhuan.lo@ucl.ac.uk>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* added from_numpy() function to the load_poses module

* unit test new function

* use from_numpy() function in other loaders as well

* add Examples section to docstring and render in API index

* add Examples section to docstring and render in API index

* confidence array is optional

* None is the default for confidence

* rename private functions

* renamed from_dlc_df to from_dlc_style_df

* harmonise docstrings in load_poses

* harmonised function names and docstrings in save_poses

* harmonised docstrings in validators

* split Input/Output section of API index into modules

* renamed `_from_lp_or_dlc_file` to `_ds_from_lp_or_dlc_file`
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.4.3 → v0.4.7](astral-sh/ruff-pre-commit@v0.4.3...v0.4.7)
- [github.com/codespell-project/codespell: v2.2.6 → v2.3.0](codespell-project/codespell@v2.2.6...v2.3.0)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* bump python versions for PyPI and CI

* recommend py3.11 for conda env during installation

* display supported python versions as a shield in README

* pyupgrade auto-update Union to | syntax
neuroinformatics-unit#206)

* Validate required dimensions only

* Fix fstring

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>

* Update docstring

---------

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>
lochhh and others added 12 commits June 10, 2024 10:59
* Split validators into modules

* Fix API index

* Fix API reference

* Fix docstring

* Move validator module one level up
* move logging.py inside utils

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Skeleton for ValidBboxesDataset class

* Add ID string format check

* Print all IDs with wrong format

* Check IDs are 1-based

* Make ID list not optional

* Add test for invalid centroid or shape arrays

* Add test for invalid ID array

* Fix confidence array setting to nan

* Add test for confidence array

* Add fixtures for valid bboxes arrays

* move method to class

* Add log message check to invalid_centroid_position_array test

* Print attribute.name (rather than attribute) in error messages

* Add log messages for shape_array tests

* Add logs check for ID attribute

* Fix log message

* Add log message check to confidence array

* Combine pattern matching tests

* Change log messages in tests

* Rename data array from centroid_position to position

* Rename IDs to individual_names

* Remove checks for position and shape arrays to be 3 dimensional

* Remove enforcing IDs are 1-based in valid bbox class

* Make individual_names optional and assign default value

* Remove requirement (and test) for individual_names to be in a specific format

* Change validator and test to take individual_names as optional inputs

* Add fixture request to confidence array test

* Clean up

* Clean up tests

* Fix docstring

* Small cosmetic edits

* Add assert statement for default confidence matrix

* Feedback from code review

* Remove some comments

* length of box --> extent

* Refactor parameters into dict

* Refactor as a function

* Revert "Refactor as a function"

This reverts commit 8056e28.

* Clarify docstring for `individual_names`

* Add feedback from code review
* Split validators into modules

* Split validators tests. Factor out long fixture.

* Check log messages in tests

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add bboxes tests and put dataset fixtures in test file

* Fix rebase side effect

* Fix ky -> key

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Since [PR208](neuroinformatics-unit#208), Python v3.11 is the main version we test across OSes, and it's the one we recommend in the installation guide. But I'd forgotten to update the Python version in the quick install section of the README. This PR rectifies that.
* Add "type" to sample datasets metadata, to include bbox data

* rename pytest fixture POSE_DATA_PATHS to DATA_PATHS

* Add 'type' to metadata required fields

* Update docs

* Apply suggestions from code review

Co-authored-by: Chang Huan Lo <changhuan.lo@ucl.ac.uk>

* Add platform specific tabs for computing hash

* Remove pass for not implemented bit

* pytest.DATA_PATHS refactor

* Refactor `fetch_dataset_paths`

* Update `sample_data` docstrings to accommodate bboxes data

* Suggestion from code review

* Remove if poses case for now, clarify TODO

---------

Co-authored-by: Chang Huan Lo <changhuan.lo@ucl.ac.uk>
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.4.7 → v0.5.0](astral-sh/ruff-pre-commit@v0.4.7...v0.5.0)
- [github.com/pre-commit/mirrors-mypy: v1.10.0 → v1.10.1](pre-commit/mirrors-mypy@v1.10.0...v1.10.1)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* API index just lists modules not single funcs

* add customised jinja templated for sphinx-autosummary

* Sanitise module-level docstrings

* pre-commit applied

* add modules rubric on api_index page

* updated contributing guide section on API reference

* monospace formatting of attrs in module docstrings

* Work around failing tests due to PR#231

* Hide attrs validator functions

* Change class section "parameters" to "attributes"

* Add MovementDataset class attributes docstrings

* Revert "Work around failing tests due to PR#231"

This reverts commit fe2c1c4.

* Hide class method header if empty

* Remove class attributes autosummary table

* Remove extra space

---------

Co-authored-by: lochhh <changhuan.lo@ucl.ac.uk>
…it#224)

* added optional video argument to sample data fetchers

* renamed optional arg to with_video

* use the smaller Aeon video for testing video fetching

* updated docs section on sample data

* apply suggestions from review

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
lochhh and others added 6 commits July 17, 2024 09:05
…tics-unit#209)

* Draft dataarray accessor

* Move dataarray accessor methods to `filtering`

* Add dataarray functions, test equality

* Add tests

* Add integration test

* Remove filters taking Dataset as input

* Reorganise filtering module

* Update filter and smooth examples

* Replace `window_length` with `window`

* Format assert string

* Remove old code accidentally reintroduced during rebase

* Update docstrings

* Add filtering methods to the `move` accessor

* Add example to docstring

* Remove obsolete and unused function imports

* Move util functions to reports.py and logging.py

* Apply suggestions from code review

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>

* Update docstrings

* Add missing docstring

* Add `move` accessor examples in docstrings

* Remove `position` check in kinematics wrapper

* Change`interpolate_over_time` to operate on num of observations

* Add test for different `max_gap` values

* Update `filter_and_interpolate.py` example

* Fix `filtering_wrapper` bug

* Update filter examples

* Use dictionary `update` in `smooth` example

* Move `logger` assignment to top of file

* Add `update` example to "getting started"

* Cover both dataarray and dataset in `test_log_to_attrs`

* Test that ``log`` contains the filtering method applied

* Use :py:meth: syntax for xarray.DataArray.squeeze() in examples

* Update `reports.py` docstrings

* Handle missing `individuals` and `keypoints` dims in NaN-reports

* Return str in `report_nan_values`

* Clean up examples

* Convert filtering multiple data variables tip to section

* Use `update()` in `filter_and_interpolate` example

---------

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>
* Add zenodo badge and citation to readme

* Create CITATION.CFF

* Use full names

* Add citation file to manifest

* Add citation to website

* updated citation.cff file after validation by cffinit

* remove 2nd date from citation entry

* homepage citation is included from README

* added Sofia to pyproject.toml authors

* harmonise spelling of my own name

---------

Co-authored-by: niksirbi <niko.sirbiladze@gmail.com>
* Add frame_array to ValidBboxesDataset

* Edit log

* Add test

* Fix docstring

* Add missing spaces in warning messages

* Change default IDs to 0-based

* Adapt sample_data tests to new metadata.yaml file in GIN repo

* Refactor shape check

* Suggestion for consistent names

* Add attribute type

* Check frames are contiguous

* Add a column vector converter to frame_array

* Revert "Add a column vector converter to frame_array"

This reverts commit 3a99c1a.
* Add skeleton for ValidVIAtracksCSV class

* Add skeleton for ValidVIAtracksCSV test

* Draft VIA file validator

* Change asserts to errors (WIP)

* Remove 1-based integer checks (for track ID and frames). Replace assert by errors

* Small edits

* Add tests for VIA file (pending fixtures)

* Add one fixture

* Add frame number as invalid file attribute

* Factor out valid header fixture

* Add test for frame number wrongly encoded in the filename

* Add unique frame numbers test. Check bbox shape.

* Add test for region attribute not defined

* Add test for track ID not castable as an integer

* Add test for unique track IDs per frame

* Small edits to comments and docstrings

* Apply suggestions from code review

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>

* Fix test duplicate from rebase

* Rename symbols

* csv to .csv

* Small edits to comments

---------

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>
* Updates and suggestions to i/o docs section

* Update input output with bboxes edits

* Update dataset section with bboxes edits

* Update sample data section with bboxes edits

* pose --> poses datasets

* Fix typo

* Add from_numpy

* Add example to export csv tracking data

* Refer to _poses_ datasets in API reference for loading pose tracking data

* Apply suggestions from code review

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>

* Add bboxes dataset to CONTRIBUTING.md

* Clarify 2D poses in dataset docs

* Add tabs and shape as code

* Change shape formatting back

* Fix `ds_type` comment

* Add line with `to_dataframe` method

* Replace assign_attrs by dict syntax

---------

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>
* Automate api_index.rst generation

* Revert to using (updated) NIU build and publish docs actions

* Include `make` commands in fancy tabs

* Swap tab order

* Mention index paths in the corresponding tabs
Lauraschwarz and others added 2 commits October 22, 2024 19:36
…rmatics-unit#304)

* first draft of example to convert file formats and changing keypoints

* second draft to convert and modify pose track files. I split the function into three separate operations.

* formatted the code and got rid of spelling mistakes for the docs

* changed outline of code, new functions now handle ds not fpaths and added additional example function to run on fpath

* Apply suggestions from code review

added Niko's suggestions, implementing comments

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>

* pre-commit changes

* corrected import error

* pre-commit changes again

* get rid of extra underlines

* finishing touches

* added default thumbnail for examples without plots

---------

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>
* replace accessor with MovementDataset dataclass

* moved pre-save validations inside save_poses module

* deleted accessor code and associated tests

* define dataset structure in modular classes

* updated stale docstring for _validate_dataset()

* remove mentions of the accessor from the getting started guide

* dropped accessor use in examples

* ignore linkcheck for opensource licenses

* Revert "ignore linkcheck for opensource licenses"

This reverts commit c8f3498.

* use ds.sizes instead of ds.dims to suppress warning

* Add references

* remove movement_dataset.py module

---------

Co-authored-by: lochhh <changhuan.lo@ucl.ac.uk>
@niksirbi (Member)

Hey @vigji, sorry for the long silence on this. I'm going through stale PRs and I think it's time to resurrect this idea.

For context, we are slowly undertaking changes to allow for more flexible dimensions in movement datasets, as you had suggested. We got rid of the accessor (#322), and with it some overzealous validation steps. Now we only validate the dimensions we actually need, on a per-function basis. We are also planning to reorder the dimensions (#236) and stop creating singleton dimensions, but these changes are still to be implemented.

Given the above, I think it's time to go forward with this PR. As far as I can tell, the extra "views" dimension shouldn't prevent you from running filtering and kinematics functions on your datasets. The computations will simply be broadcast across all extra dimensions.

I think the first step would be to rebase this PR on the current main branch and check whether the function or its docstring needs updating. After that, it would just be a matter of modifying some tests - e.g. creating a fixture with a multi-view dataset, and adding that to the parameters of some filtering and kinematics tests. I can do that last bit if you can't find the time. Let me know if you'll have time for this in the coming weeks and if you'll need help with anything.

niksirbi and others added 6 commits October 24, 2024 16:04
* Add `clean` action to Windows `make` file

* Use `fail-on-warning` mode in Windows `make` file

* Drop unused `--keep-going` flag in Makefile

* Refactor auto-generate api docs

* Update contributing docs

* Remove Sphinx version constraint

* Document make-mode only

* Allow target chaining in Windows make
* added tip for R users

* added script for converting admonitions to myst

* refactored conversion script
* Draft inter-individual distances

* Return vector norm in `compute_interindividual_distances`

* Add `compute_interkeypoint_distances`

* Refactor pairwise distances tests

* Use `scipy.spatial.distance.cdist`

* Add examples to docstrings

* Rename variables

* Update test function args + fix indentation

* Handle scalar and 1d dims

* Handle missing `core_dim`

* Refactor `cdist` and tests

* Fix docstrings

* Reorder functions + cleanup docs

* Reduce pairwise distances functions

* Mention examples of available distance metrics

* Update docstrings

* Require `pairs` in `compute_pairwise_distances`

* Raise error if there are no pairs to compute distances for

* Rename `core_dim` to `labels_dim`

* Spell out expected pairs in test

* Merge old `kinematics` file changes into new

* Rename `core_dim` to `labels_dim` in tests

* Validate dims in `compute_pairwise_distances`

* Apply suggestions from code review

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>

* Apply suggestions from code review

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Niko Sirmpilatze <niko.sirbiladze@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.6.9 → v0.7.2](astral-sh/ruff-pre-commit@v0.6.9...v0.7.2)
- [github.com/pre-commit/mirrors-mypy: v1.11.2 → v1.13.0](pre-commit/mirrors-mypy@v1.11.2...v1.13.0)
- [github.com/mgedmin/check-manifest: 0.49 → 0.50](mgedmin/check-manifest@0.49...0.50)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
…-unit#280)

* implement compute_speed and compute_path_length functions

* added speed to existing kinematics unit test

* rewrote compute_path_length with various nan policies

* unit test compute_path_length across time ranges

* fixed and refactor compute_path_length and its tests

* fixed docstring for compute_path_length

* Accept suggestion on docstring wording

Co-authored-by: Chang Huan Lo <changhuan.lo@ucl.ac.uk>

* Remove print statement from test

Co-authored-by: Chang Huan Lo <changhuan.lo@ucl.ac.uk>

* Ensure nan report is printed

Co-authored-by: Chang Huan Lo <changhuan.lo@ucl.ac.uk>

* adapt warning message match in test

* change 'any' to 'all'

* uniform wording across path length docstrings

* (mostly) leave time range validation to xarray slice

* refactored parameters for test across time ranges

* simplified test for path length with nans

* replace drop policy with ffill

* remove B905 ruff rule

* make pre-commit happy

---------

Co-authored-by: Chang Huan Lo <changhuan.lo@ucl.ac.uk>
@niksirbi (Member)

Trying another ping, @vigji. I'm keen on your input.

@vigji (Collaborator, Author) commented Nov 12, 2024

Thanks for re-pinging me and keeping me in the loop!

I've lost track of this a bit, but I started working on it again just this week, so I'll give this a try. So, to confirm: you now think I can just rebase onto main and the kinematics functions should work?

Also, is there anything holding back #236 and dropping singletons? Do I still have to work under the assumption that I need the "individuals" dimension? Is adding my arbitrary "view" dimension not an issue at all in this context? It feels that until you're free to drop singletons, you are not really in "time and space are the only requirements" mode yet, right?

@niksirbi (Member)

> Also, is there anything holding back #236 and dropping singletons? Do I still have to work under the assumption that I need the "individuals" dimension? Is adding my arbitrary "view" dimension not an issue at all in this context? It feels that until you're free to drop singletons, you are not really in "time and space are the only requirements" mode yet, right?

Afaik @lochhh is planning to tackle #236 soon, and nothing should be holding us back from dropping singletons (other than unforeseen effects we may discover when we try it).

If you have a go at it now, assume that individuals is still needed. That said, your extra views dimension should not prevent you from computing kinematics or applying filtering functions as things stand. That's because functions in filtering.py and kinematics.py only validate the dimensions they actually require for the computation (in most cases that's time, space, or both), and they simply broadcast over all other existing dimensions. So if you have multiple views, a given computation - e.g. compute_velocity() - will be performed on every view, and the resulting DataArray will retain the views dimension. If you discover that something doesn't work as expected, then it's a bug and we'll have to solve it.
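For example (a minimal sketch; the dataset construction, dimension order, and the `view` name are assumptions for illustration, and the import path reflects the module layout at the time of this PR):

```python
import numpy as np
import xarray as xr
from movement.analysis import kinematics

# Toy position array with an extra "view" dimension (an illustrative
# assumption, not the actual loader output).
position = xr.DataArray(
    np.random.rand(100, 2, 1, 3, 2),
    dims=["time", "view", "individuals", "keypoints", "space"],
    coords={"time": np.arange(100) / 30, "view": ["side", "top"]},
)

# compute_velocity() only validates the dims it needs, so the
# computation is broadcast across all views.
velocity = kinematics.compute_velocity(position)
assert "view" in velocity.dims
```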

So in summary, I suggest implementing the multi-view loading assuming the current number and order of dimensions. When we fix #236 and the singleton problem, it will be our (the maintainers') job to make sure that your function is updated accordingly (together with all other loader functions). That's in the interest of keeping PRs atomic (at least conceptually).

Does that make sense?

@vigji (Collaborator, Author) commented Nov 12, 2024

Awesome! Yes, it does, I'll give this a try this week :)

@vigji (Collaborator, Author) commented Nov 20, 2024

So, I'm back to this! I can confirm that kinematics computation now runs smoothly, this is pretty cool :)

At the moment, there is a single function taking a dictionary {view_name: view_filename}. While I feel this is the most agnostic way of loading such data, it leaves the burden of creating the dictionary upstream. In my code I am currently using a function that looks for all .slp or .dlc files in a folder and creates the dictionary above, using the non-matching parts of the filenames as keys. I'd be happy to add it as well, depending on how you feel about having two functions that both refer to a single, possibly somewhat idiosyncratic, use case.
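Roughly, the helper I'm using looks something like this (a sketch; the name and details are illustrative, not the exact code):

```python
import os
from pathlib import Path


def dict_from_folder(folder: str, suffix: str = ".slp") -> dict[str, Path]:
    """Build a {view_name: view_filename} dict from files in a folder.

    View names are taken as the non-matching parts of the filenames:
    e.g. mouse1_side.slp and mouse1_top.slp share the prefix "mouse1_",
    so the result is {"side": ..., "top": ...}.
    """
    files = sorted(Path(folder).glob(f"*{suffix}"))
    stems = [f.stem for f in files]
    common = os.path.commonprefix(stems)  # longest shared prefix
    return {stem[len(common):]: f for stem, f in zip(stems, files)}
```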

I need to add sample data + tests + minimal documentation. Can you add me to the GIN repo (same username as here)? My sample data would be 5 .dlc datasets corresponding to the 5 different views. I feel it would be more correct to keep them all within a folder, although I do not see any other subfolders inside poses/ and I don't know if this would be a problem.

Let me know!

@vigji (Collaborator, Author) commented Nov 20, 2024

Sorry, I actually realised that in principle I can also test with what is already here, which is what I was doing in the past. A separate folder with my data could help, though, if:

  1. I also include the function that I mentioned above;
  2. in the future, I also port triangulation code to this project - I don't know if there would be interest in that.

@niksirbi (Member) commented Nov 20, 2024

> So, I'm back to this! I can confirm that kinematics computation now runs smoothly, this is pretty cool :)

Great to hear!

> At the moment, there is a single function taking a dictionary {view_name: view_filename}. While I feel this is the most agnostic way of loading such data, it leaves the burden of creating the dictionary upstream.

I'm perfectly fine with that. I think that flexibility is desirable and goes well with our philosophy. We can't imagine everyone's use cases, and most movement users will probably be able to put a dict together, as sketched below. That said, we might consider adding some more convenient/specialised entry points in the future, but for a first implementation your current approach is perfectly sensible.
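Something along these lines (the loader name and signature here are placeholders for whatever this PR ends up defining):

```python
from movement.io import load_poses

# Hypothetical multi-view entry point taking {view_name: view_filename}.
ds = load_poses.from_multiview_files(
    {"side": "session1_side.slp", "top": "session1_top.slp"},
    source_software="SLEAP",
)
```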

> I need to add sample data + tests + minimal documentation. Can you add me to the GIN repo (same username as here)? My sample data would be 5 .dlc datasets corresponding to the 5 different views. I feel it would be more correct to keep them all within a folder, although I do not see any other subfolders inside poses/ and I don't know if this would be a problem.

I've just added you to the GIN repo. Relevant contributing docs (which you've probably already read). Sometimes GIN gives us trouble; if you run into unexpected issues, look into this excellent guide that @sfmig has written.

Regarding where to place the data, our current logic dictates that all files containing "poses" go into the poses folder, without subfolders. In your case I would add some sensible suffix to indicate that they belong together - e.g. {name}_view-0.csv, {name}_view-1.csv, etc. Unfortunately, the whole logic of sample data is designed around the idea that each "poses" file is a dataset, so each one needs its own entry in the metadata.yaml file. Semantically, it would make sense to somehow group these files together. Perhaps this could be achieved by zipping them? But in that case additional logic would have to be added for pooch to unzip archives after downloading, which is a bit of an overhead. We'll have to look into the sample_data.py module and see if there is a "minimally invasive surgery" we can do to unlock your use case. I'll check to see if there is an easy way without overhauling the entire thing, which we plan to do at some point; see #237.
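For reference, pooch does ship an Unzip processor, so the extra logic would be roughly along these lines (URL and hash are placeholders; how to wire this into sample_data.py's registry is exactly the open question):

```python
import pooch

# Download a zipped multi-view dataset and extract it; pooch.Unzip()
# runs after download and returns the list of extracted file paths.
file_paths = pooch.retrieve(
    url="https://gin.g-node.org/<repo>/poses/multiview_dataset.zip",  # placeholder
    known_hash=None,  # a real sha256 hash would go here
    processor=pooch.Unzip(),
)
```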

Despite these pains, I'm keen to find a solution, because such data will be useful for you and us to develop and debug features related to triangulation (and associated stuff in napari).

Now some questions regarding your data:

  • Are you adding just the predicted poses or associated videos/frames as well?
  • How big are the files? We want to keep things small-ish to not slow down tests, CI, documentation builds etc.


@vigji (Collaborator, Author) commented Nov 20, 2024

So, I'd say: let's leave this PR and merge it as it is now. We'll figure out a solution for the actual multi-view dataset in a separate PR dealing with triangulation - what do you think?

As for the size, my poses data would be 5 × 77 kB files of just 200 representative timepoints. I can also provide the triangulated data - similar size - and the original videos, which are however small (4 × (600, 200) videos and 1 × (600, 600) video, all 200 frames).

But again, this can be addressed in a separate PR!

@niksirbi (Member)

> So, I'd say: let's leave this PR and merge it as it is now. We'll figure out a solution for the actual multi-view dataset in a separate PR dealing with triangulation - what do you think?

Agreed!

> As for the size, my poses data would be 5 × 77 kB files of just 200 representative timepoints. I can also provide the triangulated data - similar size - and the original videos, which are however small (4 × (600, 200) videos and 1 × (600, 600) video, all 200 frames).

Perfectly acceptable sizes!

> But again, this can be addressed in a separate PR!

Absolutely.

On an unrelated note: do we want to rebase this PR onto main, maybe, instead of merging? The diff is quite hard to parse as is.

@niksirbi (Member)

@vigji I've invited you as a collaborator with write access to this GitHub repo. This way I won't have to keep approving the CI workflow runs and you can debug faster.

@vigji (Collaborator, Author) commented Nov 20, 2024

Yes, sorry - I don't know what happened there; I was trying to rebase and made a mess! I will close this and start over from main with just the relevant changes!

Successfully merging this pull request may close: Support for multiple views?