NeuroConv is a Python package for converting neurophysiology data in a variety of proprietary formats to the Neurodata Without Borders (NWB) standard.
Features:
- Reads data from 40 popular neurophysiology data formats and writes to NWB using best practices.
- Extracts relevant metadata from each format.
- Handles large data volumes by reading datasets piece-wise.
- Minimizes the size of the NWB files by automatically applying chunking and lossless compression.
- Supports ensembles of multiple data streams and provides common methods for temporal alignment across streams (see the example below).
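For instance, combining a raw recording stream with a spike-sorting stream into a single NWB file typically follows the pattern sketched below. The interface classes, source-data arguments, and file paths here are illustrative placeholders; the conversion gallery documents the exact arguments each format expects.

from neuroconv import NWBConverter
from neuroconv.datainterfaces import PhySortingInterface, SpikeGLXRecordingInterface

# Group two data streams (raw recording + spike sorting) into one conversion.
class ExampleConverter(NWBConverter):
    data_interface_classes = dict(
        Recording=SpikeGLXRecordingInterface,
        Sorting=PhySortingInterface,
    )

# Placeholder paths; each interface documents its own source-data arguments.
source_data = dict(
    Recording=dict(file_path="path/to/session_g0_t0.imec0.ap.bin"),
    Sorting=dict(folder_path="path/to/phy_output"),
)
converter = ExampleConverter(source_data=source_data)

metadata = converter.get_metadata()  # metadata extracted from the source files
converter.run_conversion(nwbfile_path="session.nwb", metadata=metadata)

The source data are read piece-wise during the write, so large recordings end up in chunked, compressed NWB datasets as described in the feature list above.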
We always recommend installing and running Python packages in a clean environment. One way to do this is via conda environments:
conda create --name <environment name> python=<Python version>
conda activate <environment name>
To install the latest stable release of neuroconv from PyPI, run:
pip install neuroconv
To install the current unreleased main branch (requires git to be installed in your environment, such as via conda install git), run:
pip install git+https://github.com/catalystneuro/neuroconv.git@main
NeuroConv also supports a variety of extra dependencies that can be specified inside square brackets, such as
pip install "neuroconv[openephys, dandi]"
which installs the extra dependencies needed to read OpenEphys data and to use the DANDI CLI (e.g., for automatic upload to the DANDI Archive).
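As a minimal sketch of what the openephys extra enables (the folder and output paths below are placeholders), a single Open Ephys recording could then be converted with its data interface:

from neuroconv.datainterfaces import OpenEphysRecordingInterface

# Placeholder path to an Open Ephys recording directory.
interface = OpenEphysRecordingInterface(folder_path="path/to/open_ephys_recording")
metadata = interface.get_metadata()  # metadata extracted from the source files
interface.run_conversion(nwbfile_path="open_ephys_session.nwb", metadata=metadata)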
You can read more about these options in the main installation guide.
See our ReadTheDocs page for full documentation, including a gallery of all supported formats.
NeuroConv is distributed under the BSD 3-Clause License. See LICENSE for more information.