The Bagel CLI is a simple Python command line tool to automatically read and annotate a BIDS dataset so that it can be integrated into the Neurobagel graph.
Please refer to our official Neurobagel documentation for more information on how to use the CLI.
Option 1 (recommended): Pull the Docker image for the CLI from Docker Hub:

```bash
docker pull neurobagel/bagelcli
```
Option 2: Clone the repository and build the Docker image locally:

```bash
git clone https://github.com/neurobagel/bagel-cli.git
cd bagel-cli
docker build -t bagel .
```
To build a Singularity image for the bagel-cli instead, pull from the Docker Hub image:

```bash
singularity pull bagel.sif docker://neurobagel/bagelcli
```
CLI commands can be accessed using the Docker/Singularity image.

NOTE: The Docker examples below assume that you are using the official Neurobagel Docker Hub image for the CLI. If you have instead built an image locally, replace `neurobagel/bagelcli` in the commands with your local image tag.
To view the general CLI help:

```bash
# Docker
docker run --rm neurobagel/bagelcli  # this is a shorthand for `docker run --rm neurobagel/bagelcli --help`

# Singularity
singularity run bagel.sif
```
For help with a specific command:

```bash
# Docker
docker run --rm neurobagel/bagelcli <command-name> --help

# Singularity
singularity run bagel.sif <command-name> --help
```
To run the CLI on data:

1. `cd` into your local directory containing (1) your phenotypic .tsv file, (2) your Neurobagel-annotated data dictionary, and (3) your BIDS directory (if available).
2. Run a `bagel-cli` container and include your CLI command at the end, in the following format:
```bash
# Docker
docker run --rm --volume=$PWD:$PWD -w $PWD neurobagel/bagelcli <CLI command here>

# Singularity
singularity run --no-home --bind $PWD --pwd $PWD /path/to/bagel.sif <CLI command here>
```
In the above command, `--volume=$PWD:$PWD -w $PWD` (or `--bind $PWD --pwd $PWD` for Singularity) mounts your current working directory (containing all inputs for the CLI) at the same path inside the container, and also sets the container's working directory to the mounted path, so that it matches your location on the host machine. This lets you pass paths to the containerized CLI that are composed the same way as on your local machine. (Both absolute paths and relative paths from your working directory will work!)
If your data live in `/home/data/Dataset1`:

```
home/
└── data/
    └── Dataset1/
        ├── neurobagel/
        │   ├── Dataset1_pheno.tsv
        │   └── Dataset1_pheno.json
        └── bids/
            ├── sub-01
            ├── sub-02
            └── ...
```
You could run the following (for a Singularity container, replace the first part of the Docker commands with the Singularity command from the above template):
```bash
cd /home/data/Dataset1

# 1. Construct phenotypic subject dictionaries (pheno.jsonld)
docker run --rm --volume=$PWD:$PWD -w $PWD neurobagel/bagelcli pheno \
    --pheno "neurobagel/Dataset1_pheno.tsv" \
    --dictionary "neurobagel/Dataset1_pheno.json" \
    --output "neurobagel" \
    --name "Dataset1"

# 2. Add BIDS data to the pheno.jsonld generated by step 1
docker run --rm --volume=$PWD:$PWD -w $PWD neurobagel/bagelcli bids \
    --jsonld-path "neurobagel/pheno.jsonld" \
    --bids-dir "bids" \
    --output "neurobagel"
```
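After step 1, you can sanity-check the generated `pheno.jsonld` with a few lines of Python. Note that the snippet below parses a simplified, hypothetical stand-in for the file; the real file's keys follow the Neurobagel data model and may differ:

```python
import json

# Hypothetical, simplified stand-in for neurobagel/pheno.jsonld;
# the keys here are illustrative and may not match the real data model.
jsonld_text = """
{
  "@context": {"nb": "http://neurobagel.org/vocab/"},
  "hasLabel": "Dataset1",
  "hasSamples": [{"hasLabel": "sub-01"}, {"hasLabel": "sub-02"}]
}
"""

data = json.loads(jsonld_text)
# Basic sanity checks: the dataset has a name and at least one subject record
print(f"dataset: {data['hasLabel']}, subjects: {len(data['hasSamples'])}")
```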
To ensure that our Docker images are built in a predictable way, we use `requirements.txt` as a lock-file. That is, `requirements.txt` includes the entire dependency tree of our tool, with pinned versions for every dependency (see also).
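As a purely illustrative sketch (the package names, versions, and annotations below are made up, not the actual pins), a pip-compile-style lock-file looks like:

```text
# Hypothetical excerpt — not the actual contents of requirements.txt
click==8.1.7
    # via typer
typer==0.9.0
    # via bagel
```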
We suggest that you create a development environment that is as close as possible to the environment we run in production. To do so, first install the dependencies from our lock-file (`dev_requirements.txt`):

```bash
pip install -r dev_requirements.txt
```

Then install the CLI without touching the dependencies:

```bash
pip install --no-deps -e .
```
Finally, to run the test suite, install the `bids-examples` and `neurobagel_examples` submodules:

```bash
git submodule init
git submodule update
```

Confirm that everything works by running the tests:

```bash
pytest .
```
The `requirements.txt` file is automatically generated from the `setup.cfg` constraints. To update it, we use `pip-compile` from the `pip-tools` package.

To install:

```bash
pip install pip-tools
```

To update the runtime dependencies in `requirements.txt`, run:

```bash
pip-compile -o requirements.txt --upgrade
```
The above command only updates the runtime dependencies. To update the developer dependencies in `dev_requirements.txt`, run:

```bash
pip-compile -o dev_requirements.txt --extra all
```
Terms in the Neurobagel namespace (`nb` prefix) and their class relationships are serialized to a file called `nb_vocab.ttl`, which is automatically uploaded to new Neurobagel graph deployments. This vocabulary is used by Neurobagel APIs to fetch available attributes and attribute instances from a graph store.

When the Neurobagel graph data model is updated (e.g., if new classes or subclasses are created), this file should be regenerated by running:

```bash
python generate_nb_vocab_file.py
```

This will create a file called `nb_vocab.ttl` in the current working directory.
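If you want to quickly inspect the regenerated vocabulary, a naive line-based scan is enough for simple Turtle with one triple per line. The two-triple excerpt below is made up for illustration (it is not the real contents of `nb_vocab.ttl`), and a proper RDF library such as rdflib is the robust choice for anything beyond a quick look:

```python
# Hypothetical Turtle excerpt — not the actual contents of nb_vocab.ttl.
ttl = """\
nb:Assessment rdfs:subClassOf nb:ControlledTerm .
nb:Diagnosis rdfs:subClassOf nb:ControlledTerm .
"""

def subclass_pairs(turtle_text):
    # Naive scan: handles only the one-line "subject predicate object ." form.
    pairs = []
    for line in turtle_text.splitlines():
        parts = line.rstrip(" .").split()
        if len(parts) == 3 and parts[1] == "rdfs:subClassOf":
            pairs.append((parts[0], parts[2]))
    return pairs

print(subclass_pairs(ttl))
```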