Nextflow-API is a web application and REST API for submitting and monitoring Nextflow pipelines on a variety of execution environments. The REST API is implemented in Python using the Tornado framework.
Nextflow-API was adapted from the SciDAS project; many thanks to its developers for their excellent code.
Depending on your setup, you may not need to install `mongodb`. You may also prefer to install the Python dependencies in an Anaconda environment:
```bash
conda create -n nextflow-api python=3.7
conda activate nextflow-api
pip install -r requirements.txt
```
Use `scripts/startup-local.sh` to deploy Nextflow-API locally, although you may need to modify the script to fit your environment.
The core of Nextflow-API is a REST API which provides an interface to run Nextflow pipelines and can be integrated with third-party services. Nextflow-API provides a collection of CLI scripts to demonstrate how to use the API.
Nextflow-API stores workflow runs and tasks in one of several "backend" formats. The `file` backend stores the data in a single `pkl` file, which is ideal for local testing. The `mongo` backend stores the data in a MongoDB database, which is ideal for production.
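For illustration, here is a minimal sketch of what a pickle-based file backend might look like; the file name, class name, and record layout are assumptions for the example, not the actual implementation:

```python
import os
import pickle

class FileBackend:
    """Sketch of a pickle-backed store for workflow and task records.
    All names and structures here are illustrative assumptions."""

    def __init__(self, path="db.pkl"):
        self.path = path
        self.data = {"workflows": {}, "tasks": []}
        # load existing data if the pickle file is already present
        if os.path.exists(path):
            with open(path, "rb") as f:
                self.data = pickle.load(f)

    def save(self):
        # persist the entire store in a single pkl file
        with open(self.path, "wb") as f:
            pickle.dump(self.data, f)

    def workflow_create(self, workflow):
        self.data["workflows"][workflow["_id"]] = workflow
        self.save()

    def workflow_get(self, id):
        return self.data["workflows"][id]
```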
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/workflows` | GET | List all workflow instances |
| `/api/workflows` | POST | Create a workflow instance |
| `/api/workflows/{id}` | GET | Get a workflow instance |
| `/api/workflows/{id}` | POST | Update a workflow instance |
| `/api/workflows/{id}` | DELETE | Delete a workflow instance |
| `/api/workflows/{id}/upload` | POST | Upload input files to a workflow instance |
| `/api/workflows/{id}/launch` | POST | Launch a workflow instance |
| `/api/workflows/{id}/log` | GET | Get the log of a workflow instance |
| `/api/workflows/{id}/download` | GET | Download the output data as a tarball |
| `/api/tasks` | GET | List all tasks |
| `/api/tasks` | POST | Save a task (used by Nextflow) |
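As a sketch of how a client might drive these endpoints, consider the following example using the `requests` library. The host, port, request payloads, and response fields are assumptions; consult the bundled CLI scripts for the exact formats:

```python
import requests

BASE = "http://localhost:8080"  # assumed default host and port

# create a workflow instance (the payload fields are illustrative assumptions)
r = requests.post(f"{BASE}/api/workflows", json={"pipeline": "nextflow-io/hello"})
wid = r.json()["_id"]  # assumed id field in the response

# upload an input file to the workflow instance
with open("input.txt", "rb") as f:
    requests.post(f"{BASE}/api/workflows/{wid}/upload", files={"file": f})

# launch the workflow, then fetch its log
requests.post(f"{BASE}/api/workflows/{wid}/launch")
log = requests.get(f"{BASE}/api/workflows/{wid}/log")
print(log.text)

# download the output data as a tarball
tar = requests.get(f"{BASE}/api/workflows/{wid}/download")
with open("output.tar.gz", "wb") as f:
    f.write(tar.content)
```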
Nextflow-API automatically collects resource usage data generated by Nextflow, including metrics like runtime, CPU utilization, memory usage, and bytes read/written. Through the web interface you can download this data as CSV files, create visualizations, and train prediction models for specific pipelines and processes.
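As an illustration, one could also pull the task records from the API and summarize resource usage with pandas. The field names below (`trace.process`, `trace.realtime`, `trace.peak_rss`) correspond to Nextflow trace fields but are assumptions about how the records are stored:

```python
import pandas as pd
import requests

BASE = "http://localhost:8080"  # assumed default host and port

# fetch all task records collected by Nextflow-API
tasks = requests.get(f"{BASE}/api/tasks").json()

# flatten nested trace records into a dataframe; column names are assumptions
df = pd.json_normalize(tasks)

# summarize mean runtime and peak memory usage per process
summary = df.groupby("trace.process")[["trace.realtime", "trace.peak_rss"]].mean()
print(summary)
```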