The LinTO Deployment Tool streamlines the setup, configuration, and deployment of LinTO Services, such as transcription and LinTO Studio. Leveraging Docker in Swarm mode, this tool efficiently manages complex deployments, including transcription services, diarization, and reverse proxy setups. It provides a simplified interface for deploying the entire LinTO suite on a single node, while also enabling scalability across multiple nodes if required.
For more details on LinTO Studio's Architecture, refer to LINTO-STUDIO.md.
- Setup Script: The `setup.sh` script initializes the environment, including configuring Docker in Swarm mode, setting up required networks, installing dependencies, and preparing services for deployment. It also generates a set of `.yaml` files in the `running` directory, which are later used by the `start.sh` script to launch the services. This process will not affect any Docker or Docker Compose services that are already running on your system.
- Start Script: The `start.sh` script deploys a stack of services using Docker Swarm, providing a straightforward approach to managing multiple services.
- Environment Configuration: The configuration is managed through a `.envdefault` file, which can be customized or overridden by creating a `.env` file to adjust deployment settings.
- Docker: Ensure that Docker is installed. The `setup.sh` script will handle all Docker-related tasks, including initializing Swarm mode and setting up networks, so there is no need for manual configuration.
- Optional: Docker Desktop and WSL2 are required if you are working in a Windows environment.
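As a quick sanity check before running the setup script, you can confirm that Docker is installed and that the daemon is reachable using standard Docker CLI commands:

```bash
# Confirm the Docker CLI is installed and the daemon is reachable
docker --version
docker info > /dev/null && echo "Docker daemon is reachable"
```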
- Clone this repository and navigate to the project directory.
- Configure the environment variables by modifying `.envdefault` or creating a `.env` file to override the defaults. Environment variables are crucial for customizing your deployment. They define paths, network names, domain configurations, and other parameters used by the scripts during setup and deployment. By using a `.env` file, you can personalize your setup without modifying the default settings, which is useful for managing different environments.
- Run the setup script to prepare the environment:
./setup.sh
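Putting these steps together, a typical first run might look like the following sketch; the repository URL is a placeholder for wherever you obtained this project:

```bash
# Placeholder URL: substitute the actual repository location
git clone https://example.com/linto-deployment-tool.git
cd linto-deployment-tool
cp .envdefault .env   # optional: override defaults in .env
./setup.sh            # interactive setup (Swarm init, networks, service selection)
```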
The `setup.sh` script is interactive and will guide you through several key steps, including:
- Cleanup Step: The script removes `.yaml` files from the `running` directory, ensuring a fresh start.
- Dependencies Installation: The script installs all necessary dependencies, including `dialog` for user interactions, `jsonnet` for generating configuration files, `apache2-utils` for authentication, `jq` for JSON processing, and `yq` for YAML processing. It also installs `mkcert` to create local SSL certificates.
- Service Configuration: The script creates the necessary networks and directories for the various services. Depending on your selection, it sets up directories for STT (speech-to-text), LLM (large language model), LinTO Studio, and Traefik.
- Mode Selection: You will be prompted to choose between 'server' and 'local' deployment modes. If you select 'server' mode, a Let's Encrypt certificate will be automatically generated for secure connections, whereas 'local' mode will generate certificates using `mkcert`.
- Service Selection: The script allows you to select specific services to deploy, such as transcription (in different languages), diarization, summarization, live streaming, and LinTO Studio.
- GPU Configuration: If your system has a GPU, you will be asked if you want to use GPU acceleration for the services.
- Deployment Configuration: The script configures Docker Swarm, setting up Swarm mode if it is not already active. It may also promote worker nodes to manager nodes if necessary.
- Building Services: The script generates `.yaml` configuration files for the selected services using predefined templates, ensuring that all necessary services are properly built and configured for deployment.
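Once `setup.sh` completes, the generated stack definitions should be present in the `running` directory; listing them is a quick way to confirm which services were built:

```bash
# Each selected service gets its own stack file in the running directory
ls running/*.yaml
```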
To launch the LinTO services, run:
./start.sh
This script will deploy all services defined in the `./running` directory as a Docker stack, utilizing Docker Swarm to orchestrate the deployment.
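After the stack is launched, standard Docker Swarm commands can be used to confirm that every service has started; the stack name below assumes the default `LINTO_STACK_NAME=linto` from `.envdefault`:

```bash
# List the services in the stack and their replica counts
docker stack services linto
```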
To customize your deployment, copy `.envdefault` to `.env` and modify the variables as needed. Below is an example based on typical settings in `.envdefault`:
See Detailed Environment Settings
# Docker network to be used for the deployment
DOCKER_NETWORK=linto
# The name of the Docker stack used for deployment
LINTO_STACK_NAME=linto
# HTTP authentication for accessing some services
LINTO_HTTP_USER=linto
LINTO_HTTP_PASSWORD=LINAGORA
# Docker image tag for LinTO services (e.g., latest, latest-unstable)
LINTO_IMAGE_TAG=latest-unstable
# Directory to be used for storing shared and local data (audio uploads, configuration files, etc.)
LINTO_SHARED_MOUNT=~/shared_mount
LINTO_LOCAL_MOUNT=~/deployment_config
# Redis configuration (password for Redis services)
REDIS_PASSWORD=My_Password
# Theme settings for the LinTO front-end interface
LINTO_FRONT_THEME=LinTO-green
# Default permissions for new organizations (upload, summary, session)
ORGANIZATION_DEFAULT_PERMISSIONS=upload,summary,session
# Superuser settings
SUPER_ADMIN_EMAIL=superadmin@mail.com
SUPER_ADMIN_PWD=superadmin
The `.env` file allows you to configure Docker networks, authentication, stack names, paths for shared and local mounts, Redis settings, and the visual theme of the front-end. Be sure to adjust these variables according to your deployment needs.
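For example, a minimal `.env` might override only the values that differ from the defaults, leaving everything else to `.envdefault` (the values below are illustrative):

```bash
# .env — only the overrides; everything else comes from .envdefault
LINTO_IMAGE_TAG=latest
REDIS_PASSWORD=a_stronger_password
SUPER_ADMIN_EMAIL=admin@example.com
SUPER_ADMIN_PWD=change_me
```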
The deployment tool can be used for:
- Deploying LinTO Studio: LinTO Studio is a media management platform that provides a powerful web interface for managing transcription sessions and interacting with LinTO services. It offers features such as real-time transcription, closed-captioning, and diarization, allowing users to effectively manage and analyze multimedia content. For more information, visit the LinTO Studio GitHub page.
- Deploying transcription services: This tool utilizes Whisper models for high-accuracy, GPU-accelerated processing of audio data. These models are designed to deliver cutting-edge performance in transcription accuracy, particularly for scenarios requiring detailed linguistic analysis, and they can leverage GPU capabilities to handle resource-intensive tasks efficiently.
- Setting up advanced diarization and transcription services: The tool includes features to accurately identify speakers and generate precise transcriptions.
- Configuring an API gateway: The deployment includes an API gateway for seamless integration with LinTO Studio or other external services.
The infrastructure requirements vary based on the services you intend to deploy. Here are some general guidelines:
- CPU Deployment: Suitable for development or low-demand scenarios, such as testing or small-scale deployments, which can be run on a typical local machine or a small cloud instance.
- GPU Deployment: Recommended for Whisper models or real-time transcription, as these services are computationally intensive. For production-level deployment involving real-time transcriptions, a machine with a compatible Nvidia GPU is highly advised.
To ensure proper GPU utilization when using Docker Swarm, you need to configure the Nvidia container runtime, CUDA, and Nvidia drivers. Since Docker Swarm does not support the `--gpus` flag the way a plain `docker run` does, you must configure `/etc/docker/daemon.json` as follows to enable GPU capabilities:
# Configure Docker to use the NVIDIA Container Runtime
sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
EOF
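After editing `daemon.json`, Docker must pick up the new configuration; a restart followed by a quick inspection (commands assume a systemd-based host) confirms that the Nvidia runtime is registered:

```bash
# Restart the Docker daemon and check that the nvidia runtime is listed
sudo systemctl restart docker
docker info | grep -i runtime
```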
- Nvidia Container Toolkit: Install the Nvidia Container Toolkit to provide Docker containers with GPU support. Follow Nvidia's official documentation for installation instructions.
- CUDA: CUDA is required for GPU acceleration. Install the appropriate version that matches your GPU and the software requirements. Refer to Nvidia's compatibility guide for choosing the correct version.
- Nvidia Drivers: Make sure you have the correct Nvidia drivers installed. These drivers must be compatible with CUDA and the Nvidia container runtime.
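Once the drivers, CUDA, and the Nvidia Container Toolkit are in place, a quick sanity check such as the one below confirms that containers can see the GPU; the CUDA image tag is illustrative, so pick one compatible with your installed driver:

```bash
# The GPU should be visible on the host...
nvidia-smi
# ...and from inside a container using the nvidia runtime
docker run --rm --runtime=nvidia nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
```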
If you are using WSL2 with Docker Desktop, GPU access must be explicitly enabled, and Docker runtime settings may need adjustments to support GPU-based workloads. Ensure that Docker Desktop is configured to use WSL2 and that GPU sharing is enabled for GPU-intensive services.
You will need to configure the Docker Engine to enable GPU capabilities by adding the following configuration in Docker Desktop's settings under "Docker Engine":
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": false,
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
This configuration enables Docker to use the Nvidia runtime for GPU-accelerated tasks, ensuring compatibility with GPU-based services.
The `setup.sh` script handles several key tasks, including:
- Cleaning up old configurations: Removes outdated or conflicting setup files to ensure a clean start.
- Running dependencies setup: Installs all necessary services and tools required by the LinTO suite.
This script prepares everything required for a seamless deployment, from clearing temporary files to setting up the `running` directory and configuring Docker.
The `start.sh` script deploys the services using Docker Swarm by reading the configuration files located in the `./running` directory. Each service is deployed as part of a stack, ensuring that all services are well integrated. You do not need to handle the underlying Docker commands; `start.sh` simplifies this process for you.
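If a service fails to come up, the usual Docker Swarm troubleshooting commands apply; the stack name again assumes the default `linto`:

```bash
# Show the state of every task in the stack
docker stack ps linto
# Follow the logs of one service; replace <service> with a name
# reported by `docker stack services linto`
docker service logs -f linto_<service>
```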
If running locally or using self-signed certificates, browsers may require manual configuration to accept these certificates. You might encounter warnings when accessing the LinTO services via HTTPS. This is expected for self-signed certificates, and you can manually approve the certificate in your browser settings to proceed.
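If you chose 'local' mode, the certificates are issued by the `mkcert` local CA; on the machine where that CA was created, one way to avoid the warnings is to install the CA into the system and browser trust stores, which is standard `mkcert` behaviour:

```bash
# Install the mkcert local CA into the system/browser trust stores
mkcert -install
```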
When interacting with LinTO services via the command line using `curl`, self-signed certificates may cause security warnings or connection errors. To bypass these errors for local testing, you can use the `-k` or `--insecure` option, which tells `curl` to ignore certificate validation.
Example:
curl -k https://yourdomain.com/api
This will allow curl to proceed without verifying the self-signed certificate. However, this option should not be used in production environments, where valid certificates and full certificate verification are expected.
The provided scripts focus on deploying LinTO on a single-node Docker Swarm; however, you can scale the deployment across multiple nodes by adjusting the Docker Swarm configuration. For larger deployments, adding nodes to your Docker Swarm will help with load distribution and enhance reliability.
This documentation outlines the different access points for interacting with services through web interfaces, Swagger, and APIs registered behind the gateway. The URLs below use `localhost` as the default value, but they can be adjusted based on the domain used during deployment.
Note: Replace `localhost` with the appropriate domain during deployment to match your configuration.
The following interfaces can be accessed via a web browser to monitor and manage the system:
- Studio Front: Available at https://localhost/, this interface allows interaction with the studio front-end.
- Swarmpit: Available at https://localhost/monitoring-swarmpit/, this interface allows monitoring of Docker Swarm containers and services.
- Celery: Available at https://localhost/monitoring-celery/, this interface monitors the background tasks run by Celery.
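Some of these interfaces may sit behind the HTTP basic-auth credentials defined in `.env` (`LINTO_HTTP_USER` / `LINTO_HTTP_PASSWORD`); if so, they can also be reached from the command line, for example with the default credentials from `.envdefault`:

```bash
# -k skips certificate validation for self-signed local certificates
curl -k -u linto:LINAGORA https://localhost/monitoring-celery/
```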
The following APIs expose their documentation via Swagger, enabling easier exploration and testing of their endpoints:
- Studio API: https://localhost/cm-api/apidoc/#/, documentation for the studio API that manages backend services.
- Session API: https://localhost/session-api/api-docs/#/, documentation for the API handling live sessions.
- STT French Whisper v3: https://localhost/stt-french-whisper-v3/docs/#/, documentation for the Speech-to-Text API in French (based on Whisper v3).
- STT English Whisper v3: https://localhost/stt-english-whisper-v3/docs/#/, documentation for the Speech-to-Text API in English (based on Whisper v3).
- LLM Gateway: https://localhost/llm-gateway/docs/#/, documentation for the Large Language Model (LLM) Gateway API.
The following APIs are exposed and routed through the Traefik reverse proxy, allowing direct interaction with backend services if enabled:
- Studio API: https://localhost/cm-api/
- Session API: https://localhost/session-api/
- STT French Whisper v3: https://localhost/stt-french-whisper-v3/
- STT English Whisper v3: https://localhost/stt-english-whisper-v3/
- LLM Gateway: https://localhost/llm-gateway/
The APIs are also accessible behind the Gateway, centralizing access to the services if enabled:
- Studio API: https://localhost/gateway/cm-api/
- Session API: https://localhost/gateway/session-api/
- STT French Whisper v3: https://localhost/gateway/stt-french-whisper-v3/
- STT English Whisper v3: https://localhost/gateway/stt-english-whisper-v3/
- LLM Gateway: https://localhost/gateway/llm-gateway/
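The same backend can therefore be reached either directly through the reverse proxy or through the gateway. For instance, with self-signed local certificates (hence `-k`), both of the following target the Studio API:

```bash
# Direct route through the Traefik reverse proxy
curl -k https://localhost/cm-api/
# Same API, routed through the gateway
curl -k https://localhost/gateway/cm-api/
```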
The superuser has administrative access to the LinTO Studio back office, which includes managing organization creation, assigning default permissions, and overseeing users within organizations. To set up a superuser, configure the following environment variables in your `.env` file:
SUPER_ADMIN_EMAIL=superadmin@mail.fr
SUPER_ADMIN_PWD=superadminpassword
The superuser will have the authority to define organization-wide settings, manage user roles, and monitor all live sessions.
By default, each newly created organization is granted the following permissions, which define what members can do within the organization:
- Upload: Grants access to use the transcription service to upload and process media.
- Summary: Enables the use of large language models (LLM) to generate summaries for uploaded media.
- Session: Provides access to the Session API, allowing the organization to create live meetings.
These default permissions can be set up on project startup or adjusted individually in the back office by the superuser.
To configure default permissions at startup, set the following variable in the `.env` file:
ORGANIZATION_DEFAULT_PERMISSIONS=upload,summary,session
Note: If any default permission is removed, future organizations will not have access to that functionality unless the superuser grants it in the back office.
Note: To disable all permissions, set `ORGANIZATION_DEFAULT_PERMISSIONS=none`.
An organization can be structured with various user roles, each granting specific permissions. The default role is Member, and each subsequent role inherits the permissions of the previous one, as outlined below:
- Member: Can view and edit any media, regardless of the media permission.
- Uploader: Can create and upload new media.
- Meeting Manager: Has the ability to initiate and manage sessions.
- Maintainer: Manages all users within the organization.
- Admin: Has full control over all organization actions and settings, including permissions and user management.
These roles allow for a structured, role-based permission system within each organization, ensuring that each user has the appropriate level of access based on their responsibilities.