Connect NATS JetStream to S3 for long-term storage and replay.
This application stores and loads NATS messages to and from S3 object storage.
Define `store` jobs that serialize messages, compress them into blocks, and write them to S3. Send HTTP requests to start `load` jobs that download messages from S3 and submit them back into NATS.
See the examples directory to get started.
The app can be run from a pre-built Docker container:
version: "3.7"
services:
nats3:
image: evanofslack/nats-s3-connector:latest
ports:
- 8080:8080
volumes:
- ./config.toml:/etc/nats3/config.toml
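With a `config.toml` in the working directory (its contents are described below), the container can be started with the usual Compose commands:

```sh
# start the connector in the background
docker compose up -d

# follow the connector's logs
docker compose logs -f nats3
```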
Alternatively, build the executable from source:
```sh
git clone https://github.com/evanofslack/nats-s3-connector
cd nats-s3-connector
cargo build
```
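Once built, the binary lands under `target/`. The sketch below assumes the binary is named `nats-s3-connector` after the repo; how it locates its configuration is not covered here, so check the examples directory for that:

```sh
# optionally build an optimized release binary instead of the debug one
cargo build --release

# run the connector (binary name is an assumption based on the repo name;
# see the examples directory for how to supply config.toml)
./target/release/nats-s3-connector
```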
Jobs that store NATS messages in S3 are defined in the config file, which can be written in either TOML or YAML format.
```toml
[[store]]
name = "job-1"
stream = "jobs"
subject = "subjects-1"
bucket = "bucket-1"

[[store]]
name = "job-2"
stream = "jobs"
subject = "subjects-2"
bucket = "bucket-2"
```
The config accepts any number of `store` definitions and starts a thread to monitor each job.
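Since YAML is also accepted, the same two store jobs could look like this in YAML, assuming its schema simply mirrors the TOML structure above:

```yaml
store:
  - name: "job-1"
    stream: "jobs"
    subject: "subjects-1"
    bucket: "bucket-1"
  - name: "job-2"
    stream: "jobs"
    subject: "subjects-2"
    bucket: "bucket-2"
```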
Messages stored in S3 can be loaded and submitted back into NATS.
These `load` jobs are started by sending a POST request to the HTTP server at the `/load` endpoint:
```sh
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{
    "bucket": "bucket-1",
    "read_stream": "jobs",
    "read_subject": "subjects-1",
    "write_stream": "jobs",
    "write_subject": "destination",
    "delete_chunks": true
  }' \
  http://localhost:8080/load
```
This will start loading messages from S3 and publishing them to the specified write stream and subject.
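To confirm the replay, subscribe to the destination subject, for example with the NATS CLI (assuming it is installed and pointed at the same server):

```sh
# watch replayed messages arrive on the write subject from the request above
nats sub "destination"
```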
There is a Prometheus-compatible metrics endpoint at `/metrics`. It provides counters for messages stored and loaded, as well as gauges tracking in-progress jobs. All metrics are prefixed with `nats3`.
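The endpoint can be scraped with a standard Prometheus job; a minimal sketch, assuming the connector is reachable at `localhost:8080`:

```yaml
# prometheus.yml snippet scraping the connector's /metrics endpoint
scrape_configs:
  - job_name: "nats3"
    static_configs:
      - targets: ["localhost:8080"]
```

A quick `curl http://localhost:8080/metrics` verifies the endpoint is up before wiring it into Prometheus.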