ao Scheduler Unit

This is a spec-compliant ao Scheduler Unit, implemented as a Rust actix-web server.

Prerequisites

Database setup

  • The server will migrate the database at startup, but you must create a postgres database called su and provide its url in the DATABASE_URL environment variable described below
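For example, assuming a local postgres instance and the psql client are available (the connection details here are illustrative), the database can be created with

psql -U postgres -c "CREATE DATABASE su;"

The matching DATABASE_URL would then look something like postgres://postgres:password@localhost:5432/su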

Environment Variables

Create a .env file with the following variables, or set them in the OS:

  • SU_WALLET_PATH a local filepath to an arweave wallet the SU will use to write tx's
  • DATABASE_URL a postgres database url; you must have a postgres database called su
  • DATABASE_READ_URL an optional separate postgres database url for reads
  • GRAPHQL_URL a url for the arweave graphql interface, e.g. https://arweave-search.goldsky.com
  • ARWEAVE_URL an arweave gateway url to fetch actual transactions and network info from, e.g. https://arweave.net/
  • GATEWAY_URL a default fallback for the above two; must provide graphql, network info, and tx fetching
  • UPLOAD_NODE_URL an uploader url such as https://up.arweave.net
  • MODE can be either su or router; for local development use su
  • SCHEDULER_LIST_PATH a list of schedulers, only used in router MODE. Ignore it when in su MODE; just set it to "".
  • DB_WRITE_CONNECTIONS how many db connections in the writer pool, defaults to 10
  • DB_READ_CONNECTIONS how many db connections in the reader pool, defaults to 10
  • USE_DISK whether or not to write to and read from RocksDB; this is a performance enhancement for the data storage layer
  • SU_DATA_DIR if USE_DISK is true, this is where RocksDB will be initialized
  • MIGRATION_BATCH_SIZE when running the migration binary, how many records to fetch at once from postgres
  • ENABLE_METRICS enable application-level Prometheus metrics, available on the /metrics endpoint

You can also use a .env file to set environment variables when running in development mode. See .env.example for an example .env
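As a rough sketch, a development .env might look like the following. Every value here is illustrative, so replace the paths, urls, and credentials with your own; .env.example remains the authoritative template.

SU_WALLET_PATH=./.wallet.json
DATABASE_URL=postgres://postgres:password@localhost:5432/su
DATABASE_READ_URL=postgres://postgres:password@localhost:5432/su
GRAPHQL_URL=https://arweave-search.goldsky.com
ARWEAVE_URL=https://arweave.net/
GATEWAY_URL=https://arweave.net/
UPLOAD_NODE_URL=https://up.arweave.net
MODE=su
SCHEDULER_LIST_PATH=""
DB_WRITE_CONNECTIONS=10
DB_READ_CONNECTIONS=10
USE_DISK=false
SU_DATA_DIR=""
MIGRATION_BATCH_SIZE=100
ENABLE_METRICS=false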

Usage

Setup and run local development server with hot reloading

cargo install systemfd cargo-watch
systemfd --no-pid -s http::8999 -- cargo watch -x 'run --bin su su 9000'

or

Run the binary already in this repo

You can run the binary that is already in the repository if your machine is compatible. It is built for the x86_64 architecture and runs on Linux. You must have Clang and LLVM installed on the machine.

./su su 9000

Tests

You can execute the unit tests by running

cargo test

Compiling a binary (mainly for production/other live environments)

To build with docker on your local machine, delete all su images and containers if you have previously run this, then run

docker system prune -a
docker build --target builder -t su-binary .
docker create --name temp-container su-binary
docker cp temp-container:/usr/src/su/target/release/su .

This will create a binary called su which can be pushed to the repo for deployment or used directly. It is no longer a static binary and requires external libraries such as Clang and LLVM.

Running the binary, su MODE

Can run directly in the terminal (for compatible machines)

./su su 9000

Or in Docker

cp .env.example .env.su
docker build -t su-runner .
docker run --env-file .env.su -v ./.wallet.json:/app/.wallet.json su-runner su 9000

When running the binary in docker you will need to make sure the environment variables are set in the container; if you see a NotPresent error, you are missing environment variables. The required variables are the same as those listed in the Environment Variables section above. You will also need to make sure the database url is reachable from inside the container.
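As a quick sanity check, assuming ENABLE_METRICS is set and the container's port is published (for example by adding -p 9000:9000 to the docker run command above), you can confirm the SU is responding by hitting the metrics endpoint mentioned earlier:

curl http://localhost:9000/metrics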


Running a router in front of multiple scheduler units

If you have multiple scheduler units running you can run a su in router mode to act as a single entrypoint for all of them.

First, in the environment for this node, set the SCHEDULER_LIST_PATH variable to a json file containing a list of the su urls. The json file should look like the following:

[
    {
        "url": "https://ao-su-1.onrender.com"
    },
    {
        "url": "https://ao-su-2.onrender.com"
    }
]

Also set the MODE environment variable to router.

Now the url for the router can be used as a single entry point to all of the sus. In this configuration, all of the sus and the router should share the same wallet, configured via the SU_WALLET_PATH environment variable.
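For the router node, the relevant variables from the environment list above would be set along these lines (the file paths here are illustrative):

MODE=router
SCHEDULER_LIST_PATH=./schedulers.json
SU_WALLET_PATH=./.wallet.json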

When running the binary in docker you will need to make sure the environment variables are set in the container as well.

Running the binary, router MODE

Can run directly in the terminal (for compatible machines)

./su router 9000

Or in Docker

cp .env.example .env.router
docker build -t su-runner .
docker run --env-file .env.router -v ./.wallet.json:/app/.wallet.json -v ./schedulers.json:/app/.schedulers.json su-runner router 9000

Migrating data to disk for an existing su instance

If a su has been running on postgres for some time, there may be performance issues. Writing files to and reading them from disk has been added as a performance enhancement. To switch this on, set the following environment variables

  • USE_DISK whether or not to read and write binary files from/to disk (RocksDB). If the su has already been running for a while, the data will need to be migrated using the mig binary before turning this on.
  • SU_DATA_DIR the data directory on disk where the su will read binaries from and write them to

Then the mig binary can be used to migrate data in segments from the existing db. It currently only migrates the message files to disk. It takes a range argument, which represents a range of records in the messages table. For example, 0-500 would grab the first 500 messages from the messages table and write them to RocksDB on disk, and so on. Passing just 0 reads the whole table; the range exists so you can run multiple instances of the program on different segments of data for faster migration. To read from record 1000 to the end of the table, you would pass just 1000 as the argument.

Migrate the entire messages table to disk

./mig 0

Migrate the first 1000 messages

./mig 0-1000

Migrate from 1000 to the end of the table

./mig 1000

To build the mig binary, delete all su images and containers if you have previously run this, then run

docker system prune -a
docker build --target mig-builder -t mig-binary -f DockerfileMig .
docker create --name temp-container-mig mig-binary
docker cp temp-container-mig:/usr/src/mig/target/release/mig .

System Requirements for SU + SU-R cluster

The SU + SU-R runs as a cluster of nodes. The SU-R acts as a redirector to a set of SUs. In order to run the cluster you need at least 2 nodes: one SU and one SU-R (a SU running in router mode). In order for the SU-R to initialize properly when it boots up, it has to be started with a configured set of SUs in the SCHEDULER_LIST_PATH environment variable.

The workflow for setting up the SU/SU-R cluster properly is: start a set of SU nodes, configure the SU-R's SCHEDULER_LIST_PATH with all of the nodes, and then start the SU-R. To add more SUs later, just add them to the SCHEDULER_LIST_PATH and reboot the SU-R.

The production SU is a Rust application built into a binary which can be run with the RunDockerfile. The SU-R can be run with the RunRouterDockerfile. They currently run on port 9000, so they will require a web server pointing to port 9000. These containers need the ability to copy the defined secret files .wallet.json and .schedulers.json into the container when deploying, and also need a set of environment variables.
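As a minimal sketch of that deployment, and assuming the RunDockerfile and RunRouterDockerfile accept the same arguments and mounts as the docker examples earlier in this document (image names, env files, and published ports here are illustrative), the two node types might be started like this

docker build -f RunDockerfile -t su-node .
docker run --env-file .env.su -v ./.wallet.json:/app/.wallet.json -p 9000:9000 su-node su 9000

docker build -f RunRouterDockerfile -t su-router .
docker run --env-file .env.router -v ./.wallet.json:/app/.wallet.json -v ./schedulers.json:/app/.schedulers.json -p 9000:9000 su-router router 9000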

Lastly, the SU and SU-R require a postgresql database for each node, already initialized with the database name "su" before the first deployment. Deployments will migrate themselves at server startup. Each SU and SU-R should have its own database URL.

In summary, the SU + SU-R requirements are:

  • A docker environment to run 2 different dockerfiles
  • A web server pointing to port 9000
  • Ability to define and modify secret files available in the same path as the dockerfiles, .wallet.json and .schedulers.json
  • Environment variables available in the container
  • A postgresql database per node, with a database called "su" defined at the time of deployment