This repository contains Banyan Computer's internal platform services.
This document will help you set up a new development environment to contribute to this project. Please open a pull request if these instructions are out-of-date.
You should have a working Rust toolchain, npm, docker engine (or equivalent), and sqlite installed on your machine. An up-to-date version of docker is required to fully run the development environment.
Make sure the following command works:
docker --version
If you need to install docker, please refer to the official documentation.
This project relies upon sqlx to interact with SQL databases from Rust code. The sqlx command-line interface can be installed using the following command:
cargo install sqlx-cli --no-default-features --features completions,native-tls,sqlite
NB: you may replace the native-tls feature with the openssl-vendored feature if you would like to use a vendored copy of the openssl library instead of your system's version. Refer to the crate's manifest to see additional features.
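To confirm the CLI installed correctly and is on your PATH, you can ask it for its version (output will vary by release):
sqlx --version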
The core service uses a .env file to store environment variables. A sample file exists as a starting point. You can copy that sample file into place by running the following command:
cp crates/banyan-core-service/.env{.sample,}
Next, edit the .env file and provide the GOOGLE_CLIENT_ID= and GOOGLE_CLIENT_SECRET= values for the project. You will need to get these from someone that already has them.
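Once you have them, the relevant lines of the .env file should look like the following (the values here are placeholders, not real credentials):
GOOGLE_CLIENT_ID=<development-client-id>
GOOGLE_CLIENT_SECRET=<development-client-secret>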
Note that only the development keys will work here; other keys won't allow authentication and shouldn't be present on developers' systems.
You'll also need to ensure that your account has been granted access to the OAuth2 project if we're still in the testing phase of our application. This needs to be done by someone with access to the Google Cloud Console (direct deeplink to the relevant page).
The staging service and storage provider service use a .env file to store an argument called UPLOAD_STORE_URL. This variable specifies the connection to the Object Store backend the given service should use at runtime; it must be present, or the service will fail to start.
A sample file exists in either crate. This file should configure a valid connection to a local filesystem Object Store at the path ./data/uploads within either crate. You can copy those sample files into place by running the following commands:
cp crates/banyan-staging-service/.env{.sample,}
cp crates/banyan-storage-provider-service/.env{.sample,}
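For reference, the variable is a single line in each .env file. The authoritative values live in the .env.sample files; a local filesystem backend might look roughly like the following (the URL scheme shown is an illustrative assumption, so defer to the sample files):
# hypothetical example; copy the real value from .env.sample
UPLOAD_STORE_URL=file://./data/uploads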
You must copy these defaults into place before attempting to run either service or resetting your development environment with ./bin/reset_env.sh (discussed below).
These services also support using S3 as an Object Store backend. For development purposes this project initializes a MinIO backend that is spawned locally by ./bin/reset_env.sh.
The .env.sample files in either crate should contain example connection strings for connecting to this local MinIO instance in your development environment. DO NOT USE THE SPECIFIED CREDENTIALS IN PRODUCTION. If you've run ./bin/reset_env.sh successfully then MinIO should be available at those example endpoints.
You can check the status of the MinIo container by running:
docker ps -a
You should see a container named banyan-minio running. If not, make sure docker is installed and properly configured in your environment.
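To narrow the output down to just that container, docker's name filter also works:
docker ps --filter name=banyan-minio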
The easiest way to set up a development environment is to use the reset_env.sh helper script, which automatically performs the steps outlined below in this document.
./bin/reset_env.sh
First, some cleanup from prior runs; this assumes you're in the root of this repository:
make clean
We need to generate the platform signing keys. This can be done by simply starting up the core service and shutting it down; the generate-core-service-key command will do this for you. To run against the tomb CLI, add the --features fake flag.
After that command has finished, the platform's public key should be copied over to the data/ directories of the staging and storage provider services.
make generate-core-service-key
cp -f crates/banyan-core-service/data/service-key.public crates/banyan-staging-service/data/platform-key.public
cp -f crates/banyan-core-service/data/service-key.public crates/banyan-storage-provider-service/data/platform-key.public
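If you want to confirm the key material landed where the downstream services expect it, listing the copied files is a quick sanity check:
ls -l crates/banyan-staging-service/data/platform-key.public crates/banyan-storage-provider-service/data/platform-key.public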
The staging and storage provider services have their own authentication material that must be generated to authenticate with the core service. Once again, this can be done by starting up the services and then shutting them down. The generate-staging-service-key and generate-storage-provider-service-key commands will perform this step for you.
make generate-staging-service-key
make generate-storage-provider-service-key
Now, the staging and storage provider services must be registered in the sqlite database as potential storage providers. This can be done by running the following scripts:
source bin/add_staging_host.sh
source bin/add_storage_host.sh
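To sanity-check the registration, you can attach a sqlite prompt to the core database (the make targets for this are described later in this document) and list its tables; a table tracking storage hosts should now contain rows for both services (the exact table name may vary):
make connect-to-core-database
sqlite> .tables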
The services should now be ready to interact with each other. The steps so far only need to be run once, or whenever the environment needs to be reset. The remaining instructions are for bringing up the project, assuming everything is set up as documented here.
Run npm ci to install JavaScript dependencies for the frontend.
cd crates/banyan-core-service/frontend
npm ci
cd -
This is only needed if you want to bring up the web interface locally.
NB: This command should be run once more whenever changes have been made to the frontend.
TIP: a terminal multiplexer like tmux is recommended for this step.
To build the web interface, run these commands from the root of the repository:
cd crates/banyan-core-service/frontend
npm run build
cd -
In one terminal, start the core service by running:
cd crates/banyan-core-service
cargo run
In another terminal, start the staging service by running:
cd crates/banyan-staging-service
cargo run
In another terminal, start the storage provider service by running:
cd crates/banyan-storage-provider-service
cargo run
You should now be able to open up your web browser to http://127.0.0.1:3001, log in, and use the platform.
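Before reaching for the browser, a quick request against the core service's port (3001, per the URL above) confirms it is listening:
curl -I http://127.0.0.1:3001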
It can be helpful during development to query the SQL databases for the core, staging, and storage provider services directly.
Each of these services' databases writes to its respective data/ directory. The Makefile contains some commands to attach a sqlite prompt to each:
make connect-to-core-database
make connect-to-staging-database
make connect-to-storage-provider-database
Use these commands to list databases, indexes, and tables.
.databases: list names and files of attached databases
.indexes: list names of indexes
.tables: list names of tables
.schema: show the CREATE statements for table(s)
For more information run .help in the sqlite prompt, or refer to the Official Documentation.
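As a quick example, a short session inspecting the core database might look like the following ('some_table' is a placeholder; substitute a name from the .tables output):
make connect-to-core-database
sqlite> .tables
sqlite> .schema some_table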
The sqlx library places some files within a .sqlx/ directory in order to typecheck our queries. Sometimes this cache may need to be refreshed. Run the bin/prepare_queries.sh script(s) if you encounter errors about stale or missing query metadata.
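Assuming the script lives at bin/ in the repository root as referenced above, refreshing the cache looks like:
./bin/prepare_queries.sh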
In the tomb repository, go to the tomb-wasm sub-crate. Build it with the following commands:
export BANYAN_CORE_CHECKOUT=~/workspace/banyan/banyan-core
wasm-pack build --release
rm -f ${BANYAN_CORE_CHECKOUT}/crates/banyan-core-service/frontend/tomb_build/*
cp -f pkg/* ${BANYAN_CORE_CHECKOUT}/crates/banyan-core-service/frontend/tomb_build/
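You can verify the build artifacts were copied by listing the destination directory:
ls ${BANYAN_CORE_CHECKOUT}/crates/banyan-core-service/frontend/tomb_build/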
An extra step needs to be taken to set up local production builds for the core service. You will need either docker or podman installed (though only the latter has been tested), and some production build settings need to be provided through environment variables. From the root of the repo:
cp -f crates/banyan-core-service/.env.build.sample crates/banyan-core-service/.env.build.prod
Edit the .env.build.prod file and supply the project key from OpenReplay as the TRACKER_PROJECT_KEY variable. Make sure the TRACKER_INGEST_POINT is pointing at the correct OpenReplay instance. From there you can create a local production container using the following command in any of the service directories:
DRY_RUN=1 ./bin/build_and_deploy
Omit the DRY_RUN parameter if you have SSH access to the appropriate server to automatically deploy the build to it.