Install the dependencies. On Debian/Ubuntu:
apt install build-essential cmake pkg-config libssl-dev flex bison libelf-dev iptables
Then, configure the Rust toolchain and install Just (only needed for the dev environment):
rustup target add x86_64-unknown-linux-musl
cargo install just # optional
Finally, install the protobuf compiler.
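On Debian/Ubuntu the compiler is available as the protobuf-compiler package (other distributions ship it under a similar name):

```sh
apt install protobuf-compiler
```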
Clone the repository:
git clone https://github.com/virt-do/cloudlet
Go to the project directory:
cd cloudlet
Create a TOML config file or update the existing one:
cat << EOF > src/cli/examples/config.toml
workload-name = "fibonacci"
language = "rust"
action = "prepare-and-run"
[server]
address = "localhost"
port = 50051
[build]
source-code-path = "$(readlink -f ./src/cli/examples/main.rs)"
release = true
EOF
Make sure to update source-code-path to the absolute path of the source code you want to run.
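For reference, the fibonacci workload referenced above can be as simple as the following sketch (the actual src/cli/examples/main.rs shipped with the repository may differ):

```rust
// Minimal fibonacci workload: any self-contained main.rs works.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    for i in 0..10 {
        println!("fib({i}) = {}", fibonacci(i));
    }
}
```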
You can find more information about each field in the table at the end of this section.
Warning
Make sure to set the CARGO_PATH environment variable to the path of your cargo binary:
export CARGO_PATH=$(which cargo)
Then start the VMM. The capsh invocation below runs it as your user while keeping the CAP_NET_ADMIN capability it needs for VM networking:
sudo -E capsh --keep=1 --user=$USER --inh=cap_net_admin --addamb=cap_net_admin -- -c 'RUST_BACKTRACE=1 '$CARGO_PATH' run --bin vmm -- grpc'
In a separate terminal, start the API:
cargo run --bin api
Finally, send a run request with the CLI:
cargo run --bin cli -- run --config-path src/cli/examples/config.toml
Note
If it's your first time running a request, cloudlet will have to compile a kernel and build an initramfs image. This will take a while, so feel free to do something else while you wait...
Here is a simple sequence diagram of Cloudlet:
sequenceDiagram
participant CLI
participant API
participant VMM
participant Agent
CLI->>API: HTTP Request /run
API->>VMM: gRPC Request to create VM
VMM->>Agent: Creation of the VM
VMM->>Agent: gRPC Request to the agent
Agent->>Agent: Build and run code
Agent-->>VMM: Stream Response
VMM-->>API: Stream Response
API-->>CLI: HTTP Response
- The CLI sends an HTTP request to the API, which in turn sends a gRPC request to the VMM
- The VMM then creates a VM
- When the VM starts, it boots into the agent, which runs another gRPC server to handle requests
- The agent then builds and runs the code
- The response is streamed back to the VMM, then to the API, and finally to the CLI
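To make the gRPC leg of this flow concrete, the VMM's service definition could look roughly like the sketch below. This is only an illustration of the request/stream shape, not cloudlet's actual proto: the service, message, and field names here are hypothetical.

```proto
syntax = "proto3";
package vmm;

// Hypothetical service: the API asks the VMM to run a workload and
// receives the agent's build/run output back as a stream of chunks.
service VmmService {
  rpc Run (RunRequest) returns (stream RunResponse);
}

message RunRequest {
  string workload_name = 1;
  string language = 2;
  string source_code = 3;
}

message RunResponse {
  string output_chunk = 1;
}
```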
| Field | Description | Type |
|---|---|---|
| workload-name | Name of the workload you want to run | String |
| language | Language of the source code | String enum: rust, python, node |
| action | Action to perform | String enum: prepare-and-run |
| server.address | Address of the server (currently not used) | String |
| server.port | Port of the server (currently not used) | Integer |
| build.source-code-path | Path to the source code on your local machine | String |
| build.release | Build the source code in release mode | Boolean |
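As an example of how these fields combine, a config for a Python workload might look like the following (the workload name and source path are hypothetical):

```toml
workload-name = "hello-python"
language = "python"
action = "prepare-and-run"

[server]
address = "localhost"
port = 50051

[build]
source-code-path = "/home/user/hello.py"
release = false
```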