Rocket Pool is a distributed staking protocol for next-gen Ethereum.
This repository contains two Pulumi projects:

- `./rocketpool-pulumi`: deploys the necessary components for staking with the Rocket Pool protocol into a Kubernetes cluster; and
- `./rocketpool-pulumi/cluster/`: an optional stack for deploying a GKE Autopilot Kubernetes cluster, if you don't already have one handy.
Operating a Rocket Pool node requires 17.6 ETH to stake (vs. 32 ETH for a full validator) and provides additional rewards via the RPL token. You should understand the long-term commitment and financial risks associated with staking before attempting to use this project.
(You can run a full validator with this setup, but you'll need to bring your own validator keys and deposits.)
Rocket Pool is very easy to deploy as an all-in-one "smartnode" using their install scripts, and for most users this is sufficient.
I wanted more control over my deployment topology. For example, I wanted to:
- use clients not already bundled into the smartnode stack,
- version and deploy components independently,
- incorporate redundancy into the setup for high availability, and
- deploy on a cloud provider for elasticity.
Kubernetes was a natural fit.
You'll need working knowledge of Linux, Kubernetes, and (optionally) Pulumi to get this up and running.
- A GCP account, the `gcloud` binary, and a project to install into.
- A Pulumi account. It's highly recommended you use GCP KMS for secret encryption (see the example after this list).
- (optional) An infura.io account for ETH1 fallback and/or checkpoint sync.
- (optional) A notification channel configured if you'd like to get alerted for operational issues like low volume capacity.
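If you do use GCP KMS, Pulumi can encrypt stack secrets with a Cloud KMS key by specifying a secrets provider when the stack is created. A minimal sketch (the project, location, key ring, and key names are placeholders):

```shell
# Create a stack whose secrets are encrypted with a Cloud KMS key instead of the
# default Pulumi service provider. All resource names below are placeholders.
pulumi stack init mainnet \
  --secrets-provider="gcpkms://projects/my-project/locations/global/keyRings/pulumi/cryptoKeys/rocketpool"
```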
If using your own cluster, configure `rocketpool:kubeconfig` with the path to your kubeconfig.
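For example, following the same `pulumi config` pattern used elsewhere in this README (this assumes the Pulumi project is named `rocketpool`, so the bare key maps to `rocketpool:kubeconfig`):

```shell
# Point the stack at an existing cluster's kubeconfig.
pulumi config -s mainnet set kubeconfig ~/.kube/config
```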
Ensure you have vertical pod autoscaling enabled by following the instructions here.
See "Configuration" below for overriding storage classes.
Supported execution clients:

- Erigon
- Nethermind
- Infura (discouraged for mainnet)

Supported consensus clients are Lighthouse, Lodestar, Nimbus, and Teku (see the configuration table below); Lighthouse is currently the only validator client supported.
The `Pulumi.mainnet.yaml`, `Pulumi.prater.yaml`, and `./cluster/Pulumi.gcp.yaml` files show example configurations to use as a starting point.
Running `pulumi up -s prater` will get you up and running. While clients are syncing, you can connect to the rocketpool pod to initialize your wallet and deposits.
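A sketch of what that might look like (the label selector, pod name, and exact smartnode CLI invocation inside the container are assumptions; adjust to your deployment):

```shell
# Find the Rocket Pool pod (the "app=rocketpool" label is an assumption).
kubectl get pods -l app=rocketpool

# Run the smartnode CLI inside the pod to set up the wallet and node.
# You may need -c <container> to pick the right container, and the exact
# CLI invocation may differ in this containerized setup.
kubectl exec -it <rocketpool-pod> -- rocketpool wallet init
kubectl exec -it <rocketpool-pod> -- rocketpool node register
kubectl exec -it <rocketpool-pod> -- rocketpool node deposit
```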
tl;dr: configure `rocketpool:consensus` and `rocketpool:execution` with the clients you'd like to use. Terminate pods to automatically scale their resource reservations up or down. Optionally configure snapshots if you'd like to tear everything down and come back to it later.
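For example, to prefer Lighthouse backed by Teku and run Erigon for execution (a sketch; the available values come from the configuration table below, and the bare keys map to `rocketpool:consensus` and `rocketpool:execution`):

```shell
# Consensus clients in priority order: the validator prefers the first entry.
pulumi config -s prater set --path 'consensus[0]' lighthouse
pulumi config -s prater set --path 'consensus[1]' teku

# Execution clients in priority order.
pulumi config -s prater set --path 'execution[0]' erigon
```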
A 0.5 vCPU pod is always deployed with containers for the Lighthouse validator and the Rocket Pool rewards claim tool.
The stack attempts to use sane defaults (depending on whether you're deploying to mainnet or a testnet) as much as possible, but you can configure the overrides described in the table below.
Some config values are expected to be encrypted and can be set like so:
```shell
pulumi config -s mainnet set --secret --path teku.checkpointUrl 'https://...@eth2-beacon-mainnet.infura.io/eth/v2/debug/beacon/states/finalized'
```
| config | description |
| --- | --- |
| `rocketpool:consensus: list[string]` | A list of consensus clients to use, in priority order: the validator will prefer to connect to the first client. Available values are "lighthouse", "lodestar", "nimbus", and "teku". |
| `rocketpool:execution: list[string]` | A list of execution clients to use, in priority order: consensus clients will prefer to connect to the first execution client in the list. Available values are "erigon", "nethermind", and "infura". Infura should be avoided on mainnet. |
| `rocketpool:gkeMonitoring: bool` | If this is a cloud deployment and a notification channel is configured on the cluster, set this to "true" to receive operational alerts. |
| `rocketpool:infura: { eth1Endpoint: secret }` | Secret. Address of your Infura Eth1 API. Useful as a fallback but should be avoided on mainnet. |
| `rocketpool:infura: { eth2Endpoint: secret }` | Secret. Address of your Infura Eth2 API. Useful as a fallback but should be avoided on mainnet. |
| `rocketpool:kubeconfig: string` | Path to an existing cluster's kubeconfig. |
| `rocketpool:<client>: { command: list[string] }` | A custom command to start the container with; helpful for starting a container with "sleep infinity" to load data into the PVC. |
| `rocketpool:<client>: { external: bool }` | Whether to expose the client to the internet for discovery. Optional and defaults to false; incurs additional costs if enabled. |
| `rocketpool:<client>: { image: string }` | Docker image to use. |
| `rocketpool:<client>: { tag: string }` | Image tag to use. |
| `rocketpool:<client>: { replicas: int }` | How many instances to deploy. Set this to 0 to disable the client while preserving persistent volumes. |
| `rocketpool:<client>: { volume: { snapshot: bool } }` | If "true", this will create a volume snapshot. Only set this after a volume has been created. |
| `rocketpool:<client>: { volume: { source: string } }` | If set, new persistent volume claims will be created based on the volume snapshot with this name. |
| `rocketpool:<client>: { volume: { storage: string } }` | The size of the persistent volume claim. |
| `rocketpool:<client>: { volume: { storageClass: string } }` | The PVC's storage class. |
| `rocketpool:<client>: { targetPeers: int }` | The maximum or desired number of peers. |
| `rocketpool:<consensus client>: { checkpointUrl: string }` | Consensus clients accept the same options as execution clients, plus a checkpointUrl option. For Lighthouse this can be an Infura Eth2 address; for Teku it's of the form given above. |
| `rocketpool:rocketpool: { graffiti: string }` | Graffiti for signed blocks. |
| `rocketpool:rocketpool: { nodePassword: secret }` | Secret. Password for the Rocket Pool node. |
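For example, to pause a client while keeping its data, or to snapshot its volume before a full teardown (a sketch; assumes Erigon on the mainnet stack):

```shell
# Scale Erigon down to zero replicas while preserving its persistent volume.
pulumi config -s mainnet set --path erigon.replicas 0

# Snapshot the volume so the chain data survives tearing the stack down.
pulumi config -s mainnet set --path erigon.volume.snapshot true

pulumi up -s mainnet
```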
Clients are initially very over-provisioned to speed up the sync process. This works fine on testnets like Prater; after the sync is done, terminate the pod to automatically scale down its resource reservations (otherwise you'll be over-paying!).
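A sketch of that pod-termination step (the label selector is an assumption; adjust to match your deployment):

```shell
# Delete the synced client's pod; the deployment recreates it, and vertical pod
# autoscaling applies the smaller resource requests to the new pod.
kubectl delete pod -l app=erigon
```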
A mainnet sync will take much longer than it would if running locally, or it might not complete at all:
- Nethermind requires at least "fast" / "pd-balanced" storage and takes about a week to sync.
- Erigon will not complete and requires manually uploading a complete database to the container.
TODO: Please file an issue if you'd like instructions for uploading chain data to the cluster.
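In the meantime, one possible approach uses the per-client `command` override from the table above (an untested sketch; the pod name and destination path are assumptions):

```shell
# Start the Erigon container idle so its PVC is mounted but the client isn't running.
pulumi config -s mainnet set --path 'erigon.command[0]' sleep
pulumi config -s mainnet set --path 'erigon.command[1]' infinity
pulumi up -s mainnet

# Copy a previously synced database into the pod's data volume.
# The pod name and destination path are placeholders; check your deployment.
kubectl cp ./erigon-chaindata <erigon-pod>:/data

# Remove the override and redeploy so the client starts normally.
pulumi config -s mainnet rm --path erigon.command
pulumi up -s mainnet
```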
I've tried to tune this to be as cost-effective as possible while still providing reliable, penalty-free attestations.
I'm currently running a 100% effective mainnet stack with Erigon, Lighthouse and Teku for ~$5 a day.
Your costs will vary depending on your configuration and region.
This is a hobby project that I don't expect much interest in, and I just find Typescript to be more enjoyable to work with than HCL, Kustomize, or YAML. Sorry!
- rocketpool-deploy (AWS, Terraform, Ansible)
- rp-ha (Docker Swarm)
- rocketpool-helm (Kubernetes)