This project contains scripts for easily running a Solana validator on a fresh Ubuntu 20.04 machine, using my (mvines) preferred Solana mainnet settings and best practices. The command names are slightly quirky and not yet well documented.
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
sh -c "$(curl -sSfL https://release.anza.xyz/beta/install)"
```
then restart your shell to apply the new PATH.
Allow the `sol` user to view all logs with `journalctl -f`, and to use `sudo`, with:
```
sudo adduser sol adm
sudo adduser sol sudo
```
```
sudo bash -c "cat >/etc/sysctl.d/20-solana.conf <<EOF
# Increase UDP buffer size
net.core.rmem_default = 134217728
net.core.rmem_max = 134217728
net.core.wmem_default = 134217728
net.core.wmem_max = 134217728

# Increase memory mapped files limit
vm.max_map_count = 2000000
EOF" && \
sudo sysctl -p /etc/sysctl.d/20-solana.conf
```
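To confirm the kernel picked up the new settings, you can read the keys back (this is just a verification step, not part of the sosh scripts):

```shell
# Read back the values written above; each should match the file
sysctl net.core.rmem_max net.core.wmem_max vm.max_map_count
```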
```
sudo bash -c "cat >/etc/security/limits.d/90-solana-nofiles.conf <<EOF
# Increase process file descriptor count limit
* - nofile 2000000
EOF"
```
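The limit applies to new login sessions. After logging out and back in (or rebooting), you can confirm it with the shell builtin:

```shell
# Shows the soft open-file limit for the current session;
# after re-login it should match the 2000000 configured above
ulimit -n
```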
```
sudo bash -c "cat >/etc/logrotate.d/sol <<EOF
$HOME/solana-validator.log {
  rotate 3
  daily
  missingok
  postrotate
    systemctl kill -s USR1 sol.service
  endscript
}
EOF"
```
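To check the new rule for mistakes without rotating anything, logrotate's debug mode can be used (standard logrotate, not a sosh command):

```shell
# -d prints what would happen but makes no changes
sudo logrotate -d /etc/logrotate.d/sol
```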
```
sudo apt-get update
sudo apt-get install -y git htop silversearcher-ag iotop \
  libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang \
  cmake make libprotobuf-dev protobuf-compiler nvme-cli
```
```
ssh-keygen -t ed25519 && cat ~/.ssh/id_ed25519.pub
```
```
for ch in edge beta stable; do \
  git clone https://github.com/solana-labs/solana.git ~/$ch; \
done; \
(cd ~/beta; git checkout v1.14); \
(cd ~/stable; git checkout v1.13); \
ln -sf beta ~/solana
```
```
git clone https://github.com/mvines/sosh ~/sosh
```
then add to your bash config:
```
echo '[ -f $HOME/sosh/sosh.bashrc ] && source $HOME/sosh/sosh.bashrc' >> ~/.bashrc
echo '[ -f $HOME/sosh/sosh.profile ] && source $HOME/sosh/sosh.profile' >> ~/.profile
```
then restart your shell.
If you wish to customize the Sosh configuration:
```
cat > ~/sosh-config.sh <<EOF
# Start with upstream defaults
source ~/sosh/sosh-config-default.sh

# Local config overrides go here
EOF
```
and adjust as desired.
Pick the "beta" tree with `p beta`, or perhaps "stable" with `p stable`, or even "edge" with `p edge`.
The primary keypair is what your staked node uses by default.
```
mkdir -p ~/keys/primary
```
then either copy your existing `validator-identity.json` and `validator-vote-account.json` into that directory, or create new ones with:
```
solana-keygen new -o ~/keys/primary/validator-identity.json --no-bip39-passphrase -s && \
solana-keygen new -o ~/keys/primary/validator-vote-account.json --no-bip39-passphrase -s
```
If you wish to activate the primary keypair, run `sosh-set-config primary`.
Secondary keypairs are host-specific and used by hot-spare machines that can be switched over to primary at runtime. Once your primary keypair is configured, run:
```
mkdir -p ~/keys/secondary && \
solana-keygen new -o ~/keys/secondary/validator-identity.json --no-bip39-passphrase -s
```
If you wish to activate the secondary keypair, run `sosh-set-config secondary`.

Later, run `xferid <secondary-host>` from your primary to transfer voting to the secondary.
Any string other than `primary` and `secondary` may be used to configure other keypairs for dev and testing. For example, to configure a `dev` keypair:
```
mkdir -p ~/keys/dev && \
solana-keygen new -o ~/keys/dev/validator-identity.json --no-bip39-passphrase -s && \
solana-keygen new -o ~/keys/dev/validator-vote-account.json --no-bip39-passphrase -s
```
If you wish to activate the dev keypair, run `sosh-set-config dev`.
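Before activating a key set it can be handy to confirm the expected files are in place. The helper below is purely illustrative (it is not part of sosh); it checks `~/keys/<name>` for the keypair files described above, remembering that secondary key sets are identity-only:

```shell
# Illustrative helper (not part of sosh): verify that ~/keys/<name>
# contains the keypair files described above.
check_keys() {
  local name="$1" dir="$HOME/keys/$1"
  if [ ! -f "$dir/validator-identity.json" ]; then
    echo "missing $dir/validator-identity.json"; return 1
  fi
  # Secondary key sets are identity-only; the others also carry a vote account
  if [ "$name" != secondary ] && [ ! -f "$dir/validator-vote-account.json" ]; then
    echo "missing $dir/validator-vote-account.json"; return 1
  fi
  echo "ok: $name"
}
```

e.g. `check_keys dev && sosh-set-config dev`.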
Assuming `/mnt/ledger-rocksdb` is the desired location for rocksdb, such as a separate nvme:
```
mkdir -p ~/ledger/
ln -sf /mnt/ledger-rocksdb/level ~/ledger/rocksdb
ln -sf /mnt/ledger-rocksdb/fifo ~/ledger/rocksdb_fifo
```
If present, the `/mnt/accounts1`, `/mnt/accounts2`, and `/mnt/accounts3` locations will each be added as an `--accounts` arg to validator startup. If none are present, accounts will be placed in the default location of `~/ledger/accounts`.

Example of accounts evenly distributed across two drives:
```
sudo ln -s /mnt/nvme1 /mnt/accounts1
sudo ln -s /mnt/nvme2 /mnt/accounts2
```
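The detection described above amounts to something like the following sketch (an assumption for illustration, not sosh's actual code): one `--accounts` argument is emitted per mount point that exists.

```shell
# Illustrative sketch (not sosh's actual code): build an --accounts
# argument for each of the given directories that exists.
accounts_args() {
  local d out=""
  for d in "$@"; do
    [ -d "$d" ] && out="$out --accounts $d"
  done
  echo "${out# }"
}
```

e.g. `accounts_args /mnt/accounts1 /mnt/accounts2 /mnt/accounts3` prints the extra startup args, or nothing when no such mounts exist.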
If `/mnt/snapshots` is not present, snapshots will be placed in the default location of `~/ledger`.

Example of putting all snapshots on a separate drive:
```
sudo ln -s /mnt/nvme3 /mnt/snapshots
```
Example of putting incremental snapshots on a separate drive, with full snapshots in the default location:
```
sudo ln -s /mnt/nvme4 /mnt/incremental-snapshots
```
Run:
```
validator.sh init
```
then monitor logs with `t`. This will prepare `~/ledger` and download the correct genesis config for the cluster.
Use `fetch_snapshot.sh` to get a snapshot from a specific node:
- Mainnet NA-based nodes: try `fetch_snapshot.sh bv1` or `fetch_snapshot.sh bv2`
- Mainnet EU-based nodes: try `fetch_snapshot.sh bv3`
- Mainnet Asia-based nodes: try `fetch_snapshot.sh bv4`
- Testnet: try `fetch_snapshot.sh tv`
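The list above can be summarized in a tiny helper. This is illustrative only: the region keys are made-up names, while the aliases (`bv1`..`bv4`, `tv`) come from the list itself.

```shell
# Map a (hypothetical) region name to the fetch_snapshot.sh alias above.
snapshot_source() {
  case "$1" in
    mainnet-na)   echo bv1 ;;  # bv2 is the NA alternative
    mainnet-eu)   echo bv3 ;;
    mainnet-asia) echo bv4 ;;
    testnet)      echo tv ;;
    *) return 1 ;;
  esac
}
```

e.g. `fetch_snapshot.sh "$(snapshot_source mainnet-eu)"` is equivalent to `fetch_snapshot.sh bv3`.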
```
sudo bash -c "cat >/etc/systemd/system/sol.service <<EOF
[Unit]
Description=Solana Validator
After=network.target
StartLimitIntervalSec=0
Wants=sol-hc.service

[Service]
Type=simple
Restart=always
RestartSec=1
User=$USER
LimitNOFILE=2000000
LogRateLimitIntervalSec=0
ExecStart=$HOME/sosh/bin/validator.sh

[Install]
WantedBy=multi-user.target
EOF" && \
sudo bash -c "cat >/etc/systemd/system/sol-hc.service <<EOF
[Unit]
Description=Solana Validator Healthcheck
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=$USER
ExecStart=$HOME/sosh/bin/hc-service.sh

[Install]
WantedBy=multi-user.target
EOF" && \
sudo systemctl daemon-reload
```
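The snippet above only reloads systemd. If you also want both services to start automatically after a reboot, enable them (standard systemd, not a sosh command):

```shell
# Register both units to start at boot (multi-user.target, per [Install])
sudo systemctl enable sol.service sol-hc.service
```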
then run:
```
soshr   # Sosh alias for `sudo systemctl restart sol sol-hc`
```
and monitor with:
```
jf      # Sosh alias for `journalctl -f`
```
and `t`.

Related commands:
```
soshs
svem    # Alias for `solana-validator exit --monitor`
```
To fetch the latest commits for your current source tree, run `p`, then use `svem` to restart the sol service.

Syncing to a specific tag rather than the HEAD of your current source tree is currently a little clumsy with `p`:
```
p beta v1.13.5
```
So you updated your validator binaries and something bad happened. Quickly revert to the previous binaries with:
```
lkg
```
then use `svem` to restart the sol service.