Hello,

we are running some RTRTR instances on Kubernetes clusters using a custom image:
Alpine 3.20.3 base image and build env
RTRTR 0.3.0, built with:
cargo 1.78.0
rustc 1.78.0
RTRTR usually runs against our own Routinator instance, but to demonstrate the issue, the following config can be used as well:
rtrtr.conf:
```toml
log_level = "debug"
log_target = "stderr"
http-listen = ["0.0.0.0:8323"]

[units.json]
type = "json"
uri = "https://console.rpki-client.org/vrps.json"
refresh = 60

[units.slurm]
type = "slurm"
source = "json"
files = [ "/home/rtrtr/slurm.json" ]

[targets.rtr]
type = "rtr"
listen = [ "0.0.0.0:3323" ]
unit = "slurm"
client-metrics = true

[targets.http]
type = "http"
path = "/json"
format = "json"
unit = "slurm"
```
We realized that RTRTR does not start up correctly, does not process the SLURM file at all, and does not give any error message if the SLURM file contains an invalid network address in the prefix value of a prefixAssertions entry.

slurm.json with a wrong entry (10.10.10.164/27 is invalid, since the address has host bits set beyond the /27 prefix length; it should be 10.10.10.160/27):
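A minimal slurm.json of this shape, following RFC 8416, could look like the sketch below. Only the invalid prefix is taken from the report; the asn, maxPrefixLength, and comment values are illustrative placeholders, not the original file's values:

```json
{
  "slurmVersion": 1,
  "validationOutputFilters": {
    "prefixFilters": [],
    "bgpsecFilters": []
  },
  "locallyAddedAssertions": {
    "prefixAssertions": [
      {
        "asn": 64512,
        "prefix": "10.10.10.164/27",
        "maxPrefixLength": 27,
        "comment": "host bits set beyond /27; covering prefix would be 10.10.10.160/27"
      }
    ],
    "bgpsecAssertions": []
  }
}
```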
The rtrtr log does not show anything about the issue: no SLURM file processing and no target information. The local target is not working either, which is the expected result given the missing log entries above.

Of course everything works fine if we correct the network address of the prefix to a valid one (10.10.10.160/27): with only valid entries in slurm.json, the rtrtr log looks as expected and the local target returns the correct count of VRPs (a correspondingly corrected assertion is sketched below).

It would be great to have an error message in the log that something went wrong while processing the SLURM file. Of course it would also be helpful to show the wrong/invalid entries in the log. This would simplify troubleshooting considerably, especially if the SLURM file contains several hundred locallyAddedAssertions ;-)
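For reference, a corrected prefixAssertions entry for the sketch above (with the same illustrative asn, maxPrefixLength, and comment placeholders) would be:

```json
{
  "asn": 64512,
  "prefix": "10.10.10.160/27",
  "maxPrefixLength": 27,
  "comment": "valid: no host bits set beyond /27"
}
```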
Apologies for the very late response. I had notifications for new PRs turned off during my vacation and forgot to check afterwards.
Currently, the slurm unit delays any processing until the first successful load of the SLURM set and, for some reason, just ignores any error. I was going to release 0.3.1 today, but I am instead going to add error logging and release another RC so this will make it into 0.3.1.