Bug: ipv6 wireguard rule does not get cleaned up #2471
Comments
@qdm12 is more or less the only maintainer of this project and works on it in his free time.
Tried manually deleting that ip rule and then the gluetun container is able to restore the connection, but I have no idea why it isn't cleaning it up automatically.
Using this as a workaround for now.
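For anyone else who needs the manual cleanup described above, here is a rough sketch of what it can look like from a shell inside the Gluetun container. The priority value is an illustrative assumption only; substitute whatever the list commands actually show for the stale rule.

# List the policy routing rules (IPv4 and IPv6) to find the leftover one:
ip rule list
ip -6 rule list
# Delete the stale rule by its priority. The value 101 is an assumption for
# illustration only; use the priority printed by the list commands above.
ip -6 rule del priority 101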
Seeing the same issue, but for me the ip6 rules aren't being cleaned up - using
I have disabled ipv6 for my pod, that's why I only have ipv4 rules.
@Darkfella91 don't you get a debug log line containing
Here, I will upload the whole log so you can check what happens when the connection drops and it tries to recover.
Hmm weird, so from your logs it looks like
meaning the ip rule already exists before Gluetun starts? In my (non-Kubernetes) container it does:
In your case, it doesn't do the ip rule removal, since it did not manage to add it (since it already exists, which is weird). Maybe this is due to a weird stop/kill of the container??? I could eventually parse the error message and detect
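In Kubernetes this can happen without any unusual stop/kill: restarting only the Gluetun container does not recreate the pod's network namespace (it belongs to the pod sandbox), so rules added by the previous Gluetun instance are still present when it starts again. A quick way to check this (a sketch only; the pod and container names are placeholders) is to list the rules right after the container restarts:

# Sketch only: replace <pod> and <gluetun-container> with your own names.
# An IPv6 rule already present immediately after Gluetun restarts would
# explain why adding it fails and why the removal is then skipped.
kubectl exec <pod> -c <gluetun-container> -- ip -6 rule list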
I use truecharts, which has gluetun in their common chart, and you enable it as an addon in other charts. I'm still learning my way around Kubernetes so I can't really answer the question. Are you suggesting it might be an issue with how gluetun is implemented in the helm chart? This issue started happening after updating to the latest version, I think; it hadn't happened before and I've been using gluetun for 4-5 months now. Do you have any suggestions on what to test to help troubleshoot where the problem is?
Sorry, not super familiar with truecharts, but do you have a way to specify the Docker image tag? So for example if you would use image tag
Yeah, I can test the old version tomorrow, I just have to change back to a custom wireguard provider again because I started using protonvpn, which was implemented in 3.39. I will let you know the result.
I'd just like to +1 -- I'm seeing a similar/same issue using protonvpn on k8s. It seems to drop off after an indeterminate amount of time -- sometimes it lasts a few hours, sometimes it lasts a couple of days. The exec workaround did not work for me -- here's my current config (I currently added a healthcheck to the gluetun container, will see if it works):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qbittorrent
  labels:
    app: qbittorrent
  annotations:
    network.beta.kubernetes.io/ipv6: "false"
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  replicas: 1
  selector:
    matchLabels:
      app: qbittorrent
  template:
    metadata:
      labels:
        app: qbittorrent
    spec:
      containers:
        - name: qbittorrent
          image: linuxserver/qbittorrent:4.6.6
          env:
            - name: PUID
              value: "1000"
            - name: GID
              value: "1000"
            - name: DOCKER_MODS
              value: ghcr.io/vuetorrent/vuetorrent-lsio-mod:latest
          ports:
            - containerPort: 8080
            - containerPort: 6881
              protocol: TCP
            - containerPort: 6881
              protocol: UDP
          volumeMounts:
            - name: qbittorent-config
              mountPath: /config
            - name: media
              mountPath: /media
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
            limits:
              cpu: 1
              memory: 1Gi
          livenessProbe:
            httpGet:
              path: /#/
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 15
            failureThreshold: 2
        - name: vpn
          image: qmcgaw/gluetun:v3.39.0
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          env:
            - name: VPN_SERVICE_PROVIDER
              value: "protonvpn"
            - name: VPN_TYPE
              value: "wireguard"
            - name: WIREGUARD_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: protonvpn-credentials
                  key: wiregaurd-private-key
            - name: SERVER_COUNTRIES
              value: "Netherlands,Switzerland"
            - name: PORT_FORWARD_ONLY
              value: "on"
            - name: VPN_PORT_FORWARDING
              value: "on"
          resources:
            requests:
              cpu: 250m
              memory: 1Gi
            limits:
              cpu: 500m
              memory: 1Gi
          livenessProbe:
            exec:
              command:
                - /gluetun-entrypoint
                - healthcheck
            initialDelaySeconds: 30
            periodSeconds: 30
            failureThreshold: 3
      volumes:
        - name: qbittorent-config
          nfs:
        - name: media
          nfs:

Logs:
Otherwise I can fix it by just killing the pod and it comes back working (for a while).
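For reference, the pod-kill workaround mentioned above amounts to something like the following; the names assume the Deployment shown in this comment and the default namespace.

# Recreate the pod so all of its containers start in a fresh network namespace:
kubectl rollout restart deployment/qbittorrent
# or delete the pod directly and let the Deployment recreate it:
kubectl delete pod -l app=qbittorrent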
@kvangent this seems very similar to #2521; feel free to comment back there with your results, answering questions 2 and 3 of #2521 (comment)
See #2521 (comment) which provides workarounds for Kubernetes. |
Closed issues are NOT monitored, so commenting here is likely not to be seen. This is an automated comment set up because @qdm12 is the sole maintainer of this project.
Fixed by #2526 (more details in this comment) |
Is this urgent?
No
Host OS
Talos OS
CPU arch
x86_64
VPN service provider
ProtonVPN
What are you using to run the container
Kubernetes
What is the version of Gluetun
Running version v3.39.0 built on 2024-08-09T08:07:23.827Z (commit 09c47c7)
What's the problem 🤔
Basically, each time my internet connection drops for any reason, or if my DNS server isn't available, the health check restarts the VPN connection, but it then fails to connect and goes into a loop. Only manually killing the pod restores my VPN connection.
Share your logs (at least 10 lines)
Share your configuration