Bug description
The culler is not culling servers even though their last activity is clearly well past the configured timeout.
Current setup from our k8s logs:
Starting service 'cull-idle': ['python3', '-m', 'jupyterhub_idle_culler', '--url=https://<url>:5081/hub/api', '--timeout=14400', '--cull-every=600', '--concurrency=10']
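For reference, a minimal sketch of what the equivalent hub-managed service definition could look like in jupyterhub_config.py, assuming the flags above; the role name and scopes follow the jupyterhub-idle-culler README and may not match our actual deployment exactly:

```python
# Minimal sketch of a hub-managed idle-culler service matching the flags above.
# Assumes jupyterhub_config.py; the role/scopes follow the idle-culler README
# and may differ from our real deployment.
import sys

c.JupyterHub.services = [
    {
        "name": "cull-idle",
        "command": [
            sys.executable, "-m", "jupyterhub_idle_culler",
            "--url=https://<url>:5081/hub/api",
            "--timeout=14400",    # cull servers idle for more than 4 hours
            "--cull-every=600",   # run the check every 10 minutes
            "--concurrency=10",   # limit concurrent API requests
        ],
    }
]

c.JupyterHub.load_roles = [
    {
        "name": "jupyterhub-idle-culler-role",
        "scopes": [
            "list:users",
            "read:users:activity",
            "read:servers",
            "delete:servers",
        ],
        "services": ["cull-idle"],
    }
]
```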
We observe two things:
In the second case, the last_activity was over 3 days ago, which is far more than the timeout of 14400 seconds (4 hours).
Example obtained from /api/users/<user>.
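For context, roughly how that example is obtained; this is a sketch rather than our exact tooling, and the hub URL and token variable are assumptions:

```python
# Sketch of fetching a user model (including last_activity) from the JupyterHub REST API.
# The hub URL and JUPYTERHUB_API_TOKEN are placeholders for our setup.
import os
import requests

hub_api = "https://<url>:5081/hub/api"
token = os.environ["JUPYTERHUB_API_TOKEN"]

resp = requests.get(
    f"{hub_api}/users/<user>",
    headers={"Authorization": f"token {token}"},
    timeout=30,
)
resp.raise_for_status()
user = resp.json()
print("user last_activity:", user["last_activity"])
for name, server in user.get("servers", {}).items():
    print("server", name or "default", "last_activity:", server.get("last_activity"))
```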
How to reproduce
I don't know how to reproduce this. We have JupyterHub deployed in several datacenters, but it only fails in our busiest one, which runs over 150 servers simultaneously (not sure whether that matters).
I'm afraid the service is blocked / not running as expected because there are too many users.
Expected behaviour
The servers are culled once they exceed the idle timeout.
Actual behaviour
The servers are not culled after the timeout.
Your personal set up
Latest versions of JupyterHub and jupyterhub-idle-culler.
Logs
I can see the following logs from the culler:
⚠ Not sure if it's important, but the "Fetching page 2" line doesn't show the port, only the URL:

Jan 23 16:41:12.885  nbt-hub  [I 250123 15:41:11 __init__:156] Fetching page 2 https://<url without port>/hub/api/users?state=ready&offset=50&limit=50
Jan 23 16:31:12.600  nbt-hub  File "/usr/local/lib/python3.10/dist-packages/jupyterhub_idle_culler/__init__.py", line 124, in fetch
Jan 23 16:31:12.600  nbt-hub  File "/usr/local/lib/python3.10/dist-packages/jupyterhub_idle_culler/__init__.py", line 142, in fetch_paginated
Jan 23 16:31:12.599  nbt-hub  File "/usr/local/lib/python3.10/dist-packages/jupyterhub_idle_culler/__init__.py", line 436, in cull_idle
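For what it's worth, something along these lines is roughly how one could re-issue the same paginated request the culler logged above to see what page 2 returns; the hub URL and token variable are assumptions about my setup, not part of the culler:

```python
# Hedged sketch: manually issue the paginated query seen in the log
# ("Fetching page 2 .../hub/api/users?state=ready&offset=50&limit=50").
# The hub URL and API token are placeholders.
import os
import requests

hub_api = "https://<url>:5081/hub/api"
token = os.environ["JUPYTERHUB_API_TOKEN"]  # a token with permission to list users

resp = requests.get(
    f"{hub_api}/users",
    params={"state": "ready", "offset": 50, "limit": 50},
    headers={"Authorization": f"token {token}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.status_code, "user models on this page:", len(resp.json()))
```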
Sometimes I also get:

/usr/lib/python3.10/collections/__init__.py:431: RuntimeWarning: coroutine 'cull_idle.<locals>.handle_user' was never awaited
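To illustrate what that warning generally means (this is a generic Python sketch, not the culler's actual code): Python emits it when a coroutine object is created but never awaited or scheduled, so the corresponding work silently never runs.

```python
# Generic illustration of "coroutine ... was never awaited" -- not the culler's code.
import asyncio


async def handle_user(user):
    # stand-in for per-user work
    await asyncio.sleep(0)
    return user


async def main():
    handle_user("alice")      # BUG: creates a coroutine object but never awaits it;
                              # Python emits the RuntimeWarning and the work never runs
    await handle_user("bob")  # correct: the coroutine actually executes


asyncio.run(main())
```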