feat(helm): Support for flower and websocket containers #21806
Conversation
Thanks for this PR -- looking great. Any reason why you went for a flower sidecar instead of a standalone deployment? It's my understanding that flower will show ALL tasks/workers and not just the ones that are being executed by the local instance.
Yes, it will show tasks for all the workers, regardless of where it runs, because it connects to the broker.
Ok, understood. Can we please make it another deployment? 🙏 This pod can scale quite a bit depending on usage (especially when running reports or generating thumbnails, for example).
Yes, sure, I can do this if you think it's useful.
OK, this is done, flower is now in its own separate Deployment. One more change I propose is to switch the […]
Another suggestion (not implemented here) is to add a helm-docs run to the pre-commit hooks, so the generated README stays up to date.
Sounds interesting; however, pre-commit probably isn't the best spot, as folks can pretty easily skip those hooks. Better would be a lint step in CI that checks the README against a generated version.
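(For reference, a minimal sketch of what such a CI step could look like, assuming a GitHub Actions workflow, a chart under `helm/superset`, and `helm-docs` available on the runner — all three are assumptions:)

```yaml
# Hypothetical lint step: regenerate the README with helm-docs and
# fail the build if it differs from the committed version.
- name: Check that helm-docs output is committed
  run: |
    helm-docs --chart-search-root helm/superset
    git diff --exit-code -- helm/superset/README.md
```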
OK, I've been convinced and switched back to using the shared defaults. There's a very low risk of regression, but it will presumably be very quick to identify and fix for anyone who has overridden the previous defaults.
I would still suggest adding the helm-docs check, though.
Looks like there are some issues resolving bitnami?
Yes, I've seen those CI failures, and assumed this was some kind of known problem, since it's unlikely to be related to any of the changes in this PR. I don't know very well how the CI was originally set up, but I think the fix would be to pass the list of dependent repos to install prior to running the linter.
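(For example, if CI uses chart-testing's `ct.yaml` config file, the dependent repo could be declared there — a sketch under that assumption; the file location may differ:)

```yaml
# Hypothetical ct.yaml excerpt: repos that `ct lint` should add
# before resolving chart dependencies.
chart-repos:
  - bitnami=https://charts.bitnami.com/bitnami
```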
I'm sending some commits to try a fix, but it might take a while, since this might need some trial and error...
OK, so I've unblocked the repository dependency issue. The chart linter now runs successfully, but it was raising a tricky error, complaining about whitespace inside braces. I've made this linting rule a bit more lax by allowing up to one space, but you may have other opinions on how to fix it.
OK, we now have passing linting and chart-testing! I've just pushed one final polish to tweak some docs, and I think we're finally all good!
Looks like we have passing checks across the board now! FYI, I'm preparing a separate change to add […]
LGTM, Thanks! |
SUMMARY
This PR brings a number of updates to the Helm chart:

- A `flower` deployment (originally proposed as a sidecar container in the celery beat pod), to visualize the tasks. It is NOT exposed through the ingress, since most users probably don't want to expose this publicly - either a new ingress needs to be deployed, or a port-forward to the flower service is needed.
- A `superset-websocket` server in a separate pod, together with associated services and an ingress path within the same host(s).
- In `values.yaml`, added `startupProbes`, as well as a `livenessProbe` for the celery workers (that runs a `celery inspect ping` command; see the first sketch at the end of this section).
- A `README` generated with helm-docs, to document the contents of `values.yaml` (it should also be visible from artifacthub).
- Switched `indent` to `nindent`, so that we can apply indentation and improve readability (see the second sketch at the end of this section).

The more controversial change is that I've also removed the schema definition... I'm sorry, but I really found it too hard to work with; it was making iterative development of the chart quite painfully slow, while not necessarily helping catch many issues...
I guess this is debatable, and if that's a problem I can work on restoring it, but I personally suggest dropping it, since I think it creates unnecessary work - unless there's a tool to generate it that I'm not aware of?
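To make the two bullets above concrete: first, a minimal sketch of what such a worker liveness probe can look like. The exact command, module path, and timings here are assumptions, not necessarily what the chart ships:

```yaml
# Hedged sketch of the celery worker livenessProbe; the command,
# module path and timings are illustrative assumptions.
livenessProbe:
  exec:
    command:
      - sh
      - -c
      # ping only this worker (celery@$HOSTNAME), not the whole cluster
      - celery -A superset.tasks.celery_app:app inspect ping -d celery@$HOSTNAME
  initialDelaySeconds: 120
  periodSeconds: 60
  timeoutSeconds: 60
```

And second, the `indent` vs `nindent` difference (the `superset.labels` helper name is hypothetical):

```yaml
# indent: the template action must sit at column 0 to render correctly
labels:
{{ include "superset.labels" . | indent 4 }}

# nindent: emits its own leading newline, so the action can stay inline
labels: {{- include "superset.labels" . | nindent 4 }}
```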
TESTING INSTRUCTIONS
- Set `supersetCeleryBeatFlower.enabled=true` to see the flower pod, and access the UI by port-forwarding its port 5555. WARNING: this requires a custom Superset image that has `flower<1.0.0` installed (recent versions are causing a dependency conflict).
- Set `supersetWebsockets.enabled=true` to see the superset-websocket pod (by default a custom image I've built from our fork, since there's no official one). It assumes that both pods are exposed through an ingress on the same domain (the built-in ingress will do this automatically) and requires various things:
  - `JWT_SECRET` set as a secret env variable - it will be passed to both the Superset and websocket servers
  - In `superset_config.py` (assembled into a sketch after this list):
    - `GLOBAL_ASYNC_QUERIES_TRANSPORT = "ws"`
    - `GLOBAL_ASYNC_QUERIES_WEBSOCKET_URL` (I use `f"{re.sub('^http','ws',env('BASE_URL'))}/ws"`)
    - `GLOBAL_ASYNC_QUERIES_JWT_SECRET = env("JWT_SECRET")`
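Assembled in one place, the `superset_config.py` additions listed above could look like the following sketch; the small `env()` helper is an assumption made here (the author's config presumably defines its own):

```python
import os
import re

# Minimal stand-in for the env() helper referenced above (an assumption).
def env(key: str, default: str = "") -> str:
    return os.environ.get(key, default)

GLOBAL_ASYNC_QUERIES_TRANSPORT = "ws"
# e.g. BASE_URL=https://superset.example.com -> wss://superset.example.com/ws
GLOBAL_ASYNC_QUERIES_WEBSOCKET_URL = f"{re.sub('^http', 'ws', env('BASE_URL'))}/ws"
GLOBAL_ASYNC_QUERIES_JWT_SECRET = env("JWT_SECRET")
```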
This was tested successfully on Superset 1.5.2
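Expressed as a `values.yaml` override rather than `--set` flags, the two toggles above take this shape (the nesting is implied by the flag names; the comments are assumptions):

```yaml
# Hypothetical override file for testing this PR's two new features.
supersetCeleryBeatFlower:
  enabled: true   # flower UI listens on port 5555; reach it via a port-forward

supersetWebsockets:
  enabled: true   # deploys the superset-websocket pod, service and ingress path
```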
ADDITIONAL INFORMATION