I'd like to be able to add a Spark configuration that specifies a node selector for all pods in a Spark job. This would let me identify all pods belonging to a job, e.g. for per-job resource tracking.
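For concreteness, here's a minimal sketch of how that could work, assuming a hypothetical `spark.kubernetes.node.selector.*` config prefix (the exact key would be settled in the design) and the fabric8 client used by the Kubernetes scheduler backend to build pods:

```scala
import scala.collection.JavaConverters._

import io.fabric8.kubernetes.api.model.{Pod, PodBuilder}
import org.apache.spark.SparkConf

object NodeSelectorSketch {
  // Hypothetical config prefix -- not an existing key, just for illustration.
  val NodeSelectorPrefix = "spark.kubernetes.node.selector."

  // Collect node-selector labels from the SparkConf, so that e.g.
  // spark.kubernetes.node.selector.diskType=ssd becomes Map("diskType" -> "ssd").
  def nodeSelectorLabels(conf: SparkConf): Map[String, String] =
    conf.getAllWithPrefix(NodeSelectorPrefix).toMap

  // Apply the same node selector to a pod, so every driver and executor
  // pod in the job is scheduled onto matching nodes.
  def withNodeSelector(pod: Pod, labels: Map[String, String]): Pod =
    new PodBuilder(pod)
      .editOrNewSpec()
        .withNodeSelector(labels.asJava)
      .endSpec()
      .build()
}
```

Submitting with `--conf spark.kubernetes.node.selector.diskType=ssd` would then constrain both the driver and executor pods to nodes labeled `diskType=ssd`.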
This continues to build on the need for custom pod specifications, as mentioned in #38. The feature set and room for customization are likely greater in Kubernetes than in any of the other cluster managers supported so far, so the original design, which aligns with those cluster managers, may be too limiting here.