Feature Description

Hello all,
in our clusters we have team namespaces configured with a LimitRange specifying a maxLimitRequestRatio of 5, cf. OpenShift Container Limits.
However, it seems the k6-operator hardcodes a memory limit-to-request ratio of 100, cf. pkg/resources/containers/curl_start.go (209715200 for the limit and 2097152 for the request), and as a result the starter pod cannot start in our team namespaces.
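For illustration, here is a minimal sketch of those hardcoded starter memory settings expressed as Kubernetes resource requirements in Go. The two values are taken from the issue text; the surrounding declaration is only an assumption of how such defaults are typically written, not the operator's verbatim code:

```go
package containers

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Sketch of the hardcoded starter memory settings described above:
// 209715200 / 2097152 = 100, so any LimitRange whose memory
// maxLimitRequestRatio is below 100 (such as 5) rejects the starter pod.
var starterResources = corev1.ResourceRequirements{
	Limits: corev1.ResourceList{
		corev1.ResourceMemory: resource.MustParse("209715200"),
	},
	Requests: corev1.ResourceList{
		corev1.ResourceMemory: resource.MustParse("2097152"),
	},
}
```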
Please make these resources actually configurable. The TestRun CRD already allows setting the resources, but they are not used.
Thanks for considering,
Flo
Suggested Solution (optional)
Most probably, a structure similar to the one used for handling the Runner resources can be used, as sketched below.
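Continuing the sketch above, one way this could look (the helper name and fallback variable are illustrative assumptions, not the operator's actual identifiers): the starter container builder accepts whatever resources the TestRun spec provides and falls back to the current defaults only when nothing is set, mirroring how the Runner resources are already passed through.

```go
// NewStartResources is a sketch of a configurable variant: the caller passes
// whatever the TestRun CRD exposes for the starter pod's resources, and the
// hardcoded defaults (starterResources above) are used only when the user
// specified nothing.
func NewStartResources(fromSpec corev1.ResourceRequirements) corev1.ResourceRequirements {
	if len(fromSpec.Limits) > 0 || len(fromSpec.Requests) > 0 {
		return fromSpec
	}
	return starterResources
}
```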
Already existing or connected issues / PRs (optional)
No response
Hi @NT-florianernst, thanks for bringing this up! You're right: this should be a relatively small fix in logic, given that one can already specify resources for jobs 👍