Add ResourceLimits for test pods #253
Conversation
Skipping CI for Draft Pull Request.
@eshulman2, @eduolivares, @ashu-011: I would be really grateful ❤️ for your input here. This PR is about setting sane default resource limits [1] for the test pods. The limits are going to be different for each framework, so I need an estimation from each of you, if possible :).
[1] https://pkg.go.dev/k8s.io/api/core/v1#ResourceRequirements
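For reference, a ResourceRequirements value in Go roughly looks like the sketch below. The quantities are placeholders only, not the defaults being proposed (the per-framework defaults are exactly what is being estimated here); this is a minimal, self-contained example using k8s.io/api and k8s.io/apimachinery.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// defaultTestPodResources returns illustrative requests/limits for a test pod.
// The concrete values are placeholders, not the operator's real defaults.
func defaultTestPodResources() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("2"),
			corev1.ResourceMemory: resource.MustParse("2Gi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("4"),
			corev1.ResourceMemory: resource.MustParse("4Gi"),
		},
	}
}

func main() {
	r := defaultTestPodResources()
	fmt.Println("limits:", r.Limits.Cpu(), r.Limits.Memory())
}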
Force-pushed from 8827409 to 5147c15
This PR exposes the resources parameter for all test-operator related CRs (Tempest, Tobiko, AnsibleTest, HorizonTest). This parameter can be used to specify the amount of resources the test pods spawned by the test-operator should consume [1]. This commit intentionally unsets the value for Tempest test pods because the fix for the Tempest memory leak bug has not yet propagated to the upstream openstack-tempest-all image. Once the fix is in place, the default value of cifmw_test_operator_tempest_resources can be changed to {}. [1] openstack-k8s-operators/test-operator#253
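For context, exposing such a parameter on a CR type can look roughly like the sketch below on the operator side. The struct, JSON tag, and comments are illustrative assumptions, not the exact test-operator definitions.

package v1beta1

import corev1 "k8s.io/api/core/v1"

// TempestSpec is a simplified, hypothetical spec; the real test-operator
// CRDs carry many more fields.
type TempestSpec struct {
	// Resources is copied onto the spawned test pod's container.
	// Leaving it unset/empty lets the operator apply its built-in defaults.
	// +optional
	Resources corev1.ResourceRequirements `json:"resources,omitempty"`
}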
Force-pushed from 5147c15 to 8e66c54
This change depends on a change that failed to merge. Change openstack-k8s-operators/ci-framework#2617 is needed.
recheck
/test all
Force-pushed from 9a14c77 to b157f44
/test all
I have tested all 4 CRs and they seem to be working. The default values correspond to the ones set in code, and when resource values are changed in a CR, the change is propagated into the pod itself.
There is only one request I have regarding this patch: adding an example of the new Resources parameter into all 4 CR samples. Like so:
# resources:
# limits:
# cpu: 8000m
# memory: 4Gi
# requests:
# cpu: 4000m
# memory: 2Gi
It lets users know there is an option to change the default resources and gives them an overview of what the default values are. In my opinion at least 😄
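A minimal sketch of the defaulting behaviour described in this review (a value from the CR spec wins, otherwise the in-code default applies). The helper name is hypothetical and the fallback values are simply reused from the commented example above; this is not the operator's actual code.

package controllers

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// podResources returns the resources for the test pod container: the value
// from the CR spec when the user set one, otherwise a built-in default.
func podResources(specResources corev1.ResourceRequirements) corev1.ResourceRequirements {
	if len(specResources.Requests) == 0 && len(specResources.Limits) == 0 {
		return corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("4000m"),
				corev1.ResourceMemory: resource.MustParse("2Gi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("8000m"),
				corev1.ResourceMemory: resource.MustParse("4Gi"),
			},
		}
	}
	return specResources
}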
@kstrenkova that's a good idea. Keeping the docs up to date is definitely helpful! Thanks for the review! I will update the documentation.
The test pods spawned by the test operator prior to this change were executed without any resource limits. This ultimately meant that the pods could consume an unlimited amount of resources until the limit for a worker node was reached. This commit introduces default resource limits for each test pod spawned by the test operator (tempest, tobiko, ansibletest, horizontest pod). The default value can be overridden using the .Spec.Resources parameter, which is introduced as part of this commit as well [1]:
spec:
  resources:
    requests:
      memory: 1Gi
      cpu: 250m
    limits:
      memory: 2Gi
      cpu: 500m
[1] https://pkg.go.dev/k8s.io/api/core/v1#ResourceRequirements
Depends-On: openstack-k8s-operators/ci-framework#2617
Force-pushed from b157f44 to bc47b78
Thank you for adding the examples to CRs and documentation! I think now the patch is complete and working, so lgtm.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: kstrenkova, lpiwowar
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Merged commit be35b07 into openstack-k8s-operators:main
The test pods spawned by the test operator prior to this change were
executed without any resource limits. This ultimately meant that the
pods could consume an unlimited amount of resources until the limit for
a worker node was reached.
This commit introduces default resource limits for each test pod
spawned by the test operator (tempest, tobiko, ansibletest, horizontest
pod). The default value can be overridden using the .Spec.Resources
parameter, which is introduced as part of this commit as well [1].
[1] https://pkg.go.dev/k8s.io/api/core/v1#ResourceRequirements
Depends-On: openstack-k8s-operators/ci-framework#2617
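To illustrate the override path end to end, here is a hedged sketch that copies the .Spec.Resources values from the commit message onto a test pod's container. buildTestPod and the container name are hypothetical helpers for illustration, not the operator's real code; only the quantities are taken from the example above.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildTestPod puts the resolved requirements onto the test pod's container,
// where the kubelet enforces the limits at runtime.
func buildTestPod(res corev1.ResourceRequirements) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tempest-tests"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "tempest-tests-runner",
				Resources: res,
			}},
		},
	}
}

func main() {
	// Values taken from the .Spec.Resources example in the commit message.
	pod := buildTestPod(corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceMemory: resource.MustParse("1Gi"),
			corev1.ResourceCPU:    resource.MustParse("250m"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceMemory: resource.MustParse("2Gi"),
			corev1.ResourceCPU:    resource.MustParse("500m"),
		},
	})
	fmt.Println("memory limit:", pod.Spec.Containers[0].Resources.Limits.Memory())
}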