
add a switch to disable PV in load test #977

Conversation

@chenqianfzh (Collaborator) commented Feb 8, 2021

Scale-out and scale-up load tests both failed due to PV/PVC processing for Deployments and StatefulSets: a Deployment/StatefulSet that uses volumes never gets its pods into the "Running" state. While the underlying issue is being worked on (tracked in #978), this PR provides a switch to bypass it temporarily.

When the env var CL2_ENABLE_PVS is set to false, the test Deployments and StatefulSets are created without attached PVs.
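The thread does not show the contents of the override file itself; a minimal sketch of what `testing/experiments/disable_pvs.yaml` likely contains, assuming ClusterLoader2's convention that override files are flat key/value maps of template variables:

```yaml
# Hypothetical contents of testing/experiments/disable_pvs.yaml
# (the actual file is not quoted in this thread). ClusterLoader2
# reads override files as flat key/value template variables;
# setting CL2_ENABLE_PVS to false makes the load test render its
# Deployments/StatefulSets without PV/PVC references.
CL2_ENABLE_PVS: false
```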

Verification:

This PR was tested on top of #976, which fixed another error in load-test, and I got the following result:

  1. Scale-up 100-node load test succeeded with the command-line test configs "--testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/disable_pvs.yaml".
  2. Scale-out 100-node load test succeeded with the same command-line test configs.
  3. Scale-up 100-node density test succeeded
  4. Scale-out 100-node density test succeeded

@centaurus-cloud-bot (Collaborator)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign xiaoningding
You can assign the PR to them by writing /assign @xiaoningding in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@sonyafenge (Collaborator) left a comment

Verified this PR on top of #976; both the density and load tests with 100 nodes succeeded.
This PR can serve as a temporary workaround for #491.

/LGTM

@centaurus-cloud-bot (Collaborator)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: sonyafenge
To complete the pull request process, please assign xiaoningding
You can assign the PR to them by writing /assign @xiaoningding in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@centaurus-cloud-bot (Collaborator)

New changes are detected. LGTM label has been removed.


@zmn223 (Collaborator) commented Feb 9, 2021

/lgtm

@zmn223 (Collaborator) commented Feb 9, 2021

/approve

@centaurus-cloud-bot (Collaborator)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sonyafenge, zmn223

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


@centaurus-cloud-bot centaurus-cloud-bot merged commit 87ecb8a into CentaurusInfra:master Feb 9, 2021
@chenqianfzh chenqianfzh deleted the disable-pv-in-load-test branch February 9, 2021 02:54
sonyafenge pushed a commit to sonyafenge/arktos that referenced this pull request Feb 10, 2021
Co-authored-by: Ubuntu <ubuntu@ip-172-31-26-146.us-east-2.compute.internal>
centaurus-cloud-bot pushed a commit that referenced this pull request Feb 10, 2021
* add import-alias for k8s.io/api/admissionregistration/v1beta1

* add import-alias for k8s.io/api/admission/v1beta1

* add import-alias for k8s.io/api/apps/v1

* add import-alias for k8s.io/api/apps/v1beta1

* add import-alias for k8s.io/api/apps/v1beta2

* add import-alias for k8s.io/api/auditregistration/v1alpha1

* add import-alias for k8s.io/api/authentication/v1

* add import-alias for k8s.io/api/authentication/v1beta1

* add import-alias for k8s.io/api/authorization/v1

* add import-alias for k8s.io/api/authorization/v1beta1

* add import-alias for k8s.io/api/autoscaling/v1

* add import-alias for k8s.io/api/batch/v1

* add import-alias for k8s.io/api/batch/v1beta1

* add import-alias for k8s.io/api/certificates/v1beta1

* add import-alias for k8s.io/api/coordination/v1

* add import-alias for k8s.io/api/coordination/v1beta1

* add import-alias for k8s.io/api/core/v1

* add import-alias for k8s.io/api/events/v1beta1

* add import-alias for k8s.io/api/extensions/v1beta1

* add import-alias for k8s.io/api/imagepolicy/v1alpha1

* add import-alias for k8s.io/api/networking/v1

* add import-alias for k8s.io/api/networking/v1beta1

* add import-alias for k8s.io/api/node/v1alpha1

* add import-alias for k8s.io/api/node/v1beta1

* add import-alias for k8s.io/api/policy/v1beta1

* add import-alias for k8s.io/api/rbac/v1

* add import-alias for k8s.io/api/rbac/v1alpha1

* add import-alias for k8s.io/api/rbac/v1beta1

* add import-alias for k8s.io/api/scheduling/v1

* add import-alias for k8s.io/api/scheduling/v1alpha1

* add import-alias for k8s.io/api/scheduling/v1beta1

* add import-alias for k8s.io/api/settings/v1alpha1

* add import-alias for k8s.io/api/storage/v1

* add import-alias for k8s.io/api/storage/v1alpha1

* add import-alias for k8s.io/api/storage/v1beta1

* add import-alias for k8s.io/kubernetes/pkg/controller/apis/config/v1alpha1

* add import-alias for k8s.io/kubernetes/pkg/kubelet/apis/config/v1beta1

* add import-alias for k8s.io/kubernetes/pkg/kubelet/apis/deviceplugin/v1alpha

* add import-alias for k8s.io/kubernetes/pkg/kubelet/apis/deviceplugin/v1beta1

* add import-alias for k8s.io/kubernetes/pkg/kubelet/apis/pluginregistration/v1

* add import-alias for k8s.io/kubernetes/pkg/kubelet/apis/pluginregistration/v1alpha1

* add import-alias for k8s.io/kubernetes/pkg/kubelet/apis/pluginregistration/v1beta1

* verify import aliases

- Added scripts for update and verify
- golang AST code for scanning and fixing imports
- default regex allows it to run on just test/e2e.* file paths
- exclude verify-import-aliases.sh from running in CI jobs

Change-Id: I7f9c76f5525fb9a26ea2be60ea69356362957998
Co-Authored-By: Aaron Crickenberger <spiffxp@google.com>

* Add kubeletstatsv1alpha1 as the preferred alias for k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1

Change-Id: I05a8390a667dba307c09d95f836e08e0759c12ee

* add import-alias for k8s.io/kubernetes/pkg/kubelet/apis/podresources/v1alpha1

* add import-alias for k8s.io/kubernetes/pkg/kubelet/apis/resourcemetrics/v1alpha1

* add import-alias for k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1

* add import-alias for k8s.io/kubernetes/pkg/scheduler/apis/config/v1alpha1

* auto generated - make update

* update copyrights and owners

* set the namespace/tenant per the scope of the resource (#976)

Co-authored-by: Ubuntu <ubuntu@ip-172-31-26-146.us-east-2.compute.internal>

* add a switch to disable PV in load test (#977)

Co-authored-by: Ubuntu <ubuntu@ip-172-31-26-146.us-east-2.compute.internal>

* Replace dynamic client with metadata client in tenantcontroller (#970)

* Rename metadata.NewConfigOrDie to be consistent
kubernetes/kubernetes@98d87a4
committed on Jul 11, 2019

Updated name to match dynamic client

# Conflicts:
#	staging/src/k8s.io/client-go/metadata/metadata_test.go

* Use metadata informers instead of dynamic informers in controller manager
kubernetes/kubernetes@d631f9b
committed on Jul 11, 2019

All controllers in controller-manager that deal with objects generically
work with those objects without needing the full object. Update the GC
and quota controller to use PartialObjectMetadata input objects which
is faster and more efficient.

* Fix metadata client UT test bug (ported from community)

* Add multi-tenancy UTs for metadata client, fix metadataclient informer with multi-tenancy

* Switch tenant controller from dynamic client to metadata client.
Fix arktos network system CRD skip logic during tenant deletion due to migrating dynamic client to metadata client.

* make update, copyright changes

* Fix fatal message - CR comment

* update arktos version to v0.7.0 per CI changing version to 0.7.0

* update copyrights for verify.sh

Co-authored-by: Aaron Crickenberger <spiffxp@google.com>
Co-authored-by: Davanum Srinivas <davanum@gmail.com>
Co-authored-by: chenqianfzh <51831990+chenqianfzh@users.noreply.github.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-26-146.us-east-2.compute.internal>
Co-authored-by: Ying Huang <sindica2000@yahoo.com>