Sync the upstream changes from ceph/ceph-csi:devel into the devel branch #138

Merged
7 commits merged on Oct 14, 2022

Conversation

nixpanic
Member

Sync the upstream changes from ceph/ceph-csi:devel into the devel branch.
The most important recent changes that we want included are listed in the commits below.

Madhu-1 and others added 7 commits October 12, 2022 14:49
We don't need to delete the nfs daemonset (which was present in the
3.6.x release) in the 3.8.x release, as users will upgrade from 3.6.x
to 3.7.x and delete the nfs daemonset at that point.

fixes #3324

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When running on AWS EC2 virtual machines, we'll use Podman instead of
running a nested VM. The "none" driver might work as well, but it requires
additional dependencies to be installed, which may change over time with
new minikube or Kubernetes releases. Hopefully the Podman driver is less
affected by changes in dependencies.

Depends-on: #3419
Closes: #3415
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The scale down/up functions often fail with "deployment not found"
errors. Possibly deploying with Podman is slower than deploying in a
minikube VM, and there is a delay before the deployment becomes
available.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
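
A minimal Go sketch of such a wait, assuming an illustrative helper name (`waitForDeployment`) and poll settings rather than the actual ceph-csi e2e code: the retry tolerates "not found" until the Deployment appears and its replicas become ready.

```go
package e2e

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeployment polls until the Deployment exists and reports the
// desired number of ready replicas. "not found" is treated as a transient
// condition, because deploying with the Podman driver can be slow.
func waitForDeployment(c kubernetes.Interface, ns, name string, replicas int32, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// Deployment not created yet, keep polling.
			return false, nil
		}
		if err != nil {
			return false, fmt.Errorf("failed to get deployment %s/%s: %w", ns, name, err)
		}
		return d.Status.ReadyReplicas == replicas, nil
	})
}
```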
In case `wget` is not installed, downloading the Helm release will fail.
The `install-helm.sh` script won't return a fatal error in that case,
and CI jobs continue running in an environment that is not ready.

By adding a check that exits the script with a failure, the CI will now
correctly report a problem when Helm cannot be downloaded.

See-also: #3430
Signed-off-by: Niels de Vos <ndevos@redhat.com>
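
The actual fix is a small check in the shell script; as an illustration of the same fail-fast idea (abort immediately when a required tool is missing instead of letting CI continue in a broken environment), a Go sketch might look like this, with the tool name taken from the commit message:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Fail fast: without wget the Helm download cannot succeed, so exit
	// with a non-zero status instead of continuing with a broken setup.
	if _, err := exec.LookPath("wget"); err != nil {
		fmt.Fprintln(os.Stderr, "FATAL: wget is required to download Helm, but it is not installed")
		os.Exit(1)
	}
	fmt.Println("wget found, continuing with the Helm installation")
}
```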
There are occasions where deleting a PVC (or PV) never succeeds. The
reported status of the deleted object is sometimes empty, which suggests
that the PVC or PV was, in fact, deleted.

To help diagnose the incorrect error checking, include the errors that
cause a retry in the logs.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
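
A rough Go sketch of the logging idea, assuming a hypothetical `waitForPVCDeleted` helper rather than the actual ceph-csi e2e function: every error that triggers a retry is printed, so wrong error handling becomes visible in the CI logs.

```go
package e2e

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCDeleted polls until the PVC can no longer be found, logging
// the error behind every retry instead of silently discarding it.
func waitForPVCDeleted(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// The PVC is gone, deletion has completed.
			return true, nil
		}
		if err != nil {
			fmt.Printf("retrying, failed to get PVC %s/%s: %v\n", ns, name, err)
		}
		return false, nil
	})
}
```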
Because the rbd-nbd tests fail with minikube and the Podman driver,
disable the tests for the time being.

Updates: #3431
Signed-off-by: Niels de Vos <ndevos@redhat.com>
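
One common way to disable such tests in a Ginkgo-based e2e suite is to guard them with a flag and call Skip(); the sketch below is only illustrative (the `testNBD` variable and the suite layout are assumptions, not the real ceph-csi switch).

```go
package e2e

import (
	. "github.com/onsi/ginkgo/v2"
)

// testNBD is an assumed switch for illustration; ceph-csi's e2e suite has
// its own flag for enabling the rbd-nbd tests.
var testNBD = false

var _ = Describe("RBD rbd-nbd mounter", func() {
	BeforeEach(func() {
		if !testNBD {
			Skip("rbd-nbd tests are disabled while minikube runs with the Podman driver")
		}
	})

	// ... the rbd-nbd test cases would be registered here ...
})
```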
CephFS does not have a concept of "free inodes"; inodes get allocated
on-demand in the filesystem.

This confuses alerting managers that expect a (high) number of free
inodes, and warnings get produced if the number of free inodes is not
high enough. This causes alerts to always get reported for CephFS.

To prevent these false-positive alerts, the NodeGetVolumeStats
procedure for CephFS (and CephNFS) no longer includes inode counts in
the reply.

See-also: https://bugzilla.redhat.com/2128263
Signed-off-by: Niels de Vos <ndevos@redhat.com>
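
A rough sketch (not the actual ceph-csi code) of a NodeGetVolumeStats reply that reports only byte usage: omitting the csi.VolumeUsage_INODES entry means monitoring systems never see a misleading "free inodes" value for CephFS.

```go
package cephfs

import (
	"golang.org/x/sys/unix"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeGetVolumeStats fills the CSI reply with byte usage only; no INODES
// entry is added, because CephFS allocates inodes on demand and a "free
// inodes" number would be meaningless.
func nodeGetVolumeStats(targetPath string) (*csi.NodeGetVolumeStatsResponse, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(targetPath, &st); err != nil {
		return nil, err
	}

	total := int64(st.Blocks) * int64(st.Bsize)
	available := int64(st.Bavail) * int64(st.Bsize)

	return &csi.NodeGetVolumeStatsResponse{
		Usage: []*csi.VolumeUsage{
			{
				Unit:      csi.VolumeUsage_BYTES,
				Total:     total,
				Available: available,
				Used:      total - available,
			},
		},
	}, nil
}
```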
@openshift-ci

openshift-ci bot commented Oct 14, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: nixpanic

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved (It's a good idea) label Oct 14, 2022
@Madhu-1
Member

Madhu-1 commented Oct 14, 2022

/lgtm

@openshift-ci openshift-ci bot added the lgtm (Code looks good) label Oct 14, 2022
@openshift-merge-robot openshift-merge-robot merged commit 1cce89f into red-hat-storage:devel Oct 14, 2022
Labels
approved (It's a good idea), lgtm (Code looks good)
3 participants