Minio operator upgrade failed #833

Closed
wangcanfengxs opened this issue Jan 4, 2022 · 4 comments
Labels: area/upgrading, known-issue, release/1.2

Comments


wangcanfengxs commented Jan 4, 2022

MinIO operator upgrade failed when I updated harbor-operator from v1.1.1 to master.

MinIO operator pod logs:

I0104 08:17:55.338890       1 main.go:75] Starting MinIO Operator
I0104 08:17:55.821313       1 main.go:142] caBundle on CRD updated
I0104 08:17:55.822121       1 main-controller.go:252] Setting up event handlers
I0104 08:17:55.822341       1 leaderelection.go:243] attempting to acquire leader lease harbor-operator-ns/minio-operator-lock...
I0104 08:17:55.834653       1 leaderelection.go:253] successfully acquired lease harbor-operator-ns/minio-operator-lock
I0104 08:17:55.834712       1 main-controller.go:464] minio-operator-6fb44849b5-fk6pp: I've become the leader
I0104 08:17:55.834748       1 main-controller.go:379] Waiting for API to start
I0104 08:17:55.865479       1 main-controller.go:361] Starting HTTPS API server
I0104 08:17:55.865652       1 main-controller.go:383] Starting Tenant controller
I0104 08:17:55.865669       1 main-controller.go:386] Waiting for informer caches to sync
I0104 08:17:55.865676       1 main-controller.go:391] Starting workers
I0104 08:17:55.912477       1 status.go:121] Hit conflict issue, getting latest version of tenant
I0104 08:18:00.860388       1 upgrades.go:84] Upgrading v4.2.9
E0104 08:18:00.860526       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 75 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x158c020, 0x254d3c0})
	k8s.io/apimachinery@v0.20.2/pkg/util/runtime/runtime.go:74 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000836760})
	k8s.io/apimachinery@v0.20.2/pkg/util/runtime/runtime.go:48 +0x75
panic({0x158c020, 0x254d3c0})
	runtime/panic.go:1038 +0x215
github.com/minio/operator/pkg/controller/cluster.(*Controller).upgrade429(0xb, {0x1991278, 0xc000196000}, 0xc000a58018)
	github.com/minio/operator/pkg/controller/cluster/upgrades.go:271 +0x10d
github.com/minio/operator/pkg/controller/cluster.(*Controller).checkForUpgrades(0xc0001d8000, {0x1991278, 0xc000196000}, 0xc000662000)
	github.com/minio/operator/pkg/controller/cluster/upgrades.go:85 +0x8c7
github.com/minio/operator/pkg/controller/cluster.(*Controller).syncHandler(0xc0001d8000, {0xc0006ba960, 0x2c})
	github.com/minio/operator/pkg/controller/cluster/main-controller.go:630 +0x506
github.com/minio/operator/pkg/controller/cluster.(*Controller).processNextWorkItem.func1({0x1523ba0, 0xc000836760})
	github.com/minio/operator/pkg/controller/cluster/main-controller.go:557 +0x245
github.com/minio/operator/pkg/controller/cluster.(*Controller).processNextWorkItem(0xc0001d8000)
	github.com/minio/operator/pkg/controller/cluster/main-controller.go:569 +0x62
github.com/minio/operator/pkg/controller/cluster.(*Controller).runWorker(0x0)
	github.com/minio/operator/pkg/controller/cluster/main-controller.go:509 +0x70
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7f997beb7db8)
	k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000692000, {0x19653e0, 0xc0008b92f0}, 0x1, 0xc000183500)
	k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000692000, 0x3b9aca00, 0x0, 0x80, 0xc000db4fd0)
	k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x8dd5c0, 0xc0001ea610, 0xc000db4fb8)
	k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:90 +0x25
created by github.com/minio/operator/pkg/controller/cluster.(*Controller).Start.func1
	github.com/minio/operator/pkg/controller/cluster/main-controller.go:394 +0x2dd
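
The trace points to a nil pointer dereference inside `upgrade429` (upgrades.go:271), i.e. the v4.2.9 upgrade step dereferences an optional part of the Tenant spec that this tenant never set. The exact field is not visible from the trace; below is a minimal, hypothetical sketch (illustrative type and field names, not the operator's real ones) of the kind of nil guard the fix needs:

```go
package main

import (
	"context"
	"fmt"
)

// Illustrative stand-ins for the operator's Tenant types; the real ones live in
// github.com/minio/operator/pkg/apis/minio.min.io/v2.
type AuditConfig struct {
	DiskCapacityGB *int
}

type LogConfig struct {
	Audit *AuditConfig
}

type TenantSpec struct {
	Log *LogConfig
}

type Tenant struct {
	Spec TenantSpec
}

// upgrade429Sketch shows the shape of the guard: optional sub-configs must be
// checked before they are dereferenced, otherwise a tenant created without
// that section panics the controller during the upgrade pass.
func upgrade429Sketch(ctx context.Context, tenant *Tenant) error {
	if tenant == nil {
		return fmt.Errorf("nil tenant")
	}
	if tenant.Spec.Log == nil || tenant.Spec.Log.Audit == nil {
		// Tenant has no log/audit section; nothing to migrate for this step.
		return nil
	}
	// ... migrate tenant.Spec.Log.Audit here ...
	return nil
}

func main() {
	// A tenant without a Log section, like the one that triggers the panic above.
	t := &Tenant{}
	if err := upgrade429Sketch(context.Background(), t); err != nil {
		fmt.Println("upgrade step failed:", err)
		return
	}
	fmt.Println("upgrade step skipped safely")
}
```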
bitsf commented Jan 9, 2022

After fixing the nil pointer, there is another issue: minio-harborcluster-sample has no log secret.

NOTE: this log entry is not an issue, just a warning.

bitsf commented Jan 9, 2022

The MinIO pod fails to start after upgrading.

It looks like the Pod runs as user 10000, which does not have permission to modify /usr/bin/minio.

NOTE: as a workaround, if you upgrade Harbor before this issue is fixed, run `kubectl delete sts -n cluster-sample-ns minio-harborcluster-sample-zone-harbor` (see the commands below).
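
For reference, a possible sequence for that workaround (namespace and StatefulSet name taken from the comment above; adjust them to your cluster):

```shell
# Delete the old MinIO StatefulSet created before the upgrade
kubectl delete sts -n cluster-sample-ns minio-harborcluster-sample-zone-harbor

# Watch the replacement pods come back up
kubectl get pods -n cluster-sample-ns -w
```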

bitsf commented Mar 17, 2022

At least 3 worker nodes are needed, or delete the old minio-operator ReplicaSet when upgrading harbor-operator (see the commands below).
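
A possible way to find and remove the stale ReplicaSet (the namespace comes from the operator logs above; the exact ReplicaSet name will differ per install):

```shell
# List ReplicaSets of the minio-operator deployment
kubectl get rs -n harbor-operator-ns | grep minio-operator

# Delete the old ReplicaSet so the new operator pod can be scheduled
kubectl delete rs -n harbor-operator-ns <old-minio-operator-replicaset>
```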

bitsf closed this as completed on Mar 17, 2022