
Fix panic when returning error with provider reference name #302

Merged (2 commits) on Nov 11, 2021

Conversation

@hasheddan (Member) commented Nov 11, 2021

Description of your changes

Fixes a panic in the storage container controller, which used the provider
reference name in the error returned when we are unable to get the storage
account that serves as this type's provider. The ProviderReference may be
nil when the newer ProviderConfigReference is used instead.

Signed-off-by: hasheddan <georgedanielmangum@gmail.com>

Fixes #248
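
For context, here is a minimal, self-contained sketch of the nil-guard pattern the description implies. The types, field names, and the `refName` helper are illustrative stand-ins, not the actual provider-azure code in pkg/controller/storage/container:

```go
package main

import "fmt"

// Reference is a hypothetical stand-in for the referencer types used by
// crossplane-runtime; only the Name field matters for this sketch.
type Reference struct{ Name string }

// Spec mirrors the two ways a resource can point at its provider: the
// legacy ProviderReference and the newer ProviderConfigReference. Either
// one (or both) may be nil.
type Spec struct {
	ProviderReference       *Reference
	ProviderConfigReference *Reference
}

// refName picks a printable name without dereferencing a nil pointer,
// which is the mistake behind the panic at container.go:158.
func refName(s Spec) string {
	switch {
	case s.ProviderConfigReference != nil:
		return s.ProviderConfigReference.Name
	case s.ProviderReference != nil:
		return s.ProviderReference.Name
	default:
		return "<unset>"
	}
}

func main() {
	// Only the newer reference is set; formatting an error from
	// s.ProviderReference.Name here would panic before the fix.
	s := Spec{ProviderConfigReference: &Reference{Name: "default"}}
	fmt.Printf("failed to retrieve storage account referenced by %q\n", refName(s))
}
```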

I have:

  • Read and followed Crossplane's contribution process.
  • Run make reviewable test to ensure this PR is ready for review.

How has this code been tested

I have not tested this change extensively, but I have a high degree of confidence in the fix given the observed stack trace on panic:

goroutine 692 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/k8s.io/apimachinery@v0.20.1/pkg/util/runtime/runtime.go:55 +0x109
panic(0x197c8e0, 0x2993560)
	/opt/hostedtoolcache/go/1.16.7/x64/src/runtime/panic.go:965 +0x1b9
github.com/crossplane/provider-azure/pkg/controller/storage/container.(*containerSyncdeleterMaker).newSyncdeleter(0xc0003783e0, 0x1dde980, 0xc0002f0300, 0xc00028a480, 0xdf8475800, 0x0, 0x0, 0x1e0ab80, 0xc00028a480)
	/home/runner/work/provider-azure/provider-azure/pkg/controller/storage/container/container.go:158 +0x9d1
github.com/crossplane/provider-azure/pkg/controller/storage/container.(*Reconciler).Reconcile(0xc0002d75c0, 0x1dde9b8, 0xc0002f0300, 0x0, 0x0, 0xc0000c4600, 0x11, 0xc000b6b800, 0x0, 0x0, ...)
	/home/runner/work/provider-azure/provider-azure/pkg/controller/storage/container/container.go:114 +0x3a8
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000268a00, 0x1dde910, 0xc0000c6000, 0x19df200, 0xc000b63100)
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/sigs.k8s.io/controller-runtime@v0.8.0/pkg/internal/controller/controller.go:293 +0x30d
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000268a00, 0x1dde910, 0xc0000c6000, 0xc00025de00)
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/sigs.k8s.io/controller-runtime@v0.8.0/pkg/internal/controller/controller.go:248 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1(0x1dde910, 0xc0000c6000)
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/sigs.k8s.io/controller-runtime@v0.8.0/pkg/internal/controller/controller.go:211 +0x4a
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/k8s.io/apimachinery@v0.20.1/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00025df50)
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/k8s.io/apimachinery@v0.20.1/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000703f50, 0x1daa3c0, 0xc0005a8cc0, 0xc0000c6001, 0xc00032c120)
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/k8s.io/apimachinery@v0.20.1/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00025df50, 0x3b9aca00, 0x0, 0x993f01, 0xc00032c120)
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/k8s.io/apimachinery@v0.20.1/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x1dde910, 0xc0000c6000, 0xc0007de880, 0x3b9aca00, 0x0, 0x1)
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/k8s.io/apimachinery@v0.20.1/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(0x1dde910, 0xc0000c6000, 0xc0007de880, 0x3b9aca00)
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/k8s.io/apimachinery@v0.20.1/pkg/util/wait/wait.go:99 +0x57
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/home/runner/work/provider-azure/provider-azure/.work/pkg/pkg/mod/sigs.k8s.io/controller-runtime@v0.8.0/pkg/internal/controller/controller.go:208 +0x49e

I have also added a unit test that confirms the panic occurs prior to the change and does not occur after it.
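
As a rough illustration of what such a regression test asserts (the real case is the table entry added to Test_containerSyncdeleterMaker_newSyncdeleter, shown in the review comment below), a standalone version using the same illustrative types as the sketch above might look like this; the package name and file placement are assumptions:

```go
package container

import "testing"

// Compact copies of the illustrative types from the sketch above; they
// stand in for the real Container spec, not the provider-azure API types.
type reference struct{ name string }

type spec struct {
	providerReference       *reference // legacy field, may be nil
	providerConfigReference *reference // newer field, may be nil
}

func refName(s spec) string {
	switch {
	case s.providerConfigReference != nil:
		return s.providerConfigReference.name
	case s.providerReference != nil:
		return s.providerReference.name
	}
	return "<unset>"
}

// Mirrors the intent of the new
// FailedToGetAccountNotFoundNoDeleteProviderConfigRef table entry: with
// only the newer reference set, building the error text must not panic
// on the nil legacy reference.
func TestRefNameWithOnlyProviderConfigRef(t *testing.T) {
	defer func() {
		if r := recover(); r != nil {
			t.Fatalf("building the error message panicked: %v", r)
		}
	}()

	got := refName(spec{providerConfigReference: &reference{name: "example"}})
	if got != "example" {
		t.Errorf("refName() = %q, want %q", got, "example")
	}
}
```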

@@ -331,6 +331,24 @@ func Test_containerSyncdeleterMaker_newSyncdeleter(t *testing.T) {
"failed to retrieve storage account: %s", testAccountName),
},
},
{
name: "FailedToGetAccountNotFoundNoDeleteProviderConfigRef",
@hasheddan (Member Author):


This test panics without the updated error message.

Adds a unit test for the storage container reconciler to ensure that we
do not panic on usage of providerConfigRef when the storage account is
not found.

Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
@negz merged commit a50aa1a into crossplane-contrib:master on Nov 11, 2021
@hasheddan (Member Author):

/backport

@github-actions:

Successfully created backport PR #303 for release-0.17.


Successfully merging this pull request may close these issues.

Controller pod panics with storage - container resource if referenced provider (account) does not exist