
Deploying a Crossplane provider-sql PostgreSQL Role fails with "the server could not find the requested resource" #2865

Closed
renannprado opened this issue Mar 8, 2024 · 3 comments · Fixed by #2889
Assignees
Labels
kind/bug Some behavior is incorrect or out of spec resolution/fixed This issue was fixed

Comments

renannprado commented Mar 8, 2024

What happened?

Trying to deploy a PostgreSQL Role with ConfigGroup fails with:

  kubernetes:postgresql.sql.crossplane.io/v1alpha1:Role (postgresql/testrole):
    error: Preview failed: resource "urn:pulumi:dev::keycloak-provisioning::kubernetes:core/v1:Namespace$kubernetes:yaml:ConfigGroup$kubernetes:postgresql.sql.crossplane.io/v1alpha1:Role::postgresql/testrole" was not successfully created by the Kubernetes API server : the server could not find the requested resource

The same problem happens with ConfigFile as well.

Deploying a Database from the same Crossplane provider, on the other hand, works just fine.

Example

```typescript
const pgSqlRole = new kube.yaml.ConfigGroup("keycloak-postgresql-role", {
        yaml: `
            apiVersion: postgresql.sql.crossplane.io/v1alpha1
            kind: Role
            metadata:
                name: testrole
                namespace: postgresql
            spec:
                providerConfigRef:
                    name: in-cluster-postgresql
                forProvider:
                    privileges:
                        createDb: false
                        login: true
                writeConnectionSecretToRef:
                    name: test-secret
                    namespace: postgresql
        `
    }, {provider: kubeProvider, parent: namespace})
```

Output of pulumi about

CLI
Version      3.109.0
Go Version   go1.22.1
Go Compiler  gc

Plugins
NAME        VERSION
kubernetes  4.9.0
nodejs      unknown

Host
OS       darwin
Version  14.3.1
Arch     arm64

This project is written in nodejs: executable='/Users/pradore/.asdf/shims/node' version='v21.6.2'

Current Stack: organization/keycloak-provisioning/dev

Found no resources associated with dev

Found no pending operations associated with dev

Backend
Name           Renanns-MacBook-Pro-10.local
URL            file://./state
User           pradore
Organizations
Token type     personal

Dependencies:
NAME                VERSION
@pulumi/kubernetes  4.9.0
@pulumi/pulumi      3.109.0
@types/node         18.19.22
ts-dedent           2.2.0

Pulumi locates its logs in /var/folders/cd/xn0tnx691gs84m7m7rl41rxm0000gn/T/ by default

Additional context

Trying to use ConfigGroup because I can't generate the typescript definitions due to pulumi/crd2pulumi#126.

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

@renannprado renannprado added kind/bug Some behavior is incorrect or out of spec needs-triage Needs attention from the triage team labels Mar 8, 2024
EronWright (Contributor) commented Mar 12, 2024

One problem here is that the ConfigGroup has namespace as its parent. One should use dependsOn to establish an ordering dependency.

The specific problem is, I think, that Role in postgresql.sql.crossplane.io/v1alpha1 is cluster-scoped (not namespace-scoped). Try removing the metadata.namespace field.

```typescript
const pgSqlRole = new kube.yaml.ConfigGroup("keycloak-postgresql-role", {
        yaml: `
            apiVersion: postgresql.sql.crossplane.io/v1alpha1
            kind: Role
            metadata:
                name: testrole
            spec:
                providerConfigRef:
                    name: in-cluster-postgresql
                forProvider:
                    privileges:
                        createDb: false
                        login: true
                writeConnectionSecretToRef:
                    name: test-secret
                    namespace: postgresql
        `
    }, {provider: kubeProvider})
```

Update: I was able to reproduce this issue using the Crossplane Role type, and the problem is actually a bug in the provider related to the fact that the kind is named Role. The comments above still stand, but there's also a bug, and I'll prioritize a fix.

Meanwhile, a possible workaround is to wrap the manifest into a local Helm chart and use the Release resource.

@EronWright EronWright added awaiting-feedback Blocked on input from the author and removed needs-triage Needs attention from the triage team labels Mar 12, 2024
renannprado (Author) commented
@EronWright your comment makes sense, I might try that later as well. Thanks for having a look!

I forgot to write it in here, but I already did what you suggested: I moved the entire application from Pulumi to Helm so that I could create this resource. I have mixed feelings about it because I'm not a big fan of Helm, but it gets the job done for now, at least until I can move back fully to Pulumi.

rquitales (Member) commented
I looked into this briefly, and it seems we have a bug in how we create our dynamic client. The Role resource here is cluster-scoped, whereas the Kubernetes-native roles.rbac.authorization.k8s.io resource is namespaced.

Looking at the code where we generate our clients:

```go
namespaced, err := IsNamespacedKind(gvk, dcs)
if err != nil {
	return nil, err
}
if namespaced {
	return dcs.GenericClient.Resource(m.Resource).Namespace(NamespaceOrDefault(namespace)), nil
}
```

We determine whether we need to create a namespaced client or not. The logic for deciding whether a resource is namespaced is fairly simple: we just check against a list of known kinds, without taking the full GVK into account:

```go
if known, namespaced := kinds.Kind(kind).Namespaced(); known {
```

```go
case Role,
	RoleBinding,
	Secret,
	Service,
	ServiceAccount,
	StatefulSet:
	return true, true
```

This means that any custom resource whose kind matches one in that hard-coded list (e.g. Role) will always get a namespaced client. We need to improve this logic to account for resources that share a kind name with the ones that come with Kubernetes natively.

@rquitales rquitales removed the awaiting-feedback Blocked on input from the author label Mar 13, 2024
EronWright added a commit that referenced this issue Mar 19, 2024
### Proposed changes

This PR fixes a couple of related problems with "ambiguous kinds":
1. For kinds with clashing names (e.g. `Role`), be sure to check the
apiversion before using built-in information or dynamic discovery.
2. For kinds with casing problems, don't mask the problem; show the API
server error.

Note that kubectl has the following behavior w.r.t (2):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: awx
metadata:
  name: my-awx
  namespace: awx
```

```
❯ kubectl apply -f manifest.yaml --server-side=false
The awx "my-awx" is invalid: kind: Invalid value: "awx": must be AWX
❯ kubectl apply -f manifest.yaml --server-side=true
Error from server (BadRequest): invalid object type: awx.ansible.com/v1beta1, Kind=awx
```

An explanation of the technical approach: the `kinds.Kind` type is used
in the codebase to represent a well-known kind, i.e. known at code
generation time. To prepare this PR, I audited the locations where
`Kind` is used, and ensured that it wasn't being used for arbitrary
kinds. Where necessary, the use of `Kind` was conditioned on the
`apiVersion` being one of the well-known values.

### Related issues (optional)

Closes #2865
Closes #2143
@pulumi-bot pulumi-bot added the resolution/fixed This issue was fixed label Mar 19, 2024