
Add PSA Flag to Scorecard #6187

Merged

Conversation

@theishshah (Member) commented Nov 18, 2022:

Add PSA Flag to Scorecard

Signed-off-by: Ish Shah <ishah@redhat.com>
@jmrodri (Member) left a comment:

/lgtm

Review thread on internal/scorecard/testpod.go (outdated, resolved)
The openshift-ci bot added the lgtm label on Nov 19, 2022.
Comment on lines 192 to 198
    podSecFlag := true
    if c.podSecurity == "resticted" {
        podSecFlag = true
    } else if c.podSecurity == "legacy" {
        podSecFlag = false
    }

Contributor commented:

A nit/typo:

Suggested change — from:

    podSecFlag := true
    if c.podSecurity == "resticted" {
        podSecFlag = true
    } else if c.podSecurity == "legacy" {
        podSecFlag = false
    }

to:

    podSecFlag := true
    if c.podSecurity == "restricted" {
        podSecFlag = true
    } else if c.podSecurity == "legacy" {
        podSecFlag = false
    }

As an aside, since we are processing this as a boolean, would it be easier to just make this a boolean flag rather than a string flag? Maybe something like --enable-psa? That way we wouldn't need if statements like this to turn a string into a boolean.

Member replied:

Somehow I missed this. No, it should not be treated as a boolean; it should match what we're doing in run bundle. In #6062 we treat it as an enum to match what is in operator-framework/api.
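
For reference, a minimal sketch of the enum-style handling described above, assuming spf13/pflag for flag parsing; the flag name and all identifiers are illustrative, not the exact ones from the PR:

    // Hypothetical sketch: accept --pod-security as a string flag but
    // validate it against a fixed set of allowed values, enum-style,
    // instead of collapsing it to a boolean.
    package main

    import (
        "fmt"

        "github.com/spf13/pflag"
    )

    const (
        podSecurityRestricted = "restricted"
        podSecurityLegacy     = "legacy"
    )

    // validatePodSecurity rejects anything outside the allowed values.
    func validatePodSecurity(v string) error {
        switch v {
        case podSecurityRestricted, podSecurityLegacy:
            return nil
        default:
            return fmt.Errorf("invalid --pod-security value %q: must be %q or %q",
                v, podSecurityRestricted, podSecurityLegacy)
        }
    }

    func main() {
        fs := pflag.NewFlagSet("scorecard", pflag.ExitOnError)
        podSecurity := fs.String("pod-security", podSecurityRestricted,
            "pod security level applied to test pods (restricted or legacy)")
        _ = fs.Parse([]string{"--pod-security=legacy"})

        if err := validatePodSecurity(*podSecurity); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("pod security level:", *podSecurity)
    }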

Contributor replied:

Would it be possible for the scorecard implementation to import and use the same logic that was created for operator-sdk run bundle?

I'm fine to leave this as is if we are happy with it, but it would be nice to use the same tooling/logic in every place we want this kind of functionality, rather than having slightly different methods for each implementation.

Member replied:

It might be possible, it just seems weird. We would likely want to move the enum handling out of run bundle into a common area so that it can be used by both sides: scorecard and run bundle.
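
A sketch of what that common area could look like; the package path and names are hypothetical:

    // Hypothetical shared package (e.g. internal/flags/podsecurity.go) that
    // both run bundle and scorecard could import instead of each keeping
    // its own copy of the enum handling.
    package podsecurity

    import "fmt"

    type Level string

    const (
        LevelRestricted Level = "restricted"
        LevelLegacy     Level = "legacy"
    )

    // Parse converts a raw flag value into a Level, rejecting anything
    // outside the enum.
    func Parse(raw string) (Level, error) {
        switch Level(raw) {
        case LevelRestricted, LevelLegacy:
            return Level(raw), nil
        default:
            return "", fmt.Errorf("unknown pod security level %q", raw)
        }
    }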

Review thread on internal/scorecard/testpod.go (outdated, resolved)

    // Create a Pod to run the test
    podDef := getPodDefinition(r.configMapName, test, r)
    if podSec {
        secCtx := v1.PodSecurityContext{}

Member commented:

He's adding the SeccompProfile:

    // SecurityContext: &corev1.PodSecurityContext{
    //     SeccompProfile: &corev1.SeccompProfile{
    //         Type: corev1.SeccompProfileTypeRuntimeDefault,
    //     },
    // },

That's what we ended up adding to the run bundle command. You shouldn't have to add the RunAsUser to it.
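
A minimal sketch of applying that security context to the scorecard test pod, assuming corev1 is k8s.io/api/core/v1; the helper name is hypothetical:

    // Hypothetical helper: attach the restricted-friendly security context
    // to the test pod, mirroring what run bundle does.
    package scorecard

    import (
        corev1 "k8s.io/api/core/v1"
    )

    func applyRestrictedSecurityContext(podDef *corev1.Pod) {
        podDef.Spec.SecurityContext = &corev1.PodSecurityContext{
            SeccompProfile: &corev1.SeccompProfile{
                Type: corev1.SeccompProfileTypeRuntimeDefault,
            },
        }
    }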

@bcrochet (Contributor) commented Nov 23, 2022:

When I was testing my patch, the scorecard-untar init container needed the RunAsUser (the RunAsGroup may not be necessary, but I put it in there for completeness). This is needed because that container uses UBI directly, and UBI does not specify a user, so the untar will not run. Here is the error message:

  Warning  Failed     4s (x2 over 4s)  kubelet            Error: container has runAsNonRoot and image will run as root (pod: "scorecard-test-tw6p_default(dba24b81-7d41-4941-9b4f-946e2a15a079)", container: scorecard-untar)

This is from running kubectl describe pod scorecard-test-xxxx with this patch. That's why the timeout is occurring in the e2e test. This is consistent with what I saw when testing before I put in the RunAsUser.
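
A sketch of the kind of change described, assuming the init container is named scorecard-untar; the helper name and the UID/GID values are illustrative, not the ones from the patch:

    // Hypothetical helper: give the UBI-based scorecard-untar init container
    // an explicit non-root user so it passes the runAsNonRoot admission
    // check. UID/GID 1000 are placeholder values.
    package scorecard

    import (
        corev1 "k8s.io/api/core/v1"
    )

    func setUntarRunAsUser(pod *corev1.Pod) {
        uid, gid := int64(1000), int64(1000)
        for i := range pod.Spec.InitContainers {
            if pod.Spec.InitContainers[i].Name == "scorecard-untar" {
                pod.Spec.InitContainers[i].SecurityContext = &corev1.SecurityContext{
                    RunAsUser:  &uid,
                    RunAsGroup: &gid,
                }
            }
        }
    }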

Contributor replied:

Maybe we should consider creating and pushing a custom image that will run as a non-root user by default?

Member replied:

@bcrochet I discussed with @everettraven and you are correct about the RunAsUser. The reason I didn't have to do it with run bundle is that the index images have the proper user IDs. So +1 to adding your changes to this PR.

Contributor replied:

> Maybe we should consider creating and pushing a custom image that will run as a non-root user by default?

My first thought was "why bother?", but then it occurred to me that it might not be a bad idea, and that it could be based on ubi(8|9):latest so that it is kept up to date on every release of operator-sdk.

Contributor replied:

> My first thought was "why bother?", but then it occurred to me that it might not be a bad idea, and that it could be based on ubi(8|9):latest so that it is kept up to date on every release of operator-sdk.

My only reason for suggesting this is that I believe there is a problem when setting RunAsUser on OpenShift >= 4.11.

There is this note in the sample pod security manifest in the Operator SDK Pod Security Standards docs:

    # WARNING: Ensure that the image used defines an UserID in the Dockerfile
    # otherwise the Pod will not run and will fail with `container has runAsNonRoot and image has non-numeric user`.
    # If you want your workloads admitted in namespaces enforced with the restricted mode in OpenShift/OKD vendors
    # then, you MUST ensure that the Dockerfile defines a User ID OR you MUST leave the `RunAsNonRoot` and
    # RunAsUser fields empty.

Creating a custom container image to use by default would allow the default logic to work without specifying RunAsUser, which, as far as I understand, should make it work on any cluster.
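
A sketch of such an image; the base image tag and the UID are assumptions, and the key line is the explicit numeric USER that UBI does not set on its own:

    # Hypothetical Dockerfile for a scorecard helper image that runs as a
    # non-root user by default. Rebuilding from ubi9:latest on each
    # operator-sdk release keeps the base current; 1001 is an arbitrary
    # non-root numeric UID.
    FROM registry.access.redhat.com/ubi9:latest
    USER 1001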

Contributor replied:

The "why bother?" was more about just one more asset to have to track. I'm totally on board with doing it though. I think it makes a lot of sense once I took a step back and thought about it.

Contributor commented:

@theishshah was there ever any decision on the direction to go with regard to creating a new container image for scorecard that will run as a non-root user within the container?

@jmrodri (Member) commented Dec 7, 2022:

@theishshah @everettraven why is this PR still here? This should've been done a while ago. What's holding it up?

@everettraven (Contributor) replied:

> @theishshah @everettraven why is this PR still here? This should've been done a while ago. What's holding it up?

Honestly not sure. I'm on vacation for the next two weeks, but if this PR is still hanging out when I get back I'll try to move it along.

Signed-off-by: Ish Shah <ishah@redhat.com>
The openshift-ci bot removed the lgtm label on Jan 5, 2023.
sanity errors fix
The openshift-ci bot removed the lgtm label on Jan 25, 2023.
@laxmikantbpandhare (Member) commented:

> These changes look good to me. One thing I think will cause an error in CI is that we probably need to run make generate to update the documentation on the scorecard cmd flags (I could be wrong).
>
> /lgtm

@everettraven - Yes, I updated the website content for the scorecard.

@everettraven (Contributor) left a comment:

/lgtm

The openshift-ci bot added the lgtm label on Jan 25, 2023.
@laxmikantbpandhare laxmikantbpandhare merged commit 8e7d743 into operator-framework:master Jan 25, 2023
@oceanc80 (Collaborator) commented Feb 1, 2023:

Closes #5939

@jmrodri (Member) commented Feb 2, 2023:

👏🏽 👏🏽 👏🏽
