
Psa downstream #349

Merged (4 commits) on Aug 11, 2022

Conversation

@anik120 (Contributor) commented Aug 8, 2022

The namespaces currently look like:

$ oc get ns openshift-operator-lifecycle-manager -o yaml  
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    openshift.io/node-selector: ""
    openshift.io/sa.scc.mcs: s0:c20,c0
    openshift.io/sa.scc.supplemental-groups: 1000380000/10000
    openshift.io/sa.scc.uid-range: 1000380000/10000
    workload.openshift.io/allowed: management
  creationTimestamp: "2022-08-08T20:43:32Z"
  labels:
    kubernetes.io/metadata.name: openshift-operator-lifecycle-manager
    olm.operatorgroup.uid/05abd798-6c32-444f-b464-8d6b055c1782: ""
    olm.operatorgroup.uid/0a8c1277-46e3-4fc8-a1cc-8aa53df77f87: ""
    openshift.io/cluster-monitoring: "true"
    openshift.io/scc: anyuid
  name: openshift-operator-lifecycle-manager
...

$ oc get ns openshift-operators -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    openshift.io/node-selector: ""
    openshift.io/sa.scc.mcs: s0:c20,c10
    openshift.io/sa.scc.supplemental-groups: 1000400000/10000
    openshift.io/sa.scc.uid-range: 1000400000/10000
    workload.openshift.io/allowed: management
  creationTimestamp: "2022-08-09T17:25:15Z"
  labels:
    kubernetes.io/metadata.name: openshift-operators
    openshift.io/scc: anyuid
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.24
  name: openshift-operators
...

Notice the baseline profile enforced in the openshift-operators namespace, to allow existing CSVs to install ☝🏽.

All workloads are working seamlessly with the right securityContext configs:

$ oc get pod catalog-operator-569796f9fc-22zp2 -n openshift-operator-lifecycle-manager -o yaml | grep securityContext -A 6
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsUser: 1000380000
    terminationMessagePath: /dev/termination-log
--
  securityContext:
    fsGroup: 1000380000
    runAsNonRoot: true
    seLinuxOptions:
      level: s0:c20,c0
    seccompProfile:
      type: RuntimeDefault

$ oc get pod olm-operator-59df754788-j4qm8 -n openshift-operator-lifecycle-manager -o yaml | grep securityContext -A 6
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsUser: 1000380000
    terminationMessagePath: /dev/termination-log
--
  securityContext:
    fsGroup: 1000380000
    runAsNonRoot: true
    seLinuxOptions:
      level: s0:c20,c0
    seccompProfile:
      type: RuntimeDefault

$ oc get pod package-server-manager-69d7b66648-7kzsp -n openshift-operator-lifecycle-manager -o yaml | grep securityContext -A 6
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
--
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000380000
    seLinuxOptions:
      level: s0:c20,c0
    seccompProfile:
      type: RuntimeDefault

$ oc get pods -n openshift-marketplace 
NAME                                    READY   STATUS    RESTARTS   AGE
certified-operators-xbpbt               1/1     Running   0          23m
community-operators-vfwgb               1/1     Running   0          9m17s
marketplace-operator-5df96fcb9c-cx6pq   1/1     Running   0          29m
redhat-marketplace-zw7bd                1/1     Running   0          23m
redhat-operators-d8cbk                  1/1     Running   0          23m

$ oc get pod community-operators-vfwgb -n openshift-marketplace -o yaml | grep securityContext -A 6
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: false
      runAsNonRoot: true
--
  securityContext:
    fsGroup: 1000280000
    seLinuxOptions:
      level: s0:c17,c4
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: community-operators

Test to make sure installing an operator in the openshift-operators namespace works as expected:

  1. Installed the aerospike-kubernetes-operator from the OperatorHub console (this installs the operator in the openshift-operators namespace)
$ oc get sub -n openshift-operators                   
NAME                            PACKAGE                         SOURCE                CHANNEL
aerospike-kubernetes-operator   aerospike-kubernetes-operator   community-operators   alpha

$ oc get pods -n openshift-marketplace 
NAME                                                              READY   STATUS      RESTARTS   AGE
5ce4a54e377765068d69d5a13713476ef8bfa345dd551448534d5f80abxhlcr   0/1     Completed   0          3m14s
certified-operators-gmcm7                                         1/1     Running     0          41m
community-operators-dmfdf                                         1/1     Running     0          41m
marketplace-operator-6ff65457ff-2hgvl                             1/1     Running     0          47m
redhat-marketplace-zn9bw                                          1/1     Running     0          41m
redhat-operators-q6m65                                            1/1     Running     0          41m

# check that the job has the right security context configs: 
$ oc get pod 5ce4a54e377765068d69d5a13713476ef8bfa345dd551448534d5f80abxhlcr -n openshift-marketplace -o yaml | grep securityContext -A 6
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1000270000
--
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1000270000
--
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1000270000
--
  securityContext:
    fsGroup: 1000270000
    seLinuxOptions:
      level: s0:c16,c15
    seccompProfile:
      type: RuntimeDefault

$ oc get csv -n openshift-operators 
NAME                                   DISPLAY                         VERSION   REPLACES   PHASE
aerospike-kubernetes-operator.v2.1.0   Aerospike Kubernetes Operator   2.1.0                Succeeded

$ oc get pods -n openshift-operators 
NAME                                                     READY   STATUS    RESTARTS   AGE
aerospike-operator-controller-manager-587678b6fc-67j7v   2/2     Running   0          32s
aerospike-operator-controller-manager-587678b6fc-xhswq   2/2     Running   0          32s

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 8, 2022
* (psa) make workloads compatible with psa:restricted profile

With the introduction of [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/#pod-security-admission-labels-for-namespaces), the recommended
best practice is to enforce the Restricted policy of admission (see [1] for more details).
This PR
*) labels the olm namespace as `enforce:restricted`
*) labels the operators namespace as `enforce:baseline` (to allow existing CSV deployments
without securityContext set to deploy in the namespace, which won't be possible with
`enforce:restricted`)
*) updates the securityContext of olm workload pods (olm-operator, catalog-operator,
and CatalogSource registry pods) to adhere to the `Restricted` policy.
*) updates the bundle unpacking job to create a pod that adheres to the `Restricted` policy,
so that bundles can be unpacked in the `Restricted` namespace.

Signed-off-by: Anik Bhattacharjee <anikbhattacharya93@gmail.com>

* (flaky test fix): GC CSV with wrong namespace annotation

The test was modifying the `olm.operatornamespace` annotation to an incorrect value,
and checking to make sure that the CSV was garbage collected as a result.
However, the olm-controller was copying a fresh copy back into the namespace,
so the test only passed when it happened to ask "is the CSV gone?" in the
brief window before the CSV was copied back again.
This commit fixes that by making sure that if we find a CSV that we expected to be
garbage collected, the test still passes if it determines that the CSV is a fresh copy,
and not the one modified before.

Signed-off-by: Anik Bhattacharjee <anikbhattacharya93@gmail.com>

Upstream-commit: 67177c0c822fbe7d554669262c6b4f54bebad17f
Upstream-repository: operator-lifecycle-manager
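The namespace labeling and pod securityContext changes described in the commit message above correspond roughly to manifests like the following. This is an illustrative sketch, not the actual chart templates: the pod name, image, and version values are invented, with label keys and securityContext fields drawn from the outputs earlier in this PR.

```yaml
# Sketch: PSA enforcement labels on the OLM namespace (values illustrative).
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operator-lifecycle-manager
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.24
---
# Sketch: a pod whose securityContext satisfies the Restricted policy.
apiVersion: v1
kind: Pod
metadata:
  name: example-olm-workload        # hypothetical name
  namespace: openshift-operator-lifecycle-manager
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: example
    image: example.com/olm:latest   # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
```

Under `enforce: restricted`, the admission controller rejects pods missing any of these fields, which is why existing CSV deployments without a securityContext get the weaker `baseline` level in openshift-operators.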
@anik120 anik120 force-pushed the psa-downstream branch 2 times, most recently from 5344ff4 to d6db602 Compare August 9, 2022 18:08
@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 9, 2022
@anik120 anik120 mentioned this pull request Aug 9, 2022
@anik120 (Contributor Author) commented Aug 9, 2022

/test e2e-gcp-olm

@openshift-ci-robot

/retest-required

Remaining retests: 2 against base HEAD eb1026a and 8 for PR HEAD d6db602 in total

@timflannagan (Contributor)

Looks like b2b test case failures.

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 10, 2022
@anik120 (Contributor Author) commented Aug 10, 2022

That's strange: it looks like the test `[It] cleanup csvs with bad namespace annotation` is failing, and I thought I'd fixed it in this commit. I did remove the flaky annotation from the test, and it passes upstream, but it looks like it's failing at a different stage than where the actual fix is. Regardless, the commit did introduce that change, though it looks like this was already expected to be a flaky area.

/test e2e-gcp-olm

Re-running to see if it fails in the same spot again so that I can investigate further tomorrow. It may just be a case of reintroducing the flaky annotation into the test: when I run the test upstream it passes every time, and it passes in AWS, so it's strange that there could be a delta in GCP for this test.

@anik120 (Contributor Author) commented Aug 10, 2022

Found the problem: garbage collection is working "too well" in GCP compared to kind/AWS, highlighting an error in the logic. Fix posted in operator-framework/operator-lifecycle-manager#2837; once that merges, I will pull that commit into this PR.

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Aug 10, 2022
@anik120 (Contributor Author) commented Aug 10, 2022

Pod pending timeout.

smh

/retest

This PR:
* introduces a chart value that decides whether the --set-workload-user-id flag is set to true
or false for the catalog-operator container
* introduces chart values to fill in the psa enforce level/version for the namespaces

Closes #2827

Signed-off-by: Anik Bhattacharjee <anikbhattacharya93@gmail.com>

Upstream-commit: f982e2fbeecfcb917665ed760363326e313b2967
Upstream-repository: operator-lifecycle-manager
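The chart values introduced by the commit above might look roughly like the fragment below. This is a hypothetical sketch only: the key names (`setWorkloadUserID`, `psa.enforceLevel`, `psa.enforceVersion`) are invented for illustration and the actual chart keys may differ.

```yaml
# Hypothetical Helm values fragment (key names invented for illustration).
catalogOperator:
  setWorkloadUserID: true   # drives the --set-workload-user-id container flag
namespaces:
  psa:
    enforceLevel: restricted
    enforceVersion: v1.24
```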
In the test `Operator Group cleanup csvs with bad namespace annotation`, the polling
logic that updates a copied csv with a bad annotation was checking to see if the
update was successful. However, once the copied csv is given a bad annotation, the CSV
is GC'd, and if the collection happens immediately, the polling logic fails. This fix
removes the logic that attempts to check the updated CSV, and instead relies on the
updateErr being nil to assert that the update was successful.

Signed-off-by: Anik Bhattacharjee <anikbhattacharya93@gmail.com>

Signed-off-by: Anik Bhattacharjee <anikbhattacharya93@gmail.com>

Upstream-commit: e08415d1228d095d0238dcf7264d3b70464e805b
Upstream-repository: operator-lifecycle-manager
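The race described in the commit message above can be illustrated with a self-contained sketch. This is hypothetical Python, not the actual Go test in operator-lifecycle-manager; every name here (`update_with_bad_annotation`, `gc_bad_annotations`, and so on) is invented for illustration.

```python
# Sketch of the flaky-polling fix: the old logic re-read the object after
# updating it to verify the update "took", but GC can delete the CSV
# immediately after a perfectly successful update, making the re-read fail.

def update_with_bad_annotation(store, name, annotation):
    """Update the object; return None on success (the 'updateErr being nil')."""
    if name not in store:
        return "not found"
    store[name]["annotations"]["olm.operatornamespace"] = annotation
    return None

def gc_bad_annotations(store):
    """GC any copied CSV whose operatornamespace annotation is wrong."""
    for key in [k for k, v in store.items()
                if v["annotations"].get("olm.operatornamespace") != "good-ns"]:
        del store[key]

def old_check(store, name, annotation, collect):
    """Old logic: verify by re-reading -- flaky, because GC may collect the
    CSV between the update and the verification read."""
    update_err = update_with_bad_annotation(store, name, annotation)
    collect(store)          # GC runs before the verification read
    return name in store    # fails even though the update succeeded

def new_check(store, name, annotation, collect):
    """Fixed logic: rely only on the update error being nil."""
    update_err = update_with_bad_annotation(store, name, annotation)
    collect(store)
    return update_err is None

store = {"copied-csv": {"annotations": {"olm.operatornamespace": "good-ns"}}}
print(new_check(store, "copied-csv", "bad-ns", gc_bad_annotations))  # True
```

With immediate garbage collection, `old_check` reports failure even though the update succeeded, while `new_check` passes, matching the fix's behavior.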
@anik120 (Contributor Author) commented Aug 10, 2022

/test unit-olm

@anik120 (Contributor Author) commented Aug 10, 2022

/test e2e-gcp-olm

@anik120 (Contributor Author) commented Aug 11, 2022

c'mon man
/retest

@openshift-ci bot commented Aug 11, 2022

@anik120: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@anik120 (Contributor Author) commented Aug 11, 2022

@timflannagan would you do the honors please 🥲

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 11, 2022
@openshift-ci bot commented Aug 11, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: anik120, timflannagan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [anik120,timflannagan]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@timflannagan (Contributor)

/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 11, 2022
@openshift-merge-robot openshift-merge-robot merged commit e41024e into openshift:master Aug 11, 2022
anik120 added a commit to anik120/operator-marketplace that referenced this pull request Aug 16, 2022
This PR removes the PSA baseline enforcement for workloads in the
openshift-marketplace namespace, so that the default restricted
profile can be enforced by default like other openshift-* namespaces.
This is possible due to the changes in
openshift/operator-framework-olm#349
openshift-bot pushed a commit to openshift-bot/operator-framework-olm that referenced this pull request Sep 13, 2024
* The `functionalities` param no longer exists.
  It was used to enable file fixes to ignore common lines from coverage.
  This feature now seems to be on by default.

* Adding `disable_search` because we do not need the codecov action
  to search for coverage files: we explicitly provide the files.

Signed-off-by: Mikalai Radchuk <mradchuk@redhat.com>
Upstream-repository: api
Upstream-commit: ce8a923541376d7e7907af90f6c91ff3abafd2f5
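Assuming the commit message above refers to the codecov/codecov-action GitHub Action (v4 or later, where the `disable_search` input exists), the change corresponds to a workflow step roughly like the following sketch; the step name and coverage file path are illustrative, not the repository's actual workflow.

```yaml
# Illustrative GitHub Actions step (file path and step name invented).
- name: Upload coverage
  uses: codecov/codecov-action@v4
  with:
    files: coverage.out     # coverage files provided explicitly
    disable_search: true    # skip searching the workspace for coverage files
```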