Add support for merging profiles recorded by a single profileRecording #1179

Merged: 11 commits into kubernetes-sigs:main on Oct 6, 2022

Conversation

@jhrozek (Contributor) commented Sep 20, 2022

What type of PR is this?

/kind feature
/kind api-change

What this PR does / why we need it:

  • profilerecording API: Add an optional mergeStrategy field
    The profileRecording CRD gets a new field mergeStrategy. This field defaults to "none" which is the same behavior as earlier. When set to "containers", all policies belonging to the same container and the same recording will be merged.

  • api: add a profileSpecBase interface
    Adds a new interface that all profiles capable of producing recordable profiles should implement. The interface consists of two methods:
    IsPartial() - if the policy is partial and should not be reconciled yet
    ListProfilesByRecording() - lists all profiles recorded by a particular recording
    These will come in handy when merging the profiles. Also adds several labels that these profiles will use to identify the recording and mark the profile as partial.

  • profilerecording: Create partial profiles if recording sets mergeStrategy=containers
    When the mergeStrategy field of profileRecording is set to containers, the profilerecorder controller in the daemon would create profiles with
    the label profilebase.ProfilePartialLabel set to true. The profiles would furthermore contain a label linking to the profileRecording and
    the name of the container which will later aid in reconstructing the merged profile.
    When the partial profiles are created, a finalizer is set on the profileRecording which prevents its deletion unless the partial profiles are either deleted manually by the admin or merged.

RBAC changes - the profilerecording controller (running under the spod identity) is now allowed to list, watch, and get profilerecordings

  • nodestatus: Add a new profile status "Partial" for profiles yet to be merged
    Adds a new profile status "Partial" to both the nodestatus CRD and, by extension, the profiles themselves. This status is used to denote profiles that are not to be used right away, but are about to be merged.
    Apart from managing the status itself, the node finalizer would also remove the finalizer preventing the recording from being deleted once all the partial profiles go away. This covers the use case where someone records partial policies and then just wants to remove them instead of merging.

  • seccomp/selinux: do not handle partial seccomp and selinux profile
    Skips reconciling partial selinux and seccomp profiles. SPO should only reconcile those when they are merged.

  • api: Add a generic Go interface to merge partial profiles
    In order to avoid repeating ourselves in the merge code, let's add a new interface MergeableProfile with a single method Merge() that is implemented by all profiles that can be recorded (see the sketch after this list).
    Moves UnionSyscalls from the seccomp controller to the seccomp types along with its unit tests.

  • controller: Add a new recording merger controller
    Adds a new controller called recordingmerger that watches for profilerecordings that are about to be deleted. For those recordings, all partial profiles are merged into a final profile and created.

RBAC changes: adds delete/deletecollection for both seccomp and selinux profiles to the SPO controllers in order to remove the partial profiles.

  • e2e tests: Add tests for profilemerging
    Adds e2e tests that record a number of partial profiles where one of the pods in the recorded deployment performs a different action than the others, then merges the partial profiles and ensures that the merged profile is in fact a union of the partial profiles.

  • bundle: update with the recent changes
    Does what it says
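
For readers skimming the list above, here is a minimal Go sketch of the merge flow the recordingmerger controller is described as performing: fold all partial profiles recorded for one container into a single profile through a generic interface. The interface shape and the helper name (mergePartialProfiles) are illustrative assumptions based on the description above, not the exact code in this PR.

// MergeableProfile is a sketch of the generic interface described above;
// the concrete method set in the PR may differ.
type MergeableProfile interface {
	Merge(other MergeableProfile) error
}

// mergePartialProfiles folds a slice of partial profiles (all recorded for
// the same container by the same recording) into the first element and
// returns the merged profile, or nil for an empty input.
func mergePartialProfiles(partials []MergeableProfile) (MergeableProfile, error) {
	if len(partials) == 0 {
		return nil, nil
	}
	merged := partials[0]
	for _, p := range partials[1:] {
		if err := merged.Merge(p); err != nil {
			return nil, err
		}
	}
	return merged, nil
}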

Which issue(s) this PR fixes:

Fixes #677

Does this PR have test?

Yes

Special notes for your reviewer:

I'll point some out inline

Does this PR introduce a user-facing change?

The profileRecording CR contains a new optional field mergeStrategy, defaulting
to none. When this field is set to containers, the recorded profiles will be set
as partial, both using a label and the profile status. These profiles are not
reconciled as long as the profileRecording exists; once the recording is deleted,
the partial profiles are merged and a union of the partial profiles is created
and finally reconciled.

This allows for easier recording of policies, e.g. during longer e2e runs that
record the policies.

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/feature Categorizes issue or PR as related to a new feature. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Sep 20, 2022
@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Sep 20, 2022
@jhrozek (Contributor Author) commented Sep 20, 2022

I'll put some questions for reviewers tomorrow.

profileNamespacedName types.NamespacedName,
profileID string,
) error {
labels, err := profileLabels(
Contributor Author:

there's a bit of code duplication between the three recording functions, something to clean up perhaps

@@ -98,6 +101,30 @@ type SelinuxProfile struct {
Status SelinuxProfileStatus `json:"status,omitempty"`
}

func (sp *SelinuxProfile) Merge(other profilebasev1alpha1.MergeableProfile) error {
// TODO(jhrozek): should we be defensive about checking if other attributes match as well? (e.g. inherit)
Contributor Author:

should we? The profiles to merge /should/ come from the same recording, but I guess additional hardening wouldn't hurt?

Member:

I'd say we make this private so that other projects do not use it via the API.

Contributor Author:

By making this private do you mean "not exposed by the API modules directly"?
I'm thinking about how to make this private while still letting the merge controller use an interface and not care about which profile type it is merging. Would something like the following work for you?

type MergeableProfile interface {
    merge() error
}

type MergeableSeccompProfile struct {
   SeccompProfile
}

func (msp *MergeableSeccompProfile) merge(....) error {
   ...
}

type MergeableSelinuxProfile struct {
   SelinuxProfile
}

func (msp *MergeableSelinuxProfile)  merge(....) error {
   ...
}

this might be a private API in the merge controller or somewhere else in utils/, but not exposed outside SPO...

Contributor Author:

correction: The Merge method could be exported outside its module, just not in the API. I guess that would still satisfy what you asked for?

Contributor Author:

Done, please check

@saschagrunert (Member) left a comment:

Great work, I just have a bunch of comments regarding the API visibility.

@@ -183,3 +212,30 @@ type SeccompProfileList struct {
func init() { //nolint:gochecknoinits // required to init scheme
SchemeBuilder.Register(&SeccompProfile{}, &SeccompProfileList{})
}

func UnionSyscalls(baseSyscalls, appliedSyscalls []*Syscall) []*Syscall {
Member:

Do we wanna make this private? This function should be only used internally since it does not handle arches and arguments.

Contributor:

I agree, I don't think we need to make this public for now.

Contributor Author:

I moved UnionSyscalls into a util module where it can be reused by both the daemon and the controller

ctx,
list,
client.InNamespace(recordingNamespace),
client.MatchingLabels{v1alpha1.ProfileToRecordingLabel: recordingName}); err != nil {
Contributor:

We might need to pre-parse the recordingName to ensure it's usable as a label. Remember labels have a length limitation. We might need to limit recording names or hash in case the length is too long.

Contributor Author:

Good call!

Do you (or @sascha) have a preference? I think I prefer limiting the names, just because the hashes are user-unfriendly and there's really no strict need to keep them unique (unlike e.g. node labels).

An easy way might be to add an event in the webhook recorder that says the recording name is too long and no recording will happen, but I'm not sure if it's super well discoverable. Unfortunately we don't have a profilerecording controller except the merger. Actually, thinking out loud, the merger might be a good place to do that, even if it's not strictly its job. (I'd prefer to keep the webhooks as simple as possible).

This wouldn't prevent the recordings from being created with names that are too long, though; we'd need an admission webhook for that, which I think is really overkill.

Contributor:

No preferences on my side. I'd love it if we could just leverage the CRD definitions to limit the name size (as we can add validations to fields in the CRD)... but I'm not sure and I don't think that's possible. So... whatever you see fit. All I'd like is that this is taken into account.

Contributor Author (@jhrozek, Sep 29, 2022):

OK, there's a new commit that emits an event in the webhook as well as an error when collecting the recording. Instead of the error we could be more permissive and just create the recording as usual (so not partial) and without the label.
I'm not sure myself what's preferable, but error seems somehow "safer" than guessing.

Contributor:

I'm not sure myself what's preferable, but error seems somehow "safer" than guessing.

What would happen if we didn't error out? Would we truncate and continue?

@@ -183,3 +212,30 @@ type SeccompProfileList struct {
func init() { //nolint:gochecknoinits // required to init scheme
SchemeBuilder.Register(&SeccompProfile{}, &SeccompProfileList{})
}

func UnionSyscalls(baseSyscalls, appliedSyscalls []*Syscall) []*Syscall {
allSyscalls := make(map[seccomp.Action]map[string]bool)
Contributor:

uhm... you already have a notion of how long the syscall lists will be from the two objects being passed. Why not directly allocate this map with the length of the longest list of the two? That way you reduce the number of allocations made at runtime.

Contributor Author:

I can fix these comments (they are valid); just note that I lifted the existing function from the seccomp controller :-) and didn't write it from scratch.

Contributor:

@jhrozek ah! I hadn't noticed.

Contributor Author:

Done in a separate commit.

allSyscalls[s.Action][n] = true
}
}
finalSyscalls := make([]*Syscall, 0, len(appliedSyscalls)+len(baseSyscalls))
Contributor:

doesn't this allocate more memory than you might need? some, if not most, of the entries of these profiles will be the same. Seems to me like it would be preferable to allocate only for the length of the longest array. The rest of the allocations would be handled via the append function.

Contributor Author:

yes, probably

Contributor Author:

Done in a separate commit.
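
To make the allocation discussion in this thread concrete, here is a small, self-contained sketch of a union with pre-sized allocations. The simplified Syscall type below (plain string action, list of names) is an illustrative stand-in, not the operator's real API type, and the sizing heuristic is just one way to follow the suggestion above.

// Syscall is a simplified stand-in for the real seccomp profile type.
type Syscall struct {
	Action string
	Names  []string
}

// unionSyscalls merges two syscall lists, deduplicating names per action.
// The per-action name sets and the result slice are sized from the inputs
// up front to cut down on re-allocations, as suggested in the review.
func unionSyscalls(base, applied []*Syscall) []*Syscall {
	byAction := make(map[string]map[string]bool, len(base)+len(applied))
	for _, list := range [][]*Syscall{base, applied} {
		for _, s := range list {
			if byAction[s.Action] == nil {
				byAction[s.Action] = make(map[string]bool, len(s.Names))
			}
			for _, name := range s.Names {
				byAction[s.Action][name] = true
			}
		}
	}
	result := make([]*Syscall, 0, len(byAction))
	for action, names := range byAction {
		merged := &Syscall{Action: action, Names: make([]string, 0, len(names))}
		for name := range names {
			merged.Names = append(merged.Names, name)
		}
		result = append(result, merged)
	}
	return result
}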

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 28, 2022
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 29, 2022
@codecov-commenter

Codecov Report

Merging #1179 (b2dc494) into main (4e141f2) will decrease coverage by 2.06%.
The diff coverage is 31.34%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1179      +/-   ##
==========================================
- Coverage   47.51%   45.44%   -2.07%     
==========================================
  Files          44       48       +4     
  Lines        4927     5362     +435     
==========================================
+ Hits         2341     2437      +96     
- Misses       2487     2805     +318     
- Partials       99      120      +21     

@jhrozek (Contributor Author) commented Sep 29, 2022

@JAORMX I tried to implement your suggestion regarding using ownership instead of finalizers and I couldn't make it work. I think you meant to use foreground cascading deletion, right?
That works fine, but you need to delete the object (in our case the profile recording) while passing a DeleteOption (see the doc). When just deleting the object as an admin would probably do, and as kubectl delete does, the dependent objects were just removed without adding the finalizer. Adding the finalizer myself as a hack didn't work either. We can discuss this more when you're back (I have the experimental patch still in a special branch).
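
For context on the thread above: foreground cascading deletion only kicks in when the deleting client explicitly asks for it, which is exactly the limitation described. A minimal sketch of how that request looks with controller-runtime, assuming a client c and an already-fetched recording object (the function name is illustrative):

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// deleteWithForegroundCascading deletes an object with foreground propagation,
// so the owner is only removed after its dependents are gone. A plain
// `kubectl delete` does not pass this option by default, which is why the
// finalizer-based approach was kept in this PR.
func deleteWithForegroundCascading(ctx context.Context, c client.Client, obj client.Object) error {
	return c.Delete(ctx, obj, client.PropagationPolicy(metav1.DeletePropagationForeground))
}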

@jhrozek jhrozek force-pushed the deploy_rec branch 3 times, most recently from 2efdd07 to 8e84658 Compare September 30, 2022 08:08
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 2, 2022
actually used.

In this case, it might be useful to merge the per-container profiles
into a single profile. This can be done by setting the `mergeStrategy`
Contributor:

For my own understanding: this is useful because it makes the one true profile easier to find, correct?

Otherwise, I could fumble through the profiles and grab one that doesn't contain all the syscalls, since it wasn't used at runtime?

Contributor Author:

That, but also: merging might be more useful than the default of no merging when the profile is recorded in an e2e test suite, where you might exercise several different test cases and what you want to allow is probably the union of all of them. Another use case is replicated pod controllers (daemonSets, deployments) where you run several replicas of the same pod, but the test would only trigger an action on one of them (e.g. you have 10 replicas of nginx listening on port 80, which probably gives you bind() and listen(), but if you curl one of them, only that replica would record accept()).

You are right that the profile merging is a usability vs. security tradeoff. If you wanted to be very secure, you'd review every recorded policy, but since the target "audience" are workloads that are privileged in some way and have no policy at all, even recording a merger of several would probably be quite a bit more secure than none.

You raised some good points, though. Should I add more context to the docs? Please feel free to suggest what might be missing or needs clarification, I might suffer from "tunnel vision" being familiar with the code.


In this case, it might be useful to merge the per-container profiles
into a single profile. This can be done by setting the `mergeStrategy`
attribute to `containers` in the `ProfileRecording`. Note that the following
Contributor:

Are none (the default) and containers the only two options for mergeStrategy?

Contributor Author:

Right now, yes. I was also wondering about an option called workload, or even per-workload options like deployment as you suggested earlier, which would produce just one policy per workload, but so far I haven't seen a good use case for that.

// ProfileToContainerLabel is the name of the container that produced this profile.
ProfileToContainerLabel = "spo.x-k8s.io/container-id"
// RecordingHasUnmergedProfiles is a finalizer that indicates that the recording has partial policies. Its
// main use is as to hold off the deletion of the recording until all partial profiles are merged.
Contributor:

nit: s/as//

Contributor Author:

thanks, fixed

@@ -50,6 +67,13 @@ type ProfileRecordingSpec struct {
// +kubebuilder:validation:Enum=bpf;logs
Recorder ProfileRecorder `json:"recorder"`

// Whether or how to merge recorded profiles. Can be one of "none" or "containers".
Contributor:

Ok - this answers a question I had elsewhere.

What other types of merge strategies do we anticipate? If I use the containers strategy, my profile recording will be an aggregate of everything that happened across the containers. Do we anticipate merging recordings across other entities, like Deployments and ReplicaSets?

For example, would I be able to distill recordings from multiple Deployments if I set deployment as the merge strategy (assuming we implemented something like that?)

To merge, or not to merge, feels like a binary decision, and the setting here alludes to potential functionality that adds another dimension to record merging, so I'm trying to understand that bit.

Contributor Author:

I was thinking that maybe the workload level might be a good candidate for the future, or even a profile-recording level (so everything recorded using a single profileRecording would result in a single policy). The container-level merging seemed like a good start: less granular than the per-container-instance default of none, but still not merging too much. As I said elsewhere, profile merging is a bit of a security/usability tradeoff: you trade not having to deal with potentially dozens of policies after a recording session against the danger that one of the policies records something you don't want in the resulting policy.

I wonder if @saschagrunert or @JAORMX have other opinions?

Member:

I agree leaving the option for further higher level merges is a good path forward.
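
For reference, the new field discussed in this thread looks roughly like the sketch below, modelled on the Recorder field shown in the diff context; the exact type name, marker set, and default handling in the merged PR may differ.

// ProfileMergingStrategy selects whether and how recorded profiles are merged.
type ProfileMergingStrategy string

const (
	// ProfileMergingNone keeps one profile per container instance (previous behavior).
	ProfileMergingNone ProfileMergingStrategy = "none"
	// ProfileMergingContainers merges all profiles recorded for the same
	// container name within a single recording.
	ProfileMergingContainers ProfileMergingStrategy = "containers"
)

type ProfileRecordingSpec struct {
	// ... existing fields such as Recorder elided ...

	// Whether or how to merge recorded profiles. Can be one of "none" or "containers".
	// +optional
	// +kubebuilder:default=none
	// +kubebuilder:validation:Enum=none;containers
	MergeStrategy ProfileMergingStrategy `json:"mergeStrategy,omitempty"`
}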

@@ -63,9 +65,10 @@ func LowerOfTwoStates(currentLowest, candidate ProfileState) ProfileState {
orderedStates := make(map[ProfileState]int)
orderedStates[ProfileStateError] = 0 // error must always have the lowest index
orderedStates[ProfileStateTerminating] = 1 // If one is set as terminating; all the statuses will end here too
orderedStates[ProfileStatePending] = 2
orderedStates[ProfileStateInProgress] = 3
orderedStates[ProfileStateInstalled] = 4
Contributor:

Since this is a public method, does changing the mapping between the state and the integer matter?

Contributor Author:

yes, sort of (btw this logic is "borrowed" from Compliance Operator :-)). The lower the integer, the more "important" the state is; in other words, the lower integer always "wins" when comparing two states. This method is used to find the "lowest common" state of two or more ProfileNodeStatus objects, the idea being that if one node status is Installed but another is still Pending, the profile as a whole should still list its state as "Pending" (we use the same logic for CO's scan statuses).

I did put the new state a bit arbitrarily, though, feel free to suggest a better "ordering" in the list of states.
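
To restate the comparison rule described above in code: the state mapped to the lower index always wins, so the aggregate profile state can never look "better" than its worst node status. The sketch below uses illustrative string values, and the position of the new Partial state is only an example, since the explanation above notes its placement is somewhat arbitrary.

type ProfileState string

// lowerOfTwoStates mirrors the logic described above: the state with the
// lower index wins, so a single Pending or Partial node status keeps the
// whole profile from being reported as Installed. Values and ordering are
// illustrative, not the exact constants used by the operator.
func lowerOfTwoStates(currentLowest, candidate ProfileState) ProfileState {
	orderedStates := map[ProfileState]int{
		"Error":       0, // error must always have the lowest index
		"Terminating": 1,
		"Partial":     2, // illustrative placement of the new state
		"Pending":     3,
		"InProgress":  4,
		"Installed":   5,
	}
	if orderedStates[candidate] < orderedStates[currentLowest] {
		return candidate
	}
	return currentLowest
}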

The profileRecording CRD gets a new field mergeStrategy. This field
defaults to "none" which is the same behavior as earlier. When set to
"containers", all policies belonging to the same container and the same
recording will be merged.
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 3, 2022
@jhrozek (Contributor Author) commented Oct 3, 2022

just a rebase, no other changes.

Adds a new interface that all profiles capable of producing recordable
profiles should implement. The interface consists of two methods:
    IsPartial() - if the policy is partial and should not be reconciled
    yet
    ListProfilesByRecording() - lists all profiles recorded by a
    particular recording

These will come in handy when merging the profiles. Also adds several
labels that these profiles will use to identify the recording and mark
the profile as partial.
…tegy=containers

When the mergeStrategy field of profileRecording is set to containers,
the profilerecorder controller in the daemon would create profiles with
the label profilebase.ProfilePartialLabel set to true. The profiles
would furthermore contain a label linking to the profileRecording and
the name of the container which will later aid in reconstructing the
merged profile.

When the partial profiles are created, a finalizer is set on the
profileRecording which prevents its deletion unless the partial profiles
are either deleted manually by the admin or merged.

RBAC changes:
    - the profilerecording controller (running under the spod identity)
      is now allowed to list, watch, and get profilerecordings
… merged

Adds a new profile status "Partial" to both the nodestatus CRD and by
extension the profiles themselves. This status is used to denote profiles
that are not to be used right away, but those that are about to be
merged.

Apart from managing the status itself, the node finalizer would also
remove the finalizer preventing the recording from being deleted if all
the partial profiles go away. This covers the use case where someone records
partial policies and then just wants to remove them instead of merging.
Skips reconciling partial selinux and seccomp profiles. SPO should only
reconcile those when they are merged.
Adds a new controller called recordingmerger that watches for
profilerecordings that are about to be deleted. For those recordings,
all partial profiles are merged into a final profile and created.

In order to avoid repeating ourselves in the merge code, let's add a new
interface MergeableProfile with two methods: Get() and Merge() which is
implemented by structures private to the controller that embed
SeccompProfile and SelinuxProfile respectively.

Moves UnionSyscalls from the seccomp controller to a utility module
along with its unit tests.

RBAC changes:
    - adds delete/deletecollection for both seccomp and selinux profiles
      to the SPO controllers in order to remove the partial profiles.
Adds e2e tests that record a number of partial profiles where one of the
pods in the recorded deployment performs a different action than the
others, then merges the partial profiles and ensures that the merged
profile is in fact a union of the partial profiles.
Since we now use the profile recording name as a label, we need to check
its name. Since we don't have a dedicated controller and status for the
recording, we emit an event. Moreover, we error and stop reconciling in
the recording controller.
@jhrozek (Contributor Author) commented Oct 3, 2022

@rhmdnd for some reason I couldn't reply to this comment inline, replying as a comment instead:

I'm not sure myself what's preferable, but error seems somehow "safer" than guessing.
What would happen if we didn't error out? Would we truncate and continue?

So maybe the comment is confusing; it was referencing a potential issue where, if the profileRecording name was too long, it wouldn't be possible to use it as a label value (k8s would error out on the label value, in this case when the list() method is called). I opted for erroring out proactively; what I meant by guessing was trying to list the profiles based on e.g. the recorded container name or the profile name itself, but that seems too hacky. Ultimately I think we want to make use of a new k8s feature which allows validating CRD attributes without resorting to webhooks (see issue #1199) and just limit the length of the name of the profile bindings.
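
As background for this thread: Kubernetes label values are limited to 63 characters, so a recording name longer than that simply cannot be stored in the ProfileToRecordingLabel. A minimal sketch of the proactive check described above, using the standard apimachinery validation helper (the function name and the surrounding event emission are illustrative):

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

// validateRecordingNameAsLabel checks that a recording name can be used
// verbatim as a label value (at most 63 characters, valid character set),
// so the recorder can emit an event and stop early instead of failing later
// when listing partial profiles by the ProfileToRecordingLabel.
func validateRecordingNameAsLabel(name string) error {
	if errs := validation.IsValidLabelValue(name); len(errs) > 0 {
		return fmt.Errorf("recording name %q is not usable as a label value: %v", name, errs)
	}
	return nil
}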

@saschagrunert (Member) left a comment:

Great work!

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 6, 2022
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jhrozek, saschagrunert

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [jhrozek,saschagrunert]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit 1bc5a47 into kubernetes-sigs:main Oct 6, 2022
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API kind/feature Categorizes issue or PR as related to a new feature. lgtm "Looks good to me", indicates that a PR is ready to be merged. release-note Denotes a PR that will be considered when it comes time to generate release notes. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.

Successfully merging this pull request may close these issues.

recording of replicated pods should contain the union of the recorded policies
6 participants