Use system-wide configmap instead of configuration via cli args #446
A rough draft of what the configmap could look like:

```yaml
# eraser-config.yml
---
eraser:
  image: string
  pullPolicy: corev1.PullPolicy
  runtime: string, "containerd" | "dockershim" | "cri-o" # default "containerd"
  imagelist: string # points to imagelist used for deletion; set to empty when using collector
  profile:
    enable: bool
    port: int32
collector:
  image: string
  pullPolicy: corev1.PullPolicy
  runtime: string, "containerd" | "dockershim" | "cri-o" # default "containerd"
  scanDisabled: bool
  profile:
    enable: bool
    port: int32
scanner:
  image: string
  pullPolicy: corev1.PullPolicy
  profile:
    enable: bool
    port: int32
  cpu:
    request: string
    limit: string
  memory:
    request: string
    limit: string
  scanOptions:
    ignoreUnfixed: bool
    deleteIfScanFailed: bool
    securityChecks:
      - string
      - string
    severities:
      - string
      - string
    vulnerabilityTypes:
      - string
      - string
```
A rough proposal of how these values would propagate to their respective components: [diagram in original comment]
The ImageJob Controller need not reconcile on updates to the configmap. The configmap will simply be read once before creating the collector pods or the eraser pods.
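A minimal sketch of that read-once approach, assuming a controller-runtime client and a ConfigMap named `eraser-config` in the `eraser-system` namespace (both names are placeholders, not decided in this issue):

```go
package controllers

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/yaml"
)

// EraserConfig mirrors a small subset of the draft schema above.
type EraserConfig struct {
	Runtime  string `json:"runtime"`
	LogLevel string `json:"logLevel"`
}

// loadConfigOnce fetches and parses the ConfigMap a single time, right before
// the collector or eraser pods are created; no watch or reconcile is set up.
func loadConfigOnce(ctx context.Context, c client.Client) (*EraserConfig, error) {
	var cm corev1.ConfigMap
	key := types.NamespacedName{Namespace: "eraser-system", Name: "eraser-config"}
	if err := c.Get(ctx, key, &cm); err != nil {
		return nil, fmt.Errorf("reading eraser config: %w", err)
	}

	cfg := &EraserConfig{Runtime: "containerd", LogLevel: "info"} // defaults
	if raw, ok := cm.Data["eraser-config.yml"]; ok {
		if err := yaml.Unmarshal([]byte(raw), cfg); err != nil {
			return nil, fmt.Errorf("parsing eraser config: %w", err)
		}
	}
	return cfg, nil
}
```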
https://book.kubebuilder.io/component-config-tutorial/define-config.html — is it possible to use this? I am guessing not, since this is on the manager level?
I'll have to take a closer look, but it may be possible to use that. If possible, it would be preferable because it's a versioned schema. Looks like you can create a custom config type.
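For reference, a hedged sketch of what the component-config approach from the linked tutorial could look like here: a versioned config Kind that embeds the standard controller-runtime manager settings and adds Eraser-specific fields. Type and field names below are placeholders, not a settled design.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	cfg "sigs.k8s.io/controller-runtime/pkg/config/v1alpha1"
)

// +kubebuilder:object:root=true

// EraserManagerConfig is a versioned, file-based configuration for the
// manager, following the kubebuilder component-config tutorial linked above.
type EraserManagerConfig struct {
	metav1.TypeMeta `json:",inline"`

	// Standard manager settings (metrics address, leader election, etc.).
	cfg.ControllerManagerConfigurationSpec `json:",inline"`

	// Eraser-specific settings from the proposed schema.
	Runtime  string `json:"runtime,omitempty"`
	LogLevel string `json:"logLevel,omitempty"`
}
```

The manager would then load it at startup roughly as the tutorial shows:

```go
var err error
ctrlConfig := v1alpha1.EraserManagerConfig{}
options := ctrl.Options{Scheme: scheme}
if configFile != "" {
	options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile).OfKind(&ctrlConfig))
}
```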
In the course of PR #555, it came up that we should use this configmap to enable setting CPU limits/requests on the collector/scanner/eraser pods: #555 (comment)
Open questions

Proposed Schema

```yaml
---
runtime: containerd
otlpEndpoint: ""
logLevel: info
scheduling:
  repeatInterval: 24h # to be parsed into time.Duration
  beginImmediately: true
profile:
  enable: false
  port: 6060
imageJob:
  successRatio: 1.0 # float; ok with YAML?
  cleanup:
    delayOnSuccess: 0s # to be parsed into time.Duration
    delayOnFailure: 1d
pullSecrets: [] # image pull secrets for collector/scanner/eraser
nodeFilter:
  type: exclude # must be either exclude|include
  selectors:
    - eraser.sh/cleanup.filter
components:
  collector:
    enable: false
    image:
      repo: ghcr.io/azure/eraser/collector
      tag: latest
    request:
      cpu: 1000m
      mem: 500Mi
    limit:
      cpu: 1500m
      mem: 2Gi
  scanner:
    enable: false
    image:
      repo: ghcr.io/azure/eraser/trivy-scanner # supply custom image for custom scanner
      tag: latest
    request:
      cpu: 1000m
      mem: 500Mi
    limit:
      cpu: 1500m
      mem: 2Gi
    # The config needs to be passed through to the scanner as yaml, as a
    # single string. Because we allow custom scanner images, the scanner is
    # responsible for defining a schema, parsing, and validating.
    config: |
      # This is the schema for the default 'trivy-scanner'. We should document
      # this because most users will probably be using the default scanner.
      cacheDir: /var/lib/trivy
      dbRepo: ghcr.io/aquasecurity/trivy-db
      deleteFailedImages: true
      vulnerabilities:
        ignoreUnfixed: true
        types:
          - os
          - library
      securityChecks: # need to be documented; determined by trivy, not us
        - vuln
      severities:
        - CRITICAL
  eraser:
    image:
      repo: ghcr.io/azure/eraser/eraser
      tag: latest
    request:
      cpu: 1000m
      mem: 500Mi
    limit:
      cpu: 1500m
      mem: 2Gi
```

Checklist of CLI args

- manager:
- collector:
- trivy-scanner:
- eraser:
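Since the proposed schema above passes the scanner configuration through as a single opaque YAML string, here is a minimal sketch of how the default trivy-scanner could own that schema, assuming the string is handed to the scanner container in an environment variable (the variable name and struct fields are illustrative only):

```go
package main

import (
	"log"
	"os"

	"sigs.k8s.io/yaml"
)

// TrivyScannerConfig is the scanner-defined schema; the manager never parses it.
type TrivyScannerConfig struct {
	CacheDir           string `json:"cacheDir"`
	DBRepo             string `json:"dbRepo"`
	DeleteFailedImages bool   `json:"deleteFailedImages"`
	Vulnerabilities    struct {
		IgnoreUnfixed bool     `json:"ignoreUnfixed"`
		Types         []string `json:"types"`
	} `json:"vulnerabilities"`
	SecurityChecks []string `json:"securityChecks"`
	Severities     []string `json:"severities"`
}

func main() {
	cfg := TrivyScannerConfig{CacheDir: "/var/lib/trivy"} // scanner-defined defaults
	if raw := os.Getenv("SCANNER_CONFIG"); raw != "" {
		// The scanner validates its own config; a custom scanner image can
		// define a completely different schema here.
		if err := yaml.Unmarshal([]byte(raw), &cfg); err != nil {
			log.Fatalf("invalid scanner config: %v", err)
		}
	}
	log.Printf("scanning with severities %v", cfg.Severities)
}
```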
Describe the solution you'd like
The management of CLI args is getting complex and cluttered. A configmap should be implemented to replace some of the CLI flags for the manager, for example --collector-pull-policy, --collector-arg, etc. Over time this will get very cluttered, and managing them through the helm chart is getting more and more difficult. Because options for the scanner/eraser/collector containers are specified in code (and thus harder to template), they are currently passed in as CLI flags on the manager.
A well-structured and well-documented configmap should be implemented. The manager should read this configuration at boot, but it will also have to respond to updates if the user changes the configmap while the manager is already deployed.
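One possible shape for the "respond to updates" requirement, sketched as a small controller-runtime controller that watches only the eraser ConfigMap and hands the new data to a callback; the namespace, name, and callback are placeholders, not a settled design:

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/builder"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// ConfigReconciler re-reads the eraser configmap whenever it changes, so the
// rest of the manager can pick up new values without a restart.
type ConfigReconciler struct {
	client.Client
	OnChange func(data map[string]string)
}

func (r *ConfigReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cm corev1.ConfigMap
	if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	r.OnChange(cm.Data)
	return ctrl.Result{}, nil
}

func (r *ConfigReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// Only react to the single configmap that holds the eraser configuration.
	onlyEraserConfig := predicate.NewPredicateFuncs(func(obj client.Object) bool {
		return obj.GetNamespace() == "eraser-system" && obj.GetName() == "eraser-config"
	})
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}, builder.WithPredicates(onlyEraserConfig)).
		Complete(r)
}
```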