
LimitRange Memory and CPU Request Distribution does not handle sidecars correctly #4978

Closed
skaegi opened this issue Jun 15, 2022 · 1 comment · Fixed by #4996
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@skaegi
Contributor

skaegi commented Jun 15, 2022

The current controller logic for managing "request" resources when a LimitRange is present takes the "defaultRequest" resource value and divides it by the number of containers in the Pod. This is close, but not quite correct: "sidecars" run in parallel with the steps, so they should not be part of this calculation.

For example:

```yaml
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "my-resource-limits"
spec:
  limits:
    - type: "Container"
      defaultRequest:
        memory: "2Gi"
        cpu: "100m"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: memory-boom-taskrun
spec:
  taskSpec:
    steps:
    - name: memory-boom1
      image: polinux/stress
      script: |
        stress --vm 1 --vm-bytes 20M --vm-hang 10
    - name: memory-boom2
      image: polinux/stress
      script: |
        stress --vm 1 --vm-bytes 20M --vm-hang 10
    sidecars:
    - name: memory-boom-sc
      image: polinux/stress
      command: ["stress"]
      args: ["--vm", "1", "--vm-bytes", "10M", "--vm-hang", "10"]
```

Expected Behavior

```
Containers:
  step-memory-boom1:
    Requests:
      cpu:                50m
      memory:             1Gi
  step-memory-boom2:
    Requests:
      cpu:                50m
      memory:             1Gi
  sidecar-memory-boom-sc:
    Requests:
      cpu:                100m
      memory:             2Gi
```

Actual Behavior

```
Containers:
  step-memory-boom1:
    Requests:
      cpu:                33m
      memory:             715827882
  step-memory-boom2:
    Requests:
      cpu:                33m
      memory:             715827882
  sidecar-memory-boom-sc:
    Requests:
      cpu:                33m
      memory:             715827882
```
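
The arithmetic behind both outputs can be sketched in plain Go (only the byte math is shown here; this is not the actual transformer code):

```go
package main

import "fmt"

func main() {
	// 2Gi defaultRequest from the LimitRange, in bytes.
	const defaultRequest int64 = 2 * 1024 * 1024 * 1024 // 2147483648

	steps, sidecars := 2, 1

	// Actual behavior: the default is split across ALL containers,
	// sidecars included, giving each 2Gi / 3.
	perContainer := defaultRequest / int64(steps+sidecars)
	fmt.Println(perContainer) // 715827882

	// Expected behavior: steps run sequentially, so only they share
	// the default; each gets 2Gi / 2 = 1Gi.
	perStep := defaultRequest / int64(steps)
	fmt.Println(perStep) // 1073741824 (1Gi)

	// Sidecars run in parallel with the steps, so each should keep the
	// full 2Gi / 100m default.
	fmt.Println(defaultRequest) // 2147483648 (2Gi)
}
```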
@skaegi skaegi added the kind/bug Categorizes issue or PR as related to a bug. label Jun 15, 2022
@skaegi skaegi changed the title Resource Memory and CPU Request Distribution does not handle sidecars correctly LimitRange Memory and CPU Request Distribution does not handle sidecars correctly Jun 15, 2022
@skaegi
Contributor Author

skaegi commented Jun 15, 2022

Looking more carefully at https://github.com/tektoncd/pipeline/blob/main/pkg/internal/limitrange/transformer.go#L35 -- we're doing more than we need to here. We should only be transforming "Step" containers, and should only be spreading "requests". For "limits" on Step containers, and for sidecars and the rest of the containers, we should just let LimitRanger do its magic.
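
A minimal sketch of that narrowed transformer (the function name `spreadRequests` and the `step-` name-prefix check are illustrative assumptions, not the actual code in transformer.go):

```go
package main

import (
	"fmt"
	"strings"
)

// spreadRequests divides a LimitRange defaultRequest (in bytes) only among
// step containers. Sidecars and all limits are deliberately left untouched,
// so the cluster's LimitRanger admission controller applies its defaults.
func spreadRequests(containerNames []string, defaultRequest int64) map[string]int64 {
	var steps []string
	for _, name := range containerNames {
		if strings.HasPrefix(name, "step-") {
			steps = append(steps, name)
		}
	}
	out := map[string]int64{}
	if len(steps) == 0 {
		return out
	}
	perStep := defaultRequest / int64(len(steps))
	for _, name := range steps {
		out[name] = perStep
	}
	return out
}

func main() {
	names := []string{"step-memory-boom1", "step-memory-boom2", "sidecar-memory-boom-sc"}
	reqs := spreadRequests(names, 2<<30) // 2Gi default
	fmt.Println(reqs["step-memory-boom1"]) // 1073741824 (1Gi)
	_, touched := reqs["sidecar-memory-boom-sc"]
	fmt.Println(touched) // false: sidecar is left for LimitRanger
}
```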

I'll do a PR here to show what I mean.
