Describe the bug
A clear and concise description of what the bug is.
Version of Helm and Kubernetes:
Helm Version:
v3.3.4
Kubernetes Version:
Server Version: v1.21.7
Which version of the chart: 13.2.0
What happened:
Memory usage keeps increasing, and the pod gets restarted when it reaches the limit.
What you expected to happen:
Memory usage should not keep increasing.
How to reproduce it (as minimally and precisely as possible):
values.yaml (only values which differ from the defaults):

```yaml
resources:
  limits:
    cpu: 1000m
    memory: 500Mi
  requests:
    cpu: 100m
    memory: 200Mi
env:
  RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR: 0.9
elasticsearch:
  auth:
    enabled: true
    user: "user"
    password: "password"
  hosts: ["es-master.default:9200"]
  scheme: "https"
  sslVerify: false
  requestTimeout: "10s"
  logLevel: "info"
  suppressTypeName: true
configMaps:
  useDefaults:
    kubernetesMetadataFilterConfig:
      watch: false
extraConfigMaps:
  containers.site.conf: |-
    # ignore containers labeled app_kubernetes_io/name: "fluentd-elasticsearch"
    <filter kubernetes.**>
      @type grep
      <exclude>
        key $.kubernetes.labels.app_kubernetes_io/name
        pattern fluentd-elasticsearch
      </exclude>
    </filter>
```

```shell
helm upgrade --namespace default --install fluentd kokuwa/fluentd-elasticsearch -f values.yaml --version 13.2.0
```
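Not part of the original report, but a sketch of a mitigation sometimes used when fluentd memory grows without bound: explicitly capping the output plugin's buffer so it cannot grow past a fixed size. The directive names below are standard fluentd `<buffer>` parameters; exactly where this fragment would be wired into the chart (e.g. via its output configuration) is an assumption.

```
# Hypothetical buffer cap (not from the report); values are illustrative.
<buffer>
  @type file                          # file-backed buffer instead of memory
  total_limit_size 256MB              # hard cap on total buffered data
  chunk_limit_size 8MB                # cap per chunk
  overflow_action drop_oldest_chunk   # shed load instead of growing
</buffer>
```

If the growth persists with a file buffer and a hard cap, the leak is more likely in the Ruby heap itself than in buffered log data.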
Anything else we need to know:
Perhaps this is related? fluent/fluentd#2236 (comment)