♻️ Fix compressor readiness shutdown_duration / Fix cassandra … #376
Conversation
…consistency value
Signed-off-by: Rintaro Okamura <rintaro.okamura@gmail.com>
charts/vald/values.yaml (outdated)

```yaml
readiness:
  server:
    http:
      shutdown_duration: 1m
```
I think readiness should shut down immediately. This setting increases the time it takes for the pod to be disconnected from the Kubernetes Service DNS.
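For illustration, the suggested change would look something like this against the snippet above (a sketch; `0s` is implied by "immediately" and is not yet a value in this PR):

```yaml
readiness:
  server:
    http:
      # Shut the readiness server down immediately, so the pod drops out of
      # the Kubernetes Service endpoints (and DNS) without extra delay.
      shutdown_duration: 0s
```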
Could you describe in more detail why we need a 1-minute shutdown duration for the readiness server?
Hmm. Which field keeps the compressor pods alive longer without increasing the time until they are disconnected?
Ah, I see. Readiness should shut down immediately, so it should be 0s; the liveness duration is what keeps the pod alive.
But as far as I know, we don't use a liveness probe for the agent and compressor, hmm... 🤔
Aha, okay, understood.
Since we now have a backup strategy in the compressor's post-stop phase, it may be okay for the pod to be terminated suddenly by an unexpected failure of the liveness server. 🤔
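As a rough illustration of the idea (not the actual chart manifest; the hook command, port, and endpoint are invented placeholders), a Kubernetes preStop hook is where this kind of final backup would typically be triggered:

```yaml
# Hypothetical sketch only; the backup command and endpoint are placeholders.
lifecycle:
  preStop:
    exec:
      # Run a final backup before the container receives SIGTERM.
      command: ["/bin/sh", "-c", "curl -s http://localhost:3001/backup"]
```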
I think so. Maybe we need another phase for shutting down the process in the internal server package.
If we could set pre- and post-processing hooks for each server, that would be useful.
I see. For now, I'm going to enable liveness and set the readiness shutdown_duration to zero. Thanks!
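A sketch of what enabling liveness might look like in values.yaml (the `enabled` flag and key paths are assumptions based on the readiness snippet above; readiness shutdown_duration would be `0s` as sketched earlier):

```yaml
liveness:
  enabled: true              # assumed flag name; turn the liveness server back on
  server:
    http:
      shutdown_duration: 2m  # placeholder value; the exact duration is discussed just below
```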
Sounds good to me.
Please set the liveness shutdown duration to more than 1 minute.
We also need to think about a PodDisruptionBudget.
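For reference, a minimal PodDisruptionBudget might look like the sketch below (the name and labels are hypothetical, not taken from the Vald chart):

```yaml
apiVersion: policy/v1beta1          # policy/v1 on Kubernetes 1.21+
kind: PodDisruptionBudget
metadata:
  name: vald-manager-compressor     # hypothetical name
spec:
  maxUnavailable: 1                 # evict at most one compressor pod at a time
  selector:
    matchLabels:
      app: vald-manager-compressor  # hypothetical label
```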
Revised. Please check it. 😄
Signed-off-by: Rintaro Okamura <rintaro.okamura@gmail.com>
LGTM
…serial_consistency value
Signed-off-by: Rintaro Okamura <rintaro.okamura@gmail.com>
Description:
I fixed several fields in values.yaml: the compressor readiness shutdown_duration and the cassandra serial_consistency value.
Related Issue:
None.
How Has This Been Tested?:
None.
Environment:
Types of changes:
Changes to Core Features:
Checklist: