fix(torch): processing duplications #3
Conversation
Signed-off-by: Jose Ramon Mañes <jose@celestia.org>
…tate Signed-off-by: Jose Ramon Mañes <jose@celestia.org>
Another one and we are good to go 🔥
pkg/k8s/statefulsets.go (outdated)

```go
for event := range watcher.ResultChan() {
	// Check if the event object is of type *v1.StatefulSet
	if statefulSet, ok := event.Object.(*v1.StatefulSet); ok {
		//log.Info("StatefulSet containers: ", statefulSet.Spec.Template.Spec.Containers)

		// check if the node is DA, if so, send it to the queue to generate the multi address
		if strings.HasPrefix(statefulSet.Name, "da") {
			// Check if the StatefulSet is valid based on the conditions
			if isStatefulSetValid(statefulSet) {
				// Perform necessary actions, such as adding the node to the Redis queue
```
design-wise, you can do the following (rough sketch after the list):
- If not ok, fail fast with an error and exit the for loop.
- If ok, move the second check outside the boundaries of the first if (the one with the ok check), instead of nesting it inside.
- If you want to check all the objects in the channel, you would need to store the error messages one by one and only then exit the for loop. As far as I understood the code, you are not intending to do that.
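Something like this shape, for illustration (the loop and the DA check are from the diff above; `watchDAStatefulSets` and `enqueueDANode` are hypothetical names standing in for the PR's actual function and Redis-queue step, and `isStatefulSetValid` is assumed to live in the same package):

```go
package k8s

import (
	"fmt"
	"strings"

	v1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/watch"
)

// watchDAStatefulSets sketches the fail-fast shape suggested above.
func watchDAStatefulSets(watcher watch.Interface) error {
	for event := range watcher.ResultChan() {
		statefulSet, ok := event.Object.(*v1.StatefulSet)
		if !ok {
			// Not ok: fail fast with an error and exit the loop.
			return fmt.Errorf("unexpected event object type: %T", event.Object)
		}
		// Ok: the DA-node check now sits outside the type-assertion branch.
		if strings.HasPrefix(statefulSet.Name, "da") && isStatefulSetValid(statefulSet) {
			enqueueDANode(statefulSet) // e.g. add the node to the Redis queue
		}
	}
	return nil
}

// enqueueDANode is a hypothetical stand-in for the Redis-queue step in the PR.
func enqueueDANode(sts *v1.StatefulSet) { _ = sts }
```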
I've updated this part of the code. Regarding point 3: we will only receive one event per STS at a time, which means that if this STS has any issues, we return the error immediately. wdyt?
Signed-off-by: Jose Ramon Mañes <jose@celestia.org>
Signed-off-by: Jose Ramon Mañes <jose@celestia.org>
LGTM
Now you see how much neater it looks!
Wonderful job 💪
hello team!
small one, it fixes duplicates being added to the queue: that happened because the watcher sends an event for every container in the pod, but we only need it once. roughly the idea, sketched below.
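A rough sketch of the dedup idea only (same imports as the sketch above; `dedupWatchLoop` is a hypothetical name, and the PR's actual fix may track already-seen StatefulSets differently):

```go
// dedupWatchLoop processes each StatefulSet only once, even though the
// watcher emits an event per container in the pod.
func dedupWatchLoop(watcher watch.Interface) error {
	processed := make(map[string]struct{})
	for event := range watcher.ResultChan() {
		statefulSet, ok := event.Object.(*v1.StatefulSet)
		if !ok {
			return fmt.Errorf("unexpected event object type: %T", event.Object)
		}
		if _, seen := processed[statefulSet.Name]; seen {
			continue // duplicate event for this StatefulSet, skip it
		}
		processed[statefulSet.Name] = struct{}{}
		// ... enqueue the node once for multi-address generation ...
	}
	return nil
}
```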
cheers! 🚀
closes: https://github.com/celestiaorg/devops/issues/577
cc: @celestiaorg/devops