Sends CWL tool logging to calrissian debug / stderr #59
Conversation
- Removed additional thread because it should not be necessary
- Clarify state_is_running to only return true if running, and add state_is_waiting (see the sketch below)
- Starts following logs once pod goes into running state
- Otherwise test actually writes an empty filename.out file
- Timestamps will be available from the calrissian pod if needed, but are not written to the logging stderr stream
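For illustration, a minimal sketch of what those state helpers might look like against the kubernetes Python client (only the names state_is_running and state_is_waiting come from this PR; the function bodies below are an assumption):

```python
# Sketch only: a V1ContainerState has exactly one of .waiting, .running,
# or .terminated populated, so each helper just checks its own field.

def state_is_waiting(state):
    """True only while the container has not started yet."""
    return state.waiting is not None


def state_is_running(state):
    """True only once the container is actually running."""
    return state.running is not None
```

In the PR these are methods on KubernetesClient, but the checks themselves reduce to which of the V1ContainerState fields is set.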
    continue
elif self.state_is_running(status.state):
    # Can only get logs once container is running
    self.follow_logs()  # This will not return until pod completes
Would this introduce a starvation scenario? If, for example, we are running multiple pods and we happen to be reading logs from a pod that takes a really long time to complete, we will be busy streaming logs when we could be starting subsequent pods.
Not sure of a good way to avoid this outside of only sending logs back once a pod is completed.
No. A KubernetesClient object only manages a single pod. So in your example of running multiple concurrent steps, there are multiple instances of KubernetesClient on different threads tracking those pods.
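To make that concrete, here is a rough sketch of that threading model. KubernetesClient and follow_logs come from calrissian itself, but the module path and the submit/wait method names below are assumptions for illustration, not the actual calrissian API:

```python
import threading

from calrissian.k8s import KubernetesClient  # module path is an assumption

pod_bodies = []  # placeholder: one pod body per workflow step


def run_step(pod_body, namespace):
    # One KubernetesClient per pod: blocking in follow_logs() only ties up
    # this thread, so other steps keep making progress on their own threads.
    client = KubernetesClient()
    client.submit(pod_body, namespace)  # hypothetical submit method
    client.wait_for_completion()        # would call follow_logs() once running


threads = [threading.Thread(target=run_step, args=(body, 'default'))
           for body in pod_bodies]
for t in threads:
    t.start()
for t in threads:
    t.join()
```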
LGTM
I'm seeing an issue here where the log gets truncated. Not sure if it's a connection timeout or what. Seems to happen after 60 minutes. I'll merge and release this but follow up with an issue.
- Didn't fix the issue
This might be due to log rotation:
Thanks for the tip @johnbradley!
Reads the pod log via read_namespaced_pod_log() and sends each line to log.debug(), prefixed with the name of the pod in square brackets. Fixes #17
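For reference, a minimal sketch of that log-following behavior using the kubernetes Python client. Only read_namespaced_pod_log(), log.debug(), and the bracketed pod-name prefix come from this PR; the Watch-based streaming, the logger name, and the rest are assumptions:

```python
import logging

from kubernetes import client, config, watch

log = logging.getLogger('calrissian.k8s')  # logger name is an assumption


def follow_pod_logs(core_api, pod_name, namespace):
    # Stream the pod's log and echo each line at debug level, prefixed
    # with the pod name in square brackets, as described above.
    for line in watch.Watch().stream(core_api.read_namespaced_pod_log,
                                     name=pod_name, namespace=namespace):
        log.debug('[{}] {}'.format(pod_name, line))
    # The stream can end early (e.g. a connection timeout or kubelet log
    # rotation), which would truncate output as noted in the thread above;
    # a more robust version might reconnect and resume with since_seconds.


if __name__ == '__main__':
    config.load_kube_config()
    follow_pod_logs(client.CoreV1Api(), 'example-pod', 'default')
```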