Make sure Airflow gets the right job status after log stream interruption when scheduling Spark jobs on k8s #8964
Conversation
Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contribution Guide (https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst)
Now I am trying the Kubernetes client invoked by Airflow instead of a shell command; working on it.
This PR is closed and replaced by a new pull request: #9081
Description
I am using Airflow's SparkSubmitOperator to schedule my Spark jobs on a Kubernetes cluster.
For some reason, Kubernetes often throws a 'too old resource version' exception, which interrupts the Spark watcher; Airflow then loses the log stream and can never read the driver's 'Exit Code'. Because Airflow marks the job failed as soon as the log stream is lost, a job that is still running gets reported as failed.
This PR adds a simple retry mechanism: when the log stream is interrupted, Airflow tries to get the 'Exit Code' by running 'kubectl describe pod xxxx-driver'.
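A minimal sketch of that fallback idea, assuming the driver pod name is known; the helper name and the output parsing are illustrative, not the PR's actual code:

```python
import re
import subprocess
from typing import Optional

def get_exit_code_from_describe(pod_name: str, namespace: str = "default") -> Optional[int]:
    """Fallback when the log stream is lost: shell out to kubectl and
    parse the terminated container's 'Exit Code' from the description."""
    output = subprocess.check_output(
        ["kubectl", "describe", "pod", pod_name, "-n", namespace],
        text=True,
    )
    # kubectl describe shows e.g. "Exit Code:  0" for a terminated container
    match = re.search(r"Exit Code:\s*(\d+)", output)
    return int(match.group(1)) if match else None
```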
Update
There is no equivalent of the 'describe pod' command in the Kubernetes Python client API, so 'read_namespaced_pod()' is called to get the Spark driver pod's status instead.
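A hedged sketch of the updated approach using the official kubernetes Python client; the pod name, namespace, and helper name are assumptions for illustration:

```python
from typing import Optional

from kubernetes import client, config

def get_driver_exit_code(pod_name: str, namespace: str = "default") -> Optional[int]:
    """Read the Spark driver pod's status directly instead of shelling out."""
    config.load_incluster_config()  # use config.load_kube_config() outside the cluster
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
    for container in pod.status.container_statuses or []:
        terminated = container.state.terminated
        if terminated is not None:
            return terminated.exit_code  # 0 means the Spark job succeeded
    return None  # driver container has not terminated yet
```

With this, a lost log stream no longer has to force the task into a failed state: Airflow can poll the pod until a terminated state with an exit code appears.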
Target GitHub issue
#8963
Make sure to mark the boxes below before creating PR: [x]
- In case of fundamental code change, Airflow Improvement Proposal (AIP) is needed.
- In case of a new dependency, check compliance with the ASF 3rd Party License Policy.
- In case of backwards incompatible changes please leave a note in UPDATING.md.
- Read the Pull Request Guidelines for more information.