
Make sure airflow gets the right job status after the log stream is interrupted when scheduling spark jobs on k8s #8964

Closed
wants to merge 14 commits

Conversation


@dawany dawany commented May 22, 2020

#8963

Description

I am using the airflow SparkSubmitOperator to schedule my spark jobs on a kubernetes cluster.

But for some reason, kubernetes often throws a 'too old resource version' exception, which interrupts the spark watcher. Airflow then loses the log stream and can never see the 'Exit Code', so it marks the job as failed even though the job is still running.

This PR adds a simple retry mechanism: when the log stream is interrupted, airflow tries to get the 'Exit Code' via the 'kubectl describe pod xxxx-driver' command, as sketched below.
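
A minimal sketch of what that shell-based fallback could look like, assuming a hypothetical helper that shells out to kubectl and scrapes the 'Exit Code' field from the describe output (the pod name, namespace, and helper name below are placeholders, not the exact code from this PR):

```python
# Sketch only: fall back to `kubectl describe pod` when the spark-submit
# log stream dies before the driver's exit code is seen.
import re
import subprocess


def get_driver_exit_code_via_kubectl(pod_name, namespace="default"):
    """Return the driver container's exit code, or None if it has not terminated yet."""
    out = subprocess.check_output(
        ["kubectl", "describe", "pod", pod_name, "-n", namespace],
        text=True,
    )
    # `kubectl describe` prints a line like "Exit Code:    0" once a container terminates.
    match = re.search(r"Exit Code:\s+(\d+)", out)
    return int(match.group(1)) if match else None


# Example (hypothetical pod name): poll until the driver reports an exit code.
# exit_code = get_driver_exit_code_via_kubectl("my-spark-app-driver", "spark-jobs")
# job_succeeded = exit_code == 0
```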


Update

There is no equivalent of the 'kubectl describe pod' command in the kubernetes python client API, so 'read_namespaced_pod()' is called instead to get the spark driver pod status.
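
A rough sketch of that updated approach with the official kubernetes python client: read_namespaced_pod() returns a V1Pod whose status carries both the pod phase and the container termination states. The pod name, namespace, and helper function here are illustrative and assume in-cluster configuration; this is not the exact code added in the PR.

```python
# Sketch only: query the spark driver pod directly when the log stream is
# interrupted, using the kubernetes python client instead of kubectl.
from kubernetes import client, config


def get_driver_status(pod_name, namespace="default"):
    """Return (phase, exit_code); exit_code is None while the driver is still running."""
    config.load_incluster_config()  # or config.load_kube_config() outside the cluster
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod(name=pod_name, namespace=namespace)

    exit_code = None
    for status in pod.status.container_statuses or []:
        terminated = status.state.terminated
        if terminated is not None:
            exit_code = terminated.exit_code
            break

    # phase is one of Pending / Running / Succeeded / Failed / Unknown
    return pod.status.phase, exit_code


# Example (hypothetical pod name): treat the task as successful only if the
# driver pod succeeded and its container exited with code 0.
# phase, exit_code = get_driver_status("my-spark-app-driver", "spark-jobs")
# job_succeeded = phase == "Succeeded" and exit_code == 0
```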

Target Github ISSUE

#8963


Make sure to mark the boxes below before creating PR: [x]

  • Description above provides context of the change
  • Unit tests coverage for changes (not needed for documentation changes)
  • Target Github ISSUE in description if exists
  • Commits follow "How to write a good git commit message"
  • Relevant documentation is updated including usage instructions.
  • I will engage committers as explained in Contribution Workflow Example.

In case of fundamental code change, Airflow Improvement Proposal (AIP) is needed.
In case of a new dependency, check compliance with the ASF 3rd Party License Policy.
In case of backwards incompatible changes please leave a note in UPDATING.md.
Read the Pull Request Guidelines for more information.


boring-cyborg bot commented May 22, 2020

Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contribution Guide (https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst)
Here are some useful points:

  • Pay attention to the quality of your code (flake8, pylint and type annotations). Our pre-commits will help you with that.
  • In case of a new feature add useful documentation (in docstrings or in docs/ directory). Adding a new operator? Check this short guide. Consider adding an example DAG that shows how users should use it.
  • Consider using Breeze environment for testing locally, it’s a heavy docker but it ships with a working Airflow and a lot of integrations.
  • Be patient and persistent. It might take some time to get a review or get the final approval from Committers.
  • Be sure to read the Airflow Coding style.
    Apache Airflow is a community-driven project and together we are making it better 🚀.
    In case of doubts contact the developers at:
    Mailing List: dev@airflow.apache.org
    Slack: https://apache-airflow-slack.herokuapp.com/

@dawany dawany changed the title [workaround]add a method:get k8s 'Exit Code' after k8s log stream int… Make sure airflow gets the right job status after the log stream is interrupted when scheduling spark jobs on k8s May 22, 2020
@dawany dawany marked this pull request as draft May 23, 2020 14:23

dawany commented May 25, 2020

Now I am trying the kubernetes client invoked by airflow instead of the shell cmd; working on it.

@dawany dawany marked this pull request as ready for review May 29, 2020 06:16
@dawany dawany closed this May 31, 2020

dawany commented May 31, 2020

This PR is closed and replaced by a new pull request #9081
