
[spark] Kill hanging spark jobs when databricks notebook is detached #42032

Merged
2 commits merged into ray-project:master on Dec 23, 2023

Conversation

@WeichenXu123 (Contributor) commented on Dec 20, 2023

Why are these changes needed?

On Databricks runtime, when a user starts a Ray-on-Spark cluster in a notebook and the notebook is later detached, the Ray head node is killed, but in some cases we observe that the Ray worker nodes keep running. This leaves the background Spark job hanging and unable to release its resources.

This PR addresses that issue: when the Databricks notebook is detached, we make sure that all Spark jobs created in that notebook REPL are killed.
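For illustration, here is a minimal sketch of the mechanism, not the PR's actual implementation: the hook class name and the on_notebook_detached callback are hypothetical, while cancelJobGroup is the standard PySpark SparkContext API.

    # Minimal sketch (hypothetical names): track the Spark job groups created
    # from a notebook REPL and cancel them all when the notebook is detached,
    # so no background Spark job keeps holding cluster resources.
    class NotebookSparkJobCleanupHook:
        def __init__(self, spark):
            self._spark = spark
            self._job_group_ids = set()

        def on_spark_job_created(self, job_group_id):
            # Remember every Spark job group launched from this notebook.
            self._job_group_ids.add(job_group_id)

        def on_notebook_detached(self):
            # Cancel any Spark jobs still running in the tracked job groups.
            for job_group_id in self._job_group_ids:
                self._spark.sparkContext.cancelJobGroup(job_group_id)
            self._job_group_ids.clear()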

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: Weichen Xu <weichen.xu@databricks.com>

def on_spark_job_created(self, job_group_id):
    db_api_entry = get_db_entry_point()
    db_api_entry.registerBackgroundSparkJobGroup(job_group_id)
@WeichenXu123 (Contributor, Author) commented:

This is for registering spark job that will be killed when databricks notebook is detached.
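For context, a hypothetical sketch of where such a hook could be invoked; the function name and hook wiring below are illustrative, not the PR's code, while setJobGroup is the standard PySpark SparkContext API.

    import uuid

    def start_background_ray_worker_job(spark, hook):
        # Assign a dedicated job group to the long-running Spark job that
        # hosts the Ray worker nodes, then register it with the hook so it
        # can be killed when the notebook is detached.
        job_group_id = f"ray-on-spark-{uuid.uuid4()}"
        spark.sparkContext.setJobGroup(job_group_id, "Ray-on-Spark worker nodes")
        hook.on_spark_job_created(job_group_id)
        # ... submit the background Spark job under this job group ...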

@WeichenXu123 changed the title from "[spark] Clean hanging spark jobs when databricks notebook is detached" to "[spark] Kill hanging spark jobs when databricks notebook is detached" on Dec 20, 2023
@WeichenXu123 (Contributor, Author) commented:

CC @jjyao

@jjyao self-assigned this on Dec 22, 2023
@jjyao (Collaborator) commented on Dec 22, 2023

Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
@jjyao merged commit f3ef6fb into ray-project:master on Dec 23, 2023
10 checks passed
vickytsang pushed a commit to ROCm/ray that referenced this pull request Jan 12, 2024
…ay-project#42032)

Signed-off-by: Weichen Xu <weichen.xu@databricks.com>