[spark][bugfix] Fix Ray on Spark running on layered virtualenv python environment (ray-project#32996)

Suppose we create a virtualenv python environment layered on top of an original python environment,
Ray is installed in the original python environment,
and we run setup_ray_cluster from the virtualenv python environment.
Then ray_exec_path = os.path.join(os.path.dirname(sys.executable), "ray") yields an invalid path,
because the ray script is installed in the original python environment's bin directory, not the virtualenv's.
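The failure mode can be reproduced without Ray itself. The sketch below simulates the layered layout with temporary directories; all paths and the `ray` stub are hypothetical stand-ins, not Ray's actual install locations:

```python
import os
import shutil
import tempfile

# Hypothetical layered layout: a base env where `pip install ray` ran,
# and a virtualenv that only contains the interpreter.
root = tempfile.mkdtemp()
base_bin = os.path.join(root, "base", "bin")   # base env: ray CLI lives here
venv_bin = os.path.join(root, "venv", "bin")   # virtualenv: interpreter only
os.makedirs(base_bin)
os.makedirs(venv_bin)

# The `ray` launcher script exists only in the base environment.
ray_script = os.path.join(base_bin, "ray")
with open(ray_script, "w") as f:
    f.write("#!/bin/sh\necho ray\n")
os.chmod(ray_script, 0o755)

# Old logic: join the interpreter's directory with "ray".
executable = os.path.join(venv_bin, "python")  # stand-in for sys.executable
old_path = os.path.join(os.path.dirname(executable), "ray")
print(os.path.exists(old_path))                # False: <venv>/bin/ray is missing

# New logic: let PATH resolution find the bare command name, which is what
# subprocess.Popen(["ray", ...]) does; the base env's bin dir is still on PATH.
found = shutil.which("ray", path=os.pathsep.join([venv_bin, base_bin]))
print(found == ray_script)                     # True
```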

Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
Signed-off-by: Jack He <jackhe2345@gmail.com>
WeichenXu123 authored and ProjectsByJackHe committed May 4, 2023
1 parent 05c6fe6 commit 0293a40
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions python/ray/util/spark/start_ray_node.py
@@ -42,7 +42,7 @@

temp_dir = os.path.normpath(temp_dir)

-ray_exec_path = os.path.join(os.path.dirname(sys.executable), "ray")
+ray_cli_cmd = "ray"

lock_file = temp_dir + ".lock"
lock_fd = os.open(lock_file, os.O_RDWR | os.O_CREAT | os.O_TRUNC)
@@ -51,7 +51,7 @@
# same temp directory, adding a shared lock representing current ray node is
# using the temp directory.
fcntl.flock(lock_fd, fcntl.LOCK_SH)
-process = subprocess.Popen([ray_exec_path, "start", *arg_list], text=True)
+process = subprocess.Popen([ray_cli_cmd, "start", *arg_list], text=True)

def try_clean_temp_dir_at_exit():
try:
