
[<RAY Core|Train>] Autoscaler error after several hours running #48834

Closed
lxmanutd opened this issue Nov 21, 2024 · 4 comments
Labels
bug: Something that is supposed to be working; but isn't
core: Issues that should be addressed in Ray Core
kuberay: Issues for the Ray/Kuberay integration that are tracked on the Ray side
P1: Issue that should be fixed within a few weeks

@lxmanutd

What happened + What you expected to happen

I am using the Ray autoscaler with 16 workers, each with 1 NPU. The Ray head errored after the autoscaler had been running for about 23 hours. The logs are as follows:
The autoscaler failed with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/urllib3/connection.py", line 198, in _new_conn
sock = connection.create_connection(
File "/opt/conda/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/opt/conda/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
File "/opt/conda/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
File "/opt/conda/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "/opt/conda/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
conn.connect()
File "/opt/conda/lib/python3.10/site-packages/urllib3/connection.py", line 616, in connect
self.sock = sock = self._new_conn()
File "/opt/conda/lib/python3.10/site-packages/urllib3/connection.py", line 207, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0xffff0404f5e0>, 'Connection to kubernetes.default timed out. (connect timeout=None)')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/requests/adapters.py", line 667, in send
resp = conn.urlopen(
File "/opt/conda/lib/python3.10/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
File "/opt/conda/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='kubernetes.default', port=443): Max retries exceeded with url: /api/v1/namespaces/ai4s-station/pods/raycluster-multi-node-multi-gpu-head-gkqtq (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0xffff0404f5e0>, 'Connection to kubernetes.default timed out. (connect timeout=None)'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/_private/monitor.py", line 584, in run
self._run()
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/_private/monitor.py", line 389, in _run
self.autoscaler.update()
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/_private/autoscaler.py", line 384, in update
raise e
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/_private/autoscaler.py", line 377, in update
self._update()
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/_private/autoscaler.py", line 400, in _update
self.non_terminated_nodes = NonTerminatedNodes(self.provider)
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/_private/autoscaler.py", line 124, in init
self.all_node_ids = provider.non_terminated_nodes({})
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/batching_node_provider.py", line 162, in non_terminated_nodes
self.node_data_dict = self.get_node_data()
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/_private/kuberay/node_provider.py", line 338, in get_node_data
resource_version = self._get_pods_resource_version()
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/_private/kuberay/node_provider.py", line 453, in _get_pods_resource_version
pod_resp = self._get(f"pods/{RAY_HEAD_POD_NAME}")
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/_private/kuberay/node_provider.py", line 519, in _get
return self.k8s_api_client.get(path)
File "/opt/conda/lib/python3.10/site-packages/ray/autoscaler/_private/kuberay/node_provider.py", line 271, in get
result = requests.get(url, headers=self._headers, verify=self._verify)
File "/opt/conda/lib/python3.10/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/requests/adapters.py", line 688, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='kubernetes.default', port=443): Max retries exceeded with url: /api/v1/namespaces/ai4s-station/pods/raycluster-multi-node-multi-gpu-head-gkqtq (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0xffff0404f5e0>, 'Connection to kubernetes.default timed out. (connect timeout=None)'))

Head and small-group pod status:
raycluster-multi-node-multi-gpu-head-gkqtq 1/2 Error 0 41h
raycluster-multi-node-multi-gpu-small-group-worker-2b7jp 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-45xzn 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-8l8b4 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-d4v8l 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-grztc 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-ht5ph 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-j7hv4 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-l9rrh 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-phlxg 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-qr5ks 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-s9x46 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-stvgb 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-vqftp 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-w9mtr 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-xkcch 1/1 Running 0 23h
raycluster-multi-node-multi-gpu-small-group-worker-xvk5m 1/1 Running 0 23h
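
For reference, the failing call is essentially a plain HTTPS GET against the in-cluster API server. The sketch below mirrors the call path shown in the traceback (standard service-account token/CA paths; pod URL copied from the error message) and is a quick way to check from the head pod whether kubernetes.default is reachable at all:

```python
# Minimal sketch of the request the KubeRay node provider makes (see the traceback above).
# Paths are the standard in-cluster service-account mounts; the URL is taken from the
# error message. Run from the head pod to check API-server reachability.
import requests

TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
URL = (
    "https://kubernetes.default/api/v1/namespaces/ai4s-station/pods/"
    "raycluster-multi-node-multi-gpu-head-gkqtq"
)

with open(TOKEN_PATH) as f:
    token = f.read().strip()

resp = requests.get(
    URL,
    headers={"Authorization": f"Bearer {token}"},
    verify=CA_PATH,
    timeout=10,  # fail fast instead of hanging like the autoscaler did
)
print(resp.status_code)
```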

Versions / Dependencies

Ray, version 2.37.0
Platform: Linux-4.19.90-2107.6.0.0192.8.oe1.bclinux.aarch64-aarch64-with-glibc2.35
Python version: 3.10.14
PyTorch version: 2.1.0 (NPU)
Transformers version: 4.44.2
NPU type: Ascend910B2
CANN version: 8.0.RC3.alpha001

Reproduction script

```python
import os
from typing import Dict

import torch
from filelock import FileLock
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.transforms import Normalize, ToTensor
from tqdm import tqdm

import ray.train
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer, TorchConfig


def get_dataloaders(batch_size):
    # Transform to normalize the input images
    transform = transforms.Compose([ToTensor(), Normalize((0.5,), (0.5,))])

    with FileLock(os.path.expanduser("~/data.lock")):
        # Download training data from open datasets
        training_data = datasets.FashionMNIST(
            root="~/data",
            train=True,
            download=True,
            transform=transform,
        )

        # Download test data from open datasets
        test_data = datasets.FashionMNIST(
            root="~/data",
            train=False,
            download=True,
            transform=transform,
        )

    # Create data loaders
    train_dataloader = DataLoader(training_data, batch_size=batch_size, shuffle=True)
    test_dataloader = DataLoader(test_data, batch_size=batch_size)

    return train_dataloader, test_dataloader


# Model definition
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Dropout(0.25),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Dropout(0.25),
            nn.Linear(512, 10),
            nn.ReLU(),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits


def train_func_per_worker(config: Dict):
    lr = config["lr"]
    epochs = config["epochs"]
    batch_size = config["batch_size_per_worker"]

    # Get dataloaders inside the worker training function
    train_dataloader, test_dataloader = get_dataloaders(batch_size=batch_size)

    # [1] Prepare Dataloader for distributed training
    # Shard the datasets among workers and move batches to the correct device
    # =======================================================================
    train_dataloader = ray.train.torch.prepare_data_loader(train_dataloader)
    test_dataloader = ray.train.torch.prepare_data_loader(test_dataloader)

    model = NeuralNetwork()

    # [2] Prepare and wrap your model with DistributedDataParallel
    # Move the model to the correct GPU/CPU device
    # ============================================================
    model = ray.train.torch.prepare_model(model)

    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    # Model training loop
    for epoch in range(epochs):
        print_worker_info(epoch)
        if ray.train.get_context().get_world_size() > 1:
            # Required for the distributed sampler to shuffle properly across epochs.
            train_dataloader.sampler.set_epoch(epoch)

        model.train()
        for X, y in tqdm(train_dataloader, desc=f"Train Epoch {epoch}"):
            pred = model(X)
            loss = loss_fn(pred, y)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        model.eval()
        test_loss, num_correct, num_total = 0, 0, 0
        with torch.no_grad():
            for X, y in tqdm(test_dataloader, desc=f"Test Epoch {epoch}"):
                pred = model(X)
                loss = loss_fn(pred, y)

                test_loss += loss.item()
                num_total += y.shape[0]
                num_correct += (pred.argmax(1) == y).sum().item()

        test_loss /= len(test_dataloader)
        accuracy = num_correct / num_total

        # [3] Report metrics to Ray Train
        # ===============================
        ray.train.report(metrics={"loss": test_loss, "accuracy": accuracy})


def train_fashion_mnist(num_workers=2, use_gpu=False):
    global_batch_size = 32

    train_config = {
        "lr": 1e-3,
        "epochs": 10,
        "batch_size_per_worker": global_batch_size // num_workers,
    }

    # Configure computation resources
    torch_config = TorchConfig(backend="hccl", timeout_s=18000)
    scaling_config = ScalingConfig(num_workers=num_workers, resources_per_worker={"NPU": 2})

    # Initialize a Ray TorchTrainer
    trainer = TorchTrainer(
        train_loop_per_worker=train_func_per_worker,
        train_loop_config=train_config,
        torch_config=torch_config,
        scaling_config=scaling_config,
    )

    # [4] Start distributed training
    # Run `train_func_per_worker` on all workers
    # =============================================
    result = trainer.fit()
    print(f"Training result: {result}")


def print_worker_info(epoch):
    context = ray.train.get_context()
    print("start to train epoch :", epoch)

    # Total number of workers in the current training job.
    world_size = context.get_world_size()
    print("world_size is:", world_size)

    # Index of the current worker, typically used to distinguish workers.
    world_rank = context.get_world_rank()
    print("world_rank is:", world_rank)

    # Rank of the current node, useful in multi-node training.
    node_rank = context.get_node_rank()
    print("node_rank is:", node_rank)

    # Index of the local worker process on the current node.
    local_rank = context.get_local_rank()
    print("local_rank is:", local_rank)


if __name__ == "__main__":
    train_fashion_mnist(num_workers=2, use_gpu=True)
```

Issue Severity

High: It blocks me from completing my task.

@lxmanutd lxmanutd added bug Something that is supposed to be working; but isn't triage Needs triage (eg: priority, bug/not-bug, and owning component) labels Nov 21, 2024
@lxmanutd
Author

(base) root@raycluster-multi-node-multi-gpu-small-group-worker-xvk5m:/opt/conda/lib/python3.10/site-packages/torch/utils/data# ray status
======== Autoscaler status: 2024-11-20 22:59:04.406709 ========
Node status

Active:
16 small-group
1 head-group
Pending:
(no pending nodes)
Recent failures:
(no failures)

Resources

Usage:
0.0/64.0 CPU (0.0 used of 1.0 reserved in placement groups)
16.0/16.0 NPU (16.0 used of 16.0 reserved in placement groups)
0B/365.08GiB memory
50.20KiB/108.10GiB object_store_memory

Demands:
(no resource demands)
The autoscaler failed with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/urllib3/connection.py", line 198, in _new_conn
sock = connection.create_connection(
File "/opt/conda/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/opt/conda/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
File "/opt/conda/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
File "/opt/conda/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "/opt/conda/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
conn.connect()
File "/opt/conda/lib/python3.10/site-packages/urllib3/connection.py", line 616, in connect

@lxmanutd lxmanutd changed the title [<RAY Train>] Autoscaler error after several hours running [<RAY Core|Train>] Autoscaler error after several hours running Nov 21, 2024
@jcotant1 jcotant1 added the core Issues that should be addressed in Ray Core label Nov 21, 2024
@jjyao jjyao added P1 Issue that should be fixed within a few weeks kuberay Issues for the Ray/Kuberay integration that are tracked on the Ray side and removed triage Needs triage (eg: priority, bug/not-bug, and owning component) labels Nov 25, 2024
@lxmanutd
Author

lxmanutd commented Nov 28, 2024

@jjyao @kevin85421 I see that the following issue was fixed on master, but it seems no released package includes the fix yet. When will a release containing the fix be published? [Core]: Fix ConnectionError on Autoscaler CR lookups in K8s clusters with custom DNS for Kubernetes API. #48541

@lxmanutd
Author

lxmanutd commented Nov 29, 2024

@jjyao Do you know where we can set the autoscaling options, such as the interval at which the cluster head queries the K8s API to decide whether to scale up or down? It seems we can only set the following:

autoscalerOptions:
  upscalingMode: Default
  idleTimeoutSeconds: 60
  imagePullPolicy: IfNotPresent
  securityContext: {}
  env: []
  envFrom: []
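
My best guess is that the polling interval comes from the AUTOSCALER_UPDATE_INTERVAL_S environment variable read by the monitor (it appears to default to 5 seconds), so perhaps it can be passed through the env field above. An unverified sketch:

```yaml
# Unverified sketch: assumes the autoscaler sidecar honors AUTOSCALER_UPDATE_INTERVAL_S
# and that autoscalerOptions.env is forwarded to its container.
autoscalerOptions:
  upscalingMode: Default
  idleTimeoutSeconds: 60
  imagePullPolicy: IfNotPresent
  env:
    - name: AUTOSCALER_UPDATE_INTERVAL_S  # assumed knob; appears to default to 5 seconds
      value: "30"
```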

@kevin85421
Member

@lxmanutd Ray 2.40 is already out and Ray 2.41 is currently in preparation. Closing this issue. I will open an issue to track: #48834 (comment)

You can check ray-project/kuberay#2600 for the plan for Autoscaler V2 if you are interested.
