
Skip ENOENT for vram_str_path and sdma_str_path if files not created when passing some, but not all, GPUs to a docker image #194

Open · wants to merge 1 commit into amd-staging

Conversation

jamesxu2
Contributor

This PR is intended to resolve the issue where rocm-smi --showpids unexpectedly returns UNKNOWNs:
ROCm/ROCm#3002

When rocm-smi --showpids is run against a process that does not have access to all GPUs (e.g. in a Docker container where some, but not all, devices are passed through), GetProcessInfoForPID attempts to enumerate every GPU on the host and look up vram/sdma/cu_occupancy data for each one. In the VRAM case:

  for (itr = gpu_set->begin(); itr != gpu_set->end(); itr++) {
    uint64_t gpu_id = (*itr);
    std::string vram_str_path = proc_str_path;
    vram_str_path += "/vram_";
    vram_str_path += std::to_string(gpu_id);

    err = ReadSysfsStr(vram_str_path, &tmp);  // returns ENOENT
    [...]

So we attempt to access the file proc_str_path/vram_{gpu_id} for each GPU on the host (where rocm-smi is invoked). However, since the host and the monitored process have different views of which GPUs exist, there is an inconsistency:

The monitored process only creates vram_{gpu_id} files for the GPUs it can see, while the host expects a vram_{gpu_id} file for every GPU. For example, if the host has four GPUs but only two are passed through to the container, only two vram_{gpu_id} files are created, yet the host-side loop still looks for all four. As a result, the host tries to read nonexistent files and returns early out of the loop with ENOENT.

This PR ignores ENOENT when enumerating the vram_ and sdma_ files, but still returns early if any other error is encountered, rather than prematurely returning from the loop on the first missing file. A previous PR, #155, already handled this case for cu_occupancy, which may be absent because a device does not support it.
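
To make the intended behaviour concrete, below is a minimal, self-contained sketch of the pattern (skip ENOENT, propagate anything else), assuming errno-style return codes and that each vram_{gpu_id} file holds a single numeric value. ReadFileStr, SumVramUsage, and the path/IDs in main are invented for this example; they are not the library's ReadSysfsStr or GetProcessInfoForPID, where the real change simply adds the same ENOENT check to the existing vram_ and sdma_ loops.

    // Standalone sketch of the "skip ENOENT, fail on anything else" pattern.
    #include <cerrno>
    #include <cstdint>
    #include <cstdio>
    #include <set>
    #include <string>

    // Reads the first line of a file. Returns 0 on success or an errno value.
    static int ReadFileStr(const std::string &path, std::string *out) {
      std::FILE *fp = std::fopen(path.c_str(), "r");
      if (fp == nullptr) {
        return errno;  // ENOENT when the file was never created
      }
      char buf[256] = {0};
      if (std::fgets(buf, sizeof(buf), fp) == nullptr) {
        std::fclose(fp);
        return EIO;
      }
      std::fclose(fp);
      *out = buf;
      return 0;
    }

    // Sums per-GPU VRAM usage for one process, tolerating missing files.
    static int SumVramUsage(const std::string &proc_str_path,
                            const std::set<uint64_t> &gpu_set,
                            uint64_t *total) {
      *total = 0;
      for (uint64_t gpu_id : gpu_set) {
        std::string vram_str_path =
            proc_str_path + "/vram_" + std::to_string(gpu_id);
        std::string tmp;
        int err = ReadFileStr(vram_str_path, &tmp);
        if (err == ENOENT) {
          // The GPU is not visible to the monitored process (e.g. not passed
          // to the container), so no vram_{gpu_id} file exists: skip it.
          continue;
        }
        if (err != 0) {
          return err;  // any other error is still treated as fatal, as before
        }
        *total += std::stoull(tmp);  // assumes the file holds a single number
      }
      return 0;
    }

    int main() {
      uint64_t total = 0;
      // Hypothetical proc path and GPU ids, for illustration only.
      int err = SumVramUsage("/proc/example/1234", {0x1111, 0x2222}, &total);
      std::printf("err=%d total=%llu\n", err, (unsigned long long)total);
      return 0;
    }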
