add use_cuda for get_model funcs in two OPs #389

Merged — 1 commit merged into main on Aug 19, 2024
Conversation

HYLcool
Collaborator

@HYLcool HYLcool commented Aug 19, 2024

As the title says.

This PR solves the problem that some OPs cannot make use of CUDA to speed up their loaded models.
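The fix presumably threads a `use_cuda` flag into `get_model` so that callers can request GPU placement explicitly. A minimal sketch of that idea, where `MODEL_ZOO`, `move_to_cuda`, and `cuda_device_count` are stand-ins modeled on the reviewer's snippet below, not the actual data-juicer implementation:

```python
# Sketch of a use_cuda flag in get_model. MODEL_ZOO, move_to_cuda, and
# cuda_device_count are simplified stand-ins (assumptions, not the real code).
MODEL_ZOO = {}


def cuda_device_count():
    # Stand-in; the real helper would query torch.cuda.device_count().
    return 1


def move_to_cuda(model, rank):
    # Stand-in; the real helper would call something like model.to(f'cuda:{rank}').
    return model


def get_model(model_key=None, rank=None, use_cuda=False):
    """Load (and cache) a model; optionally move it to a CUDA device."""
    if model_key is None:
        return None
    if model_key not in MODEL_ZOO:
        # model_key acts as a factory (e.g. a functools.partial) that
        # loads the model on first use.
        MODEL_ZOO[model_key] = model_key()
    if use_cuda:
        # Map the worker rank onto the available CUDA devices.
        rank = 0 if rank is None else rank
        rank = rank % cuda_device_count()
        MODEL_ZOO[model_key] = move_to_cuda(MODEL_ZOO[model_key], rank)
    return MODEL_ZOO[model_key]
```

The key design point is that the caller (the OP) decides whether CUDA should be used, rather than `get_model` guessing.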

@HYLcool HYLcool self-assigned this Aug 19, 2024
@HYLcool HYLcool added the bug (Something isn't working) and priority:high (in high priority) labels Aug 19, 2024
Collaborator

@yxdyc yxdyc left a comment


This PR is just a quick fix. Later we may add auto-inspection for all model-based ops whose `_accelerator == 'cuda'`, with code similar to the following:

import inspect
......

def get_model(model_key=None, rank=None):
    if model_key is None:
        return None

    global MODEL_ZOO
    if model_key not in MODEL_ZOO:
        logger.debug(
            f'{model_key} not found in MODEL_ZOO ({mp.current_process().name})'
        )
        MODEL_ZOO[model_key] = model_key()

    # Inspect the caller's frame to find the op instance that requested
    # the model ('self' in the calling method's locals).
    frame = inspect.currentframe().f_back
    caller_instance = frame.f_locals.get('self')
    caller_class = type(caller_instance) if caller_instance is not None else None

    if caller_class is not None and caller_class.__name__ == 'DJ_operator':  # to be corrected
        use_cuda = True
    else:
        use_cuda = False
    if use_cuda:
        # Map the worker rank onto the available CUDA devices.
        rank = 0 if rank is None else rank
        rank = rank % cuda_device_count()
        move_to_cuda(MODEL_ZOO[model_key], rank)
    return MODEL_ZOO[model_key]

@HYLcool HYLcool merged commit 3eaba7f into main Aug 19, 2024
3 checks passed
@HYLcool HYLcool deleted the fix/use_cuda branch August 19, 2024 09:38