Hybrid gradio error #15

Open
Yutong18 opened this issue Jun 25, 2024 · 14 comments

Comments

@Yutong18

Hi,
Thank you for your great work. I ran into the following error in my environment. Could you please help me look into it? Thank you.
Traceback (most recent call last):
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/queueing.py", line 456, in call_prediction
output = await route_utils.call_process_api(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/blocks.py", line 1522, in process_api
result = await self.call_function(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/blocks.py", line 1144, in call_function
prediction = await anyio.to_thread.run_sync(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/utils.py", line 674, in wrapper
response = f(*args, **kwargs)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/run_gradio_audio_driven.py", line 860, in run
outputs = self.forward_sample(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/run_gradio_audio_driven.py", line 452, in forward_sample
val_output = self.pipeline(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/pipeline/pipeline.py", line 454, in call
down_res_face_tmp, mid_res_face_tmp, controlnet_flow, _ = self.face_controlnet(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/models/ldmk_ctrlnet.py", line 446, in forward
warped_cond_feature, occlusion_mask = self.get_warped_frames(cond_feature, scale_flows[fh // ch], fh // ch)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/models/ldmk_ctrlnet.py", line 300, in get_warped_frames
warped_frame = softsplat(tenIn=first_frame.float(), tenFlow=flows[:, i].float(), tenMetric=None, strMode='avg').to(dtype) # [b, c, w, h]
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/models/softsplat.py", line 251, in softsplat
tenOut = softsplat_func.apply(tenIn, tenFlow)
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/torch/cuda/amp/autocast_mode.py", line 106, in decorate_fwd
return fwd(*args, **kwargs)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/models/softsplat.py", line 284, in forward
cuda_launch(cuda_kernel('softsplat_out', '''
File "cupy/_util.pyx", line 67, in cupy._util.memoize.decorator.ret
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/models/softsplat.py", line 225, in cuda_launch
return cupy.cuda.compile_with_cache(objCudacache[strKey]['strKernel'], tuple(['-I ' + os.environ['CUDA_HOME'], '-I ' + os.environ['CUDA_HOME'] + '/include'])).get_function(objCudacache[strKey]['strFunction'])
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/cupy/cuda/compiler.py", line 464, in compile_with_cache
return _compile_module_with_cache(*args, **kwargs)
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/cupy/cuda/compiler.py", line 492, in _compile_module_with_cache
return _compile_with_cache_cuda(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/cupy/cuda/compiler.py", line 561, in _compile_with_cache_cuda
mod.load(cubin)
File "cupy/cuda/function.pyx", line 264, in cupy.cuda.function.Module.load
File "cupy/cuda/function.pyx", line 266, in cupy.cuda.function.Module.load
File "cupy_backends/cuda/api/driver.pyx", line 210, in cupy_backends.cuda.api.driver.moduleLoadData
File "cupy_backends/cuda/api/driver.pyx", line 60, in cupy_backends.cuda.api.driver.check_status
cupy_backends.cuda.api.driver.CUDADriverError: CUDA_ERROR_INVALID_SOURCE: device kernel image is invalid

The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/queueing.py", line 501, in process_events
response = await self.call_prediction(awake_events, batch)
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/queueing.py", line 465, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None

@MyNiuuu
Owner

MyNiuuu commented Jun 25, 2024

Hi! It appears that the error is related to the CUDA version. Can you please tell me which version of the CUDA Toolkit you've installed? We have tested the demo using version 11.7.
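For reference, the CUDA version that the Python environment actually sees can be printed with a minimal sketch like this (standard torch/CuPy introspection only; the comments show the values I would expect, not verified output):

import os
import torch
import cupy

print('torch', torch.__version__, '| built for CUDA', torch.version.cuda)  # should report 11.7
print('cupy', cupy.__version__)                                            # should match a cupy-cuda117 build
print('CUDA_HOME =', os.environ.get('CUDA_HOME'))  # softsplat.py passes this path to the CUDA compiler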

@Yutong18
Author

Yutong18 commented Jun 25, 2024

Thank you for your reply! I installed CUDA 11.7 on a GPU service and encountered the above issue. Are there any other versions of CuPy that might work?

@MyNiuuu
Owner

MyNiuuu commented Jun 25, 2024

> Thank you for your reply! I installed CUDA 11.7 on a GPU service and encountered the above issue. Are there any other versions of CuPy that might work?

This is a bit weird, as version 11.7 functions properly on my device. I discovered similar problems here, with one assertion being that CuPy might alter torch.cuda.current_device() to 0. Does your system have multiple GPUs, and are you specifying one other than the first one (cuda:0)?
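If multiple GPUs are in play, here is a hedged sketch of how to keep torch and CuPy pinned to the same device (index 0 is only an example):

import torch
import cupy

gpu_id = 0                       # example index; use the GPU you actually intend to run on
torch.cuda.set_device(gpu_id)    # torch tensors/kernels default to this GPU
cupy.cuda.Device(gpu_id).use()   # CuPy compiles and launches its kernels on the same GPU

print(torch.cuda.current_device(), cupy.cuda.Device().id)  # both should print gpu_id

Alternatively, launching the demo with CUDA_VISIBLE_DEVICES restricted to a single GPU sidesteps any index mismatch.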

To better resolve the issue, could you give specifics about:

  1. Info about your machine's GPU (number, model, memory). In my experience, many CUDA-related errors are actually due to insufficient GPU memory.
  2. The package list of your Python environment (pip list).
  3. If possible, can you share your image and audio (video) files so I can try them on my own device?

Thank you!

@MyNiuuu
Owner

MyNiuuu commented Jun 28, 2024

Closing this since there has been no activity.

MyNiuuu closed this as completed Jun 28, 2024
@sbppy

sbppy commented Jun 28, 2024

Same issue on my 4090 machine.
pip list
Package Version


absl-py 2.1.0
accelerate 0.30.1
addict 2.4.0
aiofiles 23.2.1
altair 5.3.0
annotated-types 0.7.0
antlr4-python3-runtime 4.9.3
anyio 4.4.0
attrs 23.2.0
audioread 3.0.1
av 12.1.0
basicsr 1.4.2
blessed 1.20.0
certifi 2024.6.2
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
cmake 3.29.6
colorlog 6.8.2
contourpy 1.2.1
cupy-cuda117 10.6.0
cycler 0.12.1
decorator 5.1.1
diffusers 0.24.0
dnspython 2.6.1
einops 0.8.0
email_validator 2.2.0
exceptiongroup 1.2.1
facexlib 0.3.0
fastapi 0.111.0
fastapi-cli 0.0.4
fastrlock 0.8.2
ffmpy 0.3.2
filelock 3.15.4
filterpy 1.4.5
flatbuffers 24.3.25
fonttools 4.53.0
fsspec 2024.6.0
future 1.0.0
fvcore 0.1.5.post20221221
gfpgan 1.3.8
gpustat 1.1.1
gradio 4.5.0
gradio_client 0.7.0
grpcio 1.64.1
h11 0.14.0
httpcore 1.0.5
httptools 0.6.1
httpx 0.27.0
huggingface-hub 0.23.4
idna 3.7
imageio 2.34.2
importlib_metadata 8.0.0
importlib_resources 6.4.0
iopath 0.1.10
jax 0.4.30
jaxlib 0.4.30
Jinja2 3.1.4
joblib 1.4.2
jsonschema 4.22.0
jsonschema-specifications 2023.12.1
kiwisolver 1.4.5
kornia 0.7.2
kornia_rs 0.1.3
lazy_loader 0.4
librosa 0.10.2.post1
lit 18.1.8
llvmlite 0.43.0
lmdb 1.4.1
Markdown 3.6
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.9.0
mdurl 0.1.2
mediapipe 0.10.14
ml-dtypes 0.4.0
mpmath 1.3.0
msgpack 1.0.8
networkx 3.3
numba 0.60.0
numpy 1.23.0
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-cupti-cu11 11.7.101
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.2.10.91
nvidia-cusolver-cu11 11.4.0.1
nvidia-cusparse-cu11 11.7.4.91
nvidia-ml-py 12.555.43
nvidia-nccl-cu11 2.14.3
nvidia-nvtx-cu11 11.7.91
omegaconf 2.3.0
opencv-contrib-python 4.10.0.84
opencv-python 4.10.0.84
opencv-python-headless 4.10.0.84
opt-einsum 3.3.0
orjson 3.10.5
packaging 24.1
pandas 2.2.2
pillow 10.3.0
pip 24.0
platformdirs 4.2.2
pooch 1.8.2
portalocker 2.10.0
protobuf 4.25.3
psutil 6.0.0
pycparser 2.22
pydantic 2.7.4
pydantic_core 2.18.4
pydub 0.25.1
Pygments 2.18.0
pyparsing 3.1.2
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-multipart 0.0.9
pytorch3d 0.7.7
pytz 2024.1
PyYAML 6.0.1
referencing 0.35.1
regex 2024.5.15
requests 2.32.3
rich 13.7.1
rpds-py 0.18.1
safetensors 0.4.3
scikit-image 0.24.0
scikit-learn 1.5.0
scipy 1.13.1
semantic-version 2.10.0
setuptools 69.5.1
shellingham 1.5.4
six 1.16.0
sniffio 1.3.1
sounddevice 0.4.7
soundfile 0.12.1
soxr 0.3.7
starlette 0.37.2
sympy 1.12.1
tabulate 0.9.0
tb-nightly 2.18.0a20240627
tensorboard-data-server 0.7.2
termcolor 2.4.0
threadpoolctl 3.5.0
tifffile 2024.6.18
tokenizers 0.19.1
tomli 2.0.1
tomlkit 0.12.0
toolz 0.12.1
torch 2.0.1
torchvision 0.15.2
tqdm 4.66.4
transformers 4.41.1
trimesh 4.4.1
triton 2.0.0
typer 0.12.3
typing_extensions 4.12.2
tzdata 2024.1
ujson 5.10.0
urllib3 2.2.2
uvicorn 0.30.1
uvloop 0.19.0
watchfiles 0.22.0
wcwidth 0.2.13
websockets 11.0.3
Werkzeug 3.0.3
wheel 0.43.0
yacs 0.1.8
yapf 0.40.2
zipp 3.19.2

@MyNiuuu
Owner

MyNiuuu commented Jun 28, 2024

I think this may be caused by insufficient GPU memory on the 4090 (24 GB). Can you switch to a smaller resolution (e.g., 384x384) or a shorter frame length (e.g., 14 or fewer) and see whether the error still occurs? Thank you!
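To help rule memory in or out, the free/total GPU memory can also be printed right before the diffusion call, e.g. with this minimal sketch (torch.cuda.mem_get_info is available in the torch 2.0.1 shown in the pip list above):

import torch

free, total = torch.cuda.mem_get_info()  # bytes free / total on the current device
print(f'GPU memory: {free / 1e9:.1f} GB free of {total / 1e9:.1f} GB')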

MyNiuuu reopened this Jun 28, 2024
@sbppy

sbppy commented Jun 28, 2024

@MyNiuuu
Copy link
Owner

MyNiuuu commented Jun 28, 2024

Regardless of the initial resolution of the input image, the preprocess_image function (https://github.com/MyNiuuu/MOFA-Video/blob/main/MOFA-Video-Hybrid/run_gradio_audio_driven.py#L977) resizes the shortest side of the image according to the value of target_size (https://github.com/MyNiuuu/MOFA-Video/blob/main/MOFA-Video-Hybrid/run_gradio_audio_driven.py#L966), which is set to 512 by default. Maybe you can change target_size to a smaller value and try again?
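For illustration, the resize described above amounts to something like the following (a simplified sketch based on that description, not a verbatim copy of preprocess_image; the rounding details in the repository may differ):

from PIL import Image

def resize_shortest_side(image: Image.Image, target_size: int = 512) -> Image.Image:
    # Scale the image so its shorter side equals target_size, preserving aspect ratio.
    w, h = image.size
    scale = target_size / min(w, h)
    return image.resize((round(w * scale), round(h * scale)))

So lowering target_size (e.g., to 384) directly shrinks the tensors that reach the softsplat kernels and reduces GPU memory use.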

@sbppy

sbppy commented Jun 28, 2024

I set target_size = 216.

Full log attached:
start loading models...
IMPORTANT: You are using gradio version 4.5.0, however version 4.29.0 is available, please upgrade.

layers per block is 2
=> loading checkpoint './models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar'
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer4.2.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.2.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.3.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder4.2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.3.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.2.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.2.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.3.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_encoder.features.5.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.3.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.0.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.0.downsample.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer1.0.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.1.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.fusion2.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.1.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.0.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer4.2.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.4.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer1.1.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder2.5.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.fusion4.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.2.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.4.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder8.5.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.1.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.0.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.3.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.1.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.0.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder4.8.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder8.8.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.2.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.skipconv2.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer4.1.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder8.2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer4.0.downsample.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.0.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder1.4.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.1.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer4.0.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.0.downsample.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.skipconv4.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.1.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder1.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.fusion8.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer1.0.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder4.5.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.2.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer4.2.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer4.0.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder1.7.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer1.2.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer1.0.downsample.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.5.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer4.0.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder2.2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_encoder.features.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer1.2.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.3.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer2.0.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.5.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer4.1.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer4.1.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer1.2.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.flow_decoder.decoder2.8.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.5.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer1.1.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer1.0.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer1.1.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar: module.image_encoder.layer3.4.bn1.num_batches_tracked
Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 212.02it/s]
models loaded.
Running on local URL: http://0.0.0.0:9873

To create a public link, set share=True in launch().
You selected None at [109, 100] from image
You selected None at [138, 101] from image
0%| | 0/1 [00:00<?, ?it/s]start diffusion process...
0%| | 0/25 [00:00<?, ?it/s]
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/env/mofa/lib/python3.10/site-packages/gradio/queueing.py", line 456, in call_prediction
output = await route_utils.call_process_api(
File "/env/mofa/lib/python3.10/site-packages/gradio/route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "/env/mofa/lib/python3.10/site-packages/gradio/blocks.py", line 1522, in process_api
result = await self.call_function(
File "/env/mofa/lib/python3.10/site-packages/gradio/blocks.py", line 1144, in call_function
prediction = await anyio.to_thread.run_sync(
File "/env/mofa/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/env/mofa/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "/env/mofa/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
File "/env/mofa/lib/python3.10/site-packages/gradio/utils.py", line 674, in wrapper
response = f(*args, **kwargs)
File "/app/MOFA-Video/MOFA-Video-Traj/run_gradio.py", line 570, in run
outputs = self.forward_sample(
File "/env/mofa/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/app/MOFA-Video/MOFA-Video-Traj/run_gradio.py", line 335, in forward_sample
val_output = self.pipeline(
File "/env/mofa/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/app/MOFA-Video/MOFA-Video-Traj/pipeline/pipeline.py", line 463, in call
down_block_res_samples, mid_block_res_sample, controlnet_flow, _ = self.controlnet(
File "/env/mofa/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/app/MOFA-Video/MOFA-Video-Traj/models/svdxt_featureflow_forward_controlnet_s2d_fixcmp_norefine.py", line 315, in forward
warped_cond_feature = self.get_warped_frames(cond_feature, scale_flows[fh // ch])
File "/app/MOFA-Video/MOFA-Video-Traj/models/svdxt_featureflow_forward_controlnet_s2d_fixcmp_norefine.py", line 231, in get_warped_frames
warped_frame = softsplat(tenIn=first_frame.float(), tenFlow=flows[:, i].float(), tenMetric=None, strMode='avg').to(dtype) # [b, c, w, h]
File "/app/MOFA-Video/MOFA-Video-Traj/models/softsplat.py", line 251, in softsplat
tenOut = softsplat_func.apply(tenIn, tenFlow)
File "/env/mofa/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/env/mofa/lib/python3.10/site-packages/torch/cuda/amp/autocast_mode.py", line 106, in decorate_fwd
return fwd(*args, **kwargs)
File "/app/MOFA-Video/MOFA-Video-Traj/models/softsplat.py", line 284, in forward
cuda_launch(cuda_kernel('softsplat_out', '''
File "cupy/_util.pyx", line 67, in cupy._util.memoize.decorator.ret
File "/app/MOFA-Video/MOFA-Video-Traj/models/softsplat.py", line 225, in cuda_launch
return cupy.cuda.compile_with_cache(objCudacache[strKey]['strKernel'], tuple(['-I ' + os.environ['CUDA_HOME'], '-I ' + os.environ['CUDA_HOME'] + '/include'])).get_function(objCudacache[strKey]['strFunction'])
File "/env/mofa/lib/python3.10/site-packages/cupy/cuda/compiler.py", line 464, in compile_with_cache
return _compile_module_with_cache(*args, **kwargs)
File "/env/mofa/lib/python3.10/site-packages/cupy/cuda/compiler.py", line 492, in _compile_module_with_cache
return _compile_with_cache_cuda(
File "/env/mofa/lib/python3.10/site-packages/cupy/cuda/compiler.py", line 614, in _compile_with_cache_cuda
mod.load(cubin)
File "cupy/cuda/function.pyx", line 264, in cupy.cuda.function.Module.load
File "cupy/cuda/function.pyx", line 266, in cupy.cuda.function.Module.load
File "cupy_backends/cuda/api/driver.pyx", line 210, in cupy_backends.cuda.api.driver.moduleLoadData
File "cupy_backends/cuda/api/driver.pyx", line 60, in cupy_backends.cuda.api.driver.check_status
cupy_backends.cuda.api.driver.CUDADriverError: CUDA_ERROR_INVALID_SOURCE: device kernel image is invalid

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/env/mofa/lib/python3.10/site-packages/gradio/queueing.py", line 501, in process_events
response = await self.call_prediction(awake_events, batch)
File "/env/mofa/lib/python3.10/site-packages/gradio/queueing.py", line 465, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None

@sbppy

sbppy commented Jun 28, 2024

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:49:14_PDT_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0

cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 5
#define CUDNN_PATCHLEVEL 0
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)

#endif /* CUDNN_VERSION_H */

nvidia-smi
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01 Driver Version: 535.183.01 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:4F:00.0 Off | Off |
| 30% 41C P0 54W / 450W | 0MiB / 24564MiB | 1% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+

@MyNiuuu
Owner

MyNiuuu commented Jun 29, 2024

Hmm... this is a tough one; I haven't come across this problem before. Regrettably, I'm unable to identify the cause at the moment. Despite extensive online research, I've found very few instances that resemble this one 😔.

@sbppy

sbppy commented Jun 30, 2024

Maybe the issue is a version mismatch:
nvidia-smi reports CUDA Version 12.2 (the host GPU driver)
nvcc reports cuda_11.7 (inside the Docker image)

But another image (with cuda118) works fine.
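If that mismatch is the culprit, two quick checks might narrow it down (a hedged sketch, not a confirmed fix): print what CuPy sees for the GPU and the toolkit, and clear CuPy's kernel cache (by default ~/.cupy/kernel_cache) in case it still holds modules compiled under a different setup.

import cupy

print('compute capability:', cupy.cuda.Device(0).compute_capability)   # '89' on an RTX 4090
print('driver CUDA version:', cupy.cuda.runtime.driverGetVersion())    # e.g. 12020 for the 12.2 host driver
print('runtime CUDA version:', cupy.cuda.runtime.runtimeGetVersion())  # e.g. 11070 for the CUDA 11.7 image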

@monk-after-90s

The same issue here.

@zenmequmingzia

Same issue here.
