
Please Fix this project to work with the latest CUDA and if possible have a video tutorial. TIA #37

Open
oisilener1982 opened this issue Jul 4, 2024 · 8 comments


oisilener1982 commented Jul 4, 2024

After the workarounds, I managed to open Gradio, but I got this error:

(mofa) D:\AI\MOFA-Video\MOFA-Video-Hybrid>python run_gradio_audio_driven.py
C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\requests\__init__.py:86: RequestsDependencyWarning: Unable to find acceptable character detection dependency (chardet or charset_normalizer).
warnings.warn(
start loading models...
IMPORTANT: You are using gradio version 4.5.0, however version 4.29.0 is available, please upgrade.

layers per block is 2
layers per block is 2
=> loading checkpoint './models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar'
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder4.8.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder1.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer4.0.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.2.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_encoder.features.5.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.0.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer4.0.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.fusion8.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder8.8.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer4.1.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer4.2.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer1.0.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder4.5.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.0.downsample.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.2.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer1.0.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.2.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder8.5.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer1.2.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.0.downsample.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer1.2.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer1.2.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.3.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer1.1.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder1.7.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.1.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.0.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer1.0.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.skipconv4.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.2.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.3.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder4.2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer4.2.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.0.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder1.4.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.1.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer4.0.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.0.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.3.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder8.2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_encoder.features.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.4.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer1.0.downsample.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.fusion4.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.3.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.2.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder2.5.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.5.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder2.2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.0.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer1.1.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer4.0.downsample.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.1.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.1.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.1.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.1.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.5.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer4.2.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.4.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.5.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.decoder2.8.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer1.1.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer4.1.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.2.bn1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer4.1.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer2.3.bn3.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.fusion2.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.4.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.0.bn2.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.flow_decoder.skipconv2.1.num_batches_tracked
caution: missing keys from checkpoint ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints\ckpt_iter_42000.pth.tar: module.image_encoder.layer3.3.bn2.num_batches_tracked
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 5/5 [00:00<00:00, 80.03it/s]
models loaded.
Running on local URL: http://127.0.0.1:9080

To create a public link, set share=True in launch().
You selected None at [170, 216] from image
[[[170, 216]]]
You selected None at [162, 265] from image
[[[170, 216], [162, 265]]]
torch.Size([1, 24, 2, 512, 512])
You selected None at [274, 231] from image
[[[170, 216], [162, 265]], [[274, 231]]]
You selected None at [284, 269] from image
[[[170, 216], [162, 265]], [[274, 231], [284, 269]]]
torch.Size([1, 24, 2, 512, 512])
torch.Size([1, 24, 2, 512, 512])
You selected None at [280, 222] from image
[[[170, 216], [162, 265]], [[280, 222]]]
You selected None at [267, 276] from image
[[[170, 216], [162, 265]], [[280, 222], [267, 276]]]
torch.Size([1, 24, 2, 512, 512])
C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\requests\__init__.py:86: RequestsDependencyWarning: Unable to find acceptable character detection dependency (chardet or charset_normalizer).
warnings.warn(
using safetensor as default
load [net_G] and [net_G_ema] from ./ckpts/sad_talker\epoch_00190_iteration_000400000_checkpoint.pt
3DMM Extraction for source image
Traceback (most recent call last):
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\sadtalker_audio2pose\inference.py", line 187, in <module>
main(args)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\sadtalker_audio2pose\inference.py", line 76, in main
first_coeff_path, crop_pic_path, crop_info = preprocess_model.generate(pic_path, first_frame_dir, args.preprocess,
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\sadtalker_audio2pose\src\utils\preprocess.py", line 103, in generate
x_full_frames, crop, quad = self.propress.crop(x_full_frames, still=True if 'ext' in crop_or_resize.lower() else False, xsize=512)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\sadtalker_audio2pose\src\utils\croper.py", line 129, in crop
lm = self.get_landmark(img_np)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\sadtalker_audio2pose\src\utils\croper.py", line 35, in get_landmark
lm = landmark_98_to_68(self.predictor.detector.get_landmarks(img)) # [0]
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\facexlib\alignment\awing_arch.py", line 373, in get_landmarks
pred = calculate_points(heatmaps).reshape(-1, 2)
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\facexlib\alignment\awing_arch.py", line 18, in calculate_points
preds = preds.astype(np.float, copy=False)
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\numpy\__init__.py", line 324, in __getattr__
raise AttributeError(former_attrs[attr])
AttributeError: module 'numpy' has no attribute 'float'.
np.float was a deprecated alias for the builtin float. To avoid this error in existing code, use float by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.float64 here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations. Did you mean: 'cfloat'?
Traceback (most recent call last):
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\queueing.py", line 456, in call_prediction
output = await route_utils.call_process_api(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\blocks.py", line 1522, in process_api
result = await self.call_function(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\blocks.py", line 1144, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
result = context.run(func, *args)
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\utils.py", line 674, in wrapper
response = f(*args, **kwargs)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\run_gradio_audio_driven.py", line 860, in run
outputs = self.forward_sample(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\run_gradio_audio_driven.py", line 442, in forward_sample
ldmk_controlnet_flow, ldmk_pose_imgs, landmarks, num_frames = self.get_landmarks(save_root, first_frame_path, audio_path, input_first_frame[0], self.model_length, ldmk_render=ldmk_render)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\run_gradio_audio_driven.py", line 708, in get_landmarks
ldmknpy_dir = self.audio2landmark(audio_path, first_frame_path, ldmk_dir, ldmk_render)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\run_gradio_audio_driven.py", line 688, in audio2landmark
assert return_code == 0, "Errors in generating landmarks! Please trace back up for detailed error report."
AssertionError: Errors in generating landmarks! Please trace back up for detailed error report.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\queueing.py", line 501, in process_events
response = await self.call_prediction(awake_events, batch)
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\queueing.py", line 465, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None


MyNiuuu (Owner) commented Jul 4, 2024

AttributeError: module 'numpy' has no attribute 'float'.
np.float was a deprecated alias for the builtin float. To avoid this error in existing code, use float by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.float64 here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations. Did you mean: 'cfloat'?

Hello, it seems that this error originates from the version mismatch of numpy. What is your numpy version? It is recommended to use version 1.23.0 as specified here.
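For anyone who would rather patch than downgrade: the traceback points at facexlib's `calculate_points()`, which calls `preds.astype(np.float, copy=False)`. A minimal sketch of the one-line fix, using a dummy array in place of the real heatmap peaks:

```python
import numpy as np

# facexlib/alignment/awing_arch.py (per the traceback) does:
#   preds = preds.astype(np.float, copy=False)
# np.float was removed in NumPy 1.24; the builtin float (or np.float64)
# is a drop-in replacement with identical behavior:
preds = np.zeros((98, 2), dtype=np.float32)  # dummy landmark array for illustration
preds = preds.astype(np.float64, copy=False)  # was: preds.astype(np.float, copy=False)
print(preds.dtype)  # float64
```

Editing the installed `awing_arch.py` this way avoids touching the rest of the environment.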

MyNiuuu (Owner) commented Jul 4, 2024

Please Fix this project to work with the latest CUDA and if possible have a video tutorial.

Recently I've been occupied with many tasks, including this project. As I haven't yet completed the entire open-source process (training scripts and HF Spaces still remain), updating our project to CUDA 12.x might not be high on my priority list for the coming weeks. I would appreciate it if you could understand this situation.

oisilener1982 (Author) commented

NumPy is 1.23.0:

(mofa) D:\AI\MOFA-Video\MOFA-Video-Hybrid>pip list
Package Version


absl-py 2.1.0
accelerate 0.30.1
addict 2.4.0
aiofiles 23.2.1
altair 5.3.0
annotated-types 0.7.0
ansicon 1.89.0
antlr4-python3-runtime 4.9.3
anyio 4.4.0
attrs 23.2.0
audioread 3.0.1
av 12.1.0
basicsr 1.4.2
blessed 1.20.0
Brotli 1.0.9
certifi 2024.6.2
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
colorlog 6.8.2
contourpy 1.2.1
cupy-cuda12x 13.2.0
cycler 0.12.1
decorator 5.1.1
diffusers 0.24.0
dnspython 2.6.1
einops 0.8.0
email_validator 2.2.0
exceptiongroup 1.2.1
facexlib 0.3.0
fastapi 0.111.0
fastapi-cli 0.0.4
fastrlock 0.8.2
ffmpy 0.3.2
filelock 3.13.1
filterpy 1.4.5
flatbuffers 24.3.25
fonttools 4.53.0
fsspec 2024.6.1
future 1.0.0
fvcore 0.1.5.post20221221
gfpgan 1.3.8
gmpy2 2.1.2
gpustat 1.1.1
gradio 4.5.0
gradio_client 0.7.0
grpcio 1.64.1
h11 0.14.0
httpcore 1.0.5
httptools 0.6.1
httpx 0.27.0
huggingface-hub 0.23.4
idna 3.7
imageio 2.34.2
importlib_metadata 8.0.0
importlib_resources 6.4.0
iopath 0.1.10
jax 0.4.30
jaxlib 0.4.30
Jinja2 3.1.4
jinxed 1.2.1
joblib 1.4.2
jsonschema 4.22.0
jsonschema-specifications 2023.12.1
kiwisolver 1.4.5
kornia 0.7.2
kornia_rs 0.1.4
lazy_loader 0.4
librosa 0.10.2.post1
llvmlite 0.43.0
lmdb 1.5.1
Markdown 3.6
markdown-it-py 3.0.0
MarkupSafe 2.1.3
matplotlib 3.9.0
mdurl 0.1.2
mediapipe 0.10.14
mkl-fft 1.3.8
mkl-random 1.2.4
mkl-service 2.4.0
ml-dtypes 0.4.0
mpmath 1.3.0
msgpack 1.0.8
networkx 3.3
numba 0.60.0
numpy 1.23.0
nvidia-ml-py 12.555.43
omegaconf 2.3.0
opencv-contrib-python 4.10.0.84
opencv-python 4.10.0.84
opencv-python-headless 4.10.0.84
opt-einsum 3.3.0
orjson 3.10.6
packaging 24.1
pandas 2.2.2
pillow 10.3.0
pip 24.0
platformdirs 4.2.2
pooch 1.8.2
portalocker 2.10.0
protobuf 4.25.3
psutil 6.0.0
pycparser 2.22
pydantic 2.8.1
pydantic_core 2.20.1
pydub 0.25.1
Pygments 2.18.0
pyparsing 3.1.2
PySocks 1.7.1
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-multipart 0.0.9
pytorch3d 0.7.7
pytz 2024.1
pywin32 306
PyYAML 6.0.1
referencing 0.35.1
regex 2024.5.15
requests 2.32.2
rich 13.7.1
rpds-py 0.18.1
safetensors 0.4.3
scikit-image 0.24.0
scikit-learn 1.5.1
scipy 1.13.1
semantic-version 2.10.0
setuptools 69.5.1
shellingham 1.5.4
six 1.16.0
sniffio 1.3.1
sounddevice 0.4.7
soundfile 0.12.1
soxr 0.3.7
starlette 0.37.2
sympy 1.12.1
tabulate 0.9.0
tb-nightly 2.18.0a20240703
tensorboard-data-server 0.7.2
termcolor 2.4.0
threadpoolctl 3.5.0
tifffile 2024.7.2
tokenizers 0.19.1
tomli 2.0.1
tomlkit 0.12.0
toolz 0.12.1
torch 2.0.1
torchaudio 2.0.2
torchvision 0.15.2
tqdm 4.66.4
transformers 4.41.1
trimesh 4.4.1
typer 0.12.3
typing_extensions 4.11.0
tzdata 2024.1
ujson 5.10.0
urllib3 2.2.2
uvicorn 0.30.1
watchfiles 0.22.0
wcwidth 0.2.13
websockets 11.0.3
Werkzeug 3.0.3
wheel 0.43.0
win-inet-pton 1.1.0
yacs 0.1.8
yapf 0.40.2
zipp 3.19.2

oisilener1982 (Author) commented

Once you have the time, please update MOFA to support the latest version of CUDA. Thanks in advance.

MyNiuuu (Owner) commented Jul 4, 2024

numpy==1.23.0 works fine on my machine. Maybe you can try older versions? It seems that np.float was deprecated in version 1.20.0 and removed in 1.24.0, so this error usually means a newer NumPy is actually being imported. https://stackoverflow.com/questions/74844262/how-can-i-solve-error-module-numpy-has-no-attribute-float-in-python
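Since pip reports 1.23.0 but the message is the NumPy >= 1.24 removal error, it is worth confirming which NumPy the failing process actually imports (a second install elsewhere on the path can shadow the pinned one). A quick diagnostic sketch:

```python
import numpy as np

# Print the version and the file this interpreter actually imported;
# if np.float is missing, the runtime NumPy is 1.24 or newer,
# regardless of what `pip list` reports for another environment.
print(np.__version__)
print(np.__file__)
print("np.float present:", hasattr(np, "float"))
```

Running this with the same interpreter that launches `run_gradio_audio_driven.py` (and inside the landmark subprocess, if possible) should pinpoint the mismatch.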

oisilener1982 (Author) commented

The previous run used SadTalker as the landmark renderer. Here is the error if I use AniPortrait:
Running on local URL: http://127.0.0.1:9080

To create a public link, set share=True in launch().
You selected None at [175, 222] from image
[[[175, 222]]]
You selected None at [172, 255] from image
[[[175, 222], [172, 255]]]
torch.Size([1, 24, 2, 512, 512])
You selected None at [274, 222] from image
[[[175, 222], [172, 255]], [[274, 222]]]
You selected None at [270, 258] from image
[[[175, 222], [172, 255]], [[274, 222], [270, 258]]]
torch.Size([1, 24, 2, 512, 512])
torch.Size([1, 24, 2, 512, 512])
C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\requests\__init__.py:86: RequestsDependencyWarning: Unable to find acceptable character detection dependency (chardet or charset_normalizer).
warnings.warn(
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at ckpts/aniportrait/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at ckpts/aniportrait/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1720079715.734177 8936 face_landmarker_graph.cc:174] Sets FaceBlendshapesGraph acceleration to xnnpack by default.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
W0000 00:00:1720079715.754072 10804 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1720079715.767969 6468 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1720079715.804501 11024 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\google\protobuf\symbol_database.py:55: UserWarning: SymbolDatabase.GetPrototype() is deprecated. Please use message_factory.GetMessageClass() instead. SymbolDatabase.GetPrototype() will be removed soon.
warnings.warn('SymbolDatabase.GetPrototype() is deprecated. Please '
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
Traceback (most recent call last):
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\queueing.py", line 456, in call_prediction
output = await route_utils.call_process_api(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\blocks.py", line 1522, in process_api
result = await self.call_function(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\blocks.py", line 1144, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
result = context.run(func, *args)
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\utils.py", line 674, in wrapper
response = f(*args, **kwargs)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\run_gradio_audio_driven.py", line 860, in run
outputs = self.forward_sample(
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\run_gradio_audio_driven.py", line 442, in forward_sample
ldmk_controlnet_flow, ldmk_pose_imgs, landmarks, num_frames = self.get_landmarks(save_root, first_frame_path, audio_path, input_first_frame[0], self.model_length, ldmk_render=ldmk_render)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\run_gradio_audio_driven.py", line 708, in get_landmarks
ldmknpy_dir = self.audio2landmark(audio_path, first_frame_path, ldmk_dir, ldmk_render)
File "D:\AI\MOFA-Video\MOFA-Video-Hybrid\run_gradio_audio_driven.py", line 698, in audio2landmark
assert return_code == 0, "Errors in generating landmarks! Please trace back up for detailed error report."
AssertionError: Errors in generating landmarks! Please trace back up for detailed error report.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\queueing.py", line 501, in process_events
response = await self.call_prediction(awake_events, batch)
File "C:\Users\Renel\anaconda3\envs\mofa\lib\site-packages\gradio\queueing.py", line 465, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None
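The final `AssertionError` only reports that the landmark-generation subprocess exited with a nonzero return code; the real cause is whatever the child process printed earlier. A hedged sketch of how such a return-code check can capture and surface the child's stderr instead of a bare assert (the invocation shown is hypothetical — the actual command is built inside `audio2landmark` in `run_gradio_audio_driven.py`):

```python
import subprocess
import sys

def run_and_report(cmd):
    """Run a child process; raise with its captured stderr if it fails."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(
            f"Command {cmd!r} exited with code {proc.returncode}:\n{proc.stderr}"
        )
    return proc.stdout

# Hypothetical invocation; the real script and arguments live in audio2landmark().
# run_and_report([sys.executable, "audio2landmark.py", "--audio", audio_path])
```

With this pattern, the Gradio error dialog would show the landmark script's own error instead of the opaque "Errors in generating landmarks!" assertion.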

oisilener1982 commented Jul 4, 2024

Here is the solution to the numpy error in SadTalker: OpenTalker/SadTalker#822 (comment)

But in MOFA I can't find my_awing_arch.py, preprocess.py, or audio.py. I have SadTalker and it works fine in Automatic1111 1.9.4.

kostebas commented Jul 6, 2024

Here is the solution to the numpy error in SadTalker: OpenTalker/SadTalker#822 (comment)

But in MOFA I can't find my_awing_arch.py, preprocess.py, or audio.py. I have SadTalker and it works fine in Automatic1111 1.9.4.

I have the same error :(

Did you manage to solve it?

I found preprocess.py at:
\MOFA-Video-Hybrid\sadtalker_audio2pose\src\face3d\util\preprocess.py

AniPortrait works well, but without lip sync. This command helped for AniPortrait with CUDA Toolkit 12.1: conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia

But SadTalker still does not work :(
