
Compatibility issues and dependency issues #6

Open
vitasha10 opened this issue Jul 16, 2024 · 4 comments

Comments

@vitasha10

I tried to install this on a fresh server and got the error below. It wants media_toolkit, but it is already installed.

$ python3 face2face/server.py --port 8020
Traceback (most recent call last):
  File "/home/vasya/face2face/face2face/server.py", line 4, in <module>
    from fast_task_api import FastTaskAPI, ImageFile, JobProgress, MediaFile, VideoFile
  File "/home/vasya/.local/lib/python3.10/site-packages/fast_task_api/__init__.py", line 1, in <module>
    from fast_task_api.fast_task_api import FastTaskAPI
  File "/home/vasya/.local/lib/python3.10/site-packages/fast_task_api/fast_task_api.py", line 5, in <module>
    from fast_task_api.core.routers._fastapi_router import SocaityFastAPIRouter
  File "/home/vasya/.local/lib/python3.10/site-packages/fast_task_api/core/routers/_fastapi_router.py", line 5, in <module>
    from media_toolkit.file_conversion import convert_to_upload_file_type
ModuleNotFoundError: No module named 'media_toolkit.file_conversion'
$ pip3 install media_toolkit
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: media_toolkit in /home/vasya/.local/lib/python3.10/site-packages (0.0.6)

You also have a mistake in

https://github.com/SocAIty/face2face/blob/main/pyproject.toml

here:

media-tookit[VideoFile] >= 0.0.6

It should be "toolkit", not "tookit"; the "l" is missing.
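
For reference, the corrected dependency line in pyproject.toml would presumably look like this (extras name and version bound taken from the broken line above; the surrounding table layout is an assumption):

```toml
[project]
dependencies = [
    # "toolkit", not "tookit"
    "media-toolkit[VideoFile]>=0.0.6",
]
```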

Installing this package doesn't help either:

socaity-face2face[service]
@vitasha10
Author

There are also errors outside the web service. My script:

import cv2

from face2face import Face2Face

try:
    f2f = Face2Face()
    swapped_img = f2f.swap_one(cv2.imread("src.jpg"), cv2.imread("target.jpg"))
except Exception as inst:
    print(type(inst))
    print(inst.args)
    print(inst)

The images are in the same folder, with those names.
Output:

Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /home/vasya/face2face/face2face/models/insightface./checkpoints/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /home/vasya/face2face/face2face/models/insightface./checkpoints/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /home/vasya/face2face/face2face/models/insightface./checkpoints/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /home/vasya/face2face/face2face/models/insightface./checkpoints/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /home/vasya/face2face/face2face/models/insightface./checkpoints/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (320, 320)
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
Killed

@w4hns1nn
Collaborator

Hi vitasha, thank you for raising the issues:

I currently work with

  • media-toolkit version 0.0.6 and
  • Python 3.10.11.

At least with that combination it works fine.

It would definitely help to set this up in a test environment with an older Python version.

Regarding the ONNX runtime: it is pinned to that old version because the roop version of insightface is a bit old.
It is probably possible to update the ONNX version and convert the models so that they are compatible with the newer runtime.

My system:

  • Windows 10
  • NVIDIA GeForce RTX 4060 Ti and NVIDIA GeForce GTX 1080 Ti
  • 32 GB RAM

On my system it works with both GPUs and also on CPU.

I've created a branch for compatibility-related tasks.
Do you think you could look into it?

I think Docker support would fit into another issue ;) - especially if it should work together with FastTaskAPI.

For now, if you want plain FastAPI, you can simply replace every @task_endpoint with a standard FastAPI route and drop FastTaskAPI. That should work fine. I'm still working on FastSDK to make the client-server interaction / SDK smooth.

Kind regards

@w4hns1nn w4hns1nn changed the title Please remove mistake from dependencies Compatibility issues and dependency issues Jul 16, 2024
@w4hns1nn
Collaborator

I found and solved the dependency problem. It was not in face2face but in the dependency fast-task-API which needed to be updated to the newer media-toolkit version. I've updated fast-task-API and built a new wheel for it.

I've pushed the changes. And built the package. Please try a clean reinstall.

Next step: ONNX update. I'll work on that on the compatibility_issue branch.

Let me know if your problem is solved.

@vitasha10
Author

vitasha10 commented Jul 17, 2024

It works now, but when someone installs from "git+htt..." the application also asks you (somewhere inside the code) to install a package via apt, so that dependency is now duplicated. The swap does work in the test dir, though.

Please note in README.md that the "Killed" message in the console means you ran out of RAM; I spent two days on this. It needs about 5 GB of RAM to swap a photo on CPU, and takes about 7 seconds after loading, 11 seconds including loading everything, and 20+ seconds the first time.
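
Such a README note could even be backed by a small pre-flight check. A minimal sketch, assuming Linux (it reads /proc/meminfo); the 5 GB threshold is just the rough CPU requirement reported in this thread:

```python
def mem_available_gb(meminfo_text: str) -> float:
    """Parse the MemAvailable line of a /proc/meminfo dump into gigabytes."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1]) / (1024 * 1024)  # value is in kB
    raise RuntimeError("MemAvailable not found in /proc/meminfo")

REQUIRED_GB = 5.0  # rough CPU photo-swap requirement reported in this issue

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        free = mem_available_gb(f.read())
    if free < REQUIRED_GB:
        print(f"Warning: only {free:.1f} GB RAM available; swapping needs "
              f"~{REQUIRED_GB:.0f} GB and the process may be killed by the OOM killer.")
```

Printing a warning instead of a bare "Killed" would have saved the two days of debugging described above.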
