Can I get viseme animation data? #235
You can use SSML-to-soundwave instead of text-to-soundwave, and set the viseme type to "FacialExpression" in your SSML string. Below is a valid SSML example to get blendshape data. That said, it might not be a good idea to drive a lipsync animation with 55 blendshapes unless your GPU is strong enough; I couldn't get acceptable performance on my laptop (RTX 3070, 8 GB), so I had to give up and switch to using the viseme ID.
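Something like this, following the Azure Speech docs (the voice name and text are just placeholders):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <!-- type="FacialExpression" makes each viseme event carry blendshape frames -->
    <mstts:viseme type="FacialExpression"/>
    This is a test sentence.
  </voice>
</speak>
```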
As skyworder said, to get the blendshapes you'll need SSML input with the mstts:viseme element 😁
Following your guidance, I obtained the blendshape data, but how can I use this data to make a MetaHuman do lipsync? I'd appreciate any guidance.
I've tried several times to drive lipsync with a multi-blend pose node, but it didn't work at all, so I shifted to using the viseme ID and offset time, which works well. Here's the main idea:
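Roughly, in plain C++ (all names here are illustrative; the real implementation would live in an Unreal AnimInstance or Blueprint): store each viseme event's ID and audio offset as it arrives, then each frame look up which viseme is active at the current playback time instead of streaming 55 blendshape weights.

```cpp
// Rough sketch: drive lipsync from viseme IDs + audio offsets.
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <vector>

struct VisemeEvent {
    int32_t VisemeId;      // Azure viseme ID (0 = silence)
    double  AudioOffsetMs; // event offset relative to the start of the audio
};

// Events must be sorted by AudioOffsetMs (the service emits them in order).
int32_t ActiveViseme(const std::vector<VisemeEvent>& Events, double PlaybackMs) {
    // Find the first event strictly later than the playback time...
    auto It = std::upper_bound(
        Events.begin(), Events.end(), PlaybackMs,
        [](double Time, const VisemeEvent& E) { return Time < E.AudioOffsetMs; });
    // ...so the event just before it is the one currently active.
    if (It == Events.begin()) return 0; // before the first viseme: silence
    return std::prev(It)->VisemeId;
}
```

Every animation tick, call ActiveViseme with the current audio playback time and feed the result into something like a Blend Poses by Int node (one pose per viseme ID); the blend node's interpolation smooths the transitions between poses.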
Can you give a detailed explanation? I am also trying to do that.
Hi, can I get viseme data for 3D characters' facial animation?
https://learn.microsoft.com/en-us/azure/ai-services/speech-service/how-to-speech-synthesis-viseme?tabs=3dblendshapes&pivots=programming-language-cpp#3d-blend-shapes-animation
I checked the values received in the viseme event here (sketched below) and confirmed that the other values were received correctly.
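Here is roughly what I am doing, following the linked C++ docs (simplified; the subscription key, region, voice, and text are placeholders):

```cpp
#include <speechapi_cxx.h>
#include <iostream>
#include <string>

using namespace Microsoft::CognitiveServices::Speech;

int main() {
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourRegion");
    auto synthesizer = SpeechSynthesizer::FromConfig(config);

    synthesizer->VisemeReceived += [](const SpeechSynthesisVisemeEventArgs& e) {
        // VisemeId and AudioOffset arrive even for plain text, but Animation
        // stays empty unless the request is SSML containing
        // <mstts:viseme type="FacialExpression"/>.
        std::cout << "VisemeId=" << e.VisemeId
                  << " AudioOffset=" << e.AudioOffset
                  << " AnimationBytes=" << e.Animation.size() << "\n";
    };

    // SSML with the mstts:viseme element (placeholder voice and text).
    std::string ssml =
        "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
        "xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>"
        "<voice name='en-US-JennyNeural'>"
        "<mstts:viseme type='FacialExpression'/>"
        "This is a test sentence."
        "</voice></speak>";

    auto result = synthesizer->SpeakSsmlAsync(ssml).get();
    return 0;
}
```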
But the viseme Animation data value is empty.
Are there any other settings needed to get that value?
Also, if you know how to connect the values to the MetaHuman's blendshapes, I would appreciate your help.