This repository has been archived by the owner on Jan 12, 2021. It is now read-only.
Is your question about OpenWhisk in general or about Dark Vision?
In Dark Vision, when you upload a video, you can specify its language so that Speech to Text can transcribe it correctly. For NLU, Dark Vision relies on the service's language auto-detection.
If you want to use NLU or STT in OpenWhisk (outside of Dark Vision), I see no limitation in language support.
Per the documentation of the NLU service (https://www.ibm.com/watson/developercloud/doc/natural-language-understanding/#supported-languages) and of Speech to Text (https://www.ibm.com/watson/developercloud/doc/speech-to-text/input.html#models), both services support multiple languages.
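As a sketch of how the language options surface in each of those APIs: STT selects a language via the `model` query parameter on `/v1/recognize`, and NLU accepts an optional `language` field in the `/v1/analyze` payload (omitting it triggers the auto-detection Dark Vision relies on). The base URL below is a placeholder, and the helper names are mine, not part of either SDK:

```python
# Sketch of the language-selection parameters per the linked docs.
# Helper names and the base URL are illustrative, not part of any Watson SDK.

def stt_recognize_url(base_url, model="es-ES_BroadbandModel"):
    """Build a Speech to Text /v1/recognize URL with an explicit language model."""
    return f"{base_url}/v1/recognize?model={model}"

def nlu_analyze_payload(text, language=None):
    """Build an NLU /v1/analyze payload; leave language=None to let the
    service auto-detect the language."""
    payload = {"text": text, "features": {"concepts": {}, "entities": {}}}
    if language is not None:
        payload["language"] = language
    return payload

print(stt_recognize_url("https://stream.watsonplatform.net/speech-to-text/api"))
print(nlu_analyze_payload("Hola mundo", language="es"))
```

The same parameters apply whether the services are called from a Dark Vision action or from any other OpenWhisk action.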
Is there any additional restriction to support them in OpenWhisk?