It might be a good feature if the author of a learning experience could record two audio files: 1) what the learner is supposed to ask; 2) what the character is supposed to answer. These files can then be transcribed using Watson's speech-to-text service (via the SpeechInputManager). The resulting transcripts can then be injected as a DialogueStep into the Watson dialogue (at root level). This way, users could quite flexibly extend the Q&A capabilities of an assistant, scaling its scope and applicability.
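A minimal sketch of the transcription step, assuming the ibm-watson Python SDK; the API key, service URL, and file names are placeholders:

```python
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials; in practice these would come from the app/Moodle config.
authenticator = IAMAuthenticator('YOUR_STT_APIKEY')
stt = SpeechToTextV1(authenticator=authenticator)
stt.set_service_url('https://api.us-south.speech-to-text.watson.cloud.ibm.com')

def transcribe(path: str) -> str:
    """Return the most likely transcript for one recorded audio file."""
    with open(path, 'rb') as audio:
        result = stt.recognize(audio=audio, content_type='audio/wav').get_result()
    # Take the top alternative of the first result block.
    return result['results'][0]['alternatives'][0]['transcript'].strip()

question_text = transcribe('learner_question.wav')   # 1) what the learner asks
answer_text = transcribe('character_answer.wav')     # 2) what the character answers
```

The two transcripts can then be pushed into the assistant via the Assistant v1 API (see the sketch further down in this thread).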
To make this a bit more sophisticated: it is possible to provide a webhook for Watson that is executed every time the assistant is queried, which we can use to store all user input in a dedicated Moodle table. If the user's voice input triggers a reliable response (with high recognition confidence), no action is needed. If not, the logged input could be used to improve the specific assistant, based on what users actually ask. Such an extension would have to be handled in the Moodle server backend, providing a management interface for cleaning the speech transcripts and grouping together variations of the same request (or adding further variations).
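Roughly what the receiving endpoint would do, as an illustrative Python/Flask sketch (the real Moodle backend would be PHP). It assumes the dialogue node forwards the recognized input text and intent confidence as webhook parameters; the route, threshold, and table name are made up:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off; tune per assistant

def store_user_input(text: str, confidence: float, flagged: bool) -> None:
    # Stub: persist to a (hypothetical) review table, e.g. mdl_assistant_log.
    # Flagged rows would surface in the management interface for cleaning
    # and grouping of request variations.
    print(f"log: {text!r} conf={confidence:.2f} flagged={flagged}")

@app.route('/watson-log', methods=['POST'])
def watson_log():
    payload = request.get_json()
    text = payload.get('input', {}).get('text', '')
    intents = payload.get('intents', [])
    confidence = intents[0]['confidence'] if intents else 0.0
    # High-confidence queries need no action; low-confidence ones are flagged.
    store_user_input(text, confidence, flagged=confidence < CONFIDENCE_THRESHOLD)
    return jsonify({'ok': True})
```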
Generally, this functionality seems misplaced in the app; it would fit better in the Moodle editor. Moreover, it should be considered whether an interface to an LLM would be more useful.
In GitLab by @Wild on Feb 8, 2021, 16:42
It is possible to create dialogue nodes (and associated intents) via the Watson API. Creating a new dialogue node programmatically is documented here: https://cloud.ibm.com/apidocs/assistant/assistant-v1?code=dotnet-standard#createdialognode (also check out the documentation for intents and workspaces).
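For reference, a minimal sketch against that endpoint using the ibm-watson Python SDK (the .NET calls from the linked docs are analogous); the workspace ID, node names, and transcripts are placeholders:

```python
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_ASSISTANT_APIKEY')
assistant = AssistantV1(version='2021-06-14', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

WORKSPACE_ID = 'YOUR_WORKSPACE_ID'
question_text = 'transcript of recording 1'  # placeholder; see transcription sketch above
answer_text = 'transcript of recording 2'    # placeholder

# 1) Register the transcribed learner question as a new intent.
assistant.create_intent(
    workspace_id=WORKSPACE_ID,
    intent='authored_question_1',        # made-up naming scheme
    examples=[{'text': question_text}],
)

# 2) Create a dialogue node at root level (no parent) that answers
#    with the second transcript when the new intent matches.
assistant.create_dialog_node(
    workspace_id=WORKSPACE_ID,
    dialog_node='authored_answer_1',
    conditions='#authored_question_1',
    output={'generic': [{'response_type': 'text',
                         'values': [{'text': answer_text}]}]},
    title='Authored Q&A 1',
)
```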