Web Chat is designed to be highly customizable. To build your own UI, you can use React Hooks to connect your UI components to the Web Chat API.
To enable the Web Chat API, all UI components must be located under the <Composer> component. You can refer to our plain UI customization sample for details.
React Hooks will make your code cleaner and shorter, and also greatly improve readability.
Web Chat exposes its APIs through React Hooks. This API surface enables us to freely move things behind the scenes, introduce new APIs, and safely deprecate APIs. It will also make it easier to shuffle work between the internal Redux store and React Context.
We design our hooks largely with two basic shapes:
- Actions: functions that you can call at any time to perform a side effect
- Properties: getter functions with an optional setter
  - This is the same as the React State Hook pattern, but setters are optional
  - If the value changes, React will call your render function again
Every action hook returns a function that can be called at a later point. For example, to focus on the send box, use the following code:
const focusSendBox = useFocusSendBox();
focusSendBox();
All properties follow the React State Hook pattern. For example, to get and set the value of the send box, use the following code:
const [sendBoxValue, setSendBoxValue] = useSendBoxValue();
console.log(`The send box value is "${sendBoxValue}".`);
setSendBoxValue('Hello, World!');
Note: some properties may not be settable.
The following is the list of hooks supported by the Web Chat API.
useActivities
useAdaptiveCardsHostConfig
useAdaptiveCardsPackage
useAvatarForBot
useAvatarForUser
useConnectivityStatus
useDictateInterims
useDictateState
useDisabled
useEmitTypingIndicator
useFocusSendBox
useGrammars
useGroupTimestamp
useLanguage
useLastTypingAt
useLocalize
useMarkActivityAsSpoken
usePerformCardAction
usePostActivity
useReferenceGrammarID
useRenderActivity
useRenderAttachment
useRenderMarkdownAsHTML
useScrollToEnd
useSendBoxValue
useSendEvent
useSendFiles
useSendMessage
useSendMessageBack
useSendPostBack
useSendTypingIndicator
useShouldSpeakIncomingActivity
useStartDictate
useStopDictate
useStyleOptions
useStyleSet
useSubmitSendBox
useSuggestedActions
useTimeoutForSend
useUserID
useUsername
useVoiceSelector
useWebSpeechPonyfill
useActivities(): [Activity[]]
This function will return a list of activities.
useAdaptiveCardsHostConfig(): [AdaptiveCards.HostConfig]
This function is only available in full bundle. The function will return the Adaptive Cards Host Config used for styling Adaptive Cards.
To modify this value, change the value in the style options prop passed to Web Chat.
useAdaptiveCardsPackage(): [AdaptiveCards]
This function is only available in full bundle. The function will return the Adaptive Cards package used for building and rendering Adaptive Cards.
To modify this value, change the value in the style options prop passed to Web Chat.
useAvatarForBot(): [{
image: string,
initials: string
}]
This function will return the image and initials of the bot. Both image and initials are optional and can be falsy.
To set the avatar for the bot, change the props passed to Web Chat via style options.
useAvatarForUser(): [{
image: string,
initials: string
}]
This function will return the image and initials of the user. Both image and initials are optional and can be falsy.
To set the avatar for the user, change the props passed to Web Chat via style options.
useConnectivityStatus(): [string]
This function will return the Direct Line connectivity status:
- connected: Connected
- connectingslow: Connecting is incomplete and more than 15 seconds have passed
- error: Connection error
- notconnected: Not connected, could be due to invalid credentials
- reconnected: Reconnected after interruption
- reconnecting: Reconnecting after interruption
- sagaerror: Errors on the JavaScript renderer; please see the browser's console
- uninitialized: Initial connectivity state; never connected and not attempting to connect
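As a sketch, a component could map these status strings to user-facing messages. Only the status keys below come from the API; the display strings and the `connectivityMessage` helper are illustrative:

```javascript
// Map Direct Line connectivity statuses to display strings.
// The messages here are illustrative; only the status keys come from the API.
const CONNECTIVITY_MESSAGES = {
  connected: '',
  connectingslow: 'Taking longer than usual to connect...',
  error: 'Unable to connect.',
  notconnected: 'Unable to connect. Please check your credentials.',
  reconnected: '',
  reconnecting: 'Network interruption occurred. Reconnecting...',
  sagaerror: 'An error occurred in the renderer. See the browser console.',
  uninitialized: ''
};

function connectivityMessage(status) {
  // Fall back to an empty string for unknown statuses.
  return CONNECTIVITY_MESSAGES[status] || '';
}

console.log(connectivityMessage('reconnecting'));
// "Network interruption occurred. Reconnecting..."
```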
useDictateInterims(): [string[][]]
This function will return active interims processed from a dictation event.
The first array represents separate sentences while the second array represents potential ambiguities or alternatives for the same sentence.
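For example, a display string could be derived by taking the first (most likely) alternative of each sentence. This helper is a sketch, not part of the Web Chat API:

```javascript
// interims is string[][]: the outer array holds sentences, the inner array
// holds alternatives for that sentence. Pick the first alternative of each
// sentence and join them for display.
function interimsToDisplayText(interims) {
  return interims.map(alternatives => alternatives[0]).join(' ');
}

const interims = [['Hello'], ['Bellevue', 'Bellview', 'Bellvue']];

console.log(interimsToDisplayText(interims)); // "Hello Bellevue"
```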
useDictateState(): [string]
This function will return one of the following dictation states:
- IDLE: Recognition engine is idle; not recognizing
- WILL_START: Will start recognition after synthesis completes
- STARTING: Recognition engine is starting; not accepting any inputs
- DICTATING: Recognition engine is accepting input
- STOPPING: Recognition engine is stopping; not accepting any inputs
Please refer to Constants.DictateState in botframework-webchat-core for up-to-date details.
To control the dictate state, use the useStartDictate and useStopDictate hooks.
useDisabled(): [boolean]
This function will return whether the UI should be disabled or not. All interactable UI components should honor this value.
To modify this value, change the value in the style options prop passed to Web Chat.
useEmitTypingIndicator(): () => void
When called, this function will send a typing activity from the user to the bot.
useFocusSendBox(): () => void
When called, this function will set focus on the send box.
useGrammars(): [string[]]
This function will return grammars for speech-to-text. Grammars are a list of words provided by the implementer for the speech-to-text engine to bias towards. They are commonly used for selecting the correct words among ones with the same or similar pronunciations, e.g. Bellevue vs. Bellview vs. Bellvue.
To modify this value, change the value in the style options prop passed to Web Chat.
useGroupTimestamp(): [number]
This function will return the interval for grouping similar activities with a single timestamp. The interval is represented in milliseconds.
For example, if this value is 5000, successive activities within 5 seconds will share the timestamp of the first message.
To control the groupTimestamp state, change the props passed to Web Chat via style options.
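The grouping rule can be sketched as: an activity shows its own timestamp only when it arrives more than groupTimestamp milliseconds after the first activity of the current group. The helper below is a hypothetical illustration, not Web Chat's internal implementation:

```javascript
// Given activity timestamps (ms) and a grouping interval, return a boolean
// per activity: true if it starts a new timestamp group (and thus shows a
// timestamp), false if it shares the first activity's timestamp.
function startsNewGroup(timestamps, groupTimestamp) {
  let groupStart = -Infinity;

  return timestamps.map(t => {
    if (t - groupStart > groupTimestamp) {
      groupStart = t; // This activity starts a new group.
      return true;
    }

    return false; // Shares the timestamp of the first activity in the group.
  });
}

console.log(startsNewGroup([0, 2000, 4000, 9000], 5000));
// [true, false, false, true]
```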
useLanguage(): [string]
This function will return the language of the UI. All UI components should honor this value.
To modify this value, change the value in the style options prop passed to Web Chat.
useLastTypingAt(): [{
[id: string]: number
}]
This function will return a map of the last typing time of all participants. The time is based on the client clock.
This property is computed on every incoming activity.
useLocalize(identifier: string): string
This function will return a localized string for the given identifier. It honors the language settings from the useLanguage hook.
To modify this value, change the value in the style options prop passed to Web Chat.
useMarkActivityAsSpoken(): (activity: Activity) => void
When called, this function will mark the activity as spoken and remove it from the text-to-speech queue.
usePerformCardAction(): ({
displayText: string,
text: string,
type: string,
value: string
}) => void
When called, this function will perform the card action based on its type. The card action will be performed by cardActionMiddleware.
A list of supported card action types can be found in the Direct Line Activity card action schema.
usePostActivity(): (activity: Activity) => void
When called, this function will post the activity to the bot on behalf of the user.
You can use this function to send any type of activity to the bot; however, we highly recommend limiting the activity types to one of the following:
- event
- message
- typing
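For instance, a minimal event activity passed to this function might look like the following; the name and value shown here are hypothetical:

```javascript
// A minimal "event" activity, as might be passed to the function returned by
// usePostActivity(). The name and value fields are hypothetical examples.
const activity = {
  type: 'event',
  name: 'buttonClicked',
  value: { buttonId: 'help' }
};

console.log(activity.type); // "event"
```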
useReferenceGrammarID(): [string]
This function will return the reference grammar ID used to improve speech-to-text performance when used with Cognitive Services.
This value is not controllable and is passed to Web Chat from the Direct Line channel.
useRenderActivity(): ({
activity: Activity,
renderAttachment: ({
activity: Activity,
attachment: Attachment
}) => React.Element,
timestampClassName: string
}) => React.Element
This function is for rendering an activity into a React element. The caller will need to pass activity, timestampClassName, and a render function for the attachment. This function is a composition of activityRendererMiddleware, which is passed as a prop.
useRenderAttachment(): ({
activity: Activity,
attachment: Attachment
}) => React.Element
This function is for rendering an attachment into a React element. The caller will need to pass activity and attachment as parameters. This function is a composition of attachmentRendererMiddleware, which is passed as a prop:
() => next => { activity, attachment } => next({ activity, attachment })
useRenderMarkdownAsHTML(): (markdown: string): string
This function will return a function that, when called, will render Markdown into an HTML string. For example:
const renderMarkdownAsHTML = useRenderMarkdownAsHTML();
renderMarkdownAsHTML('Hello, World!') === '<p>Hello, World!</p>\n';
To modify this value, change the value in the style options prop passed to Web Chat.
useScrollToEnd(): () => void
This function will return a function that, when called, will scroll the transcript view to the end.
useSendBoxValue(): [string, (value: string) => void]
This function will return the current value of the send box and the setter function to change the value.
useSendEvent(): (name: string, value: string) => void
When called, this function will send an event activity to the bot.
useSendFiles(): (files: (Blob | File)[]) => void
When called, this function will send a message activity with one or more file attachments to the bot, including these operations:
- Convert each File into an object URL
- Generate a thumbnail, using a Web Worker and an offscreen canvas if supported
If you have an ArrayBuffer, wrap it in a Blob (for example, new Blob([arrayBuffer])) before calling this function.
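A sketch of that conversion in plain JavaScript, independent of Web Chat:

```javascript
// Wrap raw bytes (an ArrayBuffer or typed array) in a Blob so they can be
// passed to the function returned by useSendFiles().
const bytes = new Uint8Array([0x89, 0x50, 0x4e, 0x47]); // example bytes
const blob = new Blob([bytes], { type: 'image/png' });

console.log(blob.size); // 4
console.log(blob.type); // "image/png"
```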
useSendMessage(): (text: string, method: string) => void
When called, this function will send a text message activity to the bot.
You can optionally include the input method by which the text message was collected. Currently, if specified, only speech is supported.
useSendMessageBack(): (value: any, text: string, displayText: string) => void
When called, this function will send a messageBack activity to the bot.
useSendPostBack(): (value: any) => void
When called, this function will send a postBack activity to the bot.
useSendTypingIndicator(): [boolean]
This function will return whether the typing indicator will be sent to the bot when the send box value is being modified.
To modify this value, change the value in the style options prop passed to Web Chat.
useShouldSpeakIncomingActivity(): [boolean, (value: boolean) => void]
This function will return a boolean and a function.
- boolean: whether the next incoming activity will be queued for text-to-speech
- function: a setter function to control the behavior
If the last outgoing message was sent via speech, Web Chat will set this state to true, so the response from the bot will be synthesized as speech.
useStartDictate(): () => void
This function will open the microphone for dictation. You should only call this function via a user-initiated gesture. Otherwise, the browser may block access to the microphone.
useStopDictate(): () => void
This function will close the microphone. It will not send the interims to the bot, but leave the interims in the send box.
useStyleOptions(): [StyleOptions]
This function will return the style options. UI components should honor the styling preferences.
The value is not the same as the props. Web Chat will merge the style options passed in props with the default values specified in defaultStyleOptions.js.
To modify the styleOptions state, change the props you pass to Web Chat.
useStyleSet(): [StyleSet]
This function will return the style set.
To modify this value, change the value in the style options prop passed to Web Chat.
useSubmitSendBox(): () => void
This function will send the text in the send box to the bot and clear the send box.
useSuggestedActions(): [CardAction[], (CardAction[]) => void]
This function will return an array and a setter function.
- array: a list of suggested actions that should be shown to the user
- function: a setter function to clear suggested actions; it accepts only an empty array or a falsy value
The suggested actions are computed from the last message activity sent from the bot. If the user posts an activity, the suggested actions will be cleared.
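That derivation can be sketched as a pure function over the transcript, assuming the Direct Line activity shape (from.role, suggestedActions.actions); the helper itself is hypothetical and simplified:

```javascript
// Derive suggested actions from a transcript: they come from the last
// activity when it is a bot message, and are cleared once the user posts.
function deriveSuggestedActions(activities) {
  const last = activities[activities.length - 1];

  if (!last || last.from.role === 'user') {
    return []; // A user activity clears suggested actions.
  }

  return (last.suggestedActions && last.suggestedActions.actions) || [];
}

const activities = [
  { type: 'message', from: { role: 'user' }, text: 'hi' },
  {
    type: 'message',
    from: { role: 'bot' },
    text: 'Pick one:',
    suggestedActions: {
      actions: [{ type: 'imBack', title: 'Red', value: 'red' }]
    }
  }
];

console.log(deriveSuggestedActions(activities).length); // 1
```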
useTimeoutForSend(): [number]
This function will return the amount of time to wait before a sending activity is considered unsuccessful. The interval is represented in milliseconds. Due to network partitioning problems, activities that fail to send may still eventually be delivered to the bot.
To modify this value, change the value in the style options prop passed to Web Chat.
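As a sketch, a renderer could compute a send status for an outgoing activity from this value and the client clock; the helper and status strings below are hypothetical:

```javascript
// Decide how an outgoing activity's send status should be displayed, given
// when it was sent, the current time, and the send timeout (all in ms).
function sendStatus(sentAt, now, timeoutForSend) {
  return now - sentAt > timeoutForSend ? 'send failed' : 'sending';
}

console.log(sendStatus(0, 5000, 20000)); // "sending"
console.log(sendStatus(0, 25000, 20000)); // "send failed"
```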
useUserID(): [string]
This function will return the user ID.
To modify this value, change the value in the style options prop passed to Web Chat.
useUsername(): [string]
This function will return the username.
To modify this value, change the value in the style options prop passed to Web Chat.
useVoiceSelector(activity: Activity): (voices: SpeechSynthesisVoice[]) => SpeechSynthesisVoice
This function will return a function that can be called to select the voice for a specific activity.
To modify this value, change the value in the style options prop passed to Web Chat.
useWebSpeechPonyfill(): [{
SpeechGrammarList: SpeechGrammarList,
SpeechRecognition: SpeechRecognition,
speechSynthesis: SpeechSynthesis,
SpeechSynthesisUtterance: SpeechSynthesisUtterance
}]
This function will return the ponyfill for the Web Speech API.
To modify this value, change the value in the style options prop passed to Web Chat.
The following hooks provide functionality for specific parts of the user experience.
These are hooks that are specific to the microphone button.
useMicrophoneButtonClick(): () => void
When called, this function will toggle the microphone open or closed.
useMicrophoneButtonDisabled(): [boolean]
This function will return whether the microphone button is disabled. This is different from useDisabled(): the microphone button could also be disabled because it is currently starting or stopping.
This value can be partly controlled through Web Chat props.
These are hooks that are specific for the send box.
useSendBoxDictationStarted(): [boolean]
This function will return whether speech-to-text detection has been started or not.
These are hooks that are specific to the text box in the send box.
useTextBoxSubmit(): (setFocus: boolean) => void
This function will send the text box value as a message to the bot. In addition to the original useSubmitSendBox hook, this function will also scroll to the bottom and, optionally, set focus on the send box.
Setting focus is useful in phone scenarios, where the virtual keyboard is only shown when a text box is focused.
useTextBoxValue(): [string, (value: string) => void]
This function will return a string and a function.
- string: the text box value
- function: the setter function to set the text box value.
The setter function will call the setter of useSendBoxValue and will also stop dictation if it has started.
These are hooks that are specific to the typing indicator.
useTypingIndicatorVisible(): [boolean]
This function will return whether the typing indicator should be visible or not. This function is time-sensitive, meaning that the value will change as time passes.
This function derives the visibility of the typing indicator from:
- the typingAnimationDuration value specified in style options, in milliseconds
- values from the useLastTypingAt hook
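That derivation can be sketched as pure logic: the indicator is visible while any participant typed within the animation duration. The helper below is a hypothetical illustration, not Web Chat's internal implementation:

```javascript
// lastTypingAt maps participant IDs to the client time (ms) they last typed.
// The indicator is visible while anyone typed within typingAnimationDuration.
function typingIndicatorVisible(lastTypingAt, typingAnimationDuration, now) {
  return Object.values(lastTypingAt).some(
    at => now - at < typingAnimationDuration
  );
}

const lastTypingAt = { bot1: 1000, bot2: 4000 };

console.log(typingIndicatorVisible(lastTypingAt, 5000, 8000)); // true
console.log(typingIndicatorVisible(lastTypingAt, 5000, 20000)); // false
```

Because the result depends on the current time, a component using it would re-evaluate as time passes, which is why the hook is described as time-sensitive.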