Issue #, if available:
Description of changes:
When using streaming mode, single tokens are published to SQS and then processed by a Lambda function in micro-batches of 10 items.
Each message is then published as a mutation to AppSync, which triggers a refresh of the UI. Tokens are reordered client side before the resulting text is displayed.
We noticed that the text is often scrambled and gets reordered on each refresh. Investigation showed that records in SQS batches arrive out of order, and the Lambda Powertools batch processor processes records in the order they are received. This causes a lot of client-side reordering, since tokens are received individually.
E.g.:
To improve the experience, we sort the records in the batch by their sequenceNumber before passing them to the Lambda Powertools batch processor. In the same example as above, the batch will be reordered as [Hello, my, friend!] and the tokens sent in the correct order.
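The sorting step described above could look roughly like this (a minimal sketch, assuming each SQS record body is JSON carrying a numeric `sequenceNumber` field; the exact message shape and field names here are illustrative, not the actual schema used in this repo):

```python
import json


def sort_records_by_sequence(event: dict) -> list:
    """Return the SQS batch records ordered by their sequenceNumber.

    Assumes each record's body is a JSON object with a numeric
    'sequenceNumber' field (hypothetical message shape).
    """
    records = event.get("Records", [])
    return sorted(
        records,
        key=lambda r: int(json.loads(r["body"])["sequenceNumber"]),
    )


# Example: a micro-batch received out of order.
event = {
    "Records": [
        {"body": json.dumps({"sequenceNumber": 2, "token": "my"})},
        {"body": json.dumps({"sequenceNumber": 1, "token": "Hello"})},
        {"body": json.dumps({"sequenceNumber": 3, "token": "friend!"})},
    ]
}

ordered = [
    json.loads(r["body"])["token"] for r in sort_records_by_sequence(event)
]
# ordered is ["Hello", "my", "friend!"]
```

The sorted list can then be handed to the batch processor in place of the raw `event["Records"]`, so downstream mutations are emitted in sequence order.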
This is not an absolute ordering but improves the experience significantly already.
With this implementation, we observed that the number of client-side token reorderings is massively reduced, from an average of 6-10 per received token to an average of 0-1.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.