Publish large message payloads to S3 #248
Merged
Motivation
The 'official' recommendation from Amazon is to offload larger message bodies to S3 and publish a message containing a reference to the uploaded file, so that is the approach we took.
Explanation of Changes
To leave some headroom we upload to S3 at roughly 100 KB rather than at the exact SQS limit of 256 KB. This means we should realistically always be able to publish the reference message body together with the 'normal' message attributes.
This PR also adds a number of extra log statements to make debugging SQS-related issues easier.
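The offload decision described above can be sketched as follows. This is a minimal illustration, not the actual plugin code: the helper names (`shouldOffload`, `largeMessageKey`) and the exact constant are assumptions; only the ~100 KB threshold and the `tx-h${BLOCK_HEIGHT}-i${MSG_INDEX}.json` key format come from this PR description.

```go
package main

import "fmt"

// offloadThreshold is ~100 KB, deliberately well below the 256 KB SQS hard
// limit so the reference message plus its attributes always fits in a
// normal SQS message. (Hypothetical name; the real constant may differ.)
const offloadThreshold = 100 * 1024

// shouldOffload reports whether a message body is large enough to be
// uploaded to S3 instead of being published to SQS directly.
func shouldOffload(body []byte) bool {
	return len(body) > offloadThreshold
}

// largeMessageKey builds the S3 object key for an offloaded payload,
// matching the "tx-h${BLOCK_HEIGHT}-i${MSG_INDEX}.json" format from the PR.
func largeMessageKey(blockHeight int64, msgIndex int) string {
	return fmt.Sprintf("tx-h%d-i%d.json", blockHeight, msgIndex)
}

func main() {
	body := make([]byte, 200*1024) // a 200 KB payload exceeds the threshold
	if shouldOffload(body) {
		fmt.Println("upload to S3 as", largeMessageKey(42, 0))
	}
}
```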
Testing
You'll need to set up https://github.com/adobe/S3Mock on your local machine (the Docker image is super easy to spin up). Once you have it running, make sure to create a bucket before doing anything else:
curl --request PUT "http://localhost:9444/indexer-localnet-large-messages/"
You should also have the SQS emulator running. If you're not using the seda-explorer repository devcontainer docker-compose file as a reference, please double-check all the ports used in the commands.

Caution: To make life easier, add the following 2 lines to ./scripts/local_setup.sh below the "# configure sedad" comment on line 32.

Now build the plugin in dev mode and start the chain with all the expected environment variables:
Once the chain is running and past block 1, you can submit a code upload transaction. I used the wormhole core contract, but most contracts should be fine:
Once the block with this TX is committed you should see that a file was uploaded to the S3 mock under the key
"tx-h${BLOCK_HEIGHT}-i${MSG_INDEX}.json"
with the JSON payload as the message body, and on the queue you should see a message with the following payload:

Related PRs and Issues
Closes: #247