Publish large message payloads to S3 #248

Merged: 2 commits merged into main from fix/publish-large-messages on Apr 22, 2024
Conversation

@Thomasvdam (Member) commented Apr 18, 2024

Motivation

Amazon's 'official' recommendation for payloads above the SQS size limit is to offload the larger message body to S3 and publish a message containing a reference to the uploaded file, which is why we went for this approach as well.

Explanation of Changes

To leave ourselves some headroom we upload to S3 once a message body reaches ~100KB rather than at SQS's exact limit of 256KB. This means we should realistically always be able to publish the upload-reference message body together with the 'normal' message attributes.
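
For context, the publish path conceptually looks something like the sketch below (the type and helper names are hypothetical; only the ~100KB threshold and the reference payload shape come from this PR):

package indexing

import (
    "context"
    "encoding/json"
)

// maxInlineSize is ~100KB, comfortably below SQS's 256KB hard limit,
// leaving headroom for message attributes and envelope overhead.
const maxInlineSize = 100 * 1024

// publisher stands in for the plugin's real SQS/S3 clients; the type and
// both function fields are hypothetical names used only for this sketch.
type publisher struct {
    sendToQueue    func(ctx context.Context, body []byte) error
    uploadToBucket func(ctx context.Context, body []byte) (key, etag string, err error)
}

func (p *publisher) publish(ctx context.Context, body []byte) error {
    if len(body) <= maxInlineSize {
        // Small enough to publish inline, as before.
        return p.sendToQueue(ctx, body)
    }
    // Too large: offload the payload to S3 and publish a reference instead.
    key, etag, err := p.uploadToBucket(ctx, body)
    if err != nil {
        return err
    }
    ref, err := json.Marshal(map[string]any{
        "type": "large-message",
        "data": map[string]string{"key": key, "ETag": etag},
    })
    if err != nil {
        return err
    }
    return p.sendToQueue(ctx, ref)
}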

This PR also adds a number of extra logs to make debugging SQS-related issues easier.

Testing

You'll need to set up https://github.com/adobe/S3Mock on your local machine (the Docker image is super easy to spin up; an example command follows below). Once you have it running, make sure to create a bucket before doing anything else: curl --request PUT "http://localhost:9444/indexer-localnet-large-messages/". You should also have the SQS emulator running. If you're not using the seda-explorer repository's devcontainer docker-compose file as a reference, please double-check all the ports used in the commands.
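
For reference, a minimal S3Mock invocation might look like this (the 9444:9090 port mapping is an assumption based on the image's default HTTP port and the endpoint used below):

docker run -d -p 9444:9090 adobe/s3mock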

Caution

To make life easier, add the following two lines to ./scripts/local_setup.sh below the "# configure sedad" comment on line 32:

$BIN config set app streaming.abci.keys '["*"]'
$BIN config set app streaming.abci.plugin '"abci"'

Now build the plugin in dev mode and start the chain with all the expected environment variables:

make build-plugin-dev && \
COSMOS_SDK_ABCI=PATH_TO_REPOSITORY_ROOT/seda-chain/build/plugin \
SQS_QUEUE_URL=http://localhost:4100/local-updates.fifo \
SQS_ENDPOINT=http://localhost:4100 \
S3_ENDPOINT=http://localhost:9444 \
S3_LARGE_MSG_BUCKET_NAME="indexer-localnet-large-messages" \
PLUGIN_LOG_FILE=./test.log \
PLUGIN_LOG_LEVEL=trace \
./scripts/local_setup.sh

Once the chain is running and past block 1, you can submit a code upload transaction. I used the Wormhole core contract, but most contracts should be fine:

$BIN tx wasm store PATH_TO_LARGE_WASM --from acc1 --keyring-backend test --gas 300000000000 -y

Once the block with this TX is committed, you should see that a file was uploaded to the S3 mock under the key "tx-h${BLOCK_HEIGHT}-i${MESSAGE_INDEX}.json" with the JSON payload as the message body, and on the queue you should see a message with the following payload:

{
  "type": "large-message",
  "data": { "key": "tx-h${BLOCK_HEIGHT}-i${MESSAGE_INDEX}.json", "ETag": "\"74c6403e1d2713ff795a958be03e38ff\"" }
}
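
For completeness, here is a sketch of how a consumer might resolve such a reference (hypothetical code, not part of this PR; it assumes aws-sdk-go-v2 and the payload shape shown above):

package indexing

import (
    "context"
    "encoding/json"
    "fmt"
    "io"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

// queueMessage mirrors the payload shape shown above; the struct itself
// is a hypothetical consumer-side type, not something this PR defines.
type queueMessage struct {
    Type string `json:"type"`
    Data struct {
        Key  string `json:"key"`
        ETag string `json:"ETag"`
    } `json:"data"`
}

// resolveBody returns the actual payload: the inline body for regular
// messages, or the S3 object contents for "large-message" references.
func resolveBody(ctx context.Context, client *s3.Client, bucket, raw string) ([]byte, error) {
    var msg queueMessage
    if err := json.Unmarshal([]byte(raw), &msg); err != nil || msg.Type != "large-message" {
        return []byte(raw), nil // not a reference, use the body as-is
    }
    out, err := client.GetObject(ctx, &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(msg.Data.Key),
    })
    if err != nil {
        return nil, fmt.Errorf("fetching %s from S3: %w", msg.Data.Key, err)
    }
    defer out.Body.Close()
    return io.ReadAll(out.Body)
}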

Related PRs and Issues

Closes: #247

@Thomasvdam requested a review from a team on April 18, 2024 15:19
@Thomasvdam force-pushed the fix/publish-large-messages branch 3 times, most recently from 061c033 to 2a09543 on April 19, 2024 13:28
@hacheigriega (Member) commented Apr 19, 2024

Not related to this PR, but I just found that the package names of plugins/indexing/auth and plugins/log are bank and logger, respectively. I think those should be updated so that the packages can be imported without aliases.
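
For illustration, this is the aliasing the mismatch currently forces on importers (the module path here is an assumption):

// plugins/indexing/auth declares `package bank`, so importers must alias
// the path to get a local name that matches the directory:
import auth "github.com/sedaprotocol/seda-chain/plugins/indexing/auth"

// Once the package clause matches its directory, the plain import suffices:
import "github.com/sedaprotocol/seda-chain/plugins/indexing/auth"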

@Thomasvdam merged commit 0897ffb into main on Apr 22, 2024
16 of 17 checks passed
@Thomasvdam deleted the fix/publish-large-messages branch on April 22, 2024 16:04
Successfully merging this pull request may close these issues.

🐛 Large messages are not published