Replace DynamoDB Streams for the notifications-lambda with SQS and an AppSync pipeline resolver
#150
Co-Authored-By: @andrew-nowak
https://trello.com/c/VzkeAK2f/554-migrate-pinboard-to-relational-database
In the database ADR (specifically `pinboard/ADRs/database.md`, line 39 as of ce8309a) we had planned to use `invoke_lambda` (from within the Aurora DB engine) when we move from Dynamo to RDS. However, upon further investigation/experimentation, because we must use Aurora Serverless v1 (it's the only thing which supports the `data-api`, which AppSync relies upon) and that doesn't support attaching IAM roles, we cannot permission the RDS cluster to invoke the lambda - putting an end to that approach. As the changes to the database ADR in this PR explain, that leaves us with two choices...

- add a lambda and RDS proxy between AppSync and RDS (as explained in https://aws.amazon.com/blogs/mobile/appsync-graphql-sql-rds-proxy) - this seems like too much infrastructure complexity (at this point)
- convert the `createItem` AppSync resolver to an AppSync 'pipeline' resolver, where the first function does the DB insert as before, then the second function invokes the lambda (and we leave the lambda to look up what it needs to from the DB, a shame but worth it) - this is the solution we went for in this PR and can be done before any re-platforming from DynamoDB to RDS (to make that task simpler later on); the wiring is sketched just below this list
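For illustration only, here's a minimal sketch of what that pipeline wiring could look like using the higher-level `aws-cdk-lib/aws-appsync` constructs - the construct IDs, data sources and mapping templates are placeholders, not the actual pinboard CDK code:

```typescript
import * as cdk from "aws-cdk-lib";
import * as appsync from "aws-cdk-lib/aws-appsync";
import * as lambda from "aws-cdk-lib/aws-lambda";

// Sketch only: IDs, templates and the surrounding stack are illustrative.
export function wireCreateItemPipeline(
  scope: cdk.Stack,
  api: appsync.GraphqlApi,
  itemsDataSource: appsync.BaseDataSource, // the data source createItem already writes to
  notificationsLambda: lambda.IFunction
) {
  // Function 1: the existing createItem insert, now a pipeline 'resolver function'.
  const insertItemFn = new appsync.AppsyncFunction(scope, "InsertItemFunction", {
    api,
    name: "insertItem",
    dataSource: itemsDataSource,
    requestMappingTemplate: appsync.MappingTemplate.fromFile("mapping-templates/createItem.request.vtl"),
    responseMappingTemplate: appsync.MappingTemplate.fromFile("mapping-templates/createItem.response.vtl"),
  });

  // Function 2: invoke the notifications-lambda (registered as a Lambda data
  // source) with the output of the insert, which includes the generated ID.
  const notificationsDataSource = api.addLambdaDataSource(
    "NotificationsDataSource",
    notificationsLambda
  );
  const invokeNotificationsFn = new appsync.AppsyncFunction(scope, "InvokeNotificationsFunction", {
    api,
    name: "invokeNotifications",
    dataSource: notificationsDataSource,
    requestMappingTemplate: appsync.MappingTemplate.fromString(`{
      "version": "2018-05-29",
      "operation": "Invoke",
      "payload": $util.toJson($ctx.prev.result)
    }`),
    // Pass the insert result straight through, so the Mutation still returns
    // the inserted item rather than whatever the lambda returns.
    responseMappingTemplate: appsync.MappingTemplate.fromString("$util.toJson($ctx.prev.result)"),
  });

  // Pipeline resolver on the createItem Mutation: insert first, then notify.
  new appsync.Resolver(scope, "CreateItemPipelineResolver", {
    api,
    typeName: "Mutation",
    fieldName: "createItem",
    pipelineConfig: [insertItemFn, invokeNotificationsFn],
    requestMappingTemplate: appsync.MappingTemplate.fromString("{}"),
    responseMappingTemplate: appsync.MappingTemplate.fromString("$util.toJson($ctx.result)"),
  });
}
```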
What does this change?

- convert the existing `createItem` resolver (defined in CDK) into a 'resolver function'
- add the `notifications-lambda` as an AppSync 'data source' and add a 'resolver function' to invoke it, with the output of the insert (which includes the inserted item's generated ID)
- combine these two 'resolver functions' into a 'pipeline' resolver on the `createItem` Mutation
- change the `notifications-lambda` to receive the payload sent from the new 'resolver function' above (rather than the 'DynamoDB Streams' payload) and then queue it on an SQS queue, so that the pipeline resolver can return quickly (and the user/client isn't waiting on all the notifications being sent before they know their message has been inserted)...
- ...the `notifications-lambda` will also be able to be invoked with an `SQSEvent` (containing the item that it has just queued) to then do roughly what it used to, i.e. look up which users need to receive a push notification for that item and send out all the push notifications (see the handler sketch after this list)
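As a rough sketch of that dual entry point (direct AppSync invocation vs `SQSEvent`) - assuming AWS SDK v3, and with hypothetical names like `PinboardItem` and `sendPushNotificationsFor` standing in for the real code in this repo:

```typescript
import { SQSEvent } from "aws-lambda";
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

// Illustrative item shape only; the real type lives in the pinboard repo.
interface PinboardItem {
  id: string;
  pinboardId: string;
  message: string;
  userEmail: string;
}

const sqs = new SQSClient({});
const queueUrl = process.env.NOTIFICATIONS_QUEUE_URL!; // assumed env var

const isSqsEvent = (event: unknown): event is SQSEvent =>
  typeof event === "object" && event !== null && Array.isArray((event as SQSEvent).Records);

export const handler = async (event: PinboardItem | SQSEvent) => {
  if (isSqsEvent(event)) {
    // Invoked by SQS: do roughly what the DynamoDB Streams handler used to do,
    // i.e. work out who should be notified and send the push notifications.
    for (const record of event.Records) {
      const item: PinboardItem = JSON.parse(record.body);
      await sendPushNotificationsFor(item); // hypothetical helper
    }
    return;
  }

  // Invoked directly by the AppSync pipeline resolver with the freshly
  // inserted item: just queue it, so the resolver (and the user) isn't
  // blocked while all the notifications are sent.
  await sqs.send(new SendMessageCommand({
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(event),
  }));
  return event; // hand the item straight back to the pipeline
};

// Placeholder for the existing notification fan-out logic.
async function sendPushNotificationsFor(item: PinboardItem): Promise<void> {
  console.log(`would notify watchers of pinboard ${item.pinboardId} about item ${item.id}`);
}
```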
How to test

With this branch deployed to CODE (you'll need to check the pipeline resolver gets attached OK, as AWS's CloudFormation support for AppSync things is buggy, see aws/aws-appsync-community#146 (comment) for example; a quick check is sketched after the steps below)...

- send a message which mentions you (by typing `@` and selecting yourself)...
- ...you should still receive a desktop notification
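One way to sanity-check that the resolver really ended up as a pipeline with both functions attached is to query the AppSync API directly, e.g. with the AWS SDK as below (the API id and region are placeholders):

```typescript
import { AppSyncClient, GetResolverCommand } from "@aws-sdk/client-appsync";

const appSync = new AppSyncClient({ region: "eu-west-1" }); // assumed region

async function checkCreateItemResolver(apiId: string) {
  const { resolver } = await appSync.send(new GetResolverCommand({
    apiId,
    typeName: "Mutation",
    fieldName: "createItem",
  }));
  console.log("kind:", resolver?.kind); // expect "PIPELINE"
  console.log("functions:", resolver?.pipelineConfig?.functions); // expect two function IDs
}

checkCreateItemResolver("YOUR_CODE_APPSYNC_API_ID").catch(console.error);
```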
How can we measure success?
This is kind of a no-op (functionality-wise) but removes this complexity from the process of re-platforming from DynamoDB to RDS (where otherwise we would've had to do the `invoke_lambda` thing along the way).

Have we considered potential risks?
There will be a performance hit when sending a message (more so when the `notifications-lambda` is cold), but from our testing this is tolerable.