Only one resolver is allowed per field #128
Currently, we’re testing the iterative update by renaming the initial Todo model to Todos. Previously, this created a resolver called listTodoss (because it blindly appended an s). With the pluralization fix, it creates a resolver called listTodos, which unfortunately already exists, so AppSync throws an "Only one resolver is allowed per field" error (aws/aws-appsync-community#128).
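To make the collision concrete, here is a small illustrative sketch (not Amplify CLI code; the helper names are made up) of why fixing the pluralization turns a harmless misnaming into a clash:

```python
# Toy illustration of the naming collision described above. These helpers are
# hypothetical stand-ins that only mimic the behaviour; they are not Amplify code.
def naive_pluralize(name: str) -> str:
    # Old behaviour: blindly append "s".
    return name + "s"

def fixed_pluralize(name: str) -> str:
    # Simplified stand-in for the pluralization fix: "Todos" stays "Todos".
    return name if name.endswith("s") else name + "s"

existing   = "list" + naive_pluralize("Todo")   # "listTodos" from the original Todo model
before_fix = "list" + naive_pluralize("Todos")  # "listTodoss" - ugly, but no clash
after_fix  = "list" + fixed_pluralize("Todos")  # "listTodos" - same name as the existing resolver

assert after_fix == existing  # AppSync rejects this: "Only one resolver is allowed per field"
```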
I also see this problem when using Terraform to deploy AppSync.
Hey, there are a lot of similar issues (e.g. in the CDK repository: aws/aws-cdk#13269 (comment)). Is there any progress on this issue?
I encountered this issue when I renamed a table (
Since this update in CDK, this is an even bigger issue now! CDK has standardised naming on resolvers, but this causes all existing IDs to change. The suggestion there is to hardcode all of the IDs to the old versions, but this isn't practical on any reasonably sized project. Any chance this could get some attention?
👋 We are experiencing this issue for our customer use case, which can be described as below. However, during the deployment it fails with the "Only one resolver is allowed per field" error. I have attached the CFN logs below. From the logs for the original "Test" stack (pic2), we see that it's waiting in the "update_complete_cleanup_in_progress" phase, where I would assume the resolvers to be soft deleted and detached. Then it wouldn't cause the subsequent new stack to fail with the error mentioned above. Please let me know if you need any other information.
I ran into this issue yesterday. This is how I resolved it:
In my case the build hit the error again, but for a different model & field. I had to run the same procedure above for that model & field. But then my build succeeded!
AppSync does not seem to detach and attach resolvers when they are removed or renamed in CloudFormation, resulting in edge cases that require manual intervention.
Reproduction Steps
The simplest way to replicate this is to change the name of an AppSync resolver, resulting in an error where the old resolver is still attached and the new one fails to attach with the error `Only one resolver is allowed per field`.

Using CDK for conciseness:
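The original code block is not preserved in this text. The following is a minimal sketch of what it likely looked like, assuming the experimental `aws_cdk.aws_appsync` module from CDK v1; the construct ids, API name, and exact method signatures are illustrative rather than taken from the original report:

```python
# Minimal sketch of the reproduction, assuming the experimental CDK v1
# aws_appsync module. Ids and exact signatures are illustrative.
from aws_cdk import core
from aws_cdk import aws_appsync as appsync


class PingStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # AppSync API whose schema exposes a single "ping" query field.
        api = appsync.GraphqlApi(
            self,
            "Api",
            name="ping-api",
            schema=appsync.Schema.from_asset("schema.graphql"),
        )

        # None data source plus a resolver that echoes "pong" for Query.ping.
        # Renaming the data source id on a later deploy (e.g. "ping" -> "ping2")
        # reproduces the "Only one resolver is allowed per field" error.
        api.add_none_data_source("ping", "Ping").create_resolver(
            type_name="Query",
            field_name="ping",
            request_mapping_template=appsync.MappingTemplate.from_string(
                '{"version": "2017-02-28", "payload": {}}'
            ),
            response_mapping_template=appsync.MappingTemplate.from_string(
                '$util.toJson("pong")'
            ),
        )
```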
Schema:
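The schema block is likewise not preserved here; given the description below, it was presumably a single query field along the lines of `type Query { ping: String }`.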
The above code deploys an AppSync API and attaches a resolver that echoes "pong" when the "ping" field is queried. Once deployed, simply changing the name of the data source reproduces this error. The `add_none_data_source` call can be changed to

`api.add_none_data_source("ping2", "Ping").create_resolver(`

(note "ping2"). This results in the old data source being deleted (but its resolver not detached) and the new data source being created, whose resolver then fails to attach because it clashes with the old one.

Other
This issue has also been raised on the Amplify community - aws-amplify/amplify-cli#682
This is 🐛 Bug Report