Add ability to drop & recreate DynamoDB table #2384
As an addendum, the error message for the table conflict is not easy to navigate either:
I'm sure for someone well-versed in CloudFormation, this makes more sense. For me, the CloudFormation config is an implementation detail of Amplify. Here it's leaking out, and I don't really know what to make of it. There's no clear indication of what I need to do. Removing the api component entirely was the only method I found after a couple of hours of searching for help online and poking around in the source.
Agree. This would be a really nice feature, as I am running into the same thing. What gets me is that I started defining some tables a while back, informally, and am now going back and adding extra indexes, which triggers a "replace" according to CloudFormation, which sets off the issue you describe. I'd like to dig into the code and see if something can be done, because it becomes tricky to remove a table when you've got data sources and other resources depending on the table existing.
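For context, CloudFormation treats some DynamoDB table properties as immutable: per the AWS::DynamoDB::Table resource docs, a change to KeySchema (or LocalSecondaryIndexes) requires replacement, which is what surfaces as the conflict described above. A minimal Python sketch (a hypothetical pre-push check, not Amplify code) of classifying a diff:

```python
# Hypothetical pre-push check: KeySchema and LocalSecondaryIndexes are
# replacement-only properties on AWS::DynamoDB::Table, so any change to
# them means CloudFormation will try to drop and recreate the table.
REPLACEMENT_PROPS = ("KeySchema", "LocalSecondaryIndexes")

def requires_replacement(old_table: dict, new_table: dict) -> bool:
    """Return True if this update would force a table replacement."""
    return any(old_table.get(p) != new_table.get(p) for p in REPLACEMENT_PROPS)

old = {"KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}]}
added_gsi = {**old, "GlobalSecondaryIndexes": [{"IndexName": "byOwner"}]}
new_key = {"KeySchema": [{"AttributeName": "owner", "KeyType": "HASH"}]}
```

Adding a GSI alone is applied in place, which is why a plain index addition is usually safe while a primary-key change is not.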
+1
Similar to this: it would be great to be able to clear the data/cache in DataStore when developing/testing.
All - We started brainstorming about this a bit internally. One concern is around data loss once you have moved past the rapid development phase and decided on a data model. Do you have any thoughts on what workflow would best protect you against accidental deletion here? One thought is a "production/locked" and "dev/unlocked" mode for safety purposes, where in dev mode the tables are automatically recreated on certain foundational schema changes.
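The locked/unlocked idea could be as simple as a per-environment flag that the CLI checks before applying a destructive update. A hedged sketch (the config shape and names are invented for illustration, not Amplify API):

```python
class EnvironmentLockedError(Exception):
    """Raised when a destructive push targets a locked environment."""

def guard_destructive_push(env: dict, destructive: bool) -> None:
    """Refuse table drop/recreate pushes unless the env is explicitly unlocked."""
    # Default to "locked" so a missing flag fails safe.
    if destructive and env.get("mode", "locked") == "locked":
        raise EnvironmentLockedError(
            f"Environment '{env.get('name', '?')}' is locked; "
            "recreating tables is only allowed in dev/unlocked mode."
        )
```

Defaulting to locked means an operator has to opt in to data loss rather than opt out of it.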
A warning with a bold red font should do it. :)
In the first place, you could possibly lock an environment, e.g.
If you want to recreate tables, I could think of
Regards
@undefobj Would a confirmation prompt be sufficient, with the user forced to type something like "I understand recreating the table results in total data loss for this table."? If you've decided on a data model, you shouldn't need to invoke this command at all. Hopefully such an ACK sequence would steer you away if you didn't intend to delete the data.

You could also have an option to rename, rather than drop, the old table. You'd probably have to make that optional, because people don't like surprise billing, but it would give a nice safety mechanism, even in the event that you did want to adjust the schema.

I would also be thrilled if there were a data export & import component to all of this, with the understanding that the data may very well need to be transformed. In the case of key changes, it'd be really nice if Amplify just handled the transition. In my original issue, I mentioned something akin to RDBMS approaches for adding a new column without locking the whole table, where a new table is created and data is copied from the old table, with mechanisms in place to handle propagation of modifications during the copy process. When all is done, the new table is renamed, the old one is dropped, and everyone is happy.
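The typed-acknowledgement prompt suggested above is cheap to implement. A sketch (function name and prompt wording are illustrative, not an existing CLI feature):

```python
ACK = "I understand recreating the table results in total data loss for this table."

def confirm_recreate(table_name: str, read=input) -> bool:
    """Only proceed if the user types the full acknowledgement verbatim."""
    print(f"Recreating '{table_name}' will delete ALL data in it.")
    print(f"To continue, type exactly:\n  {ACK}")
    return read("> ").strip() == ACK
```

Requiring the full sentence (rather than a y/n) makes it hard to ACK on autopilot, which is the point of the safety check.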
I like the lock/force recommendation. Not sure how to handle it in the Amplify Console. Currently, when running
Not sure how it works under the hood, but this could satisfy changes from both a push from the CLI and the Amplify Console.
This would be really nice, but I think it will be a big task to solve, since Amplify needs to know the old and the new model and how to transform between them correctly. That sounds difficult to get right for all the different use cases everyone has.
Is there something in Amplify that simply:
I think it would be a great feature to have for development. If we were able to run something like:
That would simply reset your Amplify project to the current state of your schema.
I brought up a similar concern on #5601 and @yuth asked me to give my thoughts on the proposed workflows in this thread, so here are my 2 cents:

Force-sync cloud state to the local state

I think that in general, the ability to force-sync cloud state to the local state would be an extremely powerful feature. I would even argue that the whole idea of 'infrastructure as code' (not to mention Serverless computing) is unrealized until this is the case, because otherwise the infrastructure becomes a function of (current-cloud-state + code). (I was actually surprised to learn Amplify doesn't already support this.) It wouldn't only be helpful during initial development, but also when working on improvements and new features for an otherwise-mature product. It would essentially automatically buy you all the usual advantages of idempotence + source control:

I don't know if DynamoDB tables are the only Amplify-supported functionality preventing this from happening; I would imagine not (definitely not long-term).

Concerns about breaking prod

I am only just getting started with Amplify, so perhaps I am misunderstanding things, but from the discussion above it sounds like, as of right now, all Amplify environments are created completely equal? (i.e. the only difference between prod and dev is the environment name?) I think the idea of a locked prod environment is a very good one (and eventually an inevitable one). The locking should probably be more than just a flag (e.g. it could be access-token-based), and should arguably work not only for 'breaking' changes but also for non-breaking changes. That'll eventually also buy all the standard CI advantages of enforcing test passes etc. before pushing to prod.

Data migrations between incompatible states

Data migrations between incompatible states are a bullet you're probably going to have to bite at some point, and honestly, as a Serverless platform, Amplify is probably extremely well-situated to solve this in a very powerful way. But imho this sort of feature can wait as long as folks have the means to deal with migrations manually (which apparently we do).

This turned out a bit longer than I planned, I hope that's alright 😅. I've spent a lot of time thinking about Serverless computing; in fact, about 5 years ago I was working on a startup that was trying to build something very similar to Amplify as its first product. It fizzled out, partly due to concerns that the cloud computing giants would join the space and out-compete a startup :-). I think you're doing a fantastic job with Amplify, thank you and keep it up!
Just to throw this out there: just like EC2 instances, there are options to prevent resources from being deleted (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingDisableAPITermination). I know this might not be for the Amplify team and more for the DynamoDB team, but why can't we just flag tables with a form of delete protection? A simple flag against the table that says "do not delete". Then we could run a command on a prod env, let's say. It seems a more stable option, useful even for applications outside Amplify, and a good feature to have on DynamoDB itself.
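Worth noting: DynamoDB did later gain exactly this, a per-table DeletionProtectionEnabled flag settable via UpdateTable, which makes DeleteTable calls fail while it is on. A sketch using boto3; the client is injectable so the function can be exercised without live AWS credentials:

```python
def set_delete_protection(table_name: str, enabled: bool = True, client=None) -> dict:
    """Flip DynamoDB's deletion-protection flag on a table via UpdateTable."""
    if client is None:
        import boto3  # real AWS client; requires credentials when used for real
        client = boto3.client("dynamodb")
    return client.update_table(
        TableName=table_name,
        DeletionProtectionEnabled=enabled,
    )
```

A prod environment could enable this on every table it owns, while dev environments leave it off so drop-and-recreate flows still work.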
Stumbling on this because I'm running into similar issues of having to rebuild the API when a schema change needs to happen during mid-cycle prototyping. @uncodable I think this could be on the DynamoDB team, but it makes just as much sense, for example, to simply have a configuration option like:
as part of Amplify, rather than having this be some intrinsic feature of Dynamo, since Amplify is simply orchestrating Dynamo. In CosmosDB, there is an API call
I feel that this feature aligns with the rapid prototype, develop, deploy paradigm of Amplify, and I'm quite surprised that it's still not present.
+1
Is there any solution or suggestion from either the Amplify and/or DynamoDB team on this issue? I've run into it a couple of times, and I'm sure someone else will run into it as well.
As of right now, we are running into this very problem in one of our projects.
Removing the data sources is such a pain when updating keys for a model/table.
I just deleted a couple of tables thinking they would simply be recreated on the next
What is the workaround to have the table recreated via CF again? I've tried using the Console to update the nested stack which contains the table, but even going through the Update wizard doesn't result in the table being recreated.
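One manual way out of this state is to rebuild the deleted table from a previously saved describe-table snapshot. The catch is that describe output includes read-only fields (ARN, status, timestamps) that create-table rejects. A rough sketch (the field list is simplified; nested GSI/LSI entries would need the same pruning in real use):

```python
# Fields from a describe_table response that create_table also accepts
# (simplified subset; an assumption of this sketch, not an exhaustive list).
CREATABLE_FIELDS = {
    "TableName", "AttributeDefinitions", "KeySchema", "BillingMode",
    "LocalSecondaryIndexes", "GlobalSecondaryIndexes",
}

def snapshot_to_create_params(snapshot: dict) -> dict:
    """Turn a saved describe_table response into create_table kwargs."""
    table = snapshot["Table"]
    params = {k: v for k, v in table.items() if k in CREATABLE_FIELDS}
    # Assumption for the sketch: default to on-demand if no mode was saved.
    params.setdefault("BillingMode", "PAY_PER_REQUEST")
    return params
```

Even after recreating the table this way, CloudFormation's view of the stack may still need repairing, as later comments in this thread note.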
Any updates on this request?
A painful workaround, since pushes take a while, but it works.
As a workaround, I rename the types in my AppSync schema,
So I recreated the table manually (including attributes) to fix the CF error, then I tried this approach of commenting out the lines. Unfortunately, another resource (a serverless container) had permissions on the backing table, and everything went back into an unhealthy state again.
This issue has been automatically locked since there hasn't been any recent activity after it was closed. Please open a new issue for related bugs. Looking for a help forum? We recommend joining the Amplify Community Discord server.
Is your feature request related to a problem? Please describe.
I'm just getting started with AWS Amplify. I've been working through the typical Todo application, when I finally got to the "GraphQL Transform" section in the docs. The first @key example is for creating a primary key, but the only way that can work with an existing Amplify project is to delete the existing DynamoDB table and update AppSync accordingly. I haven't found a good way to do this. I've had to resort to using amplify api remove followed by amplify add api.

Describe the solution you'd like
It'd be helpful, I think, to have something like amplify api recreate, which would tear down the existing table and recreate it. Since this is all in development, I'm okay with losing the data. And since there's really no way around recreating the table, I'd have to do this anyway.

I suppose an alternative would be if both tables could be created and Amplify handled the rename, so the old table would be archived and the new one would be the active table.
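On the rename-and-archive alternative: DynamoDB has no rename operation, so "renaming" would in practice mean creating the new table and copying items across. A minimal copy loop (sketch only; client is assumed to be a boto3 DynamoDB client, with no retry, throughput, or write-batching handling):

```python
def copy_all_items(client, source_table: str, dest_table: str) -> int:
    """Scan source_table page by page and put every item into dest_table."""
    copied = 0
    scan_kwargs = {"TableName": source_table}
    while True:
        page = client.scan(**scan_kwargs)
        for item in page.get("Items", []):
            client.put_item(TableName=dest_table, Item=item)
            copied += 1
        # Scan is paginated: keep going until no LastEvaluatedKey remains.
        if "LastEvaluatedKey" not in page:
            return copied
        scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```

A real migration would also need to handle writes arriving during the copy, which is the hard part mentioned earlier in the issue.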
Describe alternatives you've considered
I tried renaming the table manually in the AWS Console to avoid a conflict induced by the new schema. Unfortunately, that didn't work. I think something in either AppSync or CloudFormation needs to be updated as well.
Another potential improvement is to modify the Todo API template to include a key (either primary or secondary) so the user gets sorted todo items out of the box and can be gently introduced to some of the transform DSL.
Additional context
I'd like to emphasize that this isn't exactly a case about not thinking of my data model up front. I'm really just working through the documentation.