Kong v0.11.0 Error during migrations on an already migrated cassandra database #2975
Hi! Thank you for the detailed report!

This is the source of the issue: this is not the documented procedure for upgrading. You should never run
I know this sounds like the old joke where the patient says "Doctor, it hurts when I do this!" and the doctor replies "Well, then stop doing that!", but really, that's all there is to it.
Hey @hishamhm - can you re-open this? My steps above were actually written in the wrong order: we did run the migrations on a single node before starting up the new nodes.
As mentioned on Gitter, have you tried #2869?
Ah, I suspect that is the root cause; however, it doesn't appear to be in an available release yet. Is that the case? I'm not sure I'll be able to test this in our production cluster without a properly released version. Let me get back to you and see what I find out.
Actually, it has just been released in 0.11.1.
Hey all, the update to v0.11.1 seems to have addressed the problem. Thanks!
Summary
Executing `kong migrations up` on an already migrated and upgraded Kong cluster causes a migration error:

```
Error: /usr/local/share/lua/5.1/kong/cmd/migrations.lua:34: [cassandra error] Error during migration 2016-09-05-212515_retries_step_1: [Invalid] Invalid column name retries because it conflicts with an existing column
```

We'd expect this command to be a no-op in this case, since the migrations have already run and the tables/columns already exist. This appears to be a pretty bad bug, as it will prevent us from upgrading Kong beyond v0.11.0.
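To make the no-op expectation concrete, here is a minimal, purely illustrative sketch (this is not Kong's actual migration code; the `applied` set stands in for a schema-migrations record and `schema_columns` for the real Cassandra schema). A runner that records which migrations have executed skips them on subsequent runs; without that check, re-issuing the `ALTER` produces exactly the "conflicts with an existing column" failure reported above.

```python
# Illustrative sketch of idempotent migrations (NOT Kong's real code).
applied = set()          # stands in for a recorded schema_migrations table
schema_columns = set()   # stands in for the actual Cassandra table schema

# Hypothetical migration list: (migration name, column the DDL adds)
MIGRATIONS = [
    ("2016-09-05-212515_retries_step_1", "retries"),
]

def migrate_up():
    """Apply any migrations not yet recorded; return the ones that ran."""
    ran = []
    for name, column in MIGRATIONS:
        if name in applied:
            continue  # already recorded: skip instead of re-running the DDL
        if column in schema_columns:
            # Without the applied-set check, this is the reported failure:
            raise RuntimeError(
                f"[Invalid] Invalid column name {column} "
                "because it conflicts with an existing column")
        schema_columns.add(column)  # simulate ALTER TABLE ... ADD
        applied.add(name)           # record it so reruns become no-ops
        ran.append(name)
    return ran

first = migrate_up()   # applies the pending migration
second = migrate_up()  # no-op: everything is already recorded
```

The point of the sketch: as long as the record of applied migrations is consulted before any DDL is issued, a second `migrations up` returns without touching the schema.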
We successfully upgraded our v0.9.8 Kong cluster to v0.11.0 on 10/6/17 following the upgrade guide. The v0.11.0 migrations completed successfully and the new nodes were then brought online. Below is the log output for the migrations, which ran successfully:
However, if I now execute `kong migrations up` on the same cluster, I get a migration error: Kong appears to be trying to run the same migrations that have already been executed. I can confirm that the migrations have already run (see the log output below) and that the tables and columns already exist in the Cassandra keyspace. Why is Kong trying to execute these migrations again? This will be a big problem for us the next time we need to upgrade versions and actually run new migrations.

`kong migrations list`

`kong migrations up`
Steps To Reproduce
Additional Details & Logs
Kong version (`$ kong version`): v0.11.0
Kong debug-level startup logs (`$ kong start --vv`):