Improve experience deploying databases created by Migrations #19587

Open
5 of 24 tasks
ajcvickers opened this issue Jan 14, 2020 · 69 comments
@ajcvickers
Member

ajcvickers commented Jan 14, 2020

This is a grouping of related issues. Feel free to vote (👍) for this issue to indicate that this is an area that you think we should spend time on, but consider also voting for individual issues for things you consider especially important.


Currently, many developers migrate their databases at application startup time. This is easy but is not recommended because:

  • Multiple threads/processes/servers may attempt to migrate the database concurrently
  • Applications may try to access inconsistent state while this is happening
  • Usually the database permissions to modify the schema should not be granted for application execution
  • It's hard to revert to a clean state if something goes wrong

We want to deliver a better experience here that allows an easy way to migrate the database at deployment time. This should:

  • Work on Linux, Mac, and Windows
  • Be a good experience on the command line
  • Support scenarios with containers
  • Work with commonly used real-world deployment tools/flows
  • Integrate into at least Visual Studio

The result is likely to be many small improvements in EF Core (for example, better Migrations on SQLite), together with guidance and longer-term collaborations with other teams to improve end-to-end experiences that go beyond just EF.
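For context, the deployment-time flow being targeted can already be approximated with existing tooling. A sketch of two options (connection strings and file names are placeholders; the bundle command assumes EF Core 6+):

```shell
# Option 1: generate an idempotent SQL script at build time, then apply it
# later from the pipeline with a native SQL client.
dotnet ef migrations script --idempotent --output migrate.sql

# Option 2 (EF Core 6+): create a self-contained migrations bundle that can
# apply pending migrations on the target without the .NET SDK installed.
dotnet ef migrations bundle --output efbundle
./efbundle --connection "$DB_CONNECTION_STRING"
```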


Done in 6.0

Done in 9.0

Backlog

@ajcvickers ajcvickers added this to the 5.0.0 milestone Jan 14, 2020
@ajcvickers ajcvickers removed this from the 5.0.0 milestone Jan 14, 2020
@ErikEJ
Contributor

ErikEJ commented Jan 14, 2020

sqlpackage / dacpac already does much of this - but sadly currently only for SQL Server.

@BEagle1984

In our latest project we integrated the DB migrations in the deployment pipeline simply by generating an idempotent migration script (for all migration steps) via dotnet-ef and leveraging sqlcmd to execute it against the appropriate sql server instance. We do this in Jenkins, from a Linux slave.
I honestly don’t think our solution is too bad.
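The flow described above can be sketched as two pipeline steps (server, database, and file names are placeholders; sqlcmd must be installed on the agent):

```shell
# Build stage: one idempotent script covering all migration steps.
dotnet ef migrations script --idempotent --output migrate.sql

# Deploy stage: run the script against the target SQL Server instance.
sqlcmd -S sql.example.internal -d AppDb -i migrate.sql
```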

@ajcvickers ajcvickers added this to the 5.0.0 milestone Jan 17, 2020
@macsux

macsux commented Feb 7, 2020

So I've built something that helps with this; it was contributed to the Steeltoe project. It allows tasks to be bundled with the app and launched via a special command argument. EF migration is one of them.
If you're not familiar with Steeltoe, the site is https://steeltoe.io

These are the relevant sources you may consider

https://github.com/SteeltoeOSS/Management/tree/dev/src/Steeltoe.Management.CloudFoundryTasks

https://github.com/SteeltoeOSS/Connectors/blob/dev/src/Steeltoe.CloudFoundry.Connector.EFCore/MigrateDbContextTask.cs

@johnnyreilly

Thanks for linking to this @AndriySvyryd - great to see this is being actively worked on

@dazinator

dazinator commented Feb 9, 2020

I ended up architecting my app to address these points, and putting all the logic in a reusable library which I might open source in future.

  • A lock file in a shared storage directory (a directory visible to all instances) to ensure only one instance can run the migrations at a time.
  • An MVC action filter which by default rejects requests when pending EF migrations are detected, and includes a header in the response so that the client can detect that the server requires EF Core migrations to be applied and display a suitable UI.
  • A Blazor component and HTTP client delegating handler for Blazor WASM applications that auto-detects the header in responses from the server and displays an "upgrade pending" screen, with an option for a deployment admin to log in and apply the migrations.
  • An MVC controller with an action method that can be called to apply the pending EF Core migrations. This is excluded from the action filter mentioned above and is only available when there are pending migrations to apply. I've applied authorization to this method so it requires the "deployment admin" role.
  • A custom authentication scheme that allows a "deployment admin" to log in using a login method that places no dependency on the application database, for obvious reasons. This custom auth scheme generates a code and writes it to a file in the storage directory (the same location where the lock file gets written). A person with access to that directory can then grab the code and use it to log in, which creates a claims principal with the Deployment Admin role and lets them see the pending migrations and click a button to apply them (i.e. this calls the MVC action method mentioned above).
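A minimal sketch of the action-filter idea from the second bullet (illustrative only, not the actual library; AppDbContext and the header name are assumptions):

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.EntityFrameworkCore;

public class RejectWhenMigrationsPendingFilter : IAsyncActionFilter
{
    private readonly AppDbContext _db; // hypothetical application DbContext

    public RejectWhenMigrationsPendingFilter(AppDbContext db) => _db = db;

    public async Task OnActionExecutionAsync(
        ActionExecutingContext context, ActionExecutionDelegate next)
    {
        // GetPendingMigrationsAsync compares the applied migrations in the
        // history table against the migrations compiled into the assembly.
        var pending = await _db.Database.GetPendingMigrationsAsync();
        if (pending.Any())
        {
            // The header lets a client-side handler show an "upgrade pending" UI.
            context.HttpContext.Response.Headers["X-Pending-Migrations"] = "true";
            context.Result = new StatusCodeResult(StatusCodes.Status503ServiceUnavailable);
            return;
        }

        await next();
    }
}
```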

There was quite a bit to it in the end.
I haven't addressed rollback, but being able to roll back is not necessarily safe if data will be lost in the process, so the safest policy for me tends to be to only allow rolling forward, and to make sure migrations are tested adequately before rolling to a production environment.

One thing I like about this approach is that I can deploy a new version of my application using Azure App Service for containers, and when the new instance is swapped in and it has pending migrations, anyone currently using the app is presented with a UI that says an upgrade is in progress. Once it's applied, they can use the app again. Contrast this with upgrading the database during the deployment: if the database is upgraded prior to the new version of the app being swapped in, then current users will be using an older version of the app while its database is being upgraded to a newer version from under it, which could introduce errors. If the database is upgraded by the deployment process after the newer version of the app is swapped in, then unless the application is taken fully down until the db upgrade is complete, users will potentially be using a new version of the app before the db upgrade is complete, which again can cause errors. With my approach the app is not taken fully offline, which allows me to provide a nicer experience for end users while the upgrade is taking place. Likewise, if there are MVC methods that I know have no dependency on the database, I can optionally keep them functional during the upgrade by excluding them from the action filter.

Basically, unless EF migrations can be committed in a transaction and the corresponding version of the app swapped in to replace the old version in a single atomic operation, you've got the potential for a version mismatch for some period of time. I don't think the above is possible with App Service for containers, but if anyone knows different please do share!

@ajcvickers
Member Author

@dazinator Thanks for sharing that. There are certainly some interesting things there--we talked about going in this kind of direction before, but robustness has always been a concern. If you do create a package I would be interested in taking a look.

@syndicatedshannon

syndicatedshannon commented Feb 10, 2020

This product is pretty strong with this latest 3.0 release. It's great to see, since we decided not to bail on it during the rewrite.

I'm disheartened, though, that this capability is slated for 5.0, but still very little attention is being given to contract-first, conceptual model-first, database-first, or SSDT/.SQLPROJ-authoritative development. Is it just that there are insufficient requests for it, or is EF not intended for that market?

The semantics of up/down are great for early development, and I appreciate that they pull a lot of teams into EF during envisioning stages. Also, the syntax of EF's C# attribute-modelled and fluent definition are well-designed. Even so, this leaves a lot of gaps in both relational and constraint capability and support for collaboration with data designers. SQL code embedded as text strings in fluent definitions (without even schema validation), to me, means something is wrong. Which team writes that code? The team I have to allocate with both expert level C# lambda AND T-SQL background?

I know this all reveals my personal stance on code-first data development. The point is, I wonder where this feature is heading. Who needs to do this; who needs DACPAC-like deployment but wouldn't be better suited to getting SQL schema definitions out of C# and into source-controlled SQL, or a contract-first toolset?

Thanks for listening.

@ErikEJ
Contributor

ErikEJ commented Feb 11, 2020

@syndicatedshannon I assume you are aware that EF Core Power Tools supports exactly that workflow (database-first from a .dacpac/.sqlproj)

@syndicatedshannon

@ErikEJ - Thank you. Our team has visited that tool often in the past, starting when you first suggested it years ago. We had various challenges along the way. At last visit, I believe we did not see how to make it work for us to get schema revisions applied to our models as part of our toolchain. My recollection is there is an effortful process to replace the model each time, even if we weren't concerned about losing model customizations. Since then we purchased LLBLGen, but I would love to know if I'm missing something.

@syndicatedshannon

@ErikEJ If you think I'm mistaken, and it does indeed support exactly this workflow, please LMK. Would love to hear I've missed something.

@ErikEJ
Contributor

ErikEJ commented Feb 20, 2020

@syndicatedshannon The workflow you describe works fine for me with EF Core Power Tools. You can extend the generated POCO classes with NotMapped properties, as these classes are partial, and you can modify the DbContext model by implementing the partial OnModelCreatingPartial method in a partial DbContext class. But of course, if you start treating your DAL as a domain model, things could start falling apart. Please let me know what is blocking you from adopting this, or if anything I wrote is unclear :-)

@syndicatedshannon

Thank you Erik, I will try to understand, and follow up on #4321 when I have a handle on it, referencing your name. Apologize to others for cluttering this topic.

@GeeWee

GeeWee commented Mar 10, 2020

Just want to chime in on this, as I've been researching this a bit.
It seems like most of the stuff re: parallel migrations can be solved by taking out exclusive locks.

I can see Rails does it with advisory locks for Postgres and MySQL, while FluentMigrator and Flyway do it with sp_getapplock for SQL Server, etc.

It seems that using database-level locking can mitigate running migrations in parallel, thus making it safe to run them on startup. I'm not sure if this is true for all databases, but it seems to be for the big ones.

However, this of course only deals with half of the issue, because e.g. for rolling updates there's still an issue if some apps expect a different schema. Of course you can get around this by planning your migrations and releases well.
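For SQL Server, the sp_getapplock approach mentioned above can be sketched like this (a simplified illustration, not FluentMigrator's or Flyway's actual implementation; error handling and the lock-status return value are omitted):

```csharp
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;

// Hold a session-scoped application lock on a dedicated connection while
// Migrate() runs on the DbContext's own connection.
using var lockConnection = new SqlConnection(connectionString);
lockConnection.Open();

using (var acquire = lockConnection.CreateCommand())
{
    acquire.CommandText =
        "EXEC sp_getapplock @Resource = 'ef_migrations', " +
        "@LockMode = 'Exclusive', @LockOwner = 'Session', @LockTimeout = 60000;";
    acquire.ExecuteNonQuery(); // blocks until the lock is granted or times out
}

try
{
    dbContext.Database.Migrate(); // only one process reaches this at a time
}
finally
{
    using var release = lockConnection.CreateCommand();
    release.CommandText =
        "EXEC sp_releaseapplock @Resource = 'ef_migrations', @LockOwner = 'Session';";
    release.ExecuteNonQuery();
}
```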

@ondrej-marinak

Feedback regarding migrations and deployment experience

Setup:

  • Multi-stage CI/CD pipeline
  • Build agent on Ubuntu 18.04
  • DB server: MariaDB
  • Multitenancy in the near future (db per tenant)

Recently, I spent a lot of time trying to solve the migrations-and-deployment case in Azure Pipelines. I tried to solve a straightforward case, i.e. updating a database during the deployment stage.

  1. My first idea was to split the whole into two stages, build and deploy.

    • Build stage: generate a migration script using dotnet ef migrations script --output script.sql --idempotent
    • Deployment stage should download the artifact (sql script) and execute it.

    Problem: It’s not possible to execute the script using dotnet ef. The extensions that I tried didn't work; there are probably some extensions, but they work well only with MS SQL. The solution could be to install a db-specific executor on the agent machine.

  2. The second approach I tried is to migrate directly from assemblies

    • Build stage: nothing to do here
    • Deployment stage: execute dotnet exec --depsfile [path] --runtimeconfig [path] [ef path] etc.

    Problems:

    • Command to execute is not that simple
    • Need to know the path to ef.dll
    • Error saying that Microsoft.EntityFrameworkCore.Design nuget package is not found even if it was referenced in the startup project explicitly

    I also abandoned this approach because I was constantly stumbling upon new hurdles.

  3. The third approach, unfortunately, needs to pull and build the source code once again in the deployment stage; on the other hand, it is just the dotnet ef database update command. I've stuck with this approach for now since it's the easiest one and doesn't require a lot of plumbing. I don't like it, though.

@JonPSmith

Can I add one issue I have had when updating a database with migrations, which might be addressed in a CI/CD solution.

With a client, an update to EF Core 3 required some lambda properties to be added to the database. The classic example is adding a FullName string property to replace the lambda property that created the full name from the FirstName and LastName.

The migration was easy, but updating the thousands of rows to have the correct FullName was a problem. My client's system used 'migrate on startup', and I added code that ran when the specific migration was applied. This code then filled in the new FullName from the FirstName and LastName columns. The problem was that the Azure WebApp timed out because the ASP.NET Core app took too long to start.

If the CI/CD migration could call code to run after the migration and provide the names of the migrations just applied to the database, then this would handle this sort of issue.
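One way to express such a backfill today is inside the migration itself, so whatever applies the migration (startup, pipeline, or bundle) also runs the data fix. A sketch with illustrative table and column names:

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddFullName : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "FullName",
            table: "Users",
            nullable: true);

        // Backfill existing rows as part of the same migration; on most
        // relational providers this runs within the migration's transaction.
        migrationBuilder.Sql(
            "UPDATE Users SET FullName = FirstName + ' ' + LastName");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
        => migrationBuilder.DropColumn(name: "FullName", table: "Users");
}
```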

@roji
Member

roji commented Jun 17, 2021

I am happy to collaborate with you, but we should wait for feedback from @bricelam, @ajcvickers in case the EF Core team want to pick this up.

We generally recommend applying migrations as part of the application deployment process, and not as part of application startup. Among other things, this side-steps the whole distributed locking question, since that step is being performed at one place and not concurrently. Even if you have a distributed locking mechanism, you must still likely ensure that application instances are down or waiting for the migration to be applied, otherwise you get different versions of your application trying to access a different database schema... And that begs the question of why do it as part of application startup in the first place...

Even if you want to use distributed locking, the specific mechanisms for doing so are very different across databases (and don't necessarily exist on all of them). I'm not sure what EF Core should or could contribute here.

@zejji

zejji commented Jun 17, 2021

Even if you have a distributed locking mechanism, you must still likely ensure that application instances are down or waiting for the migration to be applied, otherwise you get different versions of your application trying to access a different database schema...

@roji - The approach we took was that zero-downtime deployments were fine as long as the expand-contract pattern was used, whereas any other migrations would require instances to be brought down. Since, in practice, most migrations tend to fall within the former category, I still think there's a strong argument for doing the changes at application startup.

@JonPSmith

hi @roji,

Thanks for clarifying that - I should have remembered the EF Core philosophy for migrations. And yes, the DistributedLock library only supports SQL Server and PostgreSQL databases, which wouldn't match the EF Core team's aims.

@zejji and/or I will look at this, because I still think @zejji's approach is really useful. As well as making migrations possible on startup, it also allows an action that has to be run once on startup - in an ASP.NET Core authorization library I am building, there is a need to seed the database with certain settings provided by the appsettings.json file, which must be done only once.

@roji
Member

roji commented Jun 17, 2021

@roji - The approach we took was that zero-downtime deployments were fine as long as the expand-contract pattern was used, whereas any other migrations would require instances to be brought down. Since, in practice, most migrations tend to fall within the former category, I still think there's a strong argument for doing the changes at application startup.

That's an interesting point - it's indeed a good idea to manage migrations in a way which doesn't require downtime. But then I'm still unclear on the concrete advantages of applying at startup as opposed to at deployment... At the very least, the latter saves you from having to deal with the distributed locking issue (more discussion is available in our docs).

@zejji

zejji commented Jun 18, 2021

@roji - Running the migrations at application startup wasn't actually our first choice and our original intention was to have one or more deployment scripts which handled all the infrastructure initialization tasks.

However, because our application was distributed to clients for deployment (on any cloud), this initialization logic couldn't be kept in, e.g. Azure DevOps pipelines running for each service (although we did use these internally), and was instead initially encapsulated in a single Ansible deployment script. Another option might have been to use Kubernetes init containers. In either case, we found that the complexity of the infrastructure-as-code was spiraling out of control. This made it difficult to understand and maintain for our (small) team.

Having a simple library (as described above) which was used in each service made things much easier. Since all initialization logic was now encapsulated in each service, our dev docker-compose file could be as simple as just a list of the services themselves (+ message bus and monitoring tools). In other words, this now represented the entirety of the knowledge required to deploy the application. Suddenly, the IaC became an order of magnitude simpler and we were able to do away with the Ansible script entirely.

This may be less relevant for a monolithic application but, in any case, there is certainly some appeal for me in having an application able to safely initialize its own infrastructure in a few lines of code.

@roji
Member

roji commented Jun 18, 2021

Thanks for the additional details @zejji.

@joaopgrassi
Member

joaopgrassi commented Jun 18, 2021

Just to add something - not sure it's going to help anybody, and definitely not Jon and the idea of a library, but: we started running the migrations at startup as well in our k8s cluster. We were "lucky" that our infrastructure always had one single pod that did some background work, so we just made that pod run the migrations during startup on each new deployment.

But we started to have cases where we didn't have this "single" pod anymore, and we reached a point where we couldn't run the migrations as part of the service's startup, since we have many instances of it in k8s.

The approach I used, and it's working for us so far, is to use k8s Jobs + Helm pre-install hooks. Basically the idea is that a service declares a pre-install Helm hook and, via a "simple" console app container image, we run the migrations, all before Helm upgrades the charts.

Like @roji said, this also has downsides, as currently running pods will use a new version of the db for a fraction of time, but our team has been diligent so far and has managed not to introduce any backwards-incompatible changes, so it all works.

Another point, which we haven't done yet, is to handle cases where the migration fails, but AFAIK we could also leverage hooks for that, and roll it back using maybe post-rollback hooks.
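The Helm hook setup described above looks roughly like this (a sketch; the image name, hook policies, and the migrator console app are assumptions):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrations
  annotations:
    # Run this Job before the release's other resources are installed/upgraded.
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrator
          # Hypothetical console app image that applies pending EF migrations.
          image: registry.example.com/myapp-migrator:latest
```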

@dazinator

Even if you have a distributed locking mechanism, you must still likely ensure that application instances are down or waiting for the migration to be applied, otherwise you get different versions of your application trying to access a different database schema...

@roji - The approach we took was that zero-downtime deployments were fine as long as the expand-contract pattern was used, whereas any other migrations would require instances to be brought down. Since, in practice, most migrations tend to fall within the former category, I still think there's a strong argument for doing the changes at application startup.

I've been using the expand-contract principle for years but never knew its formal term! Thank you for this - now I can sound more educated when talking to my fellow developers :-)
On a related note, I wonder if there is anything EF can do to highlight:

  1. Potentially breaking migrations. For example, adding a new non-nullable column with no default value is potentially breaking when there are existing records.
  2. Aid with the expand-contract pattern. For example, removing a column is a contraction that may break existing instances and is therefore more "unsafe" than an expansion of the model. Imagine if, by default, "contractions" were prevented by EF Migrations unless you pass a special "--allow-contraction" argument when generating the migration. This would make it safer to use the expand-contract pattern without accidental premature contractions. P.S. "accidental premature contractions" is not a phrase I'd ever imagined writing in relation to software dev :-)

@roji
Member

roji commented Jun 18, 2021

I've been using the expand-contract principle for years but never knew its formal term! Thank you for this - now I can sound more educated when talking to my fellow developers :-)

Oh I just made up the term zero-downtime migrations, definitely nothing formal about it 🤣

adding a new non-nullable column with no default value is potentially breaking when there are existing records.

When you do this with EF Core, it will create a non-nullable column with the default value for existing rows (e.g. 0 for ints - but you can change the default in the scaffolded migration if you want). So this isn't a breaking migration.

For example, removing a column is a contraction that may break existing instances and is therefore more "unsafe" than an expansion of the model

EF does already issue a warning about potentially destructive migrations (e.g. removing a column). That sounds like what you're describing here?

@dazinator

dazinator commented Jun 18, 2021

Oh I just made up the term zero-downtime migrations, definitely nothing formal about it

I was referring to the term "expand-contract pattern" mentioned by @zejji :-) it's nice to know this formal pattern name at last!

EF does already issue a warning about potentially destructive migrations (e.g. removing a column). That sounds like what you're describing here?

Yes, but I think to support the expand-contract pattern more formally, you'd want EF Core migration generation to error for any contraction of the model. This could be the removal of a column, but it could also be a reduction in column size or a change to the datatype of a column, etc. - basically anything that isn't purely additive (an expansion of the model) can be viewed as dangerous, so for safety reasons it would be great to have EF migration generation "fail by default" in such a case, as opposed to just generating warnings which can easily be missed. Then, for times where you don't want this safety net - i.e. because you want to do a legitimate "contraction" of the model - you could perhaps generate the EF migration with an optional command line parameter that turns this safety net off and allows the migration to be created. Just an idea - but off topic, so I apologise :-)

@roji
Member

roji commented Jun 18, 2021

@dazinator it sounds like you're asking for a way to turn the destructive migration warning into an error?

@dazinator

dazinator commented Jun 18, 2021

@roji yes - if we could make dotnet ef migrations add error by default for destructive migrations, that would be safer for us, with a way to override when needed - for example, suppose we pass an additional command line arg, something like --allow-destructive. Finally, if the destructive migrations themselves were generated with an informational attribute added to the top of the migration class, they could be more easily seen and discussed during the code/peer review phase:

[Destructive()]
public partial class SomeMigration
{

}

A Destructive attribute (or similar) on the migration could also allow for deployment-time checks. If you can detect that the migrations you are about to apply to a database are marked as destructive, then you might want to do things differently than when there are only non-destructive migrations to apply. For example, when there are destructive changes you might want to:

  • Take all application instances down whilst you upgrade the database and then all the application instances.
  • Generate the migration sql scripts at the start of the upgrade to capture against the deployment for audit or approval purposes.

If there are no destructive migrations however then you might do a zero downtime deployment and care less about the above.

dotnet ef database upgrade -connection [conn string] --has-destructive
True

As a final safety net, dotnet ef database upgrade could fail by default if migrations to be applied were destructive unless --allow-destructive was specified. This way someone can't apply destructive changes to the database without being explicit in this intent.

dotnet ef database upgrade -connection [conn string] --allow-destructive

@JonPSmith

Hi @zejji and @ajcvickers,

I have finally created a library that uses @zejji's DistributedLock approach. You can see the code at RunStartupMethodsSequentially, and I have also written an article called "How to safely apply an EF Core migrate on ASP.NET Core startup" which has a lot of info in it.

@zejji, you will see I overcame the "database not created yet" problem by having two parts to the locking:

  1. Check if the resource exists. If it doesn't exist, then try another lock approach.
  2. If the resource exists, then lock that resource and run the startup services

This means I can try to lock the database, but if it's not there I have a second go using a FileStore directory.
I have tried it on Azure and it works (see the article). @zejji, I really appreciate you describing this approach.

@ajcvickers and @roji: I know you don't like the "migrate on startup" approach but I have made it very clear that you need to make sure your migration does not contain a breaking change migration.
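The two-part locking described above could be sketched as follows (purely illustrative - these helper names are hypothetical and are not the RunStartupMethodsSequentially API):

```csharp
// Pseudo-sketch of the fallback logic: prefer a database-level lock, but if
// the database doesn't exist yet, fall back to a lock file in shared storage
// (e.g. an Azure FileShare mount visible to every instance).
async Task RunStartupServicesOnceAsync(string connectionString, string sharedDir)
{
    if (await DatabaseExistsAsync(connectionString))            // hypothetical helper
    {
        // e.g. sp_getapplock on SQL Server, advisory locks on PostgreSQL
        await using var dbLock = await AcquireDatabaseLockAsync(connectionString);
        await RunStartupServicesAsync();                        // migrate + seed
    }
    else
    {
        var lockFile = Path.Combine(sharedDir, "startup.lock");
        await using var fileLock = await AcquireFileLockAsync(lockFile);
        await RunStartupServicesAsync();
    }
}
```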

@zejji

zejji commented Dec 2, 2021

@JonPSmith - Thanks for the heads-up regarding your library implementation. Looks good! 😃

@GeorgeTTD

GeorgeTTD commented Apr 26, 2022

[Destructive()]
public partial class SomeMigration
{

}

If there are no destructive migrations however then you might do a zero downtime deployment and care less about the above.

dotnet ef database upgrade -connection [conn string] --has-destructive
True

@dazinator @roji I would like to add another angle to this. We are currently looking at custom implementations to do a zero-downtime deploy for apps with EF. Our preferred deploy strategy would be:

  1. Spin up new deployment 2
  2. Apply non destructive migrations
  3. Switch traffic to route to deployment 2
  4. Remove deployment 1
  5. Apply destructive migrations

It would be nice if you could split Up and Down migrations into destructive and non-destructive parts. You could do this several ways, so I will leave the implementation up to the powers that be. However, here is an idea which would work for our scenarios.

dotnet ef migrations add MyMigration --split-destructive

Outputs
# <timestamp>_MyMigration.cs
# <timestamp>_destructive_MyMigration.cs


dotnet ef database upgrade ...

# Only applies <timestamp>_MyMigration.cs

and then, once the deploy is done and deployment 1 is removed:

dotnet ef database upgrade ... --allow-destructive

# applies <timestamp>_destructive_MyMigration.cs

Bonus points would be great if we could apply the steps in reverse for rollbacks.

@roji
Member

roji commented Apr 26, 2022

@GeorgeTTD how would the proposed --split-destructive switch work if more than one migration is pending, each with its own non-destructive and destructive components?

More generally, you are of course free to structure your migrations in this way, and that's indeed what zero-downtime migrations usually entail; but this isn't really something EF Core can automate for you without your own planning and inspection of migrations.

@dazinator

dazinator commented Apr 26, 2022

@GeorgeTTD @roji If EF did mark destructive migrations with an attribute, e.g.

[Destructive()]
public partial class SomeMigration
{

}

Then @GeorgeTTD you could at least get part way there with the following:

  1. Use two separate migrations assemblies, one for destructive and one for non-destructive migrations.
  2. Add a test to assert this convention - use reflection to discover migrations with or without this attribute and make sure they haven't accidentally been added to the wrong project.
  3. You can now run each migrations assembly independently as required.

However, I'm not sure this makes sense, as you could get scenarios where a non-destructive migration depends on a destructive one, if that was the order in which they were generated. For example, a migration to do a column rename (destructive) could be generated before a migration to add an index on that column (non-destructive). For this reason, the best you can do might be just to know whether the release does or doesn't contain destructive migrations and optimise your deployment accordingly - they still need to be applied in the same collective order they were generated in.

@dazinator

dazinator commented Apr 26, 2022

Just to be clear, my proposition with the [Destructive] attribute was for EF to support a workflow that prevents teams from accidentally including destructive migrations in a release. The value is that, by deterring this, it allows deployment flows where services need not be brought down whilst the database is being upgraded. It also supports workflows where releases containing destructive migrations can still be done, but behind a safety net that lends itself to better team communication and planning, since it can no longer happen "accidentally".

@roji
Member

roji commented Apr 27, 2022

@dazinator as you write above, migrations are a linear, ordered list with dependencies between them. It isn't possible to execute only the non-destructive migrations while deferring destructive ones that are interspersed between them (because of the dependencies). So splitting migrations into separate assemblies (or otherwise cherry-picking/separating them) doesn't seem possible.

However, in your second comment you seem to be proposing a cmdline switch where the EF Core tooling would simply refuse to apply migrations (e.g. when running a migrations bundle or dotnet ef database update) if any of them contain (or come after) a destructive migration. That does seem possible, and could possibly help prevent accidental errors, though note that any migrating would be blocked once a destructive migration is pending; I'm not sure exactly how that helps the workflow.

At the end of the day, zero-downtime migrations are something which require careful planning and a specific workflow; EF can certainly try to help, but nothing will obviate a manual checking and migration planning here.

Regardless, if you want to continue discussing this, please open a new issue - let's try to keep this general issue clear of detailed specific discussions.

@ajcvickers ajcvickers added the composite-issue A grouping of multiple related issues into one issue label Dec 6, 2022
@ajcvickers ajcvickers modified the milestones: Epics, Backlog Dec 6, 2022
@stijnherreman

In our latest project we integrated the DB migrations in the deployment pipeline simply by generating an idempotent migration script (for all migration steps) via dotnet-ef and leveraging sqlcmd to execute it against the appropriate sql server instance.

I ended up with the same solution. The idempotent script is an artifact published by the build stage and is executed with sqlcmd.

- task: PowerShell@2
  inputs:
    targetType: inline
    script: |
      Import-Module SqlServer
      Invoke-Sqlcmd -ServerInstance <...> -Database <...> -InputFile <...>

One thing I'd still like to add is the ability to review the generated script in a pull request. GitHub has done this and open sourced their code, so I'm hoping to use their work as inspiration.

@bricelam bricelam removed their assignment Jul 8, 2023
@alrz
Member

alrz commented Nov 2, 2023

Is there any issue regarding something like "fluent migrations" - that is, instead of inferring the changes from the model, you explicitly call into AddColumn(e => e.NewColumn), run SQL, etc., to make a versioned migration script? I think that would address most of the pain points I've faced in the past.

@ajcvickers
Member Author

@alrz If you want to write a migration where you explicitly call the builder methods instead of scaffolding the starting point from the model, then you are free to do that. You can then create a script from this in the normal way.
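In other words, a migration can be authored entirely by hand: dotnet ef migrations add scaffolds the class, and you replace its body with explicit builder calls. A sketch with illustrative table and column names:

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddStatusColumn : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Explicit, versioned operations instead of a model diff.
        migrationBuilder.AddColumn<int>(
            name: "Status",
            table: "Orders",
            nullable: false,
            defaultValue: 0);

        migrationBuilder.Sql("UPDATE Orders SET Status = 1 WHERE IsActive = 1");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
        => migrationBuilder.DropColumn(name: "Status", table: "Orders");
}
```

dotnet ef migrations script then produces SQL from it in the normal way.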
