fix: safer re-merging with updated upstream #3499
Conversation
We used to call forceClone() to update with upstream, but this deletes the checked out directory. This is inefficient, can delete existing plan files, and is very surprising if you are working manually in the working directory. We now fetch an updated upstream, and re-do the merge operation. This leaves any working files intact.
It's never safe to clone again. But sometimes we need to check for upstream changes to avoid reverting changes. The flag is now used to know when we need to merge again non-destructively with new changes.
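A minimal Go sketch of the non-destructive path described above, assuming the package context of working_dir.go (FileWorkspace, mergeToBaseBranch and the runGit helper shown later in the diff); the method name, signatures and the exact fetch arguments are illustrative assumptions, not the PR's verbatim code.

// Illustrative only: the name, signatures and fetch arguments are assumptions.
func (w *FileWorkspace) mergeAgain(p models.PullRequest, runGit func(args ...string) error) error {
	// Refresh the base branch from upstream without touching the worktree.
	if err := runGit("fetch", "origin", p.BaseBranch); err != nil {
		return err
	}
	// Re-do the merge on top of the existing checkout, leaving working files
	// (including any previously generated plan files) intact.
	return w.mergeToBaseBranch(p, runGit)
}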
The failing test is due to this renovate commit in main: 1d91dfb
server/events/working_dir.go
Outdated
return w.mergeToBaseBranch(p, runGit)
}

func (w *FileWorkspace) makeGitRunner(log logging.SimpleLogging, cloneDir string, headRepo models.Repo, p models.PullRequest) func(args ...string) error {
Why return a function? It's already a function.
Previously it was returning a function because it was only used within one scope. Now that it's broken out, you might as well use it as a function instead of a function of a function.
Suggested change:
- func (w *FileWorkspace) makeGitRunner(log logging.SimpleLogging, cloneDir string, headRepo models.Repo, p models.PullRequest) func(args ...string) error {
+ func (w *FileWorkspace) runGit(log logging.SimpleLogging, cloneDir string, headRepo models.Repo, p models.PullRequest) error {
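For illustration, a hedged sketch of the direction the suggestion points at: a method that runs git directly instead of returning a closure. The exec-based body and the extra args parameter are assumptions for the sketch, not the code that was merged; the unused parameters simply mirror the suggested signature and assume "os/exec" and "fmt" are imported.

// Sketch only: the exec-based body and args parameter are assumptions.
func (w *FileWorkspace) runGit(log logging.SimpleLogging, cloneDir string, headRepo models.Repo, p models.PullRequest, args ...string) error {
	cmd := exec.Command("git", args...)
	cmd.Dir = cloneDir
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("running git %v in %q: %s: %w", args, cloneDir, string(out), err)
	}
	return nil
}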
I agree with @nitrocode here. This is needlessly complicating the call stack. See my comments on how to properly refactor this.
As long as the branch itself has not been updated, plans should be kept, even if upstream has changed.
This flag was only set to true in case a call to Clone() ended up merging with an updated upstream, so the new name better represents what it means.
This complements the test that Clone with unmodified branch but modified upstream does _not_ wipe plans.
@nitrocode added some tests and tried to clean up the comments here, does it look reasonable now?
Hopefully this gets merged, it's the only thing holding us back from updating Atlantis.
I'm reviewing today
I know this might bloat the PR a bit. But I think this is an excellent opportunity to refactor the git code a bit better.
I draw the line at trying to create a pseudo git client with a recursive function "runner" creating functions. The proper way to do this would be to refactor all the Git code into a util package that initializes a client.
Let's create a proper Git interface that moves the git related functions like Clone() out of the WorkingDir interface. Can you do that?
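A hedged sketch of the kind of separation being asked for, with git-specific operations pulled out of the WorkingDir interface and placed behind their own client interface; the method names and signatures below are illustrative assumptions, not an agreed design.

// Illustrative interface only; names and signatures are assumptions.
type GitClient interface {
	// Clone checks out the pull request's head branch into dir.
	Clone(log logging.SimpleLogging, headRepo models.Repo, p models.PullRequest, dir string) error
	// Fetch updates the local view of the base branch without touching the worktree.
	Fetch(log logging.SimpleLogging, p models.PullRequest, dir string) error
	// MergeBase re-merges the updated base branch into the checked-out head branch.
	MergeBase(log logging.SimpleLogging, p models.PullRequest, dir string) error
}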
This is no longer a warning, but expected behavior in merge checkout mode.
Every call to wrappedGit for the same PR uses identical setup for directory, head repo and PR, so passing the
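A hedged sketch of the "static config" idea this commit message describes: the values that are identical for every git call on a PR are bundled once, and wrappedGit becomes a method on that bundle. Field names and the method body are assumptions for illustration, assuming "os/exec" and "fmt" are imported.

// Illustrative only: field names and the method body are assumptions.
type gitConfig struct {
	log      logging.SimpleLogging
	cloneDir string
	headRepo models.Repo
	pr       models.PullRequest
}

// wrappedGit runs one git command with the per-PR configuration applied.
func (g *gitConfig) wrappedGit(args ...string) error {
	cmd := exec.Command("git", args...)
	cmd.Dir = g.cloneDir
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("git %v: %s: %w", args, string(out), err)
	}
	return nil
}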
It seems like this scope creep has stalled this PR. Can we push the git refactor to another effort so we can get this bugfix in?
The PR isn't stalled, I just haven't been able to review @finnag's changes. My main concern was the recursive function calls, which they seem to have addressed. I will be re-reviewing here shortly.
@GenPage ok, thanks for your time!
* Safer handling of merging with an updated upstream.

  We used to call forceClone() to update with upstream, but this deletes the checked out directory. This is inefficient, can delete existing plan files, and is very surprising if you are working manually in the working directory. We now fetch an updated upstream, and re-do the merge operation. This leaves any working files intact.

* Rename SafeToReClone -> CheckForUpstreamChanges

  It's never safe to clone again. But sometimes we need to check for upstream changes to avoid reverting changes. The flag is now used to know when we need to merge again non-destructively with new changes.

* Update fixtures.go

* Add test to make sure plans are not wiped out

  As long as the branch itself has not been updated, plans should be kept, even if upstream has changed.

* renamed HasDiverged to MergedAgain in PlanResult and from Clone()

  This flag was only set to true in case a call to Clone() ended up merging with an updated upstream, so the new name better represents what it means.

* Test that Clone on branch update wipes old plans

  This complements the test that Clone with unmodified branch but modified upstream does _not_ wipe plans.

* runGit now runs git instead of returning a function that runs git

* Updated template to merged again instead of diverged

  This is no longer a warning, but expected behavior in merge checkout mode

* Rename git wrapper to wrappedGit, add a type for static config

  Every call to wrappedGit for the same PR uses identical setup for directory, head repo and PR, so passing the

---------

Co-authored-by: nitrocode <7775707+nitrocode@users.noreply.github.com>
Co-authored-by: PePe Amengual <jose.amengual@gmail.com>
what
After the initial clone, if we need to merge with an updated master, we now fetch and merge
instead of deleting the directory and cloning again.
why
We used to call forceClone() to update with upstream, but this deletes the
checked out directory. This is inefficient, can delete existing plan files,
and is very surprising if you are working manually in the working directory.
We now fetch an updated upstream, and re-do the merge operation. This
leaves any working files intact.
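As a standalone illustration (not Atlantis code), the fetch-and-re-merge sequence described above boils down to two git invocations against an existing clone; the remote, branch names and directory below are placeholders.

package main

import (
	"fmt"
	"os/exec"
)

// updateAndMerge refreshes the base branch and re-does the merge in an
// existing clone, leaving the worktree (and any plan files) in place.
func updateAndMerge(cloneDir, baseBranch string) error {
	steps := [][]string{
		{"fetch", "origin", baseBranch},                // update upstream refs only
		{"merge", "origin/" + baseBranch, "--no-edit"}, // re-merge onto the current checkout
	}
	for _, args := range steps {
		cmd := exec.Command("git", args...)
		cmd.Dir = cloneDir
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("git %v: %s: %w", args, out, err)
		}
	}
	return nil
}

func main() {
	if err := updateAndMerge("/tmp/example-clone", "main"); err != nil {
		fmt.Println("merge update failed:", err)
	}
}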
The previous attempted fix in #3493 reduced the issue somewhat by only calling forceClone() when starting a plan, but this still causes problems for plans that have already been created.
tests
I have tested my changes by:
references