This repository has been archived by the owner on Sep 9, 2020. It is now read-only.

vendor directory writes: add counts to verbose logging, limits writers, abort on error #1043

Merged
merged 3 commits on Aug 30, 2017

Conversation

@jmank88 (Collaborator) commented Aug 21, 2017

What does this do / why do we need it?

This change adds counts to some of the verbose logging lists:

  1. dry-run vendor directories (Would have written..)
  2. normal run vendor directories (Wrote.../Failed to write..., previously logged Writing... prior to work)
  3. normal run vendor errors list (Failed to write dep tree...)

Item 2 is concurrent and I/O-limited, so this should help communicate progress to the user. Items 1 and 3 log immediately, but I still think the count/total is valuable for reference, grouping, and just scrolling through a really long list.

Example:

...
(46/84) Wrote github.com/pelletier/go-buffruneio@v0.2.0
(47/84) Wrote github.com/go-openapi/validate@master
(48/84) Wrote github.com/golang/groupcache@master
(49/84) Wrote github.com/go-openapi/analysis@master
(50/84) Wrote github.com/sirupsen/logrus@v1.0.3
(51/84) Wrote github.com/gorilla/mux@v1.4.0
(52/84) Wrote github.com/cactus/go-statsd-client@v3.1.0
(53/84) Wrote github.com/urfave/cli@v1.20.0
(54/84) Wrote github.com/spf13/afero@master
...
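
For reference, the count prefix itself is just an index/total pair on each log line. A minimal sketch of the idea follows; writeWithCounts and writeProject are placeholder names, not dep's actual API:

package main

import (
	"log"
	"os"
)

// writeWithCounts is a sketch only: "write" each project and prefix every log
// line with "(i/total)" so a long list communicates progress.
func writeWithCounts(logger *log.Logger, projects []string, writeProject func(string) error) {
	total := len(projects)
	for i, p := range projects {
		if err := writeProject(p); err != nil {
			logger.Printf("(%d/%d) Failed to write %s: %v", i+1, total, p, err)
			continue
		}
		logger.Printf("(%d/%d) Wrote %s", i+1, total, p)
	}
}

func main() {
	logger := log.New(os.Stdout, "", 0)
	writeWithCounts(logger,
		[]string{"example.com/a@v1.0.0", "example.com/b@master"},
		func(string) error { return nil })
}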

Edit: the second commit refactors the concurrency to limit the number of writers and abort on error.

What should your reviewer look out for in this PR?

  • concurrency simplifications
  • better log messages
  • more places to add counts

Which issue(s) does this PR fix?

Follow up from #1037

@@ -148,6 +148,10 @@ func (i ProjectIdentifier) errString() string {
	return fmt.Sprintf("%s (from %s)", i.ProjectRoot, i.Source)
}

func (i ProjectIdentifier) String() string {
	return i.errString()
}

@jmank88 (Collaborator, Author):

Any objections to this errString format becoming the canonical exported String method format? fmt.Sprintf("%s (from %s)", i.ProjectRoot, i.Source)

errString() has 36 usages, all of which are passed to formatting functions with %s, so I'd like to do a follow-up PR to just absorb that method into this one and let them all implicitly call this.
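
For illustration, a minimal sketch of what that follow-up might look like; the type below is a stand-in with plain string fields, and any special-casing errString currently does would simply move into String as well:

package main

import "fmt"

// Stand-in for gps.ProjectIdentifier, for illustration only.
type ProjectIdentifier struct {
	ProjectRoot string
	Source      string
}

// Sketch of the follow-up idea: String becomes the canonical formatter,
// errString is absorbed into it, and callers format the value directly with %s.
func (i ProjectIdentifier) String() string {
	return fmt.Sprintf("%s (from %s)", i.ProjectRoot, i.Source)
}

func main() {
	pi := ProjectIdentifier{ProjectRoot: "example.com/project", Source: "example.com/fork"}
	fmt.Printf("%s\n", pi) // example.com/project (from example.com/fork)
}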

Member:

mmm yes, i like this. the individual properties are accessible already - callers can easily construct their own output if they so choose. 👍

Member:

(separate PR, though)

@ibrasho (Collaborator) commented Aug 21, 2017

One thing I regret not doing, in retrospect, especially after going through #903: we really want to stop writing vendor as soon as an error occurs (since continuing is useless). This impacts logging somewhat, but we can merge these changes and apply that after.


var err error
defer func() {
	if r := recover(); r != nil {

Collaborator:

Do we get panics here?

@jmank88 (Collaborator, Author):

I haven't seen one, but I wanted to be careful not to return a false positive if one occurs.
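
For context, the defensive pattern under discussion looks roughly like the sketch below; safeWrite is a hypothetical helper, not the actual dep code:

package main

import "fmt"

// safeWrite sketches the defensive pattern: if the write panics, convert the
// panic into an error rather than reporting a false-positive success.
func safeWrite(write func() error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered from panic while writing vendor tree: %v", r)
		}
	}()
	return write()
}

func main() {
	fmt.Println(safeWrite(func() error { panic("boom") }))
}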

@jmank88 (Collaborator, Author) commented Aug 27, 2017

One thing I regret not doing, in retrospect, especially after going through #903: we really want to stop writing vendor as soon as an error occurs (since continuing is useless). This impacts logging somewhat, but we can merge these changes and apply that after.

I went ahead and added this because I was playing around with it and, although it's more complex, it actually ended up cleaner (e.g. no more recover()), plus we can control the number of concurrent writers.
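
A rough sketch of the "limit writers" half of that refactor, assuming a simple buffered-channel work queue; names and structure are illustrative, not the actual commit:

package main

import (
	"fmt"
	"sync"
)

func main() {
	// Queue all work on a buffered channel and let a fixed number of
	// goroutines drain it, so at most `writers` writes run at once.
	projects := []string{"example.com/a", "example.com/b", "example.com/c", "example.com/d"}

	writeCh := make(chan string, len(projects))
	for _, p := range projects {
		writeCh <- p
	}
	close(writeCh)

	const writers = 2 // cap on concurrent vendor writes
	var wg sync.WaitGroup
	for i := 0; i < writers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range writeCh {
				fmt.Println("Wrote", p) // stand-in for writing the project's vendor tree
			}
		}()
	}
	wg.Wait()
}

The abort-on-error side (a cancel channel checked by the writers, plus error collection in the drain loop) is discussed further below.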

@jmank88 changed the title from "verbose logging: add counts to vendor directory lists" to "vendor directory writes: add counts to verbose logging, limits writers, abort on error" on Aug 27, 2017
}
close(writeCh)
// Launch writers.
writers := runtime.GOMAXPROCS(-1)

@jmank88 (Collaborator, Author):

Perhaps an optimal number is some factor of GOMAXPROCS, or ultimately something configurable.
Related discussion on how this might be configurable: #1028 (comment)

Member:

i'm not sure that GOMAXPROCS is the right thing, here - the goroutines themselves aren't actually doing work, but end up mostly in a sleep mode waiting for their spawned subprocesses. limiting it to GOMAXPROCS seems likely to be lower than we want.

i think i'd rather just pick an arbitrary number to begin with - let's say 16 - and see how we do.
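
A minimal sketch of that suggestion; writerCount is hypothetical, and capping at the amount of available work is an extra assumption here, not something from this thread:

package main

import "fmt"

// writerCount sketches the idea above: cap concurrent writers at a fixed
// number (16) rather than tying the limit to GOMAXPROCS, since the goroutines
// mostly wait on I/O, and never launch more writers than there is work.
func writerCount(numProjects int) int {
	const maxWriters = 16
	if numProjects < maxWriters {
		return numProjects
	}
	return maxWriters
}

func main() {
	fmt.Println(writerCount(84)) // 16
	fmt.Println(writerCount(5))  // 5
}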

@sdboyer (Member) left a review comment:

looks mostly good

msg := "Wrote"
if resp.err != nil {
	if len(errs) == 0 {
		close(cancel)

Member:

we need to break out of the loop explicitly here. i believe there's a possibility of a double-close on the cancel channel:

  1. X and Y are both sent into the writeCh, and each is picked up by its own goroutine
  2. X finishes first and fails, indicates as much to the respCh; it terminates, decrementing the wg
  3. main goroutine receives the fail and closes cancel, logs result, returns to loop and waits for new value
  4. meanwhile, Y finishes and also fails, sends to respCh, and terminates, closing respCh from the waiter goroutine
  5. however, the main goroutine is still receiving from the respCh, so it picks up the second fail and panics on a double-close

note that because the range loop will not terminate on a buffered channel until the chan is both closed AND empty, i believe this is guaranteed to happen if two workers simultaneously encounter an error.

@jmank88 (Collaborator, Author) commented Aug 29, 2017:

The close(cancel) is protected by len(errs) == 0 so that the channel is only closed on the first error. We could break and abort on the first error (and only report that single error), but since we want to gracefully block/wait/shut down anyway, it seemed cleaner to inspect all responses and collect all errors (e.g. they might be relevant to investigating the original error; we don't want to block/wait in silence and then not log anything; etc.).
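
A minimal sketch of that drain-loop behavior, with illustrative names (collectErrs is hypothetical, not dep's actual code):

package main

import (
	"errors"
	"fmt"
)

// collectErrs sketches the error handling described above: cancel is closed
// only on the first failure (guarded by checking whether any errors have been
// seen yet), and the loop keeps draining respCh so every error is collected
// and reported, not just the first one.
func collectErrs(respCh <-chan error, cancel chan struct{}) []error {
	var errs []error
	for err := range respCh {
		if err == nil {
			continue
		}
		if len(errs) == 0 {
			close(cancel) // first error only, so a later failure cannot double-close
		}
		errs = append(errs, err)
	}
	return errs
}

func main() {
	respCh := make(chan error, 3)
	respCh <- nil
	respCh <- errors.New("failed to write dep tree: A")
	respCh <- errors.New("failed to write dep tree: B")
	close(respCh)

	cancel := make(chan struct{})
	fmt.Println(len(collectErrs(respCh, cancel))) // 2 errors collected, cancel closed once
}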

Member:

gah, you're right, i just breezed right by that check. ok 👍

@jmank88 closed this on Aug 29, 2017
@jmank88 reopened this on Aug 29, 2017
@carolynvs removed their request for review on August 29, 2017 13:23