This repository has been archived by the owner on Feb 1, 2023. It is now read-only.

fix multiple data races #76

Merged
merged 5 commits into from
Feb 20, 2019

Conversation

Member

@Stebalien Stebalien commented Feb 20, 2019

  • fix pubsub race: Calling wg.Add after wg.Wait has returned is invalid. This change swaps the wait group for a plain rwmutex.
  • fix Stat race: mixing atomics with locks doesn't work.
  • fix race detector.

(caught with the race detector)

@ghost ghost assigned Stebalien Feb 20, 2019
@ghost ghost added the status/in-progress In progress label Feb 20, 2019
Calling `wg.Add` after `wg.Wait` has returned is invalid. This change swaps the
wait group for a plain rwmutex.

(caught with the race detector)
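A minimal sketch of the pattern this commit describes (identifiers hypothetical, not the actual pubsub code): the WaitGroup version races because a publisher can call `wg.Add` concurrently with, or after, shutdown's `wg.Wait`, which the `sync.WaitGroup` documentation forbids. An RWMutex plus a closed flag gives the same guarantee without that hazard:

```go
package main

import "sync"

// Hypothetical sketch: publishers take the read lock and check closed;
// shutdown takes the write lock and sets it. Unlike wg.Add after wg.Wait,
// this is always valid: once shutdown() returns, no publish can observe
// closed == false.
type pubsub struct {
	lk     sync.RWMutex
	closed bool
}

// publish reports whether the message was accepted.
func (ps *pubsub) publish(msg string) bool {
	ps.lk.RLock()
	defer ps.lk.RUnlock()
	if ps.closed {
		return false // shut down; drop the message
	}
	// ... deliver msg to subscribers here ...
	return true
}

func (ps *pubsub) shutdown() {
	ps.lk.Lock()
	ps.closed = true
	ps.lk.Unlock()
}

func main() {
	ps := &pubsub{}
	println(ps.publish("hello")) // true
	ps.shutdown()
	println(ps.publish("late")) // false
}
```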
@Stebalien Stebalien changed the title pubsub: fix race on shutdown fix multiple data races Feb 20, 2019
Contributor

@hannahhoward hannahhoward left a comment


See my comment about switching to a mutex + a closed bool -- the cancel channel still seems more versatile, even without the waitgroup. I'm new to this code, though, so take this with a grain of salt.

		return
	default:
	}

	ps.wrapped.Pub(block, block.Cid().KeyString())
}
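The hunk above is the tail of a select guard on a cancel channel. A self-contained sketch of that pattern (hypothetical names, not the real implementation), with `sync.Once` making shutdown safe to call more than once:

```go
package main

import "sync"

// Hypothetical sketch of the cancel-channel guard: publish bails out once
// cancel is closed; shutdown uses sync.Once so a second call is a no-op
// instead of a double-close panic.
type notifier struct {
	cancel   chan struct{}
	stopOnce sync.Once
}

func newNotifier() *notifier {
	return &notifier{cancel: make(chan struct{})}
}

// publish reports whether the message was forwarded.
func (n *notifier) publish(msg string) bool {
	select {
	case <-n.cancel:
		return false // already shut down
	default:
	}
	// ... forward msg to the wrapped pubsub here ...
	return true
}

func (n *notifier) shutdown() {
	n.stopOnce.Do(func() { close(n.cancel) })
}

func main() {
	n := newNotifier()
	println(n.publish("a")) // true
	n.shutdown()
	n.shutdown() // safe: Once prevents a double close
	println(n.publish("b")) // false
}
```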

// Not safe to call more than once.
Contributor


Pretty sure this comment is no longer relevant. Not that you should call it more than once, but it looks like it'll just return early if you do.

@@ -107,9 +100,10 @@ func (ps *impl) Subscribe(ctx context.Context, keys ...cid.Cid) <-chan blocks.Bl
	if !ok {
		return
	}
// We could end up blocking here if the client
Contributor


I see that the waitgroup is the source of the race, and I get removing it, but I'm not 100% sure why removing the cancel channel is needed; it seems like a nice way to ensure these goroutines get cleaned up, even if the subscribe client doesn't cancel the context.

Member Author


I mostly just removed it because it wasn't absolutely necessary. We introduced it in the first place to avoid a lock, IIRC.

However, you're right. There's no reason not to have it.

Ensures that we don't leave goroutines behind, even if the client forgets to
unsubscribe.
@Stebalien
Member Author

@hannahhoward fixed (and I've fixed another test race).

Contributor

@hannahhoward hannahhoward left a comment


LGTM

@Stebalien Stebalien merged commit 722239f into master Feb 20, 2019
@ghost ghost removed the status/in-progress In progress label Feb 20, 2019
Jorropo pushed a commit to Jorropo/go-libipfs that referenced this pull request Jan 26, 2023