
simplify and add DoNotWaitForStreamingResponses #75

Merged: 8 commits merged into master from refactor on Jul 25, 2023
Conversation

@Jorropo (Contributor) commented Jul 19, 2023

No description provided.

I removed the search-value canceled-context test since it was racy: it was racing the errCh. The new code waits for one worker to succeed, or for all of them to error, before resolving, instead of leaking goroutines.
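
A minimal sketch of that resolution pattern (runAll and the worker functions are illustrative names, not the PR's actual code): the results channel is buffered so late workers can still send after the caller has returned, which is what avoids leaking goroutines.

// Hedged sketch of "resolve on first success, or after all workers error".
package main

import (
	"context"
	"errors"
	"fmt"
)

func runAll(ctx context.Context, workers []func(context.Context) error) error {
	// Buffered so every worker can send its result without blocking,
	// even after the caller has already returned: no leaked goroutines.
	errCh := make(chan error, len(workers))
	for _, w := range workers {
		w := w
		go func() { errCh <- w(ctx) }()
	}

	var errs []error
	for range workers {
		err := <-errCh
		if err == nil {
			return nil // one worker succeeded: resolve immediately
		}
		errs = append(errs, err)
	}
	if len(errs) == 0 {
		return nil // no workers at all
	}
	return fmt.Errorf("all %d workers failed, first: %w", len(errs), errs[0])
}

func main() {
	err := runAll(context.Background(), []func(context.Context) error{
		func(context.Context) error { return errors.New("boom") },
		func(context.Context) error { return nil },
	})
	fmt.Println(err) // <nil>: at least one worker succeeded
}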
@Jorropo (Contributor, Author) commented Jul 19, 2023

hugo@Radus ~/k/go-libp2p-routing-helpers (refactor)> go mod tidy && git status
On branch refactor
Your branch is up to date with 'origin/refactor'.

nothing to commit, working tree clean

go.sum is good, I don't know why goreleaser is confused.

@aschmahmann (Contributor) left a comment

No code bugs identified here, mostly just concerns about the complexity and clarity of what's going on.

The potential bugs are associated with whether this design will really give us what we need in kubo (e.g. ipfs/kubo#10020 (comment)). In particular, the non-generic parallel routers were pretty specific, but they were handling edge cases that compparallel is having problems with. We can deal with that by leveraging the composability better; if the code in the consumer (e.g. kubo) ends up looking reasonable with this abstraction then 👍, otherwise it might need to change a bit to accommodate.

compsequential.go: review thread (outdated, resolved)
@@ -3,6 +3,7 @@ module github.com/libp2p/go-libp2p-routing-helpers
go 1.19

require (
github.com/Jorropo/jsync v1.0.1
Contributor commented:

What's the value of using jsync here? The commit that introduces this claims it's a performance improvement due to not needing a goroutine that might be short lived, but AFAIK this code isn't really in the critical path anywhere which means it's just another abstraction reviewers have to keep in mind.

@Jorropo (Contributor, Author) replied Jul 22, 2023

IMO it makes the code clearer: it is easy to see that when the count reaches 0 some code executes.
It's also limited to the internal implementation and isn't exposed to consumers in any way; jsync only depends on the standard library, so I think it's fine as-is.
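
For readers unfamiliar with the idea being debated, here is a hedged sketch of the "run a callback once the count reaches zero" pattern, built only on sync/atomic; the zeroCounter type and its methods are illustrative, not the jsync API.

package main

import (
	"fmt"
	"sync/atomic"
)

// zeroCounter runs onZero exactly once, in the goroutine that performs
// the final decrement.
type zeroCounter struct {
	n      atomic.Int64
	onZero func()
}

func newZeroCounter(n int64, onZero func()) *zeroCounter {
	c := &zeroCounter{onZero: onZero}
	c.n.Store(n)
	return c
}

func (c *zeroCounter) done() {
	if c.n.Add(-1) == 0 {
		c.onZero()
	}
}

func main() {
	finished := make(chan struct{})
	c := newZeroCounter(3, func() { close(finished) })
	for i := 0; i < 3; i++ {
		go c.done()
	}
	<-finished
	fmt.Println("all workers finished")
}

The upside is exactly what the comment above claims: when the counter hits zero, some code runs, without a dedicated goroutine sitting around waiting for it.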

compparallel_test.go: review thread (resolved)
compparallel.go: review thread (outdated, resolved)
compparallel.go: review thread (outdated, resolved)
errorsLk.Lock()
if outCh == nil {
outCh = make(chan T)
errors = nil
Contributor commented:

does this do anything other than let the GC collect a few bytes earlier?

@Jorropo (Contributor, Author) replied Jul 24, 2023

it's used in the error handling:

				if errors == nil {
					return
				}
				errs = append(errs, err)

If errors is nil, then it means something else already succeeded and it does not need to keep appending.
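
A hedged sketch of that sentinel idea in isolation, with illustrative names (recordError, markSuccess) rather than the PR's exact code: setting the shared error slice to nil under the lock is the signal that a result has already been produced, so later workers stop accumulating errors.

package main

import (
	"errors"
	"fmt"
	"sync"
)

var (
	errorsLk sync.Mutex
	errs     = []error{} // non-nil empty slice means "still collecting errors"
)

// recordError keeps an error only while no result has been produced yet.
func recordError(err error) {
	errorsLk.Lock()
	defer errorsLk.Unlock()
	if errs == nil {
		return // something already succeeded; drop the error
	}
	errs = append(errs, err)
}

// markSuccess flips the sentinel: errors are no longer interesting.
func markSuccess() {
	errorsLk.Lock()
	defer errorsLk.Unlock()
	errs = nil
}

func main() {
	recordError(errors.New("router A failed"))
	markSuccess()
	recordError(errors.New("router B failed")) // dropped
	fmt.Println(len(errs), errs == nil)        // 0 true
}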

errCh := make(chan error)
var wg sync.WaitGroup
var ready sync.Mutex
ready.Lock()
Contributor commented:

I think more explanation around the ready lock seems reasonable. Readers can follow the code, but with async code like this it's easy for anyone who comes next to miss something.
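
For context on what such an explanation might say, here is a hedged sketch of the pre-locked-mutex idiom in isolation (the general pattern only, not the PR's actual control flow): a sync.Mutex locked at setup acts as a one-shot ready signal between two goroutines, which Go permits because a Mutex is not tied to the goroutine that locked it.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var ready sync.Mutex
	ready.Lock() // held from the start; released only when setup is done

	go func() {
		// ... perform setup work ...
		ready.Unlock() // signal: the worker is ready
	}()

	ready.Lock() // blocks until the worker unlocks
	fmt.Println("worker is ready, continue")
}

A closed channel or a sync.WaitGroup expresses the same one-shot signal more conventionally, which is part of why this pattern benefits from an explanatory comment in the code.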

Comment on lines +362 to +370
var minusOne uint64
minusOne--
Contributor commented:

Why are we doing this with uints instead of ints?

@Jorropo (Contributor, Author) replied Jul 24, 2023

Because it can't be negative.
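
To spell out the trick for readers: decrementing an unsigned zero wraps a uint64 around to the bit pattern of -1, which can then be added to another unsigned counter to subtract one. This matches what the sync/atomic documentation suggests for decrementing unsigned values (add ^uint64(0)). A minimal standalone sketch, not the PR's code:

package main

import (
	"fmt"
	"math"
	"sync/atomic"
)

func main() {
	var minusOne uint64
	minusOne-- // wraps around to math.MaxUint64, i.e. the bits of -1
	fmt.Println(minusOne == math.MaxUint64) // true
	fmt.Println(minusOne == ^uint64(0))     // true

	var counter uint64 = 5
	atomic.AddUint64(&counter, minusOne) // same effect as subtracting 1
	fmt.Println(counter)                 // 4
}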

@Jorropo force-pushed the refactor branch 2 times, most recently from 850d18a to 95ff051 on July 25, 2023 17:00
@github-actions commented:

Suggested version: v0.7.1

Comparing to: v0.7.0 (diff)

Changes in go.mod file(s):

(empty)

gorelease says:

# diagnostics
go.sum: one or more sums are missing. Run 'go mod tidy' to add missing sums.

# summary
Suggested version: v0.7.1

gocompat says:

Your branch is up to date with 'origin/master'.

Cutting a Release (and modifying non-markdown files)

This PR is modifying both version.json and non-markdown files.
The Release Checker is not able to analyse files that are not checked in to master. This might cause the above analysis to be inaccurate.
Please consider performing all the code changes in a separate PR before cutting the release.

Automatically created GitHub Release

A draft GitHub Release has been created.
It is going to be published when this PR is merged.
You can modify its body to include any release notes you wish to ship with the release.

@Jorropo merged commit 2d62ab8 into master on Jul 25, 2023
20 checks passed
@Jorropo deleted the refactor branch on July 25, 2023 17:21
@BigLep mentioned this pull request on Aug 3, 2023