
Actually track progress for send/receive #53

Merged — 4 commits merged into master on Jun 1, 2020

Conversation

@hannahhoward (Collaborator) commented May 29, 2020

Goals

Set up hooks with graphsync to facilitate tracking progress on the request -- this is a prerequisite for being able to pause and resume at key increments. This sets up much of the infrastructure needed for pause/resume and other facilities needed for retrieval.

Implementation

  • Modify the channels list to make the sent and received counters mutable. Since it's just two numbers, rather than converting the whole channels list to pointers to channel state, just make these members pointers to uint64 and use sync/atomic (low-level primitives for atomic operations on numbers). Also, move the actual data structures for channels into the channels list package and leave the types as interfaces instead.
  • Set up an abstraction layer for converting between events that happen in graphsync through hooks and semantic events that mean something to data transfer. This allows some separation of graphsync logic from data transfer logic. While I didn't attempt it here, I think we could move everything that is actually transport-layer specific behind an interface and have all the remaining high-level logic be universal (and potentially use a different transport layer on different requests, so we can support multiple transports in a single data transfer instance).
  • Set up all hooks to actually record data progress on a data transfer so we can support pausing at intervals for retrieval.

  • Setup channel tracking to properly support mutable send and receive operations
  • Create hooks abstraction layer to manage mapping graphsync requests to data transfer events on channel IDs; also adds recording for data progress
  • Add tests for the hooks interface and verify we are able to receive data for both send and receive
@shannonwells (Contributor) left a comment:

Looks OK to me, and the tests seem pretty complete. I had only suggestions, nothing blocking. The test instrumentation looked like a PITA to get done, but it made things pretty readable and easier to write new tests for.

require.NoError(t, err)
sent, err = channelList.IncrementSent(datatransfer.ChannelID{Initiator: peers[0], ID: tid1}, 25)
require.Equal(t, uint64(125), sent)
require.NoError(t, err)
})
Contributor comment:

Just as a sanity check it might be nice to have a parallel update test to verify the thread safety when updating sent and received.

},
},
"recognized incoming request will record outgoing blocks": {
action: func(gsData *graphsyncTestData) {
Contributor comment:

You could reasonably combine this with the first valid incoming request test.

if data.makeRequest != nil {
request = data.makeRequest(requestID, channelID)
} else {
ext := &extension.TransferData{
Contributor comment:

Consider extracting these to a well-named function for readability.

@hannahhoward merged commit 9d6a1ef into master on Jun 1, 2020
@rvagg deleted the feat/progress-indicators branch on January 5, 2023