neutrino+query+rescan: improve rescan speed #236
```go
@@ -714,7 +714,9 @@ func (s *ChainService) GetCFilter(blockHash chainhash.Hash,
	log.Debugf("Fetching filters for heights=[%v, %v], stophash=%v",
		q.startHeight, q.stopHeight, q.stopHash)

	persistChan := make(chan *filterResponse, len(q.headerIndex))
	go func() {
		defer close(persistChan)
		defer s.mtxCFilter.Unlock()

		// Hand the query to the work manager, and consume the verified
```
```go
@@ -782,7 +784,33 @@ func (s *ChainService) GetCFilter(blockHash chainhash.Hash,
			numFilters)
		}

		if s.persistToDisk {
			persistChan <- resp
		}
	}()

	if s.persistToDisk {
		// Persisting to disk is the bottleneck for fetching filters.
		// So we run the persisting logic in a separate goroutine so
		// that we can unlock the mtxCFilter mutex as soon as we are
		// done with caching the filters in order to allow more
		// GetCFilter calls from the caller sooner.
		go func() {
			var (
				resp *filterResponse
				ok   bool
			)

			for {
				select {
				case resp, ok = <-persistChan:
					if !ok {
						return
					}

				case <-s.quit:
					return
				}
```
Review comment: One worry with this is that we (in the worst case, syncing from genesis) spin up something like 700 goroutines on mainnet. So another option to rate limit this a bit is to have a single outer goroutine that listens on a channel with a large buffer. Then we rate limit ourselves with that buffered channel.

Review comment: Also going to look into doing batch writes to the db instead of one by one.

Review comment: Batch writes would be great, and shouldn't be too difficult to add in. Re the amount of goroutines: we can instead make a basic worker pool here. Or we only allow so many of them to be active at one time via a semaphore.
```go
				err = s.FilterDB.PutFilter(
					resp.blockHash, resp.filter,
					dbFilterType,
```
```go
@@ -795,8 +823,8 @@ func (s *ChainService) GetCFilter(blockHash chainhash.Hash,
				log.Tracef("Wrote filter for block %s, type %d",
					resp.blockHash, dbFilterType)
			}
		}
	}()
	}()
}

	var ok bool
	var resultFilter *gcs.Filter
```
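The batch-write idea mentioned in review could look roughly like the following. This is a sketch, not the eventual implementation: `batchDrain` and its `flush` callback are hypothetical names, with `flush` standing in for a single batched `FilterDB` write covering many filters.

```go
package main

import "fmt"

// batchDrain collects items from ch and hands them to flush in groups
// of up to batchSize, flushing any remainder once ch is closed. Each
// flush call models one DB transaction writing many filters at once,
// instead of one write per filter.
func batchDrain(ch <-chan int, batchSize int, flush func([]int)) {
	batch := make([]int, 0, batchSize)
	for it := range ch {
		batch = append(batch, it)
		if len(batch) == batchSize {
			flush(batch)
			batch = batch[:0] // reuse the backing array
		}
	}
	if len(batch) > 0 {
		flush(batch) // final partial batch
	}
}

func main() {
	ch := make(chan int, 7)
	for i := 0; i < 7; i++ {
		ch <- i
	}
	close(ch)

	flushes := 0
	batchDrain(ch, 3, func(b []int) { flushes++ })
	fmt.Println(flushes) // 3: batches of 3, 3, and 1
}
```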
Review comment: I think it would be cleaner to send the filter to persist, instead of relying on a chan close above as a signal to check a local closure variable.
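A minimal sketch of what that suggestion could look like: each received value is handled inside the receive case itself, so no variable shared with the select outlives it. `persistLoop` and `filterResp` are hypothetical stand-ins for the PR's persist goroutine and `*filterResponse`.

```go
package main

import "fmt"

type filterResp struct{ id int }

// persistLoop drains work from ch until it is closed, acting on each
// value where it is received rather than via a variable declared
// outside the loop. quit allows an early exit on shutdown.
func persistLoop(ch <-chan filterResp, quit <-chan struct{},
	persist func(filterResp)) {

	for {
		select {
		case resp, ok := <-ch:
			if !ok {
				return // channel closed: producer is done
			}
			persist(resp) // value scoped to this case only

		case <-quit:
			return
		}
	}
}

func main() {
	ch := make(chan filterResp, 3)
	for i := 1; i <= 3; i++ {
		ch <- filterResp{id: i}
	}
	close(ch)

	n := 0
	persistLoop(ch, make(chan struct{}), func(filterResp) { n++ })
	fmt.Println(n) // 3
}
```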