core/rawdb, ethdb: introduce batched/atomic reads from ancients #23566
Conversation
Force-pushed from 0143295 to ebed66d
From a design perspective, maybe the interfaces should look different. This PR so far just adds […]. So theoretically, this PR allows doing this: […] In practice, that would lead to a double-[…]. Or maybe it should be something else entirely.
This is now fixed.
core/rawdb/accessors_chain.go
Outdated
var data []byte
db.AtomicReadAncients(func(reader ethdb.AncientReader) error {
	// Check if the data is in ancients
	if h, err := reader.Ancient(freezerHashTable, number); err == nil && common.BytesToHash(h) == hash {
Use bytes.Equal instead of BytesToHash.
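A minimal sketch of the suggested comparison, assuming the same reader, freezerHashTable, number and hash as in the quoted snippet (illustration only, not the PR's final code):

	if h, err := reader.Ancient(freezerHashTable, number); err == nil && bytes.Equal(h, hash[:]) {
		// Hash matches: the canonical data for this number is in ancients.
	}

Since hash is a common.Hash, hash[:] exposes its raw bytes, and bytes.Equal (from the standard library "bytes" package) compares the slices directly without allocating a new common.Hash.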
Looks really nice so far!
Force-pushed from de88963 to 2a0c9dd
Have we ever run fast/snap sync with this PR? My concern is that previously ancient writes and reads could happen concurrently, but now that's not allowed. Ancient reads are pretty common during sync. Not sure how much performance will be affected.
@@ -90,6 +90,14 @@ type AncientReader interface {
	AncientSize(kind string) (uint64, error)
}

// AncientBatchReader is the interface for 'batched' or 'atomic' reading.
type AncientBatchReader interface {
	AncientReader
Please add one more line here.
AncientBatchReader looks confusing. Maybe we can rename the AncientReader to RawAncientReader and rename the AncientBatchReader to AncientReader.
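For illustration, a hedged sketch of the proposed naming (not what the PR currently contains); the method list is abbreviated:

	// RawAncientReader would keep today's per-item accessors.
	type RawAncientReader interface {
		Ancient(kind string, number uint64) ([]byte, error)
		Ancients() (uint64, error)
		AncientSize(kind string) (uint64, error)
		// ... the remaining existing read methods
	}

	// AncientReader would then be the batched/atomic view added by this PR.
	type AncientReader interface {
		RawAncientReader
		// ReadAncients runs the given read operation while ensuring that no
		// writes take place on the underlying freezer.
		ReadAncients(fn func(RawAncientReader) error) (err error)
	}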
	AncientReader
	// ReadAncients runs the given read operation while ensuring that no writes take place
	// on the underlying freezer.
	ReadAncients(fn func(AncientReader) error) (err error)
Perhaps ReadAncientsLocked?
The ReadAncients operation is analogous to ModifyAncients, which is the locked/atomic/batched write operation.
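To illustrate that symmetry, a hedged sketch of the two call sites side by side; the table names (freezerHashTable, freezerHeaderTable) and surrounding variables are assumptions for the example, not code from this PR:

	// Locked/atomic write (from #23462): all appends run in a single callback
	// while the freezer is locked for writing.
	_, err := db.ModifyAncients(func(op ethdb.AncientWriteOp) error {
		if err := op.AppendRaw(freezerHashTable, number, hash[:]); err != nil {
			return err
		}
		return op.AppendRaw(freezerHeaderTable, number, headerRLP)
	})

	// Locked/atomic read (this PR): the freezer cannot be appended to or
	// truncated while the callback runs, so both lookups see the same state.
	err = db.ReadAncients(func(reader ethdb.AncientReader) error {
		if h, _ := reader.Ancient(freezerHashTable, number); bytes.Equal(h, hash[:]) {
			headerRLP, _ = reader.Ancient(freezerHeaderTable, number)
		}
		return nil
	})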
Ancient writes are already non-concurrent. But yes, you might have a point regarding the reads. I've done snap syncs with PRs on top of this one, but I haven't run a head-to-head benchmark. I'll spin one up.
This PR running on […]
We had a review call about this, and here are my thoughts: This PR is good because it removes the complexity of having to think about concurrent modifications of the freezer while reading it.
The atomic read possibility (ReadAncients) is added here without removing the previous non-atomic freezer API. The practical difference between these two ways of accessing the freezer is subtle and related to freezer internals. I think that's not great because it increases the complexity of the API. But keeping non-atomic access as a possibility is the correct choice: we should still be able to write ancient chain data while other peers read headers.
This reverts commit edc5a92f3fb0595a9d96691253ec06a7cd5d601e.
Force-pushed from d1c39e4 to 797044b
…reum#23566) This PR adds a new accessor method to the freezer database. This new view offers a consistent interface, guaranteeing that all individual tables (headers, bodies, etc.) are on the same number, and that this number is not changed (by additions or truncations) while the operation is being performed.
In several places, we use the following mechanic to read a piece of data: check the ancients (freezer) first, then check leveldb, then check the ancients again (see the sketch below). The re-check is needed because between step 1 and step 2 there's an (ever so slight) chance that the item was written into ancients and deleted from leveldb, so both lookups miss.
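Concretely, the old pattern looks roughly like this (a hedged sketch modelled on the header accessor; freezerHeaderTable and headerKey are rawdb internals, and the exact code may differ):

	// 1) Try the freezer first; the hash check is needed because the freezer
	//    only holds canonical data.
	data, _ := db.Ancient(freezerHeaderTable, number)
	if len(data) > 0 && crypto.Keccak256Hash(data) == hash {
		return data
	}
	// 2) Fall back to leveldb.
	data, _ = db.Get(headerKey(number, hash))
	if len(data) > 0 {
		return data
	}
	// 3) Re-check the freezer: between 1) and 2) the background freezer may have
	//    moved this item from leveldb into ancients, so both lookups can miss.
	data, _ = db.Ancient(freezerHeaderTable, number)
	return data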
As of #23462, we now do those types of transfers using ancient batches, and thus introduced the ability to do locked writes. This PR extends that ability to support locked reads.
This could be used in the following situations, both simplifying and reducing the code (see the sketch after the list):
ReadCanonicalHash
ReadHeaderRLP
ReadBodyRLP
ReadCanonicalBodyRLP
ReadTdRLP
ReadReceiptsRLP
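For example, a hedged sketch of how ReadHeaderRLP could be rewritten on top of ReadAncients (assuming the db argument exposes the new method; freezerHeaderTable and headerKey are rawdb internals, and the merged code may differ in details):

	func ReadHeaderRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue {
		var data []byte
		db.ReadAncients(func(reader ethdb.AncientReader) error {
			// First try the freezer. The hash comparison is necessary because
			// the freezer only keeps canonical data.
			data, _ = reader.Ancient(freezerHeaderTable, number)
			if len(data) > 0 && crypto.Keccak256Hash(data) == hash {
				return nil
			}
			// Not in ancients: read from leveldb. No third lookup is needed,
			// because the freezer cannot change while this callback runs.
			data, _ = db.Get(headerKey(number, hash))
			return nil
		})
		return data
	}

Compared to the three-step pattern above, the re-check disappears: the callback runs with the freezer locked against writes and truncations.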