
Recover file store previous state at open #1632

Conversation

@nymius (Contributor) commented Sep 29, 2024

Description

The Store::open method doesn't recover the previous Store state saved in the file: it places the file pointer just after the magic bytes prefix. This is later aggravated by Store::append_changeset, which sets the file pointer after the last written changeset. The combination of both causes the loss of any previous changesets that may have been in the file.
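
A minimal, self-contained reproduction of that interaction, using plain std::fs with a toy two-byte prefix and raw byte strings instead of the store's real encoding (the file name and payloads are illustrative):

use std::fs::OpenOptions;
use std::io::{Read, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("store_overwrite_demo.db");
    let magic = b"MB";

    // A previous session: magic bytes followed by one stored "changeset".
    let mut file = OpenOptions::new().create(true).write(true).truncate(true).open(&path)?;
    file.write_all(magic)?;
    file.write_all(b"old-changeset")?;
    drop(file);

    // `open`-like behaviour: the pointer is left right after the magic
    // bytes, without iterating over what is already stored.
    let mut file = OpenOptions::new().read(true).write(true).open(&path)?;
    file.seek(SeekFrom::Start(magic.len() as u64))?;

    // `append_changeset`-like behaviour: write at the current position and
    // truncate the file at the end of the write.
    file.write_all(b"new")?;
    let end = file.stream_position()?;
    file.set_len(end)?;
    drop(file);

    // The previously stored changeset is gone.
    let mut contents = String::new();
    OpenOptions::new().read(true).open(&path)?.read_to_string(&mut contents)?;
    assert_eq!(contents, "MBnew");
    Ok(())
}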

It is natural to think this shouldn't be the expected behavior, as @KnowWhoami pointed out in #1517, and that the Store should recover the previous changesets stored in the file.

To avoid producing breaking changes, the expected logic has been implemented in a new Store::reopen method, which opens the file, iterates over the changesets and returns a new Store with the file pointer placed at the end of the already existing changesets. As this method checks the integrity of the stored changesets while iterating them, if any error is found it returns a StoreError::EntryIter error carrying the index of the offending changeset, the error it caused and the number of bytes read when the error happened.

The StoreError is a new error enum including the following variants:

  • EntryIter: any error related to decoding the changesets in the file.
  • Io: errors related to file-management internals.
  • InvalidMagicBytes: error caused by a mismatch with the expected magic bytes.

StoreError::Io and StoreError::InvalidMagicBytes are the same as FileError::Io and FileError::InvalidMagicBytes. My idea is to propose StoreError as a replacement for FileError in the future. It has been implemented this way to avoid adding extra nested variants when matching on the error.
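
Roughly, the variants described above could take a shape like this (field names and payload types here are illustrative assumptions, not the PR's actual definition):

use std::io;

#[derive(Debug)]
pub enum StoreError {
    /// A changeset could not be decoded while iterating the file.
    EntryIter {
        /// Index of the offending changeset.
        index: usize,
        /// Bytes successfully read before the failure.
        bytes_read: u64,
        /// The underlying decoding error.
        error: io::Error,
    },
    /// File-management errors (open, read, seek, ...).
    Io(io::Error),
    /// The file prefix did not match the expected magic bytes.
    InvalidMagicBytes { got: Vec<u8>, expected: Vec<u8> },
}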

Fixes #1517.

Notes to the reviewers

I have committed the changes as feat and not fix because, although this PR fixes #1517, the changes do not modify the current Store methods, but rather add a new one and a new error enum.

Changelog notice

  • New Store::reopen method in file_store.
  • New StoreError enum in file_store.

Checklists

All Submissions:

  • I've signed all my commits
  • I followed the contribution guidelines
  • I ran cargo fmt and cargo clippy before committing

New Features:

  • I've added tests for the new feature
  • I've added docs for the new feature

Bugfixes:

  • This pull request breaks the existing API
  • I've added tests to reproduce the issue which are now passing
  • I'm linking the issue being fixed by this PR

StoreError is an ergonomic way to group all the errors that may occur
while opening a changeset data file. It includes the following
variants:

- EntryIter: any error related to decoding the changesets in the file.
- Io: errors related to file-management internals.
- InvalidMagicBytes: error caused by a mismatch with the expected magic
  bytes.
This new method checks the integrity of the changesets while opening
the file that stores them, and places the file pointer at EOF to avoid
overwriting any changeset and to allow appending new changesets right
away.
The following tests have been added:

- reopen_recovers_state_after_last_write: checks Store::reopen recovers
  the changesets previously stored in the file (a simplified sketch
  follows this list).
- fail_to_reopen_if_write_is_short: checks Store::reopen reads the
  correct changesets and fails on the broken one, returning through
  StoreError::EntryIter the data needed to recover the underlying file
  to a working state.
- reopen_fails_to_read_if_invalid_bytes: checks Store::reopen recognizes
  garbage data just as Store::open does.
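
To give a sense of the shape of the first test, here is a self-contained sketch using a simplified length-prefixed encoding in place of the store's real format (all names and the encoding are illustrative):

use std::fs::OpenOptions;
use std::io::{ErrorKind, Read, Seek, SeekFrom, Write};

// Toy "reopen": skip the magic bytes and read length-prefixed entries until EOF.
fn toy_reopen(path: &std::path::Path, magic: &[u8]) -> std::io::Result<Vec<Vec<u8>>> {
    let mut file = OpenOptions::new().read(true).open(path)?;
    file.seek(SeekFrom::Start(magic.len() as u64))?;
    let mut entries = Vec::new();
    loop {
        let mut len_buf = [0u8; 4];
        match file.read_exact(&mut len_buf) {
            Ok(()) => {}
            Err(e) if e.kind() == ErrorKind::UnexpectedEof => break,
            Err(e) => return Err(e),
        }
        let mut entry = vec![0u8; u32::from_le_bytes(len_buf) as usize];
        file.read_exact(&mut entry)?;
        entries.push(entry);
    }
    Ok(entries)
}

#[test]
fn reopen_recovers_state_after_last_write() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("toy_reopen_test.db");
    let magic = b"MB";

    // Write the magic bytes plus two entries, as a previous session would.
    let mut file = OpenOptions::new().create(true).write(true).truncate(true).open(&path)?;
    file.write_all(magic)?;
    for entry in [b"first".as_slice(), b"second".as_slice()] {
        file.write_all(&(entry.len() as u32).to_le_bytes())?;
        file.write_all(entry)?;
    }
    drop(file);

    // Reopening must recover exactly what was stored before.
    let entries = toy_reopen(&path, magic)?;
    assert_eq!(entries, vec![b"first".to_vec(), b"second".to_vec()]);
    Ok(())
}
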
@nymius changed the title from "I1517/append after open causes overwrites" to "Recover file store previous state at open" on Sep 30, 2024
/// error variant will be returned, with the index of the failing changeset and the error it
/// caused.
///
/// [`create_new`]: Store::create_new
Contributor

I'm probably missing some background but what's the point of doing this unless you return the data you read?

I think a PR here makes sense because we got rid of the "load" system we had before. Maybe this needs a big re-think including deleting lots of the API here. We probably just want append_changeset and create and load as the API. Does anyone need to actually iter changesets?

@KnowWhoami commented Sep 30, 2024

I agree that the FileStore has APIs that users likely don't need.

IMO, implementing these APIs in the following fashion could make FileStore much simpler.

Key APIs:

  • append_changeset:
    We should ensure that duplicate changesets are not stored. Currently, users must call aggregate_changesets to remove duplicates, but the store isn't updated with the final unique changeset. Instead, we can make append_changeset automatically append the given changeset, aggregate them to remove duplicates, and then store the final aggregated changeset back into the FileStore.

  • load:
    As suggested by @ValuedMammal, it makes sense to open the FileStore in append mode rather than write mode. This will place the file pointer at the end of the file, avoiding overwriting and making it easier to append new changesets.

Other Considerations:

  • aggregate_changesets:
    While I'm not certain if downstream users will need this API, it seems valuable to keep it exposed for now.

  • Iteration Over Changesets:
    From a user's perspective, iterating changesets is unnecessary. Currently, it's only used to collect changesets for aggregation. We could either make it private or move the iteration logic directly into aggregate_changesets.

By handling changeset duplication within append_changeset, we can eliminate the need for users to manually aggregate changesets or iterate over them.
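
For reference, the append-mode suggestion above comes down to the OpenOptions flags; with append(true) the operating system places every write at the current end of the file:

use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // With `append(true)` every write lands at the end of the file, so
    // previously stored changesets cannot be overwritten by a later write.
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("changesets.db")?; // illustrative path
    file.write_all(b"next-changeset")?; // stand-in for a serialized changeset
    Ok(())
}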

@nymius (Contributor Author)

I agree with your vision.

In this PR I proposed a compromise solution, guided by the principle of stabilizing the API rather than breaking it. I don't know if that's currently something BDK wants to do.


As suggested by @ValuedMammal, it makes sense to open the FileStore in append mode rather than write mode. This will place the file pointer at the end of the file, avoiding overwriting and making it easier to append new changesets.

For example, I discarded the append solution because there seemed to be an expected behavior hidden behind the interaction of two functions:

  1. open not moving the file pointer to the EOF when accessing the file and,
  2. append_changeset setting the file pointer (and length) to the end of the last changeset written.

The interaction of both is what made the append_changeset_truncates_invalid_bytes test pass in the original code, because 1 leaves the file pointer at the end of the MAGIC_BYTES and 2 removes the invalid bytes completely by shortening the file to the end of the last written changeset.
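
A minimal sketch of those two behaviours (simplified from the description above; not the store's actual code):

use std::fs::File;
use std::io::{Seek, SeekFrom, Write};

const MAGIC_BYTES: &[u8] = b"MB"; // stand-in for the real prefix

// (1) `open`-like behaviour: leave the file pointer just after the magic bytes.
fn open_like(file: &mut File) -> std::io::Result<u64> {
    file.seek(SeekFrom::Start(MAGIC_BYTES.len() as u64))
}

// (2) `append_changeset`-like behaviour: write at the current position, then
// shrink the file to the end of what was just written. Anything after that
// point (including the invalid bytes in the test) is discarded.
fn append_like(file: &mut File, changeset: &[u8]) -> std::io::Result<()> {
    file.write_all(changeset)?;
    let end = file.stream_position()?;
    file.set_len(end)
}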


I'm probably missing some background but what's the point of doing this unless you return the data you read?

That's also why I don't provide the user with the well-encoded changesets on error, as that would have implied a more complex structure using generics (to account for the changeset type).
I traded that for the minimum data needed to locate and isolate the failure in the file.

You may find a clearer idea of this in the fail_to_reopen_if_write_is_short test, at this line.


If there is common agreement to do the refactoring above instead of trying to fix the current API, I can close this PR and try to implement the changes in another one.

Contributor

I don't understand why we want to ensure duplicate changesets are not stored. They do no harm and only happen in the case of some kind of programmer error.

There's no issue with stabilizing the API of this crate, I think. We can do what we want with it. To me at least, this reopen makes even less sense because it reads in every entry but doesn't return them, which is a waste.

I agree with @nymius that opening the file with append is a nice idea in theory, but it's actually counterproductive here. There's no magic to opening files with append; it just makes the operating system handle something we can do better ourselves.


I vote for a PR cutting the API down to:

  • create -- rename create_new.
  • load -- does what open does, aggregates the changesets and leaves the file pointer at the right spot. Errors if the file didn't exist or the magic bytes are wrong.
  • append -- does what append_changeset does
  • create_or_load -- does what open_or_create_new does

Then updating the README to explain that this is the simplest implementation of a bdk-compatible database. It's useful for development but should probably not be used beyond that. No effort has been made to make it backwards compatible, so you have to do that yourself. Link to the docs on the sqlite feature for what should be used in production.
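
A rough skeleton of that proposed surface might read as follows (signatures only; the error types and bounds are assumptions):

use std::io;
use std::marker::PhantomData;
use std::path::Path;

/// Stand-in for the changeset type the store is generic over.
pub struct Store<C> {
    _changeset: PhantomData<C>,
}

#[derive(Debug)]
pub enum LoadError {
    /// The file did not exist or another I/O problem occurred.
    Io(io::Error),
    /// The file prefix did not match the expected magic bytes.
    InvalidMagicBytes,
}

impl<C> Store<C> {
    /// `create` -- what `create_new` does today.
    pub fn create(_path: &Path, _magic: &[u8]) -> io::Result<Self> {
        unimplemented!("sketch only")
    }

    /// `load` -- what `open` does today, plus aggregating the stored
    /// changesets and leaving the file pointer at the end of them.
    pub fn load(_path: &Path, _magic: &[u8]) -> Result<Self, LoadError> {
        unimplemented!("sketch only")
    }

    /// `append` -- what `append_changeset` does today.
    pub fn append(&mut self, _changeset: &C) -> io::Result<()> {
        unimplemented!("sketch only")
    }

    /// `create_or_load` -- what `open_or_create_new` does today.
    pub fn create_or_load(_path: &Path, _magic: &[u8]) -> Result<Self, LoadError> {
        unimplemented!("sketch only")
    }
}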

Member

I think load should essentially return Result<(C, Self), Error> so that we can get rid of .aggregate_changeset. This forces the caller to only read once.

@KnowWhoami commented Oct 2, 2024

The interaction of both is what made the append_changeset_truncates_invalid_bytes test pass in the original code, because 1 leaves the file pointer at the end of the MAGIC_BYTES and 2 removes the invalid bytes completely by shortening the file to the end of the last written changeset.

Got the point.

I don't understand why we want to ensure duplicate changesets are not stored.

By ensuring this, we can get rid of iterating over the changesets each time the user wants to get the total changeset.

Instead, we can make append_changeset automatically append the given changeset, aggregate them to remove duplicates, and then store the final aggregated changeset back into the FileStore

Though in my proposed impl we still have to iterate at the end, so it doesn't make much difference.

They do no harm and only happen in the case of some kind of programmer error.

I have no strong view on this, but if that's the case then it does not make sense to try to keep a unique set of changesets.

I think load should essentially return Result<(C, Self), Error> so that we can get rid of .aggregate_changeset. This forces the caller to only read once.

Combining the load/open and aggregate_changeset logic does make sense because, IMO, the two are correlated: in order to load a wallet, a user has to get the aggregated changeset, for which they have to load the store instance.
This way we don't need another public API, and users don't have to manually call a function to get the changeset required for loading the wallet.

Member

By ensuring this, we can get rid of iterating over the changesets each time the user wants to get the total changeset.

If loading gets too slow, we can just have a method to merge changesets. I.e. overwrite the file with just one entry, the new aggregate changeset. Or aggregate x number of changesets at the end.

It does NOT make sense to have a check that stops duplicate changesets being stored. How are we going to implement this? Have an aggregate changeset of all persisted changes in memory that ensures the new changeset is not a subset of it? That typically doesn't happen, and if it does, it means that there is a problem with the in-memory data structures. The in-memory data structures should only output a changeset because there are changes to it.
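
For the merging idea above, a toy compaction sketch could simply rewrite the file with a single pre-aggregated entry (here the entry is a raw byte string; the real store would serialize its aggregate changeset in its own encoding):

use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

/// Toy compaction: replace everything after the magic bytes with one
/// pre-aggregated entry, so later loads only have a single entry to read.
fn compact(path: &Path, magic: &[u8], aggregate: &[u8]) -> std::io::Result<()> {
    let mut file = OpenOptions::new()
        .write(true)
        .truncate(true)
        .open(path)?;
    file.write_all(magic)?;
    file.write_all(aggregate)?;
    file.sync_all()
}

A production version would presumably write to a temporary file and rename it over the original, so a crash mid-compaction cannot lose the previous entries.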

@nymius (Contributor Author) commented Nov 12, 2024

I think load should essentially return Result<(C, Self), Error> so that we can get rid of .aggregate_changeset. This forces the caller to only read once.

I have found an issue with this approach. The (C, Self) return type is hard to adapt to the WalletPersister::initialize trait method.

Per the documentation, I don't have any file path available to load the store from, and I must return all data stored in the persister (i.e. return the aggregated changeset):

  • The database schema of the persister (if any), should be initialized...
  • The implementation must return all data currently stored in the persister. If there is no data, return an empty changeset (using [ChangeSet::default()]).
  • some persister implementations may NOT require initialization at all (and not error).

    /// [`persist`]: WalletPersister::persist
    fn initialize(persister: &mut Self) -> Result<ChangeSet, Self::Error>;

file_store's trait implementation currently in master assumes the persister (i.e. file_store::Store) has already been opened or loaded before reaching the trait method (note how aggregate_changesets() is called without any create_new/open ceremony).

#[cfg(feature = "file_store")]
impl WalletPersister for bdk_file_store::Store<ChangeSet> {
    type Error = FileStoreError;

    fn initialize(persister: &mut Self) -> Result<ChangeSet, Self::Error> {
        persister
            .aggregate_changesets()
            .map(Option::unwrap_or_default)
            .map_err(FileStoreError::Load)
    }
    // The required persist() method is also part of this impl; omitted from this excerpt.
}

Summing up, the constraints for file_store::Store are:

  • Must return the aggregated changeset.
  • Cannot load from the file in WalletPersister::initialize because the file path is not in scope at that moment.
  • Including a file_path parameter in WalletPersister::initialize would break the API and force changes on the rusqlite implementation.

Following LLFourn's idea, I propose the following new API for file_store::Store (a rough signature sketch follows the list):

  • create: rename of create_new
  • load: open file, place file pointer at EOF and return file_store::Store.
  • dump: aggregate changesets and return the aggregated changeset.
  • append: rename of append_changeset
  • load_or_create: Store::create or Store::load.
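
A rough signature sketch of this surface, with dump being the piece that lets WalletPersister::initialize work without a file path (names, bounds and error types are assumptions):

use std::io;
use std::marker::PhantomData;
use std::path::Path;

pub struct Store<C> {
    _changeset: PhantomData<C>,
}

impl<C> Store<C> {
    /// `create`: rename of `create_new`.
    pub fn create(_path: &Path, _magic: &[u8]) -> io::Result<Self> {
        unimplemented!("sketch only")
    }

    /// `load`: open the file and leave the file pointer at EOF.
    pub fn load(_path: &Path, _magic: &[u8]) -> io::Result<Self> {
        unimplemented!("sketch only")
    }

    /// `dump`: iterate the stored changesets and return their aggregate.
    /// This is what `WalletPersister::initialize` could call, since no file
    /// path is in scope there -- only the already-constructed store.
    pub fn dump(&mut self) -> io::Result<C> {
        unimplemented!("sketch only")
    }

    /// `append`: rename of `append_changeset`.
    pub fn append(&mut self, _changeset: &C) -> io::Result<()> {
        unimplemented!("sketch only")
    }

    /// `load_or_create`: `Store::load` if the file exists, otherwise `Store::create`.
    pub fn load_or_create(_path: &Path, _magic: &[u8]) -> io::Result<Self> {
        unimplemented!("sketch only")
    }
}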

@nymius (Contributor Author)

Working on this I realized we can have both: (C, Store) as the return type of Store::load, and a Store::dump method to call in <bdk_file_store::Store<ChangeSet> as WalletPersister>::initialize, so I will keep both.

@notmandatory added the discussion label ("There's still a discussion ongoing") on Oct 15, 2024
@notmandatory added this to the 1.0.0-beta milestone on Oct 15, 2024
@notmandatory added the bug ("Something isn't working") and api ("A breaking API change") labels on Oct 15, 2024
@nymius mentioned this pull request on Nov 13, 2024
@nymius (Contributor Author) commented Nov 13, 2024

Overridden by #1684

@nymius closed this on Nov 13, 2024
@notmandatory removed this from the 1.0.0-beta milestone on Nov 15, 2024
Labels
api (A breaking API change), bug (Something isn't working), discussion (There's still a discussion ongoing)

Projects
Status: Done

Development
Successfully merging this pull request may close these issues: Appending changesets to Store after opening causes overwriting

5 participants