Swing-store export data outside of genesis file #9549
Conversation
@mhofman, this is a great direction and will give us faster imports and exports. Thanks for working on this.
There are no unit tests covering genesis import/export, so it'd be a significant effort to add those now. I'll try to revive the integration test that existed, but given this feature is not used for normal operations, I'd prefer not to block on tests.
Force-pushed from b7d9083 to 6ef622d
@toliaqat I added a mainfork integration test. See https://github.com/Agoric/agoric-sdk/actions/runs/9621035132/job/26540251077?pr=9549#step:9:2017 PTAL
I'm not qualified to review the implementation, but I've thrown in some thoughts on the protocol. I'm merely -0 on hashing a derivative of the data, not -1: if you can't find any other way to do it, and we believe this isn't too hard to change in the future (genesis export is only for debugging right now, we can revisit it before considering relying upon it for an agoric4 replacement chain), then I'm ok with this protocol being landed.
```go
if hashParts[0] != "sha256" {
	panic(fmt.Errorf("invalid swing-store export data hash algorithm %s, expected sha256", hashParts[0]))
}
sha256Hash, err := hex.DecodeString(hashParts[1])
```
nit: I'd name this `expectedSHA256Hash` or `expectedHash`
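For illustration, the parsing and check quoted above could be sketched end to end; the `parseExportDataHash` helper name and the error-returning (rather than panicking) style are mine, not the PR's:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// parseExportDataHash is a hypothetical helper mirroring the quoted checks:
// split an "<algorithm>:<hex digest>" string and accept only sha256.
func parseExportDataHash(s string) ([]byte, error) {
	hashParts := strings.SplitN(s, ":", 2)
	if len(hashParts) != 2 {
		return nil, fmt.Errorf("invalid swing-store export data hash %s", s)
	}
	if hashParts[0] != "sha256" {
		return nil, fmt.Errorf("invalid swing-store export data hash algorithm %s, expected sha256", hashParts[0])
	}
	expectedHash, err := hex.DecodeString(hashParts[1])
	if err != nil {
		return nil, err
	}
	return expectedHash, nil
}

func main() {
	h, err := parseExportDataHash("sha256:00ff")
	fmt.Println(h, err) // [0 255] <nil>
}
```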
```go
swingStore.Set(key, []byte(entry.StringValue()))
```

```go
return encoder.Encode(entry)
```
I don't know that I'm reading this correctly, but it looks like something else is reading and parsing an artifact to produce `entry.Key()` and maybe-nil values, then this function JSON-encodes the entry we got, and then we hash the resulting JSON-encoded bytes. Is that about right?

I'd prefer that we hash the original, and then do whatever parsing we need to use the data. This approach is vulnerable to disagreements between the first parser and the JSON encoder. I can't name any cases that would actually trigger it, but say `json.NewEncoder` does many-to-one Unicode normalization of some codepoint X into bytes Z1Z2, and also serializes codepoint Y into the same Z1Z2. Then this could be tricked into passing Y into the swingstore data when the creator meant (and hashed) X.
I went back and forth on this until I realized that there is no canonical JSON encoding. As such, hashing the original file isn't quite right. Of course, even with a specified JSON encoding of each entry, there is no canonical order in which the entries should appear. In theory we should use an IAVL tree of the data to have a canonical hash :)

Basing the hash on the re-encoding to JSON by the golang encoder is thus motivated both by practicality (it fits better in the abstraction already in place of "export data reader") and by slightly increasing the canonical status of the hash.

That said, I was also careful that the hash computed this way is exactly equal to `sha256sum` of the file created by golang serialization.
> many-to-one Unicode normalization of some codepoint X into bytes Z1Z2, and also serializes codepoint Y into the same Z1Z2. Then this could be tricked into passing Y into the swingstore data when the creator meant (and hashed) X.

As mentioned above, it's actually the opposite: there are multiple ways to escape the same string value in JSON encodings, and here we're settling on the one that golang implements.
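A minimal sketch of that point, assuming an encoder configured like the export's (with `SetEscapeHTML(false)`): two different JSON escapes of the same string value re-encode to one representation.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// reencode decodes one JSON string value and re-encodes it with the Go
// encoder, which settles on a single escape form (plus a trailing newline).
func reencode(raw string) string {
	var s string
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false) // match the export encoder configuration
	if err := enc.Encode(s); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// `"\u0041"` and `"A"` are different JSON texts for the same string value.
	fmt.Println(reencode(`"\u0041"`) == reencode(`"A"`)) // true
}
```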
> there is no canonical JSON encoding

> there is no canonical way in which the entries should appear
Yeah, that's the clue that we can't/shouldn't rely upon "canonical" encoding at all. In general, the creator of the hashed/signed artifact can encode it any way they want: doesn't matter, nobody else will be attempting to encode it in the same way. What matters is that everybody's decoder will treat the hashed/signed bytes in the same way.
If we find ourselves wanting a canonical encoder, that might mean we're doing encoding on the decoding/verification side, and that's where vulnerabilities live.
```go
return nil, nil
// ...
exportDataIterator := eventHandler.swingStore.Iterator(nil, nil)
kvReader := agoric.NewKVIteratorReader(exportDataIterator)
eventHandler.hasher.Reset()
```
Why does the hasher need to be reset here? Is it used more than once? I want to make sure we don't have a bug where we only actually hash the last bit of the data, because we reset it on every loop or something.
Technically `GetExportDataReader` could be called more than once
oh, I had assumed that the hasher was being created one-per-export, but maybe the `eventHandler` that it lives in is allocated statically, one-per-process?
The artifact provided is passed to `WriteSwingStoreExportToDirectory`. That function is well behaved and will only read the stream export data once, but technically it could do it multiple times.

I agree the layering is weird, and in a perfect world I'd decouple the generation of the hash from the consumer that writes the data to disk; however, I wanted to avoid that inefficiency. I suppose I could break boundaries, grab the iterator once in the outer scope, assert we only wrap the iterator once, and at the end verify that the iterator has been fully consumed.
```go
encoder.SetEscapeHTML(false)
// ...
return agoric.NewKVHookingReader(kvReader, func(entry agoric.KVEntry) error {
	return encoder.Encode(entry)
```
Hm, I can see that the "what do we hash" question is tied into the "what does golang encode" question. Giving golang the responsibility for framing/unframing, and then JSON-encoding the key/value pairs for hashing, is safe against concatenation/framing-confusion attacks, even if it's vulnerable to some hypothetical non-injectiveness in JSON.

If we were to, e.g., skip JSON, and just feed the key into the hasher, then feed the value into the hasher, then repeat for each pair: that would be vulnerable to framing confusion (`{ ab: 'cd', ef: 'gh' }` would hash the same as `{ a: 'b', c: 'de', fg: 'h' }`). Without some kind of encoding/escaping, there's no way to avoid that (simply using a newline as a framing separator could still be confused with newlines in the keys or values).
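The framing confusion described above is easy to demonstrate with a naive, unframed hash (illustrative only; nothing in the PR does this):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// naiveDigest hashes keys and values with no escaping or framing,
// exactly the scheme the comment warns against.
func naiveDigest(pairs [][2]string) [32]byte {
	h := sha256.New()
	for _, kv := range pairs {
		h.Write([]byte(kv[0]))
		h.Write([]byte(kv[1]))
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	a := naiveDigest([][2]string{{"ab", "cd"}, {"ef", "gh"}})
	b := naiveDigest([][2]string{{"a", "b"}, {"c", "de"}, {"fg", "h"}})
	fmt.Println(a == b) // true: two different stores, one digest
}
```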
One option would be to JSON-encode the key/value pairs first, then have the golang codec (struct? proto?) encode the resulting strings (one string per entry, not one for the key and a second for the value). Then the hasher could be fed those `JSON.stringify([key,value])` strings, which would be safely escaped, and unambiguously framed. It would look weird, eschewing the built-in encoding support in favor of manual JSON, but it would meet my maxim of "hash (sign) what you say, only use what you hash (sign)". Down with "canonical encoding" in secure protocols!
> Then the hasher could be fed those `JSON.stringify([key,value])` strings, which would be safely escaped, and unambiguously framed.

That's exactly what this does. Bonus is that the hash ends up strictly equal to hashing the resulting file, which is a concatenation of all the JSON-stringified entries terminated by newlines.
Just to make sure I was clear: you wouldn't extract `key` and `value` from the golang-encoded data and then feed `JSON.stringify([key,value])` to the hasher: that's the vulnerable-to-JSON-weirdness order. Rather, you'd feed `JSON.stringify([key,value])` to the golang encoder, and have golang do whatever additional/native encoding it wants on that single string per entry. Then, on the decoding side, you'd first hash that single string per entry, then do `JSON.parse` to get the key/value that you need to act upon. The rule could also be summarized as "no encoding during decode".

But I hear you, we're fighting against the golang codec pattern, which has already frustrated me in cosmos txn generation/signing. And we're dependent upon their codec being possible to use securely, which isn't a given (the confusion between `nil` and an empty list comes to mind).
The missing bit is that the incoming stream of key value pairs may not be JSON encoded in the first place. It happens that's how we currently feed golang when dealing with export/import, but that may not always be the case.
Please address the `fix` comment.

Please choose to address the `fix or not` comments, or comment and briefly say why it's not useful/necessary/a priority at this time.
```go
if len(swingStoreExportData) > 0 {
	for _, entry := range swingStoreExportData {
		swingStore.Set([]byte(entry.Key), []byte(entry.Value))
```
fix or not (perf concern): This is reallocating every one of the keys and values to convert from `string` to `[]byte`. Because genesis export/import is RAM intensive, this might be worth fixing with either `unsafe` or `sync.Pool`. I leave it to you to decide if this is worth it.

https://josestg.medium.com/140x-faster-string-to-byte-and-byte-to-string-conversions-with-zero-allocation-in-go-200b4d7105fc has an example with a 0-alloc conversion:
```go
func StringToBytes(s string) []byte {
	p := unsafe.StringData(s)
	b := unsafe.Slice(p, len(s))
	return b
}
```
Decided to leave as-is for now given the lack of positive impact in rudimentary perf tests.
```go
// Read yields the next KVEntry from the source reader
// Implements KVEntryReader
func (hr kvHookingReader) Read() (next KVEntry, err error) {
```
fix or not: Consider using `err = fmt.Errorf("...%w...", err, ...)` or `sdkioerrors.Wrapf` so the code can explicitly message that a hook failed while not losing info about the underlying error.
Our `agoric-sdk/.github/workflows/golangci-lint.yml` explicitly warns `Found %w in ./golang/cosmos; please use %s instead.` in order to discourage making validator-specific source file locations in stack traces part of consensus. `Wrapf` without `%w` is safe because it doesn't actually stringify the stack trace of the error, but still exposes it if there's a panic, much like Hardened JS's disciplined stack trace handling for `Error` objects.
I'm confused about this feedback. The hook is only called if there was no error in the first place, so there are no errors to wrap.
```go
encoder.SetEscapeHTML(false)
// ...
return agoric.NewKVHookingReader(kvReader, func(entry agoric.KVEntry) error {
	key := []byte(entry.Key())
```
fix or not (perf concern): Once again using `[]byte()` in two spots in this function, which doubles the allocations.
LGTM conditional on satisfactorily addressing other reviewers' comments.
Force-pushed from b7df1d0 to 17a5374
I unlayered this from #9568 and will merge as-is given all other feedback has been addressed
Rebase todo:

```
# Branch fix-vow-make-watch-when-more-robust-against-loops-and-hangs-9487-
label base-fix-vow-make-watch-when-more-robust-against-loops-and-hangs-9487-
pick bcecf52 test(vow): add test of more vow upgrade scenarios
pick d7135b2 test: switch vow test to run under xs for metering
pick 99fccca test(vow): add test for resolving vow to external promise
pick 6d3f88c test(vow): add test for vow based infinite vat ping pong
pick c78ff0e test(vow): check vow consumers for busy loops or hangs
pick 3c63cba fix(vow): prevent loops and hangs from watching promises
pick 188c810 chore(vat-data): remove the deprecated `@agoric/vat-data/vow.js`
pick 44a6d16 fix(vow): allow resolving vow to external promise
label fix-vow-make-watch-when-more-robust-against-loops-and-hangs-9487-
reset base-fix-vow-make-watch-when-more-robust-against-loops-and-hangs-9487-
merge -C 4fca040 fix-vow-make-watch-when-more-robust-against-loops-and-hangs-9487- # fix(vow): make watch/when more robust against loops and hangs (#9487)

# Branch ci-mergify-strip-merge-commit-HTML-comments-9499-
label base-ci-mergify-strip-merge-commit-HTML-comments-9499-
pick 63e21ab ci(mergify): strip merge commit HTML comments
label ci-mergify-strip-merge-commit-HTML-comments-9499-
reset base-ci-mergify-strip-merge-commit-HTML-comments-9499-
merge -C 7b93671 ci-mergify-strip-merge-commit-HTML-comments-9499- # ci(mergify): strip merge commit HTML comments (#9499)

# Pull request #9506
pick 249a5d4 fix(SwingSet): Undo deviceTools behavioral change from #9153 (#9506)

# Pull request #9507
pick a19a964 fix(liveslots): promise watcher to cause unhandled rejection if no handler (#9507)

# Branch feat-make-vat-localchain-resumable-9488-
label base-feat-make-vat-localchain-resumable-9488-
pick 76c17c6 feat: make vat-localchain resumable
pick 40ccba1 fix(vow): correct the typing of `unwrap`
pick 90e062c fix(localchain): work around TypeScript mapped tuple bug
pick 3027adf fix(network): use new `ERef` and `FarRef`
label feat-make-vat-localchain-resumable-9488-
reset base-feat-make-vat-localchain-resumable-9488-
merge -C 5856dc0 feat-make-vat-localchain-resumable-9488- # feat: make vat-localchain resumable (#9488)

# Branch ci-mergify-clarify-queue-conditions-and-merge-conditions-9510-
label base-ci-mergify-clarify-queue-conditions-and-merge-conditions-9510-
pick 7247bd9 ci(mergify): clarify `queue_conditions` and `merge_conditions`
label ci-mergify-clarify-queue-conditions-and-merge-conditions-9510-
reset base-ci-mergify-clarify-queue-conditions-and-merge-conditions-9510-
merge -C 30e56ae ci-mergify-clarify-queue-conditions-and-merge-conditions-9510- # ci(mergify): clarify `queue_conditions` and `merge_conditions` (#9510)

# Branch fix-liveslots-cache-delete-does-not-return-a-useful-value-9509-
label base-fix-liveslots-cache-delete-does-not-return-a-useful-value-9509-
pick 42ea8a3 fix(liveslots): cache.delete() does not return a useful value
label fix-liveslots-cache-delete-does-not-return-a-useful-value-9509-
reset base-fix-liveslots-cache-delete-does-not-return-a-useful-value-9509-
merge -C a2e54e1 fix-liveslots-cache-delete-does-not-return-a-useful-value-9509- # fix(liveslots): cache.delete() does not return a useful value (#9509)

# Branch chore-swingset-re-enable-test-of-unrecognizable-orphan-cleanup-8694-
label base-chore-swingset-re-enable-test-of-unrecognizable-orphan-cleanup-8694-
pick 9930bd3 chore(swingset): re-enable test of unrecognizable orphan cleanup
label chore-swingset-re-enable-test-of-unrecognizable-orphan-cleanup-8694-
reset base-chore-swingset-re-enable-test-of-unrecognizable-orphan-cleanup-8694-
merge -C bc53ef7 chore-swingset-re-enable-test-of-unrecognizable-orphan-cleanup-8694- # chore(swingset): re-enable test of unrecognizable orphan cleanup (#8694)

# Pull request #9508
pick 513adc9 refactor(internal): move async helpers using AggregateError to node (#9508)

# Branch report-bundle-sizing-in-agoric-run-9503-
label base-report-bundle-sizing-in-agoric-run-9503-
pick 68af59c refactor: inline addRunOptions
pick a0115ed feat: writeCoreEval returns plan
pick bd0edcb feat: stat-bundle and stat-plan scripts
pick 0405202 feat: agoric run --verbose
pick 22b43da feat(stat-bundle): show CLI to explode the bundle
label report-bundle-sizing-in-agoric-run-9503-
reset base-report-bundle-sizing-in-agoric-run-9503-
merge -C 7b30169 report-bundle-sizing-in-agoric-run-9503- # report bundle sizing in agoric run (#9503)

# Branch ci-test-boot-split-up-test-jobs-via-AVA-recipe-9511-
label base-ci-test-boot-split-up-test-jobs-via-AVA-recipe-9511-
pick 5f3c1d1 test(boot): use a single bundle directory for all tests
pick 50229bd ci(all-packages): split tests according to AVA recipe
label ci-test-boot-split-up-test-jobs-via-AVA-recipe-9511-
reset base-ci-test-boot-split-up-test-jobs-via-AVA-recipe-9511-
merge -C 5375019 ci-test-boot-split-up-test-jobs-via-AVA-recipe-9511- # ci(test-boot): split up test jobs via AVA recipe (#9511)

# Pull request #9514
pick f908f89 fix: endow with original unstructured `assert` (#9514)

# Pull request #9535
pick 989aa19 fix(swingset): log vat termination and upgrade better (#9535)

# Branch types-zoe-setTestJig-param-type-optional-9533-
label base-types-zoe-setTestJig-param-type-optional-9533-
pick 426a3be types(zoe): setTestJig param type optional
label types-zoe-setTestJig-param-type-optional-9533-
reset base-types-zoe-setTestJig-param-type-optional-9533-
merge -C bf9f03b types-zoe-setTestJig-param-type-optional-9533- # types(zoe): setTestJig param type optional (#9533)

# Branch adopt-TypeScript-5-5-9476-
label base-adopt-TypeScript-5-5-9476-
pick 381b6a8 chore(deps): bump Typescript to 5.5 release
label adopt-TypeScript-5-5-9476-
reset base-adopt-TypeScript-5-5-9476-
merge -C 0cfea88 adopt-TypeScript-5-5-9476- # adopt TypeScript 5.5 (#9476)

# Branch retry-flaky-agoric-cli-integration-test-9550-
label base-retry-flaky-agoric-cli-integration-test-9550-
pick 2a68510 ci: retry agoric-cli integration test
label retry-flaky-agoric-cli-integration-test-9550-
reset base-retry-flaky-agoric-cli-integration-test-9550-
merge -C c5c52ec retry-flaky-agoric-cli-integration-test-9550- # retry flaky agoric-cli integration test (#9550)

# Pull Request #9556
pick 0af876f fix(vow): watcher args instead of context (#9556)

# Pull Request #9561
pick a4f86eb fix(vow): handle resolution loops in vows (#9561)

# Branch Restore-a3p-tests-9557-
label base-Restore-a3p-tests-9557-
pick d36382d chore(a3p): restore localchain test
pick 5ff628e Revert "test: drop or clean-up old Tests"
pick b5cf8bd fix(localchain): `callWhen`s return `PromiseVow`
label Restore-a3p-tests-9557-
reset base-Restore-a3p-tests-9557-
merge -C c65915e Restore-a3p-tests-9557- # Restore a3p tests (#9557)

# Pull Request #9559
pick 6073b2b fix(agoric): convey tx opts to `agoric wallet` and subcommands (#9559)

# Branch explicit-heapVowTools-9548-
label base-explicit-heapVowTools-9548-
pick 100de68 feat!: export heapVowTools
pick 8cb1ee7 refactor: use heapVowTools import
pick 0ac6774 docs: vow vat utils
pick 9128f27 feat: export heapVowE
pick 3b0c8c1 refactor: use heapVowE
pick 9b84bfa BREAKING CHANGE: drop V export
pick 6623af5 chore(types): concessions to prepack
label explicit-heapVowTools-9548-
reset base-explicit-heapVowTools-9548-
merge -C 4440ce1 explicit-heapVowTools-9548- # explicit heapVowTools (#9548)

# Pull Request #9564
pick 44926a7 fix(bn-patch): fix bad html evasion (#9564)

# Branch Fix-upgrade-behaviors-9526-
label base-Fix-upgrade-behaviors-9526-
pick ef1e0a2 feat: Upgrade Zoe
pick e4cc97c Revert "fix(a3p-integration): workaround zoe issues"
pick 84dd229 feat: repair KREAd contract on zoe upgrade
pick cb77160 test: validate KREAd character purchase
pick e1d961e test: move vault upgrade from test to use phase (#9536)
label Fix-upgrade-behaviors-9526-
reset base-Fix-upgrade-behaviors-9526-
merge -C 8d05faf Fix-upgrade-behaviors-9526- # Fix upgrade behaviors (#9526)

# Branch Support-for-snapshots-export-command-9563-
label base-Support-for-snapshots-export-command-9563-
pick 2a3976e refactor(cosmos): use DefaultBaseappOptions for newApp
pick 84208e9 fix(cosmos): use dedicated dedicate app creator for non start commands
pick 8c1a62d chore(cosmos): refactor cosmos command extension
pick 4386f8e feat(cosmos): support snapshot export
pick 2dabb52 test(a3p): add test for snapshots export and restore
label Support-for-snapshots-export-command-9563-
reset base-Support-for-snapshots-export-command-9563-
merge -C 309c7e1 Support-for-snapshots-export-command-9563- # Support for `snapshots export` command (#9563)

# Branch Swing-store-export-data-outside-of-genesis-file-9549-
label base-Swing-store-export-data-outside-of-genesis-file-9549-
pick f1eacbe fix(x/swingset): handle defer errors on export write
pick 496a430 feat(cosmos): add hooking kv reader
pick f476bd5 feat(cosmos): separate swing-store export data from genesis file
pick 17a5374 test(a3p): add genesis fork acceptance test
label Swing-store-export-data-outside-of-genesis-file-9549-
reset base-Swing-store-export-data-outside-of-genesis-file-9549-
merge -C 3aa5d66 Swing-store-export-data-outside-of-genesis-file-9549- # Swing-store export data outside of genesis file (#9549)

# Branch Remove-scaled-price-authority-upgrade-9585-
label base-Remove-scaled-price-authority-upgrade-9585-
pick bce49e3 test: add test during upgradeVaults; vaults detect new prices
pick 88f6500 test: repair test by dropping upgrade of scaledPriceAuthorities
label Remove-scaled-price-authority-upgrade-9585-
reset base-Remove-scaled-price-authority-upgrade-9585-
merge -C 8376991 Remove-scaled-price-authority-upgrade-9585- # Remove scaled price authority upgrade (#9585)

# Branch feat-make-software-upgrade-coreProposals-conditional-on-upgrade-plan-name-9575-
label base-feat-make-software-upgrade-coreProposals-conditional-on-upgrade-plan-name-9575-
pick 95174a2 feat(builders): non-ambient `strictPriceFeedProposalBuilder` in `priceFeedSupport.js`
pick 5cc190d feat(app): establish mechanism for adding core proposals by `upgradePlan.name`
pick 52f02b7 fix(builders): use proper `oracleBrand` subkey case
pick ddc072d chore(cosmos): extract `app/upgrade.go`
pick b3182a4 chore: fix error handling of upgrade vaults proposal
pick ea568a2 fix: retry upgrade vaults price quote
label feat-make-software-upgrade-coreProposals-conditional-on-upgrade-plan-name-9575-
reset base-feat-make-software-upgrade-coreProposals-conditional-on-upgrade-plan-name-9575-
merge -C cbe061c feat-make-software-upgrade-coreProposals-conditional-on-upgrade-plan-name-9575- # feat: make software upgrade `coreProposals` conditional on upgrade plan name (#9575)
```

This is followed by a commit to remove the `orchestration` and `async-flow` packages from the release, as these are not baked in yet and are not deployed anyway.
Closes #9567
Description
Stores the swing-store export data outside of the genesis file, and only includes a hash of it in genesis for validation.
This is the bulk of the data in the genesis file (on mainnet, 6GB vs 300MB for the rest), and it serves little purpose to keep it in there. Furthermore, goleveldb chokes when cosmos attempts to store the genesis file inside the DB (it has a 4GB document limit).
Security Considerations
None; the data remains validated.
Scaling Considerations
Reduces memory usage on genesis export / import. Furthermore we avoid iterating over the data twice on import, which is painfully slow for IAVL.
Documentation Considerations
None
Testing Considerations
Added a3p acceptance test
Manually tested as follows:

```
cd packages/cosmic-swingset
make scenario2-setup
make scenario2-run-chain
# wait until there are blocks, then kill
mkdir t1/n0/export
agd --home t1/n0 export --export-dir t1/n0/export
agd --home t1/n0 tendermint unsafe-reset-all
mv t1/n0/export/* t1/n0/config
make scenario2-run-chain
# verify blocks are being produced after genesis restart
```
Upgrade Considerations
There is a compatibility mode to load genesis files with export data embedded.