fix(cli): object add-link: do not allow blocks over BS limit #8414
Placement should be ok.

Tested with this one-liner that patches until error – works as expected 👍

```
$ echo QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn>cid; while ipfs object patch $(<cid) add-link $(<cid).jpg Qmayz4F4UzqcAMitTzU4zCSckDofvxstDuj3y7ajsLLEVs 2>&1 > cid; do; <cid; done
QmcPUbyiR2g9RbTcLQAMHq5S9Fh3gBoXJqso2yK8awEMDC
...
QmTw19KYw2c8w5756s95ccKpFoFhbP6oXa4Zg5cLbLD79N
Error: object API does not support HAMT-sharding. To create big directories, please use the files API (MFS)
```
Remaining work:

- add the same check to `rm-link`, `set-data` and `append-data`
  - better to have the same behavior everywhere and fail hard, and add an override flag (below)
- add a `--force` flag to disable the block size check on impacted commands
  - we need to have an escape hatch to unblock people who need the old behavior, otherwise some collabs won't be able to upgrade
  - rationale: we want to apply soft pressure, but if someone wants to delay deprecation of `ipfs object` APIs in their codebase, they should have the ability to do so without the need for forking go-ipfs
- add sharness tests (fine to reuse my one-liner)
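The check-plus-override flow described above could look like the following minimal sketch. All names here are hypothetical (the real flag wiring lives in go-ipfs's command definitions); this only illustrates the intended shape: reject oversized results by default, let the flag opt back into the old behavior.

```go
package main

import (
	"errors"
	"fmt"
)

// MaxBlockSize mirrors the ~1 MiB limit above which bitswap
// will not transfer a block to other peers.
const MaxBlockSize = 1024 * 1024

// ErrBlockTooBig is a hypothetical sentinel error for this sketch.
var ErrBlockTooBig = errors.New("produced block is over 1 MiB and could not be exchanged with other peers")

// checkBlockSize rejects a modified node whose serialized size
// exceeds the limit, unless the user passed the escape-hatch flag.
func checkBlockSize(size int, forceFlag bool) error {
	if forceFlag {
		// Explicit opt-in to the old (pre-check) behavior.
		return nil
	}
	if size > MaxBlockSize {
		return fmt.Errorf("%w (%d > %d); use the override flag to ignore", ErrBlockTooBig, size, MaxBlockSize)
	}
	return nil
}

func main() {
	fmt.Println(checkBlockSize(2*1024*1024, false) != nil) // true: rejected by default
	fmt.Println(checkBlockSize(2*1024*1024, true))         // <nil>: override accepted
}
```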
core/commands/object/patch.go

Outdated

```go
	if err != nil {
		return err
	}
	if modifiedNodeSize > 1024 * 1024 {
```
My guess would be that returning an error wouldn't suffice. The node is already too big and this has to be reverted. But reverting it here is hard, because the reversal can itself cause another error. So I would think this must be rejected at the ObjectAPI level.
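The check-before-mutate shape suggested here can be sketched with a toy node type. Everything below is hypothetical (real go-ipfs nodes implement `ipld.Node`, and the actual `AddLink` path is more involved); the point is only that validating the would-be size before writing means there is nothing to revert on failure.

```go
package main

import "fmt"

const maxBlockSize = 1024 * 1024

// node is a toy stand-in for a dag-pb node.
type node struct {
	data  []byte
	links []link
}

// link carries an approximate serialized-size contribution,
// enough to estimate the post-mutation block size.
type link struct {
	name string
	size int
}

// size approximates the serialized size: payload plus link entries.
func (n *node) size() int {
	s := len(n.data)
	for _, l := range n.links {
		s += l.size
	}
	return s
}

// addLinkChecked rejects the mutation *before* touching the node
// (or any store), so a failed call leaves no state to revert.
func addLinkChecked(n *node, l link) error {
	if n.size()+l.size > maxBlockSize {
		return fmt.Errorf("adding link %q would produce a %d-byte block over the %d limit",
			l.name, n.size()+l.size, maxBlockSize)
	}
	n.links = append(n.links, l)
	return nil
}

func main() {
	n := &node{data: make([]byte, maxBlockSize-10)}
	err := addLinkChecked(n, link{name: "big.jpg", size: 50})
	fmt.Println(err != nil, len(n.links)) // true 0: node left untouched
}
```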
@petar I see two places where we could potentially put this check deeper in the stack:

- `(api *ObjectAPI) AddLink`: after the `InsertNodeAtPath` call (which has the actual 'insertion' logic), using a check similar to the one placed at the CLI level (fetch the modified node and check its size). We might also want to evaluate purging the modified copy from the repo.
- (deeper) in the `go-merkledag` repo, which executes the actual insertion logic: it has the advantage that it already has a permanent/temporary store division where rejecting the modified node wouldn't impact the go-ipfs repo. The downside is that we would be injecting an arbitrary restriction into a library based on the limitation of another (bitswap) just because both are consumed by the same go-ipfs logic.

Either is doable so I'll leave to you the decision of where to execute.
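The permanent/temporary store division that makes the second option attractive can be illustrated with a toy two-tier store. This is purely a sketch with made-up names (`go-merkledag`'s actual batching/buffering API differs): a rejected block is discarded from staging and never reaches the permanent repo.

```go
package main

import (
	"errors"
	"fmt"
)

const maxBlockSize = 1024 * 1024

// store is a toy two-tier store mimicking a permanent/temporary
// division: mutations land in staging first, then get promoted.
type store struct {
	staging   map[string][]byte
	permanent map[string][]byte
}

func newStore() *store {
	return &store{
		staging:   map[string][]byte{},
		permanent: map[string][]byte{},
	}
}

// putChecked stages the block, checks its size, and only then
// promotes it; on rejection the permanent store is untouched.
func (s *store) putChecked(key string, block []byte) error {
	s.staging[key] = block
	if len(block) > maxBlockSize {
		delete(s.staging, key) // cheap to discard: never hit the repo
		return errors.New("block over bitswap limit, rejected before commit")
	}
	s.permanent[key] = block
	delete(s.staging, key)
	return nil
}

func main() {
	s := newStore()
	err := s.putChecked("big", make([]byte, maxBlockSize+1))
	fmt.Println(err != nil, len(s.permanent)) // true 0: nothing committed
	_ = s.putChecked("ok", []byte("hello"))
	fmt.Println(len(s.permanent)) // 1
}
```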
Per 2021-09-24 verbal, this is a bigger problem and we're going to take care of the immediate issue rather than put a check in deeper, like the blockstore. We believe the check is happening at the right level, but we should incorporate Petar's feedback to avoid mutating the datastore in the error case. Check to see if we can add a method to the object API that checks the size and rejects the node if it's too big, before anything is mutated. If that's a huge pain, we can leave it as-is, since GC will take care of unnecessarily added data. @petar can provide additional comments.
LGTM. Let's keep it small and merge it as-is (protect people from creating non-transferrable blocks asap).
Made small changes:

- renamed the override flag to `--allow-big-block` and made the error message more explicit about what failed and why
- added separate tests for the default and override behavior
@schomatis is this still a draft, or ready for merge? Confirmed this is good to merge to solve the burning problem ASAP.
(force-pushed from 5189eeb to 77f420e)
Trying not to be too invasive, as this is just an ad-hoc check for a deprecated command, I have added the check at the CLI level; otherwise I can go deeper in the `ObjectAPI`. If the placement of the check is correct I will add a sharness test. If instead the check should go deeper in the stack, then the test should likely be written in Go. Waiting on an initial review to decide the course.
Optional additional proposals to the current PR:

- add the same check to the `set-data` and `append-data` commands, which can potentially go over the BS limit just as well
- add the check to `rm-link`, but only as a warning to flag if the current block being manipulated is already over the limit

Fixes #7421.