Fix progress bar for "get" command #1279
Conversation
```go
		return nil, 0, err
	}

	length := uint64(ns.CumulativeSize)
```
i think the cumulative size will be the total ipfs size, not the total unixfs size. (meaning, the ipfs size has some overhead). Maybe this is fine. (and maybe this is why the add / cat progressbar sometimes goes past 100%...)
cc @whyrusleeping ?
afaik add progressbar doesn't use ipfs size, so it's fine. I don't know which one dagreader uses.
Oh it is more correct to use ipfs size? because of the >100%?
yeah, the ipfs size is the real amount of data moved. the unixfs size may be smaller, but could even be larger (say if i have 1000 links to the same, huge object). whatever we pick we should use it correctly (i.e. the >100% i think is because they're perhaps mixed)
I'm very excited to have this merged-- thank you for picking this up @rht

I think using the CumulativeSize from the node stat should work well. its a better representation of the data involved.

@rht sorry for delay here o/
force-pushed from 2d034f8 to 1170fd2
'echo "!"' doesn't escape the '!'
force-pushed from 5e269f9 to bf7d735
Sorry, will fix this as soon as I can.
@rht any luck with this? would love to have it in-- get is painful without it
force-pushed from 18e8a83 to 53f80c3

force-pushed from 703d6de to 97ed6e0
Is any help from me needed here?

Can't figure out the source of unexpected EOF here: https://travis-ci.org/ipfs/go-ipfs/jobs/66950969#L2886, https://travis-ci.org/ipfs/go-ipfs/jobs/66950969#L2918. Can't reproduce locally (sharness test passes just fine)

@rht an update here? what's needed?
force-pushed from f50b7d4 to 6086258

force-pushed from 30b963a to 65963e0
License: MIT Signed-off-by: rht <rhtbot@gmail.com>
up until now there has been a very annoying bug with get: we would get halting behavior. I'm not 100% sure this commit fixes it, but it should. It certainly fixes others found in the process of digging into the get / tar extractor code. (wish we could repro the bug reliably enough to make a test case).

This is a much cleaner tar writer. the ad-hoc, error-prone synch for the tar reader is gone (which i believe was incorrect). it is replaced with a simple pipe and bufio. The tar logic is now in tar.Writer, which writes unixfs dag nodes into a tar archive (no need for synch here). And get's reader is constructed with DagArchive, which sets up the pipe + bufio.

NOTE: this commit also changes the behavior of `get`: When retrieving a single file, if the file already existed, get would fail. this emulated the default behavior of wget, which (without opts) does not overwrite if the file is there. This change makes get overwrite the file if it is available locally. This seems more intuitive to me as expected from a unix tool-- though perhaps it should be discussed more before adopting.

Everything seems to work fine, and i have not been able to reproduce the get halt bug.

License: MIT Signed-off-by: Juan Batiz-Benet <juan@benet.ai>
@rht so this is fixed, then?
Yes it is. My only concern here is when calculating the tar size, this is needed:

```go
if err := proto.Unmarshal(nd.Data, pb); err != nil {
	return 0, err
}
```

Does this download the blocks underneath?
@rht the
ok it shouldn't be confusing to use uncompressed size in .tar.gz and .gz (but the progressbar reader has to be slipped in between tarwriter and gzwriter)?
License: MIT Signed-off-by: rht <rhtbot@gmail.com>
force-pushed from 65963e0 to a1cebe6
@jbenet @whyrusleeping it appears that Why not just
@rht kill it!
I believe it is also used to download tarballs via the API. That way we can curl > .tar
Nope, if you
@rht i was using it in ipfs shell until today when i refactored and pulled the package out here: https://github.com/whyrusleeping/tar-utils |
@whyrusleeping it is used because the uncompressed output is still tar-formatted? Though if #1558 happens, |
Ok this is not more complex.
#736
Should look like this
@whyrusleeping I use NodeState instead of pbdata. Is this more efficient?