
--read-timeout option for ipfs get #5541

Open
magik6k opened this issue Sep 28, 2018 · 15 comments
Labels
exp/novice — Someone with a little familiarity can pick up
help wanted — Seeking public contribution on this issue
kind/enhancement — A net-new feature or improvement to an existing feature

Comments

@magik6k
Member

magik6k commented Sep 28, 2018

It would be useful for ipfs get (and possibly other related commands like ipfs cat) to have a --read-timeout option which would interrupt the command if no new data was read for a set amount of time.

Mentioned in #5537 (comment)

@magik6k magik6k added the kind/enhancement, help wanted, and exp/novice labels on Sep 28, 2018
@overbool
Contributor

@magik6k Is there any difference between the global --timeout option and --read-timeout?

@magik6k
Member Author

magik6k commented Sep 28, 2018

Yes, the difference is that --timeout will not work well for large files (which take longer than the specified timeout to download). --read-timeout would count the time since the last byte was read (assuming the command is blocked on Read).

@overbool
Contributor

@magik6k Got it, can I help fix it?

@Stebalien
Member

Personally, I'd try to use the word stall somewhere to make it clear what this does.


@alanshaw does this flag sound reasonable? It seems like something js-ipfs should implement as well.

@overbool
Contributor

@magik6k I found the Read function that ipfs get goes through at https://github.com/ipfs/go-ipfs-cmds/blob/master/http/responseemitter.go#L287. Should we add a global --read-timeout option and then fetch its value with req.Options["read-timeout"].(int) there?

@JazzTp

JazzTp commented Oct 29, 2018

... the difference is that --timeout will not work well for large files (which take longer than the specified timeout to download). --read-timeout would count the time since last read byte (assuming that it's blocked on Read)

After reading through various issues and docs and trying a few test calls, I can't figure out how to use either option.

--read-timeout is actually the one I'd like to be able to use (provided it also works when there is no resolution at all, i.e. at 0 bytes downloaded; otherwise it would probably need to be combined with something else).

There are apparently no timeout fields in the default config file either; RecordLifetime is the only time-related field I can find, and I don't see a timeout field in this doc either: https://github.com/ipfs/go-ipfs/blob/master/docs/config.md

@magik6k
Member Author

magik6k commented Oct 30, 2018

@overbool I'm not deeply familiar with the cmds code, but that may work.

@obo20

obo20 commented Jan 15, 2019

I'd also like to voice my support for this, especially with regard to adding/pinning content by hash, since finding content is not guaranteed and there doesn't seem to be any mechanism in place to time out gracefully.

It also seems like the interface supports this functionality for dht findprovs:
https://github.com/ipfs/interface-ipfs-core/blob/master/SPEC/DHT.md#dhtfindprovs

@alanshaw
Member

@alanshaw does this flag sound reasonable? It seems like something js-ipfs should implement as well.

Apologies, I missed this at the time. +1 for --read-timeout.

@obo20

obo20 commented Jan 24, 2019

There'd have to be a way to prevent the DHT request from executing locally if we timed out. So if we "timed out", the DHT request for an add couldn't sneakily be fulfilled at a later point without user knowledge.

@ravenlandpush

ravenlandpush commented Jun 23, 2019

Did this ever get added to the project code? Is anybody else successfully dealing with timeouts gracefully (without something harsh like backgrounding, sleeping, and/or killing ipfs PIDs)?

Cheers! &
Best,
Adam

@Stebalien
Member

This has not yet been implemented. The current "best practice" is to implement it client-side by detecting the transfer rate stalling at 0 for more than N seconds. Kind of like curl's --speed-limit and --speed-time flags.

@ravenlandpush

ravenlandpush commented Jun 27, 2019

@Stebalien thanks for responding. I appreciate it.

For context: I run a large multi-hash IPFS mirror service for the Ravencoin blockchain. For our explorer I already use -T -t with curl against a gateway /ipfs/{} URL, and it works great, by the way!

The problem with this method is that it seemingly cannot be used with a recursive IPFS_HASH, right? Only ipfs get does the job properly as far as I can see, but then the timeout problem is back.

Due to the nature of the way HTTP works and its limitations, I'm trying to support timing out on large files that are missing or incomplete on the network, both recursive and non-recursive. While it is a doddle to do with curl or wget using their timeout options for singular files, it is not clear to me whether something similar is even possible with recursive hashes (at least using curl). While ipfs get {} can be used for recursive hashes, I have to use a command-line timeout option, which is less than ideal. It would be good to detect a stalled transfer, with strace or something (no output for more than x_timeout_seconds, etc.), but is there a better way?

The only way I can see this working is if I can somehow grab recursive ipfs_hash content through the gateways by some other method I'm not familiar with. Otherwise I'm 'hacking' something together, which some might see as a slightly inadequate solution.

Thanks a lot for your time. Please let me know if you think there is a way to use curl --speed-limit etc. against an IPFS gateway while supporting recursive hashes, because that is the main problem for me. Like you say, curl with -t -T is okay for single files, but what do we do about recursive ones?

Best wishes,
Adam

@dokterbob
Contributor

It would be great if we could get a way for IPFS requests to behave deterministically, rather than having to 'box them in' on the client side. Particularly as timeouts on IPFS are extremely frequent; I would estimate that about 50% of all requests time out (with fresh hashes sniffed from the DHT, for ipfs-search.com).

My point being: this is a feature that solves a common back-ache for many users, and it seems easy to implement on the server side. In addition, making this a global API option would allow setting more conservative defaults (i.e. lower default timeouts), which would both improve user experience (fast errors instead of browsers and clients frozen for minutes) and lower the overall load on the network (fewer nodes looking for unavailable hashes).

Of course, there are scenarios in which we'd want a higher timeout; it would seem unwise to lower the defaults without a way to override them!

@Stebalien
Member

Stebalien commented Mar 8, 2021 via email
