--read-timeout option for ipfs get
#5541
Comments
@magik6k is there any difference between the global option …
Yes, the difference is that …
@magik6k Got it, can I help fix it?
Personally, I'd try to use the word … @alanshaw does this flag sound reasonable? It seems like something js-ipfs should implement as well.
@magik6k I found the …
After reading through various issues and docs and trying a few test calls, I can't figure out how to use either option. --read-timeout is actually the one I'd like to be able to use (provided it also works when there's no resolution at all, i.e. at 0 bytes downloaded; otherwise it should probably be combined with something else). There are no timeout fields in the default config file either, apparently; RecordLifetime is the only time-related field I can find, and I don't see a timeout field in this doc either: https://github.com/ipfs/go-ipfs/blob/master/docs/config.md
@overbool I'm not exactly into cmds code, but that may work.
I'd also like to voice my support for this, especially in regard to adding/pinning content by hash, since finding content is not guaranteed and there don't seem to be any mechanisms in place to gracefully time out. It also seems like the interface supports this functionality for dht findprovs:
Apologies, I missed this at the time. +1 for …
There'd have to be a way to prevent the DHT request from executing locally if we timed out. So if we "timed out", the DHT request for an add couldn't sneakily be fulfilled at a later point without user knowledge. |
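The cancellation concern above can be sketched as cooperative cancellation: the lookup checks a cancel signal between steps, so a request the client has abandoned stops instead of quietly completing later. This is an illustrative Python sketch, not go-ipfs code; the step/event names are assumptions.

```python
# Sketch: a lookup that observes the client's cancellation between steps,
# so a "timed out" request is torn down rather than fulfilled late.
import threading

def cancellable_lookup(steps, cancelled, results):
    """Run lookup `steps` one at a time, checking `cancelled` between
    steps; append completed step outputs to `results`."""
    for step in steps:
        if cancelled.is_set():   # client gave up: stop, don't fulfil late
            return
        results.append(step())

cancelled = threading.Event()
results = []
steps = [
    lambda: "peer-1",
    lambda: cancelled.set() or "peer-2",  # client times out mid-lookup
    lambda: "peer-3",                     # never runs
]
worker = threading.Thread(target=cancellable_lookup,
                          args=(steps, cancelled, results))
worker.start()
worker.join()
# "peer-3" is never produced: the lookup stopped once the client cancelled.
```

In go-ipfs itself this role is played by context cancellation; the sketch only shows the shape of the guarantee being asked for.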
Did this ever get added to the project code, guys? Is anybody else successfully dealing with timeouts gracefully (without something harsh like backgrounding, sleeping, and/or killing ipfs PIDs, etc.)? Cheers!
This has not yet been implemented. The current "best practice" is to implement it client-side by detecting the transfer rate stalling at 0 for more than N seconds. Kind of like curl's …
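The client-side workaround described above can be sketched in a few lines: track the time of the last byte received and abort once nothing new has arrived for N seconds. This is a minimal illustration with assumed names (`read_chunk`, `StallTimeout`), not an ipfs or curl API.

```python
# Client-side stall detection: abort if no new data arrives for N seconds.
import time

class StallTimeout(Exception):
    """Raised when a stream delivers no data for too long."""

def read_with_stall_timeout(read_chunk, stall_seconds=30.0, poll=0.1):
    """Consume chunks from `read_chunk()`, which returns bytes, b'' on
    EOF, or None when no data is ready yet. Raise StallTimeout if no new
    bytes arrive within `stall_seconds`."""
    chunks = []
    last_progress = time.monotonic()
    while True:
        chunk = read_chunk()
        if chunk is None:  # nothing ready yet: check how long we've stalled
            if time.monotonic() - last_progress > stall_seconds:
                raise StallTimeout(f"no data for {stall_seconds}s")
            time.sleep(poll)
            continue
        if chunk == b"":   # EOF: transfer finished normally
            return b"".join(chunks)
        chunks.append(chunk)
        last_progress = time.monotonic()
```

The same pattern covers the "0 bytes downloaded" case raised earlier in the thread, since the timer starts before the first byte ever arrives.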
@Stebalien thanks for responding, I appreciate it. I run a large multi-ipfs_hash mirror service for the ravencoin blockchain. For our explorer I already use -T -t with curl against a gateway /ipfs/{}, and it works great, by the way! The problem with this method is that it seemingly can't be used against a recursive IPFS_HASH, right? Only ipfs get does the job properly as far as I can see, but then the timeout problem is back.

Due to the nature of the way HTML works and its limitations, I'm trying to support timeouts for large files that are missing or incomplete on the network, both recursive and non-recursive. While it is a doddle to do with curl or wget using their timeout options for singular files, it is not clear to me whether something similar is even possible with recursive hashes (at least using curl). While ipfs get {} can be used for recursive hashes, I have to wrap it in a command-line timeout, which is less than ideal. It would be good to detect a stalled transfer, with strace or something (no output > x_timeout_seconds, etc.), but is there a better way? The only way I can see this working is if I could somehow grab a recursive ipfs_hash through the gateways by some method I'm not familiar with. Otherwise I'm 'hacking' together something which some might see as a slightly inadequate solution.

Thanks a lot for your time. Please let me know if you think there is a way to use curl --speed-limit etc. with an IPFS gateway while supporting recursive hashes, because that is the main problem for me. As you say, curl with -t -T is okay for files, but what do we do about recursives? Best wishes,
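One way around the recursive-hash limitation described above is to treat a recursive fetch as a tree walk in which every individual object gets its own per-request timeout, just as curl would give a single file. The sketch below is a hedged illustration: `list_links` and `fetch_file` are stand-ins for whatever per-object transport you use (e.g. gateway HTTP requests with a read timeout); they are assumptions, not an ipfs API.

```python
# Recursive fetch as a tree walk: each object is fetched individually,
# so a per-request timeout bounds how long any single object can stall.
def fetch_recursive(root, list_links, fetch_file):
    """Depth-first walk. Directories are expanded via
    list_links(h) -> list of (name, hash, is_dir); files are retrieved
    via fetch_file(h), which may raise your transport's timeout error.
    Returns a dict mapping relative paths to file contents."""
    out = {}
    stack = [("", root, True)]
    while stack:
        path, h, is_dir = stack.pop()
        if is_dir:
            for name, child, child_is_dir in list_links(h):
                stack.append((f"{path}/{name}".lstrip("/"),
                              child, child_is_dir))
        else:
            out[path] = fetch_file(h)
    return out
```

A timeout raised while fetching one file then aborts (or lets you retry) just that object, rather than forcing a blanket wall-clock timeout over the whole recursive transfer.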
It would be great if we could get IPFS requests to behave in a deterministic way, rather than having to 'box them in' on the client side, particularly as timeouts on IPFS are extremely frequent; I would estimate that about 50% of all requests time out (with fresh hashes sniffed from the DHT, for ipfs-search.com). My point being: this is a feature that solves a common backache for many users, and it seems easy to implement on the server side. In addition, making this a global API option would allow you to set more conservative global defaults (i.e. lower default timeouts), which would both improve user experience (faster errors instead of browsers and clients frozen for minutes) and lower the overall load on the network (fewer nodes looking for unavailable hashes). Of course, there are scenarios in which we'd want a higher timeout; it would seem unwise to lower the defaults without that option!
Please open a thread in https://discuss.ipfs.io.
It would be useful for ipfs get (and possibly other related commands like ipfs cat) to have a --read-timeout option which would interrupt the command if no new data was read for a set amount of time. Mentioned in #5537 (comment)