CORS issues on preload nodes #447
This is still happening and can be seen here https://klueq.github.io/ (inspect the browser console) |
Adding my two cents to confirm that we're also seeing this issue, as described in ipfs/js-ipfs#1694. |
TL;DR: Preload was sending a preload request for EVERY DAG node of an added file. This is unnecessary, as a single preload request will recursively slurp down the entire graph. This PR fixes that behaviour.

---

Adding file(s) causes the preload module to preload any **root** nodes for the added content. It sends one HTTP preload request per root CID, because the API on the preload nodes will fetch any children automatically. This greatly reduces the number of HTTP requests we have to make when adding large files that are chunked into multiple nodes.

However, the tests were only checking that a CID had been requested for preload, not that it had been requested only once. I was inspecting the debug output for preload because of the recent [CORS issues we've been having](ipfs/infra#447) and noticed that multiple preload requests for the same CID were being sent. Worse still, when adding large files, the child nodes were also being requested 😱

The issues are:

1. `ipfs.add` causes [`object.get`](https://github.com/ipfs/js-ipfs/blob/99fb64e72c1752b3c227537dc6419d8bb837cd5f/src/core/components/files.js#L39) to be called for every file added. The problem is that `object.get` will also attempt to preload the CID you pass it.
2. `ipfs.add` causes `pin.add` to be called for each root CID. This in turn causes `dag.get` to be called for _every_ node in the graph. The problem is that `dag.get` will also attempt to preload the CID you pass it.

The solution in both cases is to tell `object.get` and `pin.add`, when called in the context of `ipfs.add`, not to preload the CIDs they are passed. I've augmented the tests to ensure that each of the required CIDs is requested from the preload nodes only once, and with that change and the code fixes the tests now pass.

License: MIT
Signed-off-by: Alan Shaw <alan.shaw@protocol.ai>
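For readers following along: the "one preload request per root CID" behaviour described above is just an HTTP call to a preload node's recursive refs endpoint (the same endpoint exercised with curl later in this thread). A minimal sketch of that request; the helper and default host are illustrative assumptions, not js-ipfs source:

```js
// Illustrative helper only (not js-ipfs internals): preload a root CID by
// asking a preload node for its refs recursively. One request per root CID
// is enough because the preload node fetches all child nodes itself.
async function preloadRoot (rootCid, host = 'https://node0.preload.ipfs.io') {
  const res = await fetch(`${host}/api/v0/refs?r=true&arg=${rootCid}`)
  if (!res.ok) {
    throw new Error(`preload of ${rootCid} failed: HTTP ${res.status}`)
  }
}
```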
Another report with an example - https://discuss.ipfs.io/t/trouble-with-content-availability-on-public-gateways-from-js-ipfs/4137/1 I tested with Chrome and Firefox and trimmed the test case to: https://codepen.io/anon/pen/jQqMPb?editors=0011 |
**Preflight Issues**

Preload nodes are sometimes slow to respond, which translates into a persistent problem: preflight requests (HTTP OPTIONS) fail, that failure is cached by the browser (afaik in Firefox for up to 24 hours or until the browser is closed), and as a result every following preload request (HTTP GET to `/api/v0/refs`) fails as well.

**Preflight Fix?**

@kyledrake what if we introduce an easy optimization: detect HTTP OPTIONS requests at Nginx and return a static response for them? That way preflight requests will not fail due to go-ipfs being under load, because they will never hit the daemon (see the sketch below).

Static response to copy (output of `curl -X OPTIONS 'https://node0.preload.ipfs.io/api/v0/refs' -v`):
# (...)
< HTTP/1.1 200 OK
< Date: Thu, 08 Nov 2018 15:09:18 GMT
< Content-Length: 0
< Connection: keep-alive
< Vary: Origin
< Vary: Access-Control-Request-Method
< Vary: Access-Control-Request-Headers
< Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET, POST, OPTIONS
< Access-Control-Allow-Headers: X-Requested-With, Range, Content-Range, X-Chunked-Output, X-Stream-Output
< Access-Control-Expose-Headers: Content-Range, X-Chunked-Output, X-Stream-Output

**Gateway ←→ Preload**

Note that even if the preflight succeeds, the public gateway does not load the content – it looks like it is missing a direct connection. Would adding the preload nodes as bootstrap servers for our gateway nodes fix the issue? |
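To make the static-response idea concrete, here is a rough Nginx sketch of the proposal above. It is only an illustration under assumptions: the `location` path and header set are copied from the response dump, and the upstream address is hypothetical, not the real preload node configuration:

```nginx
# Hypothetical sketch only - not the actual preload node config.
# Answer CORS preflight (OPTIONS) requests directly in Nginx so a slow or
# overloaded go-ipfs daemon can never cause a preflight failure.
location /api/v0/ {
    if ($request_method = OPTIONS) {
        add_header 'Access-Control-Allow-Origin' '*' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'X-Requested-With, Range, Content-Range, X-Chunked-Output, X-Stream-Output' always;
        add_header 'Access-Control-Expose-Headers' 'Content-Range, X-Chunked-Output, X-Stream-Output' always;
        return 200;
    }
    # Non-preflight requests are proxied to the daemon as before (assumed upstream address).
    proxy_pass http://127.0.0.1:5001;
}
```

With something like this, the preflight result no longer depends on how loaded the daemon is, which removes the cached-failure loop described above.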
@ipfs/infrastructure any update here? |
I deployed new preload servers, putting them on their own dedicated machines and bringing them up to the latest version of IPFS, which improved things a lot. We also had someone flooding the preload server with several hundred requests per second, which was causing performance issues. I throttled it so that people can't just flood the server with API calls from a script. I'm going to explore @lidel's idea here when I get a chance, but I'd like to keep this open until I deploy that. |
If you've throttled it - totally reasonable - I hope that the JS-IPFS code has been written so that if preload is slow to respond (throttled) it won't wait for it before trying the local JS-IPFS (not JS-IPFS-API) call. Note that it might not be a script that's doing a fast load: if we load a webpage then it's going to try and read up to approx 50 images. In many cases those will be cached, but sometimes js-ipfs is going to be inserting these preload calls. Note - it's on our list to turn preload off in the dweb.archive.org code, but we haven't got to it yet. |
@mitra42 afaik preload in js-ipfs is async and does not block regular operations:
@kyledrake additional idea: setting an explicit timeout for caching preflight requests could limit the number of preflight requests for popular CIDs (the spec says the preflight cache is per URL) and ensure all vendors behave the same, e.g. to set the maximum possible cache of 10 minutes: `Access-Control-Max-Age: 600` (see the sketch below).
Refs: |
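If the static OPTIONS response sketched earlier is adopted, the cache timeout could be a one-line addition to that block; again only a hedged sketch for an assumed Nginx deployment:

```nginx
# Hypothetical addition to the OPTIONS block sketched earlier:
# allow browsers to cache a successful preflight for 10 minutes per URL.
add_header 'Access-Control-Max-Age' 600 always;
```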
Just confirming, I'm still seeing preload CORS errors |
paging @kyledrake |
Also reported here: ipfs/js-ipfs#1732 |
There was a known issue with the AMS server which I just resolved, but the other servers should be operating as expected. The monitoring caught the AMS outage, but I didn't see issues with any of the other servers. Having all of the bootstrap servers go down at once is of course a major issue, but if it's one out of several, that shouldn't cause any major problems (sans the annoying console output). We're going to have random servers go down from time to time for various reasons. I'm going to close this ticket, as the preload servers' CORS is functioning properly. I am working on improving the infrastructure behind the current bootstrap system to make it faster and more reliable in the future, but that's work that's independent of this ticket (which is specifically about CORS issues). |
I'm not clear whether you fixed a CORS issue or something else. If CORS was functioning: one place I saw this on another system was that CORS worked correctly in the "normal" case but incorrectly in the failure case, so errors were being returned with the wrong CORS headers, and instead of displaying the actual error (e.g. a 500 server error) they were showing up as a CORS error. |
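For anyone trying to distinguish the two cases described here, the browser behaviour can be illustrated with a generic cross-origin fetch (a sketch using a hypothetical URL, not the preload nodes specifically): when a 5xx response is missing CORS headers, the script only sees an opaque CORS/network error, never the real status.

```js
// Generic illustration of the failure mode described above (hypothetical URL).
// If the 500 response carries Access-Control-Allow-Origin, the real status is
// visible to the page; if the header is missing, the browser blocks the
// response and the script only sees a CORS/network error.
fetch('https://api.example.com/endpoint-that-returns-500')
  .then(res => console.log('real status visible:', res.status))
  .catch(err => console.log('status hidden behind a CORS error:', err.message))
```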
@mitra42 are you able to tell which API endpoint and path was missing CORS headers in HTTP 500 responses or preflight requests? Is it still reproducible, or was it happening randomly? FWIW I checked the preload call for an invalid CID (returns HTTP 500) and the CORS headers look fine right now:

$ curl -s -I -X OPTIONS 'https://node0.preload.ipfs.io/api/v0/refs?r=true&arg=INVALID_CID' | grep -i Access-Control
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: X-Requested-With, Range, Content-Range, X-Chunked-Output, X-Stream-Output
Access-Control-Expose-Headers: Content-Range, X-Chunked-Output, X-Stream-Output
$ curl -s -I -X GET 'https://node0.preload.ipfs.io/api/v0/refs?r=true&arg=INVALID_CID' | grep -i Access-Control
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: X-Requested-With, Range, Content-Range, X-Chunked-Output, X-Stream-Output
Access-Control-Expose-Headers: Content-Range, X-Chunked-Output, X-Stream-Output |
Not sure what you mean by API endpoint; I see this in a browser log, and as mentioned by other reporters it's intermittent - and I'm usually only looking at the browser log if I'm looking for some other problem. If there is other info I can grab for you next time I see it in a log, please let me know (here). |
Ok - will do - do you mean something different from what was posted in the bug report on js-ipfs#1476? There is a bunch of history to this problem there, which of course didn't get copied over when this new one was created. |
Hi Infra team, some of our users are reporting multiple problems with CORS; can you confirm whether this is a known problem?
Issue: ipfs/js-ipfs#1476 (comment)