
Support for Cache-Control headers in RTK Query #1575

Closed
sazzer opened this issue Oct 5, 2021 · 7 comments

sazzer commented Oct 5, 2021

I'm using an API that makes correct use of Cache-Control headers on its responses to indicate how long the data remains valid. It would be good if RTK Query could use this information as part of its caching. For example, if a response has Cache-Control: private, max-age=3600, then RTK Query would know not to request the same resource again for the next hour (3600 seconds).

Equally, it would be good if there were a way to force the HTTP request to be made even if the browser has cached the resource, by adding Cache-Control: no-cache to the HTTP request. This would be useful when you want the most recent version of a resource rather than a cached one, for example to display an edit form for that resource.
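To illustrate the first half of the request: the max-age directive is straightforward to extract from a response header. This is only a sketch of the parsing step; wiring the result into RTK Query's cache lifetimes is the feature being requested, not something RTK Query does today.

```typescript
// Sketch: extract max-age (in seconds) from a Cache-Control response
// header, e.g. "private, max-age=3600" -> 3600. A baseQuery could use
// this to derive how long a cached entry stays fresh.
function maxAgeSeconds(cacheControl: string | null): number | undefined {
  if (!cacheControl) return undefined;
  // max-age is one of several comma-separated directives
  const match = /(?:^|,)\s*max-age=(\d+)/i.exec(cacheControl);
  return match ? Number(match[1]) : undefined;
}
```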

phryneas (Member) commented Oct 5, 2021

Stupid question, but shouldn't the browser (=fetch) honor those without us doing anything?

sazzer (Author) commented Oct 5, 2021

Yes it does, but there's a slight flicker. The entry in the Redux store goes to "status": "pending" and then almost immediately to "status": "fulfilled", but for the browser tick in between, the screen re-renders as if the resource were loading.

And then there also needs to be some way to force the HTTP request to be made even through the browser cache. That typically means adding Cache-Control: no-cache to the outgoing request, and it would be great if that could be hooked into the RTK Query caching mechanism as well. However, you'd only want to do that when forcing cache invalidation, not when re-requesting the resource because the RTK Query cache has expired (or on focus change, etc.).
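The distinction in that last sentence can be sketched as a small helper. The names here are purely illustrative, not RTK Query API: the point is that only a forced invalidation should bypass the browser's HTTP cache, while re-fetches triggered by cache expiry or window focus should still be allowed to use it.

```typescript
// Hypothetical reasons a re-fetch might be triggered.
type RefetchReason = 'invalidation' | 'expiry' | 'focus';

// Only a forced invalidation should send Cache-Control: no-cache,
// which makes the browser revalidate with the server instead of
// answering from its HTTP cache.
function requestCacheControl(reason: RefetchReason): string | undefined {
  return reason === 'invalidation' ? 'no-cache' : undefined;
}
```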

phryneas (Member) commented Oct 5, 2021

The problem is that RTK Query itself is 100% transport-independent. We do not know whether you use fetch, axios, GraphQL, gRPC, or a WebSocket in the baseQuery. So the question is really what we could do here. I could imagine adding an additional isInvalidation option that is passed into the baseQuery.
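A minimal sketch of how such a flag could flow into a fetch-based baseQuery, assuming the hypothetical isInvalidation option described above (none of these names are existing RTK Query API):

```typescript
// Hypothetical context a baseQuery might receive from RTK Query.
interface QueryContext {
  isInvalidation?: boolean;
}

// Transport-specific code (here: HTTP) translates the transport-
// independent flag into a cache directive on the outgoing request.
function buildRequestHeaders(ctx: QueryContext): Record<string, string> {
  const headers: Record<string, string> = { Accept: 'application/json' };
  if (ctx.isInvalidation) {
    // Force revalidation past the browser's HTTP cache only when the
    // re-fetch was caused by an explicit cache invalidation.
    headers['Cache-Control'] = 'no-cache';
  }
  return headers;
}
```

This keeps RTK Query transport-agnostic: a gRPC or WebSocket baseQuery could simply ignore the flag.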

sazzer (Author) commented Oct 5, 2021

That seems like a reasonable option. Add an option that custom clients can use if desired, have the default client use it correctly, and it will likely behave correctly more often than not.

Maybe also an option to turn on honouring of Cache-Control in requests and responses, so that people have to opt in and you can't break any existing users.

phryneas (Member) commented Oct 5, 2021

I think the other way round (should a request be made at all?) will not play a role long-term, since that "short request flicker" will go away as soon as we get Suspense. Adding that to RTK Query would require big architectural changes.

phryneas (Member) commented

#1656 should expose more information to the baseQuery and make this more usable. It's not full out-of-the-box support, though.

phryneas added this to the 1.7 milestone Oct 27, 2021
markerikson (Collaborator) commented

Available in the 1.7 betas.
