Change in cache doesn't propagate #574
Hm, it could be that the refetch indeed causes the stream to stop watching the cache, since it has a NoCache fetch policy. But if you watch the cache anyway, you don't need the request controller; just execute
But no, this is not expected behaviour, and this test: https://github.com/gql-dart/ferry/blob/master/packages/ferry/test/typed_links/request_controller_typed_link_test.dart#L251 should ensure it does not happen.
It also doesn't work. I had to change
But that test doesn't use the NoCache fetchPolicy 🤔
But unfortunately it won't solve the issue with fetchMore, because it has to have the same requestId in order to merge results into a single list, if I am not wrong.
Yes, but it also does not even use a cache, it just tests the request controller typed link. It's just an assertion that the previous stream continues to be listened to. Similar tests with a full client are below (e.g. https://github.com/gql-dart/ferry/blob/master/packages/ferry/test/typed_links/request_controller_typed_link_test.dart?rgh-link-date=2024-01-30T16%3A51%3A24Z#L408 ), maybe you can craft a reproducible example out of that?
Yes, in that case you can't use the same requestId.
If you want caching + pagination to work together, I would recommend just merging all the pages in the cache and removing the arguments for paging from the cache key, like here: https://ferrygraphql.com/docs/cache-configuration#the-fields-property Alternatively/additionally, you can also add the
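Based on the linked cache-configuration docs, a minimal sketch of that idea could look like the following. This is an assumption-laden illustration, not verified code: the `userId` key arg is taken from this thread, the `offset`/`limit` variable names are placeholders for whatever pagination arguments the schema actually uses, and the `merge` signature follows the `(existing, incoming, options)` shape used later in this thread.

```dart
// Sketch: merge all pages of `posts` into a single cache entry.
// Because `keyArgs` omits the pagination arguments, every page writes
// to the same cache field, and `merge` decides how old and new combine.
final cache = Cache(
  typePolicies: {
    'Query': TypePolicy(
      fields: {
        'posts': FieldPolicy(
          keyArgs: ['userId'], // pagination args intentionally left out
          merge: (existing, incoming, options) {
            // First page / refetch: replace whatever was cached.
            final offset = options.variables['offset'];
            if (offset == null || offset == 0) return incoming;
            // Later pages: append to the cached list.
            return [...?(existing as List?), ...(incoming as List)];
          },
        ),
      },
    ),
  },
);
```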
I've already tried that some time ago, but I had some problems. I also use
I have a list of things that I would try to do regarding ferry, but I need to finish migrating to ferry asap, and I need to find some free time for improvements/fixes for ferry in the future 🙏
To be honest, I am a little bit confused and don't remember right now when to use
I think what you want is: to set

```
{
  "Query": {
    "posts": [{...}, {...}] // just a single list of posts
  }
}
```

instead of

```
{
  "Query": {
    "posts(offset: 0, limit: 3)": [{...}, {...}],
    "posts(offset: 3, limit: 3)": [{...}, {...}]
  }
}
```

so that there is just a single entry for the query. The default behaviour of the cache is that when a new result for the same query comes in, it replaces the previous one. We now want to change that, so that the previous and the next result are merged. Now, if you watch the cache for the posts query, it will always return all cached results. You can fetch more or refetch by just executing more requests; no need for any requestId, fetchMore params, or .requestController.add(). And mutations that update entries (like your "like" example) would continue to just work. You probably don't need to implement the
Hmm, right now I have

```dart
{
  'posts': FieldPolicy(
    keyArgs: ['userId'],
  ),
}
```

I will try your recommendation tomorrow, but as I understand it, I have to modify the merge function from the docs because it assumes that

And btw, many thanks for your help 🙏
Yes, if they are maps, you should use a LinkedHashMap instead of the LinkedHashSet. Btw, here are tests that are close to your scenario:
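To illustrate the difference in plain Dart (a standalone sketch, not ferry code, with a hypothetical `mergeById` helper): two maps with equal contents are still distinct objects, so a LinkedHashSet of maps will not de-duplicate them, while a LinkedHashMap keyed by id will, and it preserves insertion order.

```dart
import 'dart:collection';

// Sketch: de-duplicate merged pages of map entries by their 'id' field,
// preserving insertion order. Later occurrences replace earlier ones,
// so a refetched item overwrites its stale cached copy.
List<Map<String, Object?>> mergeById(
  List<Map<String, Object?>> existing,
  List<Map<String, Object?>> incoming,
) {
  final byId = LinkedHashMap<Object?, Map<String, Object?>>();
  for (final item in [...existing, ...incoming]) {
    byId[item['id']] = item;
  }
  return byId.values.toList();
}
```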
Watching a stream with .CacheFirst or .CacheAndNetwork is enough. You can also watch the cache directly using client.cache.watchQuery();
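As a sketch of those two options, using the generated GpostsReq from this thread (the exact watchQuery call is assumed from the comment above, and `client` / `standardFetchCount` are the thread's own names):

```dart
final req = GpostsReq((b) => b..vars.limit = standardFetchCount);

// Option 1: a CacheFirst / CacheAndNetwork request stream keeps
// watching the cache, so it re-emits when the merged entry changes.
client.request(req).listen((response) {
  // handle response.data
});

// Option 2: watch the cache directly, bypassing the request controller.
client.cache.watchQuery(req).listen((data) {
  // raw cached data for the query
});
```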
Yes, you would add the typePolicies map in the constructor of the cache.
How do you execute the pull-to-refresh and the fetchMore request?
No, if you do the merging in the cache and watch the cache, you don't need requestId or updateResult.
Ok so I have both of these 👌
Right now I have

```dart
GuserTimelinePostsReq(
  (b) => b
    ..fetchPolicy = FetchPolicy.CacheAndNetwork
    // ..requestId = '${GuserTimelinePostsReq}_$userId'
    ..vars.limit = standardFetchCount
    ..vars.userId = userId,
)
```

```dart
// refetch
Client.request(originalRequest) // changing the FetchPolicy causes the request to not be made

// fetchMore
Client.request(originalRequest.rebuild((p0) => p0.vars.after = 'some string'))
```
Do you also listen to the result stream / convert it to a future using .first? Otherwise it may not get executed.
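For instance, one way to make sure the refetch actually runs is to convert the stream to a Future and wait for the network result. This is a sketch assuming ferry's `dataSource` field on responses, where `DataSource.Link` marks a response that came from the network rather than the cache; `refetchRequest` is a placeholder name.

```dart
// Sketch: force the request stream to be listened to, and wait for
// the response that came over the network (not from the cache).
final response = await client
    .request(refetchRequest)
    .firstWhere((r) => r.dataSource == DataSource.Link);
```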
Ahhh, you are right, there was even
You mean something like?

```dart
if (options.variables["after"] == null) {
  // was refetch, clear existing from cache
  return incoming;
}
```
You could also append the post automatically to the cache after the create-post mutation using an updateCacheHandler on your mutation. See

Or do you have a field in the object indicating the sort order? You could use this to do the sorting in the cache (e.g. using the
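A rough sketch of that updateCacheHandler idea. All names here (GcreatePostData, GcreatePostVars, GpostsReq, the posts field) are hypothetical placeholders for whatever the generated code actually produces, and the read-modify-write pattern assumes ferry's CacheProxy readQuery/writeQuery API:

```dart
// Sketch: after a createPost mutation succeeds, prepend the new post
// to the cached posts list so cache watchers update without a refetch.
void addPostToListHandler(
  CacheProxy proxy,
  OperationResponse<GcreatePostData, GcreatePostVars> response,
) {
  final newPost = response.data?.createPost;
  if (newPost == null) return;

  final req = GpostsReq((b) => b..vars.userId = newPost.userId);
  final cached = proxy.readQuery(req);
  if (cached == null) return;

  proxy.writeQuery(
    req,
    // Assumes the mutation's post type is compatible with the query's;
    // in practice you may need to convert between the generated types.
    cached.rebuild((b) => b.posts.insert(0, newPost)),
  );
}
```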
Yeah, I was thinking about it, but I prefer to avoid sorting, because I know it can cause some bugs, and I have many different lists with different types of items, so I would have to create a separate merge function for every query 😞
```dart
if (options.vars["after"] == null) {
  // was refetch, clear existing from cache
  return incoming;
}
```

Yeah, exactly... I am dumb 🤦♂️ (or exhausted). Thank you! So the last problem I see is that I had to remove
Maybe you can show a reproduction sample. I don't really understand what exactly you are doing, where you set request ids, and how you execute the requests.
I set

```dart
GuserTimelinePostsReq(
  (b) => b
    ..requestId = '${GuserTimelinePostsReq}_$userId' // adding this doesn't propagate changes from the cache to the stream related to that request AFTER a refetch or fetchMore action
    ..vars.limit = standardFetchCount
    ..vars.userId = userId,
)
```

```dart
// refetch
Client.request(originalRequest.rebuild((b) => b
  ..fetchPolicy = FetchPolicy.NetworkOnly))

// fetchMore
Client.request(originalRequest.rebuild((b) => b
  ..fetchPolicy = FetchPolicy.NetworkOnly
  ..vars.after = 'some string'))
```

But sure, I will try to prepare a failing test or a reproduction sample, though probably not sooner than next week 😞
Ah yes, I think this is a real issue. The RequestControllerTypedLink switchMaps to the new request stream, which does not watch the cache:
Okay, got it. So for now, MANY THANKS @knaeckeKami ❤️
Hi @knaeckeKami, I have another question 😅 It will be stored under

```dart
'post': FieldPolicy(
  keyArgs: ['postId', 'sortBy'],
  merge: _postCommentsMergeFunction,
),
```

and then in the merge function it looks like that:

And as I understand it, it doesn't differentiate Post objects, so for every Post I always get the same existing object. Am I right? 🤔
Ohh, I think I see the problem. I should use
I don't fully understand the question. You can inspect the cache by getting all the keys of the

The
I am closing the issue as I solved it by listening for changes when using
Hey! 👋

I have such a case. I have a request, let's say GpostsReq. I use that request with the CacheAndNetwork FetchPolicy. But for the pull-to-refresh action I put into Client.requestController.add(modifiedRequest) a modified request which has the NetworkOnly FetchPolicy set, so I display results only from the network.

And let's check the following steps: after the refresh I like a post, which updates it in the cache, and the change should propagate to the GpostsReq stream so I can see that my post has been liked; but after the refresh with the NetworkOnly FetchPolicy it doesn't. The same story with the fetchMore action, for which I have set the NoCache FetchPolicy.

Am I missing something, or is that expected behaviour? 😞