
Change in cache doesn't propagate #574

Closed
LiLatee opened this issue Jan 30, 2024 · 28 comments


@LiLatee

LiLatee commented Jan 30, 2024

Hey! 👋

Here is my case. I have a request, let's say GpostsReq, which I use with the CacheAndNetwork FetchPolicy. For the pull-to-refresh action I put a modified request with the NetworkOnly FetchPolicy into Client.requestController.add(modifiedRequest), so that I display results only from the network.
Now consider the following steps:

  1. Like a post using another mutation. This mutation modifies the cache, and that change propagates to GpostsReq, so I can see that my post has been liked.
  2. Then pull to refresh with the modified request that has the NetworkOnly FetchPolicy.
  3. Like another post. Now the post doesn't update. It looks like my query no longer picks up changes from the cache, but that wasn't the intention.

The same happens with the fetchMore action, for which I have set the NoCache FetchPolicy.

Am I missing something, or is that expected behaviour? 😞

@knaeckeKami
Collaborator

knaeckeKami commented Jan 30, 2024

Hm, it could be that the refetch indeed causes the stream to stop watching the cache, since it has a no cache fetch policy.

But if you watch the cache anyway, you don't need to use the request controller; just execute (client.request()) the pull-to-refresh request with .NetworkOnly. This will update the cache, and the cache will refresh the original request that uses .CacheAndNetwork.
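A minimal sketch of that pattern (hypothetical names; assumes a `client` instance and the `GpostsReq` from this thread):

```dart
// Watch the query with CacheAndNetwork; the stream re-emits whenever
// the cache changes, e.g. after a like mutation or a refetch.
final postsReq =
    GpostsReq((b) => b..fetchPolicy = FetchPolicy.CacheAndNetwork);
final sub = client.request(postsReq).listen((response) {
  // render response.data
});

// Pull to refresh: execute a NetworkOnly copy of the same request. Its
// result is written into the cache, which makes the watched stream above
// emit again, so no requestController is needed.
Future<void> onRefresh() => client
    .request(postsReq.rebuild((b) => b..fetchPolicy = FetchPolicy.NetworkOnly))
    .first; // request streams are lazy: listening triggers execution
```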

@knaeckeKami
Collaborator

But no, this is not expected behaviour, and this test: https://github.com/gql-dart/ferry/blob/master/packages/ferry/test/typed_links/request_controller_typed_link_test.dart#L251 should ensure it does not happen.

@LiLatee
Author

LiLatee commented Jan 30, 2024

But if you watch the cache anyway, you don't need to use the request controller; just execute (client.request()) the pull-to-refresh request with .NetworkOnly. This will update the cache, and the cache will refresh the original request that uses .CacheAndNetwork.

It also doesn't work. I had to change requestId to another one and then it worked (either with requestController or request) 🤔 So it looks like the problem is that both request and modifiedRequest have the same requestId.

@LiLatee
Author

LiLatee commented Jan 30, 2024

But no, this is not expected behaviour, and this test: https://github.com/gql-dart/ferry/blob/master/packages/ferry/test/typed_links/request_controller_typed_link_test.dart#L251 should ensure it does not happen.

But that test doesn't use a no-cache fetchPolicy 🤔

@LiLatee
Author

LiLatee commented Jan 30, 2024

So it looks like the problem is that both request and modifiedRequest have the same requestId

But unfortunately that won't solve the issue with fetchMore, because it has to have the same requestId in order to merge results into a single list, if I am not wrong.

@knaeckeKami
Collaborator

Yes, but it also doesn't even use a cache; it just tests the request controller typed link. It's just an assertion that the previous stream continues to be listened to.

Similar tests with a full client are below (e.g. https://github.com/gql-dart/ferry/blob/master/packages/ferry/test/typed_links/request_controller_typed_link_test.dart#L408), maybe you can craft a reproducible example out of that?

@knaeckeKami
Collaborator

It also doesn't work. I had to change requestId to another one and then it worked (either with requestController or request) 🤔 So it looks like the problem is that both request and modifiedRequest have the same requestId.

Yes, in that case you can't use the same requestId.

@knaeckeKami
Collaborator

knaeckeKami commented Jan 30, 2024

But unfortunately it won't solve the issue with fetchMore because it has to have the same requestId in order to merge results into a single list if I am not wrong.

If you want caching + pagination to work together, I would recommend merging all the pages in the cache and removing the paging arguments from the cache key.

like here: https://ferrygraphql.com/docs/cache-configuration#the-fields-property

Alternatively/additionally, you can also add a read function to the type policy and read all the pages there (or just the ones that you want).

@LiLatee
Author

LiLatee commented Jan 30, 2024

If you want caching + pagination to work together, I would recommend just merging all the pages in the cache, and removing the arguments for paging from the cache.
like here: ferrygraphql.com/docs/cache-configuration#the-fields-property

I've already tried that some time ago, but I had some problems. I also use the updateResult field of OperationRequest to merge results there, and because of that I couldn't find a way to properly merge results. But I will try it again tomorrow.

maybe you can craft a reproducible example out of that?

I have a list of things that I would like to try regarding ferry, but I need to finish migrating to ferry ASAP, so I will have to find some free time for improvements/fixes for ferry in the future 🙏

@LiLatee
Author

LiLatee commented Jan 30, 2024

Alternatively/additionally, you can also add a read function to the type policy and read all the pages there (or just the ones that you want).

To be honest, I am a little bit confused and don't remember right now when to use read, merge and updateResult, but I will take a look 🤞

@knaeckeKami
Collaborator

knaeckeKami commented Jan 30, 2024

I think what you want is:

To set keyArgs for the type policy to an empty array. This means the Posts request will be cached like this:

{
  "Query": {
    "posts": [{...}, {...}] // just a single list of posts
  }
}

instead of

{
  "Query": {
    "posts(offset: 0, limit: 3)": [{...}, {...}],
    "posts(offset: 3, limit: 3)": [{...}, {...}]
  }
}

So that there is just a single entry for the query.

The default behaviour of the cache is that, when a new result for the same query comes in, the old items are replaced.

We now want to change that, so that the previous and next results are merged.
You can achieve that by using the merge function as in the docs.

Now, if you watch the cache for the posts query, it will always return all cached results.

You can fetch more or refetch by just executing more requests, no need for any requestId or fetchMore params, or .requestController.add().

And mutations that update entries (like your "like" example) would continue to just work.

You probably don't need to implement the read function in the type policy.
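Put together, that configuration might look roughly like this (a sketch only, following the cache-configuration docs linked earlier; a real merge should also deduplicate items, e.g. by post id):

```dart
import 'package:ferry/ferry.dart';

final cache = Cache(
  typePolicies: {
    'Query': TypePolicy(
      fields: {
        'posts': FieldPolicy(
          // Empty keyArgs: all pages of `posts` share a single cache entry.
          keyArgs: [],
          // Append the incoming page to the already-cached items instead
          // of replacing them (the cache's default behaviour).
          merge: (existing, incoming, options) => [
            ...?(existing as List?),
            ...?(incoming as List?),
          ],
        ),
      },
    ),
  },
);
```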

@LiLatee
Author

LiLatee commented Jan 30, 2024

Hmm, right now I have

{
      'posts': FieldPolicy(
        keyArgs: ['userId'],
      ),
}

I will try your recommendation tomorrow, but as I understand it, I have to modify the merge function from the docs, because it assumes that existing and incoming are lists, but in my case they are maps, and inside these maps I have a list with posts. I will let you know tomorrow.

And btw many thanks for your help 🙏

@knaeckeKami
Collaborator

knaeckeKami commented Jan 30, 2024

Yes, if they are maps, you should use a LinkedHashMap instead of the LinkedHashSet.

btw, here are tests that are close to your scenario:

test('can merge pages in cache using custom field policy', () async {

@LiLatee
Author

LiLatee commented Jan 31, 2024

To set keyArgs for the type policy to an empty array. This means, the Posts request will be cached like this:
...
So that there is just a single entry for the query.

Yes, that is basically what I want to achieve, and I already had that, with the small difference that in keyArgs I defined the userId param, because I want to differentiate caches for different users' timelines.
[screenshot of the field policy configuration]

And should these typePolicies be put inside the Cache class? I see the Client class also has a typePolicies field.

But I don't get the following.

Now, if you watch the cache for the posts query, it will always return all cached results.

Do I have to do something to watch the cache?

You can fetch more or refetch by just executing more requests, no need for any requestId or fetchMore params, or .requestController.add().

So the first call of my request works fine; it executes the merge function. Unfortunately, after calling the same query once again (in order to pull to refresh), it doesn't make any request (I cannot spot it in devtools).
Ok... And now I noticed that the difference between the original request and the pull-to-refresh query is that the pull-to-refresh one has NetworkOnly and the original one has CacheAndNetwork. After setting both to CacheAndNetwork, pull to refresh worked fine. But I would prefer to have NetworkOnly in the pull to refresh 🤔

Similar story with fetchMore, but in that case I have to set the after parameter (indicating the page to fetch), and because of that difference from the original request, the request is not called. In your test you modify the offset var and everything seems to work fine 🤔

And what about updateResult field of a request? Do I need it? 🤔

@knaeckeKami
Collaborator

Do I have to do something to watch the cache?

watching a stream with .CacheFirst or .CacheAndNetwork is enough.

you can also watch the cache directly using client.cache.watchQuery();

And should these typePolicies be put inside the Cache class?

Yes, you would add the typePolicies map in the constructor of the Cache.

Ok... And now I noticed that difference between original request and pull to refresh query is that pull to refresh one has NetworkOnly and original one has CacheAndNetwork. After setting both to CacheAndNetwork pull to refresh worked fine. But I would prefer to have NetworkOnly in the pull to refresh 🤔

how do you execute the pull to refresh and the fetchmore request?

And what about updateResult field of a request? Do I need it? 🤔

No, if you do the merging in the cache and watch the cache, you don't need requestId and updateResult.

@LiLatee
Author

LiLatee commented Jan 31, 2024

watching a stream with .CacheFirst or .CacheAndNetwork is enough.
yes, you would add the typepolicies map in the constructor of the cache.

Ok so I have both of these 👌

how do you execute the pull to refresh and the fetchmore request?

Right now I have

GuserTimelinePostsReq(
  (b) => b
    ..fetchPolicy = FetchPolicy.CacheAndNetwork
    // ..requestId = '${GuserTimelinePostsReq}_$userId'
    ..vars.limit = standardFetchCount
    ..vars.userId = userId,
)

// refetch
Client.request(originalRequest) // changing the FetchPolicy causes the request not to be made
// fetchMore
Client.request(originalRequest.rebuild((p0) => p0.vars.after = 'some string'))

@knaeckeKami
Collaborator

knaeckeKami commented Jan 31, 2024

Do you also listen to the result stream / convert it to a future using .first? Otherwise it may not get executed.
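In other words (a sketch; `refetchReq` is a placeholder request):

```dart
// client.request() returns a lazy stream: nothing is sent to the network
// until someone listens. For a fire-and-forget refetch, await the first
// response instead of discarding the stream.
await client.request(refetchReq).first;
```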

@LiLatee
Author

LiLatee commented Jan 31, 2024

Ahhh, you are right, there was even an executeOnListen parameter, as I remember.
Now it makes requests for fetchMore and refetch 🎉, but I still have one issue.
I don't want to cache all the data. I would like to always cache only the first page, so e.g. after a refetch I will see only the first page and the rest of them will be removed. I think that was the reason why I used the updateResult parameter of Request instead of the merge method of Cache 🤔
The reason is that e.g. after creating a new post we refetch in order to refresh the whole timeline. And because of the current merge method, that new post will be placed at the end of the list; that's correct behaviour for the fetchMore action, but in the case of creating a new post it should appear at the beginning of the list, because it's a newly created post.

@knaeckeKami
Collaborator

knaeckeKami commented Jan 31, 2024

I don't want to cache all the data. I would like to cache always only the first page so e.g. after a refetch I will see only the first page and rest of them will be removed. I think that was the reason why I used updateResult parameter of Request instead of merge method of Cache 🤔

You mean something like this?

if(options.variables["after"] == null) {
    // was refetch, clear existing from cache
    return incoming;
}
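Expanded into a full merge callback, that idea might look like this (a sketch assuming a cursor-style `after` variable, as used in this thread):

```dart
// A refetch (no `after` cursor) replaces the cached list, while a
// fetchMore (with `after`) appends the new page to it.
Object? mergePosts(Object? existing, Object? incoming, dynamic options) {
  if (options.variables['after'] == null) {
    // Pull to refresh: keep only the fresh first page.
    return incoming;
  }
  return [...?(existing as List?), ...?(incoming as List?)];
}
```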

The reason is that e.g. after creating a new post we refetch in order to refresh the whole timeline. And because of the current merge method, that new post will be placed at the end of the list; that's correct behaviour for the fetchMore action, but in the case of creating a new post it should appear at the beginning of the list, because it's a newly created post.

You could also append the post automatically to the cache after the create-post mutation, using an updateCacheHandler on your mutation.

See https://ferrygraphql.com/docs/mutations/#updating-the-cache; this example is very similar to what you want.

Or do you have a field in the object indicating the sort order?

you could use this to do the sorting in the cache (e.g. using the read callback in the field policy)
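For the updateCacheHandler route, a hedged sketch along the lines of the mutations docs linked above (the request and field names here are placeholders for your generated types, and mapping the mutation result onto the query's post type may need extra work):

```dart
// Runs after the create-post mutation completes; prepends the new post
// to the cached timeline so it appears at the top of the list.
void createPostHandler(
  CacheProxy proxy,
  OperationResponse<GcreatePostData, GcreatePostVars> response,
) {
  final newPost = response.data?.createPost;
  if (newPost == null) return;
  final postsReq =
      GuserTimelinePostsReq((b) => b..vars.userId = newPost.userId);
  final cached = proxy.readQuery(postsReq);
  if (cached == null) return;
  proxy.writeQuery(
    postsReq,
    // Assumes `newPost` is compatible with the query's post type.
    cached.rebuild((b) => b..posts.insert(0, newPost)),
  );
}
```

The handler would then be registered on the client via `updateCacheHandlers` and selected on the mutation request with `updateCacheHandlerKey`, as shown in those docs.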

@LiLatee
Author

LiLatee commented Jan 31, 2024

Yeah, I was thinking about it, but I prefer to avoid sorting, because I know it can cause some bugs, and I have many different lists with different types of items, so I would have to create a separate merge function for every query 😞

you mean something like?

if(options.vars["after"] == null) {
    // was refetch, clear existing from cache
    return incoming;
}

Yeah, exactly... I am dumb 🤦‍♂️ (or exhausted). Thank you!

So the last problem I see is that I had to remove the requestId field from the request, because adding it causes changes in the cache not to propagate. And unfortunately, we use requestId to identify the request which we want to refetch after some action, e.g. after creating a new post we call RefetchStore.refetchQueries(['requestId']).
Maybe it won't be a big issue and I will find a workaround, but do you have an idea why I can't set requestId in that case? 🤔

@knaeckeKami
Collaborator

So the last problem I see is that I had to remove the requestId field from the request, because adding it causes changes in the cache not to propagate. And unfortunately, we use requestId to identify the request which we want to refetch after some action, e.g. after creating a new post we call RefetchStore.refetchQueries(['requestId']).
Maybe it won't be a big issue and I will find a workaround, but do you have an idea why I can't set requestId in that case? 🤔

Maybe you can show a reproduction sample. I don't really understand what exactly you are doing, where you set request ids and how you execute the requests.

@LiLatee
Author

LiLatee commented Jan 31, 2024

I set requestId on originalRequest, then I modify the original request for refetch and fetchMore:

GuserTimelinePostsReq(
  (b) => b
    ..requestId = '${GuserTimelinePostsReq}_$userId' // adding this means changes from the cache no longer propagate to the stream for that request AFTER a refetch or fetchMore action
    ..vars.limit = standardFetchCount
    ..vars.userId = userId,
)

// refetch
Client.request(originalRequest.rebuild((b) => b
  ..fetchPolicy = FetchPolicy.NetworkOnly))
// fetchMore
Client.request(originalRequest.rebuild((b) => b
  ..fetchPolicy = FetchPolicy.NetworkOnly
  ..vars.after = 'some string'))

But sure, I will try to prepare a failing test or a reproduction sample, though probably not sooner than next week 😞

@knaeckeKami
Collaborator

knaeckeKami commented Jan 31, 2024

Ah yes, I think this is a real issue.

The RequestControllerTypedLink switchMaps to the new request stream, which does not watch the cache:

https://github.com/gql-dart/ferry/blob/master/packages/ferry/lib/src/request_controller_typed_link.dart#L63

@LiLatee
Author

LiLatee commented Jan 31, 2024

Okay, got it. So for now. MANY THANKS @knaeckeKami ❤️

@LiLatee
Author

LiLatee commented Feb 2, 2024

Hi @knaeckeKami, I have another question 😅
What if I have a query like this?
[screenshot of the query]

It will be stored under the Post key, right?

      'post': FieldPolicy(
        keyArgs: ['postId', 'sortBy'],
        merge: _postCommentsMergeFunction,
      ),

and then the merge function looks like this:
[screenshot of the merge function]

And as I understand it, it doesn't differentiate Post objects, so for every Post I always get the same existing object. Am I right? 🤔

@LiLatee
Author

LiLatee commented Feb 2, 2024

Ohh, I probably see the problem. I should use id instead of postId. My mistake.

@knaeckeKami
Collaborator

I don't fully understand the question.

You can inspect the cache by getting all the keys of the cache's Store and then looking up the keys to understand how the cache stores the graph data.

The after key arg won't work, since the post field of the Query does not have an after arg. See the docs on how to use nested keys.

@LiLatee
Author

LiLatee commented Feb 11, 2024

I am closing the issue, as I solved it by listening for changes when using FetchPolicy.NetworkOnly.
More about it in PR #579.

@LiLatee LiLatee closed this as completed Feb 11, 2024