[Bug] Perf degradation of AcquireTokenForClient due to default partitioned cache #2826
Comments
Let's treat this as a performance regression: before MSAL 4.30, AcquireTokenForClient did not use any JSON operations, but afterwards it does, leading to an increased number of allocations.
Have a look at ITokenCacheAccessor - maybe we can have a smarter implementation there for the app token cache only.
+1. I had a similar discussion with @bgavrilMS and @jennyf19 yesterday about the number of tokens in a single cache line in CCA when targeting multiple downstream dependencies. One of the quick, non-breaking options discussed for cache implementors is to provide more context in the TokenCacheNotificationArgs (especially the scopes and whatever else is used for in-cache row filtering) so that the suggested cache key can be extended, creating more granular partitions - for example, a second property such as Suggested Fully Token Partitioned Key. This still has the downside of going through JSON serialization :( just for a single token.
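As a rough illustration of that option: if the extra context were exposed, a cache implementor could derive a finer-grained key from the existing SuggestedCacheKey. In the sketch below the scopes are simply passed in as a parameter, since no such property is confirmed to exist in the discussion above.

```csharp
// Sketch of the "more granular partition key" idea. The scopes would come from
// the extra context proposed for TokenCacheNotificationArgs; here they are
// passed in as a parameter because no such property exists yet.
using System.Collections.Generic;
using System.Linq;
using Microsoft.Identity.Client;

public static class CacheKeyHelper
{
    // Extends MSAL's coarse SuggestedCacheKey with a normalized scope string,
    // so each downstream resource lands in its own cache partition.
    public static string GetFullyPartitionedKey(
        TokenCacheNotificationArgs args, IEnumerable<string> requestScopes)
    {
        string scopePart = string.Join(" ",
            requestScopes.Select(s => s.ToLowerInvariant()).OrderBy(s => s));

        return $"{args.SuggestedCacheKey}-{scopePart}";
    }
}
```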
Ideally there would be a way to skip all serialization for an in-memory cache.
@pmaytak - I think you should focus on idea 1, i.e. a token cache accessor that is key-based, i.e. pre-filtered by tenant id. If this works, we can think about generalizing the idea; maybe the app developer can tell us "partition by tenant id AND scope". For idea 2 - we cannot change the SuggestedCacheKey, as it would break existing token caches. Instead, we can expose more fields in TokenCacheNotificationArgs, as @rymeskar suggests. For perf testing, you can use the WebAPI project to simulate high load, or the benchmarking approach. I think the WebAPI looks at things more holistically and is easier to set up.
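For the benchmarking route, a minimal BenchmarkDotNet harness along these lines could be used. The client id, secret, tenant, and scope are placeholder values; the warm-up call fills the app token cache so that the measured call is a cache hit, which is where the JSON overhead shows up.

```csharp
// Minimal sketch of the benchmarking approach; credentials and scope are
// placeholders. The measured call is a cache hit, which is the path where
// the serialization overhead of the default partitioned cache appears.
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.Identity.Client;

public class AcquireTokenForClientBenchmark
{
    private readonly string[] _scopes = { "https://graph.microsoft.com/.default" };
    private IConfidentialClientApplication _app;

    [GlobalSetup]
    public void Setup()
    {
        _app = ConfidentialClientApplicationBuilder
            .Create("<client-id>")
            .WithClientSecret("<client-secret>")
            .WithAuthority("https://login.microsoftonline.com/<tenant-id>")
            .Build();

        // Warm the app token cache so the benchmark measures cache hits only.
        _app.AcquireTokenForClient(_scopes).ExecuteAsync().GetAwaiter().GetResult();
    }

    [Benchmark]
    public Task<AuthenticationResult> AcquireTokenForClient_CacheHit() =>
        _app.AcquireTokenForClient(_scopes).ExecuteAsync();
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<AcquireTokenForClientBenchmark>();
}
```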
I thought we were doing perf testing consistently and on PRs impacting the cache, so why did we miss this perf issue? Are we missing tests? Wasn't the cache serialized/deserialized before the 4.30 release? Do we have current perf numbers compared with previous versions, to see how much of a perf issue we are talking about?
Included in MSAL.NET 4.36.0 release. |
It's great to see the rate at which you have managed to improve the caching and resiliency experience over the past weeks and months. On that front, @pmaytak and @bgavrilMS, I was wondering whether you are planning to add support for more granular partitioning for the non-in-memory-cache scenarios as well?
@rymeskar this is already realized thanks to the SuggestedCacheKey property of the TokenSerializationArgs. |
My understanding was that SuggestedCacheKey is coarsely partitioned; i.e. it does not partition by scope/resourceId. |
Being partitioned by resource ID would not allow for using the refresh token to acquire a token for a different resource. |
This makes sense.
Is your feature request related to a problem? Please describe.
Starting in MSAL 4.30, there's a default in-memory partitioned cache for confidential client applications. For each cache operation, the data is serialized/deserialized, which causes a performance hit. It seems to be a bigger issue for apps that have single-tenant partitions with many resources per tenant.
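To make the cost concrete, here is a minimal sketch of the serialization-callback pattern that custom caches use, keyed on SuggestedCacheKey; every cache access round-trips a JSON blob, even on a cache hit, and the issue above describes equivalent per-operation work in the default partitioned cache.

```csharp
// Illustrative sketch only: with the serialization callbacks, every cache
// access deserializes a JSON blob before the lookup and re-serializes it
// after a write. The in-memory dictionary here stands in for any store.
using System.Collections.Concurrent;
using Microsoft.Identity.Client;

public static class InMemoryPartitionedCache
{
    // One serialized blob per partition (one entry per SuggestedCacheKey).
    private static readonly ConcurrentDictionary<string, byte[]> s_store =
        new ConcurrentDictionary<string, byte[]>();

    public static void Bind(IConfidentialClientApplication app)
    {
        app.AppTokenCache.SetBeforeAccess(args =>
        {
            if (s_store.TryGetValue(args.SuggestedCacheKey, out byte[] blob))
            {
                // JSON deserialization on every token request, even cache hits.
                args.TokenCache.DeserializeMsalV3(blob);
            }
        });

        app.AppTokenCache.SetAfterAccess(args =>
        {
            if (args.HasStateChanged)
            {
                // JSON serialization whenever a token is written.
                s_store[args.SuggestedCacheKey] = args.TokenCache.SerializeMsalV3();
            }
        });
    }
}
```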
Possible solutions
- A PartitionedInMemoryTokenCacheAccessor that implements ITokenCacheAccessor and is similar to InMemoryTokenCacheAccessor, except that the token dictionaries are partitioned by tenant id (see the rough sketch below).
- Add requestParams.Scope to the app cache key string.
- Call _accessor.GetAccessToken and pass a created MsalAccessTokenCacheKey. Some [additional filtering] might still need to be done.

Also add performance tests to compare before and after the change. (Testing scenarios should include single- and multi-tenant cases with many resources.)
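A rough sketch of the first idea follows. ITokenCacheAccessor and MSAL's cache item types are internal to the library, so the shape and item type below are simplified stand-ins rather than the real signatures; the point is that lookups go straight to a per-tenant dictionary with no JSON involved.

```csharp
// Rough sketch of idea 1: an in-memory accessor whose dictionaries are
// partitioned by tenant id, so lookups never touch JSON. ITokenCacheAccessor
// and MSAL's cache item types are internal; the types below are simplified
// stand-ins, not the library's real signatures.
using System.Collections.Concurrent;

public sealed record AccessTokenCacheItem(
    string TenantId, string ClientId, string ScopeKey, string Secret);

public sealed class PartitionedInMemoryTokenCacheAccessor
{
    // Outer key: tenant id (the partition). Inner key: client id + scopes.
    private readonly ConcurrentDictionary<string, ConcurrentDictionary<string, AccessTokenCacheItem>> _accessTokens =
        new ConcurrentDictionary<string, ConcurrentDictionary<string, AccessTokenCacheItem>>();

    public void SaveAccessToken(AccessTokenCacheItem item)
    {
        var partition = _accessTokens.GetOrAdd(
            item.TenantId, _ => new ConcurrentDictionary<string, AccessTokenCacheItem>());
        partition[$"{item.ClientId}-{item.ScopeKey}"] = item;
    }

    public AccessTokenCacheItem GetAccessToken(string tenantId, string clientId, string scopeKey)
    {
        // Only the requested tenant's partition is searched; no serialization,
        // no scan over other tenants' tokens.
        if (_accessTokens.TryGetValue(tenantId, out var partition) &&
            partition.TryGetValue($"{clientId}-{scopeKey}", out var item))
        {
            return item;
        }

        return null;
    }
}
```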