OData Client reportedly causes excessive memory usage after .NET 6 upgrade #2503
Additional information from the client:
Before closing this task, let me share some updates. After conducting additional research and experiments, I was able to explain this gap by the fact that .NET 6 does not immediately release memory back to the OS after it has been "freed". The GC can retain memory if it thinks it might need to allocate it again; this is a performance optimization. However, if it detects low memory on the system, it will release the memory. Since the profiling was done on a machine that still had nearly half of its memory free, the .NET GC did not release the memory. Under memory pressure, however, the "freed" memory would be released. Here are potentially related issues:
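As a quick way to observe this retention behaviour, here is a minimal sketch (not from the original comment) that reads the GC's own statistics via GC.GetGCMemoryInfo(); the TotalCommittedBytes property assumes .NET 5 or later, and the allocation sizes are arbitrary:

```csharp
using System;

class GcMemoryProbe
{
    static void Main()
    {
        // Allocate and drop a large object graph so the GC has something to "free".
        var garbage = new byte[64][];
        for (int i = 0; i < garbage.Length; i++)
            garbage[i] = new byte[8 * 1024 * 1024]; // ~512 MB total on the large object heap
        garbage = null;

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // Committed memory typically stays well above the live heap size, because the
        // GC keeps "freed" segments around for future allocations unless the machine
        // is under memory pressure.
        GCMemoryInfo info = GC.GetGCMemoryInfo();
        Console.WriteLine($"Heap size:                  {info.HeapSizeBytes / (1024 * 1024)} MB");
        Console.WriteLine($"Committed bytes:            {info.TotalCommittedBytes / (1024 * 1024)} MB");
        Console.WriteLine($"High memory load threshold: {info.HighMemoryLoadThresholdBytes / (1024 * 1024)} MB");
    }
}
```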
For reference's sake, here's a summary of the experiments I ran to see how the .NET 6 and .NET Core 3.1 garbage collectors behave in different memory-constrained scenarios.

I ran a series of tests simulating memory constraints. The purpose of these tests was to understand how a test service running OData Client would behave when it's about to hit the memory ceiling, and to compare .NET 6.0 with .NET Core 3.1. So far our best explanation has been that the GC was not under pressure to do a full compacting Gen2 collection or to release memory back to the OS because there was sufficient free memory available. These tests were meant to confirm that the GC would be more effective at freeing memory if we were closer to the memory limit.

I ran the same tests in Docker containers with different memory constraints. In my tests, 6 GB was the threshold below which application performance and reliability started to degrade, i.e., some requests would fail due to out-of-memory exceptions. This was the case on both .NET 6.0 and .NET Core 3.1. From 6 GB and above, the application consistently ran successfully. I also noticed slightly better performance in .NET 6 (in terms of total execution time).

The GC metrics adjusted to the amount of available memory. For example, when running the tests in containers without memory limits, peak heap size was 5.6 GB and committed memory went as high as 7.8 GB. However, when I set the memory limit to 6 GB, the peak heap size was 4.1 GB and committed memory peaked at 4.4 GB, with no noticeable drop in performance.

I also noticed that the application was more stable when using the HttpWebRequest transport mode than HttpClient when memory was more constrained. In such conditions (< 4 GB max memory), the container seemed more likely to crash or fail all requests when using the HttpClient transport mode.

Based on these results, my conclusion is that high memory usage is unlikely to be an issue of great concern in production if your load testing environment is representative of the kind of load you would expect during peak season. I don't think it would be less capable of handling the load than when using .NET Core 3.1. I would suggest testing with an even higher load just to understand where the breaking point is and how much the system could comfortably handle. I would also suggest using the HttpWebRequest transport mode for now, unless you test the nightly build and it shows that HttpClient mode is better (see the sketch after this summary).

Each test consisted of sending 60 requests using 30 virtual users (via the k6 load testing tool) to the test server. The test server ran in a Docker container with the adjusted memory limit and .NET version, and used OData Client to fetch data from a sample OData service. The OData service exposed a sample Customers entity set endpoint that returned 20,000 customers, which amounted to about 1.7 MB of JSON. This was meant to simulate some of the large requests that we found in the dumps EY shared with us.
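For completeness, here is a minimal sketch of how a client might opt into the HttpWebRequest transport mode suggested above. It assumes the HttpRequestTransportMode property and enum exposed by recent Microsoft.OData.Client 7.x releases, and the service URL and Customers query are hypothetical:

```csharp
using System;
using Microsoft.OData.Client;

class TransportModeExample
{
    static void Main()
    {
        // Hypothetical OData service root, used only for illustration.
        var context = new DataServiceContext(new Uri("https://example.com/odata/"));

        // Prefer the HttpWebRequest transport for now, as suggested above.
        // HttpRequestTransportMode is assumed to exist in the Microsoft.OData.Client
        // 7.x version in use; verify against your installed package before relying on it.
        context.HttpRequestTransportMode = HttpRequestTransportMode.HttpWebRequest;

        // The load-test request would then look something like:
        // var customers = context.CreateQuery<Customer>("Customers").ToList();
    }
}
```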
A customer reported that after upgrading from .NET Core 3.1 to .NET 6 and updating OData Client to 7.10.0, their performance and load testing revealed excessive memory usage, which they attributed to the OData Client. Here are parts of the report:
Performance testing has revealed a memory issue: high private bytes consumed in the App Service during load testing. You can see that before the migration to .NET 6 and the update to OData, memory usage was stable; afterwards, the same application uses massive amounts of memory. Analysis of memory dumps taken during the testing points to OData as the culprit, as identified in the analysis below.
-Overall memory consumption is high:
97.965 GB of VirtualAlloc, of which the GC heap accounts for only 6 GB
GC Committed Heap Size: 0x16b85f000 (6,098,907,136) bytes.
-Machine-wide CPU is very high:
-To find out why long-running threads are busy with garbage collection, we will need to look at the managed heap to find out which objects are accumulating fastest and causing the GC to run frequently. Because each collection is an expensive, CPU-intensive operation, frequent GC runs drive up CPU consumption.
-Here is the list of top offenders:
As seen clearly, Microsoft.OData.* objects related to Entity Framework are flooding the managed heap.
Here is a partial output of the gcroot command:
-We also see that the finalizer reachable queue is not being cleared. You are explicitly coding a destructor/Finalize(); instead, developers should use the Dispose pattern by implementing the IDisposable interface. If there are no unmanaged resources (that you opened explicitly in your code) to clean up, then do NOT implement a destructor/Finalize(). If there are unmanaged resources to clean up, then use IDisposable (the Dispose pattern); it lets the framework clean up your managed objects while executing your custom code for cleaning up/closing any unmanaged resources. A sketch of the pattern follows below.
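To illustrate that recommendation, here is a minimal sketch of the standard Dispose pattern; the class name and the unmanaged buffer are hypothetical and not taken from the dumps. The finalizer exists only as a safety net for the unmanaged resource, and Dispose() suppresses it so disposed instances never linger on the finalizer queue:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical wrapper around an unmanaged buffer, used only to illustrate
// the Dispose pattern recommended in the analysis above.
public sealed class NativeBufferWrapper : IDisposable
{
    private IntPtr _buffer;   // unmanaged resource
    private bool _disposed;

    public NativeBufferWrapper(int size)
    {
        _buffer = Marshal.AllocHGlobal(size);
    }

    public void Dispose()
    {
        Dispose(disposing: true);
        // Keep disposed instances off the finalizer (f-reachable) queue.
        GC.SuppressFinalize(this);
    }

    private void Dispose(bool disposing)
    {
        if (_disposed) return;

        // Managed state would be released here when disposing == true.

        if (_buffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_buffer);
            _buffer = IntPtr.Zero;
        }
        _disposed = true;
    }

    // Finalizer only as a safety net for the unmanaged buffer; if a class holds
    // no unmanaged state, omit the finalizer entirely.
    ~NativeBufferWrapper()
    {
        Dispose(disposing: false);
    }
}
```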
Assemblies affected
Which assemblies and versions are known to be affected e.g. OData .Net lib 7.x
Reproduce steps
The simplest set of steps to reproduce the issue. If possible, reference a commit that demonstrates the issue.
Expected result
What would happen if there wasn't a bug.
Actual result
What is actually happening.
Additional detail
Optional, details of the root cause if known. Delete this section if you have no additional details to add.