.NET 6 container throws OutOfMemoryException #58974
Tagging subscribers to this area: @dotnet/gc

Issue Details

Description

When running inside a docker container that has a hard memory limit set, GCMemoryInfo reports a HighMemoryLoadThresholdBytes value that is higher than the memory actually available to the GC.

Reproduction Steps

Run a small app that prints the GCMemoryInfo values in the mcr.microsoft.com/dotnet/runtime:6.0-bullseye-slim image:

docker build -t highmemtest -f .\Dockerfile .
docker run --name highmemtest -it --memory=3g --rm highmemtest

With a limit of 3g I observe the following values:

TotalAvailableMemoryBytes: 2415919104
HighMemoryLoadThresholdBytes: 2899102924

Expected behavior

I would expect the high memory load threshold to be less than the available memory, so it has a chance to kick in and run the GC more aggressively prior to going OOM. To that end, I would expect to see values similar to running it outside a container on Windows, or inside a container without a memory limit; in both of those cases the threshold I observe is below the available memory.

Configuration

.NET Version: net6.0
OS: Observed on both Windows 10 and Amazon Linux 2 when running the mcr.microsoft.com/dotnet/runtime:6.0-bullseye-slim docker image
Arch: x64
Specific to: cgroup memory limit when running a docker container

Regression?

I did not confirm it, but I suspect .NET 5 exhibits the same behavior.

Other information

https://docs.microsoft.com/en-us/dotnet/core/run-time-config/garbage-collector#heap-limit

The section for heap limit states that the default is the greater of 20 MB or 75% of the memory limit on the container. When running the container with --memory=3g, /sys/fs/cgroup/memory/memory.limit_in_bytes is 3221225472. 75% of that is 2415919104, which exactly matches the value of TotalAvailableMemoryBytes on GCMemoryInfo.

https://docs.microsoft.com/en-us/dotnet/core/run-time-config/garbage-collector#high-memory-percent

The section for High Memory Percent states that the default is 90% of total memory. 90% of the aforementioned 3221225472 is 2899102924, which matches the value of HighMemoryLoadThresholdBytes on GCMemoryInfo.

I think the disconnect here is that HighMemoryLoadThresholdBytes does not take into consideration that the heap limit is 75% when running inside a container with a memory limit, and instead assumes a heap limit of 100%. It should either be updated to have the same container-awareness logic that TotalAvailableMemoryBytes has (e.g. "When running inside a container that has a specified memory limit set, the default value is 68%"), or just default to 90% of the calculated TotalAvailableMemoryBytes value.
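For reference, the values quoted in this report are exposed through GC.GetGCMemoryInfo(). A minimal program along these lines (not the actual repro app, which is not included in this thread) prints the numbers being discussed:

```csharp
using System;

class PrintGcInfo
{
    static void Main()
    {
        // The three values discussed in this issue.
        GCMemoryInfo info = GC.GetGCMemoryInfo();
        Console.WriteLine($"TotalAvailableMemoryBytes:    {info.TotalAvailableMemoryBytes}");
        Console.WriteLine($"HighMemoryLoadThresholdBytes: {info.HighMemoryLoadThresholdBytes}");
        Console.WriteLine($"MemoryLoadBytes:              {info.MemoryLoadBytes}");
    }
}
```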
So I think the bug is in /src/coreclr/gc/gc.cpp, which is unfortunately too large to display in GitHub. Now, I'm not a C++ dev, but I think I figured out what's going on.

The heap hard limit is set to 75% of physical memory (with a 20 MB floor) when a memory limit is detected:

```cpp
if (gc_heap::is_restricted_physical_mem)
{
    uint64_t physical_mem_for_gc = gc_heap::total_physical_mem * (uint64_t)75 / (uint64_t)100;
    gc_heap::heap_hard_limit = (size_t)max ((20 * 1024 * 1024), physical_mem_for_gc);
}
```

Lines 42978-42995 contain the logic that sets the high memory percentage to 90%:

```cpp
else
{
    // We should only use this if we are in the "many process" mode which really is only applicable
    // to very powerful machines - before that's implemented, temporarily I am only enabling this for 80GB+ memory.
    // For now I am using an estimate to calculate these numbers but this should really be obtained
    // programmatically going forward.
    // I am assuming 47 processes using WKS GC and 3 using SVR GC.
    // I am assuming 3 in part due to the "very high memory load" is 97%.
    int available_mem_th = 10;
    if (gc_heap::total_physical_mem >= ((uint64_t)80 * 1024 * 1024 * 1024))
    {
        int adjusted_available_mem_th = 3 + (int)((float)47 / (float)(GCToOSInterface::GetTotalProcessorCount()));
        available_mem_th = min (available_mem_th, adjusted_available_mem_th);
    }
    gc_heap::high_memory_load_th = 100 - available_mem_th;
    gc_heap::v_high_memory_load_th = 97;
}
```

It's a bit confusing, but the comment actually only applies to the inner if block. The important part is line 42986: available_mem_th defaults to 10, and gc_heap::high_memory_load_th = 100 - available_mem_th. So that's where the 90% comes from.

Fix

A possible fix would be: if in a restricted physical memory environment, default the high memory threshold to 68% instead of 90% (to complement the 75% max heap size). This would be the code change for that:

```cpp
else
{
    int available_mem_th = 10;
    // If the hard limit is specified, default to 68% instead of 90% of physical memory
    if (gc_heap::is_restricted_physical_mem)
    {
        available_mem_th = 32;
    }
    // We should only use this if we are in the "many process" mode which really is only applicable
    // to very powerful machines - before that's implemented, temporarily I am only enabling this for 80GB+ memory.
    // For now I am using an estimate to calculate these numbers but this should really be obtained
    // programmatically going forward.
    // I am assuming 47 processes using WKS GC and 3 using SVR GC.
    // I am assuming 3 in part due to the "very high memory load" is 97%.
    if (gc_heap::total_physical_mem >= ((uint64_t)80 * 1024 * 1024 * 1024))
    {
        int adjusted_available_mem_th = 3 + (int)((float)47 / (float)(GCToOSInterface::GetTotalProcessorCount()));
        available_mem_th = min (available_mem_th, adjusted_available_mem_th);
    }
    gc_heap::high_memory_load_th = 100 - available_mem_th;
    gc_heap::v_high_memory_load_th = 97;
}
```

That would resolve the issue; however, I think it is indicative of a bigger problem. I wonder if

```cpp
void gc_heap::get_memory_info (uint32_t* memory_load,
                               uint64_t* available_physical,
                               uint64_t* available_page_file)
{
    GCToOSInterface::GetMemoryStatus(is_restricted_physical_mem ? total_physical_mem : 0, memory_load, available_physical, available_page_file);
}
```

could be updated to take the heap hard limit into account.
this is by design. the default limit is set to be 75% assuming that there's native memory usage in the container (container in this context means an environment with a memory limit). we do not assume that all available memory is available for the GC to use. 75% is a number we picked - it's meant to be sufficient for general scenarios. you can change this default by specifying hardlimit. outside a container, we did not have a default percentage for limit to begin with. it's true the same principle applies outside container env as well. but we only added this default limit concept when we added container support. adding it outside container env might regress existing apps and since you could use the same hardlimit config there if you want to change the default, I didn't change the default there.
.NET will GC to stay below this limit in a container. It won't wait till physical memory reaches the high memory load threshold.

For #50414 there seems to be an issue as we're approaching the limit.

Are you seeing an OOM in your containers?
Hi @Maoni0 and @tmds - I realize that the 75% limit is by design. That's not the problem; the problem is that the high memory load threshold is derived from 100% of the container's memory.

From reading https://raw.githubusercontent.com/dotnet/runtime/main/src/coreclr/gc/gc.cpp I see a few checks for the high memory load threshold. It seems most of the code to detect memory pressure is not based on the heap hard limit. Am I not interpreting the c++ code correctly?

@tmds yes, I'm seeing some OOMs that I can't quite explain and have been investigating. In .NET Core 3.1 this job had a 5% chance to crash. Of those that would crash, I'd estimate that 75% were shut down by the memory killer for exceeding their memory hard limit and 25% crashed with an OutOfMemoryException. After upgrading to .NET 6 preview, the same job now has a 95% chance to crash, and 100% of those crash from throwing an OutOfMemoryException.
it's always 90% because we assume there's other memory usage. so we'd only want to start being aggressive if the total memory load (including both GC usage and other memory usage) is 90%, not when it's reached 90% of the hard limit. again, this is the default behavior.
another thing I should mention, if it's not obvious, is that of course we would do a full compacting GC if we cannot commit memory based on the hardlimit.
My understanding (which may be wrong) is that the GC does accounting on how much of the managed heap is used, and uses that to trigger GCs.
The OOM kill is performed by the OS based on total memory used.
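To make that distinction concrete, here is a small sketch contrasting the two views; it is only an illustration of managed-heap accounting versus process-level accounting, not code from this thread:

```csharp
using System;
using System.Diagnostics;

class MemoryAccounting
{
    static void Main()
    {
        // What the GC accounts for when deciding to collect: the managed heap.
        GCMemoryInfo gcInfo = GC.GetGCMemoryInfo();
        Console.WriteLine($"Managed heap size (GC view):   {gcInfo.HeapSizeBytes}");

        // What the cgroup OOM killer reacts to: the whole process, including native memory.
        Console.WriteLine($"Process working set (OS view): {Process.GetCurrentProcess().WorkingSet64}");
    }
}
```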
That is a major regression. Can you update the issue title to something like ".NET 6 container throws OutOfMemoryException"? #50414 is also about an OutOfMemoryException. How does it behave with .NET 5? cc @janvorli
Hi, yes this is a real application. It's a console app running in production that fetches gzipped CSV files and converts them to the parquet format. The container instances have a tight memory limit so I can run multiple container instances per host - when I first created the app it had to process about half an exabyte worth of data, so scaling horizontally in an efficient manner was vital. The way I created the application has some similarities with the artificial workload in #50414. Details on the app are available below.

Findings Summary

.NET 6 = Container ran with …
Detailed Results

Every 50_000 rows processed, memory metrics are emitted. I excluded data from …

Metrics were recorded for the following configurations:
- .NET 6 Server GC
- .NET 6 Workstation GC
- .NET 5 Server GC
- .NET 5 Workstation GC
- .NET Core 3.1 Server GC
- .NET Core 3.1 Workstation GC
.NET 6 Server GC with extra GC Settings

App Details

I pre-define buffers for each column and create them on application start, because the default behavior of creating a new array that's 50% larger and copying the data over was both slow and wasteful. The buffers themselves use … The app reads the remote data into a stream, unzips it and pushes it into a … When the app reaches the desired number of rows, a buffer is almost full, or there is no more input data, I fetch the … That method uses unsafe for pinning and grabbing the pointer, but all the observed crashes happen long before it tries to write any files.
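A rough, hypothetical sketch of the allocation pattern described above; all names and sizes are invented for illustration, since the real app is not shown in the thread:

```csharp
using System.Buffers;

class ColumnBuffers
{
    // Column buffers are created once at startup instead of being grown
    // (allocate a 50% larger array and copy) as rows are appended.
    private readonly long[] _timestamps;
    private readonly double[] _values;

    public ColumnBuffers(int capacity)
    {
        _timestamps = new long[capacity];
        _values = new double[capacity];
    }

    // Scratch space for a downloaded/decompressed chunk is rented from the
    // shared pool and returned once the chunk has been parsed into the columns.
    public void ProcessChunk(int chunkSize)
    {
        byte[] scratch = ArrayPool<byte>.Shared.Rent(chunkSize);
        try
        {
            // ... decompress into scratch and parse rows into _timestamps/_values ...
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(scratch);
        }
    }
}
```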
Thanks for characterizing your workload and providing metrics.
@Maoni0 @janvorli the referenced issue has a reproducer that leads to an OutOfMemoryException.
@Bio2hazard can you also try using the first 5.0 image (…)?
I think reducing … may help. Though I don't think those should be required.
Regarding the app throwing an OutOfMemoryException: I assume your app uses the same allocation APIs independent of it targeting .NET 3.1, 5.0 or 6.0? I wonder if something changed in .NET that would cause your application to require a larger heap.
I see this issue has changed to an OOM issue. yeah, if it doesn't get OOM with Workstation GC but does with Server GC, or if it doesn't get OOM with a larger limit but does with a smaller limit, that's clearly an issue that should be investigated. I'll see what @janvorli finds out.
.NET 5 Server GC (…)
Version/GC Mode | 2975MB | 3000MB | 3100MB | 3175MB |
---|---|---|---|---|
.NET 3.1 Workstation | Fail | Pass | Pass | Pass |
.NET 5.0 Workstation | Fail | Fail | Pass | Pass |
.NET 3.1 Server | Fail | Fail | Pass | Pass |
.NET 5.0 Server | Fail | Fail | Fail | Pass |
So it seems .NET 5.0 needs an extra 100MB container limit (= 75MB heap limit) to be happy, compared to .NET 3.1. It isn't clear to me yet as to why - I memory-profiled .NET 5 Workstation GC vs. .NET 3.1 Workstation GC and the allocations appear largely identical.
I just updated my dockerfile with … and wanted to report back that for my app nothing has changed.
Hmm, it is possible that this issue and the one in #50414 are different, then.
@janvorli, it's also possible that you were hitting that issue (from what you showed me it definitely looked like you were hitting it) but @Bio2hazard was not.
Ah yes. Same as with increasing … Using …
From the table it's clear your limits are on the edge. Reducing memory by 100 MB makes it fail on .NET Core 3.1. Adding 75 MB makes it pass on .NET 5+. .NET 5+ needs a little more heap to run your application. It's a regression, though not a major one.
@Bio2hazard would you mind trying the experiment with modifying the limits with 6.0 RC1? I am interested in seeing if the behavior persists or if the increment needed to make it pass is smaller.
Sure thing @janvorli. I used the … image.

First I tested .NET 6.0 RC1 Workstation GC: those results are the same as .NET 5.

Next I tested .NET 6.0 RC1 Server GC: …

Next I re-tested .NET 5.0 Server GC, since I previously had only run it once with 3175MB, where it had passed. So it seems that when I had tested .NET 5.0 Server GC before, it just got lucky.

I then wanted to determine the lowest amount of memory (in 5MB increments) where .NET 5.0 Server GC can pass 5 runs in a row. This turned out to be 3190MB. I repeated that test for .NET 6.0 RC1, and it ended up being 3190MB as well. In other words, I was unable to observe any differences between .NET 5 and .NET 6 RC1.
As a side note, I looked into what is causing the ~120MB worth of Gen 0 garbage per GC cycle, and it could be related to #27748. Not quite sure, but all the allocations are coming from renting byte arrays from the ArrayPool. Or perhaps it chooses to release them instead of re-using them due to memory pressure? Not sure.
I replaced the default ArrayPool. This significantly dropped allocations and allows Server GC to run successfully on less memory. This, plus the fact that it also runs successfully on less memory with Workstation GC, points towards Server GC choosing to go OOM instead of running a GC cycle. I think there are 2 problems at play here: the unexpected allocations from the ArrayPool, and Server GC going OOM instead of collecting.
With this information I might be able to set something up to reproduce the issue that I can share.
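The thread does not show what the default pool was replaced with. One option, shown here purely as a hedged sketch with made-up limits, is a dedicated pool created via ArrayPool&lt;T&gt;.Create, which takes an explicit maximum array length and per-bucket count instead of relying on the shared pool's per-thread/per-core behavior:

```csharp
using System.Buffers;

static class Pools
{
    // A process-wide pool with explicitly chosen limits, used instead of
    // ArrayPool<byte>.Shared. The numbers below are illustrative only.
    public static readonly ArrayPool<byte> Scratch =
        ArrayPool<byte>.Create(4 * 1024 * 1024 /* maxArrayLength */, 64 /* maxArraysPerBucket */);
}

class Consumer
{
    public void Use()
    {
        byte[] buffer = Pools.Scratch.Rent(1 << 20);
        try
        {
            // ... fill and consume the buffer ...
        }
        finally
        {
            Pools.Scratch.Return(buffer);
        }
    }
}
```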
cc @davidfowl
Or that using Server GC is causing additional allocations on the heap that can't be reclaimed.
It is an interesting hypothesis, which was definitely the case for #50414.
I think I was able to create some sample code that reproduces the issue: https://gist.github.com/Bio2hazard/ee353c1042ee56a97c0d0b3d62c590bc

With Workstation GC this runs reliably; at least I didn't get any OOM despite running it for 5 minutes. On Server GC it will eventually go OOM, sometimes quickly, sometimes after a few minutes.

The memory requirements of the app should be static, which is why it works on Workstation GC. Each cycle adds the same amount of garbage to the heap, so the fact that Server GC sometimes goes OOM after 1 second and sometimes goes OOM after 60 seconds should indicate that the OOM is purely driven by GC behavior, not app requirements.

The app itself just allocates a large amount of memory to the LOH and then endlessly thrashes a small amount of memory. It also shows the annoying issue with array pool and async that was leading to unexpected allocations in my production app.

Please let me know if the repro sample works for you.
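For readers who do not open the gist, the general shape of such a reproducer is roughly the following. This is a paraphrase of the description above with made-up sizes, not the gist's actual code:

```csharp
using System.Buffers;
using System.Threading.Tasks;

class Repro
{
    static async Task Main()
    {
        // Long-lived LOH allocations that pin down most of the container limit.
        // Every page is written so the memory shows up in the cgroup accounting.
        var ballast = new byte[20][];
        for (int i = 0; i < ballast.Length; i++)
        {
            ballast[i] = new byte[100 * 1024 * 1024]; // 100 MB each lands on the LOH
            for (int p = 0; p < ballast[i].Length; p += 4096)
                ballast[i][p] = 1;
        }

        // Endlessly thrash a comparatively small amount of memory; every iteration
        // creates the same amount of garbage, so the app's requirements are static.
        while (true)
        {
            byte[] buffer = ArrayPool<byte>.Shared.Rent(1 << 20);
            buffer[0] = ballast[0][0]; // also keeps the ballast reachable
            await Task.Yield();        // continuation may resume on another thread
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```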
Hi @tmds, just wanted to follow up and ask if the reproduction code worked. It uses the RC. I'd love to see this fixed, so I'd like to know if there's anything else I can do to help on this.
@Bio2hazard I am sorry, I've completely missed your previous comment with a link to the repro. Thank you a lot for that! I'll give it a try.

I suspect that the issue might be related to the fact that on Unix, mmap-allocated memory doesn't actually get reflected in any cgroup values that report memory in use until the pages are touched. In fact, they don't contribute to the resident set at all until they are touched. The discrepancy between what the GC sees as committed memory and the memory load reported by cgroups might throw it over the edge in some specific cases. But that's just my theory.
Hi @janvorli,

That's a reasonable assumption, but I've actually accounted for that. 👍 In my reproduction code I make sure to touch every buffer right away. When I first started investigating this issue, I stumbled across #55126 (comment), which recommends using TouchPage to ensure the pages are touched, so I've been making sure to account for that throughout my work to root-cause this problem.
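For context, "touching" a buffer here just means writing at least one byte into every OS page so the kernel commits it and it shows up in the cgroup numbers. A minimal helper along these lines (the 4 KB page size and the method are assumptions; the linked comment's exact TouchPage code is not reproduced here):

```csharp
static class PageToucher
{
    // Write one byte per page (4 KB assumed) so the pages are actually
    // committed and counted in the resident set / cgroup usage.
    public static void Touch(byte[] buffer)
    {
        const int PageSize = 4096;
        for (int i = 0; i < buffer.Length; i += PageSize)
            buffer[i] = 0xFF;
        if (buffer.Length > 0)
            buffer[buffer.Length - 1] = 0xFF; // cover the final partial page
    }
}
```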
@Bio2hazard while you have accounted for the buffers you allocate, you cannot do the same for allocations made by the managed and native parts of the runtime. So there will always be some discrepancy depending on what your app does. I have noticed in the past that e.g. openssl allocates quite a lot of native memory, and I am not sure what the usage patterns around that memory are. But as for your repro, I think the difference will be quite small.
I'm looking at the example. Because of the limits at lines 31 to 33 in b82e838, the app needs some extra memory as it runs.

That extra memory could be enough for the OOM. Or, there is still reclaimable memory but the GC doesn't try to collect (or is limited in what it can collect) when an allocation fails. This may be a bug, or it may be by design. @janvorli @Maoni0 what is the expected behavior?
@janvorli hah, good catch, yeah, I can't account for the managed and native runtime parts. Glad to hear the repro is working!

@tmds yes, I'm aware of the implications of the limits you pointed to. I think the issue is a bit more severe than saying it uses extra memory. While technically correct, it consistently uses extra memory to the point where the pooling provides almost no benefit at all, and I would argue that most developers would not expect that behavior from ArrayPool.Shared. Since the amount of buffers per pool size is limited, and each thread has its own pool, it appears that any async usage / thread jumps between renting and returning can easily lead to one thread exhausting its available buffers, while the other thread discards returned buffers due to being at the limit. There are 2 problems with this.

It's of course possible that I misinterpreted the behavior, but between #27748 and the behavior observed in both my production app and the reproduction sample, it definitely seems easy to run into unexpected allocations.

The absolute memory requirements of the reproduction do not exceed the available memory, so it runs without issues on Workstation GC. Furthermore, on Server GC it runs for a variable amount of time, so some number of iterations pass. It just seems like the GC tries to squeeze in a bit more than is available and fails out without running a GC cycle that could've prevented it.
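The pattern being described, renting on one thread and returning on another after an await, looks roughly like the sketch below. Whether a particular returned buffer is reusable by the next Rent call depends on the shared pool's internal per-thread/per-core state, so this only illustrates the shape of the problem:

```csharp
using System;
using System.Buffers;
using System.Threading.Tasks;

class AsyncRentReturn
{
    static async Task ProcessAsync()
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(1 << 20);
        Console.WriteLine($"Rented on thread   {Environment.CurrentManagedThreadId}");

        await Task.Delay(10); // the continuation often resumes on a different thread pool thread

        // The buffer is returned from a different thread than the one that rented it,
        // so it goes into that thread's (or core's) portion of the shared pool.
        Console.WriteLine($"Returned on thread {Environment.CurrentManagedThreadId}");
        ArrayPool<byte>.Shared.Return(buffer);
    }
}
```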
Related: #52098
Are you also uncommenting the … ?
It uses the TLS storage for extra performance at the cost of some memory. Also, because your app runs so close to the memory limit, it will trim buffers on gen 2 GC, also undoing the performance benefit of pooling (line 292 in b82e838).
So for this to work well, you need to pay with some memory.
Workstation GC will be more deterministic. Server GC will reduce GC pause time to improve overall response time, but that comes at the cost of extra resources, which may include memory. That an app runs under a limit with Workstation GC does not mean the app will not go OOM under the same limit with Server GC.
I'm curious to learn if a GC is performed on the path that leads to the OOM exception, and whether it may be limited in some way.
Yes, though the numbers at the top may need to be tweaked for optimal reproduction of the issue with that enabled.

If you look at the Trim code, it uses … (line 190 in c344d64), which in turn uses the memory pressure logic in runtime/src/libraries/System.Private.CoreLib/src/System/Buffers/Utilities.cs (lines 40 to 48 in c344d64).

This actually takes us back full circle to the original topic of this issue: HighMemoryLoadThresholdBytes. This means that for a container with a 3 GB limit, the ArrayPool.Shared implementation will only start aggressively trimming once the memory load gets close to the 2899102924-byte HighMemoryLoadThresholdBytes, which is already above the 2415919104-byte heap hard limit.
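Paraphrasing the referenced Utilities.cs logic (a simplified reading of the linked lines, not a verbatim copy, and the thresholds should be checked against the source), the trim decision is driven by the memory load measured against HighMemoryLoadThresholdBytes rather than against the 75% container heap limit:

```csharp
using System;

enum MemoryPressure { Low, Medium, High }

static class PoolPressure
{
    // Approximation of the pressure check used by the shared pool's trimming:
    // memory *load* relative to HighMemoryLoadThresholdBytes, not heap usage
    // relative to TotalAvailableMemoryBytes.
    internal static MemoryPressure GetMemoryPressure()
    {
        const double HighPressureThreshold = 0.90;
        const double MediumPressureThreshold = 0.70;

        GCMemoryInfo info = GC.GetGCMemoryInfo();
        if (info.MemoryLoadBytes >= info.HighMemoryLoadThresholdBytes * HighPressureThreshold)
            return MemoryPressure.High;
        if (info.MemoryLoadBytes >= info.HighMemoryLoadThresholdBytes * MediumPressureThreshold)
            return MemoryPressure.Medium;
        return MemoryPressure.Low;
    }
}
```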
Based on previous comments I was led to believe that an OOM situation on Server GC that doesn't occur on Workstation GC was an anomaly, and that Server GC should attempt a full GC prior to going OOM (as noted in the earlier comments above).

So at this point I don't know what the expected behavior here is. I'm very curious to see what @janvorli's investigation reveals.
I agree. This should also consider the load of the managed heap in a container, and that should trigger aggressive trimming in this example. Looking at the reproducer from a performance perspective, you don't want this trimming to occur.
Any updates on this issue?
This problem is also encountered in the container.
@saber-wang your issue seems to be unrelated to this. Could you please create a separate bug with a repro or dumps? Usually OOMs are not GC-related, but caused by the application not freeing memory appropriately. Also, for this particular issue, this looks to be something to be fixed in ArrayPool?
I just want to confirm that we are also seeing this issue with .NET 6 and Docker Swarm. We were seeing a steady stream of System.OutOfMemoryException. Disabling Docker memory limits stopped the errors. The hosts have plenty of free memory and memory limits were set sufficiently above what the containers were actually using. This is with Docker version 24.0.2.
I want to add that the issue is present even with .NET 6 and Kubernetes (containerd). What I've noticed is that the process does not use more than 50% of the available memory (as limited by K8s). Is there a way to advise the process of the memory limit?
A workaround is to define the DOTNET_GCHighMemPercent environment variable (the value is specified in hex, e.g. 0x55 for 85%): https://learn.microsoft.com/en-us/dotnet/core/runtime-config/garbage-collector#high-memory-percent