
Large number of memfd:doublemapper (deleted) entries #89776

Open
ayende opened this issue Aug 1, 2023 · 23 comments

@ayende
Contributor

ayende commented Aug 1, 2023

Description

When looking into our process, we noticed a large number of entries like this:

7fb3b0895000-7fb3b0896000 rw-s 03ac6000 00:01 2062                       /memfd:doublemapper (deleted)
7fb3b0896000-7fb3b08a0000 ---s 03ac7000 00:01 2062                       /memfd:doublemapper (deleted)
7fb3b08a0000-7fb3b08ab000 rw-s 03ad1000 00:01 2062                       /memfd:doublemapper (deleted)
7fb3b08ab000-7fb3b08b0000 ---s 03adc000 00:01 2062                       /memfd:doublemapper (deleted)
7fb3b08b0000-7fb3b08b9000 rw-s 03ae1000 00:01 2062                       /memfd:doublemapper (deleted)
7fb3b08b9000-7fb3b08c0000 ---s 03aea000 00:01 2062                       /memfd:doublemapper (deleted)
7fb427006000-7fb427007000 rw-s 00000000 00:01 2062                       /memfd:doublemapper (deleted)

The process has been running for about 8 hours, and we have:

sudo cat /proc/10459/maps | grep doublemapper | wc -l
3308

That number is not stable and grows over time, but we are just now loading data into the system, not yet trying to stress it.

Looking at the code, I found the source of that here:

int fd = memfd_create("doublemapper", MFD_CLOEXEC);

But looking at where this is freed, I see:

close((int)(size_t)mapperHandle);

It looks like this will only actually be freed on macOS, and not on Linux?

FWIW, I couldn't find where this is called.

Is this number of entries expected? Should we monitor this value?
I understand that this is related to the way the JIT allocates memory?

Relatedly, we are seeing a large increase in memory usage in some production systems, which is not seen in .NET 6.0 but is very noticeable in .NET 7.0.

I noticed this:
#80580

And we are investigating whether we do a lot of dynamic assembly generation (so far we don't think so, but we can't rule it out).

Reproduction Steps

When I started writing this post, I had:

 sudo cat /proc/10459/maps | grep doublemapper | wc -l
3308

By the time I got here, I had:

sudo cat /proc/10459/maps | grep doublemapper | wc -l
3312

So I certainly think that there is something at work here.
Note that at this point, the process in question had been running for hours, basically in a big loop, so there should be no change in behavior, nor would I expect any further JIT tiering or the like.

Expected behavior

The runtime should not allocate memory indefinitely.

Actual behavior

We are seeing additional memory mappings accumulate over time.

Regression?

Yes, we aren't seeing that in .NET 6.0

Known Workarounds

No response

Configuration

No response

Other information

No response

@dotnet-issue-labeler dotnet-issue-labeler bot added the needs-area-label An area label is needed to ensure this gets routed to the appropriate area owners label Aug 1, 2023
@ghost ghost added the untriaged New issue has not been triaged by the area owner label Aug 1, 2023
@vcsjones vcsjones added area-VM-coreclr and removed needs-area-label An area label is needed to ensure this gets routed to the appropriate area owners labels Aug 1, 2023
@hoyosjs
Member

hoyosjs commented Aug 2, 2023

@ayende that's an #ifndef, so the doublemapper is only deallocated on Linux. It's called from the destructor of ExecutableAllocator, which takes memory that's normally RX and creates an RW mapping as needed for W^X purposes. Lambdas that aren't cached, and reflection, can also cause such behavior. cc: @janvorli

@ayende
Contributor Author

ayende commented Aug 2, 2023

I'm sorry, I didn't realize that this was #ifndef; I read it as #ifdef.
What do you mean by "uncached lambda"?

Is there an expectation that this will grow without limit? What sort of reflection would cause this?

@hoyosjs
Member

hoyosjs commented Aug 2, 2023

Actually, even in the lambda case of capturing context, I expect allocations of a managed object, but not jitting of new methods. Essentially, I expect tiering, loading, and some debugger operations to cause RX -> RW paging. You can use dotnet-counters to see if the jitted-method count increases. I am not sure what's causing the growth in this case. @janvorli, does this count against the real memory usage? Any ideas what might be contributing to this? I thought that since the mapping is deleted it becomes free for the process. I do expect it to count against the max_map_count though.
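
For example (assuming the dotnet-counters global tool is installed; any EventCounter viewer works), the built-in System.Runtime counter set includes the number of methods JIT-compiled, which should stay flat if no new code is being generated:

$ dotnet-counters monitor --process-id <pid> System.Runtime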

@janvorli
Member

janvorli commented Aug 2, 2023

The shared memory that is visible as /memfd:doublemapper is used for allocating all executable code for the JIT, and also for runtime-generated helpers and data that need to be allocated close to the code that references them. This is the basis of the W^X feature that ensures that no memory in the process is writeable and executable at the same time. We double map executable code blocks as writeable memory temporarily to write or modify the code.
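
For illustration only, here is a minimal standalone C sketch of the general double-mapping technique (a sketch of the mechanism, not the actual coreclr ExecutableAllocator code). A memfd-backed file always shows up as /memfd:<name> (deleted) in /proc/<pid>/maps, and every distinct mmap of it adds one maps entry, which is where the entries in this issue come from:

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    size_t size = 4096;

    /* Anonymous shared file; appears as "/memfd:doublemapper (deleted)" in maps. */
    int fd = memfd_create("doublemapper", MFD_CLOEXEC);
    ftruncate(fd, (off_t)size);

    /* Two views of the same physical pages: RW for writing code, RX for running it. */
    unsigned char *rw = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    unsigned char *rx = mmap(NULL, size, PROT_READ | PROT_EXEC,  MAP_SHARED, fd, 0);
    close(fd);                       /* existing mappings keep the memory alive */

    rw[0] = 0xc3;                    /* write an x86-64 'ret' through the RW view */
    ((void (*)(void))rx)();          /* execute it through the RX view */

    munmap(rw, size);                /* the temporary RW view can be dropped */
    printf("RX view still mapped at %p\n", (void *)rx);
    return 0;
}

Each RX and RW range is a separate line in /proc/<pid>/maps, so a process that keeps many such blocks alive accumulates many doublemapper entries, and each one counts toward vm.max_map_count.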

@ayende
Contributor Author

ayende commented Aug 3, 2023

Hi,
For reference, we ran the workload with DOTNET_EnableWriteXorExecute=0 and we are seeing 10561 entries in /proc/PID/maps.

It is going up & down a bit but appears to be mostly stable.

Without this flag (i.e., with W^X enabled), we are seeing a lot more mappings, and they are always increasing.

My expectation was that with W^X, we'd stop needing those once the system stabilized, but we saw an overall increase over time even after hours of running.

@mangod9 mangod9 removed the untriaged New issue has not been triaged by the area owner label Aug 3, 2023
@mangod9 mangod9 added this to the Future milestone Aug 3, 2023
@marcovr

marcovr commented Aug 7, 2023

We are currently facing a similar issue, where we found runtime crashes like this:

Fatal error. The RW block to unmap was not found
Repeat 2 times:
--------------------------------
   at System.Runtime.CompilerServices.RuntimeHelpers.CompileMethod(System.RuntimeMethodHandleInternal)
--------------------------------
   at System.Reflection.Emit.DynamicMethod.CreateDelegate(System.Type, System.Object)
   at System.Linq.Expressions.Compiler.LambdaCompiler.Compile(System.Linq.Expressions.LambdaExpression)
   ...

This seems to happen because we are running into the default maximum number of memory maps:

$ sysctl vm.max_map_count
vm.max_map_count = 65530

I wanted to find out why this is happening, and by examining /proc/1/maps I can see that ~90% of all memory maps come from doublemapper:

$  cat /proc/1/maps | grep doublemapper | wc -l
59588
$  cat /proc/1/maps | wc -l
63878
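
For reference, the map limit itself can be raised with sysctl while the underlying growth is investigated (a mitigation for the crash, not a fix; the value below is arbitrary and would need to be persisted via /etc/sysctl.conf or a sysctl.d drop-in):

$ sudo sysctl -w vm.max_map_count=262144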

@ayende
Contributor Author

ayende commented Aug 7, 2023

We also hit the limit on the # of maps recently in production.

I'm not sure how to point a finger, but it absolutely feels like there is a leak here.

@ayende
Contributor Author

ayende commented Aug 9, 2023

We looked in more detail at /proc/self/maps and we see:

/memfd:doublemapper 7,462 times
Unknown 34,901 times


Those unknown entries look roughly like: `7f9a8bb2e000-7f9b08000000 ---p 00000000 00:00 0`

We run roughly the same process on Windows as well, and looked at the VMMap results.

We have 268 GB (!) of `Thread Execution Block`?
For that matter, you can see that there is a TEB here that is 90 MB in size, which seems really high.

![image](https://github.com/dotnet/runtime/assets/116915/467c944c-3310-476c-8e97-16329632e1d3)

Here is the vmmap data:

[Raven-VMMap.zip](https://github.com/dotnet/runtime/files/12301035/Raven-VMMap.zip)

Any ideas what we are looking at here?

@janvorli
Member

@ayende I wonder if it would be possible to run your app with and without W^X enabled for about the same time and then share the /proc/{PID}/smaps (smaps have more details than maps) for each of the cases. I'd like to take a look at the mappings to see how they differ between those two cases, as you've mentioned that the number of mappings looked stable with W^X disabled.

@ayende
Copy link
Contributor Author

ayende commented Aug 27, 2023

AWS: c5.xlarge

$  lsb_release -d
Description:    Ubuntu 22.04.2 LTS

$ uname -a
Linux ip-172-31-16-178 5.15.0-1031-aws #35-Ubuntu SMP Fri Feb 10 02:07:18 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

$ wget https://daily-builds.s3.amazonaws.com/RavenDB-5.4.109-linux-x64.tar.bz2

$ sudo apt install bzip2

$ tar -xf RavenDB-5.4.109-linux-x64.tar.bz2

$ ./RavenDB/run.sh

$ cat <<EOF > RavenDB/Server/settings.json
{
    "ServerUrl": "http://127.0.0.1:8080",
    "Setup.Mode": "None",
    "DataDir": "RavenData",
    "License.Eula.Accepted": true
}
EOF

$ ./RavenDB/run.sh

In the browser:

$ curl "http://localhost:8080/databases/test/admin/smuggler/import?url=https://twitter-2020-rvn-dump.s3.us-west-1.amazonaws.com/2023-03-29-07-46-59.ravendb-full-backup"

Note: that is a very large file.

What happens under the covers is that we import a lot of data into RavenDB.
There should be no assembly generation in this process, and pretty much all the code that is involved is basically the same big loop.

$ cat /proc/$(pidof Raven.Server)/smaps | grep memfd | wc -l

We see very rapid growth to ~2,600 memfd items over a few minutes or so,
then slow growth that adds another few entries over time.

[09:07:10] ubuntu@ip-172-31-16-178:~$ cat /proc/$(pidof Raven.Server)/maps | grep memfd | wc -l
2652
[09:07:11] ubuntu@ip-172-31-16-178:~$ cat /proc/$(pidof Raven.Server)/maps | grep memfd | wc -l
2654
[09:07:15] ubuntu@ip-172-31-16-178:~$ cat /proc/$(pidof Raven.Server)/maps | grep memfd | wc -l
2654
[09:07:28] ubuntu@ip-172-31-16-178:~$ cat /proc/$(pidof Raven.Server)/maps | grep memfd | wc -l
2659
[09:07:40] ubuntu@ip-172-31-16-178:~$ cat /proc/$(pidof Raven.Server)/maps | grep memfd | wc -l
2669

Here is the total number of maps:
cat /proc/$(pidof Raven.Server)/maps |wc -l
3958

Output from smaps:

7fed43c26000-7fed43c27000 r-xs 00167000 00:01 3072                       /memfd:doublemapper (deleted)
Size:                  4 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             0 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:    0
ProtectionKey:         0
VmFlags: rd ex sh mr mw me ms sd
7fed43c27000-7fed43c28000 rw-s 00168000 00:01 3072                       /memfd:doublemapper (deleted)
Size:                  4 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             0 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:    0
ProtectionKey:         0
VmFlags: rd wr sh mr mw me ms sd

I tried running: $ sudo strace -kfp $(pidof Raven.Server) -e trace=memfd_create

It gave this output:

ubuntu@ip-172-31-16-178:~$ sudo strace -kfp $(pidof Raven.Server) -e trace=memfd_create
strace: Process 2495 attached with 39 threads
strace: Process 2856 attached
[pid  2527] --- SIGRT_2 {si_signo=SIGRT_2, si_code=SI_TKILL, si_pid=2495, si_uid=1000} ---
 > /memfd:doublemapper (deleted)() [0x2861ac6]
 > /memfd:doublemapper (deleted)() [0x27e4535]
[pid  2527] --- SIGRT_2 {si_signo=SIGRT_2, si_code=SI_TKILL, si_pid=2495, si_uid=1000} ---
 > /memfd:doublemapper (deleted)() [0x2861abb]
 > /memfd:doublemapper (deleted)() [0x27e4535]
[pid  2527] --- SIGRT_2 {si_signo=SIGRT_2, si_code=SI_TKILL, si_pid=2495, si_uid=1000} ---
 > /memfd:doublemapper (deleted)() [0x2861d6b]
 > /memfd:doublemapper (deleted)() [0x27e4535]
[pid  2527] --- SIGRT_2 {si_signo=SIGRT_2, si_code=SI_TKILL, si_pid=2495, si_uid=1000} ---
 > /memfd:doublemapper (deleted)() [0x2861abb]
 > /memfd:doublemapper (deleted)() [0x27e4535]
[pid  2856] +++ exited with 0 +++
[pid  2640] --- SIGRT_2 {si_signo=SIGRT_2, si_code=SI_TKILL, si_pid=2495, si_uid=1000} ---
 > /memfd:doublemapper (deleted)() [0x27ad3c5]
 > /memfd:doublemapper (deleted)() [0x27e66df]
 > /memfd:doublemapper (deleted)() [0x27ee81a]
 > /memfd:doublemapper (deleted)() [0x27f3b57]
 > /memfd:doublemapper (deleted)() [0x27f3aa4]
 > /memfd:doublemapper (deleted)() [0x27ee24e]
 > /memfd:doublemapper (deleted)() [0x27f3947]
 > /memfd:doublemapper (deleted)() [0x27f389e]
 > /memfd:doublemapper (deleted)() [0x27ecbc4]
 > /memfd:doublemapper (deleted)() [0x27f2849]
 > /memfd:doublemapper (deleted)() [0x27f2691]
 > /memfd:doublemapper (deleted)() [0x2833af0]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x28331a6]
 > /memfd:doublemapper (deleted)() [0x27edb9c]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x2832f5d]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27ee36f]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x2832d2e]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27e8d06]
 > /memfd:doublemapper (deleted)() [0x2741f2d]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x2832b0e]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27fe57b]
 > /memfd:doublemapper (deleted)() [0x27fe4a6]
 > /memfd:doublemapper (deleted)() [0x28018a8]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x28328fe]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27fb6a5]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x28326de]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27fe57b]
 > /memfd:doublemapper (deleted)() [0x27fe4a6]
 > /memfd:doublemapper (deleted)() [0x281863f]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x283249e]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x2817aed]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x283226e]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27fe57b]
 > /memfd:doublemapper (deleted)() [0x27fe4a6]
 > /memfd:doublemapper (deleted)() [0x2817487]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x2831d1e]
 > /memfd:doublemapper (deleted)() [0x28135c2]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x27fbd3c]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x2814b19]
 > /memfd:doublemapper (deleted)() [0x2754c86]
 > /memfd:doublemapper (deleted)() [0x2727c5c]
 > /home/ubuntu/RavenDB/Server/System.Private.CoreLib.dll() [0x228532]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x2bf057) [0x4c4a57]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xee07e) [0x2f3a7e]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x106182) [0x30bb82]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xb699a) [0x2bc39a]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xb6f9d) [0x2bc99d]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x106257) [0x30bc57]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x44c1ce) [0x651bce]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(pthread_condattr_setpshared+0x513) [0x94b43]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(__xmknodat+0x230) [0x126a00]
[pid  2527] --- SIGRT_2 {si_signo=SIGRT_2, si_code=SI_TKILL, si_pid=2495, si_uid=1000} ---
 > /usr/lib/x86_64-linux-gnu/libc.so.6(__nptl_death_event+0x187) [0x91197]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(pthread_cond_wait+0x211) [0x93ac1]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x43f9db) [0x6453db]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x43f691) [0x645091]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x444072) [0x649a72]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x4442a9) [0x649ca9]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x1d273a) [0x3d813a]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x1d6fe5) [0x3dc9e5]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x15f287) [0x364c87]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x2c00bd) [0x4c5abd]
 > /memfd:doublemapper (deleted)() [0x280fa88]
 > /memfd:doublemapper (deleted)() [0x280f514]
 > /memfd:doublemapper (deleted)() [0x27f6b78]
 > /memfd:doublemapper (deleted)() [0x280cec7]
 > /memfd:doublemapper (deleted)() [0x2845d08]
 > /memfd:doublemapper (deleted)() [0x2843a70]
 > /memfd:doublemapper (deleted)() [0x27b77d8]
 > /memfd:doublemapper (deleted)() [0x22d9c97]
 > /memfd:doublemapper (deleted)() [0x22d992c]
 > /memfd:doublemapper (deleted)() [0x1e676d7]
 > /memfd:doublemapper (deleted)() [0x1e65733]
 > /home/ubuntu/RavenDB/Server/System.Private.CoreLib.dll() [0x216dfb]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x2bf057) [0x4c4a57]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xee07e) [0x2f3a7e]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x106182) [0x30bb82]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xb699a) [0x2bc39a]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xb6f9d) [0x2bc99d]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x106257) [0x30bc57]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x44c1ce) [0x651bce]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(pthread_condattr_setpshared+0x513) [0x94b43]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(__xmknodat+0x230) [0x126a00]
[pid  2640] --- SIGRT_2 {si_signo=SIGRT_2, si_code=SI_TKILL, si_pid=2495, si_uid=1000} ---
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x41d08d) [0x622a8d]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x42cdc3) [0x6327c3]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x43b0fa) [0x640afa]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x1d73e6) [0x3dcde6]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x1356b9) [0x33b0b9]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x224766) [0x42a166]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x23cce5) [0x4426e5]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x26ec31) [0x474631]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x13ae0f) [0x34080f]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x1399eb) [0x33f3eb]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x1561b4) [0x35bbb4]
 > /memfd:doublemapper (deleted)() [0x27409d7]
 > /memfd:doublemapper (deleted)() [0x27ebcc7]
 > /memfd:doublemapper (deleted)() [0x27eaa2f]
 > /memfd:doublemapper (deleted)() [0x27ea354]
 > /memfd:doublemapper (deleted)() [0x27e531e]
 > /memfd:doublemapper (deleted)() [0x27ee81a]
 > /memfd:doublemapper (deleted)() [0x27f3b57]
 > /memfd:doublemapper (deleted)() [0x27f3aa4]
 > /memfd:doublemapper (deleted)() [0x27ee24e]
 > /memfd:doublemapper (deleted)() [0x27f3947]
 > /memfd:doublemapper (deleted)() [0x27f389e]
 > /memfd:doublemapper (deleted)() [0x27ecbc4]
 > /memfd:doublemapper (deleted)() [0x27f2849]
 > /memfd:doublemapper (deleted)() [0x27f2691]
 > /memfd:doublemapper (deleted)() [0x2833af0]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x28331a6]
 > /memfd:doublemapper (deleted)() [0x27edb9c]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x2832f5d]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27ee36f]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x2832d2e]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27e8d06]
 > /memfd:doublemapper (deleted)() [0x2741f2d]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x2832b0e]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27fe57b]
 > /memfd:doublemapper (deleted)() [0x27fe4a6]
 > /memfd:doublemapper (deleted)() [0x28018a8]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x28328fe]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27fb6a5]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x28326de]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27fe57b]
 > /memfd:doublemapper (deleted)() [0x27fe4a6]
 > /memfd:doublemapper (deleted)() [0x281863f]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x283249e]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x2817aed]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x283226e]
 > /memfd:doublemapper (deleted)() [0x276492b]
 > /memfd:doublemapper (deleted)() [0x2724853]
 > /memfd:doublemapper (deleted)() [0x27fe57b]
 > /memfd:doublemapper (deleted)() [0x27fe4a6]
 > /memfd:doublemapper (deleted)() [0x2817487]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x2831d1e]
 > /memfd:doublemapper (deleted)() [0x28135c2]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x27fbd3c]
 > /memfd:doublemapper (deleted)() [0x2724dd9]
 > /memfd:doublemapper (deleted)() [0x2814b19]
 > /memfd:doublemapper (deleted)() [0x2754c86]
 > /memfd:doublemapper (deleted)() [0x2727c5c]
 > /home/ubuntu/RavenDB/Server/System.Private.CoreLib.dll() [0x228532]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x2bf057) [0x4c4a57]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xee07e) [0x2f3a7e]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x106182) [0x30bb82]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xb699a) [0x2bc39a]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xb6f9d) [0x2bc99d]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x106257) [0x30bc57]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x44c1ce) [0x651bce]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(pthread_condattr_setpshared+0x513) [0x94b43]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(__xmknodat+0x230) [0x126a00]
strace: Process 2857 attached
[pid  2527] --- SIGRT_2 {si_signo=SIGRT_2, si_code=SI_TKILL, si_pid=2495, si_uid=1000} ---
 > /usr/lib/x86_64-linux-gnu/libc.so.6(__nss_database_lookup+0x3784a) [0x1afbba]
 > /memfd:doublemapper (deleted)() [0x2720b31]
 > /memfd:doublemapper (deleted)() [0x2850c63]
 > /memfd:doublemapper (deleted)() [0x2853650]
 > /memfd:doublemapper (deleted)() [0x2852fa1]
 > /memfd:doublemapper (deleted)() [0x27f6bcf]
 > /memfd:doublemapper (deleted)() [0x280cec7]
 > /memfd:doublemapper (deleted)() [0x2845d08]
 > /memfd:doublemapper (deleted)() [0x2843a70]
 > /memfd:doublemapper (deleted)() [0x27b77d8]
 > /memfd:doublemapper (deleted)() [0x22d9c97]
 > /memfd:doublemapper (deleted)() [0x22d992c]
 > /memfd:doublemapper (deleted)() [0x1e676d7]
 > /memfd:doublemapper (deleted)() [0x1e65733]
 > /home/ubuntu/RavenDB/Server/System.Private.CoreLib.dll() [0x216dfb]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x2bf057) [0x4c4a57]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xee07e) [0x2f3a7e]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x106182) [0x30bb82]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xb699a) [0x2bc39a]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0xb6f9d) [0x2bc99d]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x106257) [0x30bc57]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so(GetCLRRuntimeHost+0x44c1ce) [0x651bce]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(pthread_condattr_setpshared+0x513) [0x94b43]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(__xmknodat+0x230) [0x126a00]

I then killed the RavenDB process:

$ export DOTNET_EnableWriteXorExecute=0
$ ./RavenDB/run.sh

I deleted and re-created the Test database and then:

$ curl "http://localhost:8080/databases/test/admin/smuggler/import?url=https://twitter-2020-rvn-dump.s3.us-west-1.amazonaws.com/2023-03-29-07-46-59.ravendb-full-backup"

Obviously, there are no memfd items in the maps there, but I tried:

[09:19:47] ubuntu@ip-172-31-16-178:~$ cat /proc/$(pidof Raven.Server)/maps |  wc -l
4123
[09:19:56] ubuntu@ip-172-31-16-178:~$ cat /proc/$(pidof Raven.Server)/maps |  wc -l
4129

I'm attaching the full maps from two points in time, so you can see this over time (without W^X).
maps-no-w^x.zip

And here is the smaps output for both modes:

smaps.zip

@koepalex

koepalex commented Sep 4, 2023

I'm currently trying to understand why, in our application, the process memory (full memory dump) is way bigger than what we "use" (for details see: https://stackoverflow.com/questions/77023695/missmatch-between-expected-memory-size-of-an-dotnet-application-and-real-consume ).

So today I checked /proc/<pid>/maps and saw that 2452 entries out of 3265 contain /memfd:doublemapper (deleted).
@ayende do you also see a mismatch between the expected memory size (heaps, stacks, modules) and the consumed process memory?

@ayende
Contributor Author

ayende commented Sep 4, 2023

Yes, we are also seeing some weirdness around that.

@ayende
Contributor Author

ayende commented Sep 19, 2023

I just tested this on .NET 8.0 RC1; we are still seeing an increase in the number of memfd entries over time.

I can reproduce this quite easily:

  • starting RavenDB
  • cat /proc/$(pidof Raven.Server)/maps | grep memfd | wc -l == 838
  • create (empty) database
  • cat ... == 909
  • create sample data
  • cat ... == 1996
  • reset an index
  • cat ... == 2000
  • reset an index
  • cat ... == 2004
  • reset an index
  • cat ... == 2007

I'm getting some results from this:

 strace -f --instruction-pointer --stack-traces -e memfd_create  RavenDB/Server/Raven.Server
[pid  7658] [00007f00a0346c81] --- SIGRT_2 {si_signo=SIGRT_2, si_code=SI_TKILL, si_pid=7565, si_uid=1000} ---
 > /memfd:doublemapper (deleted)() [0x2d67c81]
 > /memfd:doublemapper (deleted)() [0x2da928e]
 > /memfd:doublemapper (deleted)() [0x68ff791]
 > /memfd:doublemapper (deleted)() [0x68ffc2b]
 > /memfd:doublemapper (deleted)() [0x68ff48e]
 > /memfd:doublemapper (deleted)() [0x68c1d16]
 > /memfd:doublemapper (deleted)() [0x68ee05c]
 > /memfd:doublemapper (deleted)() [0x68dd66c]
 > /memfd:doublemapper (deleted)() [0x6821b7f]
 > /memfd:doublemapper (deleted)() [0x636d5a2]
 > /memfd:doublemapper (deleted)() [0x223ef8d]
 > /memfd:doublemapper (deleted)() [0x223c6d9]
 > /memfd:doublemapper (deleted)() [0x2d925b4]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x49b7c7]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x2d5df6]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x2ebab2]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x2a4e05]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x2a53bd]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x2ebb88]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x612a2e]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(pthread_condattr_setpshared+0x513) [0x94b43]
 > unexpected_backtracing_error [0x135e]
[pid  7579] [????????????????] +++ exited with 0 +++
[pid  7658] [00007f00a0389e27] --- SIGRT_2 {si_signo=SIGRT_2, si_code=SI_TKILL, si_pid=7565, si_uid=1000} ---
 > /memfd:doublemapper (deleted)() [0x2daae27]
 > /memfd:doublemapper (deleted)() [0x68ffe9c]
 > /memfd:doublemapper (deleted)() [0x68ff48e]
 > /memfd:doublemapper (deleted)() [0x68c1d16]
 > /memfd:doublemapper (deleted)() [0x68ee05c]
 > /memfd:doublemapper (deleted)() [0x68dd66c]
 > /memfd:doublemapper (deleted)() [0x6821b7f]
 > /memfd:doublemapper (deleted)() [0x636d5a2]
 > /memfd:doublemapper (deleted)() [0x223ef8d]
 > /memfd:doublemapper (deleted)() [0x223c6d9]
 > /memfd:doublemapper (deleted)() [0x2d925b4]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x49b7c7]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x2d5df6]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x2ebab2]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x2a4e05]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x2a53bd]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x2ebb88]
 > /home/ubuntu/RavenDB/Server/libcoreclr.so() [0x612a2e]
 > /usr/lib/x86_64-linux-gnu/libc.so.6(pthread_condattr_setpshared+0x513) [0x94b43]
 > unexpected_backtracing_error [0x7ebf2e4eef40]

I can't get symbols from strace, and I can't get lldb (where I do get symbols) to stop at the right location.

Running `b VMToOSInterface::CreateDoubleMemoryMapper` gives the right output, but it doesn't actually stop there.

@hoyosjs
Member

hoyosjs commented Sep 19, 2023

@ayende dotnet-symbol should be able to get them for you with the --symbols flag if it's the Microsoft-built runtime: https://learn.microsoft.com/en-us/dotnet/core/diagnostics/dotnet-symbol
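
For example (a sketch of typical usage, with the libcoreclr.so path taken from the repro above), the tool is installed as a global .NET tool and pointed at the native binary so the matching debug symbols are downloaded next to it:

$ dotnet tool install -g dotnet-symbol
$ dotnet-symbol --symbols ~/RavenDB/Server/libcoreclr.so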

@ayende
Contributor Author

ayende commented Sep 20, 2023

Yes, I tried running that. It didn't seem to matter in terms of strace; it did work with lldb, I think, but I couldn't get the breakpoint to hit.

@janvorli
Member

> it didn't seem to matter in terms of strace

Interestingly, I've seen the symbols both working and not working with strace on the same Ubuntu 22.04 (one was in WSL2, where it didn't work, and the other in a Docker container, where it worked). I am currently trying to figure out what makes it different, since I need to get it working for an investigation I am doing.

@theolivenbaum

theolivenbaum commented Nov 30, 2023

@janvorli we're seeing something similar in our application: over 16k /memfd:doublemapper (deleted) entries after a few days of uptime. Tested on .NET 8, but probably the same on .NET 7, as we had strange problems with OOM in the past.

We do generate assemblies at runtime using Microsoft.CodeAnalysis.Scripting.Script; is this known for leaking memory like in #80580?

Possibly related: dotnet/roslyn#52217 and dotnet/roslyn#41722

@janvorli
Member

@theolivenbaum this is not a leak; the number of regions marked with /memfd:doublemapper is expected to grow as the runtime compiles more and more code. If this code is not inside a collectible AssemblyLoadContext, and thus is not unloadable, this memory is not freed either.

@theolivenbaum

> @theolivenbaum this is not a leak; the number of regions marked with /memfd:doublemapper is expected to grow as the runtime compiles more and more code. If this code is not inside a collectible AssemblyLoadContext, and thus is not unloadable, this memory is not freed either.

Thanks! I'm changing our code to use a collectible AssemblyLoadContext; I'll check again in a week to see how it behaves.

@stg609

stg609 commented Jan 13, 2025

How's it going?
We also generate assemblies at runtime using Microsoft.CodeAnalysis.Scripting.Script and collectible AssemblyLoadContexts. After several days, the memory easily reaches 3.5 GiB out of a total of 4.5 GiB, ultimately resulting in OOMKilled.
The number of live AssemblyLoadContexts is less than 100. (I have triggered GC.Collect() several times.)

Grafana shows that a lot of the memory is cache (screenshot attached).

The result of cat /sys/fs/cgroup/memory/memory.stat

cache 2491609088
rss 1338695680
rss_huge 815792128
shmem 2400313344   <----  looks like an issue
mapped_file 394776576
dirty 4096
writeback 0
swap 0
pgpgin 54481949
pgpgout 53756387
pgfault 70779121
pgmajfault 237
inactive_anon 2270507008
active_anon 1533079552
inactive_file 46526464
active_file 44769280
unevictable 0
hierarchical_memory_limit 4718592000
hierarchical_memsw_limit 4718592000
total_cache 2491609088
total_rss 1338695680
total_rss_huge 815792128
total_shmem 2400313344
total_mapped_file 394776576
total_dirty 4096
total_writeback 0
total_swap 0
total_pgpgin 54481949
total_pgpgout 53756387
total_pgfault 70779121
total_pgmajfault 237
total_inactive_anon 2270507008
total_active_anon 1533079552
total_inactive_file 46526464
total_active_file 44769280
total_unevictable 0

I created a dump file using dotnet-dump before the process was OOMKilled. Below are the results of some commands:

eeversion

> eeversion                                                                                                                                                                                                                                                                
8.0.1024.46610
8.0.1024.46610 @Commit: 81cabf2857a01351e5ab578947c7403a5b128ad1
Server mode with 3 gc heaps
SOS Version: 9.0.11.3101 @Commit: 5b61d34de04d6100e6003415f7d7e9c4b971afd4

eeheap -gc shows the size of the GC heap is less than 400 MB:

GC Allocated Heap Size:    Size: 0x1622e368 (371385192) bytes.
GC Committed Heap Size:    Size: 0x17562000 (391520256) bytes.

dumpheap -type LoaderAllocator shows only 65 live AssemblyLoadContexts:

Statistics:
          MT Count TotalSize Class Name
7f459ad18488    66     1,584 System.Reflection.LoaderAllocatorScout
7f459ad18360    65     3,120 System.Reflection.LoaderAllocator
Total 131 objects, 4,704 bytes

!maddress shows the total size of Image regions is 3.54 GB:

 +----------------------------------------------------------------------+ 
 | Memory Type         |          Count |         Size |   Size (bytes) | 
 +----------------------------------------------------------------------+ 
 | Image               |            980 |       3.54gb |  3,801,517,056 | 
 | PAGE_READWRITE      |          1,178 |       1.17gb |  1,255,059,968 | 
 | Stack               |             66 |     499.35mb |    523,604,992 | 
 | GCHeap              |             61 |     373.38mb |    391,520,256 | 
 | HighFrequencyHeap   |          1,829 |     115.05mb |    120,635,392 | 
 | LoaderCodeHeap      |            148 |      78.54mb |     82,354,176 | 
 | PAGE_READONLY       |            276 |      78.41mb |     82,223,104 | 
 | LowFrequencyHeap    |            765 |      53.26mb |     55,848,960 | 
 | PAGE_EXECUTE_READ   |          1,042 |      53.20mb |     55,783,424 | 
 | GCHeapToBeFreed     |             13 |      36.14mb |     37,900,288 | 
 | FixupPrecodeHeap    |          1,634 |      25.81mb |     27,066,368 | 
 | GCBookkeeping       |              9 |       8.89mb |      9,318,400 | 
 | HostCodeHeap        |             25 |       3.65mb |      3,825,664 | 
 | ExecutableHeap      |             25 |       1.50mb |      1,576,960 | 
 | IndirectionCellHeap |             82 |       1.34mb |      1,400,832 | 
 | HandleTable         |             14 |     792.00kb |        811,008 | 
 | CacheEntryHeap      |             52 |     484.00kb |        495,616 | 
 | StubHeap            |             51 |     332.00kb |        339,968 | 
 | NewStubPrecodeHeap  |              4 |      64.00kb |         65,536 | 
 +----------------------------------------------------------------------+ 
 | [TOTAL]             |          8,254 |       6.01gb |  6,451,347,968 | 
 +----------------------------------------------------------------------+ 

The number of doublemapper__deleted_ regions is 65, and their total size is 3.2 GB:

 | Image               |     7f3fae82e000 |     7f3fb4000000 |      87.82mb | MEM_IMAGE   | MEM_COMMIT  | PAGE_EXECUTE_READ | doublemapper__deleted_                                            | 
 | Image               |     7f3fb4000000 |     7f3fb7fed000 |      63.93mb | MEM_IMAGE   | MEM_COMMIT  | PAGE_READWRITE    | doublemapper__deleted_                                            | 
 | Image               |     7f3fb7fed000 |     7f3fbc000000 |      64.07mb | MEM_IMAGE   | MEM_UNKNOWN | PAGE_UNKNOWN      | doublemapper__deleted_                                            | 
 | Image               |     7f3fbc000000 |     7f3fbfff8000 |      63.97mb | MEM_IMAGE   | MEM_COMMIT  | PAGE_READWRITE    | doublemapper__deleted_                                            | 
 | Image               |     7f3fbfff8000 |     7f3fc4000000 |      64.03mb | MEM_IMAGE   | MEM_UNKNOWN | PAGE_UNKNOWN      | doublemapper__deleted_                                            | 
 | Image               |     7f3fc4000000 |     7f3fcbff5000 |     127.96mb | MEM_IMAGE   | MEM_COMMIT  | PAGE_READWRITE    | doublemapper__deleted_                                            | 
 | Image               |     7f3fcbff5000 |     7f3fcc000000 |      44.00kb | MEM_IMAGE   | MEM_UNKNOWN | PAGE_UNKNOWN      | doublemapper__deleted_                                            | 
 | Image               |     7f3fcc000000 |     7f3fcfffb000 |      63.98mb | MEM_IMAGE   | MEM_COMMIT  | PAGE_READWRITE    | doublemapper__deleted_                                            | 
 | Image               |     7f3fcfffb000 |     7f3fd0000000 |      20.00kb | MEM_IMAGE   | MEM_UNKNOWN | PAGE_UNKNOWN      | doublemapper__deleted_                                            | 
 | Image               |     7f3fd0000000 |     7f3fd3ff6000 |      63.96mb | MEM_IMAGE   | MEM_COMMIT  | PAGE_READWRITE    | doublemapper__deleted_                                            | 
 | Image               |     7f3fd3ff6000 |     7f3fd4000000 |      40.00kb | MEM_IMAGE   | MEM_UNKNOWN | PAGE_UNKNOWN      | doublemapper__deleted_                                            | 
 | Image               |     7f3fd4000000 |     7f3fd7ff7000 |      63.96mb | MEM_IMAGE   | MEM_COMMIT  | PAGE_READWRITE    | doublemapper__deleted_                                            | 
 | Image               |     7f3fd7ff7000 |     7f3fdc000000 |      64.04mb | MEM_IMAGE   | MEM_UNKNOWN | PAGE_UNKNOWN      | doublemapper__deleted_                                            | 
 | Image               |     7f3fdc000000 |     7f3fdfffc000 |      63.98mb | MEM_IMAGE   | MEM_COMMIT  | PAGE_READWRITE    | doublemapper__deleted_                                            | 
....

@janvorli
Member

@stg609 would you be able to share the dump? Unless you prefer a different way, you can share the dump with Microsoft by opening a feedback issue at https://developercommunity.visualstudio.com/, attaching the dump to it (the attachments are not publicly visible), and sharing the feedback issue ID here.

@stg609

stg609 commented Jan 14, 2025

> @stg609 would you be able to share the dump? Unless you prefer a different way, you can share the dump with Microsoft by opening a feedback issue at https://developercommunity.visualstudio.com/, attaching the dump to it (the attachments are not publicly visible), and sharing the feedback issue ID here.

Yes, I have attached the dump file in the comment; see https://developercommunity.visualstudio.com/t/Many-doublemapper-Images-result-in-OOMKi/10827328#T-N10827379

@janvorli
Member

janvorli commented Feb 1, 2025

@stg609 I am sorry it took me quite some time to investigate your dump. Some of the numbers reported by !maddress don't make sense. For example, it says that System.Private.CoreLib.dll has several mappings of around 64 MB each, 332.67 MB in total, while in reality the size of the whole file is about 12 MB.

In the end, I wrote a WinDbg script in JavaScript to dump all blocks managed by the executable memory allocator and got this result:

Total RX Blocks: 3666 Total Size: 327094272
Total RW Blocks: 3 Total Size: 28672
Total Free Blocks: 15305 Total Size: 1570635776
  • The RW blocks are temporary mappings, so their number is small.
  • The RX blocks represent memory allocated by the executable allocator. Their total size is ~312 MB, which is reasonable given the number of assemblies / load contexts you have.
  • But the free blocks consume 1.46 GB! They are supposed to be reused and not pile up like this.

I don't have any idea why there are so many free blocks. Many blocks on the free list have sizes that are used very often, so I don't see why they wouldn't be reused.

Would it be possible for you to take two dumps at two different time points of the same process while the memory consumption is growing so that I can check the trend in these numbers?
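
For example (assuming the dotnet-dump global tool; the output file names are arbitrary), two dumps taken a while apart would be enough to compare the block counts:

$ dotnet-dump collect -p <pid> -o dump-early
# ... wait while the memory / mapping count grows ...
$ dotnet-dump collect -p <pid> -o dump-late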
