Abnormal memory usage on MIPS platform with GOMIPS=softfloat #39174
This is expected (VSZ reporting ~600 MiB), assuming that first line was generated on Go 1.14. As of Go 1.14 the runtime initializes some structures by making large virtual mappings. Note that it does not actually ever map that memory unless it's needed (and how much is needed is proportional to your heap size), so Go 1.14 should not use much more physical memory than Go 1.13. If your process was OOM-killed it may be due to how your overcommit settings are configured (though in my experiments, I found that Linux really doesn't charge you for simply reserving address space, regardless of the overcommit setting), or it could be something else (a low `ulimit`, for example). I do not, in general, recommend using `ulimit -v` with Go programs.
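To see the distinction between reserved address space and resident memory on the device itself, here is a minimal sketch (my own illustration, not code from this thread; Linux-only, assuming `/proc` is mounted) that prints the two numbers `top` derives VSZ and RSS from:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// VmSize is reserved address space (what top shows as VSZ); VmRSS is
	// memory actually paged in. A large VmSize with a small VmRSS means
	// the runtime has only reserved mappings, not consumed RAM.
	f, err := os.Open("/proc/self/status")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "VmSize:") || strings.HasPrefix(line, "VmRSS:") {
			fmt.Println(line)
		}
	}
}
```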
So, what I'm doing is cross compiling a binary for a MIPS device (an OpenWRT router). If you check the go env output, you will notice that GOHOSTARCH is amd64 and GOARCH is mipsle. The OOM happens when I execute the compiled binary on the MIPS platform. The explanation about virtual mappings makes sense to me. I double-checked the log and noticed that the hello-world program actually never gets killed by the OOM-killer. The one that gets killed is the program I'm actually working on, a traffic-forwarding program written in Go. So, I guess what is actually happening is: the traffic-forwarding program can see a large chunk of virtual memory (~650MB) without knowing that the physical memory is merely 120MB, so it keeps allocating new memory without triggering GC. Eventually it hits the hard limit, the OOM-killer jumps in, crash. Does this guess make sense to you?
I see... then it is somewhat surprising to me that there's such a large mapping on mipsle, which is 32-bit. There may be a real issue here since there isn't much address space on 32-bit.
Got it, thank you for the clarification.
I'm not sure I follow. Does the traffic forwarding program explicitly reference the virtual memory usage for some kind of throttling mechanism? And I'm not sure what you mean by triggering a GC, either; if you have `GOGC` set to e.g. 100, then GCs will trigger when the heap reaches double the live heap at the end of the last GC (which is all tracked by updating metadata when allocations are made; it doesn't read any OS-reported statistics). It's not triggered based on a memory maximum or anything.
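To make that pacing concrete, here is a minimal sketch (my own illustration, not code from the thread): after a completed cycle, the runtime's `NextGC` goal sits at roughly live heap × (1 + GOGC/100), all computed from allocation bookkeeping rather than OS statistics.

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func main() {
	debug.SetGCPercent(100) // the default: GC when the heap roughly doubles

	// Keep ~16 MiB live so the numbers are easy to read.
	live := make([][]byte, 256)
	for i := range live {
		live[i] = make([]byte, 64<<10)
	}

	runtime.GC() // finish a cycle so the goal reflects the live heap

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// NextGC is about 2x HeapAlloc here; no VSZ/RSS numbers are involved.
	fmt.Printf("HeapAlloc=%d MiB NextGC=%d MiB\n", m.HeapAlloc>>20, m.NextGC>>20)

	runtime.KeepAlive(live)
}
```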
Ummm? Interesting. I didn't know that the GC in Go is triggered by heap growth relative to the last cycle. Then how does `ulimit` help relieve my issue?
Update: I thought the ulimit settings helped relieve the issue, but it turns out that what really helped were some iptables rules set by another team. These iptables rules redirect a lot of connections to another router, which means the traffic-forwarding program now experiences far less pressure. So I guess the OOM issue is mainly caused by poor memory optimization in the program itself; I'll spend some time working on that. In the meantime, I'll keep the issue open until you figure out whether 650MB of virtual memory is normal for mipsle or not. Good luck and have a nice weekend!
CC @cherrymui
As @mknyszek replied, the virtual memory mapping size is probably expected. I see similar numbers on 32-bit 386. As that memory is not actually paged in, I'm not sure it causes any actual problem. I think softfloat is probably not related here (unless there is some bug in the softfloat implementation that causes the memory size calculation to go wrong). That said, it is still a general problem how to run Go programs in a memory-limited environment (related: #29696).
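For the memory-limited case #29696 describes, the usual knobs are a lower `GOGC` and explicit returns of memory to the OS. A hypothetical sketch (the 20% setting and one-minute interval are arbitrary choices for illustration, not a recommendation from this thread):

```go
package main

import (
	"runtime/debug"
	"time"
)

func main() {
	// Trade CPU for memory: collect when the heap grows 20% past the live
	// set instead of the default 100%.
	debug.SetGCPercent(20)

	// Periodically hand freed pages back to the OS so the RSS seen by top
	// (and the OOM-killer) tracks actual usage more closely.
	go func() {
		for range time.Tick(time.Minute) {
			debug.FreeOSMemory()
		}
	}()

	select {} // stand-in for the real workload
}
```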
Change https://golang.org/cl/267100 mentions this issue:
In Go 1.12, we changed the runtime to use MADV_FREE when available on Linux (falling back to MADV_DONTNEED) in CL 135395 to address issue #23687. While MADV_FREE is somewhat faster than MADV_DONTNEED, it doesn't affect many of the statistics that MADV_DONTNEED does until the memory is actually reclaimed under OS memory pressure. This generally leads to poor user experience, like confusing stats in top and other monitoring tools; and bad integration with management systems that respond to memory usage.

We've seen numerous issues about this user experience, including #41818, #39295, #37585, #33376, and #30904, many questions on Go mailing lists, and requests for mechanisms to change this behavior at run-time, such as #40870. There are also issues that may be a result of this, but root-causing it can be difficult, such as #41444 and #39174. And there's some evidence it may even be incompatible with Android's process management in #37569.

This CL changes the default to prefer MADV_DONTNEED over MADV_FREE, to favor user-friendliness and minimal surprise over performance. I think it's become clear that Linux's implementation of MADV_FREE ultimately doesn't meet our needs. We've also made many improvements to the scavenger since Go 1.12. In particular, it is now far more prompt and it is self-paced, so it will simply trickle memory back to the system a little more slowly with this change.

This can still be overridden by setting GODEBUG=madvdontneed=0.

Fixes #42330 (meta-issue).

Fixes #41818, #39295, #37585, #33376, #30904 (many of which were already closed as "working as intended").

Change-Id: Ib6aa7f2dc8419b32516cc5a5fc402faf576c92e4
Reviewed-on: https://go-review.googlesource.com/c/go/+/267100
Trust: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Because 05e6d28 has landed on master, this issue can be closed.
@changkun 👍 Thanks a lot, wish you a nice day.
That commit is not relevant. OP already said:

> I've tried GODEBUG=madvdontneed=1 and GODEBUG=asyncpreemptoff=1 ... However, I got no luck on these two either.

If he tried setting madvdontneed=1 already and it made no difference, changing the default won't help. For what it's worth, I have this exact same issue and it is driving me nuts. Three identical MIPS machines running the same Go program; one of them OOMs after two hours, one of them OOMs about once a day, and the third one seems unaffected. Closing GC issues as "WORKSFORME" is not cool. GC is inherently a nondeterministic thing. You should expect buggy GCs to work properly for lots of people.
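When only some machines OOM, one way to narrow it down is to log the runtime's own accounting over time and line it up with `top` and the kernel OOM reports on each box. A hypothetical diagnostic sketch (my own, not from the thread; the 30-second interval is arbitrary):

```go
package main

import (
	"log"
	"runtime"
	"time"
)

// logMemStats periodically records the runtime's view of memory so it can be
// compared across machines and against top/OOM-killer logs.
func logMemStats(interval time.Duration) {
	var m runtime.MemStats
	for range time.Tick(interval) {
		runtime.ReadMemStats(&m)
		log.Printf("HeapAlloc=%d MiB HeapSys=%d MiB Sys=%d MiB NumGC=%d",
			m.HeapAlloc>>20, m.HeapSys>>20, m.Sys>>20, m.NumGC)
	}
}

func main() {
	go logMemStats(30 * time.Second)
	select {} // stand-in for the real traffic-forwarding workload
}
```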
What version of Go are you using (`go version`)?

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (`go env`)?

`go env` Output

What did you do?

What did you expect to see?

The memory usage should be a reasonable number, e.g. 2~3MB.

What did you see instead?

The `top` program reported a crazy number (~650MB), and the program got killed by the OOM-killer later.

And More
So I've done some tests and got the following interesting observations:
- It has nothing to do with the kernel version. I've tried downgrading the kernel from `5.4.41` to `4.14.180`; it makes no difference.
- I've tried `GOGC=<some number>`; it makes no difference.
- I've tried `GODEBUG=madvdontneed=1` and `GODEBUG=asyncpreemptoff=1` because I noticed that these features had caused some memory issues before. However, I got no luck on these two either.
- I haven't tried `GOARCH=mips` or `GOMIPS=hardfloat` yet, because that OpenWRT router is the only MIPS device I have on hand, and it doesn't support these options.
- I eventually found a temporary workaround: `ulimit -v 32768`, i.e. limit the memory usage to at most 32MB. After applying this limitation, I got a very reasonable memory usage: