runtime: non CGO application segfaulted once #42977
The crash is happening on line 2738 here (runtime source, lines 2734 to 2742 at commit 1984ee0):
From the traceback, we know the mapped and faulting addresses. What's weird is that there is no reason why 0xc0000a278c would be mapped and 0xc0000a1668 wouldn't be. In the heap, the runtime manipulates the address space in 64MiB-aligned 64MiB chunks. So either something went horribly wrong and Go managed to unmap a hole in its own heap, or something went wrong with the kernel or the hardware. Given how rarely Go unmaps anything, and the fact that this has only happened once, I think the most likely explanation may actually be a hardware issue. If you see it again, we'd love to know. But otherwise I'm not sure there's enough information to debug any further.
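To make the arithmetic concrete, here is a small illustrative sketch (not code from the runtime or from the issue) that divides both addresses from the traceback by the 64MiB chunk size mentioned above: they land in the same chunk, which is why one being mapped and the other not is so surprising.

```go
package main

import "fmt"

func main() {
	// 64MiB-aligned 64MiB chunks, as described in the comment above.
	const chunkSize = 64 << 20 // 64MiB = 2^26 bytes

	mapped := uintptr(0xc0000a278c)  // address that was mapped
	faulted := uintptr(0xc0000a1668) // address that faulted

	// Both addresses fall into the same chunk (index 12288), so they
	// would be expected to share the same mapping state.
	fmt.Printf("chunk of 0x%x: %d\n", mapped, mapped/chunkSize)
	fmt.Printf("chunk of 0x%x: %d\n", faulted, faulted/chunkSize)
}
```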
Thanks for looking into the issue. I also think the issue is not very valuable as it is, and to be honest I don't know how I can add more information.
Thanks. It's not doing much harm in the Unplanned milestone and it can stay there in case this issue is encountered again. If not, GopherBot will close it in a few months since it has the WaitingForInfo label.
[acting as gopherbot]
This issue is more for the record, because I can't reproduce it and have only seen it once. @aclements, maybe this is interesting for you; I don't know.
What version of Go are you using (go version)?

Does this issue reproduce with the latest release?
I don't know. I can't reproduce it, but I saw it once in production.
What operating system and processor architecture are you using (go env)?

I can't share the go env output, because the build setup changes in our CI environment. The application ran on an AWS c5.9xlarge instance; /proc/cpuinfo output (I don't know if this matters, but the CPU flags seem interesting based on #35777):
What did you do?
We run skipper, a net/http based proxy that serves our business 24x7. The application runs in Kubernetes with Docker.
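For illustration only, here is a minimal sketch of a net/http based reverse proxy of the kind described above; it is not skipper's actual code, and the backend URL and listen address are placeholders.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder backend; skipper's real routing is far more involved.
	backend, err := url.Parse("http://backend.internal:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Serve the proxy continuously, as the application in this report does.
	log.Fatal(http.ListenAndServe(":9090", proxy))
}
```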
Today, I saw a pod restart and got a panic with:
The rest of the panic output is >80kB long and does not point to a cause that I can see, only things you would expect to be normal for an HTTP proxy application.
What did you expect to see?
no panic
What did you see instead?
a panic
Additional information
The C code shown in https://github.com/golang/go/wiki/LinuxKernelSignalVectorBug was tested on the same machine as the restarted process:
Shutdown of the application was not triggered (I don't see any log lines from our signal handler), so this seems unrelated to #40085.
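For context, here is a minimal sketch of the kind of logging signal handler referred to here (an assumed pattern, not the proxy's actual handler): if a shutdown had been triggered, a log line from such a handler would be expected to appear.

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Assumed pattern: log and exit on SIGTERM/SIGINT.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)

	go func() {
		sig := <-sigs
		// A line like this would appear in the logs if shutdown had
		// been triggered; no such line was present in this case.
		log.Printf("received %v, starting graceful shutdown", sig)
		os.Exit(0)
	}()

	select {} // stand-in for the proxy's main serving loop
}
```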
I read #35777, but I don't know if it is related. If so, it seems to happen very rarely.