devspace dev command terminated with exit code 139 #2448
Comments
I'm actually getting a very similar bug on M1, but it only seems to happen for images coming from my registry; if I reference one directly from Docker it works... very interesting.
I believe I've found the root cause of this issue: it's devspacehelper-arm64 failing if there are more than 500 env variables. 🤷 @FabianKramm could you please check? Steps to reproduce:
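A minimal local sketch of the trigger condition described above, no cluster required; here `/usr/bin/env` merely stands in for the upx-packed helper binary, and the `DSTEST_` variable names are made up for illustration:

```shell
# Sketch: spawn a child process with more than 500 env vars, the reported
# threshold. `env` substitutes for the upx-packed devspacehelper binary,
# which is the process that actually segfaults in the report above.
args=""
for i in $(seq 1 501); do args="$args DSTEST_$i=x"; done
# Run a child (`env` again) with those 501 extra vars and count them.
env $args env | grep -c '^DSTEST_'
```

In the real reproduction, the packed helper would be executed in a pod whose container already carries that many injected variables.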
Resulting error:
After unsetting a few vars to bring the count down to 500 or less, the helper works again. A bit of context: I have a namespace with a lot of pods, so even an empty pod has about 450 env vars, all those per-service variables multiplied by the number of services, and a few services have a lot of env vars/secrets, so that explains why I was seeing this 139 exit code only for some apps. And I guess it also might explain @th3fallen's docker vs. private registry issue: I'd guess the private images had some extra vars added on top of the base image during build.
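The "450 env vars in an empty pod" figure above is consistent with how Kubernetes injects service link variables (`FOO_SERVICE_HOST`, `FOO_SERVICE_PORT`, `FOO_PORT`, `FOO_PORT_<n>_TCP`, and friends) into every pod in a namespace. A rough back-of-the-envelope sketch; the service count and the ~7-vars-per-service figure are assumptions, not measured values:

```shell
# Hypothetical arithmetic: each Service contributes roughly 7 injected env
# vars per pod, so a namespace with ~65 services already puts every pod
# near the ~500-variable threshold before its own env vars are counted.
services=65
vars_per_service=7
total=$(( services * vars_per_service ))
echo "$total"
```

This is why a single extra service or secret can push some apps over the limit while others stay under it.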
@tbondarchuk
Hi, I'm experiencing a panic when running the dev command. First of all, it's worth mentioning that this panic is not happening on all machines I tested, but specifically on a single one (making it even more odd). The machine is a MacBook Pro with an M1 chip and 32 GB RAM, updated to the latest macOS version (13.2.1). I'm running Docker Desktop 4.26.0 (it was failing with 4.25.x as well). I have reinstalled Docker Desktop several times, and reprovisioned its Kubernetes cluster from scratch several times as well. In my tests, running the original helper binary reproduced the panic.
Notice the exit code 139 (SIGSEGV). I played with the devspace source code and compiled a version myself, adding some debug logging here and there, to try to figure out where exactly the panic was coming from. No debug output was ever printed, meaning the panic was happening before the devspacehelper code was hit. So I tried to unpack the binary with:
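A plausible sketch of that unpacking step; the binary path is an assumption, but `-d` is upx's standard decompress flag:

```shell
# Decompress a upx-packed executable in place (path assumed; requires upx
# to be installed locally).
upx -d ./devspacehelper
```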
Then running the same command as above with the uncompressed binary worked (no panic). Moreover, one can run this easy test to assess the maximum number of env vars UPX can support:
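A hedged reconstruction of such a test; the `./devspacehelper` path, its `version` subcommand, and the probed counts are all assumptions. It runs the packed binary with an increasing number of env vars and reports which counts fail:

```shell
# Probe the env-var threshold of a upx-packed binary (assumed to sit at
# ./devspacehelper in the current directory). Per the report, counts above
# 500 are expected to crash while 500 and below succeed.
for n in 490 495 500 501 505; do
  args=""
  for i in $(seq 1 "$n"); do args="$args DSTEST_$i=x"; done
  if env $args ./devspacehelper version >/dev/null 2>&1; then
    echo "$n vars: ok"
  else
    echo "$n vars: crashed ($?)"
  fi
done
```

Note that if the binary is missing, every count reports a failure (exit 127 from `env`), so check the path first.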
At >500 there is a panic. So, to me, this is definitely a UPX problem. I went to their GitHub project and saw no open issue describing this problem. Before opening an issue on their side (which I would guess should be up to the devspace maintainers to do), I would like to know if anybody can reproduce this same behavior on their end. Or even whether the devspace maintainers could consider an alternative to UPX (I know, bold move) for compressing the binary.
This is a mitigation for devspace-sh#2448 (comment) Signed-off-by: Matthias Riegler <matthias.riegler@ankorstore.com>
For those curious, I opened an issue on the UPX project: upx/upx#743
This is a mitigation for devspace-sh#2448 (comment) Signed-off-by: Matthias Riegler <matthias.riegler@ankorstore.com> Signed-off-by: André S. Hansen <andre.ok@online.no>
What happened?
Devspace cannot sync; it fails with the error
start_dev: initial sync: Sync - connection lost: command terminated with exit code 139
with an amd64 container in an arm64 cluster. In the directory I see:
where qemu_devspacehelper is what appears after each attempt.
What did you expect to happen instead?
Successful sync.
How can we reproduce the bug? (as minimally and precisely as possible)
Mac ARM (m1/m2)
My devspace.yaml:
Local Environment:
Kubernetes Cluster:
Kustomize Version: v4.5.7
Server Version: v1.24.6+k3s1
Anything else we need to know?