All conversions fail with "Unspecified error", using Orbstack #908
Comments
Hi there. This is likely due to differences between Orbstack and Docker Desktop and how they affect gVisor running inside the container. Please see issue #865, where another user on a non-Docker container runtime is getting similar symptoms, most likely due to different permissions between the VM image used by Docker Desktop and the one used by Colima. See that issue for troubleshooting steps as well. The first step is to find out the container runtime command being run.
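For anyone who wants to see that command without relying on Dangerzone's own logging, one generic trick is to place a small logging shim named `docker` earlier in `PATH` than the real client. This is only a sketch; the binary path and log file location below are placeholders, not anything Dangerzone ships:

```python
#!/usr/bin/env python3
# Hypothetical "docker" shim: logs the full argument list that Dangerzone
# passes to the container runtime, then hands over to the real client.
import os
import sys

REAL_DOCKER = "/usr/local/bin/docker"  # adjust to the real binary's location

# Append the arguments of this invocation to a log file for inspection.
with open("/tmp/dangerzone-docker-args.log", "a") as log:
    log.write(" ".join(sys.argv[1:]) + "\n")

# Exec the real docker so the conversion still proceeds normally.
os.execv(REAL_DOCKER, [REAL_DOCKER, *sys.argv[1:]])
```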
Thanks very much for that. Running Dangerzone via the command line, this is the output showing the error:
Huh. I'm not sure which program exactly is printing that error message.
Yeah, it's weird because it doesn't even seem like a `-B` is being passed anywhere in that output. Running with that extra flag, I get the below:
Can you try adding the following flag?
That did seem to fix it; that error is no longer happening, and it now seems to be waiting to be given a file to process. Is there a way to run Dangerzone normally via the GUI with that flag included?
Yes, the behavior of waiting for a file to process is expected when running this command outside of Dangerzone.
Try replacing line 142 here (`dangerzone/isolation_provider/container.py`, lines 139 to 143 at f739761) with the suggested line.

@apyrgio Perhaps we could just set this seccomp profile unconditionally for all runtimes in order to harmonize these differences. WDYT?
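For illustration only, here is a sketch of what "apply the custom seccomp profile unconditionally" could look like. The profile path, helper name, and image reference are placeholders, not Dangerzone's actual `container.py` code:

```python
# Sketch of the idea: always pass the custom seccomp profile, instead of
# gating it on the detected container runtime and version.
import shlex
import shutil

SECCOMP_PROFILE = "/usr/share/dangerzone/seccomp.json"  # hypothetical path

def build_run_args(image: str) -> list[str]:
    runtime = shutil.which("docker") or shutil.which("podman") or "docker"
    return [
        runtime,
        "run",
        "--rm",
        # Applied for every runtime (Docker Desktop, Orbstack, Colima, ...):
        f"--security-opt=seccomp={SECCOMP_PROFILE}",
        image,
    ]

if __name__ == "__main__":
    # Print the command line that would be executed, for inspection.
    print(shlex.join(build_run_args("dangerzone.rocks/dangerzone")))
```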
Thanks for debugging this Etienne. I actually wanted to run Orbstack before weighing in on this issue. It could be that our custom seccomp profile does not work for whatever reason, and we need to dig deeper. If it does work though, we have to be very careful about setting this seccomp filter unconditionally. My fear is that we may mask updates to that filter from upstream, e.g., in case of a Linux kernel vulnerability.

I'll try to see if Orbstack (and Colima, #865) report themselves in a way we can detect. Unfortunately, container runtimes don't offer a way to show the default seccomp filter that will be used for the container invocation. Moreover, the Linux kernel does not provide anything helpful there either (my understanding is that this is intentional).

Finally, regarding the suggested change: @mstgrv, in order to make the change that Etienne suggests, you need to build Dangerzone from source (check out https://github.com/freedomofpress/dangerzone/blob/main/BUILD.md) or edit the installed Python module directly (nasty). Are you able to do that?
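If editing the installed module is the route taken, one quick way to locate it, assuming Dangerzone was installed as a regular Python package, is a snippet like this:

```python
# Prints where the installed dangerzone package lives, so that the
# container.py file mentioned above can be found and edited in place.
import importlib.util

spec = importlib.util.find_spec("dangerzone")
print(spec.origin if spec else "dangerzone is not importable from this Python")
```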
In addition to (or instead of) opting in specific container technologies in our code, we could have an option in the interface to enable this custom seccomp profile. Or, if we don't want to bother end users with this (it's fairly technical), we could enable it via a setting that we don't expose in the interface. Because these technologies aren't currently supported, that would offer a way for advanced users to tweak their installation. Note that I am also concerned about the security holes this could open...
I don't think that's a likely scenario, but even if it happened, I don't think blocking the system call at the level of the container runtime's syscall filter would make a practical difference.

The longer version: a container runtime's default seccomp profile has to cater to generic workloads and thus needs to allow most syscalls through by design. Any decision to block additional ones carries the risk of breaking currently-working container workloads in ways that are hard for container users to detect. But even if this were the case, I'd still point out that gVisor "seccomps itself" much tighter than the container runtime's default seccomp filter does, and this will remain true by design. These two layers of seccomp filters stack, i.e. every system call to the host needs to clear both filters in order to be executed.

Now, if a Linux kernel vulnerability were to show up in such a way that it would be easy to block through seccomp filters without impacting legitimate workloads (not often the case; most vulnerabilities require a specific sequence of syscalls, which seccomp cannot selectively prevent), then perhaps a container runtime's default seccomp filter would add it. But it's difficult to think of a scenario where that would happen while the gVisor-side seccomp filter would not also be updated to block (or already be blocking) the same system call.

I'd also point out that using the container runtime's default seccomp profile carries the opposite risk: system calls that aren't required by Dangerzone end up in the set of allowed system calls. The fact that older versions of Docker didn't allow some syscalls that newer versions do is an example of how that default set shifts over time.
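To make the stacking point above concrete, here is a toy model of how the two allow-lists combine; the syscall names are illustrative and not taken from any real profile:

```python
# Toy model: a host syscall goes through only if both the runtime's default
# filter and gVisor's self-imposed filter allow it.
runtime_default_allowed = {"read", "write", "openat", "clone", "ptrace"}
gvisor_self_imposed = {"read", "write", "openat"}  # much tighter by design

def host_syscall_permitted(name: str) -> bool:
    # Stacked seccomp allow-list filters behave like a set intersection.
    return name in runtime_default_allowed and name in gvisor_self_imposed

for syscall in ("read", "ptrace", "mount"):
    verdict = "allowed" if host_syscall_permitted(syscall) else "blocked"
    print(f"{syscall}: {verdict}")
```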
Yes, seccomp filters are fairly inscrutable, mostly to prevent applications from figuring out which syscalls they can run without actually trying to run them. This is because seccomp filters can be, and often are, configured to kill the application rather than just return a "permission denied" error (side note: this is another difference between gVisor's seccomp filter and container runtimes'; gVisor's filter can afford to kill the entire container, but container runtime filters can't, because that would be too disruptive to workloads). Even files like /proc/self/status only expose the seccomp mode, not the contents of the filter.

I think the only runtime-agnostic way to find out may be to have Dangerzone run a dummy "probe" Python script that executes a trusted command inside the container.
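As a hedged sketch of what such a probe could look like (this is not an existing Dangerzone feature; the runtime, image name, and command below are assumptions):

```python
# Run a harmless command in the container image and report whether it was
# killed by the seccomp filter rather than exiting normally.
import signal
import subprocess

RUNTIME = "docker"                       # or whichever runtime is configured
IMAGE = "dangerzone.rocks/dangerzone"    # assumed image name
PROBE_CMD = ["/usr/bin/true"]            # trusted, harmless command

result = subprocess.run([RUNTIME, "run", "--rm", IMAGE, *PROBE_CMD])

# Runtimes conventionally report "container killed by signal N" as exit 128+N.
if result.returncode == 128 + signal.SIGSYS:
    print("Probe killed by SIGSYS: likely a seccomp configuration problem.")
elif result.returncode == 0:
    print("Probe ran fine: the runtime and its seccomp setup look healthy.")
else:
    print(f"Probe exited with status {result.returncode}.")
```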
Thanks a lot for this comment Etienne, it makes a lot of sense.
To this point, would it make sense then to offer a seccomp filter for the outer container that holds only the syscalls needed by the Sentry and the Gofer? Actually, is there such a list somewhere that we can take a look at? The only drawbacks I see in this approach are:
We can live with those, I believe.
Nice, cool to know that.
Exactly, I was thinking about something like this as well. Not for detecting seccomp config issues specifically, but mostly as a Dangerzone health check.
gVisor's self-imposed syscall filters depend a lot on gVisor's configuration. The code for generating these filters lives in the gVisor repository, on the Sentry side. At runtime, there are also factors that affect the filters. For example, if #898 (turning DirectFS off) is merged, it will cause the filters to start blocking the host filesystem syscalls that DirectFS currently requires.

I'm not convinced that having a tight seccomp filter at the container runtime level would really be meaningful from a security perspective, for two reasons.

The first is simplicity. As the gVisor integration design doc states, the goal of the inner sandboxing layer is to act as the security layer, while the outer one acts mostly as a compatibility layer (with any extra lockdown settings being a bonus). As this bug and #865 demonstrate, seccomp-bpf enforcement at the host container runtime level already means different things depending on the container runtime in use, and there are likely further subtleties depending on machine architecture that I suspect we haven't seen yet. gVisor already puts a tight seccomp filter on itself, and does so at a time when it is still trusted to do so. Having another filter doesn't really add defense in depth, because it just gets stacked in the same host kernel codepath that gVisor's host syscalls go through. By definition, it cannot be stricter than gVisor's own filter, so all it may protect against is either the scenario above (the container runtime's default filter being updated to ban a specific syscall ahead of Dangerzone/gVisor doing so) or a logic bug in gVisor's syscall filter generation that accidentally allows bad syscalls through, which is IMO quite unlikely. But I'm obviously biased here because I wrote a large part of gVisor's syscall filter generation logic (and I'll be the first to admit that it is quite complex... I wrote about it here). There is an automated test that verifies that the logic produces a filter which does end up killing a process if it calls a blocked syscall. Still, it's true that a bug of this kind could slip through.

The second reason is that the outer container's seccomp filter needs to cover the entire lifetime of the outer container. This is unlike gVisor's system call filters, which are only applied after gVisor sandbox initialization, i.e. when the sandbox is fully initialized but just before any untrusted code starts running; not at gVisor startup time. This is why gVisor can afford to block syscalls that are only needed during its own startup. Therefore, while I'd be happy to provide a programmatic way (like, say, a flag or script that dumps the generated filter) to derive such a list, the outer filter would still have to allow everything gVisor needs before its own filter is installed. At that point, the resulting set of syscall filters would be quite wide, which is why I'm not sure the incremental security it provides is worth the maintenance cost; at least not when approaching the problem from this angle.

All that being said, I am in full support of removing per-container-runtime differences, because it would prevent issues like this one from occurring. And since Dangerzone already bears the burden of maintaining an explicit seccomp profile for some container runtimes (for Docker <= 24), and gVisor is confirmed to work under it without issues, perhaps the incremental cost of tightening this filter is worth it. Then it seems to me like it would be easy to simply apply this seccomp profile under all container runtimes (since there's no reason why the same image and the same command line would call different syscalls under different container runtimes).
Another approach that may be lower-maintenance for arriving at a "locked down as much as possible" seccomp filter at the outer container level (even if still relatively loose, as per the above) may be the following: write a syscall filter that allows every single system call that exists (explicitly listed one by one, no wildcard), and then whittle it down as needed while ensuring that all tests still pass. This process can be automated: have a script try to remove each syscall in turn from the "allowed" set and see if any test breaks, then drop from the final generated profile all the syscalls whose removal kept the tests passing. That would be another good way of arriving at a good profile that can be used across all container runtimes, and it is independent of gVisor implementation details.
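A minimal sketch of what that automation could look like, assuming a Docker-style seccomp JSON profile and a placeholder test command (neither is taken from the Dangerzone codebase):

```python
# Whittle-down sketch: start from a profile that explicitly allows every
# syscall, then drop one at a time and keep the removal only if the test
# suite still passes with the candidate profile applied.
import json
import subprocess

def write_profile(path: str, allowed: set[str]) -> None:
    # Emit a Docker/OCI-style seccomp profile that allows only `allowed`.
    profile = {
        "defaultAction": "SCMP_ACT_ERRNO",
        "syscalls": [{"names": sorted(allowed), "action": "SCMP_ACT_ALLOW"}],
    }
    with open(path, "w") as f:
        json.dump(profile, f, indent=2)

def run_test_suite(profile_path: str) -> bool:
    # Placeholder command; swap in however the project runs its tests.
    cmd = ["make", "test", f"SECCOMP_PROFILE={profile_path}"]
    return subprocess.run(cmd).returncode == 0

def whittle_down(all_syscalls: set[str], profile_path: str) -> set[str]:
    allowed = set(all_syscalls)
    for syscall in sorted(all_syscalls):
        candidate = allowed - {syscall}
        write_profile(profile_path, candidate)
        if run_test_suite(profile_path):
            allowed = candidate  # tests still pass, so this syscall is not needed
    write_profile(profile_path, allowed)
    return allowed
```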
As per Etienne Perot's comment on #908: > Then it seems to me like it would be easy to simply apply this seccomp profile under all container runtimes (since there's no reason why the same image and the same command-line would call different syscalls under different container runtimes).
I confirm that changing the default seccomp policy works in this case: I've been able to run a conversion on Orbstack with the changes listed in #926 |
I've been trying to use Dangerzone 0.7.0 on some files recently, and every conversion fails with "Unspecified error".
I'm using Dangerzone with Orbstack (https://orbstack.dev/) rather than Docker Desktop, so perhaps there's some configuration issue or something? I'm not sure how to generate logs, but let me know if there's anything I can provide. I used to use Dangerzone & Orbstack without any issues, but perhaps there's been a breaking change somewhere recently on either end.