Proposal for a static way to "prove" an AppImage has capabilities such as sandboxes #839
Thanks @Elv13. Does it need to be so complex? When a new AppImage is opened, the signature could be checked against an (ideally peer-produced) index of "known good" keys, and if it has one, be executed as a fully trusted application without sandboxing; if it doesn't, then run in untrusted = sandboxed mode. Beginnings of this are already coded in the optional

What I don't want to do is maintain a central list of "known good" keys, or have it maintained by one person or entity. Instead, we should think about how we can best peer produce a web of trust.
@probonopd you got it wrong, it seems. It took me a while to understand this as well. I would offer to explain it to you off GitHub this week. Just let me know when you're free.
That's my point: it is long and complicated, but I like short and easy. :-) No offense
It's actually less complicated than one might think at first. In the end, one of the "disadvantages" users see when looking at AppImage is that it's said to be "less secure" because of the lack of sandboxing. There are multiple problems that need to be solved in order to make sandboxing compatible with our decentralized approach, but this proposal could solve one of these. As I explained to @Elv13 already, this is not interesting until we have a secure way to distribute Firejail profiles within AppImages (which is a big problem we have no solution for yet), but once we can do that, this proposal will become relevant.
If you distribute the profiles inside the AppImage, then a) you give preference to one certain sandbox, and b) a "bad" app could simply ship a "relaxed" profile. I think the level of isolation needs to be determined by something on the system (e.g.,
I think that yes, it does. I am sorry it has to be, but there is little to do about it. Computer security is often a very complex research topic, and handling its problems a very prescriptive and restrictive process. What is done here is called a chain of trust. This is a very well documented topic with a lot of good reading available on the Internet (note that it is not an X.509 certificate chain of trust; that is one application of the concept, but not the one proposed here). I will not try to summarize it in too much detail because there is a lot of existing documentation on the topic. That being said, you can view it this way: when a payload goes from A to B, each step it takes has to carry a "trust" from its predecessor to its successor in a way that can be cryptographically proven. This chain has to be carefully studied to ensure it has no missing links. If it does, the whole chain is worthless. This is why long documentation is mandatory in this field of study. There are official forms for this paperwork, but I will spare you them because they are mind-bogglingly verbose and boring. In the case of this proposal, the original trusted objects are:
Those trusted elements have to get as close to the user as possible. They have to get there before the user executes anything. I use bold for that sentence because it is key. This whole scheme exists to allow both users with AppImageLauncher/appimaged and users without it to get these trusted elements.
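As a rough illustration of the chain-of-trust idea described above, here is a minimal, self-contained sketch. It deliberately uses HMACs as a stand-in for real OpenPGP/gpg signatures, and all key and payload names are hypothetical; the point is only that each link must verifiably vouch for the next, and that a single bad or missing link invalidates the whole chain.

```python
import hashlib
import hmac

def digest(payload: bytes) -> bytes:
    return hashlib.sha256(payload).digest()

def sign(key: bytes, payload: bytes) -> bytes:
    # Toy stand-in for a real OpenPGP signature: HMAC over the payload digest.
    return hmac.new(key, digest(payload), hashlib.sha256).digest()

def verify(key: bytes, payload: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(key, payload), signature)

def verify_chain(root_key: bytes, links) -> bool:
    """Each link is (payload, key_for_next_link, signature_by_predecessor).

    The chain only holds if every link is vouched for by its predecessor,
    starting from the root of trust; one bad link makes it worthless.
    """
    key = root_key
    for payload, next_key, signature in links:
        if not verify(key, payload, signature):
            return False
        key = next_key
    return True
```

For example, a root key could vouch for a tool key and a runtime payload, and the tool key could in turn vouch for a bundled profile; tampering with any payload then fails the whole verification.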
This position may be worth reconsidering. While generally true, this solution may be out of scope for such a system. What I mean here is that the "software store" might want to parse the
Again, this is generally true but, like the previous question, out of scope. appimaged may want to enforce "better" sandboxes, and that is a good thing. However, if the profile can be extracted and statically analyzed before the AppImage is executed, this burden is moved to the static analysis tool. Also, if the user doesn't have

This proposal is intended to statically prove a minimum set of sandboxing capabilities without executing anything. "Minimal" is the important part. If the user's system has tooling to provide a better experience, then great: that user will get more secure sandboxing. But the fact is that, as of now, very few distributions ship such AppImage-related runtimes by default, so providing this capability in the middleman is more important in the short term. Plus, it fits well with the decentralized nature of the AppImage concept, since your organization only has to provide some elements of the "chain of trust" rather than carry it all.
The way I see it, the only viable way to distribute profiles for applications is to put them into the AppImages. There is no chance to download profiles for applications from the Internet automatically, because how would you securely match an application to its profile? There are no "one size fits all" profiles; profiles will always be application specific. That's how e.g., AppArmor works, too: the profiles are shipped in the application packages.

Until profile distribution has been solved, we don't need any sort of Firejail distribution system, because without any profiles, distributing Firejail is pointless. If someone has suggestions for how we (or some external project) could set up a secure infrastructure for distributing the profiles, please provide a detailed description of it.

Re. things like "better sandboxes": again, there are no "one size fits all" profiles. You can't just use a random profile to sandbox an application. The stricter it is, the more likely it is to break something; the more open it is, the less secure it will be. It is impossible to design some "general" profile.
I don't get the point of the AppImage defining its own profiles. Isn't this like letting a potential criminal define his own laws?
Of course you need to check whether the profile is "good" (read: secure, trustworthy, etc.). That's the part we need to solve. Firejail currently tries to use some "generic" profile, but it's not secure at all and doesn't really restrict anything. This just doesn't work. We need AppImage-specific profiles, and the easiest way to accomplish this is to ship the profiles as part of the bundle. The only realistic way to check whether a profile is trustworthy is maintaining a public-key infrastructure (similar to how TLS certificates are managed), but this has huge drawbacks, too. And that only proves the profile was created by a trustworthy origin, not that it's secure. As @Elv13 described, it's really complex to get this "chain of trust" right, and many aspects need to be considered.
Correct. It is essentially a placeholder.
I am trying to think simple here. Trusted apps should run unrestricted, whereas untrusted apps should run highly restricted (e.g., no write rights in |
What are the differences between AppImage and snap?
Please see https://github.com/AppImage/AppImageKit/wiki/Similar-projects. In a nutshell, with AppImage, "one app = one file", no runtime needs to be installed on the system first.
There is https://appimage.github.io/apps/ as an overview, but we recommend that you download applications directly from the author's website.
Is there a data area in AppImages that users can edit (or could there be)? If so, tools could use that to store / extract profiles for things like Firejail.
AppImages are read-only by design, so that one can e.g., easily calculate checksums, verify signatures, and do delta updates. So, the profile would either have to be supplied as part of the AppImage by its creator, or would have to be stored externally outside of the AppImage.
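To illustrate why the read-only property matters, here is a small sketch of checksum verification: because the AppImage never changes on disk, a digest computed once (e.g., by a store or an update tool) stays valid for the file's lifetime. The function name and chunk size are illustrative only, not part of any AppImage tooling.

```python
import hashlib

def appimage_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream-hash a file so even large AppImages never load fully into RAM.

    Since AppImages are read-only, this digest can be published once and
    compared against later downloads or local copies.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```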
This may not be a problem that AppImage has to solve. There seems to be a movement away from blanket permissions that must be granted at install time towards more finely grained permissions that can be given or withheld at runtime on an as-needed basis. This has the advantage that users are not pressured into pressing an "accept all" button at the outset, but can actually pick and choose which permissions to give based on the features they actually want to use.

For example, when installing certain Android apps it used to say "this app needs permission to access files and folders on your device", and you would either grant that permission or you wouldn't be able to install the app. These days, however, apps install and run without requiring any permissions until you actually try to use a feature that needs one, so you can use the Dropbox app to browse your online storage, but as soon as you try to access the local storage you are prompted to give permission.

Anyway, the point is that full sandboxing will probably become the default, and it will be up to apps to request services on an as-needed basis and to handle withheld permissions gracefully. If this happens there would be no need for AppImages to define security profiles, as that would be something the application itself negotiates directly with the system after it is already running.
@shoogle this is something we plan for AppImageLauncher: Android-style permission requesting and granting. Authors might optionally include a little meta file requesting a few permissions; otherwise we might pick sane defaults (full network access but no personal data access, or something like that).
@TheAssassin, sounds awesome! I'd have thought that runtime permissions would need to be implemented at a lower level (i.e. in the system libraries), but if you can bolt it on afterwards then that's pretty impressive! I suppose if you can find a way to intercept, and potentially reject, standard filesystem requests, then the application will just handle it using its existing exception mechanism. There's a risk the application might display a "permission denied" message that implies the user doesn't have permission to access the file, when in reality it is the application that doesn't have permission, but apps would eventually be updated to take this into account.
That's not really possible in POSIX. Attack vectors such as ROP, or even gobject-introspection / QMetaType scripts, could easily be used to build privilege-escalation exploits. Even
Actually let's plan this for libappimage. |
This cannot be "planned for libappimage", this is waaaaaay out of scope... The "plan" here is to combine something like firejail with AppImageLauncher. AppImageLauncher can enforce (to some extent) the execution through the sandbox, and it can generate a profile for every app by asking the user a set of simple questions ("allow network access", "allow access to your files", ...). |
The beginnings of it have been in |
It's not like anyone can generate a PGP key and replace the signatures in any AppImage... |
Hence we'd need a list of trusted PGP keys. This list could either be centrally stored, or, as is my personal preference, somehow built from the user's "social graph". |
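As a toy model of the "social graph" idea (purely illustrative, not an existing AppImage mechanism), trust could propagate along endorsements between keys, bounded by a hop count so trust does not leak indefinitely:

```python
from collections import deque

def trusted_keys(own_keys, endorsements, max_hops=2):
    """Toy web-of-trust walk: a key becomes trusted if a chain of at most
    max_hops endorsements connects it to a key the user already trusts.

    `endorsements` maps a key to the keys it vouches for. In a real system
    each endorsement would itself be a verified signature; this sketch only
    shows the breadth-first graph walk.
    """
    trusted = set(own_keys)
    frontier = deque((k, 0) for k in own_keys)
    while frontier:
        key, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't extend trust beyond the hop limit
        for endorsed in endorsements.get(key, ()):
            if endorsed not in trusted:
                trusted.add(endorsed)
                frontier.append((endorsed, hops + 1))
    return trusted
```

Real web-of-trust systems (e.g., OpenPGP's) additionally weight endorsements and require the signatures themselves to be verified; the hop limit here is just one simple way to bound transitive trust.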
This issue documents a discussion with @TheAssassin regarding how to provide a
way to securely prove that an AppImage runs in a FireJail sandbox before
execution, with and without AppImageLauncher.
What is the issue
Flatpak and Snap both enforce sandboxing, but AppImage doesn't. It isn't a good
way to sell this technology if the user perceives it as less secure. On the
other hand, Flatpak/Snap are much larger and more integrated stacks that try to
fix too many problems, while AppImage tries to solve bundling and nothing else.
That doesn't mean AppImage cannot be secure. In fact, many already use external
sandbox projects to provide some level of extra security. One major downside of
such AppImages is that there is little way to know ahead of time:
Where can it be solved
A solution to this problem can be provided by a combination of improvements to
appimagetool to ensure a sandbox is used. This solution has to be used in 2
main places:
there is guaranteed sandboxing.
AppImages
information
What this solution does not attempt to solve
In the Unix philosophy, we have small tools that solve one problem and only
that problem. This solution doesn't attempt to prove that:
Enough talking, what's your damn idea?
When creating "SandboxedAppImageTool"
FireJail (probably the runtime is more suitable for that)
cannot weaken it
be provided (first we need to provide a standardized way to distribute firejail profiles within AppImages)
When creating an AppImage
prohibited
When uploading an AppImage to a "store"
4.1) Warn about AppImages with an old FireJail with known CVEs
When a user without AppImageLauncher and/or without FireJail gets an AppImage
profile (or trust them to validate it's secure)
When you have AppImageLauncher
to see the profile.
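The static-analysis step sketched above could, for example, reject bundled profiles that loosen the baseline sandbox before anything is executed. The directives below are real Firejail options, but where the profile would live inside the AppImage and which directives count as "weakening" are open questions, so treat this purely as an illustrative sketch:

```python
# Which directives count as "weakening" is an illustrative assumption,
# not an agreed-upon policy; noblacklist/ignore/allusers are real
# Firejail profile directives that relax restrictions.
FORBIDDEN_DIRECTIVES = {"noblacklist", "ignore", "allusers"}

def profile_weakens_sandbox(profile_text: str) -> bool:
    """Flag any profile line whose directive loosens the baseline sandbox.

    This runs before execution: the profile is inspected as plain text,
    so nothing from the AppImage has to be trusted or run first.
    """
    for line in profile_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        directive = line.split(None, 1)[0]
        if directive in FORBIDDEN_DIRECTIVES:
            return True
    return False
```

A store or appimaged could run such a check on extraction and refuse (or warn about) AppImages whose bundled profile fails it, without ever executing the payload.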
What are the problems with this
Someone will need to get and keep a certificate to sign the "trusted"
runtime and firejail. The AppImage team is unsure whether they can provide that.
It solves a tiny part of the wider problem regarding sandboxing. It doesn't
provide a DBus proxy, doesn't manage overlays, and has no X11 firewall. Such
capabilities could be implemented in a similar way, but they aren't part of
this suggestion.
Why do you still think it's necessary
Because "linting" and "static analysis" are among the only ways to add features
without bloating the AppImage project with more things. It should still
concentrate on bundling and let other services handle their part. You still
have a semi-centralized way to prove such services are enabled, and this is
what I propose.