aktualizr fails to download reposerver files backed by S3 #1273
Comments
Wild issue. I think I've sorted things out. I've been toying with a new tool, aktualizr-get, which helped here. When I ran the tool inside the aktualizr container, things worked. When I ran it on my rpi3 OE build, it failed. I then isolated it to just running the command with the Alpine version of curl versus the Debian version. I'm guessing Debian is carrying around some distro patch that ensures curl always looks at /etc/ssl/certs, because when I added this hack, the bug went away. I'm not sure that patch is the way you want this fixed. Feel free to fix it another way or let me know how you'd like me to proceed.
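The hack itself isn't shown above; purely as an illustration, a minimal sketch of what such a hack might look like in a curl handle setup (the function name, handle variable, and hardcoded path are assumptions, not the actual patch):

```cpp
#include <curl/curl.h>

// Illustrative sketch only, not the actual patch from the comment above.
// Alongside the gateway CA bundle already configured via CURLOPT_CAINFO,
// also point curl at the system certificate directory so that hosts reached
// via redirects (signed by public CAs) can still be verified.
static void configure_ca(CURL *curl, const char *gateway_ca_file) {
  curl_easy_setopt(curl, CURLOPT_CAINFO, gateway_ca_file);   // gateway CA bundle
  curl_easy_setopt(curl, CURLOPT_CAPATH, "/etc/ssl/certs");  // assumed system CA directory
}
```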
Ah, interesting. I'm not completely sure what's going on in libaktualizr, but the garage tools do set
@rsalveti has been looking at this a bit. He's found that OE and Alpine both build curl differently than Debian does. Quoting him:
He's keen on fixing curl in OE right now, but we may want to fix things here as well, just to save someone else from debugging the same thing I just did.
Right, the issue here is that when you set a CA certificate via cacert, it ends up replacing the default ca-bundle, so curl is unable to find additional certificates when needed (e.g. after redirects). Building curl with a default ca-path would fix the issue, but another fix would be to also set it here (which is probably better, as it would work on any system).
So my thinking is that we should actually fix this in both places (OE, so other users don't face the same issue, and aktualizr, to make sure it works correctly across multiple systems).
That sounds like a good plan. Does it seem reasonable to have a parameter in our configuration that is a path to the CA certs directory? I don't like having it hardcoded, although it's fine for
@patrickvacek - I think that makes sense. I just got back from vacation and will get some time to finish this up early next week.
@patrickvacek - just looked at doing your suggestion a little more closely. I think the problem with making it a config variable is that it's going to require every call site of HttpClient to pass in a reference to the Config object. I'm thinking it might be easier to make this a CMake option like -DSSL_CERT_PATH=<value or /etc/ssl/certs>. Does that make sense?
If the cert path was passed to the HttpClient as an optional string, I think that would be fine. I don't particularly like expanding the CMake options if we can help it. That makes things harder to test and less flexible in general.
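For illustration, a minimal sketch of how a CA path could be passed to the HTTP client as an optional string rather than a CMake option (the class and member names are hypothetical, not aktualizr's actual API):

```cpp
#include <curl/curl.h>
#include <string>
#include <utility>

// Hypothetical sketch: the CA directory arrives as an optional constructor
// argument with a sane default, so call sites don't need a Config reference
// and no new CMake option is required.
class HttpClientSketch {
 public:
  explicit HttpClientSketch(std::string ca_path = "/etc/ssl/certs")
      : ca_path_(std::move(ca_path)), curl_(curl_easy_init()) {
    if (curl_ != nullptr && !ca_path_.empty()) {
      curl_easy_setopt(curl_, CURLOPT_CAPATH, ca_path_.c_str());
    }
  }
  ~HttpClientSketch() { curl_easy_cleanup(curl_); }

 private:
  std::string ca_path_;
  CURL *curl_;
};
```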
Fixes advancedtelematic#1273
Some versions of OE don't explicitly set a default path to the CA certs directory. This ensures it's set and can be overridden if needed.
Signed-off-by: Andy Doan <andy@foundries.io>
This change basically means that in our communication with the device gateway we will trust any certificate signed by a CA in the OS bundle. IMO, this can open us up to a range of attacks if one of those CAs gets compromised.
This should not pose any immediate risk unless we have implemented Uptane incorrectly: the framework's security guarantees are independent of transport channel security, even if we work over a completely open channel. Having said that, I do think we should double-check that this parameter is clearly documented: some integrators might want to explicitly set CURLOPT_CAPATH to something they control (or even to /nonexistent) without interfering with the system-wide certificate store.

There have been reports of public CAs (those you normally expect to be in /etc/ssl/certs) compromised by individuals [1] and organizations [2], as well as government actors actively seeking to get into the default CA list [3]. On the one hand it's reasonable to expect that on a hardened embedded system /etc/ssl/certs is tightly controlled; on the other hand it's one of those details that is trivial to overlook.

After looking at https://docs.ota.here.com/ota-client/latest/aktualizr-config-options.html I can't find anything about CURLOPT_CAPATH - am I looking in the wrong place?

Exec summary: having this option is good (it's not up to us to decide a client's security policy on which CAs to trust), but it has to be properly documented so that the client makes an informed decision instead of a whatever-works-default-what's-a-CA-anyway kind of thing.

P.S. And doh! on me for overlooking the related commit.

[1] https://blog.erratasec.com/2011/03/comodo-hacker-releases-his-manifesto.html
Like receiving the initial root.json over this channel?
Yepp, we're completely screwed in this case. Though the credentials.zip is almost inevitably retrieved via a browser, which uses the same CA list, so if an attacker can subvert one of the CAs then he can already substitute the URL from which the initial root.json will be retrieved. Hence I'm not sure how to assess this risk. Any ideas?

Another scenario to consider is when the attacker has managed to inject a redirect to a URL he controls (and for which he can obtain a legitimate cert from one of the default CAs) into the endpoint serving root.json. In this case not using the default CA list would certainly help, but the likelihood of a compromise that allows a redirect yet does not allow replacing root.json is an open question.

Overall I think it boils down to what default we set for CURLOPT_CAPATH: /etc/ssl/certs or /nonexistent - whether we think the risk from the latter scenario outweighs the confusion described in this ticket. Thoughts? Regardless of that choice, I think we should update the documentation to reflect this.
It might also be useful to distinguish between metadata and image fetching. We can set the default CAPATH to /nonexistent for the former and to /etc/ssl/certs for the latter. This should harden security in the aforementioned scenarios without breaking custom-URL use cases. I, of course, agree that we should still make this configurable and reflect it in the documentation.
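As an illustration of that split (assumed handle setup, not actual aktualizr code), the client used for Uptane metadata could disable the default CA store while the client used for image downloads keeps it:

```cpp
#include <curl/curl.h>

// Sketch: separate curl handles for metadata and image fetching with
// different CAPATH defaults, per the suggestion above.
static CURL *make_handle(const char *ca_path) {
  CURL *h = curl_easy_init();
  if (h != nullptr) {
    curl_easy_setopt(h, CURLOPT_CAPATH, ca_path);
  }
  return h;
}

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *metadata = make_handle("/nonexistent");    // metadata: trust no default CAs
  CURL *images   = make_handle("/etc/ssl/certs");  // images: trust the system store
  // ... fetch Uptane metadata with `metadata` and target binaries with `images` ...
  curl_easy_cleanup(metadata);
  curl_easy_cleanup(images);
  curl_global_cleanup();
  return 0;
}
```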
I'm going to try and dig into this, but before I get too far I thought I'd describe the issue in case someone already knows the root cause. I have a tuf-reposerver deployed that's backed by an S3 bucket. When I call PackageManagerInterface::fetchTarget, it gets the 302 redirect to a signed URL, but then the download fails. It looks like it may be something with the HttpClient sending the gateway certs to my S3 bucket (in my case it's Google Storage acting as S3):
This is a bit strange to me because aktualizr has no problems talking to TreeHub, which is backed by the exact same bucket. I'll do some poking around tonight, but I was curious whether anybody had any clues.
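For context, a standalone libcurl sketch of the failing pattern as later diagnosed in this thread (the URL and all paths are placeholders, not the real configuration): pinning CURLOPT_CAINFO to the gateway CA replaces the default bundle, so when the 302 redirect lands on a host signed by a public CA, verification fails unless a CA directory is also available.

```cpp
#include <curl/curl.h>

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init();
  if (curl == nullptr) { return 1; }
  curl_easy_setopt(curl, CURLOPT_URL, "https://reposerver.example/targets/foo");  // placeholder URL
  curl_easy_setopt(curl, CURLOPT_SSLCERT, "/var/sota/client.pem");  // assumed device cert paths
  curl_easy_setopt(curl, CURLOPT_SSLKEY, "/var/sota/pkey.pem");
  curl_easy_setopt(curl, CURLOPT_CAINFO, "/var/sota/root.crt");     // gateway CA only
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);               // follow the 302 to S3/GCS
  // Without CURLOPT_CAPATH (or a curl build that compiles in a default CA directory),
  // the redirected host's public CA cannot be verified and the transfer fails.
  CURLcode res = curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return (res == CURLE_OK) ? 0 : 1;
}
```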