Using the certificates #18
You are probably approaching this the wrong way. You shouldn't store the client certificate in any local keystore - you simply need to check whether its fingerprint is published in the Registry. If you're using Java, then perhaps other developers can help you with configuring your stack. In particular, see:
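For reference, a minimal sketch of such a fingerprint check in Java, assuming the fingerprint format is hex-encoded SHA-256; the `Fingerprints` class and helper name are illustrative, not an actual RegistryClient API:

```java
import java.security.MessageDigest;
import java.security.cert.X509Certificate;

public class Fingerprints {

    // Hex-encoded SHA-256 digest of the certificate's DER encoding;
    // this is the value one would look up in the Registry.
    static String sha256Fingerprint(X509Certificate cert) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(cert.getEncoded());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```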
We also had to store the accepted certificates in our Java keystore. @wrygiel, do you have any other method of extracting the certificate?
It just seems safer to validate certificates directly using the RegistryClient, rather than synchronizing all client certificates with some global (?) Java KeyStore. But perhaps I misunderstand, and your way is perfectly okay. @mikesteez, could you point me to a file/class/method in which you perform these actions? I will try to find some time to look into your code. I will also speak to @Awerin and ask whether he has encountered similar problems, and how he dealt with them (he's also implementing EWP in Java).
I haven't had the time to look into this in detail, since some of my colleagues are currently away sick. As far as I remember, you have to add the certificate to the keystore when you use it for client authentication as intended by Java, since the request would otherwise be rejected before reaching your application. So our solution is to not rely on certificate authentication during SSL and to allow all certificates using a DummyTrustManager (sketched below), which is not possible for all web applications and is obviously not best practice. Another (in our case bigger) problem is dealing with reverse proxies and/or load balancers. On our web cluster the SSL context is resolved by the proxy, and the certificate never gets anywhere near the application servers. This can be solved with more or less complicated configuration on the proxy (this, this) or by separating the EWP host from the other applications. Having the XML payload signed could make these issues a lot simpler to solve.
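For clarity, a minimal sketch of what such a DummyTrustManager could look like - the class name comes from the comment above, the body is an assumption, and (as noted) this is a workaround rather than best practice:

```java
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

public class DummyTrustManager implements X509TrustManager {

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) {
        // Accept every client certificate during the TLS handshake;
        // the real fingerprint check happens later, in the application.
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) {
        // Accept everything.
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}
```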
The problem, as we see it, is that in order to fulfill the requirement of the Echo API (Step 1: Verify the SSL certificate), the implemented web service needs to access the SSL certificate used in the transport. However, this certificate is verified and stripped in a lower transport layer, and is therefore not accessible in the web service. We have discussed this with Sweden, and as far as both we and they understand, the problem is that in order to get the certificate to "survive" the lower transport layers, the certificate must be stored in the local trust store in advance. We have yet to see a good solution for this problem, unless we have missed something. The solution we have presented here is an alternative, in case it turns out to be technically difficult to implement the SSL verification as it is specified today.
Okay, so two issues were mentioned here.
As to the alternative solution proposed - note that XML-DSig is for signing XML, but (currently?) our requests are not XML (only the responses are). Changing this would be a major task, both in the specs and in the code. It would also make it impossible to (a) debug APIs in a browser, and (b) use proxies for signing client requests (all clients would be required to implement XML-DSig).
OK, we have been having some issues, and it would be nice to see a solution to issue 2. Could you persuade @Awerin to share his solution, as was agreed in June in Warszawa?
I now had a look at our production environment, and our solution is as follows: we use Apache 2.4 as a proxy and load balancer. SSL is terminated by the Apache proxy. The important part of the config is:
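A sketch of what such an Apache 2.4 configuration could look like; `SSLVerifyClient`, `SSLOptions`, and `RequestHeader` are standard mod_ssl/mod_headers directives, but the concrete values and the backend address are illustrative, not the poster's actual config:

```apache
# Accept client certificates without CA validation; the application
# performs the real (fingerprint) check later.
SSLEngine on
SSLVerifyClient optional_no_ca
SSLOptions +ExportCertData

# Forward the client's PEM certificate to the backend as a request header.
RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"

# Proxy to the application servers (hypothetical backend address).
ProxyPass        / http://app-backend:8080/
ProxyPassReverse / http://app-backend:8080/
```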
On the application server we use a request filter, and inside this filter we do the cert validation:
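A hedged reconstruction of such a filter, assuming the proxy forwards the PEM certificate in an `SSL_CLIENT_CERT` header as in the config sketch above; the `ClientCertFilter` name and the lookup helpers are placeholders, not the poster's actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Base64;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ClientCertFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // The PEM certificate forwarded by the proxy.
        String pem = request.getHeader("SSL_CLIENT_CERT");
        if (pem == null || !pem.contains("BEGIN CERTIFICATE")) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN, "No client certificate");
            return;
        }
        try {
            // The header value arrives on a single line; strip the PEM armor
            // and whitespace, then decode the DER bytes.
            String base64 = pem
                    .replace("-----BEGIN CERTIFICATE-----", "")
                    .replace("-----END CERTIFICATE-----", "")
                    .replaceAll("\\s", "");
            byte[] der = Base64.getDecoder().decode(base64);
            X509Certificate cert = (X509Certificate) CertificateFactory
                    .getInstance("X.509")
                    .generateCertificate(new ByteArrayInputStream(der));

            if (!isCertificateKnown(cert)) {
                response.sendError(HttpServletResponse.SC_FORBIDDEN, "Unknown certificate");
                return;
            }
        } catch (Exception e) {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Malformed certificate");
            return;
        }
        chain.doFilter(req, res);
    }

    // Compute the SHA-256 fingerprint and check it against the Registry
    // (e.g. via the RegistryClient mentioned earlier in this thread).
    private boolean isCertificateKnown(X509Certificate cert) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(cert.getEncoded());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return lookupInRegistry(hex.toString()); // assumed helper, not a real API
    }

    private boolean lookupInRegistry(String sha256Fingerprint) {
        return true; // replace with an actual Registry lookup
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}
```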
Hope this helps somehow.
The equivalent configuration for Tomcat (7) would be either SSLVerifyClient="optionalNoCA" on an APR/native connector, or clientAuth="true" together with a custom javax.net.ssl.X509TrustManager configured via trustManagerClassName.
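For illustration, hypothetical server.xml fragments for those two Tomcat 7 options (ports, file paths, and the trust manager class name are placeholders):

```xml
<!-- APR/native connector: accept certificates that fail CA validation. -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
           SSLEnabled="true" SSLVerifyClient="optionalNoCA"
           SSLCertificateFile="conf/server.crt"
           SSLCertificateKeyFile="conf/server.key" />

<!-- JSSE connector: delegate the decision to a custom trust manager. -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           clientAuth="true"
           trustManagerClassName="eu.example.AcceptAllTrustManager"
           keystoreFile="conf/keystore.jks" keystorePass="changeit" />
```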
OK, I've been banging my head against this for a while now. The idea of the application digging deep into the transport layer after receiving the message breaks the OSI model and is not something we should recommend for our solution. It would present more problems than it would solve, and it would make it harder or even impossible (depending on what technologies they might be using) for new partners to join EWP.

I have a better idea, one that won't require changes to the EWP architecture and that could solve both this AND the self/non-self-signed certificate issue: we can put the signature and information about the certificate in the HTTP headers. In fact, I thought that someone surely must have done this before, and found a few texts, including an Internet-Draft that's been around for a while, describing such a solution down to the smallest detail. Yet it is easy to read and understand. Pros:
Cons:
The draft also presents a concrete example. I found a Java implementation as well (based on version 4 of the draft), and it shouldn't be hard to implement in any language. We could of course make our own standard, but using existing ones is often a better idea.
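To give a feel for the scheme, here is a minimal Java sketch of the client side, assuming RSA-SHA256 and signing only the request target, Host, and Date headers; the key id and the header selection are illustrative, not taken from the draft's example:

```java
import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.Base64;
import java.util.Locale;

public class HttpSignatureSketch {

    // Build the signing string defined by the draft and sign it with
    // RSA-SHA256. The returned value goes into the Authorization header.
    static String signatureHeader(PrivateKey key, String method, String path,
                                  String host, String date) throws Exception {
        String signingString =
                "(request-target): " + method.toLowerCase(Locale.ROOT) + " " + path + "\n"
              + "host: " + host + "\n"
              + "date: " + date;

        Signature rsa = Signature.getInstance("SHA256withRSA");
        rsa.initSign(key);
        rsa.update(signingString.getBytes(StandardCharsets.UTF_8));
        String sig = Base64.getEncoder().encodeToString(rsa.sign());

        // keyId would identify the client's certificate/key in the Registry.
        return "Signature keyId=\"example-key\",algorithm=\"rsa-sha256\","
             + "headers=\"(request-target) host date\",signature=\"" + sig + "\"";
    }
}
```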
The draft looks like a good solution to me. I like the idea of using the authentication header for this instead of TLS. The only con I can think of is that it isn't a common/well-known practice yet, which could also be a hurdle for some implementers - but a less significant one than the current hurdles, I think.
I thought about such solutions a year ago (in particular, the non-repudiation feature was tempting). However, in the end I convinced myself that the solution with TLS client certificates is a safer way to go:
That said, I still think that both solutions are valid. So, if you manage to convince every partner that your solution is better for them, then I won't oppose (much).
I think you still fail to understand the main problem. Everything you wrote above would be true if we had complete control over the whole stack (at least from TLS up). This is, however, not the case. In reality TLS is often implemented in proxies, load balancers, you name it, and getting at these is often not trivial, or even impossible. Our applications often only get the data after all TLS information has been stripped. This is a practical problem already now, in our lab environment, even with so few partners. The problem will only grow with each new partner wanting to join the project, making it impossible for some to even do so.

I completely disagree on the bad-PR argument. Even if the standard we relied on ended up being officially dead, if it works and is well defined, there is no reason not to use it unless someone finds a major security issue with it (which I doubt would be the case, since all the sub-components are well-proven and accepted; this one just puts them together in another way - that is also the reason why I think it would be relatively simple to implement).
The only problem reported thus far was that some proxies do not accept self-signed client certificates. There were no reported problems with forwarding the client certificate details to the application layer (after the certificate is accepted). |
This very GitHub issue is about that!
Sorry, I thought you had already solved this. What proxy are you using?
There were several other problems reported, but already with solutions or workarounds. Use of the headers should work out of the box with all standards-compliant proxies: http://www.w3.org/TR/ct-guidelines/#sec-ProxyReqest There are also some resources out there about problems/solutions in other ecosystems:
It doesn't really matter what we are using, for the bigger picture. When we get into production, we won't have control over which proxy will be used there, and most probably no special configuration will be possible.
I'm almost convinced.
Me, you, and most other developers understand that. But I don't think business people do. That's why I call this a PR problem (not a technical one). For example, if HEIs have two solutions to choose from (EWP and some competitor), and they hear an argument that one of them is "built upon a dead standard", they will surely be at least somewhat inclined to use the other, won't they? It's easy to exploit such statements when speaking with non-technical people. Do you know if there is any way we can determine the chance of this particular proposal getting killed?
Wouldn't it be safer to use OAuth 1.0a with …
Doesn't that only work for signing HTTP requests, and doesn't it sign only the request parameters? (In addition, 1.0a is obsolete and not recommended for new implementations... 2.0 has, for some reason, removed the request-signing option completely.)
Yes. I agree that header signatures are better, but OAuth 1.0a is also "good enough" for our purposes. And it has much wider support than header signatures have.
But that's a PR argument - the one you just told us wasn't important for you ;)
That was me just being surprised at the fact that YOU had suggested it :) But there is a small difference nonetheless - OAuth 1.0a is on the way out (or it is out already), while the other one is on the way in. I'm wondering whether OAuth 1.0a indeed covers our needs and whether it is safe enough. Perhaps it is. If I understand it correctly:
Given that everything is additionally encapsulated in TLS, maybe it is not such a big deal. I'm all for KISS - if it works and is secure enough for our purposes, I'm fine with it. But I still like the other one much better.
It does cover the POST body, if it has been sent with the application/x-www-form-urlencoded content type.
OAuth 1.0a has built-in protection against replay attacks (see the oauth_nonce and oauth_timestamp parameters).
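For illustration, a hypothetical OAuth 1.0a Authorization header (all values made up, wrapped here for readability); the server rejects requests whose nonce/timestamp combination it has already seen:

```http
Authorization: OAuth oauth_consumer_key="hei.example.org",
               oauth_signature_method="HMAC-SHA1",
               oauth_timestamp="1466500000",
               oauth_nonce="wIjqoS",
               oauth_version="1.0",
               oauth_signature="..."
```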
I have labeled your proposal as "Solution 7" in the original thread.
Sorry if this has been discussed earlier; I couldn't find a reference to it.
While developing our component, we ran into a very concrete problem: getting the information about the certificate that was used on the transport layer, after our application has received the message. There are supposedly several solutions for this, but none worked for us. That is in addition to the (apparent) necessity to store all certificates we should accept in our local keystores.
As it seems that we're not the only ones struggling with this, and thinking also about scaling and future new partners, would an alternative be to use XML-DSig instead of relying on the transport layer (which potentially poses problems with load balancers and other in-between things)? We've done this in EMREX for the transfer of results, and it seems to be working very well (that is, XML-DSig -> gzip -> base64, to make sure to avoid problems with character encoding). Then we can check the signature and the certificate (or public-key) fingerprint in the same manner as planned. A rough sketch of this pipeline is below.
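A self-contained Java sketch of that pipeline, assuming an enveloped XML-DSig signature and a throwaway RSA key; the payload and key handling are illustrative only, not EMREX's actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Base64;
import java.util.Collections;
import java.util.zip.GZIPOutputStream;

import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.SignedInfo;
import javax.xml.crypto.dsig.Transform;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.keyinfo.KeyInfo;
import javax.xml.crypto.dsig.keyinfo.KeyInfoFactory;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;

public class SignGzipBase64 {
    public static void main(String[] args) throws Exception {
        // Hypothetical payload; in practice this would be the response XML.
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new ByteArrayInputStream(
                "<payload>example</payload>".getBytes(StandardCharsets.UTF_8)));

        // Throwaway RSA key for the example; a real host would use the key
        // whose certificate fingerprint is published in the Registry.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        // Enveloped XML-DSig signature over the whole document.
        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA256, null),
                Collections.singletonList(fac.newTransform(
                        Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);
        SignedInfo si = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
                        (C14NMethodParameterSpec) null),
                fac.newSignatureMethod(
                        "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256", null),
                Collections.singletonList(ref));
        KeyInfoFactory kif = fac.getKeyInfoFactory();
        KeyInfo ki = kif.newKeyInfo(Collections.singletonList(
                kif.newKeyValue(kp.getPublic())));
        fac.newXMLSignature(si, ki).sign(
                new DOMSignContext(kp.getPrivate(), doc.getDocumentElement()));

        // Serialize the signed document, gzip it, then Base64-encode the
        // result to keep the payload safe from character-encoding issues.
        ByteArrayOutputStream xml = new ByteArrayOutputStream();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(xml));
        ByteArrayOutputStream gz = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(gz)) {
            out.write(xml.toByteArray());
        }
        System.out.println(Base64.getEncoder().encodeToString(gz.toByteArray()));
    }
}
```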