
Using the certificates #18

Closed
mpuzar opened this issue Jan 31, 2017 · 25 comments

@mpuzar commented Jan 31, 2017

Sorry if this has been discussed earlier; I couldn't find a reference to it.

While developing our component, we ran into a very concrete problem: getting information about the certificate that was used on the transport layer after our application has received the message. There are supposedly several solutions for this, but none worked for us. On top of that, it seems we are required to store all certificates we should accept in our local keystores.

As it seems that we're not the only ones struggling with this, and thinking also about scaling and future new partners, would an alternative be to use XML-DSig instead of relying on the transport layer (which potentially poses problems with load balancers and other in-between things)? We've done this in EMREX for the transfer of results, and it seems to be working very well (that is, XML-DSig -> gzip -> base64, the last step to avoid problems with character encoding; see the sketch below). Then we can check the signature and the certificate (or public-key) fingerprint in the same manner as planned.
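
For concreteness, a minimal sketch of those wrapping steps (the XML-DSig signing itself is omitted; signedXml below is assumed to already hold the signed document):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.zip.GZIPOutputStream;

    public class PayloadWrapper {

        // Compresses an already XML-DSig-signed document with gzip, then
        // base64-encodes the result so the payload is immune to character
        // encoding issues in transit.
        public static String wrap(String signedXml) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
                gz.write(signedXml.getBytes(StandardCharsets.UTF_8));
            }
            return Base64.getEncoder().encodeToString(buf.toByteArray());
        }
    }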

@wrygiel commented Feb 1, 2017

You are probably approaching this the wrong way. You shouldn't store the client certificate in any local keystore - you simply need to check whether its fingerprint is published in the Registry.

If you're using Java, then perhaps other developers might help you with configuring your stack. In particular, see the isCertificateKnown(Certificate clientCert) method in the RegistryClient library.
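
For illustration, a hedged sketch of that check inside a servlet filter. The package name and the way the client is constructed are assumptions here; only the isCertificateKnown call itself comes from the library as referenced above:

    import java.security.cert.X509Certificate;
    import javax.servlet.http.HttpServletRequest;

    // Assumed import; the exact package of the RegistryClient library may differ.
    import eu.erasmuswithoutpaper.registryclient.RegistryClient;

    public class CertificateCheck {

        private final RegistryClient registryClient; // an initialized client instance

        public CertificateCheck(RegistryClient registryClient) {
            this.registryClient = registryClient;
        }

        // No local keystore lookup: just ask the Registry whether the
        // fingerprint of the presented client certificate is published.
        public boolean isRequestAuthorized(HttpServletRequest request) {
            X509Certificate[] certs = (X509Certificate[])
                    request.getAttribute("javax.servlet.request.X509Certificate");
            if (certs == null || certs.length == 0) {
                return false;
            }
            return registryClient.isCertificateKnown(certs[0]);
        }
    }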

@mikesteez

We also had to store the accepted certificates in our Java keystore. @wrygiel, do you have any other method of extracting the certificate?

@wrygiel commented Feb 1, 2017

It just seems safer to validate certificates directly using the RegistryClient, rather than synchronizing all client certificates with some global (?) Java KeyStore. But perhaps I misunderstand, and your way is perfectly okay.

@mikesteez, could you point me to a file/class/method in which you perform these actions? I will try to find some time to look into your code. I will also speak to @Awerin and ask if he has encountered similar problems, and how he dealt with them (he's also implementing EWP in Java).

@georgschermann

I haven't had the time to look into this in detail, since some of my colleagues are currently away sick.

As far as I remember, you have to add the certificate to the keystore when you use it for client authentication as intended by Java, since the request would otherwise be rejected "before reaching your application".
We have a (partly) custom security implementation, since we had similar issues when dealing with Austrian authorities which use a CA that is not trusted by Java.

So our solution is to not rely on certificate authentication during the SSL handshake, and to allow all certificates using a DummyTrustManager - which is not possible for all web applications, and obviously not best practice.
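
For reference, a minimal sketch of what such a trust-all manager looks like (DummyTrustManager is their name for it; this is a generic reconstruction, and, as said, not best practice - certificate validation must then happen at the application layer):

    import java.security.cert.X509Certificate;
    import javax.net.ssl.X509TrustManager;

    // Accepts every certificate chain without validation, so that self-signed
    // client certificates reach the application layer. The application itself
    // must then verify the certificate (e.g. against the Registry).
    public class DummyTrustManager implements X509TrustManager {

        @Override
        public void checkClientTrusted(X509Certificate[] chain, String authType) {
            // no-op: accept any client certificate
        }

        @Override
        public void checkServerTrusted(X509Certificate[] chain, String authType) {
            // no-op: accept any server certificate
        }

        @Override
        public X509Certificate[] getAcceptedIssuers() {
            return new X509Certificate[0];
        }
    }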

Another (in our case bigger) problem is when dealing with reverse proxies and/or load balancers. On our web cluster the SSL context is resolved by the proxy, and the certificate doesn't get anywhere near the application servers. This can be solved with more or less complicated configuration on the proxy (this, this) or by separating the EWP host from the other applications.

Having the XML payload signed could make these issues a lot simpler to solve.

@mpuzar commented Feb 1, 2017

The problem, as we see it, is that in order to fulfill the requirement of the Echo API (Step 1: Verify the SSL certificate), the implemented web service needs to access the SSL certificate used on the transport layer. However, this certificate is verified and stripped in a lower transport layer, and is therefore not accessible to the web service. We have discussed this with Sweden, and as far as both we and they understand, the problem is that for the certificate to "survive" the lower transport layers, it must be stored in the local trust store in advance. We have yet to see a good solution to this problem, unless we have missed something.

The solution we have presented here is an alternative, in case it proves technically difficult to implement the SSL verification as it is specified today.

@wrygiel commented Feb 2, 2017

Okay, so two issues were mentioned here.

  1. The first issue is configuring the TrustManager. @mpuzar and @mikesteez currently solve this problem by creating custom KeyStores and then feeding their TrustManagers with those. This would work, but @georgschermann's solution of using a custom TrustManager seems much easier. We could also implement the X509TrustManager interface on our registry-client's ClientImpl. Would you like us to?

  2. The second issue is with proxies. The only thing the application actually needs is the client certificate, and the knowledge that the request was properly signed with this certificate. The application doesn't care how it gets this information. The problem is that many stacks strip this information before the request reaches the application, so the application gets - in a way - a bare HTTP request, without any HTTPS context. This is a fairly frequent issue, and it is usually solved by configuring the proxy to add the certificate to a request header, as @georgschermann suggested (see the sketch right after this list). I also consulted @Awerin - he does it the same way. I don't see any reasonable alternatives either; this solution seems quite straightforward.
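
For illustration, a hedged sketch of the application side of that approach. The proxy is assumed to have been configured to copy the PEM-encoded client certificate into an X-SSL-Client-Cert request header (the header name is deployment-specific), and some proxies fold the PEM's newlines into spaces, which the replacements below undo:

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import java.security.cert.CertificateException;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;
    import javax.servlet.http.HttpServletRequest;

    public class ForwardedCertParser {

        // Rebuilds the client certificate from a header set by the proxy.
        // "X-SSL-Client-Cert" is an assumed, deployment-specific header name.
        public static X509Certificate parse(HttpServletRequest request)
                throws CertificateException {
            String pem = request.getHeader("X-SSL-Client-Cert");
            if (pem == null || pem.isEmpty()) {
                return null;
            }
            // Restore the newlines the proxy may have folded into spaces,
            // then repair the BEGIN/END markers (which legitimately contain spaces).
            pem = pem.replace(" ", "\n")
                    .replace("-----BEGIN\nCERTIFICATE-----", "-----BEGIN CERTIFICATE-----")
                    .replace("-----END\nCERTIFICATE-----", "-----END CERTIFICATE-----");
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            return (X509Certificate) cf.generateCertificate(
                    new ByteArrayInputStream(pem.getBytes(StandardCharsets.US_ASCII)));
        }
    }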

As to the proposed alternative - note that XML-DSig is for signing XML, but (currently?) our requests are not XML (only the responses are). Changing this would be a major task, both in the specs and in the code. It would also make it impossible (a) to debug APIs in a browser, and (b) to use proxies for signing client requests (all clients would be required to implement XML-DSig).

@mpuzar commented Feb 2, 2017

OK, we have been having some issues, and it would be nice to see a solution to issue 2. Could you persuade @Awerin to share his solution, as was agreed in June in Warszawa?

@georgschermann commented Feb 2, 2017

I have now had a look at our production environment, and our solution is as follows:

We use Apache 2.4 as proxy and load balancer.
EWP hosts are implemented as Jersey services on AJP worker Tomcats.

SSL is terminated by the Apache proxy. The important config part is:

    <Location /europe_dev/ewp>
      SSLOptions +StdEnvVars +ExportCertData 
      SSLVerifyClient optional_no_ca
    </Location>
    <Location /europe_dev_bremen/ewp>
      SSLOptions +StdEnvVars +ExportCertData
      SSLVerifyClient optional_no_ca
    </Location>

SSLVerifyClient optional_no_ca asks for a client certificate without validating it, to allow self-signed certs.
SSLOptions +ExportCertData takes the client/server certs and provides them to the forwarded request.

On the application server we use a request filter to do the certificate validation.

    <servlet>
        <servlet-name>jersey-serlvet-ewp</servlet-name>
        <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
        <init-param>
            <param-name>com.sun.jersey.config.property.packages</param-name>
            <param-value>com.sop.webservice.ewp</param-value>
        </init-param>
        <init-param>
            <param-name>com.sun.jersey.spi.container.ContainerResponseFilters</param-name>
            <param-value>com.sop.webservice.ewp.EWPResponseFilter</param-value>
        </init-param>
        <init-param>
            <param-name>com.sun.jersey.spi.container.ContainerRequestFilters</param-name>
            <param-value>com.sop.webservice.ewp.EWPRequestFilter</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>

and inside this filter we do the validation:

    // Inside the request filter: extract the forwarded certificate and
    // validate it before the request reaches the service.
    X509Certificate cert = extractCertificate(request);

    if (!isValidCertificate(cert)) {
        respondWithError(HttpStatus.SC_FORBIDDEN);
        return containerRequest;
    }

    // Returns the client certificate (the first entry of the chain) that the
    // container exposes as a request attribute, or null if none was forwarded.
    protected X509Certificate extractCertificate(HttpServletRequest request) {
        X509Certificate[] certs = (X509Certificate[])
                request.getAttribute("javax.servlet.request.X509Certificate");
        if (certs != null && certs.length > 0) {
            return certs[0];
        } else {
            return null;
        }
    }

Hope this helps somehow.
Questions or suggestions welcome.

@georgschermann

The equivalent configuration for Tomcat (7) would be either SSLVerifyClient="optionalNoCA" on an APR/native connector, or clientAuth="true" together with a custom javax.net.ssl.X509TrustManager configured via trustManagerClassName.
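
A hedged sketch of the first variant in server.xml (Tomcat 7); the certificate paths and port are placeholders, and only SSLVerifyClient plus the APR protocol class are the point here:

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
               SSLEnabled="true" scheme="https" secure="true"
               SSLCertificateFile="/path/to/server.crt"
               SSLCertificateKeyFile="/path/to/server.key"
               SSLVerifyClient="optionalNoCA" />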

@mpuzar commented Feb 18, 2017

OK, I've been banging my head over this for a while now. The idea of the application digging deep into the transport layer after receiving the message breaks the OSI model, and it is not something we should recommend for our solution. It would present more problems than it would solve, and it would make it harder, or even impossible (depending on the technologies they might be using), for new partners to join EWP.

I have a better idea, one that won't require changes to the EWP architecture and that could solve both this AND the self/non-self-signed certificate issue. We can put the signature and information about the certificate in the HTTP headers. In fact, I figured someone must surely have done this before, and found a few texts, including an Internet-Draft that has been around for a while, describing such a solution down to the smallest detail. Yet, it is easy to read and understand.

Pros:

  • Simple
  • Does not involve changes in the EWP architecture
  • Does not break the layer architecture; it stays on the application layer, which we have full control of
  • Provides non-repudiation, as we can store the signature if necessary
  • We can easily add support for preventing replay attacks as well
  • It is independent of the contents (XML, JSON, binary, empty request...)
  • It is independent of what certificates are being used for HTTPS, and of whether there are proxies or load balancers or whatever else in between (we wouldn't even need to care about the transport layer any more!)
  • The presented draft is well described and easily implementable; hopefully it will become an RFC

Cons:

  • Can't really think of any... can you?

The draft also presents a concrete example. I found a Java implementation as well (based on version 4 of the draft), and it shouldn't be hard to implement in any language. We could of course make our own standard, but using existing ones is often a better idea.
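
To make the idea concrete, a minimal sketch of the general mechanism (signing a string assembled from selected headers with the client's RSA key). The exact signing-string format, parameter names, and header layout are defined by the draft itself and are simplified here:

    import java.nio.charset.StandardCharsets;
    import java.security.PrivateKey;
    import java.security.Signature;
    import java.util.Base64;

    public class HeaderSigner {

        // Signs a string built from selected request headers with the client's
        // RSA key. Simplified sketch: the real signing-string format and the
        // accompanying parameters are those of the Internet-Draft, not this.
        public static String sign(PrivateKey clientKey, String method,
                String path, String host, String date) throws Exception {
            String signingString =
                    "(request-target): " + method.toLowerCase() + " " + path + "\n"
                    + "host: " + host + "\n"
                    + "date: " + date;
            Signature rsa = Signature.getInstance("SHA256withRSA");
            rsa.initSign(clientKey);
            rsa.update(signingString.getBytes(StandardCharsets.UTF_8));
            // The result is carried in a header, along with an identifier of
            // the key (e.g. the certificate's fingerprint) so the server can
            // look the key up in the Registry and verify the signature.
            return Base64.getEncoder().encodeToString(rsa.sign());
        }
    }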

@georgschermann

The draft looks like a good solution to me. I like the idea of using the authentication header for this instead of TLS. The only con I can think of is that it isn't a common, well-known practice yet, which could also be a hurdle for some implementers - but a less significant one than the current hurdles, I think.

@wrygiel commented Feb 22, 2017

I thought about such solutions a year ago (in particular, the non-repudiation feature was tempting). However, in the end I convinced myself that the solution with TLS client certificates is a safer way to go:

  • It's practically already a standard (as opposed to being an Internet-Draft). We don't risk ending up using a "dead standard" (even if it's a minor risk - if it did happen, it would be really bad PR for us).
  • There are tons of existing libraries to use (in many cases provided with the language itself).
  • It is so popular that even browsers have adopted it. We don't really care about this in EWP, but we can be absolutely sure that the standard won't just "die". Also, as a useful bonus, it allows developers to debug APIs directly in their browsers.
  • It's not true that the current solution digs into the OSI transport layer (which is TCP in this case). Perhaps Matija meant passing data between the presentation layer and the application layer (not so deep, and pretty common).
  • In most cases, we don't really need non-repudiation (and it can be added in specific contexts, when it's needed).
  • We already have a couple of simpler solutions for the problems with self-signed client certificates. Matija's solution would require changes in all existing applications (both servers and clients), while most of the other proposed solutions won't (they will only require switching the client certificates used by the clients).
  • I'm not so sure about the "it shouldn't be hard to implement in any language" bit. I think some developers might simply give up if they couldn't find a good library for their language.

That said, I still think that both solutions are valid. So, if you manage to convince every partner that your solution is better for them, then I won't oppose (much).

@mpuzar commented Feb 22, 2017

I think you still fail to understand the main problem. Everything you wrote above would be true if we had complete control over the whole stack (at least from TLS and up). This is, however, not the case. In reality TLS is often implemented in proxies, load balancers, you name it, and contacting these is often not trivial, or even impossible. Our applications often only get the data after all TLS information has been stripped. This is already a practical problem now, in our lab environment, even with so few partners. It will only grow with each new partner wanting to join the project, making it impossible for some to even do so.

I completely disagree with the bad-PR argument. Even if the standard we relied on ended up officially dead, if it works and is well defined, there is no reason not to use it, unless someone finds a major security issue with it (which I doubt would happen, since all the sub-components are well-proven and accepted; this one just puts them together in a different way - which is also why I think it would be relatively simple to implement).

@wrygiel commented Feb 22, 2017

TLS is often implemented in proxies, load balancers, you name it, and contacting these is often not trivial, or even impossible.

The only problem reported thus far was that some proxies do not accept self-signed client certificates. There were no reported problems with forwarding the client certificate details to the application layer (after the certificate is accepted).

@mpuzar commented Feb 22, 2017

There were no reported problems with forwarding the client certificate details to the application layer (after the certificate is accepted).

This very GitHub issue is about that!

@wrygiel commented Feb 22, 2017

Sorry, I thought you had already solved this. What proxy are you using?

@georgschermann

There were several other problems reported, but already with solutions or workarounds.
There may be issues in other ecosystems where no feasible workaround exists.
Our solution also requires additional configuration of the proxy.

Using the headers should work with all standards-compliant proxies out of the box: http://www.w3.org/TR/ct-guidelines/#sec-ProxyReqest

There are some resources out there about problems and solutions for other ecosystems; all of them require at least special configuration:
http://www.zeitoun.net/articles/client-certificate-x509-authentication-behind-reverse-proxy/start
http://stackoverflow.com/questions/22805705/client-certificate-authentication-with-reverse-proxy
https://blogs.oracle.com/dee/entry/forwarding_client_credentials_through_reverse1
http://serverfault.com/questions/714160/nginx-client-ssl-authentication
https://devcentral.f5.com/questions/certificates-implementation-in-ssl-forward-proxy-client-and-server-authentication-scenario

@mpuzar commented Feb 22, 2017

For the bigger picture, it doesn't really matter what we are using. Once we get into production, we won't have control over which proxy is used there, and most probably no special configuration will be possible.

@wrygiel commented Feb 22, 2017

I'm almost convinced.

I completely disagree with the bad-PR argument. Even if the standard we relied on ended up officially dead, if it works and is well defined, there is no reason not to use it

You and I, and most other developers, understand that. But I don't think business people do. That's why I call this a PR problem (not a technical one).

For example, if HEIs have two solutions to choose from (EWP and some competitor), and they hear the argument that one of them is "built upon a dead standard", they will surely be at least somewhat inclined to use the other, won't they? It's easy to exploit such statements when speaking to non-technical people.

Do you know if there is any way we can determine the chance of this particular proposal getting killed?

@wrygiel commented Feb 22, 2017

Wouldn't it be safer to use OAuth 1.0a with oauth_signature_method=RSA-SHA1? (the one-legged version)

@mpuzar commented Feb 22, 2017

Doesn't that only work for signing HTTP requests, and only sign the request parameters? (In addition, 1.0a is obsolete and not recommended for new implementations... 2.0 has for some reason removed the request-signature option completely.)

@wrygiel commented Feb 22, 2017

Doesn't that only work for signing HTTP requests, and only sign the request parameters?

Yes. I agree that header signatures are better, but OAuth 1.0a is also "good enough" for our purposes. And it has much wider support than header signatures do.

in addition, 1.0a is obsolete and not recommended for new implementations

But that's a PR argument - the one you just told us wasn't important to you ;)

@mpuzar commented Feb 22, 2017

That was me just being surprised that YOU had suggested it :) But there is a small difference nonetheless - OAuth 1.0a is on the way out (or is out already), while the other one is on the way in.

I'm wondering whether OAuth 1.0a indeed covers our needs and whether it is safe enough. Perhaps it is. If I understand it correctly:

  • the signature doesn't cover the body, so it does not provide non-repudiation
  • the signature for a packet sent from A to a certain resource on B will always be the same, and as such is prone to replay attacks (the resource URL is included, so unless one host deliberately discloses the signature to another, no one else will be able to use it).

Given that everything is additionally encapsulated in TLS, maybe it is not such a big deal. I'm all for KISS - if it works and is secure enough for our purposes, I'm fine with it. But I still like the other one much better.

@wrygiel commented Feb 22, 2017

the signature doesn't cover the body, so it does not provide non-repudiation

It does cover the POST body, if it has been sent with the application/x-www-form-urlencoded content type.

the signature for a packet sent from A to a certain resource on B will always be the same, and as such is prone to replay attacks

OAuth 1.0a has built-in protection against replay attacks (see the oauth_nonce parameter), but I don't think we need to concern ourselves with this. We are using HTTPS, not HTTP, so whatever we choose, it's already protected.
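
For concreteness, a hedged sketch of the RSA-SHA1 signing step in one-legged OAuth 1.0a. It shows how oauth_nonce and oauth_timestamp end up inside the signature base string, which is why two requests never carry the same signature; parameter normalization is simplified here (a real implementation must handle duplicate names and encode strictly per RFC 3986):

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;
    import java.security.PrivateKey;
    import java.security.Signature;
    import java.util.Base64;
    import java.util.Map;
    import java.util.TreeMap;

    public class OAuthRsaSha1Signer {

        // Builds the OAuth 1.0a signature base string and signs it with
        // RSA-SHA1 (oauth_signature_method=RSA-SHA1).
        public static String sign(PrivateKey key, String httpMethod,
                String baseUrl, Map<String, String> params) throws Exception {
            // 1. Normalize parameters: sort by name, join as name=value&...
            TreeMap<String, String> sorted = new TreeMap<>(params);
            StringBuilder normalized = new StringBuilder();
            for (Map.Entry<String, String> e : sorted.entrySet()) {
                if (normalized.length() > 0) {
                    normalized.append('&');
                }
                normalized.append(enc(e.getKey())).append('=').append(enc(e.getValue()));
            }
            // 2. Base string: METHOD&enc(url)&enc(normalized-params)
            String baseString = httpMethod.toUpperCase() + "&" + enc(baseUrl)
                    + "&" + enc(normalized.toString());
            // 3. Sign the base string with the client's RSA key.
            Signature rsa = Signature.getInstance("SHA1withRSA");
            rsa.initSign(key);
            rsa.update(baseString.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(rsa.sign());
        }

        // OAuth uses RFC 3986 percent-encoding; URLEncoder is close enough
        // for a sketch after fixing the characters it treats differently.
        private static String enc(String s) throws Exception {
            return URLEncoder.encode(s, "UTF-8")
                    .replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
        }
    }

The params map here would carry oauth_consumer_key, oauth_nonce, oauth_timestamp, oauth_signature_method=RSA-SHA1, and the request's own parameters.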

@wrygiel commented Feb 22, 2017

I have labeled your proposal as "Solution 7" in the original thread.

wrygiel closed this as completed Apr 10, 2017