Support consul's tls_skip_verify field in service checks #2218

Closed

dbresson opened this issue Jan 20, 2017 · 6 comments

@dbresson

For HTTPS checks, support the tls_skip_verify option that Consul accepts. The common name on the certificate will frequently not pass verification because the check connects by IP address.
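
For reference, this is roughly what the field looks like in a Consul agent service definition today (a minimal sketch; the service name, address, port, and /health endpoint are illustrative):

```hcl
# Consul agent service definition (HCL) -- illustrative values.
service {
  name = "web"
  port = 8443

  check {
    # The check connects by IP, so hostname verification against the
    # cert's common name would fail without tls_skip_verify.
    http            = "https://10.0.0.5:8443/health"
    interval        = "10s"
    timeout         = "2s"
    tls_skip_verify = true
  }
}
```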

@dadgar dadgar added this to the v0.5.4 milestone Jan 23, 2017
@dbresson (Author)

I guess this didn't make it into 0.5.4 after all.

@dadgar dadgar modified the milestones: v0.6.0, v0.5.4 Feb 25, 2017
@dadgar (Contributor) commented Feb 25, 2017

Sorry. Milestones got all wacky due to hotfix releases and some pushed features. It is definitely on our roadmap!

@schmichael (Member)

Will be done as part of #2478, although it's not currently in the initial PR #2467.

@hvindin commented Apr 17, 2017

I'm not sure if I should create another issue for a problem we're facing that I believe adding tls_skip_verify would resolve.

Essentially we have jobs running using the Docker driver, where each container ends up with a wildcard cert for the relevant Consul cluster in which it resides.

If we add an HTTP health check to the Nomad job, we need to expose an insecure port to hit the healthcheck endpoint. If we leave the healthcheck endpoint accessible only via HTTPS, then Consul begins to throw errors that the cert does not contain a valid IP SAN entry for the endpoint, and Nomad just lets the container sit there doing nothing useful while Consul won't register it as a valid endpoint.

This seems backwards to me. I would have thought that when Nomad advertised a service to Consul and provided the healthchecking information, Consul would assert that the certificate matches the DNS entry that Consul assigns to the service (i.e. something.service.dc1.consul), rather than looking for a cert for the specific 10.0.0.0/8 address that the service happened to be allocated.

Currently the only workarounds we seem to have are either the "allow insecure traffic to the healthcheck" approach, or adding a SAN entry to the wildcard cert that includes the IP address of the host and then mounting that certificate from the host into each container, so that each container ends up with a cert containing the correct IP address. However, this entirely defeats the point of the wildcard cert and our attempts to decouple ourselves from the underlying compute nodes.

If it's useful I can write up a proper issue with some examples that can be shared on the internet. Let me know if it seems like a separate issue or if it's likely covered here anyway.

@dadgar (Contributor) commented Apr 17, 2017

@hvindin The tls_skip_verify flag would be your solution. For health checks, Consul uses the system certificates, which is why Consul is erroring.
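
For anyone landing here later: assuming the field shipped as proposed via #2478, the job-file equivalent would look roughly like this (a sketch; the service name, port label, path, and timings are illustrative):

```hcl
# Nomad job service check (HCL) -- illustrative names and values.
service {
  name = "web"
  port = "https"

  check {
    type     = "http"
    protocol = "https"
    path     = "/health"
    interval = "10s"
    timeout  = "2s"
    # Skip certificate verification for the HTTPS check, since the
    # wildcard cert won't match the container's allocated IP address.
    tls_skip_verify = true
  }
}
```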

@github-actions

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 13, 2022