
fix(webhook): disable tls 1.0 and 1.1 on webhook service #2770

Merged: 1 commit merged into longhorn:master on May 6, 2024

Conversation

ChanYiLin (Contributor)

ref: longhorn/longhorn#8387

Follow the same implementation as Harvester to disable TLS 1.0 and 1.1 on the webhook service.
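For illustration only, here is a minimal sketch of the idea (not the actual PR diff; the handler path, port, and certificate paths are placeholders, assuming the webhook is served with net/http over crypto/tls):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/webhook", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	server := &http.Server{
		Addr:    ":9502", // admission webhook port seen in the sslscan output below
		Handler: mux,
		TLSConfig: &tls.Config{
			// Reject TLSv1.0 and TLSv1.1 handshakes regardless of the
			// Go toolchain's default minimum version.
			MinVersion: tls.VersionTLS12,
		},
	}

	// cert.pem/key.pem are placeholders for the webhook's serving certificate.
	log.Fatal(server.ListenAndServeTLS("cert.pem", "key.pem"))
}
```

Pinning MinVersion in the server's tls.Config makes the accepted protocol versions independent of the Go toolchain used to build the binary.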

@c3y1huang (Contributor) left a comment

I've run sslscan against both a v1.6.1 cluster and a master-branch cluster.

Interestingly, the results differ even without applying the fix in this PR.

In v1.6.1, as described, TLSv1.0 and TLSv1.1 show as enabled:

> k -n longhorn-system get daemonsets.apps longhorn-manager -o yaml | grep image:
        image: longhornio/longhorn-manager:v1.6.1
   
> k get service -n longhorn-system | grep admission
longhorn-admission-webhook    ClusterIP   10.43.18.107    <none>        9502/TCP       2m17s

> k -n longhorn-system exec -it longhorn-manager-j9pdl -- sslscan 10.43.18.107:9502
Version: 2.0.16
OpenSSL 1.1.1l-fips  24 Aug 2021 SUSE release 150500.17.25.1

Connected to 10.43.18.107

Testing SSL server 10.43.18.107 on port 9502 using SNI name 10.43.18.107

  SSL/TLS Protocols:
SSLv2     disabled
SSLv3     disabled
TLSv1.0   enabled
TLSv1.1   enabled
TLSv1.2   enabled
TLSv1.3   enabled

However, on the master branch, TLSv1.0 and TLSv1.1 show as disabled (without this PR):

> k -n longhorn-system get daemonsets.apps longhorn-manager -o yaml | grep image:
        image: longhornio/longhorn-manager:master-head
    
> k get service -n longhorn-system | grep admission
longhorn-admission-webhook    ClusterIP   10.43.213.78    <none>        9502/TCP       39s

> k -n longhorn-system exec -it longhorn-manager-8ltwk -- sslscan 10.43.213.78:9502
Version: 2.0.16
OpenSSL 1.1.1w-fips  11 Sep 2023 SUSE release 150600.2.11

Connected to 10.43.213.78

Testing SSL server 10.43.213.78 on port 9502 using SNI name 10.43.213.78

  SSL/TLS Protocols:
SSLv2     disabled
SSLv3     disabled
TLSv1.0   disabled
TLSv1.1   disabled
TLSv1.2   enabled
TLSv1.3   enabled

So, I have 2 questions:

  1. What is the cause of the different outcomes when the cluster is running master-head?
  2. Is the purpose of this fix to address versions before v1.7? If so, is this fix intentionally implemented on the master branch?

@c3y1huang (Contributor) left a comment

Apart from the questions, the fix LGTM if the intention is to backport and introduce a whitelist approach.

ChanYiLin (Contributor, Author) commented May 6, 2024

Hi @c3y1huang,
I found the reason: in v1.6.1 we build with Go 1.21, where the default MinVersion for TLS servers is 1.0. The Go community raised the default MinVersion to 1.2 in Go 1.22 (see the linked change).

So, for the second question, we have to backport the fix by explicitly specifying the MinVersion in the TLS config.

cc @derekbit
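To spell out the difference (a hedged sketch, not code from this repository): a zero-value server config picks up the toolchain's default minimum version, while an explicit MinVersion pins the behavior no matter which Go version builds the binary.

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Zero value: servers built with Go 1.21 still accept TLSv1.0/1.1,
	// matching the v1.6.1 sslscan output above. Go 1.22 raised the server
	// default to TLS 1.2, matching the master-head output.
	defaulted := &tls.Config{}

	// Explicit pin: independent of the Go toolchain, which is why the
	// backport sets it rather than relying on the default.
	pinned := &tls.Config{MinVersion: tls.VersionTLS12}

	// 0 means "use the toolchain default"; 771 is 0x0303, i.e. TLS 1.2.
	fmt.Println(defaulted.MinVersion, pinned.MinVersion)
}
```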

ref: longhorn/longhorn#8387

Signed-off-by: Jack Lin <jack.lin@suse.com>
c3y1huang force-pushed the LH8387_disable_tls1.0_and_1.1 branch from cf1fbc3 to e4d0719 on May 6, 2024 03:24
c3y1huang (Contributor) commented May 6, 2024

> Hi @c3y1huang, I found the reason: in v1.6.1 we build with Go 1.21, where the default MinVersion for TLS servers is 1.0. The Go community raised the default MinVersion to 1.2 in Go 1.22 (see the linked change).
>
> So, for the second question, we have to backport the fix by explicitly specifying the MinVersion in the TLS config.

If I understand correctly, updating to Go 1.22 should resolve this issue. Is there any reason we are implementing a whitelist and the MinVersion instead of updating the Go version? It seems we've already done so in the master and 1.5.x branches.

mergify bot merged commit a2dd56a into longhorn:master on May 6, 2024
7 checks passed
c3y1huang (Contributor) commented May 6, 2024

Hmm, Mergify already merged this. @ChanYiLin, please check whether the Golang version upgrade is sufficient. If it is, I don't think we need this PR.

cc @derekbit

derekbit (Member) commented May 6, 2024

> Hmm, Mergify already merged this. @ChanYiLin, please check whether the Golang version upgrade is sufficient. If it is, I don't think we need this PR.
>
> cc @derekbit

@ChanYiLin Feel free to revert the PR if @c3y1huang's comment is valid. Thank you.

derekbit (Member) commented May 6, 2024

> > Hi @c3y1huang, I found the reason: in v1.6.1 we build with Go 1.21, where the default MinVersion for TLS servers is 1.0. The Go community raised the default MinVersion to 1.2 in Go 1.22 (see the linked change).
> >
> > So, for the second question, we have to backport the fix by explicitly specifying the MinVersion in the TLS config.
>
> If I understand correctly, updating to Go 1.22 should resolve this issue. Is there any reason we are implementing a whitelist and the MinVersion instead of updating the Go version? It seems we've already done so in the master and 1.5.x branches.

BTW, we need to update v1.6.x's golang version.
https://github.com/longhorn/longhorn-manager/blob/v1.6.x/go.mod#L3

c3y1huang (Contributor)

> BTW, we need to update v1.6.x's golang version. https://github.com/longhorn/longhorn-manager/blob/v1.6.x/go.mod#L3

#2774
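For reference, a hedged sketch of what that go directive bump in the v1.6.x go.mod could look like (the module path and version shown here are illustrative, not taken from #2774):

```
module github.com/longhorn/longhorn-manager

go 1.22
```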
