Warn on missing TLS secret #9875
Conversation
Visit the preview URL for this PR (updated for commit 2df643e): https://gloo-edge--pr9875-jbohanon-missing-tls-ljjy7n7q.web.app (expires Tue, 27 Aug 2024 15:44:07 GMT) 🔥 via Firebase Hosting GitHub Action 🌎 Sign: 77c2b86e287749579b7ff9cadb81e099042ef677
Looking great! A few questions, just around how specific/generic we go with our error APIs and code
LGTM! Appreciate the thorough testing updates to go with the new Helm value/Settings field
dozer
* update api and code
* codegen
* kubernetes e2e
* fix tests
* add changelog
* fixes
  add warnings to proxy report so it appears in the warnings after translation
  only return a warning if the error produced by ResolveCommonSslConfig is SslSecretNotFoundError
* Adding changelog file to new location
* Deleting changelog file from old location
* fix listener_subsystem_test
* tee gha output and grep for success/fail
* fix kubernetes e2e test
* kube2e
* update action
* unset ns env var when test installation run finishes
* fix helm test import
* revert gha
* Adding changelog file to new location
* Deleting changelog file from old location
* PR feedback
* update comment
* add settings API for warning
* settings option for warning instead of error
* update changelog
* helm and tests
* fix build issues and codegen
* missing curlies >_>
* helm values fixes
* missed one
* wrong value in test setup
* helm tests are actually passing now... excellent...
* fix translation tests
* put breaking change verbage in changelog
* add setting to preserve missing secret error to test manifest
* revert allow_warnings test
* add icky sleep
* expand admin server assertions, move server tls test
* remove extra skeleton
* rename bool and fix logic
* update setting in always accept test

--------

Co-authored-by: soloio-bulldozer[bot] <48420018+soloio-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: changelog-bot <changelog-bot>
Description
Updates the condition of a VirtualService referencing a TLS secret that does not exist from an error state to a warning state. This allows for eventual consistency between VirtualService creation and TLS secret creation.
API changes
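The new behavior is gated by the Helm value/Settings field mentioned in the review above. A minimal sketch of enabling it at install/upgrade time; the value name warnMissingTlsSecret is an assumption for illustration and may not match the final API, so check the Settings/Helm reference for the exact key:

# assumed value name; enables warning instead of error for a missing TLS secret
helm upgrade --install gloo gloo/gloo -n gloo-system \
  --set gateway.validation.warnMissingTlsSecret=true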
Code changes
Docs changes
TODO
Context
Users ran into this eventual consistency issue when applying a cert-manager Certificate resource at the same time as a VirtualService resource. Because the Certificate does not synchronously create the TLS secret, the VirtualService is rejected by validation.
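As a rough illustration of the race (all resource names and the issuer below are made up for the example, not taken from the PR), applying the Certificate and the VirtualService together means the referenced TLS secret may not exist yet when validation runs:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: vs-cert
  namespace: gloo-system
spec:
  secretName: vs-cert            # cert-manager creates this secret asynchronously
  dnsNames:
    - vs-1.example.com
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: vs-1
  namespace: gloo-system
spec:
  sslConfig:
    secretRef:
      name: vs-cert              # may not exist yet at validation time
      namespace: gloo-system
  virtualHost:
    domains:
      - vs-1.example.com
EOF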
Interesting decisions
The tests rely on curl's --connect-to flag; this is less intrusive than refactoring the curl tool to support both --connect-to AND --resolve.
Testing steps
# if you don't have a cluster, create one
kind create cluster
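The setup between creating the cluster and the first curl isn't shown in this excerpt; a minimal sketch, with the Gloo Edge install, secret and VirtualService names, route, and port-forward all assumed:

# install Gloo Edge (open source)
helm repo add gloo https://storage.googleapis.com/solo-public-helm
helm install gloo gloo/gloo --namespace gloo-system --create-namespace

# create a self-signed cert and the TLS secret that vs-1 will reference
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout tls.key -out tls.crt -subj "/CN=vs-1"
kubectl create secret tls vs-1-cert -n gloo-system --cert=tls.crt --key=tls.key

# create a VirtualService that serves HTTPS using that secret
kubectl apply -f - <<EOF
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: vs-1
  namespace: gloo-system
spec:
  sslConfig:
    secretRef:
      name: vs-1-cert
      namespace: gloo-system
  virtualHost:
    domains:
      - vs-1
    routes:
      - matchers:
          - prefix: /
        directResponseAction:
          status: 200
          body: hello from vs-1
EOF

# expose the HTTPS listener locally so the curls below can hit 127.0.0.1:8443
kubectl port-forward -n gloo-system deploy/gateway-proxy 8443:8443 &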
# curl to validate that we're getting traffic
curl -k --connect-to vs-1:8443:127.0.0.1 https://vs-1:8443
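The step that introduces the missing-secret reference isn't shown in this excerpt; presumably a second VirtualService is applied at this point that references a secret which does not exist yet (names assumed):

kubectl apply -f - <<EOF
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: vs-2
  namespace: gloo-system
spec:
  sslConfig:
    secretRef:
      name: vs-2-cert            # this secret does not exist yet
      namespace: gloo-system
  virtualHost:
    domains:
      - vs-2
    routes:
      - matchers:
          - prefix: /
        directResponseAction:
          status: 200
          body: hello from vs-2
EOF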
# curl to show we are still receiving traffic
curl -k --connect-to vs-1:8443:127.0.0.1 https://vs-1:8443

# restart gloo deployment to roll the pod
k rollout restart deploy/gloo -n gloo-system
k rollout status deploy/gloo -n gloo-system

# curl to show that we are NO LONGER receiving traffic, even on the good VS
curl -k --connect-to vs-1:8443:127.0.0.1 https://vs-1:8443

# restart gloo deployment to roll the pod
k rollout restart deploy/gloo -n gloo-system
k rollout status deploy/gloo -n gloo-system

# curl to show that we are receiving traffic on the good VS, but not on the invalid VS
curl -k --connect-to vs-1:8443:127.0.0.1 https://vs-1:8443
curl -k --connect-to vs-2:8443:127.0.0.1 https://vs-2:8443
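The step that creates the previously missing secret isn't shown either; a sketch, assuming the vs-2-cert name used above:

# create the secret that vs-2 references so it becomes valid
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout tls2.key -out tls2.crt -subj "/CN=vs-2"
kubectl create secret tls vs-2-cert -n gloo-system --cert=tls2.crt --key=tls2.key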
# curl to show that we are receiving traffic on both, now valid VS
curl -k --connect-to vs-1:8443:127.0.0.1 https://vs-1:8443
curl -k --connect-to vs-2:8443:127.0.0.1 https://vs-2:8443
Checklist: