…ve it from the travis script.
```
+ kubectl logs -c apiserver -l app=navigator,component=apiserver
error: a container cannot be specified when using a selector (-l)
```
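The error above occurs because kubectl (at least at the version used here) rejects `-c` when pods are selected with `-l`. A hypothetical workaround, wrapped in a function name of my choosing (`get_apiserver_logs` is not from this PR), is to resolve a single pod name first and then pass `-c`:

```shell
# Sketch of a workaround: resolve one pod matching the label selector,
# then request a specific container's logs from that pod (so -c is allowed).
get_apiserver_logs() {
  local pod
  pod="$(kubectl get pods -l app=navigator,component=apiserver \
    -o jsonpath='{.items[0].metadata.name}')"
  kubectl logs -c apiserver "$pod"
}
```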
@wallrj: Adding do-not-merge/release-note-label-needed because the release note process has not been followed. One of the following labels is required: "release-note", "release-note-action-required", or "release-note-none". Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
In #45 (comment) @munnerz wrote:
I don't mind putting it back, but my thinking is that we shouldn't be poking around for the state of the controller in e2e tests...unless the tests fail. And in any case, the controller is running in the tests right now, but failing because RBAC policy prevents its leader election routine from running.
We should fix that.
@wallrj agreed that it'd be nice to do a full test like that, but right now we don't have the testing infra to stand up full DB clusters due to resource limits on Travis. I'm working to fix this via our new test-infra, but for the time being it's important we know whether the controller will work with the default configuration, so I'd like to keep it in.

Yep, we should fix that (#68), and we should also update the tests so they fail as a result of the leader election failing (through a health check, perhaps). But this second part is lower priority, as failing leader election is unlikely to be the cause of test failures (once we fix #68).
.travis.yml (outdated)
```diff
@@ -18,9 +18,6 @@ jobs:
       # Create a cluster. We do this as root as we are using the 'docker' driver.
       # We enable RBAC on the cluster too, to test the RBAC in Navigators chart
       - sudo -E CHANGE_MINIKUBE_NONE_USER=true minikube start --vm-driver=none --kubernetes-version="$KUBERNETES_VERSION" --extra-config=apiserver.Authorization.Mode=RBAC
-      - while true; do if kubectl get nodes; then break; fi; echo "Waiting 5s for kubernetes to be ready..."; sleep 5; done
       # Fix problems with kube-dns + rbac
-      - kubectl create serviceaccount --namespace kube-system kube-dns
```
I removed these steps because they are already performed in the prepare-e2e.sh script.
```
retry TIMEOUT=600 kubectl get nodes

echo "Waiting for tiller to be ready..."
retry TIMEOUT=60 helm version
```
And these steps are also performed in prepare-e2e.sh, so I removed them.
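The `retry` helper itself isn't shown in this diff. A minimal sketch of how such a helper could work, assuming `TIMEOUT=`/`SLEEP=` overrides are passed before the command (the parameter names and defaults here are assumptions, not the real implementation):

```shell
# Sketch: re-run a command until it succeeds or a deadline passes.
# Invoked as: retry TIMEOUT=600 kubectl get nodes
retry() {
  local TIMEOUT=60 SLEEP=5
  # Consume leading KEY=VALUE overrides before the command.
  while [[ "$1" =~ ^(TIMEOUT|SLEEP)= ]]; do
    case "$1" in
      TIMEOUT=*) TIMEOUT="${1#TIMEOUT=}" ;;
      SLEEP=*)   SLEEP="${1#SLEEP=}" ;;
    esac
    shift
  done
  local deadline=$(( $(date +%s) + TIMEOUT ))
  until "$@"; do
    if (( $(date +%s) >= deadline )); then
      echo "Timed out after ${TIMEOUT}s waiting for: $*" >&2
      return 1
    fi
    echo "Retrying in ${SLEEP}s: $*" >&2
    sleep "$SLEEP"
  done
}
```

Unlike the removed `while true` loop in .travis.yml, this bounds the wait, so a broken cluster fails the job instead of hanging until Travis kills it.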
```
  kubectl logs -c controller -l app=navigator,component=controller
  exit 1
  FAILURE_COUNT=$(($FAILURE_COUNT+1))
  echo "TEST FAILURE: $1"
}
```
This is so that all the tests are performed rather than failing on the first test.
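The accumulate-failures pattern being described can be sketched like this (the `report_failures` name is my addition, not from the PR):

```shell
# Each failed check bumps a counter instead of calling `exit 1`, so the
# script only exits non-zero once every test has had a chance to run.
FAILURE_COUNT=0

fail_test() {
  FAILURE_COUNT=$((FAILURE_COUNT + 1))
  echo "TEST FAILURE: $1" >&2
}

report_failures() {
  if [ "$FAILURE_COUNT" -gt 0 ]; then
    echo "${FAILURE_COUNT} test(s) failed" >&2
    return 1
  fi
  echo "All tests passed"
}
```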
```
    --namespace "${USER_NAMESPACE}" \
    service es-demo; then
  fail_test "Navigator controller failed to create elasticsearchcluster service"
fi
```
Added a check that the service is created, which is a clear indication that the navigator controller is reacting to the new ElasticsearchCluster resource.
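Put together, the check might look like the sketch below. `retry` and `fail_test` are the helpers from this PR; the one-line stand-in definitions here are only so the snippet is self-contained, and `check_es_service` is a name of my choosing:

```shell
# Stand-ins for the real helpers (the real retry re-runs the command
# until TIMEOUT expires; this stand-in drops the TIMEOUT=... argument
# and runs the command once).
fail_test() { FAILURE_COUNT=$((${FAILURE_COUNT:-0} + 1)); echo "TEST FAILURE: $1" >&2; }
retry() { shift; "$@"; }

check_es_service() {
  # A Service named es-demo appearing in the user's namespace shows the
  # controller has reacted to the new ElasticsearchCluster resource.
  if ! retry TIMEOUT=60 kubectl get \
      --namespace "${USER_NAMESPACE}" \
      service es-demo; then
    fail_test "Navigator controller failed to create elasticsearchcluster service"
  fi
}
```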
```
CERT_DIR="$CONFIG_DIR/certs"
mkdir -p $CERT_DIR
TEST_DIR="$CONFIG_DIR/tmp"
mkdir -p $TEST_DIR
```
None of this stuff is needed in this script.
```
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
```
Added this here rather than creating the service account in the .travis.yml file.
My reasoning behind the separation is so that we are not opinionated in our e2e test scripts about using minikube. It'd be ideal if the e2e tests could be run against any valid/functioning kubernetes cluster (this is more relevant when it comes to us running our tests on GCE/GKE).
Happy to accept this for now, but I think there'll be more refactoring of this stuff soon 😄
Ah, got it. Well thanks for merging this anyway.
@munnerz Ok. I've added back the check for apiserver and controller replicas, and added another check for the elasticsearchcluster service. I know you're planning to create e2e tests with Ginkgo / prow / gce, but I implore you to merge this change so that I can continue to perform some quick local e2e tests on my laptop.
e2e test failed because of failed
/retest
@wallrj /retest does not work for travis jobs :( I've hit restart. Are you able to restart jobs on travis?
Ah yes, I see it here: https://travis-ci.org/jetstack/navigator/pull_requests Thanks @munnerz
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: munnerz The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files:
You can indicate your approval by writing `/approve` in a comment.
Automatic merge from submit-queue.
See #45 for discussion