Test Knative in airgapped CKF #140
As part of this effort we found out that the Knative docs don't explain how to:
We need to find a way to configure these images in the KnativeServing CR.
Looking a little bit into the Knative Operator code, I found out that it works the following way:
This means that if the …
So with the above, we can try setting the container names of the …
Looks like this is how the custom images feature has been designed to work, thus we can add …
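For reference, a minimal sketch of what this looks like in the KnativeServing CR, assuming the upstream `spec.registry.override` mechanism (the image references here are illustrative):

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  registry:
    override:
      # keys are container names; values point at the airgap registry
      activator: registry.example.local/knative/activator:1.8.0
      autoscaler: registry.example.local/knative/autoscaler:1.8.0
```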
As mentioned in canonical/bundle-kubeflow#680, we bumped into #147, so for knative-serving we will be configuring it to use 1.8.0 (knative-eventing already uses 1.8.0).
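Pinning the version is a one-field change in the same CR (a sketch; `spec.version` is the upstream Knative Operator field for selecting the managed Knative version):

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  # pin the Knative Serving version managed by the operator
  version: "1.8.0"
```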
Deploying the knative charms in an airgapped environment works as expected, apart from the activator, whose logs show the following errors:

```
{"severity":"ERROR","timestamp":"2023-09-04T08:41:28.454200818Z","logger":"activator","caller":"websocket/connection.go:144","message":"Websocket connection could not be established","commit":"e82287d","knative.dev/controller":"activator","knative.dev/pod":"activator-768b674d7c-dzd6f","error":"dial tcp: lookup autoscaler.knative-serving.svc.cluster.local: i/o timeout","stacktrace":"knative.dev/pkg/websocket.NewDurableConnection.func1\n\tknative.dev/pkg@v0.0.0-20221011175852-714b7630a836/websocket/connection.go:144\nknative.dev/pkg/websocket.(*ManagedConnection).connect.func1\n\tknative.dev/pkg@v0.0.0-20221011175852-714b7630a836/websocket/connection.go:225\nk8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1\n\tk8s.io/apimachinery@v0.25.2/pkg/util/wait/wait.go:222\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\tk8s.io/apimachinery@v0.25.2/pkg/util/wait/wait.go:235\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\tk8s.io/apimachinery@v0.25.2/pkg/util/wait/wait.go:228\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\tk8s.io/apimachinery@v0.25.2/pkg/util/wait/wait.go:423\nknative.dev/pkg/websocket.(*ManagedConnection).connect\n\tknative.dev/pkg@v0.0.0-20221011175852-714b7630a836/websocket/connection.go:222\nknative.dev/pkg/websocket.NewDurableConnection.func2\n\tknative.dev/pkg@v0.0.0-20221011175852-714b7630a836/websocket/connection.go:162"}
{"severity":"ERROR","timestamp":"2023-09-04T08:41:28.787749703Z","logger":"activator","caller":"websocket/connection.go:191","message":"Failed to send ping message to ws://autoscaler.knative-serving.svc.cluster.local:8080","commit":"e82287d","knative.dev/controller":"activator","knative.dev/pod":"activator-768b674d7c-dzd6f","error":"connection has not yet been established","stacktrace":"knative.dev/pkg/websocket.NewDurableConnection.func3\n\tknative.dev/pkg@v0.0.0-20221011175852-714b7630a836/websocket/connection.go:191"}
{"severity":"WARNING","timestamp":"2023-09-04T08:41:31.05744278Z","logger":"activator","caller":"handler/healthz_handler.go:36","message":"Healthcheck failed: connection has not yet been established","commit":"e82287d","knative.dev/controller":"activator","knative.dev/pod":"activator-768b674d7c-dzd6f"}
```

Trying to debug this, we also deployed the above charms in a non-airgapped environment and noticed that the pod has the same logs there too, but its container is able to become ready. Investigating this further inside the airgapped environment, we noticed the following in the CoreDNS logs:

```
[INFO] 10.1.205.153:40339 - 44253 "AAAA IN autoscaler.knative-serving.svc.cluster.local.lxd. udp 66 false 512" - - 0 2.000241772s
[INFO] 10.1.205.153:56166 - 44510 "A IN autoscaler.knative-serving.svc.cluster.local.lxd. udp 66 false 512" - - 0 2.000258023s
[ERROR] plugin/errors: 2 autoscaler.knative-serving.svc.cluster.local.lxd. AAAA: read udp 10.1.205.163:34994->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 autoscaler.knative-serving.svc.cluster.local.lxd. A: read udp 10.1.205.163:34020->8.8.4.4:53: i/o timeout
```

Looking at the above, we start to believe that this has to do with the way our airgapped environment is set up (more info about the environment in canonical/bundle-kubeflow#682).

From the above, we're led to believe that the DNS requests forwarded to the upstream resolvers (8.8.8.8 / 8.8.4.4) hang until they time out instead of being rejected immediately.

Solution: Configure the airgap environment to immediately reject requests towards outside the cluster.
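A minimal sketch of one way to do that on the host enforcing the airgap, assuming iptables is in play (the resolver addresses come from the CoreDNS errors above; the chain choice is an assumption):

```bash
# Reject (instead of silently dropping) forwarded DNS traffic to the upstream
# resolvers, so in-cluster lookups fail fast with ICMP port-unreachable rather
# than hanging until the i/o timeout seen in the CoreDNS logs.
sudo iptables -I FORWARD -p udp -d 8.8.8.8 --dport 53 -j REJECT
sudo iptables -I FORWARD -p udp -d 8.8.4.4 --dport 53 -j REJECT
```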
Right now we allow users to configure the following images for Knative Serving/Eventing:
knative-operators/charms/knative-serving/config.yaml (lines 25 to 34 in 5caa2db)
knative-operators/charms/knative-eventing/config.yaml (lines 11 to 15 in 5caa2db)
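For illustration only (the option name below is hypothetical; the real keys are the ones defined in the config.yaml snippets referenced above), overriding one of these images from the CLI would look something like:

```bash
# hypothetical option name; check the charm's config.yaml for the real key
juju config knative-serving custom_image_activator=registry.example.local/knative/activator:1.8.0
```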
But once I do a

```
microk8s ctr images ls
```

then I see the following relevant Knative images:

From the above list of images reported in MicroK8s, it seems a couple of images are not part of the KnativeServing CR. We'll have to make sure those images are configurable as well.
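As a quick cross-check, something like the following can surface Knative images that are still referenced from a public registry (a sketch; `registry.example.local` is an illustrative stand-in for the airgap registry):

```bash
# list image references known to MicroK8s's containerd, keep the Knative ones,
# and show any that do not come from the local airgap registry
microk8s ctr images ls -q | grep -i knative | grep -v '^registry.example.local/'
```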