Configuration "helloworld-go" is waiting for a Revision to become ready. #2598

Closed
lizrice opened this issue Nov 30, 2018 · 34 comments
Labels
area/networking kind/bug Categorizes issue or PR as related to a bug.

Comments

@lizrice

lizrice commented Nov 30, 2018

Expected Behavior

200 response from helloworld-go service

Actual Behavior

404 response from the helloworld-go service
Service always shows Configuration "helloworld-go" is waiting for a Revision to become ready

$ curl -v -H "Host: helloworld-go.default.example.com" http://$EXTERNAL_IP_ADDRESS
* Rebuilt URL to: http://168.61.16.23/
*   Trying 168.61.16.23...
* TCP_NODELAY set
* Connected to 168.61.16.23 (168.61.16.23) port 80 (#0)
> GET / HTTP/1.1
> Host: helloworld-go.default.example.com
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< location: http://helloworld-go.default.example.com/
< date: Fri, 30 Nov 2018 22:00:16 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host 168.61.16.23 left intact

Steps to Reproduce the Problem

  1. Follow instructions in https://github.com/knative/docs/blob/master/install/Knative-with-AKS.md to run helloworld-go on AKS

Additional Info

Although the symptoms are like #1971, I don't think the cause is the same: the cluster isn't behind a proxy, and the image is being pulled successfully, so access to the registry isn't the problem.

I've attached the output from kubectl describe for related resources, and also the controller log.
configuration.txt
deploy.txt
ksvc.txt
pod.txt
revision.txt
route.txt
rs.txt
controller.log

@knative-prow-robot knative-prow-robot added area/networking kind/bug Categorizes issue or PR as related to a bug. labels Nov 30, 2018
@ZhiminXiang

Could you please run kubectl get pods -n knative-serving and paste the result here? I want to make sure all of the Knative pods are correctly installed.
I suspect it may be a similar issue to #2536, in which the autoscaler was not installed.
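
Since the suspicion above is a missing or unhealthy autoscaler (as in #2536), a quick hedged check is to look at that deployment directly; the container name is assumed to be autoscaler, which may vary by Knative version:

# Confirm the autoscaler Deployment exists and reports available replicas
kubectl get deploy autoscaler -n knative-serving

# Skim its recent logs for startup errors
kubectl logs deploy/autoscaler -c autoscaler -n knative-serving --tail=50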

@gyliu513
Contributor

gyliu513 commented Dec 1, 2018

@ZhiminXiang I think we need to update the document a bit by adding a troubleshooting section in case people have this issue again; I will try to follow up with a PR soon.

@lizrice
Author

lizrice commented Dec 1, 2018

Here's what I get:

$ kubectl get pods -n knative-serving
NAME                          READY     STATUS    RESTARTS   AGE
activator-db79694db-5zjcp     2/2       Running   0          16h
activator-db79694db-jdqns     2/2       Running   0          16h
activator-db79694db-kwqps     2/2       Running   0          16h
autoscaler-86d954bffc-kztjq   2/2       Running   0          16h
controller-5cc6f8cc95-wwf6v   1/1       Running   0          16h
webhook-654c8d7bff-nc49q      1/1       Running   0          16h

I have also attached the output from kubectl get kpa -o yaml in case that gives any clues

kpa.txt

@lizrice
Author

lizrice commented Dec 1, 2018

I do see a lot of errors in the autoscaler logs
autoscaler-logs.txt

@lizrice
Author

lizrice commented Dec 1, 2018

I thought the logs might be indicating that the autoscaler didn't have access to API resources, but I just confirmed that

  • the autoscaler is mounting the correct token for the controller SA
  • the service account has permissions to access resources that the log file says are failing, for example
$ kubectl -n knative-serving --as=system:serviceaccount:knative-serving:controller auth can-i get configmaps
yes
  • I can use that token to successfully hit API endpoints that the log file says are failing, for example curl -vk -H "Authorization: Bearer <token>" "https://liz-knativ-knative-group-6de215-2fbcab2f.hcp.westus.azmk8s.io:443/api/v1/namespaces/knative-serving/configmaps?limit=500&resourceVersion=0" works fine

So the RBAC all seems OK; something else must be preventing the autoscaler's API requests from working properly.
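
For anyone repeating these checks, here is a consolidated sketch of the same steps; it assumes the autoscaler pod carries the app=autoscaler label and a container named autoscaler, which may differ between Knative versions:

# Can the controller service account read what the logs complain about?
kubectl -n knative-serving --as=system:serviceaccount:knative-serving:controller auth can-i get configmaps
kubectl -n knative-serving --as=system:serviceaccount:knative-serving:controller auth can-i --list

# Use the token actually mounted into the autoscaler pod against the API server
POD=$(kubectl -n knative-serving get pods -l app=autoscaler -o jsonpath='{.items[0].metadata.name}')
TOKEN=$(kubectl -n knative-serving exec "$POD" -c autoscaler -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER/api/v1/namespaces/knative-serving/configmaps?limit=500"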

@tcnghia
Contributor

tcnghia commented Dec 3, 2018

@lizrice can you please try reinstalling Knative, but using istio-lean.yaml instead of istio.yaml? That would sidestep Istio sidecar issues (if any) and further isolate the problem. If possible, try on a new cluster; if you are using an existing Knative installation, delete Istio completely before reinstalling (kubectl delete -f http://knative/release/url/istio.yaml --ignore-not-found).
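
A rough sketch of that clean reinstall; $KNATIVE_RELEASE stands in for whichever release URL you originally installed from (deliberately left as a placeholder here):

# Remove the full Istio install, then apply the lean variant (no sidecar injection)
kubectl delete -f "$KNATIVE_RELEASE/istio.yaml" --ignore-not-found
kubectl apply -f "$KNATIVE_RELEASE/istio-lean.yaml"

# Wait until the ingress components are Running before re-testing the route
kubectl get pods -n istio-system -w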

@tcnghia
Contributor

tcnghia commented Dec 3, 2018

I believe you are hitting the first issue that @krancour outlined in #2270.

@andrewrynhard

andrewrynhard commented Dec 5, 2018

@tcnghia I'm seeing a 404, but the autoscaler works just fine:

⟩ kubectl get routes
NAME            DOMAIN                             READY   REASON
helloworld-go   helloworld-go.ci.ex.ample.com      True
⟩ kubectl get configurations
NAME            LATESTCREATED         LATESTREADY           READY   REASON
helloworld-go   helloworld-go-00001   helloworld-go-00001   True
⟩ kubectl get pods
NAME                                              READY   STATUS    RESTARTS   AGE
helloworld-go-00001-deployment-6864f7c7cc-ghqgb   3/3     Running   0          21m
⟩ curl -vIL http://helloworld-go.ci.ex.ample.com
* Rebuilt URL to: http://helloworld-go.ci.ex.ample.com/
*   Trying 35.167.168.67...
* TCP_NODELAY set
* Connected to helloworld-go.ci.ex.ample.com (35.167.168.67) port 80 (#0)
> HEAD / HTTP/1.1
> Host: helloworld-go.ci.ex.ample.com
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
HTTP/1.1 404 Not Found
< location: http://helloworld-go.ci.ex.ample.com/
location: http://helloworld-go.ci.ex.ample.com/
< date: Wed, 05 Dec 2018 17:53:58 GMT
date: Wed, 05 Dec 2018 17:53:58 GMT
< server: envoy
server: envoy
< transfer-encoding: chunked
transfer-encoding: chunked

Not sure what I'm missing here.

Looks like the pod is showing 2/2 ready, but on a describe the readinessProbe fails.

  Warning  Unhealthy  3m16s (x14 over 6m40s)  kubelet, xxx  Readiness probe failed: Get http://10.244.3.101:8022/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

@tcnghia
Contributor

tcnghia commented Jan 3, 2019

@andrewrynhard the readiness probe in this case stopped failing after the first ~3m, so it is correct that the pod is ready. It is still a long time to pass readiness, though.

Can you please try out the steps to further diagnose the issue: https://github.com/knative/serving/blob/master/docs/debugging/application-debugging-guide.md#check-clusteringressistio-routing
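
For reference, the routing check in that guide boils down to something like the following; the exact resource names and namespaces depend on the Knative and Istio versions, so treat this as a sketch:

# Does a ClusterIngress exist for the route, and is it Ready?
kubectl get clusteringress -o yaml

# Was it programmed into Istio VirtualServices and the shared gateway?
kubectl get virtualservices --all-namespaces
kubectl get gateway --all-namespaces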

@cdrage

cdrage commented Jan 24, 2019

I'm getting the exact same issue with the PHP example. Any pointers?

▶ kubectl describe ksvc/helloworld-php
Name:         helloworld-php
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"serving.knative.dev/v1alpha1","kind":"Service","metadata":{"annotations":{},"name":"helloworld-php","namespace":"default"},...
API Version:  serving.knative.dev/v1alpha1
Kind:         Service
Metadata:
  Creation Timestamp:  2019-01-24T20:57:05Z
  Generation:          1
  Resource Version:    15397
  Self Link:           /apis/serving.knative.dev/v1alpha1/namespaces/default/services/helloworld-php
  UID:                 9aed31cc-201a-11e9-855e-52540046b08b
Spec:
  Generation:  1
  Run Latest:
    Configuration:
      Revision Template:
        Spec:
          Container:
            Env:
              Name:         TARGET
              Value:        HELLO WORLD!
            Image:          gcr.io/knative-samples/helloworld-php
          Timeout Seconds:  300
Status:
  Conditions:
    Last Transition Time:        2019-01-24T20:57:05Z
    Severity:                    Error
    Status:                      Unknown
    Type:                        ConfigurationsReady
    Last Transition Time:        2019-01-24T20:57:05Z
    Message:                     Configuration "helloworld-php" is waiting for a Revision to become ready.
    Reason:                      RevisionMissing
    Severity:                    Error
    Status:                      Unknown
    Type:                        Ready
    Last Transition Time:        2019-01-24T20:57:05Z
    Message:                     Configuration "helloworld-php" is waiting for a Revision to become ready.
    Reason:                      RevisionMissing
    Severity:                    Error
    Status:                      Unknown
    Type:                        RoutesReady
  Latest Created Revision Name:  helloworld-php-00001
  Observed Generation:           1
Events:
  Type    Reason   Age                From                Message
  ----    ------   ----               ----                -------
  Normal  Created  11s                service-controller  Created Configuration "helloworld-php"
  Normal  Created  11s                service-controller  Created Route "helloworld-php"

@tcnghia
Contributor

tcnghia commented Jan 24, 2019

Can you please share kubectl describe revision/helloworld-php-00001 as well? Thanks.

@cdrage

cdrage commented Jan 25, 2019

@tcnghia

▶ kubectl describe revision helloworld-php-00001
Name:         helloworld-php-00001
Namespace:    default
Labels:       serving.knative.dev/configuration=helloworld-php
              serving.knative.dev/configurationGeneration=1
              serving.knative.dev/configurationMetadataGeneration=1
              serving.knative.dev/service=helloworld-php
Annotations:  <none>
API Version:  serving.knative.dev/v1alpha1
Kind:         Revision
Metadata:
  Creation Timestamp:  2019-01-24T20:57:05Z
  Generation:          1
  Owner References:
    API Version:           serving.knative.dev/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Configuration
    Name:                  helloworld-php
    UID:                   9aefad90-201a-11e9-855e-52540046b08b
  Resource Version:        16369
  Self Link:               /apis/serving.knative.dev/v1alpha1/namespaces/default/revisions/helloworld-php-00001
  UID:                     9af15913-201a-11e9-855e-52540046b08b
Spec:
  Container:
    Env:
      Name:   TARGET
      Value:  HELLO WORLD!
    Image:    gcr.io/knative-samples/helloworld-php
    Name:
    Resources:
  Generation:       1
  Timeout Seconds:  300
Status:
  Conditions:
    Last Transition Time:  2019-01-25T15:10:49Z
    Severity:              Error
    Status:                True
    Type:                  BuildSucceeded
    Last Transition Time:  2019-01-25T15:11:15Z
    Message:               Unable to fetch image "gcr.io/knative-samples/helloworld-php": Get https://gcr.io/v2/: x509: certificate has expired or is not yet valid
    Reason:                ContainerMissing
    Severity:              Error
    Status:                False
    Type:                  ContainerHealthy
    Last Transition Time:  2019-01-25T15:11:15Z
    Message:               Unable to fetch image "gcr.io/knative-samples/helloworld-php": Get https://gcr.io/v2/: x509: certificate has expired or is not yet valid
    Reason:                ContainerMissing
    Severity:              Error
    Status:                False
    Type:                  Ready
    Last Transition Time:  2019-01-25T15:10:49Z
    Severity:              Error
    Status:                Unknown
    Type:                  ResourcesAvailable
  Log URL:                 http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana#/discover?_a=(query:(match:(kubernetes.labels.knative-dev%2FrevisionUID:(query:'9af15913-201a-11e9-855e-52540046b08b',type:phrase))))

Events:  <none>

Exact same as my other issue here #2991
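
One hedged diagnostic idea for the x509 "certificate has expired or is not yet valid" error above (a common cause is node clock skew rather than a genuinely bad certificate on gcr.io):

# Compare the local clock with the certificate dates the registry presents
date -u
curl -sv https://gcr.io/v2/ 2>&1 | grep -i 'date'

# Also check the clock on the node running the pod (e.g. via ssh or minikube ssh)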

@tcnghia
Contributor

tcnghia commented Jan 31, 2019

@lizrice Can you please check if your issue is similar to @cdrage's #2991? Thanks.

@mattmoor mattmoor added this to the Needs Triage milestone Feb 11, 2019
@hpandeycodeit

I am getting the same issue with a Java example. Any pointers?

kubectl describe ksvc helloworld-new-java
Name:         helloworld-new-java
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"serving.knative.dev/v1alpha1","kind":"Service","metadata":{"annotations":{},"name":"helloworld-new-java","namespace":"default"},"spec":{...
              serving.knative.dev/creator=minikube-user
              serving.knative.dev/lastModifier=minikube-user
API Version:  serving.knative.dev/v1alpha1
Kind:         Service
Metadata:
  Creation Timestamp:  2019-02-24T10:05:36Z
  Generation:          1
  Resource Version:    3640
  Self Link:           /apis/serving.knative.dev/v1alpha1/namespaces/default/services/helloworld-new-java
  UID:                 bb0dd911-381b-11e9-9016-ba4545af638e
Spec:
  Run Latest:
    Configuration:
      Revision Template:
        Metadata:
          Creation Timestamp:  <nil>
        Spec:
          Container:
            Env:
              Name:   TARGET
              Value:  himanshu Sample v1
            Image:    docker.io/xxxxx/helloworld-new-java
            Name:     
            Resources:
          Timeout Seconds:  300
Status:
  Conditions:
    Last Transition Time:        2019-02-24T10:05:36Z
    Status:                      Unknown
    Type:                        ConfigurationsReady
    Last Transition Time:        2019-02-24T10:05:36Z
    Message:                     Configuration "helloworld-new-java" is waiting for a Revision to become ready.
    Reason:                      RevisionMissing
    Status:                      Unknown
    Type:                        Ready
    Last Transition Time:        2019-02-24T10:05:36Z
    Message:                     Configuration "helloworld-new-java" is waiting for a Revision to become ready.
    Reason:                      RevisionMissing
    Status:                      Unknown
    Type:                        RoutesReady
  Latest Created Revision Name:  helloworld-new-java-z2pkx
  Observed Generation:           1
Events:
  Type    Reason   Age                From                Message
  ----    ------   ----               ----                -------
  Normal  Created  15m                service-controller  Created Configuration "helloworld-new-java"
  Normal  Created  15m                service-controller  Created Route "helloworld-new-java"
  Normal  Updated  15m (x2 over 15m)  service-controller  Updated Service "helloworld-new-java"
  

@vagababov
Contributor

vagababov commented Feb 24, 2019 via email

@hpandeycodeit

@vagababov Here is the describe on revision:

It's saying Message: Requests to the target are being buffered as resources are provisioned.

Namespace:    default
Labels:       serving.knative.dev/configuration=hello-world-mon
              serving.knative.dev/configurationGeneration=1
              serving.knative.dev/configurationMetadataGeneration=1
              serving.knative.dev/service=hello-world-mon
Annotations:  <none>
API Version:  serving.knative.dev/v1alpha1
Kind:         Revision
Metadata:
  Creation Timestamp:  2019-02-25T18:21:04Z
  Generate Name:       hello-world-mon-
  Generation:          1
  Owner References:
    API Version:           serving.knative.dev/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Configuration
    Name:                  hello-world-mon
    UID:                   1cdae69d-392a-11e9-899e-a6b02c032361
  Resource Version:        5862
  Self Link:               /apis/serving.knative.dev/v1alpha1/namespaces/default/revisions/hello-world-mon-6tzz6
  UID:                     1cdc7bb1-392a-11e9-899e-a6b02c032361
Spec:
  Container:
    Env:
      Name:   TARGET
      Value:  himanshu Sample v1
    Image:    docker.io/hpacodeit/hello-world-mon
    Name:     
    Resources:
  Timeout Seconds:  300
Status:
  Conditions:
    Last Transition Time:  2019-02-25T18:21:05Z
    Message:               Requests to the target are being buffered as resources are provisioned.
    Reason:                Queued
    Severity:              Info
    Status:                Unknown
    Type:                  Active
    Last Transition Time:  2019-02-25T18:21:04Z
    Status:                True
    Type:                  BuildSucceeded
    Last Transition Time:  2019-02-25T18:21:05Z
    Reason:                Deploying
    Status:                Unknown
    Type:                  ContainerHealthy
    Last Transition Time:  2019-02-25T18:21:05Z
    Reason:                Deploying
    Status:                Unknown
    Type:                  Ready
    Last Transition Time:  2019-02-25T18:21:05Z
    Reason:                Deploying
    Status:                Unknown
    Type:                  ResourcesAvailable
  Image Digest:            index.docker.io/hpacodeit/hello-world-mon@sha256:04e009a42e6b07eba513eeafb0a3746eadfdf68bd27222e3c27bef238e56d55a
  Log URL:                 http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana#/discover?_a=(query:(match:(kubernetes.labels.knative-dev%2FrevisionUID:(query:'1cdc7bb1-392a-11e9-899e-a6b02c032361',type:phrase))))
  Observed Generation:     1
  Service Name:            hello-world-mon-6tzz6-service
Events:                    <none>

@vagababov
Contributor

So what happens when you do kubectl get deployments and kubectl get pods for your deployment? Are they created? Are they in the process of being created? What is the status?

@hpandeycodeit

For the pod, it's showing Running, but for the deployment, it's not available:

NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-world-mon-6tzz6-deployment   1         1         1            0           17m

kubectl get pods
NAME                                                READY     STATUS    RESTARTS   AGE
hello-world-mon-6tzz6-deployment-56747cf7fd-5ps6q   2/3       Running   0          18m

So I am trying to run this sample app, https://github.com/knative/docs/tree/master/serving/samples/helloworld-java, and for that I need the domain to invoke the Java app. But the domain is not showing any value:

kubectl get ksvc hello-world-mon
NAME              DOMAIN    LATESTCREATED           LATESTREADY   READY     REASON
hello-world-mon             hello-world-mon-6tzz6                 Unknown   RevisionMissing

@vagababov
Contributor

vagababov commented Feb 25, 2019 via email

@ZhiminXiang

@hpandeycodeit Looks like you only have 2 containers ready in your pod. Could you please run kubectl get pod hello-world-mon-6tzz6-deployment-56747cf7fd-5ps6q -oyaml and show us which container is not ready?

After you identify the unready container, could you please also check that container's log and see what is causing it to fail to start?
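
A compact, hedged way to see per-container readiness for that pod (same pod name as above; output is the container name and its ready flag):

kubectl get pod hello-world-mon-6tzz6-deployment-56747cf7fd-5ps6q -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.ready}{"\n"}{end}'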

@hpandeycodeit

Here is the output:

Himanshu:helloworld hpandey$ kubectl get pod hello-world-mon-q5dc6-deployment-cb5fd7545-7td5x  -oyaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.istio.io/inject: "true"
    sidecar.istio.io/status: '{"version":"28fe064a36e3b479339d99c4be6e85e25ba4bcb2bc19ee6ce5c94a9907cf30f5","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
    traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
  creationTimestamp: 2019-02-25T19:15:57Z
  generateName: hello-world-mon-q5dc6-deployment-cb5fd7545-
  labels:
    app: hello-world-mon-q5dc6
    pod-template-hash: "761983101"
    serving.knative.dev/configuration: hello-world-mon
    serving.knative.dev/configurationGeneration: "1"
    serving.knative.dev/configurationMetadataGeneration: "1"
    serving.knative.dev/revision: hello-world-mon-q5dc6
    serving.knative.dev/revisionUID: c6ecbc9b-3931-11e9-86aa-92624b03e53c
    serving.knative.dev/service: hello-world-mon
  name: hello-world-mon-q5dc6-deployment-cb5fd7545-7td5x
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: hello-world-mon-q5dc6-deployment-cb5fd7545
    uid: c7429dd0-3931-11e9-86aa-92624b03e53c
  resourceVersion: "2835"
  selfLink: /api/v1/namespaces/default/pods/hello-world-mon-q5dc6-deployment-cb5fd7545-7td5x
  uid: c7466432-3931-11e9-86aa-92624b03e53c
spec:
  containers:
  - env:
    - name: TARGET
      value: himanshu Sample v1
    - name: PORT
      value: "8080"
    - name: K_REVISION
      value: hello-world-mon-q5dc6
    - name: K_CONFIGURATION
      value: hello-world-mon
    - name: K_SERVICE
      value: hello-world-mon
    image: index.docker.io/hpacodeit/hello-world-mon@sha256:04e009a42e6b07eba513eeafb0a3746eadfdf68bd27222e3c27bef238e56d55a
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        httpGet:
          path: /quitquitquit
          port: 8022
          scheme: HTTP
    name: user-container
    ports:
    - containerPort: 8080
      name: user-port
      protocol: TCP
    resources:
      requests:
        cpu: 400m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /var/log
      name: varlog
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kdttt
      readOnly: true
  - env:
    - name: SERVING_NAMESPACE
      value: default
    - name: SERVING_CONFIGURATION
      value: hello-world-mon
    - name: SERVING_REVISION
      value: hello-world-mon-q5dc6
    - name: SERVING_AUTOSCALER
      value: autoscaler
    - name: SERVING_AUTOSCALER_PORT
      value: "8080"
    - name: CONTAINER_CONCURRENCY
      value: "0"
    - name: REVISION_TIMEOUT_SECONDS
      value: "300"
    - name: SERVING_POD
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: SERVING_LOGGING_CONFIG
      value: |-
        {
          "level": "info",
          "development": false,
          "outputPaths": ["stdout"],
          "errorOutputPaths": ["stderr"],
          "encoding": "json",
          "encoderConfig": {
            "timeKey": "ts",
            "levelKey": "level",
            "nameKey": "logger",
            "callerKey": "caller",
            "messageKey": "msg",
            "stacktraceKey": "stacktrace",
            "lineEnding": "",
            "levelEncoder": "",
            "timeEncoder": "iso8601",
            "durationEncoder": "",
            "callerEncoder": ""
          }
        }
    - name: SERVING_LOGGING_LEVEL
      value: info
    - name: USER_PORT
      value: "8080"
    - name: SYSTEM_NAMESPACE
      value: knative-serving
    image: gcr.io/knative-releases/github.com/knative/serving/cmd/queue@sha256:e19ca17d2b729904d2662a30b6c5c27cf4b62fd64baef2da4125525a4f9346e5
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        httpGet:
          path: /quitquitquit
          port: 8022
          scheme: HTTP
    name: queue-proxy
    ports:
    - containerPort: 8012
      name: queue-port
      protocol: TCP
    - containerPort: 8022
      name: queueadm-port
      protocol: TCP
    - containerPort: 9090
      name: queue-metrics
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /health
        port: 8022
        scheme: HTTP
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 10
    resources:
      requests:
        cpu: 25m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kdttt
      readOnly: true
  - args:
    - proxy
    - sidecar
    - --configPath
    - /etc/istio/proxy
    - --binaryPath
    - /usr/local/bin/envoy
    - --serviceCluster
    - hello-world-mon-q5dc6
    - --drainDuration
    - 45s
    - --parentShutdownDuration
    - 1m0s
    - --discoveryAddress
    - istio-pilot.istio-system:15007
    - --discoveryRefreshDelay
    - 1s
    - --zipkinAddress
    - zipkin.istio-system:9411
    - --connectTimeout
    - 10s
    - --statsdUdpAddress
    - istio-statsd-prom-bridge.istio-system:9125
    - --proxyAdminPort
    - "15000"
    - --controlPlaneAuthPolicy
    - NONE
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: ISTIO_META_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: ISTIO_META_INTERCEPTION_MODE
      value: REDIRECT
    image: docker.io/istio/proxyv2:1.0.2
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - sh
          - -c
          - sleep 20; while [ $(netstat -plunt | grep tcp | grep -v envoy | wc -l
            | xargs) -ne 0 ]; do sleep 1; done
    name: istio-proxy
    resources:
      requests:
        cpu: 10m
    securityContext:
      readOnlyRootFilesystem: true
      runAsUser: 1337
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
  dnsPolicy: ClusterFirst
  initContainers:
  - args:
    - -p
    - "15001"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - '*'
    - -x
    - ""
    - -b
    - 8080, 8012, 8022, 9090,
    - -d
    - ""
    image: docker.io/istio/proxy_init:1.0.2
    imagePullPolicy: IfNotPresent
    name: istio-init
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  nodeName: minikube
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 300
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: varlog
  - name: default-token-kdttt
    secret:
      defaultMode: 420
      secretName: default-token-kdttt
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      defaultMode: 420
      optional: true
      secretName: istio.default
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-02-25T19:15:59Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-02-25T19:15:57Z
    message: 'containers with unready status: [queue-proxy]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    message: 'containers with unready status: [queue-proxy]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2019-02-25T19:15:57Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://4e5136dbe60ce34096a07f40a4a027674b4fcc85bb9dffc3a9f054a819be6cf3
    image: istio/proxyv2:1.0.2
    imageID: docker-pullable://istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
    lastState: {}
    name: istio-proxy
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2019-02-25T19:16:00Z
  - containerID: docker://0ece6f8c4ff856c863c6c8a7cff068d4b7497426aeed7ee1b6fd410e738e2887
    image: sha256:912692029afc438169bac7d44721ab9f29f98513a14f33136c4beddd80dfb18e
    imageID: docker-pullable://gcr.io/knative-releases/github.com/knative/serving/cmd/queue@sha256:e19ca17d2b729904d2662a30b6c5c27cf4b62fd64baef2da4125525a4f9346e5
    lastState: {}
    name: queue-proxy
    ready: false
    restartCount: 0
    state:
      running:
        startedAt: 2019-02-25T19:16:00Z
  - containerID: docker://64b941c2f64555b1db4e8acafb3bdc9a53e668ce83a2a82dc23d25185a7f632d
    image: sha256:96e93e9cf283f2aee4cef3abdac07dbfd87cf8c0a2ae47d936097d3141f477e6
    imageID: docker-pullable://hpacodeit/hello-world-mon@sha256:04e009a42e6b07eba513eeafb0a3746eadfdf68bd27222e3c27bef238e56d55a
    lastState: {}
    name: user-container
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2019-02-25T19:15:59Z
  hostIP: 192.168.64.24
  initContainerStatuses:
  - containerID: docker://ee15d2b40a6e75a928b9cff94c7241a6a5405a71d861f43a2339c8e31f45ead6
    image: istio/proxy_init:1.0.2
    imageID: docker-pullable://istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
    lastState: {}
    name: istio-init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://ee15d2b40a6e75a928b9cff94c7241a6a5405a71d861f43a2339c8e31f45ead6
        exitCode: 0
        finishedAt: 2019-02-25T19:15:58Z
        reason: Completed
        startedAt: 2019-02-25T19:15:58Z
  phase: Running
  podIP: 172.17.0.20
  qosClass: Burstable
  startTime: 2019-02-25T19:15:57Z

Looks like queue-proxy is not ready.

@ZhiminXiang

Could you please also run
kubectl logs hello-world-mon-q5dc6-deployment-cb5fd7545-7td5x -c queue-proxy and see if there is any error?

@hpandeycodeit

This is giving a connection refused error:

{"level":"error","ts":"2019-02-25T19:17:22.164Z","logger":"queueproxy
","caller":"queue/main.go:183","msg":"User-container could not be 
probed successfully.","knative.dev/key":"default/
hello-world-mon-q5dc6","knative.dev/pod":"hello-world-mon-q5dc6-deplo
yment-cb5fd7545-7td5x","error":"dial tcp 127.0.0.1:8080: connect: 
connection refused","stacktrace":"main.probeUserContainer\n\t/go/src/
github.com/knative/serving/cmd/queue/main.go:183\ngit.luolix.top/knative/
serving/pkg/queue/health.(*State).HealthHandler.func1\n\t/go/src/
github.com/knative/serving/pkg/queue/health/health_state.go:86\nnet/
http.HandlerFunc.ServeHTTP\n\t/root/sdk/go1.12rc1/src/net/http/
server.go:1995\nnet/http.(*ServeMux).ServeHTTP\n\t/root/sdk/
go1.12rc1/src/net/http/server.go:2375\nnet/
http.serverHandler.ServeHTTP\n\t/root/sdk/go1.12rc1/src/net/http/
server.go:2774\nnet/http.(*conn).serve\n\t/root/sdk/go1.12rc1/src/net
/http/server.go:1878"}

@vagababov
Contributor

So it seems your container did not spring up. Let me try to run the example myself.
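
The queue-proxy error above means it cannot reach the user container on 127.0.0.1:8080, which usually indicates the app never started listening on the port Knative injects via the PORT env var. A hedged way to verify that outside the cluster, reusing the image from this thread:

# Run the image locally with PORT=8080 and check that it actually serves on it
docker run -d --rm --name hello-test -p 8080:8080 -e PORT=8080 docker.io/hpacodeit/hello-world-mon
curl -i http://localhost:8080/
docker logs hello-test
docker stop hello-test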

@vagababov
Contributor

I just deployed the sample on my cluster and it worked on the first try.

@jtlz2

jtlz2 commented Mar 13, 2019

I have just had this for helloworld-python; it righted itself after 10 minutes, so I am wondering if it is simply very slow to spin up.

@tcnghia tcnghia closed this as completed Jun 26, 2019
@eallred-google eallred-google modified the milestones: Needs Triage, Ice Box Oct 23, 2019
@Vivek-anand-jain

I am still facing this issue:

curl -v -H "Host: http://helloworld-go.default.example.com" http://127.0.0.1:$INGRESS_PORT
* Rebuilt URL to: http://127.0.0.1:31380/
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 31380 (#0)
> GET / HTTP/1.1
> Host: http://helloworld-go.default.example.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Thu, 19 Dec 2019 03:55:56 GMT
< server: istio-envoy
< content-length: 0
<
* Connection #0 to host 127.0.0.1 left intact
Kubernetes: v16.3.0
Istio: istio-1.3.5
knative-serving: v0.11

I used istio-lean.yaml for the Istio installation.

@vagababov
Contributor

So are you using minikube?

@Vivek-anand-jain

No, I have a 2-node cluster; I tried with the IP addresses of both nodes.

@vagababov
Contributor

Why would the IP be 127.0.0.1 then?

@Vivek-anand-jain

Sorry for the misleading configuration. I put a random IP in the post. I used the IP address I got from this:

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')

And my request is like:
curl -v -H "Host: http://helloworld-go.default.example.com" http://$INGRESS_HOST:$INGRESS_PORT

@Vivek-anand-jain

This command, export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}'), gave me the public IP of the system.

@vyom-soft

Hello,
I am seeing the same issue. Here is the revision info:
Name:         minserver-one-v1
Namespace:    vyom
Labels:       serving.knative.dev/configuration=minserver-one
              serving.knative.dev/configurationGeneration=1
              serving.knative.dev/configurationUID=42de3bec-0a17-498a-a070-75998032b6b7
              serving.knative.dev/routingState=active
              serving.knative.dev/service=minserver-one
              serving.knative.dev/serviceUID=9d8f77be-3932-49cb-86e6-7271d25ec7f8
Annotations:  autoscaling.knative.dev/maxScale: 5
              autoscaling.knative.dev/minScale: 1
              autoscaling.knative.dev/target: 1
              serving.knative.dev/creator: kubeadmin
              serving.knative.dev/routes: minserver-one
              serving.knative.dev/routingStateModified: 2021-09-28T08:47:36Z
API Version:  serving.knative.dev/v1
Kind:         Revision
Metadata:
  Creation Timestamp:  2021-09-28T08:47:36Z
  Generation:          1
  Managed Fields:
    API Version:  serving.knative.dev/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:autoscaling.knative.dev/maxScale:
          f:autoscaling.knative.dev/minScale:
          f:autoscaling.knative.dev/target:
          f:serving.knative.dev/creator:
          f:serving.knative.dev/routes:
          f:serving.knative.dev/routingStateModified:
        f:labels:
          .:
          f:serving.knative.dev/configuration:
          f:serving.knative.dev/configurationGeneration:
          f:serving.knative.dev/configurationUID:
          f:serving.knative.dev/routingState:
          f:serving.knative.dev/service:
          f:serving.knative.dev/serviceUID:
        f:ownerReferences:
      f:spec:
        .:
        f:containerConcurrency:
        f:containers:
        f:enableServiceLinks:
        f:timeoutSeconds:
      f:status:
        .:
        f:actualReplicas:
        f:conditions:
        f:containerStatuses:
        f:desiredReplicas:
        f:imageDigest:
        f:observedGeneration:
        f:serviceName:
    Manager:      controller
    Operation:    Update
    Time:         2021-09-28T08:47:37Z
  Owner References:
    API Version:           serving.knative.dev/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Configuration
    Name:                  minserver-one
    UID:                   42de3bec-0a17-498a-a070-75998032b6b7
  Resource Version:        85400156
  Self Link:               /apis/serving.knative.dev/v1/namespaces/vyom/revisions/minserver-one-v1
  UID:                     58b0dccd-5c65-42bd-9fef-e66306d48d3b
Spec:
  Container Concurrency:  0
  Containers:
    Image:  docker-world.national/trust/vyom/minserver
    Name:   user-container
    Readiness Probe:
      Success Threshold:  1
      Tcp Socket:
        Port:  0
    Resources:
  Enable Service Links:  false
  Timeout Seconds:       300
Status:
  Actual Replicas:  0
  Conditions:
    Last Transition Time:  2021-09-28T08:47:37Z
    Message:               Requests to the target are being buffered as resources are provisioned.
    Reason:                Queued
    Severity:              Info
    Status:                Unknown
    Type:                  Active
    Last Transition Time:  2021-09-28T08:47:37Z
    Reason:                Deploying
    Status:                Unknown
    Type:                  ContainerHealthy
    Last Transition Time:  2021-09-28T08:47:37Z
    Reason:                Deploying
    Status:                Unknown
    Type:                  Ready
    Last Transition Time:  2021-09-28T08:47:37Z
    Reason:                Deploying
    Status:                Unknown
    Type:                  ResourcesAvailable
  Container Statuses:
    Image Digest:  docker-world.national/trust/vyom/minserver/minserver@sha256:fc4c6304a9d476b5fb57823ee9f5c314f57ed1d34a1afff0c9c4ff28fc8331b6
    Name:          user-container
  Desired Replicas:     1
  Image Digest:         docker-world.national/trust/vyom/minserver/minserver@sha256:fc4c6304a9d476b5fb57823ee9f5c314f57ed1d34a1afff0c9c4ff28fc8331b6
  Observed Generation:  1
  Service Name:         minserver-one-v1
Events:
  Type     Reason         Age  From                 Message
  ----     ------         ---- ----                 -------
  Warning  InternalError  46m  revision-controller  failed to update deployment "minserver-one-v1-deployment": Operation cannot be fulfilled on deployments.apps "minserver-one-v1-deployment": the object has been modified; please apply your changes to the latest version and try again

and the log:

$ oc logs -f minserver-one-v1-deployment-6b454f955b-xddcd -c queue-proxy
{"severity":"INFO","timestamp":"2021-09-28T10:17:23.8498997Z","caller":"logging/config.go:116","message":"Successfully created the logger."}
{"severity":"INFO","timestamp":"2021-09-28T10:17:23.8551836Z","caller":"logging/config.go:117","message":"Logging level set to: info"}
{"severity":"INFO","timestamp":"2021-09-28T10:17:23.8552177Z","caller":"logging/config.go:79","message":"Fetch GitHub commit ID from kodata failed","error":""KO_DATA_PATH" does not exist or is empty"}
{"level":"info","ts":1632824243.8555899,"logger":"fallback","caller":"metrics/metrics_worker.go:76","msg":"Flushing the existing exporter before setting up the new exporter."}
{"level":"info","ts":1632824243.8557427,"logger":"fallback","caller":"metrics/prometheus_exporter.go:51","msg":"Created Prometheus exporter with config: &{knative.dev/internal/serving revision prometheus 5000000000 false 9091 0.0.0.0 false { false}}. Start the server for Prometheus exporter."}
{"level":"info","ts":1632824243.8557637,"logger":"fallback","caller":"metrics/metrics_worker.go:91","msg":"Successfully updated the metrics exporter; old config: ; new config &{knative.dev/internal/serving revision prometheus 5000000000 false 9091 0.0.0.0 false { false}}"}
aggressive probe error (failed 202 times): dial tcp 127.0.0.1:8080: connect: connection refused
timed out waiting for the condition
aggressive probe error (failed 196 times): dial tcp 127.0.0.1:8080: connect: connection refused
timed out waiting for the condition

@dprotaso dprotaso removed this from the Ice Box milestone Oct 6, 2021
@codershangfeng

I met a similar revision-missing issue when going through the tutorial at https://knative.dev/docs/getting-started/first-service/

The local env/CLI versions on my machine are listed below:
kind: v0.14.0
knative-serving: v0.33.0
kubernetes: v1.23.3 (node image: kindest/node:v1.23.3)

The root cause was still not clear to me after collecting some debugging info.

Then I reinstalled the local k8s cluster with kind delete cluster --name knative and kn quickstart kind.

The helloworld-go service could finally be created and accessed successfully.
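
For anyone wanting to try the same reset, a hedged sketch of that path (assumes kind and the kn CLI with the quickstart plugin are installed):

# Tear down the quickstart cluster and recreate it from scratch
kind delete cluster --name knative
kn quickstart kind

# Confirm the serving components are up before recreating the Service
kubectl get pods -n knative-serving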
