
w/ flagd provider: max stream connect attempts and uses 8014 (Port confusion and app crashing) #482

Closed
agardnerIT opened this issue May 5, 2023 · 3 comments

@agardnerIT
Contributor

I'm trying to get a dockerised Node app to work with the operator. I don't know where the problem(s) are coming from (or whether I've missed something), but my app is crashing and doesn't seem to read the FeatureFlagConfiguration CR.

I'd be grateful if someone could assist.

Here's what I have so far:

  1. Installed the operator
  2. Added flagd-provider to my node package.json

package.json

"dependencies": {
      "@openfeature/flagd-provider": "^0.7.6",
}
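
(app.js below also imports the SDK itself, so the full dependencies block presumably looks something like the following; the js-sdk version here is just a placeholder.)

"dependencies": {
  "@openfeature/js-sdk": "1.x",
  "@openfeature/flagd-provider": "^0.7.6"
}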

app.js

import express from "express";
import Router from "express-promise-router";
import cowsay from "cowsay";
import { OpenFeature } from '@openfeature/js-sdk';
import { FlagdProvider } from '@openfeature/flagd-provider';

const app = express();
const routes = Router();

app.use((_, res, next) => {
  res.setHeader("content-type", "text/plain");
  next();
}, routes);

// Start OpenFeature Code
const FLAG_CONFIGURATION = {
  host: '0.0.0.0',
  port: 8014,
}
const featureFlags = OpenFeature.getClient();
const featureFlagProvider = new FlagdProvider(FLAG_CONFIGURATION);
OpenFeature.setProvider(featureFlagProvider);
// End OpenFeature Code

routes.get("/", async (_, res) => {
  const withCow = await featureFlags.getStringValue("WITH_COWS", "no");
  if (withCow === "yes") {
    res.send(cowsay.say({ text: "Hello, world!" }));
  } else {
    res.send("Hello, world!");
  }
});

app.listen(3333, () => {
  console.log("Server running at http://localhost:3333");
});

Kubernetes YAML

apiVersion: core.openfeature.dev/v1alpha2
kind: FeatureFlagConfiguration
metadata:
  name: cow-control
spec:
  featureFlagSpec:
    flags:
      WITH_COWS:
        state: "ENABLED"
        variants:
          first: "no"
          second: "yes"
        defaultVariant: "first"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app.kubernetes.io/name: MyApp
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: MyApp
  template:
    metadata:
      labels:
        app.kubernetes.io/name: MyApp
      annotations:
        openfeature.dev/enabled: "true"
        openfeature.dev/featureflagconfiguration: "cow-control"
    spec:
      containers:
      - name: myapp
        image: gardnera/openfeaturedemo:2
        imagePullPolicy: Always
        ports:
        - containerPort: 3333
        #env:
          #- name: WITH_COWS
          #  value: "no"
        #envFrom:
        #  - configMapRef:
        #      name: cow-control
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 3333
      targetPort: 3333

Results

$ kubectl describe pod -l app.kubernetes.io/name=MyApp
Name:             myapp-b65b88f86-p2l6s
Namespace:        default
Priority:         0
Service Account:  default
Node:             k3d-k3s-default-server-0/172.18.0.3
Start Time:       Fri, 05 May 2023 16:35:29 +1000
Labels:           app.kubernetes.io/name=MyApp
                  pod-template-hash=b65b88f86
Annotations:      container.seccomp.security.alpha.kubernetes.io/flagd: runtime/default
                  openfeature.dev/allowkubernetessync: true
                  openfeature.dev/enabled: true
                  openfeature.dev/featureflagconfiguration: cow-control
Status:           Running
IP:               10.42.0.41
IPs:
  IP:           10.42.0.41
Controlled By:  ReplicaSet/myapp-b65b88f86
Containers:
  myapp:
    Container ID:   containerd://b8f8810bde8a424c7bb742fc181e4bb671eca2db10ef391594b1da5793dbf6ba  
    Image:          gardnera/openfeaturedemo:2
    Image ID:       docker.io/gardnera/openfeaturedemo@sha256:1323db71a98daacd0ee0f5befdd8fbadac9152ac3de4411a8b8b657306948565
    Port:           3333/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 05 May 2023 16:48:00 +1000
      Finished:     Fri, 05 May 2023 16:49:03 +1000
    Ready:          False
    Restart Count:  6
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x99rp (ro)
  flagd:
    Container ID:  containerd://560e7d4cbc0c348b3536e6347016a79ccc5a229fd8d8981b6c989edf493704d5   
    Image:         ghcr.io/open-feature/flagd:v0.5.2
    Image ID:      ghcr.io/open-feature/flagd@sha256:bb404af92aad503f7a4dcc9e15289ecdfee6be2612cc7d5202dd43b740aa6f88
    Port:          8014/TCP
    Host Port:     0/TCP
    Args:
      start
      --sources
      [{"uri":"default/cow-control","provider":"kubernetes"}]
    State:          Running
      Started:      Fri, 05 May 2023 16:35:33 +1000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  64M
    Requests:
      cpu:        200m
      memory:     32M
    Liveness:     http-get http://:8014/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:8014/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x99rp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-x99rp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Normal   Pulled     ...                 kubelet            Successfully pulled image "gardnera/openfeaturedemo:2" in 1.990577075s
  Normal   Created    12m (x4 over 16m)   kubelet            Created container myapp
  Normal   Started    12m (x4 over 16m)   kubelet            Started container myapp
  Warning  BackOff    69s (x35 over 14m)  kubelet            Back-off restarting failed container 

Error Logs

Server running at http://localhost:3333
file:///app/node_modules/@openfeature/flagd-provider/index.js:1506
            reject(new Error(errorMessage));
                   ^

Error: FlagdProvider: max stream connect attempts (5 reached)
    at GRPCService.handleError (file:///app/node_modules/@openfeature/flagd-provider/index.js:1506:20)
    at file:///app/node_modules/@openfeature/flagd-provider/index.js:1449:22

    at /app/node_modules/@protobuf-ts/runtime-rpc/build/commonjs/rpc-output-stream.js:86:36        
    at Array.forEach (<anonymous>)
    at RpcOutputStreamController.notifyError (/app/node_modules/@protobuf-ts/runtime-rpc/build/commonjs/rpc-output-stream.js:86:23)
    at ClientReadableStreamImpl.<anonymous> (/app/node_modules/@protobuf-ts/grpc-transport/build/commonjs/grpc-transport.js:90:27)
    at ClientReadableStreamImpl.emit (node:events:513:28)
    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:351:28)
agardnerIT changed the title from "Port confusion and app crashing" to "w/ flagd provider: max stream connect attempts and uses 8014 (Port confusion and app crashing)" on May 5, 2023
@beeme1mr
Member

beeme1mr commented May 5, 2023

Hey @agardnerIT, it looks like you're trying to use port 8014 but you should be using 8013. 8014 is the port you would use for scraping metrics. Please update the port and try again.
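
For reference, a minimal sketch of what the corrected client config might look like. Port 8013 is flagd's evaluation (gRPC) port; host 'localhost' is an assumption based on flagd running as a sidecar in the same pod:

// Sketch: point the provider at flagd's evaluation port (8013), not the metrics port (8014).
// 'localhost' assumes flagd runs as a sidecar in the same pod; adjust if your setup differs.
const FLAG_CONFIGURATION = {
  host: 'localhost',
  port: 8013,
};
OpenFeature.setProvider(new FlagdProvider(FLAG_CONFIGURATION));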

@toddbaert, a connection error shouldn't cause the app to fail. Can you investigate how we can handle this more cleanly?

@toddbaert
Member

toddbaert commented May 5, 2023

@toddbaert, a connection error shouldn't cause the app to fail. Can you investigate how we can handle this more cleanly?

This is how it's been built to behave. As the documentation in the config indicates (as well as the error above), by default the provider attempts to connect 5 times and then errors. If the code isn't written to handle that error, and the runtime doesn't tolerate unhandled promise rejections, the app will crash.

The idea is that since flagd is usually local, it should generally be very easy to connect to, so we opted for a fail-fast approach. That's why this is the default behavior (it's not the default in flagd-web, which reconnects forever).

https://github.com/open-feature/js-sdk-contrib/blob/main/libs/providers/flagd/src/lib/configuration.ts
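
If the fail-fast default isn't what a given app wants, one band-aid (this isn't the provider's API, just Node's standard hook for rejected promises, which is what surfaces here per the stack trace above) is a process-level handler so an exhausted retry budget logs instead of crashing:

// Sketch only: catch the rejected promise from the provider's stream connection
// so the process logs the failure and keeps serving default flag values.
process.on('unhandledRejection', (err) => {
  console.error('flagd provider connection error:', err);
});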

@agardnerIT
Contributor Author

agardnerIT commented May 5, 2023

Very strange: rebuilding the app to use 8013 and deploying on Docker Desktop for macOS now works.
I'm convinced I only changed 8013 to 8014 initially on Windows because 8013 didn't work (that's why I went digging with kubectl describe and noticed it said 8014). So the change to 8014 was me grasping at straws.

Anyway, for now, it's working. I will leave this open until I can retry on Windows and then close.

I've created another issue for docs improvements and (I think) a doc bug I found in the operator instructions.
