# Investigate K8s-go-client caching/load implications #434
The K8s client currently used by the K8s ISync implementation is not backed by a cache [1].

Next step: consider a caching client if one exists.

[1] - https://github.com/open-feature/flagd/blob/main/pkg/sync/kubernetes/kubernetes_sync.go#L155
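To make the cost concrete, here is a minimal sketch (not the actual flagd code) of an uncached read of the flag configuration custom resource with the client-go dynamic client; the group/version/resource, namespace, and resource name are assumptions for illustration. Without a cache, every read is a round trip to the API server.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GVR for the flag configuration CRD (group/version assumed for illustration).
	gvr := schema.GroupVersionResource{
		Group:    "core.openfeature.dev",
		Version:  "v1alpha1",
		Resource: "featureflagconfigurations",
	}

	// Without a cache, every Get hits the API server directly; under heavy
	// evaluation traffic this is the load this issue asks us to quantify.
	obj, err := client.Resource(gvr).Namespace("default").Get(
		context.Background(), "my-flag-config", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(obj.GetName())
}
```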
One option is to rely on [1].

[1] - https://github.com/open-feature/flagd/blob/main/pkg/sync/kubernetes/kubernetes_sync.go#L168
Good reading resource: [1] https://kubernetes.io/docs/concepts/cluster-administration/flow-control/

Next step: investigate improvements in our ISync implementation to reduce API impact and reuse resources.
PR #443 improves the K8s Sync provider internals to utilize the informer store. However, we still fall back to an API get if a cache miss occurs. This allows us to avoid a caching layer for the flagd server; see the sketch below.
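A rough sketch of that cache-then-fallback pattern (a hypothetical helper, not the PR's exact code): check the informer's local store first, and only issue a direct API get on a miss.

```go
package sync

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/cache"
)

// getConfiguration is a hypothetical helper: serve from the informer store when
// possible, fall back to the API server on a cache miss.
func getConfiguration(
	ctx context.Context,
	informer cache.SharedIndexInformer,
	client dynamic.NamespaceableResourceInterface,
	namespace, name string,
) (*unstructured.Unstructured, error) {
	// The informer store keys objects by "namespace/name".
	key := fmt.Sprintf("%s/%s", namespace, name)

	item, exists, err := informer.GetStore().GetByKey(key)
	if err == nil && exists {
		if u, ok := item.(*unstructured.Unstructured); ok {
			return u, nil // served from the local cache, no API call
		}
	}

	// Cache miss (or unexpected type): direct API read. The informer's watch
	// keeps the local store consistent afterwards.
	return client.Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
}
```

Steady-state reads stay local, while the fallback preserves correctness when the store is cold or the entry is missing.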
## This PR

fixes #434

This is a refactoring and internal improvement for the K8s ISync provider. Improvements include:

- Reduce K8s API load by utilizing the Informer cache
- Still provide a fallback if a cache miss occurs (note: we rely on the K8s Informer for cache refill and consistency)
- The Informer now watches only a specific namespace (compared to `*`), a potential performance improvement and a security improvement (see the sketch below)
- Reduced informer handlers with extracted common logic
- Unit tests where possible

---------

Signed-off-by: Kavindu Dodanduwa <kavindudodanduwa@gmail.com>
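For the namespace-scoping point above, a sketch (resync period and GVR are assumed values) using client-go's dynamic informer factory, which lets the informer list/watch a single namespace instead of all namespaces:

```go
package sync

import (
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// newNamespacedInformer builds an informer restricted to one namespace; the
// resync period and GVR below are illustrative assumptions.
func newNamespacedInformer(cfg *rest.Config, namespace string) (cache.SharedIndexInformer, error) {
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}

	// Factory scoped to a single namespace rather than all namespaces ("*").
	factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(
		client, 10*time.Minute, namespace, nil)

	gvr := schema.GroupVersionResource{
		Group:    "core.openfeature.dev", // assumed CRD group/version
		Version:  "v1alpha1",
		Resource: "featureflagconfigurations",
	}
	return factory.ForResource(gvr).Informer(), nil
}
```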
Flagd is designed to watch K8s CRD changes and update flag configurations on those changes (e.g., updates, removals). This is performed by the K8s ISync implementation [1].
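For context, a schematic of how such CRD watching looks with a client-go informer; the handler wiring here is illustrative, not flagd's actual functions.

```go
package sync

import "k8s.io/client-go/tools/cache"

// registerHandlers is a hypothetical helper showing how CRD changes (adds,
// updates, removals) would drive flag configuration updates.
func registerHandlers(informer cache.SharedIndexInformer, onChange func(obj interface{})) {
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		// A new flag configuration resource appeared: load its flags.
		AddFunc: onChange,
		// The custom resource changed: re-sync the flag state.
		UpdateFunc: func(_, newObj interface{}) { onChange(newObj) },
		// The custom resource was removed: drop its flags.
		DeleteFunc: onChange,
	})
}
```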
The focus of this task is to understand the impact of on-demand CRD fetching on the K8s API. This knowledge is a prerequisite for removing any caching between sync providers and flag evaluations/gRPC streams.
Research:
Outcome:
Whether we require a cache, or whether we can rely on the K8s API for high-load scenarios
[1] - https://github.com/open-feature/flagd/blob/main/pkg/sync/kubernetes/kubernetes_sync.go