
proposal: Node-level Traffic Reuse Capability #2060

Open · wants to merge 1 commit into base: master

Conversation

zyjhtangtang (Contributor) commented:

What type of PR is this?

/kind documentation
/kind design


codecov bot commented May 28, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 56.01%. Comparing base (3713163) to head (5f68ca0).

Additional details and impacted files
```
@@            Coverage Diff             @@
##           master    #2060      +/-   ##
==========================================
- Coverage   56.09%   56.01%   -0.08%
==========================================
  Files         186      186
  Lines       18092    18092
==========================================
- Hits        10149    10135      -14
- Misses       6910     6922      +12
- Partials     1033     1035       +2
```

| Flag | Coverage Δ |
| --- | --- |
| unittests | 56.01% <ø> (-0.08%) ⬇️ |


rambohe-ch (Member) commented:
@zyjhtangtang The idea in this proposal is amazing. I think it is a very useful feature for OpenYurt, because it provides the following benefits:

  1. reduces nearly 50% of the public network traffic (service and endpointslice resources) between cloud and edge
  2. mitigates the overhead of the kube-apiserver component, because nearly 50% of the watch connections are eliminated
  3. optimizes yurthub's local cache: only one copy of service/endpointslice needs to be stored on local disk, instead of one copy per client


The Watch Cache is an in-memory cache that stores watch events and allows quick access to the current state of a resource, combined with a stream of watch events. [reference](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go#L140)
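For illustration, here is a minimal Go sketch of the watch-cache idea described in that excerpt: an in-memory current-state store plus a sliding window of events that can be replayed to watchers. The types and names are simplified placeholders, not the actual kube-apiserver implementation.

```go
// Package watchcache sketches the watch-cache concept: current state plus
// a bounded buffer of recent events. Illustrative only.
package watchcache

import "sync"

type Event struct {
	Type            string // "ADDED", "MODIFIED", or "DELETED"
	Key             string // namespace/name
	Object          interface{}
	ResourceVersion uint64
}

type WatchCache struct {
	mu       sync.RWMutex
	store    map[string]interface{} // current state, keyed by namespace/name
	events   []Event                // sliding window of recent events
	capacity int
}

func New(capacity int) *WatchCache {
	return &WatchCache{store: map[string]interface{}{}, capacity: capacity}
}

// Add records an event and updates the current state, evicting the oldest
// buffered event once the window is full.
func (w *WatchCache) Add(e Event) {
	w.mu.Lock()
	defer w.mu.Unlock()
	if e.Type == "DELETED" {
		delete(w.store, e.Key)
	} else {
		w.store[e.Key] = e.Object
	}
	if len(w.events) == w.capacity {
		w.events = w.events[1:]
	}
	w.events = append(w.events, e)
}

// EventsSince replays buffered events newer than rv, so a client can resume
// a watch without a full re-list.
func (w *WatchCache) EventsSince(rv uint64) []Event {
	w.mu.RLock()
	defer w.mu.RUnlock()
	var out []Event
	for _, e := range w.events {
		if e.ResourceVersion > rv {
			out = append(out, e)
		}
	}
	return out
}
```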

Member review comment:

Please add metrics endpoints in the yurthub component that provide details of requests which can reuse traffic, so end users can check the status of all requests on the edge node before they configure traffic reuse.
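For example, such an endpoint could be built on the standard Prometheus client. The metric name, labels, and port below are hypothetical, purely to illustrate the shape of the suggestion.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// reuseRequests counts node-local list/watch requests by resource, client,
// and whether they were served from the shared cache. The metric name is
// hypothetical, not an existing yurthub metric.
var reuseRequests = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "yurthub_traffic_reuse_requests_total",
		Help: "Node-local list/watch requests, by resource and reuse status.",
	},
	[]string{"resource", "client", "reused"},
)

func main() {
	prometheus.MustRegister(reuseRequests)

	// Wherever yurthub handles a proxied request, it would record it, e.g.:
	reuseRequests.WithLabelValues("endpointslices", "kube-proxy", "true").Inc()

	// Expose /metrics so users can inspect request status before enabling reuse.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil)) // port is illustrative
}
```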


## Proposal

To reduce cloud-edge traffic on a single node, we recommend consolidating the full List/Watch requests for the same resource so that all components can reuse the data from a single request. The main idea is to add a full-list resource caching module within YurtHub, modeled after the apiserver's cache implementation. Within this module, the apiserver serves as the storage backend: the module actively initiates list/watch requests to the apiserver and caches the returned data in a WatchCache. The full list/watch requests for resources from components on the node are then served directly from the WatchCache.
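As a rough illustration of this design, the sketch below wires a single client-go Reflector against the kube-apiserver (the storage backend) and keeps the result in one local store that node-local clients could be served from. The wiring is an assumption for illustration; the real module would live inside YurtHub's proxy path.

```go
package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One full list/watch against the apiserver for services...
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "services", "", fields.Everything())

	// ...feeding a single local store shared by all on-node consumers.
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(lw, &corev1.Service{}, store, 0)

	stopCh := make(chan struct{})
	go reflector.Run(stopCh)

	// Local list/watch requests would now be answered from `store`
	// instead of each client opening its own connection to the cloud.
	time.Sleep(10 * time.Second)
	for _, obj := range store.List() {
		_ = obj.(*corev1.Service)
	}
	close(stopCh)
}
```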
Member review comment:

End users can configure the kinds of traffic reuse requests through the startup parameters of yurthub. This means the supported traffic reuse requests cannot be configured dynamically.
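A minimal sketch of what such static configuration might look like follows; the flag name `--shared-cache-resources` is hypothetical, not an existing yurthub option.

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

func main() {
	// The set of resources eligible for traffic reuse is fixed at startup;
	// changing it requires restarting yurthub.
	resources := flag.String("shared-cache-resources", "services,endpointslices",
		"comma-separated resources whose full list/watch traffic is reused")
	flag.Parse()

	shared := map[string]bool{}
	for _, r := range strings.Split(*resources, ",") {
		shared[strings.TrimSpace(r)] = true
	}
	fmt.Println("traffic reuse enabled for:", shared)
}
```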

### Non-Goals/Future Work

- The optimization only involves the reuse of requests on a single node and does not pertain to traffic optimization at the node pool level;
- The optimization only pertains to full-scale list/watch requests for resources and does not involve requests with filtering conditions.
rambohe-ch (Member) commented May 31, 2024:

I think it would be a good idea to also support requests with filtering conditions when a full-scale list/watch request exists.
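This could plausibly be done by applying the selector locally against the full cached copy, as in the following sketch (the store wiring is assumed; the selector types are standard apimachinery):

```go
package filter

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/tools/cache"
)

// FilteredList derives a label-filtered result from the full cached copy,
// so a filtered request needs no extra cloud round-trip. The store is the
// same one fed by the single upstream list/watch.
func FilteredList(store cache.Store, selector labels.Selector) []*corev1.Service {
	var out []*corev1.Service
	for _, obj := range store.List() {
		svc := obj.(*corev1.Service)
		if selector.Matches(labels.Set(svc.Labels)) {
			out = append(out, svc)
		}
	}
	return out
}
```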



<img src = "../img/img-20240529002.png" width="800" />
Member review comment:

We should cache only one copy of service/endpointslice for all clients, because all of these clients share the same data.
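A sketch of that single-copy idea: one upstream event stream fanned out to every local watcher, so the object data is held once rather than per client (types are simplified placeholders):

```go
package fanout

import "sync"

// Broadcaster fans one upstream watch stream out to many local clients, so
// only one copy of the data is cached while every client still sees events.
type Broadcaster struct {
	mu      sync.Mutex
	clients []chan interface{}
}

// Watch registers a local client and returns its event channel.
func (b *Broadcaster) Watch() <-chan interface{} {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan interface{}, 16)
	b.clients = append(b.clients, ch)
	return ch
}

// Distribute delivers one upstream event to every registered client; the
// event carries a reference to the shared cached object, not a per-client copy.
func (b *Broadcaster) Distribute(event interface{}) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.clients {
		ch <- event
	}
}
```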


sonarcloud bot commented Jun 3, 2024

Quality Gate passed

Issues
0 New issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
No data about Duplication

See analysis details on SonarCloud
