[Discussion] Extend the capabilities of YurtTunnel to forward not only O&M and monitoring traffic from cloud to edge #527
Comments
@DrmagicE Much appreciated for improving yurt-tunnel. Would you be able to add some background information about these two features?
@tiezhuoyu You mentioned proxying requests from cloud to edge through yurt-tunnel in issue #522. Do you have any comments on these two features? Can they satisfy your requirements?
@DrmagicE For the solution of case 1, I'd like to complement it with the way pods are configured to use the DNS domain. Maybe we can add an admission controller that automatically adds the DNS config to a pod when it carries an annotation like "AccessEdgeService=true".
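To make the idea concrete, here is a minimal sketch of the `dnsConfig` such an admission controller might inject at pod creation; the pod name, image, and nameserver IP (standing in for the ClusterIP of the CoreDNS instance serving the tunnel zone) are all hypothetical placeholders:

```sh
# Hypothetical: a pod the webhook would mutate because it carries the
# "AccessEdgeService=true" annotation. 10.96.0.53 is a placeholder for
# the ClusterIP of the CoreDNS service hosting the openyurt.tunnel zone.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cloud-client                # placeholder name
  annotations:
    AccessEdgeService: "true"
spec:
  dnsConfig:
    nameservers:
    - 10.96.0.53                    # placeholder tunnel CoreDNS ClusterIP
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF
```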
@DrmagicE If we use iptables rules to solve case 2, then the client pods (the pods that send the requests) need to run on the same node as the yurt-tunnel-server pod.
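For context, here is an illustrative (not yurt-tunnel's actual) DNAT rule; the edge pod CIDR and the tunnel-server endpoint are placeholders. A rule in a node's nat table only ever sees traffic that originates on or traverses that node, which is where this constraint comes from:

```sh
# Illustrative only: redirect traffic bound for an edge pod CIDR
# (placeholder 10.244.32.0/19) to a yurt-tunnel-server proxy endpoint
# (placeholder 169.254.2.1:10263). The OUTPUT chain only matches
# locally generated packets, so client pods must sit on the node
# where this rule is installed.
iptables -t nat -A OUTPUT -p tcp -d 10.244.32.0/19 \
  -j DNAT --to-destination 169.254.2.1:10263
```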
Thanks for your advice. I need some time to investigate and see how it works.
I think these two great features are very helpful. In some cases, the pods on edge nodes may be scheduled to different hosts (such as virtual machines). It is inconvenient to rely on hostNetwork mode when a cloud node tries to send a request to a specific pod.
Yes, and I believe this is a common issue in YurtTunnel DNAT mode, unless we implement a controller (running as a daemonset on every cloud node) to set the DNAT rules on each cloud node.
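A bare-bones sketch of such a daemonset, under the assumption that cloud nodes can be selected with the `openyurt.io/is-edge-worker: "false"` label; the controller name and image are hypothetical:

```sh
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tunnel-dnat-controller              # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: tunnel-dnat-controller
  template:
    metadata:
      labels:
        app: tunnel-dnat-controller
    spec:
      nodeSelector:
        openyurt.io/is-edge-worker: "false" # run on cloud nodes only
      hostNetwork: true                     # manage the node's own nat table
      containers:
      - name: controller
        image: example.com/tunnel-dnat:dev  # hypothetical image
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]              # required to edit iptables rules
EOF
```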
@tiezhuoyu Thanks for your feedback. Would you be able to describe some details of your use case in which cloud pods access edge pods on different nodes?
@DrmagicE OK, I got it. Maybe end users need to accept this constraint at the beginning.
I am trying to deploy KubeVirt, a virtualization platform on Kubernetes, in an OpenYurt cluster. A component named virt-handler is deployed on each edge node, and virt-api is deployed on a cloud node. For requests like 'pause/unpause', virt-api needs to send a request to virt-handler, and virt-handler will pause/unpause a specific virtual machine. Unfortunately, virt-handler is not a hostNetwork pod.
@tiezhuoyu Are the requests sent to virt-handler with its Pod IP or with a service DNS name?
Got it, so case 2 can satisfy your needs.
In edge scenarios, the edge node may not be accessible through the public network; for example, the edge node could be a laptop in your room, a traffic light on the road, etc. If YurtTunnel can satisfy this kind of requirement, users will not need to find another solution.
Could we consider Submariner, although its original aim is to resolve service connection and discovery in multi-cluster scenarios?
/pinned |
A new edge network project named |
Original issue description
The YurtTunnel was originally designed to deal with the challenges of O&M and monitoring traffic in edge scenarios.
In this context, YurtTunnel can:
- forward O&M requests such as `kubectl exec/logs` from the cloud to the kubelet on edge nodes out of the box;
- forward requests to `hostNetwork` pods and `localhost` on edge nodes with a little configuration.

With these two fundamental features, users can execute `kubectl exec/logs` against edge nodes and scrape metrics from exporters on edge nodes, but YurtTunnel does not provide a convenient way to forward other kinds of traffic from cloud to edge.
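For instance, these everyday O&M commands already traverse the tunnel transparently (the pod name below is hypothetical):

```sh
# Both requests flow apiserver -> yurt-tunnel-server ->
# yurt-tunnel-agent -> kubelet on the edge node:
kubectl logs nginx-on-edge
kubectl exec -it nginx-on-edge -- /bin/sh
```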
Although building a general-purpose proxy may not be the goal of YurtTunnel, I am wondering if we can extend its capabilities to forward more than O&M and monitoring traffic (see also #522).
For example, we can add more features to YurtTunnel to support these cases:
- Case 1: pods on the cloud side access services on a specific edge nodepool through a DNS name.
- Case 2: pods on the cloud side access pods on edge nodes directly through their pod IPs.
Here are some rough designs:

For case 1, we can maintain a special zone in CoreDNS using the `file` plugin (for example, `.openyurt.tunnel`). All domains belonging to this zone will have an A record that holds the yurt-tunnel-server service IP, and DNS names in the zone will follow the schema `<service_name>.<namespace>.<nodepool_name>.openyurt.tunnel`. Components on the cloud side can use this schema to request a specific service in a specific nodepool. With this method, users can manage their services in the service-topology way provided by the data-filtering framework, which means they do not need to create an additional unique service per nodepool.
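A minimal sketch of the zone, assuming the CoreDNS `file` plugin; the zone-file path, the example nodepool name, and the A-record address (standing in for the yurt-tunnel-server service ClusterIP) are placeholders:

```sh
# Corefile stanza (hypothetical) that serves the special zone:
#
#   openyurt.tunnel:53 {
#       file /etc/coredns/db.openyurt.tunnel
#   }
#
# Zone file with a wildcard A record, so that every
# <service>.<namespace>.<nodepool>.openyurt.tunnel name resolves to
# the tunnel server (10.96.105.11 is a placeholder ClusterIP):
cat > /etc/coredns/db.openyurt.tunnel <<'EOF'
$ORIGIN openyurt.tunnel.
@   3600 IN SOA ns.openyurt.tunnel. admin.openyurt.tunnel. (1 7200 3600 1209600 3600)
@   3600 IN NS  ns.openyurt.tunnel.
ns  3600 IN A   10.96.105.11
*   3600 IN A   10.96.105.11
EOF
# e.g. nginx.default.hangzhou.openyurt.tunnel -> 10.96.105.11
```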
For case 2, we can list-watch all pods whose IPs do not belong to the `podCIDR` of the cloud side, and then add DNAT rules for them according to the `yurt-tunnel-server-cfg` configmap configuration.

I am not sure whether we should do this, and I would like to hear your opinions. Welcome to discuss.
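As a rough illustration of case 2 (all addresses and ports below are placeholders): the `yurt-tunnel-server-cfg` configmap already carries port mappings in its `dnat-ports-pair` entry, and the controller would emit one DNAT rule per watched edge pod:

```sh
# dnat-ports-pair maps a destination port on the edge to a local
# tunnel-server port, e.g. "9100=10264":
kubectl -n kube-system get cm yurt-tunnel-server-cfg \
  -o jsonpath='{.data.dnat-ports-pair}'

# Hypothetical rules for two edge pods (10.244.33.5, 10.244.34.9)
# whose IPs fall outside the cloud-side podCIDR; port 10264 follows
# the configmap mapping above, and 169.254.2.1 is a placeholder for
# the tunnel server address:
iptables -t nat -A OUTPUT -p tcp -d 10.244.33.5 --dport 9100 \
  -j DNAT --to-destination 169.254.2.1:10264
iptables -t nat -A OUTPUT -p tcp -d 10.244.34.9 --dport 9100 \
  -j DNAT --to-destination 169.254.2.1:10264
```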
/kind design