
Does Sedna support decentralized distributed deep learning on edge servers? #276

wanziyu opened this issue Feb 13, 2022 · 2 comments


wanziyu commented Feb 13, 2022

Training large DL models on edge devices is infeasible due to their limited computing resources. In a decentralized distributed deep learning system, workers exchange local gradients with each other and update their local models based on the collected gradients. Compared to centralized DL systems, a decentralized DL system avoids the communication bottleneck between edge workers and parameter server (PS) nodes. Therefore, I think conducting decentralized deep learning on edge nodes could be a new feature for Sedna. A minimal sketch of the idea follows.
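To make the idea concrete, here is a minimal sketch (plain NumPy, not Sedna code) of decentralized SGD in which each worker averages gradients only with its ring neighbors instead of pushing them to a parameter server. The data shards, ring topology, and learning rate are illustrative assumptions.

```python
# A minimal sketch (plain NumPy, not Sedna code) of decentralized SGD:
# each worker holds a linear model on its own data shard and averages
# gradients only with its ring neighbors -- no parameter server involved.
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim, lr = 4, 8, 0.1

# Hypothetical local data shards generated from one shared ground-truth model.
w_true = rng.normal(size=dim)
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(32, dim))
    shards.append((X, X @ w_true))

models = [np.zeros(dim) for _ in range(n_workers)]

def local_gradient(w, X, y):
    """Gradient of the mean squared error on one worker's local shard."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

for step in range(200):
    grads = [local_gradient(models[i], X, y) for i, (X, y) in enumerate(shards)]
    for i in range(n_workers):
        # Each worker mixes its gradient with its left/right ring neighbors'
        # gradients instead of sending them to a central aggregation node.
        mixed = (grads[i] + grads[(i - 1) % n_workers] + grads[(i + 1) % n_workers]) / 3
        models[i] = models[i] - lr * mixed

print("max distance to the true model across workers:",
      max(np.linalg.norm(m - w_true) for m in models))
```

In a real deployment the neighbor exchange would go over the network (e.g., gossip or all-reduce between edge nodes), but the aggregation pattern is the same: no single node ever holds the global state.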

@wanziyu added the kind/question label (Indicates an issue that is a support question) on Feb 13, 2022

jaypume commented Feb 21, 2022

Not supported yet.
Currently Sedna supports Federated Learning, which is a distributed DL system, but it is still centralized because it requires a central aggregation node.
That said, I think this is really a great idea, and it will be a trend for data that already resides on the edge side.
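For contrast, here is a minimal sketch (not Sedna's actual API) of the centralized FedAvg aggregation step that the current federated learning mode relies on: every client reports its weights to one aggregation node, which is exactly the communication bottleneck the decentralized proposal above would remove. The client weights and sample counts are made-up values.

```python
# A minimal sketch (not Sedna's API) of the centralized FedAvg step:
# clients train locally, then one aggregation node averages their weights.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client weights, computed on the central aggregator."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical weight vectors reported by three edge clients after local training.
clients = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.9, 2.1])]
sizes = [100, 50, 50]                  # local sample counts per client
global_model = fedavg(clients, sizes)  # the aggregator broadcasts this back
print(global_model)
```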


MooreZheng commented Feb 21, 2022

It seems to me that issue #231 is also about distributed deep learning. It would be appreciated if the difference between this issue and issue #231 could be clarified. The relevant comment is quoted below for easier reference and discussion:

Quoted from issue #231:
We understand that traditional distributed learning can help to reduce training time. Sedna would like to support both edge-cloud collaborative training and inference, but it currently focuses more on stand-alone model execution. Stand-alone models help ensure robust runtime service on each node, especially when some nodes are offline. We can discuss the pros and cons of stand-alone models versus split models at the weekly meeting if there is interest.

1. If one is looking for tools
As for distributed learning, Sedna supports both edge-cloud collaborative training and inference. For example, this can be done via federated learning, which is intrinsically distributed multi-task learning without much consideration of privacy.

One can also provide real-world requirements or projects to help community members identify further technical issues to tackle. Real-world applications built with Sedna are welcome.

2. If one is planning for a related proposal
We also welcome more proposals on distributed learning in Sedna. One can introduce related techniques in routine meetings.

Would you mind sharing more with community members about why Sedna needs this, e.g., applications, requirements, or techniques? Sedna routine meetings are held weekly (every Thursday) at https://zoom.us/my/kubeedge.
