Training large DL models on edge devices is infeasible due to their limited computing resources. In a decentralized distributed deep learning system, workers exchange local gradients with each other and update their local models based on the collected gradients. Compared to centralized DL systems, a decentralized DL system avoids the communication bottleneck between edge workers and parameter server (PS) nodes. Therefore, I think conducting deep learning on edge nodes in this way could be a new feature for Sedna.
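To make the idea concrete, here is a minimal sketch of one decentralized update step, where each worker averages the gradients collected from its neighbors and then applies a local SGD step. The function name, the list-based representation of workers, and the neighbor topology are all hypothetical illustrations, not Sedna APIs:

```python
import numpy as np

def gossip_step(params, grads, neighbors, lr=0.1):
    """One decentralized update round (illustrative sketch).

    params    -- list of parameter vectors, indexed by worker id
    grads     -- list of local gradients, indexed by worker id
    neighbors -- neighbors[i] lists the workers that worker i hears from
    Each worker averages its own gradient with its neighbors' gradients,
    then takes a local SGD step; there is no central PS node.
    """
    new_params = []
    for i, p in enumerate(params):
        # collect the worker's own gradient plus its neighbors' gradients
        collected = [grads[i]] + [grads[j] for j in neighbors[i]]
        avg_grad = np.mean(collected, axis=0)
        new_params.append(p - lr * avg_grad)
    return new_params
```

With a ring topology (`neighbors = [[1], [2], [0]]`), each worker only ever talks to one peer per round, which is the communication-saving property described above.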
Not supported yet.
Currently Sedna supports Federated Learning, a distributed DL system. But it is still centralized, because it requires a central aggregation node.
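For contrast with the decentralized scheme above, the centralized pattern can be sketched as a FedAvg-style aggregation step, where the central node computes a (optionally weighted) average of the parameters returned by the edge workers. This is a hypothetical illustration of the general technique, not Sedna's actual aggregation code:

```python
import numpy as np

def fedavg(local_params, weights=None):
    """Centralized FedAvg-style aggregation (illustrative sketch).

    local_params -- list of parameter vectors uploaded by edge workers
    weights      -- optional per-worker weights, e.g. local dataset sizes
    Returns the new global model as the weighted average of the uploads;
    every worker must communicate with this one aggregation node.
    """
    if weights is None:
        weights = [1.0] * len(local_params)
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, local_params)) / total
```

The single `fedavg` call site is exactly the communication bottleneck the decentralized approach is trying to avoid.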
But I think this is really a great idea, and it will be a trend for data that already resides on the edge side.
It seems to me that issue #231 is also about distributed deep learning. It would be appreciated if the difference between this issue and issue #231 could be clarified. The relevant comment is quoted below for easier reference and discussion:
From issue #231 ====
We understand that traditional distributed learning can help reduce training time. Sedna would like to support both edge-cloud collaborative training and inference, but at present it focuses more on stand-alone model execution. Stand-alone models help ensure robust runtime service on each node, especially when some nodes are offline. We can discuss the pros and cons of stand-alone models versus split models at the weekly meeting if there is interest.
1. If one is looking for tools
As for distributed learning, Sedna supports both edge-cloud collaborative training and inference. For example, you can do it via federated learning, which is intrinsically distributed multi-task learning if privacy is not a major concern.
One can also provide some real-world requirements or projects to help community members identify further technical issues to tackle. We welcome real-world applications built with Sedna.
2. If one is planning for a related proposal
We also welcome more proposals on distributed learning in Sedna. One can introduce related techniques at the routine meetings.
Would you mind introducing to community members more about why Sedna needs this, e.g., applications, requirements, or techniques? Sedna routine meetings are held weekly (every Thursday) at https://zoom.us/my/kubeedge.