Is your feature request related to a problem?
Currently, due to the size of my repository, downstream syncing perpetually saturates the bandwidth limit I set.
Our network administrator was alarmed at the level of traffic going out to one of the remote developers I set up to use devspace.
I believe the current method of syncing remote changes could be improved to use nowhere near as much bandwidth.
This would have the added benefit of reducing the network pressure on your cloud offering as well.
I could lower my bandwidth limit, but doing so would slow things down when a sync is actually necessary.
Which solution do you suggest?
I would suggest that the change detection happen on the remote deployment side rather than on the client side, to the extent that is possible.
I could see this being done as a part of #441.
/kind feature
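To make the suggestion concrete, here is a minimal Go sketch of what remote-side change detection could look like: an injected helper walks the synced directory and emits only a compact state snapshot (path, size, mtime, hash), so file contents never cross the network during comparison. The `FileState` type and JSON output here are illustrative assumptions, not devspace's actual sync format.

```go
// A minimal sketch (not devspace's actual implementation) of remote-side
// change detection: the helper walks the synced directory and emits only a
// compact state snapshot, so file contents never cross the network while
// the two sides compare states.
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// FileState is an illustrative record; the real sync state format may differ.
type FileState struct {
	Path    string `json:"path"`
	Size    int64  `json:"size"`
	ModTime int64  `json:"modTime"`
	Hash    string `json:"hash"`
}

// snapshot hashes every regular file under root and returns the state list.
func snapshot(root string) ([]FileState, error) {
	var states []FileState
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		states = append(states, FileState{
			Path:    path,
			Size:    info.Size(),
			ModTime: info.ModTime().Unix(),
			Hash:    fmt.Sprintf("%x", h.Sum(nil)),
		})
		return nil
	})
	return states, err
}

func main() {
	// Only this small JSON document goes back over the wire, e.g. to a
	// client that diffs it against the previous snapshot.
	states, err := snapshot(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	json.NewEncoder(os.Stdout).Encode(states)
}
```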
@KaelBaldwin, thanks for reporting! Yes, in big projects with lots of vendor files, the downstream sync can use a lot of bandwidth just comparing the current state to the previous state.
I'm planning to change this behavior as suggested in #441. The current approach I'm thinking about is to inject a small Go binary into the container via kubectl cp and then do the change evaluation right there.
Communication with the remote binary would be done via two streams opened through kubectl exec (as is done currently), and calls could be made via RPC. With this approach, all comparison and change detection would take place remotely, which would drastically reduce the network traffic needed for change evaluation. This would also help us get rid of all the shell scripts currently in the code and reduce the container binary requirements to just tar (needed for kubectl cp) as well.
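For illustration, a rough sketch of how the injected binary could serve RPC over the two kubectl exec streams, using Go's standard net/rpc with the JSON-RPC codec on stdin/stdout. The `Sync` service and its `Changes` method are hypothetical placeholders, not devspace's actual protocol.

```go
// A hypothetical sketch of serving RPC on the two streams kubectl exec
// provides (the injected binary's stdin and stdout), using Go's standard
// net/rpc with the JSON-RPC codec. The Sync service and Changes method
// are placeholders, not devspace's actual protocol.
package main

import (
	"io"
	"net/rpc"
	"net/rpc/jsonrpc"
	"os"
)

// stdioConn glues stdin and stdout together into the io.ReadWriteCloser
// that the rpc codec expects as its connection.
type stdioConn struct {
	io.Reader
	io.Writer
}

func (stdioConn) Close() error { return nil }

// Sync is a placeholder service for remote change evaluation.
type Sync struct{}

type ChangesArgs struct{ Path string }
type ChangesReply struct{ Changed []string }

// Changes would compare the remote file state under args.Path against the
// last known state and return only the changed paths (stubbed out here).
func (s *Sync) Changes(args *ChangesArgs, reply *ChangesReply) error {
	reply.Changed = []string{} // real comparison logic would go here
	return nil
}

func main() {
	server := rpc.NewServer()
	server.Register(new(Sync))
	// kubectl exec attaches the client to these streams, so a single codec
	// on stdin/stdout is the entire transport.
	server.ServeCodec(jsonrpc.NewServerCodec(stdioConn{os.Stdin, os.Stdout}))
}
```

On the client side, a matching jsonrpc.NewClient over the exec streams would let the sync loop call Sync.Changes and download only what actually changed.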