
Make downstream sync more efficient #542

Closed
KaelBaldwin opened this issue May 17, 2019 · 2 comments
Assignees: FabianKramm
Labels: area/sync (issues related to the real-time code synchronization), kind/feature (new feature or feature request)

Comments


KaelBaldwin commented May 17, 2019

Is your feature request related to a problem?
Currently, due to the size of my repository, downstream syncing constantly saturates the bandwidth limit I set.

Our network administrator was alarmed at the level of traffic going out to one of the remote developers I set up to use devspace.

I believe the current method of syncing remote changes could be improved to use far less bandwidth.

This would have the added benefit of reducing the network pressure on your cloud offering.

I could lower my bandwidth limit, but doing so would slow things down when a sync is actually necessary.

Which solution do you suggest?
I would suggest that change detection happen on the remote deployment side rather than on the client side, to the extent that this is possible.

I could see this being done as a part of #441

/kind feature

KaelBaldwin changed the title from "Make downstream sync more effecient" to "Make downstream sync more efficient" on May 17, 2019
FabianKramm (Collaborator) commented

@KaelBaldwin, thanks for reporting! Yes, in big projects with lots of vendor files, the downstream sync can use a lot of bandwidth just to compare the current state to the previous state.

I'm planning to change this behavior as suggested in #441. The current approach I'm considering is to inject a small golang binary into the container via kubectl cp and then do the change evaluation right there.

Communication with the remote binary would happen over two streams opened through kubectl exec (as is done currently), and calls could be made via RPC. With this approach, all comparison and change detection would take place remotely, which would drastically reduce the network traffic needed for change evaluation. It would also let us get rid of all the shell scripts currently in the code and reduce the container binary requirements to just tar (needed for kubectl cp) as well.
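
For illustration, here is a minimal sketch of what such an injected helper could look like, assuming RPC is served with Go's net/rpc (gob encoding) over the stdin/stdout streams of the kubectl exec session and that the remote side keeps the previous file snapshot, so only the list of changed paths crosses the network. The names (SyncServer, Sync.Changes) and the hash-based diff are hypothetical, not the actual devspace implementation:

```go
// remote-sync-helper: hypothetical sketch of the injected change-detection binary.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"net/rpc"
	"os"
	"path/filepath"
)

// stdio bundles stdin/stdout into the io.ReadWriteCloser that net/rpc expects,
// so the RPC connection rides on the kubectl exec streams.
type stdio struct {
	io.Reader
	io.Writer
}

func (stdio) Close() error { return nil }

// SyncServer keeps the previous snapshot inside the container, so the client
// never has to download a full file listing just to compare states.
type SyncServer struct {
	previous map[string]string // path -> content hash
}

// Changes re-walks root, diffs it against the last snapshot and returns only
// created, modified, or deleted paths.
func (s *SyncServer) Changes(root string, changed *[]string) error {
	current := map[string]string{}
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		current[path] = hex.EncodeToString(h.Sum(nil))
		return nil
	})
	if err != nil {
		return err
	}
	for path, hash := range current {
		if s.previous[path] != hash {
			*changed = append(*changed, path) // new or modified
		}
	}
	for path := range s.previous {
		if _, ok := current[path]; !ok {
			*changed = append(*changed, path) // deleted
		}
	}
	s.previous = current
	return nil
}

func main() {
	srv := rpc.NewServer()
	srv.RegisterName("Sync", &SyncServer{previous: map[string]string{}})
	// Serve a single RPC connection over the exec session's stdin/stdout.
	srv.ServeConn(stdio{os.Stdin, os.Stdout})
}
```

The client side would wrap its ends of the same kubectl exec streams the same way, hand them to rpc.NewClient, and call Sync.Changes with the watched path to receive just the delta.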

FabianKramm self-assigned this on May 17, 2019
FabianKramm added the area/sync, kind/feature, and priority/critical labels on May 17, 2019
FabianKramm (Collaborator) commented

Implemented in v3.5.5
