
Reduce docker volumes for weavewait to just one set per release #1935

Merged

rade merged 2 commits into 1.4 from issues/1757-volumes-container on Jan 29, 2016

Conversation

bboreham
Contributor

Fixes #1757

  • Create an auxiliary container 'weavevolumes' to hold the '/w/w'
    files, and map that in instead of creating new volumes every time.
  • Show "/bin/false" as its command in `docker ps` instead of
    "/home/weave/sigproxy".
  • Remove volumes when cleaning up smoketest containers.

Replaces #1792 in order to change the target branch; some discussion of the design is there.
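As a rough sketch of the pattern (assuming the weaveexec image carries the weavewait files and owns the volume; the exact flags the weave script passes may differ):

# Create, but never start, a container whose only purpose is to own the volume
# holding /w/w; its command shows up as /bin/false in docker ps.
docker create --name weavevolumes --entrypoint /bin/false weaveworks/weaveexec

# Containers started via the proxy then reuse that single volume read-only,
# instead of each launch creating fresh anonymous volumes.
docker run --volumes-from weavevolumes:ro some/application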

@bboreham added this to the 1.4.3 milestone on Jan 28, 2016
@bboreham mentioned this pull request on Jan 28, 2016
@rade
Member

rade commented Jan 28, 2016

before:

$ docker volume ls -q | wc -l
0
$ weave launch; docker volume ls -q | wc -l
6
$ weave stop;   docker volume ls -q | wc -l
6
$ weave launch; docker volume ls -q | wc -l
12
$ weave reset;  docker volume ls -q | wc -l
12
$ weave launch; docker volume ls -q | wc -l
18

after:

$ docker volume ls -q | wc -l
0
$ weave launch; docker volume ls -q | wc -l
6
$ weave stop;   docker volume ls -q | wc -l
6
$ weave launch; docker volume ls -q | wc -l
9
$ weave reset;  docker volume ls -q | wc -l
6
$ weave launch; docker volume ls -q | wc -l
12

So this is an improvement, but not a fix. Looks like there is a volume leak in both the proxy and plugin, and this PR fixes the former but not the latter:

$ docker volume ls -q | wc -l
0
$ weave launch-proxy >/dev/null; docker volume ls -q | wc -l
3
$ weave   stop-proxy >/dev/null; docker volume ls -q | wc -l
3
$ weave launch-proxy >/dev/null; docker volume ls -q | wc -l
3
$ weave reset;                   docker volume ls -q | wc -l
0
$ weave launch-proxy >/dev/null; docker volume ls -q | wc -l
3
$ docker volume ls -q | wc -l
0
$ weave launch-plugin >/dev/null; docker volume ls -q | wc -l
3
$ weave   stop-plugin >/dev/null; docker volume ls -q | wc -l
3
$ weave launch-plugin >/dev/null; docker volume ls -q | wc -l
6
$ weave reset;                    docker volume ls -q | wc -l
6
$ weave launch-plugin >/dev/null; docker volume ls -q | wc -l
9

Possibly this is a separate issue.
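For checking between runs: the leaked volumes are anonymous and show up as dangling, so on a disposable test host (and assuming a Docker version with the volume subcommands used above) they can be listed and cleared with:

# list volumes no container references
docker volume ls -qf dangling=true

# delete every unused volume - only safe on a throwaway host
docker volume rm $(docker volume ls -qf dangling=true)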

@rade
Member

rade commented Jan 28, 2016

root@xps:/var/lib/docker/volumes# ls -ltR
.:
total 12
drwxr-xr-x 3 root root 4096 Jan 28 17:27 bae39b7a29cbac43dea177694af83c0bacfe26bb7e1df5023bb5840e2e48dd0b
drwxr-xr-x 3 root root 4096 Jan 28 17:27 b7fbce11e5522e49fe7cb7e979ff7b6e6cfb5a4c0a4db65b7bb212efca52ae12
drwxr-xr-x 3 root root 4096 Jan 28 17:27 a577deb53aadc24b22cc3c6f44397b61733592ebcbdf344a6cf0f7992f1c686d

./bae39b7a29cbac43dea177694af83c0bacfe26bb7e1df5023bb5840e2e48dd0b:
total 4
drwxr-xr-x 2 root root 4096 Jan 28 17:27 _data

./bae39b7a29cbac43dea177694af83c0bacfe26bb7e1df5023bb5840e2e48dd0b/_data:
total 2520
-rwxr-xr-x 1 root root 2578952 Jan 28 10:25 w

./b7fbce11e5522e49fe7cb7e979ff7b6e6cfb5a4c0a4db65b7bb212efca52ae12:
total 4
drwxr-xr-x 2 root root 4096 Jan 28 17:27 _data

./b7fbce11e5522e49fe7cb7e979ff7b6e6cfb5a4c0a4db65b7bb212efca52ae12/_data:
total 3164
-rwxr-xr-x 1 root root 3238344 Jan 28 10:25 w

./a577deb53aadc24b22cc3c6f44397b61733592ebcbdf344a6cf0f7992f1c686d:
total 4
drwxr-xr-x 2 root root 4096 Jan 28 17:27 _data

./a577deb53aadc24b22cc3c6f44397b61733592ebcbdf344a6cf0f7992f1c686d/_data:
total 3208
-rwxr-xr-x 1 root root 3284560 Jan 28 10:25 w

Based on the file sizes, these are the three flavours of weavewait.

@bboreham
Contributor Author

Are you sure you are running a new plugin?

@bboreham
Contributor Author

Do `docker history weaveworks/plugin` and check it is built off the same layers as the current weaveexec.
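Something along these lines, assuming both images are present locally (shared base layers should show matching IDs):

docker history --no-trunc weaveworks/plugin
docker history --no-trunc weaveworks/weaveexec

# or compare the layer IDs directly
diff <(docker history -q weaveworks/plugin) <(docker history -q weaveworks/weaveexec)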

@rade
Member

rade commented Jan 28, 2016

False alarm. The problem was c6d0149.

@rade
Member

rade commented Jan 28, 2016

Isn't there a problem here with upgrades? Since we only remove the volume on reset, won't a "soft" upgrade of `weave stop; <upgrade>; weave launch` end up sticking old-version weavewaits into subsequently launched containers?

I guess weavewait doesn't change all that often, and perhaps we should insist on a `weave reset` when it does. That seems a bit crude though, especially when the change is a bug fix like the one we just made in #1910.

I am actually curious what the current behaviour is: if you have a container that was started via the proxy, then soft-upgrade weave, and then bounce the container, is the new incarnation a) broken (i.e. it references a weavewait entrypoint which no longer exists), b) using the old-version weavewait, or c) using the new-version weavewait?

@bboreham
Contributor Author

To your last question, b: it will have the same weavewait as when it was started.

@bboreham
Contributor Author

We could attach the weave version number to the weavevolumes container name, so the files would leak very slowly under an upgrade scenario such as you describe.

@rade
Member

rade commented Jan 28, 2016

> We could attach the weave version number to the weavevolumes container name, so the files would leak very slowly under an upgrade scenario such as you describe.

I was thinking the same. But we would then want `weave reset` to remove all such containers.

The follow-up commit implements this:

They will be named 'weavevolumes-v1.4.3', 'weavevolumes-latest', etc.
Use a label to identify these containers so they can all be removed on `weave reset`.
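Roughly, one volume container per release plus a shared label for cleanup (the label key below is made up for illustration; the actual label the weave script applies is not shown here):

# one weavevolumes container per weave version, all tagged with a common label
docker create --name weavevolumes-v1.4.3 --label weavevolumes=true --entrypoint /bin/false weaveworks/weaveexec

# 'weave reset' can then remove every such container regardless of version
docker rm -v $(docker ps -aq --filter label=weavevolumes)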
@bboreham changed the title from "Reduce docker volumes for weavewait to just one set" to "Reduce docker volumes for weavewait to just one set per release" on Jan 29, 2016
@bboreham assigned rade and unassigned bboreham on Jan 29, 2016
rade added a commit that referenced this pull request Jan 29, 2016
Reduce docker volumes for weavewait to just one set per release

Fixes #1757.
@rade merged commit c6f356a into 1.4 on Jan 29, 2016
@rade deleted the issues/1757-volumes-container branch on January 29, 2016 at 15:24