Restarting docker container cannot resume upload? #437
That's weird. The lock files contain the PID of the tusd process that created them. If no process with that PID exists anymore, the lock file should automatically be removed when an upload is resumed. So, usually, when you restart tusd its PID changes and the lock files are not an issue. Does another process with the PID from the lock files exist on your system?
OK, I think the problem is that tusd always runs as PID 1 in containers.
So apparently that switch does not work in Docker Swarm, which I'm using (docker/docs#5624). Do you know of another external locking mechanism that is not PID-based?
There is https://github.com/tus/tusd-etcd3-locker. If only one tusd instance is accessing the upload directory, you can also simply remove all lock files before starting it.
That's good to know!
I have multiple instances running behind a load balancer with sticky sessions. Furthermore, the application makes sure that an upload is only ever accessed by one client, so I guess I could do without locks altogether. Is there a way to disable them?
No, there is currently no way to disable locks and we don't have plans to make them optional in the future. I am currently working on some changes allowing tusd to unlock uploads which are locked by other instances (in a safe manner, of course) but that is not completed yet. |
OK, thanks for the clarification.
Hello,
I have a tusd container running behind an nginx reverse proxy container.
If I restart the tusd container I am unable to resume the upload from the client and I see this error message on the server:
I would imagine that the server shutdown would delete the lock, but instead it is still there.
So how can I avoid this problem?
I am running tusproject/tusd:v1.4.0 with the following command
Thanks for any suggestions.
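For context, a setup like the one described (tusd behind an nginx reverse proxy, both in containers) might look roughly like this Compose sketch. Service names, paths, and the volume are illustrative assumptions, not the reporter's actual configuration; `-behind-proxy` and `-upload-dir` are real tusd flags:

```yaml
# Hypothetical minimal reproduction; adjust names and paths to your setup.
services:
  tusd:
    image: tusproject/tusd:v1.4.0
    command: -behind-proxy -upload-dir /srv/tusd-data
    volumes:
      - tusd-data:/srv/tusd-data   # uploads (and lock files) persist restarts
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    depends_on:
      - tusd
volumes:
  tusd-data:
```

Because the lock files live on the persistent volume while the container's PID namespace resets on every restart, the locks survive but their PIDs no longer mean anything.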