
When calling volume driver Mount, send opaque ID #21015

Merged: 1 commit into moby:master on May 5, 2016

Conversation

@cpuguy83 (Member) commented Mar 8, 2016

This generates an ID string for calls to Mount/Unmount, allowing drivers
to differentiate between two callers of Mount and Unmount.

Fixes #20939
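
To illustrate what the opaque ID enables, here is a minimal sketch of a driver that tracks which callers currently have a volume mounted, attaching the backing store for the first caller and releasing it only after the last one unmounts. The type and method names are hypothetical; this is not Docker's plugin API, just the bookkeeping idea.

```go
package main

import (
	"fmt"
	"sync"
)

// exampleDriver tracks which caller IDs currently have each volume mounted,
// so the backing store is attached on the first Mount and released only when
// the last caller unmounts. Hypothetical driver, not part of this PR.
type exampleDriver struct {
	mu     sync.Mutex
	active map[string]map[string]struct{} // volume name -> set of caller IDs
}

func (d *exampleDriver) Mount(volume, id string) (string, error) {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.active == nil {
		d.active = map[string]map[string]struct{}{}
	}
	if d.active[volume] == nil {
		d.active[volume] = map[string]struct{}{}
		// First caller for this volume: attach/provision the backing store here.
	}
	d.active[volume][id] = struct{}{}
	return "/data/" + volume, nil
}

func (d *exampleDriver) Unmount(volume, id string) error {
	d.mu.Lock()
	defer d.mu.Unlock()
	callers, ok := d.active[volume]
	if !ok {
		return fmt.Errorf("volume %s is not mounted", volume)
	}
	delete(callers, id)
	if len(callers) == 0 {
		delete(d.active, volume)
		// Last caller is gone: detach/deprovision the backing store here.
	}
	return nil
}

func main() {
	d := &exampleDriver{}
	p1, _ := d.Mount("vol1", "caller-a")
	p2, _ := d.Mount("vol1", "caller-b") // same volume, different caller ID
	fmt.Println(p1, p2)
	_ = d.Unmount("vol1", "caller-a") // volume stays attached: caller-b is still active
	_ = d.Unmount("vol1", "caller-b") // last caller: now it is safe to release the volume
}
```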

@erikh (Contributor) commented Mar 8, 2016

yay

@@ -485,16 +486,18 @@ func (container *Container) CopyImagePathContent(v volume.Volume, destination st
return err
}

path, err := v.Mount()
id := stringid.GenerateNonCryptoID()
Contributor commented:

Why a non-cryptographically strong ID? Presumably uniqueness is important.

Member Author commented:

We generally use non-crypto for these sorts of IDs (even container IDs are non-crypto). These are going to be relatively short-lived and not likely to conflict (and not catastrophic if they happen to).

Contributor commented:

It'd be pretty catastrophic for me (the bug that got us here in the
first place) if there were duplicates. Can we just ensure no other
containers have the ID and re-roll if needed?

-Erik


Member commented:

Not sure, but non-cryptographic doesn't mean non-unique, only "possibly predictable", or am I mistaken here? i.e. a UUID may be predictable, and thus "non-crypto", but still be unique.

Member Author commented:

In addition, we could never guarantee this across hosts anyway.

Contributor commented:

yeah I didn't mean that, sorry if I was implying otherwise. just ensure that no two existing mounts have the same ID and I'll be fine at least, I think.
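
A side note on the uniqueness concern: "non-crypto" only means the bytes come from a predictable source (math/rand rather than crypto/rand), not that collisions are likely; the ID space is still 256 bits. Below is a sketch of the check-and-re-roll idea suggested in this thread. The helper names and the in-use registry are hypothetical; only stringid.GenerateNonCryptoID appears in the actual change.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// inUse is a hypothetical registry of IDs for mounts that are currently live
// on this host. It stands in for whatever state the daemon already keeps.
var (
	mu    sync.Mutex
	inUse = map[string]struct{}{}
)

// generateID returns a 64-character hex ID from a non-cryptographic source.
// This mirrors the idea of a "non-crypto" ID: predictable bytes, but drawn
// from a space large enough that accidental collisions are negligible.
func generateID() string {
	b := make([]byte, 32)
	rand.Read(b) // math/rand is fine for identifiers, never for secrets
	return fmt.Sprintf("%x", b)
}

// newMountID re-rolls until the ID is unused, removing even the theoretical
// chance of two live mounts on one host sharing an ID.
func newMountID() string {
	mu.Lock()
	defer mu.Unlock()
	for {
		id := generateID()
		if _, taken := inUse[id]; !taken {
			inUse[id] = struct{}{}
			return id
		}
	}
}

func main() {
	fmt.Println(newMountID())
}
```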

@icecrime added the status/failing-ci label (indicates that the PR in its current state fails the test suite) on Mar 8, 2016
@cpuguy83 removed the status/failing-ci label on Mar 16, 2016
@erikh (Contributor) commented Mar 19, 2016

Is this going to be available in 1.11?

@thaJeztah (Member):

ping @cpuguy83 looks like this needs a rebase

@cpuguy83 (Member, Author):

I'll deal with rebase if we move this to code-review.

@erikh (Contributor) commented Apr 5, 2016

@cpuguy83 please tell me this is going to be in 1.11...

@cpuguy83 (Member, Author) commented Apr 5, 2016

@erikh Nope.

@tiborvass (Contributor):

@cpuguy83 REBASE!! :P

@cpuguy83 (Member, Author):

@tiborvass DONE!!

@vdemeester (Member):

LGTM 🐯

@thaJeztah (Member):

ping @cpuguy83 needs a rebase again 😢

ping @tiborvass PTAL

@SvenDowideit (Contributor):

yay! one more part I needed :) and docs LGTM

This generates an ID string for calls to Mount/Unmount, allowing drivers
to differentiate between two callers of `Mount` and `Unmount`.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
@cpuguy83 (Member, Author):

rebased

@estesp (Contributor) commented May 5, 2016

LGTM

@vdemeester (Member):

docs LGTM 🐸

@vdemeester merged commit bb12565 into moby:master on May 5, 2016
@cpuguy83 deleted the add_opaque_mount_id branch on May 5, 2016 at 18:44
@e4jet commented Jul 28, 2016

I'm playing with docker 1.12.0-rc5. I see the unique id in the /VolumeDriver.Mount request, but always see an empty string in the /VolumeDriver.Unmount request. Is this expected?

I was hoping to verify the ID during the unmount request. Perhaps I'm overthinking this.

Below is an example where 2 containers request the same volume (called swarm5), then exit. Note that the ID is sent in the Mount request, but not the Unmount request.

Thanks!

/VolumeDriver.Mount request={"Name":"swarm5","ID":"dfce4a5f374a573a6f36f2bd78907e2b90191aae54b70957e46a79629ba7af02"} response={"Mountpoint":"/data/swarm5","Err":null}

/VolumeDriver.Mount request={"Name":"swarm5","ID":"ddce68c10c1afe245af1880df68076074c578d2bdb2a22731263a65bc2a97c38"} response={"Mountpoint":"/data/swarm5","Err":null}

/VolumeDriver.Unmount request={"Name":"swarm5","ID":""} response={"Err":null}
/VolumeDriver.Unmount request={"Name":"swarm5","ID":""} response={"Err":null}
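
For reference, the request and response bodies visible in these logs can be modeled with two small structs. This is a sketch based only on the JSON shown above, with hypothetical type names rather than anything from Docker's plugin helper libraries.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mountRequest mirrors the /VolumeDriver.Mount (and .Unmount) payload seen in
// the logs above: the volume name plus the opaque caller ID added by this PR.
type mountRequest struct {
	Name string `json:"Name"`
	ID   string `json:"ID"`
}

// mountResponse mirrors the reply: the host path the volume is mounted at and
// an error field that is null when the call succeeded.
type mountResponse struct {
	Mountpoint string  `json:"Mountpoint,omitempty"`
	Err        *string `json:"Err"`
}

func main() {
	payload := []byte(`{"Name":"swarm5","ID":"dfce4a5f374a573a6f36f2bd78907e2b90191aae54b70957e46a79629ba7af02"}`)

	var req mountRequest
	if err := json.Unmarshal(payload, &req); err != nil {
		panic(err)
	}
	fmt.Printf("mount of %q requested by caller %s\n", req.Name, req.ID)

	resp, _ := json.Marshal(mountResponse{Mountpoint: "/data/" + req.Name})
	fmt.Println(string(resp)) // {"Mountpoint":"/data/swarm5","Err":null}
}
```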

@erikh (Contributor) commented Jul 28, 2016

that is definitely going to make this ineffective for my needs.


@thaJeztah (Member):

@e4jet @erikh if it's a bug, please open a new issue; I can add it to the 1.12.1 milestone for consideration

@erikh (Contributor) commented Jul 28, 2016

I worked around it by tracking state outside docker in both copy and nocopy
modes, so I'm good. File it if you feel it needs to be fixed.


@e4jet commented Jul 28, 2016

@thaJeztah & @erikh thanks for the quick response. I can't think of a compelling reason for the engine to have to track this, so I'll take the same approach as @erikh. The only enhancement that I can think of would be to pass the container's uuid instead of creating an opaque one. This would allow some correlation for reporting and debugging. Cheers

@erikh (Contributor) commented Jul 29, 2016

Unfortunately even though I originally asked for this feature, I agree;
without mount+container data I'm still querying docker no matter what to
assert runtime state.


@cpuguy83 (Member, Author):

@erikh @e4jet Please open an issue to discuss.
I do not see why you would need to query docker (outside of the potential bug in not sending the ID on unmount).

@erikh (Contributor) commented Jul 29, 2016

the state tracking is independent, I just have a global mutex protecting a
counter for each volume. I increment it when you send me a mount and
decrement it when you send me an unmount. this is the only way to safely
manage both copy and nocopy mode.

as for querying docker, it'd be great if I could send a signal or when
docker auto-discovers the plugin, sends it a payload (maybe on activate?)
of all the volumes it currently knows about. I have to query docker for
this now when the plugin itself is restarted. This leads to an interesting
race where my poller (which must succeed before the program can continue
safely) races a restart of docker because docker tries to query volplugin,
but volplugin is busy waiting on docker. I think these operations should be
decoupled somehow as well.

Sorry for the braindump but I have extremely little time these days and am
trying to give you the feedback you need in the most efficient way possible.
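
For illustration, a minimal sketch of the workaround described at the top of this comment: a mutex-protected per-volume counter that is incremented on Mount and decremented on Unmount (hypothetical code, not volplugin's actual implementation).

```go
package main

import (
	"fmt"
	"sync"
)

// refCounter is the state tracking described above: since the Unmount request
// carries no ID here, the plugin only counts outstanding mounts per volume and
// releases the volume when the count drops back to zero.
type refCounter struct {
	mu     sync.Mutex
	counts map[string]int
}

// Mount records one more user of the volume and reports whether it was the
// first, i.e. the point at which the backing store would be provisioned.
func (r *refCounter) Mount(volume string) (first bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.counts == nil {
		r.counts = map[string]int{}
	}
	r.counts[volume]++
	return r.counts[volume] == 1
}

// Unmount drops one user and reports whether it was the last, i.e. the point
// at which the backing store could safely be deprovisioned.
func (r *refCounter) Unmount(volume string) (last bool, err error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.counts[volume] == 0 {
		return false, fmt.Errorf("volume %s is not mounted", volume)
	}
	r.counts[volume]--
	return r.counts[volume] == 0, nil
}

func main() {
	var rc refCounter
	fmt.Println(rc.Mount("swarm5"))   // true: first caller, provision here
	fmt.Println(rc.Mount("swarm5"))   // false: second caller of the same volume
	fmt.Println(rc.Unmount("swarm5")) // false <nil>: one caller still active
	fmt.Println(rc.Unmount("swarm5")) // true <nil>: safe to release the volume
}
```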


@@ -124,6 +125,8 @@ name. This is called once per container start. If the same volume_name is reques
more than once, the plugin may need to keep track of each new mount request and provision
at the first mount request and deprovision at the last corresponding unmount request.

`ID` is a unqiue ID for the caller that is requesting the mount.
A reviewer commented:

typo "unqiue"
