
Fix resource.UniqueId to be properly ordered over multiple runs #15280

Merged: 1 commit into hashicorp:master on Jun 15, 2017

Conversation

glasser
Contributor

@glasser glasser commented Jun 13, 2017

The timestamp prefix added in #8249 was removed in #10152 to ensure that
returned IDs really are properly ordered. However, this meant that IDs were no
longer ordered over multiple invocations of terraform, which was the main
motivation for adding the timestamp in the first place. This commit does a
hybrid: timestamp-plus-incrementing-counter instead of just incrementing counter
or timestamp-plus-random.
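
For context, a minimal sketch of what such a hybrid could look like (the names, the timestamp layout, and the counter width are illustrative assumptions, not necessarily the exact code merged in this PR):

package resource

import (
	"fmt"
	"strings"
	"sync"
	"time"
)

// UniqueIdPrefix is the default prefix for generated IDs.
const UniqueIdPrefix = "terraform-"

// The counter is shared across calls and is only ever incremented, so IDs
// produced within the same timestamp still come out in order.
var idMutex sync.Mutex
var idCounter uint32

// PrefixedUniqueId returns prefix + UTC timestamp (to a tenth of a
// millisecond) + an 8-hex-digit incrementing counter. The timestamp keeps IDs
// ordered across separate terraform runs; the counter keeps them ordered and
// unique within a single run even when two calls land on the same timestamp.
func PrefixedUniqueId(prefix string) string {
	// Format with 4 fractional-second digits, then drop the dot so the
	// result is a plain digit string.
	timestamp := strings.Replace(
		time.Now().UTC().Format("20060102150405.0000"), ".", "", 1)

	idMutex.Lock()
	defer idMutex.Unlock()
	idCounter++
	return fmt.Sprintf("%s%s%08x", prefix, timestamp, idCounter)
}

// UniqueId returns a unique ID with the default prefix.
func UniqueId() string {
	return PrefixedUniqueId(UniqueIdPrefix)
}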

@glasser glasser force-pushed the glasser/unique-id-timestamp-again branch from ba7ffc2 to 59155bb on June 13, 2017 at 21:07
@jbardin jbardin self-assigned this Jun 13, 2017
@glasser glasser force-pushed the glasser/unique-id-timestamp-again branch from 59155bb to d73f604 on June 13, 2017 at 23:36
@jbardin
Member

jbardin commented Jun 14, 2017

Thanks @glasser,

This looks good. The hesitation before was that time.Now() is not monotonic, but we are close to go1.9, which introduces monotonic time, so this will be fine going forward.

jbardin (Member) commented on this part of the diff:

// IDs every 10th of a millisecond, which ought to be good enough.
if timestamp != lastTimestamp {
lastTimestamp = timestamp
idCounter = 0

I think we can just skip resetting this altogether, and just let it roll over if it ever gets that far. You could seed it with a pseudo-random number if you want to keep it visually interesting, but that's not required ;)

While it doesn't hurt in most cases, until go1.9 there's actually a good chance that a group of resources made in quick succession could have the timestamp bounce back again. Not to mention Windows only has a 15ms resolution.
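
A small sketch of that suggestion, continuing the sketch above: the counter is seeded once and never reset when the timestamp changes (the crypto/rand seeding shown here is an assumption, just one way to pick a starting value):

import (
	"crypto/rand"
	"encoding/binary"
)

// Replaces the plain `var idCounter uint32` from the earlier sketch: start
// from a pseudo-random value and only ever increment. A wrap within a single
// timestamp would still yield distinct IDs, just momentarily out of order.
var idCounter = func() uint32 {
	var b [4]byte
	if _, err := rand.Read(b[:]); err != nil {
		return 0 // fall back to zero if the random source is unavailable
	}
	return binary.BigEndian.Uint32(b[:])
}()

Seeding only changes where the counter starts; uniqueness within a run comes from the increments, and because the counter never resets, a timestamp that bounces backwards (possible before go1.9) cannot reproduce an earlier counter value and collide.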

@glasser
Contributor Author

glasser commented Jun 14, 2017 via email

@jbardin
Member

jbardin commented Jun 14, 2017

Yes. I'm essentially saying it technically doesn't matter if it overflows (not that you could even run a plan that large) because if it does, we would be on a new timestamp by then, or else this wouldn't work in the first place. ;)

In go1.9 time.Now() will be monotonic https://golang.org/cl/36255
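
For reference, the practical effect of that change: elapsed-time calculations such as Sub and Since use the monotonic reading, while formatted output still reflects only the wall clock. A brief illustration (assumes Go 1.9+):

package main

import (
	"fmt"
	"time"
)

func main() {
	// With Go 1.9+, time.Now() carries a monotonic clock reading alongside
	// the wall-clock time.
	t1 := time.Now()
	time.Sleep(10 * time.Millisecond) // stand-in for real work
	t2 := time.Now()

	fmt.Println(t2.Sub(t1))              // elapsed time, computed from the monotonic reading
	fmt.Println(t2.Format(time.RFC3339)) // Format uses only the wall clock
}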

@glasser
Contributor Author

glasser commented Jun 14, 2017 via email

@glasser
Contributor Author

glasser commented Jun 14, 2017 via email

@jbardin
Member

jbardin commented Jun 14, 2017

Oh, I think you might be right - I haven't looked at that doc since it was published, but it excludes Format from the changes. I'm not sure if that's because it doesn't take the skew into consideration, or because there's just no measurable impact on the output, since it's not used directly in a comparison. I'll check it out later.

Regardless, not resetting the counter should cover the situation. If it rolls over in the same timestamp, the current code wouldn't work either, because it's only reset when the timestamp changes.

The change was implemented because I was getting identical IDs on Windows and out-of-order IDs in tests.

@jbardin
Member

jbardin commented Jun 14, 2017

Oh, I see what you mean about the rollover happening in the same timestamp now: it wouldn't collide, but it would be out of order. While hitting that is only hypothetical, if we're going to reset, maybe once per second?
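
To make that concrete (hypothetical values, using the ID shape from the sketch above):

// Two IDs generated within the same timestamp, straddling a counter wrap:
//   terraform-201706141200001234ffffffff   counter at its maximum
//   terraform-20170614120000123400000000   counter wrapped to zero
// The two IDs are still distinct, but the wrapped one sorts before its
// predecessor, so a wrap is only a transient ordering glitch, never a collision.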

@glasser
Contributor Author

glasser commented Jun 14, 2017 via email

@jbardin
Member

jbardin commented Jun 14, 2017

Nope, I don't think we will ever allocate that many IDs in the tests ;) Which is why I think it's OK to never reset it. If it were possible for it to roll over during execution, and it happened within a single timestamp, it wouldn't cause a duplicate ID, so it's only a minor ordering issue at that point.

@glasser glasser force-pushed the glasser/unique-id-timestamp-again branch from d73f604 to 251cfd3 on June 15, 2017 at 00:10
@glasser
Contributor Author

glasser commented Jun 15, 2017

Updated as requested.

@glasser glasser force-pushed the glasser/unique-id-timestamp-again branch from 251cfd3 to f62dd7f on June 15, 2017 at 00:10

@jbardin jbardin left a comment


Just a minor comment fix, and this is gtg

jbardin (Member) commented on this part of the diff:

// string. The max possible hex value here with 12 random bytes is
// "01000000000000000000000000", so there's no chance of rollover during
// operation.
// idCounter is a monotonic counter for generating ordered unique ids. We reset

just need to remove the comment about resetting here

@radeksimko radeksimko added the bug, core, and waiting-response labels Jun 15, 2017
@glasser
Contributor Author

glasser commented Jun 15, 2017

Oops, done.

@glasser glasser force-pushed the glasser/unique-id-timestamp-again branch from f62dd7f to 0a1f915 on June 15, 2017 at 15:09
@jbardin jbardin merged commit 956ab16 into hashicorp:master Jun 15, 2017
@ghost

ghost commented Apr 8, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 8, 2020