Ensure old tokens continue to work by removing token_key #362

Closed · wants to merge 5 commits

Conversation

@jmsmkn commented Aug 2, 2024

This PR removes token_key and adds a test that ensures old tokens continue to work. See the discussion in #358 and #356.

When each token had its own salt, hash_token had to be called per token as hash_token(token, auth_token.salt). token_key was introduced in 913336d so that the comparison only had to be done against a smaller set of candidate tokens.

However, the per-token salt was removed in 51a5204. That means we can now move hash_token out of the filter loop of authenticate: since we no longer pass auth_token.salt, hashing the token sent by the user gives us the digest straight away, so we can do a direct lookup on digest (the primary key, and therefore unique) to find the relevant token.

This also improves performance, as multiple hashes and comparisons no longer need to be made, and it solves the problem of tokens being invalidated between versions 4.2.0 -> 5.0.0. Additionally, MAXIMUM_TOKEN_PREFIX_LENGTH is no longer bound by TOKEN_KEY_LENGTH, so it could be increased if you wish.
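
To make the shape of the change concrete, here is a rough sketch of what the credential check can look like once token_key is gone (not the exact diff in this PR; it assumes knox.crypto.hash_token and knox.models.get_token_model as found in recent versions):

import binascii

from django.utils.translation import gettext_lazy as _
from rest_framework import exceptions
from rest_framework.authentication import BaseAuthentication

from knox.crypto import hash_token
from knox.models import get_token_model

class TokenAuthentication(BaseAuthentication):
    def authenticate_credentials(self, token):
        msg = _('Invalid token.')
        token = token.decode('utf-8')
        try:
            # with the per-token salt gone, hashing the presented token
            # immediately yields the stored digest
            digest = hash_token(token)
        except (TypeError, binascii.Error):
            raise exceptions.AuthenticationFailed(msg)
        try:
            # direct primary-key lookup replaces filtering on token_key
            # and re-hashing each candidate inside a loop
            auth_token = get_token_model().objects.get(digest=digest)
        except get_token_model().DoesNotExist:
            raise exceptions.AuthenticationFailed(msg)
        # expiry, auto-refresh and user validation continue as before
        ...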

@jmsmkn changed the title from "Add test for old tokens" to "Ensure old tokens continue to work" on Aug 2, 2024
@jmsmkn changed the title from "Ensure old tokens continue to work" to "Ensure old tokens continue to work by removing token_key" on Aug 3, 2024
@jmsmkn (Author) commented Aug 3, 2024

@giovannicimolin @johnraz This ended up being a bigger change than expected, but it gives the project a performance boost and solves the issue of invalidated tokens. It's my first contribution here, so I'm happy to address any feedback.

@giovannicimolin (Contributor) commented:

@jmsmkn Adding this to my review queue for next week. Thanks for the contribution and for the bug fix! 🚀

@giovannicimolin (Contributor) commented:

@jmsmkn Hey! I reviewed your PR and it's mostly looking good, but I wasn't able to properly test it because there are a few things messed up in the library right now. I'll get back to this and merge it as soon as I can figure out what's wrong.

@Fireclunge commented:

This would be great to add in, as token invalidation is what's stopping me from upgrading to v5. Is this fix still viable?

@mr-niche commented Oct 9, 2024

Echoing @Fireclunge's comment, the invalidation of existing tokens is a non-starter for us when upgrading, unfortunately.

tl;dr: it seems like make_hex_compatible from #272 is the root of the backwards-incompatibility problem. A small tweak to this function would make it possible to upgrade from 4.2 -> 5+ without invalidating previously generated tokens! See below for the details and the proposed solution.


I agree with @jmsmkn's assessment that token_key can be removed and the authentication logic can be simplified with a direct digest lookup. I went ahead and tested this upgrade path locally and still found that my pre-existing tokens (generated using 4.2.0) were considered invalid. It seems the root of the problem lies with the introduction of token prefixes in #272 - in particular, the make_hex_compatible() function makes a fundamental change to how tokens are hashed, leading to backwards incompatibility.

Before #272, this is what hash_token looked like:

def hash_token(token: str) -> str:
    """
    Calculates the hash of a token.
    Token must contain an even number of hex digits or
    a binascii.Error exception will be raised.
    """
    digest = hash_func()
    digest.update(binascii.unhexlify(token))
    return digest.hexdigest()

The important bit is that binascii.unhexlify(token) is a "deconstructed" bytes representation of the token.

> token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> binascii.unhexlify(token)
b'\xd4[\x05\xeb\xc4\xbc\xe4\xf3e\xc0\x1c\x1e\x01\x00\xf0\x0c\xb079\x18M7J\xb8\xdf\xb1W\xe2}\xb8\xad\x8c'

In #272, the addition of a token prefix resulted in an additional step of ensuring the token string is hex-compatible. This is reasonable, as tokens like example_3af32... would fail when being converted to bytes:

> binascii.unhexlify(f"example_{token}")
*** binascii.Error: Non-hexadecimal digit found

However, the actual implementation of make_hex_compatible() is a bit odd in that it just converts the token str to a bytes string:

def make_hex_compatible(token: str) -> bytes:
    """
    We need to make sure that the token, that is send is hex-compatible.
    When a token prefix is used, we cannot guarantee that.
    """
    return binascii.unhexlify(binascii.hexlify(bytes(token, 'utf-8')))

> token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> make_hex_compatible(token)
b'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'

And with a token prefix present, it does work, but it's still just the bytes() version of the token str:

> token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> make_hex_compatible(f"example_{token}")
b'example_d45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'

All that said, make_hex_compatible is equivalent to:

> bytes(token, 'utf-8')
b'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> bytes(f"example_{token}", 'utf-8')
b'example_d45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'

Again, this still works, because digest.update() will gladly hash the bytes string without issue. However, if we want to ensure version 4.2 tokens still work with token prefixes, we would have to maintain the original "deconstructed" bytes format that was being hashed. Otherwise, you'll get two different hashes for the same token!

> digest_1 = hash_func()  # hashlib.sha512
> digest_2 = hash_func()
> token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> digest_1.update(binascii.unhexlify(token))  # the original hash format in v4.2
> digest_2.update(make_hex_compatible(token))  # the new hash format in v5+
> digest_1.hexdigest()
'03fd15d542ded8d635cd7939020f44b813f796ada8f7d6b84dac87fe8e04ce891fcadfb800b31ccc0c0bbe14401c57cf4a250886203cc1187cabdef45705cb5f'
> digest_2.hexdigest()
'a78ad04bb016c54be0ddce4ae54d9476fb8b8d7cee0061631e561f6f915934b91627c0b15d438afb0ee3c231b2d92365fa3cdbc405d49a3738e6851235fcb4ff'

NOTE: This of course assumes you are using the default cryptography.hazmat.primitives.hashes.SHA512 in v4.2, and then the new default hashlib.sha512 in v5+. Because the salt is no longer used, these two implementations produce the same SHA-512 hash. Backwards compatibility is possible here!
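
As a quick sanity check of that note (assuming a recent version of the cryptography package, which no longer needs an explicit backend), the two SHA-512 implementations agree on the same input:

import binascii
import hashlib

from cryptography.hazmat.primitives import hashes

token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
data = binascii.unhexlify(token)

# v4.2-style digest via the cryptography package
digest = hashes.Hash(hashes.SHA512())
digest.update(data)
old_style = digest.finalize().hex()

# v5-style digest via hashlib
new_style = hashlib.sha512(data).hexdigest()

assert old_style == new_style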


(Sorry for the long reply, here's my proposed solution, which I think could slot nicely into this PR)

Update make_hex_compatible as follows:

def make_hex_compatible(token: str) -> bytes:
    """
    Ensure a token, which may contain a TOKEN_PREFIX, is hex-compatible before hashing.
    """
    try:
        # this supports tokens generated in v4.2 and any tokens which do not contain a TOKEN_PREFIX
        return binascii.unhexlify(token)
    except (binascii.Error, ValueError):
        # if a token has a prefix, encode it so that it's hex-compatible and can be hashed
        return binascii.hexlify(token.encode('utf-8'))

Adding this check should be lightweight performance-wise, and it maintains backwards compatibility for existing tokens (while still supporting newer tokens that use the TOKEN_PREFIX option).
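
For illustration (output truncated here), the updated function reproduces the original unhexlify bytes for raw hex tokens and only falls back to hexlifying when a prefix makes the token non-hex:

> token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> make_hex_compatible(token)               # raw hex token: same bytes v4.2 hashed
b'\xd4[\x05\xeb\xc4\xbc\xe4\xf3e\xc0\x1c\x1e...'
> make_hex_compatible(f"example_{token}")  # prefixed token: hexlified utf-8 bytes
b'6578616d706c655f6434356230...'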

@giovannicimolin let me know what you think, happy to help support this effort in this PR (if @jmsmkn is interested?) or break it out separately. Also happy to help write tests and whatnot.

@jmsmkn (Author) commented Oct 9, 2024

Great find, @mr-niche! I'm unsure of the best way to proceed with this. Even though I was given the old "PRs welcome" treatment when I first raised the backward compatibility issue, it's been two months since I submitted a PR, and I haven't received any feedback. I spent a weekend working on improving the library, but without input from the maintainers, it's hard to move forward. If you'd like to make a PR to my branch to consolidate our efforts, I'd be happy to merge it. However, we ultimately need some guidance from the maintainers to get this resolved.

@mr-niche commented Oct 9, 2024

Thanks @jmsmkn ! I'll give @giovannicimolin some time to weigh in on how to proceed. If we get a thumbs up, I'll go ahead and submit a PR to your branch and we can get this ball rolling again. I'll also include some documentation around "Upgrading from v4 -> v5" so that folks who have raised this concern before (i.e. in #356 ) can see if the upgrade path is viable for their use case.

@giovannicimolin (Contributor) commented:

@jmsmkn @mr-niche Thanks for the work here so far - this is a really valuable contribution to the project.

In the last few weeks I haven't had enough time to follow up on this project; I don't have a lot of bandwidth for it. I'll try to catch up before the end of the week.

@giovannicimolin (Contributor) commented:

@mr-niche Thanks for the great in-depth investigation of the issue!
I think it'll be great if we can get this shipped soon.

@jmsmkn Can you incorporate @mr-niche's changes into your PR and resolve the PR conflicts?

Let's make this the 5.1 version - lots of folks will be happy with the library being backwards compatible.

@mr-niche commented:

Thanks @giovannicimolin ! @jmsmkn , I'll take a stab at adding this to your PR, I have a little bit of time today.

@jmsmkn (Author) commented Oct 28, 2024

Thanks @giovannicimolin! I have fixed the conflicts. One question: we could still keep token_key around, as people may be using it as an identifier, even if it is no longer used internally. Let me know what you think.

@giovannicimolin (Contributor) commented:

@jmsmkn That makes sense, can you keep it around for now?

I'll take some time to test it out tomorrow. :)

@mr-niche commented:

@giovannicimolin my changes are in this PR (to be merged into @jmsmkn's PR): https://github.com/jmsmkn/django-rest-knox/pull/1

We need some guidance on one last consideration: if we switch back to the original unhexlify strategy, we will be breaking compatibility with any tokens generated in the 5.0.* versions, unfortunately.

I'm not sure what the right thing to do here is, we could either:

  1. Make it clear that an update to 5.1.0 breaks 5.0.* tokens but restores compatibility to 4.2.0 (another breaking change 😬 )
  2. Find a way to support both versions if users would like to, eventually migrating them over to the 5.0.* hashing style (see my comment here: https://github.com/jmsmkn/django-rest-knox/pull/1#issuecomment-2444830062 )

Option 2 feels risky. But option 1 is yet another breaking change (albeit, a potentially smaller one - if folks have already upgraded to 5.0.*, they might not be as concerned about breaking changes, but I don't necessarily want to make that assumption).
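
For what it's worth, a very rough sketch of what option 2 could look like (hypothetical helper names, not code in this PR), assuming the default hashlib.sha512 hash function: compute both digests and accept either, so 5.0.* tokens keep working while new tokens use the restored 4.2-compatible format.

import binascii
import hashlib

from knox.models import get_token_model

def _digest_v42_style(token: str) -> str:
    # pre-5.0 format: hash the unhexlified token bytes,
    # falling back to hexlify for prefixed (non-hex) tokens
    try:
        data = binascii.unhexlify(token)
    except (binascii.Error, ValueError):
        data = binascii.hexlify(token.encode('utf-8'))
    return hashlib.sha512(data).hexdigest()

def _digest_v50_style(token: str) -> str:
    # 5.0.* format: hash the utf-8 bytes of the token string as-is
    return hashlib.sha512(token.encode('utf-8')).hexdigest()

def find_auth_token(token: str):
    TokenModel = get_token_model()
    for digest in (_digest_v42_style(token), _digest_v50_style(token)):
        try:
            return TokenModel.objects.get(digest=digest)
        except TokenModel.DoesNotExist:
            continue
    return None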

Let me know what you think/if you have any other ideas here!

@jmsmkn closed this Nov 19, 2024
@Fireclunge commented Nov 22, 2024

Thank you both for your efforts with this. I pieced together bits of your work and created a custom authentication class to allow this to work for me.

A quick summary for anyone who wants to upgrade but doubts that this will be merged into main.

import binascii

from django.utils.translation import gettext_lazy as _
from rest_framework import exceptions
from rest_framework.exceptions import AuthenticationFailed

from knox.auth import TokenAuthentication
from knox.crypto import hash_func  # the hash function knox uses internally
from knox.models import get_token_model
from knox.settings import knox_settings

class TokenAuthenticationOverride(TokenAuthentication):
    """
    Extension of the TokenAuthentication class
    """

    def _legacy_make_hex_compatible(self, token: str) -> bytes:
        """
        Ensure a token, which may contain a TOKEN_PREFIX, is hex-compatible before hashing.
        Reduce this down to return binascii.unhexlify(token) if you did not use a prefix
        """
        try:
            # this supports tokens generated in v4.2 and any tokens which do not contain a TOKEN_PREFIX
            return binascii.unhexlify(token)
        except (binascii.Error, ValueError):
            # if a token has a prefix, encode it so that it's hex-compatible and can be hashed
            return binascii.hexlify(token.encode('utf-8'))

    def _authenticate_legacy_credentials(self, token):
        # Allows backward compatibility with 4.2.0 tokens
        # Tokens that have expired will be deleted and skipped

        msg = _('Invalid token.')
        token = token.decode("utf-8")

        try:
            digest = hash_func()
            digest.update(self._legacy_make_hex_compatible(token))
            digest = digest.hexdigest()
        except (TypeError, binascii.Error):
            raise exceptions.AuthenticationFailed(msg)

        try:
            auth_token = get_token_model().objects.get(digest=digest)
        except get_token_model().DoesNotExist:
            raise exceptions.AuthenticationFailed(msg)

        if self._cleanup_token(auth_token):
            raise exceptions.AuthenticationFailed(msg)

        if knox_settings.AUTO_REFRESH and auth_token.expiry:
            self.renew_token(auth_token)

        return self.validate_user(auth_token)

    def authenticate_credentials(self, token):
        # Allows backward compatibility with 4.2.0 tokens
        # https://github.com/jazzband/django-rest-knox/pull/362/files
        # Tokens that have expired will be deleted and skipped
        try:
            # Use new method first - All new tokens after upgrade will use this
            return super().authenticate_credentials(token)  # Knox 5.0.0+ tokens
        except AuthenticationFailed:
            return self._authenticate_legacy_credentials(token)  # Old 4.2.0 tokens

I was able to reduce make_hex_compatible(token) to binascii.unhexlify(token) in my application, removing the need for the extra function. The idea is to deprecate 4.2.0 tokens once an acceptable share of users are on the new tokens. Unfortunately there will be a small performance hit while this churns through.
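
Concretely, for anyone in the same situation (no TOKEN_PREFIX in use), that reduction is just:

    def _legacy_make_hex_compatible(self, token: str) -> bytes:
        # no TOKEN_PREFIX configured, so every valid token is raw hex
        return binascii.unhexlify(token)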

I would suggest this thread stay visible and not be closed, but it doesn't seem like many people actually care about this? 🫠😂 It's a huge L for the library IMO - just casually logging out all your users, no big deal. Absolutely nightmarish for B2C websites with a large volume of users.

@mr-niche commented:

Sweet, thanks for following up on this @Fireclunge ! This is a nice option for rolling forward - the performance hit is a bummer, but I'm glad you found an option that works for you.

@giovannicimolin , if this work isn't going to be supported/merged, I think the snippet above should be visible or linked in the README or CHANGELOG to highlight this as an "upgrade path" for 4.2 -> 5.

Additionally, the performance improvement @jmsmkn suggested (removing the salt loop) is still very relevant to the library, regardless of the make_hex_compatible work, and I think it should be prioritized. When you get a chance, can you provide an update on the support we can expect for this library? Is there anything the community can do to help move these things forward?

Thanks!

@johnraz (Collaborator) commented Dec 14, 2024

While I appreciate the work done here and understand the concern, I think taking a step back is important.

Asking customers to log in again, or even integrators of an API to update their token (with a reasonable delay to do so, ofc), is by no means the end of the world.

Rotating a token on a regular basis should be considered a good practice anyway - I mean, this whole process shouldn't be « nightmarish », and if it is, you should consider discussing this with your product / customer success team more than with us IMO 😉

Sorry if this sounds like a rant - it really isn't - but the « drama » around change in our field sometimes gets under my skin 😂

Cheers !

@jmsmkn (Author) commented Dec 14, 2024

Of course token rotation is a good practice; no one is suggesting otherwise. However, the moment it occurs shouldn't be dictated by a library, as it requires synchronisation across the application. It certainly shouldn't happen simultaneously with a dependency update, especially without a migration strategy in place. It would be absurd if Django invalidated everyone's passwords after a minor version update, and the same principle applies here.

@johnraz (Collaborator) commented Dec 14, 2024

I’m not disputing the fact that we screwed up by introducing a breaking change by mistake without doing a major bump, and that it should be avoided in the future.

I’m trying to bring some nuance to the topic, which, looking at your answer, is going to be a difficult task 🤣

Also, comparing this lib to Django is … well, a bit absurd on its own… We don’t have the same kind of community to back the project or QA process… We don’t even have a decent test suite to begin with.

I just wanted to underline that the situation is not THAT bad, that’s all.

@Fireclunge commented:

> While I appreciate the work done here and understand the concern, I think taking a step back is important.
>
> Asking customers to log in again, or even integrators of an API to update their token (with a reasonable delay to do so, ofc), is by no means the end of the world.
>
> Rotating a token on a regular basis should be considered a good practice anyway - I mean, this whole process shouldn’t be « nightmarish », and if it is, you should consider discussing this with your product / customer success team more than with us IMO 😉
>
> Sorry if this sounds like a rant - it really isn’t - but the « drama » around change in our field sometimes gets under my skin 😂
>
> Cheers !

With all due respect, we've all gotten together in this thread precisely so that the "process isn't nightmarish", and to share solutions so we can tackle this as a community. This might shock you, but a lot of people don't know or trust their hashing and cryptography knowledge enough to go around poking at and amending the internals of an authentication library that is to be used in production.

Thank god for @jmsmkn and @mr-niche and the work they put into this, because I honestly wouldn't even know where to start - all I did was provide a quick copy-pastable overview, because I think we got the hint that this was never going to be merged in.

You talk as though key rotation is a good practice that should be discussed with your "product / customer success team", completely forgetting that many of us ARE the ****ing product team - and we have decided that invalidating all tokens in one swift move is a complete no-go, so we've bashed heads here to provide a gradual solution.

It's quite frustrating, especially because this library is specifically recommended by the Django REST Framework community ( https://www.django-rest-framework.org/api-guide/authentication/#third-party-packages ) and a lot of people hold this library in high regard.

Appreciation to yourself too for all the work you've put into this - but goddamn, it highly encapsulates why developers wind business people up.

@johnraz (Collaborator) commented Dec 14, 2024

Did I say I value the work being done in this thread? Yes - I’m happy to see people looking for a workaround.

Do I think people are overreacting a bit? Yes. As I said, I just wanted to bring some nuance to the topic, as I (in my own opinion, which doesn’t have to be yours) don’t think it’s such a big deal.

At no point did I try to start a flame war…

I would be happy to discuss cases where this really impacts the business so badly that it’s a massive no-go, and to increase my knowledge there.

Now, on a more ethical note, which has nothing to do with the original discussion, but I just can’t let it slip:

« developers wind business people up »
This is clearly out of line considering that you are building the so-called business on top of work given away to you by those developers for free.

It is really offensive.

This type of comment, and the general tone of your message, is the reason why people get burned out on open source - and no, this is not me being sensitive; please read about it, it is a real curse.
