
Configurable password hashing algorithm/cost #31234

Merged
jkakavas merged 31 commits into elastic:master on Jun 28, 2018

Conversation

jkakavas
Member

Make password hashing algorithm/cost configurable for the
stored passwords of users for the realms to which this applies
(native, reserved). Replaces the predefined choice of bcrypt with
cost factor 10.
This also introduces PBKDF2 with a configurable cost
(number of iterations) as an algorithm option for password hashing,
both for storing passwords and for the user cache.
Doesn't support "on the fly" changes of the hashing algorithm
selection, as pre-existing users wouldn't be able to authenticate.

Documentation additions will be handled in a separate PR.

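For a concrete sense of the PBKDF2 option, here is a minimal, self-contained sketch using only the JDK; the class name, the {PBKDF2} prefix format, and the defaults are illustrative rather than the PR's actual code:

    import java.security.NoSuchAlgorithmException;
    import java.security.SecureRandom;
    import java.security.spec.InvalidKeySpecException;
    import java.util.Base64;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class Pbkdf2Sketch {
        private static final int KEY_LENGTH_BITS = 256;         // assumed key size
        private static final SecureRandom RANDOM = new SecureRandom();

        // Produce a self-describing hash string: algorithm id, cost, salt, hash.
        static String hash(char[] password, int iterations)
                throws NoSuchAlgorithmException, InvalidKeySpecException {
            byte[] salt = new byte[32];
            RANDOM.nextBytes(salt);
            PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, KEY_LENGTH_BITS);
            byte[] dk = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512")
                    .generateSecret(spec).getEncoded();
            spec.clearPassword(); // drop the cleartext copy held by the spec
            return "{PBKDF2}" + iterations + "$"
                    + Base64.getEncoder().encodeToString(salt) + "$"
                    + Base64.getEncoder().encodeToString(dk);
        }
    }

Raising the iteration count makes each hash (and each offline guess) proportionally more expensive, which is what makes the cost configurable per deployment.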
@jkakavas added >enhancement review v7.0.0 :Security/Authentication Logging in, Usernames/passwords, Realms (Native/LDAP/AD/SAML/PKI/etc) v6.4.0 labels Jun 10, 2018
@elasticmachine
Collaborator

Pinging @elastic/es-security

/*
* Do not allow insecure hashing algorithms to be used for password hashing
*/
public static final Setting<String> PASSWORD_HASHING_ALGORITHM = new Setting<>(
Contributor

I am not sure if we discussed this in the team meeting or somewhere else; just adding a comment for discussion on whether we need an extensible mechanism via security extensions. If need be, customers could use 'Argon2' or 'scrypt' using non-default implementations. Is it too early to support them?

Member

This part has not been discussed, but it is a good idea for a future enhancement. I'd rather wait on opening this up, though; once we do, it has to be supported by us, and everything we add creates overhead. We also do not know if there is demand for this, and I'd guess that the majority of users would not have a preference as long as we use something that is secure.

Member Author

This PR was in the context of supporting a FIPS 140 compliant solution, hence only PBKDF2 was added. I also haven't seen any Argon2 or scrypt Java implementations that have been used / tested sufficiently (there are bindings for the Argon2 C implementation, though...).

I'm definitely not against allowing for extensions with other algorithm implementations, but not in the context of this PR.

Contributor

Thank you, Jay and Ioannis. I just wondered if this was something that we wanted to consider for security extensions. Yes, I would not trust fairly new, not-yet-proven implementations either; I just wanted to bring it up so we can keep it in our thoughts.

import java.util.Locale;
import java.util.regex.Pattern;

public class HasherFactory {
Contributor

Just in case we want to support security extensions providing their own hashers, I guess this factory could be along the lines of Realm.Factory. Just a thought, if need be.

@albertzaharovits
Contributor

Rant inbound:

I don't like the flow; specifically, I don't like that whenever there's a password there must be a hash name tag close by. I understand why you did it, to make it testable, but I don't think this is how it should work in the end.
I also don't like that the hashing algorithm and the cost are separate settings. The range of cost values for one algorithm is not related to another's.
Also, CACHE_HASH_ALGO_SETTING should default to PASSWORD_HASHING_ALGORITHM. I think this realm setting is useless (or let's say, power-user focused), but given the FIPS requirements I see it as dangerous. I know, requirements are different for in-memory compared to on-disk hashes (they should be faster; that's why the cache is there), but should we really use SHA-256 for most password hash verifications while only the stored hashes are PBKDF2?

Instead, this is how I see things:
The UserAndPassword entity can also harbor the hash algo tag. This would be picked up by looking at the hash prefix. When you need to verify the password, you use the hash tag of the retrieved hash instead of the setting value. The setting value is used when writing the hash entry (i.e. put user or change password).
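For illustration, a minimal sketch of that prefix-based dispatch with a stand-in enum; the {PBKDF2} prefix is an assumption here, while $2a$ is the standard bcrypt prefix:

    public class HashPrefixDispatch {
        enum Hasher { BCRYPT, PBKDF2, NOOP } // stand-ins for the real Hasher implementations

        // Pick the verification algorithm from the stored hash itself, so the
        // setting only governs how new hashes are written.
        static Hasher resolveFromHash(char[] hash) {
            String s = new String(hash); // fine for a hash; never do this for a password
            if (s.startsWith("$2a$")) {
                return Hasher.BCRYPT;
            } else if (s.startsWith("{PBKDF2}")) {
                return Hasher.PBKDF2;
            }
            return Hasher.NOOP; // unrecognised: could be plaintext (NOOP) or corrupt
        }
    }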

@jkakavas
Member Author

Thanks for the feedback!

> I don't like the flow, specifically I don't like that whenever there's a password there must be a hash name tag close by. I understand why you did it, to make it testable, but I don't think this is how it should work in the end.

Not sure I follow you; would you mind elaborating on your thoughts? The algorithm prefixes were already there and do not control the flow; they serve as an indication.

> I also don't like that the hashing algorithm and the cost are separate settings. The range of values of the cost for one algorithm is not related to another one.

I see your point. The idea is that I didn't want to handle all possible cost options in the algorithm settings (i.e. bcrypt4 to bcrypt16 is OK, but PBKDF2_1000 to PBKDF2_100000 not so much). I have no objection to removing the cost setting, but maybe limit the support to ~10 costs for bcrypt and ~5 for PBKDF2. WDYT?

> Also, CACHE_HASH_ALGO_SETTING should default to PASSWORD_HASHING_ALGORITHM

I disagree. Users wanting to run in a FIPS 140 JVM will have to set the CACHE_HASH_ALGO_SETTING to PBKDF2, since this is the only one that is approved.
For all the rest of the users, PBKDF2 or bcrypt for cached hashes can be extremely slow, and this is not the algorithms' intended use. If anything, this would bring forward the risk of users setting the cost to something extremely low, thus making the stored hashes more vulnerable to offline brute-force attacks (which is what PBKDF2 is meant to make difficult).

> Instead, this is how I see things:

One potential issue I see with this is that you wouldn't be able to enforce a password hashing algorithm via settings this way. Having PBKDF2 set as PASSWORD_HASHING_ALGORITHM wouldn't forbid the implementation from verifying bcrypt hashes, for example. And that means that we would need to keep the hash prefix, which IIUC you suggest we don't.
We could always check the current PASSWORD_HASHING_ALGORITHM setting and see if it matches, but I don't see how this is much different from how it's currently done.

WDYT?

@tvernum
Contributor

tvernum commented Jun 13, 2018

> I also don't like that the hashing algorithm and the cost are separate settings. The range of values of the cost for one algorithm is not related to another one.

+1

That decision leads to all sorts of weirdness in the code, like allowing bcrypt10 as an algorithm but ignoring the cost factor if one is explicitly set. Except for the cache, where you can only set the cost via the algorithm name, because there's no cache cost factor setting.
But there's no support for using PBKDF2 with a custom cost factor for the cache algo.

@albertzaharovits
Contributor

My previous comment was stern and unhelpful, apologies. Let me have another try at it:

GH does not allow for comments on lines that have not been changed, so here is the gist of my previous comment:

     void verifyPassword(String username, final SecureString password, ActionListener<AuthenticationResult> listener) {
         getUserAndPassword(username, ActionListener.wrap((userAndPassword) -> {
             if (userAndPassword == null || userAndPassword.passwordHash() == null) {
                 listener.onResponse(AuthenticationResult.notHandled());
             // HERE: can we have `userAndPassword.verify(password)`?
             } else if (hasher.verify(password, userAndPassword.passwordHash())) {
                 listener.onResponse(AuthenticationResult.success(userAndPassword.user()));
             } else {
                 listener.onResponse(AuthenticationResult.unsuccessful("Password authentication failed for " + username, null));
             }
         }, listener::onFailure));
     }

@jkakavas
Member Author

> That decision leads to all sorts of weirdness in the code like allowing bcrypt10 as an algorithm, but ignoring the cost factor if one is explicitly set.

Since I went down that way, I had to make a decision on what to do when the cost is set both explicitly and implicitly:

PASSWORD_HASHING_ALGORITHM bcrypt10
PASSWORD_HASHING_COST 11

would give you bcrypt with cost factor 11.

> But there's no support for using PBKDF2 with a custom cost factor for the cache algo.

The threat model doesn't usually include offline brute-force attacks against the hashed passwords in the cache, so I assumed the default cost would be sufficient for this.

As said, I do see the point at hand; thoughts about:

> I see your point. The idea is that I didn't want to handle all possible cost options in the algorithm settings (i.e. bcrypt4 to bcrypt16 is OK but PBKDF2_1000 to PBKDF2_100000 not so much). I have no objection to removing the cost setting, but maybe limit the support to ~10 costs for bcrypt and ~5 for PBKDF2.

WDYT?

@albertzaharovits
Contributor

albertzaharovits commented Jun 13, 2018

> The idea is that I didn't want to handle all possible cost options in the algorithm settings (i.e. bcrypt4 to bcrypt16 is OK but PBKDF2_1000 to PBKDF2_100000 not so much). I have no objection to removing the cost setting, but maybe limit the support to ~10 costs for bcrypt and ~5 for PBKDF2.

I see... validating hash algorithms is equivalent to validating cost factors. Besides pegging them, i.e. having a list of predefined ones, I think the only way is to just try and time the hash function: if it takes longer than 100 msec, we don't support that alg+cost combination? This would be a setting validation function. It sounds wasteful and insubstantial, as the timing at validation time for a single value might not reflect the real-life timing when multiple requests are inbound.
I am leaning towards having a set of common alg+cost pairs and whitelisting only these, so no timing on validation.
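A sketch of what that whitelist-style validation could look like as a setting validator; the allowed values below are made up for illustration:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Locale;

    public class HashAlgoWhitelist {
        // Illustrative set of supported algorithm+cost identifiers.
        private static final List<String> ALLOWED = Arrays.asList(
                "bcrypt", "bcrypt4", "bcrypt10", "bcrypt14", "pbkdf2", "pbkdf2_10000", "pbkdf2_50000");

        // Fail fast at settings validation time; no hashing, no timing.
        static void validate(String value) {
            if (ALLOWED.contains(value.toLowerCase(Locale.ROOT)) == false) {
                throw new IllegalArgumentException(
                        "invalid hashing algorithm [" + value + "], valid values are " + ALLOWED);
            }
        }
    }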

@jkakavas
Member Author

Thanks @albertzaharovits, much clearer now. So your suggestion is to change

 if (userAndPassword == null || userAndPassword.passwordHash() == null) {
     listener.onResponse(AuthenticationResult.notHandled());
 } else if (hasher.verify(password, userAndPassword.passwordHash())) {
     listener.onResponse(AuthenticationResult.success(userAndPassword.user()));
 }

to

 if (userAndPassword == null || userAndPassword.passwordHash() == null) {
     listener.onResponse(AuthenticationResult.notHandled());
 } else if (userAndPassword.verify(password)) {
     listener.onResponse(AuthenticationResult.success(userAndPassword.user()));
 }

and the verify() method would then get the appropriate Hasher based on the prefix of the stored hash.
I'm just thinking this can't be used in the same way in verifyPassword() of FileUserPasswdStore, where you have a Map<String, char[]> instead of UserAndPassword entities, and in general I don't see potential issues with the current approach (apart from the extra Hasher object in each Store).

> I am leaning towards having a set of common alg+cost pairs and white list only these.

I am actually now thinking that we can use a PBKDF2_XXXX format with a sensible default (similar to bcrypt's implicit cost factor) that would allow users to set the cost to arbitrary values as they see fit for their deployment. There is no point in timing the hashing function in advance, as this will highly depend on hardware; e.g. 200 msec might be OK for a deployment where the token service is in use and password hashes are verified every X minutes instead of on every request.
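For a concrete sense of the PBKDF2_XXXX idea, a small sketch of parsing the cost out of such a setting value; the default iteration count is an assumption:

    public class Pbkdf2CostParser {
        private static final int DEFAULT_ITERATIONS = 10000; // assumed default cost

        // "PBKDF2" uses the default; "PBKDF2_50000" means 50000 iterations.
        static int iterations(String settingValue) {
            if (settingValue.equals("PBKDF2")) {
                return DEFAULT_ITERATIONS;
            }
            if (settingValue.startsWith("PBKDF2_")) {
                return Integer.parseInt(settingValue.substring("PBKDF2_".length()));
            }
            throw new IllegalArgumentException("not a PBKDF2 setting value: " + settingValue);
        }
    }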

@albertzaharovits
Contributor

> I disagree. Users wanting to run in a FIPS 140 JVM will have to set the CACHE_HASH_ALGO_SETTING to PBKDF2 since this is the only one that is approved.
> For all the rest of the users PBKDF2 or bcrypt for cached hashes can be extremely slow and this is not the algorithms' intended use. If anything, this would bring forward the risk of users setting the cost to something extremely low, thus making the stored hashes more vulnerable to offline brute-force attacks (which is what PBKDF2 is meant to make difficult).

Understood, you are right. Probably only folks working with FIPS will touch the CACHE_HASH_ALGO_SETTING, and they will have to set it to PBKDF2.

Member

@jaymode left a comment

I left a few comments, but I think a single setting would be good, and the ability to verify the hash based upon the hash prefix would be ideal.

import java.util.Random;

-public enum Hasher {
+public interface Hasher {
Member

If we go with a list of predefined hash algorithms with costs, then we can set this back to an enum and have singletons that avoid extra allocations
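The enum-of-singletons shape being suggested, sketched with illustrative constants (hash/verify bodies omitted):

    import java.util.Locale;

    public enum HasherSketch {
        BCRYPT10(10), BCRYPT14(14), PBKDF2_10000(10000); // one singleton per algorithm+cost

        private final int cost;

        HasherSketch(int cost) {
            this.cost = cost;
        }

        public int cost() {
            return cost;
        }

        // Resolve a setting value such as "bcrypt10" to its singleton: no allocation.
        public static HasherSketch resolve(String name) {
            return valueOf(name.toUpperCase(Locale.ROOT));
        }
    }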

-        return BCrypt.hashpw(text, salt).toCharArray();
+    public char[] hash(SecureString data) {
+        try {
+            StringBuilder result = new StringBuilder();
Member

Rather than use a StringBuilder, can we use a CharBuffer? That avoids creating a String of the hash that goes into the string table; instead we just keep it in a char[].
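A sketch of the CharBuffer variant: assembling prefix, salt, and hash chars without ever materialising a String (names are illustrative):

    import java.nio.CharBuffer;

    public class CharBufferAssembly {
        // The resulting char[] can be zeroed after use; a String could not be.
        static char[] assemble(char[] prefix, char[] salt, char[] hash) {
            CharBuffer buf = CharBuffer.allocate(prefix.length + salt.length + hash.length);
            buf.put(prefix).put(salt).put(hash);
            return buf.array(); // the backing array holds exactly what was put
        }
    }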


static final class SaltProvider {
private SaltProvider() {
Member

👍

@@ -10,6 +10,7 @@

import javax.crypto.Cipher;

import static org.hamcrest.Matchers.equalTo;
Member

not needed?

@jkakavas
Member Author

> I left a few comments but I think a single setting would be good and also the ability to verify the hash based upon the hash prefix would be ideal.

Great, thanks. I had a chat with @albertzaharovits earlier and we discussed those two points, concluding with the same decision.

> If we go with a list of predefined hash algorithms with costs, then we can set this back to an enum and have singletons that avoid extra allocations

I was thinking something along the lines of PBKDF2_XXXXX, where the cost can be an arbitrary number of iterations. This (arbitrary cost factors) is the original reason behind moving from the enums to this factory paradigm, but if we agree that ~5 predefined cost factors for PBKDF2 are sufficient, I will revert to the former.

@tvernum
Contributor

tvernum commented Jun 14, 2018

I realised we've moved on from the previous implementation, but to circle back to this:

> The threat model doesn't usually include offline brute force attacks against the hashed passwords in the cache, thus I assumed the default cost would be sufficient for this.

I was thinking in the reverse direction: the default cost is too high for the cache. If the threat model determines that a cost of 10,000 is appropriate for long-term storage, then it is almost certainly too slow for use in the cache.

- Password hash validation algorithm selection takes into
  consideration the stored hash prefix instead of the relevant
  x-pack security setting.
- Removes explicit cost factor setting
- Whitelists a number of algorithm+cost options for bcrypt and
  pbkdf2
- Removes HasherFactory in favor of an enum with singletons
@jkakavas
Member Author

@jaymode this is ready for a new round. I wasn't sure about adding the check to TransportChangePasswordAction as discussed above. No objections to adding it; I just wonder if it is necessary if we go for the breaking change.

Member

@jaymode left a comment

> Wasn't sure about adding the check to TransportChangePasswordAction as discussed above

I think we need it. If the node is configured to use PBKDF2 but a client sends a bcrypt hash, that should be an error.
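A sketch of such a check; the prefix-to-algorithm helper and names are assumptions, not the actual TransportChangePasswordAction code:

    import java.util.Locale;

    public class IncomingHashCheck {
        // Hypothetical helper: derive an algorithm name from the hash prefix.
        static String algorithmFromPrefix(String hash) {
            if (hash.startsWith("$2a$")) {
                return "bcrypt";
            } else if (hash.startsWith("{PBKDF2}")) {
                return "pbkdf2";
            }
            return "noop";
        }

        // Reject a pre-hashed password whose algorithm doesn't match the node setting.
        static void check(String requestHash, String configuredAlgorithm) {
            String incoming = algorithmFromPrefix(requestHash);
            if (configuredAlgorithm.toLowerCase(Locale.ROOT).startsWith(incoming) == false) {
                throw new IllegalArgumentException("incoming password hash uses [" + incoming
                        + "] but the node is configured for [" + configuredAlgorithm + "]");
            }
        }
    }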

* @param key the settings key for this setting.
* @param defaultValue the default String value.
* @param properties properties for this setting like scope, filtering...
* @return
Member

remove empty return in javadocs

@@ -472,35 +493,46 @@ public static boolean verifyHash(SecureString data, char[] hash) {
}

private static boolean verifyPbkdf2Hash(SecureString data, char[] hash) {
// Base64 string length : (4*(n/3)) rounded up to the next multiple of 4 because of padding, i.e. 44 for 32 bytes
final int tokenLength = 44;
char[] hashChars = new char[tokenLength];
Member

Let's set these to null. We never use these arrays as initialized. We can have null checks in the finally block.

cost, PBKDF2_KEY_LENGTH);
char[] computedPwdHash = CharArrays.utf8BytesToChars(Base64.getEncoder()
.encode(secretKeyFactory.generateSecret(keySpec).getEncoded()));
boolean result = CharArrays.constantTimeEquals(computedPwdHash, hashChars);
Member

make it final

* Generates an array of {@code length} random bytes using {@link java.security.SecureRandom}
*/
private static byte[] generateSalt(int length) {
Random random = new SecureRandom();
Member

can we store the value? SecureRandom objects can be reused and we don't need to keep creating them
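The reuse being suggested, sketched; SecureRandom is safe for concurrent use, so a single shared instance suffices:

    import java.security.SecureRandom;

    public class SaltGeneratorSketch {
        // Created once; reused instead of constructing a new SecureRandom per call.
        private static final SecureRandom RANDOM = new SecureRandom();

        static byte[] generateSalt(int length) {
            byte[] salt = new byte[length];
            RANDOM.nextBytes(salt);
            return salt;
        }
    }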

public class PasswordHashingAlgorithmBootstrapCheck implements BootstrapCheck {
@Override
public BootstrapCheckResult check(BootstrapContext context) {
final String selectedAlgorithm = XPackSettings.PASSWORD_HASHING_ALGORITHM.get(context.settings);
Member

If I told you to move this here, I am sorry. Setting validation should happen on the setting and not be part of a bootstrap check. The availability of the PBKDF2 algorithm is fine as a bootstrap check

Member Author

I misread the original comment; I'll revert the setting validation and only check the algo availability here.
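What the availability-only bootstrap check boils down to, sketched in plain Java with the BootstrapCheck plumbing left out:

    import java.security.NoSuchAlgorithmException;
    import javax.crypto.SecretKeyFactory;

    public class Pbkdf2Availability {
        // Availability depends on the JVM's security providers, so it is a
        // node-level (bootstrap) concern, not a settings-validation concern.
        static boolean pbkdf2Available() {
            try {
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
                return true;
            } catch (NoSuchAlgorithmException e) {
                return false;
            }
        }
    }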

@@ -285,7 +285,8 @@ public Security(Settings settings, final Path configPath) {
checks.addAll(Arrays.asList(
new TokenSSLBootstrapCheck(),
new PkiRealmBootstrapCheck(settings, getSslService()),
-                new TLSLicenseBootstrapCheck()));
+                new TLSLicenseBootstrapCheck(),
Member

indentation needs to be cleaned up

- Adds a check for the algorithm of the hash of incoming change
  password requests
- Moves the check for the allowed hashing algorithms back to the
  setting validator
Member

@jaymode left a comment

I left a few minor suggestions. Otherwise LGTM

@@ -506,17 +512,21 @@ private static boolean verifyPbkdf2Hash(SecureString data, char[] hash) {
int cost = Integer.parseInt(new String(Arrays.copyOfRange(hash, PBKDF2_PREFIX.length(), hash.length - (2 * tokenLength + 2))));
SecretKeyFactory secretKeyFactory = SecretKeyFactory.getInstance("PBKDF2withHMACSHA512");
PBEKeySpec keySpec = new PBEKeySpec(data.getChars(), Base64.getDecoder().decode(CharArrays.toUtf8Bytes(saltChars)),
-                cost, PBKDF2_KEY_LENGTH);
+                cost, 256);
Member

I like the constant more since it provides context for this "magic number"

Member Author

I eventually removed it since I didn't use it in calculating the 44 "magic number" in code and it seemed redundant. I can make it more obvious in the comment that n in (4*(n/3)) is 32 because it's the key size (256 bits) in bytes, so the length is 4 * ceil(32/3) = 4 * 11 = 44.

char[] computedPwdHash = CharArrays.utf8BytesToChars(Base64.getEncoder()
.encode(secretKeyFactory.generateSecret(keySpec).getEncoded()));
-        boolean result = CharArrays.constantTimeEquals(computedPwdHash, hashChars);
+        final boolean result = CharArrays.constantTimeEquals(computedPwdHash, hashChars);
Arrays.fill(computedPwdHash, '\u0000');
Member

not a big deal, but maybe we should pull the computedPwdHash array out of the try and set it to null initially. Then we can fill it in the finally with the others?

Member Author

I was just trying to zero-fill as soon after use as possible. Do you see any advantages other than consistency?

Member

An exception thrown during the conversion of UTF-8 bytes to chars would cause this array to hang around in memory. Like I said, if you prefer how you have it, I am fine with it :)

Member Author

Gotcha. Nope, I was just trying to see what you were getting at, thanks. I'll change it.
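The agreed pattern in isolation; compute() and the equality helper below are placeholders:

    import java.util.Arrays;

    public class ZeroFillPattern {
        static boolean verify(char[] expected) {
            char[] computed = null; // declared outside the try, null until computed
            try {
                computed = compute(); // may throw before or after allocation
                return constantTimeEquals(computed, expected);
            } finally {
                if (computed != null) {
                    Arrays.fill(computed, '\u0000'); // zeroed even on exceptions
                }
            }
        }

        private static char[] compute() {
            return "computed-hash".toCharArray(); // placeholder
        }

        private static boolean constantTimeEquals(char[] a, char[] b) {
            if (a.length != b.length) {
                return false;
            }
            int diff = 0;
            for (int i = 0; i < a.length; i++) {
                diff |= a[i] ^ b[i];
            }
            return diff == 0;
        }
    }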

@@ -17,7 +17,7 @@
public void testPasswordHashingAlgorithmBootstrapCheck() {
Settings settings = Settings.EMPTY;
assertFalse(new PasswordHashingAlgorithmBootstrapCheck().check(new BootstrapContext(settings, null)).isFailure());

// The following two will always pass because for now we only test in environments where PBKDF2WithHMACSHA512 is supported
Member

can we add an assume statement that validates PBKDF2 is available?
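Something along these lines, using JUnit 4's Assume; the availability probe is an assumed helper:

    import static org.junit.Assume.assumeTrue;

    import java.security.NoSuchAlgorithmException;
    import javax.crypto.SecretKeyFactory;
    import org.junit.Test;

    public class PasswordHashingAlgorithmBootstrapCheckTests {
        private static boolean pbkdf2Available() {
            try {
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
                return true;
            } catch (NoSuchAlgorithmException e) {
                return false;
            }
        }

        @Test
        public void testPbkdf2Passes() {
            // Skip, rather than trivially pass, on JVMs without a PBKDF2 provider.
            assumeTrue("PBKDF2WithHMACSHA512 is not available", pbkdf2Available());
            // ... PBKDF2-specific assertions would follow ...
        }
    }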

@jkakavas mentioned this pull request Jun 28, 2018
@jkakavas merged commit db6b339 into elastic:master Jun 28, 2018
jkakavas added a commit that referenced this pull request Jun 29, 2018
As part of the changes in #31234, the password verification logic
determines the algorithm used for hashing the password from the
format of the stored password hash itself. Thus, it is generally
possible to validate a password even if its associated stored hash
was not created with the same algorithm as the one currently set
in the settings.
At the same time, we introduced a check for incoming client change
password requests to make sure that the request's password is hashed
with the same algorithm that is configured to be used in the node
settings.
In the spirit of randomizing the algorithms used, the
{@code SecurityClient} used in the {@code NativeRealmIntegTests} and
{@code ReservedRealmIntegTests} would send all requests dealing with
user passwords by randomly selecting a hashing algorithm each time.
This meant that some change password requests were using a different
password hashing algorithm than the one used for the node and the
request would fail.
This commit changes this behavior in the two aforementioned Integ
tests to use the same password hashing algorithm for the node and the
clients, no matter what the request is.

Resolves #31670
dnhatn added a commit that referenced this pull request Jun 29, 2018
* master:
  Do not check for object existence when deleting repository index files (#31680)
  Remove extra check for object existence in repository-gcs read object (#31661)
  Support multiple system store types (#31650)
  [Test] Clean up some repository-s3 tests (#31601)
  [Docs] Use capital letters in section headings (#31678)
  [DOCS] Add PQL language Plugin (#31237)
  Merge AzureStorageService and AzureStorageServiceImpl and clean up tests (#31607)
  TEST: Fix test task invocation (#31657)
  Revert "[TEST] Mute failing tests in NativeRealmInteg and ReservedRealmInteg"
  Fix RealmInteg test failures
  Extend allowed characters for grok field names (#21745) (#31653)
  [DOCS] Fix licensing API details (#31667)
  [TEST] Mute failing tests in NativeRealmInteg and ReservedRealmInteg
  Fix CreateSnapshotRequestTests Failure (#31630)
  Configurable password hashing algorithm/cost (#31234)
  [TEST] Mute failing NamingConventionsTaskIT tests
  [DOCS] Replace CONFIG_DIR with ES_PATH_CONF (#31635)
  Core: Require all actions have a Task (#31627)
jkakavas added a commit to jkakavas/elasticsearch that referenced this pull request Jul 2, 2018
This changes the default behavior when resolving the hashing
algorithm from unrecognised hash strings, which was introduced in
 elastic#31234

A hash string that doesn't start with an algorithm identifier can
either be a malformed/corrupted hash or a plaintext password when
Hasher.NOOP is used (against warnings).
Do not make assumptions about which of the two is true for such
strings, and default to Hasher.NOOP. Hash verification will subsequently
fail for malformed hashes.
Finally, do not log the potentially malformed hash, as this can very
well be a plaintext password.

Resolves elastic#31697
Reverts 58cf95a
jkakavas added a commit that referenced this pull request Jul 3, 2018
* Default resolveFromHash to Hasher.NOOP

This changes the default behavior when resolving the hashing
algorithm from unrecognised hash strings, which was introduced in
 #31234

A hash string that doesn't start with an algorithm identifier can
either be a malformed/corrupted hash or a plaintext password when
Hasher.NOOP is used (against warnings).
Do not make assumptions about which of the two is true for such
strings, and default to Hasher.NOOP. Hash verification will subsequently
fail for malformed hashes.
Finally, do not log the potentially malformed hash, as this can very
well be a plaintext password.

Resolves #31697
Reverts 58cf95a
jkakavas added a commit to jkakavas/elasticsearch that referenced this pull request Jul 13, 2018
Make password hashing algorithm/cost configurable for the
stored passwords of users for the realms to which this applies
(native, reserved). Replaces the predefined choice of bcrypt with
cost factor 10.
This also introduces PBKDF2 with a configurable cost
(number of iterations) as an algorithm option for password hashing,
both for storing passwords and for the user cache.
Password hash validation algorithm selection takes into
consideration the stored hash prefix, and only a specific number
of algorithm and cost factor options for bcrypt and pbkdf2 are
whitelisted and can be selected in the relevant setting.
jkakavas added a commit to jkakavas/elasticsearch that referenced this pull request Jul 13, 2018
* Default resolveFromHash to Hasher.NOOP

This changes the default behavior when resolving the hashing
algorithm from unrecognised hash strings, which was introduced in
 elastic#31234

A hash string that doesn't start with an algorithm identifier can
either be a malformed/corrupted hash or a plaintext password when
Hasher.NOOP is used (against warnings).
Do not make assumptions about which of the two is true for such
strings, and default to Hasher.NOOP. Hash verification will subsequently
fail for malformed hashes.
Finally, do not log the potentially malformed hash, as this can very
well be a plaintext password.

Resolves elastic#31697
Reverts 58cf95a
jkakavas added a commit to jkakavas/elasticsearch that referenced this pull request Jul 16, 2018
As part of the changes in elastic#31234, the password verification logic
determines the algorithm used for hashing the password from the
format of the stored password hash itself. Thus, it is generally
possible to validate a password even if its associated stored hash
was not created with the same algorithm as the one currently set
in the settings.
At the same time, we introduced a check for incoming client change
password requests to make sure that the request's password is hashed
with the same algorithm that is configured to be used in the node
settings.
In the spirit of randomizing the algorithms used, the
{@code SecurityClient} used in the {@code NativeRealmIntegTests} and
{@code ReservedRealmIntegTests} would send all requests dealing with
user passwords by randomly selecting a hashing algorithm each time.
This meant that some change password requests were using a different
password hashing algorithm than the one used for the node and the
request would fail.
This commit changes this behavior in the two aforementioned Integ
tests to use the same password hashing algorithm for the node and the
clients, no matter what the request is.

Resolves elastic#31670
jkakavas added a commit that referenced this pull request Jul 18, 2018
* Configurable password hashing algorithm/cost (#31234)

Make password hashing algorithm/cost configurable for the
stored passwords of users for the realms to which this applies
(native, reserved). Replaces the predefined choice of bcrypt with
cost factor 10.
This also introduces PBKDF2 with a configurable cost
(number of iterations) as an algorithm option for password hashing,
both for storing passwords and for the user cache.
Password hash validation algorithm selection takes into
consideration the stored hash prefix, and only a specific number
of algorithm and cost factor options for bcrypt and pbkdf2 are
whitelisted and can be selected in the relevant setting.

* resolveHasher defaults to NOOP (#31723)

This changes the default behavior when resolving the hashing
algorithm from unrecognised hash strings, which was introduced in
 #31234

A hash string that doesn't start with an algorithm identifier can
either be a malformed/corrupted hash or a plaintext password when
Hasher.NOOP is used (against warnings).
Do not make assumptions about which of the two is true for such
strings, and default to Hasher.NOOP. Hash verification will subsequently
fail for malformed hashes.
Finally, do not log the potentially malformed hash, as this can very
well be a plaintext password.

* Fix RealmInteg test failures

As part of the changes in #31234, the password verification logic
determines the algorithm used for hashing the password from the
format of the stored password hash itself. Thus, it is generally
possible to validate a password even if its associated stored hash
was not created with the same algorithm as the one currently set
in the settings.
At the same time, we introduced a check for incoming client change
password requests to make sure that the request's password is hashed
with the same algorithm that is configured to be used in the node
settings.
In the spirit of randomizing the algorithms used, the
{@code SecurityClient} used in the {@code NativeRealmIntegTests} and
{@code ReservedRealmIntegTests} would send all requests dealing with
user passwords by randomly selecting a hashing algorithm each time.
This meant that some change password requests were using a different
password hashing algorithm than the one used for the node and the
request would fail.
This commit changes this behavior in the two aforementioned Integ
tests to use the same password hashing algorithm for the node and the
clients, no matter what the request is.
dnhatn added a commit that referenced this pull request Jul 19, 2018
* 6.x:
  Fix rollup on date fields that don't support epoch_millis (#31890)
  Revert "Introduce a Hashing Processor (#31087)" (#32179)
  [test] use randomized runner in packaging tests (#32109)
  Painless: Fix caching bug and clean up addPainlessClass. (#32142)
  Fix BwC Tests looking for UUID Pre 6.4 (#32158) (#32169)
  Call setReferences() on custom referring tokenfilters in _analyze (#32157)
  Add more contexts to painless execute api (#30511)
  Add EC2 credential test for repository-s3 (#31918)
  Fix CP for namingConventions when gradle home has spaces (#31914)
  Convert Version to Java - clusterformation part1 (#32009)
  Fix Java 11 javadoc compile problem
  Improve docs for search preferences (#32098)
  Configurable password hashing algorithm/cost(#31234) (#32092)
  [DOCS] Update TLS on Docker for 6.3
  ESIndexLevelReplicationTestCase doesn't support replicated failures but it's good to know what they are
  Switch distribution to new style Requests (#30595)
  Build: Skip jar tests if jar disabled
  Build: Move shadow customizations into common code (#32014)
  Painless: Add PainlessClassBuilder (#32141)
  Fix accidental duplication of bwc test for script behavior
  Handle missing values in painless (#30975) (#31903)
  Build: Make additional test deps of check (#32015)
  Painless: Fix Bug with Duplicate PainlessClasses (#32110)
  Adjust translog after versionType removed in 7.0 (#32020)
  Disable C2 from using AVX-512 on JDK 10 (#32138)
  [Rollup] Add new capabilities endpoint for concrete rollup indices (#32111)
  Mute :qa:mixed-cluster indices.stats/10_index/Index - all’
  [ML] Wait for aliases in multi-node tests (#32086)
  Ensure to release translog snapshot in primary-replica resync (#32045)
  Docs: Fix missing example script quote (#32010)
  Add Index UUID to `/_stats` Response (#31871) (#32113)
  [ML] Move analyzer dependencies out of categorization config (#32123)
  [ML][DOCS] Add missing 6.3.0 release notes (#32099)
  Updates the build to gradle 4.9 (#32087)
  Update monitoring template version to 6040099 (#32088)
  Fix put mappings java API documentation (#31955)
  Add exclusion option to `keep_types` token filter (#32012)
@jkakavas deleted the configurable-pwd-hash branch September 14, 2018 06:56
Labels
>breaking-java >enhancement :Security/Authentication Logging in, Usernames/passwords, Realms (Native/LDAP/AD/SAML/PKI/etc) v6.4.0 v7.0.0-beta1