In the ht_get_hash function, the case hash_b = 0 is handled well by adding one, which avoids infinite cycling on the same hash value.
However, there is another case that causes infinite cycling: when hash_b + 1 equals num_buckets (i.e. hash_b = num_buckets - 1), the probe step is again 0 modulo num_buckets.
The implementation below solves both cases:
static int ht_get_hash(const char* s, const int num_buckets, const int attempt) {
    const int hash_a = ht_hash(s, HT_PRIME_1, num_buckets);
    int hash_b = ht_hash(s, HT_PRIME_2, num_buckets);
    if (hash_b % num_buckets == 0) {
        hash_b = 1;
    }
    return (hash_a + (attempt * hash_b)) % num_buckets;  // not adding 1 to hash_b here
}
Thank you, this was really useful. With this fix, I was able to insert a few million entries without any hassle!
Glad it helped you :-)