Document Argon2 configuration best practices #572
You're probably allocating too many resources for Argon2. Running Istio in Minikube is already a performance sink, and adding Argon2 to the mix could make your VM unresponsive.
I did not realize Argon2 is that expensive. 😄
Depends on the config, but the defaults are pretty high: https://github.com/ory/kratos/blob/master/driver/configuration/provider_viper.go#L111-L115 (4GB of RAM, 4 iterations, and 2*CPU parallelism)
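For reference, those defaults correspond roughly to the `hashers.argon2` section of the Kratos config file. The sketch below is an illustration based on the linked source, not an authoritative listing; key names and exact defaults may differ between versions:

```yaml
# Rough sketch of the high default Argon2 settings (values assumed from the linked source).
hashers:
  argon2:
    memory: 4194304   # in KiB, i.e. ~4 GB of RAM per hash operation
    iterations: 4
    parallelism: 8    # default is 2 * number of CPUs, e.g. 8 on a 4-core machine
    salt_length: 16
    key_length: 32
```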
I've tried to adjust the config and it's still hanging:
This is my minikube config:
I'm just shocked this could be that resource-heavy.
Try
By the way, you're still using 512MB of RAM, but on a machine that is already over-utilized by Istio. In our quickstart, we have dialed everything down quite a lot to make sure that it runs everywhere. I wouldn't recommend doing that in prod, though.
I changed the config to:
But it's still crashing. Will try it later on the actual k8s cluster. I acknowledge that Istio is super resource-hungry.
This works fine on beefier hardware.
Ok, can I close this then, or do you need further clarification? :)
Sorry for reviving an old thread, but it seems the most appropriate one. https://tools.ietf.org/html/draft-irtf-cfrg-argon2-10#section-4 ("Parameter Choice") says:
I would say that Kratos is mostly used for frontend server authentication, but its default parameters are tuned for backend server authentication. Would you accept a patch that tunes the parameters appropriately, or should I just tune them for my cluster and post my parameters in this thread?
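For illustration, a frontend-oriented tuning in the spirit of the draft's general-purpose guidance (a few iterations over tens of megabytes rather than gigabytes) might look like the sketch below. The numbers are assumptions to be calibrated per deployment, not an official recommendation:

```yaml
# Hypothetical frontend-oriented Argon2 settings; calibrate on your own hardware.
hashers:
  argon2:
    memory: 65536     # 64 MiB, expressed in KiB
    iterations: 3
    parallelism: 4
    salt_length: 16
    key_length: 32
```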
All good, we should definitely document that and maybe also change the defaults used in the demo. If you're up for a PR, @alsuren, please go ahead :)
@tricky42 #572 (comment) is probably relevant to you.
I have a similar problem where the Kratos process becomes unresponsive. My Argon2 config is:

```yaml
argon2:
  parallelism: 1
  memory: 65536
  iterations: 1
  salt_length: 16
  key_length: 16
```

Still, sometimes (not every time) when I try to perform a login, the Kratos process starts consuming 4+ CPU cores and 4GB+ of memory, and then the login request dies by timeout. Example log:
I'm running Kratos in Kubernetes, and while that request lasts, the pod becomes unready.
Make sure you have allocated enough CPU and memory limits!
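For context, a minimal sketch of what such limits could look like in the Kratos container spec is below; the numbers are illustrative assumptions and should be sized so that the memory limit comfortably exceeds the configured Argon2 `memory` value:

```yaml
# Hypothetical Kubernetes resource settings for the Kratos container (illustrative values only).
resources:
  requests:
    cpu: "500m"
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "512Mi"   # must stay above the Argon2 memory setting, or the pod risks an OOM kill
```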
Thanks, I'll try it. I was concerned by |
This patch adds the new command `hashers argon2 calibrate`, which allows one to pick the desired hashing time for password hashing and then chooses the optimal parameters for the hardware the command is running on:

```
$ kratos hashers argon2 calibrate 500ms
Increasing memory to get over 500ms:
    took 2.846592732s in try 0
    took 6.006488824s in try 1
    took 4.42657975s with 4.00GB of memory
[...]
Decreasing iterations to get under 500ms:
    took 484.257775ms in try 0
    took 488.784192ms in try 1
    took 486.534204ms with 3 iterations
Settled on 3 iterations.
{
  "memory": 1048576,
  "iterations": 3,
  "parallelism": 32,
  "salt_length": 16,
  "key_length": 32
}
```

Closes #723
Closes #572
Closes #647
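As a usage note (an assumption about how the output is meant to be applied, not part of the patch text above), the emitted JSON maps onto the same `hashers.argon2` keys in the config file:

```yaml
# Hypothetical: the calibrated values from the example run above, placed into the config.
hashers:
  argon2:
    memory: 1048576   # KiB, ~1 GiB, as reported by the calibrate run
    iterations: 3
    parallelism: 32
    salt_length: 16
    key_length: 32
```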
Describe the bug
I've been trying to set up kratos v0.4.4 with selfservice-ui-node locally in minikube. While slow, I managed to succeed with registering and verifying my identity. However, trying to log in simply hangs the service. Occasionally it makes the whole minikube unresponsive, so I have to completely shut it down and restart.
I caught the liveness status updates, which give a clue about what happened:
It seems like the kratos pod choked on the login request and even stopped responding to health requests, so k8s just restarted the pod.
Here's what was in the kratos logs from the time I opened the selfservice UI at https://auth.ips.test (this time k8s wasn't able to restart the pod and it simply hung):
Reproducing the bug
Here's my config:
I manually updated the helm-generated configmap to include `/etc/config/identity.traits.schema.json`; I got the default one from the latest tagged kratos release.