Gcm2 #1949
Conversation
Thanks for reworking this once again! Just one minor comment below.
Anyway, for all the remaining patches:
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
For "core: crypto: add new AES-GCM implementation" you may also add:
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey960)
(I did test with the last commit too, but since I did not specifically test the counter thing I'd rather have the T-b applied to the parent commit)
 * Internal weak functions that can be overridden with hardware specific
 * implementations.
 */
void internal_aes_gcm_next_ctr(struct internal_aes_gcm_ctx *ctx);
Not used?
Indeed, I'll fix.
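For context, the override mechanism the header comment describes relies on weak linkage: the generic C implementation is defined as a weak symbol, and a hardware-specific build can provide a strong definition of the same symbol that the linker picks instead. A minimal sketch of that pattern (the struct layout and printout below are illustrative only, not the actual OP-TEE code):

#include <stdio.h>

/* Illustrative stand-in; the real struct internal_aes_gcm_ctx differs. */
struct internal_aes_gcm_ctx {
	unsigned char ctr[16];	/* 128-bit GCM counter block */
};

/*
 * Weak default: compiled into the generic crypto code. A platform or
 * Crypto Extensions backend defines the same function (without the
 * weak attribute) in another object file, and the linker silently
 * substitutes that strong definition for this one.
 */
__attribute__((weak)) void
internal_aes_gcm_next_ctr(struct internal_aes_gcm_ctx *ctx)
{
	(void)ctx;
	printf("generic software counter update\n");
}

int main(void)
{
	struct internal_aes_gcm_ctx ctx = { { 0 } };

	/* Prints the line above unless a strong override is linked in. */
	internal_aes_gcm_next_ctr(&ctx);
	return 0;
}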
Implement a macro mov_imm that can be used to move an immediate constant into a 64-bit register, using between 2 and 4 movz/movk instructions (depending on the operand).

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
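To see why the instruction count depends on the operand, here is a rough C illustration of the movz/movk decomposition: the 64-bit value is split into four 16-bit halfwords, a movz sets the first one (clearing the rest of the register) and a movk patches in each further non-zero halfword. This is only a sketch of the idea, not the real macro, which per the commit message always emits at least two instructions.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Print one possible movz/movk sequence for loading "imm" into x0.
 * Illustration of the halfword decomposition only.
 */
static void show_mov_imm(uint64_t imm)
{
	int emitted = 0;

	for (int shift = 0; shift < 64; shift += 16) {
		uint16_t hw = imm >> shift;

		if (!hw && emitted)
			continue;	/* this halfword is already zero */
		printf("\t%s\tx0, #0x%x, lsl #%d\n",
		       emitted ? "movk" : "movz", (unsigned)hw, shift);
		emitted++;
	}
	printf("-> %d instruction(s) for 0x%" PRIx64 "\n", emitted, imm);
}

int main(void)
{
	show_mov_imm(0x1234);			/* small constant */
	show_mov_imm(0xdeadbeef00c0ffee);	/* all four halfwords used */
	return 0;
}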
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Rebased and tags applied
Can you fix the checkpatch warning please?
Imports assembly code for accelerated GHASH computations and provides an interface for using these low level functions. The assembly code relies on features now available in all ARM cores. No assembly code is enabled at all unless CFG_CRYPTO_WITH_CE = y. Code using the PMULL/PMULL2 instruction is enabled with CFG_HWSUPP_PMULL = y.

The assembly code is written by Ard Biesheuvel <ard.biesheuvel@linaro.org> and modified slightly here to fit better into OP-TEE.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
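For reference, PMULL/PMULL2 perform a 64x64 -> 128-bit carry-less (polynomial) multiply, which is the primitive GHASH builds its GF(2^128) multiplication from. A minimal sketch using the corresponding ACLE intrinsic (AArch64 only, built with the crypto extension enabled, e.g. -march=armv8-a+crypto; this shows the primitive only, not the GHASH reduction step):

#include <arm_neon.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Carry-less multiply of two 64-bit polynomials using PMULL. */
static void clmul64(uint64_t a, uint64_t b, uint64_t res[2])
{
	poly128_t p = vmull_p64((poly64_t)a, (poly64_t)b);

	/* Copy the 128-bit product out as two 64-bit words. */
	memcpy(res, &p, sizeof(p));
}

int main(void)
{
	uint64_t r[2];

	/* x * (x + 1) = x^2 + x in GF(2)[x]: 2 * 3 = 6 with XOR carries */
	clmul64(2, 3, r);
	printf("%016" PRIx64 " %016" PRIx64 "\n", r[1], r[0]);
	return 0;
}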
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Takes full advantage of LTC GHASH acceleration by using the pmull instruction.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Moves CFG_CRYPTO* and _CFG_CRYPTO flags in core/lib/libtomcrypt/sub.mk to core/crypto.mk.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Adds CFG_CRYPTO_AES_GCM_FROM_CRYPTOLIB which, if y, enables AES-GCM in the crypto library and uses that as the implementation behind the crypto_authen_*() functions.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Adds a new AES-GCM implementation optimized for hardware acceleration. This implementation is enabled by default; to use the implementation in libTomCrypt instead, set CFG_CRYPTO_AES_GCM_FROM_CRYPTOLIB=y.

Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey960)
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
In pmull_gcm_encrypt() and pmull_gcm_decrypt() it was assumed that it is enough to increase only the least significant 64 bits of the counter fed to the block cipher. This holds for 96-bit IVs, but not for IVs of any other length, since the number stored in the least significant 64 bits of the counter can't be easily predicted. In this patch pmull_gcm_encrypt() and pmull_gcm_decrypt() are updated to increase the entire counter; at the same time the interface is changed to accept the counter in little-endian format instead.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU, Hikey)
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
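A quick standalone illustration of the difference (plain C, counter stored as two 64-bit words for clarity; not the assembly interface itself):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Full 128-bit increment: carry from the low word into the high word. */
static void ctr_inc_full(uint64_t ctr[2])
{
	if (!++ctr[0])
		ctr[1]++;
}

/* The shortcut: only the least significant 64 bits are increased. */
static void ctr_inc_lo64(uint64_t ctr[2])
{
	ctr[0]++;
}

int main(void)
{
	/*
	 * With a non-96-bit IV the initial counter is derived via GHASH,
	 * so the low word can be anything - including a value about to
	 * wrap, where the two approaches diverge.
	 */
	uint64_t a[2] = { UINT64_MAX, 0x1111111111111111 };
	uint64_t b[2] = { UINT64_MAX, 0x1111111111111111 };

	ctr_inc_full(a);
	ctr_inc_lo64(b);
	printf("full: %016" PRIx64 "%016" PRIx64 "\n", a[1], a[0]);
	printf("lo64: %016" PRIx64 "%016" PRIx64 "\n", b[1], b[0]);
	return 0;
}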
Reformatted "core: import GHASH acceleration routines" commit message
Thanks!
Reworked as discussed in #1908