
Clear sensitive memory without getting optimized out. #448

Closed
wants to merge 14 commits

Conversation

isle2983
Contributor

(Previous discussion of this topic in #438 )

explicit_bzero() is used when available on the platform (glibc 2.25+); otherwise it falls back to an inline implementation. There are many possible ways the inline fallback could be implemented, but I have no reason to prefer one over another.
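One common shape for such an inline fallback is a volatile function pointer to memset, in the style of OPENSSL_cleanse. A minimal sketch, with illustrative names rather than the PR's actual code:

```c
#include <stddef.h>
#include <string.h>

/* The compiler cannot prove this volatile pointer still refers to memset,
 * so the call through it cannot be elided as a dead store. */
static void *(*const volatile memset_ptr)(void *, int, size_t) = memset;

static void memclear_fallback(void *ptr, size_t len) {
    memset_ptr(ptr, 0, len);
}

/* Self-check: returns 1 if a filled buffer reads back as all zeros. */
static int memclear_fallback_works(void) {
    unsigned char buf[32];
    size_t i;
    memset(buf, 0xAA, sizeof(buf));
    memclear_fallback(buf, sizeof(buf));
    for (i = 0; i < sizeof(buf); i++) {
        if (buf[i] != 0) return 0;
    }
    return 1;
}
```

The indirection through a volatile pointer is what shows up in the gcc 4.9.2 assembly further down as the `call *%rax # volatile_memset` sequence.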

The commit stack does some refactoring in order to preserve the intended non-'cleanse' functionality performed by several of the _clear() functions. I constructed the stack as unsquashed, incremental refactoring changes; please let me know if a squash is preferred.

The measured performance delta appears negligible. Trials were done inside minimal headless VMs: Debian 8 in one for glibc 2.19 and the current base of Arch Linux in the other for glibc 2.25. The full session output of the trials was logged for reference and is available as a gist.

For reference to bare-metal performance with the same CPU, timing on the underlying native Debian 8 install (with glibc 2.19 and gcc 4.9.2), running master:

(wo/ endomorphism)
$ ./bench_sign
ecdsa_sign: min 55.3us / avg 55.4us / max 55.5us
$ ./bench_verify
ecdsa_verify: min 85.6us / avg 85.9us / max 86.3us
ecdsa_verify_openssl: min 476us / avg 484us / max 490us

Trials:

  1. master w/ glibc 2.19, gcc 4.9.2, w/ and wo/ endomorphism
(wo/ endomorphism)
$ ./bench_sign
ecdsa_sign: min 55.9us / avg 56.4us / max 56.7us
$ ./bench_verify
ecdsa_verify: min 86.2us / avg 86.8us / max 87.7us
ecdsa_verify_openssl: min 487us / avg 488us / max 491us

(w/ endomorphism)
$ ./bench_sign
ecdsa_sign: min 56.1us / avg 56.6us / max 57.2us
$ ./bench_verify
ecdsa_verify: min 66.5us / avg 67.4us / max 68.8us
ecdsa_verify_openssl: min 487us / avg 493us / max 504us
  2. a160340 w/ glibc 2.19, gcc 4.9.2, w/ and wo/ endomorphism
(wo/ endomorphism)
$ ./bench_sign
ecdsa_sign: min 55.8us / avg 56.5us / max 57.0us
$ ./bench_verify
ecdsa_verify: min 86.3us / avg 86.6us / max 86.9us
ecdsa_verify_openssl: min 482us / avg 483us / max 484us

(w/ endomorphism)
$ ./bench_sign
ecdsa_sign: min 56.0us / avg 56.5us / max 56.7us
$ ./bench_verify
ecdsa_verify: min 65.7us / avg 66.0us / max 66.4us
ecdsa_verify_openssl: min 482us / avg 484us / max 489us
  3. master w/ glibc 2.25, gcc 6.3.1, w/ and wo/ endomorphism
(wo/ endomorphism)
$ ./bench_sign
ecdsa_sign: min 57.2us / avg 57.8us / max 58.1us
$ ./bench_verify
ecdsa_verify: min 76.1us / avg 76.7us / max 77.7us
ecdsa_verify_openssl: min 461us / avg 464us / max 482us

(w/ endomorphism)
$ ./bench_sign
ecdsa_sign: min 57.0us / avg 57.2us / max 57.5us
$ ./bench_verify
ecdsa_verify: min 54.8us / avg 55.1us / max 55.5us
ecdsa_verify_openssl: min 461us / avg 462us / max 467us
  4. a160340 w/ glibc 2.25, gcc 6.3.1, w/ and wo/ endomorphism
(wo/ endomorphism)
$ ./bench_sign
ecdsa_sign: min 56.7us / avg 57.0us / max 57.5us
$ ./bench_verify
ecdsa_verify: min 75.6us / avg 76.1us / max 76.6us
ecdsa_verify_openssl: min 460us / avg 460us / max 461us

(w/ endomorphism)
$ ./bench_sign
ecdsa_sign: min 56.5us / avg 56.8us / max 57.1us
$ ./bench_verify
ecdsa_verify: min 54.7us / avg 54.8us / max 55.0us
ecdsa_verify_openssl: min 462us / avg 463us / max 464us

The desired behavior of not getting optimized out was also verified by looking at the resulting assembly. With the inline implementation on glibc 2.19, the end section of secp256k1_ecmult_gen() from gcc 4.9.2 looks like:

	movl	%r15d, 120(%rbx)	# infinity, r_7(D)->infinity
	movq	128(%rsp), %rax	# %sfp, ivtmp.249
	cmpq	$64, %rax	#, ivtmp.249
	je	.L83	#,
	salq	$34, %rax	#, D.9134
	shrq	$37, %rax	#, D.9134
	movl	304(%rsp,%rax,4), %r9d	# gnb.d, D.9141
	jmp	.L84	#
.L83:
	movq	$memset, 768(%rsp)	#, volatile_memset
	leaq	300(%rsp), %rdi	#, tmp5061
	movq	768(%rsp), %rax	# volatile_memset, D.9135
	movl	$4, %edx	#,
	xorl	%esi, %esi	#
	call	*%rax	# D.9135
	movq	$memset, 816(%rsp)	#, volatile_memset
	leaq	976(%rsp), %rdi	#, tmp5062
	movq	816(%rsp), %rax	# volatile_memset, D.9135
	movl	$84, %edx	#,
	xorl	%esi, %esi	#
	call	*%rax	# D.9135
	movq	$memset, 864(%rsp)	#, volatile_memset
	leaq	304(%rsp), %rdi	#, tmp5063
	movq	864(%rsp), %rax	# volatile_memset, D.9135
	movl	$32, %edx	#,
	xorl	%esi, %esi	#
	call	*%rax	# D.9135
	addq	$1080, %rsp	#,

With the explicit_bzero() from glibc 2.25 linked and with gcc 6.3.1, the same end section of secp256k1_ecmult_gen looks like:

	movl	$0, 240(%rsp)	#, add.infinity
	call	secp256k1_gej_add_ge	#
	addq	$1, (%rsp)	#, %sfp
	movq	8(%rsp), %r10	# %sfp, tmp242
	movq	(%rsp), %rax	# %sfp, ivtmp.629
	movq	16(%rsp), %r9	# %sfp, _90
	movq	24(%rsp), %r8	# %sfp, _85
	cmpq	$64, %rax	#, ivtmp.629
	jne	.L360	#,
	leaq	60(%rsp), %rdi	#, tmp380
	movl	$4, %esi	#,
	call	explicit_bzero	#
	leaq	160(%rsp), %rdi	#, tmp381
	movl	$88, %esi	#,
	call	explicit_bzero	#
	leaq	64(%rsp), %rdi	#, tmp382
	movl	$32, %esi	#,
	call	explicit_bzero	#
	addq	$264, %rsp	#,

Contributor

@sipa sipa left a comment


Overall concept ACK.

I'm not sure I like the separation of secp256k1_fe_verify_init, as it places a responsibility on the caller that they shouldn't have (as in: the group code ideally shouldn't need to know there is such a thing as debug variables in secp256k1_fe that need initialization). Given that secp256k1_fe_clear remains, can't the secp256k1_fe_verify_init logic remain in there? Then you can still use a strict separation between SECP256K1_CLEANSE(fe) (meaning fe won't be used anymore) and secp256k1_fe_clear(&fe) (meaning fe must get the value 0). You could even rename the latter to secp256k1_fe_set_zero perhaps.

@@ -76,6 +77,17 @@ static SECP256K1_INLINE void *checked_malloc(const secp256k1_callback* cb, size_
return ret;
}

/* Cleanses memory to prevent leaking sensitive info. Won't be optimized out. */
#ifdef HAVE_EXPLICIT_BZERO
#define SECP256K1_CLEANSE(v) explicit_bzero(&v, sizeof(v))
Contributor


Perhaps use &(v) here so that compound expressions in the macro call work?

Contributor Author


Parens added to the macro as suggested
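For illustration, the parenthesized form is standard macro hygiene: `&(v)` and `sizeof(v)` always treat the whole argument expression as a single operand. A sketch with a hypothetical clearing helper (the PR's macro dispatches to explicit_bzero when available):

```c
#include <stddef.h>
#include <string.h>

/* Volatile pointer to memset so the clear is not optimized out. */
static void *(*const volatile memset_ptr)(void *, int, size_t) = memset;

/* Parenthesized argument, per the review suggestion. Illustrative name. */
#define CLEANSE(v) memset_ptr(&(v), 0, sizeof(v))

struct keypair { unsigned char seckey[32]; unsigned char pubkey[64]; };

/* Returns 1 if cleansing a struct member wipes exactly that member. */
static int cleanse_member_works(void) {
    struct keypair kp;
    size_t i;
    memset(&kp, 0xAA, sizeof(kp));
    CLEANSE(kp.seckey);   /* expands to &(kp.seckey), sizeof 32 */
    for (i = 0; i < sizeof(kp.seckey); i++) {
        if (kp.seckey[i] != 0) return 0;
    }
    return kp.pubkey[0] == 0xAA;   /* pubkey untouched */
}
```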

@@ -300,7 +300,8 @@ SECP256K1_INLINE static int secp256k1_fe_is_odd(const secp256k1_fe *a) {
SECP256K1_INLINE static void secp256k1_fe_clear(secp256k1_fe *a) {
int i;
#ifdef VERIFY
secp256k1_fe_verify_init(a);
a->magnitude = 0;
Contributor


Why inline the call again here (compared to the previous commit)?

@@ -263,7 +263,8 @@ SECP256K1_INLINE static int secp256k1_fe_is_odd(const secp256k1_fe *a) {
SECP256K1_INLINE static void secp256k1_fe_clear(secp256k1_fe *a) {
int i;
#ifdef VERIFY
secp256k1_fe_verify_init(a);
a->magnitude = 0;
Contributor


And here as well?

Contributor Author


It isn't actually a straight inline. Both values are set to zero as part of the cleanse. The thought was to be a bit pedantic about showing the progression in the commits, but it might have been overboard.

In the new commit progression with secp256k1_fe_set_zero(), seeing it side-by-side with secp256k1_fe_clear() (with a->normalized = 1; and a->normalized = 0; respectively) makes the intent of the two separated functions clearer.

src/tests.c Outdated
@@ -1889,7 +1889,11 @@ void test_ge(void) {
secp256k1_fe zfi2, zfi3;

secp256k1_gej_set_infinity(&gej[0]);
secp256k1_ge_clear(&ge[0]);
memset(&ge[0], 0, sizeof(secp256k1_ge));
#ifdef VERIFY
Contributor


An alternative maybe is using secp256k1_ge_set_gej(&ge[0], &gej[0])?

Contributor Author


Changed as suggested in d056957

…nfig.h

In glibc 2.25, the new explicit_bzero() call is guarded by #ifdef __USE_MISC in
string.h. Via glibc 2.25's features.h, the selectors for __USE_MISC are set
under #ifdef _GNU_SOURCE.

Without this set, the linker still figures out how to link explicit_bzero(), but the compiler throws a bunch of warnings about a missing prototype for explicit_bzero().

By setting AC_USE_SYSTEM_EXTENSIONS, the system is detected and libsecp256k1-config.h is generated with the correct setting for _GNU_SOURCE.

It was found that even after this addition, the compile of the bench_internal
executable was still throwing the warning. This was because <stdio.h> was
included before libsecp256k1-config.h (which comes in via src/util.h),
confusing the preprocessing. It turns out that this <stdio.h> include is
redundant, since it is included later from src/util.h in the correct order.
explicit_bzero() is newly available in glibc 2.25 (released 2017-02-05; current
stable).

The fall-back implementation uses the 'volatile' keyword to signal the compiler
to not optimize away the zeroing in the same way as explicit_bzero().

The wrapper macro SECP256K1_CLEANSE() borrows its name from OPENSSL_cleanse()
in the openssl codebase which serves the same purpose and looks reasonably
clear in code context.
There are two uses of the secp256k1_fe_clear() function that are now separated
into these two functions in order to reflect the intent:

1) initializing the memory prior to use -> converted to fe_set_zero()
2) zeroing the memory after use such that no sensitive data remains ->
    remains as fe_clear()

In the latter case, 'magnitude' and 'normalized' need to be overwritten when
VERIFY is enabled.
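The intended split can be sketched as follows. The struct and function names here are illustrative stand-ins (the real functions live in src/field_*_impl.h); the point is the a->normalized = 1 versus a->normalized = 0 distinction:

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for secp256k1_fe with its VERIFY bookkeeping fields. */
typedef struct { uint64_t n[5]; int magnitude; int normalized; } fe_sketch;

/* Case 1: give the element the value 0 before use. The result is a valid,
 * normalized field element representing zero. */
static void fe_set_zero(fe_sketch *a) {
    memset(a->n, 0, sizeof(a->n));
    a->magnitude = 0;
    a->normalized = 1;
}

/* Case 2: wipe possibly secret contents after use. The bookkeeping is
 * overwritten too, so the element no longer claims to be a usable value.
 * (The real version would zero through the non-optimizable cleanse.) */
static void fe_clear(fe_sketch *a) {
    memset(a->n, 0, sizeof(a->n));
    a->magnitude = 0;
    a->normalized = 0;
}

/* Helpers exposing the distinguishing bookkeeping for a quick check. */
static int set_zero_normalized(void) { fe_sketch a; fe_set_zero(&a); return a.normalized; }
static int clear_normalized(void)    { fe_sketch a; fe_clear(&a);    return a.normalized; }
```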
...rather than relying on zero assignment to set the initial state.

Suggested by Pieter Wuille to avoid using secp256k1_ge_clear() for this
initialization need.
…text_clear

The 'teardown' naming complements the 'build' function name: 'build' allocates
and sets up the memory for pre_g and pre_g_128, and 'teardown' frees the
memory.
…_gen_context_clear

The 'teardown' naming complements the 'build' function name: 'build' allocates
and sets up the memory for ctx->prec, and 'teardown' frees the memory.
Fixes clang static analysis issue:

--------------------------------------------------------------------------------
An issue has been found in ./src/ecmult_gen_impl.h:150:5
Type:         Dead assignment
Description:  Value stored to 'bits' is never read

0: ./src/ecmult_gen_impl.h:153:5 - Value stored to 'bits' is never read
--------------------------------------------------------------------------------
All of these conversions:

1) operate on stack memory,
2) happen after the function is done with the variable, and
3) had an existing memset() action to be replaced.

These were found by visual inspection and may not be the total set of
places where SECP256K1_CLEANSE should ideally be applied.
@isle2983
Contributor Author

Great suggestions. I introduced secp256k1_fe_set_zero() and re-worked the commit history to avoid the wrangling of the #define VERIFY stuff by converting the remaining clients of secp256k1_fe_set_zero() over first. secp256k1_fe_clear() is then replaced with SECP256K1_CLEANSE() along with the others. secp256k1_fe_set_zero() remains and encapsulates the (former) verify_init() initialization logic.

@gmaxwell
Contributor

Concept ACK. I worry about the CLEANSE macro using sizeof; this is sort of begging to do the wrong thing with arrays, no?
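The pitfall, sketched: sizeof measures the argument's static type, so the macro wipes a full array at its definition site but silently shrinks to pointer size once the array has decayed, e.g. when passed into a helper. The names below are illustrative:

```c
#include <stddef.h>

/* At the definition site, sizeof sees the whole array, so a
 * CLEANSE-style macro would wipe all 32 bytes. */
static size_t size_at_definition(void) {
    unsigned char seckey[32];
    (void)seckey;
    return sizeof(seckey);   /* 32 */
}

/* An array argument decays to a pointer, so here sizeof yields only
 * sizeof(unsigned char *); a CLEANSE on it would under-clear. */
static size_t size_in_callee(unsigned char *seckey) {
    (void)seckey;
    return sizeof(seckey);
}
```

This is one reason to cleanse in the frame that owns the array, or to pass an explicit length instead of relying on sizeof.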

@real-or-random
Contributor

@isle2983 I considered working on a similar PR and then found this one, which seems to be pretty mature already. :) Are you still willing to work on this PR?

@gmaxwell
Contributor

@real-or-random you could also just pick up this PR, rebase, make your fixups, and just note it in the commit messages.

@real-or-random
Contributor

Yes, that's my plan; consider this PR adopted by me. I just couldn't find time to work on this so far.

@sipa
Contributor

sipa commented Jul 23, 2019

Closing as superseded by #636.

@sipa sipa closed this Jul 23, 2019
real-or-random added a commit that referenced this pull request Nov 4, 2024
…vival of #636)

765ef53 Clear _gej instances after point multiplication to avoid potential leaks (Sebastian Falbesoner)
349e6ab Introduce separate _clear functions for hash module (Tim Ruffing)
99cc9fd Don't rely on memset to set signed integers to 0 (Tim Ruffing)
97c57f4 Implement various _clear() functions with secp256k1_memclear() (Tim Ruffing)
9bb368d Use secp256k1_memclear() to clear stack memory instead of memset() (Tim Ruffing)
e3497bb Separate between clearing memory and setting to zero in tests (Tim Ruffing)
d79a6cc Separate secp256k1_fe_set_int( . , 0 ) from secp256k1_fe_clear() (Tim Ruffing)
1c08126 Add secp256k1_memclear() for clearing secret data (Tim Ruffing)
e7d3844 Don't clear secrets in pippenger implementation (Tim Ruffing)

Pull request description:

  This PR picks up #636 (which in turn picked up #448, so this is take number three) and is essentially a rebase on master.

  Some changes to the original PR:
  * the clearing function now has the `secp256k1_` prefix again, since the related helper `_memczero` got it as well (see PR #835 / commit e89278f)
  * the original commit b17a7df ("Make _set_fe_int( . , 0 ) set magnitude to 0") is not needed anymore, since it was already applied in PR #943 (commit d49011f)
  * clearing of stack memory with `secp256k1_memclear` is now also done on modules that have been newly introduced since then, i.e. schnorr and ellswift (of course, there is still no guarantee that all places where clearing is necessary are covered)

  So far I haven't looked at any disassembly and possible performance implications yet (there were some concerns expressed in #636 (comment)), happy to go deeper there if this gets Concept ACKed.

  The proposed method of using a memory barrier to prevent optimizing away the memset is still used in BoringSSL (where it was originally picked up from) and in the Linux Kernel, see e.g. https://github.com/google/boringssl/blob/5af122c3dfc163b5d1859f1f450756e8e320a142/crypto/mem.c#L335 and https://github.com/torvalds/linux/blob/d4560686726f7a357922f300fc81f5964be8df04/include/linux/string.h#L348 / https://github.com/torvalds/linux/blob/d4560686726f7a357922f300fc81f5964be8df04/include/linux/compiler.h#L102
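That barrier technique can be sketched as follows (GCC/Clang-specific; the function name is illustrative). An empty asm statement with a "memory" clobber makes the compiler assume the buffer may still be read afterwards, so the preceding memset cannot be dropped:

```c
#include <stddef.h>
#include <string.h>

static void memclear_with_barrier(void *ptr, size_t len) {
    memset(ptr, 0, len);
    /* The asm takes ptr as input and declares it may touch memory, so the
     * zeroed buffer counts as observable and the memset must be kept. */
    __asm__ __volatile__("" : : "r"(ptr) : "memory");
}

/* Self-check: returns 1 if a filled buffer reads back as all zeros. */
static int memclear_with_barrier_works(void) {
    unsigned char buf[64];
    size_t i;
    memset(buf, 0x55, sizeof(buf));
    memclear_with_barrier(buf, sizeof(buf));
    for (i = 0; i < sizeof(buf); i++) {
        if (buf[i] != 0) return 0;
    }
    return 1;
}
```

Unlike explicit_bzero, this relies on a compiler extension, which is why BoringSSL and the kernel guard it behind compiler checks.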

  Fixes #185.

ACKs for top commit:
  sipa:
    reACK 765ef53
  real-or-random:
    ACK 765ef53

Tree-SHA512: 5a034d5ad14178c06928022459f3d4f0877d06f576b24ab07b86b3608b0b3e9273217b8309a1db606f024f3032731f13013114b1e0828964b578814d1efb2959