bpo-37986: Improve performance of PyLong_FromDouble() #15611
NEWS entry (new file):

```rst
Improve performance of :c:func:`PyLong_FromDouble` for values that fit into
:c:type:`long`.
```
```diff
@@ -433,6 +433,21 @@ PyLong_FromSize_t(size_t ival)
 PyObject *
 PyLong_FromDouble(double dval)
 {
+    /* Try to get out cheap if this fits in a long. When a finite value of real
+     * floating type is converted to an integer type, the value is truncated
+     * toward zero. If the value of the integral part cannot be represented by
+     * the integer type, the behavior is undefined. Thus, we must check that
+     * the value is in range (LONG_MIN - 1, LONG_MAX + 1). If a long has more
+     * bits of precision than a double, casting LONG_MIN - 1 to double may
+     * yield an approximation, but LONG_MAX + 1 is a power of two and can be
+     * represented as a double exactly (assuming FLT_RADIX is 2 or 16), so for
+     * simplicity check against [-(LONG_MAX + 1), LONG_MAX + 1).
+     */
+    const double int_max = (unsigned long)LONG_MAX + 1;
+    if (-int_max < dval && dval < int_max) {
+        return PyLong_FromLong((long)dval);
+    }
+
     PyLongObject *v;
     double frac;
     int i, ndig, expo, neg;
```
```diff
@@ -452,8 +467,7 @@ PyLong_FromDouble(double dval)
         dval = -dval;
     }
     frac = frexp(dval, &expo); /* dval = frac*2**expo; 0.0 <= frac < 1.0 */
-    if (expo <= 0)
-        return PyLong_FromLong(0L);
+    assert(expo > 0);
     ndig = (expo-1) / PyLong_SHIFT + 1; /* Number of 'digits' in result */
     v = _PyLong_New(ndig);
     if (v == NULL)
```

Review thread on the removed `if (expo <= 0)` check:

> This is already on the slow path, so it seems safest to keep this check in place even though it should have been handled by the int-range checks above. Smart compilers would see that (no idea how many are smart enough to unroll frexp and understand).

> I do not think it makes sense to keep this code.

> Either seems fine to me. Personally, I'd probably keep the check out of defensiveness (someone could, for whatever reason, move the fast path out at some point in the future; it's nice if the slow path remains valid in that case), but I'm happy for this to be merged as is. Do we at least have unit tests that cover this case?

> At least gcc is not smart enough.

> A compromise is to turn it into

> It is.
Review thread on the `const double int_max = (unsigned long)LONG_MAX + 1;` bound:

> `int_max` is an imprecise value on platforms where sizeof(long) >= sizeof(double). Most 64-bit systems have longs larger than a double's 53-bit mantissa (and likely all platforms when considering long long, per the above comment). Will it be truncated in the right direction (towards zero) to avoid this triggering on values with undefined conversion behavior?
>
> The previous code used `LONG_MIN < v` and `v < LONG_MAX` directly rather than LONG_MAX + 1 stored into a double. (I believe C promotion will promote those values to a double before comparison, as all floating-point types have a higher rank than integer types.)
> The original comment explains why you should use `< LONG_MAX`. I would keep the original comment and the code, and just move it into `PyLong_FromDouble()`.
> I think I had to add a comment about this: I assumed that LONG_MAX == 2 ** (CHAR_BIT * sizeof(long) - 1) - 1 and LONG_MIN == -2 ** (CHAR_BIT * sizeof(long) - 1), i.e. `(unsigned long)LONG_MAX + 1` is a power of two and can be exactly represented by a double (assuming that FLT_RADIX == 2). Does that make sense?
>
> (Originally I wrote it like this: `const double int_max = pow(2, CHAR_BIT * sizeof(long) - 1)`, see #15611 (comment).)
> Here I'm trying to demonstrate correctness of this approach:
> I think this is fine, under reasonable assumptions on the platform. `LONG_MAX + 1` must be a power of 2 (follows from C99 §6.2.6.2p2), and while it's theoretically possible that `double` will be unable to represent `LONG_MAX + 1` exactly, that seems highly unlikely in practice. So the conversion to `double` must be exact (C99 §6.3.1.4p2).
>
> It's not safe based purely on the C standard to assume that `LONG_MIN = -LONG_MAX - 1`: the integer representation could be ones' complement or sign-magnitude, in which case `LONG_MIN = -LONG_MAX`. But that assumption is safe in practice for any platform that Python's likely to meet, and we make the assumption of two's complement for signed integers elsewhere in the codebase. If we're worried enough about this, we could change the `-int_max <= dval` comparison to `-int_max < dval`. On balance, I'd suggest making that change (partly just for the aesthetics of the symmetry).
>
> Believe it or not, it's also not safe based purely on the C standard to assume that `(unsigned long)LONG_MAX + 1` is representable as an `unsigned long`: C99 §6.2.5p9 only guarantees that nonnegative `long` values are representable as `unsigned long`. But the chance of that not being true in practice is negligible (at least until someone tries to port CPython to the DS9000). And the failure mode is benign: we'd just end up never taking the fast path.
> Re-reading all this, I had one more worry (which is why I dismissed my own review): what happens if the exact value of `dval` lies strictly between `LONG_MAX` and `LONG_MAX + 1`? In that case we could end up converting a `double` that, strictly speaking, is outside the range of `long`. But it turns out that we're safe, because C99 is quite explicit here: §6.3.1.4p1 says (emphasis mine): […] So any `double` value that's strictly smaller than `LONG_MAX + 1` should be fine.
> Then I think we could use `((double)(LONG_MAX / 2 + 1)) * 2`, but is it worth it?
>
> Shouldn't we formally state that we support only two's complement representation? BTW, it was proposed to abandon other representations, and it looks like the committee agrees with that.
> Definitely not worth it! The C standard permits `LONG_MAX == ULONG_MAX`, but I'd be astonished if you ever found a real implementation (now or in the future) that has this property.
>
> Yes, we should, though I'm not sure where would be the best place. But I think it's a non-issue in practice.