
bpo-37986: Improve performance of PyLong_FromDouble() #15611

Merged (3 commits, May 10, 2020)

@@ -0,0 +1,2 @@
+Improve performance of :c:func:`PyLong_FromDouble` for values that fit into
+:c:type:`long`.
22 changes: 1 addition & 21 deletions Objects/floatobject.c
@@ -882,27 +882,7 @@ static PyObject *
float___trunc___impl(PyObject *self)
/*[clinic end generated code: output=dd3e289dd4c6b538 input=591b9ba0d650fdff]*/
{
-    double x = PyFloat_AsDouble(self);
-    double wholepart; /* integral portion of x, rounded toward 0 */
-
-    (void)modf(x, &wholepart);
-    /* Try to get out cheap if this fits in a Python int. The attempt
-     * to cast to long must be protected, as C doesn't define what
-     * happens if the double is too big to fit in a long. Some rare
-     * systems raise an exception then (RISCOS was mentioned as one,
-     * and someone using a non-default option on Sun also bumped into
-     * that). Note that checking for >= and <= LONG_{MIN,MAX} would
-     * still be vulnerable: if a long has more bits of precision than
-     * a double, casting MIN/MAX to double may yield an approximation,
-     * and if that's rounded up, then, e.g., wholepart=LONG_MAX+1 would
-     * yield true from the C expression wholepart<=LONG_MAX, despite
-     * that wholepart is actually greater than LONG_MAX.
-     */
-    if (LONG_MIN < wholepart && wholepart < LONG_MAX) {
-        const long aslong = (long)wholepart;
-        return PyLong_FromLong(aslong);
-    }
-    return PyLong_FromDouble(wholepart);
+    return PyLong_FromDouble(PyFloat_AS_DOUBLE(self));
}

/* double_round: rounds a finite double to the closest multiple of
18 changes: 16 additions & 2 deletions Objects/longobject.c
@@ -433,6 +433,21 @@ PyLong_FromSize_t(size_t ival)
PyObject *
PyLong_FromDouble(double dval)
{
+    /* Try to get out cheap if this fits in a long. When a finite value of real
+     * floating type is converted to an integer type, the value is truncated
+     * toward zero. If the value of the integral part cannot be represented by
+     * the integer type, the behavior is undefined. Thus, we must check that
+     * value is in range (LONG_MIN - 1, LONG_MAX + 1). If a long has more bits
+     * of precision than a double, casting LONG_MIN - 1 to double may yield an
+     * approximation, but LONG_MAX + 1 is a power of two and can be represented
+     * as double exactly (assuming FLT_RADIX is 2 or 16), so for simplicity
+     * check against [-(LONG_MAX + 1), LONG_MAX + 1).
+     */
+    const double int_max = (unsigned long)LONG_MAX + 1;
Member:

int_max is an imprecise value on platforms where sizeof(long) >= sizeof(double). Most 64-bit systems have longs larger than a double's 53-bit mantissa (and likely all platforms when considering long long per the above comment).

Will it be truncated in the right direction (towards zero) to avoid this triggering on values with undefined conversion behavior?

The previous code used LONG_MIN < v and v < LONG_MAX directly rather than LONG_MAX + 1 stored into a double. (I believe the C promotion will promote those values to a double before comparison, as all floating point types have a higher rank than integer types.)
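
To make the concern concrete, here is a minimal standalone sketch (not from the PR) showing that on LP64 LONG_MAX itself is not representable as a double and rounds up to 2**63 when converted:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* 2**63 - 1 needs 63 significant bits, more than a double's 53-bit
     * mantissa, so the conversion rounds to the nearest double: 2**63. */
    double dmax = (double)LONG_MAX;
    printf("LONG_MAX         = %ld\n", LONG_MAX);
    printf("(double)LONG_MAX = %.1f\n", dmax);
    printf("equals 2**63?      %d\n", dmax == 9223372036854775808.0);
    return 0;
}

A check like dval <= (double)LONG_MAX would therefore wrongly accept dval == 2**63, whose integral part is out of long range.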

Member:

The original comment explains why you should use < LONG_MAX. I would keep the original comment and the code, and just move it into PyLong_FromDouble().

@sir-sigurd (Contributor, Author), Sep 11, 2019:

I think I should add a comment about this: I assumed that LONG_MAX == 2 ** (CHAR_BIT * sizeof(long) - 1) - 1 and LONG_MIN == -2 ** (CHAR_BIT * sizeof(long) - 1), i.e. (unsigned long)LONG_MAX + 1 is a power of two and can be exactly represented by double (assuming that FLT_RADIX == 2). Does that make sense?

(Originally I wrote it like this: const double int_max = pow(2, CHAR_BIT * sizeof(long) - 1), see #15611 (comment))
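
As a standalone sanity check (not part of the PR, and assuming a correctly rounded pow() as provided by common libms), both spellings of the bound evaluate to the same power of two on LP64:

#include <assert.h>
#include <limits.h>
#include <math.h>

int main(void)
{
    /* Both expressions compute 2**63 exactly on LP64. */
    const double a = (double)((unsigned long)LONG_MAX + 1);
    const double b = pow(2, CHAR_BIT * sizeof(long) - 1);
    assert(a == b);
    return 0;
}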

@sir-sigurd (Contributor, Author):

Here I'm trying to demonstrate the correctness of this approach:

In [65]: import math
In [66]: SIZEOF_LONG = 8; CHAR_BITS = 8
In [67]: LONG_MAX = (1 << (SIZEOF_LONG * CHAR_BITS - 1)) - 1; LONG_MIN = -LONG_MAX - 1
In [68]: int_max = float(LONG_MAX + 1)        

In [69]: int_max == LONG_MAX + 1
Out[69]: True

In [70]: def cast_to_long(dval):
    ...:     assert isinstance(dval, float)
    ...:     wholepart = math.trunc(dval)
    ...:     if LONG_MIN <= wholepart <= LONG_MAX:
    ...:         return wholepart
    ...:     raise RuntimeError('undefined behavior')
In [71]: def long_from_double(dval):
    ...:     assert isinstance(dval, float)
    ...:     if -int_max <= dval < int_max:
    ...:         return cast_to_long(dval)
    ...:     raise ValueError('float is out of range, use frexp()')

In [72]: long_from_double(int_max)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-72-280887471997> in <module>()
----> 1 long_from_double(int_max)

<ipython-input-71-ccaef6014bf1> in long_from_double(dval)
      3     if -int_max <= dval < int_max:
      4         return cast_to_long(dval)
----> 5     raise ValueError('float is out of range, use frexp()')

ValueError: float is out of range, use frexp()

In [73]: int_max.hex()
Out[73]: '0x1.0000000000000p+63'

In [74]: long_from_double(float.fromhex('0x1.fffffffffffffp+62'))
Out[74]: 9223372036854774784

In [75]: long_from_double(float.fromhex('-0x1.fffffffffffffp+62'))
Out[75]: -9223372036854774784

In [76]: long_from_double(-int_max)
Out[76]: -9223372036854775808

In [77]: long_from_double(float.fromhex('-0x1.0000000000001p+63'))
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-77-de5e9e1eba23> in <module>()
----> 1 long_from_double(float.fromhex('-0x1.0000000000001p+63'))

<ipython-input-71-ccaef6014bf1> in long_from_double(dval)
      3     if -int_max <= dval < int_max:
      4         return cast_to_long(dval)
----> 5     raise ValueError('float is out of range, use frexp()')

ValueError: float is out of range, use frexp()

Member:

I think this is fine, under reasonable assumptions on the platform. LONG_MAX + 1 must be a power of 2 (follows from C99 §6.2.6.2p2), and while it's theoretically possible that double will be unable to represent LONG_MAX + 1 exactly, that seems highly unlikely in practice. So the conversion to double must be exact (C99 §6.3.1.4p2).

It's not safe based purely on the C standard to assume that LONG_MIN = -LONG_MAX - 1: the integer representation could be ones' complement or sign-magnitude, in which case LONG_MIN = -LONG_MAX. But that assumption is safe in practice for any platform that Python's likely to meet, and we make the assumption of two's complement for signed integers elsewhere in the codebase. If we're worried enough about this, we could change the -int_max <= dval comparison to -int_max < dval. On balance, I'd suggest making that change (partly just for the aesthetics of the symmetry).

Believe it or not, it's also not safe based purely on the C standard to assume that (unsigned long)LONG_MAX + 1 is representable as an unsigned long: C99 §6.2.5p9 only guarantees that nonnegative long values are representable as unsigned long. But the chance of that not being true in practice is negligible (at least until someone tries to port CPython to the DS9000). And the failure mode is benign: we'd just end up never taking the fast path.
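
If we wanted to make those assumptions explicit, a hypothetical compile-time guard (not part of this PR; requires C11 static_assert) could fail the build on such exotic platforms instead of silently miscomputing the bound:

#include <assert.h>
#include <limits.h>

static_assert(LONG_MIN == -LONG_MAX - 1,
              "two's-complement long assumed");
static_assert((unsigned long)LONG_MAX + 1 != 0,
              "(unsigned long)LONG_MAX + 1 must not wrap to zero");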

Member:

Re-reading all this, I had one more worry (which is why I dismissed my own review): what happens if the exact value of dval lies strictly between LONG_MAX and LONG_MAX + 1? In that case we could end up converting a double that, strictly speaking, is outside the range of long. But it turns out that we're safe, because C99 is quite explicit here: §6.3.1.4p1 says (emphasis mine):

> If the value of the integral part cannot be represented by the integer type, the behavior is undefined.

So any double value that's strictly smaller than LONG_MAX + 1 should be fine.
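
The boundary case is easy to exercise (a sketch, not from the PR): the largest double strictly below 2**63 is 2**63 - 2**10, whose integral part is comfortably within long range, so the cast is well defined:

#include <limits.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double int_max = (unsigned long)LONG_MAX + 1;  /* 2**63 on LP64 */
    double below = nextafter(int_max, 0.0);  /* largest double < 2**63 */
    printf("%.1f\n", below);       /* 9223372036854774784.0 */
    printf("%ld\n", (long)below);  /* well defined: value < LONG_MAX */
    return 0;
}

This matches the float.fromhex('0x1.fffffffffffffp+62') case in the demonstration above.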

@sir-sigurd (Contributor, Author):

> it's also not safe based purely on the C standard to assume that (unsigned long)LONG_MAX + 1 is representable as an unsigned long

Then I think we could use ((double)(LONG_MAX / 2 + 1)) * 2, but is it worth it?

> It's not safe based purely on the C standard to assume that LONG_MIN = -LONG_MAX - 1: the integer representation could be ones' complement or sign-magnitude, in which case LONG_MIN = -LONG_MAX. But that assumption is safe in practice for any platform that Python's likely to meet, and we make the assumption of two's complement for signed integers elsewhere in the codebase.

Shouldn't we formally state that we support only two's complement representation?
BTW it was proposed to abandon other representations and it looks like the committee agrees with that.

Member:

> Then I think we could use ((double)(LONG_MAX / 2 + 1)) * 2, but is it worth it?

Definitely not worth it! The C standard permits LONG_MAX == ULONG_MAX, but I'd be astonished if you ever find a real implementation (now or in the future) that has this property.

> Shouldn't we formally state that we support only two's complement representation?

Yes, we should, though I'm not sure where would be the best place. But I think it's a non-issue in practice.

+    if (-int_max < dval && dval < int_max) {
+        return PyLong_FromLong((long)dval);
+    }
+
    PyLongObject *v;
    double frac;
    int i, ndig, expo, neg;
@@ -452,8 +467,7 @@ PyLong_FromDouble(double dval)
        dval = -dval;
    }
    frac = frexp(dval, &expo); /* dval = frac*2**expo; 0.0 <= frac < 1.0 */
-    if (expo <= 0)
Member:

This is already on the slow path; it seems safest to keep this check in place even though it should've been handled by the above int range checks. Smart compilers would see that (no idea how many are smart enough to unroll frexp and understand).

Member:

I do not think it makes sense to keep this code.

Member:

Either seems fine to me. Personally, I'd probably keep the check out of defensiveness (someone could, for whatever reason, move the fast path out at some point in the future; it's nice if the slow path remains valid in that case), but I'm happy for this to be merged as is. Do we at least have unit tests that cover this case?

@sir-sigurd (Contributor, Author):

> Smart compilers would see that (no idea how many are smart enough to unroll frexp and understand).

At least gcc is not smart enough.

@gpshead (Member), Oct 26, 2019:

A compromise is to turn it into assert(expo > 0); as protection against future code changes breaking our assumption. Our buildbots run --with-pydebug builds where assertions are enabled.

Member:

It is.

-        return PyLong_FromLong(0L);
+    assert(expo > 0);
    ndig = (expo-1) / PyLong_SHIFT + 1; /* Number of 'digits' in result */
    v = _PyLong_New(ndig);
    if (v == NULL)