Can't roundtrip json encode/parse properly with ConstrainedDecimal #2293
Comments
Patch provided in #2294
* fix #2293: Properly encode Decimals without any decimal places.
* doc: Added changelog entry.
* refactor: Move ConstrainedDecimal test from separate file into test_json
* docs: Remove prefix from changelog.
* test: Changed test_con_decimal_encode to @samuelcolvins recommendations
@samuelcolvin It's great that the fix for this particular bug got merged, but thinking about it further, isn't it a bit strange that I still can't provide a custom encoder for the Id type? It would make sense if the encoder I provided were actually used. I suspect this is because the type checked when encoding is that of the instance, not the type declared in the object's model class. Wouldn't it make sense to check the annotated type first for a custom encoder?
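To illustrate the point about encoder lookup, here is a minimal sketch assuming pydantic v1; the Id class and Item model below are hypothetical and not taken from the issue. The field is annotated as Id, but after validation the stored value is a plain Decimal, so an encoder registered under Id in json_encoders is never consulted.

```python
from decimal import Decimal
from pydantic import BaseModel, ConstrainedDecimal


# Hypothetical annotated type for an id-like Numeric(22, 0) column.
class Id(ConstrainedDecimal):
    max_digits = 22
    decimal_places = 0


class Item(BaseModel):
    id: Id

    class Config:
        # Custom encoder registered for the annotated type Id ...
        json_encoders = {Id: lambda v: int(v)}


item = Item(id=123)
# ... but validation stores a plain Decimal, and the encoder lookup goes by
# the type of the instance, so the Id entry above is never used.
print(type(item.id))  # <class 'decimal.Decimal'>
print(item.json())
```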
fix pydantic#2293: Properly encode Decimals without any decimal places. (pydantic#2294)
Bug
Output of `python -c "import pydantic.utils; print(pydantic.utils.version_info())"`:

When using a ConstrainedDecimal with zero decimal_places (as provided in the example below), pydantic incorrectly encodes the value as a float, which causes a failure if one tries to parse the freshly encoded JSON object.

Using a decimal with max_digits = x and decimal_places = 0 is a great way of representing, for instance, a Numeric(22,0) (where x is 22) from many SQL database schemas. Certain database engines like the popular pyodbc will properly handle and convert such a Decimal value, but won't handle it as an int, since pyodbc implicitly interprets an int as a 32-bit integer. Having a fixed number of max_digits also allows one's query engine to pre-compile reusable query plans, which would otherwise have to be recomputed for every length of the given number. In other words, using a ConstrainedDecimal for this type of data is ideal.
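As a concrete illustration of that mapping (a sketch only; the model and field names are placeholders), the SQL column type and the constrained field line up roughly like this:

```python
from pydantic import BaseModel, condecimal


class Row(BaseModel):
    # Corresponds to a SQL Numeric(22, 0) column: up to 22 digits, no fraction.
    account_number: condecimal(max_digits=22, decimal_places=0)
```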
I have provided a minimal test/example which can both be executed directly and run with pytest to showcase the issue at hand.
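The exact test attached to the issue isn't reproduced here, but a sketch of that kind of roundtrip check (assuming pydantic v1's condecimal; names are illustrative) might look like this:

```python
from decimal import Decimal

from pydantic import BaseModel, condecimal


class Obj(BaseModel):
    id: condecimal(max_digits=22, decimal_places=0)


def test_condecimal_roundtrip():
    obj = Obj(id=Decimal(1))
    encoded = obj.json()
    # Before the fix, encoding produced a float ('{"id": 1.0}'), and parsing
    # that back violates decimal_places=0, raising a ValidationError.
    assert Obj.parse_raw(encoded) == obj


if __name__ == '__main__':
    test_condecimal_roundtrip()
```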
I have a small patch for the pydantic.json module which I can provide as a pull request. With that patch applied, all of the tests above pass. Note that I handle both the case where a Decimal has a negative exponent (decimal_places) and the case where it doesn't. The patch handles both cases as expected and is written in a minimal and, of course, readable fashion.
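The change works along these lines (a sketch based on the description above; the exact function name and its placement in pydantic/json.py are assumptions on my part): choose between int and float based on the Decimal's exponent.

```python
from decimal import Decimal
from typing import Union


def decimal_encoder(dec_value: Decimal) -> Union[int, float]:
    # A non-negative exponent means there is no fractional part, so the value
    # can safely be emitted as an int and will survive a JSON roundtrip with
    # decimal_places=0; everything else keeps the previous float encoding.
    if dec_value.as_tuple().exponent >= 0:
        return int(dec_value)
    return float(dec_value)


assert isinstance(decimal_encoder(Decimal(1)), int)        # no decimal places -> int
assert isinstance(decimal_encoder(Decimal('1.5')), float)  # fractional -> float
```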
I am currently reading through the contributor guidelines to ensure that my patch meets the standards the project holds for contributions.