BigDecimal vs Decimal128 #8
Fabrice Bellard said,
I could see the …
After some more thought and discussion with @waldemarhorwat, I've decided to leave the question of BigDecimal vs Decimal128 undecided for a bit longer, and investigate both paths in more detail.
I work on Ethereum-based financial software. The largest integer in Ethereum is a uint256.
@novemberborn If you're currently using a uint256, would BigInt work for your needs? How are those decimals currently represented?
@littledan we've ended up with a few representations, unfortunately. While we can represent the raw value as a BigInt, this isn't actually useful. The smallest unit of ETH is a Wei, but thinking of ETH as …
Can you say more about the representations you're using now? You only mentioned uint256 (of Wei?)--I'd be interested in hearing about the others.
I haven't worked much with the representation we use in our databases. We're looking at cleaning this up, so I'll know more in the next few weeks, hopefully.
On the wire, we either use decimal strings, or a …
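For context, here is a minimal sketch (not part of this proposal, and ignoring negative balances) of what handling these representations looks like with BigInt alone, assuming the standard 1 ETH = 10^18 Wei scaling; the helper name weiToEthString is hypothetical:

```js
// Raw balances are held as BigInt Wei; a decimal ETH string is produced only
// for display, since BigInt itself cannot express the fractional ETH value.
const WEI_PER_ETH = 10n ** 18n;

function weiToEthString(wei) {
  const whole = wei / WEI_PER_ETH;     // integer ETH part
  const fraction = wei % WEI_PER_ETH;  // remaining Wei
  // Pad the fractional part to 18 digits, then trim trailing zeros.
  const digits = fraction.toString().padStart(18, "0").replace(/0+$/, "");
  return digits === "" ? `${whole}` : `${whole}.${digits}`;
}

weiToEthString(1500000000000000000n); // "1.5"
```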
Coming in cold to this discussion, but it seems that there aren't any arguments here against the arbitrary-precision approach. The arbitrary-precision approach could support options on various operations for specifying precision, thereby (potentially) gaining some speed and memory benefits in certain use cases, such as when one knows that at most 10 digits are needed for any calculation. There was a reference to a discussion with @waldemarhorwat. Are the concerns still valid?
The brittleness and complexity concerns from past discussions haven't changed; see those discussions to understand the problems and dangers that appear with unlimited precision. If precision is an option, what happens when one doesn't specify it? How does one specify it for an arithmetic operator such as …?
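As a concrete, runnable illustration of that last question (using only today's BigInt; divideToDigits is a hypothetical helper, not a proposed API): decimal division does not terminate in general, so an unlimited-precision type has to get a digit count from somewhere. A function has a parameter slot for it; an infix operator does not.

```js
// 1/3 = 0.333... never terminates, so an explicit digit count is required.
function divideToDigits(numerator, denominator, fractionDigits) {
  const scaled = numerator * 10n ** BigInt(fractionDigits);
  const q = scaled / denominator; // truncating BigInt division
  const s = q.toString().padStart(fractionDigits + 1, "0");
  return `${s.slice(0, -fractionDigits)}.${s.slice(-fractionDigits)}`;
}

divideToDigits(1n, 3n, 10); // "0.3333333333" -- the caller chose 10 digits
// With an operator like `a / b` on an unlimited-precision decimal, there is
// nowhere to pass that choice, so it would have to come from an implicit
// default or a dynamic setting -- exactly the brittleness concern above.
```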
This proposal suggests adding arbitrary-precision decimals. An alternative would be to add decimals with fixed precision. My current thoughts on why to go with arbitrary-precision decimal (also in the readme):
JavaScript is a high-level language, so it would be optimal to give JS programmers a high-level data type that makes sense logically for their needs, as TC39 did for BigInt, rather than focusing on machine needs. At the same time, many high-level programming languages with decimal data types just include a fixed precision. Because many languages added decimal data types before IEEE standardized one, there's a wide variety of choices among systems.
We haven't seen examples of programmers running into practical problems due to rounding from fixed-precision decimals (across various programming languages that use different details for their decimal representation). This makes IEEE 128-bit decimal seem attractive. Decimal128 would solve certain problems, such as giving a well-defined point to round division to (simply limited by the size of the type).
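As a rough illustration of that "well-defined point", assuming IEEE 754 decimal128's 34 significant decimal digits (emulated here with BigInt; this is not a decimal128 implementation):

```js
// With a fixed 34-significant-digit format, 1/3 always rounds at the same
// place, chosen by the type rather than by the programmer or the API call.
const DIGITS = 34n;
const oneThird = (10n ** DIGITS) / 3n; // first 34 significant digits of 1/3
console.log(`0.${oneThird}`);          // "0.3333333333333333333333333333333333"
```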
However, we're proposing unlimited-precision decimal instead, for the following reasons: