User-defined double output precision #19
Conversation
Writing a `double` to an `OutputStream` currently prints at most 6 significant digits (according to the C standard). The function `SetDoublePrecision()`, added to the `Writer` classes, can be used to fluently set the precision, i.e. the number of significant digits to use when writing the double: `Writer<...> writer(...); d.Accept(writer.SetDoublePrecision(12));`
As proposed in other patches, it is convenient to pass a user-defined precision for the (programmatic) output of a single `double` value to an `OutputStream`. This patch adds an additional overload with an explicit precision argument to the `(Pretty)Writer` class templates.
I cannot find the code from c9c2d06 in current master.
This feature has been superseded by the new API. Still, this question has popped up before in this comment.
@pah, thanks for the update. Shame, as I would really like to have your commit in the lib. I know for a fact that my code should never be sending more than 3dp, but I'm seeing cumulative rounding errors. Is this truly obsolete? For example, starting with the number 1.23 and then repeatedly adding 0.01 to it will eventually result in numbers with long sequences of recurring digits. I'm happy to have this imprecision in my application's state so long as it disappears at the edges -- i.e. where I serialise data to JSON or otherwise. What would you recommend as a viable alternative using the current API?
Yes, some 3-decimal digit numbers cannot be represented exactly as `double` values. I sketched a solution to re-add something like this in this comment. The main challenge is to properly define the output semantics for the different cases involved. I would suggest that you open a new issue and include a proposal for how to handle the different cases based on a given "precision limit" from the user. Of course, you're welcome to submit a pull request as well. 😄
Writing a `double` to an `OutputStream` currently prints at most 6 significant digits (according to the C standard). The functions `Set/GetDoublePrecision()`, added to the `Writer` classes, can be used to fluently set the precision, i.e. the number of significant digits to use when writing the double.

Additionally, a new `Double(double, int)` overload is added to the `Writer` classes, allowing a single `double` value to be written programmatically with a custom output precision.

An additional unit test `Writer.DoublePrecision` is added as well.

See upstream issues. (Resubmitted #15 against `master`.)