Update test_add_overflow_with_ansi_enabled and test_subtraction_overflow_with_ansi_enabled to check the exception type for the Integral case #6071
Conversation
Signed-off-by: remzi <13716567376yh@gmail.com>
Why are we trying to get the Spark type from the cudf type? We know both the Spark type of the input and the Spark type of the output from the query plan. We may not know it in
Hiccup on the artifact server, kicking the build again
As this would cause API changes or maybe more memory allocation, I'm not sure whether this is worth doing when our target is just to match the error message with Spark's.
If we want to get the decimal value, we should also pass `input` to `GpuCast.checkNFixDecimalBounds`.
Yep, that or separately pass the input `DataType`. Either works.
> As this would cause API changes or maybe more memory allocation
We own the APIs in question (they are internal, not public), so this would not be a major change. There's no additional memory allocation involved in passing an existing object instance to a method, so I'm not sure what you're getting at there.
Determining the input datatype takes no additional memory, but besides the datatype we would need to extract a specific, offending value from the input to call `cannotChangeDecimalPrecisionError`, which does require additional GPU processing and memory. As such, I'm fine with this just matching the base exception type for now. We can put in the effort to extract a value in a follow-up PR if deemed necessary.
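To make the trade-off concrete, here is a minimal sketch of the direction discussed above: thread the Spark input type, already known from the query plan, into the bounds check instead of deriving it from the cudf column. The object, signature, parameter names, and the plain `ArithmeticException` below are illustrative assumptions, not the plugin's actual internal API.

```scala
import ai.rapids.cudf.ColumnView
import org.apache.spark.sql.types.{DataType, DecimalType}

object DecimalBoundsSketch {
  // Hypothetical variant of the decimal bounds check. The caller already
  // knows the Spark input type from the query plan, so it is passed in
  // directly; handing existing references to a method allocates nothing.
  def checkNFixDecimalBounds(
      input: ColumnView,
      fromType: DataType,      // known from the query plan, per the discussion
      toType: DecimalType,
      ansiEnabled: Boolean): ColumnView = {
    val hasOutOfBounds = false // placeholder for the real GPU bounds check
    if (ansiEnabled && hasOutOfBounds) {
      // Raising Spark's cannotChangeDecimalPrecisionError for a *specific*
      // value would additionally require copying an offending element back
      // from the GPU -- that copy is the extra processing and memory being
      // weighed in the discussion above.
      throw new ArithmeticException(
        s"Cannot safely cast $fromType to $toType without precision loss")
    }
    input
  }
}
```

Passing `input` and `fromType` through is free; only materializing a specific offending value on the host adds GPU work and memory, which is why matching the base exception type suffices for now.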
Signed-off-by: remzi <13716567376yh@gmail.com>
This is a subtask of #5196.
Changes in this PR:

Update `test_add_overflow_with_ansi_enabled` and `test_subtraction_overflow_with_ansi_enabled` to check the type of the exception thrown. Please note that we only update the `IntegralType` case and don't modify the `DecimalType` case, because it is difficult to get a Spark `Decimal` from a cuDF `ColumnView`, which makes it hard to use `RapidsErrorUtil.cannotChangeDecimalPrecisionError`: https://github.com/NVIDIA/spark-rapids/blob/branch-22.08/sql-plugin/src/main/scala/com/nvidia/spark/rapids/GpuCast.scala#L1501-L1516
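The tests named above are Python integration tests, but the behavior they now assert can be illustrated with a small self-contained Scala snippet. This is a sketch, not the PR's test code; the object name, data, and cause-chain walk are assumptions. With ANSI mode enabled, an overflowing integer addition surfaces as an `ArithmeticException`, and the check is on the exception type rather than only on the message text.

```scala
import org.apache.spark.sql.SparkSession

object AnsiOverflowCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .config("spark.sql.ansi.enabled", "true")
      .getOrCreate()
    import spark.implicits._

    val df = Seq(Int.MaxValue).toDF("a")
    try {
      // Int.MaxValue + 1 overflows; with ANSI enabled Spark throws instead
      // of silently wrapping around.
      df.selectExpr("a + 1").collect()
      assert(false, "expected an overflow exception")
    } catch {
      case e: Exception =>
        // Depending on the Spark version, the ArithmeticException may arrive
        // directly or wrapped (e.g. in a SparkException), so walk the chain.
        var t: Throwable = e
        while (t != null && !t.isInstanceOf[ArithmeticException]) {
          t = t.getCause
        }
        assert(t != null, s"expected an ArithmeticException, got: $e")
        println(s"Caught expected exception type: ${t.getClass.getName}")
    } finally {
      spark.stop()
    }
  }
}
```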