feat: add hex scalar function #449
Not totally related to this PR, but this seems a bit unexpected. I believe much of the other code in `scalar_funcs` and DataFusion doesn't handle the dictionary type specially, e.g. `spark_ceil`, `spark_rpad`, and `spark_murmur3_hash`, or other functions registered in DataFusion. If we are going to handle dictionary types, should we also update these functions? Or we should do it in a more systematic way, such as flattening the dictionary types first. cc @sunchao, @andygrove, and @viirya for more input.
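(A minimal sketch of the "flatten first" idea mentioned above, assuming arrow-rs's `cast` kernel, which unpacks dictionary arrays; `unpack_dictionary` is a hypothetical helper, not code from this PR:)

```rust
use std::sync::Arc;
use arrow::array::{Array, ArrayRef};
use arrow::compute::cast;
use arrow::datatypes::DataType;
use arrow::error::ArrowError;

/// Hypothetical helper: if the input is dictionary-encoded, cast it to its
/// value type so the function body only ever sees a plain, unpacked array.
fn unpack_dictionary(array: &ArrayRef) -> Result<ArrayRef, ArrowError> {
    match array.data_type() {
        DataType::Dictionary(_, value_type) => cast(array.as_ref(), value_type),
        _ => Ok(Arc::clone(array)),
    }
}
```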
I also ran into issues with this in handling cast operations and ended up flattening the dictionary type first, but just for cast expressions. I agree that we need to look at this and see if there is a more systematic approach we can use.
The trick we used to use is adding `Cast(child)` in `QueryPlanSerde`, which unpacks dictionaries. We should consolidate the unpacking logic, otherwise we will need to add it to every function. Or, until that happens, we can work around it with `Cast`.
Do you have any concrete examples of how this works, by any chance? I remember I saw some unnecessary cast operations in the query plan serde file; I didn't realize they were for unpacking dictionaries. Yes, maybe this logic should be added on the Rust planner side, which could unpack the dictionary automatically if it knows the expression cannot handle dictionary types.
Hmm, it's been a while; I cannot find it right away...
BTW, my comment above is not a blocker. Since this PR already implements it, we can follow up separately.
Not only in Comet; I do remember that in DataFusion not all functions/expressions support dictionary types. I doubt there is a systematic approach to deal with it, because I think there is no general way to process dictionary-encoded inputs across different functions/expressions. For example, some functions can work directly on the dictionary values and re-create a new dictionary with updated values, but for others that is impossible, so the dictionary needs to be unpacked first.
BTW, now I remember: `unpack_dictionary_type()` unpacks early for primitive types, so the handling for Int64, for example, can be omitted.
Hmm, why don't we re-create a new dictionary array of strings? `Hex` is the kind of function that doesn't change the dictionary-encoded mapping. You can simply take the new values and the existing keys to create a new dictionary array.
We currently have the same issue in casts from string to other types.
@viirya Do we have an example somewhere of converting a dictionary array without unpacking?
For this case, we can call `with_values` to construct a new dictionary array with the new values.
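(A minimal sketch of that values-only rewrite, assuming arrow-rs's `DictionaryArray::with_values`, which in recent versions takes an `ArrayRef`; `hex_dictionary` is a hypothetical helper, not the PR's code:)

```rust
use std::sync::Arc;
use arrow::array::{Array, DictionaryArray, StringArray};
use arrow::datatypes::Int32Type;

/// Hypothetical helper: hex-encode only the distinct dictionary values and
/// reuse the existing keys, so each distinct string is encoded exactly once.
fn hex_dictionary(dict: &DictionaryArray<Int32Type>) -> DictionaryArray<Int32Type> {
    let values = dict
        .values()
        .as_any()
        .downcast_ref::<StringArray>()
        .expect("expected Utf8 dictionary values");
    // Spark-style hex: two uppercase digits per UTF-8 byte; nulls stay null.
    let hexed: StringArray = values
        .iter()
        .map(|v| v.map(|s| s.bytes().map(|b| format!("{b:02X}")).collect::<String>()))
        .collect();
    // Keep the existing keys, swap in the new values.
    dict.with_values(Arc::new(hexed))
}
```

This avoids materializing the transform for every row: rows sharing a dictionary key get the encoded value for free.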
@viirya, thanks. Where I'm a bit unclear is how to have the function's return type also be a dictionary. The data type for the hex expression seems to be Utf8, so after making the update to use `with_values` I get:
org.apache.comet.CometNativeException: Arrow error: Invalid argument error: column types must match schema types, expected Utf8 but found Dictionary(Int32, Utf8)
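(One hedged guess at what that error implies, purely illustrative and not from this PR: the declared return type would need to mirror dictionary inputs so the produced array matches the schema, e.g. something like:)

```rust
use arrow::datatypes::DataType;

/// Hypothetical return-type rule: if the argument is dictionary-encoded,
/// declare a dictionary output with the same key type, so an array built
/// via `with_values` matches the schema the engine expects.
fn hex_return_type(arg: &DataType) -> DataType {
    match arg {
        DataType::Dictionary(key_type, _) => {
            DataType::Dictionary(key_type.clone(), Box::new(DataType::Utf8))
        }
        _ => DataType::Utf8,
    }
}
```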
@viirya Am I correct in understanding that this PR is functionally correct, just not as efficient as possible? Perhaps we could file a follow-up issue to optimize this to rewrite the dictionary? It seems we don't have a full example of a dictionary rewrite for contributors to follow.
@andygrove Yes. It's not efficient but should be correct.
This part of the dictionary-array handling can actually be rewritten to reduce duplication. It can be a follow-up, though.
I filed #504
Let's add scalar tests too.
Thanks for bringing this up. I guess I thought scalars weren't supported yet (maybe for UDFs only)? From what I could tell, none of the other tests exercise scalars; most of them insert into a table, then query the table. Also, running scalar tests seems to fail for the other UDFs I spot-checked. Should they be working with UDFs, and is it just that nothing else tests for them?
We have several scalar tests, e.g. `ceil and floor` and `scalar decimal arithmetic operations`. I agree that the existing coverage is not great. Since you have the code to handle scalars, it is best to test them. Or it is also okay to disable scalar support for now and leave it as a TODO.
Thanks, I agree it's best to test since they're there. I just genuinely thought it wasn't supported given the test coverage, the errors on the other functions, etc. I'll have a look at adding some tests...
I'm realizing that I'm not going to have time next week, and I didn't expect this PR to take this long, so I've removed the scalar handling for now; hopefully I can follow an example in the future (88bdcde).
I think we can keep the scalar functions, and it should be pretty straightforward to test scalar input? Namely, it should be something like this: the constant literal should be encoded as a `ScalarValue`.
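(A minimal sketch of the native-side `ScalarValue` path, assuming the common DataFusion pattern of materializing a scalar into a length-1 array; illustrative only, not this PR's code. Note that `ScalarValue::to_array` returns a `Result` in recent DataFusion versions:)

```rust
use std::sync::Arc;
use arrow::array::{ArrayRef, StringArray};
use datafusion::common::cast::as_string_array;
use datafusion::error::Result;
use datafusion::logical_expr::ColumnarValue;

/// Hypothetical hex kernel: normalize scalar input to a length-1 array so a
/// single code path handles both columnar input and constant literals.
fn spark_hex(args: &[ColumnarValue]) -> Result<ColumnarValue> {
    let input: ArrayRef = match &args[0] {
        ColumnarValue::Array(array) => Arc::clone(array),
        // A constant literal arrives as a ScalarValue; materialize it.
        ColumnarValue::Scalar(scalar) => scalar.to_array()?,
    };
    let strings = as_string_array(&input)?;
    // Spark-style hex: two uppercase digits per UTF-8 byte; nulls stay null.
    let hexed: StringArray = strings
        .iter()
        .map(|v| v.map(|s| s.bytes().map(|b| format!("{b:02X}")).collect::<String>()))
        .collect();
    Ok(ColumnarValue::Array(Arc::new(hexed)))
}
```

A more polished version would return `ColumnarValue::Scalar` for scalar inputs rather than a length-1 array, to preserve constant-ness downstream.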
I tried that already and got test failures because the native function isn't being recognized with scalars; the same output I mentioned with `trim` above.
Sounds like `QueryPlanSerde` is failing the case match. Wondering if you have already tried `"spark.sql.optimizer.excludedRules" -> "org.apache.spark.sql.catalyst.optimizer.ConstantFolding"`? Also, right now, since the scalar code was removed from the Rust side, it will fail if the native code happens to find scalar values...
Thanks, yes, I did also try disabling ConstantFolding to no avail.
`hex(literal)` should be evaluated to a literal early in the Spark optimizer, so it won't hit the native `hex`. I wonder what test failure you saw?