[fix](spark load)partition column is not duplicate key, spark load IndexOutOfBounds error #14661
Conversation
…exOutOfBoundsException error
LGTM
LGTM
Proposed changes
Issue Number: close #14600
Problem summary
In Spark load, the job fails when the partition column is not in the duplicate key column list.
We traced SparkDpp.java and found that the partition columns were built from the key column indexes in the base index metadata. When a partition column is not a key column, fetching it through DppColumns throws an IndexOutOfBoundsException.
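To make the index mismatch concrete, here is a minimal, hypothetical Java sketch. The names `keyColumns`, `allColumns`, and the simplified row lookup are illustrative stand-ins, not the actual SparkDpp code or the patch in this PR; the sketch only shows why resolving a non-key partition column against the key column list leads to an out-of-range index.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the index mismatch; simplified stand-ins,
// not the real SparkDpp/DppColumns code.
public class PartitionIndexSketch {
    public static void main(String[] args) {
        // Base index metadata: two duplicate-key columns plus a value column.
        List<String> keyColumns = Arrays.asList("k1", "k2");
        List<String> allColumns = Arrays.asList("k1", "k2", "event_date");

        // The partition column is a value column, not a duplicate key.
        String partitionColumn = "event_date";

        // Buggy behavior: resolving the partition column's index against
        // the key column list yields -1, so a later row.get(index)
        // throws an IndexOutOfBoundsException.
        int badIndex = keyColumns.indexOf(partitionColumn);   // -1

        // Fixed behavior: resolve the index against the full column list,
        // so non-key partition columns are found correctly.
        int goodIndex = allColumns.indexOf(partitionColumn);  // 2

        System.out.println("index from key columns: " + badIndex);
        System.out.println("index from all columns: " + goodIndex);

        // Simulated row matching the full schema.
        List<Object> row = Arrays.asList(1, "a", "2022-11-28");
        System.out.println("partition value: " + row.get(goodIndex));
        // row.get(badIndex) would throw IndexOutOfBoundsException here.
    }
}
```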
Checklist (Required)
Further comments
If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did, what alternatives you considered, etc.