The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: COLUMN #5867
Comments
@hellochueng Can you give us more details about your setup? Is it multi-writer or single-writer? Is it consistently reproducible?
cc @danny0405
You mean the error is thrown because multiple components were trying to modify the same parquet file? In the Flink write pipeline, the only component that may modify the parquet files is the
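For context, the message in the issue title is produced by parquet-mr's internal writer state machine, which shows up in the stack trace below as ParquetFileWriter$STATE. The following is a simplified, illustrative model of that check (not the actual parquet-mr source): once an exception interrupts a column write, the writer stays in the COLUMN state, and every later row-group flush fails with this message.

```java
import java.io.IOException;

// Simplified sketch of the state check behind the error message
// (modeled on the ParquetFileWriter$STATE frames in the trace below;
// not the actual parquet-mr source).
enum WriterState {
    STARTED, BLOCK, COLUMN, ENDED;

    // Starting a new row group is only legal from STARTED. If an earlier
    // exception aborted a write mid-column, the writer is stuck in COLUMN
    // and this check fails on every subsequent flush.
    WriterState startBlock() throws IOException {
        if (this != STARTED) {
            throw new IOException("The file being written is in an invalid state. "
                + "Probably caused by an error thrown previously. Current state: " + name());
        }
        return BLOCK;
    }
}

class StateDemo {
    public static void main(String[] args) throws IOException {
        WriterState s = WriterState.COLUMN; // writer left mid-column by an earlier error
        s.startBlock();                     // throws: "... Current state: COLUMN"
    }
}
```

In other words, "Current state: COLUMN" is usually a symptom: as the message itself says, the root cause is whatever exception was thrown previously during a record write.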
@hellochueng Can you please share the steps to reproduce the issue?
@danny0405 @codope 2022-06-24 21:22:55,019 ERROR org.apache.hudi.io.HoodieMergeHandle [] - Error writing record HoodieRecord{key=HoodieKey { recordKey=xxx ea125773f partitionPath=2022-06-21/18}, currentLocation='null', newLocation='null'}
Do you mean in
The issue is expected to be resolved by this PR: #6106. Feel free to re-open it if the problem still exists.
2022-06-14 19:58:19,560 ERROR org.apache.hudi.io.HoodieMergeHandle [] - Error writing record HoodieRecord{key=HoodieKey { recordKey=fdbid:79505959536,fbillid:79505959731,fentryid:16,dim:hz partitionPath=fdatemonth=202203}, currentLocation='null', newLocation='null'}
java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: COLUMN
at org.apache.hudi.org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:192) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:184) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:348) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:171) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.org.apache.parquet.hadoop.InternalParquetRecordWriter.checkBlockSizeReached(InternalParquetRecordWriter.java:148) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:130) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:301) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.io.storage.HoodieParquetWriter.writeAvroWithMetadata(HoodieParquetWriter.java:81) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.io.HoodieMergeHandle.writeRecord(HoodieMergeHandle.java:294) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.io.HoodieMergeHandle.writeInsertRecord(HoodieMergeHandle.java:273) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.io.HoodieMergeHandle.writeIncomingRecords(HoodieMergeHandle.java:369) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.io.HoodieMergeHandle.close(HoodieMergeHandle.java:377) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.table.action.commit.FlinkMergeHelper.runMerge(FlinkMergeHelper.java:108) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.table.HoodieFlinkCopyOnWriteTable.handleUpdateInternal(HoodieFlinkCopyOnWriteTable.java:368) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.table.HoodieFlinkCopyOnWriteTable.handleUpdate(HoodieFlinkCopyOnWriteTable.java:359) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.table.action.compact.HoodieCompactor.compact(HoodieCompactor.java:197) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.sink.compact.CompactFunction.doCompaction(CompactFunction.java:104) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.sink.compact.CompactFunction.lambda$processElement$0(CompactFunction.java:92) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$execute$0(NonThrownExecutor.java:93) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_281]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_281]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]
Setup: MOR table, upsert write operation.
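Reading the trace bottom-up, the failure is on the compaction path (CompactFunction → HoodieCompactor → FlinkMergeHelper.runMerge): HoodieMergeHandle.close() flushes the remaining incoming records, and the Parquet writer refuses to start a new row group because an earlier record write had already failed mid-column. Below is a minimal, self-contained sketch of that failure pattern, using hypothetical class names (not Hudi or parquet code), showing how logging-and-continuing past the first error makes close() fail with the misleading state message:

```java
import java.io.IOException;

// Hypothetical sketch: a swallowed first exception leaves a writer in a
// bad state, so close() later fails with a misleading state error.
class LeakyWriterLoop {
    public static void main(String[] args) {
        StatefulWriter writer = new StatefulWriter();
        for (String record : new String[] {"ok", "boom", "ok"}) {
            try {
                writer.write(record);
            } catch (IOException e) {
                // The first, real error is only logged (compare the
                // "Error writing record ..." lines above) ...
                System.err.println("Error writing record " + record + ": " + e.getMessage());
            }
        }
        try {
            writer.close(); // ... so the surfaced failure is the state check here.
        } catch (IOException e) {
            System.err.println(e.getMessage());
        }
    }
}

class StatefulWriter {
    private boolean midColumn = false;

    void write(String record) throws IOException {
        midColumn = true; // column write in progress
        if (record.equals("boom")) {
            throw new IOException("original failure (e.g. a bad record or type mismatch)");
        }
        midColumn = false; // column write completed cleanly
    }

    void close() throws IOException {
        if (midColumn) {
            throw new IOException("The file being written is in an invalid state. "
                + "Probably caused by an error thrown previously. Current state: COLUMN");
        }
    }
}
```

So when debugging, the first "Error writing record ..." log line is the one worth investigating; the invalid-state IOException that follows is downstream fallout.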