Kafka commitOffsetsInFinalize OOM on Flink #20689
Comments
@johnjcasey has been looking at KafkaIO and may have updates on this one.
Bringing over a significant comment:
I have very little idea where we could be leaking memory in commitOffsets, but the code hasn't substantially changed since the original bug report. @Abacn can you take a look at this while working on Kafka testing?
This seems like P2, empirically. Have we got more reports of similar problems?
None that I'm aware of.
We faced this issue with the Flink runner, but we can't provide more details since it was a long time ago.
#31682 removes a reshuffle to random keys that was previously performed when commitOffsetsInFinalize was enabled with the SDF implementation. That reshuffle was very inefficient and used far more resources. I'm going to mark this as fixed; please reopen if there are more recent OOMs.
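For context, "reshuffle to random keys" refers to the pattern of pairing each element with a random shard key, grouping by that key, and then discarding the keys, which forces every record through a full shuffle on a distributed runner. A minimal self-contained Python sketch of the idea (purely illustrative; this is not Beam's actual Reshuffle implementation, and the names are invented):

```python
import random
from collections import defaultdict

def reshuffle_via_random_keys(records, num_keys=32):
    """Assign each record a random key, group by key, then drop the
    keys. On a real runner, the group-by step forces every record
    through a full shuffle -- the overhead the linked change removes."""
    grouped = defaultdict(list)
    for record in records:
        grouped[random.randrange(num_keys)].append(record)
    # Flatten back out, discarding the temporary random keys.
    return [r for bucket in grouped.values() for r in bucket]

out = reshuffle_via_random_keys(list(range(100)))
```

The output contains the same elements, only redistributed, which is why the step can be dropped without changing results when the commit path no longer needs the redistribution.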
Hi,
I upgraded Beam from 2.19.0 (Flink 1.9) to 2.25.0 (Flink 1.11.1), and now it doesn't work.
The cluster versions I use are:
jdk1.8
apache-zookeeper-3.4.14
hadoop-3.2.1
flink-1.11.1
I submit the job with the following command:
YARN is fine, but taskmanager.log contains exceptions.
The Kafka consumer enters an infinite loop and finally reports
java.lang.OutOfMemoryError: GC overhead limit exceeded.
Below is a partial log. Please help analyze and solve it.
It worked fine for me with Beam 2.19.0, but 2.25.0 doesn't work.
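One plausible mechanism for a "GC overhead limit exceeded" error in an offset-committing consumer is pending commit state that is buffered until checkpoint finalization and never released when finalization stalls. A toy sketch of that failure mode (purely illustrative; this is not Beam or Kafka client code, and `OffsetTracker` is an invented name):

```python
from collections import deque

class OffsetTracker:
    """Toy model: offsets are buffered until a checkpoint finalizes
    them. If finalization never runs (e.g. checkpoints stall), the
    buffer grows without bound and memory usage climbs until the
    JVM-style GC-overhead OOM described above."""

    def __init__(self):
        self.pending = deque()

    def record(self, offset):
        # Every consumed record adds state that only finalize() frees.
        self.pending.append(offset)

    def finalize(self):
        # Commit and release everything buffered so far.
        committed = list(self.pending)
        self.pending.clear()
        return committed

t = OffsetTracker()
for off in range(1000):
    t.record(off)
assert len(t.pending) == 1000  # nothing is freed until finalize runs
committed = t.finalize()
```

This is only a model of unbounded buffering, not a diagnosis of the actual leak, which the thread above leaves unresolved.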
Imported from Jira BEAM-11148. Original Jira may contain additional context.
Reported by: titansfy.