
[BUG] Potential Memory Leak #120

Open
jrbentzon opened this issue Jul 25, 2024 · 1 comment

@jrbentzon
Member

Description

We've seen our Arcane operator's memory usage increase until it hits our limits, at which point the pod is restarted by Kubernetes.
There are only 5 streams attached to the operator, so it looks like it could be a memory leak:
[screenshot: operator memory usage climbing over time]

Steps to reproduce the issue

Monitor the Arcane operator's memory usage over time.
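
A simple way to capture that trend is to sample the pod's memory usage periodically. The sketch below is an assumption-laden helper, not part of the repository: it shells out to kubectl top pod (which requires metrics-server), and the namespace and label selector are placeholders for whatever the actual deployment uses.

```python
#!/usr/bin/env python3
"""Sample the operator pod's memory usage once a minute and print a time series."""
import datetime
import subprocess
import time

NAMESPACE = "arcane"               # assumption: namespace the operator is deployed in
SELECTOR = "app=arcane-operator"   # assumption: label selector matching the operator pod

while True:
    result = subprocess.run(
        ["kubectl", "top", "pod", "-n", NAMESPACE, "-l", SELECTOR, "--no-headers"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        name, _cpu, mem = line.split()
        print(f"{datetime.datetime.now().isoformat()} {name} {mem}")
    time.sleep(60)
```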

Describe the results you expected

Memory usage stays more or less constant.

System information

v0.0.10

@jrbentzon jrbentzon added the code/bug Something isn't working label Jul 25, 2024
@s-vitaliy s-vitaliy self-assigned this Jul 26, 2024
@s-vitaliy
Contributor

s-vitaliy commented Jul 26, 2024

Need to investigate. I don't see this behavior on our cluster, but we have not performed the massive migration of our streams. I will add more streams and take a look at how much memory Arcane.Operator consumes. A problem may also exist with the cache and event deduplication.
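
For context, a generic illustration of that hypothesis (not Arcane.Operator code): a deduplication cache that remembers every event ID it has ever seen grows without bound, whereas capping it with an eviction policy keeps memory flat. The class and sizes below are illustrative only.

```python
from collections import OrderedDict

class DedupCache:
    """Illustrative bounded event-deduplication cache with LRU eviction."""

    def __init__(self, max_entries: int = 10_000):
        self._seen: "OrderedDict[str, None]" = OrderedDict()
        self._max_entries = max_entries

    def is_duplicate(self, event_id: str) -> bool:
        if event_id in self._seen:
            self._seen.move_to_end(event_id)  # refresh recency on a hit
            return True
        self._seen[event_id] = None
        if len(self._seen) > self._max_entries:
            self._seen.popitem(last=False)    # evict the least recently used ID
        return False
```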

Projects
Status: Backlog