
Sentry::Scope#set_transaction_name leaks memory #2257

Closed
c960657 opened this issue Mar 7, 2024 · 9 comments · Fixed by #2324

@c960657
Contributor

c960657 commented Mar 7, 2024

Issue Description

The method name Sentry::Scope#set_transaction_name suggests that it is a simple setter, but in fact it appends the specified name to an array. AFAICT this array is never popped or truncated, so if #set_transaction_name is called in, e.g., an event loop, the array grows without bound.

I know that Sentry.with_scope is often a better solution inside a loop, but it is not always feasible, e.g. if exceptions are rescued outside the loop.
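
For reference, this is the with_scope pattern I mean (a minimal sketch; as I understand it, names set on the temporary scope are discarded when the block returns, but it only helps when the work and its error handling both fit inside the block):

loop do
  Sentry.with_scope do |scope|
    # The block receives a temporary copy of the current scope, so this
    # name does not accumulate on the long-lived scope.
    scope.set_transaction_name('foo')
    # ... handle one unit of work ...
  end
end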

Reproduction Steps

while true do
  Sentry.configure_scope do |scope|
    scope.set_transaction_name('foo')
    p scope.transaction_names.size
  end
end

Expected Behavior

The transaction names should not accumulate.

Actual Behavior

An ever-increasing number is output, showing that the Sentry::Scope#transaction_names array keeps growing. Eventually a NoMemoryError is raised.
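
A quick way to see the growth in bytes as well as element count, using only Ruby's ObjectSpace and the public transaction_names reader from the reproduction above (exact numbers will vary by Ruby version):

require 'objspace'

10_000.times do
  Sentry.configure_scope { |scope| scope.set_transaction_name('foo') }
end

Sentry.configure_scope do |scope|
  # Both values keep climbing with every call on an affected version.
  puts scope.transaction_names.size
  puts ObjectSpace.memsize_of(scope.transaction_names)
end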

Ruby Version

3.2.3

SDK Version

5.15.0

Integration and Its Version

No response

Sentry Config

No response

@sl0thentr0py
Member

thx for the report, this whole transaction name stuff is a bit of a mess at the moment. In general, there's no reason it needs to be an array; we only care about the last one anyway.
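
A hypothetical sketch of that direction (illustrative only, not the actual patch in #2324; it assumes the source: keyword from recent SDK versions and omits everything else Scope does):

class Scope
  attr_reader :transaction_name, :transaction_source

  # Keep only the most recent name and source instead of pushing onto arrays.
  def set_transaction_name(transaction_name, source: :custom)
    @transaction_name = transaction_name
    @transaction_source = source
  end
end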

@henrikbjorn

@sl0thentr0py This issue seems to impact different parts of a Rails system. For example, ActionCable would be affected via https://github.com/getsentry/sentry-ruby/blob/master/sentry-rails/lib/sentry/rails/action_cable.rb#L19

Also, it seems that the method now pushes to a second array as well, which doubles the amount of memory persisted: https://github.com/getsentry/sentry-ruby/blob/master/sentry-ruby/lib/sentry/scope.rb#L235

What are the next steps?

@sl0thentr0py
Member

I'm just gonna fix this, no need for it to be an array

@henrikbjorn

@sl0thentr0py Cool

Is there a timeframe for cutting a new release with this fix included?

@sl0thentr0py
Member

@henrikbjorn

@sl0thentr0py Do you have any testing of this or other memory measurement systems in place?

It has been a long journey for us to stumble upon this in our application.

It also seems to have gradually gotten worse over time, with the addition of new tracing features, bookmarks, etc.

@sl0thentr0py
Member

We don't really have a real-world Ruby/Rails app to test our SDK on; with Python/Django we have our own Sentry monolith, so we catch these things faster there.

To some degree, we rely on our community to report critical problems. In the very long run we will try to set up non-trivial smoke-test and benchmark pipelines, but it's hard to dedicate too many engineering resources to this: ultimately we're building libraries, and accounting for every possible production app out there is a non-trivial problem.

But thanks for the feedback, we'll try to be more stable.

I'm also curious: this transaction name stuff has been around forever, so did you notice this after an update, or have you always had this problem?

@henrikbjorn

We noticed it after migrating from the Faye WebSocket framework to the default Rails ActionCable framework.

Since then we have had servers slowly crawl up in memory and then get recycled by our ops monitoring.

Recently we saw the same thing in a new project with a very simple ActionCable setup as well. Disabling Sentry removed all of the memory issues (I have sent graphs to your support team).

@henrikbjorn

I have a feeling we haven't seen the same in our servers running web traffic via Puma, as Puma has probably restarted its workers after x requests and therefore it hasn't been a problem.
