
Memory leak in recent releases of Sentry (>= 8.6.0) #12317

Closed
3 tasks done
jc3m opened this issue May 31, 2024 · 21 comments · Fixed by #12335
Labels: Package: nextjs (Issues related to the Sentry Nextjs SDK), Type: Bug

Comments


jc3m commented May 31, 2024

Is there an existing issue for this?

How do you use Sentry?

Sentry Saas (sentry.io)

Which SDK are you using?

@sentry/nextjs

SDK Version

8.7.0

Framework Version

No response

Link to Sentry event

No response

SDK Setup

Sentry.init({
  enabled: Configuration.current.sentry.enabled,
  dsn: Configuration.current.sentry.dsn,

  // Adjust this value in production, or use tracesSampler for greater control
  tracesSampleRate: 0.1,

  // Setting this option to true will print useful information to the console while you're setting up Sentry.
  debug: false,

  replaysOnErrorSampleRate: 1.0,

  // This sets the sample rate to be 10%. You may want this to be 100% while
  // in development and sample at a lower rate in production
  replaysSessionSampleRate: 0.1,

  integrations: [
    Sentry.httpClientIntegration(),
    Sentry.replayIntegration({ maskAllText: true, blockAllMedia: true }),
  ],
});

Steps to Reproduce

Next.js application using "@sentry/nextjs": "8.7.0", (here is a full list of dependencies)

Service is deployed via AWS ECS + Fargate

We noticed that, starting with our first deploy after upgrading from 8.3.0 to 8.6.0, our containers began hitting their memory limits and crashing/restarting. We observed this behavior across two separate Next.js applications/containers that were upgraded to 8.6.0 at the same time.

Expected Result

Containers stay under memory limit.

Actual Result

Here is a memory usage graph from one of our containers. Version 8.3.0 does not appear to exhibit the issue, while versions >= 8.6.0 do; we did not check versions 8.4.0 or 8.5.0.

[Screenshot: memory usage graph, 2024-05-31 11:42 AM]

Here are some logs observed at time of crash:

<--- Last few GCs --->
[6:0x6b26810] 13610819 ms: Mark-Compact 251.0 (258.8) -> 250.0 (258.8) MB, 1783.67 / 0.01 ms  (average mu = 0.472, current mu = 0.230) allocation failure; GC in old space requested
[6:0x6b26810] 13613120 ms: Mark-Compact 250.0 (258.8) -> 249.9 (259.1) MB, 1992.86 / 0.00 ms  (average mu = 0.336, current mu = 0.134) allocation failure; GC in old space requested
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xc99970 node::Abort() [next-server (v14.2.2)]
 2: 0xb6ffcb  [next-server (v14.2.2)]
 3: 0xebe9f0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [next-server (v14.2.2)]
 4: 0xebecd7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [next-server (v14.2.2)]
 5: 0x10d0785  [next-server (v14.2.2)]
 6: 0x10e8608 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [next-server (v14.2.2)]
 7: 0x10be721 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [next-server (v14.2.2)]
 8: 0x10bf8b5 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [next-server (v14.2.2)]
 9: 0x109bef6 v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [next-server (v14.2.2)]
10: 0x108d53c v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawArray(int, v8::internal::AllocationType) [next-server (v14.2.2)]
11: 0x108d923 v8::internal::FactoryBase<v8::internal::Factory>::NewWeakFixedArrayWithMap(v8::internal::Map, int, v8::internal::AllocationType) [next-server (v14.2.2)]
12: 0x10b39c9 v8::internal::Factory::NewTransitionArray(int, int) [next-server (v14.2.2)]
13: 0x13ed56a v8::internal::TransitionsAccessor::Insert(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Map>, v8::internal::Handle<v8::internal::Name>, v8::internal::Handle<v8::internal::Map>, v8::internal::SimpleTransitionFlag) [next-server (v14.2.2)]
14: 0x137f750 v8::internal::Map::ConnectTransition(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Map>, v8::internal::Handle<v8::internal::Map>, v8::internal::Handle<v8::internal::Name>, v8::internal::SimpleTransitionFlag) [next-server (v14.2.2)]
15: 0x1381672 v8::internal::Map::CopyReplaceDescriptors(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Map>, v8::internal::Handle<v8::internal::DescriptorArray>, v8::internal::TransitionFlag, v8::internal::MaybeHandle<v8::internal::Name>, char const*, v8::internal::SimpleTransitionFlag) [next-server (v14.2.2)]
16: 0x1381e56 v8::internal::Map::CopyAddDescriptor(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Map>, v8::internal::Descriptor*, v8::internal::TransitionFlag) [next-server (v14.2.2)]
17: 0x1382004 v8::internal::Map::CopyWithField(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Map>, v8::internal::Handle<v8::internal::Name>, v8::internal::Handle<v8::internal::FieldType>, v8::internal::PropertyAttributes, v8::internal::PropertyConstness, v8::internal::Representation, v8::internal::TransitionFlag) [next-server (v14.2.2)]
18: 0x1383d0a v8::internal::Map::TransitionToDataProperty(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Map>, v8::internal::Handle<v8::internal::Name>, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::PropertyConstness, v8::internal::StoreOrigin) [next-server (v14.2.2)]
19: 0x137327a v8::internal::LookupIterator::PrepareTransitionToDataProperty(v8::internal::Handle<v8::internal::JSReceiver>, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::StoreOrigin) [next-server (v14.2.2)]
20: 0x138ecc8 v8::internal::Object::TransitionAndWriteDataProperty(v8::internal::LookupIterator*, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::Maybe<v8::internal::ShouldThrow>, v8::internal::StoreOrigin) [next-server (v14.2.2)]
21: 0x1308007 v8::internal::JSObject::AddProperty(v8::internal::Isolate*, v8::internal::Handle<v8::internal::JSObject>, v8::internal::Handle<v8::internal::Name>, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes) [next-server (v14.2.2)]
22: 0x10a94c6 v8::internal::Factory::NewFunctionPrototype(v8::internal::Handle<v8::internal::JSFunction>) [next-server (v14.2.2)]
23: 0xf252c8 v8::internal::Accessors::FunctionPrototypeGetter(v8::Local<v8::Name>, v8::PropertyCallbackInfo<v8::Value> const&) [next-server (v14.2.2)]
24: 0x138ffb5 v8::internal::Object::GetPropertyWithAccessor(v8::internal::LookupIterator*) [next-server (v14.2.2)]
25: 0x11955d8 v8::internal::LoadIC::Load(v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Name>, bool, v8::internal::Handle<v8::internal::Object>) [next-server (v14.2.2)]
26: 0x119af34 v8::internal::Runtime_LoadIC_Miss(int, unsigned long*, v8::internal::Isolate*) [next-server (v14.2.2)]
27: 0x1931ef6  [next-server (v14.2.2)]
<--- Last few GCs --->
--
[14:0x6e58810] 16359129 ms: Scavenge 247.7 (257.6) -> 247.1 (257.8) MB, 2.90 / 0.00 ms  (average mu = 0.206, current mu = 0.068) allocation failure;
[14:0x6e58810] 16359208 ms: Scavenge 247.9 (257.8) -> 247.2 (258.1) MB, 3.32 / 1.43 ms  (average mu = 0.206, current mu = 0.068) allocation failure;
[14:0x6e58810] 16360814 ms: Mark-Compact 248.1 (258.1) -> 246.3 (258.1) MB, 1592.98 / 43.31 ms  (average mu = 0.204, current mu = 0.201) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xc99970 node::Abort() [next-server (v14.2.2)]
2: 0xb6ffcb  [next-server (v14.2.2)]
3: 0xebe9f0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [next-server (v14.2.2)]
4: 0xebecd7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [next-server (v14.2.2)]
5: 0x10d0785  [next-server (v14.2.2)]
6: 0x10d0d14 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [next-server (v14.2.2)]
7: 0x10e7c04 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*) [next-server (v14.2.2)]
8: 0x10e841c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [next-server (v14.2.2)]
9: 0x10be721 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [next-server (v14.2.2)]
10: 0x10bf8b5 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [next-server (v14.2.2)]
11: 0x109ce26 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [next-server (v14.2.2)]
12: 0x14f7c56 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [next-server (v14.2.2)]

lforst (Member) commented Jun 3, 2024

Hi, thanks for writing in about this. We are currently taking a look at what might be causing this!

A question and an ask:

  • Would you mind sharing your sentry.server.config.ts and/or the contents of your instrumentation.ts?
  • (Only if it is not too much to ask!) Would you mind deploying versions 8.4.0 and 8.5.0 to help us narrow down which changes might have introduced the leak?

Also, if you have any custom code around Sentry feel free to share it here!

AlecRust commented Jun 3, 2024

@lforst if it helps, we have the same issue (also using Next.js) and it was caused by upgrading from 8.4.0 to 8.5.0, so I'm fairly confident the issue lies somewhere in the 8.4.0 → 8.5.0 changes.

We are currently pinned to 8.4.0, which doesn't have this issue.

Here is our `instrumentation.ts` if it helps
// This file configures the initialization of Sentry on the server.
// The config you add here will be used whenever the server handles a request.
// https://docs.sentry.io/platforms/javascript/guides/nextjs/

import { checkIfKnownBot, cleanSentryEvent } from '@scripts/clean-sentry-events'
import * as Sentry from '@sentry/nextjs'

/* eslint no-process-env: "off" */
const SENTRY_DSN = process.env.SENTRY_DSN || process.env.NEXT_PUBLIC_SENTRY_DSN

export async function register() {
	if (process.env.NEXT_RUNTIME === 'nodejs') {
		Sentry.init({
			dsn: SENTRY_DSN,
			tracesSampleRate: 0.25,
			environment: process.env.ENVIRONMENT,
			beforeSend: (...args) => {
				const [event, hint] = args

				// Sentry now treats error information as `unknown`, which means the only way to handle these types is to treat them as `any` and check for existence at runtime
				// https://github.com/getsentry/sentry-javascript/pull/7361

				const originalException = hint?.originalException as any

				const isKnownBot = checkIfKnownBot(event.request?.headers?.['user-agent'])
				if (isKnownBot) {
					return null
				}

				const mainAndMaybeCauseErrors = event.exception?.values?.slice(-2) ?? []

				for (const error of mainAndMaybeCauseErrors) {
					const { type } = error
					const is404 = type === 'Not Found'
					const is403 = type === 'Forbidden'
					const is401 = type === 'Unauthorized'
					const isStripeCardError = originalException?.data?.reason?.raw?.type === 'card_error'
					if (is404 || is401 || is403 || isStripeCardError) {
						return null
					}
				}

				return cleanSentryEvent('server', ...args)
			},
			sendDefaultPii: false,
			ignoreErrors: [
				/**
				 * Error when the user is using an invalid JWT or a JWT that points to a different user.
				 * This is not actionable and is often caused by an employee using multiple accounts at the same time.
				 */
				/Not Found/i,
				/Forbidden/i,
				/Unauthorized/i,

				/**
				 * Error when we check an expired or malformed token. This is not actionable.
				 */
				/Received invalid bearer token/i,
			],
			integrations: [
				Sentry.captureConsoleIntegration({
					levels: ['error', 'warn'],
				}),
			],
		})
	}

	if (process.env.NEXT_RUNTIME === 'edge') {
		Sentry.init({
			dsn: SENTRY_DSN,
			tracesSampleRate: 0.25,
			environment: process.env.ENVIRONMENT,
			beforeSend: (...args) => {
				const [event] = args
				const isKnownBot = checkIfKnownBot(event.extra?.userAgent as string)
				if (isKnownBot) {
					return null
				}
				return cleanSentryEvent('edge', ...args)
			},
			sendDefaultPii: false,
			ignoreErrors: [
				/**
				 * Error when the user is using an invalid JWT or a JWT that points to a different user.
				 * This is not actionable and is often caused by an employee using multiple accounts at the same time.
				 */
				/Not Found/i,
				/Forbidden/i,
				/Unauthorized/i,

				/**
				 * Error when we check an expired or malformed token. This is not actionable.
				 */
				/Received invalid bearer token/i,

				/**
				 * Validation of expired tokens is not actionable
				 */
				/"exp" claim timestamp check failed/i,
			],
			integrations: [],
		})
	}
}

PodStuart commented:

Also seeing this on a bump from 8.3.0 to 8.5.0 last week.

lforst (Member) commented Jun 3, 2024

@AlecRust that helps a lot with narrowing it down. Thanks! I'll investigate further.

lforst (Member) commented Jun 3, 2024

One more question for the people following this issue: does anybody experience this with SDKs other than @sentry/nextjs? So far, affected users exclusively seem to be using that SDK.

lforst (Member) commented Jun 3, 2024

Update: I am struggling to find the culprit. If anybody is able to share a memory profile of this happening, that would be awesome. Also, any information on what kind of instrumentation/database ORM you use is super useful.
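
For anyone willing to share one, here is a minimal sketch of capturing a heap snapshot from a running Node process (a sketch only; it assumes Node >= 11.13 and a hypothetical preload file name - the resulting .heapsnapshot file can be opened in Chrome DevTools):

// heap-snapshot.js - preload with `node -r ./heap-snapshot.js server.js` (illustrative)
const v8 = require('v8');

// Write a snapshot whenever the process receives SIGUSR2, so it can be
// triggered while the leak is actively reproducing.
process.on('SIGUSR2', () => {
  const file = v8.writeHeapSnapshot();
  console.log(`Heap snapshot written to ${file}`);
});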

AbhiPrasad (Member) commented:

Another note: if you remove tracesSampleRate/tracesSampler from your config (to disable performance monitoring), do the memory issues still occur? Curious whether this leak is tied to spans/performance/tracing.

We've also heard about memory concerns from people using Postgres - is anyone here using that?
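
For reference, a minimal sketch of that experiment based on the init options from the original report (tracing is disabled simply by leaving out tracesSampleRate/tracesSampler):

Sentry.init({
  enabled: Configuration.current.sentry.enabled,
  dsn: Configuration.current.sentry.dsn,
  // tracesSampleRate / tracesSampler intentionally omitted: with neither option
  // set, performance monitoring (span/trace collection) stays disabled, which
  // isolates whether the leak is tied to tracing.
  debug: false,
});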

lforst (Member) commented Jun 3, 2024

I am like 90% confident I found the leak, and I hope I never have to touch a memory profiler again in my life: #12335

Why it leaks, no idea, but it leaks.

mitsuhiko (Member) commented Jun 4, 2024

I figured out the issue now, but it's not entirely clear to me yet why it was triggered. On Node, calling setTimeout returns a Timeout object. That object is tracked in an internal list of timers, and that list is maintained in two places: on the one hand in unenroll, which is used by clearTimeout (and clearInterval), and on the other when the timer runs.

However, only the unenroll path also removes a timer from the internal knownTimersById map. This map is updated whenever the Timeout is converted into a primitive; from that moment onwards, a timer can be cleared by its internal async id.

So to get a setTimeout to leak, you just need to call +setTimeout(...) and wait for it to complete. The entry in the knownTimersById map is never removed, and we leak.

The memory dump that @lforst shared with me indicates that we have timers leaked in knownTimersById and they all have their Symbol(kHasPrimitive) flag set to true. This means something, somewhere, converts timeouts into primitives. This still happens even after the patch, but I don't know yet where it happens.

The repro case is trivial:

// leaks
for (let i = 0; i < 500000; i++) {
  +setTimeout(() => {}, 0);
}

This will create 500,000 uncollectable Timeouts that can be found in the knownTimersById map in timers.js. Removing the + fixes it. There are other situations in which JavaScript will convert something into a primitive. For instance, using the timeout as an object key will do that:

> x = {}
{}
> x[setTimeout(() => {}, 0)] = 42;
42
> x
{ '119': 42 }

So there might be some patterns in either our codebase or in OpenTelemetry that do that.

Independent of that, we should open an issue against Node, as there is clearly a bug there: the Timer is removed from the list here, but not from knownTimersById: https://github.com/nodejs/node/blob/7d14d1fe068dfb34947eb4d328699680a1f5e75d/lib/internal/timers.js#L544-L545

Compare this to how unenroll clears: https://github.com/nodejs/node/blob/7d14d1fe068dfb34947eb4d328699680a1f5e75d/lib/timers.js#L86-L93
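
A way to watch the effect from user land (a sketch only; knownTimersById itself is internal to Node, so this just compares retained heap after the timers have fired - run with and without the "coerce" argument, ideally under --expose-gc):

// leak-demo.js - sketch based on the repro above, not the SDK fix
const coerce = process.argv[2] === 'coerce';

for (let i = 0; i < 500000; i++) {
  const t = setTimeout(() => {}, 0);
  if (coerce) +t; // coercion registers the Timeout in knownTimersById
}

setTimeout(() => {
  globalThis.gc?.(); // only available with --expose-gc
  const mb = Math.round(process.memoryUsage().heapUsed / 1024 / 1024);
  console.log(`${coerce ? 'coerced' : 'plain'}: ~${mb} MB heap used after the timers completed`);
}, 2000);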

AbhiPrasad (Member) commented:

Always love it when SDK bugs actually reveal bugs with node.js or the browser 😂

We spent some time trying to reproduce why the timeout becomes a primitive.

With just initializing the SDK, setTimeout.toString() does not change, nor does kHasPrimitive get flagged on the timer object. This means that in a minimal repro, the SDK does not seem to be causing this behaviour.

Timeout {
  _idleTimeout: 1000,
  _idlePrev: null,
  _idleNext: null,
  _idleStart: 80,
  _onTimeout: [Function (anonymous)],
  _timerArgs: undefined,
  _repeat: null,
  _destroyed: true,
  [Symbol(refed)]: true,
  [Symbol(kHasPrimitive)]: false,
  [Symbol(asyncId)]: 6,
  [Symbol(triggerId)]: 1
}

Next we looked at AsyncLocalStorage, given that both the SDK and Next.js rely on it. This also seems to have no impact.

Timeout {
  _idleTimeout: 1000,
  _idlePrev: null,
  _idleNext: null,
  _idleStart: 81,
  _onTimeout: [Function (anonymous)],
  _timerArgs: undefined,
  _repeat: null,
  _destroyed: true,
  [Symbol(refed)]: true,
  [Symbol(kHasPrimitive)]: false,
  [Symbol(asyncId)]: 8,
  [Symbol(triggerId)]: 1,
  [Symbol(kResourceStore)]: 0
}

So this means the problem is with Next.js for sure.

@lforst ran some tests and apparently sometimes Next.js patches setTimeout to cause this behaviour! This is because they have some resource tracking class that holds references to all timers.

[Screenshot: test output showing Next.js patching setTimeout]

https://github.com/vercel/next.js/blob/abff797e92012abed0260e4e0db67abe222be06c/packages/next/src/server/web/sandbox/resource-managers.ts#L44
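
The gist of that pattern, as an illustrative sketch only (hypothetical class, not the actual resource-managers.ts source): the sandbox keeps the numeric id of every timer it creates, and obtaining that id coerces the Timeout to a primitive - exactly the trigger described above.

class TimeoutManager {
  constructor() {
    this.ids = [];
  }

  add(callback, ms) {
    // Converting the Timeout object to a number coerces it to a primitive,
    // which registers it in Node's internal knownTimersById map.
    const id = Number(setTimeout(callback, ms));
    this.ids.push(id);
    return id;
  }

  releaseAll() {
    // Clearing by numeric id goes through clearTimeout/unenroll, which does
    // remove the knownTimersById entry - but timers that simply fired on their
    // own never take that path, so their entries stay behind (the Node bug above).
    for (const id of this.ids) clearTimeout(id);
    this.ids = [];
  }
}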

This functionality was introduced in Next to fix another Node.js bug 😓 vercel/next.js#57235

I'll let @lforst say it best

[Screenshot of @lforst's comment]

mydea (Member) commented Jun 7, 2024

Hey, we've just released 8.8.0 which should hopefully fix this issue! Let us know if you still notice any problems.

AnthonyDugarte commented:

Experiencing huge memory leaks on @sentry/node@8.9.2, will downgrade to 8.3.0

AbhiPrasad (Member) commented:

@AnthonyDugarte memory leaks are P0 for us to fix - could you open a new GitHub issue with details about your setup? We'll get someone on that ASAP!

lincolnthree commented Sep 6, 2024

Also getting memory leaks on 8.28.0.

From my crude analysis, it does appear to be the same Timeout issue:

[Image: memory profile]

madsbuch commented:

We experience a memory leak in (at least) @sentry/bun@8.34.0.

linfanx commented Oct 14, 2024

> Also getting memory leaks on 8.28.0.
>
> From my crude analysis, it does appear to be the same Timeout issue:
>
> [Image: memory profile]

Have you resolved this? I also get this! :((

linfanx commented Oct 15, 2024

Hey, I fixed this by using Node LTS 20.18.0: nodejs/node#53337

IRediTOTO commented:

I get this error too. My VPS has up to 8/12 GB of RAM available...
Is there another way to upload source maps after the build?

Here is my log:

#13 10.58   ▲ Next.js 14.2.15
#13 10.58   - Environments: .env
#13 10.58   - Experiments (use with caution):
#13 10.58     · instrumentationHook
#13 10.58 
#13 10.73    Creating an optimized production build ...
#13 184.5 
#13 184.5 <--- Last few GCs --->
#13 184.5 
#13 184.5 [42:0x31bde10]   179422 ms: Mark-Compact 2035.1 (2084.6) -> 2031.0 (2086.4) MB, 1058.58 / 0.00 ms  (average mu = 0.340, current mu = 0.056) allocation failure; scavenge might not succeed
#13 184.5 [42:0x31bde10]   180641 ms: Mark-Compact 2035.1 (2086.4) -> 2033.1 (2088.1) MB, 1190.60 / 0.00 ms  (average mu = 0.210, current mu = 0.023) allocation failure; scavenge might not succeed
#13 184.5 
#13 184.5 
#13 184.5 <--- JS stacktrace --->
#13 184.5 
#13 184.5 FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
#13 184.5 ----- Native stack trace -----
#13 184.5 
#13 184.5  1: 0xaaae2f node::OOMErrorHandler(char const*, v8::OOMDetails const&) [node]
#13 184.5  2: 0xe308c0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
#13 184.5  3: 0xe30ca4 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
#13 184.5  4: 0x10604c7  [node]
#13 184.5  5: 0x1079039 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
#13 184.5  6: 0x1051ca7 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
#13 184.5  7: 0x10528e4 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
#13 184.5  8: 0x1031c0e v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
#13 184.5  9: 0x149b930 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
#13 184.5 10: 0x18ddef6  [node]
#13 185.0 error: script "build" was terminated by signal SIGABRT (Abort)
#13 185.0 ============================================================
#13 185.0 Bun v1.1.20 (ae194892) Linux x64
#13 185.0 Linux Kernel v5.15.0 | glibc v2.39
#13 185.0 CPU: sse42 popcnt avx avx2
#13 185.0 Args: "bun" "run" "build"
#13 185.0 Features: spawn 
#13 185.0 Elapsed: 183379ms | User: 14ms | Sys: 13ms
#13 185.0 RSS: 1.07GB | Peak: 14.41MB | Commit: 1.07GB | Faults: 3
#13 185.0 
#13 185.0 panic(main thread): Segmentation fault at address 0x0
#13 185.0 oh no: Bun has crashed. This indicates a bug in Bun, not your code.
#13 185.0 
#13 185.0 To send a redacted crash report to Bun's team,
#13 185.0 please file a GitHub issue using the link below:
#13 185.0 
#13 185.0  https://bun.report/1.1.20/lr1ae19489AggggE+7iQ_________A2AA
#13 185.0 
#13 ERROR: process "/bin/bash -ol pipefail -c NODE_OPTIONS=--max-old-space-size=8192 prisma generate && bun run build" did not complete successfully: exit code: 132
------
 > [stage-0  9/11] RUN --mount=type=cache,id=oskgwswo404kcw4ggk40g40s-next/cache,target=/app/.next/cache --mount=type=cache,id=oskgwswo404kcw4ggk40g40s-node_modules/cache,target=/app/node_modules/.cache NODE_OPTIONS=--max-old-space-size=8192 prisma generate && bun run build:
185.0 RSS: 1.07GB | Peak: 14.41MB | Commit: 1.07GB | Faults: 3
185.0 
185.0 panic(main thread): Segmentation fault at address 0x0
185.0 oh no: Bun has crashed. This indicates a bug in Bun, not your code.
185.0 
185.0 To send a redacted crash report to Bun's team,
185.0 please file a GitHub issue using the link below:
185.0 
185.0  https://bun.report/1.1.20/lr1ae19489AggggE+7iQ_________A2AA
185.0 
------
Dockerfile:24
--------------------
  22 |     # build phase
  23 |     COPY . /app/.
  24 | >>> RUN --mount=type=cache,id=oskgwswo404kcw4ggk40g40s-next/cache,target=/app/.next/cache --mount=type=cache,id=oskgwswo404kcw4ggk40g40s-node_modules/cache,target=/app/node_modules/.cache NODE_OPTIONS=--max-old-space-size=8192 prisma generate && bun run build
  25 |     
  26 |     
--------------------
ERROR: failed to solve: process "/bin/bash -ol pipefail -c NODE_OPTIONS=--max-old-space-size=8192 prisma generate && bun run build" did not complete successfully: exit code: 132
Deployment failed. Removing the new version of your application.

AbhiPrasad (Member) commented:

@IRediTOTO this seems like a bug in Bun from looking at the logs. I recommend you follow up with the Bun team to fix this. Switching to Node.js in the meantime will probably unblock you.

IRediTOTO commented:

> @IRediTOTO this seems like a bug in Bun from looking at the logs. I recommend you follow up with the Bun team to fix this. Switching to Node.js in the meantime will probably unblock you.

No, I tried many things:

  • changing RAM swap on the server to 12 GB
  • using Node.js 20 + npm
  • ...

The only way I found to fix it is the option below in next.config.mjs - do you know any other way?

sourcemaps: {
  disable: true,
},
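
For context, a minimal sketch of where that option lives (assuming the standard withSentryConfig wrapper from @sentry/nextjs; the base config name is illustrative):

// next.config.mjs - sketch of the workaround described above
import { withSentryConfig } from '@sentry/nextjs';

/** @type {import('next').NextConfig} */
const nextConfig = {
  // ...existing Next.js options...
};

export default withSentryConfig(nextConfig, {
  sourcemaps: {
    // Skips source map handling in the Sentry build step, which is what
    // worked around the build-time OOM for the commenter above.
    disable: true,
  },
});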

AbhiPrasad (Member) commented:

If you disable Sentry but enable generating source maps, does the error still occur? (Basically: is the problem with uploading, or with the actual source map generation?)

You can check this with productionBrowserSourceMaps: true and Sentry disabled: https://nextjs.org/docs/app/api-reference/next-config-js/productionBrowserSourceMaps

If the problem exists with just source map generation, this is a Next.js problem - it's Next.js/webpack that generates the source maps.
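
A sketch of that isolation test (productionBrowserSourceMaps is a standard Next.js option; the point is to generate source maps without the Sentry wrapper in place):

// next.config.mjs - sketch: source map generation on, Sentry wrapper removed
/** @type {import('next').NextConfig} */
const nextConfig = {
  productionBrowserSourceMaps: true,
};

// Intentionally not wrapped in withSentryConfig for this test.
export default nextConfig;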
