Next.js development high memory usage #54708
@noetix Disagree that this had anything to do with bullying. All I did was explain that my posts are continuously being ignored, including in this issue. Then I explained what to do, which is to provide a runnable application. There is nothing we can do without a runnable example, which was already asked for in the initial issue; I even made it bold to highlight that further.

Happy to explain it again: the reason we can't do anything without runnable code is that, in order to narrow down the memory usage, we need to change the Next.js core code in the application, for example to disable client components compilation and such, in order to narrow down where the memory usage comes from. There is no way to do that based on screenshots / messages / information you can provide, as it would require countless hours of your time and our time (think 2 weeks full time, at least) to investigate this.

The emoji reactions not being shown for off-topic-marked posts is a bug in GitHub. As mentioned in the initial issue, any posts that don't include a reproduction will be automatically hidden. Since you didn't like the earlier explanation I'll just remove it; I don't feel strongly about keeping the comment. It definitely wasn't bullying; you were reading that into it. Bullying would be the threats I've received recently from anonymous developers on Twitter that they'll come visit my house soon...

@weyert We haven't made changes to development memory usage besides the PR linked in the issue, so really all I need is a reproduction. Luckily @AhmedChabayta posted one; hopefully that is enough, fingers crossed.

@Thinkscape Please open a separate issue; that bug would be separate from this one 👍
I've posted a reproduction here: https://github.com/limeburst/vercel-nextjs-54708

Start the development server, navigate from page to page, and watch the memory usage grow until the server restarts.
You can use this code to see the issue: https://github.com/codelitdev/courselit/tree/tailwindcss-2. The server is getting aborted silently, without any errors.

Logs:

```
rajat@rajat-laptop:~/projects/courselit$ yarn dev
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env.local
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env
- ready started server on [::]:3000, url: http://localhost:3000
- event compiled client and server successfully in 545 ms (18 modules)
- wait compiling...
- event compiled client and server successfully in 263 ms (18 modules)
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env.local
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env.local
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env
- wait compiling /404 (client and server)...
- wait compiling / (client and server)...
rajat@rajat-laptop:~/projects/courselit$
```
I'm getting 'server out of memory' after a while when letting the server run and writing/saving code that calls the following functions a few times.

```js
// authSheets.js
import { google } from "googleapis";

export async function authSheets() {
  // Function for authentication object
  const auth = new google.auth.GoogleAuth({
    keyFile: "./auth/auth-sa-sptk.json",
    scopes: ["https://www.googleapis.com/auth/spreadsheets"],
  });
  // Create client instance for auth
  const authClient = await auth.getClient();
  // Instance of the Sheets API
  const sheets = google.sheets({ version: "v4", auth: authClient });
  return { auth, authClient, sheets };
}
```

```js
// clearSheetContents.js
import { authSheets } from "./authSheets";

export async function clearSheetContents(sheetName) {
  console.log("sheet =", sheetName);
  const SHEET_ID = "123";
  const sheetId = SHEET_ID;
  const { sheets } = await authSheets();
  try {
    const result = await sheets.spreadsheets.values.clear({
      spreadsheetId: sheetId,
      range: sheetName,
    });
    console.log("result.data =", result.data);
  } catch (err) {
    // TODO (developer) - Handle exception
    throw err;
  }
}
```

```js
// setSheetValues.js
import { authSheets } from "./authSheets";

// https://developers.google.com/sheets/api/guides/values
export async function setSheetValues(sheetName, input) {
  const SHEET_ID = "123";
  const sheetId = SHEET_ID;
  const values = [input];
  const resource = { values };
  // Updates require a valid ValueInputOption parameter
  const valueInputOption = "RAW"; // The input is not parsed and is inserted as a string.
  const { sheets } = await authSheets();
  try {
    const result = await sheets.spreadsheets.values.append({
      spreadsheetId: sheetId,
      range: sheetName,
      valueInputOption,
      resource,
    });
    console.log("result.data =", result.data);
  } catch (err) {
    // TODO (developer) - Handle exception
    throw err;
  }
}
```

package.json:

```json
{
  "name": "test",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  },
  "dependencies": {
    "autoprefixer": "10.4.15",
    "axios": "^1.5.0",
    "encoding": "^0.1.13",
    "eslint": "8.48.0",
    "eslint-config-next": "13.4.19",
    "googleapis": "^126.0.1",
    "next": "13.4.19",
    "postcss": "8.4.29",
    "react": "18.2.0",
    "react-dom": "18.2.0",
    "tailwindcss": "3.3.3"
  }
}
```

Node v18.17.1
I have the same issue when running the development server.
After doing some profiling, it seems like server modules are leaked across HMR in development, which in my company's case leads to rapid memory growth due to a large backend (see #62217, auto-closed unfortunately). Curious if anyone else can reproduce / confirm the same thing, or confirm theirs doesn't leak.
I have pinned down the issue to Next's global fetch implementation. I've created a repo that is very simple, re-creates the memory leak, and uses the same logic that bypasses it by using Node's http module. There seems to be an issue surrounding Next.js's patched fetch.
…ependency (#63321)

## History

Previously, we added support for `squoosh` because it was a wasm implementation that "just worked" on all platforms when running `next dev` for the first time. However, it was slow, so we always recommended manually installing `sharp` for production use cases running `next build` and `next start`.

Now that [`sharp` supports webassembly](https://sharp.pixelplumbing.com/install#webassembly), we no longer need to maintain `squoosh`, so it can be removed. We also don't need to make the user install `sharp` manually, because it can be installed under `optionalDependencies`. I left it optional in case there was some platform that still needed to manually install the wasm variant with `npm install --cpu=wasm32 sharp`, such as codesandbox/stackblitz (I don't believe `sharp` has any fallback built in yet).

Since we can guarantee `sharp`, we can also remove the `get-orientation` dep and upgrade the `image-size` dep. I also moved an [existing `sharp` test](#56674) into its own fixture, since it was unrelated to image optimization.

## Related Issues

- Fixes #41417
- Related #54670
- Related #54708
- Related #44804
- Related #48820
- Related #61810
- Related #61696
- Related #44685
- Closes #64362

## Breaking Change

This is a breaking change because newer versions of `sharp` no longer support `yarn@1`:

- lovell/sharp#3750

The workaround is to install with the `yarn --ignore-engines` flag. Also note that Vercel no longer defaults to yarn when no lockfile is found:

- vercel/vercel#11131
- vercel/vercel#11242

Closes NEXT-2823
Similar to @JClackett, I'm hovering around 6 GB of memory usage. Reloads also take around 30 s for normal-size pages.
Hey everyone, as part of Turbopack we've also implemented thorough memory tracing, which allows us to narrow down why certain applications use a lot of memory. When using Turbopack, can you follow these steps and send us the relevant files? Following them exactly gives me the best picture to investigate:
It's important to note that this is not a memory leak; it's high memory usage, which is very different from leaking memory. Leaking memory means the memory is unused but can't be cleaned up. In this case the memory is used (i.e. the memory caches of webpack and such).

It's also important to note that Turbopack's memory usage is not fully optimized yet; right now it uses more memory than where we want it to be. At least a 50% reduction is what we're aiming for right now, and that is being worked on currently.

Incorrect usage of icon libraries is just one cause of high memory usage; I'll try to explain why. There are libraries that ship 10,000 re-exported modules that all have to be analyzed before tree-shaking. Next.js needs to compile your code (including node_modules) for multiple targets, depending on your application:

If you use all of those combined, you're compiling not 10,000 modules but:

Obviously people use some combination of these, but it illustrates why misusing icon imports can be one of the problems.
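The multiplication effect described above can be illustrated with a tiny calculation. The module count and the list of compile targets here are hypothetical placeholders, not exact Next.js internals:

```javascript
// Hypothetical illustration: a barrel file re-exporting many modules is
// analyzed once per compile target, multiplying the work.
const reExportedModules = 10000; // modules in an icon library's barrel file
const targets = ["server", "client", "edge"]; // example compile targets
const totalCompiled = reExportedModules * targets.length;
console.log(totalCompiled); // 30000
```

Importing the specific icon module directly (or using `optimizePackageImports`) keeps the per-target module count close to what you actually use.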
What about users who don't use Turbopack? |
@AryanJ-NYC You can try capturing heap dumps, but there's not much we can do with those without also seeing the source code, so in essence providing the full source code is required to investigate when you are using webpack. See my earlier replies in this issue; there's very little we can do without code that can be profiled. For Turbopack we're in control of the memory allocator, as it's not running in JavaScript, and that's why we can measure memory usage for Turbopack during tracing.
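For anyone wanting to capture those heap dumps, here is a rough sketch using Node's built-in `--heapsnapshot-signal` flag. This is a generic Node.js technique, not an officially documented Next.js workflow, and `next dev` spawns worker processes, so you may need to target the right pid:

```shell
# Start the dev server with snapshot-on-signal enabled for every Node process.
NODE_OPTIONS="--heapsnapshot-signal=SIGUSR2" npx next dev &
DEV_PID=$!

# Later, once memory has grown, request a snapshot; Node writes a
# Heap.<timestamp>.<pid>.heapsnapshot file to the working directory,
# which you can open in Chrome DevTools' Memory tab.
kill -USR2 "$DEV_PID"
```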
@timneutkens Could you take a quick look at the dump in #62217? It was auto-closed, but I shared the heap dumps there (it's hard to share a repro, unfortunately). I'm mainly curious whether it's expected for old module source-map strings to be retained in general, or whether it indicates something not being cleaned up properly. If it's due to something not properly cleaned up, should there be mechanisms for that (#46018)? And should we expect so many modules to need to be recompiled (#45204)?
We are experiencing very high memory usage, long build times, and long compilation times (60 s+), and have had no luck tracking down the issue. Hot refreshes can take 5 s+. Also, our App Router migration turned 9,000 modules into 15,000 modules. Any help would be greatly appreciated! https://gist.github.com/cryptoMavrik/2697c797d720d99f32e5a93b6fb8c2b0
Similar to @JClackett, I'm hovering around 4 GB of memory usage. Reloads also take around 30 s for normal-size pages.
Node.js v20.15 seems to be the last version before the big memory leak got introduced in v20.16 (a very, very minor leak can still be observed, but it's nothing in comparison).
There's a Node.js memory leak in one of the latest LTS releases? Do you have an official source for that? Is it an undici leak? Oh, and did you try 20.17.0 to compare? |
Check out my updated pictures |
I'm saying you need to use exactly v20.15.x, not a later major/minor version (you're using 22.8 according to your pictures), as some leak was introduced in Node 20.16 - at least, that's what we saw.
This Node version seems way too new. Honestly, it feels like the React team is just throwing everything into one big mix, which doesn't really match their usual style. I genuinely hope they can make it work, but right now it feels like they're more focused on flashy new features and buzz than on practical memory management. Sad :(
Before posting a comment on this issue please read this entire post.
Previous work
The past few weeks we've been investigating / optimizing various memory usage issues, specifically geared towards production memory usage. In investigating these, we were able to find one memory leak in Node.js itself when using `fetch()` in Node.js versions before `18.17.0` (you'll want to use `18.17.1` for the security patches, though).

Most of the reports related to memory usage turned out to be reports of "it's higher than the previous version" rather than a memory leak. This was expected, because in order to run App Router and Pages Router at the same time with different React versions, separate processes were needed. This has been resolved by reducing the number of processes to two: one for routing and App Router rendering, and one for Pages Router rendering. So far we haven't received new reports since the latest release.
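A quick way to check whether a given Node.js version includes the `fetch()` fix mentioned above (this helper is hypothetical, not part of Next.js):

```javascript
// Returns true if the given Node.js version string is 18.17.0 or newer,
// i.e. includes the fetch() leak fix referenced above.
function hasFetchLeakFix(version) {
  const [major, minor] = version.split(".").map(Number);
  return major > 18 || (major === 18 && minor >= 17);
}

console.log(hasFetchLeakFix(process.versions.node));
```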
In some issues there were reports related to Image Optimization in production; however, no reproduction was provided, so it could not be investigated adequately. If you have a reproduction for that, please refer to this issue: #54482
New
With the memory usage in production resolved, we've started investigating reports of development memory usage spikes. Unfortunately these reports suffer from the same problem as the production memory usage issues people raised before: they're full of comments saying "same issue" or posting screenshots of monitoring tools saying "Look, same issue".

Unfortunately, as you can imagine, these replies are not enough to investigate / narrow down what causes the memory usage. For example, in multiple cases where we did get a reproduction and could investigate, the reason for the high memory usage was:

…the `optimize_barrel` SWC transform and the new `optimizePackageImports` config (#54572) should help a bit to reduce the size (and compilation speed too).

So far I've been able to make one small change to webpack's memory caching to make it garbage-collect a bit more aggressively, in #54397. I'm not expecting that change to have a big impact on the reported issues, though.
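For reference, the `optimizePackageImports` option mentioned above is enabled in `next.config.js` roughly like this. This is a sketch only: `"@acme/icons"` is a placeholder package name, and on Next.js 13.x the option lives under `experimental`:

```javascript
// next.config.js - sketch of enabling optimizePackageImports.
module.exports = {
  experimental: {
    // Rewrites barrel-file imports from the listed packages into direct
    // module imports, so only the modules you actually use get compiled.
    optimizePackageImports: ["@acme/icons"],
  },
};
```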
We'd like to investigate these reports further; however, we're unable to narrow them down without code we can run to collect heap snapshots and profiles, hence this issue. If you are able to, please provide runnable code demonstrating what you're experiencing.
Comments that don't include runnable code will be automatically hidden in order to keep this issue productive. This includes comments that only have a screenshot and applications that can't run.
I'm going to combine the other reports into this issue as separate comments.
I've made sure that we have 2-3 engineers on our team available to investigate when we get runnable reproductions to investigate.
Thanks in advance!
NEXT-1569