
Possible Memory Leak #4635

Closed
3 tasks done
petulikan1 opened this issue Dec 22, 2022 · 104 comments
Labels
not confirmed Report seems plausible but requires additional testing or 3rd party confirmation.

Comments

@petulikan1

Current Behavior

As of writing this, we're dealing with a RAM issue. Once we start a Spigot server without any plugins (just a basic server) and connect to it more than 100 times, the RAM usage increases over time. However, the RAM shown in the panel doesn't decrease at all. Once it's allocated, it stays that way.

We've made several tests with heap dumps, checking whether any of the plugins had a memory leak, but didn't find one.

image

Here's an image of a server that has been running for more than 20 hours with 4 GB of RAM. Since this server doesn't create new threads it isn't crashing, but a different server that does create threads is.

Expected Behavior

The panel should show the container's RAM usage decreasing, to prevent unexpected crashes (OOM killer disabled).
Once the RAM increases it never decreases, and it's a pain for a server with over 40 plugins and over 100 players to crash unexpectedly once it reaches the container limit, because some plugins need to create new threads and there is no native memory left for the threads themselves.

Steps to Reproduce

Make a server with a Paper jar, allocate 4 GB of RAM, and connect to it repeatedly until it reaches 4 GB.
Leave it for about an hour and you'll see the RAM exactly where you left it.

Panel Version

1.11.1

Wings Version

1.11.0

Games and/or Eggs Affected

Minecraft (Paper)

Docker Image

ghcr.io/pterodactyl/yolks:java_17

Error Logs

No response

Is there an existing issue for this?

  • I have searched the existing issues before opening this issue.
  • I have provided all relevant details, including the specific game and Docker images I am using if this issue is related to running a server.
  • I have checked in the Discord server and believe this is a bug with the software, and not a configuration issue with my specific system.
@petulikan1 petulikan1 added the not confirmed Report seems plausible but requires additional testing or 3rd party confirmation. label Dec 22, 2022
@parkervcp
Member

Have you changed the startup command for this server?

@petulikan1
Author

petulikan1 commented Dec 23, 2022

For the testing server (where I tested the RAM just by joining it), no, I didn't. I left it just as the panel created it.

java -Xms128M -Xmx4096M -jar server.jar

@Mutex21

Mutex21 commented Dec 23, 2022

Add swapaccount=1 cgroup_enable=memory to GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX in /etc/default/grub, then reboot your dedicated server. After that, make sure the allocated swap is 0 in your game server configuration. Use the default kernel or one you compiled yourself, not a custom one (Liquorix, XanMod); for XanMod, the LTS version may work for you.

P.S.: Update Docker to the latest version. I had a lot of problems with Wings > 1.5.3 on older Docker versions.
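
A minimal sketch of the GRUB change described above, assuming a Debian/Ubuntu-style setup (the quiet flag stands in for whatever options are already present on your system):

```shell
# /etc/default/grub -- append the cgroup flags to both variables
GRUB_CMDLINE_LINUX_DEFAULT="quiet swapaccount=1 cgroup_enable=memory"
GRUB_CMDLINE_LINUX="swapaccount=1 cgroup_enable=memory"
```

Then regenerate the bootloader config with update-grub (or grub2-mkconfig on some distros) and reboot.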

@MrBretze

I have exactly the same problem, and I have added "swapaccount=1 cgroup_enable=memory" to the grub config...

@petulikan1
Author

Did that help you resolve this problem?

@Mutex21

Mutex21 commented Jan 24, 2023

grub: swapaccount=1 cgroup_enable=memory
Check docker info, update Docker to the latest version, and make sure Allocated Swap in the Pterodactyl panel (game server) is set to 0.

P.S.: You need to restart the entire dedicated server.

@petulikan1
Author

Alright, thanks. I'll try this and then respond whether it worked or not.

@MrBretze

MrBretze commented Jan 29, 2023

I have made this change (swapaccount and cgroup_enable were already set in my grub config).
I have rebooted my dedicated server and my server's RAM still doesn't decrease.

My dedicated server is a Ryzen 5 3600 with 16 GB of RAM and two 1 TB HDDs in RAID 1.

If you want any other information, just ask.

OS:

Ubuntu 22.04.1 LTS

Docker info

https://pastebin.com/574zu870

Grub Info

https://pastebin.com/ka4nuanG

Panel

1.11.2

Wings

1.11.0

Docker Image

ghcr.io/pterodactyl/yolks:java_17

@petulikan1
Author

I have made this change (swapaccount and cgroup_enable has already in this mod in the grub config). I have rebooted my dedi server and my server doesn't no decrease ram [...]

So it was just slowly increasing and at no point decreasing, right?

@petulikan1
Author

I tried this as well and the RAM wasn't decreasing, so I'm assuming it's a real problem.

@MrBretze

MrBretze commented Jan 29, 2023

So it was just slowly increasing and at no point decreasing right?

Yes

I tried this as well and RAM wasn't decreasing so I'm assuming it's a real problem.

Yes, or a configuration problem, but I don't know what the problem is...

@MrBretze

MrBretze commented Feb 7, 2023

So after some testing, I found a "Java issue" for this one: Java NEVER clears the G1 old generation. It's supposed to do it automatically, but I don't know why it doesn't do so under Docker.

I tried adding the Java arguments -XX:+UnlockExperimentalVMOptions and -XX:+UseContainerSupport, but they don't help or change the problem.

With the Spark plugin, if I execute the command spark heapsummary, it forces a collection of the G1 old generation and the memory used by the server decreases.

I forced Java to use the parallel collector (instead of G1) and, although I then have no memory issues, I get lag spikes, so this doesn't seem to be a viable solution.

For now I've added the Java arguments -XX:MinRAMPercentage=25.0 -XX:MaxRAMPercentage=50.0 and it partially works.

I suppose it's a Docker/egg issue, not related to the panel...

I have found those two links that may be helpful:
https://developers.redhat.com/blog/2017/03/14/java-inside-docker
https://www.merikan.com/2019/04/jvm-in-a-container/
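
Not from the comments above, but two stock-OpenJDK commands that may help verify what the JVM actually detects inside the container, plus jcmd as an alternative way to force a full collection (the PID is a hypothetical placeholder):

```shell
# What heap limit did the JVM compute for this environment?
java -XshowSettings:vm -version

# Final values of the relevant flags after container detection
java -XX:+PrintFlagsFinal -version | grep -Ei 'MaxHeapSize|UseContainerSupport'

# Force a full GC in a running JVM (1234 is a placeholder PID);
# similar in effect to what the comment above observed with "spark heapsummary"
jcmd 1234 GC.run
```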

@petulikan1
Author

Thanks for letting me know about this one. I'll look into that once I'm back home.

Once again thanks.

@schrej
Member

schrej commented Feb 7, 2023

Has anyone tested running the exact same version of Paper outside of Pterodactyl?
Does your system also report the same amount of memory consumption (e.g. htop)?

@MrBretze

MrBretze commented Feb 7, 2023

Does your system also report the same amount of memory consumption (e.g. htop)?

Yes, it reports the same amount of memory.

Has anyone tested running the exact same version of Paper outside of Pterodactyl?

I tested outside of Pterodactyl and I don't see any problem, but I need to retest this properly.

@PatchedNotFixed

I have also been having this issue.

For a while now I've been finding that the Docker containers use a lot more RAM than the servers themselves.

It has been common to see one of our Minecraft servers that is set to use -Xmx16G start using 20 GB+ after a few hours. God forbid you don't set a container RAM limit; I've seen 40 GB+.

I noticed this started happening after updating to Ubuntu 22.04.1 LTS from Ubuntu 18. (Inherently Docker was also updated, but I don't know what version we were using.)

From everything I have tried, I get the feeling it's a Docker-related issue, since I was able to recreate it by manually booting a server in Docker and seeing the same excess memory consumption by the container.

@Loren013

Loren013 commented Feb 18, 2023

Hey! I have the same issue.

Has anyone found any reasonable solution? I thought the problem was connected with plugins, so I tried running different servers, with and without plugins, on different versions, and still hit the issue. Attaching more RAM to the container helps, but I wonder if swap memory could also help.

Anyway, any response with updated information or a solution will be appreciated!
Gosh... well, at least the problem for sure isn't connected with plugins.

@MrBretze

Has anyone tested running the exact same version of Paper outside of Pterodactyl? Does your system also report the same amount of memory consumption (e.g. htop)?

So after performing tests outside of Pterodactyl, I don't have any issues regarding memory usage. I suppose the JVM and Docker are the troublemakers here.

@MrBretze

MrBretze commented Feb 20, 2023

@KugelblitzNinja and @Loren013 Currently, the only 'fix' I have found is to use this command line:
java -Xms128M -XX:+UseContainerSupport -XX:MinRAMPercentage=25 -XX:MaxRAMPercentage=50 -jar {{SERVER_JARFILE}}
It's important not to specify the 'Xmx' Java argument, otherwise it won't work.

@petulikan1
Author

@KugelblitzNinja and @Loren013 Currently, the only 'fix' I have found is to use this command line:

java -Xms128M -XX:+UseContainerSupport -XX:MinRAMPercentage=25 -XX:MaxRAMPercentage=50 -jar {{SERVER_JARFILE}}

It's important not to specify the 'Xms' Java argument, otherwise it won't work.

Specify Xms or Xmx?
Asking because in the startup command you wrote 'Xms' and you're saying not to specify the 'Xms'

@MrBretze

MrBretze commented Feb 21, 2023

Specify Xms or Xmx? Asking because in the startup command you wrote 'Xms' and you're saying not to specify the 'Xms'

Oops, I was indeed wrong!
Sorry!

@PatchedNotFixed

PatchedNotFixed commented Feb 21, 2023

-XX:MaxRAMPercentage=50 can work, but it's a pain, wasting so much RAM.

The most useful thing I have tried so far is playing around with different Docker bases.

For me it's been less of an issue when using Docker containers with a Debian base.

Currently I am using kugelblitzninja/pterodactyl-images:debian-zulu-openjdk-19. If you do try this image, please let me know how it works for you, and be aware it has extra software added to create a backup on server shutdown!

With this we can run something between -XX:MaxRAMPercentage=80 and -XX:MaxRAMPercentage=93.

@PatchedNotFixed

PatchedNotFixed commented Feb 21, 2023

On a side note, the following is to help with another issue this one can cause.

If you have not already, disable the OOM killer.

If you find your servers still being killed, instead of going into a zombie-like state (until assigned more RAM) when running low on free RAM in the container, there is a good chance that, like me, you will find the panel was unable to disable the OOM killer but did not say anything, and that the edits to the grub files recommended in their Discord were of no help.

I was able to verify this by reading the system logs.

If you find this is the case, you're going to have to take to Google to find other ways to allow the panel to disable it.

This is an issue I have so far only had with Ubuntu 22 as the host.

@PatchedNotFixed

I forgot to mention something important above.

If your server is getting killed by the OOM killer, this does not mean your Minecraft server ran out of RAM! Just that the container is now using its allocated amount.

(Be aware that the OOM killer can fail to be disabled with no notification in the panel.)

To see whether your Minecraft server actually ran out of RAM, check your log files to see why it crashed: you will see out-of-memory exceptions in your server log file and/or in your JVM crash report (look for something like hs_err_pid29.log in the root of your server).

If it just died with no error in the log files and they simply end, it was the OOM killer. (This can cause world corruption.)
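
A hedged sketch of how to check the system logs for OOM-killer activity, as described above (log locations and exact message wording vary by distro and kernel version):

```shell
# Kernel ring buffer: OOM kills are logged here
sudo dmesg -T | grep -iE 'killed process|out of memory'

# Or via the kernel messages in the systemd journal, on systemd hosts
sudo journalctl -k | grep -i 'oom'
```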

@matthewpi
Member

To everyone reading this issue and editing their kargs, stop. Reverting to cgroups v1 is not a solution or a fix to any of these problems, it's a terrible workaround that causes more problems, not less.


This problem is caused by many different factors, and is an issue specifically regarding the images and startup commands themselves, nothing else.

First off, setting Xmx to the full amount of memory the container has allocated will cause an OOM if all that memory is actually used. If the JVM uses all the memory assigned to the container, there is little to no memory left for anything outside the JVM; Java itself doesn't just use the JVM heap and requires memory outside of it. (Setting Xmx also overrides the MaxRAMPercentage flags and disables the automatic container memory limit detection built into newer versions of Java.)

Secondly, the ghcr.io/pterodactyl/yolks:java_8 and ghcr.io/pterodactyl/yolks:java_11 images both lack container detection support (I am working on a fix for this). They will instead detect 1/4 of the memory available on the host system by default, which will then be affected by the MaxRAMPercentage flag. So if you are running these images and experiencing issues, you will want to set -Xmx to a value below the amount of memory allocated to the container; an overhead of 128 MB or so should be more than enough. And for those wondering, no, the -XX:+UseContainerSupport flag does not help, and is only required for Java 8; Java 10 and above have it enabled by default, assuming the build of Java actually has the feature, which these specific builds seem to lack. The ghcr.io/pterodactyl/yolks:java_8j9 image does have support for containers, but the -XX:+UseContainerSupport flag will need to be added for it to work.

Finally, for all the Java versions with container detection support, the default MaxRAMPercentage of 95.0 does not provide enough overhead. Because the memory value will be detected as what the container is allocated, the built-in memory overallocation logic in Wings (we assign additional memory to containers rather than the exact amount specified to help prevent issues with OOM) is included in the RAM calculation, meaning the only overhead available is 5%. A MaxRAMPercentage value of 80-90% would allow for much more overhead. The more RAM your server has assigned, the higher this value can be (within reason).


For most users (especially running newer or latest versions of Java), everything should work fine out of the box. However tweaking of the MaxRAMPercentage flag will likely be required for many users.
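
A rough sketch of the arithmetic behind that advice (the 4096 MiB container limit is illustrative, and shell integer division rounds down):

```shell
CONTAINER_MB=4096   # illustrative container memory limit

# The default MaxRAMPercentage of 95 leaves only ~5% for everything
# outside the JVM heap (threads, GC structures, direct buffers, ...)
echo $(( CONTAINER_MB * 95 / 100 ))   # 3891 MiB heap, ~205 MiB left over

# A MaxRAMPercentage of 85 leaves a healthier margin
echo $(( CONTAINER_MB * 85 / 100 ))   # 3481 MiB heap, ~615 MiB left over
```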

@PatchedNotFixed

PatchedNotFixed commented Feb 26, 2023

@matthewpi

Even with MaxRAMPercentage set to 50-75%, given a few days on our servers, the server still gets killed if the OOM killer is not disabled. Even with containers that have 20 GB to 30 GB of RAM.

Would you have any advice on why the containers are trying to use so much extra RAM?
Any ideas for tools or guides that could be used to diagnose the issue?
Any other suggestions on what could be tweaked?

Like this server
image(2)
6GB overhead is a bit too much.

Edit: (I don't consider the OOM killer the primary issue here; it's more a question of why on earth 6 GB+ of overhead is needed.)

@petulikan1
Author

Hey guys, I've got a small update, maybe related to this issue.
Not sure what might be causing this, but there's a limit on threads: it reached its maximum and the server is not able to create more of them, even though we have unlimited memory for the server.
Hope it helps with figuring out what could be wrong!
image
image

@PatchedNotFixed

PatchedNotFixed commented Feb 28, 2023

@petulikan1

For that I think you need to have a look at https://pterodactyl.io/wings/1.0/configuration.html#container-pid-limit; if it's still at container_pid_limit: 512, then you're going to want to increase it.

It is also worth confirming your host did not run out of ram.
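
For reference, a sketch of where that setting lives in the Wings config.yml; the exact nesting may differ between Wings versions, so treat this as an assumption and check the linked documentation:

```yaml
# /etc/pterodactyl/config.yml (Wings)
docker:
  # Maximum number of processes/threads a container may create.
  # Raise this if servers fail to spawn new threads.
  container_pid_limit: 512
```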

@petulikan1
Author

@petulikan1

For that i think you need to have a look at https://pterodactyl.io/wings/1.0/configuration.html#container-pid-limit , if its still at container_pid_limit: 512 then you going to want to increase it.

It is also worth confirming your host did not run out of ram.

Thanks for the link, I was trying to find something related to that but wasn't able to.

@Chevels

Chevels commented Dec 3, 2023

Any news? The error is still current and very problematic for certain servers, in particular those whose host uses Ptero. My server is affected: I have to use Java OpenJ9 so that my server runs without memory leaks and crashes, but this is very limiting because certain plugins do not work, such as Spark, which will soon be required by PaperMC :(

@NoTranslator6714

NoTranslator6714 commented Dec 3, 2023 via email

@Hamziee

Hamziee commented Dec 3, 2023

Any news ? Because the error is still current and is very problematic for certain servers [...]

@Chevels

I used OpenJ9 too until recently; it has been fixed for me by the new Pterodactyl updates, so try updating your Wings. I still experience a memory leak on Forge, but I just make sure to restart it every 12 hours, as you should for every Minecraft server.

@Chevels

Chevels commented Dec 3, 2023

Disable the OOM killer and just go based off of what spark tells you


I don't know what the OOM killer is; I'm not a dev, I'm just a simple girl who has owned a server for 13 years and who depends on her server host's choice to use Ptero...

@Hamziee Thank you for your answer! As I said above, I'm just a customer of a server host and I don't know what Wings are, except for birds x)
I'll contact my host's support to tell them this :)

@parkervcp
Member

OOM is short for Out Of Memory, and the OOM killer will kill any server that uses up its allocated memory.

@Chevels

Chevels commented Dec 3, 2023

OOM is short for Out Of Memory and the OOM Killer will kill any server that uses the allocated memory.

Oh I see, but my server is hosted by a provider and I don't think they're going to disable this option upon request :/ I think they have good reasons to leave it active, particularly to avoid abuse; for example, to prevent a single server from reserving 64 GB over time (because of the memory leak) when it is only entitled to 16 GB.

Thank you for the clarification on the meaning of OOM :)

@mzrtamp

mzrtamp commented Mar 17, 2024

Using Java 17 (ghcr.io/pterodactyl/yolks:java_17j9) helped a lot, but...

it causes this crash instantly when the server starts
image
and
image
image

with startup java -Xms128M -XX:MaxRAMPercentage=95.0 -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:G1NewSizePercent=40 -XX:G1MaxNewSizePercent=50 -XX:G1HeapRegionSize=16M -XX:G1ReservePercent=15 -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=4 -XX:InitiatingHeapOccupancyPercent=20 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=1 -XX:+ExplicitGCInvokesConcurrent -XX:+HeapDumpOnOutOfMemoryError -XX:+UseStringDeduplication -Xlog:gc*:logs/gc.log:time,uptime:filecount=5,filesize=1M -Dusing.aikars.flags=https://mcflags.emc.gs -Daikars.new.flags=true -Dfml.queryResult=confirm. -Dterminal.jline=false -Dterminal.ansi=true --add-modules=jdk.incubator.vector -jar {{SERVER_JARFILE}} --nogui

I don't know how to find the impostor among these plugins 😭
image

please help

@mzrtamp

mzrtamp commented Mar 17, 2024

Whoa, I tried removing the Spark plugin and now I can reach 5 minutes of uptime, unlike before 🤔

@mzrtamp

mzrtamp commented Mar 17, 2024

Any ideas? I need the Spark plugin.

@mzrtamp

mzrtamp commented Mar 17, 2024

Whoa, i tried removed spark plugins and i can reach 5 minutes of uptime, not like before 🤔

image
Nvm, I forgot that I use Purpur, which bundles its own Spark... but now what? The server is still alive, very confusing...

@MrBretze

If you like, you can use this flags generator: https://www.birdflop.com/resources/flags/
Make sure to use Environment: Pterodactyl.
I do not recommend using the OpenJ9 Java version.
If you use Java 17+, you can try the Shenandoah GC with these parameters:
-XX:+UseShenandoahGC -XX:ShenandoahGCMode=iu -XX:+UseNUMA

If you encounter any problems after that, add this flag: -XX:ShenandoahGCHeuristics=compact

@MrBretze

If you want additional information about the new Shenandoah GC, please check here: https://wiki.openjdk.org/display/shenandoah/Main

@NoTranslator6714

If you like, you can use this flags generator: https://www.birdflop.com/resources/flags/ [...]

Without OpenJ9, my memory still climbs until it hits OOM. Even after switching flags from Aikar to Shenandoah, my memory sits at about 10.5/12 GB. For reference, Aikar got me down to about 5 GB/12 GB.

@duranaaron

Has this been resolved? @matthewpi

@NoTranslator6714

NoTranslator6714 commented May 5, 2024 via email

@duranaaron

Upgrading to 1.20.4 fixed it for me. No idea why


It didn't for me. You're telling me your actual server RAM corresponds to what Pterodactyl tells you?

@DeathRyder35

I don't know if your issue is solved, but I had the same problem and found out the deal: if you give a server 14 GB, you must adjust the RAM percentage. For example, 1024 x 14 = 14336, minus 2048 = 12288, so the startup looks like java -Xms128M -XX:MaxRAMPercentage=84.7 -Dterminal.jline=false -Dterminal.ansi=true -jar server.jar and Paper gets "Maximum memory: 12,160 MB". That way, when pregenning or when players load more of the map, it won't run out of RAM and crash, and memory will clear. General rule of thumb: leave Docker 2 GB of RAM (2048) so it has room, since it's tied in with the server's RAM, if that makes sense to y'all.
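
A sketch of that arithmetic (values taken from the comment above; the 84.7 actually used in the command is slightly more conservative than the exact ratio):

```shell
LIMIT_MB=$(( 14 * 1024 ))        # container limit: 14336 MiB
HEAP_MB=$(( LIMIT_MB - 2048 ))   # leave ~2 GiB of headroom: 12288 MiB

# Percentage of the container limit to hand to MaxRAMPercentage
awk -v h="$HEAP_MB" -v l="$LIMIT_MB" 'BEGIN { printf "%.1f\n", h / l * 100 }'
# prints 85.7
```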

@iSoulEs

iSoulEs commented Aug 28, 2024

Is this solved? I have 14 GB of RAM on my Minecraft server and I have to assign 6 GB max in the flags; the rest of the memory is consumed by Pterodactyl, or so I think. What is the problem?
image

java -Xms6144M -Xmx6144M -XX:+DisableExplicitGC -XX:+ParallelRefProcEnabled -XX:+PerfDisableSharedMem -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:G1HeapRegionSize=8M -XX:G1HeapWastePercent=5 -XX:G1MaxNewSizePercent=40 -XX:G1MixedGCCountTarget=4 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1NewSizePercent=30 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:G1ReservePercent=20 -XX:InitiatingHeapOccupancyPercent=15 -XX:MaxGCPauseMillis=200 -XX:MaxTenuringThreshold=1 -XX:SurvivorRatio=32 -Dusing.aikars.flags=https://mcflags.emc.gs -Daikars.new.flags=true -jar server.jar nogui

@DeathRyder35

Alright, adjust your Xmx to 12288 and your Xms to 1024; that way your server will get 12 GB of RAM and Docker has 2 GB. But if you're running the panel on the same machine, subtract 4 GB for the server OS. So let's say you have a 32 GB machine: subtract 4 GB for the OS if it's a panel/server box, assign the node 28 GB, then make a server on that node with whatever you want it to have. Say you want a 14 GB server: you would set the startup to Xmx 12288 and Xms 1024, and Docker will have 2048 MB of room, since it shares with your server. I do mine this way, and I don't keep them even, because RAM is constantly in use, as yours is; I mean, you got it, but it shows full, and the OS will show that when you log in, like memory use 98%. I hope this helps; sorry if I rambled on, as I learned the hard way on this as well.

@iSoulEs

iSoulEs commented Aug 28, 2024

alr, adjust your xmx to 12288 and your xms to 1024 that way your server will get 12gb of ram and docker has 2gb [...]

Like this? (I have a 16 GB VPS with an 8-thread Ryzen 9.)

flags:
java -Xms2048M -Xmx10240M -XX:+DisableExplicitGC -XX:+ParallelRefProcEnabled -XX:+PerfDisableSharedMem -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:G1HeapRegionSize=8M -XX:G1HeapWastePercent=5 -XX:G1MaxNewSizePercent=40 -XX:G1MixedGCCountTarget=4 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1NewSizePercent=30 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:G1ReservePercent=20 -XX:InitiatingHeapOccupancyPercent=15 -XX:MaxGCPauseMillis=200 -XX:MaxTenuringThreshold=1 -XX:SurvivorRatio=32 -Dusing.aikars.flags=https://mcflags.emc.gs -Daikars.new.flags=true -jar server.jar

MEMORY ON NODE: (max) MEMORY ALLOCATED 10,240 / 12,288 MiB

server:
Allocated Memory 10240mb
Allocated Swap 0
cpu limit to 600%

@DeathRyder35

DeathRyder35 commented Aug 31, 2024

No, no. I'll post mine: java -Xms128M -Xmx18432M -Dterminal.jline=false -Dterminal.ansi=true -jar {{SERVER_JARFILE}}
Screenshot from 2024-08-31 03-16-00
I gave my server 20 GB in the node area; the machine itself has 22 GB, so I left 2 GB for Linux. I set up a node and told it 20 GB, and then when I went to set up the server itself, I told Paper to use 18432M in the startup.

@JerichoTorrent

How is this extremely old issue still a problem... I just did a fresh reinstall of Pterodactyl the other day. I installed Ptero on Oracle Cloud a couple of months ago and it runs fantastically even on their crappy Neoverse N1, yet somehow on a $4,000 AMD EPYC 9734 with this install of Ptero everything is borked. I could allocate 500 GB of memory to a server and eventually, given enough time, the OOM killer would kill the server after it inevitably reaches 500 GB of memory.
Please, someone come up with a reasonable solution.

@Hamziee

Hamziee commented Aug 31, 2024

How is this extremely old issue still a problem... I just did a fresh re-install of pterodactyl the other day. [...]

@JerichoTorrent As far as I know this isn't a (huge) problem anymore. I'm using two AMD EPYC CPUs and they're doing fine. The only problem I have is with Forge, and that's fixed with a bit of tweaking. What exactly are you trying to run? Keep in mind that Minecraft itself also leaks a lot; don't expect to run a server without restarting once or twice a day. Also, 500 GB is exaggerated, I hope.

@JerichoTorrent

@Hamziee I am using this exact same CPU, an AMD EPYC 9634. I'm not blaming Pterodactyl; I'm just saying that an issue this old, with no solution and no path for diagnosis, is... frustrating. I am just trying to run a Velocity network with a few Paper backend servers.

@Hamziee

Hamziee commented Aug 31, 2024

@JerichoTorrent It's fine, and I understand why you are frustrated, because when this issue was opened I was also here trying to figure out what the problem was, and there wasn't much help or any real solution. As far as I know, Velocity and Paper servers use RAM normally. Could you explain your problem in more detail? Are they using a lot of memory from the start, or what exactly is the issue?

@PatchedNotFixed

PatchedNotFixed commented Aug 31, 2024

(Assuming the issue you're suffering from is the same as mine, where the base container has excessive RAM usage beyond Java's expected usage.)

Indeed this is a very frustrating issue, having a container use multiples of the Java RAM allocation. So far my biggest pain about the issue is the lack of tools to diagnose its root cause.

Edit: You also need to confirm that you do not observe this memory issue when the server is not running in a Docker container.

@NoTranslator6714

NoTranslator6714 commented Aug 31, 2024 via email

@DeathRyder35

DeathRyder35 commented Sep 1, 2024 via email
