Possible Memory Leak #4635
Have you changed the startup command for this server?
For the testing server (where I tested the RAM just by joining it), no I didn't. I left it just as the panel made it: java -Xms128M -Xmx4096M -jar server.jar
Add swapaccount=1 cgroup_enable=memory to /etc/default/grub. P.S.: Update Docker to the latest version; I had a lot of problems with Wings >1.5.3 on older Docker versions.
I have exactly the same problem, and I have added the swapaccount=1 cgroup_enable=memory setup to the grub config...
Did that help you resolve this problem?
grub: swapaccount=1 cgroup_enable=memory P.S.: you need to restart the entire dedicated server.
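For anyone applying the grub change above, here is a sketch of the edit, assuming a Debian/Ubuntu-style GRUB setup (the `quiet` argument is illustrative; keep whatever arguments your file already has). Note that a comment later in this thread advises against relying on the cgroups v1 workaround.

```shell
# Config fragment for /etc/default/grub: append the two flags to the kernel args.
GRUB_CMDLINE_LINUX_DEFAULT="quiet swapaccount=1 cgroup_enable=memory"

# Afterwards, regenerate the grub config and reboot; kernel arguments only
# take effect after a full restart of the machine:
#   sudo update-grub
#   sudo reboot
```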
Alright, thanks. I'll try this and then respond whether that worked or not.
I have made this change (swapaccount and cgroup_enable were already set this way in the grub config). My dedicated server is a Ryzen 5 3600 with 16 GB of RAM and a 1 TB HDD RAID 1; if you want any other information, just tell me.
OS: Ubuntu 22.04.1 LTS
Docker info:
Grub info:
Panel: 1.11.2
Wings: 1.11.0
Docker Image: ghcr.io/pterodactyl/yolks:java_17
So it was just slowly increasing and at no point decreasing, right?
I tried this as well and the RAM wasn't decreasing, so I'm assuming it's a real problem.
Yes
Yes, or a configuration problem, but I don't know what the problem is...
So after some testing, I found a "Java issue" for this one: Java NEVER clears the G1 old generation. It's supposed to do it automatically, but I don't know why it doesn't with Docker. I tried to add Java arguments
With the spark plugin, if I execute the command I can force Java to use the parallel collector (instead of G1), and although I then have no memory issues, I get lag spikes, so this does not seem to be a viable solution. For now I've added the Java arguments
I suppose it's a Docker/egg issue, not related to the panel... I have found two links that may be helpful:
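The "parallel collector" mentioned above corresponds to a single standard HotSpot flag; a minimal sketch, with heap sizes copied from the default panel startup command quoted earlier in the thread (not a recommendation):

```shell
# Sketch: swap HotSpot's default G1 collector for the throughput (parallel) one.
# -XX:+UseParallelGC is a standard HotSpot flag; heap sizes are taken from the
# panel's default startup command quoted earlier in this thread.
CMD='java -Xms128M -Xmx4096M -XX:+UseParallelGC -jar server.jar'
echo "$CMD"
```

As the commenter observed, the parallel collector trades G1's short pauses for full stop-the-world collections, which is the likely source of the reported lag spikes.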
Thanks for letting me know about this one. I'll look into it once I'm back home. Once again, thanks.
Has anyone tested running the exact same version of Paper outside of Pterodactyl?
Yes, it reports the same amount of memory.
I tested outside of pterodactyl and I don't see any problem, but I need to retry this properly.
I have also been having this issue; I've been finding for a while now that the Docker containers use a lot more RAM than the servers do. It has been common to see one of our Minecraft servers that is set to use -Xmx16G start using 20 GB+ after a few hours. God forbid you don't set a container RAM limit; I've seen 40 GB+. I noticed this started happening after updating to Ubuntu 22.04.1 LTS from Ubuntu 18 (inherently Docker was also updated, but I don't know what version we were using). From all the things I have tried, I get the feeling it's a Docker-related issue, since I was able to recreate it by manually booting a server in Docker and seeing the same excess memory consumption by the container.
Hey! I have the same issue. Has anyone found any reasonable solution? I thought the problem was connected with plugins. Anyway, any response with updated information or a solution will be appreciated!
So after having performed tests outside of pterodactyl, I don't have any issues regarding memory usage. I suspect the JVM and Docker are the troublemakers here.
@KugelblitzNinja and @Loren013 Currently, the only 'fix' I have found is to use this command line:
Specify Xms or Xmx?
Oops, indeed I was wrong!
The most useful thing I have tried so far is playing around with different Docker bases. For me it's been less of an issue when using Docker containers with a Debian base. Currently I am using
With this we can run something between
On a side note, the following is to help with another issue this issue can cause. If you have not already, disable the OOM killer. If you find your servers still being killed, instead of going into a zombie-like state (until assigned more RAM) when running low on free RAM in the container, there is a good chance that, like me, you will find the panel was unable to disable the OOM killer but did not say anything, and that the edits to the grub files recommended in their Discord are of no help. I was able to verify this by reading the system logs. If you find this is the case, you're going to have to take to Google to find other ways to allow the panel to disable it. An issue I have so far only had with the host being Ubuntu 22.
I forgot to mention something important in the above: if your server is getting killed by the OOM killer, this does not mean your Minecraft server ran out of RAM! Just that the container is now using its allocated amount (being aware that the OOM killer can fail to be disabled with no notification in the panel). To see if your Minecraft server actually ran out of RAM, check your log files to see why it crashed; you will see out-of-memory type exceptions in your server log file and/or in your JVM crash report (look for something like hs_err_pid29.log in the root of your server). If it just died with no error in the log files and they simply end, it was the OOM killer. (This can cause world corruption.)
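To make "reading the system logs" concrete: a kill by the kernel's OOM killer leaves a distinctive line in the kernel log, separate from anything Java writes. A sketch using a fabricated sample line (the PID and sizes are made up for illustration):

```shell
# A container killed by the kernel OOM killer logs a line like this,
# viewable with `dmesg` or `journalctl -k`; the values below are made up:
sample='Memory cgroup out of memory: Killed process 1234 (java) total-vm:4194304kB'

# Count matching lines; a hit means the kernel, not the JVM, killed the process:
hits=$(printf '%s\n' "$sample" | grep -c 'out of memory')
echo "oom-killer lines found: $hits"

# If the JVM itself exhausted its heap instead, the evidence is an
# OutOfMemoryError in the server log or an hs_err_pid*.log file in the
# server root, not a kernel log line.
```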
To everyone reading this issue and editing their kargs: stop. Reverting to cgroups v1 is not a solution or a fix to any of these problems; it's a terrible workaround that causes more problems, not fewer. This problem is caused by many different factors, and is an issue specifically regarding the images and startup commands themselves, nothing else.
First off, setting
Secondly, the
Finally, for all the Java versions with container detection support, the default
For most users (especially those running newer or the latest versions of Java), everything should work fine out of the box. However, tweaking of the
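For context on the container-detection point above: when no -Xmx is set, a container-aware HotSpot JVM derives its maximum heap from the container memory limit via MaxRAMPercentage, whose default is 25.0. A sketch of that arithmetic (the 4096 MiB limit is illustrative):

```shell
# When -Xmx is absent, a container-aware HotSpot JVM caps the heap at
# MaxRAMPercentage (default 25.0) of the container's memory limit.
LIMIT_MIB=4096     # container limit set in the panel (illustrative)
DEFAULT_PCT=25     # HotSpot's default MaxRAMPercentage
heap=$((LIMIT_MIB * DEFAULT_PCT / 100))
echo "default max heap: ${heap} MiB of a ${LIMIT_MIB} MiB container"
```

That is only 1024 MiB of a 4096 MiB container, which is why either an explicit -Xmx or a tuned -XX:MaxRAMPercentage matters for these startup commands.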
For that I think you need to have a look at https://pterodactyl.io/wings/1.0/configuration.html#container-pid-limit ; if it's still at container_pid_limit: 512, then you're going to want to increase it. It is also worth confirming your host did not run out of RAM.
Thanks for the link, I was trying to find something related to that but wasn't able to.
Any news? Because the error is still current and very problematic for certain servers, in particular those whose host uses Ptero. My server is affected; I have to use OpenJ9 so that my server runs without suffering memory leaks and crashes, but this is very limiting because certain plugins do not work, such as Spark, which will soon be required by PaperMC :(
Disable the OOM killer and just go based off of what spark tells you
I used OpenJ9 too until recently; this has been fixed for me by the new Pterodactyl updates, so try updating your Wings. I still experience a memory leak on Forge, but I just make sure to restart it every 12 hours, as you should for every Minecraft server.
I don't know what the OOM killer is. I'm not a dev, I'm just a simple girl who has owned a server for 13 years and depends on her server host's choice to use Ptero... @Hamziee Thank you for your answer! As I said above, I'm just a customer of a server host and I don't know what Wings are, except on birds x)
Oh I see, but my server is hosted by a provider and I don't think they're going to remove this option upon request :/ I think they have good reasons, particularly to avoid abuse, to leave this option active; in particular to prevent a single server from reserving 64 GB over time (because of the memory leak) when it is only entitled to 16 GB. Thank you for the clarification on the meaning of OOM :)
Whoa, I tried removing the spark plugin and I can reach 5 minutes of uptime, unlike before 🤔
Is there any idea? I need the spark plugin.
If you like, you can use this flags generator: https://www.birdflop.com/resources/flags/ If you encounter any problems after that, please add these flags:
If you want additional information about the new Shenandoah GC, please check here: https://wiki.openjdk.org/display/shenandoah/Main
Without OpenJ9, my memory still climbs until it hits OOM. Even after switching my flags to Shenandoah instead of Aikar's, my memory sits at about 10.5/12 GB. For reference, Aikar's flags got me down to about 5 GB/12 GB.
Has this been resolved? @matthewpi
Upgrading to 1.20.4 fixed it for me. No idea why
It didn't for me. You're telling me your actual server RAM corresponds to what Pterodactyl tells you?
Don't know if your issue is solved, but I had the same problem and found out the deal: if you give a server 14 GB, you must adjust the RAM percentage. Like, 1024 x 14 = 14336, minus 2048 = 12288, so the startup looks like java -Xms128M -XX:MaxRAMPercentage=84.7 -Dterminal.jline=false -Dterminal.ansi=true -jar server.jar and Paper gets "Maximum memory: 12,160 MB". So when pregenning, or when players load more of the map, it won't run out of RAM and crash, and memory will clear. General rule of thumb: leave Docker 2 GB of RAM (2048) so it has room, since it's tied in with the server's RAM as well, if that makes sense to y'all.
Alright, adjust your Xmx to 12288 and your Xms to 1024; that way your server will get 12 GB of RAM and Docker has 2 GB. But if you're running a server alongside the panel, subtract 4 GB for the server OS. So let's say you have a 32 GB server: minus 4 GB for the OS if this is panel plus server, you would assign the node 28 GB, then you make a server on that node with whatever you want it to have. Say you want a 14 GB server: you would set startup to Xmx 12288 and Xms 1024, and Docker will have 2048 MB of room, since it shares with your server. The reason I do mine this way and don't keep them even is that RAM is constantly in use, as yours is; I mean, it shows full in the panel and will show that in the OS when you log in, like memory use 98%. I hope this helps; sorry if I rambled on, as I learned the hard way on this as well.
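The arithmetic in the two comments above can be written out as a small calculation. The 2048 MiB headroom is the commenters' rule of thumb, not a fixed constant; note that for these numbers the exact percentage comes out to about 85.7, so the 84.7 used above just leaves a little extra headroom.

```shell
# Rule of thumb from the comments above: reserve ~2 GiB of the container limit
# for Docker/JVM native overhead, and express the remainder as a heap percentage.
TOTAL_MIB=$((1024 * 14))   # 14 GiB container limit -> 14336 MiB
OVERHEAD_MIB=2048          # headroom for non-heap memory (rule of thumb)
HEAP_MIB=$((TOTAL_MIB - OVERHEAD_MIB))
PCT=$(awk -v h="$HEAP_MIB" -v t="$TOTAL_MIB" 'BEGIN { printf "%.1f", 100 * h / t }')
echo "heap ${HEAP_MIB} MiB -> -XX:MaxRAMPercentage=${PCT}"
```

Either an explicit -Xmx with the computed MiB value or the percentage form works; the percentage form has the advantage of adapting automatically if the container limit changes.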
Like this? (I have a 16 GB VPS with 8 threads, Ryzen 9)
flags:
MEMORY ON NODE: (max)
MEMORY ALLOCATED: 10,240 / 12,288 MiB
server:
How is this extremely old issue still a problem... I just did a fresh re-install of pterodactyl the other day. I installed ptero on Oracle Cloud a couple of months ago and it runs fantastically even on their crappy Neoverse N1. Yet somehow, on a $4,000 AMD EPYC 9734 with this install of ptero, everything is borked. I could allocate 500 GB of memory to a server and eventually, given enough time, the OOM killer would kill the server after it inevitably reaches 500 GB of memory.
@JerichoTorrent As far as I know this isn't a (huge) problem anymore. I'm using 2 AMD EPYC CPUs and it's doing fine. The only problem I have is with Forge, and that's fixed with a bit of tweaking. What exactly are you trying to run? Keep in mind that Minecraft itself also leaks a lot; don't expect to run a server without restarting once or twice a day. Also, 500 GB is exaggerated, I hope.
@Hamziee I am using this exact same CPU, an AMD EPYC 9634. I'm not blaming pterodactyl; I'm just saying an issue this old, with no solution and no path to diagnosis, is... frustrating. I am just trying to run a Velocity network with a few Paper backend servers.
@JerichoTorrent It's fine, and I understand why you are frustrated, because when this issue was opened I was also here trying to figure out what the problem was, and there wasn't much help or any real solutions. As far as I know, Velocity and Paper servers use RAM like normal. Could you maybe explain your problem in more detail? Like, are they using a lot of memory from the start, or what exactly is the issue?
(Assuming the issue you're suffering is the same as mine, where the base container has excessive RAM usage beyond Java's expected usage.) Indeed this is a very frustrating issue, having a container use multiples of the Java RAM allocation; so far my biggest pain is the lack of tools to diagnose the root cause. Edit: You also need to confirm that you do not observe this memory issue when the server is running outside a Docker container.
Not sure if this will be any help, but this is what I use for my 7 GB Paper Ptero server on Oracle Cloud:
java -Xms1G -Xmx7G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:G1NewSizePercent=30 -XX:G1MaxNewSizePercent=40 -XX:G1HeapRegionSize=8M -XX:G1ReservePercent=20 -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=4 -XX:InitiatingHeapOccupancyPercent=15 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=1 -Dusing.aikars.flags=https://mcflags.emc.gs -Daikars.new.flags=true -jar server.jar --nogui
Docker shares your RAM with the Paper server, hence why I said to assign it 2 GB lower. Like, if the main server's total RAM is 8 GB, you subtract 2 GB (2048) for the Linux OS, then assign 6144 for that node and make a server with 6144 MB of RAM. Then in startup (before or after server creation, either way it can be changed) you change it to 4096; that way your Paper server has 4 GB to play with while Docker has 2 GB. It shows 6 GB in the panel, but in reality the Paper server has 4 GB. Every host I've been on doesn't touch startup; they leave it at 95 percent, which always crashes and sets players' data back, hence the out of memory, especially if OOM is disabled. So if you have a server with more RAM, you have to do this for each server created on that node.
Current Behavior
As of writing this, we're currently dealing with a RAM issue. Once we start a Spigot server without any plugins, just a basic server, and connect to it more than 100 times, the RAM increases over time. However, the RAM shown in the panel doesn't decrease at all; once it's allocated it stays that way.
We've made several tests with heap dumps, checking whether any of the plugins actually have a memory leak, but didn't find any.
Here's an image of a server running for more than 20 hours with 4 GB of RAM. As this server doesn't create new threads it's not crashing, but a different server where threads are created does.
Expected Behavior
The RAM of the running container should decrease, preventing an unexpected crash. (OOM killer disabled.)
Once the RAM increases it doesn't decrease, and it's a pain for the server itself when there are over 40 plugins and over 100 players: it unexpectedly crashes once it reaches the container limit, because some plugins need to create a new thread and there's no memory available for the thread itself (native memory).
Steps to Reproduce
Make a server with a Paper jar, allocate 4 GB of RAM, and connect to it until it reaches 4 GB.
Leave it for about an hour and you'll see the same RAM usage, just as you left it.
Panel Version
1.11.1
Wings Version
1.11.0
Games and/or Eggs Affected
Minecraft (Paper)
Docker Image
ghcr.io/pterodactyl/yolks:java_17
Error Logs
No response
Is there an existing issue for this?