Reduce memory usage of board representation #61
It's already doing that (because the Qt text widget eventually crashes if you just keep appending to it). If you look in qgtp.cpp, append_text, it has a limit of 200 lines. Once that is reached, it rotates out an early line for each new one. Maybe that isn't working, but are you sure the GTP log is what's eating the memory (as opposed to, for example, KataGo caching lots of positions)? |
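(For readers following along: the behaviour described above, a log that rotates out its oldest line once a cap is reached, can be sketched in a few lines of C++. This is an illustrative stand-in, not the actual append_text code from qgtp.cpp, and the names are made up.)

```cpp
// Sketch of a bounded append-only log: once the cap is reached, the
// oldest line is dropped for each new one, so memory stays roughly flat.
// Illustrative only -- not the real append_text from qgtp.cpp.
#include <deque>
#include <iostream>
#include <string>

class BoundedLog {
public:
    explicit BoundedLog(std::size_t max_lines) : m_max_lines(max_lines) {}

    void append(const std::string &line) {
        if (m_lines.size() >= m_max_lines)
            m_lines.pop_front();   // rotate out the earliest line
        m_lines.push_back(line);
    }

    std::size_t size() const { return m_lines.size(); }

private:
    std::size_t m_max_lines;
    std::deque<std::string> m_lines;
};

int main() {
    BoundedLog log(200);           // same cap as described for the GTP log
    for (int i = 0; i < 100000; ++i)
        log.append("gtp line " + std::to_string(i));
    std::cout << log.size() << " lines retained\n";   // prints 200
}
```

With a cap like this in place, the log's memory use stops growing after the first 200 lines, which is why the question above turns to other possible consumers of RAM.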
Well, I am running katago from a Google cloud engine box and connecting q5go to it through ssh, so I don't think that's the problem. I'll run q5go batch analysis for a bit tonight and pay attention to memory usage both on local and remote, to be sure. I also understand that RAM is not necessarily released to the OS even if it is freed, so I'll take what I see with a grain of salt.
|
Alright, so I left my computer running overnight. In the morning, the box with q5go was frozen. So ... it may not be the GTP logs, but something is eating all the RAM without ever releasing it. |
Oh! I should mention, in case it makes a difference, that I'm on Linux. |
Hmm. I ran it for a while with the valgrind leak checker and it didn't have very much to complain about. You could try doing the same with "valgrind --leak-check=full q5go", ideally on a build with debug symbols. How exactly did you run it? A batch analysis, or just live analysis in a board window? |
Happy to run it with valgrind, though I'll need your help to build q5go with debug symbols; I don't know how to do that :)
This was a batch analysis, so it was churning through multiple games, about 6 seconds per board position, adding a max of 10 variations per board position.
|
One way of doing that is to find the CONFIG line at the top of q5go.pro and replace "release" with "debug". There's probably some command line way of doing it, but I'm not actually all that familiar with qmake. What analysisPVLen were you giving KataGo? Over enough time the number of positions might add up. Each one probably takes a few KB, and if you're computing millions of them... How much is "all the RAM"? Perhaps it needs an "autosave and close when finished" checkbox. |
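(For reference, the edit described above amounts to something like the following in q5go.pro; the surrounding contents of the real file will differ from this sketch.)

```
# q5go.pro -- switch from an optimized release build to a debug build
# so that valgrind reports and backtraces carry usable symbols.
CONFIG -= release
CONFIG += debug
```

The usual command-line equivalent is `qmake CONFIG+=debug CONFIG-=release`, which achieves the same thing without editing the file.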
The impression that I had was that the RAM kept on building up even if I saved and closed the files (but I will look at this now; I just finished compiling the debug build). I don't know if it matters how katago was configured, since katago is running on a different box? |
"All the RAM" is 10+GB (total RAM is 16GB). Also, the valgrind experiment is on hold for me at the moment because this box is broken (long story), so I can't install the glibc debug symbols and I need to troubleshoot that first. Sigh, sigh, sigh. |
It would be helpful to know if the RAM does go down if you close files. |
Intended to lower memory usage, as an experiment to see if that is what's causing the problem in issue #61.
I've made an experimental branch called test-lower-memory. This tries to discard state from board positions and recompute it as necessary. If you could test this to see if it helps with your memory consumption problems, we'll know if this is the right direction to investigate. It's not recommended for general use; there are very probably things that break. |
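(To make the trade-off concrete, here is a minimal sketch of the general discard-and-recompute idea: keep only the cheap per-node data and rebuild the heavy derived state on demand, trading CPU for memory. It assumes the derived state can be reconstructed from the moves alone; it is not the actual code on the test-lower-memory branch, and all names are illustrative.)

```cpp
// Sketch: a game-tree node keeps only the cheap data (the move) and
// rebuilds heavier derived state lazily when it is next needed.
// Illustrative assumption, not the real test-lower-memory code.
#include <memory>
#include <string>
#include <vector>

struct DerivedState {
    // Stand-in for whatever is expensive to keep around: group/liberty
    // maps, rendered variation data, analysis summaries, ...
    std::vector<int> liberties;
};

class PositionNode {
public:
    explicit PositionNode(std::string move) : m_move(std::move(move)) {}

    const DerivedState &state() {
        if (!m_state)
            m_state = std::make_unique<DerivedState>(recompute());
        return *m_state;
    }

    // Called when memory matters (e.g. after a game is closed or a
    // batch job finishes): drop the derived state, keep only the move.
    void discard_state() { m_state.reset(); }

private:
    DerivedState recompute() const {
        // A real program would replay the moves up to this node here.
        return DerivedState{std::vector<int>(361, 0)};
    }

    std::string m_move;
    std::unique_ptr<DerivedState> m_state;
};

int main() {
    PositionNode node("B D4");
    auto libs = node.state().liberties.size();  // forces a recompute
    node.discard_state();                       // frees the heavy part
    return libs == 361 ? 0 : 1;
}
```

The robustness concern mentioned later in the thread follows from exactly this pattern: every consumer of the derived state has to tolerate it being gone and trigger a recompute, or things break.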
I was unable to compile it; I came across this problem:
|
Also, I am unable to launch valgrind against q5go at the moment; I get the following error:
I am looking for what I need to install to make this error go away, but I have very little experience with C++ / Qt5 development, so I'm not totally sure how to find the Ubuntu packages I need (or what I need to install from source). |
Okay -- got the master branch compiled with debug, and valgrind running on it. I'll run batch analysis with valgrind running for, well, a few hours, and we'll see what that comes out to be. The lower-memory branch fails to build for me at the moment, and I'm not sure I know enough to fix that on my own, unfortunately. I will also let you know if I am able to notice a memory change when a game that went through batch analysis is saved and closed. |
Okay, so on the master branch, built with debug, when I ran batch analysis (two files selected, just enough to start the batch work, really), here's the end of the valgrind output. When I started this, memory usage was around 2.97GB. It's worth noting that when I saved and closed the first file after analysis was complete, memory usage went DOWN ... but not a lot. It went from 4.28GB to 4.26GB. So analysis for a single game took a little over one GB of RAM, which remained in use after I saved and closed the game.
|
Doesn't really look like any big leaks. I fixed up the branch and it should compile now. I had one build directory configured for a different C++ standard, which is why it worked for me. |
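(If the failure really was a C++ standard mismatch, one common way to make builds independent of any particular build directory's configuration is to pin the standard in q5go.pro; c++17 below is an assumption about which standard is needed, not something stated in this thread.)

```
# Request an explicit C++ standard for every build configuration,
# rather than relying on a build directory's defaults.
CONFIG += c++17
```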
It did disappear from the completed jobs list in the batch analysis window, yes! That freed the 0.02GB as I mentioned. I will try to compile the branch now. |
Alright, the "test-lower-memory" branch yielded the following result for batch analysis after one game:
It's just one game, so it's not comprehensive analysis, but 0.2GB is way better than 1.3GB! 200MB is probably still much more than is strictly required to keep in memory for any one game. Second game: 0.25GB used, ~0.01GB freed. Hope this helps! |
Okay, another thing to note for some additional information: I've now batch-analyzed 6 games and the RAM usage on my box peaked at 8.5GB, and after saving/closing the sixth game, it's back down to 8.43GB. It's not very scientific because I have done one or two other small things on the computer, but this is sufficiently promising that I'm going to take another shot at leaving this running all night, and hopefully wake up to another dozen or so games processed and a computer that hasn't crashed :D |
Woke up to an additional 37 games batch-analyzed, a non-crashed computer, and a RAM usage at 12.8GB. So it seems like analyzing 43 games total may lead to ~5GB used in RAM. Keeping this running (I have another 20 or so games in this batch), I'll let you know what shakes out, but this is obviously a monumental improvement! |
As a note, saving/closing games now, after 46 games, yields no noticeable change in RAM availability. And now a lot of measurements will go out the door because I'm going to start using this computer for other things too :D |
So I take it the branch has fixed the issue for you? Have you been using it for other purposes as well and has it been stable for you? I'm debating whether to use this solution for the main branch. The memory savings are nice, but I don't really like it from a robustness point of view. |
The branch is stable for general SGF editing and batch analysis. I haven't tried using q5go as a client for online play so I cannot comment on that. FWIW, having done all of the Dosaku games now, the next batch of games I'd like to work on is the Shusaku games, which is 470 games, so I'm happy about these memory changes, they at least will allow me to keep it going overnight. I understand if they're not perfect and you don't want to merge them in, though I would request that you keep looking for a solution that satisfies you more :) |
Hello!
I have just built the new version of the tool with the comment markers and it's exactly what I needed, thank you so much!
When doing batch analysis, I have the program running for hours, and the GTP logs build up over that time, eventually taking a lot of RAM. I would like a maximum size for the GTP logs, so that q5go only keeps some number of the most recent messages and I don't have to worry about running out of RAM on my local machine.
Thanks!