
Tuning the Linux memory allocator jemallocator to reduce initial memory usage #1672

Closed
dm9pZCAq opened this issue Sep 22, 2024 · 11 comments

Labels
cant reproduce · enhancement · good first issue · help wanted

Comments

@dm9pZCAq
Contributor

What system are you running Yazi on?

Linux X11

What terminal are you running Yazi in?

st 0.9.2

yazi --debug output

Yazi
    Version: 0.3.3 (631afd0 2024-09-22)
    Debug  : false
    OS     : linux-x86_64 (unix)

Ya
    Version: 0.3.3 (631afd0 2024-09-22)

Emulator
    Emulator.via_env: ("xterm-256color", "tmux")
    Emulator.via_csi: Ok(Unknown([]))
    Emulator.detect : Unknown([])

Adapter
    Adapter.matches: X11

Desktop
    XDG_SESSION_TYPE           : None
    WAYLAND_DISPLAY            : None
    DISPLAY                    : Some(":1")
    SWAYSOCK                   : None
    HYPRLAND_INSTANCE_SIGNATURE: None
    WAYFIRE_SOCKET             : None

SSH
    shared.in_ssh_connection: false

WSL
    WSL: false

Variables
    SHELL              : Some("/bin/zsh")
    EDITOR             : Some("vim")
    VISUAL             : None
    YAZI_FILE_ONE      : None
    YAZI_CONFIG_HOME   : None

Text Opener
    default: Some(Opener { run: "${EDITOR:-vi} \"$@\"", block: true, orphan: false, desc: "$EDITOR", for_: None, spread: true })
    block  : Some(Opener { run: "${EDITOR:-vi} \"$@\"", block: true, orphan: false, desc: "$EDITOR", for_: None, spread: true })

Multiplexers
    TMUX               : true
    tmux version       : 3.4
    ZELLIJ_SESSION_NAME: None
    Zellij version     : No such file or directory (os error 2)

Dependencies
    file             : 5.45
    ueberzugpp       : No such file or directory (os error 2)
    ffmpegthumbnailer: 2.2.2
    magick           : 7.1.1-25
    fzf              : 0.54.3
    fd               : 10.2.0
    rg               : 14.1.0
    chafa            : 1.12.5
    zoxide           : 0.9.4
    7z               : 17.05
    7zz              : No such file or directory (os error 2)
    jq               : 1.7.1

Did you try the latest nightly build to see if the problem got fixed?

Yes, and I updated the debug information above (yazi --debug) to the nightly that I tried

Describe the bug

For me, yazi uses around 85 MB of RAM in an empty directory; I saw in issue #1368 that for others it only takes about 20-30 MB.

Is it expected behavior for yazi to use so much memory?

(screenshot: yazi in htop)

Minimal reproducer

mkdir -p /tmp/empty-dir/another

# start yazi in empty dir
yazi $_

# in another terminal run
ps -p "$(pgrep yazi)" -o rss --no-headers

Anything else?

No response

@dm9pZCAq dm9pZCAq added the bug Something isn't working label Sep 22, 2024
@sxyazi
Owner

sxyazi commented Sep 22, 2024

I couldn't reproduce it on either my macOS or Linux VM. I created an empty directory with mkdir -p /tmp/empty-dir/another and ran yazi $_, then observed that the memory usage was around 19M and the CPU stayed at 0%.


My version

Yazi 0.3.3 (631afd0 2024-09-22)

Let me know if I missed any details in reproducing it. Also, are you using any custom config/plugins? Can it be reproduced with the default config (mv ~/.config/yazi ~/.config/yazi.bak)?

@sxyazi sxyazi added the waiting on op Waiting for more information from the original poster label Sep 22, 2024
@dm9pZCAq
Contributor Author

I don't have any custom config (there is no ~/.config/yazi).

Let me know if I missed any details in reproducing it.

Everything seems to be correct.


I also checked yazi from the release binaries (v0.3.3/yazi-x86_64-unknown-linux-musl.zip) and from nix (nix profile install nixpkgs/nixpkgs-unstable#yazi), and both use the same amount of RAM.

Additional info that might be useful:

  • I'm using Gentoo Linux with a custom kernel (but other programs use as much RAM as on any other distro)
  • I have 32 GB of RAM in total (could yazi's RAM usage depend on the total amount of RAM in the system?)
  • I just checked on a Debian server with 1 GB of RAM, and there it uses 18 MB

@github-actions github-actions bot removed the waiting on op Waiting for more information from the original poster label Sep 23, 2024
@sxyazi
Owner

sxyazi commented Sep 23, 2024

v0.3.3/yazi-x86_64-unknown-linux-musl.zip

Could you try the GNU build instead of musl to see if it uses the same amount of memory?

I using Gentoo linux and custom kernel

Are you using the official binary on Gentoo or the one from the system package manager? Some packages apply patches during installation.

may yazi RAM usage depends on total RAM in system?

I'm not quite sure. Yazi doesn't manage memory on its own; it uses jemallocator on Linux. But I haven't looked at its source code, so I don't know whether it allocates more memory on machines with more RAM.

Maybe you could try commenting this out

#[cfg(all(not(target_os = "macos"), not(target_os = "windows")))]
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;

to build with the system's default allocator and see what happens.

@sxyazi sxyazi added the waiting on op Waiting for more information from the original poster label Sep 23, 2024
@dm9pZCAq
Contributor Author

I checked it on Arch Linux (from the official repo, Version: 0.3.3 (Arch Linux 2024-09-05)) with the official kernel (6.6.52-1-lts) and it's the same, around 85 MB, so I think the custom kernel and musl libc are not relevant.


Maybe you could try commenting this out

Yes, it helped!

Now it uses 18 MB, not only in an empty dir but also in the yazi repo dir (with jemalloc enabled it used around 100 MB there).

As I understand it, jemalloc can be configured to use less RAM (maybe via TUNING.md, or some other options).

@github-actions github-actions bot removed the waiting on op Waiting for more information from the original poster label Sep 23, 2024
@sxyazi
Owner

sxyazi commented Sep 23, 2024

Nice! I'm glad we could narrow it down to an issue with jemallocator, but I'm not sure whether tweaking these parameters would impact performance. Also, since I can't reproduce the issue, it's hard for me to verify which parameters are truly effective.

If anyone familiar with jemallocator and able to reproduce the issue could test it and submit a PR, it would be greatly appreciated 🙏🏻

@sxyazi sxyazi changed the title high RAM usage Tuning the Linux memory allocator jemallocator to reduce initial memory usage Sep 23, 2024
@sxyazi sxyazi added enhancement New feature or request help wanted Extra attention is needed good first issue Good for newcomers cant reproduce The issue cannot be reproduced as described and removed bug Something isn't working labels Sep 23, 2024
@dm9pZCAq
Contributor Author

With jemalloc enabled, when running yazi like this:

_RJEM_MALLOC_CONF='narenas:1' yazi

it uses 28-29 MB of RAM for me.

jemalloc.3 is the documentation for narenas; as I understand it, memory usage depends on the number of CPUs.
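A rough sketch of how such settings could be compared (assuming a POSIX shell; note that _RJEM_MALLOC_CONF only affects binaries actually linked against jemalloc, such as yazi — `sleep` is used below purely as a runnable stand-in, so swap it for `yazi` to see a real difference):

```shell
#!/bin/sh
# Report the resident set size (KiB) of a command started under a
# given jemalloc runtime config.
measure() {
    conf=$1; shift
    _RJEM_MALLOC_CONF=$conf "$@" &
    pid=$!
    sleep 1                       # let the allocator set up its arenas
    rss=$(ps -o rss= -p "$pid" | tr -d ' ')
    kill "$pid" 2>/dev/null
    echo "$conf: ${rss} KiB"
}

measure 'narenas:1' sleep 30     # replace `sleep 30` with `yazi` to test
measure 'narenas:8' sleep 30
```

jemalloc's stats_print option (_RJEM_MALLOC_CONF='stats_print:true') can additionally be used to dump allocator statistics, including the arena count, when the process exits.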


If you're OK with this option, I think I can create a PR for it.

@sxyazi
Owner

sxyazi commented Sep 23, 2024

_RJEM_MALLOC_CONF='narenas:1'

Do you mean creating a shell wrapper for yazi and attaching this env variable when starting Yazi?

as I understand memory usage depends on number of CPUs

How many CPUs do you have? Could you see the memory usage with narenas:2?

@dm9pZCAq
Contributor Author

Do you mean creating a shell wrapper

No, I want to create .cargo/config.toml:

[env]
JEMALLOC_SYS_WITH_MALLOC_CONF = "narenas:1"

It bakes this string into the jemalloc library (and thus into yazi), so it becomes the default jemalloc config.

(JEMALLOC_SYS_WITH_MALLOC_CONF passes --with-malloc-conf to jemalloc's ./configure.)

How many CPUs do you have?

8

Could you see the memory usage with narenas:2?

It is around 30 MB.

What's strange is that with narenas:8 it takes around 45 MB, and with narenas:100 around 60 MB.
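This scaling matches jemalloc's documented default: unless narenas is set explicitly, jemalloc creates up to four arenas per CPU (see opt.narenas in jemalloc.3). A quick sanity check, assuming nproc is available:

```shell
#!/bin/sh
# jemalloc's default arena cap is 4 * number of CPUs, so an 8-core
# machine gets up to 32 arenas unless narenas is capped.
cpus=$(nproc)
echo "CPUs: $cpus, default jemalloc arena cap: $((cpus * 4))"
```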

@sxyazi
Owner

sxyazi commented Sep 25, 2024

Thanks for the info! I think I can accept a PR if that works for you.

For me, with the same 8-core CPU, after setting _RJEM_MALLOC_CONF='narenas:1', I noticed the memory dropped by 1.1M, from 18.6M to 17.5M.

@sxyazi
Owner

sxyazi commented Sep 26, 2024

Done in #1689, thanks @dm9pZCAq

@sxyazi sxyazi closed this as completed Sep 26, 2024

I'm going to lock this issue because it has been closed for 30 days. ⏳
This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please file a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 27, 2024