BTRFS not allocating data properly #936

Open
ETJAKEOC opened this issue Jan 13, 2025 · 1 comment

Comments

@ETJAKEOC

I'm playing around with a BTRFS RAID array built from two SSDs, one 1 TB and one 2 TB. On the 1 TB disk I created a 1 GB EFI partition and dedicated the rest to RAID, with the entire block device sdb as the second half of the array. I created the array with mkfs.btrfs -d raid0 -m raid1 -L PLEX /dev/sd{a2,b}, which seemed fine until about 1 TiB of data hit the disks; then the metadata profile filled up, all 2.00 GiB of it.

btrfs filesystem usage /
Overall:
    Device size:                   2.77TiB
    Device allocated:              2.72TiB
    Device unallocated:           51.96GiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                       1015.63GiB
    Free (estimated):              1.78TiB      (min: 1.75TiB)
    Free (statfs, df):             1.78TiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,RAID0: Size:2.72TiB, Used:1012.89GiB (36.41%)
   /dev/sdb        1.83TiB
   /dev/sda2     902.50GiB

Metadata,RAID1: Size:2.00GiB, Used:1.37GiB (68.61%)
   /dev/sdb        2.00GiB
   /dev/sda2       2.00GiB

System,RAID1: Size:32.00MiB, Used:256.00KiB (0.78%)
   /dev/sdb       32.00MiB
   /dev/sda2      32.00MiB

Unallocated:
   /dev/sdb       25.98GiB
   /dev/sda2      25.98GiB

Now I'm no BTRFS/RAID expert, but 2 GiB of metadata in a RAID1 profile across 1 TB + 2 TB drives sounds absolutely incorrect to me. RAID1 should duplicate the metadata across the disks, and even though they are mismatched, with one able to hold roughly 500 GB of data/metadata and the other roughly 1 TB of each, I should have far more than 2 GiB available for my metadata profile.

I'm not sure whether this is a bug in BTRFS/btrfs-progs, or whether I've just configured this array in some funky way that nobody has tried before, which seems unlikely to me. If anyone has ideas that can help me address this issue, or if I did in fact find a bug, I wanted to make this post and see what comes of it.

@Zygo

Zygo commented Jan 15, 2025

A striped profile for data is not a good choice for this arrangement of drives. A striped profile like raid0 will try to fill the smallest drives first, leaving no space for raid1 metadata allocation when the smallest drive is filled. You can use metadata_ratio to force allocation of raid1 metadata while the filesystem is not yet full, but that approach requires continuous maintenance and intervention to avoid running out of metadata space.
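The metadata_ratio workaround mentioned here is a btrfs mount option. A minimal sketch of using it, assuming the filesystem is mounted at /mnt/plex (the mount point and the ratio value are illustrative, not taken from the issue):

```shell
# Assumed mount point. metadata_ratio=N tells btrfs to allocate roughly one
# metadata chunk for every N data chunks, reserving metadata space early
# instead of only when the allocator runs out of room.
mount -o remount,metadata_ratio=4 /mnt/plex

# Check the effect on chunk allocation afterwards:
btrfs filesystem usage /mnt/plex
```

As noted above, this has to be kept in place (and possibly re-tuned) for the life of the filesystem, which is why it is maintenance-heavy compared to switching to a non-striped data profile.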

A non-striped data profile like single will fill the largest drive first, the same way raid1 does, leaving maximum space for raid1 allocation until the filesystem fills up all devices.

To recover this filesystem, convert the data to single profile. This should compact the data (2/3 of the space is allocated for data but unused) and release unallocated space in the process of changing the data profile to a non-striped one.
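A sketch of that conversion, again assuming a mount point of /mnt/plex:

```shell
# Convert data chunks from raid0 to the single (non-striped) profile.
# Balance rewrites each data chunk, packing the mostly-empty raid0 chunks
# together and returning the reclaimed space to the unallocated pool.
btrfs balance start -dconvert=single /mnt/plex

# A balance over ~1 TiB of data can take a while; check progress from
# another shell with:
btrfs balance status /mnt/plex
```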
