I'm playing around with a BTRFS RAID array set up across two SSDs, one 1 TB and one 2 TB. On the 1 TB disk I created a 1 GB EFI partition and dedicated the rest of the disk to a RAID partition, with the entire block device sdb being the second half of the array. I created the array with mkfs.btrfs -d raid0 -m raid1 -L PLEX /dev/sd{a2,b}. Everything seemed fine until about 1 TiB of data hit the disks, at which point the metadata profile filled up, with all 2.00 GiB used.
Now I'm no BTRFS/RAID expert, but only 2 GiB of metadata on a raid1 profile across 1 TB + 2 TB drives sounds absolutely incorrect to me. raid1 should duplicate the metadata across the disks; even though they are mismatched, so that one can hold roughly 500 GB of data/metadata while the other can hold 1 TB of each, I should still have far more than 2 GiB available for metadata.
I'm not sure whether this is a bug in BTRFS/btrfs-progs, or whether I've just configured this array in some funky way that nobody has tried before, which seems unlikely to me. If anyone has ideas that could help me address this, or if I really did find a bug, I wanted to make this post and see what happens.
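For anyone hitting the same wall, the allocation state described above can be confirmed before the filesystem goes read-only. A minimal sketch, assuming the array is mounted at /mnt/plex (a hypothetical mount point):

```shell
# Show per-device allocation and per-profile (data raid0, metadata raid1) usage.
# Watch the "Device unallocated" line: once the smaller device reaches 0,
# no new raid1 metadata chunk can be allocated.
btrfs filesystem usage /mnt/plex

# A more compact view of chunk allocation by profile:
btrfs filesystem df /mnt/plex
```

Both commands are read-only and safe to run on a full filesystem.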
A striped profile for data is not a good choice for this arrangement of drives. A striped profile like raid0 allocates chunks across all drives at once, so it exhausts the smallest drive first; since a raid1 metadata chunk needs unallocated space on two drives, no raid1 metadata can be allocated once the smallest drive is full. You can use metadata_ratio to force allocation of raid1 metadata while the filesystem is not yet full, but that approach requires continuous maintenance and intervention to avoid running out of metadata space.
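As a sketch of that workaround (mount point /mnt/plex is hypothetical), metadata_ratio is a mount option that forces a metadata chunk allocation for every N data chunk allocations:

```shell
# Remount, forcing roughly 1 metadata chunk per 10 data chunks.
# This pre-allocates raid1 metadata while both drives still have
# unallocated space, but must be kept up as the filesystem grows.
mount -o remount,metadata_ratio=10 /mnt/plex
```

Note this only affects allocations made while the option is active; it does not reclaim or rebalance existing chunks.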
A non-striped data profile like single will fill the largest drive first, the same way raid1 does, leaving maximum space for raid1 allocation until the filesystem fills up all devices.
To recover this filesystem, convert the data to the single profile. This should compact the data (2/3 of the allocated data space is currently unused) and release unallocated space in the process of changing the data profile to a non-striped one.
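The conversion is done with a balance filter. A minimal sketch, again assuming a hypothetical mount point of /mnt/plex:

```shell
# Rewrite all data chunks from raid0 to single; metadata stays raid1.
# This reads and rewrites every data chunk, so it can take hours on TBs of data.
btrfs balance start -dconvert=single /mnt/plex

# From another terminal, check progress:
btrfs balance status /mnt/plex
```

If the filesystem is so full that the balance itself cannot allocate a chunk, temporarily adding a spare device with btrfs device add, balancing, and then removing it is a common escape hatch.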