VSHA-536 Zero md Superblocks

This change ensures the RAIDs are eradicated and made unrecognizable
by erasing their superblocks, in order to resolve sync timing problems
between VShasta and metal.

The new logic explicitly stops the `md` devices, wipes their magic bits,
and then eradicates the `md` superblocks on each disk.
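
Condensed, the new wipe order in `pave()` looks roughly like this (a sketch of the `metal-md-lib.sh` change below; the device names are illustrative):

```shell
md="/dev/md124"    # a doomed RAID (illustrative)
disk="/dev/sda"    # a doomed member disk (illustrative)

# Stop each RAID after wiping its magic bits.
wipefs --all --force "$md"
mdadm --stop "$md"

# Zero the md superblocks on the disk and its partitions, then wipe them.
mdadm --zero-superblock "$disk"*
wipefs --all --force "$disk"*
```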

During testing of VSHA-536 there was some fiddling with how the RAIDs
were wiped to account for some peculiarities in the timing of how
`virtio` synced and updated the kernel. Those changes had been tested on
metal without any observed problems, but in my recent series of tests some
fatal inconsistencies were observed: `partprobe` was revealing `md`
handles, which caused `mdadm` to restart/resume RAIDs that had been
"nuked", and this in turn caused partitioning to fail.

This change also includes some minor fixes:
- The `wipefs` command for sd/nvme devices was not being redirected to the
  log file.
- The info printed when manually sourcing `/lib/metal-md-lib.sh` in a
  dracut shell is now left-justified and aligned on the colons.
- The extra `/sbin/metal-md-scan` call in `/sbin/metal-md-disks` is
  removed; it is no longer important and shouldn't be invoked on every
  loop that calls `/sbin/metal-md-disks`.
- `metal-kdump.sh` no longer invokes `/sbin/metal-md-scan` under
  `root=kdump` because that script is already invoked by the initqueue
  (see `metal-genrules.sh`).
- All initqueue calls to `metal-md-scan` have been changed to `--unique`
  and `--onetime` to ensure they never have an opportunity to run
  forever (as witnessed during a kdump test of the LiveCD); the new
  invocation is shown below.
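
For reference, the initqueue hook as it now appears in `metal-genrules.sh`:

```shell
# --onetime removes the job from the initqueue after it has run once, and
# --unique keeps an identical job from being queued more than once.
/sbin/initqueue --settled --onetime --unique /sbin/metal-md-scan
```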

A note about the dependency on `mdraid-cleanup`:

It turns out relying on `mdraid-cleanup` was a bad idea. The
`mdraid-cleanup` script only stops RAIDs; it does not remove any
superblock (or remove the RAIDs, for that matter). This means there
is a (small) possibility that the RAID and its members still exist
when the `partprobe` command fires. The window of time in which this
issue can occur is very small, and it varies. VShasta has not hit this
error in the 10-20 deployments it has done in the past 3-4 days.
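
A minimal sketch of the difference, assuming `/dev/md124` with member partition `/dev/sda1` (illustrative names):

```shell
# What mdraid-cleanup effectively does: stop the array, nothing more.
mdadm --stop /dev/md124
mdadm --examine /dev/sda1   # the md superblock is still on the member...
mdadm --assemble --scan     # ...so the array can simply be re-assembled
                            # (udev can do the same once partprobe fires)

# What this change does in addition: erase the metadata mdadm scans for.
mdadm --zero-superblock /dev/sda1
```
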
rustydb committed Feb 19, 2023
1 parent d8321b3 commit ba0b119
Showing 4 changed files with 20 additions and 24 deletions.
4 changes: 2 additions & 2 deletions 90metalmdsquash/metal-genrules.sh
@@ -29,7 +29,7 @@ command -v getarg > /dev/null 2>&1 || . /lib/dracut-lib.sh

case "$(getarg root)" in
kdump)
/sbin/initqueue --settled /sbin/metal-md-scan
/sbin/initqueue --settled --onetime --unique /sbin/metal-md-scan

# Ensure nothing else in this script is invoked in this case.
exit 0
@@ -59,7 +59,7 @@ case "${metal_uri_scheme:-}" in
;;
'')
# Boot from block device.
/sbin/initqueue --settled /sbin/metal-md-scan
/sbin/initqueue --settled --onetime --unique /sbin/metal-md-scan
;;
*)
warn "Unknown driver $metal_server; metal.server ignored/discarded"
3 changes: 1 addition & 2 deletions 90metalmdsquash/metal-kdump.sh
@@ -33,8 +33,7 @@ command -v _overlayFS_path_spec > /dev/null 2>&1 || . /lib/metal-lib.sh

case "$(getarg root)" in
kdump)
/sbin/initqueue --settled /sbin/metal-md-scan


# Ensure nothing else in this script is invoked in this case.
exit 0
;;
6 changes: 3 additions & 3 deletions 90metalmdsquash/metal-md-disks.sh
@@ -39,12 +39,12 @@ disks_exist || exit 1
# Now that disks exist it's worthwhile to load the libraries.
command -v pave > /dev/null 2>&1 || . /lib/metal-md-lib.sh

# Wipe; this returns early if a wipe was already done.
pave

# Check for existing RAIDs
/sbin/metal-md-scan

# Wipe; this returns early if a wipe was already done.
pave

# At this point this module is required; a disk must be created or the system has nothing to boot.
# Die if no viable disks are found; otherwise continue to disk creation functions.
if [ ! -f /tmp/metalsqfsdisk.done ] && [ "${metal_nowipe}" -eq 0 ]; then
31 changes: 14 additions & 17 deletions 90metalmdsquash/metal-md-lib.sh
@@ -71,7 +71,7 @@ boot_drive_authority=${boot_fallback#*=}
[ -z "$boot_drive_authority" ] && boot_drive_authority=BOOTRAID
case $boot_drive_scheme in
PATH | path | UUID | uuid | LABEL | label)
info "bootloader will be located on ${boot_drive_scheme}=${boot_drive_authority}"
printf '%-12s: %s\n' 'bootloader' "${boot_drive_scheme}=${boot_drive_authority}"
;;
'')
# no-op; drive disabled
@@ -88,7 +88,7 @@ esac
[ -z "${sqfs_drive_authority}" ] && sqfs_drive_scheme=SQFSRAID
case $sqfs_drive_scheme in
PATH | path | UUID | uuid | LABEL | label)
info "SquashFS file is on ${sqfs_drive_scheme}=${sqfs_drive_authority}"
printf '%-12s: %s\n' 'squashFS' "${sqfs_drive_scheme}=${sqfs_drive_authority}"
;;
*)
metal_die "Unsupported sqfs-drive-scheme ${sqfs_drive_scheme}\nSupported schemes: PATH, UUID, and LABEL"
@@ -100,7 +100,7 @@ oval_drive_scheme=${metal_overlay%%=*}
oval_drive_authority=${metal_overlay#*=}
case "$oval_drive_scheme" in
PATH | path | UUID | uuid | LABEL | label)
info "Overlay is on ${oval_drive_scheme}=${oval_drive_authority}"
printf '%-12s: %s\n' 'overlay' "${oval_drive_scheme}=${oval_drive_authority}"
;;
'')
# no-op; disabled
@@ -402,28 +402,27 @@ pave() {
doomed_raids="$(lsblk -l -o NAME,TYPE | grep raid | sort -u | awk '{print "/dev/"$1}' | tr '\n' ' ' | sed 's/ *$//')"
warn "local storage device wipe is targeting the following RAID(s): [$doomed_raids]"
for doomed_raid in $doomed_raids; do
wipefs --all --force "$doomed_raid" >>"$log" 2>&1
{
wipefs --all --force "$doomed_raid"
mdadm --stop "$doomed_raid"
} >>"$log" 2>&1
done

# 3. NUKE BLOCKs
# Wipe each selected disk and its partitions.
doomed_disks="$(lsblk -b -d -l -o NAME,SUBSYSTEMS,SIZE | grep -E '('"$metal_subsystems"')' | grep -v -E '('"$metal_subsystems_ignore"')' | sort -u | awk '{print ($3 > '$metal_ignore_threshold') ? "/dev/"$1 : ""}' | tr '\n' ' ' | sed 's/ *$//')"
warn "local storage device wipe is targeting the following block devices: [$doomed_disks]"
for doomed_disk in $doomed_disks; do
wipefs --all --force "$doomed_disk"*
{
mdadm --zero-superblock "$doomed_disk"*
wipefs --all --force "$doomed_disk"*
} >>"$log" 2>&1
done

# 4. Cleanup mdadm
# Now that the signatures and volume groups are wiped/gone, mdraid-cleanup can mop up and left
# over /dev/md handles.
{
lsblk
mdraid-cleanup
lsblk
} >>"$log" 2>&1
_trip_udev

# 5. Notify the kernel of the partition changes
# NOTE: This could be done in the same loop that we wipe devices, however mileage has varied.
# 4. Notify the kernel of the partition changes
# NOTE: This could be done in the same loop where we wipe devices, however mileage has varied.
# Running this as a standalone step has had better results.
for doomed_disk in $doomed_disks; do
{
@@ -433,8 +432,6 @@
} >>"$log" 2>&1
done

_trip_udev

warn 'local storage disk wipe complete' && echo 1 > "$METAL_DONE_FILE_PAVED"
{
mount -v
