VSHA-536 Zero md Superblocks #58

Merged

merged 2 commits into main from VSHA-536-metal-vshasta-wipe on Feb 20, 2023
Conversation

@rustydb (Contributor) commented Feb 19, 2023

Summary and Scope

Issue Type

  • Bugfix Pull Request

This change ensures the RAIDs are eradicated and made unrecognizable by erasing their superblocks, in order to resolve sync timing problems between VShasta and metal.

The new logic explicitly stops the `md` devices, wipes their magic bits, and then eradicates the `md` superblocks on each disk.
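
For context, here is a minimal sketch of that sequence; the device globs and options are illustrative assumptions, not the module's actual code:

```bash
# Illustrative only: device globs are assumptions, not the module's actual variables.

# Stop every assembled md array so nothing holds the member disks open.
mdadm --stop --scan

# Erase RAID/filesystem magic bits and the md superblock from each member disk
# so udev/mdadm can never recognize or reassemble the arrays.
for member in /dev/sd* /dev/nvme*n*; do
    [ -b "$member" ] || continue
    wipefs --all --force "$member" >/dev/null 2>&1
    mdadm --zero-superblock "$member" 2>/dev/null
done
```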

During testing of VSHA-536 there was some fiddling with how the RAIDs were wiped to account for peculiarities in the timing of how `virtio` synced and updated the kernel. The changes had been tested on metal without any observed problems, but in my recent series of tests some fatal inconsistencies were observed: `partprobe` was revealing `md` handles, which caused `mdadm` to restart/resume RAIDs that had been "nuked", and this in turn caused partitioning to fail.

This change also includes some minor fixes:

  • The `wipefs` command for sd/nvme devices was not getting piped to the log file.
  • The info printed when manually sourcing `/lib/metal-md-lib.sh` in a dracut shell is now left-justified and aligned by colon.
  • The extra `/sbin/metal-md-scan` call in `/sbin/metal-md-disks` is removed; it is no longer needed and shouldn't be invoked on every loop that calls `/sbin/metal-md-disks`.
  • `metal-kdump.sh` no longer invokes `/sbin/metal-md-scan` under `root=kdump` because that script is already invoked by the initqueue (see `metal-genrules.sh`).
  • All initqueue calls to `metal-md-scan` have been changed to `--unique` and `--onetime` so they never have an opportunity to run forever (as witnessed during a kdump test of the LiveCD); see the sketch after this list.
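
For reference, the intended shape of those initqueue registrations looks roughly like the following sketch; the `--settled` hook is an assumption here, and the exact arguments in the module may differ:

```bash
# Register the scan once, after udev settles; --unique avoids duplicate queue
# entries and --onetime removes the job after it has run.
/sbin/initqueue --settled --unique --onetime /sbin/metal-md-scan
```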

A note about the dependency on `mdraid-cleanup`:

It turns out relying on `mdraid-cleanup` was a bad idea. The `mdraid-cleanup` script only stops RAIDs; it does not remove any superblock (or remove the RAIDs, for that matter). This means there is a (small) possibility that the RAID and its members still exist when the `partprobe` command fires. The window in which this issue can occur is very small and varies: VShasta has not hit this error in the 10-20 deployments it has done over the past 3-4 days, and the 50+ boots I tested didn't hit it either, but the past 10 NCN boots I just attempted hit it almost every time.
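
To illustrate the difference, a hypothetical sequence (device names are made up, not from an actual run): stopping an array leaves its member superblocks intact, so a later scan can bring it right back.

```bash
# Hypothetical demonstration of why stopping alone is not enough.
mdadm --stop /dev/md127          # array disappears from /proc/mdstat ...
mdadm --examine /dev/sda2        # ... but the member superblock is still intact
partprobe                        # re-exposes md handles; mdadm can reassemble the array

# Zeroing the member superblocks closes that window for good.
mdadm --zero-superblock /dev/sda2
```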

Prerequisites

  • I have included documentation in my PR (or it is not required)
  • I tested this on an internal system (if yes, please include results or a description of the test)
  • I tested this on a vshasta system (if yes, please include results or a description of the test)

Idempotency

Risks and Mitigations

@rustydb rustydb requested a review from a team as a code owner February 19, 2023 03:40
@rustydb rustydb changed the title VSHA-536 Metal and VShasta Wipe Differnces VSHA-536 Zero md Superblocks Feb 19, 2023
@rustydb rustydb force-pushed the VSHA-536-metal-vshasta-wipe branch 7 times, most recently from 532fb4f to 2be44c7 on February 20, 2023 09:14
Make all URLs printed by dracut-metal-mdsquash contain a commit hash.

Remove the verbose `mount` and `umount` for prettier output.

Update the `README.adoc` file with a better/verbose explanation of the
wipe process.
@rustydb rustydb merged commit a2cc1ad into main Feb 20, 2023
@rustydb rustydb deleted the VSHA-536-metal-vshasta-wipe branch February 20, 2023 10:43