- Creates and manages disk storage stacks under Ubuntu, from simple (single partition) to complex (multiple drives/partitions, various file systems, encryption, RAID and SSD caching in almost any combination). Even the SSD cache can be a RAID.
- Installs a (basic) Ubuntu onto the storage stack it has created. With only a little manual work you get a full desktop or server installation.
- Clones or migrates an existing Ubuntu system and makes it bootable on a different storage stack.
This project started as a simple script for setting up encrypted storage for a few PCs and has come a long way since...
Please use this tool only if you are familiar with a Linux shell, disk partitioning, file systems and with the terms and concepts mentioned in this document.
StorageComposer might do things that you did not intend, or it might even malfunction badly and corrupt your data, therefore:
StorageComposer consists of a bash script and a few helper files. Currently it requires Ubuntu Bionic or one of its variants. If no such OS is installed on your PC or if you wish to set up a bare-metal system, boot from an Ubuntu Bionic live DVD first.
Download all `stcomp.*` files to the same directory and make sure that `stcomp.sh` is executable.
Before running StorageComposer, create the partitions that you intend to use for storage, caching and swapping, e.g. using `fdisk`, `gdisk`, `parted`, GParted, QtParted or similar tools.
Those partitions do not need to have any file system type assigned, nor do they have to be formatted – StorageComposer will take care of that. Consider these comments when partitioning SSDs that are to be used for caching.
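For instance, a minimal two-partition layout could be created with `parted`; the device name and sizes below are placeholders, adapt them to your drives:

```bash
# Hypothetical example: one partition for a root file system and one for
# swap on /dev/sde (this destroys all existing data on that drive!)
sudo parted --script /dev/sde \
  mklabel msdos \
  mkpart primary 1MiB 50GiB \
  mkpart primary 50GiB 100%
```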
StorageComposer must be run as `root`, i.e. with `sudo`, from a regular user account.
The system running StorageComposer is referred to as the “host system” or “host”; the storage managed by StorageComposer is called the “target system” or “target”.
Depending on the command line arguments, one of these tasks is performed:
`sudo stcomp.sh -b [-i|-c] [-d] [<config‑file>]`
Builds (`-b`) a new target system, mounts it at a directory on the host system and prepares to `chroot` into this directory. Existing data on the underlying target partitions is lost.
An internet connection is required; for details, see Does StorageComposer alter the system on which it is run?
If desired, StorageComposer can make your target bootable by:
- installing (`-i`) a basic Ubuntu system from the Ubuntu repositories, or
- cloning (`-c`) an existing local or remote Ubuntu system: this copies the directory tree of the source system to the target. Then the target is reconfigured according to its storage configuration.
Diagnostic messages (`-d`) can be displayed during building and also when booting the target.
`sudo stcomp.sh -m [-i|-c] [-y] [-d] [<config‑file>]`
Mounts (`-m`) a previously built target system and prepares it for `chroot`-ing if it contains `/bin/bash`.
Optionally, `-i` installs Ubuntu and `-c` clones an existing system before mounting; this will overwrite data on the target devices. `-y` mounts (but does not install or clone) without user interaction, and `-d` prints diagnostic messages.
`sudo stcomp.sh -u [-y] [<config‑file>]`
Unmounts a target system from its current mount point on the host system. Use `-y` for unattended unmounting.
`stcomp.sh -h` displays a help message.
Target configuration is specified interactively and is saved to/loaded from a `<config‑file>`. If omitted from the command line, `<config‑file>` defaults to `~/.stcomp.conf`.
A separate `<config‑file>` should be kept for each target system that is managed by StorageComposer.
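For instance, two targets might be managed side by side like this (the file names are made up):

```bash
# Build a backup box, keeping its configuration in its own file:
sudo ./stcomp.sh -b ~/backup-box.conf
# Later, mount a previously built media server without interaction:
sudo ./stcomp.sh -m -y ~/media-server.conf
```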
Configuration options are entered/edited strictly sequentially (sigh) in the order of the following sections. Each option defaults to whatever was last saved to the `<config‑file>`.
For each option, entering `?` shows what constitutes valid input.
Each target system must have a root file system and can have additional file systems. If the target is bootable then the root file system appears at `/` when it is running. If the target is mounted in the host then the root file system appears at the Target mount point. In either case, additional file systems are mounted relative to the root file system.
The root file system has to be configured first. More file systems can be added afterwards. For each file system, the following prompts appear in this order:
- `Partition(s) for root file system (two or more make a RAID):` or
  `Partition(s) for additional file system (two or more make a RAID, empty to continue):`
  Enter the partition(s) that make up this file system, separated by spaces. Leading `/dev/` path components may be omitted for brevity, e.g. `sde1` is equivalent to `/dev/sde1`.
- `RAID level:`
  If several partitions were specified then an MD/RAID will be built from these components. Enter the RAID level (`0`, `1`, `4`, `5`, `6` or `10`). The minimum number of components per level is 2, 2, 3, 3, 4 and 4, respectively. A RAID1 consisting of both SSDs and HDDs will prefer reading from the SSDs; this performs similarly to an SSD cache in `writethrough` mode.
- `Cache partition(s) (optional, two or more make a RAID):` and `Cache RAID level:`
  Partition(s) entered here become a cache device (using bcache) for this file system. If more than one partition was entered then you are prompted for the cache RAID level, and an MD/RAID will be built and used as the cache device. If the file system is in a RAID then the cache should be, too. The same partition or combination of partitions can act as a cache for other file systems, too. Swap space must not be cached.
- `Bucket size (64k...64M):`
  This is the allocation unit for cache space. It should be set to the erase block size of the cache SSD, or to the largest erase block size of the cache RAID components. `K`, `M`, `G` and `T` can be used as units. Optimum caching performance (supposedly?) depends on choosing the correct setting. SSD data sheets usually do not contain the erase block size, but this survey may be helpful. If in doubt, `64M` is the safest guess but may lead to poorer cache space utilization; on the other hand, selecting too small a bucket size decreases cache write performance.
- `LUKS-encrypted (y/n)?`
  Encrypts the partitions of this file system using dm-crypt/LUKS. The caching device will see only encrypted data. All encrypted file systems share the same LUKS passphrase (see section Authorization). If the target system is bootable and `/boot` is on an encrypted file system then only a conventional passphrase can be used for authorization since key files are not supported by GRUB2.
- `File system:`
  Select one of `ext2`, `ext3`, `ext4`, `btrfs` or `xfs`. After the root file system, `swap` is also available and will be used for hibernation.
- `Mount point:` or `Mount points (become top-level subvolumes with leading '@'):`
  Enter one or several (btrfs only) mount points, separated by spaces. For each btrfs mount point, a corresponding subvolume will be created. The root file system must have mount point `/`.
- `Mount options (optional):`
  Become effective for this file system whenever the target system is mounted at the Target mount point in the host and also when booting from the target.
If any target file system is encrypted then you need to specify how to authorize opening it. The same LUKS passphrase is used for all encrypted file systems; salting guarantees that each file system is still encrypted with a different key. The LUKS passphrase can be a conventional passphrase or the content of a file, see below.
- `LUKS authorization method:`
  This determines what is used as the LUKS passphrase for creating or opening an encrypted file system when building, mounting or booting a target system:
  1. a conventional passphrase that has to be typed in at the keyboard
  2. a key file with the following properties:
     - arbitrary (preferably random) content
     - size between 256 and 8192 bytes
     - can be on a LUKS-encrypted partition: when booting, the user will be prompted for the LUKS partition passphrase. Before building or mounting such a target in the host system, you must open the LUKS-encrypted partition yourself, e.g. using `cryptsetup` (see the sketch after this list) or your file manager.
  3. an encrypted key file with the following properties:
     - arbitrary content
     - decrypted size between 256 and 8192 bytes
     - GPG-encrypted
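As a minimal sketch of the manual step for method 2, assuming the key file lives on a LUKS-encrypted partition `/dev/sdb1` (a hypothetical device):

```bash
# Open the LUKS partition holding the key file and mount it;
# "keydev" is an arbitrary mapper name chosen for this example.
sudo cryptsetup open /dev/sdb1 keydev
sudo mkdir -p /mnt/keydev
sudo mount /dev/mapper/keydev /mnt/keydev
```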
You can create a key file yourself or have StorageComposer create one with random content, see below. Keeping that file on a removable device (USB stick, MMC card) provides two-factor authentication.
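If you prefer to create the key file yourself, something along these lines works; all paths are hypothetical, and StorageComposer performs an equivalent step when it creates one for you:

```bash
# 512 random bytes (the size must be between 256 and 8192 bytes):
sudo dd if=/dev/urandom of=/media/user/usbstick/path/to/keyfile bs=512 count=1
# For an encrypted key file (method 3), encrypt it symmetrically with GPG:
gpg --symmetric --output /media/user/usbstick/path/to/keyfile.gpg \
    /media/user/usbstick/path/to/keyfile
```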
- `Key file (preferably on a removable device):` or `Encrypted key file (preferably on a removable device):`
  Enter the absolute path of the key file. If the file does not exist you will be asked whether to have one created. Caution: always keep a copy of your key file offline in a safe place.
- `LUKS passphrase:` or `Key file passphrase:`
  Appears whenever a passphrase is required for LUKS authorization methods 1 and 3. When building a target, each passphrase must be repeated for verification. The most recent passphrase per `<config‑file>` and authorization method is remembered for five minutes; within that time, it does not have to be retyped and can be used for unattended mounting (`-m -y`).
These options affect all file systems.
- `Prefix to mapper names and labels (recommended):`
  The first mount point of each file system without the leading `/` (`root` for the root file system) serves as the volume label, MD/RAID device name and `/dev/mapper` name. The prefix specified here is prepended to these labels and names in order to avoid conflicts with names already existing in the host system.
- `Target mount point:`
  Absolute path of a host directory where the target system is mounted. For `chroot`-ing, these special host paths are bind-mounted automatically: `/dev`, `/dev/pts`, `/proc`, `/run`, `/sys`.
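Because these paths are bind-mounted for you, entering a mounted target is a single command, e.g. with a hypothetical mount point `/mnt/target`:

```bash
sudo chroot /mnt/target /bin/bash
```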
These prompts appear only if you install Ubuntu on the target system:
- `Hostname:`
  Determines the hostname of the target when it is running.
- `Username (empty to copy host user):`, `Login passphrase:`
  Define the user account to create on the target system. For convenience, the username and passphrase of the current host user (the one running `sudo stcomp.sh`) can be copied to the target.
Other settings are inherited from the host system:
- architecture (`x86` or `amd64`)
- distribution version (`Bionic` etc.)
- main Ubuntu package repository
- locale
- time zone
- keyboard configuration
- console setup (character set and font)
When cloning a directory which contains an Ubuntu system, the source directory tree is copied to the target. Then the target system is reconfigured so that it can boot from its storage. The source directory may be local or on a remote host.
Please note these requirements:
- All device file systems of the source system must be mounted at the source directory. An additional instance of StorageComposer may be helpful if the source storage stack is complex.
- Source subdirectories containing non-device file systems such as `proc`, `sysfs`, `tmpfs` etc. are not copied.
- The source system should be an Ubuntu release supported by StorageComposer, and it should not be running; otherwise, the target may end up in an inconsistent state. Consequently, the source directory should not be the root directory of the host system.
- If the source directory is remote then rsync and an SSH server must be installed at the remote host (see the example after this list).
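On a Debian/Ubuntu source host, the two prerequisites could be installed like this:

```bash
# Run on the remote source host:
sudo apt-get update
sudo apt-get install rsync openssh-server
```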
- `Remote host to clone from (empty for a local directory):`
  A hostname or an IP address is required here if you wish to clone a remote directory. Leaving this empty skips the following Remote... prompts.
- `Remote SSH port:`
  The port at which the remote SSH server is listening.
- `Remote username (required only if password authentication):`
  Enter the remote username for password-based authentication; the password prompt will appear later in the process. Leave this field empty if the host uses a non-interactive authentication method, e.g. public key authentication. The authenticated user needs sufficient privileges to read everything within the remote source directory.
- `Remote source directory:` or `Source directory:`
  The directory where the storage of the source system is mounted.
- `Subpaths to exclude from copying (optional):`
  A space-delimited list of files or directories that are not to be copied to the target. These paths are relative to the source directory but must nevertheless start with a `/`. The Target mount point is never copied (for those among us who cannot resist cloning a live system after all).
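A hypothetical answer to this prompt might look like:

```
/home/alice/.cache /var/tmp /swapfile
```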
Since the target storage configuration may differ from the source, please be aware of these restrictions:
- All filesystem-related packages that are required for the target storage are reconfigured from scratch. Excess file system packages copied from the source are purged from the target, see Which packages can be affected by cloning?
- Custom GRUB2 configuration options are lost.
- Swap space is not cloned, and the target swap space remains empty.
- Hard links that would cross file system boundaries on the target system are not preserved.
- Target device names and UUIDs will differ from the source; this can break existing scripts. Files required for booting, such as `/etc/fstab`, are adjusted by StorageComposer.
SSDs in a target system built by StorageComposer can be trimmed by `fstrim` even if they are holders of a RAID or an encrypted file system. Recent Ubuntu versions perform weekly batch trims by default (check `systemctl status fstrim.timer`) but disable realtime trimming for performance reasons (see the `discard` option of `mount`).
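To check the batch-trim schedule or to trim manually, the standard systemd and util-linux commands suffice:

```bash
systemctl status fstrim.timer   # is weekly batch trimming active?
sudo fstrim --verbose /         # trim the root file system right now
```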
bcache supports only realtime SSD trimming, which impacts performance and is therefore disabled. Leaving some 20% of SSD capacity unprovisioned should allow the firmware to do sufficient wear-levelling in the background.
Whenever a target is built (`-b`) or mounted (`-m`), a script for testing the target storage is created. On the host, it is located in the same directory as the `<config-file>` but has `-test.sh` appended to the name. On bootable targets a copy is saved as `/usr/local/sbin/stcomp-test.sh`.
The following tests run on all subvolumes, file systems and swap devices that are part of the target:
- Basic data verification: data is written randomly and then read back and verified.
- Sequential read/write performance
- Simulated file server performance: data is written and read randomly. Reads happen more frequently than writes.
Testing is non-destructive on file systems and subvolumes but creates several files that can be huge. To delete them, run the test script with option `-c`.
Swap space is overwritten by testing. If necessary, swapping is disabled automatically (`swapoff`) beforehand and re-enabled (`swapon`) afterwards.
- Testing from within the host file system: mount the target beforehand if necessary (`sudo stcomp.sh -m <config-file>`), then start the test: `sudo <config-file>-test.sh`
- Testing `chroot`-ed at the Target mount point or in the running target system: `sudo stcomp-test.sh`
Please disregard the warnings `Multiple writers may overwrite blocks that belong to other jobs` appearing at the beginning. In some cases, the `ETA` in the status line can also be misleading. Invoke the test script with option `-h` for help on limiting the script runtime and on other options.
The testing backend – the “Flexible I/O Tester” (fio) – is very powerful and produces detailed results. Please refer to section 6 (“Normal output”) of the fio Howto or to the fio manpage for an explanation of the output.
In order to add your own tests or modify existing ones, you need to be familiar with the fio job file format and parameters, see sections 4 (“Job file format”) and 5 (“Detailed list of parameters”) of the fio Howto or the fio manpage.
Custom tests can be added to a section marked as such near the end of the test script. Please make a copy of the modified script because the original `<config-file>-test.sh` will be overwritten whenever the target is mounted.
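As an illustration only, a custom job in fio's job file format might look like the sketch below; the job name and parameter values are made up, see the fio documentation for the full parameter list:

```
[custom-random-read]
; hypothetical job: 30 seconds of random reads on a 256 MiB test file
rw=randread
size=256m
runtime=30
time_based
```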
Target drives used in these examples start at `/dev/sde`. Drives are hard disks unless marked USB (removable USB device) or SSD (non-removable SSD device).
Only shown for completeness, these examples can also be achieved with other (more user-friendly) tools.
Under-exciting, just for starters...
File systems | ext4 |
---|---|
Mount points | / |
Partitions | sde1 |
Drives | sde |
Similar to what might be done easily with the Ubuntu Live DVD installer.
File systems | btrfs | | swap
---|---|---|---
Subvolumes | @ | @home |
Mount points | / | /home |
Partitions | sde1 | | sde2
Drives | sde | |
MBR | yes | |
Boots from SSD, has the OS on SSD and data and swap space on HDD.
File systems | ext4 | btrfs | | swap
---|---|---|---|---
Subvolumes | | @home | @var |
Mount points | / | /home | /var |
Partitions | sde1 | sdf1 | | sdf2
Drives | sde (SSD) | sdf | |
MBR | yes | | |
Everything is on a RAID1 except for swap space.
File systems | ext4 | | swap
---|---|---|---
Subvolumes | | |
Mount points | / | |
RAID arrays | RAID1 | |
Partitions | sde1 | sdf1 | sdg1
Drives | sde | sdf | sdg
MBR | yes | |
If a RAID1 has both SSD and HDD components, the SSDs are used for reading (if possible), and write-behind is activated on the HDDs. Performance is comparable to an SSD.
File systems | xfs | | swap
---|---|---|---
Subvolumes | | |
Mount points | / | |
RAID arrays | RAID1 | |
Partitions | sde1 | sdf1 | sdf2
Drives | sde | sdf (SSD) |
MBR | yes | yes |
Boots from SSD, has the OS on SSD and data and swap space on HDD. Everything is on RAID arrays, even swap space.
File systems | ext4 | | | btrfs | | swap |
---|---|---|---|---|---|---|---
Subvolumes | | | | @home | @var | |
Mount points | / | | | /home | /var | |
RAID arrays | RAID5 | | | RAID1 | | RAID0 |
Partitions | sde1 | sdf1 | sdg1 | sdh1 | sdi1 | sdh2 | sdi2
Drives | sde (SSD) | sdf (SSD) | sdg (SSD) | sdh | sdi | sdh | sdi
MBR | yes | | | | | |
Could be used for making backups of an encrypted system.
File systems | xfs |
---|---|
Subvolumes | |
Mount points | / |
LUKS encryption | yes |
RAID arrays | |
Partitions | sde1 |
Drives | sde |
MBR |
Boots from a RAID1, has data in a RAID5 and swap space in a RAID0.
File systems | ext2 | | btrfs | | | swap |
---|---|---|---|---|---|---|---
Subvolumes | | | @ | @home | @var | |
Mount points | /boot | | / | /home | /var | |
LUKS encryption | yes | | yes | | | |
RAID arrays | RAID1 | | RAID5 | | | RAID0 |
Partitions | sde1 | sdf1 | sdg1 | sdh1 | sdi1 | sde2 | sdf2
Drives | sde | sdf | sdg | sdh | sdi | sde | sdf
MBR | yes | yes | | | | |
Bootable system, caching also accelerates booting.
File systems | ext4 | swap
---|---|---
Subvolumes | |
Mount points | / |
LUKS encryption | |
Cache | sde1 (SSD sde) |
RAID arrays | |
Partitions | sdf1 | sdf2
Drives | sdf |
MBR | yes |
... as could be found in a NAS.
File systems | btrfs | | |
---|---|---|---|---
Subvolumes | @ | @media | |
Mount points | / | /media | |
LUKS encryption | yes | | |
Cache | sde1 (SSD sde) | | |
RAID arrays | RAID6 | | |
Partitions | sdf1 | sdg1 | sdh1 | sdi1
Drives | sdf | sdg | sdh | sdi
MBR | | | |
Boots from an SSD partition and uses another SSD partition of the same device as cache.
File systems | ext2 | ext4 | btrfs |
---|---|---|---|---
Subvolumes | | | @home | @var
Mount points | /boot | / | /home | /var
LUKS encryption | | yes | yes |
Cache | | sde1 (SSD sde) | |
RAID arrays | | | RAID1 |
Partitions | sde2 | sdf1 | sdg1 | sdh1
Drives | sde (SSD) | sdf | sdg | sdh
MBR | yes | | |
- Which Ubuntu hosts are supported?
- What about Debian hosts and targets?
- Why use an external tool for partitioning?
- How to install a complete Ubuntu desktop or server with StorageComposer?
- Which file systems can be created?
- How to debug booting after installing or cloning?
- Is hibernation supported?
- What does “SSD erase block size” mean and why should I care?
- Can I create a “fully encrypted” target system?
- Where is the key file expected to be located at boot time?
- Do I have to retype my passphrase for each encrypted file system during booting?
- How to avoid retyping my passphrase if `/boot` is encrypted?
- How to achieve two-factor authentication for encrypted file systems?
- Is two-factor authentication possible if `/boot` is encrypted?
- To which drives is the MBR written?
- Which packages can be affected by cloning?
- Why does StorageComposer sometimes appear to hang when run again shortly after creating a target with MD/RAID?
- Does StorageComposer alter the host system on which it is run?
- What if drive names change between successive runs of StorageComposer?
- How to deal with “Device is mounted or has a holder or is unknown”?
- Pressing Ctrl-C at an input prompt leaves my Ubuntu terminal in a mess
Bionic or later is strongly recommended as the host system. Some packages may behave differently or may not work properly at all in earlier versions.
Although it should not be too hard to adapt the scripts to Debian (jessie), this is still on the wish list.
Partitioning tools for Linux are readily available, such as `fdisk`, `gdisk`, `parted`, GParted and QtParted. Attempting to duplicate their functions in StorageComposer did not appear worthwhile.
Unfortunately, I could not make the Ubuntu Live DVD installer work with encrypted and cached partitions created by StorageComposer. Therefore, let StorageComposer install a basic Ubuntu on your target system first. Then `chroot` into your target or boot it and install one of these packages: `{ed,k,l,q,x,}ubuntu-desktop` or `ubuntu-server`. The result is similar but not identical to what the Ubuntu installer produces. Most notably, you will have to install localization packages such as `language-pack-*` by hand.
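For example, a Xubuntu desktop could be added like this; the language pack shown is just an example:

```bash
# Inside the booted or chroot-ed target:
sudo apt-get update
sudo apt-get install xubuntu-desktop language-pack-en
```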
Currently `ext2`, `ext3`, `ext4`, `btrfs`, `xfs` and swap space.
- Adding option `-d` to an install or clone run removes the boot splash screen and displays boot messages.
- Option `-dd` displays not only boot messages but also drops you into the `initramfs` shell before any file system is mounted.
- Use `dmesg` to review the boot messages later. To see only StorageComposer-related messages, use `dmesg | grep -E 'keyscript|bcache|mdraid'`.
The (largest) swap partition of your configuration will be set up for hibernation automatically. Use `sudo pm-hibernate` to test whether hibernation actually works on your hardware.
An “erase block” is the smallest unit that a NAND flash can erase (cited from this article). The SSD cache (bcache) allocates SSD space in erase-block-sized chunks in order to optimize performance. However, whether or not alignment with the erase block size actually affects SSD performance seems unclear, as the conflicting blog posts among the references indicate.
Yes, if the target system is not bootable and is used only for storage, e.g. for backups or for media.
If the target is bootable then the MBR and the boot partition remain unencrypted which makes them vulnerable to “evil maid” attacks. A tool like chkboot could be used to detect whether MBR or boot partition have been tampered with, but unfortunately only after the malware had an opportunity to run. Please note that such systems are frequently called “fully encrypted” although they are not.
Short but incomplete answer: at the same path on the same device as when the file system was built.
Extensive answer: by “key file path” we mean the path of the key file at build time, relative to the mount point of the key device. If, for instance, your key file was `/media/user/my_key_dev/path/to/keyfile` when the storage was built then your key device was mounted at `/media/user/my_key_dev` and the “key file path” is `/path/to/keyfile`.
At boot time, the following locations are scanned for a file at the “key file path”, in this order:
- The `initramfs`; if the key file was encrypted then you are prompted for a passphrase. Note that StorageComposer cannot create such a setup, this has to be done manually.
- All unencrypted partitions on all removable USB and MMC devices; again, a passphrase is requested for an encrypted key file.
- All LUKS-encrypted partitions on all removable USB and MMC devices; you are prompted for a passphrase for each such partition, and these partitions can contain only unencrypted key files.
If `/boot` is not on an encrypted file system then you need to enter your passphrase only once. The LUKS passphrase is derived from it according to the selected LUKS authorization method and is saved in the kernel keyring for all your encrypted file systems. The saved LUKS passphrase is discarded after 60 seconds; by that time, all encrypted file systems should be open.
On the other hand, if `/boot` is on an encrypted file system then your passphrase is requested twice: first for `/boot` by GRUB2 and then for the actual file system(s) by the `initramfs`. This is true even if there is only a (single) root file system. An additional inconvenience is that the keyboard is in US layout for the first passphrase and in localized layout for the second one. Although localized keyboards are possible in GRUB2, the process is cumbersome and the result is less than perfect.
Please note also that there is not much security to be gained from an encrypted `/boot` file system; even then, the MBR remains unencrypted and is still vulnerable to “evil maid” attacks.
Make a separate file system for `/boot` (e.g. `ext2`) on a LUKS-encrypted partition, using your passphrase. Encrypt the remaining file systems with a key file. Save the key file in the `initramfs` in `/boot`. StorageComposer cannot do all of this, some manual work is required.
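One way to place a key file into the initramfs is an initramfs-tools hook; the sketch below is untested and the hook name and paths are hypothetical:

```bash
#!/bin/sh
# Hypothetical hook: /etc/initramfs-tools/hooks/copy-keyfile
# Copies the key file into the initramfs at the expected "key file path".
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
. /usr/share/initramfs-tools/hook-functions
copy_file keyfile /root/path/to/keyfile /path/to/keyfile
```

Make the hook executable and rebuild the initramfs with `sudo update-initramfs -u` afterwards.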
Use a key file for LUKS authorization (method 2 or 3) and keep it on a removable device (USB stick, MMC card).
Yes, the solution is similar to How to avoid retyping my passphrase if `/boot` is encrypted? Just create your `/boot` file system on a LUKS-encrypted partition of a removable drive.
If the storage is made bootable then an MBR is written to all target drives making up the file system mounted at `/boot`, if such a file system exists. Otherwise, the MBR goes to all target drives of the root file system.
File systems, caches etc. that are unsupported by StorageComposer can never be part of the target storage configuration. Therefore, the following packages are purged from the target in order to get rid of their effects on the `initramfs`, `systemd` services, `udev` rules etc.: `f2fs-tools`, `nilfs-tools`, `jfsutils`, `reiserfsprogs`, `ocfs2-*`, `zfs-*`, `cachefilesd`, `flashcache-*`, `lvm2`.
Packages from the source that are also required by the target are reconfigured from scratch, i.e. they are purged and reinstalled only if needed: `mdadm`, `bcache-tools`, `cryptsetup`, `btrfs-tools`, `xfsprogs`.
Why does StorageComposer sometimes appear to hang when run again shortly after creating a target with MD/RAID?
Immediately after being created, the RAID starts an initial resync. During that time, RAID performance is quite low, notably for RAID5 and RAID6. Since StorageComposer queries all block devices (including RAIDs) repeatedly, this may cause a long delay until the initial Configuration summary or a response appears at the console.
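You can watch the resync progress via the standard MD interface:

```bash
watch cat /proc/mdstat    # shows resync percentage and estimated speed
```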
StorageComposer may change system settings temporarily while it is running and restores them when it terminates.
Depending on your storage configuration, one or more of these packages will be installed permanently (unless already present): `mdadm`, `smartmontools`, `cryptsetup`, `keyutils`, `gnupg`, `whois`, `bcache-tools`, `btrfs-tools`, `xfsprogs`, `fio`, `debconf-utils`, `openssh-client` and `debootstrap`.
Some packages copy files to your `initramfs`, install `systemd` services, add `udev` rules etc. Thus, additional block devices (notably RAID, LUKS and caching devices) may show up in your system. The `lsblk` command provides an overview.
Drive error and driver timeouts of the RAID components for your storage are adjusted on the host system and also on the bootable system. For details, see this blog, this bug report and this bug report. Running `dmesg | grep mdraid-helper` shows what was changed.
Changes on the host system persist until shutdown or hibernation.
On reboot, drives and partitions may be assigned different names, e.g. `/dev/sdd2` may become `/dev/sde2` etc. This does not affect StorageComposer as it identifies partitions by UUID in the `<config‑file>`. Partition names in the user interface are looked up by UUID and adapt automatically to the current drive naming scheme of your system.
Apart from the obvious (unknown device), this error message may appear if your build/mount configuration contains a device which is a currently active MD/RAID or bcache component (possibly as the result of a previous run of StorageComposer).
First, verify that the device in question does not belong to your host system and find out where it is mounted, if at all. Then run `sudo stcomp.sh -u` with a new configuration file. Specify that device for the root file system (no caching, no encryption, any file system type) and enter the proper mount point (or any empty directory). This will unlock the device.
Hopefully. MD/RAIDs can be stubborn when syncing.
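Independently of the workaround above, a few standard commands help to identify active holders; the device name below is a placeholder:

```bash
lsblk                  # block device tree, incl. RAID/bcache holders
cat /proc/mdstat       # currently active MD/RAID arrays
findmnt /dev/sdX1      # where (if anywhere) the device is mounted
```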
Unfortunately, this is a known bug in bash. Apparently, the patch has not yet made it into your Ubuntu distribution.
- GPT/UEFI support
- Debian support
- ZFS support
- Producing exactly the same result as the Ubuntu/Xubuntu/Kubuntu Live DVD installers
- Friendly user interface, possibly using Python+PySide or Qt
StorageComposer is licensed under the GPL 3.0. This document is licensed under the Creative Commons license CC BY-SA 3.0.
Credits go to the authors and contributors of these documents:
- Ext4 (and Ext2/Ext3) Wiki. kernel.org Wiki, 2016-09-20. Retrieved 2016-10-14.
- btrfs Wiki. kernel.org Wiki, 2016-10-13. Retrieved 2016-10-14.
- XFS. XFS.org Wiki, 2016-06-06. Retrieved 2016-10-14.
- Tridgell, Andrew; Mackerras, Paul et al.: rsync manpage. Ubuntu 18.04 LTS Manpage Repository. Retrieved 2019-02-05.
- SSH. Ubuntu Community Help Wiki, 2015-02-27. Retrieved 2016-10-28.
- Linux Raid Wiki. kernel.org Wiki, 2016-10-12. Retrieved 2016-10-14.
- Smith, Andy: Linux Software RAID and drive timeouts. The ongoing struggle (blog), 2015-11-09. Retrieved 2016-10-14.
- General debian base-system fix: default HDD timeouts cause data loss or corruption (silent controller resets). Debian Bug report #780162, 2015-03-09. Retrieved 2016-10-14.
- Default HDD block error correction timeouts: make entire! drives fail + high risk of data loss during array re-build. Debian Bug report #780162, 2015-03-10. Retrieved 2016-10-14.
- Overstreet, Kent: What is bcache? evilpiepirate.org, 2016-08-28. Retrieved 2016-10-14.
- Rath, Nikolaus: SSD Caching under Linux. Nikolaus Rath's Website (blog), 2016-02-10. Retrieved 2016-10-18.
- Bcache. Ubuntu Wiki, 2014-10-27. Retrieved 2016-11-05.
- Wheeler, Eric: [BUG] NULL pointer in raid1_make_request passed to bio_trim when adding md as bcache caching dev. Linux Kernel Mailing List Archive, 2016-03-25. Retrieved 2016-11-03.
- Flash memory card design. Linaro.org Wiki, 2013-02-18. Retrieved 2016-10-14.
- Smith, Roderick W.: Linux on 4 KB sector disks: Practical advice. IBM developerWorks, 2014-03-06. Retrieved 2016-10-13.
- Bergmann, Arnd: Optimizing Linux with cheap flash drives. LWN.net, 2011-02-18. Retrieved 2016-10-13.
- M550 Erase Block Size. Crucial community forum, 2014-07-18. Retrieved 2016-10-13.
- dm-crypt. Wikipedia, 2016-10-10. Retrieved 2016-10-14.
- LUKS. Wikipedia, 2016-05-16. Retrieved 2016-10-14.
- Saout, Jana; Frühwirth, Clemens; Broz, Milan; Wagner, Arno: cryptsetup manpage. Ubuntu 18.04 LTS Manpage Repository. Retrieved 2019-02-05.
- Ashley, Mike; Copeland, Matthew; Grahn, Joergen; Wheeler, David A.: The GNU Privacy Handbook: Encrypting and decrypting documents. The Free Software Foundation, 1999. Retrieved 2016-10-14.
- Multi-factor authentication. Wikipedia, 2016-10-12. Retrieved 2016-10-14.
- Zak, Karel: mount manpage. Ubuntu 18.04 LTS Manpage Repository. Retrieved 2019-02-05.
- Czerner, Lukas; Zak, Karel: fstrim manpage. Ubuntu 18.04 LTS Manpage Repository. Retrieved 2019-02-05.
- Axboe, Jens: fio HOWTO. GitHub, 2016-10-18. Retrieved 2016-10-14.
- Carroll, Aaron; Axboe, Jens: fio manpage. Ubuntu 18.04 LTS Manpage Repository. Retrieved 2019-02-05.
- Schneier, Bruce: “Evil Maid” Attacks on Encrypted Hard Drives. Schneier on Security, 2009-10-23. Retrieved 2016-10-14.
- Schmidt, Jürgen et al.: chkboot. GitHub, 2014-01-07. Retrieved 2016-10-14.
- KrisWebDev: How to change grub command-line (grub shell) keyboard layout? Ask Ubuntu (forum), 2016-03-28. Retrieved 2016-10-14.
- Thomas, Mickaël: read -e does not restore terminal settings correctly when interrupted if a trap is set. bug-bash Archives, 2014-09-08. Retrieved 2016-10-19.