Migrating a running Linux system to XFS from a smaller HDD to SSD

  • Tutorial

Hello, Habr! I present to you the Russian-language version of the article "Migrating CentOS system from HDD to smaller SSD on XFS filesystem" by Denis Savenko.

Cover Image

This article is a Russian-language version of a previously published English article, with minor adjustments for the audience. I'll say right away that I am not a Linux fanatic or even a professional system administrator, so it is quite possible that in places I used unusual, or even silly, solutions. I will be very grateful if you point them out in the comments so I can improve this guide, rather than simply downvoting the article. Thank you in advance.

I think I'm not the only one who at some point decided to buy an SSD-drive for a working system. In my case, it was a working system on CentOS 7 on my tiny home server. Next, I wanted to transfer it "as is" to a new disk. But, as it turned out, this is not so easy to do, given the following:

  • The new SSD was much smaller than the already installed HDD (seriously, SSDs are still quite expensive compared to disk drives).
  • Partitions on the previous drive were formatted with the xfs file system. This is not surprising, given that CentOS , starting with version 7, uses this file system by default (along with other RHEL-based systems, such as Red Hat Enterprise Linux 7 itself, Oracle Linux 7 , or Scientific Linux 7 ).
  • A working system should remain unchanged, including configuration, installed software, access rights, and other attributes of the file system.

Let me explain why the points above seriously complicate our task.

Firstly, if the new disk were the same size as the old one or larger, we could simply clone the data partitions outright. There are tons of utilities for this, such as dd , ddrescue , partclone , or even clonezilla . The only thing left to do afterwards would be to fix a couple of configuration files in the boot partition. If you use LVM for your partitions (which is also the default in CentOS 7 ), the task could be even simpler: the new disk is added to the volume group, the data is moved to the new physical volume with pvmove , and the old disk is then removed from the group. However, none of this works when the new physical medium is smaller.
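For reference, the pvmove route just mentioned might look like the sketch below, had the new disk been large enough. The device names ( /dev/sdb1 for the new disk, /dev/sda2 for the old LVM partition) are assumptions for illustration, not taken from the actual setup:

```shell
# Hypothetical sketch: migrating volume group "main" to an equal-or-larger
# disk with pvmove. NOT applicable to the smaller-SSD case in this article.
pvcreate /dev/sdb1          # initialize the new disk as an LVM physical volume
vgextend main /dev/sdb1     # add it to the existing volume group
pvmove /dev/sda2 /dev/sdb1  # move all extents off the old physical volume
vgreduce main /dev/sda2     # remove the old disk from the volume group
pvremove /dev/sda2          # wipe the LVM label from the old disk
```

The whole operation happens online, which is exactly why it is so attractive when disk sizes allow it.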

Secondly, if a file system other than xfs were in use, for example the quite popular ext4 , it would be possible to shrink the old partition down to the size of the new disk, perform any of the cloning steps described above, and call it a day. However, shrinking an xfs partition is impossible by design: with xfs you can only grow a file system, never shrink it.
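For comparison, here is roughly what that shrink looks like on ext4. This is a hypothetical sketch only (the device name and target size are made up); it is exactly the operation that xfs refuses to perform:

```shell
# Hypothetical ext4 sketch -- this is what xfs cannot do.
umount /dev/sda2          # the filesystem must be offline to shrink
e2fsck -f /dev/sda2       # resize2fs requires a clean filesystem check first
resize2fs /dev/sda2 100G  # shrink the filesystem to fit the new disk
# ...then shrink the partition itself and clone it with dd/partclone as usual.
```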

And last but not least, if the task were not to transfer a working, configured system with all of its metadata intact, it would be easier to simply install the system from scratch on the new disk, saving a couple of important files beforehand. That did not suit me for two reasons: firstly, my hair stood on end at the thought of repeating all the steps I had been performing for several days in a row, and secondly, it simply wouldn't be sporting. I was sure that with a proper understanding of the technical details, transferring a working system to another disk was an entirely doable task.

Nevertheless, I developed a detailed step-by-step plan for transferring a working system, taking all of the above limitations into account, and tested it in at least two different environments.

Stage 1. Preparation

Here is a list of what we need to transfer:

  1. Actually, a working system to be migrated. In my case, it was CentOS 7.4, but I'm sure the process will be the same for any other Linux system running on xfs .
  2. A Live CD or flash drive with any Linux distribution. For simplicity, I will use the same CentOS 7.4 Live CD. I prefer the version with Gnome, but that is a matter of taste. I chose this distribution because, firstly, I already had it, and secondly, it ships out of the box with the utilities we need for the transfer, in particular xfsdump . I will not dwell on how to create a bootable disk or USB flash drive from a Live CD image, as I assume a Habr reader can manage that without me.
  3. The amount of data on the old drive must not exceed the size of the new disk (whether it is an SSD or any other smaller disk). In my case, only about 10 GB was occupied, so I did not even have to think about what to get rid of.
  4. A cup of hot coffee.

Stage 2. Migration

Take a sip of coffee and begin the migration by booting the system from the prepared Live CD. Then open a terminal and switch to superuser mode with the command su -

From this point on, this guide assumes that all commands are executed as the superuser ( root )

Step 1. Enable remote access (optional)

I find it more convenient to have remote access to the machine being migrated, because then I can simply copy and paste pre-prepared commands into my PuTTY terminal. If you opened this guide in a browser on the target machine itself, feel free to proceed to the next step; you will not need anything described here.

In order to allow remote access, you need to set a password for the user root and run the SSH daemon:

passwd root # set a password for root; the Live CD user typically has none
systemctl start sshd

Now you can connect to the target machine with your SSH client (for example, PuTTY ).

Step 2. Partition a disk

You can use your favorite utility for this. For example, Gnome comes with a preinstalled disk management utility. gparted is also widely praised on the Internet; however, I deliberately used fdisk , because gparted , for one, simply did not recognize my NVMe SSD.

The new disk should be partitioned in the image and likeness of the old one. I am not a partitioning maniac, so my partitioning scheme was standard:

  • /boot - 1GB standard partition;
  • 4G swap partition;
  • / - the root partition, LVM volume group with the name "main", occupying the remaining space on the disk.

So, let's start partitioning the new disk:

lsblk # check what name the system assigned to the new disk; in my case it is nvme0n1
fdisk /dev/nvme0n1
n # add a new partition (for /boot)
p # primary partition
# leave the default (partition 1)
# leave the default (start at the beginning of the disk)
+1G # size of the /boot partition
# done
n # add a new partition (for the new LVM volume group)
p # primary partition
# leave the default (partition 2)
# leave the default (start right after the previous partition)
# leave the default (100% of the remaining space)
# done
a # set the "bootable" flag
1 # on the 1st partition
p # check that everything is in order
w # write the partition table to the disk
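As a side note, the same layout can be scripted non-interactively with sfdisk . This is my own sketch, not part of the original procedure, and it assumes the same /dev/nvme0n1 device:

```shell
# Sketch: recreate the same two-partition MBR layout in one shot.
# First line: a 1G bootable Linux partition; second line: the rest of the disk.
sfdisk /dev/nvme0n1 <<'EOF'
,1G,L,*
,,L
EOF
```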

The /boot partition must be a standard Linux partition, so we immediately create a file system on it:

mkfs.xfs /dev/nvme0n1p1 -f

And now we need to create an LVM structure on the second partition of the disk. We'll name the new LVM volume group newmain (we will rename it back to main later):

pvcreate /dev/nvme0n1p2 # create a new LVM physical volume on the partition
vgcreate newmain /dev/nvme0n1p2 # create a new LVM volume group and add the just-created physical volume to it
lvcreate -L 4G -n swap newmain # create a 4 GB logical volume for swap
lvcreate -l 100%FREE -n root newmain # create a logical volume for the root on all the remaining space

Now we are ready to create file systems on the new logical volumes:

mkfs.xfs /dev/newmain/root # create the file system for the root volume
mkswap -L swap /dev/newmain/swap # set up swap on the new volume
swapon /dev/newmain/swap

Step 3. Active phase

Before we begin, we make both LVM groups active (since we are working from the Live CD):

vgchange -a y main
vgchange -a y newmain

Create directories for mount points, and mount the old and new partitions of our disks to the system:

mkdir -p /mnt/old/boot
mkdir -p /mnt/old/root
mkdir -p /mnt/new/boot
mkdir -p /mnt/new/root
mount /dev/sda1 /mnt/old/boot
mount /dev/nvme0n1p1 /mnt/new/boot
mount /dev/main/root /mnt/old/root
mount /dev/newmain/root /mnt/new/root

Make sure that everything is in order with the command lsblk :

sda                8:0    0 931.5G  0 disk
├─sda1             8:1    0     1G  0 part /mnt/old/boot
└─sda2             8:2    0 930.5G  0 part
  ├─main-swap    253:0    0   3.6G  0 lvm  [SWAP]
  └─main-root    253:1    0 926.9G  0 lvm  /mnt/old/root
nvme0n1          259:0    0 119.2G  0 disk
├─nvme0n1p1      259:3    0     1G  0 part /mnt/new/boot
└─nvme0n1p2      259:4    0 118.2G  0 part
  ├─newmain-swap 253:5    0     4G  0 lvm  [SWAP]
  └─newmain-root 253:6    0 114.2G  0 lvm  /mnt/new/root

If you see something like this, you did everything right. And here the real magic begins: we are going to migrate our data with xfsdump . This utility is quite smart and knows about the internal structure of the xfs file system, which allows it to copy only the blocks actually occupied by data. This, in turn, is what lets us copy the data to a smaller disk in the first place, and it also greatly speeds up the transfer. So, let's dump the data with this utility and restore it to the new location on the fly:

xfsdump -l0 -J - /mnt/old/boot | xfsrestore -J - /mnt/new/boot
xfsdump -l0 -J - /mnt/old/root | xfsrestore -J - /mnt/new/root

A few words about the flags used:

  • -J tells xfsdump not to update the dump inventory (and xfsrestore not to consult it), which we do not need for a one-time transfer;
  • - (a dash in place of a file name) tells xfsdump and xfsrestore to use the standard streams stdout and stdin , respectively, instead of files.

This procedure may take some time (depending on the amount of your data). So now is the time for that cup of coffee prepared in advance; drink it before it gets cold.
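Before moving on, a quick sanity check does no harm (my own addition, not part of the original procedure): compare the used space on the old and new mounts, and compare the small /boot trees file by file:

```shell
df -h /mnt/old/root /mnt/new/root   # used space on both should roughly match
# /boot is small enough to compare file by file:
diff -qr /mnt/old/boot /mnt/new/boot && echo "boot partitions match"
```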

If everything went well, your data has been fully copied with all metadata preserved. Now you need to tweak a couple of configuration files and reinstall Grub2 on the new drive to make it bootable.

Step 4. Making the new disk bootable

The first thing to do is find out the UUIDs of the old and new boot partitions ( /boot ) with the blkid command:

/dev/nvme0n1p1: UUID="3055d690-7b2d-4380-a3ed-4c78cd0456ba" TYPE="xfs"
/dev/sda1: UUID="809fd5ba-3754-4c2f-941a-ca0b6fb5c86e" TYPE="xfs"

Assuming that sda1 is the old /boot partition and nvme0n1p1 is the new one, we replace the old UUID with the new one in the configuration files:

sed -i "s/809fd5ba-3754-4c2f-941a-ca0b6fb5c86e/3055d690-7b2d-4380-a3ed-4c78cd0456ba/g" /mnt/new/root/etc/fstab
sed -i "s/809fd5ba-3754-4c2f-941a-ca0b6fb5c86e/3055d690-7b2d-4380-a3ed-4c78cd0456ba/g" /mnt/new/boot/grub2/grub.cfg

These two commands will prepare your system configuration files for the new drive.

Now it's time to rename the LVM groups and unmount the drives:

umount /mnt/{old,new}/{boot,root}
vgrename -v {,old}main
vgrename -v {new,}main

The only thing left to do is reinstall the bootloader. This must be done using chroot :

mount /dev/main/root /mnt
mkdir -p /mnt/boot
mount /dev/nvme0n1p1 /mnt/boot
mount -t devtmpfs /dev /mnt/dev
mount -t proc /proc /mnt/proc
mount -t sysfs /sys /mnt/sys
chroot /mnt/ grub2-install /dev/nvme0n1

Step 5. The final touch

At this stage, all data should already be transferred and the new disk should be bootable. You just need to restart, remove the Live CD from the drive, and select the new disk as the boot device in the BIOS:

systemctl reboot -f

If something goes wrong and the system does not boot, you can always "roll back" to the old disk: simply boot from the Live CD again, rename the LVM groups back by running vgrename -v {,new}main and vgrename -v {old,}main , and then reboot once more.

This completes the mandatory part of the program: the working system has been successfully transferred to the new disk. The old disk can now be removed from the computer.

Using an old HDD as a media storage

If you, like me, after moving the system, want to use your old drive as a media storage, this is also easy.

First, repartition the old drive:

fdisk /dev/sda
d # delete partition 2
d # delete partition 1
n # new partition
p # primary partition
# partition 1
# from the beginning of the disk
# to the end of the disk
# done
p # check that everything looks good
w # write the new partition table to the disk

We will not create a file system on the disk directly. Instead, we will create a new LVM group and add this disk to it. This will make it easy to add new drives to the group later without extra hassle (the logical volume will stay the same):

pvcreate /dev/sda1 # a new LVM physical volume
vgcreate media /dev/sda1 # a new LVM volume group
lvcreate -l 100%FREE -n media1 media # a new logical volume spanning the entire added disk
vgchange -a y media # activate the new LVM group

mkfs.xfs /dev/media/media1 # create a file system on the logical volume

mkdir -p /var/media # create a directory for the mount point
mount /dev/media/media1 /var/media # mount the new volume
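The payoff of this structure shows up later. When you add a second disk (say /dev/sdb1 ; a hypothetical name for illustration), the media volume can be grown in place, and since xfs only ever grows, that is exactly the operation it is good at:

```shell
pvcreate /dev/sdb1                        # initialize the additional disk
vgextend media /dev/sdb1                  # add it to the media volume group
lvextend -l +100%FREE /dev/media/media1   # extend the logical volume over the new space
xfs_growfs /var/media                     # grow the xfs filesystem online, no unmount needed
```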

To make the mount persist across reboots, you should also add an entry for the mount point to /etc/fstab :

/dev/mapper/media-media1 /var/media                       xfs     defaults        0 0


Now we can safely say that we have successfully transferred a working xfs system to a new, smaller disk. As a bonus, the old disk is now serving as media storage.

UPDATE 04/02/2018

Since we migrated the system to an SSD, commenters rightly advised enabling periodic TRIM on the new system (TRIM is, roughly, a garbage collector for SSDs that keeps performance from degrading over time). To do this, run:

systemctl enable fstrim.timer

This enables a weekly TRIM run on any systemd-based system. If your system does not use systemd , there is an exhaustive guide from DigitalOcean covering almost any setup.
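As a quick non-systemd fallback, one common approach (my sketch, not from the original article) is a weekly cron script that calls fstrim directly. The path /usr/sbin/fstrim is an assumption; check yours with command -v fstrim :

```shell
#!/bin/sh
# Save as /etc/cron.weekly/fstrim and make it executable (chmod +x).
/usr/sbin/fstrim -v /   # trim the root filesystem; -v reports how much was discarded
```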

Please leave a comment if you find a mistake or know a better way to perform any of these steps. I hope you find this guide helpful!