Sunday 20th July 2025
Replacing disks on server
For some time I had been getting emails about SMART warnings on one of my servers, like these:
```
Device: /dev/sdb [SAT], 10 Offline uncorrectable sectors
Device: /dev/sdb [SAT], 78 Currently unreadable (pending) sectors
```
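(These look like smartd's notifications. For reference, the same counts can be checked by hand with smartctl, assuming smartmontools is installed; this is a sketch rather than output from my logs:)

```
# attributes 197/198 are the ones the emails were complaining about
smartctl -A /dev/sdb | grep -E 'Current_Pending_Sector|Offline_Uncorrectable'
```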
The numbers had been growing, so I knew I had to replace the disk: a Seagate Barracuda 7200.10 320GB, one of a pair (sda, sdb) mirrored using mdraid level 1. With sda not reporting any issues, the server was still fine for the moment. I bought a pair of trusty Crucial MX500 500GB SSDs as replacements.
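Since the arrays were still redundant there was no rush, but it's worth confirming the state of the mirrors before touching anything; a minimal check (sketched, not from my logs) would be:

```
# one-line summary of every mdraid array and any ongoing resync
cat /proc/mdstat
```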
The existing partitioning arrangement was as follows:
/dev/sda: 298.1 GiB (320 GB) with GPT partition table

| Partition | Start Sec | End Sec | Size | RAID device | Usage |
|-----------|-----------|-----------|-----------|-------------|-------------|
| sda1 | 2048 | 821247 | 400.0 MiB | md0 | - |
| sda2 | 821248 | 2459647 | 800.0 MiB | md1 | /boot (OSa) |
| sda3 | 2459648 | 4098047 | 800.0 MiB | md2 | /boot (OSb) |
| sda4 | 4098048 | 171870207 | 80.0 GiB | md3 | vgGKOS |
| sda5 | 171870208 | 423528447 | 120.0 GiB | md4 | vgGKdata |
| sda6 | 423528448 | 424347647 | 400.0 MiB | - | /boot/efi |
| - | | | | - | 95.7 GiB spare |

/dev/sdb: 298.1 GiB (320 GB) with GPT partition table

| Partition | Start Sec | End Sec | Size | RAID device | Usage |
|-----------|-----------|-----------|-----------|-------------|-------------|
| sdb1 | 2048 | 821247 | 400.0 MiB | md0 | - |
| sdb2 | 821248 | 2459647 | 800.0 MiB | md1 | /boot (OSa) |
| sdb3 | 2459648 | 4098047 | 800.0 MiB | md2 | /boot (OSb) |
| sdb4 | 4098048 | 171870207 | 80.0 GiB | md3 | vgGKOS |
| sdb5 | 171870208 | 423528447 | 120.0 GiB | md4 | vgGKdata |
| sdb6 | 423528448 | 424347647 | 400.0 MiB | - | /boot/efi (copy) |
| - | | | | - | 95.7 GiB spare |
The LVM volume groups were set up as follows:
| Volume Group | Logical Volume | Size | Mount Point (OSa) | Mount Point (OSb) |
|--------------|----------------|-----------|-------------------|-------------------|
| vgGKdata | srv | 40.00 GiB | /srv | /srv |
| vgGKdata | home | 40.00 GiB | /home | /home |
| vgGKdata | Free Space | 39.93 GiB | - | - |
| vgGKOS | root.A | 15.00 GiB | / | - |
| vgGKOS | root.B | 15.00 GiB | - | / |
| vgGKOS | tmp | 10.00 GiB | /tmp | /tmp |
| vgGKOS | swap | 16.00 GiB | - | - |
| vgGKOS | Free Space | 23.93 GiB | - | - |
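(Nothing below depends on it, but a listing much like the table above can be pulled straight out of LVM; a command sketch, not from my logs:)

```
lvs -o vg_name,lv_name,lv_size vgGKOS vgGKdata
vgs -o vg_name,vg_size,vg_free
```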
The first step was to open up the server and install the new SSDs. That introduced two new drives, sdc and sdd, with nothing on them. Since the new drives were bigger than the old ones, I took the opportunity to make the partitions bigger too, so that in the process of replacing the disks I'd set myself up to expand things later with very little effort. I used gdisk to partition both drives like this:
/dev/sdc: 465.8 GiB (500 GB) with GPT partition table

| Partition | Start Sec | End Sec | Size | Partition Code | RAID device | Usage |
|-----------|-----------|-----------|-----------|----------------|-------------|-------------|
| sdc1 | 2048 | 1128447 | 550.0 MiB | EF00 | md0 | - |
| sdc2 | 1128448 | 5322751 | 2.0 GiB | FD00 | md1 | /boot (OSa) |
| sdc3 | 5322752 | 9517055 | 2.0 GiB | FD00 | md2 | /boot (OSb) |
| sdc4 | 9517056 | 345061375 | 160.0 GiB | FD00 | md3 | vgGKOS |
| sdc5 | 345061376 | 764491775 | 200.0 GiB | FD00 | md4 | vgGKdata |
| sdc6 | 764491776 | 765618175 | 550.0 MiB | EF00 | - | /boot/efi |
| - | | | | | - | 100.7 GiB spare |
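I drove gdisk interactively, but for the record the same layout could be scripted with sgdisk (gdisk's non-interactive sibling). The sector numbers below are lifted from the table above, so treat this as a sketch of the layout rather than what I actually typed:

```
# create the six partitions on the first SSD
sgdisk -n 1:2048:1128447        -t 1:EF00 /dev/sdc
sgdisk -n 2:1128448:5322751     -t 2:FD00 /dev/sdc
sgdisk -n 3:5322752:9517055     -t 3:FD00 /dev/sdc
sgdisk -n 4:9517056:345061375   -t 4:FD00 /dev/sdc
sgdisk -n 5:345061376:764491775 -t 5:FD00 /dev/sdc
sgdisk -n 6:764491776:765618175 -t 6:EF00 /dev/sdc
# replicate the table onto the second SSD and randomise its GUIDs
sgdisk -R /dev/sdd /dev/sdc
sgdisk -G /dev/sdd
```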
md0 originally held an EFI system partition (an ESP mirrored on top of an mdraid array), but I decided a long time ago that that was a bad idea after reading this.
The next step was to add the new partitions to the existing RAID arrays:
```
# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Sat Mar 11 04:06:43 2017
        Raid Level : raid1
        Array Size : 409536 (399.94 MiB 419.36 MB)
     Used Dev Size : 409536 (399.94 MiB 419.36 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Jul 20 14:14:27 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:0
              UUID : 3189b1a3:f4ba415b:45d478c0:2e5a08c2
            Events : 1067

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

# mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1
mdadm: added /dev/sdc1
mdadm: added /dev/sdd1

# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Sat Mar 11 04:06:43 2017
        Raid Level : raid1
        Array Size : 409536 (399.94 MiB 419.36 MB)
     Used Dev Size : 409536 (399.94 MiB 419.36 MB)
      Raid Devices : 2
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Jul 20 14:39:25 2025
             State : clean
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 2

Consistency Policy : resync

              Name : localhost.localdomain:0
              UUID : 3189b1a3:f4ba415b:45d478c0:2e5a08c2
            Events : 1069

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

       2       8       33        -      spare   /dev/sdc1
       3       8       49        -      spare   /dev/sdd1

# mdadm /dev/md0 --grow --raid-devices=4
raid_disks for /dev/md0 set to 4

# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Sat Mar 11 04:06:43 2017
        Raid Level : raid1
        Array Size : 409536 (399.94 MiB 419.36 MB)
     Used Dev Size : 409536 (399.94 MiB 419.36 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Jul 20 14:40:00 2025
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 2

Consistency Policy : resync

    Rebuild Status : 50% complete

              Name : localhost.localdomain:0
              UUID : 3189b1a3:f4ba415b:45d478c0:2e5a08c2
            Events : 1080

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       3       8       49        2      spare rebuilding   /dev/sdd1
       2       8       33        3      spare rebuilding   /dev/sdc1

(some time later)

# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Sat Mar 11 04:06:43 2017
        Raid Level : raid1
        Array Size : 409536 (399.94 MiB 419.36 MB)
     Used Dev Size : 409536 (399.94 MiB 419.36 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Jul 20 14:40:05 2025
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:0
              UUID : 3189b1a3:f4ba415b:45d478c0:2e5a08c2
            Events : 1090

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       3       8       49        2      active sync   /dev/sdd1
       2       8       33        3      active sync   /dev/sdc1
```
I then repeated the process for arrays md1, md2, md3 and md4, adding the new partitions in the same fashion and waiting for each array to finish synchronizing the data onto the new drives.
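Scripted, that repeat step would amount to something like this (a sketch; on this box partition N+1 backs array mdN, and I actually ran the commands by hand):

```
for n in 1 2 3 4; do
    mdadm /dev/md$n --add /dev/sdc$((n+1)) /dev/sdd$((n+1))  # new partitions join as spares
    mdadm /dev/md$n --grow --raid-devices=4                  # promote the spares to active mirrors
    mdadm --wait /dev/md$n                                   # block until the resync finishes
done
```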
That just left the filesystems on the sixth partition of each drive (the EFI system partitions) to copy over, which I did using dd:
```
# dd if=/dev/sda6 of=/dev/sdc6 bs=1024
409600+0 records in
409600+0 records out
419430400 bytes (419 MB, 400 MiB) copied, 34.6945 s, 12.1 MB/s
# blkid /dev/sdc6
/dev/sdc6: SEC_TYPE="msdos" UUID="F0A9-5AF2" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="6d396dbf-f553-484c-8e35-7b798d584b3d"
# dd if=/dev/sdb6 of=/dev/sdd6 bs=1024
409600+0 records in
409600+0 records out
419430400 bytes (419 MB, 400 MiB) copied, 8.80212 s, 47.7 MB/s
```
Those filesystems will be reformatted the next time I install an updated OS on the server, so I didn't bother trying to grow them to fill the larger partitions at this point.
The /boot/efi/EFI/fedora/grub.cfg file points to the filesystem containing /boot, which was on md2 and still had the same UUID it always had, so there was nothing to change there.
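A quick way to verify that, assuming Fedora's usual stub grub.cfg that locates /boot by filesystem UUID (a sketch, with paths as on this server):

```
# the UUID grub.cfg searches for...
grep fs-uuid /boot/efi/EFI/fedora/grub.cfg
# ...should match the UUID of the /boot filesystem on md2
blkid -s UUID -o value /dev/md2
```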
The next step was to arrange for booting to take place from the new drives instead of the old ones. Here was the current situation:
```
# efibootmgr
BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 0000,0001,0002,0003,0004,0006,0007,0008,0009,0005
Boot0000* Fedora  HD(6,GPT,16233012-926f-4b61-9831-f0bbf24efd21,0x193e8800,0xc8000)/\EFI\FEDORA\SHIMX64.EFI
Boot0001* UEFI OS  HD(1,GPT,5584e986-3d0e-48d4-a261-f14498dfe02c,0x800,0xc8000)/\EFI\BOOT\BOOTX64.EFI0000424f
Boot0002* UEFI OS  HD(1,GPT,39ed59ca-e1da-4126-9e16-ee7b14ca4ec7,0x800,0xc8000)/\EFI\BOOT\BOOTX64.EFI0000424f
Boot0003* UEFI: IP4 Intel(R) Ethernet Connection (H) I219-LM  PciRoot(0x0)/Pci(0x1f,0x6)/MAC(e0071bff0827,0)/IPv4(0.0.0.0,0,DHCP,0.0.0.0,0.0.0.0,0.0.0.0)0000424f
Boot0004* UEFI: IP6 Intel(R) Ethernet Connection (H) I219-LM  PciRoot(0x0)/Pci(0x1f,0x6)/MAC(e0071bff0827,0)/IPv6([::],0,Static,[::],[::],64)0000424f
Boot0005* UEFI: Built-in EFI Shell  VenMedia(5023b95c-db26-429b-a648-bd47664c8012)0000424f
Boot0006* Fedora  HD(1,GPT,5584e986-3d0e-48d4-a261-f14498dfe02c,0x800,0xc8000)/\EFI\FEDORA\SHIM.EFI0000424f
Boot0007* Fedora  HD(6,GPT,16233012-926f-4b61-9831-f0bbf24efd21,0x193e8800,0xc8000)/\EFI\FEDORA\SHIM.EFI0000424f
Boot0008* Fedora  HD(1,GPT,39ed59ca-e1da-4126-9e16-ee7b14ca4ec7,0x800,0xc8000)/\EFI\FEDORA\SHIM.EFI0000424f
Boot0009* Fedora  HD(6,GPT,c3a13747-6fe1-49f3-b4a6-5770d6e2ee1f,0x193e8800,0xc8000)/\EFI\FEDORA\SHIM.EFI0000424f

# blkid /dev/sda6
/dev/sda6: SEC_TYPE="msdos" UUID="F0A9-5AF2" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="16233012-926f-4b61-9831-f0bbf24efd21"
# blkid /dev/sdc6
/dev/sdc6: PARTLABEL="EFI system partition" PARTUUID="6d396dbf-f553-484c-8e35-7b798d584b3d"
```
I added a new boot table entry to boot from sdc6:
```
# efibootmgr -c -d /dev/sdc -p 6 -L "Fedora SSD" -l '\EFI\FEDORA\SHIMX64.EFI'
BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 000A,0000,0001,0002,0003,0004,0006,0007,0008,0009,0005
Boot0000* Fedora  HD(6,GPT,16233012-926f-4b61-9831-f0bbf24efd21,0x193e8800,0xc8000)/\EFI\FEDORA\SHIMX64.EFI
Boot0001* UEFI OS  HD(1,GPT,5584e986-3d0e-48d4-a261-f14498dfe02c,0x800,0xc8000)/\EFI\BOOT\BOOTX64.EFI0000424f
Boot0002* UEFI OS  HD(1,GPT,39ed59ca-e1da-4126-9e16-ee7b14ca4ec7,0x800,0xc8000)/\EFI\BOOT\BOOTX64.EFI0000424f
Boot0003* UEFI: IP4 Intel(R) Ethernet Connection (H) I219-LM  PciRoot(0x0)/Pci(0x1f,0x6)/MAC(e0071bff0827,0)/IPv4(0.0.0.0,0,DHCP,0.0.0.0,0.0.0.0,0.0.0.0)0000424f
Boot0004* UEFI: IP6 Intel(R) Ethernet Connection (H) I219-LM  PciRoot(0x0)/Pci(0x1f,0x6)/MAC(e0071bff0827,0)/IPv6([::],0,Static,[::],[::],64)0000424f
Boot0005* UEFI: Built-in EFI Shell  VenMedia(5023b95c-db26-429b-a648-bd47664c8012)0000424f
Boot0006* Fedora  HD(1,GPT,5584e986-3d0e-48d4-a261-f14498dfe02c,0x800,0xc8000)/\EFI\FEDORA\SHIM.EFI0000424f
Boot0007* Fedora  HD(6,GPT,16233012-926f-4b61-9831-f0bbf24efd21,0x193e8800,0xc8000)/\EFI\FEDORA\SHIM.EFI0000424f
Boot0008* Fedora  HD(1,GPT,39ed59ca-e1da-4126-9e16-ee7b14ca4ec7,0x800,0xc8000)/\EFI\FEDORA\SHIM.EFI0000424f
Boot0009* Fedora  HD(6,GPT,c3a13747-6fe1-49f3-b4a6-5770d6e2ee1f,0x193e8800,0xc8000)/\EFI\FEDORA\SHIM.EFI0000424f
Boot000A* Fedora SSD  HD(6,GPT,6d396dbf-f553-484c-8e35-7b798d584b3d,0x2d913800,0x113000)/\EFI\FEDORA\SHIMX64.EFI
```
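(I only registered the sdc entry here; had I wanted a matching fallback for the copy of the ESP on the other SSD, it would have looked something like this hypothetical command:)

```
# hypothetical -- register sdd's EFI system partition as a fallback entry
efibootmgr -c -d /dev/sdd -p 6 -L "Fedora SSD2" -l '\EFI\FEDORA\SHIMX64.EFI'
```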
The next step was to reboot and make sure that the new boot entry worked, which it did. I then shut down the server, physically disconnected the two old drives, then brought the server back up again. That worked too, which was nice, although I did get a bunch of emails about degraded RAID arrays, which was expected.
I now had a bunch of mdraid arrays, each with two working members (the SSDs, now enumerated as sda* and sdb*) and two missing members, so I dropped each array back down to two devices, e.g.:
```
# mdadm /dev/md0 --grow --raid-devices=2
# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Sat Mar 11 04:06:43 2017
        Raid Level : raid1
        Array Size : 409536 (399.94 MiB 419.36 MB)
     Used Dev Size : 409536 (399.94 MiB 419.36 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Jul 20 16:10:52 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:0
              UUID : 3189b1a3:f4ba415b:45d478c0:2e5a08c2
            Events : 1094

    Number   Major   Minor   RaidDevice State
       3       8       17        0      active sync   /dev/sdb1
       2       8        1        1      active sync   /dev/sda1
```
Since the (smaller) partitions of the old drives were no longer part of the array, I could grow the array to fill the new partitions:
```
# mdadm --grow --size max /dev/md0
mdadm: component size of /dev/md0 has been set to 563184K
# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Sat Mar 11 04:06:43 2017
        Raid Level : raid1
        Array Size : 563184 (549.98 MiB 576.70 MB)
     Used Dev Size : 563184 (549.98 MiB 576.70 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Jul 20 16:24:36 2025
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 99% complete

              Name : localhost.localdomain:0
              UUID : 3189b1a3:f4ba415b:45d478c0:2e5a08c2
            Events : 1099

    Number   Major   Minor   RaidDevice State
       3       8       17        0      active sync   /dev/sdb1
       2       8        1        1      active sync   /dev/sda1
```
I waited for the resync to complete before progressing to the next one.
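The shrink-then-grow pass for the remaining arrays follows the same shape; sketched as a loop (again, I actually did each one by hand):

```
for n in 1 2 3 4; do
    mdadm /dev/md$n --grow --raid-devices=2  # drop back to a two-way mirror
    mdadm --grow --size max /dev/md$n        # expand into the bigger partitions
    mdadm --wait /dev/md$n                   # let the resync finish before moving on
done
```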
So, I now had md0 carrying an old EFI system partition that I wasn't bothered about, md1 and md2 with ext4 filesystems that needed growing to fit the new array sizes, and md3 and md4 containing LVM physical volumes that needed the same treatment:
```
# resize2fs /dev/md1
resize2fs 1.47.0 (5-Feb-2023)
Please run 'e2fsck -f /dev/md1' first.

# e2fsck -f /dev/md1
e2fsck 1.47.0 (5-Feb-2023)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Boot.A: 44/51296 files (22.7% non-contiguous), 87780/204784 blocks
# resize2fs /dev/md1
resize2fs 1.47.0 (5-Feb-2023)
Resizing the filesystem on /dev/md1 to 524284 (4k) blocks.
The filesystem on /dev/md1 is now 524284 (4k) blocks long.

# resize2fs /dev/md2
resize2fs 1.47.0 (5-Feb-2023)
Filesystem at /dev/md2 is mounted on /boot; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/md2 is now 524284 (4k) blocks long.

# pvdisplay /dev/md3
  --- Physical volume ---
  PV Name               /dev/md3
  VG Name               vgGKOS
  PV Size               <79.94 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              20463
  Free PE               6127
  Allocated PE          14336
  PV UUID               riajtP-OOK7-tK5g-Mypy-X3Qd-7uwE-JGLwWD

# pvresize -v /dev/md3
  Resizing volume "/dev/md3" to 335413248 sectors.
  Resizing physical volume /dev/md3 from 20463 to 40943 extents.
  Updating physical volume "/dev/md3"
  Archiving volume group "vgGKOS" metadata (seqno 5).
  Physical volume "/dev/md3" changed
  Creating volume group backup "/etc/lvm/backup/vgGKOS" (seqno 6).
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

# pvdisplay /dev/md3
  --- Physical volume ---
  PV Name               /dev/md3
  VG Name               vgGKOS
  PV Size               <159.94 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              40943
  Free PE               26607
  Allocated PE          14336
  PV UUID               riajtP-OOK7-tK5g-Mypy-X3Qd-7uwE-JGLwWD

# pvdisplay /dev/md4
  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vgGKdata
  PV Size               <119.94 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              30703
  Free PE               10223
  Allocated PE          20480
  PV UUID               ClxfPu-FRD6-XHC7-Eb6U-80E6-SD0p-02VbT9

# pvresize -v /dev/md4
  Resizing volume "/dev/md4" to 419299328 sectors.
  Resizing physical volume /dev/md4 from 30703 to 51183 extents.
  Updating physical volume "/dev/md4"
  Archiving volume group "vgGKdata" metadata (seqno 3).
  Physical volume "/dev/md4" changed
  Creating volume group backup "/etc/lvm/backup/vgGKdata" (seqno 4).
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

# pvdisplay /dev/md4
  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vgGKdata
  PV Size               <199.94 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              51183
  Free PE               30703
  Allocated PE          20480
  PV UUID               ClxfPu-FRD6-XHC7-Eb6U-80E6-SD0p-02VbT9
```
Now that my LVM volume groups had more free space in them, I was able to increase the size of some of the logical volumes and the filesystems that were on them:
```
# vgdisplay -v vgGKOS
  --- Volume group ---
  VG Name               vgGKOS
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               159.93 GiB
  PE Size               4.00 MiB
  Total PE              40943
  Alloc PE / Size       14336 / 56.00 GiB
  Free  PE / Size       26607 / 103.93 GiB
  VG UUID               fqtzpQ-wcO6-zKdc-FCq6-T7rv-uycT-rZ2Znz

  --- Logical volume ---
  LV Path                /dev/vgGKOS/root.A
  LV Name                root.A
  VG Name                vgGKOS
  LV UUID                uUA2SK-Dbl0-9UxV-rHBh-BxE5-PE9X-zZod0K
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-03-11 06:01:58 +0000
  LV Status              available
  # open                 0
  LV Size                15.00 GiB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

  --- Logical volume ---
  LV Path                /dev/vgGKOS/root.B
  LV Name                root.B
  VG Name                vgGKOS
  LV UUID                antmFD-DOat-FuYa-luud-GSUP-iOG0-0ZmiJk
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-03-11 06:02:05 +0000
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/vgGKOS/swap
  LV Name                swap
  VG Name                vgGKOS
  LV UUID                FqM6KO-cLko-BZdr-Nwq1-ye8Y-Iczm-CON3ob
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-03-11 06:02:24 +0000
  LV Status              available
  # open                 1
  LV Size                16.00 GiB
  Current LE             4096
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

  --- Logical volume ---
  LV Path                /dev/vgGKOS/tmp
  LV Name                tmp
  VG Name                vgGKOS
  LV UUID                1OHDnb-EwWq-Ccbi-JZ9Y-6Xq6-tH0a-pObqqZ
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-03-11 06:03:08 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3

  --- Physical volumes ---
  PV Name               /dev/md3
  PV UUID               riajtP-OOK7-tK5g-Mypy-X3Qd-7uwE-JGLwWD
  PV Status             allocatable
  Total PE / Free PE    40943 / 26607

# lvextend --size 25G /dev/vgGKOS/root.A
  Size of logical volume vgGKOS/root.A changed from 15.00 GiB (3840 extents) to 25.00 GiB (6400 extents).
  Logical volume vgGKOS/root.A successfully resized.
# lvextend --size 25G /dev/vgGKOS/root.B
  Size of logical volume vgGKOS/root.B changed from 15.00 GiB (3840 extents) to 25.00 GiB (6400 extents).
  Logical volume vgGKOS/root.B successfully resized.

# df /
Filesystem                1K-blocks    Used Available Use% Mounted on
/dev/mapper/vgGKOS-root.B  15375304 7901936   6670552  55% /

# resize2fs /dev/mapper/vgGKOS-root.A
resize2fs 1.47.0 (5-Feb-2023)
Please run 'e2fsck -f /dev/mapper/vgGKOS-root.A' first.

# e2fsck -f /dev/mapper/vgGKOS-root.A
e2fsck 1.47.0 (5-Feb-2023)
Root.A: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Feature orphan_present is set but orphan file is clean.
Clear<y>? yes
Root.A: ***** FILE SYSTEM WAS MODIFIED *****
Root.A: 115563/983040 files (0.4% non-contiguous), 2292449/3932160 blocks
# resize2fs /dev/mapper/vgGKOS-root.A
resize2fs 1.47.0 (5-Feb-2023)
Resizing the filesystem on /dev/mapper/vgGKOS-root.A to 6553600 (4k) blocks.
The filesystem on /dev/mapper/vgGKOS-root.A is now 6553600 (4k) blocks long.

# resize2fs /dev/mapper/vgGKOS-root.B
resize2fs 1.47.0 (5-Feb-2023)
Filesystem at /dev/mapper/vgGKOS-root.B is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 4
The filesystem on /dev/mapper/vgGKOS-root.B is now 6553600 (4k) blocks long.

# df /
Filesystem                1K-blocks    Used Available Use% Mounted on
/dev/mapper/vgGKOS-root.B  25692476 7901972  16557784  33% /
```
I was similarly able to extend the sizes of /srv and /home on vgGKdata. I rebooted to make sure that all was well, then shut the machine down again to finally remove the old drives from the system before putting the case back together and bringing the server back up.
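(Sketched with hypothetical target sizes, since I haven't shown my logs for that bit: lvextend's --resizefs flag rolls the lvextend and resize2fs steps into one, and works online for a mounted ext4 filesystem.)

```
# hypothetical sizes -- grow each LV and its filesystem in one step
lvextend --size 60G --resizefs /dev/vgGKdata/srv
lvextend --size 60G --resizefs /dev/vgGKdata/home
```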
All in all a great success!