Tuesday 17th January 2017
Storage Clean-Up
My build-system server akira looks like it'll be running out of space in the next few months, so I have purchased a pair of WD Red 3TB drives (which, as always, I shall be operating in a RAID1 mirror configuration). I intend them to replace an existing pair of 320GB Seagate Barracuda drives (/dev/sdc and /dev/sdd), partitioned as follows:
/dev/sdc: 298.1 GiB with GPT partition table
Seagate Barracuda 7200.10 (ST3320620AS), Serial # 9QF1FZJ3

| Partition | Start Sec | End Sec | Size | RAID device | Usage |
|-----------|-----------|---------|------|-------------|-------|
| sdc1 | 40 | 2087 | 1024.0 KiB | - | BIOS boot partition |
| sdc2 | 2088 | 1026087 | 500.0 MiB | /dev/md0 | Not in use |
| sdc3 | 1026088 | 2050087 | 500.0 MiB | /dev/md1 | Not in use |
| sdc4 | 2050088 | 169822247 | 80.0 GiB | /dev/md4 | VgOSzion |
| sdc5 | 169822248 | 589252647 | 200.0 GiB | /dev/md5 | VgServerDataZion |
| - | | | | - | 17.1 GiB spare |
These drives were inherited from my old builder machine zion (long since repurposed as a children's PC) but the only active use I am making of them now is the 200.0 GiB of data in the VgServerDataZion volume group. Fortunately I still had over 200 GiB of free space in that volume group on other drives, so it was a simple job to move the data off /dev/md5 and free up that space:
# pvmove /dev/md5
# vgreduce VgServerDataZion /dev/md5
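Before a `pvmove` like this, it's worth confirming that the volume group really does have enough free extents on its *other* physical volumes to absorb the data. A hedged sketch of that check (device and VG names match this particular setup; run as root):

```shell
# Show overall free space in the volume group: VFree must exceed the
# ~200 GiB currently allocated on /dev/md5 for the move to succeed
vgs VgServerDataZion

# Per-PV breakdown, so you can see where the moved extents will land
pvs -o pv_name,vg_name,pv_size,pv_free --select vg_name=VgServerDataZion
```

These are read-only reporting commands, so they are safe to run at any time; `pvmove` itself is also safe to interrupt and resume, but checking first avoids starting a move that cannot complete.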
Having done that, I could get rid of the underlying RAID device too. I got the UUID of the device:
# mdadm --detail /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Nov 20 15:25:31 2011
Raid Level : raid1
Array Size : 209714104 (200.00 GiB 214.75 GB)
Used Dev Size : 209714104 (200.00 GiB 214.75 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Jan 17 13:43:32 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : zion.city-fan.org:5
UUID : e5429a07:9e448a9d:5eb96fa5:cbdda65c
Events : 7020
Number Major Minor RaidDevice State
0 8 37 0 active sync /dev/sdc5
2 8 53 1 active sync /dev/sdd5
Then I commented out the corresponding entry in /etc/mdadm.conf:
ARRAY /dev/md/zion.city-fan.org:5 metadata=1.2 level=raid1 UUID=e5429a07:9e448a9d:5eb96fa5:cbdda65c
Then I stopped the device and cleared the RAID metadata from the underlying partitions to stop them being detected at the next boot:
# mdadm --stop /dev/md5
# mdadm --zero-superblock /dev/sdc5 /dev/sdd5
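After zeroing the superblocks, a quick verification confirms nothing will be auto-assembled at the next boot. A sketch, assuming the same device names as above (run as root):

```shell
# md5 should no longer appear in the list of active arrays
cat /proc/mdstat

# The member partitions should report no md metadata; expect
# "No md superblock detected on /dev/sdc5" (and likewise for sdd5)
mdadm --examine /dev/sdc5 /dev/sdd5
```

`mdadm --examine` inspects the on-disk metadata directly, so it catches the case where `--zero-superblock` was accidentally run against the wrong partition or missed one member.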
Now, VgOSzion is a hangover from the old machine and can go too:
# vgremove VgOSzion
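`vgremove` will prompt before destroying any logical volumes still present in the group, but it's prudent to look before leaping. A hedged sketch of the check (assuming this VG layout; run as root):

```shell
# List any logical volumes remaining in the group; ideally none of
# these hold data you still need
lvs VgOSzion

# Remove the group; vgremove prompts per-LV if any still exist
vgremove VgOSzion
```

If the underlying physical volume were staying in service, a follow-up `pvremove` would clear its LVM label too, but here the `--zero-superblock` on /dev/md4's members (below) destroys the device outright anyway.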
It was then time to remove the other RAID devices still using the drives:
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Sun Jun 3 17:08:25 2012
Raid Level : raid1
Array Size : 511988 (499.99 MiB 524.28 MB)
Used Dev Size : 511988 (499.99 MiB 524.28 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Jan 15 01:21:06 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : zion.intra.city-fan.org:0
UUID : 85639673:8bd04838:a53c8215:ca9ad125
Events : 737
Number Major Minor RaidDevice State
0 8 34 0 active sync /dev/sdc2
1 8 50 1 active sync /dev/sdd2
# mdadm --detail /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Tue Jan 22 11:14:18 2013
Raid Level : raid1
Array Size : 511936 (499.94 MiB 524.22 MB)
Used Dev Size : 511936 (499.94 MiB 524.22 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Jan 15 02:20:41 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : zion.intra.city-fan.org:1
UUID : 9f162bcf:6b7557dd:eed0f554:9241c60e
Events : 643
Number Major Minor RaidDevice State
0 8 35 0 active sync /dev/sdc3
1 8 51 1 active sync /dev/sdd3
# mdadm --detail /dev/md4
/dev/md4:
Version : 1.2
Creation Time : Sun Nov 20 15:25:10 2011
Raid Level : raid1
Array Size : 83884984 (80.00 GiB 85.90 GB)
Used Dev Size : 83884984 (80.00 GiB 85.90 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Jan 17 16:11:17 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : zion.city-fan.org:4
UUID : 953629e8:fe9f2d1e:570f883f:bd10d8c0
Events : 5369
Number Major Minor RaidDevice State
0 8 36 0 active sync /dev/sdc4
2 8 52 1 active sync /dev/sdd4
# vi /etc/mdadm.conf
(comment out/remove entries corresponding to the above UUIDs for `/dev/md0`, `/dev/md1` and `/dev/md4`)
ARRAY /dev/md/zion.intra.city-fan.org:0 metadata=1.0 level=raid1 UUID=85639673:8bd04838:a53c8215:ca9ad125
ARRAY /dev/md/zion.intra.city-fan.org:1 metadata=1.0 level=raid1 UUID=9f162bcf:6b7557dd:eed0f554:9241c60e
ARRAY /dev/md/zion.city-fan.org:4 metadata=1.2 level=raid1 UUID=953629e8:fe9f2d1e:570f883f:bd10d8c0
# mdadm --stop /dev/md0
# mdadm --stop /dev/md1
# mdadm --stop /dev/md4
# mdadm --zero-superblock /dev/sdc2 /dev/sdd2 /dev/sdc3 /dev/sdd3 /dev/sdc4 /dev/sdd4
And that was that. The drives are ready to be replaced.
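As a final sanity check before physically swapping the hardware, one might confirm that no arrays still reference the old disks and optionally clear any remaining signatures. A sketch under the assumption that /dev/sdc and /dev/sdd hold nothing else of value (run as root; `wipefs --all` is destructive):

```shell
# None of the old arrays (md0, md1, md4, md5) should appear here
cat /proc/mdstat

# Optionally clear all remaining filesystem/RAID/partition-table
# signatures, including the GPT itself; destructive, so only run this
# once you are certain nothing on these disks is still needed
wipefs --all /dev/sdc /dev/sdd
```

This guarantees the drives leave the machine with no stale metadata that could confuse a future owner's installer or auto-assembly logic.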
Fedora Project
Updated GeoIP-GeoLite-data to the January 2017 databases in F-24, F-25, Rawhide, EPEL-5 and EPEL-6
Updated perl-Modern-Perl to 1.20170117 in Rawhide:
- Cleaned up test suite
- Fixed Perl 5.25 failures (CPAN RT#114690)
Updated perl-Ref-Util to 0.113 in Rawhide:
- Fix bugtracker link
Local Packages
Updated GeoIP-GeoLite-data to the January 2017 databases as per the Fedora version
Updated perl-Ref-Util to 0.113 as per the Fedora version