Paul's Blog Entries for May 2009
Monday 4th May 2009
Fedora Project
Updated imlib in F-10 and F-11 as per Rawhide as part of my Gnome-1 library stack rebuild
Local Packages
New package perl-Compress-Raw-Bzip2 (2.019)
New package perl-Compress-Raw-Zlib (2.019)
These are the start of an effort to build perl-Archive-Tar, which has a substantial dependency tree.
Tuesday 5th May 2009
Local Packages
New package perl-Archive-Tar (1.48)
New package perl-Compress-Zlib (2.015)
New package perl-IO-Compress-Base (2.015)
New package perl-IO-Compress-Bzip2 (2.015)
New package perl-IO-Compress-Zlib (2.015)
New package perl-IO-String (1.08)
New package perl-IO-Zlib (1.09)
New package perl-Package-Constants (0.02)
Updated perl-ExtUtils-ParseXS to bump epoch to 1 as per the Fedora package
Updated perl-Module-Build to 0.33, which now requires perl(Archive::Tar), hence all the new packages
Wednesday 6th May 2009
Local Packages
New package perl-Regexp-Common (2.122)
Fedora Project
Updated gnome-libs in F-10 and F-11 as per Rawhide as part of my Gnome-1 library stack rebuild
RPM Fusion Project
Completed my first package review, for perl-IP-Country (RPM Fusion Bug #393), for Andreas Thienemann
Thursday 7th May 2009
Fedora Project
Updated bittorrent to fix the missing menu icon for the GUI client, and raised rel-eng ticket #1755 to request that the update be tagged for F-11
Updated libglade in F-10 and F-11 as per Rawhide to complete my Gnome-1 library stack rebuild; the question now is whether or not to include the updated packages in the imminent Fedora 11 release or make them a zero-day update instead (rel-eng ticket #1574)
Local Packages
New package perl-Pod-Readme (0.09)
New package perl-Test-Portability-Files (0.05)
Updated spamass-milter as per the Fedora update from 24th April
Friday 8th May 2009
Fedora Project
Updated libpng10 to 1.0.44 in Rawhide
Local Packages
New package perl-Archive-Zip (1.26)
New package perl-File-Which (0.05)
Updated davfs2 to 1.4.0
Updated libpng10 to 1.0.44
Updated perl-Module-Build to pull in perl(Archive::Zip) and perl(Pod::Readme) at both build and run time
Monday 11th May 2009
Fedora Project
Took ownership of perl-IO-Multiplex, which was about to be orphaned due to an inactive maintainer; it's a dependency of perl-Net-Server
Local Packages
Updated libspf2 to make the apidocs subpackage arch-independent on Fedora 10 onwards
Updated sendmail to make the cf and doc subpackages arch-independent on Fedora 10 onwards
Started the process of creating a Fedora 11 repo:
Built compat-wxPython26, gtorrentviewer, mgdiff, mod_fastcgi, php4-pcntl, php4-pcntl-gtk, python-crypto, python-fpconst, python-twisted, python-twisted-conch, python-twisted-core, python-twisted-lore, python-twisted-mail, python-twisted-names, python-twisted-news, python-twisted-runner, python-twisted-web, python-twisted-words, python-zope-filesystem, python-zope-interface, tcptraceroute, tzip, unrar, and xv for Fedora 11
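The arch-independent subpackage changes above rely on a feature that arrived with rpm 4.6 in Fedora 10: per-subpackage BuildArch. A minimal, hypothetical spec fragment (the subpackage name and conditional are illustrative, not taken from the actual libspf2 or sendmail specs) might look like:

```
%package apidocs
Summary:   API documentation for %{name}
Group:     Documentation
# noarch subpackages need rpm >= 4.6, i.e. Fedora 10 onwards
%if 0%{?fedora} >= 10
BuildArch: noarch
%endif
```

On older releases the subpackage simply stays arch-specific, so a single spec can serve all targets.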
Tuesday 12th May 2009
Fedora Project
Updated perl-Sysadm-Install to 0.28 for devel
Local Packages
Updated bittorrent (4.4.0) to add the icon back into the menu, as per last month's Fedora update
Updated curl as per Fedora to fix Bug #453612 (infinite loop while loading a private key) and a second bug (curl/nss memory leaks while using a client certificate)
Updated perl-Test-Output to 0.14 but only in Fedora 9 onwards as it now uses :seekable from File::Temp, which requires File::Temp 0.17 or later
Wednesday 13th May 2009
Fedora Project
Completed the task started back in March of getting perl-Net-SSH-Perl into EPEL for Xavier Bachelot, who needs it for perl-Net-SFTP:
Got in touch with Steven Pritchard, who OK-ed me to fix up and build for EPEL the perl-Crypt-DES_EDE3, perl-Class-ErrorHandler, perl-Convert-PEM, and perl-Convert-ASCII-Armour packages
Built my own packages for perl-Crypt-DSA, perl-Crypt-RSA, and perl-Net-SSH-Perl
Whilst I was at it, I noticed that the manpage for Crypt::RSA wasn't UTF-8 encoded, so I fixed that in devel too
Once I'd done all those, Xavier asked me to do the same for perl-Net-SFTP itself, which was actually another of Steve's packages, so I did, committing and building it after testing that it was functional from a CentOS 4 chroot
Local Packages
Updated perl-Test-Output to 0.15 (Fedora 9 onwards)
Built many more packages for the upcoming Fedora 11 repository: bw-whois, grepmail, perl-Algorithm-Diff, perl-Convert-BinHex, perl-Convert-TNEF, perl-Config-Tiny, perl-ConfigReader-Simple, perl-Crypt-GPG, perl-Crypt-SmbHash, perl-Devel-Symdump, perl-Digest-BubbleBabble, perl-Error, perl-Expect, perl-ExtUtils-CBuilder, perl-File-Find-Rule, perl-File-Remove, perl-FileHandle-Unget, perl-HTML-SimpleLinkExtor, perl-HTTP-SimpleLinkChecker, perl-HTTP-Size, perl-IO-Multiplex, perl-IO-stringy, perl-IPC-Run3, perl-Jcode, perl-LMAP-CID2SPF, perl-Mail-Mbox-MessageParser, perl-Mail-Sender, perl-Mail-Sendmail, perl-Mail-SPF, perl-Mail-SPF-Query, perl-Mail-SPF-Test, perl-Mail-SRS
I also did a full rebuild for all releases of perl-File-Remove, with a buildreq of perl(Test::CPAN::Meta) added, as that wasn't available when I originally built the package.
The build of perl-HTML-SimpleLinkExtor proved to be particularly tricky as the test suite mysteriously failed, and even a Fedora 10 build (which had worked when I created the Fedora 10 repo) failed the same way.
Thursday 14th May 2009
Local Packages
Updated dkms to 2.0.22.0
Updated php-Smarty to 2.6.23
Lots more rebuilds for Fedora 11: perl-IO-Socket-INET6, perl-MailTools, perl-MIME-tools, perl-MLDBM, perl-Module-Info, perl-Module-Signature, perl-Net-CIDR-Lite, perl-Net-DNS-Resolver-Programmable, perl-Net-IP, perl-Net-Server, perl-Number-Compare, perl-Package-Generator, perl-Pod-Escapes, perl-Pod-Simple, perl-PPI, perl-Sendmail-AccessDB, perl-Sendmail-PMilter, perl-String-Escape, perl-Sub-Uplevel, perl-Sys-Hostname-Long, perl-Test-ClassAPI, perl-Test-CPAN-Meta, perl-Test-Distribution, perl-Test-Exception, perl-Test-File, perl-Test-HTML-Tidy, perl-Test-Manifest, perl-Test-NoWarnings, perl-Test-Object, perl-Test-Pod, perl-Test-Pod-Coverage, perl-Test-Script, perl-Test-Tester, perl-Test-Warn, perl-Text-Diff, perl-Text-Glob, perl-Text-Template, perl-TimeDate, perl-Tree-DAG_Node, perl-Unicode-MapUTF8, perl-XML-NamespaceSupport, perl-XML-SAX, perl-XML-Simple, pptpconfig, torrentsniff, weblint, weblint++
Noticed that perl-Test-ClassAPI could do with a buildreq of perl(Test::CPAN::Meta) so added that and rebuilt for all distro releases.
Sunday 17th May 2009
Fedora Project
Updated mod_fcgid to make its "run" directory a subdirectory of wherever the symlink /etc/httpd/run points to, as it has changed in Fedora 11 from /var/run to /var/run/httpd and using the wrong directory causes httpd to fail to start after mod_fcgid is installed (Bug #501123)
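The symlink-following logic described above can be sketched with readlink; this is an illustrative fragment, not the actual scriptlet from the mod_fcgid package:

```
# Follow the /etc/httpd/run symlink (wherever this release points it)
# and create mod_fcgid's runtime directory beneath the target.
rundir="$(readlink -f /etc/httpd/run)/mod_fcgid"
mkdir -p "$rundir"
```

On Fedora 10 this resolves beneath /var/run, on Fedora 11 beneath /var/run/httpd, so the module works either way.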
Local Packages
Updated mod_fcgid as per the Fedora package
Updated perl-Parse-CPAN-Meta to 1.38
Updated perl-Test-Output to 0.16 (Fedora 9 onwards)
Updated php-Smarty to 2.6.24
Monday 18th May 2009
Local Packages
Updated curl to 7.19.5
Updated perl-Test-File to 1.26
Tuesday 19th May 2009
Local Packages
Updated city-fan.org-release (minor cleanups in repo files)
Published Fedora 11 repository, ready for the forthcoming Fedora 11 release
Thursday 21st May 2009
Local Packages
Updated perl-Parse-CPAN-Meta to 1.39
Friday 22nd May 2009
Fedora Project
Built milter-regex for EPEL 4 and 5 as I've started using it at work on CentOS 5
Local Packages
Updated perl-Test-File to 1.27
Tuesday 26th May 2009
Fedora Project
Updated mod_fcgid for Rawhide and F-11 to use /var/run/mod_fcgid to hold runtime state regardless of where the /etc/httpd/run symlink points to, as in F-11 onwards that's /var/run/httpd/, which is not readable by the apache user and therefore won't work with mod_fcgid (Bug #502273)
Local Packages
Updated fetchyahoo to 2.13.4
Updated mod_fcgid as per the Fedora version
Updated perl-Test-CPAN-Meta to 0.13
Updated php-Smarty to 2.6.25
Friday 29th May 2009
Hard Disk Upgrade
My build machine currently has 4x 320 GB drives, with the OS and some data on the first pair of drives and just data on the second pair. I'm getting close to being cramped for disk space (I use RAID 1 mirroring for virtually everything, so I've got around 600 GB of usable space rather than 1.2 TB), so I bought a pair of 1 TB drives to replace the "just data" pair of drives. Ideally I'd just have connected up the new drives, moved the data over, removed the old drives and been done with it, but that wasn't possible: I could only have 4 drives connected at a time and I couldn't remove the OS drives.
Fortunately I was using LVM over RAID 1 for all of the data so it was still possible to move the data over with minimal disruption. The two data drives were partitioned identically, with three RAID partitions:
# fdisk -l /dev/sdc

Disk /dev/sdc: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0007ee60

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1       19400   155830468+  fd  Linux raid autodetect
/dev/sdc2           19401       29127    78132127+  fd  Linux raid autodetect
/dev/sdc3           29128       38913    78606045   fd  Linux raid autodetect
Each of the three partitions on /dev/sdc was paired with the equivalent partition on /dev/sdd to create a RAID 1 md device, which were named /dev/md3, /dev/md4, and /dev/md5 respectively. Each RAID device was formatted as an LVM physical volume, with /dev/md3 assigned to volume group VgServerData, /dev/md4 assigned to volume group VgBackup, and /dev/md5 assigned to volume group VgBuildSys.
I shut down the machine, disconnected the existing sdd drive and connected up one of the new 1 TB drives in its place. On booting, md devices /dev/md3, /dev/md4, and /dev/md5 all came up with 1 of 2 active members as expected (i.e. the sdc part of the mirrors).
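A quick way to confirm that state is /proc/mdstat, where a two-member mirror running on one device shows up as [2/1] [U_]. The output below is illustrative of what that looks like, not captured from the machine:

```
# cat /proc/mdstat
md3 : active raid1 sdc1[0]
      155830400 blocks [2/1] [U_]
```

The underscore marks the missing (disconnected sdd) half of the mirror.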
The next step was to create bigger RAID arrays on the new drives. Only one of the new drives was connected at this point, so each new array had to be created in degraded mode, with a single member. I planned to create a 200 GB md device for VgServerData, another 200 GB md device for VgBackup, and a 300 GB md device for VgBuildSys:
# fdisk /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x079492c7.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

The number of cylinders for this disk is set to 121601.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-121601, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-121601, default 121601): +200G

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (26110-121601, default 26110):
Using default value 26110
Last cylinder, +cylinders or +size{K,M,G} (26110-121601, default 121601): +200G

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (52219-121601, default 52219):
Using default value 52219
Last cylinder, +cylinders or +size{K,M,G} (52219-121601, default 121601): +300G

Command (m for help): p

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x079492c7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       26109   209720511   83  Linux
/dev/sdd2           26110       52218   209720542+  83  Linux
/dev/sdd3           52219       91382   314584830   83  Linux

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x079492c7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       26109   209720511   fd  Linux raid autodetect
/dev/sdd2           26110       52218   209720542+  fd  Linux raid autodetect
/dev/sdd3           52219       91382   314584830   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
The next available md device numbers were 6, 7, and 8, so I used those:
# mdadm --create --level=1 --raid-devices=2 --auto=md /dev/md6 /dev/sdd1 missing
mdadm: array /dev/md6 started.
# mdadm --create --level=1 --raid-devices=2 --auto=md /dev/md7 /dev/sdd2 missing
mdadm: array /dev/md7 started.
# mdadm --create --level=1 --raid-devices=2 --auto=md /dev/md8 /dev/sdd3 missing
mdadm: array /dev/md8 started.
# mdadm --detail --scan
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=a6023eda:5dd9ef69:a77f13f3:6e25e139
ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=78e55309:7dba3918:1f3e29d4:75f5d52e
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=9ea76464:ea298b64:4dd98395:c2064a2b
ARRAY /dev/md4 level=raid1 num-devices=2 metadata=0.90 UUID=fb599c79:d8f72cc9:0fb29f9f:d716c262
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=451ff0fc:fb610ea3:d05d0076:442ef352
ARRAY /dev/md5 level=raid1 num-devices=2 metadata=0.90 UUID=29034664:e2924612:bc076052:789a4a40
ARRAY /dev/md6 level=raid1 num-devices=2 metadata=0.90 UUID=388c4994:20ab4206:eca7fe6b:1cd5a81d
ARRAY /dev/md7 level=raid1 num-devices=2 metadata=0.90 UUID=18e8faab:e7023e48:023064da:5e559004
ARRAY /dev/md8 level=raid1 num-devices=2 metadata=0.90 UUID=1ac85089:76dfcd50:06a2846b:331d458d
The next step was to format the md devices as LVM physical volumes:
# pvcreate /dev/md6
  Physical volume "/dev/md6" successfully created
# pvcreate /dev/md7
  Physical volume "/dev/md7" successfully created
# pvcreate /dev/md8
  Physical volume "/dev/md8" successfully created
I could then add the new RAID devices to the existing volume groups:
# vgextend VgServerData /dev/md6
  Volume group "VgServerData" successfully extended
# vgextend VgBackup /dev/md7
  Volume group "VgBackup" successfully extended
# vgextend VgBuildSys /dev/md8
  Volume group "VgBuildSys" successfully extended
There was now enough space in each volume group to move all the data from the physical volumes on the old disks (sdc-based devices /dev/md3, /dev/md4, and /dev/md5) to the physical volumes on the new disks (sdd-based devices /dev/md6, /dev/md7, and /dev/md8):
# pvmove -v /dev/md3
    Finding volume group "VgServerData"
    Archiving volume group "VgServerData" metadata (seqno 8).
    Creating logical volume pvmove0
    Moving 5120 extents of logical volume VgServerData/Softlib
    Moving 12800 extents of logical volume VgServerData/SrvMain
    Moving 7680 extents of logical volume VgServerData/lvhome
    Found volume group "VgServerData"
    Found volume group "VgServerData"
    Found volume group "VgServerData"
    Updating volume group metadata
    Creating volume group backup "/etc/lvm/backup/VgServerData" (seqno 9).
    ...
    Checking progress every 15 seconds
  /dev/md3: Moved: 0.8%
  /dev/md3: Moved: 1.7%
  /dev/md3: Moved: 2.6%
  /dev/md3: Moved: 3.5%
  ...
  /dev/md3: Moved: 100.0%
    ...
    Removing temporary pvmove LV
    Writing out final volume group after pvmove
    Creating volume group backup "/etc/lvm/backup/VgServerData" (seqno 29).
# pvmove -v /dev/md4
...
# pvmove -v /dev/md5
...
The result of this was that all of the LVM physical extents on the old disks were now "free":
# vgdisplay -v VgServerData
    Using volume group(s) on command line
    Finding volume group "VgServerData"
  ...
  --- Physical volumes ---
  PV Name               /dev/md3
  PV UUID               A8DleA-3JFv-pXHQ-8N4C-jlgG-acU6-7IT607
  PV Status             allocatable
  Total PE / Free PE    38044 / 38044

  PV Name               /dev/md6
  PV UUID               o26olS-jRIM-Gioi-UlPM-O784-IT9D-NgHlXm
  PV Status             allocatable
  Total PE / Free PE    51201 / 25601

# vgdisplay -v VgBackup
    Using volume group(s) on command line
    Finding volume group "VgBackup"
  ...
  --- Physical volumes ---
  PV Name               /dev/md4
  PV UUID               9XHb4D-1O0Y-vcef-aJ8E-Erei-phJV-hVi2FO
  PV Status             allocatable
  Total PE / Free PE    19075 / 19075

  PV Name               /dev/md7
  PV UUID               Z0vqDk-03yY-7Ggh-YMfG-7bEP-hcbC-29JjdI
  PV Status             allocatable
  Total PE / Free PE    51201 / 33281

# vgdisplay -v VgBuildSys
    Using volume group(s) on command line
    Finding volume group "VgBuildSys"
  ...
  --- Physical volumes ---
  PV Name               /dev/md2
  PV UUID               cs5TR9-jf6w-s182-prpF-87tb-5bB7-b5R70h
  PV Status             allocatable
  Total PE / Free PE    4687 / 0

  PV Name               /dev/md5
  PV UUID               7j30hi-YaAU-41eX-94a2-WT6G-z5RV-oSrR8y
  PV Status             allocatable
  Total PE / Free PE    2398 / 2398

  PV Name               /dev/md8
  PV UUID               jrNhED-Ucrg-qbY8-w86w-G87r-XCoj-Tj7YKE
  PV Status             allocatable
  Total PE / Free PE    9600 / 7386
I could now remove the md devices associated with the old disks from the LVM volume groups:
# vgreduce VgServerData /dev/md3
  Removed "/dev/md3" from volume group "VgServerData"
# vgreduce VgBackup /dev/md4
  Removed "/dev/md4" from volume group "VgBackup"
# vgreduce VgBuildSys /dev/md5
  Removed "/dev/md5" from volume group "VgBuildSys"
None of the data on the old md devices was now being used, so I could remove them from /etc/mdadm.conf, and add the new md devices there whilst I was at it. The output from mdadm --detail --scan was useful here.
# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=a6023eda:5dd9ef69:a77f13f3:6e25e139
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=451ff0fc:fb610ea3:d05d0076:442ef352
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=9ea76464:ea298b64:4dd98395:c2064a2b
#ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=78e55309:7dba3918:1f3e29d4:75f5d52e
#ARRAY /dev/md4 level=raid1 num-devices=2 metadata=0.90 UUID=fb599c79:d8f72cc9:0fb29f9f:d716c262
#ARRAY /dev/md5 level=raid1 num-devices=2 metadata=0.90 UUID=29034664:e2924612:bc076052:789a4a40
ARRAY /dev/md6 level=raid1 num-devices=2 metadata=0.90 UUID=388c4994:20ab4206:eca7fe6b:1cd5a81d
ARRAY /dev/md7 level=raid1 num-devices=2 metadata=0.90 UUID=18e8faab:e7023e48:023064da:5e559004
ARRAY /dev/md8 level=raid1 num-devices=2 metadata=0.90 UUID=1ac85089:76dfcd50:06a2846b:331d458d
I was now able to shut the machine down again, replace the second of the old drives with the second new drive and boot up again. Arrays /dev/md3, /dev/md4, and /dev/md5 were no longer present, and arrays /dev/md6, /dev/md7, and /dev/md8 all came up with 1 of 2 drives present as expected.
The next step was to partition the new /dev/sdc as per the new /dev/sdd:
# fdisk -l /dev/sdd

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x079492c7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       26109   209720511   fd  Linux raid autodetect
/dev/sdd2           26110       52218   209720542+  fd  Linux raid autodetect
/dev/sdd3           52219       91382   314584830   fd  Linux raid autodetect

# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd6140c4d.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

The number of cylinders for this disk is set to 121601.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-121601, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-121601, default 121601): 26109

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (26110-121601, default 26110):
Using default value 26110
Last cylinder, +cylinders or +size{K,M,G} (26110-121601, default 121601): 52218

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (52219-121601, default 52219):
Using default value 52219
Last cylinder, +cylinders or +size{K,M,G} (52219-121601, default 121601): 91382

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xd6140c4d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       26109   209720511   fd  Linux raid autodetect
/dev/sdc2           26110       52218   209720542+  fd  Linux raid autodetect
/dev/sdc3           52219       91382   314584830   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
I could then add the newly-created partitions to the RAID arrays and let them resync to each other in the background.
# mdadm --detail /dev/md6
/dev/md6:
        Version : 0.90
  Creation Time : Fri May 29 07:48:43 2009
     Raid Level : raid1
     Array Size : 209720384 (200.00 GiB 214.75 GB)
  Used Dev Size : 209720384 (200.00 GiB 214.75 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 6
    Persistence : Superblock is persistent

    Update Time : Fri May 29 18:37:51 2009
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 388c4994:20ab4206:eca7fe6b:1cd5a81d
         Events : 0.8346

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       0        0        1      removed
# mdadm /dev/md6 -a /dev/sdc1
mdadm: added /dev/sdc1
# mdadm --detail /dev/md6
/dev/md6:
        Version : 0.90
  Creation Time : Fri May 29 07:48:43 2009
     Raid Level : raid1
     Array Size : 209720384 (200.00 GiB 214.75 GB)
  Used Dev Size : 209720384 (200.00 GiB 214.75 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 6
    Persistence : Superblock is persistent

    Update Time : Fri May 29 18:38:59 2009
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 1% complete

           UUID : 388c4994:20ab4206:eca7fe6b:1cd5a81d
         Events : 0.8360

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       2       8       33        1      spare rebuilding   /dev/sdc1
# mdadm --detail /dev/md7
/dev/md7:
        Version : 0.90
  Creation Time : Fri May 29 07:48:51 2009
     Raid Level : raid1
     Array Size : 209720448 (200.01 GiB 214.75 GB)
  Used Dev Size : 209720448 (200.01 GiB 214.75 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 7
    Persistence : Superblock is persistent

    Update Time : Fri May 29 18:37:36 2009
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 18e8faab:e7023e48:023064da:5e559004
         Events : 0.46

    Number   Major   Minor   RaidDevice State
       0       8       50        0      active sync   /dev/sdd2
       1       0        0        1      removed
# mdadm /dev/md7 -a /dev/sdc2
mdadm: added /dev/sdc2
# mdadm --detail /dev/md7
/dev/md7:
        Version : 0.90
  Creation Time : Fri May 29 07:48:51 2009
     Raid Level : raid1
     Array Size : 209720448 (200.01 GiB 214.75 GB)
  Used Dev Size : 209720448 (200.01 GiB 214.75 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 7
    Persistence : Superblock is persistent

    Update Time : Fri May 29 18:38:43 2009
          State : clean, degraded
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

           UUID : 18e8faab:e7023e48:023064da:5e559004
         Events : 0.50

    Number   Major   Minor   RaidDevice State
       0       8       50        0      active sync   /dev/sdd2
       2       8       34        1      spare rebuilding   /dev/sdc2
# mdadm /dev/md8 -a /dev/sdc3
mdadm: added /dev/sdc3
... (wait a while)
# mdadm --detail /dev/md8
/dev/md8:
        Version : 0.90
  Creation Time : Fri May 29 07:48:59 2009
     Raid Level : raid1
     Array Size : 314584704 (300.01 GiB 322.13 GB)
  Used Dev Size : 314584704 (300.01 GiB 322.13 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 8
    Persistence : Superblock is persistent

    Update Time : Fri May 29 20:34:39 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 1ac85089:76dfcd50:06a2846b:331d458d
         Events : 0.5706

    Number   Major   Minor   RaidDevice State
       0       8       51        0      active sync   /dev/sdd3
       1       8       35        1      active sync   /dev/sdc3
I can now think about extending filesystems (using lvextend and resize2fs) on these volume groups at my leisure.
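That growth is a two-step job per filesystem: grow the logical volume, then grow the filesystem to fill it. A sketch, assuming ext3 on LVM (the LV name is taken from the pvmove output above, but the size is illustrative):

```
# Grow the logical volume by 50 GB, then grow the ext3 filesystem on it
# to match; resize2fs can do this online on 2.6 kernels.
# (The +50G figure is an example, not the actual plan.)
lvextend -L +50G /dev/VgServerData/SrvMain
resize2fs /dev/VgServerData/SrvMain
```

With no size argument, resize2fs simply expands the filesystem to the full size of the underlying device.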
It's worth noting that apart from the time when the machine was shutting down, turned off, or rebooting, all services on the server were functioning as normal, so actual downtime was minimal.
Local Packages
Updated fetchyahoo to 2.13.5