[NBLUG/talk] I feel dirty and degraded.... Software Raid Recovery

droman at romansys.com
Sat Nov 11 16:47:34 PST 2006


> On Sat, Nov 11, 2006 at 09:01:21AM -0800, Mark Street wrote:
>> Hi,
>>
>> I have a Fedora Core 4 box with 4 SCSI disks on an LSI controller
>> running
>> Software RAID 5.
>>
>> The machine lost drive 2 and I was not able to login to shut it down
>> properly.
>>
>> I removed the failed drive and tried to boot the machine on the 3
>> remaining
>> drives...  grub came up and the machine would start to boot but it would
>> get
>> to the point of mounting the array and would complain about the array
>> being
>> dirty and degraded and fail to mount ext3 partitions... instant kernel
>> panic.
>>
>> I acquired my new drive and booted the machine with Knoppix 5.  I can see
>> all
>> of my drives and their partitions (3) each for the existing drives in
>> the
>> array, the new drive has no partitions.  Does anyone have sage advice on
>> re-assembling this dirty and degraded array?  I have the mdadm command
>> on
>> Knoppix.
>>
>> mdadm --assemble /dev/md0 --level=5 --raid-devices=3 missing /dev/sdb

Usually the array will assemble and start in degraded mode on its own, but
if the superblock on any of the disks is marked dirty, the normal startup
assembly probably won't be able to bring the array up.
In that case, you can force it with the --run option, listing all of the
surviving member devices.
example:
  mdadm --assemble --run /dev/md0 /dev/sda /dev/sdc /dev/sdd
This should bring the array up in degraded mode.
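Once it's up, it's worth confirming the array state before touching anything else. A rough sketch (the member device names /dev/sda1, /dev/sdc1, /dev/sdd1 are placeholders -- substitute the actual RAID partitions on your surviving disks; all of this needs root):

```shell
# Force-assemble the degraded array from the surviving members
# (device names are examples -- use your actual member partitions):
mdadm --assemble --run /dev/md0 /dev/sda1 /dev/sdc1 /dev/sdd1

# Check that the array is running; a degraded RAID5 shows an
# underscore in the status brackets, e.g. [UU_]:
cat /proc/mdstat

# Fuller report, including which slot is missing or faulty:
mdadm --detail /dev/md0
```

If /proc/mdstat shows the array active (even degraded), you can mount the filesystems read-only first to sanity-check them before going further.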

After this, if the failed device is still visible to the kernel, you may
want to remove it from the array so that it isn't listed as faulty any
longer:
  mdadm --remove /dev/md0 /dev/sda
Once that is done, hot-add the new drive with:
  mdadm --add /dev/md0 /dev/sda
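Since your new drive has no partitions yet, you'll need to partition it to match the old layout before adding it. A sketch, assuming the new disk shows up as /dev/sdb and one healthy member is /dev/sda (both names are assumptions -- check dmesg first), using sfdisk from util-linux, which Knoppix ships:

```shell
# Copy the partition table from a healthy disk onto the new one
# (DOUBLE-CHECK the device names -- sfdisk overwrites the target):
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Hot-add the matching partition to the degraded array:
mdadm --add /dev/md0 /dev/sdb1

# Watch the resync progress until it reaches 100%:
watch cat /proc/mdstat
```

The rebuild on a RAID5 of that era can take hours; leave the box alone (and on a UPS if possible) until the resync finishes.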

I have found Linux software RAID to be very, very stable; I have been
running a RAID5 array for over 5 years (on the 2.4 kernel).

Make sure to do good backups before doing any of this!

Good luck!

Thanks,
    ---Dean Roman
    President, Roman Computer Systems
    www.romansystems.com


>>
>> ??  Then add the new drive once I can clean up the original existing
>> drives
>> with an mdadm /dev/md0 -a /dev/sda2?
>>
>> TIA
>> --
>> Mark Street, D.C., RHCE
>> CTO Alliance Medical Center
>> http://www.oswizards.com
>> http://www.alliancemed.org
>> --
>> "First they ignore you, then they ridicule you, then they fight you,
>> then you
>> win" - Gandhi
>> "If you want truly to understand something, try to change it" - Kurt
>> Lewin
>> --
>> Key fingerprint = 3949 39E4 6317 7C3C 023E  2B1F 6FB3 06E7 D109 56C0
>> GPG key http://www.oswizards.com/pubkey.asc
>>
>
> That more or less looks right to me. Honestly it's strange that the
> machine
> wouldn't boot with the loss of just one drive, RAID5 should be resilient
> against that sort of thing. Maybe try booting again and jot down the exact
> errors you are seeing? Perhaps you lost more than one drive.
>
> --
> Kyle Rankin
> NBLUG President
> The North Bay Linux Users Group
> http://nblug.org
> IRC: greenfly at irc.freenode.net #nblug
> kyle at nblug.org
>
> _______________________________________________
> talk mailing list
> talk at nblug.org
> http://nblug.org/cgi-bin/mailman/listinfo/talk
>




