DiscosRaid

Identify the disks correctly

Use the command:

# hdparm -I /dev/sdb | less
/dev/sdb:
ATA device, with non-removable media
        Model Number:       Hitachi HDS721010CLA332                 
        Serial Number:      JP2930HQ0WGG0H
        Firmware Revision:  JP4OA39C
        Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5; Revision: ATA8-AST T13 Project D1697 Revision 0b
.....
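
The model and serial number also appear in the udev symlinks under /dev/disk/by-id, which makes it easy to match a device name to a physical disk. The link name below is illustrative, built from the model and serial above:

# ls -l /dev/disk/by-id/ | grep sdb
ata-Hitachi_HDS721010CLA332_JP2930HQ0WGG0H -> ../../sdb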

Add a new array with 2 new disks

  • Partition both disks with fdisk using the "Linux raid autodetect" partition type (FD); see the sketch after this list.
  • Create the new array from the new partitions; double-check the RAID device mdX and the sdXX partitions.
    # mdadm --create --verbose /dev/md3 --level=1 --raid-devices=2  /dev/sdc1 /dev/sdd1
    
  • Format the new array as ext3
    # mkfs.ext3 /dev/md3 
    
  • Add the new array's configuration to /etc/mdadm/mdadm.conf
    # mdadm --detail --scan | grep md3 >> /etc/mdadm/mdadm.conf
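    The appended line will look roughly like this (UUID illustrative):
    ARRAY /dev/md3 level=raid1 num-devices=2 UUID=...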
    
  • It is a good idea to reboot the server to verify that the array comes back up correctly after a reboot.
  • Add the mount point to /etc/fstab with the correct UUID.
    # blkid | grep md3
    /dev/md3: UUID="fcecd711-924d-4193-8533-3f7b3ae8bcc7" TYPE="ext3" 
    
An example /etc/fstab entry:
    UUID=fcecd711-924d-4193-8533-3f7b3ae8bcc7      /home   ext3    defaults        0       2
    
  • It is a good idea to reboot the server to verify that the mount point comes back correctly after a reboot.
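
A minimal fdisk session for the partitioning step above, assuming the new disks are /dev/sdc and /dev/sdd (repeat for the second disk; prompts vary slightly between fdisk versions):

# fdisk /dev/sdc
n        <- new partition
p        <- primary
1        <- partition number 1
<Enter>  <- accept default first sector
<Enter>  <- accept default last sector (whole disk)
t        <- change partition type
fd       <- Linux raid autodetect
w        <- write table and exit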

Add a New Disk

  • Copy the partition table from one disk to the new one
    # sfdisk -d /dev/sda | sfdisk --no-reread /dev/sdb --force
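    Verify that both partition tables now match:
    # fdisk -l /dev/sda /dev/sdb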
    
  • Add the new disk (sdb) to the RAID1 arrays (resync can be monitored as shown after this list)
# mdadm --add /dev/md0 /dev/sdb2
# mdadm --add /dev/md1 /dev/sdb5
# mdadm --add /dev/md2 /dev/sdb6
  • Remove a partition from the RAID1 (mark it as failed first)
# mdadm --fail /dev/md0 /dev/sdb2
# mdadm -r /dev/md0 /dev/sdb2
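
Resync progress after an --add can be watched in /proc/mdstat:
# cat /proc/mdstat
# watch -n 5 cat /proc/mdstat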

Remove RAID

  • Remove a failed disk from a RAID:
# mdadm --remove /dev/md0 /dev/sdb1
  • Wipe any leftover RAID metadata from a disk (e.g. when reusing a disk from an old RAID):
# mdadm --zero-superblock /dev/sdb1
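
To retire an entire array rather than a single disk, stop it first and then wipe each member's superblock (device names illustrative):
# mdadm --stop /dev/md0
# mdadm --zero-superblock /dev/sdb1
# mdadm --zero-superblock /dev/sdc1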
  • More info:

http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/

http://svn.debian.org/wsvn/pkg-mdadm/mdadm/trunk/debian/README.recipes?op=file&rev=0&sc=0

https://wiki.koumbit.net/RaidRecovery

http://danielpecos.com/wiki/Howto:_RAID_en_Linux

  • If disks keep dropping out of the array

This may be caused by smartd interacting with the kernel. http://kerneltrap.org/mailarchive/linux-scsi/2009/9/14/6409773
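
To check whether a dropped disk is actually failing, query its SMART status (assuming smartmontools is installed):
# smartctl -H /dev/sdb
# smartctl -a /dev/sdb | less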

  • Install GRUB on each disk (so that any of the disks can boot without things breaking)
    # grub
    Probing devices to guess BIOS drives. This may take a long time.
    [...]
    grub> device (hd0) /dev/sdb
    device (hd0) /dev/sdb
    grub> root (hd0,0)
    root (hd0,0)
     Filesystem type is ext2fs, partition type 0xfd
    grub> setup (hd0)
    setup (hd0)
     Checking if "/boot/grub/stage1" exists... yes
     Checking if "/boot/grub/stage2" exists... yes
     Checking if "/boot/grub/e2fs_stage1_5" exists... yes
     Running "embed /boot/grub/e2fs_stage1_5 (hd0)"...  16 sectors are embedded.
    succeeded
     Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,1)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
    Done.
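
    Repeat the same sequence for the other disk in the array (assuming /dev/sda here) so that either disk can boot on its own:
    grub> device (hd0) /dev/sda
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit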
    
  • Change the disk's position (slot number)
    # mdadm --detail /dev/md1
    Number   Major   Minor   RaidDevice   State
       0       0       0         0        removed
       1       8      17         1        active sync   /dev/sdb1

    The goal is to move the device /dev/sdb1 from slot 1 to slot 0. This can be done by running mdadm in grow mode on the degraded RAID 1 array:
    # mdadm --grow --force -n 1 /dev/md1
    # mdadm --detail /dev/md1
    Number   Major   Minor   RaidDevice   State
       0       8      17         0        active sync   /dev/sdb1

    # mdadm --grow -n 2 /dev/md1
    # mdadm --detail /dev/md1
    Number   Major   Minor   RaidDevice   State
       0       8      17         0        active sync   /dev/sdb1
       1       0       0         1        removed

    Then add a new device into md1:
    # mdadm /dev/md1 -a /dev/sda1
    mdadm: hot added /dev/sda1
    

http://piiis.blogspot.com/2009/03/change-slot-number-of-raid-1-device-by.html

Degraded Boot

If a "primary" disk of the array breaks and the "secondary" does not have GRUB2 installed, follow the instructions at https://help.ubuntu.com/community/Grub2#ChRoot . Careful: the LiveCD must be the same release and architecture.

ChRoot

This method of installation uses the chroot command to gain access to the broken system's files. Once the chroot command is issued, the LiveCD treats the broken system's / as its own. Commands run in a chroot environment will affect the broken system's filesystems and not those of the LiveCD.

   1. Boot to the LiveCD Desktop. The CD should be the same release and architecture (32/64 bit).
   2. Open a terminal - Applications, Accessories, Terminal.
   3. Only if the normal system partition(s) are on a software RAID (otherwise skip this step): make sure the mdadm tools are installed in the Live CD environment (e.g. by executing sudo apt-get install mdadm). Then assemble the arrays:

      sudo mdadm --assemble --scan

   4. Determine your normal system partition (the switch is a lowercase "L"):

      sudo fdisk -l

      If you aren't sure, run df -Th. Look for the correct disk size and ext3 or ext4 format.
   5. Mount your normal system partition:
          * Substitute the correct partition: sda1, sdb5, etc.

      sudo mount /dev/sdXX /mnt

          * Example 1: sudo mount /dev/sda1 /mnt
          * Example 2: sudo mount /dev/md1 /mnt
   6. Only if you have a separate boot partition (where sdYY is the /boot partition designation):

      sudo mount /dev/sdYY /mnt/boot

          * Example 1: sudo mount /dev/sdb6 /mnt/boot
          * Example 2: sudo mount /dev/md0 /mnt/boot
   7. Mount the critical virtual filesystems. Run the following as a single command:

      for i in /dev /dev/pts /proc /sys; do sudo mount -B $i /mnt$i; done

   8. Chroot into your normal system device:

      sudo chroot /mnt

   9. Only if (some) of the system partitions are on a software RAID (otherwise skip this step): make sure the output of mdadm --examine --scan agrees with the array definitions in /etc/mdadm/mdadm.conf.
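
      To compare the two inside the chroot:

      mdadm --examine --scan
      cat /etc/mdadm/mdadm.conf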
  10. If the file /boot/grub/grub.cfg does not exist or is not correct, (re)create it using

      update-grub
  11. Reinstall GRUB 2 (substitute the correct device with sda, sdb, etc. Do not specify a partition number):

      grub-install /dev/sdX

      If the system partitions are on a software RAID, install GRUB 2 on all disks in the RAID. Example (software RAID using /dev/sda and /dev/sdb):

      grub-install /dev/sda
      grub-install /dev/sdb

  12. Verify the install (use the correct device, for example sda. Do not specify a partition):

      grub-install --recheck /dev/sdX

      For a system on a software RAID, repeat this for all devices in the RAID.
  13. Exit chroot: CTRL-D on keyboard
  14. Unmount virtual filesystems. Run the following as a single command:

      for i in /sys /proc /dev/pts /dev; do sudo umount /mnt$i; done

  15. If you mounted a separate /boot partition:

      sudo umount /mnt/boot
  16. Unmount last device:

      sudo umount /mnt

  17. Reboot.

      sudo reboot