
I created a RAID1 device with mdadm on an EC2 instance. The mdadm version is v3.3.2:

/sbin/mdadm --create /dev/md1 --level=1 --raid-devices=2  /dev/xvdf /dev/xvdk

This is the output of mdstat:

cat /proc/mdstat 
Personalities : [raid1] 
md1 : active raid1 xvdk[1] xvdf[0]
      41594888 blocks super 1.2 [2/2] [UU]

This looks normal: the RAID1 device has the two expected member disks, xvdk and xvdf.

However, the members of the MD device show up as /dev/sd* in the "mdadm -D" output:

mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Fri Dec 11 06:29:50 2015
     Raid Level : raid1
     ...

    Number   Major   Minor   RaidDevice State
       0     202       82        0      active sync   /dev/sdf
       1     202      162        1      active sync   /dev/sdk

Then I found that these symlinks had been created automatically:

ll /dev/sd*
lrwxrwxrwx. 1 root root 4 Dec 11 06:29 /dev/sdf -> xvdf
lrwxrwxrwx. 1 root root 4 Dec 11 06:29 /dev/sdk -> xvdk

I guessed that this was done by mdadm. I had never seen this behavior before.

There is no need to rename the MD member devices; it only confuses people. How can I avoid this? Thanks a lot!


1 Answer


I solved this problem myself. On an EC2 instance there is a udev rule that automatically creates sd* symlinks for xvd drives:

$ cat /etc/udev/rules.d/99-ami-udev.rules
KERNEL=="xvd*", PROGRAM="/usr/sbin/ami-udev %k", SYMLINK+="%c"
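In this rule, `%k` is the kernel device name (e.g. `xvdf`) and `%c` is whatever the `PROGRAM` helper prints, which udev then adds as a symlink. Judging from the symlinks observed in the question, `ami-udev` simply maps `xvdN` to `sdN`; that assumed mapping can be reproduced by hand in a scratch directory (no root needed):

```shell
# Reproduce the effect of the udev rule in a scratch directory.
# Assumption: /usr/sbin/ami-udev maps "xvdf" -> "sdf", as the observed
# symlinks /dev/sdf -> xvdf suggest.
dir=$(mktemp -d)
kernel=xvdf                    # what %k would expand to
alias=sd${kernel#xvd}          # assumed ami-udev mapping: xvdf -> sdf
touch "$dir/$kernel"           # stand-in for the real block device node
ln -s "$kernel" "$dir/$alias"  # what SYMLINK+="%c" would create
readlink "$dir/$alias"         # -> xvdf
```

This is why `mdadm -D` reports `/dev/sdf` and `/dev/sdk`: both names resolve to the same major/minor numbers, and mdadm picks one of the matching device nodes when printing member names.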

After removing this rule, everything works as expected.
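Rather than deleting the rule file outright, renaming it is a reversible way to disable it, because udev only reads files whose names end in `.rules`. A sketch of the idea on a scratch copy of the rules directory (on the real system, substitute `/etc/udev/rules.d`, run with root privileges, and follow up with `udevadm control --reload-rules` so the change applies without a reboot):

```shell
# Demonstrate rename-to-disable on a scratch copy of the rules directory;
# on a real EC2 instance this directory would be /etc/udev/rules.d.
rules_dir=$(mktemp -d)
echo 'KERNEL=="xvd*", PROGRAM="/usr/sbin/ami-udev %k", SYMLINK+="%c"' \
    > "$rules_dir/99-ami-udev.rules"

# udev ignores files that do not end in ".rules", so this disables the
# rule while keeping it around in case it needs to be restored.
mv "$rules_dir/99-ami-udev.rules" "$rules_dir/99-ami-udev.rules.disabled"
ls "$rules_dir"
```

The existing `/dev/sdf` and `/dev/sdk` symlinks are not removed by disabling the rule; they disappear after the devices are re-triggered or the instance is rebooted.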

Answered 2016-06-20T04:23:53.080