Thomas Vogt’s IT Blog

knowledge is power …

Linux SAN Multipathing

There are several SAN multipathing solutions for Linux at the moment. Two of them are discussed in this blog. The first one is device mapper multipathing, a failover and load balancing solution with a lot of configuration options. The second one (mdadm multipathing) is just a failover solution that requires manually re-enabling a failed path. The advantage of mdadm multipathing is that it is very easy to configure.

Before using a multipathing solution in a production environment on Linux, it is also important to determine whether the chosen solution is supported with the hardware in use. For example, HP does not support the device mapper multipathing solution on their servers yet.

Device Mapper Multipathing

Procedure for configuring the system with DM-Multipath:

  1. Install device-mapper-multipath rpm
  2. Edit the multipath.conf configuration file:
    • comment out the default blacklist
    • change any of the existing defaults as needed
  3. Start the multipath daemons
  4. Create the multipath devices with the multipath command

Install Device Mapper Multipath

# rpm -ivh device-mapper-multipath-0.4.7-8.el5.i386.rpm
warning: device-mapper-multipath-0.4.7-8.el5.i386.rpm: Header V3 DSA signature:
Preparing...                ########################################### [100%]
1:device-mapper-multipath########################################### [100%]

Initial Configuration

Set user_friendly_names so that the devices will be created as /dev/mapper/mpath[n], and uncomment the blacklist.

# vim /etc/multipath.conf

#blacklist {
#        devnode "*"
#}

defaults {
        user_friendly_names yes
        path_grouping_policy multibus
}


Load the needed kernel module and enable the multipathd startup service.

# modprobe dm-multipath
# /etc/init.d/multipathd start
# chkconfig multipathd on

Create and print out the multipathed devices; -v2 gives a short summary, -v3 more verbose output for troubleshooting.

# multipath -v2
# multipath -v3


Configure the device type in the config file. The vendor and product strings can be read from sysfs:

# cat /sys/block/sda/device/vendor

# cat /sys/block/sda/device/model

# vim /etc/multipath.conf
devices {
        device {
                vendor                  "HP"
                product                 "HSV200"
                path_grouping_policy    multibus
                no_path_retry           "5"
        }
}

Configure the multipath device in the config file. The WWID can be looked up in the bindings file:

# cat /var/lib/multipath/bindings

# Format:
# alias wwid
mpath0 3600508b400070aac0000900000080000

# vim /etc/multipath.conf

multipaths {
        multipath {
                wwid                    3600508b400070aac0000900000080000
                alias                   mpath0
                path_grouping_policy    multibus
                path_checker            readsector0
                path_selector           "round-robin 0"
                failback                "5"
                rr_weight               priorities
                no_path_retry           "5"
        }
}

Put devices that should not be multipathed on the blacklist (e.g. local RAID devices, volume groups).

# vim /etc/multipath.conf

devnode_blacklist {
        devnode "^cciss!c[0-9]d[0-9]*"
        devnode "^vg*"
}
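Putting the fragments above together, the complete /etc/multipath.conf for this example would look roughly like the following. The vendor/product strings and the WWID are taken from the examples above; adjust them for your own hardware. (Newer multipath-tools versions spell the blacklist section `blacklist` instead of the deprecated `devnode_blacklist`.)

```
defaults {
        user_friendly_names yes
        path_grouping_policy multibus
}

blacklist {
        devnode "^cciss!c[0-9]d[0-9]*"
        devnode "^vg*"
}

devices {
        device {
                vendor                  "HP"
                product                 "HSV200"
                path_grouping_policy    multibus
                no_path_retry           "5"
        }
}

multipaths {
        multipath {
                wwid                    3600508b400070aac0000900000080000
                alias                   mpath0
                path_grouping_policy    multibus
                path_checker            readsector0
                path_selector           "round-robin 0"
                failback                "5"
                rr_weight               priorities
                no_path_retry           "5"
        }
}
```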

Show the configured multipaths.

# dmsetup ls --target=multipath
mpath0  (253, 1)

# multipath -ll

mpath0 (3600508b400070aac0000900000080000) dm-1 HP,HSV200
[size=10G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=4][active]
\_ 0:0:0:1 sda 8:0   [active][ready]
\_ 0:0:1:1 sdb 8:16  [active][ready]
\_ 1:0:0:1 sdc 8:32  [active][ready]
\_ 1:0:1:1 sdd 8:48  [active][ready]
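The -ll topology can also be post-processed with standard tools, for example to list only the underlying sd devices behind a map. The sketch below runs against a saved copy of the sample output above, so it works without a SAN attached:

```shell
# Parse a saved copy of the `multipath -ll` topology shown above and
# print only the underlying sd device names.
cat <<'EOF' > /tmp/mpath-ll.txt
mpath0 (3600508b400070aac0000900000080000) dm-1 HP,HSV200
[size=10G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=4][active]
\_ 0:0:0:1 sda 8:0   [active][ready]
\_ 0:0:1:1 sdb 8:16  [active][ready]
\_ 1:0:0:1 sdc 8:32  [active][ready]
\_ 1:0:1:1 sdd 8:48  [active][ready]
EOF
# Path lines look like "\_ H:C:T:L sdX major:minor [state][check]";
# field 2 is the H:C:T:L address, field 3 the sd device name.
awk '$2 ~ /^[0-9]+:/ { print $3 }' /tmp/mpath-ll.txt
```

This prints sda, sdb, sdc and sdd, one per line.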

Format and Mount the Device

Fdisk cannot be used with /dev/mapper/[dev_name] devices. Instead, use fdisk on one of the underlying disks, then run kpartx against the multipath map so that device-mapper creates a /dev/mapper/mpath[n]p[m] device for each partition.

# fdisk /dev/sda

# kpartx -a /dev/mapper/mpath0

# ls /dev/mapper/*
mpath0  mpath0p1

# mkfs.ext3 /dev/mapper/mpath0p1

# mount /dev/mapper/mpath0p1 /mnt/san

After that /dev/mapper/mpath0p1 is the first partition on the multipathed device.
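To make the mount persistent across reboots, an /etc/fstab entry referencing the mapper device can be added (a sketch, using the mount point from above):

```
# /etc/fstab
/dev/mapper/mpath0p1   /mnt/san   ext3   defaults   1 2
```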

Multipathing with mdadm on Linux

The md multipathing solution is only a failover solution, which means that only one path is used at a time and no load balancing is done.
Start the MD Multipathing Service

# chkconfig mdmpd on

# /etc/init.d/mdmpd start

On the first Node (if it is a shared device)
Make Label on Disk

# fdisk /dev/sdt
Disk /dev/sdt: 42.9 GB, 42949672960 bytes
64 heads, 32 sectors/track, 40960 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sdt1               1       40960    41943024   fd  Linux raid autodetect

# partprobe

Bind multiple paths together

# mdadm --create /dev/md4 --level=multipath --raid-devices=4 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1


# mdadm --detail /dev/md4
UUID : b13031b5:64c5868f:1e68b273:cb36724e

Set md configuration in config file

# vim /etc/mdadm.conf

# Multiple Paths to RAC SAN
DEVICE /dev/sd[qrst]1
ARRAY /dev/md4 uuid=b13031b5:64c5868f:1e68b273:cb36724e

# cat /proc/mdstat
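The uuid line in /etc/mdadm.conf can be derived from the --detail output. A small illustrative snippet (the UUID line from above is pasted in as a string, so it runs without the array present):

```shell
# Build the ARRAY line for /etc/mdadm.conf from the UUID reported by
# `mdadm --detail /dev/md4` (sample line embedded for illustration).
detail='UUID : b13031b5:64c5868f:1e68b273:cb36724e'
uuid=$(printf '%s\n' "$detail" | awk '{ print $3 }')
printf 'ARRAY /dev/md4 uuid=%s\n' "$uuid"
```

This prints the ARRAY line exactly as it appears in the config file above.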

On the second Node (Copy the /etc/mdadm.conf from the first node)

# mdadm -As

# cat /proc/mdstat

Restore a failed path

Mark the disk faulty (-f), remove it (-r), and re-add it (-a) in a single call:

# mdadm /dev/md4 -f /dev/sdt1 -r /dev/sdt1 -a /dev/sdt1

November 29, 2007 - Posted by | Linux, RedHat, SAN


  1. Hi Thomas, thanks for posting this article. I was looking for some way of testing multipathing on suse linux or redhat and Looks like dm-multipathing might do it. Thanks

    Comment by gaurav | May 23, 2008 | Reply

  2. Excellent Write up… Very helpful.
    Helped with my configuration.

    Thank you for posting.

    Comment by Retheesh Kumar | January 6, 2009 | Reply

  3. Hello Thomas,

    Many thanks for this article. It has just got my disks from an HP SAN configured nice and quickly.

    Interestingly, Red Hat’s own DM Multipathing document doesn’t cover some of the configuration steps you mention (e.g. use of kpartx, the /sys/block/sda/device/* files and the /var/lib/multipath/bindings file). Knowing where to look for this information saved me some valuable time.

    Thanks again,

    Comment by Colin Brett | September 8, 2009 | Reply

  4. Hi Thomas,

    Thank you very much for this brilliant article. Its very helpful.


    Comment by Zuraidi | November 19, 2009 | Reply

  5. […] あ、このへんに書いてあったね。 Linux SAN Multipathing ; Thomas Vogt ;s IT Blog […]

    Pingback by dm-multipathで後からLUNを追加する « | July 19, 2010 | Reply

  6. I’m using RHEL 5.5 and fdisk works fine with /dev/mapper devices.

    Comment by Brian Schonecker | September 10, 2010 | Reply

  7. […] Answers Anonymous Install RHEL with “linux mpath” option. Otherwise, you can follow the following: […]

    Pingback by Adding multipath to server and change lvm to use new devices | November 26, 2010 | Reply

  8. Hi,
    just a question: you are using ext3 as fs. is that not very risky? afaik it is not a fs recommended for nas systems (multi mount problems etc)

    Comment by sascha | January 27, 2011 | Reply

  9. Looks like you’ve got a couple of sites who have copied your content: and

    Comment by Sonia Hamilton | April 21, 2011 | Reply

  10. Hi Thomas,

    Great article.
    I’m using an EMC Clariion storage with Linux clusters, and tried to use EMC solution ( SolutionEnabler ) to map files/devices to storage devices.
    I found that the SolutionEnabler mapping/resolving utility doesn’t work on multi-path devices while having the user_friendly_names set to yes, and wonder if you can elaborate on the difference between setting the user_friendly_names to YES and setting it to NO, and what is the impact on the OS.

    Thank you

    Comment by Roye AVidor | January 21, 2013 | Reply

  11. Hello Thomas,
    Excellent Article,Thanks for posting such a nice Article

    Comment by Rajesh | April 14, 2013 | Reply


  13. Super post! Thanks a million

    Comment by Wambua | August 15, 2013 | Reply

  14. Thanks a lot for your post!

    Comment by bagya | August 21, 2013 | Reply

  15. […] Install RHEL with “linux mpath” option. Otherwise, you can follow the following: […]

    Pingback by Adding multipath to server and change lvm to use new devices - Just just easy answers | September 7, 2013 | Reply

  16. Hi Thomas,

    Nice posting. Just wanted to mention to you that I saw a very similar post at Check it out yourself if you have a moment.


    Comment by Jonathan | October 1, 2013 | Reply

  17. Good one thanks

    Comment by Ganga | May 29, 2014 | Reply

  18. Nice Post. Thank you so much, it was of great help

    Comment by RK | September 12, 2014 | Reply

  19. Very good information. Can you please give the information about the partition. It should be primary, extended or logical. Also please give the partition type. Is it GPT? Thank you.

    Comment by Devakumar | June 23, 2015 | Reply

  20. how to add additional storage in an existing setup. can you please provide all steps.Thanks

    Comment by ram | May 8, 2016 | Reply
