RDM Causes ESXi Host Boot Delay


We have a number of ESXi 5.0 servers running on identical IBM hardware. Some of them boot in about 2 minutes, while others need roughly 45 minutes to finish the boot process. Comparing the hosts, the only difference I could find is that the slow-booting hosts are configured with RDMs (Raw Device Mappings).

By examining /var/log/vmkernel.log on the hosts with RDMs, I found that ESXi rescans every storage device, including the RDM devices, on each reboot, and the rescan can take more than a minute per RDM device. In other words, the more RDMs you have, the longer the host takes to boot. In my case the host is configured with 49 RDM LUNs, so it takes about 45 minutes to boot.
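As a quick diagnostic, the sketch below lists the device IDs that are not yet marked perennially reserved. The parsing is an assumption based on the layout of `esxcli storage core device list` output (device ID header line followed by indented fields), and on a non-ESXi machine it falls back to a canned, hypothetical sample so the pipeline still runs:

```shell
# Hedged sketch: list devices whose perennially-reserved flag is false.
# On a machine without esxcli, use a canned (hypothetical) sample instead.
if command -v esxcli >/dev/null 2>&1; then
  DEVICE_LIST=$(esxcli storage core device list)
else
  DEVICE_LIST="naa.60050768028081713c00000000000021
   Display Name: Example RDM LUN
   Is Perennially Reserved: false"
fi

# A device section starts with its naa ID at column 0; remember that ID
# and print it whenever the perennially-reserved flag reads false.
NOT_RESERVED=$(printf '%s\n' "$DEVICE_LIST" |
  awk '/^naa\./ {dev=$1} /Is Perennially Reserved: false/ {print dev}')
printf '%s\n' "$NOT_RESERVED"
```

Each ID printed here is a candidate for the fix described below.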

The solution to speed up the boot process is to mark those RDM LUNs as perennially reserved by running the following command on the ESXi host. After applying this, the boot time dropped dramatically to about 3 minutes, which is a big improvement!

esxcli storage core device setconfig -d naa.<Storage ID> --perennially-reserved=true

Sample: esxcli storage core device setconfig -d naa.60050768028081713c00000000000021 --perennially-reserved=true
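With 49 LUNs, running the command by hand for each device gets tedious. A minimal loop sketch follows; the first device ID is the sample from above, the second is hypothetical, and the script falls back to simply echoing the commands (a dry run) when `esxcli` is not available:

```shell
#!/bin/sh
# Sketch: mark each RDM LUN as perennially reserved in one pass.
# First ID is the sample from the post; the second is hypothetical.
RDM_DEVICES="naa.60050768028081713c00000000000021
naa.60050768028081713c00000000000022"

# Off an ESXi host, fall back to a dry run that prints the commands.
if command -v esxcli >/dev/null 2>&1; then
  ESXCLI=esxcli
else
  ESXCLI="echo esxcli"
fi

MARKED=0
for dev in $RDM_DEVICES; do
  $ESXCLI storage core device setconfig -d "$dev" --perennially-reserved=true
  MARKED=$((MARKED + 1))
done
echo "Marked $MARKED device(s) as perennially reserved"
```

The setting persists across reboots; afterwards, `esxcli storage core device list -d naa.<Storage ID>` should report the device as perennially reserved.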
