
Thursday 30 May 2013

Getting an IBM M1015 RAID Controller to work as a datastore with VMware ESXi 5.1 U1

An IBM M1015 SAS/SATA 6Gbps controller with 8 ports (two SFF-8087 SAS connectors)

I recently upgraded my ESXi rig with a few new shiny bits and pieces:

Asus P9X79 WS motherboard
Intel E5-2620 2.0GHz Hex Core CPU

After setting it all up, I noticed my disk I/O was pretty crappy, as all I had in there was a single old 1.5TB Seagate Barracuda SATA drive as a datastore. Now that the CPU and mobo were much faster, running 6 or 7 VMs on a single drive meant the disk side of things really began to struggle.

So I did a bit of investigation and ended up buying an IBM M1015 RAID controller on eBay. It has 8 x 6Gbps SATA/SAS ports and supports RAID 0, 1, and 10. It is a rebadged LSI 9240-8i and is pretty much identical.

My goal is to run a RAID 0 stripe in ESXI as a datastore, in order to improve IOPS. I am not really concerned about redundancy or pass-through at this point.

You can upgrade the IBM M1015 with a special hardware key to unlock RAID 5, but I would not recommend it, as this controller in RAID 5 has rubbish write speeds of around 20MB/sec! But in RAID 0, which is what I intend to use it for, it just flies! I have read reports of people getting 2GB/sec read speeds using 4 x SSDs. Not bad :)
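If you want to sanity-check sequential write speed yourself, a quick and dirty dd run will do. This is just a sketch: on ESXi you would run it from the ESXi Shell and point TARGET at your datastore path - the datastore1 name below is an example, not something from this post.

```shell
# Point TARGET at the datastore to test, e.g. /vmfs/volumes/datastore1
TARGET=${TARGET:-/tmp}

# Write 64MB of zeros and let dd report the throughput. Bump count for a
# longer, more representative run - small writes mostly hit the cache.
dd if=/dev/zero of="$TARGET/ddtest" bs=1M count=64

# Clean up the test file.
rm "$TARGET/ddtest"
```

It's crude (sequential-only, and writing zeros), but it is enough to show the difference between a single Barracuda and a RAID 0 stripe.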

So my system is currently set up like so:

P9X79 WS mobo with latest 3401 BIOS (at the time of writing)
IBM M1015 flashed with latest LSI 9240-8i firmware and BIOS (20.11.1-0137 a.k.a 4.10 P2)
ESXi 5.1 Update 1

Initially, when I plugged in the M1015 controller and booted ESXi, it gave me the dreaded hang at the point of loading the megaraid_sas driver. See this post for more detail:

So then I started looking into cross-flashing the M1015 into an LSI 9211-8i. The 9211-8i is pretty much the same thing as the M1015, as it uses the same SAS2008 chipset, but the support in ESXi is far better, as it uses a different driver (mpt2sas) rather than megaraid_sas.

To cut a very long story short, I could only flash the M1015 into a 9211-8i in IT mode. IT mode basically turns the card into a plain HBA, i.e. a SATA/SAS controller with 8 ports. So no RAID :( And that's not what I wanted.

For some reason I could not flash the M1015 into 9211-8i IR mode, which is a basic RAID controller. It kept throwing up the "Chip in RESET state" error, despite trying about 6 different computers and a combination of DOS and UEFI shells.

If you are interested in going down this route, then please see this post:
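For reference, the IT-mode cross-flash that did work for me follows the commonly circulated procedure. The filenames below (sbrempty.bin, 2118it.bin, mptsas2.rom) are the ones from the widely shared flashing packages, not from the linked post, so treat them as assumptions and substitute whatever your package provides:

```shell
megarec -writesbr 0 sbrempty.bin           # blank the SBR so the card forgets it is an M1015
megarec -cleanflash 0                      # wipe the existing firmware
# <reboot, back to the USB stick>
sas2flsh -o -f 2118it.bin -b mptsas2.rom   # flash 9211-8i IT firmware plus boot BIOS
sas2flsh -o -sasadd 500605bxxxxxxxxx       # restore the SAS address from the sticker on the card
```

These are DOS/UEFI shell commands run from a bootable USB stick, not something you run from an installed OS.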

So I ended up rolling back to the original IBM M1015, using the latest version of the LSI MegaRAID 9240-8i firmware. Remember, the M1015 is a rebadged 9240-8i and is identical - even the firmware. So it works just fine.

I followed the procedure of the above post, but used the latest firmware from LSI.

Convert LSI 9211-IT/IR back to LSI 9240 (IBM M1015)
Type in the following exactly:
Megarec -cleanflash 0            (wipe the existing flash)
Megarec -writesbr 0 sbrm1015.bin (restore the M1015 SBR)
<reboot, back to USB stick>
Megarec -m0flash 0 0061_lsi.rom  (flash the firmware - 0061_lsi.rom is the latest from LSI; two IBM ROMs are also included, just change the filename)

So I was still stuck between a rock and a hard place: I couldn't flash to 9211-8i IR mode, and I couldn't seem to get ESXi to accept the card in its native M1015 (9240-8i) mode.

I found a number of posts talking about changing the "PCI Compatibility ROM" option from Legacy ROM to EFI Compatible ROM. The problem is that my motherboard (P9X79 WS) doesn't have this option (well, BIOS version 3401 doesn't have it, anyway).

Eventually I managed to figure out how to get the M1015 working in its native mode!

On the Boot tab of the Asus P9X79 WS BIOS there is a CSM (Compatibility Support Module) section:

My settings are as follows:

Launch CSM - Enabled
Boot Device Control - UEFI and Legacy (you need Legacy to boot ESXi)
Boot from Storage Devices - UEFI or Legacy (This depends! - And I will explain why)

NOTE: Throughout this procedure I always had a separate SATA drive as a boot drive for ESXi. I have not tried booting from the RAID array itself.

In order to initially configure your RAID array using the card's built-in WebBIOS (Ctrl+H) when starting your machine, you need to have "Boot from Storage Devices" set to Legacy only.

Once you have configured the RAID array, you can then enter your mobo BIOS again and change "Boot from Storage Devices" to UEFI. This step is very important, as it is what allows ESXi to get past the megaraid_sas hang! For some reason ESXi needs it set up like this.

Also, you may have issues getting into the WebBIOS in the first place to configure your array. Besides setting "Boot from Storage Devices" to Legacy only, you may have to do the following:

1. Machine boots up
2. Press CTRL+H to enter RAID array WebBIOS
3. Press F8 (for your boot menu) as soon as you see the normal machine BIOS screen
4. Select boot from the RAID device (can't remember the exact wording)
Only then will it boot into the RAID array WebBIOS. Crappy, I know, but it seems to be the only way in, at least on my motherboard.

Also optional, but nice to have: I downloaded the latest ESXi MegaRAID driver from LSI, SCP'd it into the /tmp folder on my ESXi box, then SSH'd* back into the box while it was in maintenance mode and ran:

esxcli software vib install -v /tmp/scsi-megaraid-sas-6.506.51.00.1vmw-1vmw.500.0.0.472560.x86_64.vib --no-sig-check
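Once the install reports success, it is worth verifying before you reboot. A couple of checks I would suggest, run over the same SSH session (standard esxcli commands, though the exact names shown in the output will differ on your box):

```shell
# List installed VIBs and confirm the megaraid driver package is there:
esxcli software vib list | grep -i megaraid

# Show each storage adapter (vmhba) and which driver it is bound to:
esxcli storage core adapter list

# Reboot the host so the new driver takes effect:
reboot
```

After the reboot, the M1015's vmhba should show up bound to the megaraid_sas driver and the RAID 0 volume should be available to format as a datastore.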

*If you need to enable SSH access on your ESXi box (via the vSphere Client):

1. Go to Home, Inventory, Hosts & Clusters
2. Highlight your ESXI host.
3. Choose the Configuration tab in the main window.
4. Choose Security Profile down the left hand side.
5. Choose Properties on the upper right corner of the main window.
6. Select SSH from the menu and choose Options.
7. Use the Service Commands button to Start or Stop the SSH process.
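Alternatively, if you are at the host's console with the local ESXi Shell available, SSH can be toggled from the command line instead - a quick sketch using the stock vim-cmd tooling:

```shell
# Enable and start the SSH (TSM-SSH) service on the host:
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# To stop it again afterwards:
# vim-cmd hostsvc/stop_ssh
```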

Hopefully this will help you get your M1015 working within ESXi :) Mine is purring away nicely with 4 x 750GB Barracuda drives.
