This is really starting to do my head in.
My DL380 G6 has 3x RAID controllers:
- 1x P410i with 256MB cache (no battery).
- 1x P212 with 256MB cache (no battery).
- 1x P800.
The P410i came with firmware version 5.70, the P212 with v3.xx, and the P800 is still on v7.xx.
The P410i links to the main drive cage in the server, the P212 to a generic SAS/SATA drive unit, and the P800 to an MSA60.
The main reason for the firmware updates on the P410i and P212 was that the P212 couldn't see 3TB+ disks.
Since the upgrade, the P410i has 0x14 lockups in ESXi ("1719-Slot 0 Drive Array - A controller failure event occurred prior to this power-up. (Previous lock up code = 0x14)"), but it seems fine in SmartStart and in a Linux live boot (which I used to update the firmware from).
ESXi starts up and loads fine when the P410i's drives are pulled, though obviously it doesn't load the VMs on them (1x 4x146GB SAS as RAID 10, 1x 4x500GB SATA as RAID 10).
In desperation I took the server home to at least try to recover the data, and on my workbench (with the P212 and P800 disconnected) it booted fine. I started a very cumbersome overnight backup with SCP, but at 500KB/s it didn't get very far.
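To put "didn't get very far" in perspective, here's a back-of-the-envelope estimate of the full backup time at that rate. This is a rough sketch that assumes the arrays are close to full and that RAID 10 usable capacity is half the raw disk total:

```python
# Rough estimate: how long would an SCP backup take at 500 KB/s?
# Assumes RAID 10 usable capacity = half the raw capacity, arrays near full.

def transfer_days(gigabytes: float, rate_kbps: float = 500.0) -> float:
    """Days needed to move `gigabytes` GB at `rate_kbps` kilobytes/second."""
    seconds = gigabytes * 1e9 / (rate_kbps * 1e3)
    return seconds / 86400

# 4x146GB RAID 10 -> ~292GB usable; 4x500GB RAID 10 -> ~1000GB usable
total_gb = 2 * 146 + 2 * 500
print(f"{total_gb} GB at 500 KB/s ~ {transfer_days(total_gb):.0f} days")  # ~30 days
```

So at that speed a full copy would take roughly a month, which is why I gave up on letting it run.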
Since it ran well at home, I took it back and plugged it in at the datacentre, and I'm back to square one.
Does anyone have any ideas what I can do or what to try? Because ESXi tries to access the datastores on the P410i, an SSH session hangs whenever I try to access the datastore, and I have to physically reset the server to regain access as ESXi won't shut down or reboot.
I've even disabled the Array Accelerator option to see if that helps. Datastores on the P410i are VMFS6 and on the P800 VMFS5 (due to GPT vs MBR compatibility).