ZFS raw mappings with LSI: can't see disks to "raw map"

janhoedt
Hi,

I'm trying to set up Oracle Solaris in a VM, but I can't see the SSDs behind the LSI controller, so I cannot map them either.

esxcfg-mpath -l shows all my disks EXCEPT the ones behind the LSI (the 2 OCZ 60 GB disks and the 3 SATA 250 GB disks show up, but not the Kingston 120 GB SSDs, which are in the Sharkoon enclosure on the LSI).
Please advise.
J.
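For reference, the enumeration step can be sketched with the standard ESXi shell storage tools (a minimal sketch; device listings will of course vary per host):

```shell
# List all logical SCSI devices ESXi can see, with device names and sizes
esxcfg-scsidevs -l

# Compact one-line-per-device listing, useful for spotting missing disks
esxcfg-scsidevs -c

# Brief multipath listing (compact form of what `esxcfg-mpath -l` reports)
esxcfg-mpath -b
```

Any disk absent from all of these is invisible to the VMkernel, regardless of what the GUI shows.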
DavidPresident
Top Expert 2010

Commented:
Specifically what controller is this?

Author

Commented:
LSI SAS 9211-4i HBA
Low-profile, four-port internal 6Gb/s SATA+SAS HBA with PCIe 2.0 host interface
Andrew Hancock (VMware vExpert / EE Fellow)VMware and Virtualization Consultant
Fellow 2018
Expert of the Year 2017

Commented:
So you cannot detect the disks attached to the LSI controller in ESXi?

Are you operating in RAID mode?

You've not created a RAID array?
DavidPresident
Top Expert 2010

Commented:
That controller has full pass-through support.  Just tell it to configure the disks as JBOD drives.  Do not configure any RAID devices (even though it will let you).
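Once the disks do show up as JBOD devices in the ESXi shell, a physical raw device mapping can be sketched roughly like this (the naa identifier and datastore path are placeholders, not values from this thread):

```shell
# Find the device identifier (naa.* / t10.*) of the SSD to map
esxcfg-scsidevs -l

# Create a physical-mode RDM pointer file on an existing datastore
# (-z = physical/pass-through RDM; -r would create a virtual-mode RDM instead)
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX \
  /vmfs/volumes/datastore1/solaris-vm/ssd-rdm.vmdk
```

The resulting .vmdk pointer can then be attached to the Solaris VM as an existing disk.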
Andrew Hancock (VMware vExpert / EE Fellow)VMware and Virtualization Consultant
Fellow 2018
Expert of the Year 2017

Commented:
Also remember that what you are trying to do is not supported in VMware ESXi; it's a fudge/hack, for want of a better word, that it works at all!

Author

Commented:
?? Pass-through doesn't work on the HP MicroServer; I posted a question for that.
The mapping isn't supported, I know, but that doesn't mean it shouldn't work?

Author

Commented:
I didn't configure the LSI in any way.

Author

Commented:
Please see here for the pass-through question: http://mobile.experts-exchange.com/Software/VMWare/Q_27976187.html
Andrew Hancock (VMware vExpert / EE Fellow)VMware and Virtualization Consultant
Fellow 2018
Expert of the Year 2017

Commented:
I could be wrong but I do not think @dlethe is referring to VM Direct Path I/O.

Author

Commented:
What would be the difference? I checked the settings and don't see any way to configure a VM with the drives behind the LSI.

Author

Commented:
Could somebody please clarify?

Author

Commented:
Do I need to configure the controller? ESXi/vCenter can see the SSDs, so I don't see the reason why. But then again, via SSH/command line I can't see them.
DavidPresident
Top Expert 2010

Commented:
Yes, you just have too much going on, and need to step back a bit.  Make a bootable Linux USB stick (plain, vanilla Linux) ... go to the ubuntu.com home page and it walks you through the process.

Then, with it booted to Ubuntu, make sure that Ubuntu sees all the devices.  If not, your problem has nothing to do with VMware, and you can diagnose what could very well be a simple cabling or power issue.
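Writing the installer image to a stick can be sketched with dd from any Linux machine (the ISO filename and /dev/sdX target below are placeholders; double-check the target device before writing, since this destroys its contents):

```shell
# WARNING: /dev/sdX is a placeholder -- writing to the wrong device destroys data.
# conv=fsync forces the data to the stick before dd exits;
# status=progress (GNU dd) shows write progress.
sudo dd if=ubuntu-server-amd64.iso of=/dev/sdX bs=4M conv=fsync status=progress
sync
```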

Author

Commented:
?? Cabling or power?? I can perfectly boot into ESXi and see all disks + the LSI controller! Thanks, but that test would not be relevant.
DavidPresident
Top Expert 2010

Commented:
If you could "perfectly boot into ESXi and see all the disks" ... then you wouldn't be posting a question.  So again, if you boot to Linux, does the Linux kernel see all of the individual disks, and can you read/write to ALL of them ... from Linux?

Author

Commented:
Thanks, but I just don't see the logic behind it. Yes, the GUI/vCenter sees all the disks, and via SSH it does not; but only the ones behind the LSI are missing. So if I boot into another Linux and it shows me all the disks, what would that make me wiser? There is nothing I can change on my current config. Please clarify that.
DavidPresident
Top Expert 2010

Commented:
Because "seeing" all of the disks from ESXi is not the same as being able to read/write from them when booted to plain, vanilla LINUX.  IF you can NOT read/write to all of the disks (use dd) from LINUX, then you know your problem is NOT specific to ESXi, and it is much easier to fix.

In your case, you learn something if it does not work, not if it does work.
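The dd read/write check described above can be sketched like this. DISK defaults to a scratch file here so the sketch is safe to run anywhere; on the real box you would point it at the actual block device (e.g. /dev/sdb, a hypothetical name for one of the Kingston SSDs) and skip the write pass unless the disk holds no data:

```shell
#!/bin/sh
# Stand-in target; replace with the real block device, e.g. DISK=/dev/sdb
DISK=${DISK:-./scratch-disk.img}

# Write test: push 16 MiB of zeros to the target
dd if=/dev/zero of="$DISK" bs=1M count=16 2>/dev/null

# Read test: pull it back and discard. A failure here means the kernel
# cannot actually read the device, whatever the GUI claims to "see".
dd if="$DISK" of=/dev/null bs=1M 2>/dev/null && echo "read/write OK"
```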

Author

Commented:
OK, but what could I do then? I'd better do that now and save time. All cabling is OK, all SSDs are OK. Yes, I can write to the SSDs via ESXi, since I can add them as datastores and add VMs. The SSDs also worked via SATA2 before. If any RAID change could help, I'd better try that now. I still don't see the added value of booting into another Linux.
DavidPresident
Top Expert 2010

Commented:
"... any raid change".   Are these not all individual disks so all of them are exposed to whatever O/S you are booting??

Author

Commented:
Yes, but it worked correctly without the lsi, so lsi should be root cause.
DavidPresident
Top Expert 2010

Commented:
The LSI controller has an expander, and it does a protocol conversion so your SATA SSDs emulate SAS devices.  So when you say it "worked correctly without the LSI", then if those SSDs were not attached to the LSI, then you made a fundamental configuration change.

Your system may have another expander, I do not know, you never got into specifics on the hardware.  But expanders are not equal.  Some have *horrible* emulation and will never work properly with ESXi, or even LINUX (But they will work on Windows).  

So, an easy test: if I understand the problem, the direct-attached SSDs work, and those behind the LSI do not.   That being the case, move the SSDs around so that the ones that formerly worked are attached to the LSI.   If they stop working, then your problem is a crappy expander, and there is nothing you can do about it, except hope there is an upgrade for it that works, or you can get your money back.
Andrew Hancock (VMware vExpert / EE Fellow)VMware and Virtualization Consultant
Fellow 2018
Expert of the Year 2017

Commented:
I would dump the idea of using Solaris in a VM; install directly on the hardware.

Author

Commented:
OK, but I cannot make that work. Booting from USB is stuck on "grub".
DavidPresident
Top Expert 2010

Commented:
"i would dump the idea of using Solaris inma VM, install direct on the hardware."

Same here ... the idea is nuts, as ESXi adds no value, and actually hurts functionality, performance, and flexibility.  Solaris already gives you robust things like online LUN expansion, hot snapshots, clustering, mirrored boot, ...  

Solaris is quite popular with ISPs and cloud providers, and they do not make the mistake of virtualizing Solaris.

Author

Commented:
OK, but I cannot make that work. Booting from USB is stuck on "grub" (see other post).
Did anybody already try booting from a USB stick to install? Did it work?
DavidPresident
Top Expert 2010

Commented:
I can boot from USB sticks all day long on any computer. It never fails, unless there is a hardware problem, which is the entire point of the exercise.  If a system fails to boot, remove components until it does, starting with turning off power to peripherals.   Such a symptom is probably root cause for the headache to begin with.

Did you go to ubuntu.com and use the technique and software on their home page?   Be sure to use the 64-bit server version as it will certainly have drivers.

P.S. What "other" post?  Got a link?
Commented:
Booting from an external USB DVD device did the trick!
Commented:
Note: booting from USB works (for ESXi) but not for Solaris. It's not hardware; it's the USB stick with Solaris on it. But never mind now.

Author

Commented:
Root cause is the (USB stick with) Solaris; workaround = booting with CD/DVD and Nexenta (booting a DVD with Solaris didn't work either).
