ZFS raw mappings: can't see the disks behind the LSI to "raw map"

Hi,

I'm trying to set up Oracle Solaris in a VM, but I can't see the SSDs behind the LSI controller, so I cannot map them either.

esxcfg-mpath -l shows all my disks EXCEPT the ones behind the LSI: it lists the 2 OCZ 60 GB disks and the 3 SATA 250 GB disks, but not the Kingston 120 GB SSDs that sit in the Sharkoon enclosure on the LSI.

Please advise.
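For anyone hitting the same wall, these are the usual ESXi shell commands for checking what the host actually enumerates per adapter; they are a diagnostic sketch, and the adapter/device names in the output will of course differ per system.

```shell
# List the storage adapters ESXi has claimed (the LSI should appear here,
# typically bound to the mpt2sas driver)
esxcfg-scsidevs -a

# Compact list of every SCSI device the host sees, with sizes and models
esxcfg-scsidevs -c

# Map each device to the adapter/path it sits behind
esxcfg-mpath -b
```

If the LSI shows up under `esxcfg-scsidevs -a` but its disks are absent from `esxcfg-scsidevs -c`, the controller is detected while the devices behind it are not, which narrows the problem to the HBA's firmware mode or the enclosure.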
J.
janhoedtAsked:
 
janhoedtAuthor Commented:
Booting from an external USB DVD device did the trick!
 
DavidPresidentCommented:
Specifically what controller is this?
 
janhoedtAuthor Commented:
LSI SAS 9211-4i HBA
Low-profile, four-port internal 6Gb/s SATA+SAS HBA with PCIe 2.0 host interface

 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
So the disks attached to the LSI controller cannot be detected in ESXi?

Are you operating in RAID mode?

Have you created a RAID array?
 
DavidPresidentCommented:
That controller has full pass-through support.  Just tell it to configure n JBOD disk drives.  Do not configure any RAID devices (even though it will let you).
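One common way to get that full pass-through behaviour on a 9211-series card is to make sure it runs the IT (initiator-target) firmware rather than the IR (RAID) firmware. A rough sketch with LSI's sas2flash utility follows; the image filenames are placeholders for the firmware/BIOS files downloaded from the vendor's support site, so verify them against your exact model before flashing.

```shell
# Show installed LSI SAS2 controllers and whether they run IR or IT firmware
sas2flash -listall

# Flash the IT-mode firmware so every disk is passed through as plain JBOD
# (2118it.bin and mptsas2.rom are placeholder image names -- use the files
#  supplied for your specific board revision)
sas2flash -o -f 2118it.bin -b mptsas2.rom
```

Flashing firmware carries real risk; this is only a sketch of the procedure, not a substitute for the vendor's instructions.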
 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
Also remember that what you are trying to do is not supported in VMware ESXi; it's a fudge/hack, for want of a better word, that it works at all!
 
janhoedtAuthor Commented:
?? Pass-through doesn't work on the HP MicroServer; I posted a separate question about that.
Not supported, the mapping, I know, but that doesn't mean it shouldn't work?
 
janhoedtAuthor Commented:
I didn't configure the LSI in any way.
 
janhoedtAuthor Commented:
Please see here for the pass-through question: http://mobile.experts-exchange.com/Software/VMWare/Q_27976187.html
 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
I could be wrong but I do not think @dlethe is referring to VM Direct Path I/O.
 
janhoedtAuthor Commented:
What would be the difference? I checked the settings and don't see any way to configure a VM with the drives behind the LSI.
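For reference, the "raw map" itself is normally created from the ESXi shell with vmkfstools rather than from the client UI. A minimal sketch, where the naa identifier and datastore paths are purely illustrative:

```shell
# Find the device identifier (naa.*) of the SSD you want to map
esxcfg-scsidevs -l | grep -i naa

# Create a physical-mode raw device mapping pointer file on an existing
# datastore (all IDs and paths below are examples only)
vmkfstools -z /vmfs/devices/disks/naa.600605b001234567 \
  /vmfs/volumes/datastore1/solaris/ssd1-rdm.vmdk
```

The resulting .vmdk pointer is then attached to the VM as an existing disk. Note this only works for devices ESXi itself enumerates, which is exactly why the missing SSDs block the mapping here.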
 
janhoedtAuthor Commented:
Could somebody please clarify?
 
janhoedtAuthor Commented:
Do I need to configure the controller? ESXi/vCenter can see the SSDs, so I don't see a reason why. But then again, via SSH/command line I can't see them.
 
DavidPresidentCommented:
Yes, you just have too much going on, and need to step back a bit.  Make a bootable LINUX usb stick (plain, vanilla LINUX) ... go to ubuntu.com home page and it walks you through the process.

Then with it booted to Ubuntu, make sure that Ubuntu sees all the devices.  If not, your problem has nothing to do with VMWARE, and you can diagnose what could very well be simple cabling or power issue.
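The check David describes boils down to a couple of commands from the Ubuntu live session; a sketch, with the driver name assuming an SAS2008-based card like the 9211-4i:

```shell
# List every block device the kernel enumerated, with size and model --
# the Kingston SSDs should appear here if the LSI path is healthy
lsblk -o NAME,SIZE,MODEL

# Check whether the mpt2sas driver bound to the LSI HBA and attached disks
dmesg | grep -i -e mpt2sas -e "attached scsi"
```

If the SSDs are missing even here, the problem sits below any hypervisor: firmware, expander, cabling, or power.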
 
janhoedtAuthor Commented:
?? Cabling or power?? I can boot into ESXi perfectly and see all disks + the LSI controller! Thanks, but that test would not be relevant.
 
DavidPresidentCommented:
If you could "perfectly boot into ESXI and see all the disks" ... then you wouldn't be posting a question.  So again, if you boot to LINUX, does the linux kernel see all of the individual disks and can you read/write to ALL of them ... from LINUX.
 
janhoedtAuthor Commented:
Thanks, but I just don't see the logic behind it. Yes, the GUI/vCenter sees all the disks and via SSH it does not, but only the ones behind the LSI. So if I booted into another Linux and it showed me all the disks, how would that make me wiser? There is nothing I can change in my current config. Please clarify that.
 
DavidPresidentCommented:
Because "seeing" all of the disks from ESXi is not the same as being able to read/write from them when booted to plain, vanilla LINUX.  IF you can NOT read/write to all of the disks (use dd) from LINUX, then you know your problem is NOT specific to ESXi, and it is much easier to fix.

In your case, you learn something if it does not work, not if it does work.
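The dd check suggested above might look like the following from the Linux live session; the device name is a placeholder for whatever lsblk reports for a Kingston SSD, and the write test is destructive, so only run it on a disk with no data you care about.

```shell
# Read-only sanity test: pull 1 GiB off the disk and discard it
# (replace /dev/sdb with the actual device from lsblk)
dd if=/dev/sdb of=/dev/null bs=1M count=1024 status=progress

# DESTRUCTIVE write test -- wipes the start of the disk
dd if=/dev/zero of=/dev/sdb bs=1M count=256 oflag=direct status=progress
```

If either command stalls or errors only on the disks behind the LSI, that isolates the controller/expander path regardless of anything ESXi does.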
 
janhoedtAuthor Commented:
Ok, but what could I do then? I'd better do that now and save time. All cabling is ok, all SSDs are ok. Yes, I can write to the SSDs via ESXi, since I can add them as datastores and add VMs. The SSDs also worked via SATA2 before. If any RAID change could help, I'd better try that now. I still don't see the added value of booting into another Linux.
 
DavidPresidentCommented:
"... any raid change".   Are these not all individual disks so all of them are exposed to whatever O/S you are booting??
 
janhoedtAuthor Commented:
Yes, but it worked correctly without the LSI, so the LSI should be the root cause.
 
DavidPresidentCommented:
The LSI controller has an expander, and it does a protocol conversion so your SATA SSDs emulate SAS devices.  So when you say it "worked correctly without the LSI", then if those SSDs were not attached to the LSI, then you made a fundamental configuration change.

Your system may have another expander, I do not know, you never got into specifics on the hardware.  But expanders are not equal.  Some have *horrible* emulation and will never work properly with ESXi, or even LINUX (But they will work on Windows).  

So easy test .. If I understand the problem, you say that SSDs work behind the LSI, and those that aren't direct-attached to the LSI do not work.    That being the case, move SSDs around so that ones that formerly did not work are attached to the LSI.   If they work ... then your problem is a crappy expander, and there is nothing you can do about it, except hope there is an upgrade for it that works, or you can get your money back.
 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
I would dump the idea of using Solaris in a VM; install it directly on the hardware.
 
janhoedtAuthor Commented:
Ok, but I cannot make that work. Boot from USB is stuck on "grub".
 
DavidPresidentCommented:
"I would dump the idea of using Solaris in a VM; install it directly on the hardware."

Same here ... the idea is nuts, as ESXi adds no value, and actually hurts functionality, performance, and flexibility.  Solaris already gives you robust things like online LUN expansion, hot snapshots, clustering, mirrored boot, ...  

Solaris is quite popular with ISPs and cloud providers, and they do not make the mistake to virtualize Solaris.
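To make the point concrete, the features mentioned above are a few commands on bare-metal Solaris; a sketch with illustrative pool, dataset, and Solaris-style device names:

```shell
# Mirrored pool across two whole disks (device names are examples)
zpool create tank mirror c0t0d0 c0t1d0

# Create a dataset, then take a hot snapshot and clone it instantly --
# no hypervisor layer required for any of this
zfs create tank/data
zfs snapshot tank/data@before-upgrade
zfs clone tank/data@before-upgrade tank/data-test
```

Running Solaris directly also lets ZFS talk to the raw disks, which it strongly prefers for its end-to-end checksumming and cache management.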
 
janhoedtAuthor Commented:
Ok, but I cannot make that work. Boot from USB is stuck on "grub" (see other post).
Did anybody already try to boot from a USB stick to install? Did it work?
 
DavidPresidentCommented:
I can boot from USB sticks all day long on any computer. It never fails, unless there is a hardware problem, which is the entire point of the exercise.  If a system fails to boot, remove components until it does, starting with turning off power to peripherals.   Such a symptom is probably root cause for the headache to begin with.

Did you go to ubuntu.com and use the technique and software on their home page?   Be sure to use the 64-bit server version as it will certainly have drivers.

P.S. What "other" post?  Got a link?
 
janhoedtAuthor Commented:
Note: booting from USB works (for ESXi) but not for Solaris. It's not hardware, it's the USB stick with Solaris on it, but never mind now.
 
janhoedtAuthor Commented:
Root cause is the (USB stick with) Solaris; workaround = booting with CD/DVD and Nexenta (booting a DVD with Solaris didn't work either).
Question has a verified solution.
