Xangati not picking up datastores to monitor


Urgent help on Xangati - it is not picking up datastores to monitor. Odd one, maybe a bug, but I can't performance-monitor the hybrid volumes attached to vCenter 5.1 - the only datastore being picked up is the SATA datastore. I can't drill down into any other datastore at any granular level.

NetApp cluster-mode release 8.3 - Xangati is only monitoring the one (least-used) SATA datastore. vCenter 5.1 - could be an ESX or vCenter bug.

Hopefully Andrew catches this question :)

Andrew Hancock (VMware vExpert / EE Fellow)VMware and Virtualization ConsultantCommented:
So Xangati is not detecting any shared network storage?

Just local storage?

And I assume that all these shared storage volumes are connected to all hosts?

Are these iSCSI or NFS?

What versions of vCenter Server and ESXi 5.1? The latest?

If they are seen by vCenter Server, they should be picked up by Xangati.

Have you refreshed and re-scanned the datastores?

Otherwise it could be a bug with Xangati.
philb19Author Commented:
Thanks Andrew

No - the SATA datastore is shared storage, NFS. There is no iSCSI at play here; it's all NFS.
vCenter 5.1.0 build 1473063
No local storage - all NetApp.
Connected to all hosts? Yes.
ESXi 5.1.0 build 2000251
Refresh tried, yes.
philb19Author Commented:
It's the hybrid NetApp SAS/SSD datastores that can't be seen,

attached to all hosts in the prod (ESXi) cluster.

Andrew Hancock (VMware vExpert / EE Fellow)VMware and Virtualization ConsultantCommented:
That seems very odd, as at that layer ESXi cannot see the underlying storage; it's all seen as NFS.

Does it look any different in the way it's presented to ESXi?
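One way to compare how each datastore is presented is to tabulate each NFS mount by the server address it comes from, across all hosts. This is a minimal sketch, not part of the thread's environment: the input shape (datastore name, remote host) pairs is an assumption, standing in for whatever you collect from each host's NFS mount list (e.g. the output of `esxcli storage nfs list`), and the IPs are placeholders.

```python
# Group NFS datastores by the server address they are mounted from,
# to spot presentation differences such as a split across two VIFs.
from collections import defaultdict

def group_by_nfs_server(mounts):
    """Return {remote_host: sorted list of datastore names}.

    `mounts` is an iterable of (datastore_name, remote_host) pairs.
    """
    groups = defaultdict(set)
    for name, host in mounts:
        groups[host].add(name)
    return {host: sorted(names) for host, names in groups.items()}

# Hypothetical data mirroring the thread: SATA on one VIF, hybrid on another.
mounts = [
    ("sata_ds01", "10.0.0.6"),
    ("hybrid_ds01", "10.0.0.7"),
    ("hybrid_ds02", "10.0.0.7"),
]
print(group_by_nfs_server(mounts))
# -> {'10.0.0.6': ['sata_ds01'], '10.0.0.7': ['hybrid_ds01', 'hybrid_ds02']}
```

If the grouping differs between hosts, or all the "invisible" datastores cluster under one address, that points at the presentation path rather than Xangati itself.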
philb19Author Commented:
Hi Andrew - this is cluster-mode NetApp. I was more familiar with the NFS export process in 7-Mode, which changed somewhat in cluster-mode.
But I will have a closer look tomorrow based on your question.

I have another question - let me know if you require me to place in a new question.
There is one VM, OS Win2008 32-bit (not R2), that has an E1000 driver for its vNIC, so it's not using the VMXNET driver. I realise that the VMXNET driver provides better performance. I'm just wondering if the performance loss from using E1000 is significant and likely to affect application performance significantly?

The reason an E1000 driver was used is that the VM was put in in the early days, running ESX 5 or 4.1. At the time, I recall we couldn't get the VMware NIC driver to work in the OS, so we fell back to E1000, which worked every time!
Thank you for the help.
Andrew Hancock (VMware vExpert / EE Fellow)VMware and Virtualization ConsultantCommented:
Technically a new question, but I'll answer quickly: ALL VMs should use the virtualization-aware VMXNET3 driver, which is installed with VMware Tools.

The VMXNET driver, in versions 1, 2 and now 3, has been available for many years, since before 4.1.

It does perform better, and should be used rather than the emulated E1000 driver!

e.g. a 10 Gbps network interface rather than 1 Gbps!

(Always back up servers in production before changes are made!)
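Before swapping adapters, it helps to inventory which VMs still carry the legacy driver. This is a hedged sketch over a plain dict, not a live vCenter query: the inventory shape `{vm_name: [adapter_type, ...]}` is an assumption, standing in for whatever your tooling (PowerCLI, pyVmomi, etc.) reports per VM.

```python
# Flag VMs that still have an emulated E1000/E1000E vNIC and are
# candidates for migration to the paravirtual VMXNET3 adapter.
def vms_needing_vmxnet3(inventory):
    """Return sorted names of VMs with at least one e1000/e1000e vNIC.

    `inventory` maps VM name -> list of adapter type strings.
    """
    legacy = {"e1000", "e1000e"}
    return sorted(
        vm for vm, nics in inventory.items()
        if any(nic.lower() in legacy for nic in nics)
    )

# Hypothetical inventory; "legacy-w2k8" stands in for the 2008 32-bit VM.
inventory = {
    "app01": ["vmxnet3"],
    "legacy-w2k8": ["e1000"],
}
print(vms_needing_vmxnet3(inventory))  # -> ['legacy-w2k8']
```

The actual adapter change still goes through your usual tooling (and, per the note above, after a backup); this only builds the work list.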
philb19Author Commented:
Thanks Andrew Awesome as always :)

I did pick up a difference in the presentation of the datastores:

The SATA NFS datastore, which Xangati can get monitoring data from, is presented to the ESXi hosts over one VIF (IP = .6),
while the hybrid SSD/SAS NFS datastores are presented over the other VIF (IP = .7).

Now, while all datastores are of course mounted on all of the ESXi hosts in the cluster, this is a difference.

In fact, around a month ago we couldn't NFS-mount using one of the VIFs from a Linux server. By trial and error we tried the other VIF and it mounted OK. We then went straight back to the first VIF and suddenly it mounted OK as well. Some weird NetApp issue, maybe.

Our external consultant seems certain that the Xangati issue is a bug in this vCenter version.

Any further ideas? Cheers
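The intermittent mount behaviour across the two VIFs can be sanity-checked with a quick probe of the NFS service on each address. A minimal sketch: this only confirms that TCP 2049 accepts connections on each VIF; it says nothing about exports or mount permissions, and the IPs below are placeholders for the real .6/.7 VIF addresses.

```python
# Probe whether the NFS port answers on each VIF address.
import socket

def nfs_port_open(host, port=2049, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for vif in ("10.0.0.6", "10.0.0.7"):  # placeholder VIF addresses
    status = "reachable" if nfs_port_open(vif, timeout=1.0) else "unreachable"
    print(vif, status)
```

Run it from an ESXi-adjacent box (or the Linux server that saw the flaky mounts) during and after an incident; if one VIF intermittently fails the probe, that points at the NetApp/network side rather than Xangati or vCenter.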
Andrew Hancock (VMware vExpert / EE Fellow)VMware and Virtualization ConsultantCommented:
I don't think different VIFs should make any difference unless IP information is confused. We've had several issues with the NetApp Storage VSA appliance, but that is rather different to your issue.

Are you using the latest vCenter Server?

Otherwise, escalate to Xangati support.

Experts Exchange Solution brought to you by
