Avatar of Alex
Alex

asked on

VMware vSphere and Dell MD3200, and other shared storage options

Hi guys,

I've a project coming up with several options, and I would like to make sure I get this right. Dell is very naughty with their shared storage hardware: it doesn't permit you to use non-Dell disks. I've tried even the same model number of drive, but because Dell puts their own firmware on them, you can't use any drive other than their own. They take a normal SAS drive, mark it up to four times the price, and put it out on the market.

So cheap SSDs for read-mostly applications on the MD3200 are a no-go, as I can't use anything that is not sold by Dell. It is a shame; I have Citrix servers here that barely do any writes, and a couple of RAID 10 arrays of cheap SSDs would do very well.

I could perhaps use the SolarWinds iSCSI software on a Windows server and share the storage out, but that is iSCSI: higher latency and a gigabit network. I already have a DAS device connected through SAS, which is a very low-latency device.

The Virtual SAN from VMware seems quite expensive and not fully matured for proper production use. It would be good to have storage across the hosts that is replicated to the other hosts so it works as "shared" storage, but god knows if that works well.

What other options do you guys suggest?

Thanks!
Avatar of Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

What is your budget?

iSCSI and NFS are very popular, or you could use Nexenta Community Edition, based on ZFS, using SSD for cache.

How many ESXi servers do you have?

What license do you have, and do you want to do vMotion, HA, DRS, Storage vMotion?
Avatar of Alex

ASKER

About 20-30k.

iSCSI and NFS would first require 10Gbit; second, I don't like the latency. I've already got 2x MD3200s with dual controllers from Dell; they are worth 15k. I tested Nexenta not long ago: what a frustrating piece of software. You can't get support with the community or trial editions, and I never got it to work properly; I won't pay thousands just to find out it is no good for me. Definitely don't recommend it.

I've got 8 ESXi hosts, but they will be split between two companies, each having its own DAS.

I need vMotion and HA, so it seems the basic Standard license will do. I'm now also wondering about becoming a cloud provider, depending on license costs.
EMC ViPR Software-Defined Storage is a mature SDS product (more mature than VSAN, anyway).
VMware will release a new version of VSAN this year (supposedly).
FreeNAS.
Windows 2008 / 2012 iSCSI.
Avatar of Alex

ASKER

EMC ViPR seems interesting, but I assume it is all iSCSI too?

VSAN will hopefully get better, but I don't want to risk anything for these clients I have.

FreeNAS, unfortunately, I don't think is geared towards a high-availability client like that.

I heard some good stuff about the 2012 R2 iSCSI target, like multi-level caching, similar to ZFS.


What about the MD3200s, do you guys have any ideas what I could do with them? I'm thinking about just putting in 12x 600GB 15K SAS drives, as I did for another client, in one big RAID 10 array. I would love to have a couple of SSDs, but Dell selling each for £2k is asking too much! I have a T620 with 4x Intel 730 480GB in RAID 10 and it is incredible; it outperforms the DAS with 12x 600GB 15K in every single aspect.
Why not use CACHING TECHNOLOGY in the HOST ESXi server with SSDs, so ALL traffic is cached before it hits the MD3200?

See here

http://www.pernixdata.com/

Put your SSDs where they need to be, close to the CPU, and all reads and writes go through the cache before VMFS...

and you can turn it on and off for specific VMs.

We use it with great results on slow SANs!
Avatar of Alex

ASKER

That is only with the Enterprise Plus license as far as I know; it is damn expensive. Cheaper to buy a proper hardware SAN, isn't it?
VMware's offering is very poor by comparison to Pernixdata!

Even with a proper SAN, you may still not get the performance you require, and rather than adding more disks, putting SSDs in the Host near the CPU is the answer!
Avatar of Alex

ASKER

Have you used their software? These days pretty much everyone is developing their own iSCSI SAN and other accelerator things, like SolarWinds, so I'm a bit sceptical about throwing all my data into software storage.

I might use the DAS with a big RAID 10 container for servers like Exchange and the DCs, and local SSDs on each host for the Citrix/Remote Desktop servers.

How much have you played with these software accelerators for VMs?
Yes, we use their software in production, in front of LARGE SANs to BOOST IOPS!
Avatar of Alex

ASKER

The SMB version is $9,999 for up to four hosts and 100 VMs.

Hmm, quite pricey. It can do a lot, but for this price I can put a lot of SSDs in each host; as the hosts will only run Citrix servers, if one goes down I have 10 left to load balance.

I think I'll go for the local SSD route due to costs.

It is funny that this SMB version for VMware Essentials costs 10k, when Essentials Plus itself costs half of that!

Perhaps when we get bigger we can use it, but at this moment it is too pricey.
EMC ViPR is iSCSI, NFS and CIFS on the front end, to serve storage, and FC and the rest on the back end, to consume storage.

v1.0 is free to try; if you like it, then you can evaluate buying v2.0.

But then again, I don't know its price.
VMware VSAN works very well, just like any software appliance.
Still, nowadays flash storage has very fast write wear, so it is best used as cache.
(It takes some esxcli to get VMware to recognise the devices as flash cache.)
We use 32/64/128GB microSDHC/XC UHS-I SD cards as local flash cache. While it sounds slightly mad, in reality it delivers its full rated 10k read IOPS and beats the ~100 IOPS you can get from the (old laptop drives rebranded as) premium server disks of the time.
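
For reference, this is roughly the esxcli involved on ESXi 5.x to tag a local device as flash; the device id below is just an example, yours will differ:

# find your device id first with: esxcli storage core device list
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=mpx.vmhba32:C0:T0:L0 --option="enable_ssd"
esxcli storage core claiming reclaim -d mpx.vmhba32:C0:T0:L0
# the device should now report "Is SSD: true"
esxcli storage core device list -d mpx.vmhba32:C0:T0:L0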
Avatar of Alex

ASKER

The EMC solution is geared more towards the big enterprise sector; my area is more SBS and medium business, and costs are a major concern for us.
Gheist, how are you using these devices as cache? Is it in VSAN?
HP P2000 series? iSCSI-based?

Dell EqualLogic?

Or are these too expensive?
Avatar of Alex

ASKER

Yep. As we already have a couple of MD3200s in place, migrating to iSCSI doesn't make much sense in my opinion; plus, we would add the latency of the network instead of using SAS.
So what are you wanting to achieve?

Faster performing SAN?

Faster performing local disk?

Do you have a need for shared storage?
Avatar of Alex

ASKER

Fast shared storage, but if possible without the Dell proprietary disks. Is there any workaround for this requirement on the Dell MD3200?

Yes, I need shared storage across 4 hosts, each of them with redundant connections to the DAS, as it is now.

I might use local SSDs in RAID 1 for the Citrix servers though, as that will be cheap.
Fast shared storage, keeping the Dell MD3200 in the configuration?
Avatar of Alex

ASKER

Yes, I can't just throw it away to buy a physical iSCSI device; the costs I would "save" by getting generic disks would go away with the purchase of the device. Cost is key here, so either I use the existing MD3200, or I believe the other simple option is to get a physical Windows 2012 R2 server and run the iSCSI target on it. I heard it has improved a lot and is very close to ZFS these days, skipping the need for a RAID controller and good at managing cache.
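
Something like this is roughly what I have in mind for the 2012 R2 box; a rough sketch with made-up target names and paths, assuming the iSCSI Target Server role:

# install the iSCSI Target Server role
Install-WindowsFeature FS-iSCSITarget-Server
# create a VHDX-backed LUN and a target for the ESXi hosts, then map them together
New-IscsiVirtualDisk -Path "D:\iSCSI\LUN0.vhdx" -SizeBytes 500GB
New-IscsiServerTarget -TargetName "esxi-hosts" -InitiatorIds "IQN:iqn.1998-01.com.vmware:esxi01"
Add-IscsiVirtualDiskTargetMapping -TargetName "esxi-hosts" -Path "D:\iSCSI\LUN0.vhdx"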
Avatar of Alex

ASKER

This might be more in line with what I'm looking for:

http://www.aidanfinn.com/?p=15430

So I'll still need a host that uses a hardware RAID controller; it looks like software RAID is still similar to what it used to be in 2008, not very reliable or a good performer.

Perhaps this, with 10Gbit network cards, would be a good cost overall.
Put SSDs in your hosts and trial this:

http://www.pernixdata.com/

Keep your old SAN and boost its performance using SSDs close to the host CPU, and then select the VMs you need the boost on, or enable it for ALL.
Avatar of Alex

ASKER

I've just sent them an email asking for costs first; it doesn't help me to trial it and find it works fine if the price is far from my client's reality.
Most fast storage systems no longer use RAID; they just use a JBOD of disks, with SSDs for CACHE (ZFS ARC and ZIL). The issue with the MD3200 as an iSCSI solution is that you can only connect to it via iSCSI, so to speed it up you need cache in front of it, if it's IOPS you are trying to BOOST, because you cannot insert, or don't want to pay for, Dell SSDs.
Avatar of Alex

ASKER

$7,500 for an Essentials version for 3 hosts.

About £5,000: not cheap, but not too pricey if it does what it says.

I'll evaluate it next month if my client goes forward with the project.
Avatar of Alex

ASKER

The MD3200 I use is SAS; I don't use the iSCSI version.

You mean most systems running FreeBSD/Solaris and variations that support ZFS, right?
You'll not get much else for £5k in terms of fast storage, unless you want to build something from scratch.

How much storage do you have in the MD3200 ?

Okay, as you have the SAS version, you could connect it to a Solaris server...

add some SSDs for the ZIL and cache (L2ARC), add 32GB of RAM for the ARC, and then offer NFS or iSCSI to ESXi.

Or you could look at the commercial or community version of Nexenta, which is ZFS-based.

It depends on how the MD3200 presents itself to a server; e.g. ignoring the RAID controller, does it appear as a JBOD?

e.g. the MD3200 is not just an enclosure of disks!
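
e.g. a rough sketch of that on the Solaris side, with example pool and device names, assuming the MD3200 disks can be presented as plain devices:

# mirrored SSD pair for the ZIL (sync write log), plus an SSD for the L2ARC (read cache)
zpool add tank log mirror c2t0d0 c2t1d0
zpool add tank cache c2t2d0
# carve out a filesystem and export it to ESXi over NFS
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore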
Avatar of Alex

ASKER

I've got 12x 600GB disks in RAID 10.

Is Solaris still pure command line to set up and maintain ZFS storage? I remember the license costs were a bit high; I was playing with Solaris Express and ZFS a few years ago, but ended up finding out it was not going to be possible to deploy it to a client due to costs. That was when I looked into Nexenta.

Regarding adding the Solaris box in between the DAS and the ESXi hosts, have you done something like this? I always try to keep things as simple as possible, so I'm wondering if this is a good option for a production environment.

Thanks!
We use ZFS on Solaris in production in our offices; we have 12TB and 24TB ZFS SANs!

Your issue is the RAID... because ZFS does its own RAID in software, disks are presented to the server as a JBOD!

ZFS has many implementations now; the latest versions are always on Oracle Solaris.

But ZFS versions exist for FreeBSD and Linux.

You can use napp-it

http://www.napp-it.org/

a front-end GUI for configuration, if you like...
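
e.g. ZFS building its own RAID 10 equivalent from a JBOD, with example device names:

# striped mirrors, the ZFS equivalent of RAID 10, then check pool health
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
zpool status tank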
Avatar of Alex

ASKER

How much do you guys pay for the Solaris license to have it working as a proper ZFS box?

Yeah, I'm aware; I used some Dell PERC H200s in IT mode to test ZFS. I used napp-it, but it was still at a very early stage, with lots of bugs and problems.
Avatar of Alex

ASKER

Hi Andrew,

What were the license costs involved with Solaris to use as a ZFS storage device?
Avatar of Alex

ASKER

My concern is support in the future, in case it is needed. One client we got on board already had FreePBX software in place, and when it was time to get some support for it, once we ran out of ideas, it ended up being a pain.

Compare that to pfSense, for example: you have an open-source project, but you can also pay for commercial support if needed.

I would prefer Solaris to get it working; do you have any idea what figures I would be looking at for the licenses?
You could use ZFS on any version of FreeBSD (FreeNAS) or Linux (CentOS!).

Or you could use Nexenta... (based on Oracle Solaris; use NFS)

http://www.nexentastor.org/


I'm afraid what we paid is irrelevant, because you will not be able to obtain the same price!
Avatar of Alex

ASKER

I tried Nexenta version 3 a few years ago and never got some performance problems sorted; I tried both the community and trial editions. FreeNAS didn't alert you if a physical drive failed, as that was not included in the FreeBSD code back then, so it would be risky not knowing a drive had failed: no notifications. Perhaps this has changed now?

I'll give Nexenta version 4 another go as the community version and see if there are improvements (I hope there are).

As it has been a long time since I last tried ZFS, things may have changed a lot; so, in your opinion, is the Linux implementation mature enough for production these days? Thanks!
ASKER CERTIFIED SOLUTION
Avatar of Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
Avatar of Alex

ASKER

The explanations all together helped me to choose what to do next. I'll try ZFS again, based on Andrew's advice.