Nimble and/or Pure Storage array question

Hello experts, I was hoping some of you have one or more of the following items in house and can answer a couple of questions for me. I am looking for folks who actually use these SANs on a daily basis, not someone who has done their own research and assumes they know the answers (no offense intended; I have done extensive research myself and still do not have a clear answer).

For Nimble Storage: do any of you have the CS300 in house, and if so, how much usable disk space did you have available as soon as you plugged it in versus what you thought you were getting when you signed the PO (or initiated a POC)?

We are looking at a Nimble CS300 with what they claim is roughly 23TB "base raw disk capacity". I know they are saying 23TB raw, but that number appears to be before overhead, hot spares, etc. So I am wondering exactly how much capacity I would see if I took this unit and made one giant LUN, before any compression is applied.
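For what it's worth, here is a back-of-the-envelope way to frame the raw-versus-usable question. The drive count, spare/parity layout, and reserve percentage below are purely illustrative assumptions, not Nimble's actual configuration:

```python
def usable_capacity_tb(raw_tb, total_drives, spare_drives, parity_drives, reserve_pct=0.10):
    """Rough usable-capacity estimate: drop hot spares and parity drives
    from the raw pool, then subtract a filesystem/metadata reserve.
    Every overhead number here is an assumption; real arrays vary."""
    per_drive_tb = raw_tb / total_drives
    data_drives = total_drives - spare_drives - parity_drives
    return per_drive_tb * data_drives * (1 - reserve_pct)

# Hypothetical 23 TB raw shelf: 12 drives, 1 hot spare, 2 parity drives, 10% reserve
print(f"{usable_capacity_tb(23, 12, 1, 2):.1f} TB usable before compression")
```

The point of the sketch is just that "raw" shrinks twice, once for drives removed from the data pool and once for reserves, so the only trustworthy number is the vendor's stated usable figure for your exact configuration.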

The same question applies to the Pure Storage FA-405 and FA-420. I am nervous about the wording in these specs: the FA-405 claims "up to 40TB", but the spec sheet states "2.75 - 11TB raw capacity".

The FA-420 claims "up to 125TB", but the spec sheet states "11 - 35TB raw capacity". I cannot seem to get a straight answer.

I want to see 20TB or more available when I fire up the unit. I do not want to rely on compression/deduplication to achieve my desired capacity; I want 20TB or more AFTER all the overhead, hot spares, etc. are factored in.

Do any of you have real-life experience with either of these units? Can you tell me what your actual usable capacity was when you first plugged it in, and how it is performing?

In the case of Nimble, as they do not do dedupe, I am curious how well its compression does in a fairly sized Exchange environment (3-4TB).

Jonathan Brite (System Admin) asked:

I have the CS300 with the larger 3TB drives. As I recall, Nimble Storage gives the usable capacity on their web site, which was 9 × drive capacity. I see that they are now publishing 8 ×, with the difference probably being formatting, as a 2TB drive formats to a bit less than that.
Nimble offers compression but not yet deduplication. When dedupe does become available it will be inline, so if you want to dedupe an entire volume you will need to copy it to a new one.
Exchange already compresses its databases. I will need to check my compression rates when I get back to the office.
I can only speak to the Pure system. Pure is an all-SSD system with proprietary algorithms for compression and deduplication. We have one of the smaller units - 5.5TB raw. We have our entire server environment on it, including a couple of SQL servers and an Exchange server, as well as all of our VDI environment - about 300 desktops plus the associated servers. This probably constitutes close to 20TB of real data, and we are still only around 75% full.

This was a great boon for us in the VDI (View) world, as IOPS are a key thing with VDI. Even with all of this running on a single shelf, we do not notice any issues with speed or access from a user perspective. I can't speak for the Nimble system as I have no experience with it, but if you need a good, fast SAN, I can highly recommend Pure.

I can't say that I really looked at what was "available" when I fired it up, and you might not "see" 20TB free, because I'm not sure that's how the thing works. It's when you start to load it up that the good stuff happens. For example, we created a server LUN of 20TB. Since all storage on Pure is thin provisioned, it allowed us to do that. I probably could have created one at 50TB and it would have let me, but I can't really store 50TB of data on it - just like in VMware you can create all these thin-provisioned VMs, but there's only so much real space for them to sit on.

So you create the LUN at the size you think you will need so that you can actually store that much there, and then you use their management console to monitor what's actually happening. I have far more space provisioned across a number of LUNs than I will ever use, but that's OK, since the monitor keeps you apprised of what's really going on. If you bought the 11TB box, then depending on what you are storing and how much deduplication and compression can be applied, I'd suspect you could easily get 40TB.

I do suspect that if it were possible to turn off compression and deduplication, you'd probably only see what you would expect on any other "regular" SAN. They can't tell you for sure what you're going to get because they don't know how compressible your data is, but I'd suspect at least a 5:1 space savings.
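To put rough numbers on that point (the system overhead and reduction ratios below are illustrative assumptions, not Pure's published figures), effective capacity on a dedupe/compression array is roughly usable capacity times the data-reduction ratio:

```python
def effective_capacity_tb(raw_tb, overhead_pct, reduction_ratio):
    """Effective capacity = (raw minus an assumed system overhead)
    x an assumed data-reduction ratio. Illustrative math only."""
    return raw_tb * (1 - overhead_pct) * reduction_ratio

# Hypothetical 11 TB raw box with 20% overhead, at various reduction ratios
for ratio in (3, 4, 5):
    print(f"{ratio}:1 -> {effective_capacity_tb(11, 0.20, ratio):.1f} TB effective")
```

This is why the "up to 40TB" marketing figure and the "11TB raw" spec-sheet figure can both be true: the larger number bakes in an assumed reduction ratio your data may or may not achieve.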

You may want to get a meeting with a Pure sales engineer who can better explain this technology to you.
Jonathan Brite (System Admin, Author) commented:
Yeah, we have been in all the meetings and are now at the POC phase, but we only want to POC one, maybe two of them. As far as our current data sets, we are nearly full (currently on NetApp). We do, however, have another server we want to virtualize: our Enterprise Vault, and it is huge, which is why I am nervous about how much actual space we would be getting with whatever new device we choose - especially as Exchange already does compression, and so does Enterprise Vault.

I understand that the engineers claim we would not have space issues, but Pure scares me because if we did run out of space, we would essentially brick the unit, and that's not something we want to come close to happening. We are going to keep the NetApp because we just paid for another year of maintenance and it is still usable. We have both SAS and SATA drives on it - roughly 80TB raw, 35TB usable or so after all the overhead and other features.

It is all a matter of sizing properly, in my opinion, and we want a complete and clear understanding of what we need now and what we will need over the next three years. This is where the challenge comes in, as no one can tell me how compressible my data is; the only way we will know for sure is to POC a device and see for ourselves.

So you like the Pure device, and that is great. I have heard nothing but good things about the way it works and the support they provide, but we would need something bigger than 5.5TB raw, I assume (no big deal really).

As for Nimble, Kevin: are you happy with the performance of your array? I would love to know what compression rates you are currently getting. I am also assuming that Nimble (and probably Pure as well) do better compression on the Exchange volumes, so I am interested in seeing how much space Exchange reports using versus how much disk space is actually consumed on the SAN.

Thanks for the replies, guys. I appreciate it and will keep this thread open for the day to hopefully get more feedback.

Yes you would need something larger than the 5.5 TB raw that we have, but our environment sounds smaller than yours.

A conservative approach to figuring this out would be to look at the real amount of data in the systems you want to put on the new SAN and then apply a 4:1 to 5:1 compression ratio. That's probably on the light side, but it is better to err that way than the other. Also, the Pure system will alert both you and Pure if there are hardware issues, and it will alert as you approach full, so you typically have some warning before the system fills. We run in the mid-70% range, but as we approached 80% it alerted us and we had time to deal with it. In that instance it wasn't us filling it up so much as ESXi not doing a good job of reclaiming the freed space.

There are commands we run in a script on the ESXi hosts that reclaim the space better, and that keeps us out of the more-full range, assuming we aren't genuinely filling up the space.
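To sketch the conservative sizing approach above in numbers (the reduction ratios and the 80% fill threshold are assumptions you would validate in a POC, not vendor figures), you can invert the math and ask how much usable capacity a given amount of real data requires:

```python
def usable_needed_tb(real_data_tb, reduction_ratio, max_fill=0.80):
    """How much usable capacity is needed to hold real_data_tb at an
    assumed data-reduction ratio while staying under the fill-alert
    threshold. All ratios here are assumptions."""
    return real_data_tb / reduction_ratio / max_fill

# Hypothetical 20 TB of real data, at conservative 4:1 vs 5:1 reduction
for ratio in (4, 5):
    print(f"{ratio}:1 -> {usable_needed_tb(20, ratio):.2f} TB usable needed")
```

Dividing by the fill threshold rather than multiplying the data keeps headroom below the point where the array starts alerting.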
If I understand Enterprise Vault correctly, that is archive storage. Putting it on tier 0, high-performance flash storage is generally considered pretty wasteful; that's what low-performance storage is for.

So far performance has been good. Writes seem to be good, and so are most of my reads. I don't think that I've ever pushed it that hard, and would probably bottleneck at the network as my hosts are only on gigabit, and I think the teaming is sending most traffic through a single NIC instead of through multiple NICs in the team.

My Exchange data is about 3.5 TB, and I need to check the read latency and cache hits on it. I just moved Exchange onto Nimble last weekend, so we're still settling in. My backups are done against a different server with Equallogic storage, so I can't say how it performs for large streaming jobs. I will check compression when I get to the office.
Jonathan Brite (System Admin, Author) commented:
Thanks for helping, all. Yeah, as our EV is physical now, the plan is to put it onto the SATA aggregate of our existing NetApp. We would also move all the high-workload servers to the Nimble/Pure, which would then free up more space on the NetApp - specifically Exchange, and ideally our entire SQL environment.
Jonathan Brite (System Admin, Author) commented:
I am trying to give both of you credit here... am I able to edit this accepted solution?
Yes, you can, but you need to request attention which it looks like you already have. They should be able to reset it for you so that you can split the points appropriately.
I was going to suggest using the NetApp for EV unless the maintenance was so high that it would justify buying something else.

Here's the screenshot of my Exchange volume in terms of compression and such. Logs are on a separate volume.
[Screenshot: Nimble Storage Exchange database volume]
I am including a 5 minute performance report, which gives the most detail in terms of latency, cache misses, etc. As you increase the time scale, the cache hit rate looks better. This shows you the most deviation from the mean, and even then latency looks pretty good. Of course, if you want every single IO to be as quick as possible, buy Pure Storage. If you want most IOPS to be really good and you want more guaranteed capacity, buy Nimble.

[Screenshot: Nimble Storage database volume, 5-minute performance]
Jonathan Brite (System Admin, Author) commented:
That looks like a lot of misses. I know it is only a short window, but in your environment, what is your average cache hit rate for, say, an entire day, or at least a few hours? I am just worried that each of those misses is hitting 7200 RPM spinning disks and bogging down. I am guessing this chart would be much more sexy with a longer time frame (it actually looks pretty good even now, to be honest).

Thank you btw, this is EXACTLY the kind of info I am looking for.
Even though it can look like a lot of misses over the 5 minutes, the average latency still looks good. Of course, maybe 400 IOPS are at 1 ms and 10 IOPS are at 20 ms, so the average still looks pretty good while 10 of the reads are much slower.

Last 24 hours for Exchange data only:
[Screenshot: last 24 hours of Exchange data performance]

Last 24 hours for all volumes:
[Screenshot: last 24 hours of performance for all volumes]

I figured that this would be useful too. Hashes represent compression savings; two volumes have incompressible data.
[Screenshot: space view of all volumes, overall summary]
Jonathan Brite (System Admin, Author) commented:
This is absolutely awesome, Kevin. Thank you so much. These two companies really look great, and I know if I want crazy IOPS and money is no object, go with Pure; if I want great IOPS for far cheaper, go with Nimble. Just so you know, we have actually received three quotes so far: Pure was by far the most expensive but also gave us the most usable capacity (86TB combined for both sites); NetApp (not even a consideration) came in about $130k less than the Pure quote; and finally Nimble (40TB usable combined across both sites) was $150k less than NetApp and $300k less than Pure.
OMG, with numbers like those you can totally go Nimble and come out way ahead. I was originally looking at a CS300 with 24/16 TB (raw/usable) capacity and 640 GB of flash. We went with a CS300 with 36/25 TB and 1.2 TB of flash, and I am totally glad we did, as we are currently at 45% cache utilization; with the smaller cache we would be at 100%. I found Nimble's pricing pretty favorable for adding capacity and cache - roughly, 50% more capacity and 100% more cache added maybe 25-30% to the cost, as I recall.

I could have bought four of my CS300 units with 3 year 4-hour parts for the price delta between your Pure and Nimble quotes.
Jonathan Brite (System Admin, Author) commented:
Well, that's the thought process. Maybe we need to expand next year, or possibly dump our old SAN altogether, spend another $100k or so with Nimble, and we are then a full Nimble shop.
Philip Elder (Technical Architect - HA/Compute/Storage) commented:

Not directly related to your question per se, but we build Scale-Out File Server clusters that range in usable capacity from 10-12TB (24-disk 2U JBOD) to 80TB (60-disk 4U JBOD). From there we scale out to up to four nodes and four JBODs in a cluster - four times the storage with full resilience - all based on Storage Spaces, which is _built in_ to Windows Server 2012 R2. That's 320TB of fully fault-resilient storage.

Our 24-drive 2U JBOD setup with one SOFS node ran 377K IOPS with 24 HGST SAS SSDs (we discovered that's the IOPS limit with two 6Gb SAS HBAs and one SAS cable per HBA for connectivity), with scaling up to 1.2M IOPS.

SOFS and Storage Spaces make a very flexible, high-output setup when done right. Plus, the price can be very attractive.

EDIT: Three or more enclosures gives us enclosure resilience, meaning we can lose one enclosure and things keep moving along.
Jonathan Brite (System Admin, Author) commented:
Perfect info for what I was looking for.  Thank you!
Naomi Goldberg commented:
It is always really useful to hear from real users when making big buying decisions, and it seems like you've got some great feedback so far.

If you'd like to do further research into these options you can also take a look at IT Central Station, where we've got real user reviews for Nimble, Pure and NetApp. You can also ask reviewers questions directly from the site. I hope it helps you in making your decision, and feel free to add your own reviews to help out others too.