Solved

Question about SANs and their performance

Posted on 2013-01-31
296 Views
Last Modified: 2013-02-19
Hi
I'm new to SANs, but we're getting an iSCSI SAN installed at my office, and I'm just curious about a few things:

First, I understand the SAN is faster as far as drive performance than the storage in the local host (i.e. 24 drives in the SAN vs 4 drives in the local host), and I understand the benefit not just in performance of the drives, but in redundancy/resiliency as well.  HOWEVER, what I'm concerned about (or maybe just don't understand) is this: the drives in local storage are connected via SAS, SCSI, SATA, or whatever, directly to the board, whereas the SAN has only a few iSCSI gigabit connections between it and each ESXi host (we use VMware).  Isn't the connection between the SAN and the host going to be a much bigger bottleneck than local storage speed?  Wouldn't a single SATA drive connected directly to the local controller on the host be faster than a 60-disk SAN connected via a few gigabit iSCSI adapters?
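To put rough numbers on the comparison being asked about (these are commonly quoted nominal figures, not measurements from this setup):

```python
# Nominal usable bandwidth of the interfaces behind the question
# (commonly quoted figures; real-world throughput is lower still).
LINK_MB_S = {
    "SATA II port (local)": 300,
    "SATA III / SAS-2 port (local)": 600,
    "1GbE iSCSI link": 125,
}

iscsi_links = 2
print(f"{iscsi_links} x 1GbE iSCSI: {iscsi_links * LINK_MB_S['1GbE iSCSI link']} MB/s ceiling")
for name, mb in LINK_MB_S.items():
    print(f"{name}: ~{mb} MB/s")
# Even a single local SATA III port has more raw bandwidth than two
# gigabit iSCSI links combined - which is exactly the concern raised here.
```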
Question by:Mystical_Ice
12 Comments
 
LVL 47

Accepted Solution

by:
dlethe earned 500 total points
ID: 38842334
Whoever told you a SAN is faster than local storage lied, unless you have a profoundly fast Ethernet config or you are comparing against an ancient HDD.

Also, iSCSI can carry a huge CPU penalty on your host system (25% or more of the CPU can be chewed up), depending on whether your NIC has a specialized offload processor and whether you are using encryption.

If your connection to the desktop is 1Gbit, then real-world you'll be lucky to get 70MB/sec to a desktop.  A poor choice of NIC can make this closer to 50MB/sec.  You also need a decent switch, depending on the number of users.

Now get yourself a pair of $99 SATA disks, use software RAID1, and you can easily sustain 200MB/sec worth of real data on reads.
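A quick sketch of the arithmetic in this answer (the MB/sec figures are the rules of thumb quoted above, not benchmarks):

```python
# Back-of-envelope throughput comparison using this answer's rules of thumb.
GBE_LINE_RATE_MB_S = 125      # 1 Gbit/s = 125 MB/s raw
GBE_REAL_WORLD_MB_S = 70      # typical usable iSCSI payload per gigabit link
RAID1_SATA_READ_MB_S = 200    # two cheap SATA disks, software RAID1, reads

links = 1
san_throughput = links * GBE_REAL_WORLD_MB_S
print(f"1GbE iSCSI, {links} link(s): ~{san_throughput} MB/s usable")
print(f"Local RAID1 SATA pair, reads: ~{RAID1_SATA_READ_MB_S} MB/s")
print(f"Protocol/stack overhead eats ~{1 - GBE_REAL_WORLD_MB_S / GBE_LINE_RATE_MB_S:.0%} of line rate")
```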
 

Author Comment

by:Mystical_Ice
ID: 38842397
Connection to the desktop is 1Gb, yes, but the connection between the SAN and the ESXi hosts would be at a minimum 2 ports (gigabit), on a dedicated VLAN, with jumbo frames, flow control, and so forth.

Maybe it's never going to be faster than local storage, but for a SAN with a flash tier of drives (for performance), is there going to be a noticeable difference between the local storage and the SAN in terms of performance?  The drive speeds on the SAN are a lot quicker (SSD on the SAN vs 4x 15k SAS spindles in RAID 5 on the local storage).  But I'm still having a hard time wrapping my head around what in my mind is the bottleneck - the 2x 1Gb connections between each host and the SAN.
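One thing jumbo frames do help with is per-packet protocol overhead. A rough sketch of the payload efficiency, using textbook header sizes and ignoring iSCSI PDU headers, ACKs, and interframe gaps (so these figures are optimistic):

```python
# Rough payload efficiency of TCP traffic at standard vs jumbo MTU.
ETH_OVERHEAD = 18     # Ethernet header + FCS
IP_TCP_OVERHEAD = 40  # IPv4 header + TCP header

def payload_efficiency(mtu):
    payload = mtu - IP_TCP_OVERHEAD
    return payload / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{payload_efficiency(mtu):.1%} of wire bandwidth is payload")
# MTU 1500 -> ~96%, MTU 9000 -> ~99%: jumbo frames help a little, but they
# cannot turn two 1Gb links into more than ~2 x 125 MB/s of raw capacity.
```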
 
LVL 47

Expert Comment

by:dlethe
ID: 38842429
The drive speeds behind the RAID controller are relatively insignificant until you look at the RAID levels, chunk sizes, # of disks, I/O size, and so on.

Just take iSCSI out of the equation.  Figure 70MB/sec (let's ignore IOPS) per 1Gbit connection from appliance to switch.  So if you have 2 ports from SAN to the switch, in a perfect world you have 140MB/sec tops to share between everybody.

I can take 4 high-performance SAS-2 disks on a RAID controller, direct attached, and get anywhere from 500MB/sec sustained reads/writes in a RAID0 with a 64KB I/O size down to around 50MB/sec in a RAID6 with transactional I/O once the write cache fills up.

So in a perfect world, the most I can expect to get from the SAN is 140MB/sec, and the least is 50MB/sec.  Now divide that 140MB/sec (assuming you have 2 ports from the SAN to the switch) by the total number of users to get an average. Certainly they won't all need that I/O at the same time.

Your bottleneck is between the SAN and the switch.

BTW, even if you had SSDs, throughput would not change ... LATENCY, or I/Os per second, would be much better. (Well, the RAID5/6 case would be better, but it still won't exceed the size of the pipe.) An SSD has latency in microseconds; a mechanical disk has latency in milliseconds.  Throughput is always going to be constrained by the size of the pipe from the SAN to the switch.
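The throughput-versus-IOPS distinction comes down to a few lines (illustrative numbers, assuming the 2-port/140MB/sec pipe described above):

```python
# Throughput vs IOPS: MB/s = IOPS * I/O size. The pipe caps MB/s no matter
# how fast the media is; SSDs mostly buy you latency/IOPS, not pipe width.
# The workload numbers below are illustrative assumptions, not measurements.
PIPE_MB_S = 140  # 2 x 1GbE at ~70 MB/s usable each

def mb_per_s(iops, io_kb):
    return iops * io_kb / 1024

for label, iops, io_kb in [
    ("HDD array, 64KB sequential", 2000, 64),
    ("SSD tier, 4KB random", 20000, 4),
]:
    demand = mb_per_s(iops, io_kb)
    print(f"{label}: wants {demand:.0f} MB/s, pipe delivers {min(demand, PIPE_MB_S):.0f} MB/s")
# 2000 * 64KB = 125 MB/s; 20000 * 4KB = 78 MB/s. Small random I/O is
# latency-bound, while big sequential I/O hits the pipe first.
```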
 
LVL 37

Expert Comment

by:Neil Russell
ID: 38842794
I agree with the above; lose the iSCSI connection. Can you not afford to go to a fibre-connected SAN?
If you really need performance and are working in a VMware cluster, then that's the best advice you can get. Forget iSCSI.

How big is your VMware environment? How many hosts?
 

Author Comment

by:Mystical_Ice
ID: 38843420
Actually I say iSCSI, but in actuality we're using AoE (on a CoRAID SAN), which apparently has a lot less overhead than iSCSI.

3 hosts in the VMware environment, with a total of ~15-20 guests at any given time.

Our average IOPS across all of our hosts, all guests, is around 900, with our peak at 2000.

Our SAN has 12x 2TB SATA drives (in RAID 10) with 4x 200GB SSDs in a stripe used for cache.

There are 4 ports from the SAN to the switch, and 4 ports from the switch to each ESXi host, BUT it's my understanding that it's actually 2x2, with the second pair in failover, not teamed.
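A quick sanity check of these numbers (assuming ~70MB/sec usable per active gigabit link, as suggested earlier in the thread; the average I/O size isn't stated here, so a few plausible sizes are tried):

```python
# Does the stated peak workload fit through 2 active 1GbE links?
# Assumptions: ~70 MB/s usable per link, second port pair is failover only,
# and a range of guessed average I/O sizes (not given in the thread).
USABLE_PER_LINK_MB_S = 70
ACTIVE_LINKS = 2
PEAK_IOPS = 2000

pipe = ACTIVE_LINKS * USABLE_PER_LINK_MB_S
for io_kb in (4, 16, 64):
    demand = PEAK_IOPS * io_kb / 1024
    verdict = "fits" if demand <= pipe else "saturates the links"
    print(f"{PEAK_IOPS} IOPS @ {io_kb}KB: {demand:.0f} MB/s vs {pipe} MB/s pipe -> {verdict}")
# At 4-16KB the peak easily fits; only near 64KB+ does it approach the pipe.
```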
 
LVL 16

Expert Comment

by:Gerald Connolly
ID: 38843475
AoE (ATA over Ethernet) - it's probably excellent, but very proprietary.
 
LVL 47

Expert Comment

by:dlethe
ID: 38843549
3 whole hosts?  By the time you factor in the costs of the switch and networking, you'll come out much better in terms of cost & performance if you just direct-attach a small RAID controller and disks.  Add one pair of SSDs in a RAID1 dedicated for index and scratch table space. Then use the rest in RAID10.

You'll get 20K-50K IOPS from a RAID1 (twice the IOPS on reads as on writes, in a perfect world), and you'll get more than you have right now on the RAID10.  And by having dedicated storage on each host, there is no single point of failure.

If you want the ability to quickly move things around after a host failure, then use an external subsystem with multiple expanders so you can cable a subset of disks to each host.  Do the math; you'll still come out much better in price as well as performance with DAS.

Sounds like somebody got you hooked on a SAN, but with only 3 hosts it just isn't worth it because you WILL need to pay a premium for switching and NICs.
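The RAID1 read/write asymmetry mentioned above reduces to a tiny model (idealized, and the per-device IOPS figure is hypothetical):

```python
# Why RAID1 reads are ~2x its write IOPS (idealized model): reads can be
# serviced by either mirror member, while writes must hit all members.
def raid1_iops(disk_iops, n_mirrors=2):
    reads = disk_iops * n_mirrors  # each member serves independent reads
    writes = disk_iops             # every write goes to all members
    return reads, writes

# e.g. a hypothetical SSD doing 25K IOPS per device:
r, w = raid1_iops(25_000)
print(f"RAID1 of 2 SSDs: ~{r:,} read IOPS, ~{w:,} write IOPS")
```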
 

Author Comment

by:Mystical_Ice
ID: 38844155
I learned a long time ago the "do it yourself" way doesn't work in an enterprise environment.  Building a home-grown DAS for a $250mm company to run on is bad news several times over.  And the method of moving things around is primitive at best.  When you want to leverage vCenter features such as HA, DRS, vMotion, and so forth, DAS won't work.

Remember "cost" and "performance" are not as important as "reliability" in a SAN, which is the reason most move to one I would imagine.

Anyway, I appreciate the responses and (some of) the advice.
 
LVL 47

Expert Comment

by:dlethe
ID: 38844186
I am not talking about a pure DIY play.  Buy a DAS RAID product that talks via SAS (or something used and cheap that talks via Fibre Channel) if you want external RAID with multi-host connectivity.  Do NOT use FCoE, AoE, or iSCSI; it is the wrong tool for your job.

Or just buy internal disks and an appropriate RAID controller, and make sure the internal disks are qualified for that particular enclosure.  Go Dell and get a PERC with Dell disks as a turnkey solution with Dell support if you like.
 

Author Comment

by:Mystical_Ice
ID: 38844634
Our hosts all have 700-series RAID controllers, but at the end of the day it's still storage on a local host, subject to failure of the host.

There are millions of companies, bigger than we are, that use FCoE, AoE, or iSCSI - so what are you saying is the only way to connect to the SAN fabric?  Fibre Channel?

That's late-'90s thinking - there are other methods today that work just fine.
 
LVL 47

Expert Comment

by:dlethe
ID: 38844778
SAS2 is the way to go with shared external storage.  SAS has switching and zoning too.  But with only 3 host computers, you can go DAS.  Or get a box that has FC ports and an internal switch.

Then you don't need to buy a switch; you have 8-16 ports on the FC-attached RAID and the box itself does the switching.  Or get a SAS RAID - I don't care. If you want to go FCoE, AoE, or iSCSI, it is important to realize that these access methods exist for the convenience of using existing wiring and existing switched Ethernet infrastructure.  You have to beef things up by adding a switch and TOE cards to make them less of a burden, and even then the real-world protocol and latency overhead means you won't see anything near the theoretical (or marketing) performance for most types of applications.

They are NOT NOT NOT to be used in new installations with a small number of hosts when you want performance.  FC and SAS-2 direct attach is faster and cheaper if you need shared storage.
 
LVL 55

Expert Comment

by:andyalder
ID: 38847178
>SAS2 is way to go with shared external storage

I'd agree with that; there are lots of supported SAS-connected "SANlets" out there. They work the same way as iSCSI or Fibre Channel arrays but have just 4 host ports per controller. I wouldn't call it DAS, since it's shared storage with intelligent controllers in the array and dumb HBAs in the hosts. You can tell one from the other because a SAN with SAS host connect costs about the same as a Fibre Channel or iSCSI box, as opposed to a dumb shelf that needs a RAID controller in the host. Dell, IBM, and a few others rebadge the Engenio units, and HP rebadges Dot Hill, so you can probably get one to match the make of your servers, which will ensure it's fully tested and supported.
