
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 312

Question about SANs and their performance

I'm new to SANs, but we're getting an iSCSI SAN installed at my office, and I'm just curious about a few things:

First, I understand the SAN is faster in terms of drive performance than the storage in the local host (i.e. 24 drives in the SAN vs 4 drives in the local host), and I understand the benefit not just in drive performance but in redundancy/resiliency as well.  HOWEVER, what I'm concerned about (or maybe just don't understand) is this: the drives in local storage are connected via SAS, SCSI, SATA, or whatever, directly to the board, whereas the SAN only has a few gigabit iSCSI connections between it and each ESXi host (we use VMware).  Isn't the connection between the SAN and the host going to be a much bigger bottleneck than local storage speed?  Wouldn't a single SATA drive connected directly to the local controller on the host be faster than a 60-disk SAN connected via a few gigabit iSCSI adapters?
1 Solution
Whoever told you a SAN is faster than local storage lied, unless you have a profoundly fast Ethernet config or you are comparing against an ancient HDD.

Also, iSCSI can carry a huge CPU penalty on your host system (25% or more of the CPU can be chewed up), depending on whether your NIC has a specialized offload processor and whether you are using encryption.

If your connection to the desktop is 1Gbit, then real-world, you'll be lucky to get 70MB/sec to a desktop.  A poor choice of NIC can bring this closer to 50MB/sec.  You also need a decent switch, depending on the number of users.

Now get yourself a pair of $99 SATA disks, use software RAID1, and you can easily sustain 200MB/sec worth of real data on reads.
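The comparison above can be sketched in a few lines. This is a rough illustration, not a benchmark: the 70MB/sec usable-iSCSI figure and the 200MB/sec RAID1 read figure are the answer's estimates, and the efficiency calculation is just the ratio of usable throughput to the raw line rate.

```python
# Back-of-envelope comparison of a 1GbE iSCSI link vs a local software RAID1.
# Figures are the answer's estimates, not measurements.

GBE_LINE_RATE_MBPS = 1000 / 8       # 1 Gbit/s expressed in MB/s: 125 MB/s
REAL_WORLD_GBE_MBPS = 70            # typical usable iSCSI throughput per the answer
SATA_RAID1_READ_MBPS = 200          # two cheap SATA disks, software RAID1, sustained reads

def protocol_efficiency(real_mbps, line_rate_mbps):
    """Fraction of the raw line rate that survives protocol/stack overhead."""
    return real_mbps / line_rate_mbps

if __name__ == "__main__":
    eff = protocol_efficiency(REAL_WORLD_GBE_MBPS, GBE_LINE_RATE_MBPS)
    print(f"1GbE usable: {REAL_WORLD_GBE_MBPS} MB/s ({eff:.0%} of line rate)")
    print(f"Local RAID1: {SATA_RAID1_READ_MBPS} MB/s, "
          f"{SATA_RAID1_READ_MBPS / REAL_WORLD_GBE_MBPS:.1f}x the iSCSI link")
```

That ~56% efficiency is why the gigabit pipe, not the disks, is the first thing to look at.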
Mystical_Ice (Author) commented:
Connection to the desktop is 1Gb, yes, but the connection between the SAN and the ESXi hosts would be at a minimum 2 gigabit ports, on a dedicated VLAN, with jumbo frames, flow control, and so forth.

Maybe it's never going to be faster than local storage, but for a SAN with a flash tier of drives (for performance), is there going to be a noticeable difference between the local storage and the SAN in terms of performance?  The drive speeds on the SAN are a lot quicker (SSD on the SAN vs 4x 15k SAS spindles in RAID 5 on the local storage).  But I'm still having a hard time wrapping my head around what in my mind is the bottleneck - the 2x 1Gb connections between each host and the SAN.
The drive speeds behind the RAID controller are relatively insignificant until you look at the RAID levels, chunk sizes, number of disks, I/O size, and so on.

Just take iSCSI out of the equation.  Figure 70MB/sec (let's ignore IOPS) per 1Gbit connection from appliance to switch.  So if you have 2 ports from SAN to switch, in a perfect world you have 140MB/sec tops to share between everybody.

I can take 4 high-performance SAS-2 disks on a RAID controller, direct attached, and get anywhere from 500MB/sec sustained reads/writes in a RAID0 with a 64KB I/O size down to around 50MB/sec in a RAID6 with transactional I/O once the write cache fills up.

So in a perfect world, the most I can expect to get from the SAN is 140MB/sec, and the least is 50MB/sec.  Now divide that 140MB/sec (assuming you have 2 ports from the SAN to the switch) by the total number of users to get an average.  Certainly they won't all need that I/O at the same time.
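The arithmetic above is simple enough to write down. A minimal sketch, using only the figures from this comment (70MB/sec per port, 2 ports); the 20-user example is hypothetical:

```python
# Perfect-world SAN ceiling: 70 MB/s usable per 1GbE port from SAN to switch,
# shared among however many users hit it concurrently.

PER_PORT_MBPS = 70

def san_ceiling_mbps(ports):
    """Aggregate best-case throughput from SAN to switch."""
    return ports * PER_PORT_MBPS

def per_user_share_mbps(ports, users):
    """Average share if all users pull data at once (worst case)."""
    return san_ceiling_mbps(ports) / users

print(san_ceiling_mbps(2))          # 140 MB/s total ceiling
print(per_user_share_mbps(2, 20))   # 7.0 MB/s each if 20 users hit it at once
```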

Your bottleneck is between the SAN and the switch.

BTW, even if you had SSDs, throughput would not change ... LATENCY, or I/Os per second, will be much better.  (Well, RAID5/6 would be better, but still won't exceed the size of the pipe.)  An SSD has latency in microseconds; a mechanical disk has latency in milliseconds.  Throughput is always going to be constrained by the size of the pipe from the SAN to the switch.
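The latency point can be made concrete. A sketch under assumed order-of-magnitude latencies (~5ms for a spindle, ~100us for an SSD - illustrative values, not from the thread): the serial IOPS ceiling for a single outstanding I/O is just the reciprocal of latency, which is why SSDs transform IOPS while leaving the pipe-limited throughput untouched.

```python
# Why SSDs help latency/IOPS but not pipe-limited throughput.
# Latency values are assumed order-of-magnitude figures.

HDD_LATENCY_S = 5e-3    # ~5 ms per random I/O on a mechanical disk (assumed)
SSD_LATENCY_S = 100e-6  # ~100 us per I/O on an SSD (assumed)

def max_iops_single_stream(latency_s):
    """Upper bound on queue-depth-1 IOPS for a device with the given latency."""
    return 1.0 / latency_s

print(max_iops_single_stream(HDD_LATENCY_S))  # ~200 serial IOPS
print(max_iops_single_stream(SSD_LATENCY_S))  # ~10000 serial IOPS
```

Either way, sequential throughput still tops out at whatever the SAN-to-switch links can carry.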

Neil Russell (Technical Development Lead) commented:
I agree with the above; lose the iSCSI connection.  Can you not afford to go to a fibre-connected SAN?
If you really need performance and are working in a VMware cluster, then that's the best advice you can get.  Forget iSCSI.

How big is your vmware environment? How many hosts?
Mystical_Ice (Author) commented:
I say iSCSI, but we're actually using AoE (on a CoRAID SAN), which apparently has a lot less overhead than iSCSI.

3 hosts in the vmware environment, with a total of ~15-20 guests at any given time.

Our average IOPS across all of our hosts, all guests, is around 900 IOPS, with our peak at 2000.

Our SAN has 12x 2TB SATA drives (in RAID 10) with 4x 200GB SSDs in a stripe used for cache.

There are 4 ports from the SAN to the switch, and 4 ports from the switch to each ESXi host, BUT it's my understanding that it's actually 2x2, with the second pair in failover, not teamed.
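A rough sanity check of that workload against the spindle tier alone (ignoring the SSD cache): with RAID10's write penalty of 2, the 12 SATA drives by themselves fall well short of the 2000 IOPS peak. The ~75 IOPS per-disk figure and the 50/50 read/write split are assumptions for illustration, not numbers from this thread.

```python
# Back-of-envelope IOPS check: 12x 7200rpm SATA in RAID10 vs a 2000 IOPS peak.
# Per-disk IOPS and the read/write mix are assumed, not measured.

SATA_7K2_IOPS = 75        # rough random-IOPS figure for one 7200rpm SATA disk (assumed)
DISKS = 12
RAID10_WRITE_PENALTY = 2  # each logical write costs two disk writes in RAID10

def raid10_effective_iops(disks, per_disk, read_frac):
    """Host-visible IOPS after the RAID10 write penalty, for a given read fraction."""
    raw = disks * per_disk
    write_frac = 1 - read_frac
    return raw / (read_frac + write_frac * RAID10_WRITE_PENALTY)

peak = 2000
capable = raid10_effective_iops(DISKS, SATA_7K2_IOPS, read_frac=0.5)
print(f"spindle tier ~{capable:.0f} IOPS vs peak {peak}")  # ~600 vs 2000
```

Under these assumptions the SSD cache tier is doing the heavy lifting at peak, which is worth keeping in mind when comparing against DAS options.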
Gerald Connolly commented:
AoE (ATA over Ethernet) - it's probably excellent, but very proprietary.
3 whole hosts?  By the time you factor in the costs of the switch and networking, you'll come out much better in terms of cost & performance if you just direct-attach a small RAID controller and disks.  Add one pair of SSDs in a RAID1 dedicated to index and scratch table space, then use the rest in RAID10.

You'll get 20K-50K IOPS from a RAID1 (twice the IOPS on reads versus writes, in a perfect world), and you'll get more than you have right now from the RAID10.  By having dedicated storage on each host, there is no single point of failure.

If you want the ability to quickly move things around after a host failure, then use an external subsystem with multiple expanders so you can cable a subset of disks to each host.  Do the math; you'll still come out much better in price as well as performance with DAS.
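The read/write asymmetry behind the "twice the IOPS on reads" claim can be sketched directly: a mirror can serve reads from either member, so read IOPS add, while every write must land on both members. The 25K per-SSD figure below is an assumed round number, not one from the thread.

```python
# RAID1 read/write asymmetry: reads split across both mirrors, writes hit both.
# Per-device IOPS is an assumed illustrative figure.

SSD_IOPS = 25000  # assumed random IOPS for one SSD

def raid1_read_iops(per_device):
    """Reads can be distributed across both mirror members."""
    return 2 * per_device

def raid1_write_iops(per_device):
    """Every write must be committed to both members, so no gain."""
    return per_device

print(raid1_read_iops(SSD_IOPS))   # 50000
print(raid1_write_iops(SSD_IOPS))  # 25000
```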

Sounds like somebody got you hooked on a SAN, but with only 3 hosts it just isn't worth it, because you WILL need to pay a premium for switching and NICs.
Mystical_Ice (Author) commented:
I learned a long time ago that the "do it yourself" way doesn't work in an enterprise environment.  Building a home-grown DAS for a $250mm company to run on is bad news several times over, and the method of moving things around is primitive at best.  When you want to leverage vCenter features such as HA, DRS, vMotion, and so forth, DAS won't work.

Remember, "cost" and "performance" are not as important as "reliability" in a SAN, which I would imagine is the reason most move to one.

Anyway, I appreciate the responses and (some of) the advice.
I am not talking about a pure DIY play.  Buy a DAS RAID product that talks SAS (or something used and cheap that talks Fibre Channel) if you want external RAID with multi-host connectivity.  Do NOT use FCoE, AoE, or iSCSI - it is the wrong tool for your job.

Or just buy internal disks and an appropriate RAID controller, and make sure the internal disks are qualified for that particular enclosure.  Go Dell and get a PERC with Dell disks as a turnkey setup with Dell support if you like.
Mystical_Ice (Author) commented:
Our hosts all have 700 RAID controllers, but at the end of the day it's still storage on a local host, subject to failure of the host.

There are millions of companies, bigger than we are, that use FCoE, AoE, or iSCSI - are you saying the only way to connect to the SAN fabric is Fibre Channel?

That's late-90s thinking - there are other methods today that work just fine.
SAS-2 is the way to go with shared external storage.  SAS has switching and zoning too.  But with only 3 host computers, you can go DAS.  Get a box that has FC ports and an internal switch.

Then you don't need to buy a switch; have 8-16 ports on the FC-attached RAID and the box itself does the switching.  Or get a SAS RAID - I don't care.  If you want to go FCoE, AoE, or iSCSI, then it is important to realize that these access methods exist for the convenience of reusing existing wiring and existing switched Ethernet infrastructure.  You have to beef up by adding a switch and TOE cards to make them less of a burden, but the protocol and latency issues in real-world use mean you won't see anything near the theoretical (or marketing) performance for most types of applications.

They are NOT NOT NOT to be used for new installations with a small number of hosts when you want performance.  FC and SAS-2 direct attach are faster and cheaper if you need shared storage.
> SAS-2 is the way to go with shared external storage

I'd agree with that.  There are lots of supported SAS-connected "SANlets" out there; they work the same way as iSCSI or Fibre Channel boxes but have just 4 host ports per controller.  I wouldn't call it DAS, since it's shared storage with intelligent controllers in the storage and dumb HBAs in the hosts.  You can tell one from the other because a SAN with SAS host connect costs about the same as a Fibre Channel or iSCSI box, as opposed to a dumb shelf that needs a RAID controller in the host.  Dell and IBM and a few others rebadge the Engenio units, and HP rebadges Dot Hill, so you can probably get one to match the make of your servers, which will ensure it's fully tested and supported.