4Kn SAS drives

Just trying to get a handle on this and have a couple questions:

1) Everything is going 4Kn?
2) Still SAS over SATA for performance?
3) Why in the world would anyone buy a 512e today?
4) I love Newegg's reviews but none of their 4Kn drives have any reviews. Any idea why?
5) Would Seagate be everyone's choice?
LockDown32 (Owner) Asked:
 
David Johnson, CD, MVP (Owner) Commented:
2. Slow SAS is 10K RPM; slow SATA is 5.4K RPM.
3. Some older software depends upon 512e.
4. SAS is used by the enterprise, and most enterprises don't use Newegg et al. They use whatever comes with their server (i.e. HP/Dell/NetApp), as they prefer a consistent warranty. Who has time to write reviews when one is fighting fires?
5. What is your favourite flavour of ice cream? There really isn't much choice among hard drive manufacturers... Seagate, Western Digital, and Toshiba are all that are left. I prefer HGST.
 
David (President) Commented:
1) Everything is going 4Kn?
 - Not everything, but datacenter-class storage is, and it is an absolute NECESSITY as HDD capacity increases.

2) Still SAS over SATA for performance?
It always has been, and always will be. One of the reasons most people are not aware of is that SAS disks are dual ported, so in a perfect world you can get twice the I/O (or, more correctly, lower latency and more IOPS due to load balancing).

3) Why in the world would anyone buy a 512e today?
Because their controller doesn't support 4Kn, or they don't know any better.

4) I love Newegg's reviews but none of their 4Kn drives have any reviews. Any idea why?

I have no idea why you like their reviews.  
5) Would Seagate be everyone's choice?

Not in the least. Look into HGST, because HGST, SanDisk & WD are all one big happy company now.
 
kevinhsieh Commented:
SATA SSD is much faster than SAS HDD. The significantly lower latency of SSD means that deep queues in SAS aren't needed in most cases.

15K SAS makes no sense these days as a general use case. 10K SAS is an edge case that would be better served by SSD in most cases. 7.2K SAS HDD is for cheap, slow bulk storage.
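
To put rough numbers on the latency point: at queue depth 1, IOPS is just 1 divided by per-I/O latency, which is why an SSD doesn't need deep queues to win. A back-of-the-envelope sketch in Python (the latency figures are assumed typical values, not measurements):

def qd1_iops(latency_s: float) -> float:
    """With one outstanding request, IOPS is simply 1 / per-I/O latency."""
    return 1.0 / latency_s

# Assumed ballpark latencies: ~5 ms for a 15K HDD random I/O, ~0.1 ms for a SATA SSD.
print(f"15K SAS HDD (~5 ms):  {qd1_iops(0.005):,.0f} IOPS")    # ~200
print(f"SATA SSD   (~0.1 ms): {qd1_iops(0.0001):,.0f} IOPS")   # ~10,000

The SSD at a shallow queue already outruns the HDD at any queue depth, which is the point about deep SAS queues being unnecessary.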
 
David (President) Commented:
Kevin - surely you made a typo saying SATA is faster than SAS ...

The latency difference is nearly insignificant when assessing SAS vs SATA in the real world.
SAS is dual ported.
SAS has much, much deeper and more intelligent queue depth/reordering & prefetch (a toy sketch of the reordering idea follows at the end of this comment). It is also fully programmable by the OS & controller, so a SAS drive can queue up I/O requests that it knows about prior to satisfying them. In that case latency is effectively zero, because the drive fetches data it will need as it becomes available.

SAS disks have hundreds of configurable parameters which can be used to optimize how I/O requests are satisfied within the device driver and controller. (For example, you can set specific delay times and retry algorithms. With SATA disks you are stuck with defaults that are NEVER going to be appropriate for most servers.)

I could go on and on, as I am a storage architect and write firmware, drivers, multipath logic, etc.

The RPM is about as significant to overall performance as the serial number.

Besides, there are SAS and SATA SSDs, so the whole RPM argument is moot. Try reading the specs comparing performance of an SSD that is available in both SAS and SATA. It will prove that RPM is an insignificant differentiator.
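
Here is the promised toy illustration of the reordering point (nothing like real firmware, just the idea that servicing a queue in sorted sweep order beats arrival order when a head has to seek):

def head_travel(lbas: list[int], start: int = 0) -> int:
    """Total head movement (in LBA units) servicing requests in the given order."""
    travel, pos = 0, start
    for lba in lbas:
        travel += abs(lba - pos)
        pos = lba
    return travel

queued = [9000, 100, 8500, 200, 9100, 50]                # arrival order
print("FIFO order:    ", head_travel(queued))            # 52,550 units of travel
print("Elevator order:", head_travel(sorted(queued)))    #  9,100 units, one sweep

The deeper the queue the drive can see, the more such reordering pays off, which is one reason the 32-entry NCQ limit on SATA hurts.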
 
David (President) Commented:
P.S. If your I/O is not aligned properly, or your I/Os are not the correct multiple, then on most SATA SSDs (I am talking consumer class, not enterprise) a decent SAS HDD will outperform the SSD if the workload is write intensive and/or there is a significant amount of sequential I/O.

SAS SSDs have the same issues, but their improved wear leveling and write precompensation / sparing will provide better performance.

... real world.
 
LockDown32 (Owner, Author) Commented:
Thanks guys, but most of the questions were missed. The specific case I am looking at is server storage behind a RAID controller, entry level. That rules out SSD because of TRIM. The specific controller is an Adaptec 8405.
 
David (President) Commented:
It does not rule out SSDs.  Some SSDs are designed so you don't have to worry about TRIM.
 
LockDown32 (Owner, Author) Commented:
I have yet to see a RAID controller that supports TRIM. If you know of one, please advise.
 
kevinhsieh Commented:
I use SATA SSDs on Dell PERC 5/6 RAID controllers in RAID 1 with no known issues. The storage performs more than well enough for the workload, which is to run Hyper-V with an RODC/file server VM (which also holds the user profile disks for a Remote Desktop Session Host, the second VM).

I think that real-world experience is showing that general application usage doesn't require high-endurance flash, nor is TRIM required. Even if you are going to need to rewrite a bunch of cells without TRIM, overall performance of a SATA SSD is going to be much better than a SAS HDD in almost all cases.

David, note that I am saying that the media type is more important than the interface. NAND vs spinning rust is no comparison for random workloads, regardless of interface.
 
David (President) Commented:
The PERC 5/6 RAID 1 implementation is effectively a pure pass-through (it mirrors writes and merely adds an offset, plus does load balancing), and if the partition is aligned then I/O will be aligned, because it is not possible to do I/O that is not a multiple of 4K.

BUT some SSDs don't use 4K alignment, and an XOR parity RAID configuration that has an EVEN number of disk drives can result in unaligned I/O, as well as I/O that is not a whole-number multiple of the drives' internal NAND page size (see the stripe sketch at the end of this comment). Other RAID controllers also have metadata that is not a multiple of 256KB. You can't even guarantee alignment when there is metadata on the controller, because logical block != physical block.

Also, since it is doing load balancing on reads, a more intense workload might show some performance issues. That depends on your SSD, how it implements garbage collection, and whether it uses compression. Even doing benchmarks with highly compressible data, or worse, all zeros or ones, can skew results.

So just because it works with the PERC 5/6 in RAID 1 does NOT mean it will work well with other controllers. In fact, it won't.
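
To make the even-drive-count point concrete, a small sketch of the full-stripe arithmetic (the 64 KB chunk size is an assumption; real controllers vary):

CHUNK_KB = 64   # assumed stripe-unit size per drive

def full_stripe_kb(total_drives: int, parity_drives: int) -> int:
    """A full stripe spans (data drives x chunk size)."""
    return (total_drives - parity_drives) * CHUNK_KB

for drives in (4, 5):   # RAID 5 with an even vs odd total drive count
    stripe = full_stripe_kb(drives, parity_drives=1)
    io_kb = 256          # a typical power-of-two I/O size
    fit = "fills whole stripes" if io_kb % stripe == 0 else "forces parity read-modify-write"
    print(f"{drives}-drive RAID 5: stripe {stripe} KB -> a {io_kb} KB write {fit}")

With 4 drives the stripe is 192 KB, so power-of-two I/Os never line up; with 5 drives it is 256 KB and they do.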
 
Philip Elder (Technical Architect - HA/Compute/Storage) Commented:
1) Everything is going 4Kn?
Yes. In the Microsoft world, virtual hard disks were realigned from 512-byte sectors in VHD to 4,096-byte sectors in VHDX. Performance and size are two key reasons among others. (A sketch at the end of this comment shows how to check what sector sizes a drive reports.)

2) Still SAS over SATA for performance?
Yes. SATA's push into the data centre with "Enterprise"-class drives was, and is, IMNSHO a dismal failure.
SAS has dual ports, allowing twice the throughput and access-path resilience (MPIO).
SAS has 256 queues, and queues within those queues, while NCQ (Native Command Queuing) in SATA has 32. Period. NCQ doesn't really work either.
SAS has the logic in place to allow a drive to be shared by multiple systems that have storage arbitration in place (Storage Spaces for example). SATA does not.
IMNSHO, SATA does not belong anywhere near an enterprise class solution. NearLine SAS solved the cost/capacity problem a long time ago.

3) Why in the world would anyone buy a 512e today?
Application compatibility. Data-driven applications that are set up on a 512n storage boundary won't work when moved to a 4096n-boundary storage solution.

4) I love Newegg's reviews but none of their 4Kn drives have any reviews. Any idea why?
No. We buy through distribution at scale.

5) Would Seagate be everyone's choice?
That depends. ;)
For SAS SSDs we only run with HGST. We were burned really badly by bad firmware in Seagate SAS SSDs, so we avoid them.
For 10K SAS 2.5" we run with Seagate or HGST
For NearLine 3.5" SAS we run with Seagate or HGST
The drives we choose all depend on what's on the tested and approved hardware list provided by our storage product vendors.
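
For what it's worth, on Linux you can ask a block device what it reports: a 512e drive answers 512 logical / 4096 physical, and a 4Kn drive answers 4096 / 4096. A minimal sketch (the ioctl numbers are the standard Linux BLKSSZGET/BLKPBSZGET values; /dev/sda and root access are assumptions):

import fcntl, struct

BLKSSZGET  = 0x1268   # logical sector size ioctl
BLKPBSZGET = 0x127B   # physical block size ioctl

def sector_sizes(dev: str = "/dev/sda") -> tuple[int, int]:
    """Query (logical, physical) sector sizes of a block device (needs root)."""
    with open(dev, "rb") as f:
        logical  = struct.unpack("I", fcntl.ioctl(f, BLKSSZGET,  b"\0" * 4))[0]
        physical = struct.unpack("I", fcntl.ioctl(f, BLKPBSZGET, b"\0" * 4))[0]
    return logical, physical

print(sector_sizes())   # e.g. (512, 4096) on a 512e drive, (4096, 4096) on 4Kn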
 
LockDown32 (Owner, Author) Commented:
The PERC 5/6 is not a simple pass-through in terms of TRIM. The only "controllers" capable of doing TRIM are the Intel motherboard controllers with the Series 8 chipset or newer, and even then it only works in RAID 0 or 1.

   Kevin, to be honest, I did the same thing you did. I used some consumer SSDs in a RAID 5 configuration behind an Adaptec 8405. The two major warnings I heard were: 1) sooner or later the I/O will come to its knees because TRIM isn't supported, but I too, in my limited knowledge of TRIM, figured worst case it would simply be rewriting a bunch of cells; 2) unless you use enterprise SSDs, you would hit a problem where loss of power would corrupt the cache. I figured that, like a battery cache on a RAID controller, a good APC battery backup with shutdown would more or less solve that problem. So I took even more of a chance and used consumer SSDs. It has been over a year with no issues. My problem is that doing something that isn't recommended, in a server no less, kind of bothers me. I figure the safer bet would be going back to platters. Hence the question.

   Thanks Philip. You have been the only one to answer my questions :) #3 is still bothering me. I understand application and OS compatibility between 512n and 4Kn, but why bother with 512e? Why not just stick with 512n?
 
kevinhsieh Commented:
I don't use consumer SSDs. I use Micron M500DC series SSDs, which have full power-loss protection. A battery on the RAID controller doesn't mitigate the need for good power-loss protection on the SSD, because the RAID controller doesn't know that the SSD hasn't committed the data to NAND. A 100% reliable UPS with shutdown would possibly eliminate data corruption from power loss on consumer SSDs, but I don't have 100% faith in any UPS solution and proper shutdown.
 
Philip Elder (Technical Architect - HA/Compute/Storage) Commented:
Dan Lovinger has a great explanation of why consumer grade SSDs don't belong in server settings.

512e allows manufacturers to run native 4Kn underneath. That saves on costs, as one production line produces all products instead of two.
 
LockDown32 (Owner, Author) Commented:
A battery on the RAID controller isn't really related in any way to the cache on any drive, SSD or otherwise. All of these "power loss capacitors" and "RAID battery cache backups" are insurance, and as in real life, you can easily purchase too much.

   We are looping back around to the original question. "Enterprise" SSDs and SAS SSDs are way too expensive for entry-level servers. 15K RPM SAS drives lack capacity. I put consumer SSDs in a small server and have been very impressed so far, but I know that I am doing something that is generally not a good idea, hence the move back to platters. It just seems like the 3.5" 7200 RPM SAS drives make the most sense. Can't see going SATA.

   That is interesting about 512e. I didn't see how it benefited the end user. Guess it doesn't :)
 
Philip Elder (Technical Architect - HA/Compute/Storage) Commented:
We have an all-flash server setup that's being deployed for a local client: an Intel R1208SPOSHORR with 8x 1.2TB Intel DC S3520 SSDs in RAID 6 via an Intel RAID controller with flash-based cache. For this particular environment the setup just works and will give them the IOPS they need at a reasonable cost.

Another setup, on an Intel R2224WFTZS, has 5x 1.9TB Intel DC S4600 in RAID 5 plus 19x 1.2TB 10K Seagate SAS in RAID 6 via an Intel high-performance RAID controller with flash-backed cache. We stuck with RAID 5 to pull more storage out (see the capacity arithmetic at the end of this comment) and because a rebuild on SSD won't take that long. This is a significant step up price-wise.

Finally, a two-node Kepler-47-based Storage Spaces Direct (S2D) cluster with 2x Intel DC S4600 for SSD cache and 6x 2TB WD Black for capacity is the foundation of our own internal network. Cost-wise, the pair is actually priced less than the first example above.

Ultimately, the end-game/goal should drive the storage setup relative to cost.
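
The capacity arithmetic behind choosing RAID 5 over RAID 6 above is simple (drive counts and sizes taken from the examples in this comment):

def usable_tb(drives: int, size_tb: float, parity_drives: int) -> float:
    """RAID 5 gives up one drive's worth of space to parity, RAID 6 two."""
    return (drives - parity_drives) * size_tb

print(usable_tb(8, 1.2, 2))   # 8x 1.2 TB in RAID 6        -> 7.2 TB usable
print(usable_tb(5, 1.9, 1))   # 5x 1.9 TB in RAID 5        -> 7.6 TB usable
print(usable_tb(5, 1.9, 2))   # the same five in RAID 6    -> 5.7 TB usable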
 
David (President) Commented:
512e is simply a necessary evil. The drives ARE 4K, 8K, or 16K internally. The OS, firmware, and/or controllers on many systems are frozen at 512, so the 512e drive emulates 512-byte I/O so that it will at least somewhat work.

(I.e., when the disk receives the SCSI command to report the number of blocks and bytes per block, it lies. Then all CDBs that are block-I/O oriented are emulated in the firmware. This hurts when you read anything other than multiples of 4K, and especially if you want to modify a 512-byte chunk: the disk has to do read/modify/write cycles internally. That causes all sorts of performance issues, and God forbid you get an unrecovered I/O error and a deferred SENSE message. Such reasons are why 512e blows up in most RAID configs. It just isn't possible to do this reliably unless all the code plays nice.)

If you have hardware that can only deal with 512-byte sectors, and you can't get a 512n disk, then your choices are to upgrade the hardware OR use 512e.
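
A toy model of what the emulation firmware ends up doing (illustrative only, not how any real drive is implemented):

PHYS, LOGICAL = 4096, 512   # 4K-native media behind a 512-byte facade

def emulated_write(lba512: int, count512: int):
    """Return (physical 4K sectors touched, of which need read/modify/write)."""
    start = lba512 * LOGICAL
    end = start + count512 * LOGICAL
    touched = range(start // PHYS, (end - 1) // PHYS + 1)
    # A physical sector needs RMW unless the write covers it completely.
    partial = {s for s in (touched[0], touched[-1])
               if s * PHYS < start or (s + 1) * PHYS > end}
    return len(touched), len(partial)

print(emulated_write(0, 8))   # (1, 0): a whole 4K sector, no penalty
print(emulated_write(1, 1))   # (1, 1): one 512-byte write -> RMW cycle
print(emulated_write(0, 9))   # (2, 1): 4.5 KB -> tail sector needs RMW

Every read/modify/write in the second and third cases is a full internal 4K read plus a 4K write just to change a slice of it, which is where the 512e performance complaints come from.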
 
LockDown32 (Owner, Author) Commented:
Thanks guys. Very informative.