Solved

Storage and protocols guide

Posted on 2015-01-29
248 Views
Last Modified: 2015-01-30
Hello Experts,

Can someone please summarize all the storage types and protocols available on the market today? I believe I read a document a while ago here on this site where someone compiled all the storage options and protocols available for Windows and Unix servers.

Can someone please provide that guide and summarize all types of storage and their protocols? Please consider virtual and physical servers, all Windows Server versions, plus Unix and Linux. What are the hardware requirements to implement a fully HA storage environment? For each protocol, include what is required (for example, for a SAN: HBAs, switches, cables, and the protocol itself), and how to provide HA for all components.

Fibre Channel, iSCSI, SMB, NFS, and so on.
Question by:Jerry Seinfield
8 Comments
 

Author Comment

by:Jerry Seinfield
ID: 40578843
Any updates, experts? Please do not neglect this question as well. This would be the 4th neglected question in less than a month.
 
LVL 62

Expert Comment

by:gheist
ID: 40579584
The latter three work over the general-purpose IP network you already have.
Which virtualisation technology do you envision?
E.g. check the "Array type" box on this page:
http://www.vmware.com/resources/compatibility/search.php?deviceCategory=san

Windows NT Server 3.51 supports only SMB; later versions may be better...
All UNIX servers after 1980 support one or another version of NFS.
Please define your HA requirements. Maybe an overloaded Windows server that reboots in 10 minutes suits them.
Full storage will not bring you very far; just make copies of a DVD with your data.
 

Author Comment

by:Jerry Seinfield
ID: 40579816
Thanks gheist, but I was looking for an explanation of each protocol and a definition of each storage type, rather than a compatibility guide.

Anyone else?
 
LVL 62

Expert Comment

by:gheist
ID: 40579905
It is a bit overbroad to ask. There are some 7 protocols, each of which could be described in a short 1000 pages...
If you don't elaborate on the context you are asking about, sadly all answers will be wrong.
 
LVL 2

Accepted Solution

by:
Jim_Nim earned 500 total points
ID: 40580244
Well, I can take a whack at it... this may not be all-inclusive though, as it's based on personal industry experience, and lightly salted with opinion... plus you didn't provide much context for what type of application you want the information for. I'm assuming you're interested in information on external storage devices (maybe for some kind of hypervisor cluster) as you likely wouldn't have this question if you were dealing with internal storage on HPCCs.

Block storage protocols (external storage):
Fibre Channel, iSCSI, SAS, Infiniband

These protocols give the host server block-level access to a disk device (typically referred to as a "LUN" or Virtual Disk), giving it direct control over I/O operations on each LBA available to it. The host "decides" what file system to put in place, and shared access between multiple hosts requires that they be "cluster aware" to some extent if more than one host is going to perform any kind of writes to the disk. In the past, block storage has usually provided superior performance in comparison to network file storage protocols, but recent advancements in NFS and SMB performance are closing or eliminating that gap.
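
To make "block-level access" concrete, here is a minimal Python sketch that reads one sector of a LUN by LBA, the same kind of raw I/O a file system driver performs underneath. It's assumption-laden: it assumes a Linux host where the LUN shows up as /dev/sdb (a hypothetical device name) with 512-byte logical sectors, and it needs root to run.

    import os

    DEVICE = "/dev/sdb"   # hypothetical LUN presented over FC/iSCSI/SAS
    SECTOR_SIZE = 512     # assumed logical sector size
    LBA = 2048            # logical block address to read

    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        # Seek to the byte offset of the LBA and read one raw sector
        os.lseek(fd, LBA * SECTOR_SIZE, os.SEEK_SET)
        sector = os.read(fd, SECTOR_SIZE)
        print(f"LBA {LBA}: first 16 bytes = {sector[:16].hex()}")
    finally:
        os.close(fd)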

Fibre Channel - typically seems to be the most reliable, scalable, and expensive option of these (when considering both enclosures and the storage fabric). This is typically implemented using Fibre Channel switches, which usually bring a big somewhat-hidden cost in licensing based on the number of ports used, but these switches have a tendency to be very predictable in terms of behavior and performance. There's also the option of FCoE (Fibre Channel over Ethernet), which encapsulates FC communication in Ethernet frames for transport over Ethernet switches - this allows for much lower-cost networking, especially when considering that the switches can be shared for use by other types of traffic in a datacenter simultaneously. Fibre Channel storage is widely available from nearly any major storage vendor (e.g. Dell/Compellent, EMC, NetApp, IBM, HP, Nimble, etc). Typically this is only implemented with an actual "storage array" - I've never heard of someone using a "build your own" approach with a server as the "target".
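
If you want to see what FC initiator hardware a host actually has, Linux exposes each HBA port under /sys/class/fc_host. A quick sketch (assumes a Linux host with FC HBAs present; the attribute names are the standard kernel ones, but what's populated can vary by driver):

    import glob, os

    def read_attr(host_path, attr):
        # Each fc_host directory exposes port attributes as small text files
        with open(os.path.join(host_path, attr)) as f:
            return f.read().strip()

    for host in sorted(glob.glob("/sys/class/fc_host/host*")):
        name = os.path.basename(host)
        print(f"{name}: WWPN={read_attr(host, 'port_name')} "
              f"state={read_attr(host, 'port_state')} "
              f"speed={read_attr(host, 'speed')}")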

iSCSI - potentially very scalable as well, and much more affordable since normal Ethernet switches can be used, though sometimes at the cost of less reliability and performance, especially when "going too cheap". This can be implemented with a storage array from a vendor (which can vary widely in cost based on features and hardware), or even on a system that you build (Windows or Linux) utilizing iSCSI target software. iSCSI seems to be the go-to option when you have a large number of hosts (exceeding what SAS-attached storage could support) which need shared access to storage for something like virtualization when you can't afford (or don't want) Fibre Channel.
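
For reference, the typical initiator-side workflow on Linux with the open-iscsi tools looks like the sketch below: discover what targets a portal offers, then log in so the LUN appears as a local block device. The portal address and target IQN are hypothetical placeholders; it assumes iscsiadm is installed, and it needs root.

    import subprocess

    PORTAL = "192.168.10.50:3260"                   # hypothetical array portal
    TARGET = "iqn.2015-01.com.example:array1.lun0"  # hypothetical target IQN

    # SendTargets discovery: ask the portal which targets it offers
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", PORTAL], check=True)

    # Log in to the target; the kernel then creates a /dev/sdX for each LUN
    subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET,
                    "-p", PORTAL, "--login"], check=True)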

SAS - with external storage arrays, the communication protocol is essentially identical to that of internal storage devices on a server. It's cheap, and very fast (very low-latency), but usually not very scalable (not including "vertical" scaling w/ expansion enclosures). This is most commonly used for smaller deployments that either don't require any shared storage, or have a small number of hosts needing to share access. Where Fibre Channel and iSCSI storage is usually going to involve some type of controller module / head unit implementation to make the storage "smart" (controlling/managing host access to each LUN, controlling actual I/Os sent to each physical disk, etc), SAS sometimes allows that functionality to be moved to the host(s) when "dumb" JBOD-style enclosures are used. The I/Os and RAID calculations are either handled by a PCIe RAID controller module in the server, or an implementation of software RAID can be used where the server CPU is handling that (e.g. Microsoft's Storage Spaces, which actually allows for multiple servers to share access to the same storage enclosure when clustered, even though each physical disk is directly "presented" to each server).
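
The "RAID calculations" mentioned above mostly come down to parity math done per stripe, whether by a hardware controller or a software RAID layer. A toy Python illustration of RAID-5-style XOR parity (not any vendor's actual implementation):

    # Three data blocks in one stripe (3 bytes each, for readability)
    data = [b"\x10\x20\x30", b"\x01\x02\x03", b"\xaa\xbb\xcc"]

    # Parity block: byte-wise XOR across the stripe
    parity = bytes(a ^ b ^ c for a, b, c in zip(*data))

    # Lose data[0]? XOR the parity with the survivors to rebuild it
    rebuilt = bytes(p ^ b ^ c for p, b, c in zip(parity, data[1], data[2]))
    assert rebuilt == data[0]
    print("parity:", parity.hex(), "| rebuilt block 0:", rebuilt.hex())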

Infiniband - similar to SAS (the connectors look identical), but usually much more scalable as "switches" are often supported (some SAS arrays support switching too, but it's rare) so that a larger number of hosts can share access. This type of storage is not very common, but I've heard it's high in cost. Performance is up in the same range as SAS (I believe with even lower latency, possibly higher throughput). This is most commonly used for interconnects between supercomputer or HPCC nodes, and isn't widely used for external storage.

Network file storage protocols:
SMB/CIFS/SAMBA, NFS, AFP

File system storage leaves the block-level access to the storage array or server - only the actual file system is "seen" by servers/hosts. These protocols bring a lot of access permissions into the mix which I won't elaborate on, because there's far too much to begin to try to address here, especially when you get into sharing access between Windows and Unix-based systems and the styles of file permissions they use.

SMB - Multiple versions (SMB v1, v2, v3, and minor revisions for each). This protocol seems to have the best cross-platform support for hosts/clients on typical corporate file shares (compatible with Linux/Unix, recommended/preferred with Windows and now also with Apple). SMB v3 (Microsoft's original implementation) adds numerous enhancements, including "multi-channel", which can vastly improve performance. I've seen environments where data transfer between a single host and server was nearly able to hit the line rate on 10Gbps networking. Microsoft's Hyper-V now supports using SMB3 shares for cluster storage of virtual machines.
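
As a small taste of the protocol from the client side, here is a sketch using the third-party pysmb library (pip install pysmb) to list the root of a share. The server name, address, share name, and credentials are all hypothetical placeholders:

    from smb.SMBConnection import SMBConnection

    # Direct-hosted SMB over TCP port 445 (no NetBIOS layer)
    conn = SMBConnection("jerry", "secret", "client-pc", "FILESERVER",
                         use_ntlm_v2=True, is_direct_tcp=True)
    if conn.connect("192.168.10.20", 445):
        for f in conn.listPath("shared", "/"):  # list the share's root directory
            print(f.filename)
        conn.close()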

CIFS - This is an "open" version of SMB, but I've seen inconsistency in terminology, and with when this "name" is used. Many NAS storage arrays will label a share as CIFS regardless of which SMB version they're supporting - this typically seems to be the case when SMB is re-implemented within proprietary software/firmware. I'm not aware of any 3rd parties who support multi-channel on SMB3, though most of the other enhancements are replicated.

SAMBA - This is a re-implementation of SMB/CIFS, and essentially adheres to the same "rules" as the SMB versions it intends to emulate/support. This is the "flavor" that would be used when you want to host an SMB share from a Unix/Linux-based server. I've read that support for SMB v3's multi-channel is still in the works (it was in "alpha" when I last checked), so that may not be far away.

NFS - Unix/Linux-based network file system storage. Supported for ESX "datastores", where in some cases you can achieve better performance than with block storage protocols. The latest v4 is making great strides in performance and scalability, though I'm not sure how it compares to SMB3.
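
Mounting an NFSv4 export on a Linux host is about as simple as shared storage gets; the sketch below just shells out to the standard mount tool. Server address and export path are hypothetical; it assumes the NFS client utilities are installed, and it needs root:

    import subprocess

    SERVER = "192.168.10.40"        # hypothetical NFS server
    EXPORT = "/exports/datastore1"  # hypothetical exported path
    MOUNTPOINT = "/mnt/datastore1"

    subprocess.run(["mkdir", "-p", MOUNTPOINT], check=True)
    # Request NFS version 4 explicitly via the vers mount option
    subprocess.run(["mount", "-t", "nfs", "-o", "vers=4",
                    f"{SERVER}:{EXPORT}", MOUNTPOINT], check=True)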

AFP - Apple's proprietary file sharing protocol. I've never heard of this being used for shared datacenter storage, only for corporate file shares in an all-Apple shop. This protocol is on its way out the door - Apple recommends using SMB for best results.
 
LVL 2

Expert Comment

by:Jim_Nim
ID: 40580248
Note: Any of the file system protocols I mentioned (AFP excluded) can be implemented by a storage array, or by a server using internal storage or another storage array as the back-end... on both Windows and Linux, though with varying results depending on how you mix and match.
 
LVL 62

Expert Comment

by:gheist
ID: 40580323
InfiniBand is a general-purpose interconnect, just like WiFi or Ethernet. There are no extra storage provisions in the protocol itself.
 
LVL 2

Expert Comment

by:Jim_Nim
ID: 40580338
Thanks for the clarification gheist - I've only dealt with one storage system that (supposedly) used Infiniband for connectivity, and in that case I suppose it was really just SAS storage.