AXISHK

asked on

Storage option for HP DL380G7

I need to look at disk expansion options for an HP DL380 G7. Currently there is no room to add more HDDs.

Is there a cage option for this server to hold more HDDs? What is the performance? What are the bandwidth and performance of the connector to this cage?

Based on my calculations (let me know if my estimates are not correct), local disks should give the best performance, followed by network storage and then USB 2.0 disks. Correct?

What's the difference between iSCSI and NAS, since they are both connected through the network? What is the performance difference?

I need to gather all alternative solutions before making a final decision, with a cost-vs-performance justification. Any ideas and information are appreciated. Tks


1 Gbps network = 1,000,000,000 bps -> 1,000,000 kbps -> 125,000 kB/s -> 125 MB/s
For 4 x SAS RAID-5 HDDs, there would be 140 MB/s x 4 = 560 MB/s
USB 2.0: 35 - 40 MB/s
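A quick sketch of this arithmetic (decimal units, theoretical peaks only; the per-disk and USB figures are assumptions from above, not measurements):

```python
def gbps_to_mb_per_s(gbps):
    """Gigabits per second -> megabytes per second (8 bits per byte)."""
    return gbps * 1000 / 8

gigabit_lan = gbps_to_mb_per_s(1)   # 125.0 MB/s theoretical
usb2_real = 40                      # ~35-40 MB/s observed, not computed
raid5_naive = 4 * 140               # 560 MB/s -- naive per-disk sum
```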
Akinsd

I think you've done enough research for your options.

I think what you meant is the difference between SAN and NAS. iSCSI is not a storage system but rather a connection protocol, somewhat like USB is a connector type.

It's like asking what the difference is between USB and NAS.

With that said, SAN and NAS both provide shared resources on the network.

What you currently have is a NAS - a dedicated server with resources on local disks shared on the network (paraphrased definition).

A SAN, on the other hand, is a dedicated network device which lets you use an iSCSI connection to attach the device directly to the server, making the drive look local to the server, just like the C drive. The cool thing with a SAN is that you can attach multiple servers to the SAN and allocate space to each.

I think what you need is an external drive with whatever connection type you prefer. The drive or SAN, whichever you choose, will appear as a local drive on your server, and you can share it out as desired.

I hope this helps
Member_2_231077

>Will there be a cage option for the server to allocate more HDD.

Yes, there is an additional cage, so the server takes 16 disks internally, although you have to take the DVD drive out. If you want a second array, you get the additional cage plus a second RAID card (P410); if you want to combine disks from both cages into one array, then you buy the cage plus the SAS expander card. All the screws and cables are included with the internal cage; you just have to put the expander or the additional controller in a PCIe slot and connect the cables. Two RAID controllers are better than the expander if you can get away with separate arrays, since the cabling is easier and you get more cache with two than with one.

Alternatively you can add a RAID card with external SAS connectors and add an external D2000 shelf. That is as fast as the internal cage since it is direct attached storage.

http://h18004.www1.hp.com/products/quickspecs/13595_na/13595_na.HTML#Storage
http://h18000.www1.hp.com/products/quickspecs/13404_na/13404_na.html
Hi. Let us start with the general questions and terminology.

>What's the difference between iSCSI & NAS as they are both connected through network ? What's the performance different ?

iSCSI is just a communication protocol, like FTP or SSH. It is the interface, the way of connecting to some networked storage. NAS, on the other hand, is a concept - Network Attached Storage. Simply speaking, a dedicated piece of hardware attached to the network that stores some (or a lot of) data. In analogies: iSCSI is like the rails of a railway, while a NAS is a container. You may transport (connect) that container by rail using the appropriate cars (networking hardware), or you may choose water transport or a truck (say, other types of connection to that hardware, like NFS or CIFS). And a railway may carry passengers instead of storage containers (strictly speaking, NAS is not the only thing that may use iSCSI).
iSCSI typically runs over IP, which usually means Ethernet and all the common networking hardware - switches, routers, and so on. The client system sees iSCSI devices as virtual local devices. One may wish to isolate storage (iSCSI) traffic from common (internet) data flows, and this is how a Storage Area Network (SAN) is created. You are not limited to 1G links by iSCSI (or any SAN/NAS); usually you have more than one Ethernet card, and you may (and are encouraged to) use multiple-link bonding to get higher speeds and redundancy. It is quite common nowadays for iSCSI storage appliances to have four gigabit ports, so you may easily have a 4G link toward your network. There are solutions that even use dual iSCSI controllers, which serve both as redundant failover and at the same time double the bandwidth (e.g. Dell PowerVault), and 10G/20G interfaces to storage arrays are not rare.
Alternatives to iSCSI are FibreChannel and, to some extent, InfiniBand. Both need an extra interface adapter in your servers, and the storage will show up as local.
FibreChannel gives you speeds of 2/4/8/16 Gbit per link, depending on the FC generation and the interface modules you use. Failover, multipath and multi-link aggregation are part of the protocol. You may use an FC switch to connect more than one storage node to more than one server node. This also forms a SAN - a dedicated network of storage appliances and the servers that use that storage.

In two words: storage that is local to the application server is not NAS, even if someone connects to that server through the network and uses that storage. Also, external (not-in-case) storage does not make it NAS; it depends on how it is connected. External SCSI/SAS cages are still local storage. Something hooked up by USB is also local storage. If a number of servers can all connect to the appliance independently, then it is NAS. If the storage traffic is isolated from common traffic, it is a SAN.

So, based on this, you might easily crunch the numbers to get maximum theoretical speeds. In real life, these theoretical limits are rarely pushed hard (this is where you should design carefully and plan well ahead). There are almost no real-life situations where a stripe of 4 disks gives you a quadruple speed increase. Of course, if you have a fat array of fast disks and link it via a single 1G link, the link will probably be the bottleneck; judge for yourself whether you will really use more than that 1G. With SAS/SATA, theoretical interface speeds reach 6 Gbit per HDD; to get more realistic numbers, you have to look at HDD benchmarks - probably 500 MB/s is the limit. Also, if you have mechanical drives, IOPS are definitely a concern, and if the SAS/SATA controller is not a strong performer, you may have a bottleneck there too. If you used an FC-connected NAS, there would be a Host Bus Adapter (HBA) - a piece of hardware much like a network card. The NAS itself would most likely implement some kind of access-acceleration technology that makes the speed of the individual HDDs/SSDs less relevant to the raw speed/IOPS numbers you would get from that setup.
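The number-crunching above can be sketched as a simple bottleneck calculation; the component figures below are illustrative assumptions, not measurements:

```python
def bottleneck(components_mbs):
    """Return (name, MB/s) of the slowest component in a storage path."""
    name = min(components_mbs, key=components_mbs.get)
    return name, components_mbs[name]

path = {
    "disk_array": 500,    # fat array of fast disks, sequential (assumed)
    "raid_card": 800,     # controller throughput (assumed)
    "network_link": 125,  # single 1 Gbit Ethernet link
}
limit = bottleneck(path)  # the 1G link limits this path to 125 MB/s
```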
AXISHK (ASKER)

Tks. In my case, my network is 1 Gbps, with a data partition of 4 x SAS RAID-5 (no more expansion slots). So I should consider extending with a cage if performance is the first criterion.

1 Gbps network = 125 MB/s
For 4 x SAS RAID-5 HDDs, there would be 140 MB/s x 4 = 560 MB/s (assuming I extend with 4 HDDs).

Tks
AXISHK (ASKER)

I've requested that this question be deleted for the following reason:

no feedback.
AXISHK: it looks like you are mixing up what you have and what you want. A lot depends on what you plan to do with that server - whether it is a database, plain storage, workgroup storage or, au contraire, an application server.

Basically you have the following expansion options:

0) Extra disks hooked up by USB.
It's cheap, usually not fast (only sequential reads push USB's speed), and definitely the least secure/safe. Cheap enclosures require additional power, hence extra wires and lots of badly used space. These enclosures are usually not the best workmanship; the disks heat up, and that is the fastest way to failure.
When to use: if you access that data really rarely - installs, ISOs or backups - if it is a temporary solution (just for a week or two, until you get a better one), or if you need the storage to be really mobile, e.g. removable on request, or to swap out some stuff (like those backups).
When to avoid: if you have any serious load on that storage.

1) Change the disks for larger ones. If you say that 4 HDDs use up all your space, I assume you have 3.5" disks. The largest ones are 4TB; however, I'm not sure whether the G7 accepts this size - you will have to check.

Local storage is definitely the cheapest; performance depends on what disks you use. Theoretically you may get between 70 MB/s and 150 MB/s from one local disk, depending on whether the disks are SATA or SAS. This applies to sequential reads only - random reads (and writes too) are much slower. You may get around 200 IOPS per mechanical disk, so you have to consider what type of data you plan to move around. Also, you planned to use RAID-5, so your calculations are wrong - you will not get 4 times single-disk performance. While _reading_ sequentially you might get up to 3 times (4 disks minus 1 for parity); on writes, the gain may not be significant at all - it largely depends on how well your RAID card performs. I usually assume that all local solutions (including mirrors) perform about as well as a single disk. If you are striving for IOPS (databases, or many, many small files read/written), think about SSDs. They can raise IOPS numbers by two or more orders of magnitude, up to 40K IOPS or more, depending on what SSDs you get. In the most extreme cases, look for PCIe SSD cards.
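A minimal sketch of this rule of thumb - best-case sequential reads scale with the n-1 data disks, while writes often do no better than a single disk (the per-disk figure is the assumed number above, not a guarantee):

```python
def raid5_rule_of_thumb(n_disks, per_disk_mbs):
    """Optimistic sequential-read and pessimistic write estimates (MB/s)."""
    best_seq_read = (n_disks - 1) * per_disk_mbs  # parity disk adds nothing
    typical_write = per_disk_mbs                  # often no better than one disk
    return best_seq_read, typical_write

read_mbs, write_mbs = raid5_rule_of_thumb(4, 140)  # (420, 140)
```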

1A) External SAS cage.
That might require an additional SAS HBA - an extra card with external SAS connectors that you use to hook up the SAS cage. It will still be local storage, so all the concerns from option 1 apply, plus the following: a standard SAS HBA external connector is usually 4 lanes, which means 4 x 6 Gbit theoretical speed. There are SAS cages that can contain tens of drives (I've seen a 48-drive cage myself), all hooked up by one or two external SAS connectors. That is still local storage; however, those 4 (or 8) SAS lanes will be shared by all those disks. Usually that is not a big concern, unless you wish to push all that data somewhere further.
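The shared-lane arithmetic can be sketched like this (the lane count and per-lane rate are the generic SAS figures mentioned above; the 24-disk case is a hypothetical example):

```python
def sas_wide_port_mbs(lanes, gbit_per_lane):
    """Theoretical MB/s of an external SAS wide port (shared by all cage disks)."""
    return lanes * gbit_per_lane * 1000 / 8

sas2_x4 = sas_wide_port_mbs(4, 6)  # 3000.0 MB/s across the whole cage
per_disk_if_24 = sas2_x4 / 24      # 125.0 MB/s each if 24 disks stream at once
```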

2) Some kind of NAS/SAN.
2a) External storage server with CIFS/NFS. Probably the solution if you are short on $. Basically it is the same idea as 1A, only with much less specialised hardware (a normal server, not a special storage cage), with most things done in software, and with a much less sophisticated storage protocol. A definite drawback: you will probably be stuck with Ethernet, so 1/2/4 Gbit is your throughput limit. However, in terms of price/performance, it is a really good performer.
2b) iSCSI - might use your existing network. With 1G, it is recommended to split storage traffic and external traffic onto different network cards. It has high latency because it works over Ethernet/IP. With 10G, you usually need an extra Ethernet card anyway. I would not recommend it for databases, although it really depends on the load. Pros: lots of software has software iSCSI initiators, so you might just buy an extra 4-port Ethernet adapter.
2c) FibreChannel. You will almost always need extra adapters. If you hook up only one storage array, you can use direct cables; for more than one, you will need an FC switch. Link speed is 8G; however, latency is almost as high as with iSCSI. There are "FC over Ethernet" solutions on the market, but they get the worst of both worlds, and I would not recommend them for databases either.
2d) InfiniBand. Fast, really fast. And costly too - about the same as 10G Ethernet. You will definitely need an extra adapter for this one. This is the choice for speed-hungry, low-latency-demanding applications.

So, in two words: if you are looking for storage that should be shared by many servers, NAS solutions are the ones to think of. If speed matters and price does not, go for InfiniBand. If price matters and speed is not crucial, iSCSI or even a "storage server" is your choice. If you are on the other side - the storage is just for one server - then local is probably the best. If you need it permanently and expect great expansion possibilities, then an external SAS cage might be a good solution. If you are really tight on $, look for a USB drive cage or build your own cheap storage server.
The question is quite broad. To give good advice, we should know a bit more about this particular setup. Please look at the problem breakdown in the previous answer.
All the answers to the question were provided. I think there's a little confusion. Please check the posts above again for the solutions.

Thanks
Deleted because of no feedback? What about the 2nd drive cage? I'm assuming it has small form factor drives, since the LFF model is very rare.
AXISHK (ASKER)

Sorry, I want to clarify one thing:

1 Gbps network = 125 MB/s
For 4 x SAS RAID-5 HDDs, there would be 140 MB/s x 4 = 560 MB/s (assuming I extend with 4 HDDs).

Under this assumption, local disk throughput should be better than NAS/SAN performance?

Tks
Actually, it's not that simple - it really depends on what data you plan to move, and how. Just as weight is not the only thing to consider when moving something (volume and packaging also matter), being able to move a 10-ton steel roll one kilometre in one minute does not imply that you can as easily move 10 tons of grapes or 10 tons of porcelain tableware.

In short: a 1 Gbit network is 125 MB/s raw speed; however, there is some overhead, and the actual speed is less - I would say between 110 and 120 MB/s. Also, your server has two 1G ports and is relatively easily expanded to 6x1G or 2x10G. That could give a real boost to raw network throughput (up to 1 GByte/s using a single 10G interface, or ~200 MBytes/s if you just aggregate your existing two 1G ports). So I assume you might quite easily get 480 MB/s from an iSCSI NAS. And what you get here is parallel accessibility by other networked hardware.
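A sketch of that aggregation arithmetic (theoretical peaks; the efficiency factor is an assumed derating for protocol overhead, not a measured value):

```python
def aggregate_mbs(links, gbit_each, efficiency=1.0):
    """Raw MB/s for `links` bonded ports, optionally derated for overhead."""
    return links * gbit_each * 1000 / 8 * efficiency

two_gig_bonded = aggregate_mbs(2, 1)       # 250.0 MB/s theoretical (~200 real)
single_10g = aggregate_mbs(1, 10)          # 1250.0 MB/s theoretical
four_gig_real = aggregate_mbs(4, 1, 0.96)  # 480.0 MB/s with assumed overhead
```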

Now for local (directly attached) storage: it _never_ allows you to sum up per-spindle performance the way you did - a 4-disk RAID-5 is in no way able to perform at 4x single-disk speed (so your 140 MB/s x 4 is really wrong). At best, when reading _sequentially_ (large unfragmented files, one thread, and absolutely ideal conditions), you may get triple the single-spindle speed. That speed also depends largely on your RAID controller's performance. Speaking frankly, I've never seen any non-mirror RAID perform better than a single disk, and that especially applies to write operations.

If you were using a large external drive cage (hooked up through a SAS HBA), you could get around a 12 Gbit link (quad SAS 3G lanes) in raw bandwidth to/from the cage; however, again, it is most probably not the narrowest part of your system. If you do not plan to share this storage (you'll be using it locally) and you still need good expansion potential, I would go with this solution. Look at the links andyalder gave you - that is HP's own solution, but you can certainly use just about any drive cage and any SAS card with external ports.
What you get by using any directly attached storage: good raw speed and low latency. What you lose: simultaneous accessibility for other systems.
AXISHK (ASKER)

"your server has 2 1G ports, and is relatively easily expanded to 6x1G or 2x10G. that could give real boost to raw network throughput."

The current switch ports for the servers are only 1 Gbps. Does this mean the NAS and servers should be attached to dedicated switches to achieve such performance?

If I simply plug the NAS into my 1 Gbps network for server access, does it mean the performance will be poorer compared with local SAS disks?

Tks
Performance of NAS/SAN is always worse than local disks, simply due to the added latency. Say you had a DL380 G7 with a Windows 2008 iSCSI target acting as your SAN and you installed the iSCSI initiator on your current server: you would have exactly the same internal disk speed, but all the data would have to go through the TCP/IP stack on both servers and across a bit of Ethernet cable, and that introduces additional latency.
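A toy model of that point - the same disk service time, plus a stack traversal at each end. The microsecond figures are purely illustrative assumptions, not measurements of any real setup:

```python
def io_latency_us(disk_us, network_hops=0, per_hop_us=100):
    """Per-I/O latency: disk service time plus network-stack traversals."""
    return disk_us + network_hops * per_hop_us

local = io_latency_us(5000)                  # direct-attached disk
iscsi = io_latency_us(5000, network_hops=2)  # target + initiator TCP/IP stacks
extra = iscsi - local                        # the added latency, never negative
```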
AXISHK (ASKER)

So, since an external SAS cage can host more HDDs with better performance than a SAN, it doesn't make sense to invest in a SAN solution. Correct?

Tks.
SOLUTION (Member_2_231077)
AXISHK:
q1: No, it's not a requirement, just a recommendation. You just have to keep in mind that both storage and internet traffic are using the same pipe. If you plan to constantly use that pipe above 80%, better split the traffic. If you do not use the pipe for the internet (or other inter-server communications), then it's free for storage.

q2: In most cases, yes. However, it still depends on how you access your data and how the NAS is configured. I can imagine situations where a NAS outperforms your "single SAS drive".
ASKER CERTIFIED SOLUTION
AXISHK (ASKER)

Tks