Fibre Channel SAN


Could someone please advise whether all of the components below are compatible with each other? As a side note, I've never worked with SANs or Fibre Channel before.

5x Servers
HP ProLiant DL380p Gen8 High Performance (662257-421)

5x Fibre HBA
HP StorageWorks 82Q PCI-e Fibre Channel Host Bus Adapter Dual Port (AJ764A)

2x Fibre Switch
HP StorageWorks 8/20q Fibre Channel Switch (AQ233A)

1x SAN
HP StorageWorks Modular Smart Array P2000 G3 FC Dual Controller SFF Array (AP846A)

12x Fibre Cables
HP Premier Flex OM4 (QK732A)

cgladminAuthor Commented:
As the first link is to Brocade, is it better to use the HP StorageWorks 82B PCI-e Fibre Channel Host Bus Adapter Dual Port (AP770A) rather than the QLogic-based HP StorageWorks 82Q PCI-e Fibre Channel Host Bus Adapter Dual Port (AJ764A)?

The servers will be running ESXi 5, and I'm not sure which, if either, driver would be supported by default.

SHBStorage Network SpecialistCommented:

I'd stick to Emulex or QLogic rather than Brocade HBAs, since the former have been in the HBA business longer; similarly, I'd use Brocade rather than QLogic switches.

As far as what's supported the HP SAN Design Guide is the ultimate reference -

You may also find HP SPOCK a useful reference site, since it lists tested driver versions - (you need to create a free HP Passport account to log in)
gsmartinManager of ITCommented:
I agree.  In my environment I use Brocade Fibre Channel Switches with QLogic FC HBA cards.  

In terms of compatibility they are all using industry standards.  Therefore, compatibility will not be an issue.  

The point of buying a SAN is to leverage the technologies the vendor offers, because SANs are not all created equal. I have an all-HP environment with the exception of my SAN: I run a Dell Compellent SAN, which I bought before Dell acquired Compellent. The point is that I selected Compellent for the range of technologies they've incorporated into their SAN that other vendors lack. When you buy a SAN, you shouldn't just buy any SAN; it should be about the technologies the vendor uses to enhance your capabilities and your business, such as thin provisioning, storage virtualization, thin replication, Live Volume, automated tiered storage, continuous snapshots, boot from SAN, and more, including support for Fibre Channel, iSCSI, and FCoE. Not all SANs provide the same capabilities.

And if you are really looking for performance, especially in virtualization, Citrix, and/or VDI-type environments, I would recommend Fusion-io or OCZ Technology R4 NAND flash cache PCIe cards, which are less expensive and provide greater throughput than any SSD on the market. HP, Dell, and other vendors OEM the Fusion-io cards in 1.2 and 2.4 TB configurations performing at 100,000 IOPS per card. You can put a couple of the cards in a DL380 G8, create a little Fibre Channel and/or iSCSI SAN, and have incredible performance for key parts of your environment.
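To put the per-card figure quoted above in context, a quick sizing sketch. The 100,000 IOPS-per-card number comes from the comment; the per-desktop demand and headroom factor are illustrative assumptions, not figures from the thread:

```python
import math

# Back-of-the-envelope sizing: how many flash cards cover a VDI workload?
# CARD_IOPS is the per-card figure quoted in the discussion; the
# per-desktop demand and headroom are assumed illustrative values.

CARD_IOPS = 100_000          # quoted random IOPS per card
IOPS_PER_DESKTOP = 50        # assumed steady-state load per VDI desktop

def cards_needed(desktops: int, headroom: float = 0.5) -> int:
    """Cards required so the workload uses at most `headroom` of capacity."""
    demand = desktops * IOPS_PER_DESKTOP
    return max(1, math.ceil(demand / (CARD_IOPS * headroom)))

if __name__ == "__main__":
    for n in (16, 250, 2000):
        print(n, "desktops ->", cards_needed(n), "card(s)")
```

Under these assumptions a single card is nowhere near saturated by a small VDI deployment, which is why the discussion below turns to cost and HA rather than raw throughput.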

But again, compatibility is less of a concern these days: despite some proprietary features, the majority of vendors also have to support industry standards, especially for connectivity.
cgladminAuthor Commented:
Thanks all for the very helpful information!

Regarding virtualisation and VDI, we're looking to use VMware View.
Would the HP ioDrive IO Accelerator 365 GB for ProLiant Servers (673642-B21) make a huge improvement for £4k? In simple terms, how do they work?

Thanks again.
gsmartinManager of ITCommented:
Absolutely, incredible performance.

The links below are some examples of the performance benefits of Nand Flash PCIe Cards:

VMWare View on worlds fastest FusionIO SSD Storage

Fusion IO NAND Flash (SSD) PCIe Card

OCZ Technology demos Z-Drive R5 and other new SSDs at CES 2012
Bear in mind that in an HA cluster the Fusion-io card can only be used as a server-side read cache; writes have to be committed to the SAN straight away. It's still probably worth having, though, since it frees the SAN from those read tasks so it can spend more time on writes. There are also reports that QLogic is going to put NAND flash cache on its next generation of HBAs, but they haven't made a formal announcement yet.
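The read-cache/write-through distinction above can be sketched in a few lines. This is a minimal illustration of the caching pattern, not Fusion-io's actual software; the dict stands in for the shared SAN and the local flash card:

```python
# Minimal sketch of a server-side read cache in front of shared storage.
# In an HA cluster the local flash can safely cache reads, but every
# write must be committed to the SAN before being acknowledged
# (write-through); otherwise a host failure would lose data that the
# other cluster nodes cannot see.

class ReadThroughCache:
    def __init__(self, san):
        self.san = san      # backing store shared by all hosts (dict stand-in)
        self.cache = {}     # stand-in for the local flash card

    def read(self, block):
        if block not in self.cache:        # miss: fetch from the SAN
            self.cache[block] = self.san[block]
        return self.cache[block]           # hit: served locally, no SAN IO

    def write(self, block, data):
        self.san[block] = data             # commit to the SAN first
        self.cache[block] = data           # then update the local cache
```

The key point is the ordering in `write`: the SAN copy is always authoritative, so any surviving host sees consistent data after a failover.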
gsmartinManager of ITCommented:
That's not entirely accurate. Fusion-io has multiple product lines: caching and direct acceleration. The caching products (directCache, ioTurbine, ioCache) are as you describe. However, their direct-acceleration products (ioDrive2, ioDrive2 Duo, ioDrive Octal, and other models) accelerate both read and write performance.

Also, a correction on the IOPS: the actual IOs per second vary by card, going as high as 1,300,000 read IOPS and 1,240,000 write IOPS on the ioDrive Octal. So performance will vary depending on what you can afford for your needs.

You can install Fusion-io cards directly in a server for a specific purpose/application, or create your own SAN with their ION Data Accelerator and Virtual Storage Layer software, which allow shared (SAN-like) acceleration over FC, InfiniBand (SRP), or iSCSI.

Fusion IO SAN Topologies
Fusion IO HA SAN Configuration
ION Server SAN Example:
HP DL380 G8
Approach:         Tier 1 Server
Interface:        FC, IB
IOPS (4K):        1,000,592
Bandwidth:        6 GB/s
R/W Latency:      73 us / 56 us
Capacity (MLC):   20 TB
HA:               Yes
Software:         ION Data Accelerator, Virtual Storage Layer
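As a sanity check, the IOPS and latency figures in that spec sheet are mutually consistent under Little's law (outstanding IOs = throughput x latency). Both input numbers are taken directly from the table above:

```python
# Cross-check of the quoted spec numbers using Little's law:
#   concurrency (IOs in flight) = throughput (IOPS) * latency (seconds)

iops = 1_000_592         # 4K IOPS from the table above
read_latency_s = 73e-6   # 73 microseconds, also from the table

outstanding = iops * read_latency_s
print(f"~{outstanding:.0f} IOs in flight")   # roughly 73 concurrent IOs
```

In other words, reaching the headline IOPS figure requires a workload that keeps on the order of 70+ IOs outstanding at once; a lightly-threaded workload will see far fewer IOPS at the same latency.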

ION Data Accelerator software

ioMemory Virtual Storage Layer (VSL)

Purpose-built performance for virtualizing data-intensive applications
- Unparalleled Low Latency Performance for Virtualizing Data-Intensive Applications
- Supports vMotion to preserve IT Infrastructure Agility
- Transparent Operation with Existing Infrastructure
- Increase VM Density and Further Consolidate Servers
- Reduce Spindle Count and Increase Performance and Efficiency of Storage Assets

Direct Acceleration:
  ioDrive2 Duo:   1.2 TB and 2.4 TB
  ioDrive Octal:  5.12 TB and 10.24 TB
Well, maybe not entirely accurate, but I was talking about the bare PCIe card for £4k mentioned by the asker, not additional software that costs a similar amount on top. You could similarly use LeftHand VSA plus the Fusion-io cards so that the SAN sits inside the hosts, but I would still refer to that as the cards being part of the SAN, not part of the hosts.
cgladminAuthor Commented:
Okay, the speed is insane! If I were to purchase the HP ioDrive IO Accelerator 365 GB for ProLiant Servers (673642-B21) and use it for just 16 VMware View workstations, how would I go about high availability? It seems I would have to purchase at least one more, and at £4k each that seems rather costly…

Plus the cost of the ION software to make it into a virtual SAN.
gsmartinManager of ITCommented:
Just to add a few details: when cost is an issue, you can always come up with alternatives. OCZ Technology is a competitor to Fusion-io and about 30-40% cheaper; to my understanding, their cards are not OEM'd the way Fusion-io's are, and they have no SAN-like offering. SAN vendors like NexGen Storage and NetApp have incorporated this technology into their SAN offerings: NexGen in a tiering model with nearline SAS drives, and NetApp in a caching configuration.

Nevertheless, you don't have to implement them this way. From a server perspective, you can build a couple of Windows 2012 servers (for whatever purpose) and leverage VSS for HA. Or you can do HA with multiple controllers in a single box.

The Fusion-io flash drives are less expensive (per MB) than the enterprise SSD drives I recently bought for my Compellent SAN.

Obviously, this is not an inexpensive option. This is where you need to determine whether the ROI is justifiable for your company. If your company is small and users don't complain about system performance, you may not be able to justify the cost; 16 VDI desktops alone are not enough to justify it.

In my case, with 250 users running XenApp and XenDesktop, I can leverage it for both environments and our core database.

Either way, prices will eventually come down and make it somewhat more affordable.
It must be more than 16 VDI desktops: they have 2x E5-2690 CPUs in each of their 5 servers, and each server can take up to 768 GB of RAM.
gsmartinManager of ITCommented:
He was referencing 16 VDIs based on the HP ioDrive IO Accelerator 365 GB for ProLiant Servers (673642-B21).

One of the ways I am looking at approaching this is to host the system pagefiles for my Citrix XenApp servers and XenDesktop VDIs. This should provide a significant performance gain while also allowing a higher systems-to-ioDrive density ratio.

I have a meeting with Fusion-io later today to see how I can best use their products in my environment. I also plan on doing a POC. They have both a try-and-buy eval and a POC; the POC lets you test for a longer period.

I provided some information earlier about HA. For HA, you can either work within a single system between two cards, or between two systems with at least one card each. I will see what else I can find out during my meeting. I am more interested in doing HA against my SAN versus having to buy an additional card for HA. I don't see why that wouldn't be possible, but I will find out more.