
Building a SAN

I am contemplating building a SAN for experimentation purposes only.  I have a few questions regarding this, such as:

1.  Are SANs always fibre?

2.  If fibre is not required, what is the cheapest possible way to make a functional SAN with or without RAID?  I plan on connecting 2 servers running W2K3 with Exchange 2k3.  

3.  I read that you need 2 NICs per server.  I'm assuming 1 for the LAN and 1 for the SAN.  Is it possible to connect the SAN cable from each server to a switch, and then connect the switch to a disk array in some fashion?

I don't care if the 'storage array' is just one disk, as long as it works.
Asked by bleujaegel
 
Lee W, MVP (Technology and Business Process Advisor) commented:
Hi bleujaegel,

It doesn't seem you quite know all the details.  It sounds like you want to build a cluster, not a SAN.  There's a difference.  A SAN is a storage area network.  A cluster is two or more servers (nodes) cooperating to provide a service.  A Windows cluster typically has one node performing the work while the other sits idle "just in case"; if the first server fails, the second takes over.

But to answer your questions...
> 1.  Are SANs always fibre?

No, you can use iSCSI or other types of storage.  SAN storage is basically storage that has its own type of network, a storage area NETWORK.  Today it's usually Fibre Channel or iSCSI.  SAN storage appears to the servers as if it were local drives, whereas NAS (Network Attached Storage) appears as network drives.


> 2.  If fibre is not required, what is the cheapest possible way to make
> a functional SAN with or without RAID?  I plan on connecting 2 servers
> running W2K3 with Exchange 2k3.

If you want a SAN, then you need a SAN-class device OR a device with iSCSI target capability.  Linux can be configured to provide this, and there are third-party solutions for Windows.  Microsoft offers an iSCSI initiator that can connect to a target, but Windows itself cannot act as a target.
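As a sketch of what the Linux side of this looks like: the iSCSI Enterprise Target (IET) project is one such option, configured through /etc/ietd.conf. The IQN and the /dev/sdb device path below are illustrative assumptions, not specifics from this thread.

```
# /etc/ietd.conf -- minimal illustrative example; adjust the IQN and device path
Target iqn.2006-01.lab.example:storage.disk1
        # Export one block device as LUN 0
        Lun 0 Path=/dev/sdb,Type=fileio
```

Once ietd is running with a config like this, the Windows servers would point the Microsoft iSCSI initiator at the Linux box's IP and the exported LUN shows up as a local disk.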

If you're trying to make a cluster, then you can use SCSI with simple SCSI controllers - though this will be an UNSUPPORTED cluster and MAY fail outright.  I've done it using a couple of Adaptec 2910 and 2920 cards (you need to change the SCSI ID on ONE of the cards so it remains unique on the SCSI bus).


> 3.  I read that you need 2 NICs per server.  I'm assuming 1 for the
> LAN, and 1 for the SAN.  Is it possible to connect the SAN cable from
> each server to a switch, and then connect the switch to a disk array
> in some fashion?

If you use iSCSI, then you CAN share a switch, but for production environments it's HIGHLY recommended that you keep the iSCSI network separate from the LAN.  For a cluster, you need a second network card for a heartbeat network that monitors the connection to the other server, so each node knows when the other has failed.

Cheers!
 
bleujaegel (Author) commented:
Can you clarify the SCSI cluster configuration?  If I have a 2920 card in each server, what's next?  How do the Windows Servers share a common drive?   Are they connecting to an external disk array?  From what I understand, how clustering works (in active/passive mode) is that if one fails, the other one can step in and take over.  Am I wrong here?
 
Lee W, MVP (Technology and Business Process Advisor) commented:
bleujaegel,
> Can you clarify the SCSI cluster configuration?  If I have a 2920 card
> in each server, what's next?  How do the Windows Servers share a common
> drive?   Are they connecting to an external disk array?  

Yes, the controllers need to connect to an external SCSI array.  Just go into ONE controller's BIOS and change the SCSI ID.  Again, this is a NON-SUPPORTED method, but I HAVE created one cluster that worked for some time like this and another, using 2940 cards, that failed about an hour later and I couldn't get running again.

> From what I
> understand, how clustering works (in active/passive mode) is that if
> one fails, the other one can step in and take over.  Am I wrong here?

That's essentially correct.  There are ways of making it active/active - IF you have two or more clustered services and two or more disks.
 
bleujaegel (Author) commented:
>> Just go into ONE controller's BIOS and change the SCSI ID

Do you mean, for example, change one of the controllers' SCSI IDs from 7 to 8, so that they aren't both on ID 7?

Lastly, do the external SCSI arrays always have at least 2 SCSI connectors?  Do you think this one would work?

http://cgi.ebay.com/Sun-Multipack-711-SCSI-Hard-Disk-Array-12-bay-NICE-NR_W0QQitemZ9733085525QQcategoryZ51239QQrdZ1QQcmdZViewItem
 
Lee W, MVP (Technology and Business Process Advisor) commented:
That may work - I used a generic one myself.  Note it takes SCA SCSI hard drives, not the otherwise-standard LVD drives.  Nothing wrong with that, but make sure you have the correct disks.

And yes, that's what I mean: change the ID.  One thing - a 2920, I THINK, only supports 7 devices, and that Sun unit looks to hold 12, which might cause problems getting the IDs right.  Almost every external SCSI storage unit I've seen has two connectors, because the nature of SCSI allows devices to be chained together.
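To make the ID arithmetic concrete: a narrow SCSI bus has 8 IDs (0-7), and every device sharing the bus - both host adapters plus every drive in the enclosure - needs a unique one. A small sketch of that constraint (the specific ID assignments are illustrative, not from the thread):

```python
# Narrow SCSI bus: 8 IDs (0-7); every controller and drive needs a unique one.
NARROW_SCSI_IDS = set(range(8))

def bus_is_valid(controller_ids, drive_ids):
    """True if all devices fit on one narrow bus with no ID collisions."""
    ids = list(controller_ids) + list(drive_ids)
    in_range = all(i in NARROW_SCSI_IDS for i in ids)
    unique = len(ids) == len(set(ids))
    return in_range and unique

# Two controllers (one moved off the factory default of 7) plus three drives:
print(bus_is_valid([7, 6], [0, 1, 2]))   # -> True
# Both controllers left at the default ID 7 collide:
print(bus_is_valid([7, 7], [0, 1, 2]))   # -> False
```

This is why changing the ID on exactly ONE of the two cards is the critical step: both Adaptec cards ship defaulting to ID 7.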

If you wanted to do something more "supported", look into getting a couple of PERC 3 controllers (Dell RAID controllers) and a PowerVault 200S or 210S.  The controllers support clustering, and so do the PowerVaults - provided they have two controller boards in them.  (That's what I've used in production environments.)
 
bleujaegel (Author) commented:
Excellent.  That looks like the right price.  I will install a PERC in each server, change the ID on one controller, and connect both to the storage array, in which I will install 3 SCSI drives and build a RAID 5 configuration.  Does that sound good to you?
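For the RAID 5 math behind that plan: with N drives, one drive's worth of capacity goes to parity, so usable space is (N - 1) times the drive size. A quick sketch (the 36 GB drive size is an assumption for illustration, not from the thread):

```python
def raid5_usable_gb(num_drives, drive_gb):
    """RAID 5 usable capacity: one drive's worth of space is consumed by parity."""
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) * drive_gb

# Three hypothetical 36 GB SCSI drives:
print(raid5_usable_gb(3, 36))   # -> 72
```

So a 3-drive RAID 5 set gives two drives' worth of usable space while surviving any single drive failure.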
 
Lee W, MVP (Technology and Business Process Advisor) commented:
Sounds fine if you're trying to setup a cluster.  NOTE: both the PERC cards and the 210S have switches for cluster mode.  It's preferable to use those.
 
bleujaegel (Author) commented:
Thanks a million!
 
Lee W, MVP (Technology and Business Process Advisor) commented:
Also, the PERC card switches include BIOS settings!
 
bleujaegel (Author) commented:
All the more to tweak!
