Scotch Tech asked:
How to present SAN storage to a physical Windows 2016 server via fiber connection?
We have several physical Windows 2016 servers with local storage and with Exchange 2016 installed. We want to add external SAN storage via Fiber, and present that to the physical servers. How can we do this? Any docs would be greatly appreciated.
You're talking about enterprise hardware, which typically has its own methods for connecting. The overview of how you do it: get HBAs (host bus adapters) that connect to your SAN, install the management software for your SAN, and carve out space on the SAN that can be allocated to the servers as LUNs. Exactly how depends on the hardware you have and the software that hardware uses.
Probably best to find a SAN / NAS vendor who can sell you a complete package deal to work with your existing hardware. Trying to roll your own solution, with fiber, if you have never done it before, will probably not yield the results you desire.
There's also the support issue. If a vendor says "We'll sell you a solution that does X at Y rate and can serve Z servers at once with K requests per second" -- they have to deliver, or no payment. And if there are issues down the road, it's comforting to know you can go back to the vendor for support. Along those lines ... make sure you buy from a company known in the industry that's been around for over ten years, so that if you do need to go back to them, they might still be there.
There are two major methods for connecting SAN storage to a server via fiber. The classic method is via Fibre Channel, and then there is also iSCSI over Ethernet. Those are two very different protocols with different cabling and methods.
First step is to figure out what type of SAN storage you have, and what the connectivity method is.
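For that first check, Windows Server 2016 can tell you from the host side what initiator ports already exist, which answers the FC-vs-iSCSI question immediately. A rough sketch using the built-in Storage module (no array-specific assumptions):

```powershell
# List the host's storage initiator ports. ConnectionType shows whether each
# port is Fibre Channel or iSCSI; PortAddress is the WWPN for FC ports.
Get-InitiatorPort | Select-Object NodeAddress, PortAddress, ConnectionType

# If nothing shows up as Fibre Channel, no FC HBA (or driver) is present,
# which settles the "what connectivity method do we have" question.
```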
ASKER
thanks all. that makes a lot of sense.
Why would you want to use a SAN for Exchange when local disks are cheaper and just as good?
Take a look at the "physical disk types" section of https://docs.microsoft.com/en-us/exchange/plan-and-deploy/deployment-ref/storage-configuration?view=exchserver-2019
In general, choose SATA disks for Exchange 2016 mailbox storage when you have the following design requirements:
High capacity
Moderate performance
Moderate power utilization
In general, choose Serial Attached SCSI disks for Exchange 2016 mailbox storage when you have the following design requirements:
Moderate capacity
High performance
Moderate power utilization
In general, choose Fibre Channel disks for Exchange 2016 mailbox storage when you have the following design requirements:
Moderate capacity
High performance
SAN connectivity
So from Microsoft's recommendations, the only reason to use SAN storage is because you want the added complexity and cost of SAN storage! They go even further: best practices say don't share disk pools with other applications and don't use tiering, two of the main reasons for choosing a SAN over direct-attached storage.
SANs provide shared storage for clustering; Exchange has its own clustering solution in DAGs, so shared storage is not needed. It's just throwing money away.
ASKER
thanks for the suggestion. But we have no more drive slots available and all existing disks are in use. We expect archiving storage to reach at least 40TB once we've put the policy in place.
Still cheaper to add extra DAS shelves and PCIe RAID controllers to the servers. Example prices from Dell: $8,845.18 for an ME4012 iSCSI SAN vs. $3,784.88 for an MD1400 external disk shelf (both without disks and without the PCIe controller/HBA/NIC). Admittedly the RAID controller for the server costs a bit more than an HBA/NIC, but it's still about $4K less for a 12-disk shelf than for a 12-disk SAN.
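To make that delta explicit (shelf prices as quoted above; the roughly $1K controller-vs-HBA difference is an assumption for illustration only):

```powershell
# Back-of-envelope from the list prices quoted above.
$me4012San = 8845.18    # Dell ME4012 iSCSI SAN, no disks
$md1400Das = 3784.88    # Dell MD1400 DAS shelf, no disks
$rawDelta  = $me4012San - $md1400Das    # ~$5,060 per shelf

# Assume (illustration only) the server RAID controller costs about
# $1,000 more than a plain HBA/NIC, which lands near "about $4K less".
$rawDelta - 1000
```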
To be clear, which SAN is being considered?
The short addition to what others have covered.
1) How many systems do you envision connecting to the SAN?
A) Commonly a SAN has two controllers, with two fibre ports on each, for a total of four.
B) For redundancy, each host should have two paths, one to each controller (see the MPIO sketch after this list).
2) When you have many hosts, you will need a fibre infrastructure: fibre switches (a pair, for redundancy).
A) Each switch will have one feed from each controller.
B) Each port will need to be configured for which LUNs it passes to the connected host.
3) Each server will need a fibre HBA matching the connection the switch provides.
A) You can have a dual-port fibre HBA or two separate HBAs; only separate HBAs protect you from an HBA failure.
4) Then you need the cabling.
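On the Windows Server 2016 side, those dual paths get tied together by MPIO. A minimal sketch using the built-in feature and cmdlets (which bus type you claim depends on whether you land on iSCSI, SAS, or FC):

```powershell
# Enable the built-in Multipath I/O feature (a reboot may be required).
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# Let the Microsoft DSM automatically claim multipath disks.
# Use the bus type that matches your connectivity; for Fibre Channel
# LUNs the classic alternative is:  mpclaim -r -i -a ""
Enable-MSDSMAutomaticClaim -BusType iSCSI
```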
As Arnold said, you need a lot of infrastructure for a SAN.
Obviously if you go down the FC route you will need a separate FC infrastructure, but don't forget that if you go down the iSCSI route, you will need to configure extra IP infrastructure to cope with the storage traffic (host-side sketch below).
Dual redundant fabrics are the de facto standard for a SAN, i.e. you need dual independent switches, NICs/HBAs, etc.
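If iSCSI is the route, the host side on Server 2016 is a few cmdlets once that IP infrastructure exists. A rough sketch; the portal address is a placeholder for your array's iSCSI portal, and it assumes the portal exposes a single target:

```powershell
# Make sure the iSCSI initiator service runs now and after reboots.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Point the initiator at the array's portal (192.0.2.10 is a placeholder).
New-IscsiTargetPortal -TargetPortalAddress 192.0.2.10

# Discover the target the array exposes, then connect persistently
# so the session is restored automatically at boot.
$target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true
```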
How many Exchange servers? Are you using VMs? The extra storage is for archive? Do you have a hypervisor cluster? A SAN may or may not be the best solution, but we need more details.
FWIW, deploying FC switches really doesn't make any sense for a greenfield deployment like this. SAS or iSCSI, but not FC.
ASKER
We already have a SAN infrastructure in place; we just need the HBA cards and to connect them, so I think that would be easier.
we have 5 Exchange servers. we will ultimately archive about 25-35TB of data.
ASKER
I meant to say, we already have a Fiber infrastructure with a SAN in place.
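If the fabric and SAN are already in place, then once the HBAs are installed the host-side job is mostly collecting WWPNs for whoever manages the switches, so the new ports can be zoned and the LUNs masked to them. A quick sketch with the built-in Storage module:

```powershell
# List the FC initiator ports; PortAddress is the WWPN the fabric
# admin needs for switch zoning and for LUN masking on the array.
Get-InitiatorPort |
    Where-Object ConnectionType -eq 'Fibre Channel' |
    Select-Object InstanceName, PortAddress
```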
ASKER
great, thanks all.
You already have a Fiber infrastructure with a SAN in place.
Ask the current maintainer about expanding it then; if you have a contract with the manufacturer it would be pretty easy, although probably rather costly. But if you had a contract with the manufacturer you wouldn't be asking us, so we have to assume there's no manufacturer-supported upgrade contract. Adding capacity to an existing SAN without a manufacturer contract is just about impossible; they have you by the short and curlies, because old kit won't take new disks without a firmware upgrade.
Tell us the make/model and one of us can probably tell you the expansion cost.
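However the capacity gets added, once the array presents a new LUN the Windows-side bring-up is the same. A minimal sketch; the "first RAW disk" filter is a placeholder, and the ReFS/64 KB settings follow Microsoft's Exchange 2016 storage guidance:

```powershell
# Rescan so Windows notices the newly presented LUN.
Update-HostStorageCache

# Placeholder selection: grab the first uninitialized disk.
$lun = Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Select-Object -First 1

# Initialize, partition, and format for Exchange database/archive volumes.
# Microsoft's Exchange 2016 guidance: ReFS, 64 KB allocation units,
# integrity streams disabled on database volumes.
Initialize-Disk -Number $lun.Number -PartitionStyle GPT
New-Partition -DiskNumber $lun.Number -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 `
                  -SetIntegrityStreams $false -NewFileSystemLabel 'ExchArchive'
```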