• Status: Solved
  • Priority: Medium
  • Security: Public

NAS connectivity with ESXi 5

We are looking at an inexpensive NAS solution for VM backups. Our VMs run on our SAN, so we don't need something to run VMs, or anything fast — just something to easily back up VMs to.

I assume what I mostly need to look at is connectivity to our ESXi hosts or the SAN. From what I'm seeing, some lower-priced NAS units either don't have iSCSI support or can't be seen by more than one ESXi host at once. Is that the SPC-3 capability?

We have an HA cluster of three ESXi hosts connected via fiber to our SAN. What is the best way to back up to a NAS from all three hosts? Can the NAS connect to the SAN? Do I just need to put it on the network and create an NFS share on it? What is the best way?

One I was looking at was the Buffalo TeraStation 3, 12TB. It's about $1800 but doesn't have iSCSI. The ones that do can only be seen by one host at a time, so those probably won't work.

thanks for any advice.
Asked by readymade
3 Solutions
 
Paul Solovyovsky commented:
The way this would be architected: you would attach the NAS to the network or to the ESXi hosts, depending on the backup solution. If the backup runs ESXi host to ESXi host, it would need to be connected to the cluster; otherwise it would be a CIFS or NFS share for an application like Veeam.

Be careful when buying very inexpensive NAS units, as low IOPS may be a limiting factor and cause your backups to queue up and fail. A NAS can't connect directly to the SAN in most cases, but some backup applications can perform LAN-free backups by connecting read-only to the LUNs.

How are you looking to back up your VMs?
 
readymade (Author) commented:
We are using PHD Virtual. Not great, but it's inexpensive and has worked pretty well so far. Our company is cheaping out on a lot of things. PHD Virtual works best if it backs up to an attached VMDK or attached disk. How would I do this with a NAS? I guess I would just attach the NAS directly to the ESXi host that PHD Virtual lives on?

I'm not familiar with NAS, and I'm not understanding how I would carve a disk or VMDK out of the NAS and attach it to the PHD Virtual VM. Or do I create a share and add that? I'm not familiar with the NAS file system or how you use it.

Can you give me some specific details on how I would connect it to the cluster, or connect it as CIFS or NFS, from the NAS side of the setup? I am very familiar with ESXi and adding NFS shares, LUNs, and datastores to it.

thanks!
 
Paul Solovyovsky commented:
For VMware, "NAS" indicates a system running NFS. Typically you would want to ensure that it is VMware certified, but you would attach it as a datastore, and you can then create additional virtual disks on it and attach them to your current VMs as needed.

You could also use it as a CIFS share, if PHD supports that, as a secondary option.
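As a rough sketch of the workflow described above (the NAS IP, share path, datastore label, and VMDK size here are all placeholder assumptions — substitute your own), mounting the NFS export as a datastore on an ESXi 5 host and pre-creating a virtual disk for the backup appliance might look like:

```
# Run on each ESXi host so all three cluster members see the datastore.
# 192.168.10.50, /mnt/backup, and NAS-Backup are hypothetical values.
esxcli storage nfs add --host 192.168.10.50 --share /mnt/backup --volume-name NAS-Backup

# Verify the datastore mounted
esxcli storage nfs list

# Optionally carve out a thin-provisioned VMDK on the new datastore;
# it can then be attached to the PHD Virtual VM via the vSphere Client
mkdir /vmfs/volumes/NAS-Backup/phd-backup
vmkfstools -c 500G -d thin /vmfs/volumes/NAS-Backup/phd-backup/phd-backup.vmdk
```

The same mount can also be added through the vSphere Client (Configuration > Storage > Add Storage > Network File System) if you prefer the GUI.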
 
readymade (Author) commented:
To attach the NAS as a datastore, do you mean I can somehow make VMware see the whole NAS as a datastore, or do I have to create a share on the NAS first?

Because I've never used a NAS, I'm not sure of the level of integration the NAS has with VMware.

I've created NFS shares on servers and then presented them to VMware, and I'm just wondering if it's easier with a NAS.
 
Paul Solovyovsky commented:
You would create an NFS share and ensure that it has "no_root_squash" set, which allows ESXi to connect without credentials (which is why it's a good idea to put it on a non-routed VLAN).

It works just like a Windows CIFS share from a Windows server. It's actually more flexible, as it allows you to shrink and grow the datastore on the fly.
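On a Linux-based NAS that exposes a raw exports file, the share definition might look like the fragment below (a sketch with a hypothetical path and subnet — many consumer NAS units instead expose these options as checkboxes in their web UI, so check the vendor's documentation):

```
# /etc/exports -- export the backup folder to the ESXi management subnet
# rw             : ESXi needs read-write access
# no_root_squash : lets ESXi mount without credential mapping, per the advice above
# sync           : safer for backup data at some cost in write speed
/mnt/backup  192.168.10.0/24(rw,no_root_squash,sync,no_subtree_check)
```

After editing the file, `exportfs -ra` reloads the export table so the share becomes visible without restarting the NFS service.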
