• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1268
  • Last Modified:

Openfiler for VMware

Hi,

In the past I had an Openfiler box running with one NFS share for my two ESXi servers.
With about 10 VMs I was using vSphere with vMotion. It worked fine, though it was sometimes a bit slow.

After some maintenance on a few machines I had to reconfigure the Openfiler. Now I want to set up my environment a little better and safer.
So on the Openfiler I created three virtual disks with a hardware RAID card:
Virtual disk 1: configured as RAID 1 for the Openfiler OS.
Virtual disks 2 and 3: configured as RAID 5 for the storage.

Now I want to configure my Openfiler as one volume group, but so that it first starts writing on virtual disk 2 rather than spreading the writes across the two virtual disks. I think it's a bit safer this way.

The next thing I want to do is share this volume group with my VMware ESXi servers, and also as an SMB share for the Windows clients. Should I choose NFS again, or iSCSI? I've read that iSCSI is faster and better for the ESXi servers. Is it possible to use the same share for both iSCSI and SMB for Windows clients? Can I still use vMotion with iSCSI?

What would you recommend?
Thanks a lot, regards.
Asked by: jonas-p
1 Solution
 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
NFS is faster, make sure you use Jumbo Frames, and it doesn't require individual iSCSI LUNs to be created on Openfiler.
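If you go the NFS route, each ESXi host just mounts the Openfiler export as a datastore. A minimal sketch using the ESX 4.x-era command line, where the hostname, export path and datastore name are placeholders for your own values:

```shell
# Mount the Openfiler NFS export as a datastore on each ESXi host.
# "openfiler.lab.local", the export path and the datastore label
# are placeholders -- substitute your own.
esxcfg-nas -a -o openfiler.lab.local -s /mnt/vg_storage/nfs_vol/vms openfiler-nfs

# List NFS mounts to confirm the datastore is attached
esxcfg-nas -l
```

The same datastore then shows up on every host that mounts it, which is what vMotion needs: shared storage visible to both the source and destination host.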
 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
vMotion can be used with iSCSI. I suggest you test NFS versus iSCSI on Openfiler, and see what works for your scenario and setup.

CIFS and NFS can work together, but you have to create iSCSI LUNs on Openfiler, and those cannot be used as CIFS shares.
 
jonas-pAuthor Commented:
Okay, thanks for the quick response guys.
What is the main difference between NFS and iSCSI? Why is one faster?

Is it possible to get one volume, where behind the volume are the different virtual disks from the hardware RAID?
I want the share to first start writing at the beginning of the volume, i.e. on virtual disk 1, and only after that is full start writing on virtual disk 2. Is that possible? Is it a safe solution?

Am I right that there is one volume for the SAN share, and that behind the volume there are multiple LUNs (the hardware RAID virtual disks)? What is the most common setup for such an environment?

Also keep in mind that in the future (really soon) I will deploy a second Openfiler.
I want to create/adjust a big SAN share for my ESXi servers and Windows clients. By adjust I mean adding the second Openfiler to the same volume. Or how would you guys set it up?

Thanks a lot, regards.

 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
In our tests of Openfiler's NFS implementation we have seen that NFS is faster than iSCSI and performs better (this could be due to the conversion and overhead of SCSI to iSCSI packets). Couple that with the benefit that you can create a single NFS/CIFS volume for storage of VMs, versus having to create iSCSI LUNs.

This is why I suggested trying iSCSI and NFS and checking which performs better. Jumbo Frames will help your performance whether you select iSCSI or NFS.
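A crude way to compare the two is a sequential-write test against a test file on each datastore. This is only a rough sketch with placeholder datastore names, and it measures sequential throughput, not the random I/O pattern of a loaded VM host:

```shell
# Time a 1 GB sequential write to a file on each datastore.
# "openfiler-nfs" and "openfiler-iscsi" are placeholder datastore names.
time dd if=/dev/zero of=/vmfs/volumes/openfiler-nfs/testfile bs=1M count=1024
time dd if=/dev/zero of=/vmfs/volumes/openfiler-iscsi/testfile bs=1M count=1024

# Clean up the test files afterwards
rm /vmfs/volumes/openfiler-nfs/testfile /vmfs/volumes/openfiler-iscsi/testfile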

You cannot join Openfiler servers.

Is this for production?

Could you explain, in a little bit more detail this

"Is it possible to get one volume. After the volume are different virtual disk from the hardware raid.
I want that the share first start to right a the beginning of the volume so at virtual disk 1 after that is full is starts writing on virtual disk 2. Is that possible? Is it a safe solution?

I'm right that their is one volume the SAN share. And that their are multiple LUN's (the hardware raid virtual disk) after the volume ? What is the most common setup for such envoirenment."


I'm a little unclear what you are trying to do.
 
jonas-pAuthor Commented:
Okay, thanks for explaining the difference.
Yes, this is for a small production environment, but it can't be too expensive; that's why Openfiler.

What I tried to explain is this: many production sites have one big SAN. Behind the SAN there are many virtual disks/LUNs available, so it's easy to deploy extra storage, and when a virtual disk fails you can easily recover with the RAID controller, while the SAN doesn't notice anything.

I want to deploy something like that, but on a smaller scale of course.

Thanks, regards.
 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
Okay, Openfiler is okay, but its performance does suffer when you have many virtual machines (using either NFS or iSCSI).

Okay, ignoring Openfiler for the moment: ensure that the hardware you have in place has a good hardware RAID system; the operating system that sits on top of it is software, after all.

Some of our clients have re-used older, non-supported Fibre Channel SANs with many disks (14-disk chassis), re-purposing them because of the investments made in the storage. These SANs are no longer supported by VMware; they are MSA 500, 1000 and 1500 units, and older EVAs and MAs.

The hardware systems are excellent at redundancy. They have then connected these older systems using Fibre Channel to older non-supported HP servers (also no longer supported by VMware) to create moderately fast Openfiler SANs, using good redundant hardware to support existing small-scale production VMware environments.
 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
They have then created large volume groups for use with iSCSI and NFS. It's not the software that fails (usually); it's the underlying RAID storage. So if your hardware RAID is good, your volume groups should be good, and you'll have a successful Openfiler deployment.
 
jonas-pAuthor Commented:
Okay thanks,

But are you saying that you have to configure separate volume groups for the hardware RAID volumes?
How should I configure it in my situation:
- 3 hardware RAID virtual volumes

1.) Configure it as one volume, shared as NFS for VMware and Windows clients?
2.) Configure separate volumes, shared as NFS and iSCSI for VMware and Windows clients?
3.) ...

Thanks, regards.
 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
You create one Volume Group per hardware logical array. So if you have three RAID arrays, you would create one volume group per RAID array. (Don't span them, because that is dangerous; keep them 1-to-1.)

Out of those volume groups you then configure each one either as an NFS share or as a container for iSCSI LUNs.
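The 1-to-1 mapping of RAID array to volume group can be sketched with standard LVM commands on the Openfiler box. The device names and sizes below are placeholders; check yours with `fdisk -l` first:

```shell
# Each hardware RAID array appears as one block device.
# Placeholder devices: /dev/sdb = RAID 5 array 1, /dev/sdc = RAID 5 array 2.
pvcreate /dev/sdb
pvcreate /dev/sdc

# One volume group per array -- never pass both devices to one vgcreate,
# which would span the VG across arrays.
vgcreate vg_array1 /dev/sdb
vgcreate vg_array2 /dev/sdc

# Carve logical volumes out of each group, to be exported as an
# NFS share or used as an iSCSI LUN container (sizes are examples).
lvcreate -L 500G -n nfs_vol vg_array1
lvcreate -L 500G -n iscsi_lun1 vg_array2
```

The point of the 1-to-1 rule: if a VG spans two arrays and either array is lost, the whole volume group is lost. Kept separate, a failed array only takes its own VG with it.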
 
jonas-pAuthor Commented:
Okay, many thanks!
Today I created 3 volumes (volume 1 on RAID array 1, volume 2 on RAID array 2, ...).
I will use the first volume as iSCSI for the ESXi servers.
And volumes 2 and 3 I will use as NFS, both for ESXi and for the Windows clients.
Should I combine volumes 2 and 3 into one volume group?
Is this a good and safe set-up?

What did you mean by Jumbo Frames? Is that the block size you choose in VMware? Should it be high, for example 4MB for a maximum file size of 1024GB?

Thanks, regards.
 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
It's safer not to span arrays with volume groups.

Jumbo Frames aid performance over networks, but your network switches, Openfiler and ESX must all be configured for them. Jumbo Frame setup is the scope of another question.
 
jonas-pAuthor Commented:
Okay, thanks, that answers the question I asked.
Could you point me to where I can find some information on the Jumbo Frame setup?

Thanks.
 
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
Please ask another question with reference to Jumbo Frames, to be fair to other Experts.
