svMotion between two different datastores: limitation?

Hi All,

Is there any reason why the Storage vMotion from my iSCSI VMFS datastore to my NFS datastore is limited to just 4 VMs at the same time?

[Screenshot: svMotion tasks]
Is this because of an Ethernet MPIO bandwidth limitation, a NAS IOPS performance limitation, or an ESXi 6 limitation?

How can I increase it to its maximum value?

See the screenshot below from the QNAP showing the network traffic and each datastore's network performance:
[Screenshot: QNAP NAS traffic status]
Senior IT System Engineer (IT Professional) asked:
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
"Is this because of an Ethernet MPIO bandwidth limitation, a NAS IOPS performance limitation, or an ESXi 6 limitation?"

None of the above.

It's the vMotion bandwidth (Storage vMotion uses the vMotion network), i.e. the VMkernel portgroups. So add more VMkernel portgroups backed by more pNICs, e.g. up to 4 x 10-GbE pNICs or 16 x 1-GbE pNICs.

Enabling Jumbo Frames can also make the transfer quicker.
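As a sketch of what that looks like from the ESXi shell: the vSwitch name (vSwitch0), portgroup name ("vMotion-2"), interface name (vmk2), and the IP addressing below are all assumptions, so adjust them to your environment.

```shell
# Sketch only -- assumed names: vSwitch0, portgroup "vMotion-2", vmk2,
# and a 192.168.10.0/24 vMotion subnet. Run on each ESXi host.

# Raise the MTU on the vSwitch so it can carry jumbo frames
esxcli network vswitch standard set -v vSwitch0 -m 9000

# Add a second VMkernel interface for vMotion with a jumbo-frame MTU
esxcli network ip interface add -i vmk2 -p "vMotion-2" -m 9000
esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.10.12 -N 255.255.255.0

# Tag the new interface for vMotion traffic
esxcli network ip interface tag add -i vmk2 -t VMotion
```

Bear in mind that jumbo frames only help if every hop (the vmknics, the physical switch ports, and the QNAP itself) is set to the same MTU; a mismatch can make transfers slower, not faster.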

Also, the maximum is 8 concurrent vMotions per host when using 10-GbE networking (it is 4 per host on 1-GbE, which matches the 4 you are seeing).

It used to be 1 per host.
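To check which limit applies to your hosts, the existing VMkernel interfaces and their MTUs can be listed from the ESXi shell; `vmk1` below is just a placeholder name.

```shell
# List all VMkernel interfaces with their MTU and enabled state
esxcli network ip interface list

# Show the traffic tags (Management, VMotion, ...) on a given interface;
# vmk1 is a placeholder -- substitute each vmk reported by the list command
esxcli network ip interface tag get -i vmk1
```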

You can check all the limits here:

Configuration Maximums for vSphere 6.0

Also remember that vMotion and Storage vMotion are different operations and have different maximums.

So unless you change your networking, the limit is unlikely to change.
Senior IT System Engineer (IT Professional, author) commented:
Thanks Andrew !
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
No problem.