IT_Group1 asked:

Best practice for migrating VMs between ESX hosts, where some share the same LAN and some don't

Copying VMs from ESX 4 to ESXi 4.1:

Five different hosts; some are on the same LAN as the new hosts, and some aren't.
When I try to copy a VM from a host to the target host, I receive the following error in the Clone Virtual Machine wizard when choosing the target host: "Network interface Network adapter 1 uses network VM Net, which is not accessible."

I think I understand the issue, but how can I share the network between the hosts?
An additional point: if the target LUN is on a different segment (the iSCSI network), but is mapped to a target host on the same LAN, will the transfer succeed?

Thx
bgoering:

If the storage is visible on both the old and the new networks, all you need to do is unregister the VM (remove from inventory) on the old host, then browse the datastore to the .vmx file, select it, and add it to inventory on the new host.
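For reference, a minimal command-line sketch of that same re-register flow, assuming a shared datastore and VM named as placeholders (shared01, myvm):

# On the old ESX 4 host (service console), unregister the VM by its .vmx path
vmware-cmd -s unregister /vmfs/volumes/shared01/myvm/myvm.vmx

# On the new ESXi 4.1 host (Tech Support Mode), register the same .vmx
vim-cmd solo/registervm /vmfs/volumes/shared01/myvm/myvm.vmx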

Another possibility is using VMware Converter (http://www.vmware.com/products/converter) to migrate your VMs from the old hosts to the new ones... It might work better than a clone operation in vCenter for hosts on different networks.

Good Luck
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper):
You've probably not got a VM Net portgroup on that host.

Create it on the vSwitch.
It's just the portgroup label it's looking for; it doesn't know what physical network it's connected to.
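A portgroup with a matching label can be created from the service console; a quick sketch, where "vSwitch0" is an assumption, so substitute your actual vSwitch:

# List vSwitches and their existing portgroups
esxcfg-vswitch -l

# Add a portgroup labelled "VM Net" to vSwitch0
esxcfg-vswitch -A "VM Net" vSwitch0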
IT_Group1 (Asker):

hanccocka, indeed changing the VM network name helped me proceed, but the result is the same; it processes for ~10 min, and then fails with `cannot connect host`.

Any ideas?
So you're cloning from an ESX 4 host to an ESXi 4.1 host?
Yep
And cloning from an ESX 4 datastore to a different ESXi 4.1 datastore?

Are the ESX 4 servers and the ESXi 4.1 servers on the same LAN? (management LAN)

But the virtual server you are cloning may be on a different LAN on both hosts?

It would seem it's having difficulty cloning to the ESXi 4.1 host via the service console network on the ESXi 4.1 server.

You're right, they share the same LAN, but each of the hosts has additional segments which aren't shared. The shared LAN is the management LAN, and again you were correct to guess that the VM is on the segment which is not shared.

Are you David Copperfield by any chance??
I'm looking over your shoulder!

Have you heard of Remote Viewers!

Anyway, back to this: the communication channel is between ESX and ESXi, so this problem shouldn't occur.

I think you might have to look at our friend VMware Converter.
But it's mainly Linux stuff (CentOS), and VMware Converter treats it like *%# (basically doesn't recognize the guests), and not because of a login problem.

?? - ideas?
ASKER CERTIFIED SOLUTION
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper):

Take a look at this post: http://communities.vmware.com/message/1339808

It has you editing /etc/opt/vmware/vpxa/vpxa.cfg

Make sure hostIp has the address of your ESX or ESXi host, and serverIp has the correct address for your vCenter server.
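Purely as an illustration (the surrounding XML in vpxa.cfg varies by build, and the vCenter address below is a placeholder):

<vpxa>
  <hostIp>192.168.0.212</hostIp>    <!-- this ESX/ESXi host -->
  <serverIp>192.168.0.10</serverIp> <!-- your vCenter server (placeholder) -->
</vpxa>

On classic ESX the management agent can then be restarted with `service vmware-vpxa restart` so the change is picked up.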

Also, the "VM network not found" error simply means you don't have a portgroup labelled VM Net on your destination ESXi 4.1 host. I believe that is just a warning, but you will need to assign the proper network once the cloning operation completes.

Good Luck
Here is the VMware KB article - it describes the problem a little better:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010837
OK, Veeam 5 is being installed as we speak.
One more issue has struck us: after copying the VMs FROM the datastore to an external USB device at a reasonable transfer rate (~250 Mbps), the upload back to the new servers is sluggish as hell, at about 25 Mbps...! I've tried both local SAS and a new, empty LUN on the new servers.

Both servers share the same segment with 1 Gb Ethernet interfaces.
The new servers also have 2x 1 Gb NICs connected directly to the separate iSCSI network.

Please assist.

thx
Same upload rate with Veeam - between datastores (where the traffic is sent through the PC that Veeam is installed on).

Guys, anything?
It appears to be slow writes on the destination server. Make sure your RAID controller has battery-backed write cache (BBWC) and is configured for "write back" mode and not "write through" mode. Lack of BBWC, or misconfiguration, can have a dramatic effect on write performance.

Also make sure enough write cache is allocated - I generally use 75% write and 25% read.
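If Dell OpenManage Server Administrator happens to be installed on the host, the current policy can also be checked from the command line - a sketch, assuming the virtual disk lives on controller 0:

# List virtual disks on controller 0; the output includes the current Write Policy
omreport storage vdisk controller=0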
bgoering, thx, but I suspect that it's not an IOPS issue.
The destination hosts are brand-new Dell R610s with 512 MB of BBWC; where do I check whether it's set to "write back" mode and not "write through" mode?

Another interesting issue: when I try to connect directly from the SSH console of the source server to the destination server, I get the following error, which says that port 22 is closed (which it obviously isn't, since I'm connecting to it via SSH and via Veeam):

[root@esxprod JiraNew]# ssh 192.168.0.212
ssh: connect to host 192.168.0.212 port 22: Connection refused

What is going on?
The R610 probably has the PERC 6/i controller. In the BIOS RAID configuration, under advanced settings, you can configure the write policy. Take a look at http://www.thegeekstuff.com/2009/05/dell-tutorial-create-raid-using-perc-6i-integrated-bios-configuration-utility/

I believe the BIOS setup utility is close to the same for the H700 controller.

Have you configured lockdown mode? Have you enabled Remote Tech Support? Either one of those will keep you from connecting with an SSH client like PuTTY.
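One more thing worth ruling out: on classic ESX 4 the service console firewall blocks the outgoing SSH client by default, which can produce exactly this "Connection refused" symptom when SSHing host-to-host. A sketch for the ESX 4 source host:

# Check whether the outbound SSH client is currently allowed
esxcfg-firewall -q sshClient

# Enable it if it is blocked
esxcfg-firewall -e sshClient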
I'm checking the RAID settings. As for PuTTY - I can connect via SSH from my PC, but I can't connect via SSH or SCP from the other ESX hosts on the same segment.
Checked. It's set to Write Back.
How can I see the BBWC type/RAM size?

Thx
OK, again - my bad - it was a switch misconfiguration, and after testing with a different switch, all is great.
BTW, do you know how I can disable jumbo frames (which I suspect were the reason for this) for only VLAN 1 on the Dell PowerConnect 5424? And if it's not jumbo frames, which setting can cause this slowness in data transfer?
Here's the switch config; I'm sure something is misconfigured. We're looking at VLAN 1 (ports 16-24).
What does FlowControl stand for?

Thx
iscsi1# show running-config
spanning-tree mode rstp
interface range ethernet all
spanning-tree portfast
exit
interface range ethernet all
flowcontrol on
exit
port jumbo-frame
interface range ethernet g(1-15)
switchport mode general
exit
vlan database
vlan 2
exit
interface ethernet g1
switchport general pvid 2
exit
interface ethernet g2
switchport general pvid 2
exit
interface ethernet g3
switchport general pvid 2
exit
interface ethernet g4
switchport general pvid 2
exit
interface ethernet g5
switchport general pvid 2
exit
interface ethernet g6
switchport general pvid 2
exit
interface ethernet g7
switchport general pvid 2
exit
interface ethernet g8
switchport general pvid 2
exit
interface ethernet g9
switchport general pvid 2
exit
interface ethernet g10
switchport general pvid 2
exit
interface ethernet g11
switchport general pvid 2
exit
interface ethernet g12
switchport general pvid 2
exit
interface ethernet g13
switchport general pvid 2
exit
interface ethernet g14
switchport general pvid 2
exit
interface ethernet g15
switchport general pvid 2
exit
interface range ethernet g(1-15)
switchport general allowed vlan add 2 untagged
exit
interface vlan 2
name iscsi
exit
voice vlan oui-table add 0001e3 Siemens_AG_phone________
voice vlan oui-table add 00036b Cisco_phone_____________
voice vlan oui-table add 00096e Avaya___________________
voice vlan oui-table add 000fe2 H3C_Aolynk______________
voice vlan oui-table add 0060b9 Philips_and_NEC_AG_phone
voice vlan oui-table add 00d01e Pingtel_phone___________
voice vlan oui-table add 00e075 Polycom/Veritel_phone___
voice vlan oui-table add 00e0bb 3Com_phone______________
iscsi target port 860 address 0.0.0.0
iscsi target port 3260 address 0.0.0.0
iscsi target port 9876 address 0.0.0.0
iscsi target port 20002 address 0.0.0.0
iscsi target port 20003 address 0.0.0.0
iscsi target port 25555 address 0.0.0.0
interface vlan 1
ip address 192.168.0.201 255.255.255.0
exit
interface vlan 2
ip address 197.167.2.1 255.255.255.0
exit
ip default-gateway 192.168.0.254
hostname iscsi1
line telnet
password 707db5b050d4ffd5cc3ff29d3624aab5 encrypted
exit
username admin password 707db5b050d4ffd5cc3ff29d3624aab5 level 15 encrypted
snmp-server community Dell_Network_Manager rw view DefaultSuper

Default settings:
Service tag: 3B91GH1

SW version 2.0.0.43 (date  02-Sep-2010 time  09:01:52)

Gigabit Ethernet Ports
=============================
no shutdown
speed 1000
duplex full
negotiation
flow-control off
mdix auto
no back-pressure

interface vlan 1
interface port-channel 1 - 8

spanning-tree
spanning-tree mode STP

qos basic
qos trust cos
iscsi1#

iscsi1# show ports jumbo-frame

 Jumbo frames are enabled
 Jumbo frames will be enabled after reset
iscsi1#

The flowcontrol Interface Configuration mode command configures flow control on a given interface. To restore the default, use the no form of this command.

When flow control is ON, the head-of-line-blocking mechanism of this port is disabled.
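So, to put flow control back to its default on these ports - a sketch in the 5424's own CLI, following the running config above:

iscsi1# configure
iscsi1(config)# interface range ethernet all
iscsi1(config-if)# no flowcontrol
iscsi1(config-if)# exit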


(this is really another question!)
"When flow control is ON, the head-of-line-blocking mechanism of this port is disabled." What does that mean?
I think port jumbo-frame is enabled for the whole device.

On this switch you have the option of jumbo frames on or off for the switch as a whole. I don't think you can specify a port.

Here's a good article on flow control, which also explains HOL:

http://virtualthreads.blogspot.com/2006/02/beware-ethernet-flow-control.html
Try "no port jumbo frame" to turn it off for the switch. I mostly use Cisco though so that might not work on Dell switch.
@bgoering: no port jumbo-frame is correct.

You also need to do a switch restart.

I've just been chatting to my network guy about this, who has stated that the PowerConnect 5324 (now discontinued) doesn't support jumbo frames with iSCSI very well, due to a lack of buffers.

So you may want to test this in production.
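Putting those two steps together - a sketch against the 5424 CLI (save the running config before the reload, or the change is lost):

iscsi1# configure
iscsi1(config)# no port jumbo-frame
iscsi1(config)# exit
iscsi1# copy running-config startup-config
iscsi1# reload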
My network guy just confirmed: the 5324 switch supports jumbo frame configuration at the switch level only, not per port.

Upgrade to 62xx switches for per-port control.
The PowerConnect 5424 and 5448 are more suitable for jumbo frames and iSCSI use because they do not have the 5324's limited-buffer problem.
I actually would recommend Cisco 3750 or above class switches :)
Guys, I followed your advice and disabled jumbo frames - still no good.
What else can cause it? Take into consideration that those 2x 5424 switches are connected to both the MD3200i with 2 controllers (VLAN 2) and the office LAN (VLAN 1) - which is directly connected to an HP ProCurve 48G (can't remember the exact model) - on which I'm sure no jumbo frames are set.

Gotta solve this - it's holding me back from delivering the `key` to the customer!

Thx
My apologies, I don't know why I went off on 5324 switches when yours are 5424s (I thought it was odd you were using discontinued switches!).

Do these Dell servers have any local storage you could transfer to, and then shunt the VMs to the iSCSI datastore?

Have you tested transfer speeds from the servers' local disks to the iSCSI datastores?
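If you want to time that hop in isolation, a virtual disk can be cloned between datastores straight from the console - a sketch, with the datastore and VM names as placeholders:

# Clone a disk from local SAS storage to the iSCSI datastore, and time it
time vmkfstools -i /vmfs/volumes/local-sas/myvm/myvm.vmdk /vmfs/volumes/iscsi-lun01/myvm/myvm.vmdk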
Remember you have to turn off jumbo frames everywhere - not just on the switch. Turn them off on ESX and on your storage as well.
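On the ESX/ESXi side, a sketch for checking and resetting the MTU - "vSwitch1" as the iSCSI vSwitch is an assumption, so adjust to your layout:

# Show current vSwitches and VMkernel NICs, including their MTUs
esxcfg-vswitch -l
esxcfg-vmknic -l

# Set the iSCSI vSwitch back to the standard 1500-byte MTU
# (an existing VMkernel NIC may need to be removed and re-added to change its own MTU)
esxcfg-vswitch -m 1500 vSwitch1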
Any idea where I turn off jumbo frames on the MD3200i?
I don't have one of those devices, but from http://www.dell.com/downloads/global/products/pvaul/en/powervault-md3200-array-tuning.pdf

"To edit the Jumbo Frames settings in the MDSM, select the Setup tab, Configure iSCSI Host Ports, Advanced (see Figure 6). Jumbo Frames can be also be set using the CLI. If using an MD3200i with Jumbo Frames ensures the Ethernet switches and host NIC(s) also have Jumbo Frames enabled and are set to an equivalent value."
Using the Modular Disk Storage Manager (MDSM) interface, go to iSCSI > Configure iSCSI Host Ports. For each port, click Advanced Host Port Settings. In the Advanced Host Port Settings window, check the Enable jumbo frames check box and set the MTU size to 9000. Click OK.

It will REBOOT the controllers!!
so disable it!

It will REBOOT the controllers!!
Actually, uncheck the Enable box and set the MTU to 1500 to disable jumbo frames...
Did you not ENABLE jumbo frames on this storage to start with?
When you untick the box, it defaults to 1500 MTU!
I think so; I'll check tomorrow.
From your experience - how much of an effect do jumbo frames really have in an iSCSI environment such as this?
For what it's worth - if jumbo frames are to work at all, they must be enabled (with the same MTU) on both the server and the storage. They must also be enabled on any switches in the infrastructure with the same or a larger MTU. The MTU negotiation is done between the server and the storage, so as long as the switches support at least the MTU on each end, all should work.

By convention (there is no official document) an MTU of 9000 is typically used.

As hanccocka asked, was it enabled on the storage to begin with? If not, that may have been the root of the problem, as the MTU would have had to be negotiated constantly.
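For a quick end-to-end test of the jumbo path from an ESX/ESXi host, something like the sketch below can help; 8972 bytes is a 9000-byte MTU minus the 28 bytes of IP/ICMP headers, and the target address is a placeholder for one of the MD3200i's iSCSI ports on VLAN 2:

# Send a full-size jumbo packet with the don't-fragment bit set
vmkping -s 8972 -d 197.167.2.10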
Where we have the ability to enable jumbo frames on supported storage networks, we do it. We've seen massive performance increases in read and write latencies.

And we've recommended that some customers remove hardware iSCSI initiators which don't support jumbo frames, in favour of the software iSCSI initiator with jumbo frames!

Performance is better!
I don't use iSCSI on production networks (only Fibre Channel there), only in lab/development networks without enough traffic to quantify. But I understand the gains can be significant.
jmgallo:

We're purchasing two servers and an MD3200, and Dell is trying to sell us two 6224s at $2,400, but the online backup vendor we use (who we have a good relationship with) says 6224s are overkill; they use older 5224s and buy extras for spares. Do we really need $800 (5424) to $1,200 (6224) switches for a small SAN? I don't want to overload the switch, but we don't have a blank/open checkbook either. FYI, we're going to use Hyper-V... I know this post is VMware.

Thanks.
I was also thinking of going with the newer 5324s, since they're almost the same price and a few years newer.
@jmgallo - shouldn't you create a question for that information?
Yes, sorry I didn't create my own question; I just did a search and this popped up. Since the 5x24 and 6424 switches were mentioned so much, I figured one of you would be knowledgeable enough to give me a quick answer.

My apologies.
My apoligies.