Croftkey

asked on

DL385 G6 Hyper-V clustering using iSCSI - issues

Hi, I wonder if someone can advise me on what has so far been a real headache. I have 3 HP DL385 G6 servers running Windows Server 2008 R2.
These servers have 4 embedded gigabit Ethernet ports and 1 marked iLO.
In each server I have also installed a quad-port NC375T, which adds another 4 gigabit Ethernet ports.

I built out the servers with HP SmartStart 8.50, selecting all options including the PSP, using branded media with the disk image from the Microsoft Gold Partner site.

There were no issues with the build. The OS is on a mirrored 146 GB volume, all of which is dedicated to the C: drive; there is not a lot of space after the data centre build, about 70 GB free.

I have an MSA2000 with 24 SAS 500 GB drives, attached to an MSA70 enclosure with 19 500 GB drives. The storage utility via web browser works fine and I can set up disks and volumes without problems.

The issue that is driving me mad now is this: I can invoke the Microsoft iSCSI Initiator fine, see the portals on the storage, and assign targets OK. I created a 2 TB LUN for the VM VHD disks and mapped it to all 3 DL385 servers. They see it OK in Device Manager.

The disk appears in Disk Management as offline, and when I try to bring it online it hangs on all 3 servers.
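For reference, the diskpart sequence I have been using to try to bring the LUN online is roughly the following (the disk number is an assumption; it differs per server, so I check "list disk" first):

```shell
rem Saved as bringonline.txt and run with: diskpart /s bringonline.txt
rem Disk 1 is assumed to be the iSCSI LUN - verify with "list disk" first.
list disk
select disk 1
attributes disk clear readonly
online disk
rem A LUN of 2 TB or larger must be initialised as GPT, not MBR:
convert gpt
```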

The servers are attached as follows to 2 layer-3 switches:

Ports 5 and 6 (NC375T) are connected, using Cat 5 for now, to ports on separate switches, all of which are set up on the same VLAN. I have used the NCU to team these and assigned a logical IPv4 address, which pings fine. I didn't use the 4 embedded NICs, as the NCU doesn't allow me to assign them for iSCSI. So I have used 2 of the embedded ones as paths to the switches, again separated for redundancy. I know you should not team these, as teaming is not allowed for iSCSI; I believe the iSCSI Initiator takes care of the MPIO.
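In case it's relevant, my understanding of how MPIO gets enabled for the iSCSI paths on 2008 R2 (after adding the Multipath I/O feature) is something like the following; I'm not certain this is the step I'm missing:

```shell
rem Add the Multipath I/O feature (no automatic restart).
ocsetup MultipathIo /norestart
rem Claim all Microsoft iSCSI-attached devices for MPIO.
rem -r requests a reboot when done, -i installs support for the device ID.
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"
```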

The iLO is connected to one switch.

I do intend to use Cat 5e where possible to take advantage of gigabit speeds, jumbo frames, etc. when this goes live.

Apart from the hang, I find that although I assign static IPs using the NCU, when I ran cluster validation it reported that the iSCSI NICs were running APIPA-assigned addresses, yet the NCU still shows the static addresses I assigned.
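My plan for assigning the static addresses directly via the native adapter settings, bypassing the NCU, would look something like this (adapter names, addresses and subnets are examples only):

```shell
rem Assign a static IPv4 address to each iSCSI NIC directly.
rem Adapter names and addresses below are placeholders for my setup.
netsh interface ipv4 set address name="iSCSI-A" static 192.168.10.11 255.255.255.0
netsh interface ipv4 set address name="iSCSI-B" static 192.168.20.11 255.255.255.0
rem Verify what the stack actually reports - APIPA shows as 169.254.x.x:
netsh interface ipv4 show addresses
ipconfig /all
```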

From the switches I have 2 cables, 1 going to each controller NIC on the SAN. I also have 4 connections going to the 4 iSCSI ports (2 on each controller).

I see from some forums that some people don't use the NCU at all, or install it before they enable Hyper-V. Yet how can I remove the NCU, and how can I team those cards without it?

My plan was to set up the cabling, get the storage ready, get failover clustering done, and then install Hyper-V and test the VMs for failover. Yet I am stuck.
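The validation step I run before creating the cluster is roughly the following (node names are placeholders for my three DL385 servers):

```shell
rem Run cluster validation from PowerShell on Server 2008 R2.
rem This is the same check as the Validate a Configuration wizard.
powershell -command "Import-Module FailoverClusters; Test-Cluster -Node node1,node2,node3"
```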

I had to do many reboots of the servers and the SAN to get back to the stage I am at now, as the Virtual Disk service wouldn't start, etc. I was about to do a full rebuild.

I am going to run HP Smart Update on all 3 machines to get everything to the latest level, and I have removed the 2 TB LUN and set up a smaller 1 TB LUN. I plan to use the native adapter settings to assign static addresses to the iSCSI NICs and then see if that helps. If anyone out there has had similar issues setting up this type of environment, please can you advise me?

Another issue seen: at one point I could not get the NCU to run on the first node, as it said network properties were open, yet I didn't have them open, even after a few reboots.

All the hardware is listed as Windows Server 2008 R2 ready and Hyper-V ready on the HP website.


ASKER CERTIFIED SOLUTION
rdhoore108
Croftkey

ASKER

Yes, going to a 1 TB LUN has allowed it to work, along with updating to the latest PSP.