Using Microsoft iSCSI Initiator inside Win2k3 x64 as Guest OS to connect to SAN

Hi All,

How do I enable the Microsoft iSCSI Initiator inside a Windows 2003 x64 guest OS / virtual machine?
I've created the LUN on my SAN, but then realized that with the current network configuration the guest cannot reach the iSCSI SAN IP.

What I'd like to achieve is better performance by accessing the iSCSI LUN I've just created directly through the Enhanced VMXNET adapter.




Brett Danney (IT Architect) commented:
I honestly cannot think of one good reason you would get better performance by connecting the iSCSI storage directly to the VM; technically, set up the conventional way, you would achieve the same result. Anyway, what you would need to do is add another virtual switch that connects to the iSCSI network. Then add a NIC to the VM on that virtual switch, load and configure your initiator, configure the iSCSI array to allow the host, and that should have you up and running.
jjoz (Author) commented:

The reason is that I'm seeing very slow performance from my VMs deployed on the iSCSI SAN VMFS datastore. I believe the deployment diagram above already follows the best practice found around the net of segregating the SAN network from the server network.

The MD3000i is just a small entry-level SAN that can only use a single cable to reach a given iSCSI target, so no matter how complex the configuration, I/O performance will not be any better than adding a managed switch to perform VLAN trunking. --> the last question, #4, was the eye-opener.

So with the deployment diagram I supplied above, it isn't possible to achieve performance greater than a single-cable connection. :-|

jjoz (Author) commented:
Hi Robocat,

thanks for your reply.

I've successfully configured the network path on the ESXi side, so that part is done.
However, when I tried to configure the Microsoft iSCSI Initiator following this guide:,39045058,62030894,00.htm

it failed with the error message below.

Event Type:	Error
Event Source:	Virtual Disk Service
Event Category:	None
Event ID:	10
Date:		14/05/2009
Time:		7:49:47 PM
User:		N/A
Computer:	VMWS03-01
Unexpected provider failure. Restarting the service may fix the problem. Error code: 80042405@0200001A
For more information, see Help and Support Center at



jjoz (Author) commented:
yes, it seems that the performance is all the same "SLOW" because the VM is also in the same SAN LUN RAID-5 diskgroup.
Duncan Meyers commented:
What is slow? How many MB per second are you getting?

Your comment that "yes, it seems that the performance is all the same "SLOW" because the VM is also in the same SAN LUN RAID-5 diskgroup" is a much more likely cause of performance issues than a single 1Gb path between storage and server.
jjoz (Author) commented:
Yes, I believe this is the price to pay with ESXi iSCSI networking.

See the attachment; what do you reckon about the benchmark results?


Duncan Meyers commented:
That's pretty damn good. You're getting a consistent 60MB/sec out of the MD3000i across iSCSI. The burst speed is largely irrelevant - it's an indication of how fast you can write to on-disc cache, and the burst may last on the order of a single millisecond (the performance graph doesn't show this). Sustained throughput is what you need, and your system is working well. Likewise, the access time is very good at 4.8ms on the MD3000i, versus 12.3ms on the single disc. 60-80MB/sec is about as good as it gets across a 1Gb/sec Ethernet link once you allow for IP and Ethernet overheads. You can get some worthwhile improvement by enabling jumbo frames on the storage array, on VMware ESX (3.5 minimum) and on the switches the frames pass through. Jumbo frames help because you can carry up to 9K per packet rather than 1.5K, so the amount of data carried relative to the overhead goes up substantially.
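As a rough illustration of that overhead arithmetic (the header sizes below are standard Ethernet/IPv4/TCP figures; iSCSI PDU headers add a little more and are ignored here):

```python
# Rough payload efficiency of an Ethernet link at standard vs jumbo MTU.
ETH_OVERHEAD = 14 + 4      # Ethernet header + frame check sequence
IP_TCP_OVERHEAD = 20 + 20  # IPv4 + TCP headers, no options

def payload_efficiency(mtu: int) -> float:
    """Fraction of on-the-wire bytes that carry application data."""
    payload = mtu - IP_TCP_OVERHEAD
    wire = mtu + ETH_OVERHEAD
    return payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload")
```

The raw efficiency gain looks modest (roughly 96% to 99%), but jumbo frames also cut the packet count - and with it the per-packet CPU and interrupt load on host and array - by about six times for the same throughput.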

Bottom line is that this is *not* slow at all. A VMware environment generates highly random I/O (IOPS) rather than a huge amount of bandwidth (MB/sec), and on what you've posted, you've got a pretty strong set-up. As the number of VMs increases, you will have to add discs to the array to maintain performance. With the enormous discs now available, it is too easy to size for the capacity you need, whereas for VMware you need to size for the performance requirement. Get the performance right and storage capacity usually takes care of itself.
jjoz (Author) commented:

I'd say the performance is standard rather than outstanding; as the benchmark shows, it is still no better than a local 7200 RPM SATA disk for linear access and throughput (which suits multimedia streaming and large file transfers).
In my case, though, random access performs better, as do seek time and I/O latency, thanks to the 15K RPM SAS drives I use (which is why I use this SAN only for my application server and database VMs).
Based on the following article from VMware:

"For any given target (iSCSI target IP), you establish one link through a NIC in the team. An additional session to a separate target (different iSCSI target IP) can establish a connection through the second NIC in the team. This is not guaranteed because it is dependent on a number of variables we cannot control. All luns presented on one IP address passes data through the connection established during the iSCSI session login."

"Note *You can configure some ESX Server systems to load balance traffic across multiple HBAs to multiple LUNs with certain active-active arrays. To do this, assign preferred paths to your LUNs so that your HBAs are being used evenly.*" --> this applies to my iSCSI MD3000i too, which is active-active since it has dual controllers.

Hope this article is helpful for you too.
Duncan Meyers commented:
>I'd say the performance is standard rather than outstanding; as the benchmark shows, it is still no better than a local 7200 RPM SATA disk for linear access and throughput (which suits multimedia streaming and large file transfers).

All good. Sequential transfers look great in benchmark results, but are almost completely irrelevant in a VMware environment. One VM may be writing sequentially, but once you mix that I/O pattern with the I/O from every other VM, you get *highly* random I/O at the physical disc, so you have to size your storage for highly random I/O. A single SATA drive tops out at 80-140 IOPS. A single 15K SAS drive can produce 180-360 IOPS. A RAID set of 10 SAS drives could produce 1800-3600 IOPS. Your benchmark results show good latency, so the MD3000i is handling the benchmark workload you are presenting nicely, but keep an eye on the latency: the first sign that you need more discs to handle the workload is that latency starts increasing.
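The sizing arithmetic can be sketched like this (the 70% read mix and the RAID-5 write penalty of 4 back-end I/Os per host write are illustrative assumptions, not measured figures):

```python
# Back-of-the-envelope IOPS sizing using the per-drive figures above.
def backend_iops(drives: int, iops_per_drive: int) -> int:
    """Raw IOPS the spindles can deliver in aggregate."""
    return drives * iops_per_drive

def host_iops(raw: float, read_fraction: float, write_penalty: int = 4) -> float:
    """Host-visible IOPS once RAID-5's write penalty (4 back-end I/Os
    per host write) is factored into the workload mix."""
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_fraction * write_penalty)

raw = backend_iops(10, 180)              # 10 x 15K SAS, conservative figure
print(round(host_iops(raw, read_fraction=0.7)))  # roughly 947 host IOPS
```

So the same 10-drive set that delivers 1800-3600 raw IOPS supports noticeably fewer host writes, which is why latency climbs quickly once the write load grows.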

Again, once your VMware environment is carrying a production load, you'll see high IOPS and relatively low MB/sec.

BTW - best practice is not to have the service console and the production network on the same NIC. You may run into problems with VMotion.
jjoz (Author) commented:
Great, sounds like I'm on the right track, MeyersD.

A little more background on my SAN:

14 x 300GB 15K RPM SAS drives, all in RAID-5, presented as a single target containing multiple virtual disks (LUNs).
Duncan Meyers commented:
Best practice is to have multiple smaller LUNs with VMFS partitions of about 500GB each, although if you've got fewer than 16 or so VMs, you shouldn't have an issue. If you can, I'd create multiple 500GB LUNs on the MD3000i and present those instead, so that you don't have to rework the array in 6 to 12 months' time.
Duncan Meyers commented:
BTW - you should get about 2500 - 5000 IOPS day in, day out with that configuration. Nice!
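That estimate lines up with the per-drive figures quoted earlier; a quick sanity check (ignoring the RAID-5 write penalty, which reduces the host-visible number for write-heavy loads):

```python
# 14 spindles at the 180-360 IOPS quoted earlier for 15K SAS drives.
drives = 14
low, high = drives * 180, drives * 360
print(f"{low} - {high} raw IOPS")   # 2520 - 5040 raw IOPS
```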
jjoz (Author) commented:
Thanks to SagiEDoc and MeyersD, who answered my questions and gave a clear explanation of my problem.
Question has a verified solution.
