Solved

Using Microsoft iSCSI Initiator inside Win2k3 x64 as Guest OS to connect to SAN

Posted on 2009-05-14
2,179 Views
Last Modified: 2013-11-14
Hi All,

How do I enable the Microsoft iSCSI Initiator inside a Windows 2003 x64 guest OS / virtual machine?
I've created the LUN on my SAN, but then I realized that with the current network configuration the guest couldn't reach the iSCSI SAN IP.

What I'd like to achieve here is better performance by accessing the newly created iSCSI LUN directly through the Enhanced VMXNET adapter.

Thanks.
http://www.microsoft.com/downloads/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en


Deployment.jpg
vNetwork.jpg
Question by:jjoz
14 Comments
 
LVL 13

Accepted Solution

by:
SagiEDoc earned 150 total points
ID: 24382841
I honestly cannot think of one good reason you would get better performance by connecting the iSCSI storage directly to the VM. Technically, if you set it up the conventional way you would achieve the same result. Anyway, what you need to do is add another virtual switch that connects to the iSCSI network. Then add a NIC to the VM connected to that virtual switch, load your initiator and configure it, configure the iSCSI array to allow the host, and that should have you up and running.
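For readers who want to see the host-side part of that answer as code, here is a minimal sketch using pyVmomi (a much newer toolchain than the ESXi 3.5-era host in this thread, where you would use the VI Client or esxcfg-vswitch instead). The host address, credentials, vSwitch2, vmnic2 and the "iSCSI-Guest" port group name are all placeholder assumptions.

```python
# Hypothetical sketch: create a vSwitch + port group for guest iSCSI traffic.
# All names (vSwitch2, vmnic2, "iSCSI-Guest", host/credentials) are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()              # lab-only: skip cert checks
si = SmartConnect(host="esxi.example.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    net = host.configManager.networkSystem

    # 1. New virtual switch uplinked to the physical NIC facing the iSCSI network
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=64,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"]))
    net.AddVirtualSwitch(vswitchName="vSwitch2", spec=vss_spec)

    # 2. Port group that the guest's extra vNIC will attach to
    pg_spec = vim.host.PortGroup.Specification(
        name="iSCSI-Guest", vlanId=0, vswitchName="vSwitch2",
        policy=vim.host.NetworkPolicy())
    net.AddPortGroup(portgrp=pg_spec)
finally:
    Disconnect(si)
```

The extra vNIC is then added to the VM's settings and pointed at the "iSCSI-Guest" port group, after which the Microsoft iSCSI Initiator inside the guest can reach the array.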
 
LVL 1

Author Comment

by:jjoz
ID: 24382887
OK - the reason is that I'm seeing very slow performance from my VMs deployed on the iSCSI SAN VMFS datastore. The deployment diagram above already follows, I believe, the best practice found around the net of segregating the SAN network from the server network.

The MD3000i is just a small entry-level SAN that can only use a single cable to reach the iSCSI target, so no matter how complex the configuration is, the I/O performance won't improve just by adding a managed switch to perform VLAN trunking.

http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html --> the last question, #4, is the eye opener.

So with the deployment diagram I supplied above, it is not possible to achieve performance greater than a single-cable connection :-|

 
LVL 1

Author Comment

by:jjoz
ID: 24383123
Hi Robocat,

Thanks for your reply.

I've successfully configured the network path on the ESXi side, so consider that part done. However, I tried to configure the Microsoft iSCSI Initiator following this guide: http://www.zdnetasia.com/techguide/storage/0,39045058,62030894,00.htm

but it failed with the error message below.



Event Type:	Error
Event Source:	Virtual Disk Service
Event Category:	None
Event ID:	10
Date:		14/05/2009
Time:		7:49:47 PM
User:		N/A
Computer:	VMWS03-01
Description:
Unexpected provider failure. Restarting the service may fix the problem. Error code: 80042405@0200001A
 

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
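Before digging into the Virtual Disk Service error itself, it is worth confirming that the guest can reach the array's iSCSI portal at all. A small hedged sketch (the portal IP is a placeholder, and this only tests basic TCP reachability on the standard iSCSI port 3260):

```python
# Quick reachability check from the guest to the iSCSI portal (TCP 3260).
# The address below is a placeholder for the MD3000i's iSCSI portal IP.
import socket

portal = ("192.168.130.101", 3260)   # hypothetical SAN portal IP:port
try:
    with socket.create_connection(portal, timeout=5):
        print("Portal reachable - guest networking to the SAN looks OK")
except OSError as exc:
    print(f"Cannot reach {portal[0]}:{portal[1]} - "
          f"check vSwitch/vNIC/IP configuration ({exc})")
```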


 
LVL 1

Author Comment

by:jjoz
ID: 24393722
Yes, it seems the performance is just the same ("slow") either way, because the VM is also in the same RAID-5 disk group as the SAN LUN.
 
LVL 30

Assisted Solution

by:Duncan Meyers
Duncan Meyers earned 350 total points
ID: 24436971
What is slow? How many MB per second are you getting?

Your comment that the performance is the same either way because the VM is in the same RAID-5 disk group points to a much more likely cause of performance issues than a single 1Gb path between storage and server.
 
LVL 1

Author Comment

by:jjoz
ID: 24437018
Yes, I believe this is the price to pay with ESXi iSCSI networking.

See the attachments - what do you reckon about the benchmark results?

Thanks.

iSCSI-SAN-SAS-15krpm.jpg
Local-SATA-7200rpm.jpg
 
LVL 30

Assisted Solution

by:Duncan Meyers
Duncan Meyers earned 350 total points
ID: 24437322
That's pretty damn good. You're getting a consistent 60MB/sec out of the MD3000i across iSCSI. The burst speed is largely irrelevant - it's an indication of how fast you can write to on-disc cache, and a burst may last only on the order of a millisecond (the performance graph doesn't show this). Sustained throughput is what you need, and your system is working well. Likewise, the access time is very good at 4.8ms on the MD3000i, while it is 12.3ms on the single disc. 60-80MB/sec is about as good as it gets across a 1Gb/sec Ethernet link once you allow for IP and Ethernet overheads. You can get some worthwhile improvements by enabling jumbo frames on the storage array, on VMware ESX (minimum of ESX 3.5) and on the switches the frames pass through. Jumbo frames help because you can transport up to 9K per packet rather than 1.5K, so the amount of data carried relative to the overhead goes up dramatically.
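A rough back-of-envelope illustration of the jumbo-frame point, using the usual Ethernet/IPv4/TCP header sizes rather than figures measured on this setup: the data-to-header ratio improves roughly six-fold because each I/O needs about six times fewer frames, which is also where the per-packet processing savings come from.

```python
# Rough arithmetic behind the jumbo-frame point above. Header sizes are the
# standard Ethernet/IPv4/TCP figures, not measurements from this MD3000i setup.
ETH = 14 + 4 + 8 + 12          # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP = 20 + 20               # IPv4 + TCP headers inside each frame

for mtu in (1500, 9000):
    payload = mtu - IP_TCP                 # data bytes carried per frame
    overhead = ETH + IP_TCP                # fixed cost paid per frame
    frames_per_64k = -(-65536 // payload)  # frames needed for a 64 KiB I/O
    print(f"MTU {mtu}: {payload} B data/frame, "
          f"data:overhead = {payload / overhead:.0f}:1, "
          f"{frames_per_64k} frames per 64 KiB I/O")
```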

Bottom line: this is *not* slow at all. A VMware environment generates highly random I/O (IOPS) rather than a huge amount of bandwidth (MB/sec), and on what you've posted you've got a pretty strong set-up. As the number of VMs increases, you will have to add discs to the array to maintain performance. With the enormous discs now available, it is all too easy to size for the capacity you need, whereas for VMware you need to size for the performance requirement. Get the performance right and storage usually takes care of itself.
 
LVL 1

Author Comment

by:jjoz
ID: 24437464

I'd say the performance is standard, not that outstanding though. You can see from the benchmark results that it is still lower than the local 7200 rpm SATA disk for linear access and throughput (which matters for multimedia streaming and large file transfers).

But in my case the random access test performs better, as do the seek time and I/O latency, thanks to the 15k RPM SAS drives (which is why I use this SAN just for my application server and database VMs).

Based on the following articles from VMware:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1001251
"For any given target (iSCSI target IP), you establish one link through a NIC in the team. An additional session to a separate target (different iSCSI target IP) can establish a connection through the second NIC in the team. This is not guaranteed because it is dependent on a number of variables we cannot control. All luns presented on one IP address passes data through the connection established during the iSCSI session login."

http://pubs.vmware.com/vi3i_i35/iscsi_san_config/wwhelp/wwhimpl/common/html/wwhelp.htm?context=iscsi_san_config&file=esx_san_cfg_reqs.4.7.html
"Note: *You can configure some ESX Server systems to load balance traffic across multiple HBAs to multiple LUNs with certain active-active arrays. To do this, assign preferred paths to your LUNs so that your HBAs are being used evenly.*" --> This applies to my iSCSI MD3000i too, which is active-active since it has dual controllers.

Hope these articles are helpful for you too.
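To make the "assign preferred paths to your LUNs so that your HBAs are being used evenly" note above concrete, here is a purely conceptual sketch (the LUN and path names are invented; in a real setup this is done in the VI Client path policy, not a script): each LUN's preferred path is spread across the available controller ports in round-robin fashion.

```python
# Conceptual only: distribute preferred paths for several LUNs across two
# controller ports so that both links carry traffic. Names are made up.
luns = ["VMFS-01", "VMFS-02", "VMFS-03", "VMFS-04"]
paths = ["controller0-port0", "controller1-port0"]

preferred = {lun: paths[i % len(paths)] for i, lun in enumerate(luns)}
for lun, path in preferred.items():
    print(f"{lun}: preferred path -> {path}")
```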
 
LVL 30

Assisted Solution

by:Duncan Meyers
Duncan Meyers earned 350 total points
ID: 24437548
>I'd say the performance is standard, not that outstanding though. You can see from the benchmark results that it is still lower than the local 7200 rpm SATA disk for linear access and throughput...

All good. Sequential transfers look great in benchmark results, but are almost completely irrelevant in a VMware environment. One VM may be writing sequentially, but once you mix that I/O pattern with the I/O from every other VM, you get *highly* random I/O at the physical disc, so you have to size your storage for highly random I/O. A single SATA drive tops out at 80-140 IOPS. A single 15K SAS drive can produce 180-360 IOPS. A RAID set of 10 SAS drives could produce 1800-3600 IOPS. Your benchmark results show good latency, so the MD3000i is working nicely with the benchmark workload you are presenting, but keep an eye on the latency. The first sign that you need more discs to handle the workload is that latency starts increasing.
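A hedged sketch of the "size for IOPS, not capacity" arithmetic above, using the per-drive figures just quoted; the 2,000 IOPS target is an invented example workload, not a measurement from this environment.

```python
# Rough spindle-count sizing from the per-drive figures above
# (80-140 IOPS for 7.2k SATA, 180-360 IOPS for 15k SAS).
import math

per_drive = {"7.2k SATA": (80, 140), "15k SAS": (180, 360)}
target_iops = 2000                      # hypothetical aggregate VM workload

for drive, (low, high) in per_drive.items():
    most = math.ceil(target_iops / low)   # pessimistic: drives at the low end
    least = math.ceil(target_iops / high) # optimistic: drives at the high end
    print(f"{drive}: roughly {least}-{most} drives for {target_iops} IOPS")
```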

Again, once your VMware environment is carrying a production load, you'll see high IOPS and relatively low MB/sec.

BTW - best practice is not to have the service console and production network on the same NIC. You may run into problems with VMotion.
 
LVL 1

Author Comment

by:jjoz
ID: 24437596
Great - sounds like I'm on the right track, MeyersD.

A little more background on my SAN:

14 x 300 GB 15k rpm SAS drives, all in one RAID-5 group presented as a single target containing multiple virtual disks (LUNs).
 
LVL 30

Assisted Solution

by:Duncan Meyers
Duncan Meyers earned 350 total points
ID: 24437634
Best practice is to have multiple smaller LUNs and VMFS partitions of about 500GB each, although if you've got fewer than 16 or so VMs you shouldn't have an issue. If you can, I'd create multiple 500GB LUNs on the MD3000i and present those instead, so that you don't have to rework the array in 6 to 12 months' time.
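Rough capacity arithmetic for that suggestion, assuming the 14 x 300 GB RAID-5 group described above with no hot spare (adjust if one is configured):

```python
# Capacity arithmetic for the suggested layout: 14 x 300 GB drives in RAID-5
# (one drive's worth of capacity lost to parity; no hot spare assumed),
# carved into ~500 GB VMFS LUNs.
drives, size_gb, lun_gb = 14, 300, 500

usable_gb = (drives - 1) * size_gb      # RAID-5 parity costs one drive
print(f"Usable: ~{usable_gb} GB -> about {usable_gb // lun_gb} LUNs of {lun_gb} GB")
```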
 
LVL 30

Expert Comment

by:Duncan Meyers
ID: 24437636
BTW - you should get about 2500 - 5000 IOPS day in, day out with that configuration. Nice!
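That figure lines up with the per-drive numbers quoted earlier in the thread; a quick sanity check:

```python
# 14 x 15k SAS spindles at the 180-360 IOPS/drive range quoted above.
drives, low, high = 14, 180, 360
print(f"~{drives * low}-{drives * high} random IOPS")   # ~2520-5040
```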
 
LVL 1

Author Closing Comment

by:jjoz
ID: 31581342
Thanks to SagiEDoc and MeyersD, who answered my questions and gave a proper explanation of my current problem.
