MrVault asked:
Determine iSCSI overhead via perfmon?
Is there any way to determine how much overhead we incur by using Microsoft's iSCSI software initiator with an onboard NIC? We're not sure if we want to invest $500+ in an iSCSI HBA card. I've heard it can cut CPU utilization by 60%.
Try adding a Process counter in Perfmon: % Processor Time with the iSCSI process selected.
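For example, a minimal sketch in Python using the third-party psutil package (the process name below is a placeholder; the Microsoft initiator's data path lives mostly in kernel drivers, so it may not show up as its own process):

# Sample per-process CPU, as suggested above. Requires Python 3
# and psutil. "iscsicli.exe" is a hypothetical name -- substitute
# whatever Perfmon actually lists on your box.
import psutil

TARGET = "iscsicli.exe"

found = False
for p in psutil.process_iter(["name"]):
    if (p.info["name"] or "").lower() == TARGET:
        found = True
        # cpu_percent(5) averages this process's usage over 5 seconds
        print(f"{p.info['name']} (pid {p.pid}): {p.cpu_percent(5):.1f}% CPU")
if not found:
    print(f"No process named {TARGET}; the overhead is likely kernel-side.")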
ASKER
What is the iSCSI process called?
What is your CPU running at now?
ASKER
Some are running at 40-70%. They are SQL Servers running intensive operations, but it's hard to tell if the iSCSI piece is a bottleneck worth investing in.
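One heuristic for teasing that apart, sketched in Python with psutil: the software initiator's work is done in the kernel, so comparing system time against SQL's user time during a busy period puts a rough ceiling on what an HBA could recover.

# Split total CPU into user vs. system (kernel) time. Software
# iSCSI work is kernel-side, so system time bounds what a
# hardware HBA could offload -- a heuristic, not a measurement.
import psutil

psutil.cpu_times_percent(None)        # prime the counters
for _ in range(6):
    t = psutil.cpu_times_percent(10)  # six 10-second samples
    print(f"user {t.user:.1f}%  system {t.system:.1f}%  idle {t.idle:.1f}%")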
ASKER CERTIFIED SOLUTION
ASKER
When you say just 10%, do you mean it increases CPU by 10%, or that it puts iSCSI overhead at only 10%?
I've never used iometer before.
SOLUTION
ASKER
Is iSCSI overhead tied to activity? Meaning, is the overhead percentage the same whether there's data going over iSCSI to the bulk storage somewhere or no data being transferred at all?
I won't call myself an expert in this area, but I imagine there is some CPU load attributed to the simulation of a hardware iSCSI adapter. So I would think it only manifests during I/O, which is why I thought iometer would be a good way to exercise it: you get about the simplest I/O you can get. The trouble with looking at CPU while running SQL is that a large portion of the CPU is due to SQL activity, with a smaller portion due to I/O, and of that an even smaller portion would be due to having a software iSCSI adapter. On our servers, I've observed that total CPU is fairly low during stress testing of the I/O, so I've never worried about needing a hardware adapter. The main drawback to the test I suggest is that if you get a high CPU, you can't reach any conclusion. But a low CPU would lead to the conclusion that there would be little benefit to going hardware.
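A rough way to run that experiment without iometer, sketched in Python (the file path, block size, and duration are assumptions; psutil is a third-party package): drive simple sequential writes at the iSCSI-backed volume and watch total CPU. As reasoned above, a low reading suggests little to gain from a hardware HBA, while a high one is inconclusive.

# Generate simple sequential I/O against an iSCSI-backed volume
# while sampling total CPU. E:\ is a placeholder -- point it at
# a disk that actually sits on the iSCSI target.
import os
import time
import psutil

TEST_FILE = r"E:\iscsi_stress.bin"
BLOCK = b"\0" * (1024 * 1024)   # 1 MB writes
DURATION = 30                   # seconds

psutil.cpu_percent(None)        # prime the counter
deadline = time.time() + DURATION
with open(TEST_FILE, "wb", buffering=0) as f:
    while time.time() < deadline:
        f.write(BLOCK)
        os.fsync(f.fileno())    # force each write out to the target
print(f"Total CPU during run: {psutil.cpu_percent(None):.1f}%")
os.remove(TEST_FILE)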
SOLUTION
ASKER
LOL. A UAT system? If only I could convince them to spend the $$ to have an extra box for testing. In all seriousness though, I am trying to go that route.
Connollyg, I'm curious if you'd ever see the benefit of an iSCSI HBA, because if you think it's not worth it near 70% utilization, then much higher and the server is hosed. We've seen spikes easily hold 99% for a long time; 40-70% is more the typical utilization. How high would it have to go before you'd see the benefit? Most documents point to a reduction to 30% total utilization after implementing HBAs. Of course, YMMV.
99%! Well, you didn't say that last time!
But seriously, it comes down to cost-benefit! If your server is running short of CPU resource, then you have to work through the options of why and what, and then how you could reduce it.
Then you have to balance the resultant actions against cost! Would some other upgrade give you a bigger bang for your buck in overall system performance, or would what we used to call a ToE (TCP Offload Engine) be cost effective? [I know an iSCSI HBA isn't quite the same thing as a ToE, but close enough.]
And YES, I am in favour of iSCSI HBAs, but they have to be cost effective too.
ASKER
Sorry for not getting back sooner. We're not always running at 99%, but it does happen on some of our servers for extended periods of time.
Does anyone know how to monitor the process as "mattvmotas" suggested?
SOLUTION
ASKER
Thanks. These are all two-year-old servers with two quad-core processors each. I know it's not a guarantee, but I'm pretty sure they all support PCIe.
They all use PERC 6/i RAID controllers for the non-OS disks.
Then personally, I would not worry about it unless I needed to try to squeeze a little more network performance out of the systems. Keep an eye out for a bargain on a used card on eBay in the interim, but I wouldn't make it a priority.
ASKER
Do you think the same conclusion should be drawn regarding segregating our network with VLANs and/or physical switches? Right now they have 3 switches on a 4-gig backplane, all in a single VLAN, so the WAN connections, iSCSI connections, SAN replication, etc. are all on the same subnet. Best practices are to segregate it out, both for performance and security reasons. Not to dismiss security, but right now their biggest concern is performance. I need to determine if this network config is a bottleneck and, if so, pitch a solution (like getting separate switches for iSCSI traffic or segregating via VLANs).
SOLUTION
ASKER
And which things should I be looking for specifically? Seeing if they are running close to 1 Gbps per port? Or looking for collisions or retransmits? They are Foundry managed switches, so I'm guessing they support SNMP.
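To check how close each port runs to 1 Gbps, here is a sketch in Python built around Net-SNMP's snmpget command (the switch address, community string, and interface index are placeholders): read the IF-MIB octet counters twice and convert the delta to Mbps.

# Poll a switch port's IF-MIB octet counters via Net-SNMP's
# snmpget and convert a 10-second delta to Mbps. HOST, COMMUNITY,
# and IFINDEX are placeholders for your Foundry gear.
import subprocess
import time

HOST, COMMUNITY, IFINDEX = "10.0.0.2", "public", 1

def octets(direction):
    oid = f"IF-MIB::if{direction}Octets.{IFINDEX}"
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", HOST, oid])
    return int(out.split()[-1])   # -Ovq prints just the counter value

in0, out0 = octets("In"), octets("Out")
start = time.time()
time.sleep(10)
elapsed = time.time() - start
in_mbps = (octets("In") - in0) * 8 / elapsed / 1e6
out_mbps = (octets("Out") - out0) * 8 / elapsed / 1e6
print(f"ifIndex {IFINDEX}: in {in_mbps:.1f} Mbps, out {out_mbps:.1f} Mbps")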
ASKER
Thanks guys. The thoughts were helpful, but it appears there's no metric to measure iSCSI overhead on a live system without a test system or changing the config, which involves downtime. I was hoping there was some perfmon counter. The first guy seemed to think there was a process, but I'm guessing he was mistaken. Oh well.
There has been too much debate and input from several experts to give away all this expertise without assigning points!
ASKER
Totally true. Didn't mean to imply there'd be no points assigned. Just hoping my latest comment might remind someone of a way :-)
Points awarded now.
ASKER
No live method was given, but I'm guessing there is none. The only option is to have a like system with an HBA and measure the differences.