Why do servers perform more slowly when NICs are set to 100 Mbps than to 10 Mbps?

I recently experienced a very frustrating situation involving poor communication between several of my Windows 2003 servers. It drastically slowed my application to the point of almost not working. In the end it turned out to be an incompatibility between my new servers and my old HP 10/100 switch. I deployed a new Alcatel 10/100/1000 switch and everything is lightning fast now.

In attempting to resolve the problem it became apparent that when I set the NICs on the servers to 10 Mbps, communication was better than when they were set to 100 Mbps, and this was true even on the new Alcatel switch. The best setup, and the one I'm using now, is with the NICs set to Auto Detect.

Anyway, I ended up calling Microsoft to help resolve the issue, and after it was "fixed" with the new switch, I asked why it would be slower to set the NICs to 100 Mbps than to 10 Mbps. I got a very vague answer that really didn't explain it.

So can anyone tell me why this would happen?
Keith Alabaster (Enterprise Architect) commented:
To add to Rob's excellent comments, I would also highlight this: they don't, unless there is an issue.

All the cables you use at speeds greater than 10 Mbps should be marked CAT5E or CAT6 on the plastic sleeve.
Have you checked the port settings on your switches? Are they managed or dumb units?
When you move the cursor over the connection icons in the bottom right (assuming you have them set to display when connected), what speeds are they reporting?

If you have a mismatch of port speeds or duplex settings, then you will get more errors, data retransmits due to dropped packets, and failed CRC checks, because the data is being sent more quickly. Sounds silly, but imagine it like this: a few people arrive at a turnstile and they can generally sort themselves out reasonably quickly, even if more people turn up behind them. There are jams and problems, but bit by bit they get through. If people turn up at the turnstile ten times more quickly, the logjam gets bigger, with more people trying to get through at the same time and all being pushed back. A good example is a traffic jam on the road. A couple of cars brake suddenly on the motorway and then speed up again. Within a few minutes, the ripple behind those couple of cars will practically bring the following traffic to a halt as they all see the brake lights come on.

The same happens with network traffic: the more data arriving at a failing point, the greater the bottleneck. If the system works better with autodetect, this suggests that previously there was a settings mismatch somewhere along the line.
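To put that bottleneck effect in rough numbers: the sketch below is a toy model, not a description of real TCP behavior (actual retransmission with backoff hurts far more), and the 95% loss figure is invented purely for illustration. The point it shows is that a fast link losing most of its frames to a duplex mismatch can deliver less useful data than a slower but clean link.

```python
# Toy model of effective throughput ("goodput") on a lossy link.
# Assumes each lost frame is simply wasted capacity; real TCP adds
# timeouts and backoff, so the real-world penalty is much larger.
def goodput_mbps(link_rate_mbps, loss_fraction):
    """Return the usable throughput after discarding lost frames."""
    return link_rate_mbps * (1.0 - loss_fraction)

# A clean 10 Mbps link versus a mismatched 100 Mbps link that is
# losing 95% of its frames (an invented, illustrative figure).
clean_10 = goodput_mbps(10, 0.0)        # 10 Mbps of usable data
mismatched_100 = goodput_mbps(100, 0.95)  # roughly 5 Mbps of usable data

print(clean_10, mismatched_100)
```

In this model the nominally faster link moves less real data, which matches the symptom described in the question: forcing the NICs to 10 Mbps (or letting autonegotiation agree on matching settings) performed better than a mismatched 100 Mbps link.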

Download and install something like Paessler's PRTG software, or even Ethereal (now Wireshark); these will let you monitor your traffic for free.

keith                  -  Sorry for the blurb but it made sense when I wrote it.
Rob Williams commented:
One thing to verify would be the quality of the cabling. Network cabling is very fussy. Even though you have a working connection, poor terminations, kinks, or coils of cable near a power source can all lead to "cross-talk" and induced "noise". If this is the case you will often get a large percentage of retransmissions. A common verification of this is to reduce the network speed from 100 to 10 Mbps, creating a more stable connection. If this is a possibility I would recommend having someone with a proper certification meter, like a Fluke or WireScope, come in and test your network. Where you mention "between servers", perhaps these are just patch cords; you might want to try swapping them out. It could also have to do with your switches and/or configuration, but I thought I would point out the wiring possibility.
If these were *old* HP 10/100 switches, I have two suspects in mind.  Some of the original "dual-speed" switches weren't really dual speed.  They ran at the speed of the slowest connected device.  If you had one 10Mb device plugged in, the whole switch ran at 10Mb.

Also, some of the early HP 100 megabit hubs and switches did not use 100BaseTX. They used 100BaseT4, which used all four pairs of the network cable, instead of just the two pairs that 100BaseTX uses. Of course, you had to have 100BaseT4 NICs as well. The advantage was that T4 would run over Cat3 cable, which was the default in early Ethernet deployments. More info here: http://en.wikipedia.org/wiki/100baseT

Rob Williams commented:
Good analogy Keith !!  I like that :-)
NWMCHA (Author) commented:
Sorry to have let this sit for so long. As I had stated originally, the problem had already been fixed with the new Alcatel switch, so I sort of lost track of this. I think all three suggestions were helpful and appreciated.
Rob Williams commented:
Thanks NWMCHA.
Cheers !
Question has a verified solution.
