Solved

2 bonded ethernet cards only transferring 10mbit/s

Posted on 2004-08-02
1,747 Views
Last Modified: 2008-01-09
I've read a number of papers, including the bonding.txt provided with the kernel.  I appear to HAVE everything working... ifconfig -a returns the proper values for bond0, eth0 and eth1.

My routing table shows bond0 with all the appropriate traffic.

mii-tool on both eth0 and eth1 shows 100baseTx-FD, so those are good.  But when I ran iperf to test, I noticed I was only getting between 8-13 Mbit/s of bandwidth.

So on a whim I did mii-tool bond0:

bond0:  10 Mbit, half duplex, link ok

That output shows up on all of my Fedora Core 1 machines.  On the one Mandrake machine I have, it returns:

SIOCGMIIPHY on 'bond0' failed: No such device

The Mandrake result is what I'm sure is SUPPOSED to happen, since bond0 is not an ACTUAL device.  But on all the Fedora machines I get the '10 Mbit, half duplex' report above.  That tells me my Fedora machines are forcing a bottleneck of 10 Mbit half duplex, which corresponds to my getting between 8-13 Mbit/s with iperf (each interface giving around 5 Mbit/s).

I can't do mii-tool bond0 -F 100baseTx-FD ... it says 'No such device', obviously.

I have added the following in modules.conf:

alias bond0 bonding
options bond0 mode=0 miimon=100
probeall bond0 eth0 eth1 bonding

And when I boot up, my exact commands to get the network started are:

ifconfig bond0 xxx.xxx.xxx.xxx netmask 255.255.255.0 broadcast xxx.xxx.xxx.255 up
ifenslave bond0 eth0 eth1

That enslaves the network cards and reports that both cards auto-negotiated (or were forced, depending on the machine) to 100 Mbit full duplex.
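
(For reference, I believe the same setup can be made persistent through Fedora's own network scripts instead of running these commands by hand.  A sketch, assuming the stock Red Hat/Fedora network-scripts layout; the filenames and the MASTER/SLAVE keys come from that convention and I have not actually verified them on these machines:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=xxx.xxx.xxx.xxx
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0, and the same for ifcfg-eth1
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

With those in place, the network init script should, as far as I understand it, bring the bond up at boot without the manual ifconfig/ifenslave step.)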

I also attempted to assign eth0 an IP address and network, and NOT assign bond0 anything via ifconfig, but when I run

ifconfig bond0 eth0 eth1

it reports a bunch of errors saying it is unable to get the IP address/MAC address/broadcast address etc. from the master device,

and then says interface 'bond0' is not up.

I am running a MigShm-patched openMosix 2.4.24 kernel.  All machines, Mandrake and Fedora alike, are running the exact same kernel with the same drivers.

Question by:fatalsaint
9 Comments
 
LVL 40

Expert Comment

by:jlevie
ID: 11695929
Link mode & speed really has no meaning for a bond device and I expect that mii-tool is just confused. The underlying ethernet devices and what they connect to determine the link mode & speed.
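
To see what the bonding driver itself thinks is going on, check its proc entry: it reports the bonding mode, the miimon setting, and the MII status of each slave. Depending on the driver version the file lives at one of these paths, so try both:

cat /proc/net/bonding/bond0
cat /proc/net/bond0/info

If that shows mode 0 (round-robin) with both slaves up, the bond itself is configured the way you intended, and the 10 Mbit/half-duplex report from mii-tool can be ignored.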

What arguments are you using for the iperf runs?

Do you get similar throughput when transferring 3-5 MB files around with FTP?
 
LVL 1

Author Comment

by:fatalsaint
ID: 11696043
The iperf invocation is simply:

on one machine:

iperf -s

on the other

iperf -c xxx.xxx.xxx.xxx -t 60

I've tried durations from 10 seconds to 2 minutes.  It seems the shorter the run, the higher the bandwidth: my highest bandwidth, 13.8 Mbit/s, came from the 10- and 15-second runs, while 60 seconds and above all come in between 8-12 Mbit/s.
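
(For completeness: to watch how the rate changes over the course of a single run, I could presumably add iperf's interval option, e.g.

iperf -c xxx.xxx.xxx.xxx -t 60 -i 5

which prints a bandwidth line every 5 seconds.  The numbers above are all from plain -t runs, though.)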

I have not tried the FTP idea.  I will write back when I get the chance.

Thanks
 
LVL 40

Expert Comment

by:jlevie
ID: 11696483
What do you get on each NIC when they aren't bonded?
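
In other words, take the bond out of the picture, give one card an address directly, and run iperf across just that card.  Roughly along these lines (the address is a placeholder, and since I don't know how cleanly your patched kernel releases enslaved devices, a reboot without running ifenslave is the surest way to get the NICs back to a plain state):

ifconfig eth0 xxx.xxx.xxx.xxx netmask 255.255.255.0 up

Then run iperf -s on one machine and iperf -c xxx.xxx.xxx.xxx -t 60 on the other, and repeat once per NIC.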
 
LVL 1

Author Comment

by:fatalsaint
ID: 11700057
I used scp to transfer a 6.0 MB MP3.

With the bonding up, I got 1.2-1.4 MB/s.

I took bonding down and tried each NIC on its own: one NIC did an 850 KB/s transfer, the other did 1.2 MB/s.

The iperf numbers remained the same, 8-12 Mbit/s.

I think my iperf is messed up.  I'll retry.

But bonding up or down, it seems the most I get in a transfer is about 1.2 MB/s.  That's hardly what I should be getting, is it?
 
LVL 40

Expert Comment

by:jlevie
ID: 11703767
That would make me think that something is wrong at a level below bonding. When using either of the NICs directly you should get something on the order of 6-8 MB/s. So it looks like iperf is telling the truth and you've got a more fundamental problem.
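
Just to put numbers on it:

100 Mbit/s = 12.5 MB/s raw, so 6-8 MB/s is a realistic rate for a file copy
1.2 MB/s   = roughly 10 Mbit/s on the wire

which is about what you'd expect from a 10 Mbit link, or from a 100 Mbit link that is badly misbehaving.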
 
LVL 1

Author Comment

by:fatalsaint
ID: 11704339
All the machines in question are Toshiba laptops:

Satellite 4200
Satellite 4090XDVD
Satellite 4030
2 Tecra 8000's

I use various dongle-type ethernet PCMCIA cards

3Com Megahertz 10/100
Linksys 10/100 PC Card

Then I have some regular Linksys 10/100 PCMCIA cards that are Not Dongled.

On all machines but one there is one non-dongled Linksys plus a dongled Linksys or 3Com card (there are only two PCMCIA slots and I needed to fit two ethernet cards).

The odd machine out has one 3Com dongle card and a 4-port USB 2.0 PCMCIA card (for external drives), with a compact Linksys USB-to-Ethernet adapter plugged into it.

The backbone is two NetGear FS608 8-port 10/100 switches.

All eth0's to one switch, all eth1's to the other one.

I don't understand the problem, because I did a test on a completely separate network.

A desktop P4 1.4 GHz with 756 MB RAM to a Celeron 750 MHz, using regular PCI ethernet cards (one from 3Com, the other I'm not sure about; it's a generic I bought a while ago) with a 5-port switch as its backbone... that ALSO only did a 1.4 MB/s scp transfer of a 300 MB file.

So what would cause two completely unrelated networks to have the same bandwidth problem?  The only similarity is that Fedora Core is on most of the laptops (all but one) and on the desktops... but the laptops ALL have custom kernels (making them not 'technically' Fedora Core) and the desktop has the latest updated Fedora Core 1 kernel.
 
LVL 40

Accepted Solution

by:
jlevie earned 75 total points
ID: 11709560
Laptops are twitchy little beasts and it wouldn't be overly surprising to encounter networking problems. However, the lack of throughput on the desktop box, with the standard Fedora kernel, tends to indicate that the problem lies elsewhere.  I'm not aware of any generic problems with Fedora Core 1 that would cause this.

I don't know the details of the test you've done. But the first thing I'd try would be to pick a pair of the desktop boxes, connect them (and only them) to a 100Mbps switch, give each the IP/hostname of the other in /etc/hosts, and try iperf or file transfers.  That should yield ~80% of theoretical bandwidth.
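
Concretely, something like this (the names and addresses below are only placeholders for whatever you actually assign):

# /etc/hosts on box A (192.168.1.1)
192.168.1.2   boxb

# /etc/hosts on box B (192.168.1.2)
192.168.1.1   boxa

Then run iperf -s on one box and iperf -c boxb -t 60 on the other. On a clean 100Mbps full-duplex link that should come in on the order of 80Mbps or better.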

Once you've achieved that, move those boxes to the same switch on the real network and see what you get for transfer rates between the two. If it falls significantly it might indicate a problem with the switch or with the volume of traffic on that switch.
 
LVL 1

Author Comment

by:fatalsaint
ID: 11714406
Well,

iperf between the two desktops showed the correct 75-85 Mbit/s, even though scp was only reporting 1.4 MB/s.  So apparently those are OK.

I took all the PCMCIA cards out of two laptops, save one ethernet card each, put them on their own switch (the same switch used with the desktops), and was still only getting 7-9 Mbit/s directly between those machines.  I even tried rebooting into the standard Mandrake 10 and FC1 kernels and testing that way (to rule out the drivers): same issue.

Thanks for the help.  It's quite apparent I have other issues somewhere with the laptops.  I don't know what, as I know someone else with the same laptops and the same PCMCIA cards who gets better throughput than their WinNT desktop counterparts.

I'm accepting your answer as you've shown me my problem has nothing to do with bonding, so I'll close out the thread.

Thanks again.
 
LVL 40

Expert Comment

by:jlevie
ID: 11715526
Laptops can be freaky little boxes. A case I had, similar to yours, was an IBM i Series laptop that could not be made to do better than about 5 Mbps with any PCMCIA 10/100 NIC when running Linux. It worked better with Windows (~20 Mbps), but never got the throughput it should.

I've got an IBM A30 & A31 that have throughput issues with USB. Attached USB devices like disks work, but much slower than they do on a desktop box. All of that leads me to believe that many laptops have issues with interrupt handling. These IBMs like to funnel interrupts from a number of devices through a single IRQ, which works sort of okay in Windows but not well at all for Linux. The poor performance with Linux as compared to Windows isn't a big surprise, since Linux wants to operate in an interrupt-driven mode and Windows tends to use polling.
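
An easy way to see whether the Toshibas are doing the same thing is to look at /proc/interrupts while traffic is flowing:

cat /proc/interrupts

Each line shows an IRQ number, its per-CPU counts, and the devices registered on it. If eth0, eth1, and the PCMCIA controller all show up on the same IRQ line, the cards are fighting over a single interrupt, and that alone could explain the kind of throughput you're seeing.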