• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 276

Combine two frame relay T1s

Hello,

   I am about to have a secondary T1 installed, and I'm wondering how to go about combining them at the router so that all 3 Mb/s of bandwidth is available as one pipeline.  I'm under the impression I need to set up a bridge, but I'm not sure exactly what commands to use.  Here are the stats:

Cisco 2600 router, Cisco IOS 12.2, two WAN cards, two 1.5 Mb/s frame relay lines from the same provider (UUNET).

The provider can set up whatever I need on their end; I just need to figure out what to do on mine.

Any comments?
Asked by ripvannwinkler
1 Solution
 
ripvannwinkler (Author) commented:
In addition, I currently have the interfaces set up as two serial subinterfaces with ip unnumbered FastEthernet0/0; the Ethernet port goes to the firewall, and I have two route statements such as:

ip route 0.0.0.0 0.0.0.0 Serial0/0.1
ip route 0.0.0.0 0.0.0.0 Serial0/1.1
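
For reference, a rough IOS sketch of the setup described above (the DLCI numbers are assumptions; the real ones come from the provider):

```
interface Serial0/0.1 point-to-point
 ip unnumbered FastEthernet0/0
 frame-relay interface-dlci 100    ! DLCI is an assumed example value
!
interface Serial0/1.1 point-to-point
 ip unnumbered FastEthernet0/0
 frame-relay interface-dlci 200    ! DLCI is an assumed example value
!
! Two equal-cost default routes, one per subinterface
ip route 0.0.0.0 0.0.0.0 Serial0/0.1
ip route 0.0.0.0 0.0.0.0 Serial0/1.1
```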
 
ripvannwinkler (Author) commented:
OK, I've done further reading, and what I appear to need is MLFR (Multilink Frame Relay).  Has anybody done this before on a Cisco 2600 router?  Is it possible?  If so, some hints on the necessary commands would be extremely helpful.

Still a little inexperienced with some of the advanced routing features, so bear with me!
 
lrmoore commented:
No bridge needed.
Two default routes, just like you have, give you packet-by-packet load balancing and automatic failover. My only suggestion would be to use numbered interfaces on the serial links and point the defaults at the explicit IP of the upstream router rather than at the interface. That saves a lot of ARP traffic.
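
A minimal sketch of that suggestion, assuming hypothetical /30 addressing on each serial link (the actual addresses and next-hop IPs come from UUNET):

```
interface Serial0/0.1 point-to-point
 ip address 192.0.2.1 255.255.255.252    ! example /30; real addressing from the provider
!
interface Serial0/1.1 point-to-point
 ip address 192.0.2.5 255.255.255.252    ! example /30
!
! Defaults point at the upstream router's IP instead of the local interface
ip route 0.0.0.0 0.0.0.0 192.0.2.2
ip route 0.0.0.0 0.0.0.0 192.0.2.6
```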

MLFR is done at the carrier level, and you would need an inverse multiplexer at your end with one HSSI interface. I don't think they make HSSI cards for the 2600.

If you enable IP CEF, you will load balance across the two T1s with the option of per-destination or per-packet sharing. I think you'll like it. I use it in several applications.
http://www.cisco.com/en/US/partner/tech/tk648/tk365/technologies_tech_note09186a0080094806.shtml
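
A sketch of enabling CEF with per-packet sharing on the serial subinterfaces (per-destination is the CEF default, so only per-packet needs to be set explicitly):

```
ip cef
!
interface Serial0/0.1
 ip load-sharing per-packet
!
interface Serial0/1.1
 ip load-sharing per-packet
```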

Multilink PPP would give you one virtual interface that spans the two T1s, but it would also have to be configured at the ISP's end. It actually slows traffic down, because every packet must now be process-switched rather than fast-switched.
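
For completeness, a Multilink PPP bundle on a 2600 would look something like this sketch; it assumes the ISP configures a matching bundle on their side, and note the links would then run PPP rather than frame relay encapsulation:

```
interface Multilink1
 ip unnumbered FastEthernet0/0
 ppp multilink
 multilink-group 1
!
interface Serial0/0
 encapsulation ppp
 ppp multilink
 multilink-group 1
!
interface Serial0/1
 encapsulation ppp
 ppp multilink
 multilink-group 1
```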
 
ripvannwinkler (Author) commented:
I am perfectly fine with the per-packet load balancing using the static routes I already have.  However, this isn't my decision to make for this location, and I am pretty sure they want to combine the lines to get, say, 3 Mb/s worth of download speed in one TCP transmission.  Is MLFR the only way to go here, or does PPP multilink provide the same?  The concern is that a VPN runs across the T1s, and each connection needs as much bandwidth as it can get.
 
lrmoore commented:
I would go with MLFR if the ISP supports it. Since it's done at Layer 2, you don't have the processing overhead of MLPPP.
 
Dr-IP commented:
I do what you want to do all the time; the only thing is I usually use HDLC or PPP encapsulation instead of frame relay. Each link gets configured the way you would configure a normal single link. Then you assign the same routes to both interfaces with the same cost, as you have already surmised.

Then you have to decide whether you want per-destination load balancing or per-packet instead. This is configured on the interfaces themselves using the “ip load-sharing per-destination” or “ip load-sharing per-packet” command. This has to be configured the same on both sides, by the way.

Per-packet mode sends alternating packets to each interface, i.e., P1 to S0, P2 to S1, P3 to S0, P4 to S1, and so on. If you have a high packet rate this puts a hefty load on the CPU and frequently overloads low-end routers, but you get a full 3 Mb/s link that is 100% utilizable.

Per-destination mode works by balancing destinations between the links. If you have ten people surfing the web, the router will route half of them through each T1. This puts a lot less load on the CPU than per-packet mode, but no single user will ever get more than one T1's worth of bandwidth, and because of the disparity in bandwidth use per user, you could find yourself with one T1 overloaded while the other still has bandwidth available. In most cases, though, it balances out closely enough that both T1s end up with about the same utilization.
 
ripvannwinkler (Author) commented:
"This has to be configured the same on both sides by the way."

^^^
Does that statement mean it has to be configured the same for each interface, or on both sides of the line, i.e., local + carrier? I would assume it means only the interfaces at the local end...
 
Dr-IP commented:
What I mean by both ends is that you and the carrier have to configure all the T1 interfaces between you to do the same kind of load sharing. By default it's in per-destination mode on Cisco and most other routers. To do per-packet load balancing you will need to add the “ip load-sharing per-packet” command to your T1 interfaces, and your ISP will have to do the same.
 
ripvannwinkler (Author) commented:
lrmoore, your advice was on target as well, and for that I'll be glad to award you 50 points too.  Let me know if you want 'em...
