MrVault

asked:

switching on switch

We have Foundry switches and a router. Question about layer 2 vs. layer 3 routing.

If a switch is connected to a router, and that switch has a server and an iSCSI SAN connected to it but in different subnets, and the server has a volume on the iSCSI SAN connected to it...

Will the path the data takes to be stored on the SAN go from the server to the switch, up to the router, back down to the same switch, and down to the SAN? Or will the switch handle this handoff even though the connections are in different subnets and VLANs?
SOLUTION by Member_2_231077 (members only)
MrVault

ASKER

The reason for putting them on separate subnets and VLANs is to reduce broadcast overhead and for traffic security. I'm not a networking person, but every best-practice article I've read on building an iSCSI network recommends segregating that traffic from the rest, at least virtually if not physically. I guess I was hoping that a layer 2 switch would be able to switch the traffic locally if both devices were plugged into it, instead of having to go up a hop.
You put your server's iSCSI HBA on a different subnet than the storage it talks to in order to reduce broadcasting?

I think you need to think that one out again; you can certainly use VLANs to segregate traffic (you don't want your LAN and SAN on the same subnet), but you don't use them to separate things that continually talk to each other.

Say you only had one NIC per server but it supported VLAN tagging: you could split that NIC into four logical ones. So pretend you've got four NICs in it for the design phase, and also pretend your VLAN-capable switch is four physical boxes. You wouldn't cross-connect them and add a router, would you?
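As a rough sketch of that "one NIC split into logical NICs" idea on a Linux box (interface names, VLAN IDs and addresses below are made-up examples, not your actual config):

# Sketch: carve one physical NIC into tagged logical NICs with iproute2.
# VLAN 10 = LAN, VLAN 20 = iSCSI (example IDs only).
import subprocess

def run(cmd):
    # Run a command and raise if it fails
    subprocess.run(cmd, check=True)

run(["ip", "link", "add", "link", "eth0", "name", "eth0.10", "type", "vlan", "id", "10"])
run(["ip", "link", "add", "link", "eth0", "name", "eth0.20", "type", "vlan", "id", "20"])

# Each logical NIC lives in its own subnet
run(["ip", "addr", "add", "192.168.10.5/24", "dev", "eth0.10"])  # LAN
run(["ip", "addr", "add", "192.168.20.5/24", "dev", "eth0.20"])  # iSCSI

run(["ip", "link", "set", "eth0.10", "up"])
run(["ip", "link", "set", "eth0.20", "up"])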
ASKER CERTIFIED SOLUTION (members only)
SOLUTION (members only)
MrVault

ASKER

Thanks, people. I totally agree about separating them out with separate NICs, subnets, and VLANs.

The thing was, I thought I remembered someone once telling me that if I couldn't afford separate switches for iSCSI and LAN traffic, I could just VLAN out a single switch and use ACLs to keep the traffic separate. My question, though, is whether the data packets would actually go from the server's iSCSI NIC to a switch port on the iSCSI VLAN, up to the router, back to the switch, out the SAN's iSCSI switch port, and down to the SAN, or whether they would never go up to the router.

If we are able to purchase separate iSCSI switches, we will still need to give them an IP because we need to replicate the SAN to another SAN that's only accessible down the path that goes through the router.
VLANs will keep them separate if you use one subnet for the LAN and one for the SAN, so ACLs shouldn't come into it: traffic from one address on a subnet doesn't go through a router to get to another IP address on the same subnet. Only the remote replication iSCSI traffic will go through the router, as your remote SAN is on a different subnet.
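To put that forwarding decision in concrete terms, a host only sends traffic to its gateway when the destination is outside the local subnet. A minimal sketch, with made-up example addresses:

# Sketch of the routing decision a host makes; addresses are examples only.
import ipaddress

iscsi_nic  = ipaddress.ip_interface("192.168.20.5/24")   # server's iSCSI NIC
san_portal = ipaddress.ip_address("192.168.20.50")       # local SAN, same subnet/VLAN
remote_san = ipaddress.ip_address("10.1.1.50")           # replication target behind the router

for dest in (san_portal, remote_san):
    if dest in iscsi_nic.network:
        print(dest, "-> switched locally on the iSCSI VLAN, no router hop")
    else:
        print(dest, "-> sent to the default gateway (router)")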
MrVault

ASKER

So here's our situation.

We have one router on one side of the campus. On the other side we have a switch with a SAN and a server attached to it. We want to segregate the traffic with VLANs, but our hope was that traffic wouldn't have to go all the way back to the router when the server is writing data to its iSCSI volume. I realize one option is to purchase a separate switch for iSCSI that the server's iSCSI NIC and the SAN would connect to. I was just hoping we could avoid buying another switch and still separate out the traffic with VLANs.

thanks!
SOLUTION (members only)
SOLUTION (members only)
MrVault

ASKER

Thanks everyone. I think I've got it now. Just one last question: what advantage does VLAN tagging have on a single NIC? The NIC's utilization would be the same, no?
SOLUTION (members only)
MrVault

ASKER

thanks.

It would also allow you to run jumbo frames for iSCSI and normal frames for data, wouldn't it? I thought all server-grade NICs supported VLAN tagging, but yeah, it could have a cheap one in it.
andyalder: While the NIC could do it, the OS/NIC drivers need to deal with it. Jumbo frames can help, but be sure the jumbo frames are supported all the way from server to server. If the switch needs to fragment, you could take a very big hit.

e.g. all the way (MTU sizes for example only):
Svr1 9000 MTU -> Switch 1 9000 MTU -> Switch 2 9000 MTU -> Svr2 9000 MTU
This will work well, as the data-to-overhead ratio is very good.

Bad example:
Svr1 9000 MTU -> Switch 1 9000 MTU -> Switch 2 1500 MTU -> Svr2 9000 MTU
In this example, when the 9000-byte packet gets to Switch 2, Switch 2 (depending on setup and packet flags) will:
a) Just drop the packet (very, very bad). Old gear can do this.
b) Send back an ICMP "fragmentation needed, but don't-fragment flag set" message. This should happen, so ensure ICMP packets are allowed through any server-level firewall.
c) Fragment the packet. Switch 2 will take the 9000-byte packet and break it up into 1500-byte packets (each with all the header info), which takes switch CPU time and is slow. Then Server 2 will need to take in all the fragments and reassemble them before passing the data up the OSI layers to the application: a second delay.

You don't need VLANs for jumbo frames; you can do them on a single NIC. But your switch may support per-VLAN MTU settings, so placing each device in the correct VLAN would be better.
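If you want to sanity-check the path before trusting it, a don't-fragment ping just under the jumbo size shows whether 9000-byte frames really survive end to end. A minimal sketch, assuming the Linux ping flags (-M do, -s) and an example target address; on Windows the equivalent is ping -f -l <size>:

# Probe the path with don't-fragment pings; target address is an example.
import subprocess

def jumbo_path_ok(host, mtu=9000):
    payload = mtu - 28  # subtract 20-byte IP header + 8-byte ICMP header
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(payload), "-c", "3", host],
        capture_output=True, text=True)
    return result.returncode == 0

if jumbo_path_ok("192.168.20.50"):
    print("9000-byte path looks clean")
else:
    print("drops or 'Frag needed' - check MTU on every switch/NIC in the path")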
MrVault

ASKER

I wonder if the jumbo frame topic is why we're having issues right now.

Right now our servers have two onboard NICs which, in a flat network, carry both the LAN and iSCSI traffic (single IP per NIC, one VLAN).

On the primary NIC we have all the necessary Windows protocols enabled, but the NIC is configured for iSCSI (jumbo frames, MTU 9000, flow control, etc.). The other NIC has the same settings but no protocols, just IPv4 and no gateway. Every once in a while some random server goes blank when we RDP to it. There's no way to fix it except rebooting. The server is still working just fine in terms of customer connections, iSCSI load, etc., but we can't get in. I wonder if, because we're RDPing over a NIC that has MTU 9000 (and the switch is MTU 10222), it randomly loses the main connection.
As long as the switches have bigger MTUs than the servers, it will be fine. The path MTU should be the same as (or bigger than) the MTU each end device is using. When you VLAN, the switch will add a few bytes to the packet for the VLAN tag.

If you can use and understand Wireshark (or some other packet sniffer), use it to see whether duplicate packets or ICMP "fragmentation needed" messages show up.
In a stock network I would have a 1500 MTU on the clients and 1546 for the switch MTU; the extra 46 bytes are there to support QoS, VLAN and other tags added to the packet for our network setup. In this model, the full 1500-byte client packet will make it from host to host.

You may find the jumbo MTU is a little too big under load, so you could try making it a little lower, but ensure each end can deal with the jumbo size.
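If Wireshark isn't handy on that box, a quick scapy sniffer will flag those ICMP "fragmentation needed" messages as they arrive. A rough sketch; it needs root, and the interface name is an example:

# Watch for ICMP type 3 / code 4 (fragmentation needed); interface name is an example.
from scapy.all import sniff, ICMP, IP

def report(pkt):
    if pkt.haslayer(ICMP) and pkt[ICMP].type == 3 and pkt[ICMP].code == 4:
        print("Fragmentation needed reported by", pkt[IP].src,
              "- something in the path cannot pass your frame size")

sniff(iface="eth0", filter="icmp", prn=report, store=False)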
 
I wouldn't actually bother with jumbo frames; I just introduced them as an example of how you might need VLAN tagging at the server NIC.

I don't think you're right with "You don't need VLANs for jumbo frames; you can do them on a single NIC", though, at least not on a server with a single NIC, because you must not run more than one frame size on any particular VLAN, so you mustn't run normal LAN-sized frames plus jumbos on the same VLAN.
Comment accepted, andyalder. If you were to use jumbo frames for both iSCSI and client access, that could be bad. My comment was more that if you have a NIC that is only used for iSCSI, you don't need to VLAN it but can still use jumbo frames.
Agreed, we're talking at cross-purposes to some extent; it would be a lot better to have separate NICs and switches dedicated to LAN and SAN. I come from a Fibre Channel background, so there's none of this trying to run two protocols on one network.
MrVault

ASKER

What do you guys think about iSCSI HBAs versus just separate regular PCIe NICs? Our servers have two onboard NICs and we want to give those to HA LAN traffic, so we're adding extra NICs for iSCSI. But the vendor is saying we don't need iSCSI HBAs, just regular NICs. Right now we don't boot from SAN, but our CPU utilization does get high, so we're thinking it'd be good to offload the iSCSI overhead from the CPU.

Thoughts? These are Dell PowerEdge 2950, R510, and R710 servers.
When I hear HBA I generally think Fibre Channel, so maybe that is what your vendor is thinking.
HBA can really be applied to any type of adapter though, I guess. All the iSCSI connections I've worked with so far have been using standard Ethernet NICs. They're perfectly acceptable as far as I'm concerned.
MrVault

ASKER

Yeah, there are iSCSI HBAs and Fibre Channel HBAs. And yes, I could use a regular NIC too, but there has to be some sort of advantage among a regular NIC, a TOE NIC, and an iSCSI HBA. Thanks though.
Whilst iSCSI and TCP offload do save CPU resource, at $500 per port it's generally cheaper to buy faster CPUs.
I am only just playing with this now, so I'm keen to see how it pans out.
I set up a Fedora 14 box with ext3, then on that set up the free iSCSI server software (using flat files on the ext3 filesystem). I kind of saw this as a worst-case setup. I made the iSCSI connection from the free ESXi VM server and installed Windows Server 2003 on the iSCSI LUN. The NIC on the VM server went to a Cisco 3750 switch, and the Linux server was on that same switch (no other devices) as an out-of-band connection.

The 2003 VM's NIC went onto the live network. I was getting between 850 and 900 Mbps transfer from my notebook to the 2003 server and then out the other NIC to the Linux iSCSI LUN. Since each NIC was 1 Gbps, I thought this was not too bad.

That said, a direct write on the Linux server to create the flat file ran at about 3 Gbps for a 100 GB file.