Trouble with configuring Multipathing in Solaris

Hi,

I am currently involved in the NIC redundancy project for our Solaris servers.  I understand that to achieve NIC redundancy on Solaris, we need to use IP Network Multipathing (IPMP), which was introduced in Solaris 8.  After testing this feature on one of our Solaris servers, I've come across somewhat of a 'show-stopping' issue:

To properly set up Multipathing, for every Solaris server we have, we'll need to allocate one additional IP address for each NIC on the server.  These additional "test" addresses are used by the server to probe the health of each NIC.  The issue here is that these IP addresses need to be on the SAME VLAN segment as the host's primary (public) IP address.
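For reference, the address layout that causes the problem shows up in the persistent IPMP configuration files. A minimal sketch, assuming a hypothetical host "myhost" with interfaces qfe0/qfe1 and an IPMP group named "prod" (all names and addresses are illustrative, not from the original post):

```shell
# /etc/hostname.qfe0 -- data (public) address plus a test address on the SAME subnet
myhost netmask + broadcast + group prod up \
addif 192.168.20.11 deprecated -failover netmask + broadcast + up

# /etc/hostname.qfe1 -- test address only; the data address floats here on failover
192.168.20.12 netmask + broadcast + deprecated group prod -failover up
```

Both test addresses (192.168.20.11/.12 here) must come out of the same subnet as the data address, which is exactly the IP-consumption problem described above.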

Most of our Solaris servers reside in the .20 VLAN segment.  After consulting with our network engineers, we simply do not have any spare IP addresses on the same VLAN for me to use.

At the moment, this puts my project at a halt.  Any ideas or comments would be greatly appreciated!
forestlaw888888 Asked:
Nukfror Commented:
As jekl2000 mentioned, you need a minimum of three IP addresses to set up IPMP between two physical interfaces.  I'm pretty sure the VCS MultiNIC agent does nothing other than monitor IPMP status.

If you have a highly available *single* switch and it supports 802.3ad, you could use Sun Trunking to get availability as well (with a side effect of more performance across your network interfaces).  What's nice about trunking is that it requires *zero* IP addresses, and you can add up to four interfaces into a "trunk".  The trunk name is the same as that of what's known as the "head" interface: if you have qfe0, qfe1, and qfe2 in a trunk configuration, the trunk will be known as qfe0.  VCS supposedly supports trunks, as they are 100% transparent to the cluster framework.
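For comparison, on Solaris 10 the same idea is available natively as 802.3ad link aggregation via dladm (a sketch with illustrative device names and address; Sun Trunking itself uses its own utilities, which are not shown here):

```shell
# Aggregate qfe0 and qfe1 into a single 802.3ad link, key 1 (appears as aggr1)
dladm create-aggr -d qfe0 -d qfe1 1
dladm show-aggr                 # verify the aggregation and its member ports

# One IP address for the whole trunk - no test addresses needed
ifconfig aggr1 plumb 192.168.20.10 netmask 255.255.255.0 up
```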

I've seen several places use trunks rather than IPMP because 1) they don't want to blow a bunch of IPs and 2) they have built a very highly available *single* switch configuration.  Trunk failure detection is also faster than IPMP failure detection.
jekl2000 Commented:
If you use Veritas Cluster Server, it has resource types called IPMultiNIC and MultiNICA that will fail it over. I have not set it up before, but I believe it just requires two IPs and a virtual IP.
Petine Commented:
Sorry, but as I understand it, you only need one public IP; the other two are for Solaris's own checks and have nothing to do with the public IP.
Let's say you want 100.100.81.2 as the public IP:
you put 100.101.101.1 on the first NIC and 100.101.101.2 on the second NIC.
Those two are only for Solaris's checks, nothing more than that.
The only IP that needs to be free is 100.100.81.2 (the public IP).

Don't know if it will work for you, but it worked for me...
Petine Commented:
Nukfror Commented:
Petine, your last link relating to ".....ds-netmultipath" doesn't work.

I'm not sure I understand your post.  IPMP requires a minimum of *THREE* IP addresses - not two ... assuming I understand what your post is trying to say.  Using IPMP on a single physical interface is possible with *TWO* IP addresses, one test and one public, but this gives you nothing more than elevated monitoring capabilities.

With two interfaces, you configure a test address on each physical interface.  These IPs are marked deprecated, which among other things tells in.mpathd which addresses to use for the ICMP test probes it sends out to the target hosts.  A target host is either the default route(s) or a specifically defined HOST target (defined using host-to-host routes with the route command).  The third IP, the public IP, is the address that floats between the two physical interfaces during failure situations - and it has everything to do with IPMP.
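The three-address layout described above can be set up at runtime roughly like this (a sketch; qfe0/qfe1, group "prod", and all addresses are illustrative):

```shell
# Public/data address on qfe0, placed in IPMP group "prod"
ifconfig qfe0 plumb 192.168.20.10 netmask + broadcast + group prod up

# Test address #1 on qfe0: "deprecated -failover" marks it for in.mpathd probes
ifconfig qfe0 addif 192.168.20.11 deprecated -failover netmask + broadcast + up

# Test address #2 on qfe1, same group; the data address can float here on failure
ifconfig qfe1 plumb 192.168.20.12 deprecated -failover netmask + broadcast + group prod up

# Optional: pin an explicit probe target with a host route instead of the default router
route add -host 192.168.20.1 192.168.20.1 -static
```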
Nukfror Commented:
Forestlawn, the specifics for IPMP with Solaris 9 can be found here:

http://docs.sun.com/app/docs/doc/806-4075/6jd69oakv?q=multipath&a=view

Solaris 10 has a similar section.
Nukfror Commented:
Oh, and something I never answered: IPMP does require all physical interfaces in an interface group to be on the same subnet.  It's spelled out in the link above, in the "Grouping Physical Interfaces" section:

Placing the IPv4 instance under a particular group automatically places the IPv6 instance under the same group. Also, you can place a second interface, ****connected to the same subnet****, in the same group by using the same command. See How to Configure a Multipathing Interface Group With Two Interfaces.
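The "same command" the quote refers to is simply ifconfig with the group option; for example (interface and group names illustrative):

```shell
# Put the second interface, already plumbed on the same subnet, into the group
ifconfig qfe1 group prod
```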


forestlaw888888 (Author) Commented:
Hi everyone,

Thank you for taking the time to write up comments for my question.  I have previously read all of the documentation you suggested above.  I have a pretty good grasp of how Multipathing works and how to set it up.  During my work with the test server, I successfully configured Multipathing and it did work.  However, the issue here is simply the lack of IP addresses on the SAME subnet as the host's public IP address.

A friend of mine has suggested that since the test addresses are marked by the OS with the flags DEPRECATED and NOFAILOVER, they can't be used as the destination gateway for any routing requests and their ARP entries are not advertised.  Consequently, on a single VLAN we should be able to REUSE the same two IP addresses for all of our servers.  Any thoughts on whether this is true?

If all else fails, we might look into moving all the non-forward-facing servers to private address space for their primary NIC; then there will be plenty of IP space for the second connections.  Of course, this would make my project significantly more complicated...
Nukfror Commented:
"Consequently, on a single VLAN we should be able to REUSE the same 2 IP addresses for all of our servers.  Any thoughts on this being true?"

A very, very *BAD* assumption.  Those deprecated addresses *are* used by the ICMP packets sent to the test target addresses.  That's how those nofailover interfaces detect whether or not they have a problem, since the ICMP probes use those deprecated/nofailover IP addresses.  You can still reach those deprecated addresses, too.  Just try it ... ssh to one of them from another machine.  You will be able to log in.  Then, on that other machine, take a look at the ARP table.
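A quick way to see this for yourself, per the suggestion above (addresses illustrative; run from a second machine on the same VLAN):

```shell
# The deprecated test address still answers - so it is NOT free to reuse
ssh 192.168.20.11 uname -n

# And it shows up in the neighbor's ARP table like any live address
arp -a | grep 192.168.20.11
```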

The other option you might consider is my suggestion on trunked interfaces as I suggested above.
Nukfror Commented:
Or I guess your other option is to make your project "significantly more complicated ..." :(
forestlaw888888 (Author) Commented:

Nukfror, thanks again for getting back to me.  I will try out my "bad" theory and see how it works ;)  I have also downloaded the Sun Trunking 1.3 Installation and User Guide from Sun's website and will go through it today.
Nukfror Commented:
Just a note on Sun Trunking: it is *free* for Solaris 10.  With Solaris 9 and below, you have to pay about $900/Srv to use it.  Unfortunately, there is no "try and buy"-like program with Sun Trunking - you gotta pay to get it.
Nukfror Commented:
Oh, and you can try the bad theory out ... but I can't stress strongly enough how ****BBBBAAAADDDD**** it would be to put it into a production-critical environment.
Petine Commented:
I figured out what you wanted to know now...

Sorry, no clue how to solve that :/
Question has a verified solution.
