Forrest (United States) asked:

Preparing Solaris/ZFS with a new external array

We have a Solaris 10 system to which I will be adding a second external Infortrend 24-bay array.

There is already another 24-bay array attached, managed by Veritas, which I don't care for.

Regarding the new array, it needs to be configured -- LUNs, logical volumes, etc.  I wanted to know the ideal way to present it to the OS for use with ZFS.

The array itself can do up to RAID6, so we won't really need to use ZFS's software-based redundancy.

We could present it as one big fat LV, split it into several, or even present the individual drives, I suppose.
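For illustration (the device names here are made up), the two extremes would look something like:

       # Option 1: the array exports one big RAID6 LUN and ZFS sees a single device
       zpool create tank c2t0d0

       # Option 2: the array exports the drives individually (or as many small LUNs)
       # and ZFS handles the redundancy itself
       zpool create tank raidz2 c2t0d0 c2t0d1 c2t0d2 c2t0d3 c2t0d4 c2t0d5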

I'd appreciate some advice on how to do this optimally.

Note this is Solaris 10, not OpenSolaris (which may have other functionality).

Also, being more of a Linux person, I'm not sure how Solaris presents the /dev devices for use.  I assume that the "format" command shows only what is available for use and not the drives that are currently in use (i.e., the Veritas array doesn't show up).   We have:

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1c,600000/scsi@2/sd@0,0
       1. c1t0d0 <IFT-A24U-G2421-1-347G-1.95TB>
          /pci@1c,600000/scsi@2,1/sd@0,0
       2. c1t0d1 <IFT-A24U-G2421-1-347G-1.95TB>
          /pci@1c,600000/scsi@2,1/sd@0,1
       3. c1t0d2 <IFT-A24U-G2421-1-347G-1.95TB>
          /pci@1c,600000/scsi@2,1/sd@0,2
       4. c1t0d3 <IFT-A24U-G2421-1-347G-1.95TB>
          /pci@1c,600000/scsi@2,1/sd@0,3
       5. c1t0d4 <IFT-A24U-G2421-1-347G-1.95TB>
          /pci@1c,600000/scsi@2,1/sd@0,4
       6. c1t0d5 <IFT-A24U-G2421-1-347G-1.95TB>
          /pci@1c,600000/scsi@2,1/sd@0,5
       7. c1t0d6 <IFT-A24U-G2421-1-347G-1.95TB>
          /pci@1c,600000/scsi@2,1/sd@0,6
       8. c1t0d7 <IFT-A24U-G2421-1-347G-1.33TB>

I'm not sure which controller that's going to (the on-board one or the PCI card).  I may have the new array configured incorrectly.


Thank you.


bummerlord (Sweden) replied:

format should list all configured/detected disks, including those already in use by Veritas.
If you attached the new array to a newly installed or previously unconfigured controller and did not do a reconfiguration boot, you may have to configure the controller using cfgadm,
e.g. "cfgadm -c configure cN" (where cN is the new controller id; look for 'unconfigured' controllers in the output of "cfgadm -al" -- the ap-id is in the leftmost column).
Then run "devfsadm -c disk" to have the driver probe for LUNs and set up the device links.
...or do a reconfiguration boot ("reboot -- -r" from a shell, or "boot -r" from the OBP).
Then run format again, and you should see the LUNs configured on the new array.
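The whole sequence would look something like this, assuming the new controller shows up as c3 (substitute the ap-id from your own cfgadm output):

       # List all attachment points; unconfigured controllers show up here
       cfgadm -al
       # Configure the new controller (c3 is just an example id)
       cfgadm -c configure c3
       # Probe for LUNs and (re)create the /dev/dsk and /dev/rdsk links
       devfsadm -c disk
       # The new LUNs should now show up
       format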

For the rest, it will probably depend a lot on the needs of the applications you are running, and on how you want to manage them and the system as a whole.
There may be performance considerations as well. Possible gains in one setup versus another will likely depend on how the data is distributed in the file system (small vs. large files) and how the application accesses that data (frequent reads vs. writes, etc.). It also depends on how the array is connected to the host system, how many connections are available, and how the array operates internally (write cache sizes, I/O queues, stripe sizes, etc.) -- stuff I barely know anything about, but have overheard in conversations :-) I dare say that there is no ultimate setup for all possible scenarios.

The link below should cover most of the ZFS-related considerations that may influence how you decide to 'partition' the array.

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
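One point from that guide worth highlighting: ZFS can only repair checksum errors it detects if the pool itself has redundancy, even when the array does RAID6 underneath. If you export a few big RAID6 LUNs, something like this (the LUN names are made up) would keep self-healing available:

       # Mirror pairs of RAID6 LUNs: the array handles disk failures,
       # and ZFS can still repair silent corruption from the mirror's other half
       zpool create tank mirror c2t0d0 c2t0d1 mirror c2t0d2 c2t0d3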

Finally...
c0t0d0 (/pci@1c,600000/scsi@2/sd@0,0) is connected internally.
The others (.../scsi@2,1/...) would be on the external SCSI bus (still the same controller).

ref http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/Example/documents/21216
http://developers.sun.com/solaris/articles/devicemapping.html
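You can check the mapping yourself: the /dev/dsk names are just symlinks into the /devices tree.

       # Which physical path does c0t0d0 map to?
       ls -l /dev/dsk/c0t0d0s2
       # lrwxrwxrwx ... -> ../../devices/pci@1c,600000/scsi@2/sd@0,0:c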

Since you mention "the PCI card", I am guessing that the previous array is connected to the external SCSI bus of the built-in controller, and that your new array is on a separate controller, which you now need to configure for format to see the LUNs.
Forrest (Asker) replied:

In this case, regardless of what I do, the system will not configure c3 or c4 -- they show as connected but not configured.  I also got this error during boot:


WARNING: /pci@1d,700000/pci@1/scsi@4/sd@0,0 (sd31):
        disk capacity is too large for current cdb length


Googling around suggests that maybe the controller I have is too old to handle an array this large.

The drives are all 1TB (24 of them), and I tried making clusters of 3; still no luck.

Is there a command I can issue that will probe the PCI card and tell us its make and version?  Maybe that will help.
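On Linux I'd reach for lspci; I'm assuming the Solaris equivalents are something like:

       # Show installed PCI cards and slots
       # (may live at /usr/platform/`uname -i`/sbin/prtdiag)
       prtdiag -v
       # Dump the device tree with driver bindings
       prtconf -D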


Thank you.
Forrest (Asker) replied:

I wonder if the SCSI cable that came with the array is the right type -- it fits, but it may still be wrong.  None of this makes sense; Solaris should just "work" with this one.
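In case it helps, this is what I'm checking in the logs; I assume the HBA driver reports its attach and negotiation messages in /var/adm/messages:

       # Look for HBA attach / SCSI negotiation messages from boot
       grep -i scsi /var/adm/messages | tail -40
       # The "cdb length" warning from the sd driver should also be in here
       grep -i "cdb length" /var/adm/messages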
ASKER CERTIFIED SOLUTION
Forrest (United States)