I'm dealing with a leased dedicated Linux (RHEL 4) server with RAID 1 drives and no direct access to the server.
The partition failed... At the time, tech support was able to access the drives but could not get the system to boot. The provider removed the two RAID 1 drives from the server, commissioned a new box, and added a USB enclosure, stating that the old drive could be mounted in the enclosure and all files then backed up. However, the ISP refused to mount the drive.
Here is what I know:
# ls -al sd*
brw-rw---- 1 root disk 8, 0 Nov 1 10:30 sda
brw-rw---- 1 root disk 8, 1 Nov 1 10:30 sda1
brw-rw---- 1 root disk 8, 2 Nov 1 10:30 sda2
brw-rw---- 1 root disk 8, 3 Nov 1 10:30 sda3
brw-rw---- 1 root disk 8, 16 Nov 3 03:53 sdb
# cd /dev
# mkdir /mnt/olddrive
# mount /dev/sdb3 /mnt/olddrive
mount: special device /dev/sdb3 does not exist
# mount /dev/sdb /mnt/olddrive
mount: you must specify the filesystem type
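For what it's worth, the listing above shows only the bare sdb device and no sdb1-sdb3 nodes at all, which I assume is why mounting /dev/sdb3 fails. From what I've read, a read-only way to check whether sdb even has a partition table would be something like the following (both should be non-destructive, if my understanding is right):
# fdisk -l /dev/sdb
# cat /proc/partitions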
Between the mount attempts above and the next one, I made another request for the ISP to mount the drive. Support responded:
"The dedicated server hard drives are formatted in ext3 format by default. The second disk would be listed as sdb. The information provided to mount the second disk was an example and may not be the exact information. We suggest researching online using a search engine or forum how to properly mount and access the second hard drive. "
Great support, eh?
Next I tried:
# mount -r -t ext3 /dev/sdb /mnt/olddrive
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
or too many mounted file systems
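Since this drive was half of a RAID 1 pair, my guess (and it is only a guess) is that the data lives on a partition mirroring sda3 rather than on the bare disk, which would explain the "wrong fs type / bad superblock" error when mounting /dev/sdb directly. If fdisk shows a third partition and the /dev node is simply missing on this box, I believe the node could be created by hand and the partition examined and mounted read-only, roughly like this (the minor number 19 is my assumption, based on sdb showing as 8, 16 in the listing above):
# mknod /dev/sdb3 b 8 19
# mdadm --examine /dev/sdb3
# mount -r -t ext3 /dev/sdb3 /mnt/olddrive
My understanding is that mknod and mdadm --examine do not write to the disk and that -r mounts read-only, but please correct me if any of that is wrong.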
I want to tread very lightly, as there is data on these drives that does not exist anywhere else.
Does anyone have any suggestions?