Thomas Zucker-Scharff asked:

How to replace the primary drive in a QNAP TS-451 NAS

I own a QNAP TS-451 4-bay system with the first 3 bays populated (2 TB, 3 TB, 3 TB). The two 3 TB drives are in a basic pool and the 2 TB primary drive is a JBOD. I want to replace the primary drive with a 6 TB drive. Is it best to put a 6 TB drive in bay 4 and have it copy over somehow, or is there another way?

What is the best way to do this without losing my config?
Alex:

Clone it; there is plenty of software out there capable of doing this.

I've used Acronis, and I'm pretty sure it can clone this drive. You may need to take the drive out, put it into a Windows machine, and do it there.
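If the old disk ends up attached to a Linux box rather than a Windows one, a plain dd copy is one way to do the same clone. This is only a sketch, not what Alex used; sdX and sdY are placeholders for the old 2 TB disk and the new 6 TB disk, and both must be verified (e.g. with lsblk) before running anything destructive:

# sdX = old 2 TB disk, sdY = new 6 TB disk -- double-check the letters with lsblk first
dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror,sync status=progress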
Member_2_231077:

Remove all the drives, labeling which drives are in bays 2 and 3.
Install the new drive and install the OS onto it.
Power off, reinsert the other drives, and power on.

In effect, since you are replacing the primary drive, you are migrating the two 3 TB disks, so http://docs.qnap.com/nas/4.1/Home/en/index.html?system_migration.htm applies.

The OS is at https://www.qnap.com/en-us/download?model=ts-451%2B&category=firmware .
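Once the other disks are back in, a quick sanity check over SSH is worth doing. This is a sketch that assumes the standard Linux md tools shipped with QTS; the md numbers to query are whatever your own /proc/mdstat reports:

[~] # cat /proc/mdstat             # every array should reappear and assemble cleanly
[~] # mdadm --detail /dev/md0      # replace md0 with the data array listed by mdstat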
SOLUTION
noci
(Members-only solution content not shown.)
Thomas Zucker-Scharff (ASKER):
It sounds like I will not be able to preserve the current config. The suggestions sound like I am just replacing the primary drive. I was hoping to preserve users, groups, permissions, etc.
You did not stipulate that when you asked the question.
@andyalder

The last line of the question was:
"What is the best way to do this without losing my config?"
The config is on the first partition, and it should be mirrored to the other drives (at least, there is space for it on all drives). The JBOD disk and the RAID set only concern the remainder of each disk.

Here is a snapshot from mine:
[~] # cat /proc/mdstat
Personalities : [raid1] [linear] [raid0] [raid10] [raid6] [raid5] [raid4] 
md0 : active raid5 sda3[0] sdc3[2] sdb3[1]
                 7810899072 blocks super 1.0 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
                 bitmap: 6/8 pages [24KB], 262144KB chunk

md4 : active raid1 sdc2[3](S) sdb2[2] sda2[0]
                 530128 blocks super 1.0 [2/2] [UU]
                 
md13 : active raid1 sda4[0] sdc4[2] sdb4[1]
                 458880 blocks [4/3] [UUU_]
                 bitmap: 49/57 pages [196KB], 4KB chunk

md9 : active raid1 sda1[0] sdc1[2] sdb1[1]
                 530048 blocks [4/3] [UUU_]
                 bitmap: 65/65 pages [260KB], 4KB chunk

unused devices: <none>


In this case: a 3-device RAID 5 set (md0) is the first public RAID set.
A RAID 1 with a spare (md4) is swap space.
A RAID 1 with 3 out of 4 units present (md9) is HDA_ROOT, where the hidden directory .config holds the config.
A RAID 1 with 3 out of 4 units present (md13) is actually named /dev/sda4 (see later) and contains all add-on applications (QPKG).
(Check ls -l /etc/config.)

Your JBOD disk (probably /dev/sda) should only be /dev/sda3.
(Oh, by the way: /dev/sda4 is actually a renamed version of /dev/md13; the original /dev/sda4 is named /dev/sdareal4.)

# mdadm --detail /dev/sda4
/dev/sda4:
        Version : 00.90.03
  Creation Time : Wed Sep 24 10:35:34 2014
     Raid Level : raid1
     Array Size : 458880 (448.20 MiB 469.89 MB)
  Used Dev Size : 458880 (448.20 MiB 469.89 MB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 13
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Jul  7 01:15:21 2018
          State : active, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           UUID : b2470823:47163d1b:32712378:cc34a9cc
         Events : 0.534279

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sdareal4
       1       8       20        1      active sync   /dev/sdb4
       2       8       36        2      active sync   /dev/sdc4
       3       0        0        3      removed
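To see the same thing on your own unit, something along these lines should work from the QNAP shell. It is only a sketch: /mnt/HDA_ROOT and the /etc/config symlink are the usual QTS layout, but treat the exact paths as assumptions:

[~] # ls -l /etc/config          # on QTS this normally points into the hidden .config directory on HDA_ROOT
[~] # mount | grep HDA_ROOT      # shows which md device carries HDA_ROOT
[~] # mdadm --detail /dev/md9    # the config mirror should list a partition from every populated bay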
@noci,

I have only used the GUI, but I am capable enough that I upgraded the RAM (no mean feat on a TS-451). I have used PuTTY, although not often. Is this mirroring of the OS automatic? I never set it up that way. Currently, my QNAP is in sleep mode and I am not at home. I assume I can use PuTTY remotely once it awakens (for some reason Wake-on-LAN is not working very well, although it used to). When it does awaken, I will use the command you suggested and post the results here.
I seem to need more help than I thought. I can't even get a PuTTY session started, let alone issue a command. I see most of that info in the disks/storage GUI on the NAS.
You need to log on as admin@qnap.example.com and use the same password as through the web interface.
You may need to enable SSH (port 22) if that has not been done before.
I was attempting to log in as admin. How does one enable port 22 on this NAS? (I'm logged in as admin right now.)
Control Panel > Telnet & SSH; enable SSH there. Telnet is of no use.
Thought so. I did that, but I keep getting timed out when I launch PuTTY and connect to my NAS via <mynasname>.myqnapcloud.com.
Ah, no: you need to use your local address (192.168.x.x or whatever). Otherwise you also need to enable and set up port forwarding in your routers/firewalls, etc.
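From a machine on the same LAN the connection is plain SSH; 192.168.1.50 below is only a placeholder for the NAS's local IP (in PuTTY, the same address goes in the Host Name field with port 22 and connection type SSH):

# replace 192.168.1.50 with the NAS's actual LAN address
ssh admin@192.168.1.50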
But doesn't that mean I need to be on my local Wi-Fi?
ASKER CERTIFIED SOLUTION
(Members-only solution content not shown.)
I was able to log in by enabling port forwarding, and I issued this command: cat /proc/mdstat

Here are the results:
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid1 sda3[2] sdc3[1]
      2920311616 blocks super 1.0 [2/2] [UU]

md1 : active raid1 sdb3[0]
      1943559616 blocks super 1.0 [1/1] [U]

md322 : active raid1 sdc5[2](S) sda5[1] sdb5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdc2[2](S) sda2[1] sdb2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md13 : active raid1 sda4[24] sdc4[25] sdb4[0]
      458880 blocks super 1.0 [24/3] [UUU_____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sda1[24] sdc1[25] sdb1[0]
      530048 blocks super 1.0 [24/3] [UUU_____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>



Issuing the mdadm --detail command returns the same thing no matter which device I specify:

mdadm: /dev/sda3 does not appear to be an md device (I tried every other device besides sda3 as well)
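That error is expected: /dev/sda3 is a member partition, not an assembled array, and mdadm --detail only accepts md devices. To read the RAID superblock on a member partition, --examine is the right call:

[~] # mdadm --detail /dev/md2        # query an assembled array (an md device)
[~] # mdadm --examine /dev/sda3      # inspect the RAID superblock on a member partition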
I was finally able to get some information by issuing the command

mdadm --detail /dev/*

The results were:
mdadm: /dev/aer_inject does not appear to be an md device
mdadm: cannot open /dev/audio: No such device
mdadm: cannot open /dev/audio0: No such device
mdadm: /dev/audio1 does not appear to be an md device
mdadm: cannot open /dev/audio2: No such device
mdadm: cannot open /dev/audio3: No such device
mdadm: cannot open /dev/audio4: No such device
mdadm: /dev/autofs does not appear to be an md device
mdadm: /dev/bsg does not appear to be an md device
mdadm: /dev/bus does not appear to be an md device
mdadm: /dev/cachefiles does not appear to be an md device
mdadm: /dev/console does not appear to be an md device
mdadm: /dev/cpu does not appear to be an md device
mdadm: /dev/cpu_dma_latency does not appear to be an md device
/dev/dm-0:
        Version : 1.0
  Creation Time : Sun Jun 19 13:27:00 2016
     Raid Level : raid1
     Array Size : 1924120576 (1834.98 GiB 1970.30 GB)
  Used Dev Size : unknown
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Mon Jul  9 15:58:15 2018
Segmentation fault



Does this make any sense?  And how is this related to mirroring the system disk?
SOLUTION
(Members-only solution content not shown.)
Just so I am sure: the config is already mirrored, so if I insert a 6 TB drive in slot 4 and make it a pool with the current 2 TB drive (I didn't think I could do that), I would be okay? BTW, my output of the mdadm --detail /dev/md* command is:

[~] # mdadm --detail /dev/md*
/dev/md1:
        Version : 1.0
  Creation Time : Sun Jun 19 13:27:00 2016
     Raid Level : raid1
     Array Size : 1943559616 (1853.52 GiB 1990.21 GB)
  Used Dev Size : 1943559616 (1853.52 GiB 1990.21 GB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Mon Jul  9 16:37:31 2018
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : 1
           UUID : be48aafe:4259fd73:0fb62baa:56958f44
         Events : 14

    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
/dev/md13:
        Version : 1.0
  Creation Time : Sun Jun 19 13:26:46 2016
     Raid Level : raid1
     Array Size : 458880 (448.20 MiB 469.89 MB)
  Used Dev Size : 458880 (448.20 MiB 469.89 MB)
   Raid Devices : 24
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Jul  9 15:37:02 2018
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : 13
           UUID : e10f8abd:8d3832f9:b64dd3fa:0254d6b0
         Events : 95969

    Number   Major   Minor   RaidDevice State
      24       8        4        0      active sync   /dev/sda4
       0       8       20        1      active sync   /dev/sdb4
      25       8       36        2      active sync   /dev/sdc4
       6       0        0        6      removed
       8       0        0        8      removed
      10       0        0       10      removed
      12       0        0       12      removed
      14       0        0       14      removed
      16       0        0       16      removed
      18       0        0       18      removed
      20       0        0       20      removed
      22       0        0       22      removed
      24       0        0       24      removed
      26       0        0       26      removed
      28       0        0       28      removed
      30       0        0       30      removed
      32       0        0       32      removed
      34       0        0       34      removed
      36       0        0       36      removed
      38       0        0       38      removed
      40       0        0       40      removed
      42       0        0       42      removed
      44       0        0       44      removed
      46       0        0       46      removed
/dev/md2:
        Version : 1.0
  Creation Time : Wed Jul  6 19:14:39 2016
     Raid Level : raid1
     Array Size : 2920311616 (2785.03 GiB 2990.40 GB)
  Used Dev Size : 2920311616 (2785.03 GiB 2990.40 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Jul  9 16:37:26 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 2
           UUID : 64d5d4c1:342f661c:ef720dce:3c6476f1
         Events : 1158

    Number   Major   Minor   RaidDevice State
       2       8        3        0      active sync   /dev/sda3
       1       8       35        1      active sync   /dev/sdc3
/dev/md256:
        Version : 1.0
  Creation Time : Mon Jul  9 07:22:10 2018
     Raid Level : raid1
     Array Size : 530112 (517.77 MiB 542.83 MB)
  Used Dev Size : 530112 (517.77 MiB 542.83 MB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Jul  9 16:37:09 2018
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

           Name : 256
           UUID : c6b1440d:2716c132:82b2e08b:19502884
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8        2        1      active sync   /dev/sda2

       2       8       34        -      spare   /dev/sdc2
/dev/md322:
        Version : 1.0
  Creation Time : Mon Jul  9 07:22:10 2018
     Raid Level : raid1
     Array Size : 7235136 (6.90 GiB 7.41 GB)
  Used Dev Size : 7235136 (6.90 GiB 7.41 GB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Jul  9 07:22:11 2018
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

           Name : 322
           UUID : 4b2261f7:d8380f27:1063ac40:bcad750f
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8        5        1      active sync   /dev/sda5

       2       8       37        -      spare   /dev/sdc5
/dev/md9:
        Version : 1.0
  Creation Time : Sun Jun 19 13:26:43 2016
     Raid Level : raid1
     Array Size : 530048 (517.71 MiB 542.77 MB)
  Used Dev Size : 530048 (517.71 MiB 542.77 MB)
   Raid Devices : 24
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Jul  9 16:37:32 2018
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : 9
           UUID : 265866a2:1384cbae:bae4df5d:c25f6b71
         Events : 3520479

    Number   Major   Minor   RaidDevice State
      24       8        1        0      active sync   /dev/sda1
       0       8       17        1      active sync   /dev/sdb1
      25       8       33        2      active sync   /dev/sdc1
       6       0        0        6      removed
       8       0        0        8      removed
      10       0        0       10      removed
      12       0        0       12      removed
      14       0        0       14      removed
      16       0        0       16      removed
      18       0        0       18      removed
      20       0        0       20      removed
      22       0        0       22      removed
      24       0        0       24      removed
      26       0        0       26      removed
      28       0        0       28      removed
      30       0        0       30      removed
      32       0        0       32      removed
      34       0        0       34      removed
      36       0        0       36      removed
      38       0        0       38      removed
      40       0        0       40      removed
      42       0        0       42      removed
      44       0        0       44      removed
      46       0        0       46      removed
[~] #


SOLUTION
(Members-only solution content not shown.)
Although I have not put in the fourth drive yet to see if this actually works, I think noci answered all my questions, despite how basic some of them were. Thanks for bearing with me on this. Now I have upgraded the RAM (as I said, no mean feat) and feel a lot better about replacing the primary drive.

Thanks,

Tom