List the storage layout:
# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0   2,7T  0 disk  
├─sda1              8:1    0    10M  0 part  
├─sda2              8:2    0   477M  0 part  
│ └─md0             9:0    0 476,7M  0 raid1 /boot
├─sda3              8:3    0  55,9G  0 part  
│ └─md1             9:1    0  55,9G  0 raid1 /
└─sda4              8:4    0   2,7T  0 part  
  └─md2             9:2    0   2,7T  0 raid1 
    └─md2_crypt   253:0    0   2,7T  0 crypt /data
sdb                 8:16   0   2,7T  0 disk  
├─sdb1              8:17   0    10M  0 part  
├─sdb2              8:18   0   477M  0 part  
│ └─md0             9:0    0 476,7M  0 raid1 /boot
├─sdb3              8:19   0  55,9G  0 part  
└─sdb4              8:20   0   2,7T  0 part  
  └─md2             9:2    0   2,7T  0 raid1 
    └─md2_crypt   253:0    0   2,7T  0 crypt /data
sdc                 8:32   0   2,7T  0 disk  
└─sdc1              8:33   0   2,7T  0 part  
  └─data-nobackup 253:1    0   2,7T  0 crypt /nobackup
sdd                 8:48   0   7,3T  0 disk  
sde                 8:64   0 238,5G  0 disk  
sr0                11:0    1  1024M  0 rom   

sda is the remaining working disk.
sdb is the failed disk.
sdd is the new 8 TB disk.
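
Before pulling hardware, it is worth double-checking which device name maps to which physical drive, e.g. by matching the serial number against the label on the drive. A minimal sketch, assuming smartmontools is installed:

# ls -l /dev/disk/by-id/ | grep -w sdb
# smartctl -i /dev/sdb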




Identify the failed disk:
# cat /proc/mdstat                       
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda4[0] sdb4[1]
      2871042048 blocks super 1.2 [2/2] [UU]
      bitmap: 2/22 pages [8KB], 65536KB chunk

md1 : active raid1 sda3[0]
      58560512 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda2[0] sdb2[1]
      488128 blocks super 1.2 [2/2] [UU]

unused devices: <none>

# sync

Attempt to remove the failed disk's partition from md1:

# mdadm --manage /dev/md1 --remove /dev/sdb3
mdadm: hot remove failed for /dev/sdb3: No such device or address

This failed because the kernel had already dropped sdb3 from md1 for us, as the [U_] and missing sdb3 in the mdstat output above show.
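
For the case where mdadm still lists a failed member whose device node is gone, mdadm also accepts the keywords failed and detached in place of a device name. A sketch of that variant (not needed here):

# mdadm --manage /dev/md1 --remove detached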




Copy the partition table from the good disk to the new disk.
# sfdisk -d /dev/sda | sfdisk /dev/sdd
Checking that no-one is using this disk right now ... OK

Disk /dev/sdd: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WDC WD80EFAX-68K
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new GPT disklabel (GUID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).
/dev/sdd1: Created a new partition 1 of type 'BIOS boot' and of size 10 MiB.
/dev/sdd2: Created a new partition 2 of type 'Linux RAID' and of size 477 MiB.
/dev/sdd3: Created a new partition 3 of type 'Linux RAID' and of size 55,9 GiB.
/dev/sdd4: Created a new partition 4 of type 'Linux RAID' and of size 2,7 TiB.
/dev/sdd5: Done.

New situation:
Disklabel type: gpt
Disk identifier: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Device         Start        End    Sectors  Size Type
/dev/sdd1       2048      22527      20480   10M BIOS boot
/dev/sdd2      22528     999423     976896  477M Linux RAID
/dev/sdd3     999424  118185983  117186560 55,9G Linux RAID
/dev/sdd4  118185984 5860532223 5742346240  2,7T Linux RAID

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
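
Besides eyeballing lsblk below, one way to verify the copy is to diff the two sfdisk dumps; the device paths, disk size and possibly the UUID lines will differ, but the partition start/size columns should match (assumes a shell with process substitution, e.g. bash):

# diff <(sfdisk -d /dev/sda) <(sfdisk -d /dev/sdd)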




List the storage layout and confirm that sda and sdd now have the same partition layout:
# lsblk                  
NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0   2,7T  0 disk  
├─sda1              8:1    0    10M  0 part  
├─sda2              8:2    0   477M  0 part  
│ └─md0             9:0    0 476,7M  0 raid1 /boot
├─sda3              8:3    0  55,9G  0 part  
│ └─md1             9:1    0  55,9G  0 raid1 /
└─sda4              8:4    0   2,7T  0 part  
  └─md2             9:2    0   2,7T  0 raid1 
    └─md2_crypt   253:0    0   2,7T  0 crypt /data
sdb                 8:16   0   2,7T  0 disk  
├─sdb1              8:17   0    10M  0 part  
├─sdb2              8:18   0   477M  0 part  
│ └─md0             9:0    0 476,7M  0 raid1 /boot
├─sdb3              8:19   0  55,9G  0 part  
└─sdb4              8:20   0   2,7T  0 part  
  └─md2             9:2    0   2,7T  0 raid1 
    └─md2_crypt   253:0    0   2,7T  0 crypt /data
sdc                 8:32   0   2,7T  0 disk  
└─sdc1              8:33   0   2,7T  0 part  
  └─data-nobackup 253:1    0   2,7T  0 crypt /nobackup
sdd                 8:48   0   7,3T  0 disk  
├─sdd1              8:49   0    10M  0 part  
├─sdd2              8:50   0   477M  0 part  
├─sdd3              8:51   0  55,9G  0 part  
└─sdd4              8:52   0   2,7T  0 part  
sde                 8:64   0 238,5G  0 disk  
sr0                11:0    1  1024M  0 rom   

Note: sdd4 is only 2.7 TB for now, since sfdisk copied the old partition sizes verbatim. We will have to resize it later.
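
For reference, that later resize will look roughly like the following, and only pays off once every member of md2 sits on a big-enough partition (a sketch; the last step assumes the filesystem inside md2_crypt is ext4, so adjust it to whatever is actually in use):

# parted /dev/sdd resizepart 4 100%
# mdadm --grow /dev/md2 --size=max
# cryptsetup resize md2_crypt
# resize2fs /dev/mapper/md2_crypt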



Add the partitions on the new disk to the existing mirrors, starting with md0:
# mdadm --manage /dev/md0 --add /dev/sdd2 
mdadm: added /dev/sdd2



Verify with mdstat:
# cat /proc/mdstat                         
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid1 sda4[0] sdb4[1]
      2871042048 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md1 : active raid1 sda3[0]
      58560512 blocks super 1.2 [2/1] [U_]
      
md0 : active raid1 sdd2[2](S) sda2[0] sdb2[1]
      488128 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

Note: the (S) means that sdd2 is a spare. md0 still has two active members (sda2 and sdb2), so the new partition will sit idle until one of them is failed out below.



More details:
# mdadm --detail /dev/md0             
/dev/md0:
           Version : 1.2
     Creation Time : Sat Apr  1 20:29:37 2017
        Raid Level : raid1
        Array Size : 488128 (476.69 MiB 499.84 MB)
     Used Dev Size : 488128 (476.69 MiB 499.84 MB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Fri Aug  7 15:58:26 2020
             State : clean 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : corvina:0  (local to host corvina)
              UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
            Events : 391

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2

       2       8       50        -      spare   /dev/sdd2




Repeat for next partition:
# mdadm --manage /dev/md1 --add /dev/sdd3
mdadm: added /dev/sdd3

Check status:
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid1 sda4[0] sdb4[1]
      2871042048 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md1 : active raid1 sdd3[2] sda3[0]
      58560512 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  1.2% (710400/58560512) finish=5.4min speed=177600K/sec
      
md0 : active raid1 sdd2[2](S) sda2[0] sdb2[1]
      488128 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>



Note: md1 is currently syncing. Unlike md0, it was degraded, so the new partition started rebuilding immediately instead of becoming a spare.
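To follow the rebuild without retyping the command:

# watch -n 5 cat /proc/mdstat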
More details:
# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sat Apr  1 20:29:51 2017
        Raid Level : raid1
        Array Size : 58560512 (55.85 GiB 59.97 GB)
     Used Dev Size : 58560512 (55.85 GiB 59.97 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Aug  7 16:03:21 2020
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 23% complete

              Name : corvina:1  (local to host corvina)
              UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
            Events : 230045

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       2       8       51        1      spare rebuilding   /dev/sdd3


Repeat for the next partition:
# mdadm --manage /dev/md2 --add /dev/sdd4        
mdadm: added /dev/sdd4

Check status:
# cat /proc/mdstat        
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid1 sdd4[2](S) sda4[0] sdb4[1]
      2871042048 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md1 : active raid1 sdd3[2] sda3[0]
      58560512 blocks super 1.2 [2/1] [U_]
      [========>............]  recovery = 42.0% (24620352/58560512) finish=3.8min speed=146289K/sec
      
md0 : active raid1 sdd2[2](S) sda2[0] sdb2[1]
      488128 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>



More details:
# mdadm --detail /dev/md2    
/dev/md2:
           Version : 1.2
     Creation Time : Sat Apr  1 20:30:05 2017
        Raid Level : raid1
        Array Size : 2871042048 (2738.04 GiB 2939.95 GB)                                                 
     Used Dev Size : 2871042048 (2738.04 GiB 2939.95 GB)                                                 
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Aug  7 16:04:42 2020
             State : clean 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : bitmap

              Name : corvina:2  (local to host corvina)                                                  
              UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx                                                 
            Events : 59317

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4                                          
       1       8       20        1      active sync   /dev/sdb4                                          

       2       8       52        -      spare   /dev/sdd4



Mark the failed disk's partition as failed in md0:
# mdadm --manage /dev/md0 --fail /dev/sdb2    
mdadm: set /dev/sdb2 faulty in /dev/md0




Syslog says:
2020 Aug  7 16:09:53 corvina md/raid1:md0: Disk failure on sdb2, disabling device.
md/raid1:md0: Operation continuing on 1 devices.




Check status:
# cat /proc/mdstat             
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid1 sdd4[2](S) sda4[0] sdb4[1]
      2871042048 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md1 : active raid1 sdd3[2] sda3[0]
      58560512 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sdd2[2] sda2[0] sdb2[1](F)
      488128 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

Remove the failed partition from md0:
# mdadm --manage /dev/md0 --remove /dev/sdb2    
mdadm: hot removed /dev/sdb2 from /dev/md0
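
If the failed disk still responds and is headed for reuse or disposal, it may be worth wiping the md metadata from the removed partition so nothing ever tries to assemble it again:

# mdadm --zero-superblock /dev/sdb2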

Check status:
# cat /proc/mdstat                       
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid1 sdd4[2](S) sda4[0] sdb4[1]
      2871042048 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md1 : active raid1 sdd3[2] sda3[0]
      58560512 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sdd2[2] sda2[0]
      488128 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>



Details:
# mdadm --detail /dev/md0            
/dev/md0:
           Version : 1.2
     Creation Time : Sat Apr  1 20:29:37 2017
        Raid Level : raid1
        Array Size : 488128 (476.69 MiB 499.84 MB)
     Used Dev Size : 488128 (476.69 MiB 499.84 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Aug  7 16:11:20 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : corvina:0  (local to host corvina)
              UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
            Events : 411

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       50        1      active sync   /dev/sdd2



Repeat for the last partition. (md1 needs no fail/remove step here, since the kernel had already dropped sdb3.)

Mark the partition as failed:
# mdadm --manage /dev/md2 --fail /dev/sdb4     
mdadm: set /dev/sdb4 faulty in /dev/md2

Syslog says:
2020 Aug  7 16:12:48 corvina md/raid1:md2: Disk failure on sdb4, disabling device.
md/raid1:md2: Operation continuing on 1 devices.



Check status:

# cat /proc/mdstat                                
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid1 sdd4[2] sda4[0] sdb4[1](F)
      2871042048 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  0.1% (5448960/2871042048) finish=331.0min speed=144278K/sec
      bitmap: 0/22 pages [0KB], 65536KB chunk

md1 : active raid1 sdd3[2] sda3[0]
      58560512 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sdd2[2] sda2[0]
      488128 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
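
The 331-minute estimate is bounded by the kernel's resync throttling. If the machine is otherwise idle, the limits can be inspected and raised (the 200000 KB/s value is just an example):

# cat /proc/sys/dev/raid/speed_limit_min
# cat /proc/sys/dev/raid/speed_limit_max
# echo 200000 > /proc/sys/dev/raid/speed_limit_min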




Details:
# mdadm --detail /dev/md2       
/dev/md2:
           Version : 1.2
     Creation Time : Sat Apr  1 20:30:05 2017
        Raid Level : raid1
        Array Size : 2871042048 (2738.04 GiB 2939.95 GB)
     Used Dev Size : 2871042048 (2738.04 GiB 2939.95 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Aug  7 16:13:43 2020
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 1

Consistency Policy : bitmap

    Rebuild Status : 0% complete

              Name : corvina:2  (local to host corvina)
              UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
            Events : 59330

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       2       8       52        1      spare rebuilding   /dev/sdd4

       1       8       20        -      faulty   /dev/sdb4





Wait for the arrays to finish syncing.
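
Once the arrays are in sync, the new disk still needs a boot loader, since only the RAID members were copied, not the contents of the BIOS boot partition. A sketch, assuming GRUB and a Debian-style system:

# grub-install /dev/sdd
# update-initramfs -u

The array UUIDs did not change, so /etc/mdadm/mdadm.conf most likely needs no edits; diffing it against the output of 'mdadm --detail --scan' will confirm.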



