[Tutorial] Synology EXT4 to BTRFS, plus breaking a RAID1 apart straight into Basic (pseudo-tutorial, kept for my own records)

This post was last edited by rfdingo on 2019-3-19 15:41

The notes below were written by a newbie (me) for fellow newbies; if there are mistakes or omissions, please add corrections. Thanks.

=====================
Update 2019-03-18
Recently I found I couldn't update DSM. Reason: no free space on the system partition (and I never did find out what was eating the space).
So my suggestion is: if you only have a limited number of disks, reinstall DSM once after the format conversion is done (it doesn't touch your data anyway).
Order of operations: back up the config first (you can also choose to keep the current admin account password), then with the unit running press the RESET hole for 4 seconds > one beep > release > press the RESET hole again for 4 seconds > 3 beeps > run the normal installation procedure. (If you want to check the system-partition usage yourself first, see the sketch right below.)
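Purely my own addition, not from the original post: if SSH is already enabled you can get a rough idea of how full the system partition is before deciding whether to reinstall. Whether DSM's bundled du/sort take exactly these flags is my assumption.

# how full is the root (system) partition
sudo df -h /
# rough look at which top-level folders are eating space on it
# (-x stays on this filesystem only, so /volume1, /volume2 etc. are skipped)
sudo du -x -d1 / 2>/dev/null | sort -n | tail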
=====================

Synology snapshots only work on BTRFS volumes.
But when I got my 918+, the disks were already EXT4 from before I went legit,
and since I didn't have two spare HDDs to migrate with at the time, I just put the old "unofficial" disks straight into the 918+ (and somehow they just worked, no idea how).

After putting up with that for ages, I tried to play with Virtual Machine Manager and found it won't install onto EXT4, so I finally had to bite the bullet and look for a tutorial.

https://forum.synology.com/enu/v ... 16038&start=120
The original tutorial is EXT4 (SHR) to BTRFS (SHR).
My own case was EXT4 (RAID1) to BTRFS (SHR RAID1), and just now also BTRFS (SHR RAID1) > BTRFS (RAID1).
=================================
=============================
What follows is a high-risk activity; proceed entirely at your own risk.
============================
Before doing anything, make an extra set of backups; during the process the data on each of the two RAID1 disks will be wiped at least once.
Also, the packages have to be sorted out manually afterwards; there's no way to migrate them completely intact (maybe I just don't know how, I'm useless).
If your run of the process turns out differently, that's just your bad luck.
==================================

In the steps below, Tray 1 Disk A + Tray 2 Disk B = the original RAID1 volume1 on the Synology.
Disk C is a third disk (tray 3 / external) used as the backup disk (if you even manage to get the backup wrong, nothing can save you).

==========================
Original English steps by BinBin2000

I did the following steps:
1. Make a complete backup of the whole volume, including all settings, through HyperBackup, in case anything would go wrong.
First make your backups onto Disk C (tray 3 / external); do everything you can think of. On top of HyperBackup, I also backed up:
the Synology's own config
the Surveillance Station camera config (export it from inside Surveillance Station)
exports of all the Docker images
exports of all the VM images (during the process Virtual Machine Manager could neither be moved nor stopped; I didn't look into it further and just uninstalled it)
(For the Docker part there's a small CLI sketch right after this list.)
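If you prefer to export the Docker images over SSH rather than through the Docker GUI, something like this should work; the image name and backup path below are placeholders, not from my actual setup.

# list the images currently on the box
sudo docker images
# save one image as a tar file onto the backup disk (name/path are examples only)
sudo docker save -o /volume3/backup/myimage.tar myimage:latest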

2. Physically removed one of the disks. This will cause the RAID to be "degraded", but still fully functional.
With the unit powered on, physically pull out Tray 1 Disk A; the RAID1 in DSM will go into a degraded state.

3. Reboot the NAS with the disk still removed. This will prevent the removed disk from being instantly recognized when being put back again.
With Tray 1 Disk A still removed, reboot the NAS (you may want to turn off the warning beep first).

4. Put the removed disk back in its slot. The disk will NOT be automatically recognized as volume1 if you followed the steps above.
Once it has booted back into DSM, re-insert Tray 1 Disk A. The system will not automatically rebuild volume 1 (RAID1), and you can move on to the next step. (If you want to double-check the array state, see the sketch below.)
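My own extra sanity check, not part of the original steps: if you already have SSH enabled (the PuTTY section further down covers that), you can confirm the array is still degraded and Disk A hasn't been pulled back in.

# a [U_] next to the array means the RAID1 is running on one disk only (degraded)
sudo cat /proc/mdstat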


5. Create a new Disk group with the disk you just inserted. I'm not completely sure if a disk group is needed, but I didn't want to take any chances.
Go to Storage Manager and create a new Disk Group with the disk you just inserted (if you skip the Disk Group, you can go straight to creating the new volume 2 in the next step). If you want plain RAID1 rather than SHR-based RAID1, remember to pick the manual/custom setup, not the quick setup (the quick one won't let you choose).

6. Create a new BTRFS volume on the disk you just inserted. This will be your new BTRFS volume and will be called "volume2".
Then create the new volume 2 (the numbering depends on your own box) and choose BTRFS (don't choose EXT4 unless you fancy going through all of this again).

7. Move all the data from volume1 to volume2 through DSM. For example, in the settings for each of the shared folders, you can choose on which volume the shared folder should reside. Changing the volume here will cause the data to be automatically moved to the new volume.
Then go to Control Panel > Shared Folder settings and move the volume 1 shared folders over to volume 2 one at a time (how long it takes depends on how much data you have; it took me a whole day, and you have to wait for one to finish before you can start the next, there's no way to select them all at once). After each folder finishes you can compare sizes, as in the sketch below.
If something is holding on to a folder and it can't be moved, DSM will ask you to stop those services first.
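Again just my own check, not in the original tutorial: note a folder's size before moving it, then confirm the copy on the new volume roughly matches. The folder name is a placeholder.

# before moving: note the folder's size on the old volume
sudo du -sh /volume1/<sharefolder>
# after the move finishes: the size on the new volume should roughly match
sudo du -sh /volume2/<sharefolder>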

8. Continue with step 7 until everything has been moved to volume2. I didn't take notes on where to find each of the volume settings, so unfortunately you will have to find them yourself, depending on which packages you are using.

9. Move the DSM applications to the new volume by following THIS guide. In my case, I had to repair some of the applications through the Package center in order to get them to work again.

The original post walks you through using SSH to move the packages to the new volume one by one, but instead I found move_syno_pkgs.sh written by Sebastian Ott, linked from
http://www.mcleanit.ca/blog/synology-move-application-volumes/
which does the whole move automatically in one run (it asks you y/n so you decide what gets moved):
https://gist.github.com/nobodypb/fc3e70b535bcd95b5de7659d6fbda434

First download move_syno_pkgs.sh
and open it in Notepad or similar to edit these two lines:

Source (match the volume number of your own box)
SOURCE=/volume1
Destination (match the volume number of your own box)
DEST=/volume2
Once saved, upload it to a shared folder on the NAS and note down the path, e.g.:

/volume<N>/<sharefolder>/move_syno_pkgs.sh
(If you'd rather edit SOURCE/DEST over SSH instead of Notepad, see the one-liner below.)
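Optional and purely my own sketch: once the script is sitting in the shared folder, you could set SOURCE/DEST in one go from the SSH session instead of editing it in Notepad. Adjust the volume numbers and the path to your own box.

# rewrite the SOURCE= and DEST= lines inside the script in place
sudo sed -i 's|^SOURCE=.*|SOURCE=/volume1|; s|^DEST=.*|DEST=/volume2|' /volume<N>/<sharefolder>/move_syno_pkgs.sh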


Tools you'll need / things to set up ==========
PuTTY < used to SSH into the NAS
https://www.putty.org/
Go to Control Panel > Terminal & SNMP,
tick the SSH service, then apply and save (turn it off yourself once you're done; if you use SFTP and the like, it's probably been on for ages anyway)
=============================

In PuTTY, enter the NAS IP; leave the port at the default 22.
When the command-line window appears,
log in with an account that has admin/root privileges.
Then enter this line to make the script executable (the guide says sudo isn't needed, but it didn't work for me without it, so adapt as necessary):
sudo chmod +x /volume<N>/<sharefolder>/move_syno_pkgs.sh
Then enter this:
sudo /volume<N>/<sharefolder>/move_syno_pkgs.sh
It will look up the path of each existing package/app,
create the folders on the new volume,
then stop each package in turn, ask whether you want to move it, and move it to the new location. (To double-check where each package ended up, see the sketch below.)
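One more check of my own: as far as I know, each package on DSM has a /var/packages/<package>/target symlink pointing at the @appstore folder of whichever volume it lives on, so you can see at a glance what has actually moved. Treat the exact path as my assumption, not something from the original tutorial.

# after the script runs, these symlinks should point at /volume2/@appstore/<package>
ls -l /var/packages/*/target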


10. When everything has been removed, delete volume1 through the Storage manager. If you have forgotten to move anything, the Storage manager will prompt you with an error. This is how I found everything in DSM that required a change.
When it looks like everything has been moved and you're reasonably sure, you can go to Storage Manager and remove Volume 1 and its Disk Group (if you created one).
(Look carefully and do NOT delete the new disk by mistake; nobody will be able to save you.)
If some package or anything else is still hooked onto it, DSM won't let you delete it; as mentioned above, Virtual Machine Manager was the one I simply deleted. (A quick look at what's still left on volume1 is sketched below.)
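Before pulling the trigger I also had a quick look at what was still sitting on the old volume; this is only my own habit, and DSM keeps some of its own @-folders there, so don't expect it to be completely empty.

# whatever is still left on the old volume (DSM system @-folders are normal leftovers)
ls -a /volume1
# rough comparison of used space on the old and new volume
sudo df -h /volume1 /volume2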

11. The previous step will cause the other disks to be unused. Expand the Disk group with these disks.
Once the old volume and Disk Group (if you had one) are gone,
you can go to the new Disk Group and add the Tray 2 Disk B from above into it (if you didn't create a Disk Group, just add the disk directly to the volume in the next step).

12. Expand volume2 with the unallocated space from the newly included disks.
Then expand volume 2 into RAID1, and the main job is done; give it about another day to sync. You can follow the rebuild over SSH as in the sketch below.
After that you still have to tidy up the package settings (mine is still syncing as I write this, I'll deal with it tomorrow).
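If you'd rather watch the rebuild from the command line than from Storage Manager, the same command as in the bonus section below shows the progress; nothing here is specific to my box.

# the recovery line shows how far the RAID1 rebuild has got and the estimated finish time
sudo cat /proc/mdstat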

13. Voila! Now you have a BTRFS volume with everything from your EXT4 volume still intact.

=====================================
Apps known to stop working after the move and to need setting up again:
Docker
Cloud Sync
Surveillance Station permissions? Someone mentioned there's another pile of things to redo there
==============================
============ Bonus feature
Also included: the time I pulled one disk out of a RAID1 and converted what was left back to a Basic single disk without wiping the data. If you don't know what you're doing, please don't try this.

Reference
https://forum.synology.com/enu/viewtopic.php?t=136470

Using the same tools as above:
PuTTY < used to SSH into the NAS
https://www.putty.org/
Go to Control Panel > Terminal & SNMP,
tick the SSH service, then apply and save (turn it off yourself once you're done; if you use SFTP and the like, it's probably been on for ages anyway)
=============================

In PuTTY, enter the NAS IP; leave the port at the default 22.
When the command-line window appears,
log in with an account that has admin/root privileges.

First take a look at the current list of md arrays:

sudo cat /proc/mdstat

My situation: the 918+ had two RAID1 arrays (one 8TB and one 1TB), and I wanted to break up the 1TB RAID1.
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sdb5[1] sda5[0]               < the 8TB one, but still in a syncing/recovering state
      7809195456 blocks super 1.2 [2/1] [U_]
      [==========>..........]  recovery = 50.2% (3920641664/7809195456) finish=509.0min speed=127320K/sec
md4 : active raid1 sdd3[0]
      971940544 blocks super 1.2 [2/1] [U_]       <-----  the 1TB RAID1, with one disk forcibly dropped
md1 : active raid1 sda2[3] sdb2[2] sdd2[1]      <--- this one's tiny, shouldn't be relevant; don't touch it
      2097088 blocks [12/3] [_UUU________]
md0 : active raid1 sda1[3] sdb1[2] sdd1[1]      <--- this one's tiny, shouldn't be relevant; don't touch it
      2490176 blocks [12/3] [_UUU________]

Then enter
sudo mdadm --grow /dev/md4 --raid-devices=1 --force

If there's no error, enter
sudo mdadm --detail /dev/md4
to check the status again, and out comes:
/dev/md4:
        Version : 1.2
  Creation Time : Sun Jul  8 22:20:21 2018
     Raid Level : raid1
     Array Size : 971940544 (926.91 GiB 995.27 GB)
  Used Dev Size : 971940544 (926.91 GiB 995.27 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat Feb  9 18:51:33 2019
          State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
  Spare Devices : 0

           Name : SD2016:4  (local to host xxxxx )
           UUID : 8f5xxxxxxxxxxxxxxxxxxxxxxxxxcc
         Events : 2177

    Number   Major   Minor   RaidDevice State
       0       8       51        0      active sync   /dev/sdd3
       1       0        0        1      removed

Done.