How to tell:

  1. The R2S cannot save its configuration: the top-right corner keeps showing an "unsaved changes" notice that never goes away, even after clicking Save.
  2. Creating a file in the terminal fails with a "read-only file system" error (see the quick check below).
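A quick shell check for symptom 2 (the scratch path is arbitrary):

# on a read-only root the touch fails with "Read-only file system"
touch /root/.rw-test && rm /root/.rw-test || echo "root filesystem is read-only"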

Fix:

Run the following command in a terminal and press Enter at every prompt to accept the defaults:

e2fsck /dev/mmcblk0p2
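If you would rather not press Enter repeatedly, e2fsck's standard -y flag answers yes to every repair prompt for you (the device path assumes the same R2S layout as above):

e2fsck -y /dev/mmcblk0p2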

Cause:

Most likely a reboot caused by power loss.

The steps below walk through the analogous repair for a Synology NAS with a crashed volume.

1. First, SSH into the Synology box,

then switch to root:

ssh admin@ip -p port
sudo -i
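For example, with a placeholder address and port (substitute your own):

ssh admin@192.168.1.10 -p 22
sudo -i   # enter the admin password again if prompted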

2. The partition marked with (E) is the one reported as faulty, even though the data on it may actually be fine:

root@DS3617XS:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md3 : active raid1 sda3[0](E)
      2925444544 blocks super 1.2 [1/1] [E]

md1 : active raid1 sda2[0]
      2097088 blocks [12/1] [U___________]

md0 : active raid1 sda1[1]
      2490176 blocks [12/1] [_U__________]

unused devices: <none>
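On a box with several arrays, this one-liner (just a convenience) filters /proc/mdstat down to the flagged lines:

grep '(E)' /proc/mdstat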

Next, run the commands below and write down the Array UUID:

root@DS3617XS:~# mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Fri Dec 16 22:02:59 2022
     Raid Level : raid1
     Array Size : 2925444544 (2789.92 GiB 2995.66 GB)
  Used Dev Size : 2925444544 (2789.92 GiB 2995.66 GB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Mon Oct 23 21:55:29 2023
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : DS3617XS:3  (local to host DS3617XS)
           UUID : f6456c6a:4695f56b:6524b785:09c4efab
         Events : 40

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
root@DS3617XS:~# mdadm --examine /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : f6456c6a:4695f56b:6524b785:09c4efab
           Name : DS3617XS:3  (local to host DS3617XS)
  Creation Time : Fri Dec 16 22:02:59 2022
     Raid Level : raid1
   Raid Devices : 1

 Avail Dev Size : 5850889120 (2789.92 GiB 2995.66 GB)
     Array Size : 2925444544 (2789.92 GiB 2995.66 GB)
  Used Dev Size : 5850889088 (2789.92 GiB 2995.66 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : 5198b825:992c08e0:c15511c3:8e186b45

    Update Time : Mon Oct 23 21:55:29 2023
       Checksum : cb0a790f - correct
         Events : 40


   Device Role : Active device 0
   Array State : A ('A' == active, '.' == missing, 'R' == replacing)
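Before recreating anything, check that Events and Update Time agree between --detail and --examine. To avoid copy/paste mistakes, the Array UUID can also be captured into a variable (a sketch; device path as above):

UUID=$(mdadm --examine /dev/sda3 | awk '/Array UUID/ {print $4}')
echo "$UUID"   # should print f6456c6a:4695f56b:6524b785:09c4efab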

3. The repair command is:

root@DS3617XS:~# mdadm -Cf /dev/md3 -e1.2 -n1 -l1 /dev/sda3 -uf6456c6a:4695f56b:6524b785:09c4efab
mdadm: super1.x cannot open /dev/sda3: Device or resource busy
mdadm: /dev/sda3 is not suitable for this array.
mdadm: create aborted
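For reference, here is the same create command spelled out with long options; every value is pinned to the metadata recorded above (version 1.2, one device, raid1, the saved Array UUID):

mdadm --create --force /dev/md3 --metadata=1.2 --raid-devices=1 --level=1 \
      --uuid=f6456c6a:4695f56b:6524b785:09c4efab /dev/sda3

Re-creating in place only rewrites the md superblock, so the filesystem survives as long as the geometry stays identical, which is why the parameters are pinned explicitly.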

4. If it complains the device is busy

proceed as follows.

First, run the stop command; this step is mandatory:
root@DS3617XS:~# mdadm --stop /dev/md3
mdadm: stopped /dev/md3
If this step itself fails with a busy error, try the following:

# Stop all NAS services except SSH
> syno_poweroff_task -d

Then unmount the array yourself:
> umount /dev/md3
If it still complains the device is busy, work through the steps below.

Brute-force stop every installed package:
> synopkg list --name | xargs -I"{}" synopkg stop "{}"

Without lsof, find out which processes are using volume2:
root@syno:~# ls -l  /proc/*/fd | grep volume2
If this returns nothing, keep reading. If it does return matches, note that the grep strips the /proc/<PID>/fd: header lines, so you cannot tell which process owns each open file.
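A small loop (a sketch under the same volume2 assumption) resolves each open file back to a PID and command name:

for fd in /proc/[0-9]*/fd/*; do
    case "$(readlink "$fd" 2>/dev/null)" in
        /volume2*) pid=${fd#/proc/}; pid=${pid%%/*}
                   echo "$pid $(cat "/proc/$pid/comm" 2>/dev/null)";;
    esac
done | sort -u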
 
List the services that are still enabled and shut them down (the extra grep here narrows the output to the log-related ones as an example):
> synoservicecfg --list | xargs -I"{}" synoservice --status  "{}" | grep enable | grep log
Service [synologanalyzer] status=[enable]
Service [synologrotate] status=[enable]
Service [syslog-acc] status=[enable]
Service [syslog-ng] status=[enable]
Service [syslog-notify] status=[enable]

# stop each one by name
> synoservice --stop synologanalyzer
> synoservice --stop s2s_daemon
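To script the same idea, a hedged sketch reusing the xargs pattern from above (the SSH service's exact name varies across DSM versions, so check synoservicecfg --list before trusting the grep):

synoservicecfg --list | grep -v ssh | xargs -I"{}" synoservice --stop "{}"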

Once services are stopped down to only SSH, run the command below. Beware: if you stop the SSH service itself while connected over SSH, your session drops immediately.
> umount -l /dev/md3
Type exit twice to leave the SSH session, then reconnect to the NAS over SSH.
At this point mdadm --stop /dev/md3 should succeed, and you can then run the repair command.
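Before re-running the repair, it's worth confirming the array has really been released:

mount | grep md3 || echo "nothing mounted from md3"
cat /proc/mdstat   # after a successful --stop, md3 disappears from this list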

If none of the above works, you can manually install opkg and then install lsof to see which processes are holding the volume:

# install Entware (the x64 kernel-3.2 build; pick the tree matching your CPU/kernel)
wget -O - http://bin.entware.net/x64-k3.2/installer/generic.sh | sh
# add /opt/bin and /opt/sbin to PATH in /root/.profile so the opkg binaries resolve
vim /root/.profile
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin:/opt/bin:/opt/sbin
# start Entware, install lsof, then inspect the volume backed by the busy array
/opt/etc/init.d/rc.unslung start
opkg install lsof
lsof /volume1   # use the volume your array backs (/volume2 in the earlier example)

5. Once the stop succeeds, run the repair command:

root@DS3617XS:~# mdadm -Cf /dev/md3 -e1.2 -n1 -l1 /dev/sda3 -uf6456c6a:4695f56b:6524b785:09c4efab
mdadm: /dev/sda3 appears to be part of a raid array:
       level=raid1 devices=1 ctime=Fri Dec 16 22:02:59 2022
Continue creating array? y
mdadm: array /dev/md3 started.
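Before rebooting, a quick sanity check confirms the array came back clean with the recorded UUID:

cat /proc/mdstat
mdadm --detail /dev/md3 | grep -E 'State|UUID'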

That's it; reboot and everything should be back to normal.

The manual opkg installation steps are adapted from: https://blog.peos.cn/2023/08/06/opkg.html
The article is adapted from: https://ilmvfx.wordpress.com/2020/10/24/manually-fix-synology-crashed-volume/