Just a quick bash script that shows, for every dataset in a pool, what changed since a given snapshot:
SNAP=backup-zfs_2026-04-24_10:34:38
POOL=tank
# Walk all datasets in the pool and, where the snapshot exists,
# show what has changed since it was taken.
zfs list -H -o name -r "$POOL" | while read -r ds; do
  if zfs list -H -t snapshot "${ds}@${SNAP}" >/dev/null 2>&1; then
    echo "[DATASET] $ds"
    zfs diff "${ds}@${SNAP}"
  fi
done
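If you want to compare two snapshots instead of a snapshot against the live filesystem, zfs diff also accepts a second snapshot. A minimal variant of the loop body, with SNAP2 as a hypothetical second snapshot name:

zfs diff "${ds}@${SNAP}" "${ds}@${SNAP2}"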
After the Ceph update I made to 19.2.3 I got warnings like
pg 2.b not deep-scrubbed since 2026-02-27T00:56:11.986819+0100
pg 2.fa not deep-scrubbed since 2026-02-27T05:32:34.264221+0100
It seems that sometimes the nightly deep-scrub window is too short. I don't like warnings in the dashboard of my Proxmox cluster.
So I started to deep-scrub the affected placement groups (PGs) on demand during business hours :D
ceph pg deep-scrub 2.b
I have 2048 PGs across 48 OSDs of 8 TiB each. It took 20 minutes to complete that one PG.
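If several PGs are overdue at once, a small loop over the health output can kick them all off. This is just a sketch that assumes the warning text still reads "not deep-scrubbed since" and that the PG id is the second field:

ceph health detail | awk '/not deep-scrubbed since/ {print $2}' | while read -r pg; do
  ceph pg deep-scrub "$pg"
done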
# ip link set dev <interface> down
ip link set dev eth0 down
# ip link set dev <interface> up
ip link set dev eth0 up
# /sbin/ifconfig <interface> up
# /sbin/ifconfig <interface> down
Display a list of crash messages:
ceph crash ls
Read a message:
ceph crash info <id>
Mark a message as read:
ceph crash archive <id>
Or mark all as read:
ceph crash archive-all
That is a pretty annoying error message. To get rid of it:
apt install pve-kernel-helper
When you see your zpool exploding, but the “used” size isn’t that much, you may take a look at your snapshots.
First find the dataset that has the largest snapshot usage:
zfs list -o name,usedbysnapshots | sort -hr -k2
Example output:
tank/mails     56.3G
tank/store      261M
tank/docker    2.38M
In this case the tank/mails dataset should be looked at.
Snapshots can then be listed for that filesystem using the command
zfs list -t snapshot -r tank/mails
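If you want to see which snapshots of that dataset hold the most space, zfs can sort the listing itself (using the tank/mails example from above):

zfs list -t snapshot -r -o name,used -S used tank/mails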
Some commands for extended zpool status:
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c health
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c hours_on
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c lsblk
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c smart_test
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c temp
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c upath
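The column scripts behind -c usually live in /etc/zfs/zpool.d (the exact path may differ on your distribution), so listing that directory shows what else is available:

ls /etc/zfs/zpool.d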
When you don’t scrub your Ceph pool yourself, it will scrub itself when you don’t want it to: during working hours. To avoid that, you can set the window at night when scrubbing is allowed.
Note that `ceph config set` is global for the cluster / all nodes.
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_begin_hour 22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_end_hour 7
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
7
sudo mount -t cifs -o user=dummy,domain=example.local,uid=$(id -u),gid=$(id -g),forceuid,forcegid,vers=2.0 //files.example.local/share ~/P
zpool status
  pool: tank
 state: ONLINE
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            usb-SanDisk_SDSSDA-1T00_0123456789CA-0:0  ONLINE       0     0     0
            da3p4                                     ONLINE       0     0     0

errors: No known data errors
zpool set path=/dev/disk/by-id/sdsdsdsd tank da3p4
zpool set path=/dev/gpt/my_nice_name tank da3p4
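If you don't know the by-id name of the disk yet, listing /dev/disk/by-id and filtering for the model is one way to find it (Linux path, and the grep pattern is just an example):

ls -l /dev/disk/by-id/ | grep -i sandisk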
Just type `zpool import` and you will see the names of the pools that you can import.
If you want to change the mount path, use
zpool import -R /other/path poolname
To rename the pool, use
zpool import original_name new_name
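The options can be combined, for example importing the pool under another root and renaming it in one go (names are just placeholders):

zpool import -R /mnt/recovery original_name new_name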