pg 2.b not deep-scrubbed since…

With the Ceph update that I made to 19.2.3 I got warnings like

pg 2.b not deep-scrubbed since 2026-02-27T00:56:11.986819+0100
pg 2.fa not deep-scrubbed since 2026-02-27T05:32:34.264221+0100

It seems that sometimes the nightly deep-scrub window is too short. I don't like warnings in the dashboard of my Proxmox cluster.
So I started to deep-scrub that placement group (PG) on demand during business hours :D

ceph pg deep-scrub 2.b

I have 2048 PGs over 48 OSDs of 8 TiB each. It took 20 minutes to complete that one PG.
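
If more than one PG is flagged, the on-demand scrub can be scripted. A minimal sketch, assuming the warnings in `ceph health detail` look exactly like the lines above (the PG id is the second field):

# deep-scrub every PG that is currently flagged as not deep-scrubbed
ceph health detail | grep 'not deep-scrubbed since' | awk '{print $2}' | \
  while read pg; do ceph pg deep-scrub "$pg"; done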

zfs snapshot sort by size

When you see your zpool filling up, but the “used” size of your datasets isn’t that much, you may want to take a look at your snapshots.

First, find the dataset that uses the most space for snapshots (sort -h handles the human-readable sizes like G and M):

zfs list -o name,usedbysnapshots | sort -h -r -k2

Example output:

tank/mails                                  56.3G
tank/store                                   261M
tank/docker                                 2.38M

In this case, the tank/mails dataset is the one to look at.

The snapshots of that dataset can then be listed with:

# general form: zfs list -t snapshot -r <dataset>
zfs list -t snapshot -r tank/mails
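
To see at a glance which snapshots are the big ones, the list can also be sorted by size (-s used sorts ascending by the used property, so the largest snapshots end up at the bottom):

zfs list -t snapshot -r tank/mails -o name,used -s used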

ceph scrubbing time

When you don’t schedule scrubbing of your Ceph pool, it will scrub itself exactly when you don’t want it to: during working hours. To avoid that, you can set the time window at night when scrubbing is allowed.

The `ceph config set` command is global for the cluster / all nodes.

root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_begin_hour 22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_end_hour 7
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
7
root@pve-01:/etc/ceph#
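
To review all scrub-related options that are currently set, the config database can be dumped and filtered:

ceph config dump | grep -i scrub

Note that, per the Ceph documentation, a PG whose scrub interval exceeds osd_scrub_max_interval will be scrubbed regardless of this time window.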

ZFS rename disk path

One mirror member shows up under a stable ID while the other one only shows the plain device node da3p4:

zpool status
  pool: tank
 state: ONLINE
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            usb-SanDisk_SDSSDA-1T00_0123456789CA-0:0  ONLINE       0     0     0
            da3p4                                     ONLINE       0     0     0

errors: No known data errors
The path of a vdev can be changed via its path vdev property (zpool set path=<new path> <pool> <vdev>):

zpool set path=/dev/disk/by-id/sdsdsdsd tank da3p4
zpool set path=/dev/gpt/my_nice_name tank da3p4
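
If your zpool version does not support setting vdev properties yet, the classic alternative is to export the pool and re-import it with a device search path (adjust the directory to wherever your stable names live):

zpool export tank
zpool import -d /dev/disk/by-id tank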

Just type `zpool import` and you will see the names of the pools that can be imported.

If you want to change the mount path, use

zpool import -R /other/path poolname

To rename the pool, use

zpool import original_name new_name
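
Renaming only works on import, so the pool has to be exported first. A complete round trip with a hypothetical new name:

zpool export tank
zpool import tank backup_tank
zpool list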
