Ceph scrubbing time

If you do not tell Ceph when to scrub your pool, it will scrub itself exactly when you do not want it to: during working hours. To avoid that, you can set a window at night during which scrubbing is allowed.

A value applied with ceph config set is stored globally, so it affects the whole cluster / all nodes.

root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_begin_hour 22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_end_hour 7
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
7
root@pve-01:/etc/ceph#
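
To double-check the scrub window afterwards, you can filter the cluster configuration dump for scrub-related options. This is just a quick sketch; the exact output columns vary between Ceph releases:

ceph config dump | grep osd_scrub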

Proxmox: How to resolve “service start-limit-hit”

The error message in the Proxmox GUI:

Job for [email protected] failed.
See "systemctl status [email protected]" and "journalctl -xe" for details.
TASK ERROR: command '/bin/systemctl start ceph-mgr@pve-03' failed: exit code 1

The error message from systemctl:

[email protected]: Start request repeated too quickly.
[email protected]: Failed with result 'start-limit-hit'.

Solve it with:

systemctl reset-failed ceph-mgr@pve-03
systemctl start ceph-mgr@pve-03

Change pve-03 to your node name.
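
If the service immediately fails again, it is worth checking why it kept hitting the start limit before you reset it once more. A minimal sketch, again assuming pve-03 is your node name:

# show the unit state and the most recent log lines of the manager daemon
systemctl status ceph-mgr@pve-03
journalctl -u ceph-mgr@pve-03 -n 50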

Proxmox qmp command blockdev-snapshot-delete-internal-sync failed

While trying to move a VM from one node to another, I got the error message:

VM 100 qmp command ‘blockdev-snapshot-delete-internal-sync’ failed – Failed to get a snapshot list: Operation not supported

One snapshot was stuck and the VM was locked. How do you solve this?

qm unlock 100
qm listsnapshot 100
qm delsnapshot 100 preFirstBoot --force

Or, in general:

qm unlock <ID>
qm listsnapshot <ID>
qm delsnapshot <ID> <snapname> --force

It might be that the snapshot data will still remain on the disk.
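
Whether anything is left over depends on the storage backend. A hedged sketch for checking, where the pool and disk names (rbd, vm-100-disk-0) are only examples for VM 100 and will differ on your setup:

# Ceph RBD storage: list remaining snapshots of the VM disk
rbd snap ls rbd/vm-100-disk-0
# ZFS storage: list leftover snapshots for the VM
zfs list -t snapshot | grep vm-100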

Happy unlocking :)
