Welcome to my world
That is a pretty annoying error message. To get rid of it, run:
apt install pve-kernel-helper
The answer is: you can check the used size, and you can shrink the used space.
journalctl --disk-usage
journalctl --vacuum-size=1G
Other options are:
--vacuum-size=BYTES   Reduce disk usage below specified size
--vacuum-files=INT    Leave only the specified number of journal files
--vacuum-time=TIME    Remove journal files older than specified time
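If you want to cap the journal size permanently instead of vacuuming by hand, journald can be given a size limit in its configuration file (a minimal sketch, assuming the default config path; the 1G value is just an example):
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=1G
Restart journald afterwards so the limit takes effect:
systemctl restart systemd-journald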
To create a new empty branch in Git, we can use the --orphan command line option:
git checkout --orphan <branch-name>
The command above creates the new empty branch and switches into it.
Once the empty branch is created, we can delete the files from the working directory, so they are not committed into the new branch:
git rm -rf .
Now you are in the empty branch without any inherited files or commits.
If you want to push your empty branch to a remote repository, do the following:
git commit --allow-empty -m "Init"
git push origin <branch-name>
Note that if you try to merge another branch into the empty one, you will receive the error: fatal: refusing to merge unrelated histories
Use the --allow-unrelated-histories option to force the merge into the empty branch:
git merge --allow-unrelated-histories <other-branch>
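Put together, a full run could look like this (a sketch; the branch names docs and main are placeholders, not from the original post):
# create the empty branch and switch to it
git checkout --orphan docs
# clear the working directory so nothing is carried over
git rm -rf .
# create the first (empty) commit and publish the branch
git commit --allow-empty -m "Init"
git push origin docs
# later: merge an existing branch despite the unrelated histories
git merge --allow-unrelated-histories main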
TL;DR
zfs destroy tank@%
When you see your zpool exploding, but the “used” size isn’t that much, you may want to take a look at your snapshots.
First, find the dataset that has the largest snapshot usage:
zfs list -o name,usedbysnapshots | sort -rh -k2
Example output:
tank/mails    56.3G
tank/store     261M
tank/docker   2.38M
In this case the tank/mails dataset should be looked at.
Snapshots can then be listed for that filesystem using the following command:
zfs list -t snapshot -r tank/mails
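Once you know which snapshots to drop, the % range syntax from the TL;DR can be scoped to a single dataset. A sketch with hypothetical snapshot names; -n does a dry run and -v prints what would be destroyed, so you can check before deleting anything:
# dry run: show which snapshots in the range would be removed
zfs destroy -nv tank/mails@autosnap_2023-01-01%autosnap_2023-06-30
# remove all snapshots of the dataset
zfs destroy tank/mails@%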
Some commands for extended zpool status:
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c health
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c hours_on
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c lsblk
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c smart_test
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c temp
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c upath
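Several scripts can also be combined in a single call by separating them with commas (a sketch; which scripts are available depends on your ZFS installation):
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c health,temp,smart_test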
When you don’t scrub your Ceph pool yourself, it will scrub itself when you don’t want it to: during working hours. To avoid that, you can set a time window at night when scrubbing is allowed.
The ceph config set command is global for the cluster / all nodes.
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_begin_hour 22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_end_hour 7
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
7
root@pve-01:/etc/ceph#
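To double-check the scrub window afterwards across the whole cluster, you can dump the config database and filter for the scrub options (a sketch):
ceph config dump | grep osd_scrub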
sudo mount -t cifs -o user=dummy,domain=example.local,uid=$(id -u),gid=$(id -g),forceuid,forcegid,vers=2.0 //files.example.local/share ~/P
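To avoid putting the account details on the command line every time, mount.cifs can read them from a credentials file instead (a sketch; the file path and password are placeholders matching the example above):
# ~/.smbcredentials (protect it with chmod 600)
username=dummy
password=secret
domain=example.local
sudo mount -t cifs -o credentials=$HOME/.smbcredentials,uid=$(id -u),gid=$(id -g),forceuid,forcegid,vers=2.0 //files.example.local/share ~/P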