That is a pretty annoying error message. To get rid of it:
apt install pve-kernel-helper
When you see your zpool filling up, but the “used” size of the datasets isn’t that much, take a look at your snapshots.
First, find the dataset with the largest snapshot usage:
zfs list -o name,usedbysnapshots | sort -r -k2
Example output:
tank/mails     56.3G
tank/store      261M
tank/docker    2.38M
In this case the tank/mails dataset should be looked at.
Snapshots can then be listed for that filesystem with the following command:
zfs list -t snapshot -r tank/mails
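If one of those snapshots is no longer needed, destroying it frees the space again (the snapshot name below is just a placeholder, replace it with one of the listed snapshots):
zfs destroy tank/mails@2021-01-01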
Some commands for extended zpool status
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c health
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c hours_on
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c lsblk
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c smart_test
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c temp
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c upath
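The scripts can also be combined in a single call, since `zpool status -c` accepts a comma-separated list:
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c health,temp,smart_test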
When you don’t schedule scrubs of your Ceph pool yourself, it will scrub whenever it wants to, which tends to be during working hours. To avoid that, you can restrict scrubbing to a window during the night.
The `ceph config set` values are global for the cluster / all nodes.
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_begin_hour 22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_end_hour 7
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
7
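To double-check what ended up in the cluster configuration, the scrub-related options can be dumped:
ceph config dump | grep scrub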
To mount a CIFS/SMB share with a specific user, domain and SMB protocol version:
sudo mount -t cifs -o user=dummy,domain=example.local,uid=$(id -u),gid=$(id -g),forceuid,forcegid,vers=2.0 //files.example.local/share ~/P
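To make the mount permanent, a matching /etc/fstab entry could look roughly like this (a sketch: the local mount point, the numeric uid/gid and the credentials file are assumptions, since $(id -u) cannot be used in fstab):
//files.example.local/share  /home/dummy/P  cifs  credentials=/home/dummy/.smbcredentials,domain=example.local,uid=1000,gid=1000,forceuid,forcegid,vers=2.0  0  0
with /home/dummy/.smbcredentials containing:
username=dummy
password=secret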
zpool status
  pool: tank
 state: ONLINE
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            usb-SanDisk_SDSSDA-1T00_0123456789CA-0:0  ONLINE       0     0     0
            da3p4                                     ONLINE       0     0     0

errors: No known data errors
To change the path under which a vdev is displayed, set its `path` property, e.g. to a stable by-id path or a GPT label:
zpool set path=/dev/disk/by-id/sdsdsdsd tank da3p4
zpool set path=/dev/gpt/my_nice_name tank da3p4
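An alternative (assuming the pool can briefly be taken offline) is to export it and re-import it with a device search directory, so that every vdev shows up with its by-id path:
zpool export tank
zpool import -d /dev/disk/by-id tank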
To find out which pools are available for import, just type `zpool import` and you will see the name of the pool that you can import.
If you want to change the mount path, use:
zpool import -R /other/path poolname
To rename the pool at import time, use:
zpool import original_name new_name
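The options can be combined, e.g. to rename the pool and mount it somewhere else in one go:
zpool import -R /other/path original_name new_name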
To read a specific journal file directly:
journalctl --file /var/log/.../dsdsdsdsdsdsdsds.journal
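If the exact file name is unknown, journalctl can also read every journal file below a directory (the path here is the usual persistent journal location):
journalctl -D /var/log/journal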
To create a ZFS volume (zvol) and format it with ext4:
zfs create -V 10G tank/virtualdisk
mkfs.ext4 /dev/zvol/tank/virtualdisk
zfs set compression=on tank/virtualdisk
To create a sparse volume, you add the `-s` parameter, so the previous command would look like this:
Sparse = volume with no reservation / Thin provisioning
zfs create -s -V 10G tank/virtualdisk
mount /dev/zvol/tank/virtualdisk /mnt
Check available space on the filesystem:
df -h /mnt
The volume can be grown later; the ext4 filesystem inside it then has to be resized as well:
zfs set volsize=20G tank/virtualdisk
resize2fs /dev/zvol/tank/virtualdisk
df -h /mnt
zfs list
As mentioned, even if the volume is empty at the moment, the space is preallocated, so it takes 20 GB out of our pool. But even though it wasn’t initially created as a sparse volume, we can change that now:
zfs set refreservation=none tank/virtualdisk
zfs list
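To check the result, the relevant properties can be queried together:
zfs get volsize,refreservation,usedbydataset tank/virtualdisk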
Tip: when using ext4 on a ZFS volume, you may notice that after deleting data in `/mnt`, the volume doesn’t reflect any gains in usable space. This is because, for efficiency, many filesystems like ext4 don’t actually remove the data on disk, they just dereference it. Otherwise, deleting 100 GB of information would take a very long time and make your system slow. This means that deleted files continue to exist in random blocks on disk, and consequently on the ZFS volume too. To free up space, use a command such as `fstrim /mnt` to actually erase unused data in the ext4 filesystem. Only use the tool when needed, so as not to “tire” the physical devices unnecessarily (although the numbers are pretty high these days, devices have a limited number of write cycles).
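For example (using the volume from above), `-v` makes fstrim report how much it trimmed, and `zfs list` afterwards should show the freed space:
fstrim -v /mnt
zfs list tank/virtualdisk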
Don’t forget that a lot of the other ZFS-specific features are also available on volumes (e.g. snapshots and clones).
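For instance (the names below are just examples), you can snapshot the volume before a risky change and clone the snapshot into a new zvol that can be mounted independently:
zfs snapshot tank/virtualdisk@before-resize
zfs clone tank/virtualdisk@before-resize tank/virtualdisk-test
mkdir -p /mnt2
mount /dev/zvol/tank/virtualdisk-test /mnt2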
To see how much data has been written to a dataset since its last snapshot, query the `written` property:
zfs get -H -o value written pool/dataset
# zfs get -H -o value written tank/name
When adding another node to the Proxmox cluster and the LVM thin pool is missing, it is not possible to add it correctly over the web interface. But here is how it works on the command line.
lsblk
vgcreate pve /dev/sdc
lvcreate -L 30G -n data pve
lvconvert --type thin-pool pve/data
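A quick way to verify that the thin pool exists and that Proxmox sees its storage again (a sketch, run on the new node):
lvs pve
pvesm status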
To remove the LVM thin pool again (just in case you ever need to), deactivate it first:
lvchange -an /dev/pve/data
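After deactivating it, the thin pool can actually be removed; double-check the volume group and LV names first, this is destructive:
lvremove /dev/pve/data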