TL;DR
zfs destroy tank@%
Welcome to my world
When you see your zpool exploding, but the “used” size isn’t that much, take a look at your snapshots.
First, find the dataset with the largest snapshot usage:
zfs list -o name,usedbysnapshots | sort -r -k2
Example output:
tank/mails    56.3G
tank/store     261M
tank/docker   2.38M
In this case the tank/mails dataset should be looked at.
Snapshots for that dataset can then be listed with:
zfs list -t snapshot -r tank/mails
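To actually get the space back you have to destroy snapshots. A dry run first is a good habit; a minimal sketch, assuming you want to drop all snapshots of tank/mails:

# -n = dry run, -v = list what would be destroyed and how much space would be reclaimed
zfs destroy -nv tank/mails@%
# if the list looks right, run it for real
zfs destroy tank/mails@%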
Some commands for extended zpool status:
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c health
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c hours_on
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c lsblk
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c smart_test
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c temp
ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c upath
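The scripts can also be combined in a single call by separating them with commas, for example:

ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c health,temp,smart_test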
zpool status
  pool: tank
 state: ONLINE
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            usb-SanDisk_SDSSDA-1T00_0123456789CA-0:0  ONLINE       0     0     0
            da3p4                                     ONLINE       0     0     0

errors: No known data errors
To change the device path that zpool status shows for a vdev:

zpool set path=/dev/disk/by-id/sdsdsdsd tank da3p4
zpool set path=/dev/gpt/my_nice_name tank da3p4
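To double-check the result, zpool status can print the full device paths instead of only the last path component:

zpool status -P tank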
just type `zpool import` and you will see the names of the pools that you can import.
if you want to change the mount path, use
zpool import -R /other/path poolname
to rename the pool on import, use
zpool import original_name new_name
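If the pool doesn’t show up, you can point the import at a specific device directory; a sketch, assuming the disks live under /dev/disk/by-id:

zpool import -d /dev/disk/by-id poolname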
Create a volume (zvol), put an ext4 filesystem on it, and enable compression:

zfs create -V 10G tank/virtualdisk
mkfs.ext4 /dev/zvol/tank/virtualdisk
zfs set compression=on tank/virtualdisk
To create a sparse volume (sparse = volume with no reservation, i.e. thin provisioning), add the -s parameter, so the previous command would look like this:
zfs create -s -V 10G tank/virtualdisk
Mount it like any other block device:

mount /dev/zvol/tank/virtualdisk /mnt
Check available space on the filesystem:
df -h /mnt
Growing the volume later is easy; resize the zvol, then the ext4 filesystem inside it:

zfs set volsize=20G tank/virtualdisk
resize2fs /dev/zvol/tank/virtualdisk
df -h /mnt
zfs list
As mentioned, even if the volume is empty at the moment, the space is preallocated, so it takes 20 GB out of our pool. But even though it wasn’t initially created as a sparse volume, we can change that now:
zfs set refreservation=none tank/virtualdisk
zfs list
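To see what the volume reserves versus what it actually uses, you can query the relevant properties:

zfs get volsize,refreservation,usedbydataset tank/virtualdisk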
Tip: when using ext4 on a ZFS volume, you may notice that after deleting data in `/mnt`, the volume doesn’t reflect any gain in usable space. This is because, for efficiency, many filesystems like ext4 don’t actually remove the data on disk, they just dereference it; otherwise, deleting 100 GB of data would take a very long time and make your system slow. This means that deleted files continue to occupy blocks on disk, and consequently on the ZFS volume too. To free up the space, use a command such as `fstrim /mnt` to actually discard the unused blocks in the ext4 filesystem. Only run it when needed, so as not to “tire” the physical devices unnecessarily (although the numbers are pretty high these days, devices have a limited number of write cycles).
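A minimal sketch of the manual approach, assuming the volume is still mounted at /mnt as above:

# -v reports how much space was trimmed
fstrim -v /mnt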
Don’t forget that a lot of the other ZFS-specific features are also available on volumes (e.g. snapshots and clones).
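For example (the snapshot and clone names are just placeholders):

zfs snapshot tank/virtualdisk@before-upgrade
# clone the snapshot into a new, writable volume
zfs clone tank/virtualdisk@before-upgrade tank/virtualdisk-clone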
To see how much data was written to a dataset since its previous snapshot, query the written property:

zfs get -H -o value written pool/dataset
# zfs get -H -o value written tank/name
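There is also a per-snapshot variant that shows how much was written since a specific snapshot (the snapshot name here is just a placeholder):

zfs get -H -o value written@yesterday tank/name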
Resumable send/receive: use the -s flag on the receiving side so an interrupted transfer can be resumed later

zfs send ... | ssh host2 zfs receive -s otherpool/new-fs
On the receiving side, get the opaque token, which has the DMU object # and offset stored in it:
zfs get receive_resume_token otherpool/new-fs
# 1-e604ea4bf-e0-789c63a2...
Restart the send from the DMU object # and offset stored in the token:
zfs send -t 1-e604ea4bf-e0-789c63a2... | ssh host2 zfs receive -s otherpool/new-fs
If you don’t want to resume the send, abort it to remove the partial state on the receiving system:
zfs receive -A otherpool/new-fs
To get email notifications from ZED, edit
/etc/zfs/zed.d/zed.rc
uncomment
ZED_EMAIL_ADDR="mail@example.com"
and add a valid email address.
uncomment
ZED_EMAIL_PROG="mail"
uncomment
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"
uncomment
ZED_NOTIFY_VERBOSE=0
if you want to get an email after every scrub, set the value to 1
save the file and restart the zed service:
systemctl restart zed.service
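To test the setup you can start a scrub; with ZED_NOTIFY_VERBOSE=1 you should get a mail when it finishes:

zpool scrub tank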
A single-disk zpool “test” crashed on my server (the disk died). It was just for testing, so nothing dramatic. However, when I rebooted the server I got the error message “failed Import ZFS pools by cache file”. A `zpool destroy -f` did not solve the problem; `zpool status` still showed the “test” pool. The other pool, tank, was still working.
What did help was:
# disable the cache file for the existing pool(s)
zpool set cachefile=none tank
# delete the old cache file
rm -rf /etc/zfs/zpool.cache
# recreate it
touch /etc/zfs/zpool.cache
reboot
# re-enable the cache
zpool set cachefile=/etc/zfs/zpool.cache tank
Well, the cache file should be updated automatically when your pool configuration changes, but with the crashed pool that did not happen.
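To check which cache file a pool is currently using (the default is /etc/zfs/zpool.cache):

zpool get cachefile tank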