journalctl uses a lot of space on my disk

The answer: you can check how much space the journal uses and then shrink it.

# show how much space the journal currently uses
journalctl --disk-usage
# shrink the archived journal files to at most 1 GB
journalctl --vacuum-size=1G

Other options are:

--vacuum-size=BYTES   Reduce disk usage below specified size
--vacuum-files=INT    Leave only the specified number of journal files
--vacuum-time=TIME    Remove journal files older than specified time
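
For example, to drop all journal entries older than two weeks:

journalctl --vacuum-time=2weeks

Vacuuming is a one-off cleanup. If you want the journal capped permanently, journald can enforce a size limit itself via SystemMaxUse in /etc/systemd/journald.conf; a minimal sketch, assuming a 1 GB cap fits your needs (the value is just an example):

# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=1G

Restart journald to apply the limit:

systemctl restart systemd-journald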

zfs snapshot sort by size

When you see your zpool exploding, but the “used” size isn’t that much, you should take a look at your snapshots.

First, find the dataset with the largest snapshot usage (sort -h sorts the human-readable sizes numerically):

zfs list -o name,usedbysnapshots | sort -r -h -k2

Example output:

tank/mails                                  56.3G
tank/store                                   261M
tank/docker                                 2.38M

In this case, the tank/mails dataset is the one to look at.
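
To double-check a single dataset, the same property can also be queried directly:

zfs get usedbysnapshots tank/mails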

The snapshots of that dataset can then be listed with:

# generic form: zfs list -t snapshot -r <dataset>
zfs list -t snapshot -r tank/mails
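
Once you know which snapshots you no longer need, they can be removed with zfs destroy. A short sketch, assuming a hypothetical snapshot name tank/mails@2019-01-01 (take a real name from the listing above):

# dry run: -n only shows what would happen, -v prints the space that would be reclaimed
zfs destroy -nv tank/mails@2019-01-01
# actually remove the snapshot
zfs destroy tank/mails@2019-01-01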

failed Import ZFS pools by cache file

A single-disk zpool “test” crashed on my server (the disk died). It was just for testing, so nothing dramatic. However, when I rebooted the server I got the error message “failed Import ZFS pools by cache file”. A zpool destroy -f did not solve the problem: zpool status still showed the “test” pool. The other pool, tank, was still working.

What did help was the following:

# disable the cache file for the existing pool(s)
zpool set cachefile=none tank
# delete the old cache file
rm -f /etc/zfs/zpool.cache
# recreate it
touch /etc/zfs/zpool.cache
reboot
# after the reboot, re-enable the cache
zpool set cachefile=/etc/zfs/zpool.cache tank

Normally the cache file is updated automatically when the pool configuration changes, but with the crashed pool that did not happen.
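
To verify which cache file a pool is using afterwards, the pool property can be queried:

zpool get cachefile tank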
