- Life is no pony-licking.
- The apple doesn't fall far from the horse.
- Live every day as if you were the last.
- Roll up the horse from behind.
- Throw the devil at the wall.
- The throw with the fence post.
- That's where the spirits cut themselves.
- That's where the spirits argue.
ceph scrubbing time
If you don't tell Ceph when to scrub your pool, it will scrub itself whenever it likes, including when you don't want it to: during working hours. To avoid that, you can restrict scrubbing to a time window at night.
The ceph config set commands are global for the cluster / all nodes.
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_begin_hour 22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_begin_hour
22
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
0
root@pve-01:/etc/ceph# ceph config set osd osd_scrub_end_hour 7
root@pve-01:/etc/ceph# ceph config get osd osd_scrub_end_hour
7
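The same mechanism works for the other scrub-related options, e.g. osd_scrub_begin_week_day / osd_scrub_end_week_day to additionally restrict scrubbing to certain weekdays (check the day-numbering semantics for your Ceph release before relying on it). A quick sketch for verifying what is actually set cluster-wide:
ceph config dump | grep scrub
ceph config get osd osd_scrub_begin_week_day
ceph config get osd osd_scrub_end_week_day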
Linux mount domain SMB / cifs
sudo mount -t cifs -o user=dummy,domain=example.local,uid=$(id -u),gid=$(id -g),forceuid,forcegid,vers=2.0 //files.example.local/share ~/P
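To make the mount permanent, the same options can go into /etc/fstab with a credentials file instead of an interactive password prompt. A minimal sketch, assuming uid/gid 1000 and the example paths below (adjust to your setup):
# /etc/fstab
//files.example.local/share /home/dummy/P cifs credentials=/home/dummy/.smbcredentials,uid=1000,gid=1000,vers=2.0 0 0
# /home/dummy/.smbcredentials (chmod 600)
username=dummy
password=yourpassword
domain=example.local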
Windows set different NTP Server
net stop w32time
w32tm /config /syncfromflags:manual /manualpeerlist:"0.de.pool.ntp.org 1.de.pool.ntp.org 2.de.pool.ntp.org 3.de.pool.ntp.org"
net start w32time
w32tm /config /update
w32tm /resync /rediscover
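To check afterwards that the new peers are actually being used:
w32tm /query /source
w32tm /query /peers
w32tm /query /status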
ZFS rename disk path
zpool status
  pool: tank
 state: ONLINE
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            usb-SanDisk_SDSSDA-1T00_0123456789CA-0:0  ONLINE       0     0     0
            da3p4                                     ONLINE       0     0     0

errors: No known data errors
zpool set path=/dev/disk/by-id/sdsdsdsd tank da3p4
zpool set path=/dev/gpt/my_nice_name tank da3p4
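If your OpenZFS version does not support setting the path vdev property yet, the classic alternative is to export the pool and re-import it while telling ZFS where to look for the devices (pool name tank as above):
zpool export tank
zpool import -d /dev/disk/by-id tank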
Just type `zpool import` (without arguments) and it will list the pools that can be imported.
If you want to change the mount path, use
zpool import -R /other/path poolname
To rename the pool, use
zpool import original_name new_name
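The options can be combined, e.g. importing a pool under a new name and an alternate root at the same time; the pool has to be exported first (the names here are made up):
zpool export tank
zpool import -R /mnt/recovery tank tank_old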
ssh show host connection config
ssh -G hostname
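For example, to see which user, host, port and key would be used for a host from your ssh config (the host name is made up):
ssh -G myserver.example.com | grep -E '^(user|hostname|port|identityfile) '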
journalctl read old(er) journal
journalctl --file /var/log/.../dsdsdsdsdsdsdsds.journal
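If you don't know the exact file name, journalctl can also read a whole directory of (archived) journal files and filter by time; the path and dates below are just examples:
journalctl --directory /var/log/journal --since "2024-01-01" --until "2024-02-01"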
zfs volume
zfs create -V 10G tank/virtualdisk
mkfs.ext4 /dev/zvol/tank/virtualdisk
zfs set compression=on tank/virtualdisk
To create a sparse volume, add the -s parameter, so the previous command would look like this
Sparse = volume with no reservation / thin provisioning
zfs create -s -V 10G tank/virtualdisk
mount /dev/zvol/tank/virtualdisk /mnt
Check available space on the filesystem:
df -h /mnt
Resize
zfs set volsize=20G tank/virtualdisk
resize2fs /dev/zvol/tank/virtualdisk
df -h /mnt
zfs list
As mentioned, even if the volume is empty at the moment, its space is preallocated, so it takes 20GB out of our pool. But even though it wasn't initially created as a sparse volume, we can change that now
zfs set refreservation=none tank/virtualdisk
zfs list
Tip: when using ext4 on a ZFS volume, you may notice that after deleting data in `/mnt`, the volume doesn't reflect any gains in usable space. This is because, for efficiency, many filesystems like ext4 don't actually remove the data on disk, they just dereference it. Otherwise, deleting 100GB of data would take a very long time and slow the system down. This means that deleted files continue to exist in random blocks on disk, and consequently on the ZFS volume too. To free up space, use a command such as `fstrim /mnt` to actually discard the unused blocks in the ext4 filesystem. Only run it when needed, so as not to "tire" the physical devices unnecessarily (although the numbers are pretty high these days, devices have a limited number of write cycles).
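If you would rather not run fstrim by hand, most systemd-based distributions ship an fstrim.timer that trims all mounted filesystems supporting it on a weekly schedule; whether that cadence suits your devices is your call:
systemctl enable --now fstrim.timer
systemctl list-timers fstrim.timer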
Don’t forget that a lot of the other ZFS-specific features are also available on volumes (e.g. snapshots and clones).
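For example, taking a snapshot of the zvol before a risky change inside the ext4 filesystem and rolling back if it goes wrong (the snapshot name is made up):
zfs snapshot tank/virtualdisk@before-upgrade
# ... do the risky change on /mnt ...
umount /mnt
zfs rollback tank/virtualdisk@before-upgrade
mount /dev/zvol/tank/virtualdisk /mnt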
ZFS get data size since last snapshot
zfs get -H -o value written pool/dataset
# e.g. zfs get -H -o value written tank/name
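There is also a written@<snapshot> property if you want the amount written since a specific snapshot rather than since the most recent one (the snapshot name is made up):
zfs get -H -o value written@before-upgrade tank/name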
Proxmox adding missing LVM thin Pool
When you add another node to the Proxmox cluster and the LVM thin pool is missing on it, it is not possible to create it correctly via the web interface. Here is how it works on the command line.
lsblk
vgcreate pve /dev/sdc
lvcreate -L 30G -n data pve
lvconvert --type thin-pool pve/data
Deactivate and delete the LVM thin pool again (just in case you ever need to)
lvchange -an /dev/pve/data
lvremove /dev/pve/data
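Afterwards the thin pool still needs to be registered as a Proxmox storage (or added to an existing storage definition). If it is not picked up automatically, something like this should work; the storage ID "data" is just an example:
pvesm add lvmthin data --vgname pve --thinpool data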