Welcome to my world
To solve this, add a new file
/etc/dnsmasq.d/99-edns.conf
containing a single line:
edns-packet-max=1232
then restart your DNS resolver.
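A quick way to apply and sanity-check the change (assuming dnsmasq is managed by systemd and answers on 127.0.0.1; adjust the service name and resolver address to your setup):

```shell
# Restart dnsmasq so it picks up the new drop-in (service name may differ per distro)
sudo systemctl restart dnsmasq
# dig prints the advertised EDNS UDP buffer size in its answer section header
dig +edns example.com @127.0.0.1 | grep 'udp:'
```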
Let’s say I want to copy/cut lines 34 to 65. I use
:34,65y
(copy/yank) or
:34,65d
(cut/delete).
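The same range addressing works outside vim too; as a scripted analogue, sed -n 'M,Np' extracts the same lines (demo with a hypothetical six-line file):

```shell
# Build a small demo file, then print lines 3 to 5 (like :3,5y in vim)
printf '%s\n' one two three four five six > /tmp/demo.txt
sed -n '3,5p' /tmp/demo.txt
# prints: three four five (one per line)
```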
TL;DR
ss -tulpn | grep -v "::1" | grep -v "127.0.0.1"
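The two grep -v calls can also be folded into a single extended-regex filter (same result, one process fewer):

```shell
# Exclude loopback listeners in one grep; dots escaped so they match literally
ss -tulpn | grep -vE '(::1|127\.0\.0\.1)'
```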
When I ssh into a server I want to attach to an existing tmux session or start a new one directly.
ssh example.local -t "tmux a || tmux"
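This can also be baked into ~/.ssh/config (RemoteCommand needs OpenSSH 7.6 or newer), so a plain ssh example.local drops you straight into tmux — a sketch:

```
Host example.local
    # allocate a TTY and run tmux instead of a login shell
    RequestTTY yes
    RemoteCommand tmux a || tmux
```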
While trying to connect to an older ESXi server I got the error message: Unable to negotiate with 1.2.3.4 port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss
I was wondering how to get around this.
The solution: $EDITOR ~/.ssh/config
Host 1.2.3.4
    User root
    HostKeyAlgorithms=+ssh-dss
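For a one-off connection, the same option can be passed on the command line instead of editing the config:

```shell
# Re-enable the legacy ssh-dss host key algorithm for this connection only
ssh -o HostKeyAlgorithms=+ssh-dss root@1.2.3.4
```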
The error message in the Proxmox GUI:
Job for ceph-mgr@pve-03.service failed. See "systemctl status ceph-mgr@pve-03.service" and "journalctl -xe" for details. TASK ERROR: command '/bin/systemctl start ceph-mgr@pve-03' failed: exit code 1
The error message from systemctl:
ceph-mgr@pve-03.service: Start request repeated too quickly.
ceph-mgr@pve-03.service: Failed with result 'start-limit-hit'.
Solve with
systemctl reset-failed ceph-mgr@pve-03
systemctl start ceph-mgr@pve-03
Change pve-03 to your node name.
Start a send to a resumable receive (note the -s flag):
zfs send ... | ssh host2 zfs receive -s otherpool/new-fs
If the transfer is interrupted, get the opaque token (with the DMU object #, offset stored in it) on the receiving side:
zfs get receive_resume_token otherpool/new-fs
# 1-e604ea4bf-e0-789c63a2...
Re-start sending from the DMU object #, offset stored in the token
zfs send -t 1-e604ea4bf-e0-789c63a2... | ssh host2 zfs receive -s otherpool/new-fs
If you don’t want to resume the send, abort to remove the partial state on the receiving system
zfs receive -A otherpool/new-fs
Edit
/etc/zfs/zed.d/zed.rc
Uncomment
ZED_EMAIL_ADDR="mail@example.com"
and add a valid email address.
Uncomment
ZED_EMAIL_PROG="mail"
Uncomment
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"
Uncomment
ZED_NOTIFY_VERBOSE=0
If you want to get an email after every scrub, set the value to 1.
Save the file and restart the zed service:
systemctl restart zed.service
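To check that the mail path itself works before the next scrub, a test message can be sent the same way ZED would (assumes a configured mail command, e.g. from mailutils or bsd-mailx, and a working local MTA):

```shell
# -s sets the subject, the body comes from stdin; use your real address
echo "ZED mail test" | mail -s "ZED test" mail@example.com
```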
A single disk zpool “test” crashed on my server (the disk died). It was just for testing, so nothing dramatic. However, when I rebooted the server I got the error message “failed Import ZFS pools by cache file”. A zpool destroy -f did not solve the problem. zpool status still showed the “test” pool. The other pool tank was still working.
What did help was
# disable the cache file for the existing pool(s)
zpool set cachefile=none tank
# delete the old cache file
rm -rf /etc/zfs/zpool.cache
# recreate it
touch /etc/zfs/zpool.cache
reboot
# re-enable the cache
zpool set cachefile=/etc/zfs/zpool.cache tank
Well, the cache file should be automatically updated when your pool configuration is changed, but with the crashed pool it did not.
After virtualizing a real computer running an old Linux, I wanted to increase the partition size of the data drive. But I got this warning: resize2fs: new size too large to be expressed in 32 bits
How to solve this? I started the VM with gparted-live.iso
# check the file system
e2fsck -f /dev/sdb1
# convert to 64-bit
resize2fs -b /dev/sdb1
# increase the partition .... wait :D / optional coffee
resize2fs -p /dev/sdb1
# check the file system
e2fsck -f /dev/sdb1
Done :)
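To confirm the conversion took effect, tune2fs lists the filesystem features; 64bit should now appear (device name taken from the example above):

```shell
# Look for "64bit" in the feature list of the resized filesystem
tune2fs -l /dev/sdb1 | grep -w '64bit'
```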