Welcome to my world
TL;DR
ss -tulpn | grep -v "::1" | grep -v "127.0.0.1"
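If you use this often, you can drop it into an alias (a minimal sketch; the alias name is just an example):
# show listening sockets, hiding loopback-only listeners
alias listening='ss -tulpn | grep -v "::1" | grep -v "127.0.0.1"'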
When I ssh into a server I want to start tmux directly or attach to an existing session.
ssh example.local -t "tmux a || tmux"
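If you do this for several hosts, a small shell function keeps it short (a minimal sketch; the function name is just an example):
# attach to an existing tmux session on the remote host, or start a new one
sshtmux() {
    ssh "$1" -t "tmux a || tmux"
}
sshtmux example.local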
While trying to connect to an older ESXi server I got this error message:
Unable to negotiate with 1.2.3.4 port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss
I was wondering how to get around that.
The solution: edit your SSH config
$EDITOR ~/.ssh/config
Host 1.2.3.4
    User root
    HostKeyAlgorithms=+ssh-dss
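For a one-off connection you can also pass the option on the command line instead of editing the config (same host and user as above):
ssh -o HostKeyAlgorithms=+ssh-dss root@1.2.3.4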
1) Download ADB
2) Enable USB Debugging on your device
3) Connect the device to the computer and verify it is listed with the command “adb devices”
4) Type “adb shell”
5) Type “pm list packages”, this will show you all packages installed on the phone.
6) Type “pm uninstall -k --user 0 com.huawei.search”
7) You should see the word “Success” pop up.
Well, I removed some more bloatware:
pm uninstall -k --user 0 com.google.android.apps.books
pm uninstall -k --user 0 com.google.android.youtube
pm uninstall -k --user 0 com.google.android.apps.youtube.music
pm uninstall -k --user 0 com.google.android.apps.photos
pm uninstall -k --user 0 com.google.mainline.telemetry
pm uninstall -k --user 0 com.hihonor.android.fmradio
pm uninstall -k --user 0 com.hihonor.calendar
pm uninstall -k --user 0 com.hihonor.search
pm uninstall -k --user 0 com.pal.train
pm uninstall -k --user 0 com.hihonor.pcassistant
pm uninstall -k --user 0 com.google.android.apps.tachyon
pm uninstall -k --user 0 com.google.android.feedback
pm uninstall -k --user 0 com.hihonor.printservice
pm uninstall -k --user 0 com.hihonor.android.totemweather
pm uninstall -k --user 0 com.hihonor.android.chr
pm uninstall -k --user 0 com.hihonor.android.thememanager
pm uninstall -k --user 0 com.google.android.videos
pm uninstall -k --user 0 com.hihonor.id
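With a longer list, a small shell loop on the computer saves typing (a minimal sketch; the package names are just examples from the list above):
# remove several packages for user 0 in one go
for pkg in com.google.android.youtube com.google.android.videos com.hihonor.search; do
    adb shell pm uninstall -k --user 0 "$pkg"
done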
The error message in the Proxmox GUI:
Job for ceph-mgr@pve-03.service failed. See "systemctl status ceph-mgr@pve-03.service" and "journalctl -xe" for details.
TASK ERROR: command '/bin/systemctl start ceph-mgr@pve-03' failed: exit code 1
The error message from systemctl:
ceph-mgr@pve-03.service: Start request repeated too quickly.
ceph-mgr@pve-03.service: Failed with result 'start-limit-hit'.
Solve it with:
systemctl reset-failed ceph-mgr@pve-03
systemctl start ceph-mgr@pve-03
Change pve-03 to your node name.
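Afterwards you can check that the manager came back up:
systemctl status ceph-mgr@pve-03
ceph -s    # the mgr should show as active in the cluster status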
While trying to move a VM from one node to another I got the error message:
VM 100 qmp command 'blockdev-snapshot-delete-internal-sync' failed - Failed to get a snapshot list: Operation not supported
One snapshot was stuck and the VM was locked. How to solve this?
qm unlock 100
qm listsnapshot 100
qm delsnapshot 100 preFirstBoot --force
Or, in general:
qm unlock <ID>
qm listsnapshot <ID>
qm delsnapshot <ID> <snapname> --force
It might be that the snapshot data still remains on the hard disk.
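If the VM disks live on ZFS you can look for (and clean up) a leftover snapshot by hand; a sketch, assuming the default local-zfs storage layout and VM ID 100, so adjust the dataset name to yours:
zfs list -t snapshot | grep vm-100
# only if the stale snapshot is still listed:
zfs destroy rpool/data/vm-100-disk-0@preFirstBoot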
Happy unlocking :)
Start the send as usual, but pass -s to zfs receive so partial state is saved if the transfer gets interrupted:
zfs send ... | ssh host2 zfs receive -s otherpool/new-fs
On the receiving side, get the opaque token (the DMU object # and offset are stored in it):
zfs get receive_resume_token otherpool/new-fs
# 1-e604ea4bf-e0-789c63a2...
Re-start sending from the DMU object # and offset stored in the token:
zfs send -t 1-e604ea4bf-e0-789c63a2... | ssh host2 zfs receive -s otherpool/new-fs
If you don’t want to resume the send, abort to remove the partial state on the receiving system
zfs receive -A otherpool/new-fs
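Put together, a full cycle looks roughly like this (a sketch with made-up dataset and snapshot names; -H -o value prints just the token so it can be reused in a script):
zfs snapshot tank/data@backup1
zfs send tank/data@backup1 | ssh host2 zfs receive -s otherpool/new-fs
# transfer gets interrupted
token=$(ssh host2 zfs get -H -o value receive_resume_token otherpool/new-fs)
zfs send -t "$token" | ssh host2 zfs receive -s otherpool/new-fs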
Edit
/etc/zfs/zed.d/zed.rc
Uncomment
ZED_EMAIL_ADDR="mail@example.com"
and add a valid email address.
Uncomment
ZED_EMAIL_PROG="mail"
Uncomment
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"
Uncomment
ZED_NOTIFY_VERBOSE=0
If you want to get an email after every scrub, set the value to 1.
Save the file and restart the zed service:
systemctl restart zed.service
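To make sure notifications can actually leave the box, it helps to test the mail setup first (this assumes a working MTA and the mail command from mailutils or bsd-mailx):
echo "zed test" | mail -s "zed test" mail@example.com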
How to enable snmpd on your VMware ESXi server
esxcli system snmp set --communities public
esxcli system snmp set --enable true
esxcli network firewall ruleset set --ruleset-id snmp --allowed-all true
esxcli network firewall ruleset set --ruleset-id snmp --enabled true
esxcli system snmp set --syslocation "My Location"
esxcli system snmp set --targets=10.10.0.0@161/public
Now you can start the service from the UI
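To verify, check the configuration on the host and then poll it from another machine (the snmpwalk part assumes the net-snmp tools are installed there):
esxcli system snmp get
snmpwalk -v2c -c public <esxi-host>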
A single disk zpool “test” crashed on my server (the disk died). It was just for testing, so nothing dramatic. However, when I rebooted the server I got the error message “failed Import ZFS pools by cache file”. A zpool destroy -f did not solve the problem. zpool status still showed the “test” pool. The other pool tank was still working.
What did help was
# disable the cache file for the existing pool(s)
zpool set cachefile=none tank
# delete the old pool file
rm -rf /etc/zfs/zpool.cache
# recreate it
touch /etc/zfs/zpool.cache
reboot
# re-enable the cache
zpool set cachefile=/etc/zfs/zpool.cache tank
Well, the cache file should be updated automatically when your pool configuration changes, but with the crashed pool that did not happen.
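You can check which cache file a pool currently uses with:
zpool get cachefile tank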