Last updated 2024-04-28
This is a list of computer-related notes (mostly for UNIX and GNU/Linux) which are useful to me. It mostly serves as my notebook so I can look up the required commands, but it may be useful to others as well.
The following notes are longer and are kept on separate pages:
x86 UNIX calling convention
System V AMD64 (amd64, x86_64, x64) calling convention
Incremental backups with rsync
Get APT pinning settings
Boot installer into rescue mode over serial console
Rate limit requests using multiple samples
Send SysRq to GNU/Linux over virtual serial port (VSP)
Port knocking with iptables only
Retrieve TLS certificate of a Jabber server using gnutls-cli
Mount disk image with multiple partitions
Purge old messages, but keep important threads intact
Primitive certificate pinning
Flamegraphs with perf to characterize CPU utilization
Setup virtual env and install requirements
Mount ISO as CD-ROM
Syslinux bootloader with GRUB multiboot
Measure basic disk performance
Prepare Windows 10 USB stick on Linux
UEFI firmware update for Thinkpad x250
callee save (non-volatile): ebx, ebp, esp, edi, esi, cs, ds, es, fs, gs
caller save (scratch registers): eax, ecx, edx, st0-st7
parameters: passed on the stack
callee save (non-volatile): rbx, rbp, r12, r13, r14, r15, control bits of MXCSR
caller save (scratch registers): rax, rcx, rdx, r8, r9, r10, r11, rsi, rdi, the rest (SSE, AVX, etc.)
parameters: rdi, rsi, rdx, rcx, r8, r9 (integer, pointer); for system calls r10 instead of rcx
return value in rax (integer, pointer)
For most of my backups I use incremental backups with rsync and hardlinks. I prefer rsync over more complex backup solutions as it allows for a very simple restore procedure (just copy the files back) which is not tied to any program. In combination with SSHFS and LUKS it also supports encrypted remote backups. I use the following script to support incremental backups.
The script creates a directory partial-backup-YYYY-MM-DD-HH-MM-SS in the target directory and uses it for the backup (the placeholders are replaced with the current date and time). After the backup has completed, it renames that directory to just backup-YYYY-MM-DD-HH-MM-SS. If there are any previous backup directories, the last one is used as source to create hardlinks for existing files. This way incremental backups require only space for new or modified (or renamed) files. (hardlink -t can be used in the target directory to recreate the hardlinks for renamed files.)
Examples (note that the target is the first parameter!):
# Backup my home directory to /mnt/backups
srsync-incremental /mnt/backups /home/simon
# Additional rsync arguments can be specified _after_ the target
# directory.
srsync-incremental /mnt/backups \
--exclude /.cache --exclude /Downloads /home/simon
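For encrypted remote backups the target can be mounted via SSHFS first; a minimal sketch (host name, remote path and mount point are made-up examples, the LUKS setup on the remote side is not shown):
$ mkdir -p /mnt/backups
$ sshfs backuphost:/srv/backups /mnt/backups
$ srsync-incremental /mnt/backups /home/simon
$ fusermount -u /mnt/backups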
Script srsync-incremental (download):
#!/bin/sh
# Perform incremental backups using rsync and hardlinks.
#
# Thanks to http://www.sanitarium.net/golug/rsync_backups_2010.html for the
# idea.
# Copyright (C) 2011-2017 Simon Ruderich
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
set -eu
if test "$#" -lt 2; then
    echo "Usage: $0 <backups-directory> <arguments to rsync>" >&2
    echo
    echo "Note: The target directory is the _first_ argument!" >&2
    exit 2
fi
cd "$1"
shift
# Get path to last backup directory.
dest=./
for x in backup-*; do
    test -d "$x" || continue
    dest="../$x" # relative to destination directory
done
target="backup-$(date '+%Y-%m-%d-%H-%M-%S')"
target_tmp="partial-$target"
mkdir "$target_tmp"
rsync \
    --verbose --itemize-changes --human-readable \
    --archive --acls --xattrs --hard-links --sparse --numeric-ids \
    --one-file-system \
    --link-dest="$dest" \
    "$@" "$target_tmp" \
    || {
        # Try to remove the target directory without changing the exit code. In
        # case the connection failed without transferring any files, we want to
        # remove the empty directory.
        code=$?
        rmdir "$target_tmp" 2>/dev/null || true
        exit $code
    }
# --dry-run (-n) creates an empty directory. Remove it to prevent using it for
# further incremental backups (which would do a full backup).
rmdir "$target_tmp" 2>/dev/null && exit 0 || true
mv "$target_tmp" "$target"
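To run such a backup regularly the script can be called from cron; a small sketch (schedule and paths are examples, not from the original notes' setup):
# m h dom mon dow command
0 3 * * * /usr/local/bin/srsync-incremental /mnt/backups /home/simon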
Display the current APT pinning settings for all packages:
$ apt-cache policy
Package files:
100 /var/lib/dpkg/status
release a=now
500 http://ftp.de.debian.org/debian/ wheezy-updates/non-free Translation-en
500 http://ftp.de.debian.org/debian/ wheezy-updates/main Translation-en
500 http://ftp.de.debian.org/debian/ wheezy-updates/contrib Translation-en
500 http://ftp.de.debian.org/debian/ wheezy-updates/non-free amd64 Packages
release o=Debian,a=stable-updates,n=wheezy-updates,l=Debian,c=non-free
origin ftp.de.debian.org
[...]
Pinned packages:
[...]
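The same command also shows the pin priority and candidate version for a single package, for example (bash chosen arbitrarily):
$ apt-cache policy bash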
After booting the installer press Esc when you see the following prompt from isolinux:
ISOLINUX 6.03 20141206 ETCD Copyright (C) 1994-2014 H. Peter Anvin et al
This should result in the following prompt:
boot:
Then enter the following command; adapt the console settings to the serial console you’re using:
boot: rescue vga=normal fb=false --- console=ttyS1,9600n8
HAProxy permits flexible rate limiting. However, it’s not obvious how to limit using multiple samples (e.g. both source and destination of the request in a transparent proxy setup, so that the rate limit applies separately for each source/destination combination). http-request track-sc0 permits only fetching a single sample. Fetching multiple samples doesn’t seem to be supported. However, there’s a workaround using set-var-fmt.
To limit to 100 HTTP requests per second for each source/destination combination:
frontend http-frontend
    mode http
    [...]
    # len = len(IPv6 as string) + 1 + len(IPv6 as string)
    stick-table type string len 79 size 1000 expire 60s store http_req_rate(60s)
    http-request set-var-fmt(req.haproxy_sticktable_key) %[src];%[dst]
    http-request track-sc0 var(req.haproxy_sticktable_key)
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
The source and destination (separated by a semicolon) as string is used as key of the stick-table. The length of each entry (len 79) corresponds to the maximum length of two IPv6 addresses plus separator. Additional values can be appended as needed.
A more compact representation might be possible using the binary type, but strings make it easy to look up the values using the stats socket:
$ echo 'show table http-frontend' | socat unix-connect:/run/haproxy/stats.sock stdio
[...]
0x557a7a8a6030: key=2001:db8:e4c0:c34b:8b7d:2d85:53ed:13d3;2001:db8:f1bf:364f:d77c:1552:67ea:be8b use=0 exp=56133 http_req_rate(60000)=42
[...]
The same setup can be used to limit TCP connections:
frontend tcp-frontend
    mode tcp
    [...]
    stick-table type string len 79 size 1000 expire 60s store conn_rate(60s)
    tcp-request connection set-var-fmt(sess.haproxy_sticktable_key) %[src];%[dst]
    tcp-request connection track-sc0 var(sess.haproxy_sticktable_key)
    tcp-request connection reject if { sc_conn_rate(0) gt 100 }
To limit not by source IP but by source IP network or range, use the ipmask() converter. For example, to aggregate to /24 IPv4 and /64 IPv6 subnets use %[src,ipmask(24,64)] in the set-var-fmt assignment.
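For instance, the set-var-fmt line from the HTTP frontend above would then become (a sketch reusing the same variable name):
    http-request set-var-fmt(req.haproxy_sticktable_key) %[src,ipmask(24,64)];%[dst]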
To send a SysRq to a GNU/Linux system over the virtual serial port (VSP) of an HP iLO, press Esc followed by Ctrl-B and then the SysRq key. For example, to perform an emergency sync type Esc, Ctrl-B, s.
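Note that the kernel only acts on SysRq if it is enabled; a quick check and a (very permissive) way to enable all SysRq functions as root:
# cat /proc/sys/kernel/sysrq
# echo 1 > /proc/sys/kernel/sysrq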
Port knocking prevents access to ports unless the user knows a “secret” sequence of ports to connect to before trying to establish the real connection. It’s not a security feature, but it provides another layer of defense and is very effective at preventing log spam from e.g. SSH.
The following snippet implements port knocking by using only iptables and requires no other daemons, thus increasing the reliability. To access SSH (port 22) the following ports must be knocked first: 5000, 7000, 6000 (don’t use only increasing/decreasing ports or multiple port scans might trigger it). A simple way to knock a port is nc example.org 5000 </dev/null. For OpenSSH the port knocking can be performed automatically with the following configuration in ~/.ssh/config:
Match host example.org exec "nc -4 %h 5000; nc -4 %h 7000; nc -4 %h 6000; true"
Iptables configuration:
#!/sbin/iptables-restore
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
# Chains for port knocking.
:knock - [0:0]
:knock_2nd - [0:0]
:knock_3rd - [0:0]
:knock_accept - [0:0]
# Thanks to http://forum.ubuntu-fr.org/viewtopic.php?pid=2518477#p2518477 for
# the port knocking rules idea.
#
# For every knocking operation there is a 5 second time frame.
#
# The knock packets are rejected (tcp-reset) here so they don't clutter the
# logs.
-A knock -m recent --rcheck --rsource --name knock_3rd --seconds 5 -j knock_accept
-A knock -m recent --rcheck --rsource --name knock_2nd --seconds 5 -j knock_3rd
-A knock -m recent --rcheck --rsource --name knock_1st --seconds 5 -j knock_2nd
# First port knocking port.
-A knock -p tcp --dport 5000 -m recent --set --rsource --name knock_1st -j REJECT --reject-with tcp-reset
# Second port knocking port.
-A knock_2nd -m recent --remove --rsource --name knock_1st
-A knock_2nd -p tcp --dport 7000 -m recent --set --rsource --name knock_2nd -j REJECT --reject-with tcp-reset
# Third port knocking port.
-A knock_3rd -m recent --remove --rsource --name knock_2nd
-A knock_3rd -p tcp --dport 6000 -m recent --set --rsource --name knock_3rd -j REJECT --reject-with tcp-reset
# Port knocking successful. Add allowed ports to this chain.
-A knock_accept -m recent --remove --rsource --name knock_3rd
# For example allow SSH.
-A knock_accept -p tcp --dport 22 -j ACCEPT
# The usual rules.
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -i lo -j ACCEPT
# Support port knocking. Ports used for port knocking cannot be used for
# other services with the current setup.
-A INPUT -p tcp -j knock
# ...
# Reject the rest. This prevents access to e.g. SSH unless port knocking is
# used.
-A INPUT -j REJECT
COMMIT
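The rules can be loaded by piping the file into iptables-restore (the path is just an example); thanks to the shebang the file can also be made executable and run directly:
# iptables-restore < /etc/iptables/port-knocking.rules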
$ gnutls-cli -p 5222 --starttls --print-cert jabber.org
[...]
<?xml version='1.0' ?><stream:stream to="jabber.org" xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0">
[...]
<starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>
[...]
Send SIGALRM to gnutls-cli to initiate the TLS connection.
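The signal can be sent from a second terminal, for example like this (assuming only one gnutls-cli process is running):
$ kill -s ALRM "$(pidof gnutls-cli)"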
Tell the loop kernel module to automatically create partition device files:
# rmmod loop
# modprobe loop max_part=63
Then mount the disk image as usual:
# losetup -f </path/to/disk/image>
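The partitions then appear as separate device files and can be mounted individually (loop0 and the first partition are assumptions; check losetup -a and lsblk for the actual names):
# mount /dev/loop0p1 /mnt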
Thanks to Chris' comment on http://www.docunext.com/blog/2007/07/losetup-working-with-raw-disk-images.html for this great idea.
Remove threads older than 3 months from a mailing list. Threads which contain messages from me or are flagged as important are not purged.
!~(~P | ~F | ~r <3m)
Use with D mapping in mutt (delete messages matching a pattern).
There seems to be no direct way to enable certificate pinning in OpenVPN (if you know one, please tell me). Only the CA can be verified by using the following configuration options:
ca path-to-certs.crt
tls-remote /C=../ST=../L=../OU=/../CN=hostname
path-to-certs.crt should contain a valid certificate chain as a PEM file, tls-remote should match the server certificate.
However an evil CA could still MITM this connection by providing a fake certificate which is signed by this CA. This is not an issue if you control the OpenVPN server and can create a specific CA for the VPN service, but if you’re just a user of the service this can be a security issue.
The following script implements certificate pinning in a primitive way by using OpenVPN’s tls-verify option, which runs a program for each certificate and aborts the connection if the program exits with a non-zero exit code. It’s not an optimal solution but works fine for me. Be careful as it’s a little fragile and might not be secure in all cases!
#!/bin/sh
# Provide primitive certificate pinning. It only uses the SHA-1 of the
# certificate, but it's better than nothing.
# SHA-1 fingerprints of the used certificates.
sha1_3=b8:96:31:66:c7:fe:d4:8c:73:85:3b:6f:36:23:24:b8:34:70:99:d9 # CA
sha1_2=e3:de:e9:d8:36:e4:c3:43:6f:b1:05:f1:ff:07:00:f7:0b:7d:0e:fd # sub certificate
[...]
sha1_0=72:e2:0b:a5:dc:5e:f9:ee:25:d9:69:c4:cc:fb:1e:63:73:17:dd:6d # server certificate
compare() {
    if test -n "$1" && test x"$1" != x"$2"; then
        exit 1
    fi
}
# This is crazy .. but that's how OpenVPN does the verification. This script
# is called with an increasing number of tls_digest_* variables set ...
compare "$tls_digest_3" "$sha1_3"
compare "$tls_digest_2" "$sha1_2"
compare "$tls_digest_1" "$sha1_1"
compare "$tls_digest_0" "$sha1_0"
exit 0
Adapt the SHA-1 fingerprints of the certificates and their count as necessary. You can obtain the fingerprint for example with GnuTLS' certtool -i </path/to/pem-certificate.
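Alternatively OpenSSL can print the fingerprint; its output is uppercase and likely has to be lowercased to match the tls_digest_* values (the path is an example):
$ openssl x509 -in /path/to/certificate.pem -noout -fingerprint -sha1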
Enable the verification by adding the following line to the configuration file:
tls-verify "/path/to/this/script"
Again, this might not be secure in all cases. If you find any issues please tell me. The best solution would be to implement certificate pinning in OpenVPN itself.
Perf can be used to determine what processes are doing with their CPU time:
$ perf record --call-graph dwarf -F 99 -ag -- sleep 60
$ git clone https://github.com/brendangregg/FlameGraph
$ perf script --no-inline | ./FlameGraph/stackcollapse-perf.pl > out.perf-folded
$ cat out.perf-folded | ./FlameGraph/flamegraph.pl > perf-kernel.svg
The resulting flamegraph shows the CPU time spent grouped hierarchically by program and function (recursively).
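System-wide recording (-a) usually requires root or a sufficiently permissive perf_event_paranoid setting; a quick check and a temporary, permissive change (the exact threshold depends on the kernel):
$ sysctl kernel.perf_event_paranoid
# sysctl kernel.perf_event_paranoid=-1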
Thanks to Brendan Gregg for flamegraphs and the perf commands. Thanks to Ralf for the suggestion to use --no-inline.
Set up a basic virtual env and update pip to the latest version (often necessary on older systems):
$ python3 -m venv .venv
$ . .venv/bin/activate
$ pip3 install --upgrade pip
Install the dependencies of the local project (e.g. from setup.py) and install the project in “develop mode” so I can use the binaries from the project:
$ pip3 install -e .
To install dependencies from a requirements.txt file use:
$ pip3 install -r requirements.txt
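A common way to capture the currently installed versions into such a file:
$ pip3 freeze > requirements.txt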
Switch to the QEMU monitor with Ctrl-Alt-2.
(qemu) info block
...
ide1-cd0: [not inserted]
...
(qemu) change ide1-cd0 /path/to/iso
To eject it use:
(qemu) eject ide1-cd0
Install MBR on the disk:
$ dd if=/usr/lib/syslinux/mbr.bin of=/dev/<device>
The syslinux partition must be marked as bootable.
Install syslinux on the partition (e.g. /dev/sda1), which must be formatted as a FAT32 file system:
$ syslinux --install /dev/<fat32 partition>
Copy mboot.c32 to the file system. Create syslinux.cfg with the following content:
DEFAULT mboot.c32 <multiboot-elf-to-load> --- [images for multiboot..]
For example:
DEFAULT mboot.c32 system.elf --- image.bin
On Windows, basic disk performance can be measured with:
winsat disk -drive c
Thanks to David d C e Freitas for this handy command.
Initialize the USB stick as GPT and create one FAT32 partition. Copy everything from the ISO except sources/ to it. Then copy sources/boot.wim to the partition (into sources/). Afterwards, create an NTFS partition and copy sources (except for boot.wim) to it. (The extra NTFS partition is necessary because sources/ contains files larger than 4 GiB.)
Make sure the FAT32 partition does not use the “EFI-System-Partition” type or the installation might fail. It also looks like the Windows 10 installer has problems with multiple disks. So unplug the unused ones during the installation.
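A possible command sequence for these steps; the device name, ISO file, mount points and partition sizes are all assumptions and must be adapted:
# parted /dev/sdX mklabel gpt
# parted /dev/sdX mkpart win10-fat fat32 1MiB 2GiB
# parted /dev/sdX mkpart win10-ntfs ntfs 2GiB 100%
# mkfs.vfat -F 32 /dev/sdX1
# mkfs.ntfs -Q /dev/sdX2
# mkdir -p /mnt/iso /mnt/fat /mnt/ntfs
# mount -o loop,ro Win10.iso /mnt/iso
# mount /dev/sdX1 /mnt/fat
# mount /dev/sdX2 /mnt/ntfs
# rsync -r --exclude /sources /mnt/iso/ /mnt/fat/
# mkdir -p /mnt/fat/sources
# cp /mnt/iso/sources/boot.wim /mnt/fat/sources/
# rsync -r --exclude boot.wim /mnt/iso/sources/ /mnt/ntfs/sources/
# umount /mnt/iso /mnt/fat /mnt/ntfs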
Download the firmware as ISO image from the Lenovo website.
Extract boot part:
# geteltorito n10ur07w.iso >update.iso
Mount it:
# partx -av update.iso
# mount /dev/loop0p1 /mnt
Copy files to /boot/efi (or where your EFI partition is mounted):
# cp -r /mnt/FLASH /boot/efi
# cp /mnt/EFI/BOOT/BootX64.efi /boot/efi/FLASH
Boot the BootX64.efi file, e.g. boot into GRUB, then press c to start the command line:
grub> set root=(hd0,gpt1)
grub> chainloader (hd0,gpt1)/flash/BootX64.efi
grub> boot