Claude Code as a home lab sysadmin
Set up Immich on my home lab for family photos and installed Claude Code to help manage the infrastructure. Currently extending storage to my NAS for backups while I wait for a new HDD to arrive for a RAID 1 setup. The interesting part: the NAS is only reachable over SSH with key auth on the local network, and Claude handled the full security config (IP whitelisting, firewall rules, the works). It even evaluated the disk health. Turns out it's very proficient at sysadmin tasks when you give it shell access to the right machine.
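For reference, that kind of lockdown might look roughly like this on a Debian-style NAS. This is a hedged sketch of the general technique, not what Claude actually ran: ufw, smartmontools, the subnet, and the device name are all assumptions you'd adapt to your own box.

```shell
# ASSUMPTIONS: Debian-ish system with ufw and smartmontools installed;
# 192.168.1.0/24 and /dev/sda are placeholders for your LAN and disk.

# Key-only SSH: disable password logins, then reload the daemon
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload ssh

# Firewall: deny inbound by default, allow SSH only from the local subnet
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp
sudo ufw enable

# Quick SMART health verdict for the disk
sudo smartctl -H /dev/sda
```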
Pencil.dev
Used pencil.dev for some app design work. Genuinely surprised by the speed and how good the default design taste is. Worth trying if you're prototyping screens.
Performant MySQL dump and import for large databases
A few commands I used a while back to migrate a huge database without killing the server.
Step 1: Export — dump and compress in one shot:
nice -n 19 mysqldump -u user -p -h hostname --single-transaction --quick --lock-tables=false dbname | nice -n 19 gzip > dbname.sql.gz
nice -n 19 runs at lowest CPU priority so it doesn't starve other processes. --single-transaction takes a consistent snapshot without locking the whole database (InnoDB only). --quick streams rows one at a time instead of buffering the entire table in memory — critical for large tables. --lock-tables=false avoids table-level locks so reads keep working. Piping straight to gzip saves disk space and is usually faster than writing the raw SQL first.
Step 2: Transfer and decompress — move the file to the target server and unzip it:
gunzip dbname.sql.gz
This gives you the raw dbname.sql file. If you're transferring between machines, scp or rsync the .gz file first — much faster over the wire.
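If the two machines are separate, the transfer plus a quick integrity check might look like this (the hostname and paths are placeholders, not from the original setup):

```shell
# Copy the compressed dump to the target server
# (user@target and /tmp are placeholders for your own host and path)
rsync -av --progress dbname.sql.gz user@target:/tmp/

# On the target: verify the archive survived the copy before unpacking it
gzip -t /tmp/dbname.sql.gz && gunzip /tmp/dbname.sql.gz
```

`gzip -t` exits non-zero if the archive is corrupt, so a truncated transfer fails loudly instead of producing a silently broken .sql file.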
Step 3: Import — log into MySQL and disable autocommit before sourcing:
mysql -u user -p
Then inside the MySQL shell:
USE dbname;
SET autocommit=0;
SOURCE /path/to/dbname.sql;
COMMIT;
By default MySQL commits after every single INSERT, which is painfully slow on large dumps. SET autocommit=0 batches all the writes and COMMIT flushes them once at the end. On a multi-gigabyte dump this can be the difference between hours and minutes.
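If disk space on the target is tight, steps 2 and 3 can also be collapsed into a single streaming pipeline that wraps the whole import in one transaction and never writes the uncompressed .sql to disk. A sketch, using the same placeholder credentials and database name as above:

```shell
# Prepend the autocommit toggle, stream the decompressed dump into mysql,
# and commit once at the end; the raw .sql never touches disk.
{ echo "SET autocommit=0;"; zcat dbname.sql.gz; echo "COMMIT;"; } \
  | mysql -u user -p dbname
```

The brace group concatenates the three streams in order, so mysql sees exactly the same statement sequence as the interactive session above.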
Use at your own risk.