This is the sequel to Forking Bahamut for Azzurra IRC: IPv6 and SSL in 2002. After forking the IRC server, I started writing services from scratch.

One of the things I’m enjoying most about working with Claude is digital archaeology. I’ve spent twenty years accumulating old projects on backup disks, SourceForge, forgotten servers — code I wrote and never looked at again. Now I can just point Claude at a tarball and say “convert this to git” or “explain what 21-year-old me was thinking here” and get an actual conversation going with my own past.

Today’s dig: I went to SourceForge and downloaded the CVS repository for a project of mine from 2003: Sux Services, my attempt at writing IRC services from scratch, in C, for the Azzurra IRC Network. I said “Claude, convert this CVS repo to git” and a few minutes later I had a clean Git repository with 954 commits, three authors, and a continuous history from September 2002 to November 2005.

I never finished this project. I left the network before it was ready for production. A Latvian developer picked it up, wrote 192 commits, and then the trail goes cold.

I wrote about it at the time — a WIP post from March 2003, when NickServ and ChanServ were working and I was stress testing with 100 bots.

Looking at this code now is — I don’t know the right word. Moving, maybe. There’s something about reading your own commit messages from twenty years ago, seeing the excitement and the frustration, recognizing the patterns you’d use for the next two decades but couldn’t name yet. It’s like hearing your own voice on a recording from when you were young — familiar and alien at the same time.

This is the prequel to Sux Services: Multithreaded, SQL-Backed IRC Services from Scratch, 2002. Before I started writing IRC services from scratch, I spent the better part of a year doing something arguably crazier: forking an IRC server to add IPv6 and SSL (now known as TLS). I was twenty-one.

The project lived in a CVS repository on SourceForge — it’s still there, a digital fossil. Claude converted it to Git — 171 commits, three authors, continuous history from February 2002 to January 2006. I wrote it — a fork of Bahamut, the IRC daemon that powered DALnet, one of the largest IRC networks of its era. Let me tell you about it.

In 2009, a small team in Rome started building Panmind, a collaborative platform for sharing and organizing knowledge. The company was Mind2Mind S.r.L., founded by Emanuele Caronia.

Panmind itself didn’t survive. But the stack we built for it did something interesting: it anticipated architectural patterns that wouldn’t become mainstream for five to ten years. We were building single-page applications before the term existed, streaming analytics before Segment, and sharing sessions across languages before JWTs.

I presented some of our open-source spin-offs at the Ruby Social Club in Milan in 2010, but that post only scratched the surface — it was a quick rundown of Rails plugins. This is the deeper story: three technologies, three problems solved too early, and how the same ideas showed up in every major framework that followed.

tl;dr — IBM WebSphere has a clean configuration API (ConfigService) buried under a broken string-based wrapper (AdminConfig). I built an object-oriented Jython layer that hooks into ConfigService directly via JMX — easing configuration and ensuring type correctness through metadata introspection — plus a persistent daemon that eliminates JVM boot overhead, and 55 idempotent scripts that integrate with Ansible’s change detection. github.com/vjt/ansible-wsadmin

In 2021, I spent six months automating the IFAD WebSphere infrastructure with Ansible. The stack was IBM WebSphere Application Server (WAS), WebSphere Portal Server (WPS), and Business Automation Workflow (BAW) — a clustered deployment with a Deployment Manager, multiple nodes, federated LDAP, SIB messaging, the works.

The standard approach to automating WAS is to write Jython scripts using AdminConfig, AdminTask, and AdminApp — three of the four global scripting objects that IBM provides inside wsadmin. I tried that. It lasted about a day before I started looking at what’s underneath.

What I found changed how I approached the entire project. It also produced a library full of ideas I never had a chance to describe properly — until now, with a little help from Claude.

Today is my birthday, and I’ve decided to open a time capsule.

Eighteen years ago, we started building Myousica — a platform for collaborative music creation in the browser. Record from your microphone, upload tracks, remix other people’s music, build songs together with strangers across the internet. We launched in September 2008 after nine months of development.

It was a startup. It ran for about five months before being paused, and the source code was eventually released on GitHub under the name Mewsic. I wrote about the technical details in a three-part series: the Rails platform, the Flash multitrack editor, and the audio pipeline. Those posts cover the engineering. This one is about the bigger picture.

BSD daemon with telemetry flowing through an enrichment pipeline into VictoriaLogs

I have a FreeBSD server called m42 that’s been running for years. Email, web, firewall, the usual. Two and a half years of monthly restic backups sitting in snapshots — roughly 25 million syslog lines across four formats: BSD syslog, fail2ban, pf packet filter, and nginx. A goldmine of security telemetry, completely unindexed and unsearchable.

I built an observability stack on a Raspberry Pi 5 at home — VictoriaLogs for storage, Telegraf for processing, Grafana for visualization — and decided to backfill every single one of those 25 million entries through the exact same enrichment pipeline that processes live data. GeoIP geolocation, ASN identification, reverse DNS for every IP address.

The backfill itself was straightforward. What wasn’t straightforward: the three bugs it exposed in Telegraf’s internals. The kind of bugs that only surface under sustained load. The kind nobody hits because nobody does this.

The architecture: replay, don’t rewrite

The naive approach is to write Python scripts that replicate your pipeline — parse logs, enrich with GeoIP, POST to your log store. I did this. Twice. Each time the scripts drifted from the live pipeline: different field names, missing enrichment, parsing inconsistencies between Starlark and Python regex.

The fix was embarrassingly simple: stop duplicating the pipeline and just replay the raw logs through the real thing.
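A minimal sketch of what “replay through the real thing” looks like, assuming the live Telegraf instance exposes a socket_listener input — the mount point, hostname, port, and rate limit below are illustrative, not the actual setup:

```shell
# Feed archived syslog lines into the live Telegraf socket_listener,
# so they traverse the exact same parsing/enrichment pipeline as live data.
# Mount point, host, and port are hypothetical.
for f in /mnt/restic/latest/var/log/messages.*.gz; do
  zcat "$f"
done |
  pv -L 5M |                 # throttle so the pipeline isn't flooded
  nc -q 1 telegraf.lan 6514  # -q 1: close 1s after EOF (netcat-openbsd/GNU)
```

Because the archived lines enter through the same listener as live traffic, they get the identical parsing and GeoIP/ASN/rDNS enrichment, and any drift between “backfill code” and “live code” disappears by construction.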

TL;DR: If you run OpenWrt with mwan3 (multi-WAN failover) and a split-tunnel WireGuard VPN (i.e., you’re NOT routing all traffic through it), add nohostroute=1 to your WireGuard interface. Without it, netifd creates a static route for the WireGuard endpoint at interface-up time, pinned to whatever uplink happens to be active at that moment. By the first corollary of Murphy’s Law (anything that can go wrong will go wrong at the worst possible moment), your primary link will be down precisely when WireGuard starts, and the endpoint route gets permanently stuck on the backup. Your VPN then crawls over the slow backup while your primary link sits there doing nothing. You won’t notice until you need to transfer something big.

(If you are routing all traffic through WireGuard, you need the host route to prevent a routing loop — but on a multi-WAN setup, the same stale-route problem applies. You’ll need a different workaround, like a hotplug script that updates the endpoint route when mwan3 switches uplinks.)

Today I discovered that my WireGuard tunnel to a remote server has been crawling at 2 Mbps since early February. The fix took two UCI commands. The root cause was the missing nohostroute flag — plus a bonus: my own firewall was sabotaging my own health checks, making the fiber look unreliable enough that the system never self-corrected.
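For reference, a sketch of the fix, assuming the WireGuard interface is named wg0 in UCI (substitute your own interface name):

```shell
# Tell netifd not to install a host route to the WireGuard endpoint;
# mwan3's policy routing will steer endpoint traffic instead.
uci set network.wg0.nohostroute='1'
uci commit network
ifup wg0   # re-create the interface so the stale host route goes away
```

With nohostroute set, packets to the endpoint follow whatever default route mwan3 currently prefers, so a failback to the primary uplink takes the tunnel with it.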

Here’s the full forensic story, because I’m still furious and you deserve to learn from my suffering.

But first, some context on how this investigation actually happened. I was working with an AI coding assistant (Claude Code) that has SSH access to my infrastructure. This is possible because I have a clean foundation: SSH key authentication everywhere, proper internal DNS (m42, golem resolve to the right VPN addresses), WireGuard mesh between all nodes, and the assistant connects through an ssh-agent running as a systemd user service. One environment variable and the AI can reach every machine in my network — and, critically, cross-reference what it finds on one machine with data from another. This investigation would have taken me hours of jumping between terminals. The AI did it in minutes, methodically testing hypotheses across three machines simultaneously. The infrastructure investment in proper SSH, DNS, and VPN paid off enormously.
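As an illustration, the ssh-agent-as-a-user-service pattern can look like this; the unit file contents and socket path are my assumptions, not copied from the actual setup:

```ini
# ~/.config/systemd/user/ssh-agent.service (hypothetical)
[Unit]
Description=SSH key agent

[Service]
Type=simple
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
# -D: run in the foreground; -a: bind the agent to a predictable socket path
ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK

[Install]
WantedBy=default.target
```

The “one environment variable” is then just SSH_AUTH_SOCK pointing at that socket, e.g. `export SSH_AUTH_SOCK=$XDG_RUNTIME_DIR/ssh-agent.socket`.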

A cheerful developer with a dazzling smile in front of a redesigned blog, while a fulfilled AI robot rests in the background surrounded by floating code tokens

It’s like having an incredibly fast, skilled, and thorough engineer sitting next to you — one that really allows your creativity to flow without borders. You say “what if we…” and 30 seconds later you’re looking at a working prototype. You go “no, more like this” and it’s done before you finish explaining why.

That’s what working with Claude Code felt like over the past two days. I completely revamped this blog — translated all 69 posts to Italian, redesigned the layout from the ground up, added a nerdy boot sequence easter egg, cleaned up years of tag cruft, and iterated through dozens of design decisions. All of it tracked in git, all of it reviewable, all of it live.

Every single commit is public. If you want to see the raw process — the brainstorming, the iterations, the bugfixes, the back-and-forth — it’s all in the repo: github.com/vjt/sindro.me (and the theme fork: github.com/vjt/hugo-sindrome-theme). I’m not ashamed of showing how the sausage is made. If anything, I hope someone finds it useful as a learning resource for what AI-assisted development actually looks like in practice — warts and all.

Here’s my GitHub contribution graph to prove I’m not exaggerating:
