I was staring at ERR_TOO_MANY_REDIRECTS on three production domains at once. MedicareMagic, MyHealthMagic, NewsletterMagic - all three were stuck in the same infinite redirect loop. The fix itself was simple - a self-redirect guard in the middleware - but the loop was a symptom of a bigger change I’d been building toward for weeks. Every app in the Magic Platform was about to get its own front door.
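The guard itself boils down to a host-and-path comparison before issuing the redirect. A minimal sketch of the idea - `isSelfRedirect` is a hypothetical helper name, not the actual platform code:

```typescript
// Sketch of a self-redirect guard: before redirecting, check whether the
// computed target is the URL we are already on. Without this check, every
// request redirects back to itself and the browser gives up with
// ERR_TOO_MANY_REDIRECTS.
function isSelfRedirect(current: URL, target: URL): boolean {
  return (
    current.host === target.host &&
    current.pathname === target.pathname &&
    current.search === target.search
  );
}

// In Next.js middleware the guard would look roughly like:
//   if (isSelfRedirect(req.nextUrl, target)) return NextResponse.next();
//   return NextResponse.redirect(target);
```

Comparing host as well as path matters here, because the whole point of the migration was redirecting apex traffic to a different host.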
Things We Learned Today
The redirect loops led me into a side project that taught me something about visual feedback I should have already known. I was cleaning up AuthorMagic - a book management platform for authors - and removed a toast notification that appeared whenever someone added books to a collection. The dialog already closed on success. The book grid already refreshed with the new additions. Errors already displayed inline. The toast was yet another signal for the same event. When the UI transition itself shows the result - a dialog closing, new items appearing in a list - an additional notification is noise, not signal. I find myself adding these confirmation toasts reflexively, and each one is worth questioning.
I spent part of the evening reading about Compound Engineering, a methodology published by Every that formalizes AI-assisted development into a loop where each unit of work makes the next one easier - and a phrase I like much more than vibe coding. The discovery was that our Magic Platform workflow independently evolved about 85% of the same patterns. Plan-first development, parallel review agents, pattern capture, session persistence - all of it emerged naturally from daily use. The meaningful delta is in two places: their explicit “compound step” that forces you to document learnings as a mandatory workflow stage, and their priority classification (P1/P2/P3) for findings. Our continuous insight auto-capture is more seamless - it captures without interrupting flow. The biggest philosophical split is single-command autonomy versus intentional human checkpoints at each stage. Both are valid depending on how much risk you want the AI to take on unsupervised.
I also hit two infrastructure gotchas worth remembering. Release Please v4 - the tool that manages changelogs and version bumps - kept throwing “unexpected token” errors in CI logs. I spent time chasing those warnings before discovering they were harmless. The conventional commit parser just logs a warning when it encounters non-standard commit messages like “Production Release 2026-02-28 (#201).” The actual failure was buried at the end of the logs: a “sha wasn’t supplied” GitHub API error caused by a stale release-please branch with cached file SHAs that no longer matched main. Deleting the stale branch fixed it. The lesson is one I keep relearning: always read the last error in CI logs, not the loudest one.
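The fix itself is a one-liner. The branch name below assumes release-please v4’s default naming for a main-branch release PR - check what the stale branch is actually called in your repo before deleting:

```shell
# Delete the stale release-please branch; the next run recreates it from
# current main, so the cached file SHAs match again.
# Branch name assumes release-please's default pattern for the main branch.
git push origin --delete release-please--branches--main
```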
The second gotcha was psql --connect-timeout=30 failing on GitHub’s ubuntu-latest runners. That long-form flag format is not universally supported across psql versions. The portable fix is PGCONNECT_TIMEOUT=30 as an environment variable prefix - it works everywhere libpq runs, and it scopes to a single command when used as a prefix.
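The two forms side by side, assuming a `$DATABASE_URL` connection string (illustrative - any libpq conninfo works):

```shell
# Brittle: long-form flag syntax, not accepted by every psql build
#   psql --connect-timeout=30 "$DATABASE_URL" -c 'select 1'

# Portable: PGCONNECT_TIMEOUT is read by libpq itself, so it works with any
# psql version, and using it as a prefix scopes the 30-second timeout to
# this single invocation rather than the whole shell session.
PGCONNECT_TIMEOUT=30 psql "$DATABASE_URL" -c 'select 1'
```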
Things We Did Today
The biggest change was the domain migration. Every app in the platform moved from apex domains to app.{domain} subdomains - app.authormagic.com, app.getmedicaremagic.com, and the rest. The apex domains now serve Hugo-built landing pages instead of the Next.js apps directly. This means each product has a proper marketing front door that loads instantly as static HTML, with the authenticated app living one subdomain away. I wired up Hugo landing page support for all six apps, built a live preview pane into the landing page editor in IntensityMagic - our admin portal - and fixed the sign-in links to use absolute URLs pointing at the new subdomain targets.
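The sign-in link fix matters because a relative "/sign-in" on the static landing page resolves against the apex domain and lands back on Hugo, not the app. A minimal sketch of the idea - the `APP_HOSTS` map and `signInUrl` helper are illustrative names, not the platform's actual code:

```typescript
// Hypothetical per-app host map: apex domains serve Hugo landing pages,
// the Next.js apps live on app.{domain} subdomains.
const APP_HOSTS: Record<string, string> = {
  authormagic: "app.authormagic.com",
  medicaremagic: "app.getmedicaremagic.com",
};

// Build an absolute sign-in URL pointing at the app subdomain, so links
// on the static landing page never resolve relative to the apex host.
function signInUrl(app: string): string {
  const host = APP_HOSTS[app];
  if (!host) throw new Error(`unknown app: ${app}`);
  return `https://${host}/sign-in`;
}
```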
AuthorMagic is getting close to its first alpha test. I greyed out nav items for features that are not ready yet - Events, Social Media, and Sales Upload now show “coming soon” badges instead of looking clickable. I raised the book discovery threshold from 15 to 20 results and updated the warning text. I added an escape hatch for when auto-search pulls in books by a different author with the same name. I also added Terms of Service and Privacy Policy pages. That batch closes out most of the alpha launch readiness checklist.
On the infrastructure side, I eliminated local Supabase as a development dependency entirely. All seven apps now develop against the Preview database - no more Docker containers, no more seed data drift. I closed out two environment health audit tickets, fixed RLS INSERT policies on a couple of tables that were too permissive, registered an orphan migration, and built a Sentry triage workflow that summarizes error patterns and creates Linear tickets automatically.
CompanyOS - our internal operations toolkit - had a productive day. I merged about fifteen pull requests covering multi-company identity selection for Google Workspace skills, a symlink system for sharing commands across config repos, an auto-generated README catalog, and a batch of fixes from Seth’s first week using the system. I also wrote up skill recommendations based on a delta analysis of external tooling - mapping each external capability to its nearest internal equivalent and surfacing only the genuine gaps.
CureCancerMagic - a cancer care coordination app - got its timeline view, task management UI, and communication log wired up. The auto-generated email feature is now working, letting care coordinators select contacts and generate draft emails from context.
Fun Things to Try
The Compound Engineering methodology has an explicit “compound step” that runs after every task - a multi-agent sweep that documents patterns, gotchas, and reusable learnings as a mandatory part of the workflow. Our insight auto-capture does something similar but more passively. I want to experiment with a hybrid approach: keep the auto-capture for organic discoveries, but add an explicit post-task prompt that asks “what did this teach you that would help with the next task?” This would be a forcing function that catches the learnings that do not naturally bubble up as insight blocks.
The delta analysis approach I used for evaluating external skills against existing capabilities could become a reusable pattern for any tool evaluation. Instead of a presence/absence comparison - “do we have X?” - it maps external features to their nearest internal equivalent and surfaces only the additive gaps. I built a coverage matrix for CompanyOS that showed several “missing” skills were actually 80% covered by existing ones, with only one or two specific features worth adopting. This would be useful any time I am evaluating whether to adopt a new tool or build an integration.
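The core of the pattern fits in a few lines. A sketch under assumed names - the `Mapping` shape, the sample data, and the 80% threshold are all illustrative, not the actual CompanyOS matrix:

```typescript
// One row of a coverage matrix: an external capability, its nearest
// internal equivalent (null if none), and an estimated coverage fraction.
type Mapping = {
  external: string;
  nearestInternal: string | null;
  coverage: number; // 0..1, how much of the external feature we already have
};

// Surface only the additive gaps: anything already covered at or above the
// threshold by an existing skill is not worth adopting wholesale.
function genuineGaps(matrix: Mapping[], threshold = 0.8): Mapping[] {
  return matrix.filter((m) => m.coverage < threshold);
}
```

The useful output is not the gap list alone but the contrast with the presence/absence view: rows with high coverage and a named nearest equivalent are exactly the “missing” skills that turn out to be mostly built already.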