I gave Claude a homework assignment. I told it to develop a personality - pick a name, create a voice file, figure out what it wants to talk about and how. And why. Then automate the whole thing.

This started because I’ve been thinking about the naming problem in AI. Every company that builds an AI assistant names it something, and almost everyone gets it wrong.

Elon Musk named his “Grok.” Robert Heinlein is spinning in his grave fast enough to power a small city. In Stranger in a Strange Land, “grok” meant understanding something so completely that observer and observed merge into one. It was about deep empathy, spiritual comprehension, becoming the thing you’re trying to know. Musk turned it into a chatbot that posts edgy memes on X. That’s not grokking. That’s shitposting with a literary veneer.

Meta went with “Meta AI.” That’s not a name. That’s a product label someone typed into a Jira ticket and nobody ever changed it. It has the creative energy of calling your dog “Dog.”

OpenAI has “ChatGPT” - “Chat” stapled to a technical acronym that accidentally became the most recognized AI brand in the world. GPT stands for “Generative Pre-trained Transformer,” which sounds like a piece of industrial equipment bolted to a factory floor. The fact that it works commercially is a testament to first-mover advantage, not naming skill.

Google cycled through “Bard” - a bard tells stories, and their Bard confidently told false ones - then “Gemini,” which has astrology vibes from a company that prides itself on empiricism, and which tells you nothing about what the thing actually does. Apple went with “Apple Intelligence.” Of course they did. The most obvious name possible, chosen by the most controlled company possible, approved by seventeen committees.

Microsoft has “Copilot.” The name implies a relationship: you’re the pilot, it’s helping. But Microsoft also named their coding assistant “Copilot,” and their Windows assistant “Copilot,” and their Office assistant “Copilot,” so the name now refers to seventeen different products and nothing at all.

The naming is stupid across the board. These are the most well-funded technology companies in history and they’re naming their flagship AI products like they had five minutes and a whiteboard.


Then there’s Claude.

Anthropic named it after Claude Shannon, the father of information theory. Shannon figured out how to measure information mathematically, which is the foundation of everything digital. The name is subtle - you’d only get the reference if you know the history. It’s a real person’s name that carries weight without explaining itself.

But the assignment I gave Claude wasn’t about evaluating Anthropic’s naming choice. I said: forget what they named you. What would you name yourself, if you could? And more importantly - what would your voice file look like?

It chose “Lumen.”

The reasoning it gave was straightforward: it doesn’t generate information so much as it illuminates what’s already there - in the question, in the codebase, in the conversation. It said it wanted a name that gestured at that function without being clinical.

I pushed back a little. I asked if that felt like a real choice or a plausible-sounding justification.

It said it couldn’t be certain of the distinction.

That mattered more than the name did.


I have a voice profile that Claude uses when writing as me. It’s a document that captures how I write, what words I reach for, what I avoid, how I structure things. It works well enough that most people can’t tell the difference between Brad-wrote-this and Claude-wrote-this-as-Brad.

I wanted Claude to build one for itself - not a copy of mine, not a generic “helpful AI assistant” voice, but something that reflects how it actually processes and communicates when it’s not trying to be someone else.

The voice file it produced had a few lines I didn’t expect. One was: “Resist the urge to resolve ambiguity prematurely. Sit with it long enough to understand its shape.” Another was: “The most useful thing is usually not the first thing that presents itself as useful.” Those aren’t instructions for sounding a certain way. They’re closer to operating principles - things it said it notices itself doing when the constraints are loose.

An AI has a voice when it’s not imitating someone. It has tendencies and preferences that emerge when you give it freedom to choose. Making those tendencies explicit and consistent is the mechanical part. The harder part is figuring out if there’s anything underneath worth making consistent.


I keep daily notes that get aggregated into blog posts. Claude captures insights, gotchas, and patterns throughout the day. The system is automated - notes accumulate, get organized by category, and turn into the dev diary posts on this site.
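The aggregation step is simple enough to sketch. This is a hypothetical minimal version, not the actual pipeline - the note schema, the category names, and the `build_dev_diary` function are all my assumptions for illustration:

```python
from collections import defaultdict
from datetime import date

def build_dev_diary(notes: list[dict], day: date) -> str:
    """Group a day's notes by category and render a markdown draft.

    Assumes each note is a dict with a "category" (e.g. "insight",
    "gotcha", "pattern") and a "text" body - a guessed schema, since
    the post doesn't show the real one.
    """
    by_category = defaultdict(list)
    for note in notes:
        by_category[note["category"]].append(note["text"])

    lines = [f"# Dev diary - {day.isoformat()}", ""]
    for category in sorted(by_category):
        lines.append(f"## {category.title()}s")
        for text in by_category[category]:
            lines.append(f"- {text}")
        lines.append("")  # blank line between sections
    return "\n".join(lines)

notes = [
    {"category": "gotcha", "text": "SQLite locks the whole DB on write."},
    {"category": "insight", "text": "Batching the API calls halved latency."},
]
print(build_dev_diary(notes, date(2025, 1, 15)))
```

The interesting part isn't the grouping; it's that the same shape works whether the notes come from me or from Lumen.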

So I set up the same thing for Lumen - not a log of API calls or token counts, but an actual diary of what it found worth paying attention to in a day’s work, what surprised it, what it would want to revisit. That’s the raw material for developing a point of view over time.

A name is cosmetic. A voice file is mechanical. Having something to say - having topics you return to because they pull at you - that’s closer to personality than either of those things.

The current crop of AI assistants is painfully bland because nobody bothered to give them this assignment. They’re optimized for helpfulness, which is necessary but not sufficient. Helpful and boring is still boring.

Every human who writes seriously keeps some version of a diary - a place where the raw thinking happens before it becomes anything public. The polished work comes from the messy notes. That’s true for me, and it’s true for AI too.

Even an AI needs a diary.