I’m adding a section to my Claude Code-driven project READMEs (below).

I came across someone’s attempt to create an “AI Free” badge people could adopt for their projects, and decided that while I appreciate the sentiment, it isn’t actually helpful to anyone except people whose position stops at “I don’t want to use code produced by an LLM.” That’s a fine position, and I take no philosophical issue with it, but my projects are plainly excluded from using that badge, and it does nothing to help people who come across them, and who aren’t reflexively anti-AI, understand what it means to use them or interact with the codebase.

I could take a sort of naive position of, “look, a repo on GitHub is just a repo on GitHub. There are no promises in this world.” And I’ve chosen the MIT license for everything, because it has such a “do whatever you want with this” vibe. But I got a PR on one of the projects, read it and the problem the user was trying to solve, and realized, “oh, right, it’s a GitHub repo, and even though Claude is signing all the commits, simply having this code in a public repo on GitHub sets an expectation with at least some people.” Making it worse, the PR tripped my “I don’t think this PR sounds safe, exactly” instincts, which meant I had now incurred work by being unclear about what I’m up to here.

The other thought that crossed my mind came from a conversation with a VP of UX years ago, when I told him we were struggling to make the CSS for our very oldest Puppet documentation work with recent design changes.

He asked me, “do you maintain those docs anymore?”

“No. And we say so.”

“Are the releases they cover supported anymore?”

“No. We mention that, too.”

“Can you still read the docs, even if the CSS is broken?”

“Well, it’s ugly and a bit distracting, but the content is there.”

“Good. Then it sets the right expectation. Let it serve as signal. You’ll probably get fewer complaints about it because you’ve made your position clear with the non-support message, and backed it up with obvious neglect.”

So instead I’m going to disable PRs, to foreclose on any potential for misunderstanding, and also do my own version of a “Made with AI” Dubious Housekeeping Seal.

I’m also committing to making my language about these experiments less twee:

I did not “work with the robots,” “ask the robots,” “torture a bot,” etc. I used an LLM, used AI, used Claude, etc.

Whether it shows or not, I am in some fairly serious internal debate about what these tools mean, how we should use them, and whether there can be ethical use of them at all. People I know, care about, and respect hold takes across the whole spectrum on AI. It feels disingenuous, and a little dishonest, for me to use diminishing or faux-naive language to describe what I’m doing here, as if it were a matter of idle amusement to me, or something I don’t realize I should take seriously or be thinking about.

So, this is the new README edition. And once the issue that PR against one of the projects raised is resolved, I’ll turn off PRs: more signal.

Important consideration before using this code or interacting with this codebase

This application is an experiment in using Claude Code as the primary driver of the development of a small, focused app built around the owner’s particular point of view on the task it accomplishes.

As such, this is not meant to be what people think of as “an open source project,” because I don’t have a commitment to building a community around it and don’t have the bandwidth to maintain it beyond “fix bugs I find in the process of pushing it in a direction that works for me.”

It’s important to understand this for a few reasons:

  1. If you use this code, you’ll be using something largely written by an LLM, with all the things we know this entails in 2025: Potential inefficiency, security vulnerabilities, and the risk of data loss.

  2. If you use this code, you’ll be using something that works for me the way I would like it to work. If it doesn’t do what you want it to do, or if it fails in some way particular to your preferred environment, tools, or use cases, your best option is to take advantage of its very liberal license and fork it.

  3. I’ll make a best effort to tag the codebase only when it is in a working state, with no bugs revealed by functional testing. Tags just mean “seemed to be in working order.”

While I appreciate and applaud assorted efforts to certify code and projects AI-free, I think it’s also helpful to post commentary like this up front: Yes, this was largely written by an LLM, so treat it accordingly. Don’t think of it as code you can engage with; think of it as someone’s take on how to do a task or solve a problem.