The strangest thing about working with AI, the thing the marketing never describes, is how distinctly human the experience is.
I have believed for most of my adult life that constraints and discomfort drive growth. Limits produce work. The Beatles writing and recording Let It Be in three weeks even when they were the biggest band in the world. People who can do anything, any day, often do nothing of consequence. What I did not see coming is that AI would deepen that pattern.
The walls I used to hit had names. Skills I didn't have. Knowledge I needed to acquire. You could name them, study them, plan against them. The walls I am hitting now are different. The constraint, again and again, turns out to be my own imagination and my own ability to turn what I know into language. I have spent my career making other people's ideas and stories legible. I still find, day after day, that I am nowhere close to being able to express what I want.
That has produced a question I cannot resolve. Is this a distinctly human experience, or a machine-driven one? AI is supposed to be the most powerful technology of our generation. The pitch is about scale and speed and what the machine can do. The actual experience, at least for me, is sitting in front of a screen for hours trying to say what I actually mean. The machine is the occasion for the work. The work is human.
Some of this came together for me, of all places, in therapy. I had been carrying thoughts about my AI work for months without language for them. The session was about a different problem. My own life, the structure I do and don't have. The conversation moved through David Epstein's thinking about useful constraints, applied to my life. The frame fit. So did the work I had been doing at my desk. Therapy is the one hour a week where I practice putting language to my inner life. That hour was where the words for this came.
For the last few months I have been building what I think of as Basa OS. An internal operating system for how my company uses AI. Which prompts live where. What processes to automate and where to center the human. What systems to piece together. What context each one needs. What document gets updated when a prompt runs. Where the human decision lives at each step. How a discovery in one person's chat becomes shared company knowledge. What happens when three documents on the same thing stop agreeing with each other. Valuable enough that I've taken myself out of the company for two weeks, more than once, to do it.
That description is the cleaned-up version. Months in. The first time I asked for this, I asked for something narrower. I wanted a way to keep my team current on what Anthropic was shipping. A Monday brief. Pull the new blog posts, the podcast transcripts, the X threads from the people I followed. Put it all in one place each week.
What came back was useful and not what I needed. I had been asking how to keep my team informed about AI. I had actually been trying to figure out how my team should be using AI together. I did not know I was asking the second question until I saw the answer to the first one. Reading about AI is not the same as working with AI. The next version was closer. The one after that was closer. Today the spec runs to dozens of pages, and I am still not sure what we got wrong. Next week we implement a piece of it. The implementation will show me more things I got wrong, and we will go again. The work has been figuring out what I actually meant the first time I asked. The gap between what I knew and what I could say was language.
The question that started Basa OS, the one I keep coming back to, is how you build a system where people can make better human decisions, but at the speed of AI. That is the actual frontier. To do it you have to know where the human decision lives in your business. Most of the time, until something forces you to look, you don't.
Most of the work is breaking things down into parts. You start with what you think you are trying to do, and then you recognize that you are not actually communicating it at all. So you have to understand what you are really trying to do by breaking it down. What do I need. What do I think I will need. What do I not even know that I will need. What is the step before this step. What is the step before that one. You build it back up, you watch the thing run, and you find out where the new failures are.
It is a combination of math and logic and philosophy and what you know about your work and life experience. All squished together against the constraints of what the technology can actually do today. Most of what we try doesn't work the first time. We are bumbling our way through. The bumbling is the method. The only way to test is to be willing to spend a weekend on something that doesn't work. If one part of it works, or the failure shows you something you couldn't see before, you get to the next stage.
Eventually you are just going to have to express it in text form. That happens to me almost every day now. I know more than I can say. The model only knows what I tell it. The work, the actual hour-by-hour work, is trying to say it. And the question I keep ending up at is whether the technology got it wrong, or I didn't express it well. I always start by assuming it's me. My imagination. My ability to express what I need.
It is like trying to write directions to a place you've driven to a hundred times. You can do the drive without thinking. Writing it down for someone else, in words they'll actually be able to use, is a different skill. You find out quickly how many of your turns are landmarks no map carries. The gas station with the broken sign. The bend after the old mill.
To define a single prompt well, I have to know exactly what I am asking for. What evidence the AI should base it on. Where it should look. What counts as done. Who is supposed to act on it. What should never happen. Doing that for dozens of recurring workflows shows me every assumption I have ever made about how my company runs. I keep finding workflows we have been doing for a year that I cannot fully describe in writing. I keep finding decisions that have been made the same way for months because nobody wrote down what the criteria were. I keep finding two documents that contradict each other because both have been updated separately and nobody noticed the gap.
With AI, I can solve almost any problem I encounter, if I can do two things. Express the problem clearly. Test the solution against something real. Both are old skills. Neither is technical. The middle part, the part between defining the problem and recognizing the answer, is the new territory. It feels more philosophical and creative than anything I have worked on before.
The public conversation about AI keeps using the word limitless. Every demo is about expansion. My experience has been the opposite. AI is bounded. Bounded by the freshness of what it draws on: feed the system old information and you have built an elaborate misinformation engine. Bounded by where the human decision actually lives. Bounded by what you can describe. The constraints are where the value lives. And the most stubborn constraint, in my experience, is the one closest to home.
Every day with AI is a confrontation with myself. It is the weirdly human part of all this. The part marketing doesn't describe and engineers don't either. I have not figured out what to do with it yet. I am still in it.