Our AI Philosophy: Building Infrastructure for Human Judgment at AI Scale

About eighteen months ago, an agency executive cut me off mid-pitch.

I was explaining our vision for AI-powered licensing infrastructure—how the technology would transform content deals, make processes more efficient, create new possibilities. The executive stopped me.

"Listen, I don't care about AI. I don't care about all that shit. We're trying to negotiate 300 creator deals simultaneously and our entire team is just going back and forth on email. We're dying here in valueless admin. Can you help?"

They weren't interested in my projections about the future. They were drowning in present reality. That conversation changed everything.

After leaving Religion of Sports, I'd spent seven months researching AI's impact on content creation. My hypothesis: AI will slash production budgets and timelines, creating an explosion in content volume. But that explosion won't be matched by an explosion in attention—attention is finite. Over time, the value of individual pieces of content will decline. To replace lost revenue, producers will have to do orders of magnitude more deals.

Having coordinated thousands of deals over fifteen years—from managing Delta Rae to producing at Religion of Sports—I knew immediately that no infrastructure exists to handle orders of magnitude more deals. What struck me harder was realizing there's no infrastructure for the current volume of deals either.

Here's what people find counterintuitive: after spending seven months immersed in AI research, after building a company predicated on AI's transformative effects on content production, we deliberately don't build AI into our customer-facing platform. Meanwhile, internally, we use AI for practically everything.

Why Coordination Resists Automation

Think about producing a stadium show for Luke Combs—artist, venue, promoters, sponsors, city permits all needing to align. Two months of reading what's actually happening underneath what's being said. The artist's manager isn't negotiating terms—they're managing a client looking for certainty and protection. The venue is juggling three other bookings. The sponsor doesn't have board approval yet but can't admit it. Managing egos, building trust, navigating territorial concerns before you can even announce the date.

Creator deal flow operates in this same territory. A brand wants to work with fifty creators on a trending campaign. Each creator evaluates the opportunity differently based on their audience demographics, content calendar, existing relationships, personal comfort with the product category. One needs to check with their business partner. Another wants to modify deliverables to fit their content style. A third is concerned about brand positioning. These aren't edge cases—this is standard deal flow at volume.

A few months into building Basa, I watched a demo of AI-powered contract negotiation software. Impressive capabilities—natural language processing, pricing algorithms, automated back-and-forth. When the sales rep asked the AI to identify issues in a sample creator contract, it flagged several concerns—some legitimate, some nonsensical. Accuracy rate: 97%.

"Ninety-seven percent is pretty good," he offered.

I thought about what 97% means when processing three hundred deals a month: a 3% error rate works out to nine contracts with mistakes that could cost tens of thousands of dollars, damage relationships, or create legal liability. Those aren't acceptable odds when someone's career or brand reputation depends on getting the terms right.

At scale, infrastructure failure gets interpreted as moral failure. Someone pushing hard because their deadline is locked looks adversarial to someone moving slowly because of real conflicts on their end. Both rational. Both think the other is shady. Normal human behavior—missing an email in a flood of messages—looks like bad faith. Mutual distrust combined with increased volume explodes the potential for conflict. No one wins and everyone feels shittier.

We've also normalized something that shouldn't be normal: knowledge workers have become the glue between systems that don't talk to each other. Email → Spreadsheet → Approval doc → Another spreadsheet → Contract → PDF → Folder. Information in, information transformed, information out. If that's all your work life is—taking information from one place to another without meaningful transformation—it doesn't feel awesome.

The average creator deal requires 60-80 emails at 5-10 minutes per email, which works out to roughly 5-13 hours of coordination time per deal. Scale that to fifty deals to catch a trending moment and you're looking at 250-650 hours of administrative work. That's 3,000-4,000 messages for a single campaign.
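The math is easy to sanity-check. Here's a minimal back-of-the-envelope sketch in Python using the estimates above; the ranges are the assumed figures from this post, not measured data, and the variable names are just for illustration:

```python
# Back-of-the-envelope coordination math using the per-deal estimates above.
# All figures are assumed ranges from the post, not measurements.
EMAILS_PER_DEAL = (60, 80)      # low/high emails to close one deal
MINUTES_PER_EMAIL = (5, 10)     # low/high minutes spent per email
DEALS_PER_CAMPAIGN = 50         # deals needed to catch a trending moment

# Pair the low estimates together and the high estimates together.
hours_per_deal = tuple(e * m / 60 for e, m in zip(EMAILS_PER_DEAL, MINUTES_PER_EMAIL))
campaign_hours = tuple(h * DEALS_PER_CAMPAIGN for h in hours_per_deal)
campaign_emails = tuple(e * DEALS_PER_CAMPAIGN for e in EMAILS_PER_DEAL)

print(f"Per deal: {hours_per_deal[0]:.0f}-{hours_per_deal[1]:.0f} hours")        # ~5-13 hours
print(f"Per campaign: {campaign_hours[0]:.0f}-{campaign_hours[1]:.0f} hours")    # ~250-667 hours
print(f"Per campaign: {campaign_emails[0]:,}-{campaign_emails[1]:,} emails")     # 3,000-4,000 emails
```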

Think about what teams could do with that time instead. The brand manager who spent her week chasing signatures could be developing the next campaign. The talent representative coordinating forty deals simultaneously could be nurturing relationships with rising creators. The agency producer buried in contract revisions could be pitching new partnerships that actually drive revenue.

Where AI Actually Works (And What We're Learning)

The distinction we've drawn is between automation and augmentation. We don't automate customer-facing negotiations because the processes aren't sufficiently repeatable yet and the accuracy requirements are absolute. But we use AI extensively internally, where our team can validate outputs and catch errors before they affect customers.

For our lean team, AI isn't optional—it's essential. Everyone working with Basa is expected to try AI first before scheduling meetings or asking for guidance. This isn't efficiency theater—it's building genuine literacy in where AI excels and where it falls short. Personally, I use AI to make myself 2-3x more productive on any given day, and roughly 95 percent of what I would typically pay lawyers to review I can now do almost instantly with AI.

Six months ago, a tool like V0—Vercel's AI design platform—didn't exist. Today, it's completely changed our operational model. Traditionally, when someone who understands our customers identifies a product improvement, the process is broken and slow. Subject matter expert → product manager → designer → mockups → developers. Weeks of playing telephone.

But innovation happens bottom-up, not top-down. After months of frustration with traditional design handoffs, our customer expert took it upon herself to test V0. Now she turns her understanding of customer pain points directly into functional UI prototypes. What used to take weeks of back-and-forth now happens in hours.

This creates a testing ground. Before implementing any AI capability in our customer-facing product, we rigorously test similar tools in our own workflows to understand their strengths, limitations, and failure modes. We're accumulating trust calibration—learning through daily use exactly when to rely on AI outputs and when to question them.

Our internal AI literacy means we understand exactly which coordination tasks can eventually be augmented by AI and which will always require human judgment. When AI becomes ready for customer applications—when it can achieve the 100% accuracy that contracts demand—we'll implement it thoughtfully rather than reactively.

Building for What Comes Next

What became clear as we built Basa is that companies preparing for AI's second-order effects will capture more advantage than those chasing AI features. It's hard to predict the exact pace of technological change in AI or how humans will respond to it. But the trajectory seems clear: when AI makes content creation nearly free and deal volume explodes, teams will need infrastructure designed for human judgment at AI-driven scale.

This framing shapes everything about our product development. We're not trying to automate negotiations that require human judgment. We're eliminating the administrative friction that prevents those negotiations from happening at the volume algorithmic distribution demands.

Right now, AI can't deliver the precision creator negotiations require, and those negotiations involve too many variables that resist systematization. But the infrastructure those negotiations require—the coordination layer that eliminates administrative friction while preserving space for human judgment—that's something we can build today. And it's what teams will need when AI-driven content creation makes deal volume explode.

I sometimes wonder if we'll look back on this decision and see it as overly cautious. But I keep returning to what I learned during those seven months of AI research: the companies that win aren't necessarily the ones that adopt AI first, but the ones that understand its implications most deeply. We're not building AI into Basa today precisely because we understand it too well to implement it prematurely. That's the foundation everything else gets built on—infrastructure so well-designed for human decision-making at scale that it becomes the natural platform for AI enhancement when the technology catches up to the complexity of the problems we're solving.