When Rational Behavior Looks Irrational: A letter to my former advisor
I've found that most people in the creator economy think some other group is behaving irrationally.
Brands think creators are flaky. Creators think brands are exploitative. Managers think both sides are impossible to work with.
What if everyone's actually being perfectly rational—but at scale, the infrastructure turns inevitable human error into perceived moral failure? When you can't tell if someone's being thoughtful or if their email just got buried, rational behavior gets interpreted through a lens of distrust.
I recently reached out to my former college advisor after reading his new book about decision-making. What started as a thank you note turned into working through whether the dysfunction we're experiencing is a people problem or an infrastructure problem that makes everyone look worse than they are.
I'm sharing the letter because these are questions we ask ourselves internally, and surfacing them may be useful to others.
Dear Professor Schwartz,
I just read "Choose Wisely" and felt compelled to reach out. I'm a former advisee of yours from Swarthmore, though you may not remember me—I left after freshman year for AmeriCorps, then transferred to Duke. But the reason I ended up being your advisee stuck: I've always been fascinated by choice and behavior.
That's been the throughline across law, government, music, sports media, and now software. What's unique about my current role is that I spend my day thinking through the decision-making of thousands of people I don't know, informed by twenty years in entertainment spent trying to understand how seemingly rational people make seemingly irrational decisions.
This is why "Choose Wisely" resonated. What struck me most was your critique that behavioral economics—while acknowledging people don't behave rationally—still accepts the underlying definition of what rational decisions should look like.
Here's what makes what I'm doing different: in a startup world racing toward AI automation, I'm building Basa—infrastructure that handles the complete workflow from initial outreach through negotiation to signed contract in creative partnerships. While contracts themselves are constrained documents—standard clauses, legal language, structured terms—everything that determines how we get there is inherently human. The behavior that drives why deals happen or don't, what gets negotiated, who responds when and why—these are nuanced, contextual, often appearing irrational from the outside but making perfect sense given competing incentives.
Take a basic creator partnership. A brand makes an offer. The creator's manager doesn't respond for days. To the brand team, this looks like "artists are selfish and irrational." But maybe the manager is rationally protecting the creator's long-term brand value—checking exclusivity conflicts across multiple simultaneous deals, ensuring the partnership aligns with audience expectations.
Meanwhile, the brand is pushing for quick turnaround because the CMO needs numbers for Friday's board meeting, the fiscal quarter is ending, and the campaign manager's job depends on hitting deadlines. To the artist, this pressure looks like "brands are trying to take advantage of me."
Both sides are making rational decisions—or even decisions aligned with their values and responsibilities. The artist is protecting authenticity and long-term audience relationship. The CMO is managing shareholder expectations and organizational pressures. But because each side can't see the other's incentive structure, rational behavior appears irrational. This hardens into structural distrust that makes every subsequent deal harder.
Now scale this. A single creator deal typically requires 60-80 emails to get from outreach to signed contract. For a campaign involving 300-400 creators, that's 25,000-30,000 emails. At that volume, basic human error becomes inevitable. Someone misses an email in the flood. Someone forgets a commitment made three weeks ago across five other simultaneous negotiations. Context gets lost across email, WhatsApp, Slack, and undocumented calls.
But the other side doesn't see "human error at impossible scale." They see it through the lens of structural distrust. The brand thinks "this artist is flaky and unprofessional." The artist thinks "this brand is disorganized and doesn't respect my time." What's actually happening is infrastructure failure, but it gets interpreted as moral failure or irrationality.
Your concept of decisions being informed by a "constellation of virtues" resonates deeply. I see creators turning down lucrative deals that conflict with their identity all the time. But I keep confronting a prior question: what does wisdom in decision-making look like when everyone IS making decisions aligned with their incentives, but those incentives are invisibly misaligned? When the sheer volume of communication turns basic human error into perceived moral failure?
Can people actually apply their constellation of virtues when the information and attention environment prevents them from even engaging their values? Before we can talk about virtue-based decision-making, do we need infrastructure that makes competing incentive structures visible and navigable?
I believe infrastructure CAN genuinely respect what's human about decision-making. But only if you design it from the ground up to work WITH human behavior as it actually is, not as rational models assume it should be. We embed behavioral psychology principles not to manipulate decisions but to eliminate the friction that obscures what people actually want. Anchoring bias means we show context from previous deals. Status quo bias means we lower the cognitive cost of change through familiar interfaces. Even interface language matters—we've debated whether to use the word "offer" because formal terminology creates perceived rigidity. This is the framing question you write about.
But here's where the power asymmetry question becomes critical. Brands have leverage creators don't. Infrastructure that makes deals move faster could make exploitation easier: standardized terms that favor whoever writes them, velocity that prevents careful consideration. Or it could do the opposite: visibility that protects against bad actors, context that prevents miscommunication, templates built from input across creators, brands, managers, and lawyers rather than optimized for one side.
I don't have this figured out. What I do believe is that the answer isn't avoiding infrastructure—it's building the right infrastructure. The current fragmentation across email, spreadsheets, and scattered platforms doesn't protect the vulnerable; it just makes everyone operate from incomplete information. That creates conditions where both exploitation and mutual distrust thrive.
AI will automate the automatable parts of dealmaking. That's inevitable and potentially valuable. But that acceleration, at least for the foreseeable future, doesn't eliminate human judgment—it amplifies the moments requiring it. When AI enables 20x more transactions, you haven't eliminated the values-based decisions; you've multiplied them, and now they're made under more time pressure, with less economic value per transaction, which means less margin for the infrastructure failures that turn rational behavior into perceived irrationality. What I'm building isn't resistance to automation—it's infrastructure so the irreducibly human decisions can function at AI-enabled speed and scale.
I believe some elements of judgment are truly irreducible—not just difficult to systematize, but fundamentally resistant to it. The question is where exactly those boundaries are. Can infrastructure capture the meaning-laden nature of decisions, or only the behavioral patterns and incentive structures that surround them? When a creator chooses a lower-paying partnership because it aligns with their values, can a system ever truly understand that constellation of factors, or just create better conditions for that decision to be made?
The practical challenge becomes: when you're building for high-context, relationship-driven domains where multiple parties have legitimately conflicting incentives, and AI is multiplying both the volume and velocity of required decisions, how do you design systems that guide behavior without feeling manipulative? Where's the line between helpful structure and harmful constraint? How do you build infrastructure that enables human judgment to function at AI speed without inadvertently imposing the rational frameworks you're trying to avoid?
Reading "Choose Wisely" pushed me to ask questions I wasn't asking before. Kahneman's work has been foundational for how I think about Basa—understanding cognitive biases and behavioral patterns remains essential. But your critique that behavioral economics, while acknowledging irrationality, still accepts RCT's definitional framework opened up new territory. Not wrong questions, but incomplete ones. The challenge isn't just designing for how people actually behave versus how they theoretically should—it's understanding what remains genuinely irreducible even with better systems. Especially when AI is forcing decisions to happen at scales and speeds where those irreducible elements become even more critical.
I'm uncertain about the boundaries, but your work gave me new ways to think about where they might be.
—Adam