Understanding AI Moats Using Silicon Valley (the show)

Everything I need to know about working in tech, I learned from Silicon Valley on HBO. Recently, I've been thinking about the last episode of season one (spoilers ahead).

It's the big finale. Richard Hendricks has created, in theory, the best compression algorithm possible and founded a startup to bring the technology to market.

Richard & co sit in the audience and watch as megacorp Hooli CEO Gavin Belson delivers a keynote announcing an equally powerful compression algorithm integrated into Hooli's entire suite of services.

Their technical moat evaporates in a single scene.

It's Episode Eight

Hooli and Gavin Belson's main advantage was distribution. As a tech megacorp, they had a suite of products — fictional analogues for Google Drive, Microsoft Word, Android, and more — that they could integrate the new compression technology into.

There would be no incentive for users to switch to some startup's platform for these capabilities. Instead, they would just show up in the tools they already used.

Replace compression algorithms with LLMs and diffusors and you have today's situation. Everywhere I look, I see the latest models integrated into familiar platforms:

  • Microsoft Copilot brings LLMs to Word, the most established word processor in the world.
  • Adobe Firefly puts image generation that looks an awful lot like Midjourney into the biggest platform in design.
  • Google Bard or a similar model is on its way to Gmail to expand the information you send and compress the information you receive.

In 2015, Alex Rampell wrote, "the battle between every startup and the incumbent comes down to whether the startup gets distribution before the incumbent gets innovation."

Past platform shifts have been more difficult to implement or less obvious in their value, and startups enjoyed a sizeable head start to figure out distribution. But with the best models just an API call away, incumbents have instant access to innovation.
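To make "innovation is just an API call away" concrete, here's a minimal sketch of how an incumbent bolts a frontier model onto an existing product feature. The endpoint, model name, and payload shape are assumptions modeled on the common chat-completions style of hosted LLM APIs, not any specific vendor's documented interface.

```python
import json

# Hypothetical: an incumbent's "summarize this email" feature becomes
# a request body sent to some provider's chat-completions endpoint.
# "some-frontier-model" and the message shape are illustrative assumptions.

def build_summarize_request(text: str, model: str = "some-frontier-model") -> bytes:
    """Package an existing product feature as a hosted-model call."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize the user's email in one sentence."},
            {"role": "user", "content": text},
        ],
    }
    return json.dumps(payload).encode("utf-8")

body = build_summarize_request("Hi team, the launch moved to Friday...")
# Shipping the feature is then one authenticated POST of `body` to the
# provider's endpoint — no research lab, no training run required.
```

The point isn't the specific payload; it's that the entire "innovation" side of Rampell's race now fits in a dozen lines inside a product the incumbent already distributes.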

It's Season One

Dejected, Richard and company head back to their hotel room. Inspired by a lewd discussion among his friends, Richard shuts himself in a room to code and emerges in time for the final pitch competition with a new algorithm that far exceeds the theoretical limit for lossless compression.

The parallel to the show wobbles a bit here. There are companies like Stability AI, Anthropic, and Runway ML that are working on their own core models. But most startups are working to leverage AI to solve specific problems, rather than create foundation models for general use.

Python is not a moat. Having a website or an app or a REST API or using Kubernetes is not a moat. These are implementation details. And AI is heading the same way.