Wedge → Stack → Moat: The Software 3.0 Playbook
In Software 2.0, the game was distribution. In Software 3.0, the game is intelligence. The old SaaS rulebook—land and expand through seats and features—is breaking. User growth no longer guarantees power. Margins no longer guarantee defensibility. The next generation of winners won’t scale by selling more licenses—they’ll scale by compounding more learning.
The most valuable software products are no longer static tools. They are living systems that adapt, predict, and automate. Their advantage doesn’t come from UI polish or sales velocity, but from proprietary data loops that get smarter with every interaction. Distribution still matters—but intelligence compounds faster than users.
This shift demands a new go-to-market playbook. The most enduring Software 3.0 companies will follow a new pattern: start with a wedge—a single workflow that delivers a 10x leap. Expand into a stack—a system that owns the full process. Then fortify the moat—proprietary data that compounds into an intelligence monopoly. This is the Software 3.0 playbook: Wedge → Stack → Moat.
The Traditional SaaS Playbook Has Expired
For two decades, SaaS was about distribution, not intelligence. The winners of Software 1.0 and 2.0—Salesforce, Workday, Atlassian—scaled by selling standardized workflows to millions. Their products were horizontal, their pricing seat-based, and their success a function of sales execution, not product learning. The moat was contracts and integrations, not data.
In that world, distribution was the bottleneck. The constraint wasn’t how smart your software was—it was how fast you could get it into the hands of users. Salesforce didn’t win because its CRM was the most intelligent; it won because it built the most efficient go-to-market machine. The metric that mattered was ARR growth, not data velocity.
Software 3.0 flips these economics. As the marginal cost of intelligence falls toward zero, unique data becomes the new scarcity: every valuable dataset is already owned, locked, or gated behind user behavior, so the cost of acquiring it keeps climbing. Distribution alone can no longer create defensibility; every competitor can copy your interface, but not your feedback loops.
That’s why AI-native companies scale differently. OpenAI didn’t win through outbound sales; it won through self-reinforcing data and compute flywheels. Each interaction refines the model, compounding its advantage. The new metrics are model refinement speed, data feedback density, and user reinforcement loops—not license expansion.
In short, the old SaaS moat was contracts; the new moat is context. Proprietary data and fine-tuned models now define power. Software no longer stops at automation—it learns.
Every Great Software 3.0 Company Begins With a Wedge
Every enduring Software 3.0 company starts narrow. The wedge is a single, high-friction workflow that delivers a 10x improvement over existing tools. It’s the part of the process everyone hates but can’t avoid—the repetitive, manual, error-prone step that defines the user’s pain. The wedge doesn’t try to solve everything. It solves one impossible thing so completely that users reorganize their behavior around it.
In Software 3.0, the wedge isn’t just a product—it’s a data acquisition strategy. Each workflow automated or accelerated becomes a critical data capture moment. Users unknowingly create proprietary datasets as they work, feeding the model’s learning loop. The more they use it, the smarter—and harder to replicate—the product becomes. This is how intelligence compounds faster than distribution.
Figma began with a single wedge: real-time, multiplayer design collaboration. That one workflow turned static design into a living, shared process—and captured the behavioral data that powered its design system. Harvey AI's wedge was legal research automation, compressing hours of analysis into minutes and generating unique legal reasoning data no one else has. Runway's wedge was AI-assisted video editing, transforming creative iteration speed and collecting labeled video data to train better generative models.
The best wedges are narrow enough to dominate but deep enough to generate proprietary data. They own one critical workflow so fully that every user interaction becomes model fuel. The wedge is not the endgame—it’s the ignition point. From that narrow entry, the stack expands and the moat forms.
From Wedge to Stack: Turning Workflows Into Platforms
Once the wedge dominates a single workflow, the next move is vertical stack expansion—automating the adjacent steps that surround it. The goal is not feature breadth but workflow depth. Each new layer absorbs manual handoffs, integrates upstream inputs, and captures downstream outputs. What starts as a narrow tool becomes a self-reinforcing system.
Platformization doesn’t happen by adding features. It happens by connecting data flows. Every upstream integration feeds more context into the model; every downstream action generates new data exhaust. As the stack expands, data coverage increases, model performance improves, and human input shrinks. Each layer tightens the feedback loop—users do less, the software learns more.
This is how data gravity forms. Users stay because leaving means breaking the intelligence loop they helped create. Every new capability compounds value for everyone else. Rippling’s expansion from HR onboarding to payroll, IT, and device management wasn’t random—it followed the data trail of employee records. Each adjacent module made the system smarter about the organization itself. Notion’s journey from note-taking to databases to AI assistance followed the same pattern: each workflow became an input to the next.
The smartest expansions follow the wedge’s data exhaust. Like neurons connecting in a brain, each new workflow becomes a node in a neural network of processes. The more nodes connected, the more emergent intelligence the system produces. The wedge starts the learning; the stack turns learning into leverage.
The Moat Is Data: Compounding Advantage Through Proprietary Feedback Loops
In Software 3.0, defensibility doesn’t come from code—it comes from compounding intelligence. Features can be cloned. Distribution can be matched. But proprietary data—especially the kind born from user interaction—is impossible to replicate. The true moat is not what the product does today, but what it learns tomorrow.
Every great AI-native company builds a self-reinforcing data loop: better data → better model → better product → more users → more data. Each cycle compounds the next. Tesla’s autonomous system improves with every mile driven. Scale AI’s labeling engine gets faster and more accurate as customers feed it edge cases. GitHub Copilot’s code suggestions sharpen as millions of developers accept or reject completions. Usage becomes training, and training becomes differentiation.
This compounding is measurable. The most advanced teams track model performance improvement rate—for example, error reduction per million new data points—as the new metric of moat depth. As models ingest more proprietary data, their performance curves bend upward while competitors plateau. The gap doesn’t just widen—it accelerates.
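As a rough sketch of how such a moat-depth metric might be computed, the snippet below expresses model improvement rate as error reduction per million new data points. The checkpoint figures are illustrative placeholders, not real benchmarks, and the function name is hypothetical.

```python
# Hypothetical sketch: moat depth as error reduction per million new data points.
# The eval figures below are illustrative, not real benchmarks.

def improvement_rate(error_before: float, error_after: float,
                     new_data_points: int) -> float:
    """Error reduction per million new proprietary data points."""
    millions = new_data_points / 1_000_000
    return (error_before - error_after) / millions

# Two eval checkpoints: error fell from 12% to 9% after 5M new interactions.
rate = improvement_rate(0.12, 0.09, 5_000_000)
print(f"Error reduction per 1M data points: {rate:.4f}")  # 0.0060
```

Tracked across releases, a rising value of this rate would indicate the performance curve bending upward; a falling value would signal the plateau competitors hit.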
Over time, switching costs rise not because of contracts, but because the model adapts to an organization’s unique data signature. Leave, and you lose the intelligence tuned to your context. Stay, and the system continues to learn your patterns.
The moat is not static; it’s alive and accelerating. Each feedback loop tightens the flywheel of learning and dependence. In Software 3.0, data isn’t an input—it’s the compounding engine of advantage.
Intelligence Infrastructure Is the New Platform Opportunity
Every vertical is now racing to build its own Software 3.0 stack—a data-native version of the old system of record. In this new paradigm, the core asset isn’t stored information, but contextual intelligence. The CRM, ERP, or EHR of the future won’t just track activity; it will learn from it. Each workflow becomes a data refinery, and each customer a contributor to a shared intelligence layer.
That’s why infrastructure matters more than applications. The real leverage sits beneath the interface—in vector databases, embedding stores, retrieval architectures, and privacy-preserving fine-tuning frameworks. These primitives are the equivalent of AWS compute or Snowflake storage—except instead of managing data, they manage meaning. Companies like LangChain, Pinecone, and Contextual.ai are not building apps; they’re building the plumbing of cognition.
Just as AWS commoditized compute and Snowflake commoditized storage, the next generation of infrastructure players will commoditize intelligence itself. They will abstract away the complexity of retrieval, memory, and model orchestration, letting every developer become an AI-native builder. The differentiation will shift from who stores the most data to who learns the fastest from it.
Founders should design for platforms of learning, not storage. The key advantage comes when every new customer improves the underlying intelligence—when data network effects replace distribution network effects. Data networks are the new supply chains: value accrues where intelligence compounds.
The next AWS will not host workloads—it will host learning. Its unit of value won’t be CPU hours or terabytes; it will be tokens of understanding. Whoever builds that platform will own the substrate of Software 3.0 itself.
The New KPI: Compounding Intelligence per User
Traditional SaaS measured human behavior—how much each customer was worth over their lifetime (LTV), how much recurring revenue they generated (ARR), how much existing accounts expanded (NRR). Software 3.0 measures machine learning—how fast the system improves with every interaction. The critical question is no longer how many users you have, but how much smarter your product becomes per user.
This is the new KPI: Compounding Intelligence per User. It captures the rate at which engagement translates into learning. If SaaS was about customer lifetime value, Software 3.0 is about model lifetime learning. Each new user isn’t just a revenue line—they’re a training node. Growth doesn’t just expand revenue—it expands intelligence density.
A useful metric emerges: Learning Velocity = Model Improvement Rate / User Growth Rate. Companies with high learning velocity compound faster than those merely scaling headcount. OpenAI’s ChatGPT improves with every prompt; Replit’s Ghostwriter learns from millions of code snippets. Their user bases don’t just consume—they teach.
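The Learning Velocity ratio can be sketched in a few lines. All figures below are illustrative placeholders, assuming "model improvement" and "user growth" are both expressed as fractional growth over the same period.

```python
# Hypothetical sketch of Learning Velocity = Model Improvement Rate / User Growth Rate.
# All figures are illustrative placeholders.

def growth_rate(start: float, end: float) -> float:
    """Fractional growth over the period."""
    return (end - start) / start

def learning_velocity(quality_start: float, quality_end: float,
                      users_start: int, users_end: int) -> float:
    """Ratio of model improvement to user growth; a value above 1
    means intelligence is compounding faster than the user base."""
    return (growth_rate(quality_start, quality_end)
            / growth_rate(users_start, users_end))

# Model quality up 30% while users grew 20% over the same quarter.
lv = learning_velocity(0.60, 0.78, 100_000, 120_000)
print(f"Learning velocity: {lv:.2f}")  # 1.50
```

A company sustaining a ratio above 1 is compounding learning faster than it is adding users—the inversion of seat-based SaaS economics the section describes.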
This reframes competition. The best Software 3.0 companies aren’t optimizing for user acquisition, but for intelligence acquisition. Every new workflow automated becomes a feedback loop; every user action feeds proprietary context the model can’t get elsewhere. The product doesn’t just serve demand—it absorbs it.
Investors should track learning curves, not just growth curves. Look for systems that get exponentially better as users scale, not linearly. A company where 10,000 users make the model twice as smart will always outcompete one where 1 million users change nothing.
In Software 3.0, user growth is the input. Compounding intelligence is the output.
The Path Forward
Software 3.0 isn’t a product shift—it’s a learning revolution. The next era of software won’t be built by those who ship the most code, but by those who design the fastest feedback loops. Every workflow automated, every dataset refined, every user interaction captured is a step toward compounding intelligence. The opportunity is vast: to rebuild the digital economy around systems that don’t just run processes, but understand them.
For founders, the mandate is clear. Start narrow. Own one impossible workflow. Expand through the data it generates. Let every layer of your stack feed the next, until your product becomes an organism—learning, adapting, self-improving.
Software 3.0 rewards creators who treat intelligence as infrastructure. The advantage will not belong to the biggest teams or the deepest pockets, but to those who compound learning the fastest. The ultimate moat is not code or capital—it’s cognition.
The question is no longer what will your software do?
It’s what will your software learn—and how fast?
