🧩 The Pattern That’s Breaking
On a recent call I said something that’s been rattling around my head ever since: we’ve moved from the Information Age (answers to what and why) into the Intelligence Age, where the how is handed to you, and it’s happened in a few short years.
Receipts: controlled studies show developers completing tasks roughly 50% faster with assistants like Copilot. On SWE-bench, systems jumped from single-digit solve rates in 2023 to rates an order of magnitude higher in 2024–25 on real software issues. And low/no-code is now standard practice: by 2025 a large share of new apps are assembled with it. In plain English: parts of planning, scaffolding, and shipping have gone from scarce to abundant, fast.
Right now you can ask ChatGPT for a SaaS blueprint, Claude for a GTM playbook, and Cursor to scaffold the codebase. The how is being commoditised. That’s the arms race we’re in.
🤖 From instructions to direct execution (MCP & agents)
It’s not just plans anymore—the stack acts:
Agent/Operator modes that can drive a browser or a full computer and complete end-to-end tasks (typing, clicking, filing, ordering) behind permission gates.
Model Context Protocol (MCP)—think USB-C for AI—an open way to plug models into tools, data, and workflows so they can fetch, call, and execute consistently across systems.
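To make that concrete, here’s a minimal sketch of the server side, following the shape of the MCP Python SDK’s quick-start (module paths and decorator names come from that SDK and may shift between versions; create_order is an invented demo tool, not a real integration):

```python
# Minimal MCP server sketch: expose one tool a model can discover and call.
# Follows the MCP Python SDK quick-start shape; the tool itself is made up.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-demo")

@mcp.tool()
def create_order(sku: str, quantity: int) -> str:
    """Create a (pretend) order and return a confirmation reference."""
    # A real server would call your order system here; this just echoes inputs.
    return f"order-{sku}-x{quantity}"

if __name__ == "__main__":
    # Serves over stdio so an MCP-capable client can list and invoke the tool.
    mcp.run()
```

The point isn’t the ten lines of code; it’s that any MCP-capable client can now discover, call, and execute that capability without a bespoke integration.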
Bottom line: in under three product cycles we’ve gone from “AI drafts a plan” to “AI scaffolds, ships, and increasingly does the thing.”
But building fast ≠ winning. We’re drowning in near-identical AI wrappers and productivity clones. Speed, once an edge, is becoming table stakes. If execution is abundant, what’s scarce?
🧠 The Conventional Wisdom (and why it’s not enough)
We all teach the same next-level answers, ourselves at Fusion42 included:
Better problem definition and questions
Judgement and curation
Faster experiments
Design thinking
Orchestrating humans + AI
All useful. None feel like a phase shift. They’re upgrades to the same game—not recognition that the game itself may have changed.
❓ The Uncomfortable Possibility
What if there’s a layer above execution we haven’t named yet?
In 1995, smart people optimised filing systems and typing speed while the real shift was from information scarcity to abundance—and the rise of search, curation, and recommendation. We may be making the same mistake: assuming the next step is simply “automated execution”, when the real shift could be orthogonal—a move along a different axis entirely.
My hedge-fund brain says the biggest alpha lies in spotting the regime change early. The outline I see: as execution abundance grows, advantage migrates toward rails that convert resources and judgement into reliable outcomes.
🧭 Candidates for the new scarcity economy
Attention architecture — rails that direct focus in a world of infinite output.
Trust systems — provenance, reputation, verification, auditability at scale.
Taste-as-a-Service — scalable aesthetic/functional judgement.
Narrative control — shaping how we frame problems and solutions.
Cross-domain synthesis — combining fields to produce non-obvious answers.
Coordination at scale — collective human+AI problem-solving that actually delivers.
Meaning-making infrastructure — systems that help people decide what matters.
🔐 A bet: trust infrastructure is the first new scarcity
Of the above, my money—literally—goes first on trust.
Why trust?
Post-authenticity reality. Agents and models now plan, code, create, and act. Without provenance (who/what/when/where), reputation (track record across contexts), and verification (cryptographic or institutional), adoption stalls.
Economic leverage. Trust collapses dispute costs (fraud, refunds, compliance churn) and unlocks higher-velocity coordination (payments, data sharing, automated execution).
Composability. Trust rails are reusable across domains—media, commerce, healthcare, finance—so returns compound.
What it looks like to build:
Provenance by default: content and actions carry signed trails (model, prompt family, data source, timestamp); a rough sketch follows this list.
Reputation as a service: portable identity with performance curves (accuracy, on-time completion, defect rate) for humans and agents.
Verifiable execution: before an agent “does the thing” (books, orders, files, deploys), counterparties can check capabilities, policies, and history.
Operator metrics (v1): fraud loss rate ↓, dispute time ↓, mean time-to-trust decision ↓, % actions with verifiable provenance ↑, policy-compliant auto-approvals ↑.
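To make provenance-by-default and verifiable execution concrete, here is a toy Python sketch; every name in it (ActionRecord, sign_record, verify_record, the hard-coded key) is hypothetical illustration under the assumption of a simple shared-key signer, not Arthur’s or any vendor’s actual API:

```python
# Toy sketch of "provenance by default": every agent action carries a signed
# trail that a counterparty can verify before accepting the result.
# All names and the shared key are illustrative only.
import hashlib
import hmac
import json
import time
from dataclasses import dataclass, asdict

SIGNING_KEY = b"replace-with-a-key-from-a-real-KMS"  # assumption: shared key for the demo


@dataclass
class ActionRecord:
    agent_id: str       # which agent acted
    model_id: str       # which model produced the action
    prompt_family: str  # coarse prompt lineage, not the raw prompt
    data_source: str    # where the inputs came from
    action: str         # what the agent did
    timestamp: float    # when it happened


def sign_record(record: ActionRecord) -> dict:
    """Attach an HMAC signature so the trail is tamper-evident."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": asdict(record), "signature": signature}


def verify_record(signed: dict) -> bool:
    """Counterparty check before acting on the record."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])


signed = sign_record(ActionRecord(
    agent_id="ops-agent-7", model_id="model-x", prompt_family="invoice-filing",
    data_source="crm-export-2025-06", action="filed_invoice", timestamp=time.time(),
))
assert verify_record(signed)  # only then does the counterparty let the action proceed
```

In production you’d swap the shared key for per-agent keys from a KMS or public-key signatures, but the shape is the point: every action ships with a trail a counterparty can check before it clears.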
🌫️ A founder in the fog (60 seconds)
Maya runs a 6-person SaaS. Last year, a core feature took two weeks; now Cursor scaffolds it in a morning and an agent migrates customer data. She’s faster—yet conversions stall. Her queue fills with “Is this safe?” and “Who trained this?” The block isn’t speed; it’s trust. She adds signed data lineage and lightweight reputation pages for her automations. Refunds drop 28%, enterprise pilots unstick. Same code, different rail: trust.
📈 Why trust comes first (and enables the economics shift)
The economics only bend toward human-first when trust rails collapse dispute, compliance, and coordination costs; provenance and reputation make it possible to price on reality (compute, latency, data quality) instead of feature theatre. No trustworthy execution → no incentive shift.
🧱 The plateau scenario (the credible hedge)
If we plateau, old moats reassert: resources, regulation, relationships, reach. Network effects harden, infra owners win, and energy/compute/regulatory approvals become the live bottlenecks. That world rewards distribution, policy fluency, and procurement muscle more than novelty. I don’t prefer this outcome, but it’s a credible branch. Designing for abundance while staying legible to today’s institutions is the hedge.
🧪 How I’m testing this (Arthur, in the wild)
I’m building Arthur with this bias—as a living lab, not a solved product.
Provenance vs friction. Do we sign every agent action (model ID, prompt family, data source, time) and show it by default—adding UI friction—or keep it hidden and hope trust vibes carry? We’re testing “provenance on by default, collapsed by design.”
Resource-priced features. Do we sell “AI features” as bundles, or price on real inputs (compute, latency, verification tier)? We’re trialling the latter: cheap for draft-only, premium for verifiable execution.
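For the second experiment, a toy sketch of what resource-based pricing could look like; the rates and tier names are invented for illustration and are not Arthur’s actual price list:

```python
# Toy sketch of "resource-priced features": price tracks real inputs
# (compute, latency, verification tier) rather than a flat "AI feature" bundle.
# Rates and tiers below are made up for illustration.
RATES = {
    "compute_per_second": 0.002,  # assumed $ per second of model compute
    "latency_tier": {"batch": 0.0, "standard": 0.01, "realtime": 0.05},
    "verification_tier": {"draft_only": 0.0, "signed": 0.02, "verified_execution": 0.10},
}


def price_request(compute_seconds: float, latency: str, verification: str) -> float:
    """Return a per-request price from resource usage plus tier surcharges."""
    return round(
        compute_seconds * RATES["compute_per_second"]
        + RATES["latency_tier"][latency]
        + RATES["verification_tier"][verification],
        4,
    )


# Draft-only stays near marginal cost; verifiable execution carries the premium.
print(price_request(3.0, "batch", "draft_only"))             # 0.006
print(price_request(3.0, "realtime", "verified_execution"))  # 0.156
```

Draft-only work stays close to marginal cost; the premium sits on the verification tier, which is exactly where the trust thesis says willingness to pay lives.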
I’ll share results—warts and all—so we can falsify the bet in public.
🛠️ If you’re building right now (3 moves)
Instrument trust: ship basic provenance + visible reputation for your automations this quarter.
Price on reality: tie at least one feature to compute/latency/provenance cost, not “AI magic.”
Sandbox proofs: run one permissioned pilot where agents execute end-to-end with verifiable audit trails.
❓ The question we should be asking
Maybe it isn’t “What comes after how?” but:
What becomes systematically valuable when building is free?
The questions we ask?
The problems we choose?
The trust we earn?
The attention we route?
The meaning we help create?
The coordination we unlock?
Or something we haven’t named yet?
We’re in the fog between regimes—the old rules broken, the new ones not yet legible. The winners of the next decade won’t simply execute faster; they’ll recognise the category error and build the rails—for attention, trust, taste, synthesis, coordination, and meaning.
I have a bet (trust first), a lab (Arthur), and an open mind.
What’s your hypothesis? When execution is abundant, what stays scarce?
Thank you for reading. If you liked it, share it with your friends, colleagues, and anyone interested in the startup investor ecosystem.
If you've got suggestions, an article, research, your tech stack, or a job listing you want featured, just let me know! I'm keen to include it in the upcoming edition.
Please let me know what you think of it, love a feedback loop 🙏🏼
Subscribe below and follow me on LinkedIn or Twitter to never miss an update.
For the ❤️ of startups
✌🏼 & 💙
Derek