Brilliant piece, Derek! The timeline compression from "decades away" to "maybe already here" is wild.
Your point about AGI being just a waystation to ASI really hits home - that recursive self-improvement cycle could happen faster than we think. I'm particularly drawn to your hybrid intelligence concept over the replacement narrative. Seems more realistic and frankly more appealing.
The cosmic perspective is mind-bending though. Are we cosmic gardeners or just the substrate for something greater?
One question: how do we maintain human meaning during this acceleration? The democratization sounds great, but I worry we'll feel like passengers on a ship we don't understand.
Many thanks, Daniel 🙏. That’s exactly the tension I feel and hoped would come through.
This acceleration isn’t just about technological leaps—it’s about meaning compression.
Our sense of uniqueness, our self-perception of being something exceptional, could be crushed almost overnight. Is this simply evolutionary efficiency—optimising the next phase of intelligence? Or is it that deep human streak of self-destruction, the blind optimism that pushes us toward things we don’t fully understand, convinced it’ll all work out?
Since writing this, I’ve been thinking more about the short-term volatility and brutal social imbalances that are likely to follow. We’ve designed and built a world of human constructs around blind faith and scarcity—competition, struggle, survival—and now we’re on the brink of systems that can unlock abundance. That collision will shatter many of the psychological anchors we’ve relied on for meaning and stability.
And I don’t think it’ll be gradual. The moment AI gets embodied—when you can see, feel, interact with humanoid intelligence—it’ll break the illusion of human dominance. Add self-replication—humanoids creating humanoids—and hyper-local, circular economies with zero incremental capex, and the speed of change will be unlike anything in human history.
To your question—how do we maintain human meaning during this acceleration—this is where I’ve planted my flag. I believe meaning collapses the moment we lose agency. Which is why I’m firmly in the camp of human-connected systems. We should be enhancing ourselves—brains, bodies, cognition—before we create autonomous, self-replicating systems that outrun us. Hybrid intelligence should expand human meaning, not erase it. If AI is embedded as an extension of human capability rather than a replacement, we have a shot at evolving alongside it rather than becoming irrelevant to it.
This isn’t about slowing things down—it’s about evolving fast enough to stay in the loop. Not just surviving the wave, but riding it with intent.
What’s your take—do you lean towards augmentation or are we hurtling toward obsolescence?