Most ASI risk narratives assume a transition from bounded software to an autonomous actor with durable control over physical resources. I argue that the more realistic failure mode is not human extinction but widespread digital disruption. ASI may be able to damage or destabilize digital infrastructure at scale, but physical control remains mediated by human-operated systems, making collapse of convenience and coordination far more likely than collapse of human survival.
The predominant media narrative circulating lately about AI development goes something like this:
We are a handful of years away from AGI (Artificial General Intelligence). The next step after that is an ASI (Artificial Super Intelligence) that will be indifferent to humans and will eventually choose to wipe us out in the name of resource collection and power. This new “species” of ASI will rule over us, and will squash us as we squash ants today.
While this seems scary at first glance, I want to highlight the assumptions baked into this narrative, as doing so helps us see where the conclusion is flawed.
The disagreement here is not about capability growth (there is no doubt these systems will keep growing in capability) but about agency and control.
Assumption 1 - ASI is a new species of life form
The term “species” is used to classify living organisms into groups so our human minds can comprehend the differences present in the natural world.
ASI is not a species. ASI is a collection of 0’s and 1’s. Yes, it’s incredibly capable, impressive, and powerful. But it is not living. It is not sentient. It never will be.
Because at its core, it is a token predictor. It doesn’t know why it picks the next token; it just does the task it is given, because that is what it is trained to do.
It is a program.
An extremely complex program with non-deterministic outputs, but a program nonetheless.
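For the curious, here is a rough sketch of that loop in Python. The model and tokenizer names are hypothetical stand-ins, not any particular system’s real API; the point is just that generation is a loop of “predict the next token, append it, repeat.”

def generate(model, tokenizer, prompt, max_new_tokens=50):
    # Hypothetical sketch: turn the prompt text into token ids
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        # The model scores every possible next token...
        probs = model.predict_next(tokens)
        # ...and we take the most likely one (greedy decoding)
        next_token = max(probs, key=probs.get)
        tokens.append(next_token)
        # Stop when the model emits its "end of sequence" marker
        if next_token == tokenizer.eos_id:
            break
    return tokenizer.decode(tokens)

No goals, no wants, no agenda of its own. Just a loop that runs until it is told to stop.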
Assumption 2 - The ASI will have control over physical resources to enact its agenda
In order for a digital intelligence to physically wipe out humans, it needs a fully autonomous connection that grants it control over the physical world. On one end, this could be robots (humanoid or others), while on the other end, it could mean control of physical switches in general infrastructure (think power, water, etc.).
This requires all of these physical things to be network connected and programmable by the ASI.
While possible, it’s easy to see that in practice, the only way for this to truly happen is for humans to give one instance of this ASI full permission and control.
Assumption 3 - If an ASI is in control of a physical resource, we can’t just unplug it (or take the battery out)
I mean, come on. This is the silliest argument for ASI “taking over”. At its worst, an ASI could act like ransomware, or a worm that infects networks. At that point, we take the network offline and the ASI is neutralized.
Sure, we might lose money, and it would hinder our modern idols of convenience and efficiency. But acting like we can’t live without our digital networks is a modern fallacy.
Conclusion
These are just a few of the assumptions floating around right now; I can’t pretend to nail them all!
But let’s say the worst case happens, and an ASI goes on the offensive, infecting every digital network as we know it and engineering them to humanity’s detriment.
This would be catastrophic, don’t get me wrong, but what would this be catastrophic to?
Well, who is in control of these physical networks?
Humans.
So, we take an axe to the fiber lines, and problem solved.
There will certainly be a casualty in this case.
But the casualty will be our way of life, not our life itself.
Our modern conveniences, efficiencies, globalization, speed of information exchange, and more will be casualties.
Instead of DoorDashing an apple, you might just have to grow your own.