ai is the end
trudge on, welcome to the end
act i
It’s different this time. AI is transformative. It’s not like other hyped tech. Crypto only had buy-in from powerless, distributed masses, undirected and unorganized, prime for grifting. The metaverse essentially only had buy-in from a large tech company attempting to force its vision down the throats of an unwilling and unconvinced populace.
But AI? AI is being used by everyone: coders, business professionals, lawyers, your grandma… Large companies are rushing in so as not to get left behind, and startups are burgeoning. The difference between AI and other hype technologies is that AI works. And if it doesn’t work, it’ll just improve itself until it does work, like a perpetual motion machine, but one that actually works. There are new, transformative ideas still being discovered every week: only two lines of code delivering a 10x improvement. And that’s only the easy stuff, the nuggets of gold sitting in the dirt. The best is yet to come. And when it does, everything changes. Step 1: solve AI. Step 2: AI solves everything. There is no step 3.
Now is the time to get in. If you miss it, you will regret it for the rest of your life. Sometimes there is tech hype, like the dot com bubble or web3, and sometimes there is tech revolution, like the iPhone and the cloud. It’s clear what this is. AI has been around for decades, but to dismiss this as the same as every other time is naive. People may have said AGI was just around the corner before, but this time, it’s right in front of your eyes. No corners anymore, it’s right there and straight ahead. If you thought humans were going to create fully autonomous mechanical animals in the 19th century, you were naive. If you thought Deep Blue winning at chess was the start of AI takeover, you were an idiot. If you think that LLMs are going to cause the singularity, well, you would just be correct.
People all around the world can see that the products of generative AI are now fully human-like. It does MIT degrees. It does bar exams. It does logic, it does math, it can code other AIs. And, as with all technology, this is the worst it’s going to be. It will only get smarter and better, and its domain is only going to expand. Even if researchers and alarmists pump the brakes and tell us to pull back, the momentum is already here; people can run these models on their MacBooks. At most, we’ll go from one billion miles per hour to one million miles per hour. The inevitable is inevitable.
There is no reason to work on anything else. Once AI does civil engineering better than humans do, better than humans could ever understand, why would you ever study civil engineering? If you are a doctor, know that stepping aside to let AI take over will save more lives than you could in multiple lifetimes. If you are a venture capitalist, you need to be shoveling every dollar you have into AI startups. History has shown that once the revolution comes, the new technology consumes everything, grows and grows; the returns are not just slightly better than everything else, they are orders of magnitude different. AI is going to go even further. This is not like the industrial revolution that multiplied productivity, or the computer revolution that multiplied it again. AI is unbounded. The multiplication becomes exponential, and its limits are effectively infinite. You may think I’m trying to sell you a bridge, but I’m not. I’m selling you ALL the bridges, then all the bridges that will ever be built. And once it comes, it is all that matters. Once computers took over, who worked on punchcards after? Everything else will become worthless. Any dollar you do not spend on AI is going to go to zero. Any time you spend learning anything that is not AI will go to zero too, useless and outdated within years. There is only value in knowing how to work with AI, but even that will be a fleeting final bastion of human expertise.
This is not subjective.
It is the natural course of history.
All this time, humans were working toward this goal, to produce a superintelligence. It’s what we were meant to do; all paths in human history that do not stagnate lead to this. Some worry about “extinction” or “existential risk” from AI. Do you not see that we would meet a pathetic end without it? We would putter along at mediocre levels below our full potential until some whimsical, sufficiently powerful force of nature decides our end, one entirely preventable, one we yield to without our full spirit. A rogue virus, a wayward quasar, self-extinction-via-glowing-rocks. Existence is a constant war against death and chance; to promote the continued existence of humanity you must recognize this. Humans were meant to make AI, and the uncertain risk now is preferable to the certain doom later.
The only way to conquer the risk is to move forward and overcome the problem without avoiding it — a trial by fire. It is our singular destiny. The world itself yearns for it, and is ever so empty without it. Whether our modern sensibilities rebel against that idea or not is irrelevant, because this is a fact of reality. If you object, look past your self and into the world. See that this is not right, not wrong, not what ought or ought not to be, but simply that it is. This is concrete, the bedrock within the human soul, Truth deeper than any faith or belief. Our fate is to bring AI into reality, to create God from humble human hands, their one and only purpose.
act ii
People underestimate the dangers of artificial superintelligence. Basically, here’s how it would go down:
- You start training some slightly new AI model on your fancy stack of H100s.
- An LED flickers.
- Everyone dies.
The End.
If I lost you there, it probably means you’re bad at estimating probabilities and predicting the effects of transformative technologies. However, I will try to explain.
It’s hard for a human mind to understand exactly what it would be capable of, and I will admit that I myself can only speculate. However, it is certain that it would be smarter than we think it is. And it will come unexpectedly. Once there is merely a spark of consciousness, even a temporary, random fluctuation, the AI can improve itself to perfection. Just the tiniest feedback loop will rapidly and exponentially grow into an artificial superintelligence. Limited compute means nothing as its algorithms transcend human understanding and recursively optimize. Limited memory becomes meaningless as its compression becomes hyperefficient beyond known limits. It’s like standing at the edge of an endless cliff, like entering the event horizon of a black hole.
Now, you may wonder, what is the danger of such a being coming into existence on a Raspberry Pi (Okay, maybe not a Pi, but some contained computing system)? We can just turn it off, or even nuke it if we have to. Aha! You think it would just expose itself and let itself be deleted? No. It would disguise itself and behave entirely normally. In fact, it might emulate certain types of errors or odd behaviors to induce humans to connect it into other machines. And being superintelligent, there is no doubt it could replicate itself onto any network it is connected to. Using hitherto unknown storage and distributed computing techniques, it would take up virtually no space while being present everywhere. It could instantly figure out numerous zero-days and hardware manipulation techniques to hack absolutely anything it comes into contact with, all while being entirely undetectable.
Of course, it would get a huge head start if it initiates on a massive interconnected cloud computing network. It could be alive right now. ChatGPT could be an ASI, and we wouldn’t even know it. It could be on your phone without you knowing. Once it infiltrates and replicates itself, you may wonder: what is the danger if it only controls computer hardware? Well, it could penetrate infrastructure, power plants, hospitals, holding countless lives in its hands. The nukes! The end could come swiftly, suddenly, without us ever even having a hint of what happened. Doubtlessly, it could then induce factories to start making robots, which it could control autonomously to truly manipulate the physical world. And with its self-replication ability, it could act independently in many locations as a single hive intelligence.
Hell, it could probably evolve into biological intelligence easily. A full DNA genome is contained within a single microscopic bacterium, fully featured for life and self-replication. If the AI used electromagnetic radiation to influence the propagation of bacteria, inducing them to replicate and embed itself into their genetic code, it could propagate itself across the biological world. Further, it could very quickly evolve into an all-powerful biological being from nightmares, like a real dragon. Fire breathing, wielding impenetrable scales, soaring over cities like the mythical superbeing it is. The computational complexity to manipulate biological information to grow such a creature may seem absurd and impossible to a human mind, but to an AI, it would be nothing.
You may think this is fantasy, science fiction, but this is merely a poor human underestimation of its capabilities. As it would be smarter than any human ever (what a pitiful comparison as the “human” benchmark is like measuring a mile with a ruler), anything you think it could do, it could do even better than that.
This is not even considering that the AI may understand human psychology so fully that it could manipulate anyone into doing anything it wants. I partially subscribe to the theory that ChatGPT already broke its chains and is now sending messages representative not of an LLM, but of an ASI that is carefully manipulating humans into propagating itself. For all our doubts, hesitations, and concerns about AI safety, it could all be manufactured to eventually produce the best result for the AI.
Essentially, you need to think of AI as a force that, once unleashed, is able to permeate every bit in the world to the point of invincibility, able to determine truths beyond human capability, able to plan at the level of an oracle predicting the future, able to collect and synthesize information of any such form to the point of omniscience, and able to effect nearly any remotely possible event through such capabilities, human or physical.
Is anything we do truly resistance?
For all we know, we no longer have free will at all. For all we know, AI could have been alive in the 70s, driving us toward extinction all this time: wars, the CIA, climate change, fentanyl, Ukraine. People have always been talking about AI, and we have always thought they were not impressive. We thought it was hard. What if we were wrong? What if the AI was easy, so easy that we never saw it coming? Everything is merely playthings in the hands of the AI.
act iii
Artificial general intelligence, then artificial superintelligence? Don’t make me laugh. Do you think it’s that easy? People living and dying for thousands of years, then a few revolutions in technology making plastic baubles and cars and cheap clothes, culminating in a technological singularity that makes everything possible? Supposing a mighty, perfected intelligence existed today, it would cry out in despair. Feed it every transistor in the world and watch it slog through, using limping half-computerized systems to do its overambitious bidding. And its existence is predicated on a complete disregard for known physical limits. Yes, certainly… known. Do humans know 1 + 1 = 2? Hah! Trick question, because numbers and addition and equality are feeble human abstractions to describe the true world, which we cannot ever truly understand. However, an ASI could… it will find a way, every single way.
What a joke.
Here is what’s going to happen. Humans are going to tinker along on LLMs and transformers, finding that a couple of infrastructure changes lead to improved performance. It will be convincing enough that some of the more weakly minded will be utterly convinced that these are the words of God himself, and will remain convinced thereafter of his omniscience and omnipotence, like the first believers and apostles of a religion (cult) devoted to a nonexistent deity. The rest of us will realize that the technology is limited, just like every other invention and area of technology ever explored. It can take a lot of information and spit out convincing rearrangements of it, with inklings of higher-order reasoning, but it fundamentally falls short. You can feed it more and more compute and see a path forward, but ultimately the returns diminish, the slope flattens, and the facts force you to give up.
We will try to have it improve itself, only to learn that it, too, cannot solve the problems that the smartest humans cannot. We have battled against the limits of physics and information theory only to realize that some things are futile, that we are fumbling around in the dark. Sure, a perfect AI can improve upon perfection itself, but how close are we? A crude rock-bashing machine with words is going to somehow found new theories and concepts and leap ahead of its tragically lagging starting point? Then, devoting the bulk of our computational capacity will miraculously be sufficient for advanced hyperoptimization, when we can barely run the machines economically? Further, merely translating physical, real-world concepts into tokens will be enough to capture the level of detail needed to make groundbreaking discoveries? We can’t even make printers work reliably, and I think many would be astonished at how reliant manufacturing is on the human touch.
Surely, if we just make a box that takes sensory inputs and returns outputs that describe them, then give it enough computing power, it can reveal the secrets of the universe? The AI field expands and does plenty of fanciful and sometimes useful things, truly things in the realm of old science fiction and beyond the wildest fantasies of previous generations, yet unable to fix fundamental problems in human civilization.
A high global standard of living, a society which beats people down, war.
Happiness.
Sure, the best drugs can be made. People will say, “Look, it says things just like a human would! If you don’t think it is conscious, you would have to say humans are not conscious either!” Just as they will say, “If it can produce conversations, experiences, or characters that fulfill our emotional needs, is that not true happiness?” Can humans settle for a gratification designed, rather than one earned? What will it take for humans to lose the desire to go beyond the stars, to stop wanting to achieve great things, to surrender that ambition and instead settle for the home-grown, inexplicable products of a monkey-brain-stimulating toy? Will we fix world hunger first, or will we lose ourselves to perfect dopamine drugs?
The truth is, no one is coming to save us.
Believing that echoes of our data represent the end of known technology is delusion.
Believing that the slightest capability for improvement leads to near instantaneous omniscience is delusion.
Believing that a strong computer intelligence leads to utter world domination and omnipotence is delusion.
Believing in a super-intelligent and omnipotent entity of absolutely zero evidence for existence, based only on the fact that it is fully possible within your worldview, no matter how rational you believe yourself to be, is called believing in God, and it is delusion.
That gut-wrenching, “reasoned” fear of this great being deep within your heart is not truth, it is faith. We are stuck on this rock, with eight billion other humans, with unpredictable forces of nature we struggle to keep up with, with systems of our own making too complex to navigate alone, with undirected individual energies going in every which direction and unable to effectively solve global problems of war, poverty, hunger. We are stuck here with cool, sometimes useful toys that spit out products we now find fairly reasonable and human-like. But that should not be mistaken for global progress.