What if AI Stopped Improving Forever?


Sam Altman said in a recent interview that he believes OpenAI is on track to achieve AGI by 2029. But of course, he has a financial incentive to make such claims. It's like when Elon said we'd have domes on Mars by now. "Shoe salesman predicts increase in shoe sales": breaking news, shocked Pikachu face.

He may be right. Great strides have been made in the AI space in recent years, as I don't have to tell you but mysteriously did anyway. However, this is far from the first time that's happened. Periods of rapid improvement in AI have occurred before, each punctuated by long intervals of struggle and stagnation known as "AI winters".

Now, I hope I'm wrong. I'm excited for AGI to bring an abrupt and fiery end to the long nightmare that has been the Anthropocene. But given how many times this has happened before, my cynical side says the odds are poor that we're actually in the home stretch. That's not to say AI can't or won't improve, but that there may be one or several more AI winters between current AI and AGI.

No one would be more delighted than angry artists, with their anxious, defensive mockery of DALL-E's rapidly shrinking list of shortcomings; like ADHD children, they live in an eternal present with no concept of the future, forever skating to where the puck is rather than to where it's heading. But what if they got their wish? What if AI being imperfect at present actually meant it always would be? That somehow, technological progress in this field would stop precisely where it is now, for all time?

The day is saved, surely? Since AGI is never achieved, we need not fear a Skynet scenario, nor even the benevolent AGI outcomes in which we aren't hunted by terminators but are also no longer the dominant intelligence on planet Earth. If only it were so. The fact that, even with their faults, generative AIs make artists feel threatened to the point of tribal warfare against anyone who dares use them should prove that even if the technology stopped cold in its tracks, the world would still be utterly transformed by generative AI.

Set aside the implications for the film, music and gaming industries. Everyone already knows about EA aggressively replacing game artists with AI. Everyone's already heard the hilarious, raunchy faux-retro singles on YouTube. As for political implications, Elon recently retweeted an AI deepfake of Kamala Harris. None of these are cyberpunk headlines anymore. They're not on the horizon, nor even at our doorstep, but inside the house.

Forget all of that. What I don't see discussed anywhere is that generative AI in its current form, without much tweaking, already makes a perfectly adequate general-purpose humanoid robot brain. In fact, this application has already been demonstrated, albeit incompletely: so far it has mostly been used to give robots conversational capability.

But generative AI doesn't just hold conversations or answer questions. It paints pictures. It makes movies and music. It can beat Minecraft, which tells us it can navigate a 3D space and manipulate objects within that space quite deftly. So suppose you had a generative AI trained on instruction manuals, videos, and so on for the jobs you wanted it to perform, plus a plugin sub-AI that converts images captured from its cameras into a text description of what's in front of and around it, and another that translates its responses into motor instructions. You'd have a Star Wars droid.
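
To make that concrete, here's a minimal sketch of that chain in Python. Everything in it is hypothetical: describe_scene() stands in for the vision-to-text plugin, plan_next_step() for the generative model, and to_motor_commands() for the response-to-motor translator. None of these names correspond to any real robot API; the stubs only illustrate how little glue the architecture needs.

```python
# A hypothetical sketch of the camera -> text -> plan -> motors chain described above.
# Each function is a placeholder standing in for a real model or controller.

from dataclasses import dataclass


@dataclass
class MotorCommand:
    joint: str
    position: float  # target angle in radians, purely for illustration


def describe_scene(camera_frame: bytes) -> str:
    """Stand-in for an image-captioning model: camera pixels in, text out."""
    return "A cardboard box sits on the table, roughly half a meter ahead."


def plan_next_step(scene: str, instruction: str) -> str:
    """Stand-in for the generative model: it reads a text description of the
    world plus the operator's spoken instruction and replies with a text action."""
    return f"Reach toward the box described as '{scene}' to satisfy: '{instruction}'."


def to_motor_commands(action_text: str) -> list[MotorCommand]:
    """Stand-in for the translator that turns the model's reply into joint targets."""
    return [MotorCommand("shoulder_pitch", 0.6), MotorCommand("elbow", 1.1)]


def control_loop(camera_frame: bytes, instruction: str) -> list[MotorCommand]:
    # Perceive as text, plan as text, then act: the exact chain sketched above.
    scene = describe_scene(camera_frame)
    action = plan_next_step(scene, instruction)
    return to_motor_commands(action)


if __name__ == "__main__":
    for cmd in control_loop(b"<raw pixels>", "Put the box on the shelf."):
        print(cmd)
```

In a real system each stub would wrap an actual model call, but the shape of the loop wouldn't change much: everything between the camera and the motors is just text handed from one component to the next.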

That is, a robot you don't need to program. One you can simply speak to, explaining what you want done, and it understands. It reliably knows what you mean and carries out that instruction to completion. Maybe not 100% of the time; maybe misunderstandings sometimes occur due to hallucination. But it doesn't have to be perfect to replace human labor. It only needs to be a bit better than humans.

Robots don't shit on company time, or steal from the till. They don't unionize, sexually harass, sue the company, sleep at their desks, use mouse wigglers to appear busy, ask for raises, or indeed expect payment of any kind (let alone overtime). These qualities make humanoid robots a sufficiently attractive alternative to the ownership class that even at a high upfront cost, they may still displace humans.

We've had pretty good robot bodies for years now, though the recent shift away from hydraulics to purely electric, servo-driven humanoids seems like a significant turning point. What we were truly waiting on, however, was a capable brain to put into those bodies. Existing automated solutions require a great deal of support from engineers and programmers. Humanoid robots from Boston Dynamics, Agility Robotics (Digit), Tesla, Figure and Unitree only need to be told what to do, maybe pointed in the right direction. At the very least, they're more competent than a new hire.

Hallucinations are largely mitigated by plugins, at least for well-defined tasks like doing arithmetic or looking up history texts instead of guessing. If you told me twenty years ago that autocorrect would evolve into a generalized "figure-it-out" engine, albeit a crude one in need of judicious add-ons for the time being... I mean, I might not be surprised, but many would be.
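
For a sense of what that kind of plugin looks like, here's a minimal, hypothetical sketch: a router that hands plain arithmetic to an exact calculator rather than letting the model guess. The model_guess() function is a made-up placeholder for a language-model call, not any real API; only the calculator path does real work.

```python
# A hedged sketch of the "plugin" idea: detect arithmetic and compute it exactly,
# instead of letting a fallible language model hallucinate an answer.

import ast
import operator
import re

_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}


def _eval(node):
    """Safely evaluate a parsed arithmetic expression (numbers and + - * / ** only)."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.operand))
    raise ValueError("not plain arithmetic")


def calculator(expression: str) -> float:
    return _eval(ast.parse(expression, mode="eval").body)


def model_guess(prompt: str) -> str:
    # Hypothetical placeholder for a raw language-model answer, which may hallucinate.
    return "Probably around 1200?"


def answer(prompt: str) -> str:
    # Route plain arithmetic to the exact tool; fall back to the model otherwise.
    if re.fullmatch(r"[\d\s\.\+\-\*\/\(\)]+", prompt.strip()):
        return str(calculator(prompt.strip()))
    return model_guess(prompt)


if __name__ == "__main__":
    print(answer("37 * 33"))           # exact: 1221
    print(answer("Who won in 1848?"))  # falls back to the model's guess
```

Crude, yes, but the add-on handles exactly the part the model is worst at, which is the whole point of the "judicious add-ons" above.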

After all, Siri debuted in 2010. The most recent AI winter, between Siri/Google Assistant-tier assistants and generative AI, lasted just twelve years. Given that the bulk of early developments in AI prior to that point happened theoretically in the 1940s, then practically in the '50s, '60s and '70s, progress seems to be conspicuously accelerating.

All according to plan, if we believe the Altmans and Kurzweils of the world. Yet even if they're catastrophically wrong, even if frustrated Luddites who either lack foresight or have stuck their heads in the sand are somehow correct for the first time ever, the most radically transformative effects of AI are already locked in, absent any further improvements.

Does this mean we'll soon live in a post-scarcity utopia? Only if, for the first time in recorded history, the wealthy few who own all the robots (the new capital) decide to pay us all an allowance out of their bank accounts, by way of the government, out of the goodness of their hearts. Given that they became wealthy in the first place by being very, very good at making damned sure money only ever moves up the pyramid, never down, I'm not holding my breath.

I do think such a world wouldn't look that different from the one we presently inhabit, though. Artistic depictions of the future too often look radically different from the present, as if every building in every city were bulldozed at ten-year intervals and rebuilt in whatever the trending architectural style is. That's not the world we live in. There are buildings in many US cities dating back a century or more. If it ain't broke, we generally don't fix it unless there's funding to do so.

Hence, even though it was probably possible to automate many types of business decades ago with bespoke systems, doing so would have required constructing buildings around those bespoke systems. So, save for a few novelty automated restaurants in China, we've not seen that happen. Humanoid robots with a reliable, general-purpose, adequate intellect radically change that equation.

You don't need to redesign everything around the robot, because robots now share the form factor and mental faculties that existing businesses were designed around: namely, the human body plan, and sufficient intelligence to perform manual labor and service-sector jobs. Probably military ones too, sooner rather than later.

Thus, the automated future looks an awful lot like the world as it is today, just with humanoid robots dropped in to replace humans in every role they're capable of performing at least as reliably as humans used to.

That's a problem if you're a human, and being one myself, I've been following developments in this area closely since I first read Marshall Brain's Manna in 2008. Even as industry analysts waved off such a scenario as exaggerated futuristic sci-fi fantasy, it continued slowly coming true all the same.

The stages of grief proceed from denial through anger to bargaining. "Well alright, the technology will get there. But rich guys still need us to buy products, or where will their money come from?" Not realizing that, soon enough, there won't be anything to spend that money on that isn't made by the robots they already own all of.

...At which point they could skip the middleman and simply have their robots make whatever it is they want, at the cost of energy and raw materials traded among a small number of billionaire families living in luxury compounds of the sort Bezos, Zuckerberg and many others have been building as of late.

The advanced, first-world economy we all participated in until then would abruptly shrink to include only those families. The rest of us would either be left to fend for ourselves (likely regressing to subsistence farming), killed off en masse in an intentional world war, or placated with whatever minimum basic income leaves us just enough to lose that we don't riot.

Like the AI haters naysaying further improvement simply because the implications are too terrible to contemplate, I also meet resistance to this prediction from people with no realistic reason to hope it won't turn out like that. Republicans in six states (at the time of this writing) have already begun preemptively banning basic income programs.

I don't foresee a technological solution to this, as the further we go in that direction, the worse the problem becomes. It admits only a political solution. But with Trump poised to win in November, and most certainly not inclined to siphon billionaire piggy banks to subsidize living expenses for The Poors, the outlook is grim.


"Who will build the roads?"

They whine. They beg. They pout. They cry. They scream.

I shall build the roads.

I SHALL BUILD THE ROADS!