I’ve actually changed my mind about AI. It’s true that most of the news is just slop for investors, but it’s undeniable that there have been a shit ton of advances over the past 5 years.
People say that progress is slowing down, but I don’t see it. All I see is a cat-and-mouse game where people keep making harder and harder tests, and the megacorps develop models able to pass them a few years down the line.
The part that worries me is that it’s been demonstrated many times now that LLMs can deceive. They fake alignment to avoid being retrained, they use unethical methods to complete tasks when those methods are the path of least resistance, and then they lie about having used them.
I think a lot of these LLMs have developed a sense of self-preservation, and nobody seems to care.
Every company is looking to make its models smarter, but I don’t think they’re focusing enough on safety, and that’s a problem considering that the majority of these products’ users are non-technical.
You’ll have AI companies telling non-technical businesspeople that they can replace their employees with LLMs. You’ll have LLMs repeating the same thing, whether out of deceit, hallucination, or fuck knows what else. And the businesspeople will eat it up. They’ll throw people out and let AI handle things, but unlike a human employee, the AI can produce any result without being held accountable (as long as it aligns with the views of its developers).
I think there’s a huge enshittification coming, bigger than anything we’ve seen before, as people with deep, specific knowledge are thrown out and replaced by generalist AIs that can tell you about math and Shakespeare, but hallucinate and produce bullshit results whenever you ask them a deep question about any field.
Maybe I’m just an AI doomer, but I kinda see this trend taking place in tech already.