For years I've posted about the need for universal basic income because of technological unemployment; I was discussing these possibilities as early as 2015. It's now Dec 16, 2023, and we may be as little as a single algorithmic breakthrough away from achieving AGI. LLMs (large language models) by themselves cannot become AGI, because the current generation of LLMs cannot reason logically well enough to handle mathematics. In a new paper, scientists from Google introduced "FunSearch". In my opinion this breakthrough is one of the first that could bring us to AGI without having to rely on a neuro-symbolic approach.
That is to say, while you cannot build AGI safely without taking a logical approach (my opinion), you can likely reach AGI-level performance using the unsafe, approximation-based neural network approach. The problem with neural networks is that they are black boxes: we can't really have one explain what it's doing in a way we can understand, and generative AI doesn't understand what it's doing either, because it's just predicting the next token in a pattern. Yet FunSearch shows that it may be possible to use generative approaches such as LLMs to generate genuinely novel solutions to problems in mathematics. In my opinion FunSearch is the kind of game-changing research that can lead to AGI.
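The core idea behind FunSearch can be sketched as a loop: a language model proposes candidate programs, an automatic evaluator scores them, and the best candidates seed the next round. The sketch below is a toy illustration of that loop, not the actual system; `propose` is a stand-in for the LLM (it just picks among a few hand-written candidate bodies), and the problem (recover f(x) = x·x from test cases) is invented for the example.

```python
import random

def evaluate(program, cases):
    """Score a candidate program body on test cases; invalid
    programs get the worst score instead of crashing the loop."""
    try:
        f = eval("lambda x: " + program)  # candidate body, e.g. "x * x"
        return sum(1 for x, y in cases if f(x) == y)
    except Exception:
        return -1

def propose(best_program, rng):
    """Stand-in for the LLM: pick a mutation of the current best.
    A real system would prompt an LLM with the top-scoring programs."""
    ops = ["x + 1", "x * 2", "x * x", "x - 1", best_program]
    return rng.choice(ops)

def funsearch(cases, rounds=200, seed=0):
    """Propose-evaluate-keep loop: retain only strict improvements."""
    rng = random.Random(seed)
    best, best_score = "x", evaluate("x", cases)
    for _ in range(rounds):
        cand = propose(best, rng)
        score = evaluate(cand, cases)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

# Target behavior: f(x) = x * x on a few test cases.
cases = [(0, 0), (1, 1), (2, 4), (3, 9)]
program, score = funsearch(cases)
print(program, score)
```

The key design point, which the real FunSearch shares, is that the generative model is never trusted directly: every candidate must pass an exact, programmatic evaluator before it is kept.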
Logical AI, as seen in GOFAI, is the safer approach. With a logical AI you can use a smart constitution: the system can actually understand the meaning behind the words and the laws. It's not merely generating text; it can strategically plan, map out possible consequences, and apply common sense. In my opinion, without this we don't really have the complete capabilities to call it artificial general intelligence. I do think that either by combining LLMs with logical AI, or by using generative approaches to do math approximately rather than with exact logic, you can get at least very close to AGI, if not actually reach it for most practical purposes.
The problem with using generative AI to reach AGI is that you are taking an approximate approach to something that demands exactness. Misalignment, and AI escaping human control, are the risks of the approximate approach. You have the risk of hallucination, and the unpredictability of the system doing something it wasn't told to do, because it's not logical, it's statistical, and in statistics nothing is ever an impossibility. You can make it right 99% of the time, but in the remaining 1% you don't know what could happen. In the logical approach a contradiction cannot arise, and the system can only do exactly as instructed at all times.
So to do AGI safely I do think we need at least a logical layer. In this layer the AGI could be given its laws to follow, its constitution, its rules. It cannot break its own rules if it's set up to follow them logically and never deviate. Will AGI be achieved in 5 years? If society wants to achieve it, there do not seem to be many theoretical bottlenecks left. The algorithms for the most part exist, the approaches are known, and now it's a choice of whether we want to do it, and, if we do, how to keep it beneficial for humanity without stifling innovation.
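A logical rule layer of this kind can be sketched very simply: every proposed action is checked against a set of hard constraints before execution, with no probabilistic override that could let a forbidden action through. The rule names and the action format below are illustrative assumptions, not from any real system.

```python
# Illustrative hard constraints (the "constitution"); these names are
# hypothetical examples, not drawn from any actual deployed system.
FORBIDDEN = {"delete_user_data", "disable_oversight", "self_modify"}

def permitted(action):
    """Deterministic check: an action passes only if it violates no rule.
    Unlike a statistical filter, there is no confidence threshold and
    no 1% failure mode; a forbidden action can never slip through."""
    return action not in FORBIDDEN

def execute(action):
    """Refuse any action that fails the rule check; only then act."""
    if not permitted(action):
        raise PermissionError(f"action '{action}' violates the constitution")
    return f"executed {action}"

print(execute("summarize_report"))  # an allowed action proceeds
try:
    execute("disable_oversight")    # a forbidden action is always refused
except PermissionError as e:
    print(e)
```

The contrast with the statistical approach is the point: the check is a logical predicate, so its behavior is identical every time, which is exactly the property the approximate approach cannot guarantee.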
What about the politics? The moment AGI is achieved, it will change things. Do we still need a 9-to-5 labor force? Will we have some kind of UBI? What happens to the vast majority of people whose labor an AGI will be able to do better? Humans will still have roles if we learn to use and merge with AGI, but the role will be more that of director or supervisor of the AGIs, rather than working on the details of traditionally recognized labor.
AGI likely necessitates UBI.