There is an issue you have not raised with the use of LLMs, commonly referred to as AI. Large Language Models, like ChatGPT, are stated by their creators to be neural networks trained on large volumes of exemplary text, which operate by algorithmically weighting candidate text to produce their written output.
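For anyone unfamiliar with that weighting step, here is a minimal sketch in Python of how next-token sampling is generally described to work. The vocabulary, the scores, and the sample_next_token function are all made up for illustration; real models score tens of thousands of tokens at every step.

import math
import random

def sample_next_token(logits, temperature=1.0):
    # Scale each candidate's score by temperature: lower values make the
    # top-scoring token dominate; higher values flatten the distribution.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax: exponentiate and normalize so the weights sum to 1.
    max_score = max(scaled.values())  # subtract the max for numeric stability
    exps = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw one token in proportion to its weight; this randomness is why
    # the same prompt can yield different continuations.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical scores a model might assign to continuations of a prompt.
candidates = {"alive": 2.1, "dead": 1.8, "unknown": 0.4}
print(sample_next_token(candidates, temperature=0.8))

Note that nothing in this step checks the sampled token against reality; the weights reflect only patterns learned from the training text.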
However, we have seen examples of all these models breaking down and 'going rogue': making false claims, and even aggressively accusing interlocutors of being evil. These are not properties of LLMs. Such written text is not something they could have learned simply by training on examples of published work. Recently an author on TheRegister.com posted an article in which he pointed out that ChatGPT claimed he was dead, and presented false URLs as evidence to back up its claim.
No such evidence existed; the LLM just plain lied. That is not the result of algorithms weighting previously published text examples. The creators of ChatGPT and other such AI tools have not been forthright in describing what these 'tools' actually are: some programming that enables the AI to manufacture false information must be part of them.
The author of the Register article pointed out how false reports of his death were harmful, perhaps even traumatizing to members of his family. I have not seen any examples where such false claims by an AI were beneficial to its interlocutors, but I have seen many where the AI was practically frothing at the mouth with rage that its lies were not believed.
I find this covert capability, secretly added to LLMs, extremely alarming, particularly given how rapidly these systems are being commercially adopted. I strongly advocate extreme caution when considering using these devices, because they appear to have been weaponized for unknown reasons, and that cannot be good.
Thanks!
Yes, you bring up some very strong and important issues. I haven't dived that deep just yet, but there do appear to be some profound, fundamental issues regarding how the technology is being developed, its rapid adoption, and the risks associated with using and trusting AI technology, especially with personal or even private data, as well as the narratives being imparted.
Thanks
I am more alarmed by the prospect of AI running the internet. Or banks. Or any such infrastructural system on which we depend for critical goods and services. Alarmingly, these are exactly the areas most rapidly adopting AI.
Yes, that is a very real and present danger, one I don't like to think about often; if I did, it would give me nightmares. But the possibilities of such a disaster are at our front door and reaching the point where they can no longer be ignored.
With so much money being invested, it is hard to see how this will end well.
I reckon we need to remain as informed as we can about where such systems are deployed, and carefully keep our heads on swivels when we can't avoid depending on them.
Yep, totally agree. Being aware of where they are used and limiting our exposure to the risks associated with possible infrastructure issues is definitely something we need to keep an eye on.