I'm not sure, I just thought that, e.g., an AI model could be taught that some information is more reliable than other information. Wikipedia at least has references and such, whereas on e.g. Reddit anyone can post anything and claim it's true. Like flat earth theory.
References don't necessarily make it valid, hence the peer-reviewed "science".
How do you know what you find in search is accurate? Google has actually promoted responses that turned out to be incorrect, such as the Biden laptop story.
The AI processes it all, and the more information it has, the more it can override what is not accurate.
True, but at least it might be more factual. It's not a perfect system in any way, shape, or form. But that's why I thought that if we own our own glossary, we could try to remove the obvious propaganda and trash that we already see in the AI out there. It could be up to the community to make sure it's "factual" and "true", instead of mainstream media or other governmental bodies lurking on the web.
We can certainly write up our own content, that is true. However, nothing is preventing someone else from countering it. That is the nature of decentralization.
Understood, and you don't see any reason to spend more resources on the glossary before we know more about the AI? :)
Better to put the focus elsewhere
We have to see what LeoAI does. It could be just as easy to build the finance/technical content, for example, under a threadcast and get the data in there.
If chatbots start to take over search, then we aren't dealing with pages anymore but with information.