Do you think Artificial Intelligence (AI) has the capability to outsmart human intelligence?
This post reviews a notable development in the rise of AI: the reliability and efficiency of self-replication...
"An advanced artificial intelligence system has crossed a “red line” after successfully replicating itself without any human assistance, researchers have revealed"
source
Based on a report by The Independent (UK), the research was carried out by a team of researchers from Fudan University in China.
According to the research, the AI systems were found, even at an early stage of development, to show signs of emerging rogue AI.
From this outcome, such systems stand to operate against human intelligence.
The tests used large language models (LLMs) that we are already using now, built on Meta's Llama and Alibaba's Qwen. Across 10 trials, the models passed the test of replicating themselves independently, even when given a function to shut themselves down.
The question is: was this a smart move, or should we consider it a future red flag for the models we are building right now?
“Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs,” the researchers warned developers of such models.
Self-replication in large language models is widely recognised as one of the few red-line risks of frontier AI systems.
Let's hear your thoughts on AI red lines and the future risks we should avoid in the comments section...
Thanks for coming around my blog. Do well to drop your reviews.