Both. I think developing AI safety is critical to aligning the technology with my self-interest. At the same time, developing AI is necessary to improve ethics, safety, and even personal decisions. It will help us decide better in many cases, but it's dangerous if algorithms can influence our decisions subconsciously, which is where the situation is right now.
Agreeing with you 100%. My fear is that people and governments will not make it "safe" and it could get out of hand very fast.
The people = the blockchain = token holders. Token holders need to discover their self-interest and apply their ethics. Currently it's already not safe enough, but new projects need to think seriously about the ethics involved and, at minimum, consider doing formal risk assessments.