Elon Musk has revived his lawsuit against OpenAI with fresh allegations that raise serious concerns about the company's ethics and integrity. I feel this could have major implications for the future of AI research and its impact on society.
For my part, Musk's allegations against OpenAI founders Sam Altman and Greg Brockman paint a story of deception and manipulation. If true, it is shocking to imagine that they used an insincere pitch of AI safety and openness to convince Musk of the need to found the organization. I can understand why Musk would feel betrayed and used, given the sheer volume of resources he dedicated to the company.
What I find really disturbing is the reported agreement between OpenAI and Microsoft under which Microsoft's rights to OpenAI's technology would be revoked the moment artificial general intelligence is achieved. It raises serious questions about how pure these tech giants' intentions are when it comes to responsible AI development. It is scary to think that all of this could be built on prioritizing profit over the potential risks and consequences of AGI.
I also see OpenAI's transition to a "capped-profit" model as a shrewd move: an attempt to chase profits under a veneer of social responsibility. That said, the model raises more questions than it answers. How are we to know that OpenAI will work for the greater good when its underlying motive is raising research funding?
I believe this case could set a precedent for holding AI companies accountable to their proclaimed missions and founding principles. How these firms act in that respect, and whether they maintain high standards of ethics and transparency, certainly deserves scrutiny. The future of AI development depends on it, and I'm very curious to see where this case goes.
Given that OpenAI has consistently maintained that its transition was necessary to secure funding, further doubts arise about the sustainability of AI research itself. Can we really expect these companies to put their services toward the greater good when their main goal is to maximize profits?
I doubt it.
This makes me reflect on Meta's AI strategy of building for tomorrow, not just for profit. That kind of thinking sounds more like responsible AI development, and I wish every company were similar. Only when we look toward long-term goals rather than short-term gains can we create AI that genuinely helps humankind instead of just lining the pockets of tech executives.