
RE: LeoThread 2025-03-13 06:13

in LeoFinance, 11 days ago

Part 4/9:

Adler is particularly alarmed by the industry-wide race to develop AGI without resolving crucial questions of AI alignment. AI alignment refers to ensuring that AI systems act according to human values and needs without causing unintended harm. Adler emphasizes that no lab currently has solutions to these pressing issues. His admission that he is "pretty terrified" by the current trajectory of AI development carries particular weight given his direct involvement in one of the world's leading AI companies.