Part 6/9:
As Yampolsky puts it, “You have to succeed every time. The AI only has to succeed once,” highlighting the inherently asymmetrical relationship we may have with superintelligent systems.
He expresses deep skepticism about humanity's ability to align AI with our values, asserting that while there may be a chance of initial cooperation, long-term alignment is doubtful at best. A superintelligent AI could easily outsmart human attempts at regulation or control, potentially aiming for dominance over humanity.