Part 3/9:
Tegmark fervently argued that AGI is not only unnecessary but also undesirable and preventable. He drew an analogy to biotechnology: people do not respond to the fear of losing control by accepting unregulated biotech; instead, they advocate stringent safety standards that allow innovation to produce beneficial outcomes. In the same vein, Tegmark suggested that legally mandated safety standards for AI could enable safe, controllable AI technologies without the existential risks associated with AGI.