Examples of AI doomers include:
- Nick Bostrom, a philosopher and founding director of Oxford's Future of Humanity Institute, whose 2014 book "Superintelligence: Paths, Dangers, Strategies" argues that a misaligned superintelligent AI could pose an existential risk to humanity.
- Elon Musk, who has repeatedly warned that advanced AI could pose an existential threat, has called for regulatory oversight, and in 2023 signed an open letter urging a pause on training AI systems more powerful than GPT-4.
- Yuval Noah Harari, a historian at the Hebrew University of Jerusalem, whose book "21 Lessons for the 21st Century" examines how AI and automation could destabilize politics, labor markets, and human decision-making.