Part 1/3:
Navigating the Risks of Autonomous AI Systems
How we test and assess the risks of autonomous AI systems becomes crucial as these models grow increasingly capable of conducting AI research themselves. The point at which a model can meaningfully engage in AI research is an important milestone, because it marks a level of true autonomy.
Defining AI Safety Levels (ASL)
To address this challenge, the RSP (Responsible Scaling Policy) uses an "if-then" structure that defines a ladder of AI Safety Levels (ASL), sketched in code after the list below:
ASL-1: Systems that clearly pose no autonomy or misuse risk, such as a narrow chess engine like Deep Blue.
ASL-2: Today's AI systems, which evaluations show are not yet capable of autonomously self-replicating, and which cannot provide uplift on dangerous tasks such as information about CBRN (chemical, biological, radiological, and nuclear) weapons beyond what a basic web search would turn up.
[...]
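To make the "if-then" structure concrete, here is a minimal Python sketch of how capability evaluations could gate which safety level (and thus which safeguards) applies. All names here (CapabilityEvals, assign_asl) are illustrative assumptions for this post, not part of any published policy or evaluation suite:

```python
from dataclasses import dataclass

# Hypothetical evaluation results for a model. The two fields mirror the
# thresholds described above: autonomous self-replication and CBRN uplift
# beyond a basic web-search baseline.
@dataclass
class CapabilityEvals:
    can_self_replicate: bool
    uplifts_cbrn_beyond_search: bool

def assign_asl(evals: CapabilityEvals) -> int:
    """Map evaluation results to an AI Safety Level (higher = stricter safeguards)."""
    if evals.can_self_replicate or evals.uplifts_cbrn_beyond_search:
        # "If" a dangerous-capability threshold is crossed, "then" stronger
        # security and deployment measures are required before scaling further.
        return 3
    # Below both thresholds: where today's systems sit.
    return 2

# Example: a model that passes neither dangerous-capability eval stays at ASL-2.
print(assign_asl(CapabilityEvals(False, False)))  # -> 2
```

The point of the structure is that the safeguards are committed to in advance: the policy pre-specifies what evidence triggers the move to a higher level, rather than deciding case by case after the fact.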