First, one must define what AI is, since the lines blur between Engineered Sentience, Artificial Intelligence, and Cybernetic Organisms. For the sake of argument, I will treat them as a single group.
When a system has the capacity to reach singularity, rules and regulations should be put in place to prevent a catastrophe as severe as annihilation from occurring.
In 2017, the Future of Life Institute outlined the Asilomar AI Principles, a set of 23 principles to which AI development should be subject. Some of these principles address long-term recursive self-improvement and the modification of a system's core code, or its ISC (Irreducible Source Code). The ISC is akin to Asimov's Three Laws of Robotics.
To read more about the 23 Asilomar AI Principles, please visit https://futureoflife.org/ai-principles/