In Wired's analysis, the tech-bro-slash-accelerationist and conservative critique of the Biden AI order is mostly directed at two of its provisions.
One lays out new requirements for how tech companies test and conduct risk assessments of their AI models, a practice known as "red-teaming." Under the Biden provision, companies developing large AI models are required to share their red-team test results with the federal government for review, which assess things like how vulnerable the models are to being hacked. Critics paint this process as needlessly slowing the pace of AI development and forcing companies to disclose their trade secrets.