Elon Musk’s xAI to Sign EU AI Code Chapter on Safety and Security
Elon Musk’s artificial intelligence company, xAI, announced on Thursday that it will sign the safety and security chapter of the European Union’s Code of Practice. This voluntary code is designed to help AI firms align with the EU’s upcoming artificial intelligence regulations.
The EU code, drafted by 13 independent experts, comprises three chapters: transparency, copyright, and safety and security. xAI’s decision to sign only the safety and security chapter reflects its selective support for the EU’s AI framework.
Safety Commitment Amid Broader Criticism
xAI stated on the social media platform X that it supports efforts to enhance AI safety. The company expressed strong backing for the safety and security chapter of the Code of Practice but criticised other parts of the EU’s AI Act.
In its post, xAI said, “While the AI Act and the Code have a portion that promotes AI safety, its other parts contain requirements that are profoundly detrimental to innovation, and its copyright provisions are clearly an overreach.”
The company has not clarified whether it will also adopt the code’s transparency or copyright chapters, and it did not immediately respond to media requests for further comment made outside regular business hours.
Tech Industry Responses Vary
Adherence to the EU’s code is voluntary, but signatories gain greater legal certainty about how they comply with the AI Act; firms that do not sign forgo that clarity.
Other major tech firms have taken different stances. Google has confirmed its intent to sign the entire Code of Practice, while Microsoft President Brad Smith has said the company will likely sign it as well.
In contrast, Meta, which owns Facebook, has rejected the code. The company argues that the rules create legal uncertainty and impose obligations that go beyond what is set out in the AI Act itself.
The EU’s AI Code of Practice is seen as an important step towards enforcing responsible AI development while supporting innovation. However, the mixed reactions show the ongoing debate over how to balance safety, regulation, and innovation in this fast-moving field.
with inputs from Reuters