Google confirms it will sign the EU AI Code of Practice

https://arstechnica.com/google/2025/07/google-confirms-it-will-sign-the-eu-ai-code-of-practice/

Ryan Whitwam · Jul 30, 2025

Big Tech is increasingly addicted to AI, but many companies are allergic to regulation, bucking suggestions that they adhere to copyright law and provide data on training. In a rare move, Google has confirmed it will sign the European Union's AI Code of Practice, a framework it initially opposed for being too harsh. However, Google isn't totally on board with Europe's efforts to rein in the AI explosion. The company's head of global affairs, Kent Walker, noted that the code could stifle innovation if it's not applied carefully, and that's something Google hopes to prevent.

While Google was initially opposed to the Code of Practice, Walker says the input it has provided to the European Commission has been well-received, and the result is a legal framework it believes can provide Europe with access to "secure, first-rate AI tools." The company claims that the expansion of such tools on the continent could boost the economy by 8 percent (about 1.8 trillion euros) annually by 2034.

These supposed economic gains are being dangled like bait to entice business interests in the EU to align with Google on the Code of Practice. While the company is signing the agreement, it appears interested in influencing how the code is implemented. Walker says Google remains concerned that tightening copyright guidelines and forced disclosure of possible trade secrets could slow innovation. Having a seat at the table could make it easier for Google to move the needle on regulation than if it followed some of its competitors in eschewing voluntary compliance.

Google's position is in stark contrast to that of Meta, which has steadfastly refused to sign the agreement. The Facebook owner has claimed the voluntary Code of Practice could impose too many limits on frontier model development, an unsurprising position for the company to take as it looks to supercharge its so-called "superintelligence" project. Microsoft is still mulling the agreement and may eventually sign it, while ChatGPT maker OpenAI has already signaled that it will.

The regulation of AI systems could be the next hurdle as Big Tech aims to deploy technologies framed as transformative and vital to the future. Google products like search and Android have been in the sights of EU regulators for years, so getting in on the ground floor with the AI code would help it navigate what will surely be a tumultuous legal environment.

A comprehensive AI framework

The US has shied away from AI regulation, and the current administration is actively working to remove what few limits are in place. The White House even attempted to ban all state-level AI regulation for a period of ten years in the recent tax bill. Europe, meanwhile, is taking the possible negative impacts of AI tools seriously with a rapidly evolving regulatory framework.

The AI Code of Practice aims to provide AI firms with a bit more certainty in the face of a shifting landscape. It was developed with the input of more than 1,000 citizen groups, academics, and industry experts. The EU Commission says companies that adopt the voluntary code will enjoy a lower bureaucratic burden, easing compliance with the bloc's AI Act, which came into force last year.

Under the terms of the code, Google will have to publish summaries of its model training data and disclose additional model features to regulators. The code also includes guidance on how firms should manage safety and security in compliance with the AI Act. Likewise, it includes paths to align a company's model development with EU copyright law as it pertains to AI, a sore spot for Google and others.

Companies like Meta that don't sign the code will not escape regulation. All AI companies operating in Europe will have to abide by the AI Act, which includes the most detailed regulatory framework for generative AI systems in the world. The law bans uses of AI it deems an unacceptable risk, like intentional deception or manipulation of users, social scoring systems, and real-time biometric scanning in public spaces. Companies that violate the rules in the AI Act could be hit with fines as high as 35 million euros ($40.1 million) or up to 7 percent of the offender's global revenue.