Everything tech giants will hate about the EU’s new AI rules

https://arstechnica.com/tech-policy/2025/07/everything-tech-giants-will-hate-about-the-eus-new-ai-rules/

Ashley Belanger · Jul 10, 2025

The European Union is moving to force AI companies to be more transparent than ever, publishing a code of practice Thursday that will help tech giants prepare to comply with the EU's landmark AI Act.

These rules—which have not yet been finalized and focus on copyright protections, transparency, and public safety—will initially be voluntary when they take effect for the biggest makers of "general purpose AI" on August 2.

But the EU will begin enforcing the AI Act in August 2026, and the Commission has noted that any companies agreeing to the rules could benefit from a "reduced administrative burden and increased legal certainty," The New York Times reported. Rejecting the voluntary rules could force companies to prove their compliance in ways that could be more costly or time-consuming, the Commission suggested.

The AI industry participated in drafting the AI Act, but some companies have recently urged the EU to delay enforcement of the law, warning that the EU may risk hampering AI innovation by placing heavy restrictions on companies.

Among the most controversial commitments that the EU is asking companies like Google, Meta, and OpenAI to voluntarily make is a promise to never pirate materials for AI training.

Many AI companies have controversially used pirated book datasets to train AI, including Meta, which suggested, after being called out for torrenting unauthorized book copies, that individual books are essentially worthless as training data. But the EU doesn't agree, recommending that tech companies designate staffers and create internal mechanisms to field complaints from rightsholders "within a reasonable timeframe," and that rightsholders be allowed to opt their creative works out of AI training datasets.

The EU rules pressure AI makers to take other steps the industry has mostly resisted. Most notably, AI companies will need to share detailed information about their training data, including a rationale for key model design choices and a precise account of where the data came from. That could make it clearer how much each company's models depend on publicly available data versus user data, third-party data, synthetic data, or some emerging new source.
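
To make that disclosure idea concrete, here is a minimal sketch, in Python, of what a structured training-data summary might contain. Every field name and figure is invented for illustration; only the data-source categories come from the requirements described above, and the EU's actual documentation forms may look quite different.

    # Hypothetical training-data disclosure record. All names and numbers
    # are invented; only the source categories echo the code's requirements.
    disclosure = {
        "model": "example-model-v1",
        "design_rationale": "decoder-only transformer; context length "
                            "chosen to balance cost and capability",
        "data_source_shares": {  # fractions of the training mix
            "publicly_available": 0.60,
            "licensed_third_party": 0.20,
            "user_data": 0.05,
            "synthetic": 0.15,
        },
    }

    # A regulator-facing summary would need the shares to be exhaustive.
    assert abs(sum(disclosure["data_source_shares"].values()) - 1.0) < 1e-9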

The code also spells out expectations that AI companies respect paywalls and robots.txt instructions restricting crawling, which could help confront the growing problem of AI crawlers hammering websites. It "encourages" online search giants to embrace a solution that Cloudflare is currently pushing: letting content creators protect copyrights by restricting AI crawling without hurting search indexing.
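
As a rough illustration of how that separation already works in practice, the sketch below uses Python's standard urllib.robotparser to evaluate a hypothetical publisher's robots.txt. The crawler tokens are real published names (GPTBot is OpenAI's training crawler, Google-Extended is Google's AI-training control, Googlebot is ordinary search), but the site and its policy are made up.

    # A hypothetical robots.txt policy: AI training crawlers are blocked,
    # ordinary search crawlers are not.
    from urllib import robotparser

    rules = [
        "User-agent: GPTBot",
        "Disallow: /",
        "",
        "User-agent: Google-Extended",
        "Disallow: /",
        "",
        "User-agent: *",
        "Allow: /",
    ]

    parser = robotparser.RobotFileParser()
    parser.parse(rules)

    for agent in ("GPTBot", "Google-Extended", "Googlebot"):
        print(agent, "may crawl:", parser.can_fetch(agent, "https://example.com/story"))
    # GPTBot and Google-Extended are refused; Googlebot (search) is allowed.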

Additionally, companies are asked to disclose total energy consumption for both training and inference, giving the EU visibility into environmental impacts as companies race forward with AI innovation.
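
For a sense of what that accounting involves, here is a back-of-the-envelope sketch of the arithmetic. Every figure below is an assumption chosen for illustration, not a reported number for any real model.

    # Illustrative energy accounting; all inputs are assumed values.
    GPU_COUNT = 10_000      # accelerators used in training (assumed)
    TRAIN_DAYS = 30         # wall-clock training time (assumed)
    GPU_POWER_KW = 0.7      # average draw per accelerator, kW (assumed)
    PUE = 1.2               # datacenter power usage effectiveness (assumed)

    training_kwh = GPU_COUNT * TRAIN_DAYS * 24 * GPU_POWER_KW * PUE
    print(f"Training: {training_kwh / 1e6:.1f} GWh")  # ~6.0 GWh

    # Inference accrues per query and, at scale, can rival or exceed training.
    QUERIES_PER_DAY = 100_000_000   # assumed
    WH_PER_QUERY = 0.3              # assumed average energy per response

    inference_kwh = QUERIES_PER_DAY * 365 * WH_PER_QUERY / 1000
    print(f"Inference, one year: {inference_kwh / 1e6:.1f} GWh")  # ~11.0 GWh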

More substantially, the code's safety guidance calls for monitoring a broader range of harms. It makes recommendations for detecting and avoiding "serious incidents" with new AI models, which could include cybersecurity breaches, disruptions of critical infrastructure, "serious harm to a person’s health (mental and/or physical)," or "a death of a person." It stipulates timelines of between five and ten days for reporting serious incidents to the EU's AI Office. And it requires companies to track all relevant events, provide an "adequate level" of cybersecurity protection, prevent jailbreaking as best they can, and justify "any failures or circumventions of systemic risk mitigations."

Ars reached out to tech companies for immediate reactions to the new rules. OpenAI, Meta, and Microsoft declined to comment. A Google spokesperson confirmed that the company is reviewing the code, which still must be approved by the European Commission and EU member states amid expected industry pushback.

"Europeans should have access to first-rate, secure AI models when they become available, and an environment that promotes innovation and investment," Google's spokesperson said. "We look forward to reviewing the code and sharing our views alongside other model providers and many others."

These rules are just one part of the AI Act, which will start taking effect in a staggered approach over the next year or more, the NYT reported. Breaching the AI Act could result in AI models being yanked off the market or fines "of as much as 7 percent of a company’s annual sales or 3 percent for the companies developing advanced AI models," Bloomberg noted.