Generative Artificial Intelligence: the New European Regulatory Framework
The European Union has proposed a comprehensive regulation for generative AI, imposing transparency and labeling duties, model traceability, and risk assessments. The measures aim to curb deep‑fakes and false content, with compliance deadlines running to 2028.

The European Union is preparing to impose stricter rules on artificial‑intelligence technologies that autonomously generate text, images, audio and video. In a Communication released on 5 May 2026, the Commission presented a draft regulation that, if adopted, would become the first legislative framework worldwide specifically dedicated to generative AI. Its stated purpose is twofold: to foster technological innovation while guaranteeing a high level of safety, reliability and respect for fundamental rights.
The first pillar of the proposal is transparency. All generative‑AI systems must clearly indicate when a piece of content has been produced by an algorithm. This disclosure has to appear on the page or within the final product and must include the name of the model, its version and, whenever possible, the data sources used for training. By making the artificial origin visible, the rule aims to give end‑users the tools to instantly distinguish synthetic from human‑created material and to reduce the risk of deception.
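To make the disclosure requirement concrete, here is a minimal sketch of what a machine‑readable disclosure record might look like. The class name `AIContentDisclosure`, its fields, and the label format are illustrative assumptions, not part of the draft text; the draft only names the model, its version and, where possible, the training data sources as required elements.

```python
from dataclasses import dataclass, field

@dataclass
class AIContentDisclosure:
    """Hypothetical machine-readable disclosure for AI-generated content."""
    model_name: str
    model_version: str
    training_sources: list[str] = field(default_factory=list)

    def label(self) -> str:
        # Human-readable notice displayed alongside the final product.
        sources = ", ".join(self.training_sources) or "not disclosed"
        return (f"AI-generated content — model: {self.model_name} "
                f"v{self.model_version}; training sources: {sources}")

# Example usage with a made-up model name:
disclosure = AIContentDisclosure("ExampleGen", "2.1", ["public web corpus"])
print(disclosure.label())
```

How a provider serializes such a record (page metadata, embedded tags, a sidecar file) is left open by the draft; the sketch only shows the minimum information the rule asks for.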
A second pillar concerns traceability. Every model will be required to be listed in a European‑wide registry administered by the European Artificial Intelligence Board (EAIB). The registration will contain information on the training datasets, performance metrics, bias assessments and robustness tests. While the public will be granted read‑only access to the registry, full access will be reserved for supervisory authorities, law‑enforcement agencies and other authorised bodies during emergencies.
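The registry's tiered access model can be sketched as a simple view filter: the public sees a read‑only subset of each entry, while supervisory roles see everything. The field names, role names and the `registry_view` helper below are assumptions for illustration; the draft specifies only the categories of information (training datasets, performance metrics, bias assessments, robustness tests) and the access split.

```python
# Fields assumed visible to the general public (illustrative choice).
PUBLIC_FIELDS = {"model_id", "provider", "performance_metrics"}

def registry_view(entry: dict, role: str) -> dict:
    """Return the registry fields visible to a given role."""
    if role in {"supervisor", "law_enforcement"}:
        return dict(entry)  # authorities get full access
    # Everyone else gets the read-only public subset.
    return {k: v for k, v in entry.items() if k in PUBLIC_FIELDS}

entry = {
    "model_id": "EAIB-2026-0001",          # made-up identifier
    "provider": "ExampleGen Ltd",
    "training_datasets": ["public web corpus"],   # restricted
    "performance_metrics": {"accuracy": 0.93},
    "bias_assessment": "passed",                  # restricted
    "robustness_tests": "passed",                 # restricted
}
print(registry_view(entry, "public"))
```

The design point is that restriction happens at read time over a single canonical record, so authorities and the public are never looking at divergent copies of the data.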
The regulation also introduces a four‑level risk classification – minimal, limited, high and unacceptable – based on the potential impact of the product on individuals and society. Systems deemed “high‑risk” must undergo a certified AI‑Impact Assessment performed by accredited bodies. The assessment must demonstrate compliance with standards for accuracy, non‑discrimination and data‑protection. Products classified as “unacceptable” will be prohibited from commercial use.
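The four‑tier scheme maps naturally onto a gating rule: each level implies a distinct market obligation. The enum and the `market_requirements` mapping below are a sketch of that logic under the obligations the draft names; the exact wording of each obligation string is ours, not the regulation's.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers named in the draft regulation."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

def market_requirements(level: RiskLevel) -> str:
    """Illustrative mapping from risk tier to market obligation."""
    if level is RiskLevel.UNACCEPTABLE:
        return "prohibited from commercial use"
    if level is RiskLevel.HIGH:
        return "certified AI-Impact Assessment required"
    # Minimal and limited tiers fall back to baseline duties.
    return "standard transparency obligations"
```

How a given system is assigned a tier (the accredited assessment itself) is of course the substantive part; the sketch only captures the consequence of the assignment.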
Deep‑fake technology receives particular attention. Providers of generative AI will be obliged to embed digital watermarks into synthetic media, creating an imperceptible but verifiable signal. Moreover, online platforms that host user‑generated content – social networks, streaming services and news portals – must implement mandatory detection algorithms within twelve months of the regulation’s entry into force, under penalty of fines of up to six percent of a company’s global annual turnover.
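To illustrate the idea of an "imperceptible but verifiable signal", here is a toy watermark for text built from zero‑width characters carrying a truncated HMAC. This is our own simplification, not the regulation's mechanism: real media watermarks operate on pixels or audio samples and must survive re‑encoding, which this sketch does not attempt.

```python
import hmac
import hashlib

# Zero-width characters used as invisible 0/1 symbols (illustrative scheme).
ZW = {"0": "\u200b", "1": "\u200c"}

def embed_watermark(text: str, key: bytes) -> str:
    """Append an imperceptible 32-bit HMAC-based mark to the text."""
    digest = hmac.new(key, text.encode(), hashlib.sha256).hexdigest()[:8]
    bits = bin(int(digest, 16))[2:].zfill(32)
    return text + "".join(ZW[b] for b in bits)

def verify_watermark(marked: str, key: bytes) -> bool:
    """Strip the invisible mark, recompute the HMAC, and compare."""
    visible = marked.rstrip("\u200b\u200c")
    bits = "".join("0" if c == "\u200b" else "1" for c in marked[len(visible):])
    if len(bits) != 32:
        return False
    digest = hmac.new(key, visible.encode(), hashlib.sha256).hexdigest()[:8]
    return bits == bin(int(digest, 16))[2:].zfill(32)
```

Because the mark is keyed, anyone who alters the visible text without the key breaks verification; that tamper‑evidence is the property the draft is after, even though production schemes are far more robust.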
Fundamental‑rights protection is woven throughout the text. The draft re‑affirms the primacy of the EU Charter of Fundamental Rights, especially freedom of expression, human dignity and gender equality. Whenever a system generates discriminatory or violent material, the regulation grants the right to demand immediate removal and imposes monetary sanctions, up to a ban on commercialisation in the most serious cases.
The timetable provides a two‑year transition period after formal adoption by the European Parliament and Council. During this phase, enterprises must adapt their systems to the new obligations, with the option of applying for temporary exemptions for public‑interest research. Full compliance is required by 31 December 2028.
Reactions from the sector are mixed. Large technology platforms have welcomed the proposal as “a necessary step toward consumer trust”, arguing that transparency will drive responsible AI adoption. Small and medium‑sized enterprises, however, warn that administrative burdens and certification costs could slow innovation. To mitigate these concerns, the Commission has announced a €200 million support fund to help SMEs attain regulatory compliance.
The draft will now be examined by parliamentary committees in Brussels and Strasbourg. After approval, it will be published in the Official Journal of the European Union. Experts agree that, despite possible amendments, the core elements – labeling, traceability, risk assessment and watermarking – will remain intact, marking the start of a new era of “responsible AI” across the continent.
Source: European Commission – Communication “Regulation on generative artificial intelligence”, 5 May 2026.