The EU AI Act 2024: A Necessary Framework for Ensuring Transparency, Privacy, and Rights in AI Regulation
Introduction to the EU AI Act and Generative AI Models (GAI)
The EU AI Act 2024 is a key component of the EU’s data strategy, designed to complement existing regulations by overseeing Generative AI models (GAI), minimising risks, and imposing strict requirements on high-risk AI systems. The European Commission has pressed for close scrutiny of Artificial Intelligence (AI) since its April 2021 proposal, which sought to regulate the field and mitigate potential risks and negative societal impacts. That proposal paved the way for the EU AI Act 2024, which “provides developers and deployers with clear requirements and obligations regarding specific uses of AI”. The Act specifically aims to oversee general-purpose AI models, ensuring harmonised rules in accordance with Article 1.
GAI can be defined as “a type of machine learning architecture that uses AI algorithms to create novel data instances, drawing upon the patterns and relationships observed in the training data”. Prominent examples include DALL-E 2, GPT-4, and Copilot, which are designed to generate text that mimics human writing or images resembling those produced by human illustrators.
The training methodology for conversational GAI such as ChatGPT relies on Reinforcement Learning from Human Feedback (RLHF). This process occurs in three stages: first, collecting human-written demonstration responses to prompts and fine-tuning the model on them; second, having human annotators rank different model responses by quality and using those rankings to train a reward model; and third, using reinforcement learning to refine the model’s output so that it aligns with human preferences and scores highly under the reward model.
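To make these three stages concrete, the toy sketch below mirrors the pipeline with stand-in data structures; the function names, example data, and scoring logic are invented for illustration and are not OpenAI’s actual training code. In a real system each stage updates the weights of a large neural network, so the sketch preserves only the control flow.

```python
# A toy three-stage RLHF pipeline. All names, data, and scoring logic here
# are illustrative stand-ins, not a real training system.

import random

# Stage 1: supervised fine-tuning on human-written demonstration responses.
demonstrations = {
    "What is the EU AI Act?": "A 2024 EU regulation governing AI systems.",
}

def supervised_finetune(demos):
    # A real model would learn weights from the demos; we simply memorise them.
    return dict(demos)

# Stage 2: train a reward model from human rankings of candidate responses,
# recorded as (better, worse) pairs labelled by annotators.
rankings = [
    ("A 2024 EU regulation governing AI systems.", "No idea, sorry."),
]

def train_reward_model(ranked_pairs):
    scores = {}
    for better, worse in ranked_pairs:
        scores[better] = scores.get(better, 0) + 1
        scores[worse] = scores.get(worse, 0) - 1
    return lambda response: scores.get(response, 0)

# Stage 3: reinforcement learning - sample candidate outputs and shift the
# policy toward those the reward model prefers (a stand-in for PPO updates).
def rl_finetune(policy, reward_model, prompt, candidates, steps=10):
    best = policy.get(prompt, random.choice(candidates))
    for _ in range(steps):
        sample = random.choice(candidates)
        if reward_model(sample) > reward_model(best):
            best = sample
    policy[prompt] = best
    return policy

policy = supervised_finetune(demonstrations)
reward = train_reward_model(rankings)
policy = rl_finetune(policy, reward, "What is the EU AI Act?",
                     ["A 2024 EU regulation governing AI systems.",
                      "No idea, sorry."])
print(policy["What is the EU AI Act?"])
```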
The rise of generative large language models (LLMs) like ChatGPT has significantly transformed scientific research and publishing. These models, which can produce human-like text and are widely accessible, are increasingly being used in academic writing. However, their integration into scientific publications has sparked ethical and legal concerns, particularly regarding the unlawful collection of user data, plagiarism, and copyright infringement. The EU AI Act therefore serves as a crucial regulatory framework to ensure that private enterprises lawfully develop, train, and distribute GAI with transparency, while upholding copyright protections and safeguarding user privacy.
The EU AI Act 2024 Framework for Classifying AI System Risk Levels
The EU AI Act 2024 enforces strict transparency, cybersecurity, and ethical compliance measures for AI models, and has already prompted significant regulatory action against AI systems. The Act categorises AI systems according to their potential risk to public safety, fundamental rights, and democratic values, establishing different levels of regulatory oversight. Unacceptable-risk AI systems, those deemed to pose severe threats, are prohibited outright. This includes AI used for social scoring, subliminal manipulation, or real-time biometric identification in public spaces, except in narrowly defined cases. The largest regulatory focus is on high-risk AI systems, which are subject to strict compliance measures because of their potential to affect critical areas such as law enforcement, healthcare, and employment. These systems must meet stringent data governance, transparency, and human oversight requirements to mitigate biases and prevent harmful decision-making. Limited-risk AI systems face lighter transparency obligations: developers and deployers must inform end-users when they are interacting with AI-generated content, such as chatbots or deepfake technology. Finally, minimal-risk AI systems, which comprise the majority of current AI applications (for example, video game AI and spam filters), remain unregulated under the Act. This calibrated framework (ban the worst, strictly control the high-impact, and lightly handle the rest) reflects an attempt to balance innovation with precaution. By focusing oversight on the use-cases with the highest stakes, the EU aims to enforce accountability where it matters most without smothering AI development across the board.
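For illustration only, the four tiers described above can be encoded as a simple lookup, as in the hypothetical sketch below; the tier assignments merely restate this paragraph’s examples and carry no legal weight.

```python
# A hypothetical encoding of the Act's four risk tiers. The tier assignments
# restate the examples in the paragraph above; this is an illustration,
# not a tool for legal classification.

RISK_TIERS = {
    "unacceptable": {
        "obligation": "prohibited outright",
        "examples": ["social scoring", "subliminal manipulation",
                     "real-time public biometric identification"],
    },
    "high": {
        "obligation": "strict data governance, transparency, human oversight",
        "examples": ["law enforcement AI", "healthcare AI", "employment AI"],
    },
    "limited": {
        "obligation": "inform end-users they are interacting with AI",
        "examples": ["chatbots", "deepfake technology"],
    },
    "minimal": {
        "obligation": "unregulated under the Act",
        "examples": ["video game AI", "spam filters"],
    },
}

def obligations_for(use_case: str) -> str:
    """Look up a use case in the illustrative tier table."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{use_case}: {tier} risk -> {info['obligation']}"
    return f"{use_case}: not listed; would need case-by-case assessment"

print(obligations_for("spam filters"))    # minimal risk -> unregulated
print(obligations_for("social scoring"))  # unacceptable risk -> prohibited
```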
Obligations for Generative AI Models (GAI) and Transparency
Generative AI models (GAI), especially large general-purpose AI systems like GPT-4 or Meta’s LLaMA, emerged as a focal point during the EU AI Act’s drafting. Lawmakers recognised that these models are dual-use technologies that can power countless downstream applications, both benign and malicious. The Act imposes specific obligations on providers of foundation models, requiring them to enhance transparency and control over their general-purpose AI systems to ensure accountability and responsible deployment. Concretely, providers must document key technical details of their model and share essential information with regulators and downstream users. For example, a model developer must draw up technical documentation describing the model’s capabilities, limitations, and design, and supply it to the newly created EU AI Office as well as to any companies that integrate the model into products. Providers must also publish a summary of the training data used, a high-level “training content” overview. This provision is meant to shed light on which data sources (e.g. internet text, images, etc.) the model learned from, which can help identify potential biases or intellectual property issues. In addition, providers must implement a copyright policy to ensure their training data usage respects copyright. This responds to concerns that GAI have been trained on web data without regard for copyrighted works; the Act compels developers to address how they obtain and use data lawfully. Another key obligation is bias monitoring and mitigation: though not explicitly named in the Act’s text, it is implied by the requirement for high-quality datasets (i.e. representative and free of discriminatory patterns). Providers will need to assess their models for biases or risks (such as demographic biases in output) as part of the risk management process. Moreover, if a GAI is classified as posing “systemic risk” (a designation for the most advanced models that could have large-scale societal impact), the Act adds further obligations: the provider must evaluate and mitigate these systemic risks, institute measures for reporting serious incidents or misuse, and ensure robust cybersecurity for the model’s infrastructure.
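As a rough illustration of these documentation duties, the hypothetical sketch below records them in a machine-readable form; the schema, field names, and `missing_obligations` helper are assumptions introduced here, not a format the Act prescribes.

```python
# A hypothetical machine-readable record of the provider duties discussed
# above. The schema and helper are invented for illustration; the Act
# prescribes what must be documented, not any particular format.

from dataclasses import dataclass

@dataclass
class ModelDocumentation:
    model_name: str
    capabilities: list[str]
    known_limitations: list[str]
    training_data_summary: str       # public, high-level "training content" overview
    copyright_policy_url: str        # how training-data use respects copyright
    systemic_risk: bool = False      # very advanced, large-scale-impact models
    incident_reporting_contact: str = ""

def missing_obligations(doc: ModelDocumentation) -> list[str]:
    """Flag duties that look unmet; systemic-risk models carry extra ones."""
    gaps = []
    if not doc.training_data_summary:
        gaps.append("public training-content summary")
    if not doc.copyright_policy_url:
        gaps.append("copyright policy")
    if doc.systemic_risk and not doc.incident_reporting_contact:
        gaps.append("serious-incident reporting channel")
    return gaps

doc = ModelDocumentation(
    model_name="example-foundation-model",
    capabilities=["text generation", "image understanding"],
    known_limitations=["may reproduce training-data biases"],
    training_data_summary="web text and licensed corpora (high-level)",
    copyright_policy_url="",
    systemic_risk=True,
)
print(missing_obligations(doc))
# -> ['copyright policy', 'serious-incident reporting channel']
```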
Case Studies of Generative AI Models (GAI): LLaMA and ChatGPT
Real-world cases in 2023–24 underscore both the potential of GAI and the urgent need for regulatory oversight. This section examines two notable case studies, Meta’s LLaMA model in the EU and OpenAI’s ChatGPT in Italy, each highlighting different aspects of how the EU AI Act’s provisions are relevant.
Meta (Facebook’s parent company) made headlines by developing the LLaMA family of large language models, which are GAI similar to GPT. In mid-2024, Meta planned to release a new multimodal version of LLaMA (able to handle text, images, and more) as a cornerstone of its AI strategy. However, the company abruptly pulled back from launching this advanced model in the EU, citing the “unpredictable nature” of the European regulatory environment. In July 2024, Meta announced that while the next LLaMA would be rolled out in other markets, it would not be offered to EU users “due to the unpredictable nature of…European regulations”. This effectively meant Meta’s most cutting-edge GAI would be withheld from the EU. The timing and wording strongly suggested that the impending EU AI Act was a major factor: Meta appeared wary of the Act’s stringent requirements and potential liabilities. Under the EU AI Act, a model like LLaMA, especially a multimodal AI integrated into consumer devices (e.g. smart glasses), could be classified as high-risk or even systemic-risk if its capabilities are far-reaching. That would impose extensive obligations on Meta, such as conducting risk assessments of LLaMA’s use in augmented reality, sharing technical documentation with EU regulators, and ensuring the model does not produce illicit content. The company had already faced European GDPR scrutiny over its handling of personal data, and a multimodal LLaMA might process images (raising biometric data issues) or other sensitive inputs. Complying with the AI Act’s documentation and data requirements for such a model would be complex and could expose Meta to penalties if anything went awry. Rather than risk heavy fines or be forced to significantly modify LLaMA’s design for Europe, Meta chose to skip the EU market for this model. This case demonstrates a double-edged effect of regulation: on one hand, it confirms that the AI Act has real bite, influencing even tech giants’ product strategies to avoid non-compliance; on the other, it raises the concern that overly “unpredictable” or strict rules might reduce AI availability in Europe, a point revisited in the counterarguments below. Importantly, Meta continued to offer a limited, text-only version of LLaMA in the EU, implying that the company’s concern centred on the new model’s advanced multimodal capabilities that might conflict with EU rules. In sum, Meta’s LLaMA episode highlights why the AI Act’s oversight is deemed necessary: without clear regulations, companies might deploy powerful AI across jurisdictions without uniform safeguards, whereas the Act forces them to consider privacy, transparency, and safety up front, even if that means delaying or altering a product launch.
OpenAI’s ChatGPT, a now-famous generative AI chatbot, provides a case of regulatory intervention even before the AI Act came into force. In early 2023, ChatGPT’s explosive popularity raised alarms among European data protection authorities. Italy became the first Western country to temporarily ban ChatGPT, in March 2023, over privacy violations, alleging that OpenAI had unlawfully collected personal data and failed to prevent minors from accessing inappropriate content. The ban was lifted after OpenAI hastily implemented measures such as age-gating and offered users an opt-out from data usage, but Italy’s investigation continued. In December 2024, Italy’s privacy regulator (the Garante) concluded its probe and fined OpenAI €15 million for GDPR violations. The regulator found that OpenAI had processed people’s personal data to train ChatGPT “without an adequate legal basis,” violating transparency obligations towards users. In other words, ChatGPT ingested huge amounts of personal information from the internet without informing users or obtaining consent, and did not clearly disclose to individuals how their data was being used. Additionally, the Garante criticised OpenAI for lacking a proper age verification mechanism to protect children (users under 13) from harmful content. OpenAI was also ordered to run a public awareness campaign in Italy explaining how ChatGPT uses data. This case illustrates the current regulatory gap that the AI Act aims to fill. Italy had to rely on general privacy law (GDPR) to assert control over ChatGPT, since no AI-specific law yet existed. GDPR indeed provided grounds to address some issues (data processing without consent, transparency failings), resulting in a penalty and mandated remedies. However, GDPR alone is not tailored to all the challenges of GAI: for example, it does not explicitly require AI model risk assessments or bias mitigation. The EU AI Act will complement GDPR by imposing AI-specific duties: if ChatGPT were classified as a high-risk or general-purpose AI system under the Act, OpenAI would need to conduct risk and impact assessments, document its training data sources, test for biases and errors, and ensure robust safeguards for accuracy and transparency. Notably, the Act would require a system like ChatGPT to inform users that they are interacting with AI (which ChatGPT does), and to disclose or watermark AI-generated content in certain contexts. ChatGPT would also fall under the Act’s new category of “General Purpose AI System (GPAIS)”, reflecting policymakers’ recognition that such broadly applicable tools need oversight beyond specific use-cases. The Italian fine highlights the importance of accountability: even a leading AI lab like OpenAI can err in respecting privacy and transparency, and regulators must step in to enforce compliance. The EU AI Act’s necessity is reinforced here: with it, regulators will have a more direct mechanism to demand transparency and safety from AI providers before problems occur.
Loopholes and Limitations of the EU AI Act
Although the EU AI Act is a landmark measure, critics point out loopholes and exemptions that GAI can exploit. Strong industry lobbying watered down several provisions, leading to “an overreliance on self-regulation, self-certification, weak oversight… and far-reaching exceptions” in the final law. As a result, many high-risk AI systems rely on internal compliance checks by providers rather than independent audits. This creates a scenario where “those who are supposed to be regulated can testify to compliance with rules they have written for themselves”. GAI developers might thus evade scrutiny by self-declaring their models pose no significant risk, taking advantage of vague exemptions in the Act’s risk classification.
The Act’s handling of data transparency and bias mitigation is another concern. It imposes data governance requirements and documentation obligations (Article 10), but some argue these measures do not go far enough. Because AI training datasets often reflect historical biases, truly neutral data “is a fantasy,” and the Act should assume data is biased “unless proven otherwise”. Without such a precautionary stance, GAI may continue to reproduce societal biases despite technical compliance. Similarly, while the Act addresses intellectual property by requiring providers to respect copyright and to “make public a summary of the content they use for training general-purpose AI models”, this transparency alone may not prevent infringement. GAI systems can still generate outputs that inadvertently violate copyrights, and enforcement ultimately relies on existing copyright law. Developers also worry that disclosing training data could expose proprietary information. In short, gaps in the Act’s scope – from broad carve-outs to limited bias and transparency rules – may allow sophisticated GAI to evade full accountability.
Enforcement and Practicality Issues of the EU AI Act
Enforcing the AI Act in practice presents considerable challenges, especially for GAI. The Act’s risk-based classification regime has been criticised as too broad and not well-tailored to the unique nature of GAI. A single foundation model can be adapted to countless uses across different risk categories, making it hard to fit under the Act’s defined tiers. Scholars argue the law “fails to adequately accommodate the risks posed by [large generative AI models], due to their versatility and wide range of applications”. Expecting a general-purpose model’s provider to pre-emptively mitigate “every conceivable high-risk use” is seen as impractical and overly burdensome. This lack of specificity could lead to over-regulation of low-risk uses or, conversely, under-regulation if providers downplay high-risk scenarios, undermining effective oversight of GAI.
Jurisdictional and compliance issues further complicate enforcement. Although the Act has extraterritorial reach, policing non-EU providers is difficult in practice. The law places the onus on under-resourced national regulators to scrutinise providers’ documentation (which is not public by default), yet it “does not specify which regulator” in each country should oversee AI, leading to potential fragmentation. Moreover, full enforcement is delayed until 2026, raising concerns that regulation is lagging the rapid evolution of GAI. In the interim, compliance is voluntary, allowing AI firms to deploy advanced GAI systems in Europe without yet meeting the Act’s standards. Furthermore, meeting the Act’s stringent requirements can be costly and complex, especially for smaller AI innovators. Compliance may simply be out of reach for some start-ups. If regulators themselves lack advanced AI expertise, they may struggle to critically evaluate companies’ self-assessment reports, effectively relying on firms’ own certifications. These practical factors risk leaving GAI under-regulated in reality despite the Act’s formal rules.
Acknowledging the Act’s Benefits and Conclusion
Notwithstanding these criticisms, the EU AI Act is a pioneering framework for responsible AI governance. It is the first comprehensive AI law in the world, described as a groundbreaking legal framework for guiding AI development. The Act aims to safeguard fundamental rights and promote human-centric, ethical AI – its stated goal is to ensure AI systems “respect fundamental rights, safety, and ethical principles”. In line with these principles, the Act bans the most harmful AI practices outright and imposes strict requirements (on data quality, transparency, human oversight, etc.) for high-risk systems. It also establishes new consumer rights, including the right to complain and to receive an explanation of significant automated decisions, and mandates transparency when people interact with an AI system or encounter AI-generated content. This risk-based approach seeks to “foster trustworthy AI in Europe” without unduly hampering innovation. While not perfect, the Act sets an important precedent by providing clear ethical guardrails and accountability mechanisms that other jurisdictions can build upon.
Ian Hsu
Bibliography
Table of Legislation
EU AI Act 2024: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] OJ L 2024/1689.
Table of Journals and Articles
A&O Shearman, 'Zooming in on AI – #10: EU AI Act – What Are the Obligations for "High-Risk AI Systems"?' (A&O Shearman, 15 November 2024) https://www.aoshearman.com/en/insights/ao-shearman-on-tech/zooming-in-on-ai-10-eu-ai-act-what-are-the-obligations-for-high-risk-ai-systems accessed 13 March 2025.
Clark J, Demircan M and Kettas K, 'Europe: The EU AI Act's Relationship with Data Protection Law: Key Takeaways' (Privacy Matters, 25 April 2024) https://privacymatters.dlapiper.com/2024/04/europe-the-eu-ai-acts-relationship-with-data-protection-law-key-takeaways/ accessed 13 March 2025.
Davies P, '“Potentially Disastrous” for Innovation: Tech Sector Says EU AI Act Goes Too Far' Euronews (Brussels, 15 December 2023) https://www.euronews.com/next/2023/12/15/potentially-disastrous-for-innovation-tech-sector-says-eu-ai-act-goes-too-far accessed 12 March 2025.
European Commission, 'AI Act' (Shaping Europe’s Digital Future, 2024) https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai accessed 12 March 2025.
European Commission, 'AI Act Enters into Force' (1 August 2024) https://commission.europa.eu/news/ai-act-enters-force-2024-08-01_en accessed 26 February 2025.
Feuerriegel S, Hartmann J, Janiesch C and Zschech P, 'Generative AI' (2023) 66 Business & Information Systems Engineering https://link.springer.com/article/10.1007/s12599-023-00834-7 accessed 26 February 2025.
Foomany F, 'Understanding EU AI Act Risk Categories' (Security Compass, 2024) https://www.securitycompass.com/blog/understanding-eu-ai-act-risk-categories/ accessed 2 March 2025.
Fragale M and Grilli V, 'Deepfake, Deep Trouble: The European AI Act and the Fight Against AI-Generated Misinformation' (Columbia Journal of European Law, 11 November 2024) https://cjel.law.columbia.edu/preliminary-reference/2024/deepfake-deep-trouble-the-european-ai-act-and-the-fight-against-ai-generated-misinformation/ accessed 12 March 2025.
Gasser U, 'An EU Landmark for AI Governance' (2023) 380 Science 1203 https://www.science.org/doi/10.1126/science.adj1627 accessed 12 March 2025.
Hacker P, Engel A and Mauer M, 'Regulating ChatGPT and Other Large Generative AI Models' (Oxford Business Law Blog, 1 March 2023) https://blogs.law.ox.ac.uk/oblb/blog-post/2023/03/regulating-chatgpt-and-other-large-generative-ai-models accessed 12 March 2025.
Hern A, 'Meta Pulls Plug on Release of Advanced AI Model in EU' The Guardian (18 July 2024) https://www.theguardian.com/technology/article/2024/jul/18/meta-release-advanced-ai-multimodal-llama-model-eu-facebook-owner accessed 5 March 2025.
Kobie N, 'Meta Won't Release Multimodal AI Models in Europe Due to "Unpredictable" Privacy Regulations' (ITPro, 18 July 2024) https://www.itpro.com/technology/artificial-intelligence/meta-wont-release-multimodal-ai-models-in-europe-due-to-unpredictable-privacy-regulations accessed 13 March 2025.
Lagercrantz O, 'Europe’s AI Act Stumbles Out of the Gate' (CEPA, 2025) https://cepa.org/article/europes-ai-act-stumbles-out-of-the-gate/ accessed 2 March 2025.
Lakshmanan R, 'Italy Fines OpenAI €15 Million for ChatGPT GDPR Data Privacy Violations' (The Hacker News, 23 December 2024) https://thehackernews.com/2024/12/italy-fines-openai-15-million-for.html accessed 13 March 2025.
Morales J, 'Meta to Exclude EU from Multimodal Llama AI Model Release on the Back of Regulatory Scrutiny' (CCN, 19 July 2024) https://www.ccn.com/news/technology/meta-llama-ai-eu-release-cancelled/ accessed 13 March 2025.
Morris C, 'A new bill would force companies like OpenAI to disclose their training data' Fast Company (New York, 10 April 2024) https://www.fastcompany.com/91090357/generative-ai-bill-force-companies-like-openai-disclose-data-train-models accessed 13 March 2025.
Park SH, 'Use of Generative Artificial Intelligence, Including Large Language Models Such as ChatGPT, in Scientific Publications: Policies of KJR and Prominent Authorities' (2023) 24(8) Korean Journal of Radiology https://pmc.ncbi.nlm.nih.gov/articles/PMC10400373/ accessed 28 February 2025.
Pollina E and Armellini A, 'Italy Fines OpenAI over ChatGPT Privacy Rules Breach' Reuters (Milan, 20 December 2024) https://www.reuters.com/technology/italy-fines-openai-15-million-euros-over-privacy-rules-breach-2024-12-20/ accessed 13 March 2025.
Wachter S, 'Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond' (2024) 26(3) Yale Journal of Law & Technology https://ssrn.com/abstract=4924553 accessed 12 March 2025.
Ziegler DM and others, 'Fine-Tuning Language Models from Human Preferences' (arXiv, 2019) https://arxiv.org/abs/1909.08593 accessed 28 February 2025.
Additional Secondary Sources
European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts COM (2021) 206 final https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF accessed 26 February 2025.