AI Regulation, Governance & Ethics

November 28, 2024


AI Act and General-Purpose AI Systems in the Draft Code of Practice: What You Need to Know

The upcoming AI Act establishes a far-reaching regulatory framework for general-purpose AI systems, especially those that pose “systemic risks.” Under the current timeline, providers must be compliant by 2nd August 2025, which is fast approaching. To streamline this process, the Code of Practice for General-Purpose AI Systems—a guidance document drafted under the auspices of the European Commission—aims to help organizations meet their obligations under the AI Act.

Jan Czarnocki

Co-Founder & Managing Partner

Defining General-Purpose AI Systems

AI Model Definition

According to the AI Act, a general-purpose AI model is an AI model trained on large amounts of data—often using self-supervision at scale—that displays significant generality: it can competently perform a wide range of distinct tasks and be integrated into a variety of downstream systems and applications. Notably, AI models used strictly for research, development, or prototyping before being placed on the market fall outside this definition.

General-Purpose AI System

A general-purpose AI system is any system built on a general-purpose AI model, capable of serving diverse functions—whether directly for end users or as a component of other AI systems.

The Draft Code of Practice: A Pathway to Compliance

On 14th November 2024, an independent expert group appointed by the European Commission released the first draft of the Code of Practice for General-Purpose AI Systems. Once finalized in May 2025, this Code will offer a structured compliance pathway for deployers and providers of general-purpose AI systems, particularly those that carry systemic risk.

By adhering to the Code, organizations can demonstrate compliance with the AI Act and reduce the likelihood of enforcement actions.

Key Duties and Responsibilities Under the Draft Code

1. Information and Technical Documentation

Providers of general-purpose AI systems must prepare and maintain comprehensive documentation covering:

  • Model Overview: Basic information about the model
  • Intended Tasks & Integration: The tasks the model is intended to perform and the types of AI systems into which it can be integrated
  • Acceptable Use Policies: Guidelines on proper usage
  • External Interaction: How the model interfaces with hardware or software
  • Technical Details: Model architecture and number of parameters
  • Design & Testing: Specifications, training/testing processes, and evaluation results

These documents must be readily accessible to the AI Office and national authorities on request. Certain details should also be shared with downstream providers looking to integrate the general-purpose AI system into their own solutions.

2. Copyright Compliance

Providers should enforce strong internal policies that respect the copyright of data and creative works. Responsibilities include:

  • Internal Policy: Governing how copyright data is handled
  • Due Diligence: Reviewing third-party datasets before use
  • Avoiding Infringing Output: Mitigating the risk of the model repeatedly generating output that reproduces copyrighted material
  • Adherence to the EU Copyright Directive (2019/790): For instance, honoring text and data mining reservations through methods like the robots.txt standard
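In practice, the robots.txt-based reservation mentioned above can be checked programmatically before crawling. The sketch below uses Python's standard-library `urllib.robotparser` to test whether a hypothetical text-and-data-mining crawler (here called "ExampleTDMBot") is permitted to fetch a page; the bot name, site, and robots.txt content are illustrative assumptions, not part of the Code of Practice.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt published by a rights holder who reserves their
# content against TDM crawlers while allowing other agents.
robots_txt = """\
User-agent: ExampleTDMBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The TDM crawler is refused; a generic crawler is allowed.
tdm_allowed = parser.can_fetch("ExampleTDMBot", "https://example.com/articles/1")
other_allowed = parser.can_fetch("OtherBot", "https://example.com/articles/1")
print(tdm_allowed, other_allowed)  # False True
```

A provider's crawling pipeline would typically run a check like this per domain and skip (or log) any URL whose rights holder has opted out, creating an auditable record of due diligence.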

3. Systemic Risk Assessment and Mitigation

General-purpose AI systems with systemic risk potential must undergo thorough risk assessments. Areas of concern include:

  • Cyber Offence: Vulnerability discovery and exploitation
  • Chemical, Biological, Radiological, and Nuclear (CBRN) Risks: Dual-use science threats
  • Loss of Control: Inability to manage powerful autonomous models
  • Automated AI R&D: Unpredictable advancements in general-purpose AI
  • Persuasion and Manipulation: Large-scale disinformation and manipulation
  • Large-Scale Discrimination: Illegal discrimination affecting individuals or groups

4. Rules for General-Purpose AI Models with Systemic Risk

The Code of Practice outlines specific steps to mitigate risks, including:

  • Safety & Security Frameworks: Establish processes for continuous risk identification, analysis, and evidence collection
  • Technical & Organizational Measures: Such as safety reports, well-defined deployment decisions, and clear ownership of risks
  • Independent Assessments & Audits: Ongoing evaluations by third parties
  • Incident Reporting & Whistleblowing Protections: Procedures for reporting serious incidents and protecting whistleblowers

Conclusion

As the 2nd August 2025 compliance deadline approaches, organizations developing or deploying general-purpose AI systems must stay vigilant. Ensuring robust documentation, maintaining copyright compliance, and conducting thorough systemic risk assessments will be central to meeting AI Act requirements.

By following the Code of Practice—set for final release in May 2025—providers can not only demonstrate legal compliance but also foster trust among regulators, partners, and end users. Being proactive now will help streamline adoption of the new AI regulations and minimize potential disruptions once the AI Act comes into full effect.
