AI Regulation, Governance & Ethics

December 9, 2024

3 min

AI Law in 2024: Review and Summary of the Key Developments

In 2024, artificial intelligence (AI) regulation changed significantly, gradually reshaping the operational landscape for AI-driven businesses. Understanding these developments is crucial for compliance and strategic planning. Below is an overview of the key regulatory changes of 2024 and their implications.

Jan Czarnocki

Co-Founder & Managing Partner


1. European Union (EU) AI Act

The EU AI Act, in force since August 1, 2024, establishes a comprehensive framework categorizing AI systems by risk level: unacceptable, high, limited, and minimal. High-risk applications face stringent requirements, including conformity assessments and transparency obligations. The AI Act also regulates general-purpose AI models. Notably, certain prohibitions on “unacceptable risk” AI systems will take effect in February 2025.

Key Takeaways for AI Businesses:

  • Risk Assessment: Identify and classify your AI systems according to the Act’s risk categories to determine AI Act applicability and compliance requirements.
  • Compliance Measures: Implement necessary assessments and documentation for high-risk AI applications to meet EU standards.
  • Operational Adjustments: Prepare for upcoming prohibitions by evaluating and modifying AI systems that may fall under the “unacceptable risk” category.
  • Penalties for Non-Compliance: Non-compliance can result in substantial fines, emphasizing the importance of regulatory adherence.

2. United States – Executive Order 14110 on Safe, Secure, and Trustworthy AI

President Biden’s Executive Order 14110 focuses on ensuring that AI applied in the public sector adheres to the principles of safety, security, and trustworthiness. It directs federal agencies to create guidelines that uphold civil rights and equity while mitigating bias and ensuring AI systems benefit the public.

Key Takeaways for AI Businesses:

  • Mitigation of existential risk: The order focuses on the gravest risks and obliges public agencies to prepare sectoral guidelines for AI implementation in their respective fields. Although the order pertains to public-sector uses of AI, businesses whose systems are procured by federal agencies will need to comply with the forthcoming guidelines.
  • Alignment with NIST Standards: Businesses working with US public-sector agencies should therefore align their AI systems with the AI risk guidelines developed by the National Institute of Standards and Technology (NIST) to ensure compliance.

3. United Kingdom – Pro-Innovation Approach to AI Regulation

In March 2024, the UK government published its White Paper on AI regulation, emphasizing a “pro-innovation” strategy. This approach seeks to minimize overregulation while ensuring safety and accountability in AI technologies. It focuses on sector-specific guidance, empowering existing regulators to handle AI within their industries rather than creating a centralized AI regulatory authority.

Key Takeaways for AI Businesses:

  • Flexible Compliance: Businesses operating in the UK benefit from a less prescriptive, more adaptive framework, which encourages innovation while emphasizing accountability.
  • Sector-Specific Engagement: Companies should monitor and engage with sector-specific regulators to understand their unique AI requirements.

4. Revised EU Product Liability Directive

In November 2024, the European Union enacted Directive (EU) 2024/2853, a significant update to its Product Liability Directive, to address the complexities introduced by digital technologies, including artificial intelligence (AI).

Expanded Definition of “Product”: The Directive broadens the term “product” to encompass software, AI systems, and digital services. This inclusion subjects these digital products to strict liability for defects, holding manufacturers accountable for damages their AI systems may cause.

Key Takeaways for AI Businesses:

  • Liability Awareness: AI systems are now explicitly covered under product liability laws, increasing potential legal exposure for defects.
  • Quality Assurance: Enhancing testing and quality control processes is crucial to mitigate risks associated with defective AI products.
  • Insurance Considerations: Review and update liability insurance policies to ensure coverage for AI-related claims.

5. Council of Europe’s Framework Convention on Artificial Intelligence

The Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law on May 17, 2024, marking the first binding international treaty aimed at governing AI. Opened for signature on September 5, 2024, the treaty has been signed by, among others, the UK, the EU, the US, and Israel.

Key Takeaways for AI Businesses:

The Convention applies to states and is fairly high-level; however, it indicates the general direction in which AI regulation is heading, in the US and other developed countries alike. It underscores ethical requirements for AI such as safety, accountability, transparency, and trustworthiness.

2024 was a turning point for artificial intelligence regulation, establishing a wide-ranging legal framework for the technology. The dynamic evolution of law in this area reflects an increasing need to align legal frameworks with the pace of innovation while safeguarding human rights and ensuring equality in the face of technological advancement. These changes compel businesses and regulators to redefine their approaches to risk management and ethics in the design of AI systems. In this complex legal landscape, understanding the new regulations is essential, but so is their practical implementation in ways that support business growth and foster public trust.
