Biometrics Regulation and Data

April 1, 2025


Facial Recognition Technology and the AI Act: Risks and Regulations

A recent study by Michal Kosinski and colleagues has revealed a startling capability of facial recognition technology: the ability to predict political orientation with high accuracy using only expressionless passport-type images. This breakthrough raises significant concerns about privacy and underscores the potential risks such technology poses to democracy and fundamental rights.

Jan Czarnocki

Co-Founder & Managing Partner


Implications for Privacy and Democracy

The findings amplify ongoing debates around the ethical use of facial recognition and its broader societal implications. If such tools can infer deeply personal attributes like political beliefs, the stakes for misuse and abuse become even higher, putting additional pressure on regulators and policymakers to address these risks.

Facial Recognition or Biometric Categorization?

Interestingly, under the AI Act, such systems might not even qualify as traditional facial recognition technology. The Act defines facial recognition as systems designed to single out individuals from a crowd for identification or verification. Instead, this technology would more likely be categorized as a biometric categorization system or even an emotion recognition system.

Why? Because political orientation often stems from both rational thinking and emotional responses—a fascinating intersection that invites exploration in both philosophical and neuroscience discussions.

Prohibited Under the AI Act

Fortunately, practices like these are addressed in the AI Act. Specifically, Article 5(1)(g) prohibits biometric categorization systems that infer sensitive attributes such as political opinions from biometric data. This is a significant victory for privacy advocates and a strong move by the EU to curb the misuse of such technologies. One can only hope that other jurisdictions, particularly the United States, will adopt similar measures to safeguard fundamental rights.

Legal and Ethical Challenges

Despite these regulatory strides, the definitions surrounding these technologies remain ambiguous. Differentiating between legitimate and illegitimate applications is a nuanced challenge requiring expertise in both AI and biometric technologies. Lawyers and compliance professionals must deepen their understanding of these topics—yet they are often overlooked in traditional data protection training.

Key Takeaways

  • Facial recognition technology’s ability to predict sensitive attributes like political beliefs poses serious risks to privacy, democracy, and human rights.
  • Under the AI Act, these systems are likely classified as biometric categorization or emotion recognition technologies rather than facial recognition.
  • Practices predicting political orientation from biometric data are prohibited under Article 5(1)(g) of the AI Act.
  • Greater legal expertise in biometric and AI technologies is needed to navigate the ethical and regulatory challenges posed by these innovations.

For more insights on AI governance, privacy, and compliance, visit the blog at WhiteBison.io.
