Understanding Lawful Basis Under GDPR
The GDPR mandates that any processing of personal data must have a lawful basis. For AI training, this requirement is particularly stringent due to the potential for processing large volumes of sensitive data, including health and biometric information.
Key Lawful Bases for Processing Personal Data:
- Consent (Article 6(1)(a)):
- Obtaining consent from data subjects is a common basis for processing personal data. To be valid, consent must be freely given, specific, informed, and unambiguous.
- For sensitive data, such as health or biometric data, explicit consent is required (Article 9(2)(a)).
- Where feasible, consent is often the recommended basis for processing in an AI training context.
- Performance of a Contract (Article 6(1)(b)):
- If processing personal data is necessary for the performance of a contract with the data subject, this basis can be used. However, this is less common for AI training purposes.
- Legal Obligation (Article 6(1)(c)):
- Processing necessary to comply with a legal obligation can be a basis, though this is typically relevant to regulatory compliance rather than AI training.
- Legitimate Interests (Article 6(1)(f)):
- Processing based on legitimate interests requires a careful balancing test to ensure that the interests of the data controller do not override the rights and freedoms of the data subjects. This can be complex and is often scrutinized closely by regulators. However, if certain conditions are met, this may be the appropriate basis for AI training.
Specific Requirements Under the AI Act
The AI Act, approved by the European Parliament on March 13, 2024, introduces additional requirements for AI systems, especially those classified as high-risk and providers of general-purpose AI (GPAI) models.
Key Obligations Under the AI Act:
- Data Governance (Article 10):
- High-quality training, validation, and testing datasets are required to ensure AI systems perform reliably and safely.
- Data used must be relevant, representative, free of errors, and complete to the extent possible.
- Risk Management System (Article 9):
- Implementing a continuous risk management system to identify and mitigate risks associated with AI systems is mandatory.
- This includes risks arising from data processing practices.
- Transparency and Information (Article 13):
- Users must be provided with clear and accessible information regarding the AI system’s capabilities and limitations.
- Ensuring transparency helps build trust and aligns with GDPR’s principles of fairness and transparency.
- Human Oversight (Article 14):
- AI systems must be designed and developed to enable effective human oversight.
- This includes mechanisms to intervene in the operation of AI systems when necessary.
Navigating Compliance: Practical Steps for Startups
1. Conduct Data Protection Impact Assessments (DPIAs):
- DPIAs are crucial for identifying and mitigating risks associated with personal data processing, especially for high-risk AI systems.
- Ensure DPIAs are comprehensive and regularly updated to reflect changes in data processing activities.
2. Obtain Valid Consent or Identify Another Appropriate Lawful Basis:
- If relying on consent, ensure it meets GDPR’s stringent requirements. This includes providing clear information about how data will be used and obtaining explicit consent for processing sensitive data.
- Implement mechanisms for data subjects to easily withdraw consent at any time.
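The consent requirements above can be made concrete in code. The following is a minimal sketch of how a startup might record consent grants and withdrawals per data subject and purpose; all class and field names here are illustrative assumptions, not a prescribed schema, and a production system would also need to log the information presented to the user at the time of consent.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """One consent event for a single data subject and purpose."""
    subject_id: str
    purpose: str                # e.g. "model-training"
    explicit: bool              # True when covering Article 9 special-category data
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

class ConsentRegistry:
    """Tracks consent grants and withdrawals keyed by (subject, purpose)."""

    def __init__(self) -> None:
        self._records: dict[tuple[str, str], ConsentRecord] = {}

    def grant(self, subject_id: str, purpose: str, explicit: bool = False) -> None:
        self._records[(subject_id, purpose)] = ConsentRecord(
            subject_id, purpose, explicit, datetime.now(timezone.utc)
        )

    def withdraw(self, subject_id: str, purpose: str) -> None:
        # Withdrawal must be as easy as granting (Article 7(3) GDPR).
        record = self._records.get((subject_id, purpose))
        if record is not None and record.withdrawn_at is None:
            record.withdrawn_at = datetime.now(timezone.utc)

    def has_valid_consent(self, subject_id: str, purpose: str,
                          require_explicit: bool = False) -> bool:
        record = self._records.get((subject_id, purpose))
        if record is None or record.withdrawn_at is not None:
            return False
        return record.explicit or not require_explicit
```

The key design point is that withdrawal does not delete the record: retaining the timestamped history lets the controller demonstrate, under the accountability principle, exactly when consent was valid.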
3. Implement Robust Data Governance Frameworks:
- Develop and enforce data governance policies that align with both GDPR and AI Act requirements.
- Ensure data quality, relevance, and representativeness, and regularly audit data sets to maintain compliance.
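A regular dataset audit of the kind described above can start very simply. The sketch below checks two properties relevant to Article 10 of the AI Act: completeness (no missing required fields) and a rough proxy for representativeness (no group falling below a chosen minimum share). The function name, field names, and the 5% default threshold are illustrative assumptions, not regulatory requirements.

```python
from collections import Counter

def audit_dataset(records, required_fields, group_field, min_group_share=0.05):
    """Return a list of human-readable data-quality issues.

    Checks completeness (missing required fields) and flags groups
    whose share of the dataset falls below min_group_share.
    """
    issues = []

    # Completeness: every record must carry every required field.
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        if missing:
            issues.append(f"record {i}: missing {missing}")

    # Representativeness proxy: flag under-represented groups.
    counts = Counter(rec.get(group_field) for rec in records)
    total = sum(counts.values())
    for group, n in counts.items():
        if total and n / total < min_group_share:
            issues.append(f"group {group!r}: only {n / total:.1%} of dataset")

    return issues
```

Running such a check on every dataset refresh, and archiving the resulting issue lists, also produces the audit trail that the documentation obligations below call for.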
4. Maintain Transparency and Accountability:
- Provide clear, accessible information about AI systems to users and stakeholders.
- Document data processing activities and AI system functionalities comprehensively.
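Documenting processing activities can be as lightweight as an append-only log in the spirit of Article 30 GDPR. The snippet below is a minimal sketch; the field names are illustrative, and a real register would need more detail (retention periods, international transfers, security measures).

```python
import json
from datetime import datetime, timezone

def log_processing_activity(path, purpose, lawful_basis,
                            data_categories, recipients):
    """Append one entry to a record-of-processing-activities log.

    Writes JSON Lines so the register stays append-only and
    trivially machine-readable for audits.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "purpose": purpose,
        "lawful_basis": lawful_basis,      # e.g. "consent (Art. 6(1)(a))"
        "data_categories": data_categories,
        "recipients": recipients,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

An append-only format is deliberate: entries are never rewritten, so the log itself evidences when each processing activity was recorded.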
Intersection with US Regulations
While focusing on EU compliance, US startups must also consider how US regulations intersect with GDPR and the AI Act:
- HIPAA (Health Insurance Portability and Accountability Act):
- Ensures the protection of health information. Both HIPAA and GDPR require robust data security and patient consent for data processing.
- Compliance with HIPAA’s privacy and security rules can facilitate alignment with GDPR requirements, particularly for health data.
- BIPA (Biometric Information Privacy Act):
- Governs the collection and use of biometric data in Illinois. Similar to GDPR, BIPA mandates explicit consent for biometric data collection.
- Aligning BIPA compliance with GDPR ensures transparency and user consent for biometric data processing.
- CCPA (California Consumer Privacy Act):
- Provides comprehensive data privacy rights, similar to GDPR. Both regulations require transparency and grant rights such as access and deletion of data.
- Ensuring CCPA compliance can streamline GDPR adherence, emphasizing strong data protection and user rights.
Conclusion
For startups developing AI technologies, establishing a proper lawful basis for processing personal data is among the first things supervisory authorities examine. By understanding and implementing the obligations under GDPR and the AI Act, startups can avoid significant penalties, build trust with users, and establish a competitive edge in the European market. Navigating these regulatory landscapes requires diligence and a proactive approach, but it ultimately paves the way for sustainable growth and innovation. At White Bison, we specialize in helping startups achieve compliance with these complex regulations. Contact us today to learn how we can support your journey in aligning with the AI Act, GDPR, and other relevant data protection laws.